In considering robotic path planning, we discussed the method of state space search and, in particular, breadth-first search (BFS). State space search is a technique that can be applied to do problem solving more generally.
In AI, our goal is to create systems that handle particular applications, but we're also interested in finding solutions that are general enough for many applications.
To build a problem-solving system (agent), we need to:
The idea behind state space search as a technique is this: We note that our world is in a particular state (in which the problem is unsolved) that we wish to transform into another state (in which the problem is solved). We will consider all possible ways that the current state of the world can be changed. If any of these leads to the desired state, then we'll adopt that sequence of changes as our solution.
*Many (if not all) of the example applications we will look at are games/puzzles. Why?
Consider the 8-puzzle: we want to transform a start configuration, such as

 1 | 2 | 3
---+---+---
 8 | 7 |
---+---+---
 6 | 5 | 4
to a specific goal configuration, such as
 1 | 2 | 3
---+---+---
 4 | 5 | 6
---+---+---
 7 | 8 |
Each possible puzzle configuration is a state.
Operators transform one state to another.
Through search we hope to find a sequence of operators that transforms the start state (initial state) to a state that meets the criteria for being a goal, i.e., that satisfies a goal test.
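To make these ingredients concrete, here is one way to encode a small 2x3 sliding-tile puzzle in Python. This is only a sketch: the tuple encoding (with 0 marking the blank) and the names GOAL, goal_test, and successors are our own illustrative choices, not fixed by the notes.

```python
# States are tuples of 6 entries read row by row; 0 marks the blank square.
GOAL = (1, 2, 3, 4, 5, 0)
ROWS, COLS = 2, 3

def goal_test(state):
    """The goal test: does this state match the goal configuration?"""
    return state == GOAL

def successors(state):
    """Apply every legal operator: slide a neighboring tile into the blank."""
    blank = state.index(0)
    row, col = divmod(blank, COLS)
    result = []
    for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:   # up, down, left, right
        r, c = row + dr, col + dc
        if 0 <= r < ROWS and 0 <= c < COLS:
            other = r * COLS + c
            nxt = list(state)
            nxt[blank], nxt[other] = nxt[other], nxt[blank]
            result.append(tuple(nxt))
    return result
```

With this interface, a search procedure never needs to know it is solving a puzzle; it only calls `successors` and `goal_test`.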
To formulate this problem as a state space search problem, we need to specify: the set of possible states, the operators (legal moves), the initial state, and the goal test.
Say that our start state (the initial configuration of the game) is
 1 | 2 |
---+---+---
 4 | 5 | 3
We will consider all possible moves that can be made in one step.
Then we will consider all possible moves that can be made in two steps.
And so on.
We do this until we reach the goal, as follows:
Now all we have to do is follow the path from the initial state to the goal. This gives us the sequence of moves that must be made in order to solve the puzzle!
Note that we aren't solving the puzzle during the process of considering possible moves. As with robotic path planning, we use search to find the solution first and then execute the actual plan (path) later.
We will consider this and three other search strategies in more detail.
Breadth-first search: Begin at the initial state.
Do the following:
Check whether the current state satisfies the goal test. If so, then stop.
Otherwise, generate the next steps that could be taken from here, i.e., apply the operators to the current state.
Otherwise, go to the first state on the next level.
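This level-by-level procedure can be sketched in Python. The interface (callables `successors` and `goal_test`, as in the puzzle encoding) is an assumption for illustration; the parent map doubles as the visited set so each state is explored once.

```python
from collections import deque

def bfs(start, successors, goal_test):
    """Breadth-first search: explore all states one step away, then two
    steps away, and so on. Returns the path from start to a goal state,
    or None if no goal is reachable."""
    frontier = deque([start])
    parent = {start: None}            # also serves as the visited set
    while frontier:
        state = frontier.popleft()    # shallowest unexplored state first
        if goal_test(state):
            path = []                 # follow parent links back to start
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]         # initial state ... goal
        for nxt in successors(state):
            if nxt not in parent:
                parent[nxt] = state
                frontier.append(nxt)
    return None
```

The FIFO queue is what makes this breadth-first: states are expanded in the order they were generated, so an entire level is finished before the next one begins.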
Depth-first search: Begin at the initial state.
Do the following:
Check whether the current state satisfies the goal test. If so, then stop.
Otherwise, generate the next steps that could be taken from here, i.e., apply the operators to the current state.
Otherwise, go back up, and as soon as you encounter a state with a "child", visit that child.
We will employ four criteria to evaluate and compare search strategies:
We can evaluate the time complexity as follows: let b = the branching factor (i.e., the maximum number of next states from any given state),
and let d = length of solution.
Then the time it takes to find the solution is proportional to the number of states that must be explored to find it, i.e.,
1 + b + b^2 + b^3 + ... + b^d
The space complexity is the same because you have to maintain the states that are being explored. In particular, you need to maintain the bottommost level in the search tree.
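To get a feel for how quickly this sum grows, here is a quick computation; the branching factors and depths are chosen arbitrarily for illustration.

```python
def states_explored(b, d):
    """Worst-case number of states BFS explores: 1 + b + b^2 + ... + b^d."""
    return sum(b**i for i in range(d + 1))

# e.g. with branching factor b = 3, a solution at depth 4 already
# means up to 121 states, and depth 12 means nearly 800,000.
for d in (4, 8, 12):
    print(d, states_explored(3, d))
```

Since both time and space grow exponentially in d, breadth-first search quickly becomes impractical for deep solutions.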
The time complexity is approximately b^m, where m is the maximum depth of the search tree.
The space complexity, however, is much better. It is approximately m·b, because only a single path needs to be stored at any given time.
[Text from Rich and Knight, Artificial Intelligence]
[Text from Russell and Norvig, Artificial Intelligence: A Modern Approach]