# The need for planning at lower levels

We've already seen how planning can be used for high-level tasks. It is also important at lower levels.

For example, planning is needed for the point-to-point motion problem: delivering the robot or its end-effector to a designated target location.

In particular, consider a robot that must get to a specific location in a room, as in the figure below. In the figure, the robot is represented by an arrow, which indicates the direction the robot is facing. The goal location is marked with a "G". The filled squares represent obstacles.

Specific locations in the room can be identified by their (x, y) coordinates. So the task is to plan a path from (4, 1) to (0, 5). The only moves allowed are:

• move forward one space (f)
• move backward one space (b)
• turn and move one space to the right (r)
• turn and move one space to the left (l)

i.e., no diagonal motion is allowed.

So, given:

• a description of the current state
• a description of the goal state
find a sequence of operators that takes the robot to the goal.

# An algorithm for path planning

A simple algorithm for path planning is as follows:
```
Consider all possible states you can reach in one step.

If the goal is not reached,
    consider all possible states you can reach in two steps.

If the goal is not reached,
    consider all possible states you can reach in three steps.

etc.
```
For example, here is a part of the state space that the algorithm would consider:

Once the goal is reached, the program should output the series of steps that would get you from the initial state to the goal state:

```
(4,1) -(f)-> (4,2) -(f)-> (4,3) -(f)-> (4,4) -(l)-> (3,4) -(f)-> (2,4) -(f)->
(1,4) -(f)-> (0,4) -(r)-> (0,5)
```
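The algorithm above can be sketched as a breadth-first search over (x, y, heading) states, where the heading is needed because forward, backward, left, and right are relative to the direction the robot faces. The grid size, empty obstacle set, and function names here are assumptions for illustration, not the figure's actual layout:

```python
from collections import deque

# Hypothetical room: an empty 6x6 grid.  The figure's actual obstacle
# layout is not reproduced here, so we assume no obstacles.
OBSTACLES = set()
SIZE = 6

# Headings the robot can face, as (dx, dy): north, east, south, west.
HEADINGS = [(0, 1), (1, 0), (0, -1), (-1, 0)]

def successors(state):
    """Yield (action, next_state) pairs for the four moves f, b, l, r."""
    x, y, h = state
    dx, dy = HEADINGS[h]
    yield "f", (x + dx, y + dy, h)        # move forward one space
    yield "b", (x - dx, y - dy, h)        # move backward one space
    for action, turn in (("l", -1), ("r", 1)):
        nh = (h + turn) % 4               # turn, then move one space
        ndx, ndy = HEADINGS[nh]
        yield action, (x + ndx, y + ndy, nh)

def plan(start, goal_xy):
    """Breadth-first search: all states reachable in one step, then two..."""
    parent = {start: None}                # state -> (prev_state, action)
    frontier = deque([start])
    while frontier:
        state = frontier.popleft()
        if state[:2] == goal_xy:          # goal reached: recover the steps
            actions = []
            while parent[state] is not None:
                state, action = parent[state]
                actions.append(action)
            return actions[::-1]
        for action, nxt in successors(state):
            x, y, _ = nxt
            if 0 <= x < SIZE and 0 <= y < SIZE \
                    and (x, y) not in OBSTACLES and nxt not in parent:
                parent[nxt] = (state, action)
                frontier.append(nxt)
    return None                           # no path exists

# Start at (4, 1) facing "north" (the +y direction); goal is (0, 5).
print(plan((4, 1, 0), (0, 5)))            # some 8-move shortest plan
```

Because the frontier is explored level by level, the first plan found is always a shortest one, which is exactly the "one step, then two steps, then three" behavior described above.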
This technique, state space search, is a very important and general AI technique. (A variant of it, A*, was used to perform simple path planning in Shakey.) There are many interesting considerations and issues that arise in the context of state space search, as we will see in a couple of days.

# Challenges for robotic motion planning

While the search technique described for path planning above works well for navigation, it alone isn't quite right for other types of robotic motion planning. Consider, for example, making a plan for a robotic arm to replace a part in a car engine. This task differs from the one above in important ways:

If we assume the robot above is a wheeled robot, we can easily imagine how the navigation plan could be implemented smoothly. The only actions allowed are moving forward, backward, or making a 90-degree turn. On the other hand, a robotic arm will typically have at least 6 degrees of freedom. Moving the end effector to a specific goal location involves the coordination of all of the joints. Furthermore, it involves keeping track of all possible collisions with all parts of the arm.

We can sum up the challenges for robotic motion planning as follows:

• Discretizing the world makes planning easy, but we ultimately want the motion to be continuous (i.e., smooth). As we'll see later, the main distinction between other AI problems and robotics problems is that robotics usually involves continuous state spaces.
• We need to move to a location in 3-dimensional space, but there may be a disconnect between the language we use to describe 3D space (x, y, z coordinates) and the way we naturally think of the motion of the robot (for example, angles of the joints in a robotic arm).
• We need to be very careful to avoid collisions.

We'll consider the representation issue first, but will quickly move to the other two issues as well.

# Workspace representation

Imagine a robot arm like the one below [Russell and Norvig]:

This robot arm has two joints that move independently. Moving these joints alters the coordinates of the elbow and the gripper. So we could represent the position of the arm in terms of the (x, y) coordinate of the elbow and the (x, y) coordinate of the gripper. (Note that while the robot is 3-dimensional, its movement is restricted to a plane.)

This is called a workspace representation: the robot's coordinates are specified in the same coordinate system as the world it's manipulating.

# Problems with the workspace representation

• Even in the absence of obstacles, not all workspace coordinates are attainable (due to linkage constraints).
• We need to generate paths that adhere to these constraints (paths that violate them are useless).

# Configuration spaces

Planning can be made easier with a configuration space representation. Instead of using Cartesian coordinates, the state is described by a configuration of the robot's joints. For this robot, there are two joints, which we describe by specifying two angles.

There are constraints on the angles of the joints. For instance, the "shoulder" might be restricted to angles in the range 40 degrees to 320 degrees. Similarly for the "elbow". But in a world with no obstacles, this information is enough to allow us to specify any legal configuration of the robotic arm and to exclude all illegal configurations.
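A legality check of this kind can be sketched in a few lines. The joint names and the 40-to-320-degree limits below follow the example above, but are assumptions for illustration:

```python
# Hypothetical joint limits in degrees, as in the example above.
LIMITS = {"shoulder": (40.0, 320.0), "elbow": (40.0, 320.0)}

def is_legal(config):
    """In an obstacle-free world, a configuration is legal iff every
    joint angle lies within its limits."""
    return all(lo <= config[joint] <= hi
               for joint, (lo, hi) in LIMITS.items())

print(is_legal({"shoulder": 90.0, "elbow": 200.0}))   # True
print(is_legal({"shoulder": 30.0, "elbow": 200.0}))   # False: shoulder < 40
```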

Another example: Robot arm with 6 degrees of freedom:

• Can describe the state or configuration of the robot with a list of 6 joint angles.
• (q1, q2, q3, q4, q5, q6)
• A given configuration of the robot is represented by a point in a 6-dimensional space called the configuration space of the robot.

Generalizing our example: a robot with k degrees of freedom.

• Describe the state or configuration with a list of k joint angles.
• (q1, q2, …, qk)
• A given configuration of the robot is represented by a point in a k-dimensional space, which is the configuration space of the robot.

# Planning in a configuration space with no obstacles

In the absence of obstacles, the robot can take on any position in its configuration space. In our 2-dimensional example, if there are no obstacles, we can connect the starting and target configurations with a straight line. The plan, then, is to follow that line.
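Following that straight line amounts to linear interpolation between the two configurations. A minimal sketch, with illustrative joint angles for the two-joint arm:

```python
def straight_line_plan(q_start, q_goal, steps=6):
    """Connect two configurations (tuples of joint angles) by linear
    interpolation.  This is a valid plan only when the configuration
    space contains no obstacles."""
    return [tuple(a + (b - a) * t / steps for a, b in zip(q_start, q_goal))
            for t in range(steps + 1)]

# Two-joint arm: shoulder 60 -> 120 degrees, elbow 300 -> 240 degrees.
waypoints = straight_line_plan((60.0, 300.0), (120.0, 240.0))
print(waypoints[0], waypoints[-1])   # endpoints are the start and goal
```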

# Handling obstacles in configuration space

Note that the picture of the robot arm shown earlier has several obstacles in it: the left and right walls, the table, and the vertical bar hanging from the ceiling:

Any plan needs to avoid configurations of the robot arm that will bump into obstacles. To handle this, we mark the parts of the configuration space that are occupied by the various obstacles:

We call the space containing obstacles the occupied space.

We call the space in which the robot is free to move the free space.

In the example above, the white space is the free space.

Note that even if the obstacles are fairly simple, as in our example, the configuration space can look quite complex.

# Path planning in a configuration space with obstacles: cell decomposition

Once there are obstacles in our space, planning is no longer as simple as following a straight line between the current configuration and the goal configuration. Here we discuss one technique for planning a path: cell decomposition.

Cell decomposition works as follows: First, we decompose the free space into cells. That is, we divide the space into a grid (just as we did when doing path planning with state space search). Then we do path planning as in the discrete case. Once we know the series of cells that make up the high level plan, we find a path that travels through the cells. Within any given cell, we simply travel in a straight line!
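The discrete stage of cell decomposition can be sketched as a breadth-first search over cells, assuming the decomposition has already produced a boolean grid marking which cells lie entirely in free space (the grid below is a hypothetical layout):

```python
from collections import deque

def plan_cells(free, start, goal):
    """Breadth-first search over the free cells of a decomposed space.
    free[r][c] is True when cell (r, c) lies entirely in free space."""
    rows, cols = len(free), len(free[0])
    parent = {start: None}
    frontier = deque([start])
    while frontier:
        cell = frontier.popleft()
        if cell == goal:                  # goal: recover the cell sequence
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and free[nr][nc] \
                    and nxt not in parent:
                parent[nxt] = cell
                frontier.append(nxt)
    return None

# A hypothetical decomposition: False cells overlap occupied space.
FREE = [[True,  True,  False, True],
        [False, True,  False, True],
        [True,  True,  True,  True]]
print(plan_cells(FREE, (0, 0), (0, 3)))
```

The returned cell sequence is the high-level plan; within each cell (and between the centers of adjacent cells) the robot travels in a straight line.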

# Shortcomings of planning with cell decomposition

While this method has many benefits, it isn't without its own challenges:
1. It is only really workable for low-dimensional configuration spaces (it just gets too complex otherwise).
2. It doesn't necessarily provide safe paths; we really want to maximize clearance while minimizing the length of the path.
3. We need to determine how to handle cells that are not entirely in free space or occupied space; such cells can be decomposed further.

# Problems with configuration spaces

We have already seen that one problem with configuration spaces is obstacles, and we have seen at least one mechanism for dealing with that problem.

A second issue is that the task is generally expressed in workspace coordinates. How do we then map the workspace coordinates into configuration space? This is the problem of inverse kinematics.

Kinematics: a series of transformations that converts configuration-space coordinates into workspace coordinates.

The mapping from configuration-space coordinates to workspace coordinates isn't too hard. The inverse direction is much trickier, especially for robots with many degrees of freedom.
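For the planar two-joint arm from earlier, the easy direction is just trigonometry. A minimal sketch, where the link lengths are illustrative assumptions (the inverse direction would require solving these equations for q1 and q2, which in general has zero, one, or several solutions):

```python
import math

def forward_kinematics(q1, q2, l1=1.0, l2=0.8):
    """Map the two joint angles (radians) of a planar two-link arm to
    the workspace (x, y) coordinates of its elbow and gripper.  The
    link lengths l1 and l2 are illustrative assumptions."""
    elbow = (l1 * math.cos(q1), l1 * math.sin(q1))
    gripper = (elbow[0] + l2 * math.cos(q1 + q2),
               elbow[1] + l2 * math.sin(q1 + q2))
    return elbow, gripper

# Shoulder level, elbow bent 90 degrees upward:
# elbow lands at (1, 0), gripper at (1, 0.8).
elbow, gripper = forward_kinematics(0.0, math.pi / 2)
print(elbow, gripper)
```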