This part of the book brings together much that we have learnt previously: robot kinematics and dynamics for arms and mobile robots; geometric aspects of image formation; and feature extraction.
The motivation is that while it is common to talk about a robot moving to an object, the reality is that the robot is only moving to a pose at which it expects the object to be. This is a subtle but deep distinction. A consequence is that the robot will fail to grasp the object if the object is not at the expected pose. It will also fail if imperfections in the robot mechanism or controller result in the end-effector not actually achieving the pose that was specified. In order for this approach to work successfully we need to solve two quite difficult problems: determining the pose of the object, and ensuring that the robot achieves that pose.
Stepping back for a moment and looking at this problem it is clear that the root cause of the problem is that the robot cannot see what it is doing. If the robot could see the object and its end-effector, it could use that information to guide the end-effector toward the object. This is what humans call hand-eye coordination and what we will call vision-based control or visual servo control — the use of information from one or more cameras to guide a robot in order to achieve a task.
A vision-based control system continuously measures the target and the robot using vision, creating a feedback signal that moves the robot arm until the visually observed error between the robot and the target is zero. This is quite different to taking a single image, determining where the target is, and then reaching for it. The advantage of continuous measurement and feedback is that it provides great robustness with respect to errors anywhere in the system.
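The feedback idea described above can be sketched in a few lines of Python. This is an illustrative simulation only, not code from any particular toolbox: a proportional controller repeatedly commands a motion toward the visually observed error, and the `actuation_gain` parameter (a made-up name) models an imperfect, miscalibrated robot that moves less than commanded. Because the error is re-measured at every step, the feature still converges.

```python
import numpy as np

def visual_servo_step(p, p_star, lam=0.2, actuation_gain=0.7):
    """One control step of a proportional visual servo.

    p              -- currently observed image-plane feature position (pixels)
    p_star         -- desired feature position (pixels)
    lam            -- proportional control gain
    actuation_gain -- models an imperfect robot that achieves only a
                      fraction of the commanded motion
    """
    error = p_star - p            # visually observed error
    commanded = lam * error       # proportional control law
    return p + actuation_gain * commanded  # imperfect actuation

# Simulate: despite the 30% actuation error, the observed feature
# is driven to the goal because the error is re-measured every step.
p = np.array([100.0, 50.0])        # current feature position
p_star = np.array([320.0, 240.0])  # desired feature position
for _ in range(200):
    p = visual_servo_step(p, p_star)
print(np.round(p))  # → [320. 240.]
```

Contrast this with the look-then-move strategy: there, the same 30% actuation error would cause the motion to stop 30% short of the goal, because the error is measured only once.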