


Presentation Transcript

Meeting July 8th 2011:

Aidin Foroughi

Visual Servoing Components:

- Controller
- Visual Component
- Features
- Constant and Estimation Parameters
- Desired Features
- Gain
- Interaction Matrix
- Velocity Controller Parameters
- Velocity Controller
- Hybrid/Switching Methods
- Feature Planning
- Path Planning
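The components listed above (features, desired features, gain, interaction matrix, velocity controller) come together in the standard image-based control law v = -λ L⁺ (s - s*). A minimal numpy sketch, where the point interaction matrix is the textbook one for a normalized image point (x, y) at depth Z (function names here are illustrative, not from any particular library):

```python
import numpy as np

def ibvs_velocity(s, s_star, L, gain=0.5):
    """Classic image-based visual servoing law: v = -gain * L^+ (s - s*).

    s      : current feature vector (e.g. image-point coordinates)
    s_star : desired feature vector
    L      : interaction matrix (image Jacobian) mapping camera velocity
             to feature velocity
    """
    error = s - s_star
    # Moore-Penrose pseudo-inverse handles non-square / rank-deficient L
    return -gain * np.linalg.pinv(L) @ error

def point_interaction_matrix(x, y, Z):
    """Interaction matrix of one normalized image point (x, y) at depth Z."""
    return np.array([
        [-1 / Z, 0.0, x / Z, x * y, -(1 + x**2), y],
        [0.0, -1 / Z, y / Z, 1 + y**2, -x * y, -x],
    ])
```

With several points the individual matrices are stacked; the pseudo-inverse then trades off the feature errors in a least-squares sense.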

Visual Servoing Constraints:

- Controller
- Visual Component
- Velocity Controller
- Field of View
- Image Local Minima
- Singularities in the Image Jacobian
- Kinematic Constraints
- Dynamic Constraints
- Collision
- Occlusion

Task Modeling for execution by Visual Servoing:

Learning and execution are usually considered as separate problems. It seems a natural idea to propose task models that are suitable for execution, i.e. to define and learn tasks with respect to visual features.

(Slide diagram labels: demonstrations, PbD, Visual Servoing Task Model, Hybrid Intelligent Supervisory System.)

Visual Learning and Visual Servoing:

The idea is to learn tasks in association with the visual cues that best explain them. For example: a system watches a demonstration and detects certain edges/points/features in the images that are most probably associated with the task. In the future, those features can trigger the task or a subtask. For the visual input, an eye-in-hand camera or a separate camera are the options.

Visual Learning and Servoing:

Let's consider an unintelligent case with an eye-in-hand setup. A camera attached to the tool can record images while the task is being demonstrated, and some local image features can be tracked in the image sequences. Several demonstrations can be made, and averaging or another generalization method can be applied to the feature trajectories to extract a generalized, smooth feature-trajectory sequence. For execution, we can use these feature trajectories with an image-based visual servoing approach.
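The averaging step above can be sketched as follows: resample each demonstrated feature trajectory to a common time base and take the pointwise mean. This is a minimal sketch; a smoother generalization (e.g. GMR or an HMM-based method, as discussed later) could replace the plain mean:

```python
import numpy as np

def generalize_trajectories(demos, n_samples=100):
    """Average several demonstrated feature trajectories.

    demos: list of arrays, each of shape (T_i, d) - one tracked feature
           trajectory per demonstration, possibly of different lengths.
    Each demo is resampled to a common normalized time base by linear
    interpolation, then averaged pointwise.
    """
    resampled = []
    for demo in demos:
        t = np.linspace(0.0, 1.0, len(demo))       # this demo's time base
        t_new = np.linspace(0.0, 1.0, n_samples)   # common time base
        cols = [np.interp(t_new, t, demo[:, j]) for j in range(demo.shape[1])]
        resampled.append(np.stack(cols, axis=1))
    return np.mean(resampled, axis=0)
```

The resulting (n_samples, d) array is the reference feature trajectory that the image-based controller would track during execution.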

Visual Learning and Servoing:

But we can think of more intelligent extensions of the idea. Imagine that several demonstrations are performed on different targets (not only on the same target/object). Before, the differences and variances between trajectories were used to generalize a trajectory; here, we can use the differences in visual cues across the cases to realize which visual cues determine how the task is performed. If this works, the system can learn which visual features are most probably related to the task, and it can learn to perform the task on targets never encountered before. Painting is perhaps the best example.
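One simple reading of this idea is a variance criterion: a feature that is reproduced consistently across demonstrations on *different* targets is more likely tied to the task itself than to any one target. A hedged sketch (the summary statistic per feature is an assumption; any per-demonstration summary of the tracked trajectories could be used):

```python
import numpy as np

def rank_feature_relevance(demo_features):
    """Rank visual features by consistency across demonstrations.

    demo_features: array of shape (n_demos, n_features), e.g. each
    feature's final value (or another summary statistic of its
    trajectory) in each demonstration on a different target.

    Features with low variance across demonstrations are reproduced
    consistently regardless of target, so they are more likely related
    to the task. Returns feature indices sorted from most to least
    consistent.
    """
    variances = np.var(demo_features, axis=0)
    return np.argsort(variances)
```

Top-ranked features would then be the ones an intelligent servoing module selects when facing a new target.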

Visual Learning and Servoing:

Execution of the task, however, cannot be done using a classical visual servoing method. There has to be a higher-level module that selects visual cues dynamically, does path planning, and monitors the process. This is where an intelligent visual servoing method is needed.

Relation to other areas:

Some ideas are similar to the concept of object affordances (J. J. Gibson, The Ecological Approach to Visual Perception, Lawrence Erlbaum Associates, Hillsdale, p. 127, 1979). In the affordance framework, the effects of actions on objects are learned; then, based on the task requirement and the knowledge of those effects, an action is applied. But we may not focus on the effects of actions and rather learn the causes of actions.


Tools:

Knowledge of computer vision, especially local feature descriptors or holistic image descriptors, is needed. Stochastic tools such as HMMs are needed to generalize among different demonstrations. I have been looking into using HMMs for tracking and learning visual features for eye-in-hand task demonstrations; nothing similar has been done. I have also been looking into visual servoing libraries to see how a higher-level module can be implemented on top.
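For reference, the core of the HMM machinery mentioned above is the forward algorithm, which scores an observation sequence under a model. A minimal discrete-observation sketch (in practice tracked feature trajectories would first be quantized into symbols, and a library such as hmmlearn would likely be used instead):

```python
import numpy as np

def hmm_forward(pi, A, B, obs):
    """Forward algorithm for a discrete-observation HMM.

    pi  : (n,) initial state distribution
    A   : (n, n) transition matrix, A[i, j] = P(next state j | state i)
    B   : (n, m) emission matrix, B[i, k] = P(symbol k | state i)
    obs : sequence of observed symbol indices

    Returns P(obs | model). Comparing this likelihood across models
    trained on different demonstrations is one way to classify or
    generalize tasks.
    """
    alpha = pi * B[:, obs[0]]            # initial step
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]    # propagate and emit
    return alpha.sum()
```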
