

Presentation Transcript

Meeting June 10th 2011:

Meeting June 10th 2011 Aidin Foroughi


Outline In these slides I will try to discuss tangible scenarios for the PbD and Visual Servoing problems. But before that, I would like to briefly go over some findings from my survey that are necessary for the discussion.

Case of PbD:

Case of PbD We want to improve over PbD. What are the problems with current PbD approaches? What are the PbD approaches in the first place? So I did a little survey on PbD. [1] Brenna D. Argall, Sonia Chernova, Manuela Veloso, Brett Browning, "A survey of robot learning from demonstration," Robotics and Autonomous Systems, Volume 57, Issue 5, 31 May 2009, Pages 469-483, ISSN 0921-8890, DOI: 10.1016/j.robot.2008.10.02. [2] Manuel Lopes, Francisco Melo, Luis Montesano, and Jose Santos-Victor, "Abstraction levels for robotic imitation: Overview and computational approaches," 2010. The following is based on these papers.

PbD Approaches:

PbD Approaches Each of these two papers presents a different categorization, but the two are more or less the same. First we will go over [1].

PbD Approaches:

PbD Approaches [1] goes on to classify them further as follows. This is what I used to think PbD was. I knew that this category (reinforcement learning) also exists, but I didn't quite know that these others exist.

PbD Approaches:

PbD Approaches This takes us to the second reference. [2] explicitly talks about layers of abstraction for PbD: Motor Resonance, the replication of observed motor patterns; Object-Mediated Imitation, the replication of observed effects; and Imitation by Goal Inference, the replication of actions toward an inferred goal.

PbD Approaches:

PbD Approaches It then defines different behaviors at different levels and presents a terminology.

PbD Approaches:

PbD Approaches [2] goes on to discuss many approaches at each level, including some interesting papers that infer the goal of the demonstrator and try to achieve that goal without necessarily replicating the action (the trajectory, etc.). However, if the goal is inferred to be copying the trajectory, the robot may simply copy the trajectory.

A Discussion on Planning Methods:

A Discussion on Planning Methods Although the planning approaches to PbD are often categorized as a separate approach to the PbD problem, I believe this categorization is not entirely correct, because the planning method and the other two methods are often applied to two totally different classes of problems, which happen to have similar names and similar terminology. The planning approach is mostly applied to problems with a multitude of simple actions, where the basic actions are recognizable and simple to replicate (stacking, picking up, pushing, etc.); then, for instance, a goal of "sorting" is inferred, and that goal is replicated in different situations with a totally different sequence of actions. The other two methods are applied to tasks where a single complex task is present, for instance flipping an artificial pancake in a pan; there, planning and higher-level approaches are hardly useful.

A Discussion on Basic Assumptions for Planning :

A Discussion on Basic Assumptions for Planning In planning, the central assumptions are: actions are performed instantaneously and sequentially (episodic actions), and the effects of actions and the environment are observable and recognizable. This results in (or is necessary for) a discretization of actions. There has been little work on planning for continuous domains, and there is no general framework for planning in continuous domains. (We do have temporal logic, modal logic, etc., but the domain is still discrete.) Therefore, continuous problems are inherently not very well modeled by the planning approaches.
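These assumptions can be made concrete with a toy block-stacking planner (a hypothetical sketch in Python, not taken from [1] or [2]): actions are atomic and sequential, and the state is discrete and fully observable.

```python
from collections import deque

# Toy STRIPS-style planner illustrating the planning assumptions above:
# actions are instantaneous and sequential, and the world state is fully
# observable and discrete. A state is a set of stacks of blocks,
# e.g. (("A", "B"),) means B stacked on A.

def canon(state):
    """Order-independent canonical form of a set of stacks."""
    return tuple(sorted(tuple(s) for s in state if s))

def moves(state):
    """Enumerate all legal 'move the top block' actions from a state."""
    stacks = [list(s) for s in state]
    for i, src in enumerate(stacks):
        block = src[-1]
        # Move the top block onto each other stack, or onto the table (None).
        for j in list(range(len(stacks))) + [None]:
            if j == i:
                continue
            new = [list(s) for s in stacks]
            new[i] = new[i][:-1]
            if j is None:
                new.append([block])
            else:
                new[j] = new[j] + [block]
            yield f"move {block}", canon(new)

def plan(start, goal):
    """Breadth-first search over discrete states; returns an action list."""
    frontier = deque([(canon(start), [])])
    seen = {canon(start)}
    while frontier:
        state, actions = frontier.popleft()
        if state == canon(goal):
            return actions
        for name, nxt in moves(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, actions + [name]))
    return None
```

For example, stacking three blocks that start on the table into a single tower is a two-move plan. Nothing here generalizes to continuous trajectories, which is exactly the limitation argued above.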

Other Papers (LbI):

Other Papers (LbI) There are other papers describing what can be called "Learning by Interaction." In these papers, in parallel to the demonstrations, semantic knowledge in the form of special annotations, rules, or natural-language sentences (which are then translated into a form of knowledge representation) is given to the robot. The robot then processes all this information together. There are many examples of this approach in the literature, and some of them are not new.

Survey Conclusion:

Survey Conclusion Using higher levels of abstraction for PbD has been investigated in the literature, and a book dedicated to this subject was published in 2010. Hybrid approaches are rare. Issues of lower-level methods: I noticed that some issues with lower-level methods of learning from demonstration are mentioned in the literature, namely over-imitation and problems with repeated actions (if some task has to be repeated several times).

Discussion on a Hybrid Method:

Discussion on a Hybrid Method A supervisory high-level controller for low-level PbD is a feasible idea. In a discussion I had with Aleks, it came to our attention that lower-level parameters can be overwritten or bounded with respect to the task knowledge. This task knowledge can be given to the system in two forms: within an LbI framework (given during demonstrations), or separately (before or after). In either case, the high-level reasoner controls how the lower-level processing should be done. The following is a more tangible preliminary scenario.
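One minimal reading of "lower-level parameters can be overwritten or bounded with respect to the task knowledge" could look like the following sketch (all names and the rule format are hypothetical, not from an existing framework):

```python
# Hypothetical sketch: a high-level supervisor that overwrites or bounds
# parameters produced by a low-level learner, based on declarative task
# knowledge. All names are illustrative.

def supervise(learned_params, task_knowledge):
    """Return learned parameters adjusted to respect task constraints.

    task_knowledge maps a parameter name to either
      ("fix", value)     -- overwrite the learned value, or
      ("bound", lo, hi)  -- clamp the learned value into [lo, hi].
    """
    out = dict(learned_params)
    for name, rule in task_knowledge.items():
        if rule[0] == "fix":
            out[name] = rule[1]
        elif rule[0] == "bound":
            lo, hi = rule[1], rule[2]
            out[name] = min(max(out[name], lo), hi)
    return out

# Values a low-level learner might extract from a noisy demonstration:
learned = {"surface_distance": 0.17, "speed": 0.9}
# Task knowledge: distance must be exactly 0.15 m; speed must stay in range.
knowledge = {"surface_distance": ("fix", 0.15),
             "speed": ("bound", 0.2, 0.6)}
```

Calling `supervise(learned, knowledge)` yields parameters that satisfy the task constraints while keeping everything the knowledge base is silent about.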

PbD Scenario:

PbD Scenario For a painting task, it is known that while applying the paint, the distance to the target surface should be constant. PbD with lower-level methods may result in a trajectory that does not show this property, because the human demonstrator is not a robot and cannot keep a fixed distance to the surface, and because the resulting trajectory may lose the fixed distance in the low-level processing itself. Based on the task knowledge given to the system, separately or during demonstration, the processing parameters are altered to meet this criterion.
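For a planar target surface, the correction could be a simple post-processing step (a hypothetical sketch; the function name and surface model are illustrative):

```python
# Hypothetical post-processing step: project a demonstrated trajectory so
# that every point keeps a fixed distance to a planar target surface.
# The demonstrator cannot hold an exact distance, so the high-level task
# knowledge ("distance must be constant") corrects the low-level data.

def enforce_distance(trajectory, surface_z, distance):
    """For a horizontal plane z = surface_z, pin each point's height."""
    return [(x, y, surface_z + distance) for (x, y, _) in trajectory]

# A noisy demonstration above a surface at z = 0; target distance 0.15 m.
demo = [(0.0, 0.0, 0.14), (0.1, 0.0, 0.17), (0.2, 0.0, 0.13)]
```

The in-plane motion is kept exactly as demonstrated; only the constrained degree of freedom is overwritten.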

PbD Scenario:

PbD Scenario Ultimately, the trajectory demonstration and learning, together with the lower-level processing and its parameters, can all be automated using a knowledge-based reasoner. For instance, during the demonstration, the knowledge base can control the recording of data and pose questions when needed. Later, this information is used for tuning or choosing the appropriate low-level processes. Note: the difference between this hybrid method and a planning approach to learning is that in planning approaches the higher level learns, whereas here the higher level controls the lower-level learning (and perhaps later executes this lower-level information, an extension discussed later).

PbD Scenario Discussion:

PbD Scenario Discussion Extensions could include changing trajectory parameters for execution when encountering new targets (changing the frequency of wave-form trajectories to cover the surface, etc.). Questions for discussion: Do we need a low-level learner if we know what a wave-form trajectory is and what its parameters are? In other words, do we need low-level information at all? Counterexamples: the FlexPaint fully automatic painting platform (2005), or the fully automatic polishing system (2005). Any ideas?
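To make the question concrete: if the wave-form and its parameters are known, the trajectory can be generated directly, with no low-level learner at all (a hypothetical sketch; all names are illustrative):

```python
import math

# Hypothetical generator: a sinusoidal ("wave-form") coverage trajectory
# over a rectangular surface, specified purely by high-level parameters.
# Changing `frequency` adapts coverage density for a new target, as in the
# extension mentioned above -- no low-level learning is involved.

def waveform_trajectory(width, height, frequency, samples=200):
    """Sinusoidal sweep across a width x height surface.

    frequency = number of full oscillations across the sweep.
    Returns a list of (x, y) points in surface coordinates.
    """
    points = []
    for i in range(samples):
        t = i / (samples - 1)                      # progress in [0, 1]
        x = width * t                              # steady advance
        y = height * 0.5 * (1 + math.sin(2 * math.pi * frequency * t))
        points.append((x, y))
    return points
```

Whether such a parametric generator makes the learner redundant, or whether the learner is still needed to pick the parameters themselves, is exactly the open question above.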

Visual Servoing:

Visual Servoing I did a survey on visual servoing using the recent tutorials by Chaumette and Hutchinson. My understanding of the problem is that it is composed of: a control loop that works on minimizing an error, and a vision system that computes an error representing how close we are to the desired pose, using either image features (IBVS) or pose estimation (PBVS). I also know about more advanced methods (Part II of the tutorial). My main focus was to find a higher-level reasoner somewhere, either in the vision part or in the controller, so I looked for reasoning used in visual servoing.
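That control loop can be sketched for a toy 2D feature (a deliberate simplification; a real IBVS controller, as in the Chaumette and Hutchinson tutorials, maps the error through the interaction matrix, which is omitted here):

```python
# Minimal sketch of the visual-servoing loop: an error between the current
# and desired feature is driven to zero by the proportional law v = -lam*e.
# This is a 2D toy; real IBVS/PBVS would use the interaction matrix and the
# camera geometry, both omitted here.

def servo(feature, desired, lam=0.5, steps=50):
    """Iterate the closed loop; returns the final feature position."""
    x, y = feature
    for _ in range(steps):
        ex, ey = x - desired[0], y - desired[1]   # feature error e
        vx, vy = -lam * ex, -lam * ey             # control law v = -lam * e
        x, y = x + vx, y + vy                     # simulated camera motion
    return (x, y)
```

With 0 < lam < 1 the error shrinks geometrically each step, which is the exponential decrease the tutorials design for.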

Higher level visual servoing?:

Higher level visual servoing? I looked for cognitive and semantic approaches used for visual servoing tasks, and I believe such works are absent because they are no longer called visual servoing, perhaps because they no longer fit in the visual servoing framework. For example: [3] Z. Wasik and A. Saffiotti, "A hierarchical behavior-based approach to manipulation tasks," Robotics and Automation, 2003. Proceedings. ICRA '03. IEEE International Conference on, vol. 2, pp. 2780-2785, 14-19 Sept. 2003.

Visual Servoing:

Visual Servoing This is how they describe their framework in comparison with visual servoing:

Visual Servoing:

Visual Servoing Then they define behaviors in form of fuzzy rules:
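As a generic illustration of behaviors written as fuzzy rules (this is a hypothetical sketch, not the actual rule base of [3]):

```python
# Generic sketch of a fuzzy behavior in the spirit of [3] (not the paper's
# actual rules). Triangular membership functions grade the input, and the
# rule outputs are combined by a weighted average (a simple defuzzification).

def tri(x, a, b, c):
    """Triangular membership function rising on [a, b], falling on [b, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_speed(distance):
    """IF distance is NEAR THEN move slowly; IF distance is FAR THEN move fast."""
    near = tri(distance, -0.1, 0.0, 0.5)   # hypothetical membership shapes
    far = tri(distance, 0.0, 0.5, 1.1)
    slow, fast = 0.1, 0.8                  # rule output speeds (m/s)
    w = near + far
    return (near * slow + far * fast) / w if w > 0 else 0.0
```

Between the two extremes the output blends smoothly, which is what lets such small rules compose into the larger behaviors described next.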

Visual Servoing:

Visual Servoing Then they define more complex functions in terms of these smaller rules.

Visual Servoing:

Visual Servoing Results:

Visual Servoing:

Visual Servoing So there are (possibly many) higher-level approaches to visual servoing tasks. The presented paper can take care of occlusions by searching whenever the object is lost, or one could define rules for keeping the target in the frame and trying to keep it unoccluded.

Situation awareness for visual servoing:

Situation awareness for visual servoing The frameworks that I saw all lacked a form of history of the problem, meaning they were essentially reflexive. A system that keeps information about the environment together with the task goal can deal with much more complex situations. Situation awareness perhaps calls for a decent 3D reconstruction or representation of the environment together with object recognition, etc. There have been attempts to this end, and fortunately a good starting point is available: the CoVIS project, which provides C++ code and libraries, with documentation, for 3D reconstruction of the environment with Early Cognitive Vision primitives, and which has been used for grasping tasks.
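The difference between a reflexive loop and one with history can be shown with a toy controller that remembers the last observed target position during an occlusion (a hypothetical sketch; the class and its interface are illustrative):

```python
# Hypothetical sketch of adding "history" to a reflexive servoing loop:
# a purely reflexive controller has no error signal while the target is
# occluded, whereas a controller with memory keeps servoing toward the
# last observed target position.

class ServoWithMemory:
    def __init__(self, lam=0.5):
        self.lam = lam
        self.last_seen = None  # minimal one-item "situation" history

    def step(self, position, observation):
        """observation is the target position, or None if occluded."""
        if observation is not None:
            self.last_seen = observation
        target = observation if observation is not None else self.last_seen
        if target is None:
            return position  # nothing ever observed: stay put
        ex, ey = position[0] - target[0], position[1] - target[1]
        return (position[0] - self.lam * ex, position[1] - self.lam * ey)
```

A full situation-aware system would of course keep far richer state (a 3D reconstruction, recognized objects, the task goal), but even this one-item memory already breaks the purely reflexive pattern.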

Scenario for Visual Servoing:

Scenario for Visual Servoing There are both low-level and higher-level behavioral platforms for visual servoing, and improving over the current systems may take the form of a general cognitive vision platform for robotics. Although it is a complicated project, it is feasible thanks to the infrastructure already provided by other research teams, since most of the low-level difficulties have been overcome.
