Slide 1: A Software Architecture and Tools for Autonomous Robots that Learn on Mission
K. Kawamura, M. Wilkes (Presenter), A. Peters
Vanderbilt University Center for Intelligent Systems
http://eecs.vanderbilt.edu/CIS/DARPA/
September 2002 MARS PI Meeting

Presentation / Demo
- Objective
- Accomplishments
- Multi-Agent Based Robot Control Architecture
- Agent-Based Human-Robot Interfaces
- Sensory EgoSphere (SES)
- SES- and LES-Based Navigation
- SES Knowledge Sharing
- Dynamic Path Planning through SAN-RL
- Adaptive Human-Robot Interface
- Human-Robot Teaming
- Human-Robot Interface

Objective
Develop a multi-agent based robot control architecture for humanoid and mobile robots that can:
- accept high-level commands from a human,
- learn from experience to modify existing behaviors, and
- share knowledge with other robots.

Accomplishments
- Multi-agent based robot control architectures: developed for humanoid and mobile robots
- Agent-based human-robot interfaces: developed for humanoid and mobile robots
- SES (Sensory EgoSphere) for robot short-term memory: developed and transferred to the NASA/JSC Robonaut group
- SES- and LES (Landmark EgoSphere)-based navigation: proof of concept demonstrated
- SES knowledge sharing among mobile robots: proof of concept demonstrated
- SAN-RL (Spreading Activation Network - Reinforcement Learning): integrated and applied to mobile robots for dynamic path planning

Slide 5: Multi-Agent Based Robot Control Architecture
- Humanoid. Novel approach: a distributed, agent-based architecture that expressly represents the human and the humanoid internally.
- Mobile. Novel approach: a distributed, agent-based architecture that gathers mission-relevant information from robots.

Slide 6: Agent-Based Human-Robot Interfaces for Humanoids
Novel approach: modeling the human's and the humanoid's intent for interaction.
Human Agent (HA)
- observes and monitors the communications and actions of people
- extracts a person's intention for interaction
- communicates with people
Self Agent (SA)
- monitors the humanoid's activity and performance, for self-awareness and reporting to the human
- determines the humanoid's intention and response, and reports to the human

Agent-Based Human-Robot Interface for Mobile Robots
Novel approach: an interface that adapts to the current context of the mission, in addition to user preferences, by using User Interface Components (UICs) and an agent-based architecture. Examples: Camera UIC, Sonar UIC.

Sensory EgoSphere (SES) for Humanoids
- First proposed by Albus in 1991
- Objects in ISAC's immediate environment are detected
- Objects are registered onto the SES at the interface nodes closest to the objects' perceived locations
- Information about a sensory object is stored in a database with the node location and other indexing information
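The registration scheme on this slide lends itself to a compact illustration. Below is a minimal Python sketch, not the authors' implementation: the node set is the coarsest geodesic tessellation (a bare icosahedron, where the real SES subdivides further), and the class, method, and record names are assumptions made for the example.

import math

def icosahedron_nodes():
    """Vertices of an icosahedron as unit vectors: the coarsest
    geodesic tessellation (the real SES is subdivided further)."""
    phi = (1 + math.sqrt(5)) / 2
    raw = [(0, 1, phi), (0, -1, phi), (0, 1, -phi), (0, -1, -phi),
           (1, phi, 0), (-1, phi, 0), (1, -phi, 0), (-1, -phi, 0),
           (phi, 0, 1), (phi, 0, -1), (-phi, 0, 1), (-phi, 0, -1)]
    norm = math.sqrt(1 + phi * phi)
    return [(x / norm, y / norm, z / norm) for x, y, z in raw]

class SensoryEgoSphere:
    def __init__(self):
        self.nodes = icosahedron_nodes()
        self.memory = {i: [] for i in range(len(self.nodes))}  # node -> records

    def register(self, direction, label, data):
        """Attach a percept to the node closest to its perceived direction
        (maximum dot product = smallest angle on the unit sphere)."""
        best = max(range(len(self.nodes)),
                   key=lambda i: sum(d * n for d, n in zip(direction, self.nodes[i])))
        self.memory[best].append({"label": label, "data": data})
        return best

ses = SensoryEgoSphere()
node = ses.register((0.0, 0.6, 0.8), "red_ball", {"color": "red"})
print(node, ses.memory[node])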
Slide 9: Sensory EgoSphere (SES) for Robonaut

Sensory EgoSphere (SES) for Mobile Robots
- Used to enhance a graphical user interface and increase situational awareness
- In a GUI, the SES translates mobile-robot sensory data from the sensing level to the perception level in a compact form
- Used for perception-based navigation with a Landmark EgoSphere (LES)
- Used for supervisory control of mobile robots
- Perceptual and sensory information is mapped onto a geodesically tessellated sphere
- Distance information is not explicitly represented on the SES
- An SES defines a location; a sequence of SESs defines a path
[Figures: the SES, a 2D egocentric view, and a top view.]

SES- and LES-Based Navigation
Novel approach: range-free, perception-based navigation.
- Navigation behavior is based on egocentric representations
- The SES represents the current perception of the robot
- The LES represents the expected state of the world
- The SES and location are tightly bound
- Comparing the two provides the best estimate of the direction toward a desired region

Slide 12: Human-Robot Teaming: Interactive Perception Correction (Current Research)
- Mixed-initiative perception correction for robust navigation
- Supports learning of landmarks

Slide 13: Navigation Demo with Perception Correction

SES and LES Knowledge Sharing
Novel approach: a team of robots that share SES and LES knowledge.
Sharing an SES:
- Robot 1 creates an SES
- Robot 1 finds the object
- Robot 1 shares its SES data with Robot 2
- Robot 2 calculates the heading to the object
- Robot 2 finds the object
Sharing LESs:
- Robot 1 has the map of the environment
- Robot 1 generates LESs for via points
- Robot 1 shares its LES data with Robot 2
- Robot 2 navigates to the target using perception-based navigation (PBN)

Dynamic Path Planning through SAN-RL (Spreading Activation Network - Reinforcement Learning)
Novel approach: action selection with learning for the mobile robot (demonstrated on Scooter).
Behavior priorities: shortest time, avoid enemy, equal priority, and more.
- Get initial data from the learning mode
- Issue a high-level command with multiple goals
- After training finishes, send the data back to the database
- SAN-RL activates and deactivates the robot's behaviors (atomic agents)

Current Directions

Adaptive Human-Robot Interface

Adaptive Human-Robot Interface: Objective & Key Features
Objective: develop a graphical user interface (GUI) that adapts its appearance and functions to the user's preferences and the current mission context.
Key features:
- High-level mission planning and mission progress management
- User- and mission-adaptive display of sensory information
- User preference management

Adaptive Human-Robot Interface: Architecture
- Commander Interface Agent
- Robot Interface Agent
- Command UICs
- Status UICs
- GUI Manager

Adaptive Human-Robot Interface: Overall Application

Adaptive Human-Robot Interface: Mission Planning & Mission Progress Management
A mission is decomposed into tasks (Task A, Task B, Task C), and each task is mapped to a Spreading Activation Network (SAN A, SAN B, SAN C).

Adaptive Human-Robot Interface: User Interface Components (UICs)
- Map UIC: 2D/3D map, landmark mapping
- Sonar/Laser UIC: selectable appearances
- Camera UIC: supervisory target selection
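To make the UIC idea concrete, here is a minimal hypothetical sketch of how a GUI manager could select and order components by mission context and stored user preferences. All names (UIC, GUIManager, the context labels) are assumptions for illustration; the deck does not specify this API.

from dataclasses import dataclass, field

@dataclass
class UIC:
    name: str
    contexts: set  # mission contexts in which this component is useful

@dataclass
class GUIManager:
    components: list
    preferences: dict = field(default_factory=dict)  # user -> preferred UIC names

    def layout(self, user, context):
        """Show every UIC relevant to the current context, ordering the
        user's preferred components first."""
        relevant = [c for c in self.components if context in c.contexts]
        preferred = self.preferences.get(user, [])
        return sorted(relevant, key=lambda c: (c.name not in preferred, c.name))

mgr = GUIManager(
    components=[UIC("Map", {"navigate", "search"}),
                UIC("Sonar/Laser", {"navigate"}),
                UIC("Camera", {"search", "follow"})],
    preferences={"commander": ["Camera"]},
)
print([c.name for c in mgr.layout("commander", "search")])  # Camera first, then Map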
Adaptive Human-Robot Interface: Demo
Demo scenario:
- Go to Point A (map-based navigation)
- Find partner (supervisory target selection)
- Follow partner

Slide 24: Human-Robot Teaming Scenario
- Humans and robots cooperate in a perimeter surveillance mission
- SES/LES-based navigation is used
- Humans provide perception correction for robust navigation

Slide 25: Human-Robot Teaming: Interactive Perception Correction
- Mixed-initiative perception correction for robust navigation
- Supports learning of landmarks

Slide 26: PDA Interface: Sketching and Linguistic Description (M. Skubic, Univ. of Missouri - Columbia)
Developed by M. Skubic et al., this interface derives a qualitative linguistic description of the robot's path. We plan to merge it with our SES/LES-based navigation.
[Figures: a route map sketched on a PDA; robot movements shown in a table with the linguistic descriptions of the corresponding spatial states.]

Slide 27: H-R Interface: Current Research
- Extract tri-phasic control parameters from the EMG signal
- Use tri-phasic control to move ISAC's arm
- McKibben artificial muscles are well suited for this research

Slide 28: Research Roadmap
Our goal is to indirectly use brain activity to control a humanoid robotic arm via surface electromyographic (EMG) signals extracted from a user's arm muscles: the user flexes arm muscles, and ISAC performs the corresponding action.
- Phase 1: Develop a biologically inspired control architecture that actuates ISAC's arm using simulated tonic and phasic components derived from EMG signals
- Phase 2 (current research): Map neuro-muscular junction signals to tri-phasic control parameters for control of a robotic arm; a rough sketch of the burst-extraction step follows below
- Phase 3: Map spinal signals to the signals measured at the neuro-muscular junction, in conjunction with VUMC
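As a rough illustration of Phase 2, the sketch below shows one conventional way to expose burst structure in a surface EMG trace: rectify the signal, smooth it into an envelope, and threshold the envelope into active phases. This is not the authors' method; the window size and threshold are illustrative assumptions.

import math

def emg_phases(samples, window=25, threshold=0.3):
    """Return (start, end) index pairs of active bursts in an EMG trace."""
    rectified = [abs(s) for s in samples]
    envelope = []
    for i in range(len(rectified)):  # moving-average envelope
        seg = rectified[max(0, i - window):i + 1]
        envelope.append(sum(seg) / len(seg))
    peak = max(envelope) if envelope and max(envelope) > 0 else 1.0
    active = [e / peak > threshold for e in envelope]
    phases, start = [], None
    for i, flag in enumerate(active):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            phases.append((start, i))
            start = None
    if start is not None:
        phases.append((start, len(active)))
    return phases

# A synthetic trace with three bursts, mimicking a tri-phasic pattern.
trace = [math.sin(i / 3.0) if (i // 100) % 2 == 0 else 0.0 for i in range(500)]
print(emg_phases(trace))  # roughly three (start, end) burst intervals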
Publications
1. K. Kawamura, R.A. Peters II, D.M. Wilkes, W.A. Alford, and T.E. Rogers, "ISAC: Foundations in Human-Humanoid Interaction," IEEE Intelligent Systems, July/August 2000.
2. K. Kawamura, A. Alford, K. Hambuchen, and M. Wilkes, "Towards a Unified Framework for Human-Humanoid Interaction," Proceedings of the First IEEE-RAS International Conference on Humanoid Robots, September 2000.
3. K. Kawamura, T.E. Rogers, and X. Ao, "Development of a Human Agent for a Multi-Agent Based Human-Robot Interaction," First International Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS 2002), Bologna, Italy, July 15-19, 2002.
4. T. Rogers and M. Wilkes, "The Human Agent: A Work in Progress Toward Human-Humanoid Interaction," Proceedings of the 2000 IEEE International Conference on Systems, Man and Cybernetics, Nashville, October 2000.
5. A. Alford, M. Wilkes, and K. Kawamura, "System Status Evaluation: Monitoring the State of Agents in a Humanoid System," Proceedings of the 2000 IEEE International Conference on Systems, Man and Cybernetics, Nashville, October 2000.
6. K. Kawamura, R.A. Peters II, C. Johnson, P. Nilas, and S. Thongchai, "Supervisory Control of Mobile Robots Using Sensory EgoSphere," IEEE International Symposium on Computational Intelligence in Robotics and Automation, Banff, Canada, July 2001.
7. K. Kawamura, D.M. Wilkes, S. Suksakulchai, A. Bijayendrayodhin, and K. Kusumalnukool, "Agent-Based Control and Communication of a Robot Convoy," Proceedings of the 5th International Conference on Mechatronics Technology, Singapore, June 2001.
8. K. Kawamura, R.A. Peters II, D.M. Wilkes, A.B. Koku, and A. Sekmen, "Towards Perception-Based Navigation Using EgoSphere," Proceedings of the International Society of Optical Engineering Conference (SPIE), October 2001.
9. K. Kawamura, D.M. Wilkes, A.B. Koku, and T. Keskinpala, "Perception-Based Navigation for Mobile Robots," Proceedings of the Multi-Robot Systems Workshop, Washington, DC, March 18-20, 2002.
10. D.M. Gaines, M. Wilkes, K. Kusumalnukool, S. Thongchai, K. Kawamura, and J. White, "SAN-RL: Combining Spreading Activation Networks with Reinforcement Learning to Learn Configurable Behaviors," Proceedings of the International Society of Optical Engineering Conference (SPIE), October 2001.

Acknowledgements
This work has been partially sponsored under DARPA MARS Grant #DASG60-01-1-0001 and NASA/JSC - UH/RICIS Subcontract #NCC9-309-HQ. Additionally, we would like to thank the following CIS students:
- Mobile Robot Group: Bugra Koku, Turker Keskinpala, Hande Keskinpala, Jian Peng
- Humanoid Robotics Group: Tamara Rogers, Kim Hambuchen, Xinyu Ao, Duygun Erol, and Christina Campbell

Slide 31: End

Slide 33: DBAM with SAN
- The DBAM provides long-term memory and recalls sequences of actions
- The SAN provides action selection and memory recall
- Together they modify the robot's actions based on its goals and the environmental state

Sensory EgoSphere Display for Humanoids
Provides a tool for a person to visualize what ISAC has detected.

Multi-Agent Based Robot Control Architecture for Humanoids
Novel approach: a distributed architecture that expressly represents the human and the humanoid internally. (Publications [1, 2])

Multi-Agent Based Robot Control Architecture for Mobile Robots
Novel approach: a distributed, agent-based architecture to gather mission-relevant information from robots. (Publication [ ])

Adaptive Human-Robot Interface: The Robot
ATRV-Jr (iRobot Corporation), equipped with:
- Sonar
- Laser scanner
- Gyro
- Odometer
- Compass
- Camera (pan/tilt/zoom)
- Wireless LAN adapter

Slide 38: PDA Interface: Creating the LES
- The PDA provides a lightweight, portable interface
- The user can sketch the landmark map from which LESs are created
[Figures: screenshot of the landmark map; screenshot of the LES generated from it.]

System Status Evaluation (SSE) - Self Agent
- Contains the Command I/O and Status Agents, the Performance Agent, the Description Agent, and the Activator Agent
- Accepts commands and queries from the Commander Agent
- Activates the necessary agents to implement the commands
- Reports significant errors

SSE - Performance Agent
The highest level of SSE occurs within the Performance Agent, where various measures of task progress and system performance are combined to determine the system's affect.

Slide 41: System Status Evaluation: A Behavior-Level Architecture
- A behavior-level architecture that is a hybrid of the subsumption and motor-schema approaches
- Modifies its behaviors based on a performance measure

Slide 42: SES- and LES-Based Navigation: Basics of the PBNav Algorithm
Landmarks on the SES are paired to compute the direction of motion at any given instant. Unit vectors pointing to these landmarks are created in both the SES and LES views (u_c^i is a unit vector to landmark i on the SES; u_t^i is the unit vector to the same landmark on the LES). Any landmark present in the LES but not in the SES is neglected. D is the direction chosen for the situation described by an SES-LES pair. For each landmark pair (i, j):

    d_c^{ij} = u_c^i · u_c^j,    C_{ij} = u_c^i × u_c^j
    d_t^{ij} = u_t^i · u_t^j,    T_{ij} = u_t^i × u_t^j
    A_{ij} = sgn(d_c^{ij} − d_t^{ij})
    B_{ij} = [sgn(C_{ij} · T_{ij}) + 1] / 2
    D_{ij} = (1 + B_{ij}(A_{ij} − 1)) (u_c^i + u_c^j) / ‖u_c^i + u_c^j‖
    D = Σ_{ij} D_{ij}, summed over all landmark pairs (i, j)
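The pairing rule above translates directly into code. The following sketch implements the slide's formulas numerically; the function name, the NumPy representation, and the sample vectors are assumptions, and the final sum over pairs follows the reconstruction above.

import numpy as np

def pbnav_direction(ses, les):
    """Combine all landmark pairs (i, j) into a motion direction D.
    ses[i] and les[i] are unit vectors to the same landmark i as
    currently perceived (SES) and as expected (LES)."""
    D = np.zeros(3)
    n = len(ses)
    for i in range(n):
        for j in range(i + 1, n):
            dc = ses[i] @ ses[j]; C = np.cross(ses[i], ses[j])
            dt = les[i] @ les[j]; T = np.cross(les[i], les[j])
            A = np.sign(dc - dt)
            B = (np.sign(C @ T) + 1) / 2
            bisector = ses[i] + ses[j]
            norm = np.linalg.norm(bisector)
            if norm < 1e-9:          # skip near-antipodal landmark pairs
                continue
            D += (1 + B * (A - 1)) * bisector / norm
    return D

ses = [np.array(v, float) for v in [(1, 0, 0), (0, 1, 0)]]
les = [np.array(v, float) for v in [(0.8, 0.6, 0), (0, 1, 0)]]
print(pbnav_direction(ses, les))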
Slide 43: Human-Robot Teaming: Interactive Perception Correction
- Mixed-initiative perception correction for robust navigation
- Supports learning of landmarks
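As a closing illustration, here is a speculative sketch of what mixed-initiative perception correction could look like in code, compatible with the SensoryEgoSphere sketch shown earlier. The deck does not detail the mechanism; all names here are hypothetical.

class LandmarkLearner:
    def __init__(self):
        self.examples = {}  # label -> feature vectors gathered from corrections

    def apply_correction(self, ses_memory, node, wrong_label, right_label):
        """When a human relabels a percept the robot registered on its SES,
        fix the SES entry and keep the corrected observation as a training
        example, so later landmark detections improve."""
        for record in ses_memory[node]:
            if record["label"] == wrong_label:
                record["label"] = right_label
                self.examples.setdefault(right_label, []).append(record["data"])

learner = LandmarkLearner()
memory = {7: [{"label": "door", "data": {"hue": 0.61}}]}
learner.apply_correction(memory, 7, "door", "window")
print(memory[7], learner.examples)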