Technical session talks from ICRA 2012
The conference registration code needed to access these videos can be obtained via this link: PaperPlaza. Step-by-step instructions for accessing the videos are here: step-by-step process.
Why are some of the videos missing? If you provided your consent form for your video to be published and it is still missing, please contact support@techtalks.tv
Grasping and Manipulation
- Movement-Aware Action Control - Integrating Symbolic and Control-Theoretic Action Execution
In this paper we propose a bridge between a symbolic reasoning system and a task-function-based controller. We suggest using modular position and force constraints, which are represented as action-object-object triples on the symbolic side and as task function parameters on the controller side. This description is a considerably more fine-grained interface than what has been seen in high-level robot control systems before. It can preserve the 'null space' of the task and make it available to the control level. We demonstrate how a symbolic description can be translated into a control-level description that is executable on the robot. We describe the relation to existing robot knowledge bases and indicate information sources for generating constraints on the symbolic side. On the control side, we then show how our approach outperforms a traditional controller by exploiting the task's null space, leading to a significantly extended workspace.
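As a rough illustration of the null-space idea this abstract refers to (not the controller presented in the talk), the sketch below projects a secondary joint motion into the null space of a primary task using the Jacobian pseudoinverse. The Jacobian, arm dimensions, and velocities are made-up placeholders.

```python
import numpy as np

def nullspace_velocity(J, x_dot_desired, q_dot_secondary):
    """Track the primary task x_dot_desired while pushing a secondary joint
    motion q_dot_secondary through the task's null space (illustration only)."""
    J_pinv = np.linalg.pinv(J)                  # Moore-Penrose pseudoinverse
    N = np.eye(J.shape[1]) - J_pinv @ J         # null-space projector
    return J_pinv @ x_dot_desired + N @ q_dot_secondary

# Placeholder example: 7-DOF arm, 3-DOF position task.
J = np.random.randn(3, 7)                       # hypothetical task Jacobian
x_dot = np.array([0.05, 0.0, -0.02])            # desired end-effector velocity
q_dot_0 = 0.1 * np.random.randn(7)              # e.g. a joint-limit-avoidance motion
print(nullspace_velocity(J, x_dot, q_dot_0))
```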
- Physically-Based Grasp Quality Evaluation under Uncertainty
In this paper, new grasp quality measures that consider both object dynamics and pose uncertainty are proposed. The dynamics of the object are incorporated into our grasping simulation to capture the change of its pose and contact points during grasping. Pose uncertainty is considered by running multiple simulations starting from slightly different initial poses sampled from a probability distribution model. A simple robotic grasping strategy is simulated and the quality score of the resulting grasp is evaluated from the simulation result. The effectiveness of the new quality measures in predicting the actual grasp success rate is shown through a real robot experiment.
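The sampling structure described in this abstract can be sketched as a Monte-Carlo average over perturbed initial poses. The "simulation" below is a deliberately trivial stand-in (a grasp succeeds if the pose error stays within a tolerance), purely to show the averaging over a pose-uncertainty model, not the paper's physics-based evaluation.

```python
import numpy as np

def toy_grasp_score(pose_error, tolerance=0.02):
    # Placeholder simulation: "success" if the object's in-plane offset
    # (x, y in metres) stays within the gripper's tolerance.
    return 1.0 if np.linalg.norm(pose_error[:2]) < tolerance else 0.0

def expected_grasp_quality(pose_sigma, n_samples=500, seed=0):
    rng = np.random.default_rng(seed)
    # Sample initial pose errors (x, y, yaw) from a Gaussian uncertainty model.
    samples = rng.normal(0.0, pose_sigma, size=(n_samples, 3))
    return np.mean([toy_grasp_score(p) for p in samples])

# More pose uncertainty -> lower expected quality / predicted success rate.
print(expected_grasp_quality(pose_sigma=[0.005, 0.005, 0.05]))
print(expected_grasp_quality(pose_sigma=[0.02, 0.02, 0.2]))
```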
- Bimanual Regrasping from Unimanual Machine Learning
While unimanual regrasping has been studied extensively, either by regrasping in-hand or by placing the object on a surface, bimanual regrasping has seen little attention. The recent popularity of simple end-effectors and dual-manipulator platforms makes bimanual regrasping an important behavior for service robots to possess. We solve the challenge of bimanual regrasping by casting it as an optimization problem, where the objective is to minimize execution time. The optimization problem is supplemented by image processing and a unimanual grasping algorithm based on machine learning that jointly identify two good grasping points on the object and the proper orientations for each end-effector. The optimization algorithm exploits this data by finding the proper regrasp location and orientation to minimize execution time. Influenced by human bimanual manipulation, the algorithm only requires a single stereo image as input. The efficacy of the method we propose is demonstrated on a dual manipulator torso equipped with Barrett WAM arms and Barrett Hands.
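As a toy illustration of casting the hand-off as an optimization over execution time (and not the method from the talk, which uses learned grasp points, stereo input, and end-effector orientations), the sketch below approximates each arm's execution time by its Cartesian travel distance to a candidate hand-off point and minimizes the slower of the two. All positions and the cost model are invented placeholders.

```python
import numpy as np
from scipy.optimize import minimize

LEFT_HOME = np.array([-0.4, 0.3, 0.8])     # hypothetical left-hand start pose
OBJECT_POS = np.array([0.3, 0.6, 0.7])     # where the right hand initially holds the object

def execution_time(handoff_point):
    t_carry = np.linalg.norm(handoff_point - OBJECT_POS)   # carry object to hand-off
    t_reach = np.linalg.norm(handoff_point - LEFT_HOME)    # move empty hand to hand-off
    return max(t_carry, t_reach)                            # arms move in parallel

res = minimize(execution_time, x0=np.array([0.0, 0.5, 0.8]), method="Nelder-Mead")
print("hand-off point:", res.x, "cost:", res.fun)
```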
- Planar, Bimanual, Whole-Arm Grasping
We address the problem of synthesizing planar, bimanual, whole-arm grasps by developing the abstraction of an open chain gripper, an open, planar chain of rigid links and revolute joints contacting a planar, polygonal object, and introducing the concept of a generalized contact. Since two generalized contacts suffice for planar grasps, we leverage previous work on caging and immobilization for two contact grasps to construct an algorithm which synthesizes contact configurations for stable grasping. Simulations show that our methodology can be applied to grasp a wide range of planar objects without relying on special-purpose end-effectors. Representative experiments with the PR2 humanoid robot illustrate that this approach is practical.
- Identification of Contact Formations: Resolving Ambiguous Force Torque Information
This paper presents the identification of contact formations using force/torque information. As force/torque measurements do not map uniquely to their corresponding contact formations, three steps are performed: a contact formation graph is augmented with a similarity index that reflects the similarity of contact formations with respect to their spanned wrench spaces. Prior to that, the wrench space for each contact formation is computed automatically. A particle filter is used to represent the likelihood of a contact formation given a force/torque measurement. Finally, this probability distribution is resolved taking into account the similarity index, the transitions of the contact formation graph, and the history of identified contact formations. This makes it possible to recognize the order of demonstrated contact formations from a measured set of forces and torques. The approach is verified by experiments.
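To make the filtering idea concrete, here is a minimal discrete Bayes-filter stand-in for the particle filter described above: maintain a belief over contact formations (CFs), propagate it through the transitions of a CF graph, and reweight it by how well a measured wrench fits each CF's wrench model. The CF labels, transition matrix, wrench models, and similarity measure are all invented placeholders.

```python
import numpy as np

CFS = ["free", "vertex-face", "edge-face", "face-face"]
# Placeholder transition probabilities along the contact formation graph.
TRANSITIONS = np.array([
    [0.8, 0.2, 0.0, 0.0],
    [0.1, 0.7, 0.2, 0.0],
    [0.0, 0.1, 0.7, 0.2],
    [0.0, 0.0, 0.2, 0.8],
])
# Placeholder "typical wrench" (normal force, torque) for each CF.
WRENCH_MODELS = np.array([[0.0, 0.0], [2.0, 0.1], [4.0, 0.5], [6.0, 0.0]])

def likelihood(measured_wrench, sigma=1.0):
    # Score each CF by closeness of the measured wrench to its wrench model.
    d = np.linalg.norm(WRENCH_MODELS - measured_wrench, axis=1)
    return np.exp(-0.5 * (d / sigma) ** 2)

def update(belief, measured_wrench):
    predicted = TRANSITIONS.T @ belief          # propagate through the CF graph
    posterior = predicted * likelihood(measured_wrench)
    return posterior / posterior.sum()

belief = np.full(len(CFS), 1.0 / len(CFS))
for wrench in [np.array([0.1, 0.0]), np.array([2.2, 0.2]), np.array([4.1, 0.4])]:
    belief = update(belief, wrench)
    print(dict(zip(CFS, np.round(belief, 2))))
```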
- All Sessions
- Modular Robots & Multi-Agent Systems
- Mechanism Design of Mobile Robots
- Bipedal Robot Control
- Navigation and Visual Sensing
- Localization
- Perception for Autonomous Vehicles
- Rehabilitation Robotics
- Embodied Intelligence - Compliant Actuators
- Grasping: Modeling, Analysis and Planning
- Learning and Adaptive Control of Robotic Systems I
- Marine Robotics I
- Autonomy and Vision for UAVs
- RGB-D Localization and Mapping
- Micro and Nano Robots II
- Minimally Invasive Interventions II
- Biologically Inspired Robotics II
- Underactuated Robots
- Animation & Simulation
- Planning and Navigation of Biped Walking
- Sensing for Manipulation
- Sampling-Based Motion Planning
- Space Robotics
- Stochasticity in Robotics and Biological Systems
- Path Planning and Navigation
- Semiconductor Manufacturing
- Haptics
- Learning and Adaptive Control of Robotic Systems II
- Parts Handling and Manipulation
- Results of ICRA 2011 Robot Challenge
- Teleoperation
- Applied Machine Learning
- Biomimetics
- Micro/Nanoscale Automation I
- Multi-Legged Robots
- Localization II
- Micro/Nanoscale Automation II
- Visual Learning
- Continuum Robots
- Robust and Adaptive Control of Robotic Systems
- Hand Modeling and Control
- Multi-Robot Systems I
- Medical Robotics I
- Compliance Devices and Control
- Video Session
- AI Reasoning Methods
- Redundant Robots
- High Level Robot Behaviors
- Biologically Inspired Robotics
- Novel Robot Designs
- Underactuated Grasping
- Data Based Learning
- Range Imaging
- Collision
- Localization and Mapping
- Climbing Robots
- Embodied Intelligence - iCub
- Stochastic Motion Planning
- Medical Robotics II
- Vision-Based Attention and Interaction
- Control and Planning for UAVs
- Industrial Robotics
- Human Detection and Tracking
- Trajectory Planning and Generation
- Image-Guided Interventions
- Novel Actuation Technologies
- Micro/Nanoscale Automation III
- Human-Like Biped Locomotion
- Embodied Soft Robots
- Mapping
- SLAM I
- Mobile Manipulation: Planning & Control
- Simulation and Search in Grasping
- Control of UAVs
- Grasp Planning
- Marine Robotics II
- Force & Tactile Sensors
- Motion Planning I
- Environment Mapping
- Octopus-Inspired Robotics
- Soft Tissue Interaction
- Pose Estimation
- Humanoid Motion Planning and Control
- Surveillance
- SLAM II
- Intelligent Manipulation and Grasping
- Formal Methods
- Sensor Networks
- Cable-Driven Mechanisms
- Parallel Robots
- Visual Tracking
- Physical Human-Robot Interaction
- Robotic Software, Programming Environments, and Frameworks
- Minimally Invasive Interventions I
- Force, Torque and Contacts in Grasping and Assembly
- Hybrid Legged Robots
- Non-Holonomic Motion Planning
- Calibration and Identification
- Compliant Nanopositioning
- Micro and Nano Robots I
- Multi-Robot Systems II
- Grasping: Learning and Estimation
- Grasping and Manipulation
- Motion Planning II
- Estimation and Control for UAVs
- Multi Robots: Task Allocation
- 3D Surface Models, Point Cloud Processing
- Needle Steering
- Networked Robots