Technical session talks from ICRA 2012
The conference registration code needed to access these videos is available through PaperPlaza. A step-by-step guide to accessing the videos is here: step-by-step process.
Why are some of the videos missing? If you provided a consent form for your video to be published and it is still missing, please contact firstname.lastname@example.org
Grasping and Manipulation
Movement-Aware Action Control - Integrating Symbolic and Control-Theoretic Action Execution
In this paper we propose a bridge between a symbolic reasoning system and a task-function-based controller. We suggest using modular position and force constraints, which are represented as action-object-object triples on the symbolic side and as task function parameters on the controller side. This description is a considerably more fine-grained interface than what has been seen in high-level robot control systems before: it can preserve the 'null space' of the task and make it available to the control level. We demonstrate how a symbolic description can be translated into a control-level description that is executable on the robot. We describe the relation to existing robot knowledge bases and indicate information sources for generating constraints on the symbolic side. On the control side, we then show how our approach outperforms a traditional controller by exploiting the task's null space, leading to a significantly extended workspace.
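The null-space idea behind this talk can be illustrated with a minimal sketch (not the authors' controller): a 2-link planar arm with a single task constraint on the end-effector x coordinate. Constraining only x leaves a one-dimensional null space that a secondary objective can use; the link lengths, gains, and the secondary objective z below are all assumptions for illustration.

```python
import math

# Illustrative sketch, not the paper's system: a 2-link planar arm with
# a 1-D task (end-effector x). The classic redundancy resolution
# dq = J+ x_dot + (I - J+ J) z moves in the task's null space via z.

L1 = L2 = 1.0  # link lengths (assumed)

def jacobian_x(q1, q2):
    """Row of the Jacobian mapping joint rates to end-effector x rate."""
    return (-L1 * math.sin(q1) - L2 * math.sin(q1 + q2),
            -L2 * math.sin(q1 + q2))

def control(q1, q2, x_dot_des, z=(-1.0, 0.0)):
    """dq = J+ x_dot + (I - J+ J) z: task motion plus null-space motion."""
    j1, j2 = jacobian_x(q1, q2)
    jj = j1 * j1 + j2 * j2               # J J^T (a scalar for a 1-D task)
    jp1, jp2 = j1 / jj, j2 / jj          # pseudoinverse J+ = J^T / (J J^T)
    # Null-space projector N = I - J+ J
    n11, n12 = 1 - jp1 * j1, -jp1 * j2
    n21, n22 = -jp2 * j1, 1 - jp2 * j2
    dq1 = jp1 * x_dot_des + n11 * z[0] + n12 * z[1]
    dq2 = jp2 * x_dot_des + n21 * z[0] + n22 * z[1]
    return dq1, dq2

dq1, dq2 = control(0.3, 0.8, x_dot_des=0.0)
j1, j2 = jacobian_x(0.3, 0.8)
print(j1 * dq1 + j2 * dq2)   # the joints move, but the task velocity stays zero
```

With x_dot_des = 0, the arm still reconfigures (dq is nonzero) while the constrained coordinate does not move, which is the extra workspace the abstract refers to.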
Physically-Based Grasp Quality Evaluation under Uncertainty
In this paper, new grasp quality measures that consider both object dynamics and pose uncertainty are proposed. The dynamics of the object are incorporated into our grasping simulation to capture the change of its pose and contact points during grasping. Pose uncertainty is handled by running multiple simulations starting from slightly different initial poses sampled from a probability distribution model. A simple robotic grasping strategy is simulated, and the quality score of the resulting grasp is evaluated from the simulation result. The effectiveness of the new quality measures in predicting the actual grasp success rate is shown through a real robot experiment.
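The sampling scheme described above can be sketched as a Monte Carlo loop; this is an illustrative stand-in, not the paper's simulator, and `grasp_success`, the noise parameters, and the tolerances are all made up for the example.

```python
import random

def grasp_success(pose):
    """Toy stand-in for a physics-based grasp simulation: report success
    when the (x, y, theta) pose offset stays within a tolerance."""
    x, y, theta = pose
    return abs(x) < 0.01 and abs(y) < 0.01 and abs(theta) < 0.1

def quality_under_uncertainty(nominal, sigma_xy=0.005, sigma_theta=0.05, n=1000):
    """Estimate a grasp quality score as the success rate over n simulated
    grasps whose initial poses are perturbed by Gaussian noise."""
    successes = 0
    for _ in range(n):
        pose = (nominal[0] + random.gauss(0, sigma_xy),
                nominal[1] + random.gauss(0, sigma_xy),
                nominal[2] + random.gauss(0, sigma_theta))
        if grasp_success(pose):
            successes += 1
    return successes / n

random.seed(0)
score = quality_under_uncertainty((0.0, 0.0, 0.0))
print(score)   # a fraction in [0, 1]; higher means more robust to pose error
```

The score is a success probability estimate, which is what lets it be compared against the real robot's measured success rate.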
Bimanual Regrasping from Unimanual Machine Learning
While unimanual regrasping has been studied extensively, either by regrasping in-hand or by placing the object on a surface, bimanual regrasping has seen little attention. The recent popularity of simple end-effectors and dual-manipulator platforms makes bimanual regrasping an important behavior for service robots to possess. We solve the challenge of bimanual regrasping by casting it as an optimization problem, where the objective is to minimize execution time. The optimization problem is supplemented by image processing and a unimanual grasping algorithm based on machine learning that jointly identify two good grasping points on the object and the proper orientations for each end-effector. The optimization algorithm exploits this data by finding the proper regrasp location and orientation to minimize execution time. Influenced by human bimanual manipulation, the algorithm only requires a single stereo image as input. The efficacy of the method we propose is demonstrated on a dual manipulator torso equipped with Barrett WAM arms and Barrett Hands.
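The "minimize execution time" formulation can be sketched in a few lines. This is a hypothetical simplification, not the paper's optimizer: it assumes each arm moves at a constant speed toward a candidate handoff point, so the slower arm's travel time is the cost.

```python
import math

# Hypothetical sketch of regrasping as time minimization: pick the handoff
# point that minimizes the slower arm's travel time, assuming both arms
# move concurrently at a constant (assumed) speed.

def travel_time(start, goal, speed=0.5):
    return math.dist(start, goal) / speed

def best_handoff(left_pose, right_pose, candidates):
    """Choose the candidate handoff location with the smallest makespan
    (the max of the two arms' travel times governs execution time)."""
    return min(candidates,
               key=lambda c: max(travel_time(left_pose, c),
                                 travel_time(right_pose, c)))

left, right = (0.0, 0.4), (0.0, -0.4)   # made-up end-effector positions
grid = [(x / 10, y / 10) for x in range(3, 8) for y in range(-3, 4)]
print(best_handoff(left, right, grid))  # -> (0.3, 0.0)
```

Symmetry pulls the chosen point onto the midline between the two arms, the same intuition the abstract credits to human bimanual manipulation.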
Planar, Bimanual, Whole-Arm Grasping
We address the problem of synthesizing planar, bimanual, whole-arm grasps by developing the abstraction of an open chain gripper (an open, planar chain of rigid links and revolute joints contacting a planar, polygonal object) and introducing the concept of a generalized contact. Since two generalized contacts suffice for planar grasps, we leverage previous work on caging and immobilization for two-contact grasps to construct an algorithm which synthesizes contact configurations for stable grasping. Simulations show that our methodology can be applied to grasp a wide range of planar objects without relying on special-purpose end-effectors. Representative experiments with the PR2 humanoid robot illustrate that this approach is practical.
Identification of Contact Formations: Resolving Ambiguous Force-Torque Information
This paper presents the identification of contact formations using force-torque information. Since force-torque measurements do not map uniquely to their corresponding contact formations, three steps are performed. First, the wrench space for each contact formation is computed automatically, and a contact formation graph is augmented with a similarity index that reflects the similarity of contact formations with respect to their spanned wrench spaces. Second, a particle filter is used to represent the likelihood of a contact formation given a force-torque measurement. Finally, this probability distribution is resolved by taking the similarity index, the transitions of the contact formation graph, and the history of identified contact formations into account. This makes it possible to recognize the order of demonstrated contact formations from a measured set of forces and torques. The approach is verified by experiments.
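The filtering idea can be sketched with a discrete particle filter over contact formations. Everything below is a toy illustration: the contact formation names, the transition graph, and the scalar `wrench_likelihood` are invented stand-ins for the paper's automatically computed wrench spaces.

```python
import random

# Hypothetical sketch: maintain particles over discrete contact
# formations (CFs), predict along the CF graph's transitions, and weight
# each particle by how well the measured wrench fits that CF.

CF_GRAPH = {"free":   ["free", "vertex"],          # allowed CF transitions (assumed)
            "vertex": ["vertex", "free", "edge"],
            "edge":   ["edge", "vertex"]}

def wrench_likelihood(cf, wrench):
    """Toy stand-in for the fit between a measured wrench and the wrench
    space spanned by CF `cf` (here: closeness to an assumed scalar)."""
    expected = {"free": 0.0, "vertex": 1.0, "edge": 2.0}[cf]
    return 1.0 / (1.0 + (wrench - expected) ** 2)

def step(particles, wrench):
    """One predict-update-resample cycle of the particle filter."""
    # Predict: move each particle along an edge of the CF graph.
    predicted = [random.choice(CF_GRAPH[p]) for p in particles]
    # Update and resample: weight by measurement likelihood.
    weights = [wrench_likelihood(p, wrench) for p in predicted]
    return random.choices(predicted, weights=weights, k=len(particles))

random.seed(1)
particles = ["free"] * 200
for measured in [0.1, 0.9, 1.1, 2.0]:   # simulated wrench readings
    particles = step(particles, measured)
most_likely = max(set(particles), key=particles.count)
print(most_likely)
```

Tracking the whole particle set, rather than picking the single best match per measurement, is what lets ambiguous wrenches be resolved by the transition graph and the measurement history.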