Technical session talks from ICRA 2012
The conference registration code needed to access these videos is available through PaperPlaza. A step-by-step guide to accessing the videos is here: step-by-step process.
Why are some of the videos missing? If you provided a consent form for your video to be published and it is still missing, please contact support@techtalks.tv
Robotic Software, Programming Environments, and Frameworks
A Framework for Autonomous Self-Righting of a Generic Robot on Planar Surfaces
During various acts, a robot may unintentionally tip over, rendering it unable to move normally. The ability to self-right and recover in such situations is crucial to mission completion and safe robot recovery. However, nearly all self-righting solutions to date are point solutions, each designed for a specific platform. As a first step toward a generic solution, this paper presents a framework for analyzing the self-righting capabilities of any generic robot on sloped planar surfaces. Based on the planar assumption, interactions with the ground can be modeled entirely using the robot’s convex hull. We begin by analyzing the stability of each robot orientation for all possible joint configurations. From this, we develop a configuration space map, defining stable state sets as nodes and the configurations where discontinuous state changes occur as transitions. Finally, we convert this map into a directed graph and assign costs to the transitions according to changes in potential energy between states. Based upon the ability to traverse this directed graph to the goal state, one can analyze a robot’s ability to self-right. To illustrate each step in our framework, we use a two-dimensional robot with a one degree of freedom arm, and then show a case study of iRobot’s Packbot. Ultimately, we project that this framework will be useful both for designing robots with the ability to self-right and for maximizing autonomous self-righting capabilities of fielded robots.
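To make the graph-search step concrete, here is a minimal Python sketch. The stable-state names and energy costs are invented for illustration and not taken from the paper: edges carry the potential-energy change of each transition, and Dijkstra's algorithm checks whether the upright goal state is reachable and at what total cost.

```python
# Illustrative sketch (not the paper's implementation): a stable-state graph
# whose edge costs are potential-energy changes, searched with Dijkstra to
# decide whether the robot can self-right into the upright goal state.
import heapq

# Hypothetical stable orientation sets for a 2D robot with a 1-DOF arm.
# Edge weight = energy the robot must supply to trigger that transition.
transitions = {
    "upside_down": {"on_side": 2.0},
    "on_side":     {"upright": 1.5, "upside_down": 0.2},
    "upright":     {},
}

def min_righting_cost(start, goal):
    """Dijkstra over the stable-state graph; returns None if goal unreachable."""
    frontier = [(0.0, start)]
    best = {start: 0.0}
    while frontier:
        cost, state = heapq.heappop(frontier)
        if state == goal:
            return cost
        for nxt, energy in transitions.get(state, {}).items():
            new_cost = cost + energy
            if new_cost < best.get(nxt, float("inf")):
                best[nxt] = new_cost
                heapq.heappush(frontier, (new_cost, nxt))
    return None

print(min_righting_cost("upside_down", "upright"))  # -> 3.5
```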
OpenFABMAP: An Open Source Toolbox for Appearance-Based Loop Closure Detection
Appearance-based loop closure techniques, which leverage the high information content of visual images and can be used independently of pose, are now widely used in robotic applications. The current state-of-the-art in the field is Fast Appearance-Based Mapping (FAB-MAP), having been demonstrated in several seminal robotic mapping experiments. In this paper, we describe OpenFABMAP, a fully open source implementation of the original FAB-MAP algorithm. Beyond the benefits of full user access to the source code, OpenFABMAP provides a number of configurable options including rapid codebook training and interest point feature tuning. We demonstrate the performance of OpenFABMAP on a number of published datasets and demonstrate the advantages of quick algorithm customisation. We present results from OpenFABMAP’s application in a highly varied range of robotics research scenarios.
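As a rough illustration of the appearance-based matching idea (not OpenFABMAP's actual API, nor the FAB-MAP probabilistic model), a bag-of-words place comparison might look like the following sketch, assuming local feature descriptors and a trained visual codebook are already available:

```python
# Toy bag-of-words place matching for loop-closure candidate detection.
# This is a minimal stand-in, NOT the OpenFABMAP interface.
import numpy as np

def bow_histogram(descriptors, codebook):
    """Assign each descriptor to its nearest visual word and histogram them."""
    dists = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    words = dists.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / (hist.sum() + 1e-9)

def loop_closure_candidates(query_hist, map_hists, threshold=0.8):
    """Return indices of previously visited places whose appearance matches."""
    sims = [
        float(query_hist @ h /
              (np.linalg.norm(query_hist) * np.linalg.norm(h) + 1e-9))
        for h in map_hists
    ]
    return [i for i, s in enumerate(sims) if s > threshold]

# Usage with random data (hypothetical descriptors and codebook):
# rng = np.random.default_rng(0)
# codebook = rng.normal(size=(50, 32))
# hists = [bow_histogram(rng.normal(size=(200, 32)), codebook) for _ in range(5)]
# print(loop_closure_candidates(hists[0], hists))
```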
High-Resolution Depth Maps Based on TOF-Stereo Fusion
The combination of range sensors with color cameras can be very useful for robot navigation, semantic perception, manipulation, and telepresence. Several methods of combining range- and color-data have been investigated and successfully used in varying robotic applications. Most of these systems suffer from the problems of noise in the range-data and resolution mismatch between the range sensor and the color cameras, since the resolution of current range sensors is much less than the resolution of color cameras. High-resolution depth maps can be obtained using stereo matching, but this often fails to construct accurate depth maps of weakly/repetitively textured scenes, or if the scene exhibits complex self-occlusions. Range sensors provide coarse depth information regardless of presence/absence of texture. The use of a calibrated system, composed of a time-of-flight (TOF) camera and of a stereoscopic camera pair, allows data fusion thus overcoming the weaknesses of both individual sensors. We propose a novel TOF-stereo fusion method based on an efficient seed-growing algorithm which projects the TOF data onto the stereo image pair as an initial set of correspondences. These initial "seeds" are then propagated using a similarity score based on a Bayesian model which combines an image similarity score with rough depth priors computed from the low-resolution range data. The overall result is a dense and accurate depth map at the resolution of the color cameras at hand.
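The seed-growing step could be sketched roughly as follows. The assumptions here are a rectified grayscale stereo pair, TOF measurements already reprojected to integer pixel/disparity seeds, and a plain NCC score multiplied by a Gaussian depth prior standing in for the paper's Bayesian model:

```python
# Minimal seed-growing sketch: TOF-derived seeds are propagated to neighboring
# pixels, each candidate disparity scored by image similarity times a depth prior.
import heapq
import numpy as np

def ncc(left, right, y, xl, xr, w=3):
    """Normalized cross-correlation between two (2w+1)^2 patches."""
    a = left[y - w:y + w + 1, xl - w:xl + w + 1].ravel().astype(float)
    b = right[y - w:y + w + 1, xr - w:xr + w + 1].ravel().astype(float)
    a -= a.mean(); b -= b.mean()
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def grow_disparity(left, right, seeds, sigma_d=1.0, min_score=0.5, w=3):
    """seeds: list of (row, col, integer disparity) projected from the TOF camera."""
    h, wd = left.shape
    disp = np.full((h, wd), np.nan)
    heap = []
    for y, x, d in seeds:
        disp[y, x] = d
        heapq.heappush(heap, (-1.0, y, x, d))
    while heap:
        _, y, x, d = heapq.heappop(heap)
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if not (w <= ny < h - w and w <= nx < wd - w) or not np.isnan(disp[ny, nx]):
                continue
            best = None
            for cand in (d - 1, d, d + 1):       # search near the parent disparity
                if not (w <= nx - cand < wd - w):
                    continue
                prior = np.exp(-0.5 * ((cand - d) / sigma_d) ** 2)
                score = ncc(left, right, ny, nx, nx - cand, w) * prior
                if best is None or score > best[0]:
                    best = (score, cand)
            if best and best[0] > min_score:
                disp[ny, nx] = best[1]
                heapq.heappush(heap, (-best[0], ny, nx, best[1]))
    return disp
```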
RoboFrameNet: Verb-Centric Semantics for Actions in Robot Middleware
Advancements in robotics have led to an ever-growing repertoire of software capabilities (e.g., recognition, mapping, and object manipulation). However, as robotic capabilities grow, the complexity of operating and interacting with such robots increases (such as through speech, gesture, scripting, or programming). Language-based communication can offer users the ability to work with physically and computationally complex robots without diminishing the robot's inherent capability. However, it remains an open question how to build a common ground between natural language and goal-directed robot actions, particularly in a way that scales with the growth of robot capabilities. We examine using semantic frames -- a linguistics concept which describes scenes being acted out -- as a conceptual stepping stone between natural language and robot action. We examine the scalability of this solution through the development of RoboFrameNet, a generic language-to-action pipeline for ROS (the Robot Operating System) that abstracts verbs and their dependents into semantic frames, then grounds these frames into actions. We demonstrate the framework through experiments with the PR2 and Turtlebot robot platforms and consider the future scalability of the approach.
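A toy Python sketch of the frame-based grounding idea is shown below; the frame, role, and handler names here are hypothetical and do not reflect RoboFrameNet's actual interfaces. A verb evokes a frame, the frame's elements are filled from the parsed sentence, and a registered handler turns the filled frame into an action:

```python
# Hedged sketch of verb-centric frame grounding; names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class SemanticFrame:
    name: str
    lexical_units: tuple                       # verbs that evoke this frame
    elements: dict = field(default_factory=dict)   # role -> filler

HANDLERS = {}

def handler(frame_name):
    """Register a function that grounds a filled frame into a robot action."""
    def register(fn):
        HANDLERS[frame_name] = fn
        return fn
    return register

@handler("Motion")
def navigate_to(frame):
    goal = frame.elements.get("Goal", "<unknown>")
    print(f"[action] sending navigation goal: {goal}")

FRAMES = [SemanticFrame("Motion", ("go", "drive", "move"))]

def ground(verb, arguments):
    """Pick the frame evoked by the verb, fill its elements, run its handler."""
    for proto in FRAMES:
        if verb in proto.lexical_units:
            frame = SemanticFrame(proto.name, proto.lexical_units, dict(arguments))
            HANDLERS[proto.name](frame)
            return frame
    raise ValueError(f"no frame evoked by verb '{verb}'")

ground("go", {"Goal": "the kitchen"})  # -> [action] sending navigation goal: the kitchen
```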
Building Occupancy Maps with a Mixture of Gaussian Processes
This paper proposes a new method for occupancy map building using a mixture of Gaussian processes. We consider occupancy mapping as a binary classification problem of positions being occupied or not, and apply Gaussian processes. In particular, since the computational complexity of Gaussian processes grows as O(n^3), where n is the number of data points, we divide the training data into small subsets and apply a mixture of Gaussian processes. The procedure of our map-building method consists of three steps. First, we cluster the acquired data by grouping laser hit points on the same line into the same cluster. Then, we build local occupancy maps using Gaussian processes with the clustered data. Finally, the local occupancy maps are merged into one using a mixture of Gaussian processes. Simulation results are compared with previous research, demonstrating the benefits of the approach.
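The divide-and-merge idea can be sketched as follows, assuming scikit-learn's Gaussian-process classifier and k-means clustering in place of the authors' own pipeline; the distance-based gating weights used for merging are a simple stand-in for a full mixture model:

```python
# Sketch only: per-cluster GP occupancy classifiers keep training cost near
# O(m^3) per cluster instead of O(n^3) overall; predictions are blended by
# soft gating on distance to each cluster center.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

def train_local_gps(points, labels, n_clusters=4):
    """points: (n, 2) positions; labels: 1 = occupied, 0 = free.
    Assumes each cluster contains samples of both classes."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(points)
    experts = []
    for c in range(n_clusters):
        mask = km.labels_ == c
        gp = GaussianProcessClassifier(kernel=RBF(length_scale=0.5))
        gp.fit(points[mask], labels[mask])
        experts.append((km.cluster_centers_[c], gp))
    return experts

def occupancy_probability(query, experts, gate_scale=2.0):
    """Blend local GP predictions with soft gating by distance to each cluster."""
    query = np.atleast_2d(query)
    weights, preds = [], []
    for center, gp in experts:
        d = np.linalg.norm(query - center, axis=1)
        weights.append(np.exp(-0.5 * (d / gate_scale) ** 2))
        preds.append(gp.predict_proba(query)[:, 1])
    weights = np.array(weights)
    weights /= weights.sum(axis=0) + 1e-9
    return (weights * np.array(preds)).sum(axis=0)
```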
- All Sessions
- 3D Surface Models, Point Cloud Processing
- Needle Steering
- Networked Robots
- Grasping and Manipulation
- Motion Planning II
- Estimation and Control for UAVs
- Multi Robots: Task Allocation
- Localization
- Perception for Autonomous Vehicles
- Rehabilitation Robotics
- Modular Robots & Multi-Agent Systems
- Mechanism Design of Mobile Robots
- Bipedal Robot Control
- Navigation and Visual Sensing
- Autonomy and Vision for UAVs
- RGB-D Localization and Mapping
- Micro and Nano Robots II
- Embodied Intelligence - Compliant Actuators
- Grasping: Modeling, Analysis and Planning
- Learning and Adaptive Control of Robotic Systems I
- Marine Robotics I
- Animation & Simulation
- Planning and Navigation of Biped Walking
- Sensing for Manipulation
- Sampling-Based Motion Planning
- Minimally Invasive Interventions II
- Biologically Inspired Robotics II
- Underactuated Robots
- Semiconductor Manufacturing
- Haptics
- Learning and Adaptive Control of Robotic Systems II
- Parts Handling and Manipulation
- Space Robotics
- Stochasticity in Robotics and Biological Systems
- Path Planning and Navigation
- Biomimetics
- Micro/Nanoscale Automation I
- Multi-Legged Robots
- Localization II
- Results of ICRA 2011 Robot Challenge
- Teleoperation
- Applied Machine Learning
- Hand Modeling and Control
- Multi-Robot Systems I
- Medical Robotics I
- Micro/Nanoscale Automation II
- Visual Learning
- Continuum Robots
- Robust and Adaptive Control of Robotic Systems
- High Level Robot Behaviors
- Biologically Inspired Robotics
- Novel Robot Designs
- Compliance Devices and Control
- Video Session
- AI Reasoning Methods
- Redundant Robots
- Localization and Mapping
- Climbing Robots
- Embodied Intelligence - iCub
- Underactuated Grasping
- Data Based Learning
- Range Imaging
- Collision
- Industrial Robotics
- Human Detection and Tracking
- Trajectory Planning and Generation
- Stochastic Motion Planning
- Medical Robotics II
- Vision-Based Attention and Interaction
- Control and Planning for UAVs
- Embodied Soft Robots
- Mapping
- SLAM I
- Image-Guided Interventions
- Novel Actuation Technologies
- Micro/Nanoscale Automation III
- Human-Like Biped Locomotion
- Marine Robotics II
- Force & Tactile Sensors
- Motion Path Planning I
- Mobile Manipulation: Planning & Control
- Simulation and Search in Grasping
- Control of UAVs
- Grasp Planning
- Humanoid Motion Planning and Control
- Surveillance
- Environment Mapping
- Octopus-Inspired Robotics
- Soft Tissue Interaction
- Pose Estimation
- Cable-Driven Mechanisms
- Parallel Robots
- SLAM II
- Intelligent Manipulation Grasping
- Formal Methods
- Sensor Networks
- Force, Torque and Contacts in Grasping and Assembly
- Hybrid Legged Robots
- Visual Tracking
- Physical Human-Robot Interaction
- Robotic Software, Programming Environments, and Frameworks
- Minimally Invasive Interventions I
- Multi-Robot Systems II
- Grasping: Learning and Estimation
- Non-Holonomic Motion Planning
- Calibration and Identification
- Compliant Nanopositioning
- Micro and Nano Robots I