Technical session talks from ICRA 2012
The conference registration code required to access these videos can be obtained by visiting this link: PaperPlaza. Step-by-step instructions for accessing the videos are here: step-by-step process.
Why are some of the videos missing? If you provided a consent form for your video to be published and it is still missing, please contact support@techtalks.tv
3D Surface Models, Point Cloud Processing
- A Semi-Local Method for Iterative Depth-Map Refinement
Building a photorealistic 3D model of an object or a complete scene from image-based methods is a fundamental problem in computer vision, with many applications in robotic perception, navigation, exploration and mapping. In this paper, we extend the current state of the art in depth-map computation by presenting an accurate and computationally efficient iterative hierarchical algorithm for multi-view stereo. The algorithm is designed to utilise all available contextual information to compute highly accurate and robust depth maps by iteratively examining different image resolutions in an image pyramid. The novelty in our approach is that we are able to incrementally improve the depth fidelity as the algorithm progresses through the image pyramid by utilising a local method. This is achieved in a computationally efficient manner by simultaneously enforcing the consistency of the depth map through continual comparison with neighbouring depth maps. We present a detailed description of the algorithm and describe how each step is carried out. The proposed technique is used to analyse multi-view stereo data from two well-known standard datasets, and the presented results show a significant decrease in computation time as well as an increase in the overall accuracy of the computed depth maps.
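As a rough illustration of the coarse-to-fine scheme this abstract describes, the sketch below builds a downsampled image pyramid, runs a brute-force disparity search only at the coarsest level, and then upsamples and locally refines the result at each finer level. The absolute-difference matching cost, the 2x2 averaging pyramid, the ±1-pixel refinement window and all function names are assumptions made for this example, not the authors' method (which additionally enforces consistency against neighbouring depth maps).

```python
# Minimal coarse-to-fine disparity sketch (illustrative assumptions only).
import numpy as np


def downsample(img):
    """Halve resolution by 2x2 block averaging (one pyramid step)."""
    h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    img = img[:h, :w]
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2] + img[0::2, 1::2] + img[1::2, 1::2])


def refine_level(left, right, init_disp, search):
    """Per-pixel winner-takes-all disparity within +/- `search` of an initial guess."""
    h, w = left.shape
    ys, xs = np.mgrid[0:h, 0:w]
    best_disp = init_disp.copy()
    best_cost = np.full((h, w), np.inf)
    for off in range(-search, search + 1):
        d = init_disp + off
        xr = np.clip(xs - d, 0, w - 1)              # column of the candidate match
        cost = np.abs(left - right[ys, xr])         # simple photometric (AD) cost
        better = cost < best_cost
        best_cost[better] = cost[better]
        best_disp[better] = d[better]
    return best_disp


def pyramid_depth(left, right, levels=3, coarse_search=8):
    """Coarse-to-fine disparity: full search at the top, local refinement below."""
    pyr = [(left, right)]
    for _ in range(levels - 1):
        pyr.append((downsample(pyr[-1][0]), downsample(pyr[-1][1])))
    L, R = pyr[-1]
    disp = refine_level(L, R, np.zeros(L.shape, dtype=int), search=coarse_search)
    for L, R in reversed(pyr[:-1]):
        disp = np.kron(disp, np.ones((2, 2), dtype=int)) * 2   # upsample and rescale
        pad = (L.shape[0] - disp.shape[0], L.shape[1] - disp.shape[1])
        disp = np.pad(disp, ((0, max(pad[0], 0)), (0, max(pad[1], 0))), mode='edge')
        disp = refine_level(L, R, disp[:L.shape[0], :L.shape[1]], search=1)
    return disp


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    right = rng.random((64, 96))
    left = np.roll(right, 5, axis=1)        # synthetic pair with a 5-pixel disparity
    print("median recovered disparity:", np.median(pyramid_depth(left, right)))
```

The point of the pyramid is that the expensive full-range search happens only on the smallest images; each finer level only corrects the upsampled estimate by a pixel or two, which is where the computational savings come from.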
- Convex Bricks: A New Primitive for Visual Hull Modeling and Reconstruction
Industrial automation tasks typically require a 3D model of the object for robotic manipulation. The ability to reconstruct the 3D model from a sample object is useful when CAD models are not available. For textureless objects, the visual hull obtained by silhouette-based reconstruction can avoid expensive 3D scanners for 3D modeling. We propose the convex brick (CB), a new 3D primitive for modeling and reconstructing a visual hull from silhouettes. CBs are powerful in modeling arbitrary non-convex 3D shapes. Using CBs, we describe an algorithm to generate a polyhedral visual hull from polygonal silhouettes; the visual hull is reconstructed as a combination of 3D convex bricks. Our approach uses well-studied geometric operations such as 2D convex decomposition and intersection of 3D convex cones using linear programming. The shape of a CB can adapt to the given silhouettes, thereby significantly reducing the number of primitives required for a volumetric representation. Our framework allows easy control of reconstruction parameters such as accuracy and the number of required primitives. We present an extensive analysis of our algorithm and show visual hull reconstruction on challenging real datasets consisting of highly non-convex shapes. We also show real results on pose estimation of an industrial part in a bin-picking system using the reconstructed visual hull.
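The geometric core this abstract relies on, intersecting the convex viewing cones cast by silhouettes, can be phrased as a set of linear inequalities and probed with linear programming. The sketch below is a minimal illustration under assumed toy cameras and square silhouettes: it builds the half-spaces of each viewing cone and finds an interior point of their intersection (a Chebyshev centre). It is not the paper's convex-brick decomposition, only the kind of LP-based cone intersection the abstract mentions.

```python
# Viewing-cone intersection via linear programming (illustrative assumptions only).
import numpy as np
from scipy.optimize import linprog


def cone_halfspaces(cam_center, axes, silhouette):
    """Half-spaces A x <= b of the viewing cone cast by a convex silhouette.

    `axes` is the orthonormal camera frame (u, v, w); `silhouette` is a convex
    polygon in normalised image coordinates on the plane at unit depth along w.
    """
    u, v, w = axes
    centroid = np.mean(silhouette, axis=0)
    inside_ray = centroid[0] * u + centroid[1] * v + w
    A, b = [], []
    n = len(silhouette)
    for i in range(n):
        p0, p1 = silhouette[i], silhouette[(i + 1) % n]
        d0 = p0[0] * u + p0[1] * v + w          # rays through consecutive vertices
        d1 = p1[0] * u + p1[1] * v + w
        normal = np.cross(d0, d1)               # plane of one cone face
        if np.dot(normal, inside_ray) > 0:      # orient so the cone interior is A x <= b
            normal = -normal
        A.append(normal)
        b.append(np.dot(normal, cam_center))
    return np.array(A), np.array(b)


def chebyshev_center(A, b):
    """Largest ball inside {x : A x <= b}, found with one LP over (x, r)."""
    norms = np.linalg.norm(A, axis=1, keepdims=True)
    A_ub = np.hstack([A, norms])
    cost = np.array([0.0, 0.0, 0.0, -1.0])     # maximise the radius r
    bounds = [(None, None)] * 3 + [(0, None)]
    res = linprog(cost, A_ub=A_ub, b_ub=b, bounds=bounds)
    return res.x[:3], res.x[3]


if __name__ == "__main__":
    square = np.array([[-0.2, -0.2], [0.2, -0.2], [0.2, 0.2], [-0.2, 0.2]])
    # camera 1 looks along +z from (0,0,-3); camera 2 looks along +x from (-3,0,0)
    cams = [
        (np.array([0.0, 0.0, -3.0]), (np.eye(3)[0], np.eye(3)[1], np.eye(3)[2])),
        (np.array([-3.0, 0.0, 0.0]), (np.eye(3)[1], np.eye(3)[2], np.eye(3)[0])),
    ]
    As, bs = zip(*(cone_halfspaces(c, ax, square) for c, ax in cams))
    center, radius = chebyshev_center(np.vstack(As), np.concatenate(bs))
    print("point inside the visual hull:", center.round(3), "clearance:", round(radius, 3))
```

Each silhouette edge, together with the camera centre, contributes one planar face of the cone, so the whole intersection test stays linear; the paper's contribution is how to decompose non-convex silhouettes so that this convex machinery still applies.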
- Real-Time Compression of Point Cloud Streams
We present a novel lossy compression approach for point cloud streams which exploits spatial and temporal redundancy within the point data. Our proposed compression framework can handle general point cloud streams of arbitrary and varying size, point order and point density. Furthermore, it allows for controlling coding complexity and coding precision. To compress the point clouds, we perform a spatial decomposition based on octree data structures. Additionally, we present a technique for comparing the octree data structures of consecutive point clouds. By encoding their structural differences, we can successively extend the point clouds at the decoder. In this way, we are able to detect and remove temporal redundancy from the point cloud data stream. Our experimental results show strong compression performance, with a ratio of 14 at 1 mm coordinate precision and up to 40 at a coordinate precision of 9 mm.
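A minimal sketch of the differential-coding idea follows: the spatial decomposition is reduced to a set of occupied voxel keys at a fixed precision, and each new frame is encoded only by the voxels that appeared or disappeared relative to the previous frame. The 1 mm quantisation, the zlib back-end and the absence of the paper's entropy-optimised octree serialisation are simplifications assumed for this example, not the actual codec.

```python
# Differential voxel-set coding between consecutive frames (illustrative only).
import zlib
import numpy as np


def voxel_keys(points, precision=0.001):
    """Quantise 3D points to integer voxel keys at the given precision (metres)."""
    return {tuple(k) for k in np.floor(points / precision).astype(np.int64)}


def encode_frame(prev_keys, curr_keys):
    """Serialise only the voxels that appeared or disappeared since the last frame."""
    added = sorted(curr_keys - prev_keys)
    removed = sorted(prev_keys - curr_keys)
    payload = np.array([len(added)] + [c for k in added for c in k]
                       + [c for k in removed for c in k], dtype=np.int64)
    return zlib.compress(payload.tobytes())


def decode_frame(prev_keys, blob):
    """Rebuild the current voxel set from the previous one plus the difference."""
    data = np.frombuffer(zlib.decompress(blob), dtype=np.int64)
    n_added = int(data[0])
    rest = data[1:].reshape(-1, 3)
    added = {tuple(k) for k in rest[:n_added]}
    removed = {tuple(k) for k in rest[n_added:]}
    return (prev_keys - removed) | added


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    frame0 = rng.random((5000, 3))
    frame1 = frame0.copy()
    frame1[:200] += 0.05                       # only a small part of the scene moves
    prev, curr = voxel_keys(frame0), voxel_keys(frame1)
    blob = encode_frame(prev, curr)
    assert decode_frame(prev, blob) == curr    # decoder recovers the frame exactly
    full = zlib.compress(np.array(sorted(curr), dtype=np.int64).tobytes())
    print(f"differential: {len(blob)} bytes vs. full frame: {len(full)} bytes")
```

When most of the scene is static between frames, the difference set is much smaller than the full occupancy set, which is the temporal redundancy the abstract refers to; the coordinate precision parameter trades reconstruction accuracy against payload size.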
- Point Cloud Segmentation with LIDAR Reflection Intensity Behavior
Light Detection and Ranging (LIDAR) scans are increasingly being used for 3D map construction and reverse engineering. The utility and benefit of the processed data may be enhanced if the objects and geometry of the scanned area can be segmented and labeled. In this paper, we present techniques to model the intensity of the laser reflection returned from a point during LIDAR scanning in order to determine the diffuse and specular reflection properties of the scanned surface. Using several illumination models, the reflection properties of the surface are characterized by the Lambertian diffuse reflection model and the Phong, Gaussian and Beckmann specular models. An experimental setup with eight different surfaces of varied texture and glossiness enabled measurement of algorithm performance. Examples of point cloud segmentation with this approach are also shown.
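To make the fitting step concrete, the sketch below fits a Lambertian-plus-Phong intensity model to synthetic LIDAR returns as a function of incidence angle, solving the diffuse and specular weights by linear least squares over a small grid of Phong exponents. The co-located emitter/receiver geometry, the candidate exponents and the crude "glossy vs. diffuse" label are assumptions made for this example; the paper also considers Gaussian and Beckmann specular models and an actual multi-surface experiment.

```python
# Fitting a Lambertian + Phong model to LIDAR return intensity (illustrative only).
import numpy as np


def fit_lambert_phong(theta, intensity, exponents=(1, 2, 5, 10, 20, 50)):
    """Fit I(theta) = k_d*cos(theta) + k_s*cos(2*theta)**n for a co-located LIDAR.

    Because the emitter and receiver coincide, the mirror direction makes an
    angle of 2*theta with the viewing direction.  For each candidate exponent n,
    k_d and k_s are solved by linear least squares; the best-fitting n is kept.
    """
    diffuse = np.cos(theta)
    spec_base = np.clip(np.cos(2.0 * theta), 0.0, None)   # lobe only for theta < 45 deg
    best = None
    for n in exponents:
        A = np.column_stack([diffuse, spec_base ** n])
        coeffs, *_ = np.linalg.lstsq(A, intensity, rcond=None)
        resid = np.linalg.norm(A @ coeffs - intensity)
        if best is None or resid < best[0]:
            best = (resid, coeffs[0], coeffs[1], n)
    _, k_d, k_s, n = best
    return k_d, k_s, n


if __name__ == "__main__":
    rng = np.random.default_rng(2)
    theta = np.deg2rad(rng.uniform(0, 60, 500))            # incidence angles of scan points
    # synthetic glossy surface: weak diffuse term, strong narrow specular lobe
    truth = 0.3 * np.cos(theta) + 0.7 * np.clip(np.cos(2 * theta), 0, None) ** 20
    intensity = truth + 0.01 * rng.standard_normal(theta.size)
    k_d, k_s, n = fit_lambert_phong(theta, intensity)
    label = "glossy" if k_s > k_d else "diffuse"           # crude segmentation cue
    print(f"k_d={k_d:.2f}  k_s={k_s:.2f}  n={n}  ->  {label}")
```

The recovered (k_d, k_s, n) triple is the kind of per-surface reflectance signature that can then be used to group scan points belonging to the same material, which is the segmentation cue the abstract describes.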
- All Sessions
- Modular Robots & Multi-Agent Systems
- Mechanism Design of Mobile Robots
- Bipedal Robot Control
- Navigation and Visual Sensing
- Localization
- Perception for Autonomous Vehicles
- Rehabilitation Robotics
- Embodied Intelligence - Compliant Actuators
- Grasping: Modeling, Analysis and Planning
- Learning and Adaptive Control of Robotic Systems I
- Marine Robotics I
- Autonomy and Vision for UAVs
- RGB-D Localization and Mapping
- Micro and Nano Robots II
- Minimally Invasive Interventions II
- Biologically Inspired Robotics II
- Underactuated Robots
- Animation & Simulation
- Planning and Navigation of Biped Walking
- Sensing for Manipulation
- Sampling-Based Motion Planning
- Space Robotics
- Stochasticity in Robotics and Biological Systems
- Path Planning and Navigation
- Semiconductor Manufacturing
- Haptics
- Learning and Adaptive Control of Robotic Systems II
- Parts Handling and Manipulation
- Results of ICRA 2011 Robot Challenge
- Teleoperation
- Applied Machine Learning
- Biomimetics
- Micro/Nanoscale Automation I
- Multi-Legged Robots
- Localization II
- Micro/Nanoscale Automation II
- Visual Learning
- Continuum Robots
- Robust and Adaptive Control of Robotic Systems
- Hand Modeling and Control
- Multi-Robot Systems I
- Medical Robotics I
- Compliance Devices and Control
- Video Session
- AI Reasoning Methods
- Redundant Robots
- High Level Robot Behaviors
- Biologically Inspired Robotics
- Novel Robot Designs
- Underactuated Grasping
- Data Based Learning
- Range Imaging
- Collision
- Localization and Mapping
- Climbing Robots
- Embodied Intelligence - iCub
- Stochastic Motion Planning
- Medical Robotics II
- Vision-Based Attention and Interaction
- Control and Planning for UAVs
- Industrial Robotics
- Human Detection and Tracking
- Trajectory Planning and Generation
- Image-Guided Interventions
- Novel Actuation Technologies
- Micro/Nanoscale Automation III
- Human-Like Biped Locomotion
- Embodied Soft Robots
- Mapping
- SLAM I
- Mobile Manipulation: Planning & Control
- Simulation and Search in Grasping
- Control of UAVs
- Grasp Planning
- Marine Robotics II
- Force & Tactile Sensors
- Motion Path Planning I
- Environment Mapping
- Octopus-Inspired Robotics
- Soft Tissue Interaction
- Pose Estimation
- Humanoid Motion Planning and Control
- Surveillance
- SLAM II
- Intelligent Manipulation Grasping
- Formal Methods
- Sensor Networks
- Cable-Driven Mechanisms
- Parallel Robots
- Visual Tracking
- Physical Human-Robot Interaction
- Robotic Software, Programming Environments, and Frameworks
- Minimally Invasive Interventions I
- Force, Torque and Contacts in Grasping and Assembly
- Hybrid Legged Robots
- Non-Holonomic Motion Planning
- Calibration and Identification
- Compliant Nanopositioning
- Micro and Nano Robots I
- Multi-Robot Systems II
- Grasping: Learning and Estimation
- Grasping and Manipulation
- Motion Planning II
- Estimation and Control for UAVs
- Multi Robots: Task Allocation
- 3D Surface Models, Point Cloud Processing
- Needle Steering
- Networked Robots