TechTalks from event: Technical session talks from ICRA 2012

The conference registration code needed to access these videos is available via this link: PaperPlaza. A step-by-step guide to accessing the videos is here: step-by-step process.
Why are some of the videos missing? If you provided a consent form for your video to be published and it is still missing, please contact

Navigation and Visual Sensing

  • Navigation among Visually Connected Sets of Partially Distinguishable Landmarks Authors: Erickson, Lawrence H; LaValle, Steven M
    A robot navigates in a polygonal region populated by a set of partially distinguishable landmarks. The robot's motion primitives consist of actions of the form "drive toward a landmark of class x". To navigate effectively, the robot must always be able to see a landmark. Also, if the robot sees two landmarks of the same class, its motion primitives become ambiguous. Finally, if the robot wishes to navigate from landmark s_0 to landmark s_{goal} with a simple graph search algorithm, then there must be a sequence of landmarks [s_0,s_1,s_2,...,s_k=s_{goal}] in which each landmark s_i is visible from s_{i-1}. Given these three conditions, how many landmark classes are required for navigation in a given polygon P? We call this minimum number of landmark classes the connected landmark class number, denoted chi_{CL}(P). We study this problem for monotone polygons, an important family of polygons that are frequently generated as intermediate steps in other decomposition algorithms. We demonstrate that for all odd k, there exists a monotone polygon M_k with (3/4)(k^2+2k+1) vertices such that chi_{CL}(M_k) ≥ k. We also demonstrate that for any n-vertex monotone polygon P, chi_{CL}(P) ≤ n/3+12.
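    The "simple graph search" in the abstract can be sketched as a breadth-first search over a landmark visibility graph; the `visibility` adjacency map below is a hypothetical stand-in for the paper's visibility relation between landmarks.

```python
from collections import deque

def landmark_path(visibility, start, goal):
    """Breadth-first search over a landmark visibility graph.

    `visibility` maps each landmark to the set of landmarks visible
    from it. Returns a sequence [s_0, ..., s_k = goal] in which each
    landmark is visible from its predecessor, or None if none exists.
    """
    parent = {start: None}
    queue = deque([start])
    while queue:
        s = queue.popleft()
        if s == goal:
            # Reconstruct the landmark sequence by walking parents back.
            path = []
            while s is not None:
                path.append(s)
                s = parent[s]
            return path[::-1]
        for t in visibility[s]:
            if t not in parent:
                parent[t] = s
                queue.append(t)
    return None
```

    For example, with `visibility = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}}`, `landmark_path(visibility, 'a', 'c')` yields `['a', 'b', 'c']`.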
  • Natural Landmark Extraction in Cluttered Forested Environments Authors: Song, Meng; Sun, Fengchi; Iagnemma, Karl
    In this paper, a new systematic method for extracting tree trunk landmarks from 3D point clouds of cluttered forested environments is proposed. This purely geometric method is built on scene understanding and automatic analysis of trees. The pipeline of our method includes three steps. First, the raw point clouds are segmented by exploiting the circular shape of trees, and segments are grouped into tree sections based on the principle of spatial proximity. Second, circles and axes are extracted from tree sections, even those subject to loss of shape information. Third, by clustering and integrating the tree sections resulting from various spatial inconsistencies, straight tree trunk landmarks are finally formed for subsequent localization. Experimental results from real forested environments are presented.
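    The circle-extraction step can be illustrated with a standard algebraic (Kåsa) least-squares circle fit on one horizontal slice of a trunk segment; this is a generic sketch, not the authors' exact estimator.

```python
import numpy as np

def fit_circle(points):
    """Algebraic (Kasa) least-squares circle fit.

    `points` is an (N, 2) array of x, y samples from one horizontal
    slice of a candidate trunk section. Uses the identity
    x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2)
    to reduce the fit to a linear least-squares problem.
    Returns (cx, cy, r).
    """
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(pts))])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx**2 + cy**2)
    return cx, cy, r
```

    Circles fitted at successive heights can then be checked for consistent centers and radii before a section is accepted as part of a trunk.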
  • Rapid Vanishing Point Estimation for General Road Detection Authors: Miksik, Ondrej
    This paper deals with fast vanishing point estimation for autonomous robot navigation. Previous approaches showed suitable results, and vanishing point estimation has been used in many robotics tasks, especially in the detection of ill-structured roads. The main drawback of such approaches is their computational complexity. Although many papers mention the possibility of hardware acceleration, we believe that a vanishing point estimation algorithm is most beneficial for primarily tele-operated robots (e.g., in the case of signal loss), which cannot devote specialized hardware to this single feature. In this paper, we investigate an efficient implementation that expands Gabor wavelets into a linear combination of Haar-like box functions to perform fast filtering via the integral image trick, and we discuss the use of superpixels in the voting scheme. Together, these provide a significant speed-up (more than 40 times) while losing only 3-5% in precision.
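    The integral image trick the authors rely on can be sketched as follows: once a Gabor kernel is approximated by a few box functions, each box response costs four table lookups regardless of the box size.

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero top row and left column,
    so box sums below need no boundary checks."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) via four lookups; this is the
    primitive that makes Haar-like approximations of Gabor filters fast."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]
```

    A filter response at any pixel is then a short weighted sum of `box_sum` calls, one per box in the Haar-like approximation.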
  • A New Tentacles-Based Technique for Avoiding Obstacles During Visual Navigation Authors: Cherubini, Andrea; Spindler, Fabien; Chaumette, Francois
    In this paper, we design and validate a new tentacle-based approach for avoiding obstacles during appearance-based navigation with a wheeled mobile robot. In the past, we have developed a framework for safe visual navigation. The robot follows a path represented as a set of key images, and during obstacle circumnavigation, the on-board camera is actuated to maintain scene visibility. In those works, the model used for obstacle avoidance was obtained using a potential vector field. Here, a more sophisticated and efficient method that exploits the robot's kinematic model and predicts collisions at look-ahead distances is designed and integrated into that framework. Outdoor experiments comparing the two models show that the new approach presents many advantages: higher speeds and precision can be attained, very cluttered scenarios involving large obstacles can be successfully dealt with, and the control inputs are smoother.
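    A minimal sketch of collision prediction along one tentacle, assuming a unicycle kinematic model and a point-cloud obstacle representation (both simplifications of the paper's setup):

```python
import math

def tentacle_is_clear(kappa, obstacles, lookahead=2.0,
                      robot_radius=0.3, step=0.05):
    """Check one 'tentacle': a constant-curvature arc of length
    `lookahead`, predicted from the unicycle kinematic model
    x' = cos(th), y' = sin(th), th' = kappa (Euler-integrated).
    Returns False if any obstacle point comes within `robot_radius`
    of the predicted path."""
    x = y = th = 0.0
    s = 0.0
    while s <= lookahead:
        for ox, oy in obstacles:
            if math.hypot(ox - x, oy - y) < robot_radius:
                return False
        x += step * math.cos(th)
        y += step * math.sin(th)
        th += step * kappa
        s += step
    return True
```

    A controller would evaluate a fan of candidate curvatures and follow the clear tentacle closest to the visual reference path.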
  • Maintaining visibility constraints during tele-echography with ultrasound visual servoing Authors: LI, Tao; Kermorgant, Olivier; Krupa, Alexandre
    This paper presents a multi-task control method to maintain the visibility of an anatomic element of interest while the doctor tele-operates a 2D ultrasound probe held by a medical robot. The priority task automatically maintains several visual constraints that guarantee an intersection between the ultrasound image plane and the anatomic object of interest, while the secondary task allows the medical expert to manually apply probe motions through tele-operation. Unlike classical visual servoing techniques, which continually regulate the current visual features to desired values, our control approach gradually activates the regulation of one or several ultrasound visual features that come close to fixed limits, so as to keep them in a safe domain. The main advantage of this approach is that it gives the clinician control of all the degrees of freedom of the probe to examine the patient, while automatically preserving the visibility of the element of interest when required. Both simulations and experiments performed on an abdominal phantom demonstrate the efficiency of the visibility assistance task.
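    The gradual activation of a constraint as a feature nears its limit can be sketched with a smoothstep weighting function; the safe/limit bounds here are illustrative placeholders, not the paper's actual visibility features.

```python
def activation_weight(v, safe, limit):
    """Smooth activation for one visual feature: 0 while the feature
    value v is inside the safe zone, rising continuously to 1 as v
    reaches the hard limit, so the visibility task takes over
    gradually instead of switching abruptly.
    `safe` and `limit` bound one side of the feature range (safe < limit).
    """
    if v <= safe:
        return 0.0
    if v >= limit:
        return 1.0
    u = (v - safe) / (limit - safe)
    return u * u * (3 - 2 * u)   # smoothstep: C1-continuous blend
```

    The controller can then blend the tele-operation command with the visibility-regulation command using this weight per feature, yielding continuous control inputs as constraints activate and deactivate.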
  • Simple and Robust Visual Servo Control of Robot Arms Using an On-Line Trajectory Generator Authors: Kroeger, Torsten; Padial, Jose
    Common visual servoing methods use image features to define an error signal in the feedback loops of robot motion controllers. This paper suggests a new visual servo control scheme that uses an on-line trajectory generator as an intermediate layer between image processing algorithms and robot motion controllers. The motion generation algorithm is capable of computing an entire trajectory from an arbitrary initial state of motion within one servo control cycle (typically one millisecond or less). This algorithm is fed with desired pose and velocity signals that are generated by an image processing algorithm. The advantages of this new architecture are: (a) jerk-limited and continuous motions are guaranteed independently of image processing signals, (b) kinematic motion constraints as well as physical and/or artificial workspace limits can be directly considered, and (c) the system can instantaneously and safely react to sensor failures (e.g., if cameras are covered or image processing fails). Real-world experimental results using a seven-joint robot arm are presented to underline its relevance to robust sensor-guided robot motion control.
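    The intermediate-layer idea can be sketched for a single axis with a velocity- and acceleration-limited setpoint update per control cycle; this is a simplified stand-in for the paper's jerk-limited algorithm, kept to the same per-cycle structure.

```python
def next_setpoint(pos, vel, target, v_max, a_max, dt):
    """One control cycle of a velocity- and acceleration-limited
    on-line trajectory generator for a single axis (a simplified
    stand-in for a jerk-limited generator): pick the velocity that
    heads toward `target` while respecting v_max, a_max, and the
    braking distance v^2 / (2 a_max)."""
    err = target - pos
    # fastest speed from which the axis can still stop within |err|
    v_brake = (2 * a_max * abs(err)) ** 0.5
    v_des = max(-v_max, min(v_max, v_brake if err > 0 else -v_brake))
    # limit the change of velocity to the acceleration bound
    dv = max(-a_max * dt, min(a_max * dt, v_des - vel))
    vel = vel + dv
    return pos + vel * dt, vel
```

    Calling this once per servo cycle with the latest vision-derived `target` keeps the commanded motion smooth and bounded even if the vision signal jumps or drops out.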