Technical session talks from ICRA 2012
The conference registration code needed to access these videos is available here: PaperPlaza. A step-by-step guide to accessing the videos is here: step-by-step process.
Why are some of the videos missing? If you submitted a consent form for your video to be published and it is still missing, please contact support@techtalks.tv
Autonomy and Vision for UAVs
-
Cooperative Vision-Aided Inertial Navigation Using Overlapping Views

In this paper, we study the problem of Cooperative Localization (CL) for two robots, each equipped with an Inertial Measurement Unit (IMU) and a camera. We present an algorithm that enables the robots to exploit common features, observed over a sliding-window time horizon, in order to improve the localization accuracy of both robots. In contrast to existing CL methods, which require distance and/or bearing robot-to-robot observations, our algorithm infers the relative position and orientation (pose) of the robots using only the visual observations of common features in the scene. Moreover, we analyze the system observability properties to determine how many degrees of freedom (d.o.f.) of the relative transformation can be computed under different measurement scenarios. Lastly, we present simulation results to evaluate the performance of the proposed method.
-
UAV Vision: Feature Based Accurate Ground Target Localization through Propagated Initializations and Interframe Homographies

Our work presents solutions to two related vexing problems in feature-based localization of ground targets in Unmanned Aerial Vehicle (UAV) images: (i) a good initial guess at the pose estimate that speeds up convergence to the final pose estimate for each image frame in a video sequence; and (ii) time-bounded estimation of the position of the ground target. We address both problems within the framework of the ICP (Iterative Closest Point) algorithm, which now has a rich tradition of use in computer vision and robotics applications. We solve the first problem by frame-to-frame propagation of the computed pose estimates to provide the initializations needed by ICP. The second problem is solved by terminating the iterative estimation process when the time available for each image frame expires. We show that when frame-to-frame homography is factored into the iterative calculations, the accuracy of the position computed at the time of bailing out of the iterations is nearly always sufficient for the goals of UAV vision.
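The abstract's two ideas, warm-starting ICP from the pose propagated from the previous frame and bailing out when the per-frame time budget expires, can be sketched as a toy 2-D point-to-point ICP. All names and parameters here are illustrative assumptions, and the interframe-homography factoring the paper describes is omitted:

```python
import time
import numpy as np

def icp_2d(src, dst, init_R, init_t, time_budget=0.05, max_iter=50):
    """Time-bounded 2-D point-to-point ICP (illustrative sketch).

    init_R / init_t: a warm-start pose, e.g. propagated from the
    previous video frame; iteration stops either at max_iter or when
    the per-frame time budget expires, returning the best pose so far.
    """
    R, t = init_R.copy(), init_t.copy()
    start = time.perf_counter()
    for _ in range(max_iter):
        if time.perf_counter() - start > time_budget:
            break  # bail out: per-frame time budget exhausted
        moved = src @ R.T + t
        # nearest-neighbour correspondences (brute force for clarity)
        d2 = ((moved[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        nn = dst[d2.argmin(axis=1)]
        # closed-form rigid alignment of matched sets (Procrustes/SVD)
        mu_s, mu_d = moved.mean(0), nn.mean(0)
        H = (moved - mu_s).T @ (nn - mu_d)
        U, _, Vt = np.linalg.svd(H)
        dR = Vt.T @ U.T
        if np.linalg.det(dR) < 0:  # guard against a reflection
            Vt[-1] *= -1
            dR = Vt.T @ U.T
        dt = mu_d - dR @ mu_s
        # compose the increment with the accumulated pose
        R, t = dR @ R, dR @ t + dt
    return R, t
```

Warm-starting from the previous frame keeps the first nearest-neighbour association close to correct, which is exactly why propagated initializations speed up convergence.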
-
First Results in Autonomous Landing and Obstacle Avoidance by a Full-Scale Helicopter

Currently deployed unmanned rotorcraft rely on carefully preplanned missions and operate from prepared sites, and thus avoid the need to perceive and react to the environment. Here we consider the problems of finding suitable but previously unmapped landing sites given general coordinates of the goal, and of planning collision-free trajectories in real time to land at the "optimal" site. This requires accurate mapping, fast landing zone evaluation algorithms, and motion planning. We report here on the sensing, perception, and motion planning integrated onto a full-scale helicopter that flies completely autonomously. We show results from 8 landing-site-selection experiments and 5 obstacle-avoidance runs. These experiments demonstrate the first autonomous full-scale helicopter that successfully selects its own landing sites and avoids obstacles.
-
Real-Time Onboard Visual-Inertial State Estimation and Self-Calibration of MAVs in Unknown Environments

The combination of visual and inertial sensors has proved very popular in MAV navigation due to the flexibility in weight, power consumption, and low cost it offers. At the same time, coping with the large latency between inertial and visual measurements and processing images in real time pose great research challenges. Most modern MAV navigation systems avoid tackling this explicitly by employing a ground station for off-board processing. We propose a navigation algorithm for MAVs equipped with a single camera and an IMU that runs onboard and in real time. The main focus is the proposed speed-estimation module, which converts the camera into a metric body-speed sensor using IMU data within an EKF framework. We show how this module can be used for full self-calibration of the sensor suite in real time. The module is then used both during initialization and as a fall-back solution at tracking failures of a keyframe-based VSLAM module. The latter is based on an existing high-performance algorithm, extended so that it achieves scalable 6DoF pose estimation at constant complexity. Fast onboard speed control is ensured by sole reliance on the optical flow of at least two features in two consecutive camera frames and the corresponding IMU readings. Our nonlinear observability analysis and our real experiments demonstrate that this approach can be used to control a MAV in speed, and we also show results of operation at 40 Hz on a 1.6 GHz onboard Atom computer.
-
Autonomous Landing of a VTOL UAV on a Moving Platform Using Image-Based Visual Servoing

In this paper we describe a vision-based algorithm to control a vertical-takeoff-and-landing unmanned aerial vehicle while tracking and landing on a moving platform. Specifically, we use image-based visual servoing (IBVS) to track the platform in two-dimensional image space and generate a velocity reference command used as the input to an adaptive sliding mode controller. Compared with other vision-based control algorithms that reconstruct a full three-dimensional representation of the target, which requires precise depth estimation, IBVS is computationally cheaper since it is less sensitive to errors in the depth estimate, allowing a faster method to obtain it. To enhance the velocity tracking of the sliding mode controller, an adaptive rule is described to account for the ground effect experienced during the maneuver. Finally, the IBVS algorithm integrated with the adaptive sliding mode controller for tracking and landing is validated in an experimental setup using a quadrotor.
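Why IBVS tolerates coarse depth estimates can be seen in a minimal sketch: under a pinhole model, depth enters the image-to-velocity mapping only as a gain scale. The function below maps the pixel error of the platform centroid to a lateral velocity reference; the focal length, gain, and the control law itself are illustrative assumptions, not the paper's controller:

```python
import numpy as np

def ibvs_velocity_command(px_error, depth_guess, f=300.0, gain=0.8):
    """Map 2-D image-plane error (pixels) of the tracked platform
    centroid to a lateral body-velocity reference.

    A rough depth guess suffices: it only rescales the effective gain,
    so a coarse estimate degrades speed of convergence, not stability
    of the direction. f (focal length in pixels) and gain are
    illustrative values."""
    # pinhole model: metric offset ≈ pixel error * depth / focal length
    metric_err = np.asarray(px_error, dtype=float) * depth_guess / f
    # proportional velocity reference driving the error toward zero
    return -gain * metric_err
```

Such a velocity reference would then be handed to the inner-loop (here, adaptive sliding mode) velocity controller.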
- All Sessions
- Modular Robots & Multi-Agent Systems
- Mechanism Design of Mobile Robots
- Bipedal Robot Control
- Navigation and Visual Sensing
- Localization I
- Perception for Autonomous Vehicles
- Rehabilitation Robotics
- Embodied Intelligence - Compliant Actuators
- Grasping: Modeling, Analysis and Planning
- Learning and Adaptive Control of Robotic Systems I
- Marine Robotics I
- Autonomy and Vision for UAVs
- RGB-D Localization and Mapping
- Micro and Nano Robots II
- Minimally Invasive Interventions II
- Biologically Inspired Robotics II
- Underactuated Robots
- Animation & Simulation
- Planning and Navigation of Biped Walking
- Sensing for Manipulation
- Sampling-Based Motion Planning
- Space Robotics
- Stochasticity in Robotics and Biological Systems
- Path Planning and Navigation
- Semiconductor Manufacturing
- Haptics
- Learning and Adaptive Control of Robotic Systems II
- Parts Handling and Manipulation
- Results of ICRA 2011 Robot Challenge
- Teleoperation
- Applied Machine Learning
- Biomimetics
- Micro/Nanoscale Automation I
- Multi-Legged Robots
- Localization II
- Micro/Nanoscale Automation II
- Visual Learning
- Continuum Robots
- Robust and Adaptive Control of Robotic Systems
- Hand Modeling and Control
- Multi-Robot Systems I
- Medical Robotics I
- Compliance Devices and Control
- Video Session
- AI Reasoning Methods
- Redundant Robots
- High Level Robot Behaviors
- Biologically Inspired Robotics
- Novel Robot Designs
- Underactuated Grasping
- Data Based Learning
- Range Imaging
- Collision
- Localization and Mapping
- Climbing Robots
- Embodied Intelligence - iCub
- Stochastic Motion Planning
- Medical Robotics II
- Vision-Based Attention and Interaction
- Control and Planning for UAVs
- Industrial Robotics
- Human Detection and Tracking
- Trajectory Planning and Generation
- Image-Guided Interventions
- Novel Actuation Technologies
- Micro/Nanoscale Automation III
- Human-Like Biped Locomotion
- Embodied Soft Robots
- Mapping
- SLAM I
- Mobile Manipulation: Planning & Control
- Simulation and Search in Grasping
- Control of UAVs
- Grasp Planning
- Marine Robotics II
- Force & Tactile Sensors
- Motion Path Planning I
- Environment Mapping
- Octopus-Inspired Robotics
- Soft Tissue Interaction
- Pose Estimation
- Humanoid Motion Planning and Control
- Surveillance
- SLAM II
- Intelligent Manipulation Grasping
- Formal Methods
- Sensor Networks
- Cable-Driven Mechanisms
- Parallel Robots
- Visual Tracking
- Physical Human-Robot Interaction
- Robotic Software, Programming Environments, and Frameworks
- Minimally Invasive Interventions I
- Force, Torque and Contacts in Grasping and Assembly
- Hybrid Legged Robots
- Non-Holonomic Motion Planning
- Calibration and Identification
- Compliant Nanopositioning
- Micro and Nano Robots I
- Multi-Robot Systems II
- Grasping: Learning and Estimation
- Grasping and Manipulation
- Motion Planning II
- Estimation and Control for UAVs
- Multi Robots: Task Allocation
- 3D Surface Models, Point Cloud Processing
- Needle Steering
- Networked Robots