Technical session talks from ICRA 2012
The conference registration code needed to access these videos is available via this link: PaperPlaza. Step-by-step instructions for accessing the videos are here: step-by-step process.
Why are some of the videos missing? If you provided a consent form for your video to be published and it is still missing, please contact firstname.lastname@example.org
Stochastic Motion Planning
An Incremental Sampling-Based Algorithm for Stochastic Optimal Control
In this paper, we consider a class of continuous-time, continuous-space stochastic optimal control problems. Building upon recent advances in Markov chain approximation methods and sampling-based algorithms for deterministic path planning, we propose a novel algorithm called the incremental Markov Decision Process (iMDP) to incrementally compute control policies that approximate an optimal policy arbitrarily well in terms of the expected cost. The main idea behind the algorithm is to generate a sequence of finite discretizations of the original problem through random sampling of the state space. At each iteration, the discretized problem is a Markov Decision Process that serves as an incrementally refined model of the original problem. We show that with probability one, (i) the sequence of the optimal value functions for each of the discretized problems converges uniformly to the optimal value function of the original stochastic optimal control problem, and (ii) the original optimal value function can be computed efficiently in an incremental manner using asynchronous value iterations. Thus, the proposed algorithm provides an anytime approach to the computation of optimal control policies of the continuous problem. The effectiveness of the proposed approach is demonstrated on motion planning and control problems in cluttered environments in the presence of process noise.
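The core loop of the abstract — sample new states, treat the sampled set as a refined MDP, and run asynchronous Bellman backups — can be illustrated with a toy sketch. This is not the paper's iMDP: it is a minimal 1-D example assuming a point on [0, 1] that must reach a goal region near 1.0 under additive noise, with a unit running cost and a crude one-sample approximation of the transition probabilities.

```python
import random

random.seed(0)

GOAL, STEP, NOISE, GAMMA = 1.0, 0.1, 0.02, 0.95

states = [0.0]          # sampled states; the discretization is refined over time
V = {0.0: 0.0}          # cost-to-go estimates on the sampled states

def nearest(x):
    # map a continuous successor point onto the nearest sampled state
    return min(states, key=lambda s: abs(s - x))

def backup(s):
    # one asynchronous Bellman backup, using a one-sample Markov-chain
    # approximation of the noisy dynamics x' = x + u + w
    if abs(s - GOAL) < STEP:
        V[s] = 0.0               # inside the goal region: zero cost-to-go
        return
    vals = []
    for u in (-STEP, STEP):
        nxt = min(max(s + u + random.uniform(-NOISE, NOISE), 0.0), 1.0)
        vals.append(1.0 + GAMMA * V[nearest(nxt)])   # unit running cost
    V[s] = min(vals)

for _ in range(200):
    # (1) refine the discretized MDP with one new random state sample
    x = random.random()
    states.append(x)
    V[x] = 0.0
    # (2) a handful of asynchronous value-iteration backups
    for _ in range(30):
        backup(random.choice(states))

# settle the estimates with a few full sweeps
for _ in range(10):
    for s in states:
        backup(s)
```

Because each iteration only adds one sample and performs a bounded amount of work, the estimates are usable at any time — the anytime property the abstract emphasizes.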
Stochastic Distributed Multi-Agent Planning and Applications to Traffic
This paper proposes a method for multi-agent path planning on a road network in the presence of congestion. We suggest a distributed method to find paths for multiple agents by introducing a probabilistic path choice that achieves global goals such as the social optimum. This approach, which shows that the global goals can be achieved by local processing using only local information, can be parallelized and sped up using massively parallel processing. The probabilistic assignment reliably copes with the case of random choices by unidentified agents or random route changes by agents who ignore our path guidance. We provide analytical results on convergence and running time. We demonstrate and evaluate our algorithm with an implementation using asynchronous computation on multi-core computers.
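The idea of a probabilistic path choice steering the population toward the social optimum can be sketched on a Pigou-style two-route network. This toy example is an assumption, not the paper's network or algorithm: route A has latency equal to the fraction x of agents on it, route B has constant latency 1, and each agent locally descends the marginal-cost difference (2x vs. 1), whose equilibrium is the social optimum x = 0.5.

```python
import random

random.seed(1)

N, LR = 100, 0.05
p = [0.9] * N            # each agent's probability of choosing route A

for _ in range(300):
    # every agent samples its route independently from its own probability
    choices = [random.random() < p[i] for i in range(N)]
    x = sum(choices) / N             # observed load on route A (local information)
    for i in range(N):
        # descend the marginal social-cost difference: 2x for route A vs. 1
        # for route B; the fixed point is the social optimum x = 0.5
        p[i] -= LR * (2 * x - 1)
        p[i] = min(max(p[i], 0.0), 1.0)

avg = sum(p) / N
```

Because routes are sampled probabilistically, a few agents ignoring the guidance or choosing at random only perturb the observed load x, which the update absorbs — matching the robustness claim in the abstract.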
Navigating between People: A Stochastic Optimization Approach
The objective of this paper is to present a strategy to safely move a robot in an unknown and complex environment where people are moving and interacting. The robot, by using only its sensor data, must navigate respecting humans' comfort. To obtain good results in such a dynamic environment, a prediction of humans' movement is also crucial. To solve all the aforementioned problems we introduce a suitable cost function. It is optimized using a new stochastic, adaptive optimization algorithm (CAO). This method is particularly useful when the analytical expression of the cost function is unknown but numerical values are available for any state configuration. Additionally, the proposed method can easily incorporate any dynamical and environmental constraints. To validate the performance of the proposed solution, several simulation results are provided.
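The key setting — optimizing a cost whose analytical form is unknown but which can be evaluated numerically at any configuration — can be illustrated with a much simpler stochastic perturbation search. This is not the CAO algorithm (which uses function approximators); it is a hedged stand-in: a hypothetical smooth black-box cost is minimized by proposing random perturbations and keeping only improvements.

```python
import random

random.seed(4)

def cost(x, y):
    # stand-in black-box cost: we only query numerical values, never its form
    # (a real system would evaluate comfort/safety from sensor data here)
    return (x - 1.0) ** 2 + (y + 2.0) ** 2

state = (0.0, 0.0)
best = cost(*state)
step = 0.5                           # perturbation radius

for _ in range(2000):
    # propose a random perturbation of the current configuration
    cand = (state[0] + random.uniform(-step, step),
            state[1] + random.uniform(-step, step))
    c = cost(*cand)
    if c < best:                     # keep only improving perturbations
        state, best = cand, c
```

Dynamical or environmental constraints fit naturally in this scheme: infeasible candidates are simply rejected before evaluation.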
Probabilistic Path Planning for Multiple Robots with Subdimensional Expansion
Probabilistic planners such as Rapidly-Exploring Random Trees (RRTs) and Probabilistic Roadmaps (PRMs) are powerful path planning algorithms for high dimensional systems, but even these potent techniques suffer from the curse of dimensionality, as can be seen in multirobot systems. In this paper, we apply a technique called subdimensional expansion in order to enhance the performance of probabilistic planners for multirobot path planning. We accomplish this by exploiting the structure inherent to such problems. Subdimensional expansion initially plans in each individual robot's configuration space separately. It then couples those spaces when robots come into close proximity with one another. In this way, we constrain a probabilistic planner to search a low dimensional space, while dynamically generating a higher dimensional space where necessary. We show in simulation that subdimensional expansion enhanced PRMs can solve problems involving 32 robots and 128 total degrees of freedom in less than 10 minutes. We also demonstrate that enhancing RRTs and PRMs with subdimensional expansion can decrease the time required to find a solution by more than an order of magnitude.
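The decouple-then-couple idea can be sketched on a toy grid instance (an assumption for illustration, not the paper's benchmark or its M* machinery): two robots first plan independently with BFS; only when their individual paths conflict does the planner fall back to searching the coupled joint space, with collisions and swaps forbidden.

```python
from collections import deque
from itertools import product

GRID = 5                               # hypothetical empty 5x5 grid
MOVES = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]   # wait + 4-connected moves

def neighbors(c):
    for dx, dy in MOVES:
        x, y = c[0] + dx, c[1] + dy
        if 0 <= x < GRID and 0 <= y < GRID:
            yield (x, y)

def bfs(start, goal, nbrs):
    # generic shortest path by breadth-first search
    prev, frontier = {start: None}, deque([start])
    while frontier:
        c = frontier.popleft()
        if c == goal:
            path = []
            while c is not None:
                path.append(c)
                c = prev[c]
            return path[::-1]
        for n in nbrs(c):
            if n not in prev:
                prev[n] = c
                frontier.append(n)
    return None

def conflicts(p1, p2):
    # vertex conflict (same cell, same step) or edge conflict (swap)
    pad = lambda p, t: p[min(t, len(p) - 1)]
    for t in range(max(len(p1), len(p2))):
        if pad(p1, t) == pad(p2, t):
            return True
        if t and pad(p1, t) == pad(p2, t - 1) and pad(p2, t) == pad(p1, t - 1):
            return True
    return False

starts, goals = ((0, 2), (4, 2)), ((4, 2), (0, 2))

# Phase 1: plan in each robot's own configuration space separately
paths = [bfs(s, g, neighbors) for s, g in zip(starts, goals)]

if conflicts(*paths):
    # Phase 2: the decoupled plans collide head-on, so search the coupled
    # joint space (only where necessary) with collisions and swaps forbidden
    def joint_nbrs(state):
        a, b = state
        for na, nb in product(neighbors(a), neighbors(b)):
            if na != nb and not (na == b and nb == a):
                yield (na, nb)

    joint = bfs((starts[0], starts[1]), (goals[0], goals[1]), joint_nbrs)
    paths = [[s[0] for s in joint], [s[1] for s in joint]]
```

The payoff is the one claimed in the abstract: most of the search happens in the small per-robot spaces, and the exponentially larger joint space is touched only when the robots actually interact.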
Stochastic Receding Horizon Control for Robots with Probabilistic State Constraints
This paper deals with the problem of receding horizon control of a robot subject to stochastic uncertainty within a constrained environment. We deviate from the conventional approach that minimizes the expectation of a cost functional while ensuring satisfaction of probabilistic state constraints. Instead, we reduce the problem to a particular form of stochastic optimal control where the path that minimizes the cost functional is planned deterministically, and a local stochastic optimal controller with exit constraints ensures satisfaction of probabilistic state constraints while following the planned path. This control design strategy ensures boundedness of errors around the reference path and collision-free convergence to the goal with probability one under the assumption of unbounded inputs. We show that explicit expressions for the control law are possible for certain cases. We provide simulation results for a point robot moving in a constrained two-dimensional environment under Brownian noise. The method can be extended to systems with bounded inputs, if a small nonzero probability of failure can be accepted.
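The split the abstract describes — plan the path deterministically, then let a local stochastic controller keep the error bounded — can be sketched in one dimension. This is an illustrative assumption, not the paper's controller: a point robot tracks a precomputed constant-velocity reference under Brownian process noise using simple proportional feedback, simulated with an Euler-Maruyama step.

```python
import random

random.seed(3)

DT, STEPS = 0.01, 1000
K, SIGMA = 5.0, 0.5                    # feedback gain, noise intensity

# Phase 1: the reference path is planned deterministically (here x_ref(t) = t)
ref = [t * DT for t in range(STEPS + 1)]

# Phase 2: a local feedback controller tracks it despite Brownian noise
x, max_err = 0.0, 0.0
for t in range(STEPS):
    u = 1.0 + K * (ref[t] - x)         # feedforward velocity + error correction
    # Euler-Maruyama step of dx = u dt + sigma dW
    x += u * DT + SIGMA * random.gauss(0.0, DT ** 0.5)
    max_err = max(max_err, abs(x - ref[t + 1]))
```

With unbounded inputs the proportional term can always pull the state back, which is the intuition behind the bounded-error and probability-one convergence guarantees; clipping u models the bounded-input case, where a small failure probability must be accepted.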
High-Speed Flight in an Ergodic Forest
Inspired by birds flying through cluttered environments such as dense forests, this paper studies the theoretical foundations of high-speed motion through a randomly-generated obstacle field. Assuming that the locations and the sizes of the trees are determined by an ergodic point process, and under mild technical conditions on the dynamics of the bird, it is shown that the existence of an infinite collision-free trajectory through the forest exhibits a phase transition. In other words, if the bird flies faster than a certain critical speed, there is no infinite collision-free trajectory, with probability one, i.e., the bird will eventually collide with some tree, almost surely, regardless of the planning algorithm governing its motion. On the other hand, if the bird flies slower than this critical speed, then there exists at least one infinite collision-free trajectory, almost surely. Lower and upper bounds on the critical speed are derived for the special case of a Poisson forest considering a simple model for the bird's dynamics. Moreover, results from an extensive Monte-Carlo simulation study are presented. This paper also establishes novel connections between robot motion planning and statistical physics through ergodic theory and the theory of percolation, which may be of independent interest.
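The phase-transition intuition can be probed with a crude Monte-Carlo sketch (a toy discretization assumed for illustration, not the paper's model or bounds): the forest is a sequence of slices with Poisson-distributed trees, and the faster the bird flies, the fewer lateral cells it can shift per slice, so the set of reachable positions is more easily pinched off.

```python
import math
import random

random.seed(2)

CELLS = 40              # lateral positions across the corridor
SLICES = 60             # forward slices of forest, each one unit deep
TREES_PER_SLICE = 2.0   # Poisson mean number of trees per slice
W_LAT = 4.0             # maximum lateral speed, in cells per unit time

def poisson(lam):
    # Knuth's algorithm for sampling a Poisson random variable
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def survival_rate(speed, trials=100):
    # Fraction of random forests admitting a collision-free trajectory:
    # each slice takes 1/speed time units, so the bird can shift laterally
    # by at most W_LAT/speed cells per slice.
    shift = int(W_LAT / speed)
    ok = 0
    for _ in range(trials):
        reachable = set(range(CELLS))
        for _ in range(SLICES):
            blocked = set()
            for _ in range(poisson(TREES_PER_SLICE)):
                c = random.randrange(CELLS)
                blocked.update((c - 1, c, c + 1))   # each tree blocks 3 cells
            expanded = set()
            for r in reachable:
                expanded.update(range(max(r - shift, 0),
                                      min(r + shift, CELLS - 1) + 1))
            reachable = expanded - blocked
            if not reachable:
                break                               # the bird must collide
        else:
            ok += 1
    return ok / trials

slow = survival_rate(1.0)   # well below the critical speed
fast = survival_rate(8.0)   # well above it: no lateral maneuvering per slice
```

Sweeping the speed in such a simulation produces the sharp drop in survival probability that the paper formalizes as a percolation-style phase transition.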