Rapidly-exploring Random Trees (RRTs) have gained significant attention due to provable properties such as probabilistic completeness and asymptotic optimality. However, offline methods are only useful when the entire problem landscape is known a priori. Furthermore, many real-world applications have problem scopes orders of magnitude larger than typical mazes and bug traps; matching typical sample densities then requires very large numbers of samples, resulting in high computational effort even for reasonably low-cost trajectories. In this paper we propose an online trajectory optimization algorithm for large, uncertain environments using RRTs, which we call the Locally Adaptive Rapidly-exploring Random Tree (LARRT). This is achieved through two main contributions. First, we use an adaptive local sampling region and an adaptive sampling scheme, both of which depend on the state of the dynamic system and on observations of obstacles. Second, we propose a localized approach to planning and re-planning that fixes the root node to the current vehicle state and adds tree update functions. LARRT is designed to leverage the local problem scope to reduce computational complexity and obtain a lower-cost total solution than a classical RRT with a similar number of nodes. Using this technique, popular RRT variants can remain online even for prohibitively large planning problems, since it transforms a large trajectory optimization problem into one that resembles receding-horizon optimization. Finally, we demonstrate our approach in simulation and discuss the algorithmic trade-offs of the proposed approach.
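To make the local-sampling idea concrete, the following is a minimal sketch of an RRT whose samples are restricted to a window around the root (vehicle) state, in the spirit of the adaptive local sampling region described above. All names (`rrt_local`, `window`, `step`) and the fixed square sampling window are illustrative assumptions, not the paper's LARRT implementation, which additionally adapts the region to system state and obstacle observations and maintains the tree across re-planning steps.

```python
import math
import random

def rrt_local(start, goal, obstacles, window, n_samples=500, step=0.5, seed=0):
    """Sketch of an RRT restricted to a local sampling window around the
    root state. `obstacles` is a list of (center, radius) disks; this is
    an illustrative simplification, not the paper's LARRT algorithm."""
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def in_collision(p):
        return any(dist(p, c) < r for c, r in obstacles)

    for _ in range(n_samples):
        # Local sampling region: draw samples only within `window` of the
        # root state instead of the entire (possibly huge) workspace.
        sample = (start[0] + rng.uniform(-window, window),
                  start[1] + rng.uniform(-window, window))
        # Nearest neighbor in the tree (brute force for clarity).
        i = min(range(len(nodes)), key=lambda k: dist(nodes[k], sample))
        near = nodes[i]
        d = dist(near, sample)
        if d == 0.0:
            continue
        # Steer a fixed step from the nearest node toward the sample.
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if in_collision(new):
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if dist(new, goal) < step:
            # Goal reached: walk parents back to the root to extract a path.
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None
```

In a receding-horizon setting, one would repeatedly call such a planner with `start` re-fixed to the current vehicle state as it moves, reusing the portion of the tree that remains valid; the full algorithm's tree update functions handle that reuse.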
