# Dynamic Programming Algorithms for Planning in Continuous Domains and the Hamilton-Jacobi Equation

## Ian Mitchell

### University of British Columbia

**Abstract:** The two usual approaches for finding optimal controls (policies) for
discrete systems via dynamic programming are value iteration and policy
iteration. A primary advantage of policy iteration -- finite
termination -- no longer applies to continuous-state, continuous-time
systems, because the space of policies is infinite. Fortunately,
several of the common value iteration schemes translate easily
into the continuous domain, and the dynamic programming
principle itself can be transformed into differential form: the
Hamilton-Jacobi(-Bellman)(-Isaacs) equation.
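For reference, value iteration on a discrete system repeatedly applies the dynamic programming (Bellman) backup until the value function stops changing. The sketch below is a minimal illustration on a made-up four-state chain with an absorbing goal; it is not an example from the talk.

```python
# A minimal sketch of value iteration for a discrete, discounted
# shortest-path problem.  The 4-state chain MDP below is a made-up
# illustration, not taken from the talk.

def value_iteration(states, actions, transition, cost, gamma=0.95, tol=1e-8):
    """Iterate the dynamic programming (Bellman) backup to a fixed point."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            # Bellman backup: best one-step cost plus discounted cost-to-go.
            best = min(cost(s, a) + gamma * V[transition(s, a)]
                       for a in actions(s))
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:  # value function has (numerically) converged
            return V

# Example: states 0..3 on a line, absorbing goal at 3; each move costs 1.
states = [0, 1, 2, 3]
actions = lambda s: ["stay"] if s == 3 else ["right", "stay"]
transition = lambda s, a: min(s + 1, 3) if a == "right" else s
cost = lambda s, a: 0.0 if s == 3 else 1.0

V = value_iteration(states, actions, transition, cost)
print(V)  # cost-to-go shrinks as the state approaches the goal
```

In a continuous state space this tabular backup is no longer available directly, which is where the differential (Hamilton-Jacobi) form of the dynamic programming principle comes in.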
In this talk I will discuss some discrete algorithms, such as
Dijkstra's algorithm for shortest paths through a discrete graph,
which are inappropriate for continuous domains as stated, but for
which small modifications permit computation of a consistent
approximation to the continuous problem without excessive graph
complexity. I will describe two recent extensions to these algorithms
that permit path planning for robots with multiple objectives and
with different action constraints, as well as some publicly available
software for approximating the solution of Hamilton-Jacobi equations.
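For readers unfamiliar with the discrete baseline, Dijkstra's algorithm can be sketched as follows. This is a standard single-source shortest-path implementation over a hypothetical example graph; the continuous-domain modifications discussed in the talk are not shown.

```python
# A minimal sketch of Dijkstra's algorithm for single-source shortest
# paths on a discrete graph with nonnegative edge costs.  The graph
# below is a made-up example.
import heapq

def dijkstra(graph, source):
    """graph: dict mapping node -> list of (neighbor, edge cost) pairs."""
    dist = {source: 0.0}
    pq = [(0.0, source)]   # priority queue of (tentative distance, node)
    finalized = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in finalized:  # stale queue entry; skip it
            continue
        finalized.add(u)    # d is now the true shortest distance to u
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

graph = {
    "a": [("b", 1.0), ("c", 4.0)],
    "b": [("c", 2.0), ("d", 6.0)],
    "c": [("d", 3.0)],
    "d": [],
}
dist = dijkstra(graph, "a")
print(dist)  # {'a': 0.0, 'b': 1.0, 'c': 3.0, 'd': 6.0}
```

The key property exploited here -- once a node is popped from the queue, its distance is final -- is what small modifications must preserve when adapting the algorithm to a grid approximation of a continuous domain.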

Deepak Ramachandran

Last modified: Fri Aug 8 11:38:17 CST 2006