UMD Robotics Seminar

I was honored to give the Lockheed Martin Robotics Seminar at the University of Maryland last month. I talked about motivation dynamics, the framework I am developing to use dynamical systems tools for autonomous task management.

Much of the material I discussed can be found in various publications from my group, especially [?] and [?].

Giving talks over Zoom is an art I’m still learning. Showing videos poses a particular challenge because they’re often transmitted with a lag. For those who want to take a closer look at my videos from that talk, I post them here for reference.

A point robot using motivation dynamics to navigate in a sphere world, repeatedly patrolling two goal locations (red diamonds) while avoiding obstacles (black circles).
State trajectories for the two obstacles simulation. From top to bottom, the panels show navigation functions (normalized distance to goals), motivation state, and value state.
Simulation of the same controller as in the two-obstacle case, but now with one moving obstacle. The controller has no knowledge of the obstacle's intent, only perfect sensing of the obstacle's current location.
State trajectories for the one moving obstacle simulation. From top to bottom, the panels show navigation functions (normalized distance to goals), motivation state, and value state.
Video for our recently accepted T-RO paper on motivation dynamics [?].
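For readers who want to play with the patrolling idea, here is a minimal sketch. It is NOT the motivation-dynamics controller from the talk; it is a toy attract/repel potential field in which a point robot switches between two goals on arrival, which reproduces only the qualitative patrolling behavior shown in the videos. All names and gains below are illustrative assumptions.

```python
import numpy as np

# Toy patrolling sketch (illustrative only; not the controller in the paper).
GOALS = [np.array([-2.0, 0.0]), np.array([2.0, 0.0])]  # the two goal locations
OBSTACLE = np.array([0.0, 0.0])                        # one circular obstacle
K_REP = 0.3                                            # repulsion gain

def step(pos, goal, dt=0.05):
    """One Euler step: linear attraction to the goal plus inverse-square
    repulsion from the obstacle center."""
    diff = pos - OBSTACLE
    dist = max(np.linalg.norm(diff), 1e-6)
    repel = K_REP * diff / dist**3
    return pos + dt * ((goal - pos) + repel)

# Patrol: head to the current goal, switch targets once within 0.1 of it.
pos, target, switches = np.array([-2.0, 1.5]), 0, 0
for _ in range(4000):
    pos = step(pos, GOALS[target])
    if np.linalg.norm(pos - GOALS[target]) < 0.1:  # goal reached
        target, switches = 1 - target, switches + 1
```

In the actual framework, the hard switch here is replaced by continuous motivation and value states, which is what makes the closed loop amenable to dynamical-systems analysis.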

CDC papers

We have two new papers accepted to this year’s Conference on Decision and Control:

[?] extends the results of our previous paper [?] to account for individual tasks encoded as limit cycles.

[?] shows how the decision-making mechanism at the heart of the motivation dynamics system [?] embeds an unfolded pitchfork bifurcation.
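For readers unfamiliar with the terminology, the standard normal form of the pitchfork with an unfolding (symmetry-breaking) parameter is sketched below; the precise way it embeds in our system is detailed in the paper.

```latex
% Unfolded pitchfork normal form (standard, e.g. Golubitsky & Schaeffer):
\begin{equation}
  \dot{x} = \sigma + \lambda x - x^{3}
\end{equation}
% lambda is the bifurcation parameter, sigma the unfolding parameter.
% At sigma = 0 the symmetric pitchfork is recovered: the equilibrium
% x = 0 loses stability at lambda = 0, yielding the two stable branches
% x = \pm\sqrt{\lambda}; sigma != 0 breaks the symmetry and biases the
% decision toward one branch.
```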

Hope to see you in Nice!

SoCal Robotics Symposium


Last weekend we were at Caltech for the 2019 SoCal Robotics Symposium. It was a great, small conference with really interesting ideas from academia, the Jet Propulsion Laboratory, and industry.

I presented our work on motivation dynamics [?]. The extended abstract is available here [?] and the summary poster slide is below. Thanks to the organizers for a smoothly run and stimulating day.

UCL algorithm corrections


There was a small but subtle error in the proofs published in [?]. We have corrected it in a new Appendix G added to the arXiv version of the paper. These corrections also apply to other papers that built on the results of [?], including [?] and [?].

The error arose from our application of concentration inequalities, sometimes known as tail bounds. In the originally published proofs, we condition on the number $n_i^t$ of times that the algorithm has selected arm $i$ up to time $t$. Since the arm-selection policy depends on the rewards accrued, $n_i^t$ and the rewards are dependent random variables. In the correction, we build on an alternative concentration inequality that accounts for this dependence and show that the proofs of all the performance bounds follow a similar pattern, with a slight modification to the decision heuristic.
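To see how a tail bound turns into a decision heuristic, here is a minimal frequentist sketch. Note the hedge: the UCL algorithm in the paper uses an upper *credible* limit from a Bayesian posterior, not the Hoeffding-style bound below; this sketch only illustrates how a concentration inequality yields an exploration bonus that shrinks with the pull count $n_i^t$. All names and constants are illustrative assumptions.

```python
import math
import random

def ucb_index(mean_i, n_i, t, c=2.0):
    """Empirical mean plus a Chernoff-Hoeffding exploration bonus.
    The bonus decays as arm i accumulates pulls n_i."""
    return mean_i + math.sqrt(c * math.log(t) / n_i)

def run_bandit(p=(0.9, 0.1), horizon=2000, seed=0):
    """Two-armed Bernoulli bandit driven by the index above.
    Returns the pull counts of each arm."""
    rng = random.Random(seed)
    pulls, sums = [0, 0], [0.0, 0.0]
    for t in range(1, horizon + 1):
        if t <= 2:  # initialization: pull each arm once
            i = t - 1
        else:       # otherwise pick the arm with the largest index
            i = max(range(2),
                    key=lambda k: ucb_index(sums[k] / pulls[k], pulls[k], t))
        reward = 1.0 if rng.random() < p[i] else 0.0
        pulls[i] += 1
        sums[i] += reward
    return pulls

pulls = run_bandit()
```

The subtlety the correction addresses lives in the index computation: `pulls[k]` is itself a random variable that depends on past rewards, so a tail bound that treats it as fixed is applied outside its hypotheses.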

Students presenting at IMECE

The undergraduates who worked in the lab last summer will present two posters on that work at the ASME IMECE conference. If you’re attending IMECE in Pittsburgh, please stop by on November 11!

Brendan Bogar will present “Investigating a Framework for Visualizing Reinforcement Learning Algorithms via Quadrupedal Robotic Simulation”.

David Chan, Mel Nguyen, Oshadha Gunasekara, and Randall Kliman will present “An object-oriented framework for fast development and testing of mobile robot control algorithms”. Abstracts are available on the IMECE website.