Awards


During the awards ceremony, we will present the conference best paper awards as well as the International Journal of Robotics Research best paper award.

The best conference papers are selected by an awards committee from a group of finalists. The finalists and winners are listed below.

Best Paper Award

This award is given to the best paper of the conference.

Winner:

Differentiable Physics and Stable Modes for Tool-Use and Manipulation Planning
We consider the problem of sequential manipulation and tool-use planning in domains that include physical interactions such as hitting and throwing. The approach integrates a Task and Motion Planning formulation with primitives that impose either stable kinematic constraints or differentiable dynamical and impulse-exchange constraints at the path optimization level. We demonstrate our approach on a variety of physical puzzles that involve tool use and dynamic interactions. We also collected data from humans solving analogous trials, which helps us discuss the prospects and limitations of the proposed approach.
[Full Paper]

Marc Toussaint, Kelsey Allen, Kevin Smith, Joshua Tenenbaum

Finalists:

Predicting Human Trust in Robot Capabilities across Tasks
Trust plays a significant role in shaping our interactions with one another and with automation. In this work, we investigate how humans transfer or generalize trust in robot capabilities across tasks, even with limited observations. We first present results from a human-subjects study using a real-world Fetch robot performing household tasks and a virtual-reality simulation of an autonomous vehicle performing driving and parking maneuvers. Based on our findings, we adopt a functional view of trust and develop two novel predictive models that capture trust evolution and transfer across tasks. Empirical results show that our models, a neural network with Gated Recurrent Units as memory and a Bayesian Gaussian process, outperform existing models when predicting trust for previously unseen participants and tasks.
[Full Paper]

Harold Soh, Shu Pan, Chen Min, David Hsu
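
As a rough illustration of the recurrent model mentioned in this abstract, the sketch below shows a GRU-based trust predictor. The input encoding, layer sizes, and sigmoid output are placeholder choices for the sketch, not the authors' architecture.

```python
# Illustrative sketch only: a GRU-based trust predictor in the spirit of the
# abstract above. Sizes and input encoding are placeholders, not the paper's model.
import torch
import torch.nn as nn

class TrustGRU(nn.Module):
    def __init__(self, task_feat_dim, hidden_dim=32):
        super().__init__()
        # Each observation: task features plus a success/failure flag.
        self.gru = nn.GRU(task_feat_dim + 1, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim + task_feat_dim, 1)

    def forward(self, task_feats, outcomes, query_feats):
        """task_feats: (N, T, D); outcomes: (N, T, 1) in {0, 1}; query_feats: (N, D)."""
        # Summarize the observed interaction history into a latent trust state.
        _, h = self.gru(torch.cat([task_feats, outcomes], dim=-1))
        # Predict trust on a new (possibly unseen) task from that state.
        logit = self.head(torch.cat([h[-1], query_feats], dim=-1))
        return torch.sigmoid(logit)

# Example: 3 observed tasks per participant, then query trust on a new task.
model = TrustGRU(task_feat_dim=8)
trust = model(torch.randn(4, 3, 8), torch.randint(0, 2, (4, 3, 1)).float(), torch.randn(4, 8))
print(trust.shape)  # torch.Size([4, 1])
```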

Analytical Derivatives of Rigid Body Dynamics Algorithms
Rigid body dynamics is a well-established methodology in robotics. It can be exploited to exhibit the analytic form of the kinematic and dynamic functions of the robot model. Two major algorithms, the recursive Newton-Euler algorithm (RNEA) and the articulated body algorithm (ABA), have been proposed to compute inverse dynamics and forward dynamics in a few microseconds. However, computing their derivatives remains costly, whether by finite differences (expensive and approximate) or by automatic differentiation (difficult to implement and suboptimal). As these derivatives become increasingly important (in optimal control, estimation, co-design, and reinforcement learning), we propose in this paper new algorithms to compute them efficiently using closed-form formulations. We first explicitly differentiate RNEA, using the chain rule and an adequate algebraic differentiation of spatial algebra. Then, using properties of the derivative of function composition, we show that the same algorithm can also be used to compute the derivatives of the forward dynamics at marginal additional cost. To this end, we finally introduce a new algorithm to compute the inverse of the joint-space inertia matrix without explicitly computing the matrix itself. The algorithms have been implemented in an open-source C++ framework. The reported benchmarks, based on several robot models, show computational costs ranging from 4 microseconds (for a 7-dof arm) to 17 microseconds (for a 36-dof humanoid), outperforming state-of-the-art results. We also experimentally show the importance of exact computations (with respect to finite differences) when exhibiting the sparsity of the resulting matrices.
[Full Paper]

Justin Carpentier
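
The open-source framework is not named on this page; the minimal sketch below assumes it is the Pinocchio library and that its Python bindings expose the analytical RNEA derivatives (function and attribute names follow recent Pinocchio releases and are assumptions, not text from this page).

```python
# Minimal sketch, assuming the Pinocchio Python bindings for the analytical
# derivatives of inverse dynamics (tau = RNEA(q, v, a)).
import numpy as np
import pinocchio as pin

model = pin.buildSampleModelManipulator()   # simple sample manipulator model
data = model.createData()

q = pin.randomConfiguration(model)          # joint configuration
v = np.random.rand(model.nv)                # joint velocities
a = np.random.rand(model.nv)                # joint accelerations

# Closed-form partial derivatives of tau w.r.t. q and v.
pin.computeRNEADerivatives(model, data, q, v, a)
dtau_dq = data.dtau_dq.copy()
dtau_dv = data.dtau_dv.copy()
# The partial derivative w.r.t. a is the joint-space inertia matrix.

# Sanity check of d(tau)/dq against finite differences (the costly,
# approximate alternative mentioned in the abstract).
eps = 1e-8
tau0 = pin.rnea(model, data, q, v, a).copy()
fd = np.zeros((model.nv, model.nv))
for k in range(model.nv):
    dq = np.zeros(model.nv)
    dq[k] = eps
    q_plus = pin.integrate(model, q, dq)    # perturb on the configuration manifold
    fd[:, k] = (pin.rnea(model, data, q_plus, v, a) - tau0) / eps
print(np.max(np.abs(fd - dtau_dq)))         # small, up to finite-difference error
```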

Best Student Paper Award sponsored by Springer on behalf of Autonomous Robots

This award is given to the best conference paper whose first author is a student.

Winner:

In-Hand Manipulation via Motion Cones
In this paper we present the mechanics and algorithms to compute the set of feasible motions of an object pushed in a plane. This set is known as the motion cone and was previously described for non-prehensile manipulation tasks in the horizontal plane. We generalize its geometric construction to a broader set of planar tasks, where external forces such as gravity influence the dynamics of pushing, and to prehensile tasks, where there are complex interactions among the gripper, object, and pusher. We show that the motion cone is defined by a set of low-curvature surfaces and provide a polyhedral cone approximation to it. We verify its validity with 2000 pushing experiments recorded with a motion tracking system. Motion cones abstract the algebra involved in simulating frictional pushing by providing bounds on the set of feasible motions and by characterizing which pushes will stick or slip. We demonstrate their use in the dynamic propagation step of a sampling-based planning algorithm for in-hand manipulation. The planner generates trajectories that involve sequences of continuous pushes, with 5-1000x speed improvements over equivalent algorithms.
[Full Paper]

Nikhil Chavan Dafle, Rachel Holladay, Alberto Rodriguez
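
To illustrate how a polyhedral cone approximation like the one described above can be used, the sketch below tests whether a candidate object twist is a nonnegative combination of the cone's generators. The generator values are made up for the example and are not taken from the paper.

```python
# Illustrative only: membership test for a polyhedral approximation of a
# motion cone. Generator twists are made-up numbers, not values from the paper.
import numpy as np
from scipy.optimize import nnls

# Columns are generator twists (vx, vy, omega) spanning the polyhedral cone.
G = np.column_stack([
    [1.0, 0.1, 0.3],
    [0.8, 0.5, -0.2],
    [0.2, 0.9, 0.4],
])

def in_motion_cone(twist, generators, tol=1e-6):
    """True if `twist` is (approximately) a nonnegative combination of the generators."""
    _, residual = nnls(generators, twist)
    return residual <= tol

inside = G @ np.array([0.5, 1.2, 0.0])                # inside by construction
print(in_motion_cone(inside, G))                      # True
print(in_motion_cone(np.array([-1.0, 0.0, 0.0]), G))  # False for these generators
```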

Finalists:

Optimal Solution of the Generalized Dubins Interval Problem
The problem addressed in this paper is motivated by surveillance mission planning with curvature-constrained trajectories for Dubins vehicles, which can be formulated as the Dubins Traveling Salesman Problem with Neighborhoods (DTSPN). We aim to provide a tight lower bound on the DTSPN, especially for cases where the sequence of visits to the given regions is available. We introduce the problem of finding the shortest Dubins path connecting two regions with prescribed intervals of possible departure and arrival heading angles of the vehicle. This new problem is called the Generalized Dubins Interval Problem (GDIP), and its optimal solution is addressed. Based on the solution of the GDIP, a tight lower bound on the above-mentioned DTSPN is provided, which is further utilized in a sampling-based solution of the DTSPN to determine a feasible solution that is close to the optimum.
[Full Paper]

Petr Váňa, Jan Faigl

EV-FlowNet: Self-Supervised Optical Flow Estimation for Event-based Cameras
Event-based cameras have shown great promise in a variety of situations where frame-based cameras suffer, such as high-speed motions and high-dynamic-range scenes. However, developing algorithms for event measurements requires a new class of hand-crafted algorithms. Deep learning has shown great success in providing model-free solutions to many problems in the vision community, but existing networks have been developed with frame-based images in mind, and there does not exist the wealth of labeled data for events that there does for images for supervised training. To address these points, we present EV-FlowNet, a novel self-supervised deep learning pipeline for optical flow estimation for event-based cameras. In particular, we introduce an image-based representation of a given event stream, which is fed into a self-supervised neural network as the sole input. The corresponding grayscale images captured from the same camera at the same time as the events are then used as a supervisory signal to provide a loss function at training time, given the estimated flow from the network. We show that the resulting network is able to accurately predict optical flow from events only in a variety of different scenes, with performance competitive with image-based networks. This method not only allows for accurate estimation of dense optical flow, but also provides a framework for the transfer of other self-supervised methods to the event-based domain.
[Full Paper]

Alex Zhu, Liangzhe Yuan, Kenneth Chaney, Kostas Daniilidis
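
The supervisory signal described above is essentially a photometric consistency loss between grayscale frames warped by the predicted flow. The sketch below is a generic version of that idea in PyTorch; it is not the authors' code, and the network producing `flow` is omitted.

```python
# Generic photometric self-supervision sketch (not the EV-FlowNet code):
# warp the later grayscale frame back with the predicted flow and penalize
# the difference from the earlier frame.
import torch
import torch.nn.functional as F

def photometric_loss(img_prev, img_next, flow):
    """img_prev, img_next: (N, 1, H, W) grayscale frames; flow: (N, 2, H, W) in pixels."""
    n, _, h, w = img_prev.shape
    # Pixel grid, shifted by the predicted flow.
    ys, xs = torch.meshgrid(torch.arange(h, dtype=torch.float32),
                            torch.arange(w, dtype=torch.float32), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).unsqueeze(0).to(flow)   # (1, 2, H, W)
    coords = grid + flow                                        # where each pixel moved to
    # Normalize to [-1, 1] for grid_sample (x first, then y).
    coords_x = 2.0 * coords[:, 0] / (w - 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / (h - 1) - 1.0
    sample_grid = torch.stack((coords_x, coords_y), dim=-1)     # (N, H, W, 2)
    warped = F.grid_sample(img_next, sample_grid, align_corners=True)
    return (warped - img_prev).abs().mean()                     # L1 photometric error
```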

Best Systems Paper Award in Memory of Seth Teller

This award is given to an outstanding systems paper presented at the RSS conference. Each year, the awards committee determines whether a paper of sufficient quality is among the accepted papers and may decide not to give the award. In years when no award is given, the list of finalists is not disclosed. The award was given for the first time in 2015.

Winner:

Embedded High Precision Control and Corn Stand Counting Algorithms for an Ultra-Compact 3D Printed Field Robot
This paper presents embedded high-precision control and corn stand counting algorithms for a low-cost, ultra-compact, 3D-printed, autonomous field robot for agricultural operations. Currently, plant traits such as emergence rate, biomass, vigor, and stand count are measured manually, which is highly labor-intensive and prone to errors. The robot, termed TerraSentia, is designed to automate the measurement of plant traits for efficient phenotyping as an alternative to manual measurements. In this paper, we formulate a nonlinear moving horizon estimator that identifies key terrain parameters using onboard robot sensors, and a learning-based nonlinear model predictive controller (NMPC) that ensures high-precision path tracking in the presence of unknown wheel-terrain interaction. Moreover, we develop a machine vision algorithm that enables TerraSentia to count corn stands while driving through fields autonomously. We present results of an extensive field-test study showing that i) the robot tracks paths precisely, with less than 5 cm error, so that it is less likely to damage plants, and ii) the machine vision algorithm is robust against interference from leaves and weeds; the system has been verified in corn fields at growth stages V4, V6, VT, R2, and R6 at five different locations. The robot's predictions agree well with the ground truth, with a correlation coefficient of R = 0.96.
[Full Paper]

Erkan Kayacan, Zhongzhong Zhang, Girish Chowdhary

Finalists:

Autonomous Adaptive Modification of Unstructured Environments
We present and validate a property-driven autonomous system that modifies its environment to achieve and maintain navigability over irregular 3-dimensional terrain. This capability is essential for systems that operate in unstructured outdoor or remote environments, either on their own or as part of a team. Our work focuses on using decision procedures in our building strategy that tie building actions to the function of the resulting structure, giving rise to adaptive and robust building behavior. Our approach is novel in that it functions in fully unstructured 3D terrain, driven by continuous evaluation of and reaction to terrain properties rather than reliance on a structure blueprint. We choose an experimental setup and building material that closely resemble real-world scenarios and demonstrate the effectiveness of our approach using a low-cost robot system.
[Full Paper]

Maira Saboia da Silva, Vivek Thangavelu, Walker Gosrich, Nils Napp

Agile Autonomous Driving using End-to-End Deep Imitation Learning
We present an end-to-end imitation learning system for agile, off-road autonomous driving using only low-cost on-board sensors. By imitating a model predictive controller equipped with advanced sensors, we train a deep neural network control policy to map raw, high-dimensional observations to continuous steering and throttle commands. Compared with recent approaches to similar tasks, our method requires neither state estimation nor on-the-fly planning to navigate the vehicle. Our approach relies on, and experimentally validates, recent imitation learning theory. Empirically, we show that policies trained with online imitation learning overcome well-known challenges related to covariate shift and generalize better than policies trained with batch imitation learning. Building on these insights, our autonomous driving system demonstrates successful high-speed off-road driving, matching state-of-the-art performance.
[Full Paper]

Yunpeng Pan, Ching-An Cheng, Kamil Saigol, Keuntaek Lee, Xinyan Yan, Evangelos Theodorou, Byron Boots
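
The contrast drawn above between online and batch imitation learning can be illustrated with a DAgger-style loop, in which the learner's own rollouts are relabeled by the expert. In the sketch below, `env`, `learner`, and `expert` are placeholder interfaces (the expert stands in for an MPC with richer sensing); this is not the authors' code.

```python
# Schematic DAgger-style online imitation loop, as a contrast to batch
# imitation learning. All interfaces are placeholders.
def online_imitation(env, learner, expert, n_iters=10, horizon=1000):
    observations, expert_actions = [], []
    for _ in range(n_iters):
        obs = env.reset()
        for _ in range(horizon):
            observations.append(obs)
            expert_actions.append(expert.act(obs))   # expert labels the learner's states
            obs, done = env.step(learner.act(obs))   # but the learner's policy drives
            if done:
                break
        learner.fit(observations, expert_actions)    # refit on the aggregated dataset
    return learner
```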