An optimization-based formalism for shared autonomy in dynamic environments
Teleoperation is an integral component of various industrial processes, for
example concrete spraying, assisted welding, plastering, inspection, and
maintenance. These systems often implement direct control, mapping interface
signals onto robot motions. Successful task completion typically demands high
levels of manual dexterity and imposes a substantial cognitive load. In
addition, the operator is often located near dangerous machinery. Consequently,
safety is of critical importance, and training is expensive and prolonged -- in
some cases taking several months or even years.
An autonomous robot replacement would be an ideal solution since the human could
be removed from danger and training costs significantly reduced. However, this
is currently not possible due to the complexity and unpredictability of the
environments, and the levels of situational and contextual awareness required to
successfully complete these tasks.
In this thesis, the limitations of direct control are addressed by developing
methods for shared autonomy. A shared autonomy approach combines
human input with autonomy to generate optimal robot motions. The approach taken
in this thesis is to formulate shared autonomy within an optimization framework
that finds optimized states and controls by minimizing a cost function modeling
the task objectives, subject to a set of (changing) physical and operational constraints.
Online shared autonomy requires the human to interact continuously with
the system via an interface (akin to direct control). The key challenges
addressed in this thesis are: 1) ensuring computational feasibility (the method
must find solutions fast enough to sustain a sampling frequency bounded below
by 40 Hz), 2) reacting to changes in the environment and operator intention,
3) appropriately blending operator input and autonomy, and 4) allowing the
operator to supply input in an intuitive manner that is conducive to high task
performance.
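As a rough illustration of this kind of formulation, the sketch below blends a human-commanded target with an autonomous target in a small constrained optimization solved once per control cycle. The cost terms, weights, and obstacle constraint are illustrative assumptions, not the exact formulation used in the thesis.

```python
# Minimal sketch of an optimization-based shared-autonomy step (illustrative
# only; the blend weights and constraint are assumptions, not the thesis's
# actual formulation).
import numpy as np
from scipy.optimize import minimize

def shared_autonomy_step(q, q_human, q_auto, obstacle, radius=0.2, alpha=0.7):
    """Blend operator input q_human with an autonomous target q_auto,
    subject to a simple obstacle-avoidance constraint."""
    def cost(q_next):
        # Weighted blend of human and autonomous targets plus a smoothness term.
        return (alpha * np.sum((q_next - q_human) ** 2)
                + (1.0 - alpha) * np.sum((q_next - q_auto) ** 2)
                + 0.1 * np.sum((q_next - q) ** 2))

    constraints = [{
        "type": "ineq",  # stay outside the obstacle's radius
        "fun": lambda q_next: np.linalg.norm(q_next - obstacle) - radius,
    }]
    return minimize(cost, q, constraints=constraints).x

# One control-loop iteration (2D task space for clarity).
q_next = shared_autonomy_step(
    q=np.zeros(2), q_human=np.array([1.0, 0.0]),
    q_auto=np.array([0.8, 0.2]), obstacle=np.array([0.5, 0.0]))
print(q_next)
```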
Various operator interfaces are investigated with regard to the control space
they expose, called a mode of teleoperation. Extensive evaluations were carried
out to determine which modes are most intuitive and lead to the highest
performance in target acquisition tasks (e.g. spraying, welding). Our
performance metrics quantified task difficulty based on Fitts' law, together
with a measure of how well the constraints affecting task performance were met.
The experimental evaluations indicate that higher performance is achieved when
humans submit commands in low-dimensional task spaces rather than through
joint-space manipulations. In addition, our multivariate analysis indicated
that participants with regular exposure to computer games achieved higher
performance.
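For reference, the Shannon formulation of Fitts' law gives an index of difficulty ID = log2(D/W + 1) for a reach of distance D to a target of width W, and a throughput of ID divided by movement time. The snippet below shows this calculation; the thesis's exact metric definitions may differ.

```python
import numpy as np

def fitts_index_of_difficulty(distance, width):
    """Shannon formulation of Fitts' index of difficulty (bits)."""
    return np.log2(distance / width + 1.0)

def throughput(distance, width, movement_time):
    """Bits per second: task difficulty divided by completion time."""
    return fitts_index_of_difficulty(distance, width) / movement_time

# Example: a 0.5 m reach to a 0.05 m target completed in 1.8 s.
print(fitts_index_of_difficulty(0.5, 0.05))  # ~3.46 bits
print(throughput(0.5, 0.05, 1.8))            # ~1.92 bits/s
```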
Shared autonomy aims to relieve human operators of the burden of precise motor
control, tracking, and localization. An optimization-based representation for
shared autonomy in dynamic environments was developed. Real-time tractability is
ensured by modulating the human input with information about the changing
environment within the same task space, rather than adding it to the
optimization cost or constraints. The method was illustrated with two
real-world applications: grasping objects in cluttered environments and
spraying tasks that require sprayed linings of high homogeneity.
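A minimal sketch of the modulation idea is given below: the component of the operator's task-space command that points toward a nearby obstacle is attenuated before the command reaches the optimizer, rather than encoding the obstacle as an extra cost or constraint. The function and gains are hypothetical, not the thesis's actual operator.

```python
import numpy as np

def modulate_input(v_human, position, obstacle, influence=0.5):
    """Damp the component of the operator's task-space velocity that points
    toward a nearby obstacle; the modulated command is then passed to the
    optimizer unchanged (no extra cost terms or constraints)."""
    to_obstacle = obstacle - position
    dist = np.linalg.norm(to_obstacle)
    if dist > influence or dist == 0.0:
        return v_human
    direction = to_obstacle / dist
    approach = max(np.dot(v_human, direction), 0.0)  # only the approaching part
    gain = dist / influence                          # 0 at contact, 1 at the boundary
    return v_human - (1.0 - gain) * approach * direction

v = modulate_input(np.array([0.2, 0.0]), np.zeros(2), np.array([0.3, 0.0]))
print(v)  # the approach component is attenuated as the obstacle gets closer
```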
Maintaining motion patterns -- referred to as skills -- is often an
integral part of teleoperation for various industrial processes (e.g. spraying,
welding, plastering). We develop a novel model-based shared autonomy framework
that incorporates the notion of skill assistance to help operators sustain
these motion patterns while adhering to environment constraints. To achieve
computational feasibility, we introduce a novel parameterization of state and
control that combines skill and underlying trajectory models, leveraging a
special type of curve known as a clothoid. This new parameterization allows for
efficient computation of skill-based short-term horizon plans, enabling the use
of a model predictive control loop. Our hardware realization validates the
ability of our method to recognize a change of intended skill and shows
improved quality of the output motion, even with dynamically changing
obstacles.
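The clothoid primitive underlying this parameterization can be sampled directly from the Fresnel integrals, since its curvature grows linearly with arc length. The snippet below illustrates only this building block, not the full skill/trajectory parameterization used in the framework.

```python
import numpy as np
from scipy.special import fresnel

def clothoid_points(s, sharpness=1.0):
    """Sample a clothoid (Euler spiral): curvature grows linearly with arc
    length s, so positions follow from the Fresnel integrals S and C."""
    t = s * np.sqrt(sharpness / np.pi)
    S, C = fresnel(t)                   # scipy returns (sin-integral, cos-integral)
    scale = np.sqrt(np.pi / sharpness)
    return scale * C, scale * S         # x(s), y(s)

s = np.linspace(0.0, 3.0, 50)
x, y = clothoid_points(s)
print(x[-1], y[-1])
```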
In addition, extensions of the work to supervisory control are described. An
exploratory study presents an approach that improves computational feasibility
for complex tasks with minimal interactive effort on the part of the human.
Adaptations are theorized that might allow such a method to be applicable and
beneficial to high degree-of-freedom systems. Finally, a system developed in
our lab that implements sliding autonomy is described and shown to complete
multi-objective tasks in complex environments with minimal interaction from
the human.
Comparing Alternate Modes of Teleoperation for Constrained Tasks
Teleoperation of heavy machinery in industry often requires operators to be
in close proximity to the plant and issue commands on a per-actuator level
using joystick input devices. However, this is non-intuitive and makes
achieving the desired job properties challenging, requiring operators to
complete extensive and costly training. Even so, operator fatigue is
common, with implications for personal safety, project timeliness, cost, and
quality. While full automation is not yet achievable due to the unpredictability
and the dynamic nature of the environment and task, shared control paradigms
allow operators to issue high-level commands in an intuitive, task-informed
control space while having the robot optimize for achieving desired job
properties.
In this paper, we compare a number of modes of teleoperation, exploring both
the dimensionality of the control input and which control spaces are most
intuitive. Our experimental evaluation quantified task difficulty using the
well-known Fitts' law, together with a measure of how well the constraints
affecting task performance were met. Our experiments show that higher
performance is achieved when humans submit commands in low-dimensional task
spaces rather than through joint-space manipulations.
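One common way to realize such a low-dimensional task-space mode (shown here only as an illustration; the paper's controller is optimization-based) is to map the operator's end-effector velocity command to joint velocities with a damped least-squares pseudoinverse:

```python
import numpy as np

def task_to_joint_velocity(jacobian, v_task, damping=1e-2):
    """Map a low-dimensional task-space command (e.g. an end-effector velocity)
    to joint velocities with a damped least-squares pseudoinverse, so the
    operator never has to reason about individual actuators."""
    J = np.asarray(jacobian)
    JJt = J @ J.T + (damping ** 2) * np.eye(J.shape[0])
    return J.T @ np.linalg.solve(JJt, v_task)

# Hypothetical 3-DoF planar arm; the operator commands a 2D end-effector velocity.
J = np.array([[-0.5, -0.3, -0.1],
              [ 0.8,  0.4,  0.2]])
dq = task_to_joint_velocity(J, np.array([0.1, 0.0]))
print(dq)
```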
High-fidelity quantum state evolution in imperfect photonic integrated circuits
We propose and analyze the design of a programmable photonic integrated circuit for high-fidelity quantum computation and simulation. We demonstrate that the reconfigurability of our design allows us to overcome two major impediments to quantum optics on a chip: it removes the need for a full fabrication cycle for each experiment and allows for compensation of fabrication errors using numerical optimization techniques. Under a pessimistic fabrication model for the silicon-on-insulator process, we demonstrate a dramatic fidelity improvement for the linear optics controlled-not and controlled-phase gates and, showing the scalability of this approach, the iterative phase estimation algorithm built from individually optimized gates. We also propose and simulate an experiment that the programmability of our system would enable: a statistically robust study of the evolution of entangled photons in disordered quantum walks. Overall, our results suggest that existing fabrication processes are sufficient to build a quantum photonic processor capable of high-fidelity operation.
United States Air Force Office of Scientific Research, Multidisciplinary University Research Initiative (Grant FA9550-14-1-0052); iQuISE Fellowship; National Science Foundation (U.S.) Graduate Research Fellowship (Grant 1122374); American Society for Engineering Education, National Defense Science and Engineering Graduate Fellowship; Alfred P. Sloan Foundation (Sloan Research Fellowship)
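To illustrate the error-compensation idea at a toy scale (an assumption-laden sketch, not the paper's device model or gate set), the snippet below tunes the two phase shifters of a single Mach-Zehnder interferometer with imperfect directional couplers so that it best approximates an ideal 50:50 splitter.

```python
import numpy as np
from scipy.optimize import minimize

def coupler(eta):
    """Directional coupler with (possibly imperfect) power cross-coupling eta."""
    t, r = np.sqrt(1.0 - eta), np.sqrt(eta)
    return np.array([[t, 1j * r], [1j * r, t]])

def mzi(theta, phi, eta1, eta2):
    """Mach-Zehnder interferometer: two couplers with fabrication deviations
    eta1/eta2 and two programmable phase shifts theta (internal) and phi (external)."""
    return (coupler(eta2) @ np.diag([np.exp(1j * theta), 1.0])
            @ coupler(eta1) @ np.diag([np.exp(1j * phi), 1.0]))

def infidelity(phases, target, eta1, eta2):
    U = mzi(phases[0], phases[1], eta1, eta2)
    return 1.0 - abs(np.trace(target.conj().T @ U)) ** 2 / 4.0

target = coupler(0.5)                  # ideal 50:50 splitter
eta1, eta2 = 0.45, 0.57                # assumed fabrication deviations
result = minimize(infidelity, x0=[0.1, 0.1], args=(target, eta1, eta2),
                  method="Nelder-Mead")
print("fidelity after phase-shifter tuning:", 1.0 - result.fun)
```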
OpTaS: An Optimization-based Task Specification Library for Trajectory Optimization and Model Predictive Control
This paper presents OpTaS, a task specification Python library for Trajectory
Optimization (TO) and Model Predictive Control (MPC) in robotics. Both TO and
MPC are increasingly receiving interest in optimal control and in particular
handling dynamic environments. While a flurry of software libraries exists to
handle such problems, they either provide interfaces that are limited to a
specific problem formulation (e.g. TracIK, CHOMP), or are large and statically
specify the problem in configuration files (e.g. EXOTica, eTaSL). OpTaS, on the
other hand, allows a user to specify custom nonlinear constrained problem
formulations in a single Python script, while also allowing the controller
parameters to be modified during execution. The library provides interfaces to
several open-source and commercial solvers (e.g. IPOPT, SNOPT, KNITRO, SciPy)
to facilitate
integration with established workflows in robotics. Further benefits of OpTaS
are highlighted through a thorough comparison with common libraries. An
additional key advantage of OpTaS is the ability to define optimal control
tasks in the joint space, task space, or indeed simultaneously. The code for
OpTaS is easily installed via pip, and the source code with examples can be
found at https://github.com/cmower/optas
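The kind of problem such a library builds is a nonlinear program handed to solvers such as IPOPT. As a rough, library-agnostic sketch (written directly in CasADi, on which OpTaS is built, rather than in the OpTaS API itself), a short joint-space trajectory optimization might look like this:

```python
import casadi as ca

# Decision variables: a 2-DoF joint trajectory over T knots.
T, nq = 10, 2
Q = ca.SX.sym("Q", nq, T)

# Cost: reach a (hypothetical) goal configuration at the horizon, smoothly.
q_goal = ca.DM([1.0, -0.5])
cost = ca.sumsqr(Q[:, T - 1] - q_goal)
for k in range(T - 1):
    cost += 0.1 * ca.sumsqr(Q[:, k + 1] - Q[:, k])

# Equality constraint: the trajectory starts at the current (zero) configuration.
g = Q[:, 0]

nlp = {"x": ca.vec(Q), "f": cost, "g": g}
solver = ca.nlpsol("solver", "ipopt", nlp)
sol = solver(lbg=0.0, ubg=0.0)
print(ca.reshape(sol["x"], nq, T))
```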
Deep Reinforcement Learning Based System for Intraoperative Hyperspectral Video Autofocusing
Hyperspectral imaging (HSI) captures a greater level of spectral detail than
traditional optical imaging, making it a potentially valuable intraoperative
tool when precise tissue differentiation is essential. Hardware limitations of
current optical systems used for handheld real-time video HSI result in a
limited focal depth, thereby posing usability issues for integration of the
technology into the operating room. This work integrates a focus-tunable liquid
lens into a video HSI exoscope, and proposes novel video autofocusing methods
based on deep reinforcement learning. A first-of-its-kind robotic focal-time
scan was performed to create a realistic and reproducible testing dataset. We
benchmarked our proposed autofocus algorithm against traditional policies, and
found our novel approach to perform significantly better than traditional
techniques, achieving a lower mean absolute focal error. In addition, we
performed a blinded usability trial in which two neurosurgeons compared the
system under different autofocus policies, and found our novel approach to be
the most favourable, making our system a desirable addition for intraoperative
HSI.
Scalable single-photon detection on a photonic chip
We developed a scalable method for integrating sub-70-ps-timing-jitter superconducting nanowire single-photon detectors with photonic integrated circuits. We assembled a photonic chip with four integrated detectors and performed the first on-chip g^(2)(τ) measurements of an entangled-photon source.
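For context, a g^(2)(τ) measurement estimates the normalized rate of coincidences between two detectors as a function of delay. The snippet below computes this from simulated photon time tags; it is a numerical illustration only, unrelated to the on-chip hardware in the paper.

```python
import numpy as np

def g2(t1, t2, bin_width, max_tau):
    """Estimate the normalized second-order correlation g2(tau) from two
    detectors' photon arrival times: a histogram of arrival-time differences
    normalized by the expected accidental-coincidence rate."""
    taus = (t2[None, :] - t1[:, None]).ravel()
    taus = taus[np.abs(taus) <= max_tau]
    edges = np.arange(-max_tau, max_tau + bin_width, bin_width)
    counts, _ = np.histogram(taus, bins=edges)
    span = max(t1.max(), t2.max()) - min(t1.min(), t2.min())
    accidental = len(t1) * len(t2) * bin_width / span   # flat background rate
    return 0.5 * (edges[:-1] + edges[1:]), counts / accidental

rng = np.random.default_rng(1)
t1 = np.sort(rng.uniform(0, 1e-3, 2000))     # arrival times in seconds
t2 = np.sort(rng.uniform(0, 1e-3, 2000))     # uncorrelated detections -> g2 ~ 1
tau, g = g2(t1, t2, bin_width=1e-9, max_tau=50e-9)
print(g.mean())                               # close to 1 for Poissonian sources
```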