4,432 research outputs found
A Protocol Generator Tool for Automatic In-Vitro HPV Robotic Analysis
Infection with Human Papilloma Virus (HPV) can lead to precancerous
lesions and invasive cancer, and HPV is the main cause of nearly all cases
of cervical cancer. There are many strains of HPV, and current vaccines
protect against only some of them, which makes the detection and
genotyping of HPV a research area of utmost importance. Several biomedical
systems can detect HPV in DNA samples; however, most of
them lack a procedure as fast, automatic, or precise as this
field requires. This manuscript presents a novel XML-based
hierarchical protocol architecture for biomedical robots to describe each
protocol step and execute it sequentially, along with a robust and automatic
robotic system for HPV DNA detection capable of processing from
1 to 24 samples simultaneously in a fast (from 45 to 162 min), efficient
(100% marker effectiveness), and precise (able to detect 36 different HPV
genotypes) way. It includes an efficient artificial vision process as the final
step of the diagnosis.
FIDETIA P055-12/E03; Ministerio de Economía y Competitividad TEC2016-77785-
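As a rough illustration of how an XML-based hierarchical protocol of this kind might be described and executed step by step, consider the sketch below; the tag names, step attributes, and the logging executor are hypothetical and do not reflect the tool's actual schema.

```python
# Minimal sketch of a hierarchical, XML-described robotic protocol executed
# sequentially. Tag names and step handlers are hypothetical illustrations.
import xml.etree.ElementTree as ET

PROTOCOL_XML = """
<protocol name="hpv_detection">
  <phase name="sample_preparation">
    <step action="aspirate" volume_ul="50" source="sample_1"/>
    <step action="dispense" volume_ul="50" target="well_A1"/>
  </phase>
  <phase name="amplification">
    <step action="thermocycle" cycles="40" denature_c="95" anneal_c="55"/>
  </phase>
  <phase name="readout">
    <step action="capture_image" camera="overhead"/>
  </phase>
</protocol>
"""

def execute_step(step):
    # In a real system each action would command the robot hardware;
    # here we only log what would be done.
    print(f"executing {step.get('action')} with {step.attrib}")

def run_protocol(xml_text):
    root = ET.fromstring(xml_text)
    for phase in root.findall("phase"):      # hierarchy level 1: phases
        print(f"-- phase: {phase.get('name')}")
        for step in phase.findall("step"):   # hierarchy level 2: steps
            execute_step(step)               # executed in document order

if __name__ == "__main__":
    run_protocol(PROTOCOL_XML)
```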
PAMPC: Perception-Aware Model Predictive Control for Quadrotors
We present the first perception-aware model predictive control framework for
quadrotors that unifies control and planning with respect to action and
perception objectives. Our framework leverages numerical optimization to
compute trajectories that satisfy the system dynamics and require control
inputs within the limits of the platform. Simultaneously, it optimizes
perception objectives for robust and reliable sensing by maximizing the
visibility of a point of interest and minimizing its velocity in the image
plane. Considering both perception and action objectives for motion planning
and control is challenging due to the possible conflicts arising from their
respective requirements. For example, for a quadrotor to track a reference
trajectory, it needs to rotate to align its thrust with the direction of the
desired acceleration. However, the perception objective might require
minimizing such a rotation to maximize the visibility of a point of interest. A
model-based optimization framework, able to consider both perception and action
objectives and couple them through the system dynamics, is therefore necessary.
Our perception-aware model predictive control framework works in a
receding-horizon fashion by iteratively solving a non-linear optimization
problem. It is capable of running in real-time, fully onboard our lightweight,
small-scale quadrotor using a low-power ARM computer, together with a
visual-inertial odometry pipeline. We validate our approach in experiments
demonstrating (I) the contradiction between perception and action objectives,
and (II) improved behavior in extremely challenging lighting conditions.
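As a rough illustration of the kind of receding-horizon problem such a framework solves at every control step (a sketch under assumed quadratic costs, not the paper's exact formulation; the weights $Q$, $R$, $w_c$, $w_v$ and the image-plane terms are assumptions), the optimization can be written as

$$
\begin{aligned}
\min_{u_0,\dots,u_{N-1}} \;& \sum_{k=0}^{N-1} \Big( \lVert x_k - x_k^{\mathrm{ref}} \rVert_Q^2 + \lVert u_k \rVert_R^2 + w_c \lVert p_k \rVert^2 + w_v \lVert \dot{p}_k \rVert^2 \Big) \\
\text{s.t.}\;& x_{k+1} = f(x_k, u_k), \qquad u_{\min} \le u_k \le u_{\max},
\end{aligned}
$$

where $x_k$ is the quadrotor state, $u_k$ the control input, $f$ the discretized system dynamics, $p_k$ the image-plane position of the point of interest relative to the image center (penalizing loss of visibility), and $\dot{p}_k$ its image-plane velocity (penalizing fast feature motion in the image).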
Uncalibrated Dynamic Mechanical System Controller
An apparatus and method are provided for enabling an uncalibrated, model-independent controller for a mechanical system using a dynamic quasi-Newton algorithm that incorporates velocity components of any moving system parameter(s). In the preferred embodiment, tracking of a moving target by a robot having multiple degrees of freedom is achieved using an uncalibrated, model-independent visual servo control. Model-independent visual servo control is defined as using visual feedback to control a robot's servomotors without a precisely calibrated kinematic robot model or camera model. A processor updates a Jacobian, and a controller provides control signals such that the robot's end effector is directed to a desired location relative to a target on a workpiece.
Georgia Tech Research Corporation
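The general idea behind such an uncalibrated controller can be sketched as a dynamic quasi-Newton (Broyden-style) update of an estimated image Jacobian driven purely by visual feedback; the class below is an illustrative sketch under that assumption, with hypothetical names and a simple damped pseudo-inverse control law, not the patented method itself.

```python
# Sketch of uncalibrated, model-independent visual servoing with a dynamic
# quasi-Newton (Broyden-style) Jacobian update. The target-motion term
# accounts for a moving target; gains and structure are illustrative.
import numpy as np

class UncalibratedVisualServo:
    def __init__(self, n_joints, n_features, gain=0.3):
        # Rough initial Jacobian guess; no kinematic or camera calibration used.
        self.J = np.eye(n_features, n_joints)
        self.gain = gain

    def update_jacobian(self, d_error, d_q, d_target):
        """Rank-1 update of the estimated image Jacobian.

        d_error  : change in image-space error since the last step
        d_q      : change in joint angles since the last step
        d_target : estimated image-plane displacement of the target over the step
        """
        denom = float(d_q @ d_q)
        if denom > 1e-9:
            # Subtract target motion so the update reflects robot motion only.
            residual = d_error - d_target - self.J @ d_q
            self.J += np.outer(residual, d_q) / denom

    def control(self, error):
        # Drive the image-space error to zero via the Jacobian pseudo-inverse.
        return -self.gain * np.linalg.pinv(self.J) @ error
```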
Learning-based Image Enhancement for Visual Odometry in Challenging HDR Environments
One of the main open challenges in visual odometry (VO) is the robustness to
difficult illumination conditions or high dynamic range (HDR) environments. The
main difficulties in these situations come from both the limitations of the
sensors and the inability to successfully track interest points
because of the bold assumptions in VO, such as brightness constancy. We address
this problem from a deep learning perspective, for which we first fine-tune a
Deep Neural Network (DNN) with the purpose of obtaining enhanced
representations of the sequences for VO. Then, we demonstrate how inserting
Long Short-Term Memory (LSTM) layers allows us to obtain temporally consistent
sequences, as the estimation depends on previous states. However, very deep
networks are too slow to insert into a real-time VO framework;
therefore, we also propose a Convolutional Neural Network (CNN) of reduced size
capable of running faster. Finally, we validate the enhanced representations
by evaluating the sequences produced by the two architectures in several
state-of-the-art VO algorithms, such as ORB-SLAM and DSO.
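For a sense of what a reduced-size enhancement CNN of this kind could look like, here is a minimal PyTorch sketch; the layer widths, residual output, and clamping are assumptions for illustration, not the architecture from the paper.

```python
# Illustrative small CNN that maps a poorly exposed or low-dynamic-range frame
# to an enhanced frame for feature tracking. Layer sizes are assumptions.
import torch
import torch.nn as nn

class SmallEnhancementCNN(nn.Module):
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(16, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(16, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        # Predict a residual correction on top of the input image so the
        # network only has to learn the exposure/contrast adjustment.
        return torch.clamp(x + self.net(x), 0.0, 1.0)

if __name__ == "__main__":
    frame = torch.rand(1, 1, 240, 320)       # grayscale frame in [0, 1]
    enhanced = SmallEnhancementCNN()(frame)
    print(enhanced.shape)                    # torch.Size([1, 1, 240, 320])
```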
Bio-inspired vision-based leader-follower formation flying in the presence of delays
Flocking starlings at dusk are known for the mesmerizing and intricate shapes they generate, as well as for how fluidly these shapes change; they seem to do this effortlessly. Real-life vision-based flocking has not been achieved in micro-UAVs (micro Unmanned Aerial Vehicles) to date. Towards this goal, we make three contributions in this paper: (i) we used a computational approach to develop a bio-inspired architecture for vision-based Leader-Follower formation flying on two micro-UAVs. We believe that the minimal computational cost of the resulting algorithm makes it suitable for object detection and tracking during high-speed flocking; (ii) we show that, provided delays in the control loop of a micro-UAV are below a critical value, Kalman filter-based estimation algorithms are not required to achieve Leader-Follower formation flying; (iii) unlike previous approaches, we do not use external observers, such as GPS signals or synchronized communication with flock members. These three contributions could be useful in achieving vision-based flocking in GPS-denied environments on computationally limited agents.
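To make contribution (ii) concrete, here is a minimal toy sketch of a follower regulating its separation from a leader using only delayed visual measurements and a plain proportional controller, with no Kalman filter; the one-dimensional dynamics, gains, and delay value are illustrative assumptions, not the system described above.

```python
# Toy illustration of the delay argument: the follower sees the leader's
# position with a fixed measurement delay and uses pure proportional pursuit.
from collections import deque

DT = 0.02          # control period [s]
DELAY_STEPS = 5    # measurement delay in control steps (100 ms here)
GAIN = 1.5         # proportional gain on the separation error
DESIRED_GAP = 1.0  # desired leader-follower separation [m]

def simulate(steps=500):
    leader, follower = 0.0, -2.0
    measurements = deque([leader] * DELAY_STEPS, maxlen=DELAY_STEPS)
    for _ in range(steps):
        leader += 0.5 * DT                    # leader cruises at 0.5 m/s
        delayed_leader = measurements[0]      # follower only sees old data
        error = (delayed_leader - follower) - DESIRED_GAP
        follower += GAIN * error * DT         # simple proportional pursuit
        measurements.append(leader)
    return (leader - follower) - DESIRED_GAP  # residual formation error

if __name__ == "__main__":
    print(f"steady-state gap error: {simulate():.3f} m")
```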
Learned Camera Gain and Exposure Control for Improved Visual Feature Detection and Matching
Successful visual navigation depends upon capturing images that contain
sufficient useful information. In this paper, we explore a data-driven approach
to account for environmental lighting changes, improving the quality of images
for use in visual odometry (VO) or visual simultaneous localization and mapping
(SLAM). We train a deep convolutional neural network model to predictively
adjust camera gain and exposure time parameters such that consecutive images
contain a maximal number of matchable features. The training process is fully
self-supervised: our training signal is derived from an underlying VO or SLAM
pipeline and, as a result, the model is optimized to perform well with that
specific pipeline. We demonstrate through extensive real-world experiments that
our network can anticipate and compensate for dramatic lighting changes (e.g.,
transitions into and out of road tunnels), maintaining a substantially higher
number of inlier feature matches than competing camera parameter control
algorithms.
Comment: Accepted to IEEE Robotics and Automation Letters and to the IEEE International Conference on Robotics and Automation (ICRA) 2021
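As an illustration of the predictive parameter adjustment described above, the sketch below maps a frame to normalized gain and exposure-time commands for the next capture; the architecture and output scaling are assumptions, and the self-supervised feature-match training signal derived from the VO/SLAM pipeline is not shown.

```python
# Sketch of a network that predicts camera gain and exposure time for the
# next frame from the current image. Architecture and scaling are illustrative.
import torch
import torch.nn as nn

class GainExposurePredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(16, 2)   # outputs: (gain, exposure) in [0, 1]

    def forward(self, image):
        x = self.features(image).flatten(1)
        # Squash to [0, 1]; a camera driver would rescale these values to the
        # sensor's valid gain and exposure-time ranges.
        return torch.sigmoid(self.head(x))

if __name__ == "__main__":
    frame = torch.rand(1, 1, 256, 320)          # grayscale frame in [0, 1]
    gain, exposure = GainExposurePredictor()(frame)[0]
    print(float(gain), float(exposure))
```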