Hand gesture recognition with jointly calibrated Leap Motion and depth sensor
Novel 3D acquisition devices like depth cameras and the Leap Motion have recently reached the market. Depth cameras provide a complete 3D description of the framed scene, while the Leap Motion sensor is a device explicitly targeted at hand gesture recognition and provides only a limited set of relevant points. This paper shows how to jointly exploit the two types of sensors for accurate gesture recognition. An ad-hoc solution for the joint calibration of the two devices is first presented. Then a set of novel feature descriptors is introduced, both for the Leap Motion and for depth data. Various schemes based on the distances of the hand samples from the centroid, on the curvature of the hand contour, and on the convex hull of the hand shape are employed, and the use of Leap Motion data to aid feature extraction is also considered. The proposed feature sets are fed to two different classifiers, one based on multi-class SVMs and one exploiting Random Forests. Different feature selection algorithms have also been tested in order to reduce the complexity of the approach. Experimental results show that very high accuracy can be obtained with the proposed method. The current implementation is also able to run in real time.
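As an illustration of the distance-based scheme mentioned in the abstract, a minimal sketch in plain Python follows. The function names, the histogram binning, and the max-distance normalization are assumptions for illustration, not the paper's actual implementation:

```python
import math

def centroid(points):
    """Centroid of 2D hand-contour sample points."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def distance_descriptor(points, bins=8):
    """Histogram of sample distances from the centroid, normalized by
    the maximum distance so the descriptor is scale invariant."""
    cx, cy = centroid(points)
    d = [math.hypot(x - cx, y - cy) for x, y in points]
    dmax = max(d) or 1.0
    hist = [0] * bins
    for v in d:
        # clamp the top-edge value into the last bin
        idx = min(int(v / dmax * bins), bins - 1)
        hist[idx] += 1
    total = float(len(points))
    return [h / total for h in hist]
```

A descriptor like this could then be concatenated with curvature and convex-hull features and fed to the SVM or Random Forest classifier.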
Automated pick-up of suturing needles for robotic surgical assistance
Robot-assisted laparoscopic prostatectomy (RALP) is a treatment for prostate
cancer that involves complete or nerve-sparing removal of the prostate tissue
that contains cancer. After removal, the bladder neck is sutured directly to
the urethra. The procedure is called urethrovesical anastomosis and is one of
the most dexterity-demanding tasks during RALP. Two suturing instruments and a
pair of needles are used in combination to perform a running stitch during
urethrovesical anastomosis. While robotic instruments provide enhanced
dexterity to perform the anastomosis, it is still highly challenging and
difficult to learn. In this paper, we present a vision-guided needle grasping
method for automatically grasping the needle that has been inserted into the
patient prior to anastomosis. We aim to automatically grasp the suturing
needle in a position that avoids hand-offs and immediately enables the start
of suturing. The full grasping process can be broken down into: a needle
detection algorithm; an approach phase, where the surgical tool moves closer
to the needle based on visual feedback; and a grasping phase, through path
planning based on observed surgical practice. Our experimental results show
examples of successful autonomous grasping that have the potential to
simplify and decrease the operational time in RALP by automating a small
component of urethrovesical anastomosis.
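The three-stage process described above (detection, visual-feedback approach, planned grasp) could be orchestrated roughly as in the following skeleton. This is a hypothetical sketch: the callable names and return conventions are assumptions, since the paper's actual interfaces are not given here:

```python
def grasp_pipeline(detect, approach_step, plan_grasp, max_steps=100):
    """Needle detection -> vision-guided approach -> planned grasp."""
    pose = detect()                    # needle detection in the image
    if pose is None:
        return "detection_failed"
    for _ in range(max_steps):         # approach phase, closed loop on vision
        done, pose = approach_step(pose)
        if done:
            break
    else:
        return "approach_timeout"
    # grasping phase: path planning based on observed surgical practice
    return "grasped" if plan_grasp(pose) else "grasp_failed"
```

Separating the phases behind callables like this keeps the detection, servoing, and planning components independently testable.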
A Factor Graph Approach to Multi-Camera Extrinsic Calibration on Legged Robots
Legged robots are becoming popular not only in research, but also in
industry, where they can demonstrate their superiority over wheeled machines in
a variety of applications. Either when acting as mobile manipulators or just as
all-terrain ground vehicles, these machines need to precisely track the desired
base and end-effector trajectories, perform Simultaneous Localization and
Mapping (SLAM), and move in challenging environments, all while keeping
balance. A crucial aspect for these tasks is that all onboard sensors must be
properly calibrated and synchronized to provide consistent signals for all the
software modules they feed. In this paper, we focus on the problem of
calibrating the relative pose between a set of cameras and the base link of a
quadruped robot. This pose is fundamental to successfully perform sensor
fusion, state estimation, mapping, and any other task requiring visual
feedback. To solve this problem, we propose an approach based on factor graphs
that jointly optimizes the mutual position of the cameras and the robot base
using kinematics and fiducial markers. We also quantitatively compare its
performance with other state-of-the-art methods on the hydraulic quadruped
robot HyQ. The proposed approach is simple, modular, and independent of
external devices other than the fiducial marker.
Comment: To appear in "The Third IEEE International Conference on Robotic
Computing (IEEE IRC 2019)"
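To give a flavor of the estimation involved, here is a deliberately simplified version of one such calibration factor. If the camera-to-base rotation were already known and only a translation were sought, the least-squares solution over fiducial-marker observations would reduce to the mean residual. This closed form is an illustration only; the actual method jointly optimizes full 6-DoF poses on a factor graph, which requires iterative solvers:

```python
def estimate_camera_translation(base_points, cam_points):
    """Least-squares translation t such that base_p ~ cam_p + t, given
    marker positions expressed in both the base frame (via kinematics)
    and the camera frame. Rotation is assumed already aligned -- a
    strong simplification of the factor-graph formulation."""
    n = len(base_points)
    tx = sum(b[0] - c[0] for b, c in zip(base_points, cam_points)) / n
    ty = sum(b[1] - c[1] for b, c in zip(base_points, cam_points)) / n
    tz = sum(b[2] - c[2] for b, c in zip(base_points, cam_points)) / n
    return (tx, ty, tz)
```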
The StarScan plate measuring machine: overview and calibrations
The StarScan machine at the U.S. Naval Observatory (USNO) completed measuring
photographic astrograph plates to allow determination of proper motions for the
USNO CCD Astrograph Catalog (UCAC) program. All applicable 1940 AGK2 plates,
about 2200 Hamburg Zone Astrograph plates, 900 Black Birch (USNO Twin
Astrograph) plates, and 300 Lick Astrograph plates have been measured. StarScan
comprises a CCD camera, a telecentric lens, an air-bearing granite table,
stepper-motor screws, and Heidenhain scales, operating in a step-stare mode.
The repeatability of StarScan measures is about 0.2 micrometer. Both the CCD
mapping and the global table coordinate system have been calibrated using a
special dot calibration plate, and the overall accuracy of StarScan x,y data
is derived to be 0.5 micrometer. Application to real photographic plate data
shows that position information of at least 0.65 micrometer accuracy can be
extracted from coarse-grain 103a-type emulsion astrometric plates.
Transformations between "direct" and "reverse" measures of fine-grain
emulsion plates are obtained at the 0.3 micrometer level per well-exposed
stellar image and coordinate, which is at the limit of the StarScan machine.
Comment: 24 pages, 8 figures, accepted for PAS
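The direct/reverse transformations mentioned above are, in the simplest case, per-coordinate linear fits between the two sets of measures. A minimal least-squares sketch follows; this is illustrative only, as the abstract does not specify the actual StarScan reduction model:

```python
def fit_linear(xs, ys):
    """Least-squares fit of ys ~ a*xs + b, e.g. mapping one plate
    measurement coordinate onto its counterpart from a reversed scan."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx          # slope (scale between the two measure sets)
    b = my - a * mx        # offset
    return a, b
```

In practice each coordinate would be fitted separately, and the residuals of such fits are what the quoted 0.3 micrometer level refers to.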
An Improved Fatigue Detection System Based on Behavioral Characteristics of Driver
In recent years, road accidents have increased significantly. One of the
major reasons for these accidents, as reported is driver fatigue. Due to
continuous and longtime driving, the driver gets exhausted and drowsy which may
lead to an accident. Therefore, there is a need for a system to measure the
fatigue level of the driver and alert them when they become drowsy, to avoid
accidents. Thus, we propose a system comprising a camera installed on the
car dashboard. The camera detects the driver's face, observes alterations in
its facial features, and uses these features to estimate the fatigue level.
The facial features include the eyes and mouth. Principal Component Analysis
is then applied to reduce the features while minimizing the amount of
information lost. The parameters thus obtained are processed by a Support
Vector Classifier to classify the fatigue level. After that, the classifier
output is sent to the alert unit.
Comment: 4 pages, 2 figures, edited version of published paper in IEEE ICITE
201
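The PCA step described above can be illustrated with a minimal power-iteration sketch for the leading principal component. This is plain Python restricted to 2-D features for brevity; the details (iteration count, initialization) are assumptions, not the paper's implementation:

```python
def pca_first_component(data, iters=200):
    """Leading principal component of mean-centered 2D feature vectors,
    found by power iteration on the covariance matrix (assumes a
    nonzero leading eigenvalue)."""
    n = len(data)
    mx = sum(p[0] for p in data) / n
    my = sum(p[1] for p in data) / n
    pts = [(x - mx, y - my) for x, y in data]
    # covariance matrix entries
    cxx = sum(x * x for x, _ in pts) / n
    cxy = sum(x * y for x, y in pts) / n
    cyy = sum(y * y for _, y in pts) / n
    v = (1.0, 0.0)
    for _ in range(iters):
        w = (cxx * v[0] + cxy * v[1], cxy * v[0] + cyy * v[1])
        norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
        v = (w[0] / norm, w[1] / norm)
    return v
```

Projecting the eye/mouth feature vectors onto the top few such components yields the reduced parameters that would be passed to the Support Vector Classifier.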
Programmable Spectrometry -- Per-pixel Classification of Materials using Learned Spectral Filters
Many materials have distinct spectral profiles. This facilitates estimation
of the material composition of a scene at each pixel by first acquiring its
hyperspectral image, and subsequently filtering it using a bank of spectral
profiles. This process is inherently wasteful since only a set of linear
projections of the acquired measurements contribute to the classification task.
We propose a novel programmable camera that is capable of producing images of a
scene with an arbitrary spectral filter. We use this camera to optically
implement the spectral filtering of the scene's hyperspectral image with the
bank of spectral profiles needed to perform per-pixel material classification.
This provides gains both in acquisition speed, since only the relevant
measurements are acquired, and in signal-to-noise ratio, since we avoid
narrowband filters that are light-inefficient. Given
training data, we use a range of classical and modern techniques including SVMs
and neural networks to identify the bank of spectral profiles that facilitate
material classification. We verify the method in simulations on standard
datasets as well as on real data using a lab prototype of the camera.
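In software terms, the per-pixel step the abstract describes amounts to projecting each pixel's spectrum onto the bank of learned spectral filters and taking the strongest response. A minimal sketch follows; note that the proposed camera performs these projections optically, and the filter shapes used here are placeholders:

```python
def classify_pixels(hsi, filter_bank):
    """Per-pixel material classification: project each pixel spectrum
    onto a bank of spectral filters and return the index of the
    strongest response for each pixel."""
    labels = []
    for spectrum in hsi:  # hsi: list of per-pixel spectra (band values)
        responses = [sum(s * f for s, f in zip(spectrum, filt))
                     for filt in filter_bank]
        labels.append(max(range(len(responses)), key=responses.__getitem__))
    return labels
```

Only the filter-bank responses are needed for the decision, which is exactly why acquiring the full hyperspectral cube first is wasteful.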