Small Bodies Science with Twinkle
Twinkle is an upcoming 0.45m space-based telescope equipped with a visible
and two near-infrared spectrometers covering the spectral range 0.4 to
4.5{\mu}m with a resolving power R~250 ({\lambda}<2.42{\mu}m) and R~60
({\lambda}>2.42{\mu}m). We explore Twinkle's capabilities for small bodies
science and find that, given Twinkle's sensitivity, pointing stability, and
spectral range, the mission can observe a large number of small bodies. The
sensitivity of Twinkle is calculated and compared to the flux from an object of
a given visible magnitude. The number and brightness of asteroids and comets
that enter Twinkle's field of regard are studied over three time periods of up
to a decade. We find that, over a decade, several thousand asteroids enter
Twinkle's field of regard with a brightness and non-sidereal rate that will
allow Twinkle to characterise them at the instrumentation's native resolution
with SNR > 100. Hundreds of comets can also be observed. Therefore, Twinkle
offers researchers the opportunity to contribute significantly to the field of
Solar System small bodies research.
Comment: Published in JATI
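As a rough illustration of the sensitivity comparison described above, the sketch below converts a visible magnitude into an approximate detected photon rate for a 0.45m aperture and derives a photon-noise-limited SNR. The zero-point photon flux, throughput, and exposure time are illustrative assumptions, not Twinkle mission values.

    import math

    # Illustrative assumptions only; these are not Twinkle instrument parameters.
    V_ZEROPOINT_PHOTONS = 1.0e10   # photons s^-1 m^-2 for a V = 0 source in the band (assumed)
    APERTURE_D = 0.45              # primary mirror diameter in metres (from the abstract)
    THROUGHPUT = 0.5               # assumed end-to-end optical and detector efficiency

    def photon_rate(v_mag: float) -> float:
        """Approximate detected photon rate (photons/s) for a source of visible magnitude v_mag."""
        collecting_area = math.pi * (APERTURE_D / 2.0) ** 2
        flux = V_ZEROPOINT_PHOTONS * 10.0 ** (-0.4 * v_mag)   # Pogson magnitude scaling
        return flux * collecting_area * THROUGHPUT

    def photon_limited_snr(v_mag: float, exposure_s: float) -> float:
        """Photon-noise-limited SNR: detected counts divided by their Poisson noise, i.e. sqrt(counts)."""
        counts = photon_rate(v_mag) * exposure_s
        return math.sqrt(counts) if counts > 0 else 0.0

    if __name__ == "__main__":
        for mag in (10, 14, 18):
            print(f"V = {mag}: SNR ~ {photon_limited_snr(mag, 300.0):.0f} in 300 s")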
Remote-sensing Characterisation of Major Solar System Bodies with the Twinkle Space Telescope
Remote-sensing observations of Solar System objects with a space telescope
offer a key method of understanding celestial bodies and contributing to
planetary formation and evolution theories. The capabilities of Twinkle, a
space telescope in a low Earth orbit with a 0.45m mirror, to acquire
spectroscopic data of Solar System targets in the visible and infrared are
assessed. Twinkle is a general observatory that provides on-demand observations
of a wide variety of targets within wavelength ranges that are currently not
accessible using other space telescopes or that are accessible only to
oversubscribed observatories in the short-term future. We determine the periods
for which numerous Solar System objects could be observed and find that Solar
System objects are regularly observable. The photon flux of major bodies is
determined for comparison to the sensitivity and saturation limits of Twinkle's
instrumentation, and we find that the satellite's capability varies across the
three spectral bands (0.4-1, 1.3-2.42, and 2.42-4.5{\mu}m). We find that for a
number of targets, including the outer planets, their large moons, and bright
asteroids, the model predicts that high-resolution spectra (R~250,
{\lambda}<2.42{\mu}m) could be obtained with a signal-to-noise ratio (SNR) of
>100 using exposure times of <300 s.
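The exposure-time estimates quoted above can be inverted from the same photon-noise scaling: since SNR grows as the square root of the collected counts, the time needed to reach a target SNR is SNR^2 divided by the count rate. The per-band count rates and the full-well value in this sketch are hypothetical placeholders, not Twinkle instrument values.

    # Illustrative, assumed numbers only; they are not Twinkle instrument values.
    TARGET_SNR = 100.0
    FULL_WELL = 1.0e5          # assumed detector full-well capacity in electrons per pixel

    def exposure_for_snr(rate_e_per_s: float, target_snr: float = TARGET_SNR) -> float:
        """Photon-noise-limited exposure time: SNR = sqrt(rate * t) => t = SNR^2 / rate."""
        return target_snr ** 2 / rate_e_per_s

    def saturates(rate_e_per_s: float, exposure_s: float, pixels: int = 1) -> bool:
        """Crude saturation check against the assumed full-well depth."""
        return (rate_e_per_s * exposure_s) / pixels > FULL_WELL

    if __name__ == "__main__":
        # Hypothetical per-band count rates (electrons/s) for a bright Solar System target.
        for band, rate in {"0.4-1 um": 5e4, "1.3-2.42 um": 2e4, "2.42-4.5 um": 5e3}.items():
            t = exposure_for_snr(rate)
            print(f"{band}: t(SNR=100) ~ {t:.2f} s, saturated: {saturates(rate, t)}")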
Box-level Segmentation Supervised Deep Neural Networks for Accurate and Real-time Multispectral Pedestrian Detection
Effective fusion of complementary information captured by multi-modal sensors
(visible and infrared cameras) enables robust pedestrian detection under
various surveillance situations (e.g. daytime and nighttime). In this paper, we
present a novel box-level segmentation supervised learning framework for
accurate and real-time multispectral pedestrian detection by incorporating
features extracted in visible and infrared channels. Specifically, our method
takes pairs of aligned visible and infrared images with easily obtained
bounding box annotations as input and estimates accurate prediction maps to
highlight the existence of pedestrians. It offers two major advantages over the
existing anchor box based multispectral detection methods. Firstly, it
overcomes the hyperparameter setting problem that occurs during the training
phase of anchor box based detectors and can obtain more accurate detection results,
especially for small and occluded pedestrian instances. Secondly, it is capable
of generating accurate detection results using small-size input images, leading
to improved computational efficiency for real-time autonomous driving
applications. Experimental results on KAIST multispectral dataset show that our
proposed method outperforms state-of-the-art approaches in terms of both
accuracy and speed.
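The abstract does not specify the network architecture, so the following is only a minimal sketch of the general idea: two small convolutional streams (visible and infrared) whose features are concatenated and decoded into a per-pixel pedestrian confidence map, trained against masks rasterised from the bounding-box annotations. Layer sizes, the mask-generation rule, and the loss are assumptions for illustration, not the authors' design.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
        # Simple conv -> batch-norm -> ReLU unit shared by both streams.
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    class TwoStreamSegNet(nn.Module):
        """Illustrative two-stream (visible + infrared) box-level segmentation network."""

        def __init__(self):
            super().__init__()
            self.vis = nn.Sequential(conv_block(3, 16), nn.MaxPool2d(2), conv_block(16, 32))
            self.ir = nn.Sequential(conv_block(1, 16), nn.MaxPool2d(2), conv_block(16, 32))
            self.head = nn.Sequential(conv_block(64, 32), nn.Conv2d(32, 1, kernel_size=1))

        def forward(self, visible: torch.Tensor, infrared: torch.Tensor) -> torch.Tensor:
            fused = torch.cat([self.vis(visible), self.ir(infrared)], dim=1)  # channel fusion
            logits = self.head(fused)
            # Upsample so the prediction map aligns with the box-level supervision masks.
            return F.interpolate(logits, size=visible.shape[-2:], mode="bilinear",
                                 align_corners=False)

    def boxes_to_mask(boxes, height: int, width: int) -> torch.Tensor:
        """Rasterise bounding boxes (x1, y1, x2, y2) into a binary supervision mask."""
        mask = torch.zeros(1, height, width)
        for x1, y1, x2, y2 in boxes:
            mask[0, y1:y2, x1:x2] = 1.0
        return mask

    if __name__ == "__main__":
        net = TwoStreamSegNet()
        vis = torch.randn(1, 3, 128, 160)   # aligned visible image
        ir = torch.randn(1, 1, 128, 160)    # aligned infrared image
        target = boxes_to_mask([(40, 20, 70, 100)], 128, 160).unsqueeze(0)
        loss = nn.BCEWithLogitsLoss()(net(vis, ir), target)
        loss.backward()
        print(float(loss))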
Automatic Image Registration in Infrared-Visible Videos using Polygon Vertices
In this paper, an automatic method is proposed to perform image registration
in pairs of visible and infrared video sequences containing multiple targets. In
multimodal image analysis like image fusion systems, color and IR sensors are
placed close to each other and capture the same scene simultaneously, but the
videos are not aligned by default because of differences in field of view,
image capture settings, working principles, and other camera specifications.
Because the scenes are usually not planar, alignment needs to be performed
continuously by extracting relevant common information. In this paper, we
approximate the shape of the targets by polygons and use affine transformation
for aligning the two video sequences. After background subtraction, keypoints
on the contour of the foreground blobs are detected using the DCE (Discrete
Curve Evolution) technique. These keypoints are then described by the local
shape at each point of the obtained polygon. The keypoints are matched based on
the convexity of the polygon's vertices and the Euclidean distance between
them. Only good matches for each local shape polygon in a frame are kept. To
achieve a global
affine transformation that maximises the overlapping of infrared and visible
foreground pixels, the matched keypoints of each local shape polygon are
stored temporally in a buffer for a few frames. The transformation matrix is
evaluated at each frame using the temporal buffer, and the best matrix is
selected based on an overlapping ratio criterion. Our experimental results
demonstrate that this method provides highly accurate registered images and
outperforms a previous related method.
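The full pipeline (background subtraction, DCE polygon keypoints, local-shape matching) is not reproduced here; the sketch below only illustrates the last two steps described in the abstract: keeping matched keypoints in a temporal buffer, fitting a global affine transform, and choosing the best transform by the overlap ratio of warped infrared and visible foreground masks. The function names, the RANSAC fit, and the buffer length are assumptions.

    from collections import deque

    import cv2
    import numpy as np

    BUFFER_LEN = 10  # assumed number of frames of matches kept in the temporal buffer
    buffer_ir, buffer_vis = deque(maxlen=BUFFER_LEN), deque(maxlen=BUFFER_LEN)

    def overlap_ratio(ir_mask: np.ndarray, vis_mask: np.ndarray, affine: np.ndarray) -> float:
        """Overlap (IoU) of foreground pixels after warping the IR mask into the visible frame."""
        h, w = vis_mask.shape
        warped = cv2.warpAffine(ir_mask, affine, (w, h), flags=cv2.INTER_NEAREST)
        inter = np.logical_and(warped > 0, vis_mask > 0).sum()
        union = np.logical_or(warped > 0, vis_mask > 0).sum()
        return float(inter) / float(union) if union else 0.0

    def best_affine(ir_pts, vis_pts, ir_mask, vis_mask, previous_best=None):
        """Accumulate matches over the last few frames, fit an affine, and keep the best matrix."""
        buffer_ir.append(list(ir_pts))    # one entry of matched keypoints per frame
        buffer_vis.append(list(vis_pts))
        src = np.float32([p for frame in buffer_ir for p in frame]).reshape(-1, 1, 2)
        dst = np.float32([p for frame in buffer_vis for p in frame]).reshape(-1, 1, 2)
        affine, _ = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)
        candidates = [m for m in (previous_best, affine) if m is not None]
        if not candidates:
            return None
        return max(candidates, key=lambda m: overlap_ratio(ir_mask, vis_mask, m))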
High resolution imaging of young M-type stars of the solar neighborhood: Probing the existence of companions down to the mass of Jupiter
Context. High contrast imaging is a powerful technique to search for gas
giant planets and brown dwarfs orbiting at separation larger than several AU.
Around solar-type stars, giant planets are expected to form by core accretion
or by gravitational instability, but since core accretion is increasingly
difficult as the primary star becomes lighter, gravitational instability would
be a probable formation scenario for yet-to-be-found distant giant planets
around a low-mass star. A systematic survey for such planets around M dwarfs
would therefore provide a direct test of the efficiency of gravitational
instability. Aims. We search for gas giant planets orbiting around late-type
stars and brown dwarfs of the solar neighborhood. Methods. We obtained deep
high resolution images of 16 targets with the adaptive optic system of VLT-NACO
in the Lp band, using direct imaging and angular differential imaging. This is
currently the largest and deepest survey for Jupiter-mass planets around
M dwarfs. We developed and used an integrated reduction and analysis pipeline to
reduce the images and derive our 2D detection limits for each target. The
typical contrast achieved is about 9 magnitudes at 0.5" and 11 magnitudes
beyond 1". For each target we also determine the probability of detecting a
planet of a given mass at a given separation in our images. Results. We derived
accurate detection probabilities for planetary companions, taking into account
orbital projection effects, with on average a more than 50% probability of
detecting a 3 MJup companion at 10 AU and a 1.5 MJup companion at 20 AU,
bringing strong constraints on the existence of Jupiter-mass planets around
this sample of young M dwarfs.
Comment: Accepted for publication in A&
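The detection probabilities mentioned above can be illustrated with a simple Monte Carlo over orbital projection: planets on circular orbits of a given semi-major axis are drawn with isotropic inclinations and random phases, projected onto the sky, and counted as detectable when the projected separation exceeds an angular limit taken from a contrast curve. The distance and angular limit below are hypothetical placeholders, not values from the survey.

    import numpy as np

    rng = np.random.default_rng(0)

    def detection_probability(a_au: float, distance_pc: float, limit_arcsec: float,
                              n_draws: int = 100_000) -> float:
        """Monte Carlo fraction of random circular orbits whose projected separation
        exceeds limit_arcsec (a placeholder detectability criterion)."""
        cos_i = rng.uniform(-1.0, 1.0, n_draws)          # isotropic orbit orientations
        phase = rng.uniform(0.0, 2.0 * np.pi, n_draws)   # random orbital phase
        # Sky-projected separation of a circular orbit, in AU.
        x = a_au * np.cos(phase)
        y = a_au * np.sin(phase) * cos_i
        rho_arcsec = np.hypot(x, y) / distance_pc        # small-angle: AU / pc = arcsec
        return float(np.mean(rho_arcsec > limit_arcsec))

    if __name__ == "__main__":
        # Hypothetical target at 20 pc with a detectability limit outside 0.5 arcsec.
        for a in (10.0, 20.0):
            print(f"a = {a} AU: P(detectable) ~ {detection_probability(a, 20.0, 0.5):.2f}")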
Learning to Personalize in Appearance-Based Gaze Tracking
Personal variations severely limit the performance of appearance-based gaze
tracking. Adapting to these variations using standard neural network model
adaptation methods is difficult. The problems range from overfitting, due to
small amounts of training data, to underfitting, due to restrictive model
architectures. We tackle these problems by introducing the SPatial Adaptive
GaZe Estimator (SPAZE). By modeling personal variations as a low-dimensional
latent parameter space, SPAZE provides just enough adaptability to capture the
range of personal variations without being prone to overfitting. Calibrating
SPAZE for a new person reduces to solving a small optimization problem. SPAZE
achieves an error of 2.70 degrees with 9 calibration samples on MPIIGaze,
improving on the state-of-the-art by 14 %. We contribute to gaze tracking
research by empirically showing that personal variations are well-modeled as a
3-dimensional latent parameter space for each eye. We show that this
low-dimensionality is expected by examining model-based approaches to gaze
tracking. We also show that accurate head pose-free gaze tracking is possible.
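SPAZE itself is not reproduced here; the sketch below only illustrates the recipe stated in the abstract: keep the network weights fixed and fit a low-dimensional (here 3-D) latent parameter vector to a handful of calibration samples by solving a small optimization problem. The toy network, loss, and optimizer settings are assumptions for illustration.

    import torch
    import torch.nn as nn

    class GazeWithLatent(nn.Module):
        """Toy gaze regressor whose output is conditioned on a per-person latent vector."""

        def __init__(self, feat_dim: int = 32, latent_dim: int = 3):
            super().__init__()
            self.backbone = nn.Sequential(nn.Linear(64, feat_dim), nn.ReLU())  # stands in for a CNN
            self.head = nn.Linear(feat_dim + latent_dim, 2)  # gaze yaw and pitch

        def forward(self, eye_features: torch.Tensor, latent: torch.Tensor) -> torch.Tensor:
            z = latent.expand(eye_features.shape[0], -1)
            return self.head(torch.cat([self.backbone(eye_features), z], dim=1))

    def calibrate(model: GazeWithLatent, feats: torch.Tensor, gaze: torch.Tensor,
                  steps: int = 200, lr: float = 0.05) -> torch.Tensor:
        """Fit only the 3-D latent vector to a few calibration samples; the weights are not updated."""
        latent = torch.zeros(1, 3, requires_grad=True)
        opt = torch.optim.Adam([latent], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            loss = nn.functional.mse_loss(model(feats, latent), gaze)
            loss.backward()
            opt.step()
        return latent.detach()

    if __name__ == "__main__":
        model = GazeWithLatent()
        feats = torch.randn(9, 64)   # e.g. 9 calibration samples, as in the abstract
        gaze = torch.randn(9, 2)     # ground-truth gaze angles for those samples
        print(calibrate(model, feats, gaze))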
Application of the SRI cloud-tracking technique to rapid-scan GOES observations
An automatic cloud tracking system was applied to multilayer clouds associated with severe storms. The method was tested using rapid-scan observations of Hurricane Eloise obtained by the GOES satellite on 22 September 1975. Cloud tracking was performed using clustering based on either visible or infrared data, and the clusters were tracked using two different techniques. At both 4 km and 8 km resolution, the automatic system yielded results comparable in accuracy and coverage to those obtained by NASA analysts using the Atmospheric and Oceanographic Information Processing System.
Expanding the use of real-time electromagnetic tracking in radiation oncology.
In the past 10 years, techniques to improve radiotherapy delivery, such as intensity-modulated radiation therapy (IMRT), image-guided radiation therapy (IGRT) for both inter- and intrafraction tumor localization, and hypofractionated delivery techniques such as stereotactic body radiation therapy (SBRT), have evolved tremendously. This review article focuses on only one part of that evolution, electromagnetic tracking in radiation therapy. Electromagnetic tracking is still a growing technology in radiation oncology and, as such, the clinical applications are limited, the expense is high, and the reimbursement is insufficient to cover these costs. At the same time, current experience with electromagnetic tracking applied to various clinical tumor sites indicates that the potential benefits of electromagnetic tracking could be significant for patients receiving radiation therapy. Daily use of these tracking systems is minimally invasive and delivers no additional ionizing radiation to the patient, and these systems can provide explicit tumor motion data. Although there are a number of technical and fiscal issues that need to be addressed, electromagnetic tracking systems are expected to play a continued role in improving the precision of radiation delivery
Aerial Vehicle Tracking by Adaptive Fusion of Hyperspectral Likelihood Maps
Hyperspectral cameras can provide unique spectral signatures for consistently
distinguishing materials that can be used to solve surveillance tasks. In this
paper, we propose a novel real-time hyperspectral likelihood maps-aided
tracking method (HLT) inspired by an adaptive hyperspectral sensor. A moving
object tracking system generally consists of registration, object detection,
and tracking modules. We focus on the target detection part and remove the
necessity to build any offline classifiers or tune a large number of
hyperparameters, instead learning a generative target model in an online
manner for hyperspectral channels ranging from visible to infrared wavelengths.
The key idea is that our adaptive fusion method combines likelihood maps from
multiple bands of hyperspectral imagery into a single, more distinctive
representation, increasing the margin between the mean values of foreground and
background pixels in the fused map. Experimental results show that the HLT not
only outperforms all established fusion methods but is on par with the current
state-of-the-art hyperspectral target tracking frameworks.
Comment: Accepted at the International Conference on Computer Vision and
Pattern Recognition Workshops, 201
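The HLT method is not specified in detail in the abstract, so the sketch below only illustrates the stated key idea: weight the per-band likelihood maps so that the fused map widens the margin between the mean likelihoods of foreground and background pixels. The specific weighting rule (normalised per-band margins) is an assumption.

    import numpy as np

    def fuse_likelihood_maps(maps: np.ndarray, fg_mask: np.ndarray) -> np.ndarray:
        """Fuse per-band likelihood maps (bands, H, W) into a single map (H, W).

        Each band is weighted by how well it already separates foreground from
        background (difference of mean likelihoods), so the fused map widens that margin.
        """
        bg_mask = ~fg_mask
        margins = np.array([m[fg_mask].mean() - m[bg_mask].mean() for m in maps])
        weights = np.clip(margins, 0.0, None)      # drop bands that invert the contrast
        if weights.sum() == 0:
            weights = np.ones_like(weights)
        weights = weights / weights.sum()
        return np.tensordot(weights, maps, axes=1)

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        maps = rng.random((5, 64, 64))             # 5 hypothetical spectral bands
        fg = np.zeros((64, 64), dtype=bool)
        fg[20:40, 20:40] = True
        maps[:, fg] += np.linspace(0.1, 0.5, 5)[:, None]  # make some bands more informative
        fused = fuse_likelihood_maps(maps, fg)
        print(fused[fg].mean() - fused[~fg].mean())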