The SGR 1806-20 magnetar signature on the Earth's magnetic field
SGRs denote ``soft gamma-ray repeaters'', a small class of slowly spinning
neutron stars with strong magnetic fields. On 27 December 2004, a giant flare
was detected from magnetar SGR 1806-20. The initial spike was followed by a
hard-X-ray tail persisting for 380 s with a modulation period of 7.56 s. This
event has received considerable attention, particularly in the astrophysics
area. Its relevance to the geophysics community lies in the importance of
investigating the effects of such an event on the near-Earth electromagnetic
environment. However, the signature of a magnetar flare on the geomagnetic
field has not previously been investigated. Here, by applying wavelet analysis
to the high-resolution magnetic data provided by the CHAMP satellite, a
modulated signal with a period of 7.5 s over the duration of the giant flare
appears in the observed data. Moreover, this event was detected by the
energetic ion counters onboard the DEMETER satellite.
Comment: Science Editors' Choice:
http://www.sciencemag.org/content/vol314/issue5798/twil.dt
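The detection step described above can be illustrated with a minimal periodicity search. The sketch below uses a classical (Schuster) periodogram on a fully synthetic time series rather than the wavelet analysis applied to the CHAMP data; the signal, sampling rate, and trial-period grid are all assumptions for illustration.

```python
import math

def periodogram_peak(times, values, periods):
    """Return the trial period with the largest Fourier power.

    A classical periodogram evaluated at a list of trial periods;
    adequate for a strong, nearly sinusoidal modulation such as a
    magnetar's rotational signal.
    """
    n = len(values)
    mean = sum(values) / n
    best_period, best_power = None, -1.0
    for p in periods:
        omega = 2 * math.pi / p
        c = sum((v - mean) * math.cos(omega * t) for t, v in zip(times, values))
        s = sum((v - mean) * math.sin(omega * t) for t, v in zip(times, values))
        power = (c * c + s * s) / n
        if power > best_power:
            best_period, best_power = p, power
    return best_period, best_power

# Synthetic example: a 7.5 s modulation sampled at 1 Hz for 380 s,
# loosely mimicking the hard-X-ray tail described above.
times = [float(t) for t in range(380)]
signal = [math.sin(2 * math.pi * t / 7.5) for t in times]
trial_periods = [5.0 + 0.1 * k for k in range(51)]  # 5.0 .. 10.0 s
p, _ = periodogram_peak(times, signal, trial_periods)
```

A wavelet analysis additionally localises the modulation in time, which matters when the signal is present only for the flare's duration; the periodogram above captures only the frequency-detection half of that task.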
A stable quasi-periodic 4.18 d oscillation and mysterious occultations in the 2011 MOST light curve of TW Hya
We present an analysis of the 2011 photometric observations of TW Hya by the
MOST satellite; this is the fourth continuous series of this type. The
large-scale light variations are dominated by a strong, quasi-periodic 4.18 d
oscillation with superimposed, apparently chaotic flaring activity; the former
is most likely produced by stellar rotation with one large hot spot created by
a stable accretion funnel in the stable regime of accretion while the latter
may be produced by small hot spots, created at moderate latitudes by unstable
accretion tongues. A new, previously unnoticed feature is a series of
semi-periodic, well defined brightness dips of unknown nature of which 19 were
observed during 43 days of our nearly-continuous observations. Re-analysis of
the 2009 MOST light curve revealed the presence of 3 similar dips. On the basis
of recent theoretical results, we tentatively conclude that the dips may
represent occultations of the small hot spots created by unstable accretion
tongues by hypothetical optically thick clumps.
Comment: Printed in MNRAS
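Quasi-periodic signals like the 4.18 d oscillation above are commonly recovered by folding the light curve at trial periods. The sketch below is a simplified phase-dispersion-minimisation search on a synthetic, noiseless light curve, not the analysis pipeline used in the paper; the cadence and trial grid are illustrative assumptions.

```python
import math

def phase_dispersion(times, mags, period, nbins=10):
    """Mean within-bin variance of magnitudes folded at a trial period.

    The true period gives phase-coherent folding and hence the
    smallest dispersion across phase bins.
    """
    bins = [[] for _ in range(nbins)]
    for t, m in zip(times, mags):
        phase = (t / period) % 1.0
        bins[min(int(phase * nbins), nbins - 1)].append(m)
    total, count = 0.0, 0
    for b in bins:
        if len(b) > 1:
            mu = sum(b) / len(b)
            total += sum((x - mu) ** 2 for x in b)
            count += len(b) - 1
    return total / count if count else float("inf")

def best_period(times, mags, trial_periods, nbins=10):
    """Trial period minimising the folded phase dispersion."""
    return min(trial_periods, key=lambda p: phase_dispersion(times, mags, p, nbins))

# Synthetic light curve: 43 d of observations carrying a 4.18 d
# sinusoid, loosely mimicking the rotational modulation above.
times = [0.02 * k for k in range(2150)]        # 43 days, ~29 min cadence
mags = [0.3 * math.sin(2 * math.pi * t / 4.18) for t in times]
trials = [3.5 + 0.02 * k for k in range(76)]   # 3.50 .. 5.00 d
p = best_period(times, mags, trials)
```

For real, irregularly sampled photometry with chaotic flaring superimposed, a Lomb-Scargle periodogram or pre-whitening would typically precede this kind of folding check.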
Human height and weight classification based on footprint using gabor wavelet and K-NN methods
Height and weight are parameters used to identify a person, especially in forensics. They are usually measured manually, with height-measuring devices and scales, but information related to foot length can also be used: the relationship between height and foot length can be expressed as a correlation coefficient (r), and the same holds for weight. Therefore, in this study, a system for estimating human height and weight from images of the footprint is implemented on Android. The methods used are Gabor Wavelet and k-Nearest Neighbor (k-NN). The simulations achieve a best accuracy of 75%. The system can also be used to categorize body type according to the Body Mass Index (BMI), and it processes images with an average computation time of 8.92 seconds.
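The classification stage of such a pipeline can be sketched as a plain k-NN vote over feature vectors. The example below assumes the Gabor-wavelet responses have already been reduced to small feature vectors; the feature values, class labels, and query are hypothetical, not data from the study.

```python
import math

def knn_classify(features, training_set, k=3):
    """Classify a feature vector by majority vote among its k nearest
    neighbours under Euclidean distance, as in the k-NN stage above.

    `training_set` is a list of (feature_vector, label) pairs; the
    vectors stand in for Gabor-wavelet responses extracted from
    footprint images.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    neighbours = sorted(training_set, key=lambda fv: dist(features, fv[0]))[:k]
    votes = {}
    for _, label in neighbours:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

# Hypothetical training data: 2-D Gabor-energy features labelled with
# coarse height classes (labels and values are illustrative only).
train = [
    ((0.2, 0.1), "short"), ((0.25, 0.15), "short"), ((0.3, 0.2), "short"),
    ((0.7, 0.8), "tall"),  ((0.75, 0.85), "tall"),  ((0.8, 0.9), "tall"),
]
label = knn_classify((0.72, 0.82), train, k=3)
```

In practice the same vote can be run separately against height-class and weight-class labels, and the BMI category derived from the two predictions.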
 
Deep U band and R imaging of GOODS-South: Observations, data reduction and first results
We present deep imaging in the {\em U} band covering an area of 630
arcmin^2 centered on the southern field of the Great Observatories Origins
Deep Survey (GOODS). The data were obtained with the VIMOS instrument at the
ESO Very Large Telescope. The final images reach a magnitude limit (AB, 1\sigma, in a 1\arcsec radius aperture), and have good
image quality, with full width at half maximum \approx 0.8\arcsec. They are
significantly deeper than previous U--band images available for the GOODS
fields, and better match the sensitivity of other multi--wavelength GOODS
photometry. The deeper U--band data yield significantly improved photometric
redshifts, especially in key redshift ranges such as , and deeper
color--selected galaxy samples, e.g., Lyman--break galaxies at . We
also present the coaddition of archival ESO VIMOS R band data, with (AB, 1\sigma, 1\arcsec radius aperture), and image quality
\approx 0.75 \arcsec. We discuss the strategies for the observations and data
reduction, and present the first results from the analysis of the coadded
images.
Comment: Accepted for publication in ApJS, 54 pages, 27 figures. Released data
and full-quality paper version available at
http://archive.eso.org/cms/eso-data/data-packages/goods-vimos-imaging-data-release-version-1.
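Coaddition of registered exposures, as done for the archival R-band frames above, is often implemented as an inverse-variance weighted mean per pixel. The sketch below shows that one common scheme in pure Python; it is not the exact weighting or pipeline used for this data release, and the frames and variance maps are toy values.

```python
def coadd(images, variances):
    """Inverse-variance weighted co-addition of registered exposures.

    Each output pixel is the variance-weighted mean of the
    corresponding input pixels; lower-noise exposures therefore
    contribute more to the stack.
    """
    ny, nx = len(images[0]), len(images[0][0])
    out = [[0.0] * nx for _ in range(ny)]
    for j in range(ny):
        for i in range(nx):
            wsum, vsum = 0.0, 0.0
            for img, var in zip(images, variances):
                w = 1.0 / var[j][i]      # weight = inverse variance
                wsum += w
                vsum += w * img[j][i]
            out[j][i] = vsum / wsum
    return out

# Two toy 1x2 exposures; the second has a noisier first pixel, so it
# is down-weighted there.
frames = [[[1.0, 2.0]], [[3.0, 2.0]]]
variances = [[[1.0, 1.0]], [[3.0, 1.0]]]
stack = coadd(frames, variances)
```

A production stack would additionally handle registration, cosmic-ray rejection, and bad-pixel masks before this weighting step.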
Adaptive foveated single-pixel imaging with dynamic super-sampling
As an alternative to conventional multi-pixel cameras, single-pixel cameras
enable images to be recorded using a single detector that measures the
correlations between the scene and a set of patterns. However, to fully sample
a scene in this way requires at least the same number of correlation
measurements as there are pixels in the reconstructed image. Therefore
single-pixel imaging systems typically exhibit low frame-rates. To mitigate
this, a range of compressive sensing techniques have been developed which rely
on a priori knowledge of the scene to reconstruct images from an under-sampled
set of measurements. In this work we take a different approach and adopt a
strategy inspired by the foveated vision systems found in the animal kingdom -
a framework that exploits the spatio-temporal redundancy present in many
dynamic scenes. In our single-pixel imaging system a high-resolution foveal
region follows motion within the scene, but unlike a simple zoom, every frame
delivers new spatial information from across the entire field-of-view. Using
this approach we demonstrate a four-fold reduction in the time taken to record
the detail of rapidly evolving features, whilst simultaneously accumulating
detail of more slowly evolving regions over several consecutive frames. This
tiered super-sampling technique enables the reconstruction of video streams in
which both the resolution and the effective exposure-time spatially vary and
adapt dynamically in response to the evolution of the scene. The methods
described here can complement existing compressive sensing approaches and may
be applied to enhance a variety of computational imagers that rely on
sequential correlation measurements.
Comment: 13 pages, 5 figures
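The core correlation-measurement scheme described above can be sketched for the uniform-resolution, fully sampled case: with an orthogonal (Hadamard-type) pattern set, the scene is recovered as a measurement-weighted sum of the patterns. The foveated method in the paper goes further by varying pattern resolution across the field; this minimal example shows only the basic single-pixel step, on a toy 2x2 scene.

```python
def measure(scene, pattern):
    """Single-pixel measurement: correlation of the scene with one pattern."""
    return sum(s * p for s, p in zip(scene, pattern))

def reconstruct(measurements, patterns, n_pixels):
    """Recover the scene from correlation measurements taken with an
    orthogonal pattern set: image = (1/N) * sum_k m_k * P_k."""
    image = [0.0] * n_pixels
    for m, pat in zip(measurements, patterns):
        for i in range(n_pixels):
            image[i] += m * pat[i]
    return [v / len(patterns) for v in image]

# 2x2 scene flattened to 4 pixels; the 4 Hadamard patterns fully
# sample it (as many measurements as pixels, per the text above).
patterns = [
    [1,  1,  1,  1],
    [1, -1,  1, -1],
    [1,  1, -1, -1],
    [1, -1, -1,  1],
]
scene = [0.0, 1.0, 2.0, 3.0]
meas = [measure(scene, p) for p in patterns]
image = reconstruct(meas, patterns, 4)
```

Because the patterns are mutually orthogonal with squared norm equal to the pixel count, the weighted sum collapses exactly to the original scene; compressive or foveated variants trade this exactness for fewer or spatially adaptive measurements.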
A multi-scale, multi-wavelength source extraction method: getsources
We present a multi-scale, multi-wavelength source extraction algorithm called
getsources. Although it has been designed primarily for use in the far-infrared
surveys of Galactic star-forming regions with Herschel, the method can be
applied to many other astronomical images. Instead of the traditional approach
of extracting sources in the observed images, the new method analyzes fine
spatial decompositions of original images across a wide range of scales and
across all wavebands. It cleans those single-scale images of noise and
background, and constructs wavelength-independent single-scale detection images
that preserve information in both spatial and wavelength dimensions. Sources
are detected in the combined detection images by following the evolution of
their segmentation masks across all spatial scales. Measurements of the source
properties are done in the original background-subtracted images at each
wavelength; the background is estimated by interpolation under the source
footprints and overlapping sources are deblended in an iterative procedure. In
addition to the main catalog of sources, various catalogs and images are
produced that aid scientific exploitation of the extraction results. We
illustrate the performance of getsources on Herschel images by extracting
sources in sub-fields of the Aquila and Rosette star-forming regions. The
source extraction code and validation images with a reference extraction
catalog are freely available.
Comment: 31 pages, 27 figures, to be published in Astronomy & Astrophysics
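The multi-scale decomposition at the heart of the method can be illustrated in one dimension: successively smoothed copies of the data are differenced to produce single-scale detail planes plus a residual background. This is a minimal sketch in the spirit of getsources, not its actual code; the real method operates on 2-D maps in every waveband and uses different smoothing kernels.

```python
def smooth(signal, width):
    """Edge-replicating boxcar smoothing (a stand-in for the Gaussian
    smoothing a production pipeline would use)."""
    n, half = len(signal), width // 2
    out = []
    for i in range(n):
        window = [signal[min(max(i + k, 0), n - 1)] for k in range(-half, half + 1)]
        out.append(sum(window) / len(window))
    return out

def single_scale_images(signal, n_scales):
    """Decompose a 1-D signal into single-scale detail planes by
    differencing successively smoothed versions; sources of a given
    size stand out in the plane matching their scale."""
    scales, current = [], list(signal)
    for s in range(n_scales):
        smoothed = smooth(current, 2 * (s + 1) + 1)
        scales.append([a - b for a, b in zip(current, smoothed)])
        current = smoothed
    return scales, current  # detail planes plus final residual background

# The decomposition telescopes, so summing all planes and the residual
# recovers the input.
signal = [0.0, 0.0, 1.0, 4.0, 1.0, 0.0, 0.0, 0.0, 2.0, 0.0]
planes, residual = single_scale_images(signal, 3)
recovered = [sum(p[i] for p in planes) + residual[i] for i in range(len(signal))]
```

Cleaning each detail plane of noise before detection, as the text describes, then amounts to thresholding within each plane independently.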
Digital forensic techniques for the reverse engineering of image acquisition chains
In recent years a number of new methods have been developed to detect image forgery. Most forensic techniques use footprints left on images to infer their history. Images, however, may have gone through a series of processing and modification steps during their lifetime, which makes tampering difficult to detect: the footprints can be distorted or removed over a complex chain of operations. In this research we propose digital forensic techniques that allow us to reverse engineer and determine the history of images that have gone through chains of image acquisition and reproduction.
This thesis presents two different approaches to address the problem. In the first part we propose a novel theoretical framework for the reverse engineering of signal acquisition chains. Based on a simplified chain model, we describe how signals evolve through the chain at different stages using the theory of sampling signals with finite rate of innovation. Under particular conditions, our technique allows us to detect whether a given signal has been reacquired through the chain. It also makes it possible to estimate important parameters of the chain from acquisition-reconstruction artefacts left on the signal.
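The finite-rate-of-innovation theory the framework builds on can be shown in its simplest instance: a single Dirac on [0, 1) is fully determined by its first two Fourier coefficients, since X[m] = a * exp(-2*pi*i*m*t0). The sketch below recovers the amplitude and location from those two coefficients; it illustrates the underlying sampling theory only, not the chain model or reacquisition detector of the thesis.

```python
import cmath
import math

def locate_dirac(x0, x1):
    """Recover amplitude and location of a single Dirac on [0, 1) from
    its Fourier coefficients X[0] and X[1].

    X[0] = a and X[1]/X[0] = exp(-2*pi*i*t0), so the location is read
    off the phase of the coefficient ratio.
    """
    amplitude = x0.real
    ratio = x1 / x0
    t0 = (-cmath.phase(ratio) / (2 * math.pi)) % 1.0
    return amplitude, t0

# Dirac of amplitude 2.0 at t0 = 0.3: build its coefficients directly.
a_true, t_true = 2.0, 0.3
X0 = a_true * cmath.exp(-2j * math.pi * 0 * t_true)
X1 = a_true * cmath.exp(-2j * math.pi * 1 * t_true)
a, t0 = locate_dirac(X0, X1)
```

For K Diracs the same idea generalises via an annihilating filter (Prony's method) on 2K coefficients, which is the machinery FRI-based chain analysis relies on.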
The second part of the thesis presents our new algorithm for image recapture detection based on edge blurriness. Two overcomplete dictionaries are trained using the K-SVD approach to learn distinctive blurring patterns from sets of single captured and recaptured images. An SVM classifier is then built using dictionary approximation errors and the mean edge spread width from the training images. The algorithm, which requires no user intervention, was tested on a database that included more than 2500 high quality recaptured images. Our results show that our method achieves a performance rate that exceeds 99% for recaptured images and 94% for single captured images.
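The dictionary-error idea above can be sketched with a deliberately simplified 1-sparse approximation in place of K-SVD sparse coding: a patch is labelled by whichever dictionary (sharp-edge atoms vs blurred-edge atoms) reconstructs it with smaller residual. The toy dictionaries and patch below are illustrative assumptions, not learned atoms from the thesis.

```python
import math

def approx_error(patch, dictionary):
    """Smallest residual when approximating `patch` with a single
    scaled atom from `dictionary` (a 1-sparse stand-in for the K-SVD
    sparse coding used in the full method)."""
    best = float("inf")
    for atom in dictionary:
        norm2 = sum(a * a for a in atom)
        if norm2 == 0:
            continue
        coef = sum(p * a for p, a in zip(patch, atom)) / norm2
        err = math.sqrt(sum((p - coef * a) ** 2 for p, a in zip(patch, atom)))
        best = min(best, err)
    return best

def classify_patch(patch, dict_single, dict_recaptured):
    """Label an edge patch by which dictionary explains it better."""
    e_single = approx_error(patch, dict_single)
    e_recap = approx_error(patch, dict_recaptured)
    return "single" if e_single <= e_recap else "recaptured"

# Toy dictionaries: a sharp step edge vs a blurred, ramp-like edge of
# the kind recapture tends to produce.
dict_single = [[0.0, 0.0, 0.0, 1.0, 1.0, 1.0]]
dict_recap = [[0.0, 0.2, 0.4, 0.6, 0.8, 1.0]]
label = classify_patch([0.0, 0.1, 0.45, 0.62, 0.78, 1.0], dict_single, dict_recap)
```

In the full method these per-patch approximation errors, together with the mean edge spread width, become features for an SVM rather than a direct threshold.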