ARES v2 - new features and improved performance
Aims: We present a new upgraded version of ARES. The new version includes a
series of interesting new features such as automatic radial velocity
correction, a fully automatic continuum determination, and an estimation of the
errors for the equivalent widths. Methods: The automatic correction of the
radial velocity is achieved with a simple cross-correlation function, and the
automatic continuum determination, as well as the estimation of the errors,
relies on a new approach to evaluating the spectral noise at the continuum
level. Results: ARES v2 is totally compatible with its predecessor. We show
that the fully automatic continuum determination is consistent with the
previous methods applied for this task. It also achieves significantly
improved performance thanks to the implementation of parallel computation
using the OpenMP library.
Comment: 4 pages, 2 Figures; accepted in A&A; ARES Webpage:
www.astro.up.pt/~sousasag/are
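The automatic radial-velocity correction described above rests on a standard idea: cross-correlate the observed spectrum with a rest-frame template and convert the peak lag into a velocity. A minimal sketch of that idea (not the actual ARES implementation; the log-uniform wavelength grid and function name are illustrative assumptions):

```python
import numpy as np

def radial_velocity_shift(wave, flux, template_flux, c=299792.458):
    """Estimate the radial-velocity shift (km/s) of a spectrum by
    cross-correlating it with a rest-frame template sampled on the
    same log-uniform wavelength grid."""
    # Mean-subtract both spectra so the CCF peaks at the true lag.
    f = flux - flux.mean()
    t = template_flux - template_flux.mean()
    ccf = np.correlate(f, t, mode="full")
    lag = np.argmax(ccf) - (len(f) - 1)      # best-matching pixel shift
    # On a log-lambda grid a constant pixel shift is a constant
    # velocity: dv = c * d(ln lambda) per pixel.
    dlnlam = np.log(wave[1]) - np.log(wave[0])
    return lag * dlnlam * c
```

A log-uniform grid is assumed so that a Doppler shift is the same number of pixels everywhere; on a linear grid one would first resample.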
Automatic Network Fingerprinting through Single-Node Motifs
Complex networks have been characterised by their specific connectivity
patterns (network motifs), but their building blocks can also be identified and
described by node-motifs---a combination of local network features. One
technique to identify single node-motifs has been presented by Costa et al. (L.
D. F. Costa, F. A. Rodrigues, C. C. Hilgetag, and M. Kaiser, Europhys. Lett.,
87, 1, 2009). Here, we first suggest improvements to the method including how
its parameters can be determined automatically. Such automatic routines make
high-throughput studies of many networks feasible. Second, the new routines are
validated on different series of networks. Third, we provide an example of how the
method can be used to analyse network time-series. In conclusion, we provide a
robust method for systematically discovering and classifying characteristic
nodes of a network. In contrast to classical motif analysis, our approach can
identify individual components (here: nodes) that are specific to a network.
Such special nodes, as hubs before, might be found to play critical roles in
real-world networks.
Comment: 16 pages (4 figures) plus supporting information 8 pages (5 figures)
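The node-motif idea, describing each node by a vector of local features and singling out atypical nodes, can be illustrated with a minimal sketch (not the authors' routine; the particular features and the z-score threshold are illustrative assumptions):

```python
import math

def node_features(adj):
    """Per-node local features: degree, clustering coefficient, and
    average neighbour degree.  adj: dict node -> set of neighbours."""
    feats = {}
    for u, nbrs in adj.items():
        k = len(nbrs)
        # Clustering: fraction of neighbour pairs that are themselves linked.
        links = sum(1 for v in nbrs for w in nbrs if v < w and w in adj[v])
        cc = 2.0 * links / (k * (k - 1)) if k > 1 else 0.0
        avg_nbr_deg = sum(len(adj[v]) for v in nbrs) / k if k else 0.0
        feats[u] = (k, cc, avg_nbr_deg)
    return feats

def singular_nodes(adj, threshold=2.0):
    """Flag nodes whose z-scored feature vector lies further than
    `threshold` (Euclidean norm) from the network average."""
    feats = node_features(adj)
    cols = list(zip(*feats.values()))
    means = [sum(c) / len(c) for c in cols]
    stds = [max(math.sqrt(sum((x - m) ** 2 for x in c) / len(c)), 1e-12)
            for c, m in zip(cols, means)]
    out = []
    for u, f in feats.items():
        z = math.sqrt(sum(((x - m) / s) ** 2
                          for x, m, s in zip(f, means, stds)))
        if z > threshold:
            out.append(u)
    return out
```

For a ring of nodes plus one hub, only the hub stands out in this feature space, which is the kind of "special node" the abstract refers to.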
Discrete curvature approximations and segmentation of polyhedral surfaces
The segmentation of digitized data to divide a free-form surface into patches is one of the key steps in a reverse engineering process of an object. To this end, discrete curvature approximations are introduced as the basis of a segmentation process that leads to a decomposition of the digitized data into areas that help the construction of parametric surface patches. The proposed approach relies on a polyhedral representation of the object built from the input digitized data. It is then shown how noise reduction, edge-swapping techniques, and adapted remeshing schemes can contribute to different preparation phases to provide a geometry that highlights characteristics useful for the segmentation process. The segmentation is performed with various approximations of discrete curvatures evaluated on the polyhedron produced during the preparation phases. It involves two phases: the identification of characteristic polygonal lines and the identification of polyhedral areas useful for a patch-construction process. Discrete curvature criteria are adapted to each phase, and the concept of invariant evaluation of curvatures is introduced to generate criteria that are constant over equivalent meshes. A description of the segmentation procedure is provided, together with examples of results for free-form object surfaces.
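One common discrete Gaussian-curvature approximation of the kind such segmentation criteria build on is the angle deficit at a vertex (a Gauss-Bonnet argument). The following sketch evaluates it on a triangle mesh; it is a generic illustration, not the paper's implementation:

```python
import math

def _angle(p, q, r):
    """Angle at vertex p in triangle (p, q, r)."""
    a = [q[i] - p[i] for i in range(3)]
    b = [r[i] - p[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

def _area(p, q, r):
    """Triangle area from the cross product."""
    a = [q[i] - p[i] for i in range(3)]
    b = [r[i] - p[i] for i in range(3)]
    cx = a[1] * b[2] - a[2] * b[1]
    cy = a[2] * b[0] - a[0] * b[2]
    cz = a[0] * b[1] - a[1] * b[0]
    return 0.5 * math.sqrt(cx * cx + cy * cy + cz * cz)

def angle_deficit_curvature(vertices, faces):
    """Discrete Gaussian curvature per vertex:
    K(v) = (2*pi - sum of incident angles at v) / (A(v) / 3),
    where A(v) is the total area of triangles incident to v."""
    deficit = [2.0 * math.pi] * len(vertices)
    area = [0.0] * len(vertices)
    for i, j, k in faces:
        p, q, r = vertices[i], vertices[j], vertices[k]
        deficit[i] -= _angle(p, q, r)
        deficit[j] -= _angle(q, r, p)
        deficit[k] -= _angle(r, p, q)
        a = _area(p, q, r) / 3.0
        area[i] += a; area[j] += a; area[k] += a
    return [d / a if a > 0 else 0.0 for d, a in zip(deficit, area)]
```

On an octahedron every vertex gets the same positive curvature, and the summed deficits equal 4*pi, as Gauss-Bonnet requires for a sphere-like mesh.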
Arcfinder: An algorithm for the automatic detection of gravitational arcs
We present an efficient algorithm designed for and capable of detecting
elongated, thin features such as lines and curves in astronomical images, and
its application to the automatic detection of gravitational arcs. The algorithm
is sufficiently robust to detect such features even if their surface brightness
is near the pixel noise in the image, yet the amount of spurious detections is
low. The algorithm subdivides the image into a grid of overlapping cells which
are iteratively shifted towards a local centre of brightness in their immediate
neighbourhood. It then computes the ellipticity for each cell, and combines
cells with correlated ellipticities into objects. These are combined into
graphs in a next step, which are then further processed to determine properties
of the detected objects. We demonstrate the operation and the efficiency of the
algorithm by applying it to HST images of galaxy clusters known to contain
gravitational arcs. The algorithm completes the analysis of an image with
3000x3000 pixels in about 4 seconds on an ordinary desktop PC. We discuss
further applications, the method's remaining problems and possible approaches
to their solution.
Comment: 12 pages, 12 figures
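The two core steps of the abstract, shifting a cell towards its local centre of brightness and measuring its ellipticity, can be sketched as follows. This is a toy version, not the actual arcfinder code; the window size and the second-moment ellipticity definition are assumptions:

```python
import numpy as np

def shift_cell(img, x, y, half=8, n_iter=10):
    """Iteratively move a cell centre towards the local centre of
    brightness inside a (2*half+1)^2 window (a mean-shift step)."""
    for _ in range(n_iter):
        x0, x1 = max(0, x - half), min(img.shape[1], x + half + 1)
        y0, y1 = max(0, y - half), min(img.shape[0], y + half + 1)
        win = img[y0:y1, x0:x1]
        tot = win.sum()
        if tot <= 0:
            break
        ys, xs = np.mgrid[y0:y1, x0:x1]
        nx = int(round((xs * win).sum() / tot))
        ny = int(round((ys * win).sum() / tot))
        if (nx, ny) == (x, y):
            break                      # converged
        x, y = nx, ny
    return x, y

def cell_ellipticity(img, x, y, half=8):
    """Complex ellipticity from second brightness moments of the cell:
    e = (Qxx - Qyy + 2i*Qxy) / (Qxx + Qyy)."""
    x0, x1 = max(0, x - half), min(img.shape[1], x + half + 1)
    y0, y1 = max(0, y - half), min(img.shape[0], y + half + 1)
    win = img[y0:y1, x0:x1]
    ys, xs = np.mgrid[y0:y1, x0:x1]
    tot = win.sum()
    dx, dy = xs - x, ys - y
    qxx = (dx * dx * win).sum() / tot
    qyy = (dy * dy * win).sum() / tot
    qxy = (dx * dy * win).sum() / tot
    return complex(qxx - qyy, 2 * qxy) / (qxx + qyy)
```

Cells landing on an elongated feature acquire a large, aligned ellipticity; the grouping of correlated cells into objects and graphs would build on these two primitives.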
Source finding, parametrization and classification for the extragalactic Effelsberg-Bonn HI Survey
Context. Source extraction for large-scale HI surveys currently involves
large amounts of manual labor. For data volumes expected from future HI surveys
with upcoming facilities, this approach is no longer feasible.
Aims. We describe the implementation of a fully automated source finding,
parametrization, and classification pipeline for the Effelsberg-Bonn HI Survey
(EBHIS). With future radio astronomical facilities in mind, we want to explore
the feasibility of a completely automated approach to source extraction for
large-scale HI surveys.
Methods. Source finding is implemented using wavelet denoising methods, which
previous studies show to be a powerful tool, especially in the presence of data
defects. For parametrization, we automate baseline fitting, mask optimization,
and other tasks based on well-established algorithms, currently used
interactively. For the classification of candidates, we implement an artificial
neural network which is trained on a candidate set comprised of false positives
from real data and simulated sources. Using simulated data, we perform a
thorough analysis of the algorithms implemented.
Results. We compare the results from our simulations to the parametrization
accuracy of the HI Parkes All-Sky Survey (HIPASS). Even though HIPASS is
more sensitive than EBHIS in its current state, the parametrization accuracy
and classification reliability match or surpass the manual approach used for
HIPASS data.
Comment: 13 Pages, 13 Figures, 1 Table, accepted for publication in A&A
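Wavelet denoising of a 1-D spectrum, the core of the source-finding step, can be sketched with the "a trous" B3-spline transform: decompose into wavelet planes, keep only coefficients above a noise-scaled threshold, and reconstruct. This is a generic illustration, not the EBHIS pipeline code, and the threshold k is an assumed parameter:

```python
import numpy as np

def atrous_denoise(spec, n_scales=4, k=4.0):
    """Denoise a 1-D spectrum with the 'a trous' wavelet transform:
    iterated B3-spline smoothing, hard-thresholded wavelet planes,
    then reconstruction from coarse scale + surviving coefficients."""
    kernel = np.array([1, 4, 6, 4, 1]) / 16.0
    c = spec.astype(float)
    planes = []
    for j in range(n_scales):
        step = 2 ** j
        # Dilate the kernel by inserting zeros (the 'holes').
        k_len = (len(kernel) - 1) * step + 1
        dil = np.zeros(k_len)
        dil[::step] = kernel
        smooth = np.convolve(np.pad(c, k_len // 2, mode="reflect"),
                             dil, mode="valid")
        planes.append(c - smooth)        # wavelet plane at scale j
        c = smooth
    out = c.copy()                       # coarse (baseline-like) residual
    for w in planes:
        sigma = np.median(np.abs(w)) / 0.6745   # robust noise estimate
        out += np.where(np.abs(w) > k * sigma, w, 0.0)
    return out
```

The hard threshold makes the method tolerant of isolated data defects, which is the property the abstract highlights.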
Determination of Formant Features in Czech and Slovak for GMM Emotional Speech Classifier
The paper is aimed at the determination of formant features (FF), which describe vocal tract characteristics. It comprises an analysis of the first three formant positions together with their bandwidths and the formant tilts. Subsequently, a statistical evaluation and comparison of the FF was performed. The experiment used speech material in the form of sentences from male and female speakers expressing four emotional states (joy, sadness, anger, and a neutral state) in the Czech and Slovak languages. The statistical distribution of the analyzed formant frequencies and formant tilts shows good differentiation between neutral and emotional styles for both voices. In contrast, the values of the formant 3-dB bandwidths show no correlation with the type of speaking style or the type of voice. These spectral parameters, together with the values of other speech characteristics, were used in the feature vector for the Gaussian mixture model (GMM) emotional speech style classifier that is currently under development. The overall mean classification error rate is about 18%, and the best obtained error rate is 5%, for the sadness style of the female voice. These values are acceptable in this first stage of development of the GMM classifier, which is intended for evaluating synthetic speech quality after voice conversion and emotional speech style transformation.
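The classification step can be illustrated in miniature with one diagonal Gaussian per emotion class, classifying by maximum log-likelihood. A real GMM system would fit several mixture components per class with EM; the feature values below are made up for illustration:

```python
import math

def fit_class_gaussians(samples):
    """Fit one diagonal Gaussian per class (a one-component stand-in
    for per-class GMMs).  samples: dict label -> list of vectors."""
    models = {}
    for label, vecs in samples.items():
        d = len(vecs[0])
        mu = [sum(v[i] for v in vecs) / len(vecs) for i in range(d)]
        var = [max(sum((v[i] - mu[i]) ** 2 for v in vecs) / len(vecs),
                   1e-6) for i in range(d)]
        models[label] = (mu, var)
    return models

def classify(models, x):
    """Return the class with the highest Gaussian log-likelihood."""
    def loglik(mu, var):
        return -0.5 * sum(math.log(2 * math.pi * s) + (xi - m) ** 2 / s
                          for xi, m, s in zip(x, mu, var))
    return max(models, key=lambda c: loglik(*models[c]))
```

Replacing the single Gaussian with a weighted sum of components, trained by EM, turns this into the GMM classifier the abstract describes.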
New membership determination and proper motions of NGC 1817. Parametric and non-parametric approach
We have calculated proper motions and re-evaluated the membership
probabilities of 810 stars in the area of two NGC objects, NGC 1817 and NGC
1807. We have obtained absolute proper motions from 25 plates in the reference
system of the Tycho-2 Catalogue. The plates have a maximum epoch difference of
81 years and were taken with the double astrograph at the Zo-Se station of
Shanghai Observatory, an instrument with an aperture of 40 cm and a plate
scale of 30 arcsec/mm. The average proper-motion precision is 1.55 mas/yr.
These proper motions are used to determine the membership probabilities of
stars in the region, under the assumption that it hosts a single, very
extended physical cluster: NGC 1817.
With that aim, we have applied and compared parametric and non-parametric
approaches to cluster/field segregation. We have obtained a list of 169
probable member stars.
Comment: 11 pages, 8 figures, A&A in press
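A parametric approach of the kind mentioned typically models the proper-motion vector-point diagram as a mixture of a narrow cluster Gaussian and a broad field Gaussian; a star's membership probability is then the cluster's share of the local density. A sketch with circular Gaussians and assumed, purely illustrative parameters (the paper's fitted values are not reproduced here):

```python
import math

def membership_probability(mu, cluster, field, f_c):
    """Membership probability in the vector-point diagram.
    mu: (mu_x, mu_y) proper motion of the star.
    cluster, field: ((centre_x, centre_y), sigma) for each population.
    f_c: fraction of stars belonging to the cluster.
    P = f_c*phi_c / (f_c*phi_c + (1 - f_c)*phi_f)."""
    def phi(centre, sigma):
        dx, dy = mu[0] - centre[0], mu[1] - centre[1]
        return math.exp(-(dx * dx + dy * dy) / (2 * sigma * sigma)) / (
            2 * math.pi * sigma * sigma)
    pc = f_c * phi(cluster[0], cluster[1])
    pf = (1 - f_c) * phi(field[0], field[1])
    return pc / (pc + pf)
```

A non-parametric variant would replace the two Gaussians with empirical density estimates of the cluster and field populations.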
How accurately can we measure weak gravitational shear?
With the recent detection of cosmic shear, the most challenging effect of
weak gravitational lensing has been observed. The main difficulties for this
detection were the need for a large amount of high quality data and the control
of systematics during the gravitational shear measurement process, in
particular those coming from the Point Spread Function (PSF) anisotropy. In this
paper we perform detailed simulations with the state-of-the-art algorithm
developed by Kaiser, Squires and Broadhurst (KSB) to measure gravitational
shear. We show that for realistic PSF profiles the KSB algorithm can recover
any shear amplitude in the range 0.012 < |\vec{\gamma}| < 0.32 with a relative,
systematic error of . We give quantitative limits on the PSF correction
method as a function of shear strength, object size, signal-to-noise and PSF
anisotropy amplitude, and we provide an automatic procedure to get a reliable
object catalog for shear measurements out of the raw images.
Comment: 23 pages LaTeX, 17 Figures, inclusion of referee comments, published
by A&A Main Journal (366, 717-735)
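The kind of recovery test described can be mimicked in miniature: apply a reduced shear to random intrinsic ellipticities and check that the mean observed ellipticity returns it. This is a toy of the underlying idea, not the KSB pipeline; the complex ellipticity convention and the numbers are assumptions:

```python
import numpy as np

def simulate_shear_recovery(g, n=20000, sigma_e=0.2, seed=1):
    """Apply a reduced shear g to random intrinsic ellipticities
    (complex convention e = e1 + i*e2) and recover it as the mean
    observed ellipticity, which is unbiased for an isotropic
    intrinsic distribution."""
    rng = np.random.default_rng(seed)
    e_int = rng.normal(0, sigma_e, n) + 1j * rng.normal(0, sigma_e, n)
    # Standard lensing transformation of ellipticity under reduced shear.
    e_obs = (e_int + g) / (1 + np.conj(g) * e_int)
    return e_obs.mean()
```

Real measurements add the complications the abstract quantifies: pixel noise, finite object size, and PSF anisotropy, each of which biases this naive estimator.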
Automated reduction of submillimetre single-dish heterodyne data from the James Clerk Maxwell Telescope using ORAC-DR
With the advent of modern multi-detector heterodyne instruments, whose
observations can generate thousands of spectra per minute, it is no longer
feasible to reduce these data as individual spectra. We describe the
automated data reduction procedure used to generate baselined data cubes from
heterodyne data obtained at the James Clerk Maxwell Telescope. The system can
automatically detect baseline regions in spectra and automatically determine
regridding parameters, all without input from a user. Additionally, it can
detect and remove spectra suffering from transient interference effects or
anomalous baselines. The pipeline is written as a set of recipes using the
ORAC-DR pipeline environment with the algorithmic code using Starlink software
packages and infrastructure. The algorithms presented here can be applied to
other heterodyne array instruments and have been applied to data from
historical JCMT heterodyne instrumentation.
Comment: 18 pages, 13 figures, submitted to Monthly Notices of the Royal
Astronomical Society
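Automatic baseline detection of the kind described can be sketched as iterative sigma-clipping around a low-order polynomial fit: channels with large residuals (spectral lines, interference) are excluded, the fit is repeated, and the converged baseline is subtracted. This is a generic illustration, not the ORAC-DR recipe:

```python
import numpy as np

def subtract_baseline(spec, order=2, k=3.0, n_iter=5):
    """Find line-free channels by iteratively sigma-clipping a
    polynomial fit, then subtract the fitted baseline.
    Returns (baseline-subtracted spectrum, baseline-channel mask)."""
    x = np.arange(spec.size)
    mask = np.ones(spec.size, dtype=bool)   # True = baseline channel
    for _ in range(n_iter):
        coeffs = np.polyfit(x[mask], spec[mask], order)
        resid = spec - np.polyval(coeffs, x)
        sigma = resid[mask].std()
        new_mask = np.abs(resid) < k * sigma
        if (new_mask == mask).all():
            break                            # clipping has converged
        mask = new_mask
    return spec - np.polyval(coeffs, x), mask
```

The same clipped residuals can also flag whole spectra whose baselines never converge, which is one way to catch the transient-interference cases the abstract mentions.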