Multivariate Approaches to Classification in Extragalactic Astronomy
Clustering objects into synthetic groups is a natural activity of any
science. Astrophysics is not an exception and is now facing a deluge of data.
For galaxies, the one-century old Hubble classification and the Hubble tuning
fork are still largely in use, together with numerous mono- or bivariate
classifications most often made by eye. However, a classification must be
driven by the data, and sophisticated multivariate statistical tools are used
more and more often. In this paper we review these different approaches in
order to situate them in the general context of unsupervised and supervised
learning. We emphasize the astrophysical outcomes of these studies to show that
multivariate analyses provide an obvious path toward a renewal of our
classification of galaxies and are invaluable tools to investigate the physics
and evolution of galaxies.
Comment: Open Access paper. http://www.frontiersin.org/milky_way_and_galaxies/10.3389/fspas.2015.00003/abstract (DOI: 10.3389/fspas.2015.00003)
A study of extended zodiacal structures
Observations of cometary dust trails and zodiacal dust bands, discovered by the Infrared Astronomical Satellite (IRAS), were analyzed in a continuing effort to understand their nature and relationship to comets, asteroids, and processes affecting those bodies. A survey of all trails observed by IRAS has been completed, and analysis of this phenomenon continues. A total of 8 trails have been associated with known short-period comets (Churyumov-Gerasimenko, Encke, Gunn, Kopff, Pons-Winnecke, Schwassmann-Wachmann 1, Tempel 1, and Tempel 2), and a few faint trails have been detected which are not associated with any known comet. It is inferred that all short-period comets may have trails, and that the trails detected were seen as a consequence of observational selection effects. Were IRAS launched today, it would likely observe a largely different set of trails. The Tempel 2 trail exhibits a small but significant excess in color temperature relative to a blackbody at the same heliocentric distance. This excess may be due to the presence of a population of small, low-beta particles derived from large particles within the trail, or to a temperature gradient over the surface of large trail particles. Trails represent the very first stage in the formation and evolution of a meteor stream, and may also be the primary mechanism by which comets contribute to the interplanetary dust complex. A mathematical model of the spatial distribution of orbitally evolved collisional debris was developed which reproduces the zodiacal dust band phenomena and was used in the analysis of dust band observations made by IRAS. This has resulted in the principal zodiacal dust bands being firmly related to the principal Hirayama asteroid families. In addition, evidence for the collisional diffusion of the orbital elements of the dust particles has been found in the case of dust generated in the Eos asteroid family.
Image Processing Methods for Automatic Cell Counting In Vivo or In Situ Using 3D Confocal Microscopy
Coastline Extraction in VHR Imagery Using Mathematical Morphology with Spatial and Spectral Knowledge
In this article, we deal with the problem of coastline extraction in Very High Resolution (VHR) multispectral images (Quickbird) of the Normandy coast (France). Locating the coastline precisely is a crucial task in the context of coastal resource management and planning. In VHR imagery, some details of the coastal zone become visible, and the coastline definition depends on the geomorphologic context. According to the type of coastal unit (sandy beach, wetlands, dune, cliff), several definitions of the coastline have to be used. In this paper we therefore propose a new two-step approach based on morphological tools to extract the coastline according to its context. More precisely, we first perform two detections of possible coastline pixels (respectively without false positives and without false negatives). To do so, we apply a recent extension to multivariate images of the hit-or-miss transform, the morphological template matching tool, and rely on expert knowledge to define the sought templates. We then combine these two results through a double thresholding procedure followed by a final marker-based watershed to locate the exact coastline. In order to assess the performance and reliability of our method, results are compared with ground truth given by expert visual analysis. This comparison is made both visually and quantitatively. Results show the high performance of our method and its relevance to the problem under consideration.
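The combination step described in the abstract (merging a conservative, no-false-positive detection with a permissive, no-false-negative one) can be sketched as a minimal double-thresholding routine. This is an illustrative reconstruction only, not the authors' implementation: the hit-or-miss detections and the final watershed are omitted, and the toy masks are invented for the example.

```python
import numpy as np
from scipy import ndimage


def double_threshold(strict, loose):
    """Keep each connected component of the permissive (no-false-negative)
    mask that contains at least one pixel of the strict
    (no-false-positive) mask."""
    labels, _ = ndimage.label(loose)
    confirmed = np.unique(labels[strict & (labels > 0)])
    return np.isin(labels, confirmed)


# Toy masks: two candidate components; only the left one is confirmed
# by a strict detection, so only it survives the combination.
loose = np.array([[1, 1, 0, 0, 1],
                  [1, 1, 0, 0, 1],
                  [0, 0, 0, 0, 0]], dtype=bool)
strict = np.zeros_like(loose)
strict[0, 0] = True

mask = double_threshold(strict, loose)
```

The same hysteresis-style logic underlies Canny edge linking: weak evidence is retained only where it is connected to strong evidence.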
Determination of the high water mark and its location along a coastline
The High Water Mark (HWM) is an important cadastral boundary that separates land and water. It is also used as a baseline to facilitate coastal hazard management, from which land and infrastructure development is offset to ensure the protection of property from storm surge and sea level rise. However, the location of the HWM is difficult to define accurately due to the ambulatory nature of water and variations in coastal morphology. Contemporary research has failed to develop an accurate method for HWM determination because continual changes in tidal levels, together with unimpeded wave runup and the erosion and accretion of shorelines, make it difficult to determine a unique position of the HWM. While traditional surveying techniques are accurate, they selectively record data at a given point in time, and surveying is expensive, not readily repeatable, and may not take into account all relevant variables such as erosion and accretion. In this research, a consistent and robust methodology is developed for the determination of the HWM over space and time. The methodology includes two main parts: determination of the HWM by integrating both water and land information, and assessment of HWM indicators in one evaluation system. It takes into account dynamic coastal processes, and the effect of swash or tide probability on the HWM. The methodology is validated using two coastal case study sites in Western Australia. These sites were selected to test the robustness of the methodology in two distinctly different coastal environments.
Why Chromatic Imaging Matters
During the last two decades, the first generation of beam combiners at the
Very Large Telescope Interferometer has proved the importance of optical
interferometry for high-angular resolution astrophysical studies in the near-
and mid-infrared. With the advent of 4-beam combiners at the VLTI, the u-v
coverage per pointing increases significantly, providing an opportunity to use
reconstructed images as powerful scientific tools. Therefore, interferometric
imaging is already a key feature of the new generation of VLTI instruments, as
well as of other interferometric facilities like CHARA and JWST. It is thus
imperative to account for the current image reconstruction capabilities and
their expected evolutions in the coming years. Here, we present a general
overview of the current situation of optical interferometric image
reconstruction with a focus on new wavelength-dependent information,
highlighting its main advantages and limitations. As an Appendix, we include
several cookbooks describing the usage and installation of several
state-of-the-art image reconstruction packages. To illustrate the current
capabilities of the software available to the community, we recovered chromatic
images from simulated MATISSE data using the MCMC software SQUEEZE. With these
images, we aim to show the importance of selecting good regularization
functions and their impact on the reconstruction.
Comment: Accepted for publication in Experimental Astronomy as part of the topical collection: Future of Optical-infrared Interferometry in Europe.
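The role the abstract assigns to the regularization function can be illustrated on a toy inverse problem. The sketch below uses a generic Tikhonov-style smoothness penalty on a 1-D deconvolution, purely to show how the regularization weight shapes the solution; it does not reproduce SQUEEZE, MCMC sampling, or MATISSE data, and all quantities are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy inverse problem: recover x from noisy, blurred data y = A x + noise.
n = 50
idx = np.arange(n)
A = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 2.0) ** 2)  # Gaussian blur
x_true = np.zeros(n)
x_true[20:30] = 1.0  # a flat-topped source
y = A @ x_true + rng.normal(0, 0.01, n)


def reconstruct(lam):
    """Regularized least squares: argmin ||A x - y||^2 + lam * ||D x||^2,
    where D is a first-difference operator (a smoothness regularizer)."""
    D = np.eye(n) - np.eye(n, k=1)
    return np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ y)


x_weak = reconstruct(1e-8)    # nearly unregularized: noise is amplified
x_smooth = reconstruct(1e-2)  # smoothness prior: stable reconstruction
```

Even in this linear, closed-form setting, the choice of penalty and weight determines which of the many data-compatible images is returned; in optical interferometry, where the u-v coverage leaves the problem far more underconstrained, that choice matters correspondingly more.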
An Evolutionary Approach to Adaptive Image Analysis for Retrieving and Long-term Monitoring Historical Land Use from Spatiotemporally Heterogeneous Map Sources
Land use changes have become a major contributor to anthropogenic global change. The ongoing dispersion and concentration of the human species, unprecedented in their scale, have indisputably altered Earth’s surface and atmosphere. The effects are so salient and irreversible that a new geological epoch, following the interglacial Holocene, has been announced: the Anthropocene. While its onset is dated by some scholars back to the Neolithic revolution, it is commonly placed in the late 18th century. The rapid development since the industrial revolution and its implications gave rise to an increasing awareness of extensive anthropogenic land change and led to an urgent need for sustainable strategies for land use and land management. By preserving landscape and settlement patterns at discrete points in time, archival geospatial data sources such as remote sensing imagery and, in particular, historical geotopographic maps could give evidence of the dynamic land use change during this crucial period.
In this context, this thesis set out to explore the potential of retrospective geoinformation for monitoring, communicating, modeling, and eventually understanding the complex and gradually evolving processes of land cover and land use change. Currently, large amounts of geospatial data sources such as archival maps are being made accessible online worldwide by libraries and national mapping agencies. Despite their abundance and relevance, the use of historical land use and land cover information in research is still often hindered by laborious visual interpretation, limiting the temporal and spatial coverage of studies. Thus, the core of the thesis is dedicated to the computational acquisition of geoinformation from archival map sources by means of digital image analysis. Based on a comprehensive review of the literature as well as the data and proposed algorithms, two major challenges for long-term retrospective information acquisition and change detection were identified: first, the diversity of geographical entity representations over space and time, and second, the uncertainty inherent to both the data source itself and its utilization for land change detection.
To address the former challenge, image segmentation is considered a global non-linear optimization problem. The segmentation methods and parameters are adjusted using a metaheuristic, evolutionary approach. For preserving adaptability in high-level image analysis, a hybrid model- and data-driven strategy, combining a knowledge-based and a neural net classifier, is recommended. To address the second challenge, a probabilistic object- and field-based change detection approach is developed for modeling the positional, thematic, and temporal uncertainty inherent to both data and processing. Experimental results indicate the suitability of the methodology in support of land change monitoring. In conclusion, potential applications and directions for further research are given.
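As a minimal illustration of the evolutionary idea (not the thesis' actual metaheuristic, which tunes full segmentation pipelines), a (1+1) evolution strategy can adapt even a single segmentation parameter, here a hypothetical brightness threshold, against a reference mask; all data in this sketch are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)


def fitness(theta, image, reference):
    """Pixel accuracy of thresholding `image` at `theta` vs. a reference mask."""
    return np.mean((image > theta) == reference)


# Toy 1-D "map": dark background (0.2) and bright foreground (0.8) plus noise.
image = np.where(np.arange(100) < 40, 0.2, 0.8) + rng.normal(0, 0.05, 100)
reference = np.arange(100) >= 40  # ground-truth foreground mask

# (1+1) evolution strategy: mutate the parameter, keep the fitter candidate.
theta, sigma = 0.0, 0.3
for _ in range(200):
    candidate = theta + rng.normal(0, sigma)
    if fitness(candidate, image, reference) >= fitness(theta, image, reference):
        theta = candidate
```

The same select-the-fitter loop scales to vectors of segmentation parameters, which is what makes such metaheuristics attractive when the objective is non-differentiable, as segmentation quality measures typically are.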