The Data Big Bang and the Expanding Digital Universe: High-Dimensional, Complex and Massive Data Sets in an Inflationary Epoch
Recent and forthcoming advances in instrumentation, and giant new surveys,
are creating astronomical data sets that are not amenable to the methods of
analysis familiar to astronomers. Traditional methods are often inadequate not
merely because of the size in bytes of the data sets, but also because of the
complexity of modern data sets. Mathematical limitations of familiar algorithms
and techniques in dealing with such data sets create a critical need for new
paradigms for the representation, analysis and scientific visualization (as
opposed to illustrative visualization) of heterogeneous, multiresolution data
across application domains. Some of the problems presented by the new data sets
have been addressed by other disciplines such as applied mathematics,
statistics and machine learning and have been utilized by other sciences such
as space-based geosciences. Unfortunately, valuable results pertaining to these
problems are mostly to be found only in publications outside of astronomy. Here
we offer brief overviews of a number of concepts, techniques and developments,
some "old" and some new. These are generally unknown to most of the
astronomical community, but are vital to the analysis and visualization of
complex datasets and images. In order for astronomers to take advantage of the
richness and complexity of the new era of data, and to be able to identify,
adopt, and apply new solutions, the astronomical community needs a certain
degree of awareness and understanding of the new concepts. One of the goals of
this paper is to help bridge the gap between applied mathematics, artificial
intelligence and computer science on the one side and astronomy on the other.

Comment: 24 pages, 8 Figures, 1 Table. Accepted for publication in "Advances in Astronomy", special issue "Robotic Astronomy".
Combining local regularity estimation and total variation optimization for scale-free texture segmentation
Texture segmentation constitutes a standard image processing task, crucial to
many applications. The present contribution focuses on the particular subset of
scale-free textures and its originality resides in the combination of three key
ingredients: First, texture characterization relies on the concept of local
regularity; Second, estimation of local regularity is based on new multiscale
quantities referred to as wavelet leaders; Third, segmentation from local
regularity faces a fundamental bias-variance trade-off: local regularity
estimates are by nature highly variable, which impairs the detection of
changes, while a posteriori smoothing of the estimates prevents changes from
being located accurately. Instead, the present contribution proposes several
variational problem formulations based on total variation and proximal
resolutions that effectively circumvent this trade-off. Estimation and
segmentation performance for the proposed procedures are quantified and
compared on synthetic as well as real-world textures.
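The bias-variance trade-off described above can be illustrated with a minimal total-variation denoising sketch. This is not the authors' proximal algorithms: it is a toy projected-gradient scheme on the dual of the standard ROF problem, applied to a noisy piecewise-constant "local regularity" signal; the function name, step size, and threshold are illustrative assumptions.

```python
import numpy as np

def tv_denoise_1d(y, lam, n_iter=500):
    """Minimize 0.5*||x - y||^2 + lam * sum_i |x[i+1] - x[i]|
    by projected gradient on the dual problem (a toy stand-in
    for the proximal resolutions used in the paper)."""
    n = len(y)
    p = np.zeros(n - 1)   # dual variable, constrained to [-lam, lam]
    tau = 0.25            # step size <= 1/||D D^T||, ensures convergence
    for _ in range(n_iter):
        # primal estimate x = y - D^T p, where (Dx)_i = x[i+1] - x[i]
        dtp = np.zeros(n)
        dtp[1:] += p
        dtp[:-1] -= p
        x = y - dtp
        # projected gradient step on the dual variable
        p = np.clip(p + tau * np.diff(x), -lam, lam)
    return x

# piecewise-constant "local regularity" corrupted by estimation noise
rng = np.random.default_rng(0)
truth = np.r_[0.4 * np.ones(100), 0.8 * np.ones(100)]
noisy = truth + 0.1 * rng.standard_normal(200)
denoised = tv_denoise_1d(noisy, lam=2.0)
segments = denoised > 0.6   # thresholding yields a two-class segmentation
```

Unlike plain smoothing, the TV penalty flattens the within-segment variability while preserving a sharp change point, which is the behavior the paper exploits to sidestep the trade-off.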
The Multiscale Morphology Filter: Identifying and Extracting Spatial Patterns in the Galaxy Distribution
We present here a new method, MMF, for automatically segmenting cosmic
structure into its basic components: clusters, filaments, and walls.
Importantly, the segmentation is scale independent, so all structures are
identified without prejudice as to their size or shape. The method is ideally
suited for extracting catalogues of clusters, walls, and filaments from samples
of galaxies in redshift surveys or from particles in cosmological N-body
simulations: it makes no prior assumptions about the scale or shape of the
structures.

Comment: Replacement with higher resolution figures. 28 pages, 17 figures. For the full-resolution version see: http://www.astro.rug.nl/~weygaert/tim1publication/miguelmmf.pd
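As a rough illustration of the morphological idea behind such methods (not the MMF itself, which operates on a full scale space of filtered fields), points of a smoothed density field can be classified by the signs of the local Hessian eigenvalues: three negative eigenvalues indicate a blob-like cluster, two a filament, one a wall. The function name and the density threshold below are illustrative assumptions.

```python
import numpy as np

def classify_structure(field, density_threshold=0.2):
    """Label each voxel of a smoothed 3-D density field by the number
    of negative Hessian eigenvalues: 3 -> 'cluster', 2 -> 'filament',
    1 -> 'wall'; low-density voxels are left as 'void'."""
    # Hessian via repeated central finite differences
    grads = np.gradient(field)
    hess = np.empty(field.shape + (3, 3))
    for i, g in enumerate(grads):
        for j, gg in enumerate(np.gradient(g)):
            hess[..., i, j] = gg
    eigvals = np.linalg.eigvalsh(hess)      # batched, ascending per voxel
    n_neg = (eigvals < 0).sum(axis=-1)
    labels = np.full(field.shape, 'void', dtype=object)
    mask = field > density_threshold
    labels[mask & (n_neg == 3)] = 'cluster'
    labels[mask & (n_neg == 2)] = 'filament'
    labels[mask & (n_neg == 1)] = 'wall'
    return labels

# toy field: a single isotropic Gaussian blob, i.e. a "cluster"
ax = np.arange(21) - 10.0
X, Y, Z = np.meshgrid(ax, ax, ax, indexing='ij')
field = np.exp(-(X**2 + Y**2 + Z**2) / (2 * 4.0**2))
labels = classify_structure(field)
```

A scale-independent method like the MMF would repeat this morphology test over a hierarchy of smoothing scales and retain, per point, the scale of maximal response rather than fixing a single scale as this sketch does.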
Filtering of image sequences: on line edge detection and motion reconstruction
This thesis concerns the processing of sequences of images of a scene in which one or more (possibly deformable) objects move, acquired by a suitable measuring instrument. Because of the measurement process, the images are corrupted by a certain level of degradation. We give a mathematical formalization of the set of images considered, of the set of admissible motions, and of the degradation introduced by the measuring instrument. Each image of the acquired sequence is related to all the others through the motion law of the scene. The idea proposed in this thesis is to exploit this relation among the images of the sequence to reconstruct quantities of interest that characterize the scene.
When the motion is known, the aim is to reconstruct the edges of the initial image (which can then be propagated through the same motion law, so as to reconstruct the edges of any image in the sequence), by estimating the amplitude of the gray-level jump and its location.
In the dual case, the layout of the edges in the initial image is instead assumed to be known, together with a stochastic model describing the motion; the objective is then to estimate the parameters that characterize this model.
Finally, we present the results of applying the two methodologies above to real data obtained in a biomedical setting from an instrument called a pupillometer. These results are of great interest with a view to using this instrument for diagnostic purposes.
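The first task, estimating the amplitude and location of a gray-level jump, can be sketched for the simplest case of a single step edge in a 1-D intensity profile. This is an illustrative stand-in, not the thesis's filter: the location is taken as the peak response of a derivative-of-box filter, and the amplitude as the difference of the mean levels on the two sides; all names are assumptions.

```python
import numpy as np

def estimate_step_edge(profile, half_window=5):
    """Estimate the location and gray-level jump of a single step edge
    in a 1-D intensity profile (toy sketch)."""
    h = half_window
    k = np.r_[-np.ones(h), np.ones(h)] / h   # [-1..-1, 1..1] / h
    # correlate: response[i] = mean(profile[i+h:i+2h]) - mean(profile[i:i+h])
    response = np.convolve(profile, k[::-1], mode='valid')
    loc = int(np.argmax(np.abs(response))) + h   # first sample after the jump
    jump = profile[loc:].mean() - profile[:loc].mean()
    return loc, jump

# noiseless step from gray level 0.2 to 0.9 between samples 49 and 50
profile = np.r_[0.2 * np.ones(50), 0.9 * np.ones(50)]
loc, jump = estimate_step_edge(profile)
```

In the sequence setting described above, such a per-image estimate would then be propagated through the known motion law rather than recomputed independently on every frame.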