Nonlinear Dimensionality Reduction Methods in Climate Data Analysis
Linear dimensionality reduction techniques, notably principal component
analysis, are widely used in climate data analysis as a means to aid in the
interpretation of datasets of high dimensionality. These linear methods may not
be appropriate for the analysis of data arising from nonlinear processes
occurring in the climate system. Numerous nonlinear dimensionality reduction
techniques have been developed recently that may provide useful tools for
identifying low-dimensional manifolds in climate data sets arising from
nonlinear dynamics. In this thesis I apply three
such techniques to the study of El Nino/Southern Oscillation variability in
tropical Pacific sea surface temperatures and thermocline depth, comparing
observational data with simulations from coupled atmosphere-ocean general
circulation models from the CMIP3 multi-model ensemble.
The three methods used here are a nonlinear principal component analysis
(NLPCA) approach based on neural networks, the Isomap isometric mapping
algorithm, and Hessian locally linear embedding. I use these three methods to
examine El Nino variability in the different data sets and assess the
suitability of these nonlinear dimensionality reduction approaches for climate
data analysis.
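The contrast between linear and nonlinear reduction can be sketched quickly with scikit-learn. The example below is a minimal illustration on a synthetic manifold (a hypothetical stand-in, not the tropical Pacific SST fields analyzed in the thesis), showing that a one-component Isomap embedding can recover the intrinsic coordinate of a curved manifold:

```python
import numpy as np
from sklearn.datasets import make_s_curve
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap

# Synthetic stand-in for data on a nonlinear low-dimensional manifold
# (hypothetical example, not the SST or thermocline-depth fields).
X, t = make_s_curve(n_samples=800, random_state=0)  # t = intrinsic coordinate

# Linear reduction: PCA projects onto directions of maximal variance,
# which need not follow the curved manifold.
Z_pca = PCA(n_components=1).fit_transform(X)

# Nonlinear reduction: Isomap preserves geodesic distances along the
# manifold, so a single component can recover the intrinsic coordinate.
Z_iso = Isomap(n_neighbors=10, n_components=1).fit_transform(X)

r = np.corrcoef(Z_iso[:, 0], t)[0, 1]
print(f"|corr(Isomap embedding, intrinsic coordinate)| = {abs(r):.2f}")
```

The same `fit_transform` pattern applies to gridded climate fields once they are flattened to one row per time step.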
I conclude that although, for the application presented here, analysis using
NLPCA, Isomap and Hessian locally linear embedding does not provide additional
information beyond that already provided by principal component analysis, these
methods are effective tools for exploratory data analysis.
Comment: 273 pages, 76 figures; University of Bristol Ph.D. thesis; a version
with high-resolution figures is available from
http://www.skybluetrades.net/thesis/ian-ross-thesis.pdf (52 MB download).
Shape analysis of the human brain.
Autism is a complex developmental disability that has dramatically increased in prevalence and has a decisive impact on the health and behavior of children. Methods used to detect autism and recommend therapies have been much debated in the medical community because of the subjective nature of diagnosing the condition. To provide an alternative method for understanding autism, the current work has developed a state-of-the-art 3-dimensional shape-based analysis of the human brain to aid in creating more accurate diagnostic assessments and guided risk analyses for individuals with neurological conditions, such as autism.
Methods: The aim of this work was to assess whether the shape of the human brain can be used as a reliable source of information for determining whether an individual will be diagnosed with autism. The study was conducted using multi-center databases of magnetic resonance images of the human brain. The subjects in the databases were analyzed using a series of algorithms consisting of bias correction, skull stripping, multi-label brain segmentation, 3-dimensional mesh construction, spherical harmonic decomposition, registration, and classification. The software algorithms were developed as an original contribution of this dissertation in collaboration with the BioImaging Laboratory at the University of Louisville Speed School of Engineering. The classification of each subject was used to construct diagnoses and therapeutic risk assessments for each patient.
Results: A reliable metric for making neurological diagnoses and constructing therapeutic risk assessments for individuals has been identified. The metric was explored in populations of individuals having autism spectrum disorders, dyslexia, Alzheimer's disease, and lung cancer.
Conclusion: Currently, the clinical applicability and benefits of the proposed software approach are being discussed by the broader community of doctors, therapists, and parents for use in improving current methods by which autism spectrum disorders are diagnosed and understood.
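The final classification stage of such a pipeline can be sketched with scikit-learn. In the example below, the "shape coefficient" features are synthetic placeholders standing in for the spherical-harmonic descriptors, not outputs of the dissertation's actual segmentation and decomposition software:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Hypothetical stand-in: 40 low-order spherical-harmonic shape
# coefficients per subject, for two groups of 60 subjects each.
n_coeff = 40
controls = rng.normal(0.0, 1.0, size=(60, n_coeff))
cases = rng.normal(0.4, 1.0, size=(60, n_coeff))  # shifted distribution
X = np.vstack([controls, cases])
y = np.array([0] * 60 + [1] * 60)

# Linear classifier on the shape descriptors, assessed by
# 5-fold cross-validation.
clf = SVC(kernel="linear")
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```

Any classifier with a `fit`/`predict` interface could be swapped in; the cross-validation step is what turns the descriptor into a testable diagnostic metric.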
New Methodology for Automatic Process Parameters Optimization in Selective Laser Melting
Selective laser melting is one of the most promising additive manufacturing technologies, thanks to its capability to manufacture complex-shaped parts with good dimensional accuracy and high mechanical performance. In recent years, this technique has started to be adopted for the production of end-use parts with high quality requirements.
To achieve the desired quality of the final product, it is necessary to optimize the process parameters, ideally while also reducing the build time needed for production. However, currently available process optimization methodologies are very time consuming, and there is a lack of standards.
The aim of this work is to develop an automatic, reliable and objective process optimization technique, which can be employed to find optimal parameters combinations for different process conditions.
To this end, an experimental approach has been developed, based on single-track analysis and on the characterization of 3D benchmarks. The main novelty of this optimization method is the automation of sample analysis, which entailed the adoption of innovative surface metrology techniques and of novel algorithmic frameworks developed in the MATLAB environment.
Following the novel method, the effects of laser power (P), scan speed (v) and laser spot size (ds) have been investigated for the two most used materials: the extra-low-interstitial grade of the Ti6Al4V alloy and 316L stainless steel. Optimal P-v combinations have been defined for each spot size level investigated, first finding an optimal region through single-track analysis and then identifying the optimal parameter set for 3D component production.
This methodology has allowed the definition of multiple optimal parameter sets in an automatic way, limiting time and material waste. It can therefore be adopted in all existing production strategies that require more than one process parameter set, and it could enable the development of new production approaches.
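The screening idea behind a P-v parameter search can be illustrated with a simple volumetric energy density filter over a power/speed grid. The layer geometry and band limits below are illustrative assumptions, not the values or criteria used in the thesis:

```python
import numpy as np

# Hypothetical process-window screening for SLM: rank laser power P
# and scan speed v combinations by volumetric energy density
# E = P / (v * hatch * layer) and keep those inside a target band.
# Geometry and band limits are illustrative, not the thesis values.
hatch, layer = 0.1e-3, 0.03e-3      # hatch spacing, layer thickness [m]
E_lo, E_hi = 40e9, 80e9             # acceptable energy density [J/m^3]

P = np.arange(100, 401, 50)         # laser power [W]
v = np.arange(0.4, 2.01, 0.2)       # scan speed [m/s]
PP, VV = np.meshgrid(P, v, indexing="ij")
E = PP / (VV * hatch * layer)       # volumetric energy density [J/m^3]

ok = (E >= E_lo) & (E <= E_hi)      # candidate optimal region
for p, s in zip(PP[ok], VV[ok]):
    print(f"P = {p:.0f} W, v = {s:.1f} m/s")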
Deciphering Radio Emission from Solar Coronal Mass Ejections using High-fidelity Spectropolarimetric Radio Imaging
Coronal mass ejections (CMEs) are large-scale expulsions of plasma and
magnetic fields from the Sun into the heliosphere and are the most important
driver of space weather. The geo-effectiveness of a CME is primarily determined
by its magnetic field strength and topology. Measurement of CME magnetic
fields, both in the corona and heliosphere, is essential for improving space
weather forecasting. Observations at radio wavelengths can provide several
remote measurement tools for estimating both strength and topology of the CME
magnetic fields. Among them, gyrosynchrotron (GS) emission produced by
mildly-relativistic electrons trapped in CME magnetic fields is one of the
promising methods to estimate magnetic field strength of CMEs at lower and
middle coronal heights. However, GS emissions from some parts of the CME are
much fainter than the quiet Sun emission and require high dynamic range (DR)
imaging for their detection. This thesis presents a state-of-the-art
calibration and imaging algorithm capable of routinely producing high DR
spectropolarimetric snapshot solar radio images using data from a new
technology radio telescope, the Murchison Widefield Array. This allows us to
detect much fainter GS emissions from CME plasma at much higher coronal
heights. For the first time, robust circular polarization measurements have
been jointly used with total intensity measurements to constrain the GS model
parameters, which has significantly improved the robustness of the estimated GS
model parameters. Observational evidence is also found that the routinely
used homogeneous and isotropic GS models may not always be sufficient
to model the observations. In the future, with upcoming sensitive telescopes
and physics-based forward models, it should be possible to relax some of these
assumptions and make this method more robust for estimating CME plasma
parameters at coronal heights.
Comment: 297 pages, 100 figures, 9 tables. Ph.D. thesis submitted at the Tata
Institute of Fundamental Research, Mumbai, India.
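The joint use of total intensity and circular polarization to constrain model parameters can be sketched as a toy least-squares fit. The power-law spectra below are placeholders, not the actual gyrosynchrotron model; the point is that stacking Stokes I and V residuals constrains all three parameters simultaneously:

```python
import numpy as np
from scipy.optimize import least_squares

# Toy joint fit of Stokes I and V spectra. The power-law forms and
# parameter values are illustrative placeholders, not the GS model.
freq = np.linspace(80e6, 240e6, 20)        # MWA-like band [Hz]
sig_I, sig_V = 0.1, 0.05                   # assumed noise levels

def model(theta, f):
    A, alpha, pc = theta                   # amplitude, spectral index, pol. fraction
    I = A * (f / 1e8) ** alpha
    return I, pc * I

true = (10.0, -1.5, 0.3)
rng = np.random.default_rng(2)
I_obs, V_obs = model(true, freq)
I_obs = I_obs + rng.normal(0, sig_I, freq.size)
V_obs = V_obs + rng.normal(0, sig_V, freq.size)

def resid(theta):
    I, V = model(theta, freq)
    # Stacking both Stokes parameters into one residual vector is what
    # lets V tighten the constraint on the shared parameters.
    return np.concatenate([(I - I_obs) / sig_I, (V - V_obs) / sig_V])

fit = least_squares(resid, x0=(5.0, -1.0, 0.1))
print("fitted (A, alpha, pc):", fit.x)
```

In the thesis the forward model is a physical GS emission code rather than a power law, but the joint-residual structure of the fit is the same.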
Array Processing and Ambient Seismic Noise Correlation (Multi-scale Applications)
The use of a large number of sensors is becoming more common in seismology, at both the global scale for deep Earth studies and at the exploration geophysics scale for monitoring and subsurface imaging. Seismic arrays require array processing, from which new types of observables contribute to a better understanding of the complexity of wave propagation. This thesis deals with a subset of these techniques. It first focuses on a way to select and identify different phases between two source-receiver arrays based on the double beamforming (DBF) method. At the exploration geophysics scale, the goal is to identify and separate low-amplitude body waves from high-amplitude dispersive surface waves. At the continental scale, as source arrays are uncommon, the cross-correlation (CC) of broadband ambient seismic noise can be used to evaluate the Green's function between two receiver arrays. The combination of DBF and CC is applied to Transportable Array (USArray) data to construct high-resolution phase velocity maps of Rayleigh and Love waves. Finally, at the global scale, by using a large number of sensors, it is shown that body waves can emerge from the CC of continuous records in the 5-100 s period band. We also analyze the contribution of strong earthquakes, particularly their long-lasting reverberated coda, and compare it to the contribution of the continuous background sources associated with ocean-crust interaction. We show that late arrivals from strong earthquakes, reverberated inside the globe, contribute significantly to the reconstruction of deep phases.
The reconstructed body waves constitute a valuable supplement to traditional earthquake data to image and to monitor the structure of the Earth from its surface to the inner core.SAVOIE-SCD - Bib.électronique (730659901) / SudocGRENOBLE1/INP-Bib.électronique (384210012) / SudocGRENOBLE2/3-Bib.électronique (384219901) / SudocSudocFranceF