215 research outputs found

    Convex and non-convex optimization using centroid-encoding for visualization, classification, and feature selection

    Classification, visualization, and feature selection are three essential tasks of machine learning. This Ph.D. dissertation presents convex and non-convex models suitable for these three tasks. We propose Centroid-Encoder (CE), an autoencoder-based supervised tool for visualizing complex, potentially large (e.g., SUSY, with 5 million samples) and high-dimensional (e.g., the GSE73072 clinical challenge data) datasets. Unlike an autoencoder, which maps a point to itself, a centroid-encoder has a modified target: the class centroid in the ambient space. We present a detailed comparative analysis of the method using various datasets and state-of-the-art techniques. We propose a variation of the centroid-encoder, Bottleneck Centroid-Encoder (BCE), in which additional constraints are imposed at the bottleneck layer to improve generalization performance in the reduced space. We further develop a sparse optimization problem for the non-linear mapping of the centroid-encoder, called Sparse Centroid-Encoder (SCE), to determine the set of discriminative features between two or more classes. The sparse model selects variables using the ℓ1-norm applied to the input feature space. SCE extracts discriminative features from multi-modal datasets, i.e., data whose classes appear to have multiple clusters, by using several centers per class. This approach appears to have advantages over models that use a one-hot encoding vector. We also provide a feature selection framework that first ranks each feature by its occurrence; the optimal number of features is then chosen using a validation set. CE and SCE are models based on neural network architectures and require the solution of non-convex optimization problems. Motivated by the CE algorithm, we develop a convex optimization formulation for supervised dimensionality reduction called Centroid Component Retrieval (CCR). The CCR model optimizes a multi-objective cost by balancing two complementary terms: the first pulls the samples of a class toward its centroid by minimizing each sample's distance from its class centroid in the low-dimensional space; the second pushes the classes apart by maximizing the scattering volume of the ellipsoid formed by the class centroids in the embedded space. Although the design principle of CCR is similar to LDA, our experimental results show that CCR exhibits performance advantages over LDA, especially on high-dimensional datasets, e.g., Yale Faces, ORL, and COIL20. Finally, we present a linear formulation of Centroid-Encoder with orthogonality constraints, called Principal Centroid Component Analysis (PCCA). This formulation is similar to PCA, except that the class labels are used to formulate the objective, resulting in a form of supervised PCA. We show classification and visualization experiment results with this new linear tool.
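    The core mechanism is simple enough to sketch: a centroid-encoder is trained like an autoencoder, except that each sample's reconstruction target is replaced by its class centroid in the ambient space. Below is a minimal PyTorch sketch of that idea; the synthetic data, layer sizes, and hyperparameters are illustrative assumptions, not the dissertation's actual configuration.

```python
import torch
import torch.nn as nn

# Synthetic stand-in data: 200 samples, 10 features, 3 classes.
X = torch.randn(200, 10)
y = torch.randint(0, 3, (200,))

# Replace each autoencoder target with its class centroid (in ambient space).
centroids = torch.stack([X[y == c].mean(dim=0) for c in range(3)])
targets = centroids[y]

# Autoencoder with a 2-D bottleneck, usable for visualization.
model = nn.Sequential(
    nn.Linear(10, 16), nn.ReLU(),
    nn.Linear(16, 2),              # bottleneck (embedding used for plots)
    nn.Linear(2, 16), nn.ReLU(),
    nn.Linear(16, 10),
)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for _ in range(500):
    opt.zero_grad()
    loss = loss_fn(model(X), targets)  # pull each sample toward its centroid
    loss.backward()
    opt.step()
```

    The sparse variant (SCE) would additionally apply an ℓ1 penalty to the input-layer weights, driving the weights of irrelevant features to zero and thereby selecting variables.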

    Super-resolution of 3-dimensional scenes

    Super-resolution is an image enhancement method that increases the resolution of images and video. Previously, this technique could only be applied to 2D scenes. The super-resolution algorithm developed in this thesis creates high-resolution views of 3-dimensional scenes using low-resolution images captured from varying, unknown positions.

    Doctor of Philosophy

    Microwave/millimeter-wave imaging systems have become ubiquitous and have found applications in areas like astronomy, bio-medical diagnostics, remote sensing, and security surveillance. These areas have so far relied on conventional imaging devices (empl

    User manual and programmer reference manual for the ATS-6 navigation model AOIPS and McIDAS versions, part 2

    Development of a navigation system for a given satellite is reported. An algorithm for converting a satellite picture element location to earth location, and vice versa, was defined, as well as a procedure for measuring the set of constants needed by the algorithm. A user manual briefly describing the current version of the navigation model and how to use the computer programs developed for it is presented.
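    As a rough illustration of the pixel-to-earth-location problem (not the reported ATS-6 model itself, which also measures a set of calibration constants), the core geometry intersects a sensor view ray with the Earth. The sketch below assumes an idealized geostationary satellite, a spherical Earth, and no attitude or spin corrections.

```python
import numpy as np

R_EARTH = 6371.0   # spherical-Earth radius [km] (idealization)
R_GEO = 42164.0    # geostationary orbital radius [km]

def scan_angles_to_latlon(alpha, beta, sat_lon_deg):
    """Map scan angles (rad) to lat/lon (deg); None if the ray misses Earth."""
    s = np.array([R_GEO, 0.0, 0.0])                 # satellite position
    d = np.array([-np.cos(alpha) * np.cos(beta),    # view ray, nadir-pointing
                  np.sin(alpha) * np.cos(beta),
                  np.sin(beta)])
    # Solve |s + t d|^2 = R_EARTH^2 for the nearer intersection t.
    b = 2.0 * s.dot(d)
    c = s.dot(s) - R_EARTH**2
    disc = b * b - 4.0 * c
    if disc < 0:
        return None                                 # ray misses the Earth
    t = (-b - np.sqrt(disc)) / 2.0
    p = s + t * d
    lat = np.degrees(np.arcsin(p[2] / R_EARTH))
    lon = sat_lon_deg + np.degrees(np.arctan2(p[1], p[0]))
    return lat, lon

print(scan_angles_to_latlon(0.02, 0.05, sat_lon_deg=-94.0))
```

    The inverse (earth location to picture element) follows by projecting the earth point back into the sensor frame and inverting the same angular relations.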

    Scalable Domain Decomposition for Parallel Solution of 3D Finite Element Multibody Rotorcraft Aeromechanics

    A specialized mesh partitioner is developed for large-scale multibody three-dimensional finite element models. This partitioner enables modern domain decomposition algorithms to be leveraged for the parallel solution of complex, multibody, three-dimensional finite element-based rotor structural dynamics problems. The partitioner works with any domain decomposition algorithm, but contains special features for FETI-DP, a state-of-the-art iterative substructuring algorithm. The algorithm was implemented in the aeroelastic rotor solver X3D, with several modifications to improve performance. The parallel solver was applied to two practical test cases: the NASA Tiltrotor Aeroacoustic Model (TRAM) and the NASA Rotor Optimization for the Advancement of Mars eXploration (ROAMX) rotor blade. The mesh partitioner was developed from two sets of requirements: one standard to any domain decomposition algorithm and one specific to the FETI-DP method. The partitioner's main feature is its ability to robustly partition any multibody structure, with several special provisions for rotary-wing structures. The NASA TRAM, a 1/4-scale V-22 model, was specially released by NASA as a challenge test case. This model contained four flexible parts, six joints, nearly twenty composite material decks, a fluid-structure interface, and trim control inputs. The solver performance was studied for three test problems of increasing complexity: 1) an elementary beam, 2) the isolated TRAM blade, and 3) the TRAM blade and hub assembly. A key conclusion is that the use of a skyline solver for the coarse problem eliminates the coarse-problem scalability barrier. Overall, the principal barrier of computational time that prevented the use of high-fidelity three-dimensional structures in rotorcraft is thus resolved. The two selected cases provide a template for how 3D structures should be used in the future. A detailed aeromechanical analysis of the NASA TRAM rotor was conducted. The solver was validated against experimental results in hover. The stresses in the blade and hub components were examined, illustrating the unique benefit of 3D structures. The NASA ROAMX blade was, to our knowledge, the first rotor blade designed exclusively with 3D structures. The torsional stability, blade loads, blade deformations, and 3D stresses/strains were evaluated for multiple blade designs before the final selection. The aeroelastic behavior of this blade was studied in steady and unsteady hover. Inertial effects were found to dominate over aerodynamics on Mars. The rotor blade was found to have a sufficient factor of safety and damping for all test conditions. Over 20 thousand cases were executed with detailed stresses/strains as the means of down-selection, demonstrating the efficiency and utility of the parallel solver and providing a roadmap for its use in future designs.
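    As a flavor of the partitioning step only: the dissertation's partitioner is far more specialized (multibody joints, fluid-structure interfaces, FETI-DP corner selection), but a bare-bones recursive coordinate bisection over element centroids, a classic mesh-partitioning baseline, can be sketched as follows. All names and the bisection strategy are illustrative.

```python
import numpy as np

def recursive_bisection(centroids, n_parts):
    """Assign each element (given by its centroid) to one of n_parts subdomains
    by recursively splitting at the median of the widest coordinate axis."""
    part = np.zeros(len(centroids), dtype=int)

    def split(idx, lo, hi):
        if hi - lo <= 1:
            part[idx] = lo
            return
        pts = centroids[idx]
        axis = np.argmax(pts.max(axis=0) - pts.min(axis=0))  # widest extent
        order = np.argsort(pts[:, axis])
        mid = (lo + hi) // 2
        # Split proportionally so part counts on each side stay balanced.
        n_left = len(idx) * (mid - lo) // (hi - lo)
        split(idx[order[:n_left]], lo, mid)
        split(idx[order[n_left:]], mid, hi)

    split(np.arange(len(centroids)), 0, n_parts)
    return part

# Example: partition 1000 random element centroids into 8 subdomains.
parts = recursive_bisection(np.random.rand(1000, 3), 8)
print(np.bincount(parts))
```

    A production partitioner would additionally minimize the interface between subdomains, which drives the iteration count of substructuring methods such as FETI-DP.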

    Computational Imaging Approach to Recovery of Target Coordinates Using Orbital Sensor Data

    This dissertation addresses the components necessary for simulation of an image-based recovery of the position of a target using orbital image sensors. Each component is considered in detail, focusing on the effect that design choices and system parameters have on the accuracy of the position estimate. Changes in sensor resolution, varying amounts of blur, differences in image noise level, selection of algorithms used for each component, and lag introduced by excessive processing time all contribute to the accuracy of the recovered target coordinates. Using physical targets and sensors in this scenario would be cost-prohibitive in the exploratory setting posed; therefore, a simulated target path is generated using Bezier curves that approximate representative paths followed by the targets of interest. Orbital trajectories for the sensors are designed on an elliptical model representative of the motion of physical orbital sensors. Images from each sensor are simulated based on the position and orientation of the sensor, the position of the target, and the imaging parameters selected for the experiment (resolution, noise level, blur level, etc.). Post-processing of the simulated imagery seeks to reduce noise and blur and increase resolution. The only information available for calculating the target position by a fully implemented system is the sensor position and orientation vectors and the images from each sensor. From these data we develop a reliable method of recovering the target position and analyze the impact on near-real-time processing. We also discuss the influence of adjustments to system components on overall capabilities and address the potential system size, weight, and power requirements of realistic implementation approaches.
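    For instance, a simulated target path of the kind described can be generated from a cubic Bezier curve in a few lines; the control points below are illustrative placeholders, not the dissertation's actual paths.

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, n=100):
    """Sample a cubic Bezier curve at n parameter values in [0, 1]."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

# Illustrative ground-track control points (x, y in arbitrary units).
path = cubic_bezier(np.array([0.0, 0.0]), np.array([2.0, 5.0]),
                    np.array([6.0, 4.0]), np.array([9.0, 1.0]))
print(path[:3])
```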

    A user's manual for the Automatic Synthesis Program /program C/

    Digital computer program for numerical solution of problems in system theory involving linear mathematics.

    Reconfigurable Antenna Systems: Platform implementation and low-power matters

    Antennas are a necessary and often critical component of all wireless systems, whose ever-increasing complexity and present and emerging challenges they share. 5G, massive low-orbit satellite architectures (e.g., OneWeb), Industry 4.0, the Internet of Things (IoT), satcom on-the-move, and Advanced Driver Assistance Systems (ADAS) and Autonomous Vehicles all call for highly flexible systems, and antenna reconfigurability is an enabling part of these advances. The terminal segment is particularly crucial in this sense, encompassing both very compact and low-profile antennas, each with various adaptability/reconfigurability requirements. This thesis deals with hardware implementation issues of Radio Frequency (RF) antenna reconfigurability, and in particular with low-power General Purpose Platforms (GPP); the work encompasses Software Defined Radio (SDR) implementation as well as embedded low-power platforms (in particular the STM32 Nucleo family of microcontrollers). The hardware-software platform work has been complemented with the design and fabrication of reconfigurable antennas in standard technology, and the resulting systems have been tested. The selected antenna technology was an antenna array with a continuously steerable beam, controlled by voltage-driven phase-shifting circuits. Applications included a Wireless Sensor Network (WSN) deployed in the Italian scientific mission in Antarctica, a traffic-monitoring case study (EU H2020 project), and an innovative Global Navigation Satellite Systems (GNSS) antenna concept (patent application submitted). The SDR implementation focused on a low-cost, low-power open-source software-defined radio platform with IEEE 802.11 a/g/p wireless communication capability. In a second embodiment, the flexibility of the SDR paradigm was traded off to avoid the power consumption associated with the underlying operating system. The application field of reconfigurable antennas is, however, not limited to better management of energy consumption: the analysis has also been extended to satellite positioning applications. A novel beamforming method is presented, demonstrating improvements in the quality of signals received from satellites; for those working on positioning algorithms, this advancement helps improve the precision of the estimated position.
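    The steering principle behind such voltage-controlled arrays is standard: to point a uniform linear array at angle θ, element n is fed with phase −2πnd·sin(θ)/λ. A minimal sketch follows; the element count, spacing, and carrier frequency are illustrative assumptions.

```python
import numpy as np

C = 3e8                      # speed of light [m/s]
f = 5.9e9                    # illustrative carrier (802.11p band) [Hz]
lam = C / f
N, d = 8, lam / 2            # 8 elements at half-wavelength spacing

def steering_phases(theta_deg):
    """Per-element phase shifts (rad) steering the main beam to theta_deg."""
    n = np.arange(N)
    return -2 * np.pi * n * d * np.sin(np.radians(theta_deg)) / lam

def array_factor(theta_deg, phases):
    """Normalized array factor magnitude toward theta_deg, given phases."""
    n = np.arange(N)
    psi = 2 * np.pi * n * d * np.sin(np.radians(theta_deg)) / lam + phases
    return abs(np.exp(1j * psi).sum()) / N

phases = steering_phases(25.0)
print(array_factor(25.0, phases))   # ~1.0: beam steered to 25 degrees
print(array_factor(0.0, phases))    # reduced gain off the steered direction
```

    In a continuously steerable implementation, these phases are realized by mapping each value to the control voltage of its phase-shifting circuit.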

    Galactic Bulges, spinning black holes and star forming galaxies in their cosmological context: insights from a semi-analytical perspective

    During the last decades, astrophysicists have developed a theory about how galaxies form and evolve within the Lambda-CDM cosmological framework.
Despite being successful in many aspects, this general picture still has some missing pieces that observational and theoretical works are trying to put together. In this thesis, we try to answer some open problems by addressing three different topics: galactic bulges, supermassive black holes, and the development of mocks for the new generation of multi-narrow-band surveys. We have tackled all these subjects using the L-Galaxies semi-analytical model (SAM). Roughly, SAMs consist of dark matter merger trees populated with galaxies through analytical recipes. L-Galaxies is one of the state-of-the-art models, whose capability to predict the correct galaxy properties at different redshifts has been proven over the last decade in many works. One of the main advantages of L-Galaxies is its flexibility to be run on the dark matter merger trees of the Millennium suite of simulations, whose different box sizes and dark matter mass resolutions offer the capability to explore the physical processes undergone by galaxies over a wide range of scales and environments. In the first part of the thesis, we address the cosmological build-up of galactic bulges, with special focus on pseudobulges, whose cosmological evolution in a Lambda-CDM Universe has not yet been fully explored. In particular, we study their formation process and characterize the properties of their host galaxies at different redshifts. Within the L-Galaxies framework, galaxies are allowed to develop a bulge component via mergers and disk instabilities (DIs). Under the hypothesis that pseudobulges can only form and grow via secular evolution, we have modified the treatment of galaxy DIs. In detail, we assume that only secular DI events lead to the development and growth of pseudobulges through the formation of long-lasting bar structures. We have applied this pseudobulge formation scenario to L-Galaxies, run on top of the Millennium and Millennium II dark matter merger trees. The outcomes of the model are in agreement with observations, showing that z=0 pseudobulges are small structures (~0.5 kpc) hosted in main-sequence Milky Way-type galaxies. These results support our main underlying assumption that pseudobulges mainly form via secular evolution. We have extended our analysis of pseudobulge structures by studying the performance of the DI criterion used by L-Galaxies when it is applied to a barred and unbarred galaxy sample from the cosmological hydrodynamical simulation TNG100. Despite finding a correlation between the analytical criterion's predictions and the actual bar assembly (or non-assembly) in the barred (unbarred) galaxies, we have detected cases where the analytical criterion fails, either claiming disk stability for barred galaxies or disk instability for stable unbarred disks. We have proposed a new extra condition whose combination with the L-Galaxies criterion improves the detectability of bar structures and reduces both the contamination from fake barred galaxies and the number of undetected bar formation events. The second part of the thesis explores the mass assembly and spin evolution of supermassive black holes (BHs) across cosmic time. For this objective, we have updated L-Galaxies with new physical prescriptions. We have assumed that BH mass assembly is mainly triggered by gas accretion after galaxy mergers or disk instabilities, and that it takes place through a stage of rapid growth followed by a regime of slow accretion rates.
During these phases, the BH spin evolution is followed by linking it to the morphological properties of the hosting bulge. The model predictions display good consistency with local observables, such as the black hole mass function, the spin distribution, the BH-bulge mass relation, and quasar luminosity functions. One of the main novelties of this thesis has been to use the BH model described above to explore the formation and evolution of the wandering black hole population, i.e., the population of BHs outside galaxies, in bound orbits within dark matter subhalos. We have found that the formation of these wandering black holes leaves an imprint on the co-evolution between the black hole and the host galaxy which can be detected by current and future galaxy surveys. Finally, the third part of the thesis tackles the construction of mocks specially designed for the new generation of narrow-band surveys. For this, we have inserted the lightcone assembly inside L-Galaxies, including in the photometry of the simulated galaxies the effect of emission lines produced in star-forming regions. The latter has ensured the mocks' capability to correctly predict galaxy photometry in narrow-band filters. To determine the exact flux of the emission lines, we have used a model for the nebular emission in star-forming regions, coupled with a dust attenuation model, able to predict the flux emitted in 9 different lines. The validation of our lightcone has been done by comparing galaxy number counts, angular clustering, and Hα, Hβ, [OII], and [OIII] luminosity functions against a compilation of observations. We have applied all these procedures to generate catalogues tailored for J-PLUS, a large optical galaxy survey featuring a large number of narrow-band filters. By analysing the J-PLUS mock catalogues, we have demonstrated the ability of the survey to correctly identify a population of emission-line galaxies at various redshifts. As summarized above, in this thesis we have tackled several aspects of galaxy formation, trying to bridge theoretical and observational approaches. The advance of theoretical models, combined with data from future experiments, will certainly help to complete a detailed picture of how structures in our Universe form and evolve.
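    For context, analytic disk-instability tests of the kind discussed here are often written in the Efstathiou, Lake & Negroponte (1982) form used by L-Galaxies-type models. The sketch below shows that one-line criterion with assumed illustrative values; the thesis's actual threshold and its proposed extra condition are not reproduced here.

```python
import numpy as np

G = 4.302e-6  # gravitational constant [kpc (km/s)^2 / Msun]

def disk_is_unstable(v_max, m_disk, r_disk, eps_crit=1.1):
    """Efstathiou-Lake-Negroponte-style criterion: the disk is deemed
    unstable when its self-gravity dominates over the rotational support,
    i.e. when v_max / sqrt(G * M_disk / R_disk) falls below a threshold."""
    eps = v_max / np.sqrt(G * m_disk / r_disk)
    return eps < eps_crit

# Milky Way-like illustrative numbers: V_max ~ 220 km/s,
# M_disk ~ 5e10 Msun, R_disk ~ 3 kpc.
print(disk_is_unstable(220.0, 5e10, 3.0))
```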

    Efficient MCMC and posterior consistency for Bayesian inverse problems

    Many mathematical models used in science and technology contain parameters that are not known a priori. In order to match a model to a physical phenomenon, the parameters have to be adapted on the basis of the available data. One of the most important statistical concepts applied to inverse problems is the Bayesian approach, which models the a priori and a posteriori uncertainty through probability distributions, called the prior and posterior, respectively. However, computational methods such as Markov Chain Monte Carlo (MCMC) have to be used because these probability measures are only given implicitly. This thesis deals with two major tasks in the area of Bayesian inverse problems: the improvement of the computational methods, in particular, different kinds of MCMC algorithms, and the properties of the Bayesian approach to inverse problems, such as posterior consistency. In inverse problems, the unknown parameters are often functions and therefore elements of infinite-dimensional spaces. For this reason, we have to discretise the underlying problem in order to apply MCMC methods to it. Finer discretisations lead to a higher-dimensional state space and usually to a slower convergence rate of the Markov chain. We study these convergence rates rigorously and show how they deteriorate for standard methods. Moreover, we prove that slightly modified methods exhibit dimension-independent performance, constituting one of the first dimension-independent convergence results for locally moving MCMC algorithms. The second part of the thesis concerns numerical and analytical investigations of the posterior based on artificially generated data corresponding to a true set of parameters. In particular, we study the behaviour of the posterior as the amount of data increases or the noise in the data decreases. Posterior consistency describes the phenomenon that a sequence of posteriors concentrates around the truth. In this thesis, we present one of the first posterior consistency results for non-linear infinite-dimensional inverse problems. We also study a multiscale elliptic inverse problem in detail; in particular, we show that it is not posterior consistent, but the posterior concentrates around a manifold.
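    A standard example of such a dimension-robust modification is the preconditioned Crank-Nicolson (pCN) proposal, whose acceptance ratio involves only the log-likelihood potential Φ and not the prior density, so its behaviour does not degrade as the discretisation is refined. The finite-dimensional sketch below uses a placeholder potential and prior covariance factor; whether pCN is the exact variant analysed in the thesis is not stated in the abstract.

```python
import numpy as np

def pcn_mcmc(phi, sqrt_cov, u0, beta=0.2, n_steps=10_000, seed=0):
    """Preconditioned Crank-Nicolson sampler for a posterior with Gaussian
    prior N(0, C) and negative log-likelihood potential phi."""
    rng = np.random.default_rng(seed)
    u, phi_u = u0.copy(), phi(u0)
    samples = []
    for _ in range(n_steps):
        xi = sqrt_cov @ rng.standard_normal(len(u))   # draw from the prior
        v = np.sqrt(1.0 - beta**2) * u + beta * xi    # pCN proposal
        phi_v = phi(v)
        if np.log(rng.random()) < phi_u - phi_v:      # prior-free accept rule
            u, phi_u = v, phi_v
        samples.append(u.copy())
    return np.asarray(samples)

# Toy use: 50-dimensional standard-Gaussian prior, quadratic potential
# (likelihood centered at 1), so the posterior mean is ~0.5 per coordinate.
d = 50
samples = pcn_mcmc(phi=lambda u: 0.5 * np.sum((u - 1.0) ** 2),
                   sqrt_cov=np.eye(d), u0=np.zeros(d))
print(samples.mean(axis=0)[:3])
```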