
    Lightfield Analysis and Its Applications in Adaptive Optics and Surveillance Systems

    An image can only be as good as the optics of the camera or other imaging system allow it to be. An imaging system is a transformation that maps 3D world coordinates onto a 2D image plane, and this transformation can be linear or non-linear. Depending on the application at hand, some models of imaging systems are more convenient to use than others. The most well-known models for optical systems are the 1) pinhole model, 2) thin-lens model, and 3) thick-lens model. Using light-field analysis, the connections between these different models are described, and a novel figure of merit for choosing one optical model over another for a given application is presented. After analyzing these optical systems, their applications in plenoptic cameras for adaptive optics are introduced. A new technique is described that uses a plenoptic camera to extract information about a localized, distorted planar wavefront. CODE V simulations conducted in this thesis show that its performance is comparable to that of a Shack-Hartmann sensor and that it can potentially increase the dynamic range of angles that can be extracted, assuming a paraxial imaging system. As a final application, a novel dual-PTZ surveillance system for tracking a target through space is presented. 22x optical zoom lenses on high-resolution pan/tilt platforms recalibrate a master-slave relationship based on encoder readouts rather than complicated image-processing algorithms for real-time target tracking. As the target moves out of a region of interest in the master camera, the master camera is moved to force the target back into the region of interest. Once the master camera has moved, a precalibrated lookup table is interpolated to compute the relationship between the master and slave cameras.
The homography that relates the pixels of the master camera to the pan/tilt settings of the slave camera then continues to follow the planar trajectories of targets as they move through space with high accuracy.
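The lookup-table step described above can be sketched as bilinear interpolation over a precalibrated grid of pan/tilt readings. The table layout (`us`, `vs`, `pan`, `tilt` arrays) is a hypothetical structure for illustration, not the thesis's actual calibration format:

```python
import numpy as np

def interpolate_pan_tilt(lut, u, v):
    """Bilinearly interpolate a precalibrated lookup table.

    lut: dict with 'us', 'vs' (sorted grid axes in master-camera pixels)
         and 'pan', 'tilt' arrays of shape (len(vs), len(us)) holding the
         slave-camera settings measured at each grid node.
    Returns the interpolated (pan, tilt) for master pixel (u, v).
    """
    us, vs = lut['us'], lut['vs']
    # locate the grid cell containing (u, v), clamped to the table edges
    i = int(np.clip(np.searchsorted(us, u) - 1, 0, len(us) - 2))
    j = int(np.clip(np.searchsorted(vs, v) - 1, 0, len(vs) - 2))
    tu = (u - us[i]) / (us[i + 1] - us[i])   # fractional position in cell
    tv = (v - vs[j]) / (vs[j + 1] - vs[j])

    def bilerp(g):
        return ((1 - tu) * (1 - tv) * g[j, i] + tu * (1 - tv) * g[j, i + 1]
                + (1 - tu) * tv * g[j + 1, i] + tu * tv * g[j + 1, i + 1])

    return bilerp(lut['pan']), bilerp(lut['tilt'])
```

Interpolating rather than re-solving the full homography at every frame is what keeps the master-slave update cheap enough for real-time tracking.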

    Geodesics, Parallel Transport & One-parameter Subgroups for Diffeomorphic Image Registration

    Computational anatomy aims at developing models to understand the anatomical variability of organs and tissues. A widely used and validated instrument for comparing the anatomy in medical images is non-linear diffeomorphic registration, which is based on a rich mathematical background. For instance, the "large deformation diffeomorphic metric mapping" (LDDMM) framework defines a Riemannian setting by providing a right-invariant metric on the tangent spaces, and solves the registration problem by computing geodesics parametrized by time-varying velocity fields. A simpler alternative based on stationary velocity fields (SVF) has been proposed, using the one-parameter subgroups from Lie group theory. In spite of its better computational efficiency, the geometrical setting of the SVF is vaguer, especially regarding the relationship between one-parameter subgroups and geodesics. In this work, we detail the properties of finite-dimensional Lie groups that highlight the geometric foundations of one-parameter subgroups. We show that one can define a proper underlying geometric structure (an affine manifold) based on the canonical Cartan connections, for which one-parameter subgroups and their translations are geodesics. This geometric structure is perfectly compatible with all the group operations (left and right composition and inversion), unlike left- (or right-) invariant Riemannian metrics. Moreover, we derive closed-form expressions for the parallel transport. Then, we investigate the generalization of such properties to infinite-dimensional Lie groups. We suggest that some of the theoretical objections might actually be ruled out by the practical implementation of both the LDDMM and the SVF frameworks for image registration.
This leads us to a more practical study comparing the parameterization (initial velocity field) of metric and Cartan geodesics in the specific optimization context of longitudinal and inter-subject image registration. Our experimental results suggest that stationarity is a good approximation for longitudinal deformations, while metric geodesics notably differ from stationary ones for inter-subject registration, which involves much larger and non-physical deformations. Then, we turn to the practical comparison of five parallel transport techniques along one-parameter subgroups. Our results point out the fundamental role played by the numerical implementation, which may hide the theoretical differences between the different schemes. Interestingly, even if the parallel transport generally depends on the path used, an experiment comparing the Cartan parallel transport along the one-parameter subgroup and the LDDMM (metric) geodesics from inter-subject registration suggests that our parallel transport methods are not so sensitive to the path.
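The one-parameter subgroup exp(v) of a stationary velocity field is typically computed by scaling and squaring: divide v by 2^N, then compose the resulting small deformation with itself N times. A minimal sketch on a 2D displacement grid, assuming first-order (bilinear) composition; this illustrates the scheme, not the implementation used in the thesis:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def exp_svf(v, n_steps=6):
    """Exponentiate a stationary velocity field by scaling and squaring.

    v: velocity field of shape (2, H, W), in pixel units.
    Returns the displacement u with phi(x) = x + u(x) approximating exp(v)(x).
    """
    u = v / (2.0 ** n_steps)                 # scaling: phi_0 = id + v / 2^N
    H, W = v.shape[1:]
    grid = np.mgrid[0:H, 0:W].astype(float)  # identity coordinate grid
    for _ in range(n_steps):                 # squaring: phi <- phi o phi
        coords = grid + u                    # warped sampling positions
        # u_new(x) = u(x + u(x)) + u(x), interpolated bilinearly
        u = u + np.stack([
            map_coordinates(u[c], coords, order=1, mode='nearest')
            for c in range(2)
        ])
    return u
```

The cost is N interpolations of the whole field, which is the computational-efficiency advantage of the SVF setting over integrating a time-varying velocity field.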

    Aggregation and the PPP puzzle in a sticky-price model

    We study the purchasing power parity (PPP) puzzle in a multi-sector, two-country, sticky-price model. Across sectors, firms differ in the extent of price stickiness, in accordance with recent microeconomic evidence on price setting in various countries. Combined with local currency pricing, this leads sectoral real exchange rates to have heterogeneous dynamics. We show analytically that in this economy, deviations of the real exchange rate from PPP are more volatile and persistent than in a counterfactual one-sector world economy that features the same average frequency of price changes and is otherwise identical to the multi-sector world economy. When simulated with a sectoral distribution of price stickiness that matches the microeconomic evidence for the U.S. economy, the model produces a half-life of deviations from PPP of 39 months. In contrast, the half-life of such deviations in the counterfactual one-sector economy is only slightly above one year. As a by-product, our model provides a decomposition of this difference in persistence that allows a structural interpretation of the different approaches found in the empirical literature on aggregation and the real exchange rate. In particular, we reconcile the apparently conflicting findings that gave rise to the "PPP Strikes Back" debate (Imbs et al. 2005a,b and Chen and Engel 2005).
Keywords: purchasing power parity; prices; foreign exchange rates
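The reported half-lives map directly to AR(1) persistence: for a deviation following q_t = rho * q_{t-1} + e_t, the half-life is ln(0.5)/ln(rho). A small sketch of the conversion (function names are illustrative):

```python
import math

def half_life(rho):
    """Half-life (in periods) of an AR(1) deviation q_t = rho*q_{t-1} + e_t."""
    return math.log(0.5) / math.log(rho)

def persistence(hl):
    """AR(1) coefficient implied by a given half-life (same period units)."""
    return 0.5 ** (1.0 / hl)
```

For example, the model's 39-month half-life corresponds to a monthly persistence of about 0.982, while a half-life slightly above one year (say 13.5 months) corresponds to roughly 0.95.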

    Computational Light Transport for Forward and Inverse Problems.

    Computational light transport comprises all the techniques used to compute the flow of light in a virtual scene. Its use is ubiquitous across applications, from entertainment and advertising to product design, engineering, and architecture, including the generation of validated data for computational imaging techniques. However, simulating light transport accurately is a costly process. As a consequence, a balance must be struck between the fidelity of the physical simulation and its computational cost. For example, it is common to assume geometric optics or an infinite speed of light propagation, or to simplify reflectance models by ignoring certain phenomena. In this thesis we introduce several contributions to light transport simulation, aimed both at improving the efficiency of its computation and at expanding the range of its practical applications. We pay special attention to removing the assumption of an infinite propagation speed, generalizing light transport to its transient state. Regarding efficiency, we present a method for computing the light that arrives directly from luminaires in a Monte Carlo rendering system, significantly reducing the variance of the resulting images for the same running time. We also introduce a density-estimation technique in the transient state that allows temporal samples to be reused more effectively in a participating medium. In the application domain, we introduce two new uses of light transport: a model for simulating a special type of goniochromatic pigments that exhibit a pearlescent appearance, with the goal of providing an intuitive editing workflow for manufacturing, and a non-line-of-sight imaging technique that uses time-of-flight information of light, built on a wave-based model of light propagation.
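The direct-illumination idea can be illustrated with a toy estimator: drawing directions on the luminaire instead of over the whole hemisphere leaves the estimator's mean unchanged but shrinks its variance. The spherical-cap light at the zenith below is an assumption of the sketch, not the thesis's method:

```python
import math, random

def estimate_direct(n, cos_alpha=0.99, Le=10.0, sample_light=True):
    """Monte Carlo estimate of the cosine-weighted irradiance from a
    spherical-cap light at the zenith (analytic value:
    E = Le * pi * (1 - cos_alpha**2)).

    sample_light=True  -> draw directions on the light (low variance)
    sample_light=False -> draw uniform hemisphere directions (high variance)
    """
    omega_cap = 2.0 * math.pi * (1.0 - cos_alpha)   # cap solid angle
    total = 0.0
    for _ in range(n):
        if sample_light:
            z = random.uniform(cos_alpha, 1.0)      # pdf = 1 / omega_cap
            total += Le * z * omega_cap
        else:
            z = random.uniform(0.0, 1.0)            # pdf = 1 / (2*pi)
            if z >= cos_alpha:                      # direction hits the light?
                total += Le * z * 2.0 * math.pi
    return total / n
```

Both estimators converge to the same value, but the hemisphere sampler wastes almost all of its samples on directions that miss the small light, so its variance is orders of magnitude larger for the same number of samples.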

    Unsupervised deep learning of human brain diffusion magnetic resonance imaging tractography data

    Diffusion magnetic resonance imaging is a non-invasive technique providing insights into the organizational microstructure of biological tissues. The computational methods that exploit the orientational preference of the diffusion in restricted structures to reveal the brain's white matter axonal pathways are called tractography. In recent years, a variety of tractography methods have been successfully used to uncover the brain's white matter architecture. Yet, these reconstruction techniques suffer from a number of shortcomings derived from fundamental ambiguities inherent to the orientation information. This has dramatic consequences, since current tractography-based white matter connectivity maps are dominated by false positive connections. Thus, the large proportion of invalid pathways recovered remains one of the main challenges to be solved by tractography to obtain a reliable anatomical description of the white matter. Innovative methodological approaches are required to help solve these questions. Recent advances in computational power and data availability have made it possible to successfully apply modern machine learning approaches to a variety of problems, including computer vision and image analysis tasks. These methods model and learn the underlying patterns in the data, and allow making accurate predictions on new data. Similarly, they may enable obtaining compact representations of the intrinsic features of the data of interest. Modern data-driven approaches, grouped under the family of deep learning methods, are being adopted to solve medical imaging data analysis tasks, including tractography. In this context, the proposed methods are less dependent on the constraints imposed by current tractography approaches.
Hence, deep learning-inspired methods are suited to the required paradigm shift, may open new modeling possibilities, and thus improve the state of the art in tractography. In this thesis, a new paradigm based on representation learning techniques is proposed to generate and to analyze tractography data. By harnessing autoencoder architectures, this work explores their ability to find an optimal code to represent the features of the white matter fiber pathways. The contributions exploit such representations for a variety of tractography-related tasks, including efficient (i) filtering and (ii) clustering on results generated by other methods, and (iii) the white matter pathway reconstruction itself using a generative method. The methods issued from this thesis have been named (i) FINTA (Filtering in Tractography using Autoencoders), (ii) CINTA (Clustering in Tractography using Autoencoders), and (iii) GESTA (Generative Sampling in Bundle Tractography using Autoencoders), respectively. The proposed methods' performance is assessed against current state-of-the-art methods on synthetic data and healthy adult human brain in vivo data. Results show that (i) the introduced filtering method has superior sensitivity and specificity over other state-of-the-art methods; (ii) the clustering method groups streamlines into anatomically coherent bundles with a high degree of consistency; and (iii) the generative streamline sampling technique successfully improves the white matter coverage in hard-to-track bundles. In summary, this thesis unlocks the potential of deep autoencoder-based models for white matter data analysis, and paves the way towards delivering more reliable tractography data.
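The FINTA-style filtering criterion (keep a streamline if the autoencoder reconstructs it with low error) can be sketched with a linear stand-in for the trained network: PCA acts as a "linear autoencoder" here purely for illustration, and the shapes and threshold are assumptions of the sketch:

```python
import numpy as np

def fit_linear_ae(X, k):
    """Fit a linear 'autoencoder' (a PCA basis) on flattened plausible
    streamlines X of shape (n_samples, n_features); keep k components."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]                # mean and decoder/encoder basis

def reconstruction_error(X, mu, W):
    """Per-sample L2 error between each streamline and its reconstruction."""
    Z = (X - mu) @ W.T               # encode into the latent space
    Xh = Z @ W + mu                  # decode back to streamline space
    return np.linalg.norm(X - Xh, axis=1)

def filter_streamlines(X, mu, W, threshold):
    """Keep streamlines whose reconstruction error is below threshold."""
    return reconstruction_error(X, mu, W) < threshold
```

A streamline far from the manifold of plausible fibers reconstructs poorly and is rejected; in the actual method the linear basis is replaced by a deep autoencoder trained on plausible tractography data.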

    Experimental Investigations on Multiphase Phenomena in Porous Media

    Modeling of water flow and solute transport in porous media is typically based on the water dynamics only, while the gaseous phase is neglected. Since the two fluids share the same pore space, a dedicated investigation of the gaseous phase is mandatory to understand its influence on the basic processes (continuity, hysteresis, entrapment, ...), especially near water saturation. For the multiphase measurements, an existing multistep outflow setup for determining hydraulic properties with laboratory-sized columns was extended by an additional air-flow measurement device, so that gas-phase continuity and conductivity could be measured simultaneously. The measured hydraulic data were analyzed by inverse modeling, on which the pneumatic data analysis was based. Several gas conductivity models were tested. The possibilities of a combined measurement of hydraulic and pneumatic properties were demonstrated with artificial porous media made of sintered glass. The comparison of measurements and simulations of air conductivity showed that a rescaling of the effective air saturation is necessary for predictions. The influence of the sample structure on the hydraulic and pneumatic properties was illustrated with several homogeneous and heterogeneous samples of repacked sands. The differences between purely hydraulic and combined measurements were shown with experiments carried out on two pathological structures. For these samples, the basic structural elements could be detected by the combination of hydraulic and pneumatic measurements.
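One common way to express relative air conductivity with a rescaled effective saturation is a Mualem-type curve cut off at a gas-phase emergence point. The parameterization below is a generic sketch under that assumption, not the specific model calibrated in the thesis; `se_emerg` is a hypothetical water saturation above which the gas phase loses continuity:

```python
def k_rel_air(se, m=0.5, se_emerg=0.9):
    """Relative air conductivity from a Mualem-type model with a rescaled
    effective water saturation.

    se:       effective water saturation in [0, 1]
    m:        van Genuchten shape parameter
    se_emerg: saturation at which the gas phase becomes discontinuous;
              above it the air conductivity is zero
    """
    if se >= se_emerg:
        return 0.0                       # gas phase not continuous
    s = se / se_emerg                    # rescaled saturation in [0, 1)
    return (1.0 - s) ** 0.5 * (1.0 - s ** (1.0 / m)) ** (2.0 * m)
```

Without the rescaling, the model would predict a small but nonzero air conductivity all the way up to full water saturation, which is exactly what the combined hydraulic/pneumatic measurements rule out.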

    Proceedings of the Second International Workshop on Mathematical Foundations of Computational Anatomy (MFCA'08) - Geometrical and Statistical Methods for Modelling Biological Shape Variability

    The goal of computational anatomy is to analyze and to statistically model the anatomy of organs in different subjects. Computational anatomic methods are generally based on the extraction of anatomical features or manifolds, which are then statistically analyzed, often through non-linear registration. There is nowadays a growing number of methods that can faithfully deal with the underlying biomechanical behavior of intra-subject deformations. However, it is more difficult to relate the anatomies of different subjects. In the absence of any justified physical model, diffeomorphisms provide a general mathematical framework that enforces topological consistency. Working with such infinite-dimensional spaces raises deep computational and mathematical problems, in particular for doing statistics. Likewise, modeling the variability of surfaces requires shape spaces that are much more complex than those for curves. To cope with these difficulties, different methodological and computational frameworks have been proposed (e.g., smooth left-invariant metrics, a focus on well-behaved subspaces of diffeomorphisms, modeling surfaces using currents, etc.). The goal of the Mathematical Foundations of Computational Anatomy (MFCA) workshop is to foster interactions between the mathematical community around shapes and the MICCAI community around computational anatomy applications. It targets more particularly researchers investigating the combination of statistical and geometrical aspects in the modeling of the variability of biological shapes. The workshop aims at being a forum for the exchange of theoretical ideas and a source of inspiration for new methodological developments in computational anatomy. A special emphasis is put on theoretical developments; applications and results are welcome as illustrations.
Following the very successful first edition of this workshop in 2006 (see http://www.inria.fr/sophia/asclepios/events/MFCA06/), the second edition was held in New York on September 6, in conjunction with MICCAI 2008. Contributions were solicited in Riemannian and group-theoretical methods, geometric measurements of the anatomy, advanced statistics on deformations and shapes, metrics for computational anatomy, and statistics of surfaces. 34 submissions were received, among which 9 were accepted to MICCAI and had to be withdrawn from the workshop. Each of the remaining 25 papers was reviewed by three members of the program committee. To guarantee a high-level program, only 16 papers were selected.

    Economic structures, the nature of shocks and the role of exchange rate in the monetary policy formation in the emerging countries of East-Asia

    This thesis investigates the role of the exchange rate in a small open economy policy framework. Focusing the analysis on the crisis-hit East-Asian countries, the main objective of this thesis is to investigate whether the monetary authority needs to be concerned about exchange rate stability by reacting directly to exchange rate movements under a flexible exchange rate regime. This thesis conducts both numerical simulations and empirical analyses, and it is organized into six chapters. Chapter One is the introduction, covering the content of each chapter and a summary of the main findings. Chapter Two is an overview of the economies and monetary policies of East-Asian countries. Chapter Four applies a model of Lindé, Nessén & Söderström (2004) and conducts simulations to compare the performances of various policy rules in terms of policy loss and variations. The remaining chapters contain the empirical analyses: Chapter Three applies the GMM technique and a SUR model to estimate the degree of exchange rate pass-through in East Asia before and after the crisis of 1997/98, Chapter Five applies the GMM technique to estimate the policy reaction function for East Asia, and the last chapter conducts a SVAR model to investigate the change in the economic structure, the dynamics of shocks, and the performances of the policy regimes in East-Asian countries. The simulations reveal some evidence that monetary policy rules/regimes that react directly to exchange rate terms are more effective, accounting for different degrees of exchange rate pass-through, trade openness, policy objectives, and the source and persistence of shocks. However, the size of the improvements depends on country-specific factors. The empirical analyses report different results for the degree of exchange rate pass-through along the pricing chain, over time, and across countries.
Besides, there is empirical evidence that the monetary authorities in East-Asian countries influenced exchange rate movements through short-term interest rate adjustments and foreign-exchange market intervention under the floating regime in the aftermath of the crisis. The empirical findings indicate that the policy regimes in the aftermath of the crisis are more effective. The source of shocks and the change in the economic structure matter in determining the performances of policy regimes. The empirical results are in line with the theoretical outcomes that favor reacting to exchange rate movements under the flexible exchange rate regime in the emerging countries of East Asia.
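Policy rules that "react directly to the exchange rate" in such exercises typically take the form of a smoothed interest-rate rule with an added depreciation term. A generic sketch, with illustrative coefficients that are not estimates from the thesis:

```python
def policy_rate(pi, y, de, i_prev,
                rho=0.8, phi_pi=1.5, phi_y=0.5, phi_e=0.1):
    """Interest-rate rule with smoothing and a direct exchange-rate term:

        i_t = rho*i_{t-1} + (1 - rho)*(phi_pi*pi_t + phi_y*y_t + phi_e*de_t)

    pi: inflation gap, y: output gap, de: exchange-rate depreciation,
    i_prev: last period's policy rate. Setting phi_e = 0 recovers a
    standard smoothed Taylor rule without an exchange-rate reaction.
    """
    return rho * i_prev + (1.0 - rho) * (phi_pi * pi + phi_y * y + phi_e * de)
```

Comparing losses under phi_e = 0 versus phi_e > 0, for different pass-through and shock configurations, is the kind of experiment the simulations above run.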

    Irradiation Effects in Graphene and Related Materials

    Nanomaterials with a hexagonally ordered atomic structure, e.g., graphene, carbon and boron nitride nanotubes, and white graphene (a monolayer of hexagonal boron nitride), possess many impressive properties. For example, the mechanical stiffness and strength of these materials are unprecedented. Also, the extraordinary electronic properties of graphene and carbon nanotubes suggest that these materials may serve as building blocks of next-generation electronics. However, the properties of pristine materials are not always what is needed in applications; careful manipulation of their atomic structure, e.g., via particle irradiation, can be used to tailor the properties. On the other hand, inadvertently introduced defects can deteriorate the useful properties of these materials in radiation-hostile environments, such as outer space. In this thesis, defect production via energetic particle bombardment in the aforementioned materials is investigated. The effects of ion irradiation on multi-walled carbon and boron nitride nanotubes are studied experimentally by first conducting controlled irradiation treatments of the samples using an ion accelerator and subsequently characterizing the induced changes by transmission electron microscopy and Raman spectroscopy. The usefulness of the characterization methods is critically evaluated, and a damage grading scale based on transmission electron microscopy images is proposed. Theoretical predictions are made on defect production in graphene and white graphene under particle bombardment. A stochastic model based on first-principles molecular dynamics simulations is used together with electron irradiation experiments to understand the formation of peculiar triangular defect structures in white graphene. An extensive set of classical molecular dynamics simulations is conducted in order to study defect production under ion irradiation in graphene and white graphene.
In the experimental studies, the response of carbon and boron nitride multi-walled nanotubes to irradiation with a wide range of ion types, energies, and fluences is explored. The stabilities of these structures under ion irradiation are investigated, as well as how the mechanism of energy transfer affects the irradiation-induced damage. An irradiation fluence of 5.5x10^15 ions/cm^2 with 40 keV Ar+ ions is established to be sufficient to amorphize a multi-walled nanotube. In the case of 350 keV He+ ion irradiation, where most of the energy transfer happens through inelastic collisions between the ion and the target electrons, an irradiation fluence of 1.4x10^17 ions/cm^2 heavily damages carbon nanotubes, whereas a larger irradiation fluence of 1.2x10^18 ions/cm^2 leaves a boron nitride nanotube in much better condition, indicating that carbon nanotubes might be more susceptible to damage via electronic excitations than their boron nitride counterparts. An elevated temperature was discovered to considerably reduce the accumulated damage created by energetic ions in both carbon and boron nitride nanotubes, attributed to enhanced defect mobility and efficient recombination at high temperatures. Additionally, cobalt nanorods encapsulated inside multi-walled carbon nanotubes were observed to transform into spherical nanoparticles after ion irradiation at an elevated temperature, which can be explained by the inverse Ostwald ripening effect. The simulation studies on ion irradiation of the hexagonal monolayers yielded quantitative estimates of the types and abundances of defects produced within a large range of irradiation parameters. He, Ne, Ar, Kr, Xe, and Ga ions were considered in the simulations with kinetic energies ranging from 35 eV to 10 MeV, and the role of the angle of incidence of the ions was studied in detail. A stochastic model was developed for utilizing the large amount of data produced by the molecular dynamics simulations.
It was discovered that a high degree of selectivity over the types and abundances of defects can be achieved by carefully selecting the irradiation parameters, which can be of great use when precise patterning of graphene or white graphene using focused ion beams is planned.
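A stochastic model built on MD data can be sketched as sampling per-impact defect outcomes from probability tables that depend on ion type, energy, and angle. The probabilities below are placeholders for illustration, not the thesis's simulation results:

```python
import random

# Hypothetical per-impact defect probabilities for one (ion, energy) setting;
# illustrative numbers only, not results from the thesis's MD simulations.
DEFECT_PROBS = {
    'single_vacancy': 0.10,
    'double_vacancy': 0.04,
    'amorphous_region': 0.01,
    # remaining probability: the impact produces no stable defect
}

def sample_impacts(n_ions, probs=DEFECT_PROBS, seed=None):
    """Sample defect outcomes for n_ions independent ion impacts."""
    rng = random.Random(seed)
    counts = {k: 0 for k in probs}
    counts['no_defect'] = 0
    for _ in range(n_ions):
        r = rng.random()
        acc = 0.0
        for defect, p in probs.items():   # inverse-CDF draw over defect types
            acc += p
            if r < acc:
                counts[defect] += 1
                break
        else:
            counts['no_defect'] += 1
    return counts
```

Replacing the expensive MD run by draws from precomputed tables is what makes it feasible to predict defect populations for arbitrary fluences and ion-beam settings.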