
    Advanced digital and analog error correction codes


    Mineral identification using data-mining in hyperspectral infrared imagery

    The geological applications of hyperspectral infrared imagery mainly consist of mineral identification, mapping, airborne or portable instruments, and core logging. Finding mineral indicators offers considerable benefits for mineralogy and mineral exploration, which usually involve portable instruments and core logging. Moreover, faster and more mechanized systems increase the precision of identifying mineral indicators and avoid possible misclassification. The objective of this thesis was therefore to create a tool that uses hyperspectral infrared imagery and processes the data through image analysis and machine learning methods to identify small mineral grains used as mineral indicators. Such a system could serve in a variety of circumstances as an assistant for geological analysis and mineral exploration. The experiments were conducted under laboratory conditions in the long-wave infrared (7.7μm to 11.8μm, LWIR), with an LWIR macro lens (to improve spatial resolution), an Infragold plate, and a heating source. The process began with a method to calculate the continuum removal: Non-negative Matrix Factorization (NMF) is applied to extract a rank-1 approximation and estimate the down-welling radiance, which is then compared with other conventional methods. The results indicate successful suppression of the continuum from the spectra, enabling the spectra to be compared with spectral libraries. Next, toward an automated system, supervised and unsupervised approaches were tested for the identification of pyrope, olivine, and quartz grains; the unsupervised approach proved more suitable because it does not depend on a training stage. Two algorithms were then tested for creating False Colour Composites (FCC) with a clustering approach; the comparison showed significant computational efficiency (more than 20 times faster) and promising identification performance relative to results in the literature, although tests on LWIR data revealed difficulty in predicting the grain surface when grains were irregular or contained mineral aggregates. The reliability of the automated LWIR hyperspectral mineral identification was then quantified against two Ground Truth (GT) databases, rigid-GT (manual labelling of regions) and observed-GT (manual labelling of pixels); accuracy was up to 1.5 times higher with observed-GT than with rigid-GT. The samples were also examined by Micro X-ray Fluorescence (XRF) and Scanning Electron Microscope (SEM) imaging to retrieve information on the mineral aggregates and the grain surfaces (biotite, epidote, goethite, diopside, smithsonite, tourmaline, kyanite, scheelite, pyrope, olivine, and quartz). The XRF imagery was compared with automatic mineral identification techniques using ArcGIS, showed promising performance for automatic identification, and was also used for GT validation. Overall, the four methods in this thesis (1. continuum removal; 2. classification or clustering for mineral identification; 3. two algorithms for clustering mineral spectra; 4. reliability verification) represent beneficial methodologies for mineral identification. They are non-destructive, relatively accurate, and of low computational cost, which could qualify them for identifying and assessing mineral grains under laboratory conditions or in the field.
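
    The continuum-removal step described above lends itself to a short sketch. The following is a minimal illustration, assuming a calibrated non-negative LWIR radiance cube; the array shapes, the function name, and the final ratio step are assumptions for illustration rather than the thesis's exact pipeline. It fits a one-component NMF so that the rank-1 reconstruction plays the role of the smooth background term that is divided out.

        import numpy as np
        from sklearn.decomposition import NMF

        def continuum_removal_rank1(cube):
            """cube: (rows, cols, bands) non-negative LWIR radiance."""
            rows, cols, bands = cube.shape
            X = cube.reshape(-1, bands)                 # one spectrum per row
            model = NMF(n_components=1, init="nndsvda", max_iter=500)
            W = model.fit_transform(X)                  # (pixels, 1) per-pixel scale
            H = model.components_                       # (1, bands) shared spectrum
            continuum = W @ H                           # rank-1 background estimate
            removed = X / np.maximum(continuum, 1e-12)  # ratio spectra; continuum ~ 1
            return removed.reshape(rows, cols, bands), H.ravel()

        # Toy usage with synthetic non-negative data.
        cube = np.random.rand(16, 16, 60) + 0.5
        cr_cube, background = continuum_removal_rank1(cube)
        print(cr_cube.shape, background.shape)          # (16, 16, 60) (60,)

    Dividing by the rank-1 term normalises the shared background so that residual absorption features stand out for comparison against spectral libraries.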

    Dynamic information and constraints in source and channel coding

    Thesis (Ph.D.) by Emin Martinian, Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 237-251). This thesis explores dynamics in source coding and channel coding. We begin by introducing the idea of distortion side information, which does not directly depend on the source but instead affects the distortion measure. Such distortion side information is not only useful at the encoder; under certain conditions, knowing it at the encoder is optimal and knowing it at the decoder is useless. Distortion side information is thus a natural complement to Wyner-Ziv side information and may be useful in exploiting properties of the human perceptual system as well as in sensor or control applications. In addition to developing the theoretical limits of source coding with distortion side information, we also construct practical quantizers based on lattices and codes on graphs. Our use of codes on graphs is also of independent interest, since it highlights some issues in translating the success of turbo and LDPC codes into the realm of source coding. Finally, to explore the dynamics of side information correlated with the source, we consider fixed-lag side information at the decoder. We focus on the special case of perfect side information with unit lag, corresponding to source coding with feedforward (the dual of channel coding with feedback). Using duality, we develop a linear-complexity algorithm that exploits the feedforward information to achieve the rate-distortion bound. The second part of the thesis focuses on channel dynamics in communication by introducing a new system model to study delay in streaming applications. We first consider an adversarial channel model where at any time the channel may suffer a burst of degraded performance (e.g., due to signal fading, interference, or congestion) and prove a coding theorem for the minimum decoding delay required to recover from such a burst. Our coding theorem illustrates the relationship between the structure of a code, the dynamics of the channel, and the resulting decoding delay. We also consider more general channel dynamics. Specifically, we prove a coding theorem establishing that, for certain collections of channel ensembles, delay-universal codes exist that simultaneously achieve the best delay for any channel in the collection. Practical constructions with low encoding and decoding complexity are described for both cases. Finally, we also consider architectures consisting of both source and channel coding, which deal with channel dynamics by spreading information over space, frequency, multiple antennas, or alternate transmission paths in a network to avoid coding delays. Specifically, we explore whether the inherent diversity in such parallel channels should be exploited at the application layer via multiple description source coding, at the physical layer via parallel channel coding, or through some combination of joint source-channel coding. For on-off channel models, application layer diversity architectures achieve better performance, while for channels with a continuous range of reception quality (e.g., additive Gaussian noise channels with Rayleigh fading), the reverse is true. Joint source-channel coding achieves the best of both by performing as well as application layer diversity for on-off channels and as well as physical layer diversity for continuous channels.
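
    The distortion side information idea can be made concrete with a small numerical toy (this is not the thesis's lattice or codes-on-graphs construction). Assume a weighted squared-error measure d(x, xhat; q) = q * (x - xhat)^2 and the standard high-rate approximation D_i ~ 2^(-2*R_i) per unit-variance sample; an encoder that knows the weights q can split its bit budget with the closed-form Lagrangian allocation below, which is never worse than an equal split. The weight range and all names are assumptions for illustration.

        import numpy as np

        rng = np.random.default_rng(0)
        n, R_total = 8, 16.0                  # samples and total bit budget
        q = rng.uniform(0.5, 2.0, size=n)     # distortion weights (side info);
                                              # narrow range keeps all rates positive

        # Encoder knows q: R_i = R/n + 0.5*log2(q_i / geometric_mean(q)),
        # the optimal split for minimising sum_i q_i * 2^(-2*R_i).
        geo = np.exp(np.log(q).mean())
        R_aware = R_total / n + 0.5 * np.log2(q / geo)

        # Encoder ignores q: equal split.
        R_equal = np.full(n, R_total / n)

        def weighted_distortion(R):
            return np.sum(q * 2.0 ** (-2.0 * R))

        print(f"equal allocation : {weighted_distortion(R_equal):.5f}")
        print(f"weight-aware     : {weighted_distortion(R_aware):.5f}")  # never worse

    The weight-aware distortion works out to the geometric mean of q times n * 2^(-2*R_total/n), while the equal split gives the arithmetic mean in its place, so the gain is exactly the arithmetic-geometric mean gap; with uniform weights the two coincide, matching the intuition that this side information only helps when the distortion measure actually varies.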

    Aeronautical engineering: A continuing bibliography with indexes (supplement 240)

    This bibliography lists 629 reports, articles, and other documents introduced into the NASA scientific and technical information system in May 1989. Subject coverage includes: design, construction and testing of aircraft and aircraft engines; aircraft components, equipment and systems; ground support systems; and theoretical and applied aspects of aerodynamics and general fluid dynamics.

    Correcting, Improving, and Verifying Automated Guidance in a New Warning Paradigm

    The prototype Probabilistic Hazards Information (PHI) system allows forecasters to experimentally issue dynamically evolving severe weather warning and advisory products in a testbed environment, providing hypothetical end users with specific probabilities that a given location will experience severe weather over a predicted time period. When issuing these products, forecasters are provided with an automated, first-guess storm identification object intended to support the probabilistic warning issuance process. However, empirical results from experimentation suggest that forecasters have a general distrust of the automated guidance, leading to frequent adjustments to the automated information. Additionally, feedback from several years of experimentation suggests that forecasters have limited direct experience with how storm-scale severe weather probabilities tend to evolve in different convective situations. To help address these concerns, the first part of this thesis provides a detailed analysis of the maximum attainable predictability of the automated guidance during the spring season of 2015, and compares the verification statistics from automation with those of the corresponding storm-based warnings issued by the National Weather Service during the same period. The second part of this thesis addresses storm-scale severe weather probability trends by developing a machine learning model to predict the evolution of a storm's likelihood of producing severe weather. This model uses the ensemble average of six machine learning members, trained on variables obtained from the initial automated guidance, environmental parameters, and a storm's history, to predict future probabilities of severe weather occurrence over the predicted duration of a storm. Finally, the model was implemented and tested during the 2017 PHI prototype experiment.
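
    The ensemble-averaging step can be sketched briefly. In the sketch below, the six member types, the feature stand-ins, and the synthetic training data are assumptions for illustration (the abstract does not specify the members here); only the overall structure, six trained members whose predictions are averaged and clipped to a probability, follows the description above.

        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor

        rng = np.random.default_rng(1)
        X = rng.normal(size=(500, 6))   # stand-ins for guidance, environmental,
                                        # and storm-history predictors
        y = np.clip(0.5 + 0.1 * X[:, 0] - 0.05 * X[:, 1]
                    + rng.normal(scale=0.05, size=500), 0.0, 1.0)  # synthetic target

        # Six members: three boosted-tree and three random-forest regressors
        # with different seeds (an assumption; the member types are not given).
        members = ([GradientBoostingRegressor(random_state=s) for s in range(3)]
                   + [RandomForestRegressor(random_state=s) for s in range(3)])
        for m in members:
            m.fit(X, y)

        def predict_probability(X_new):
            """Average the members' outputs and clip to a valid probability."""
            preds = np.mean([m.predict(X_new) for m in members], axis=0)
            return np.clip(preds, 0.0, 1.0)

        print(predict_probability(X[:3]))

    In practice each member would be trained on real attributes of the first-guess storm object and verified against observed severe weather reports before the averaged probability trend is presented to forecasters.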

    Assessment of avionics technology in European aerospace organizations

    This report provides a summary of the observations and recommendations made by a technical panel formed by the National Aeronautics and Space Administration (NASA). The panel, comprising prominent experts in the avionics field, was tasked to visit various organizations in Europe and assess the level of technology planned for use in future civil avionics. The primary purpose of the study was to assess avionics systems planned for implementation or already employed on civil aircraft, and to evaluate future research, development, and engineering (RD&E) programs that address avionic systems and aircraft programs. The ultimate goal is to ensure that the technology addressed by NASA programs is commensurate with the needs of the aerospace industry at an international level. The panel focused on specific technologies, including guidance and control systems, advanced cockpit displays, sensors and data networks, and fly-by-wire/fly-by-light systems. However, the panel's discussions with the European organizations were not limited to these topics.

    CIRA annual report FY 2016/2017

    Reporting period April 1, 2016-March 31, 2017

    Towards the “Perfect” Weather Warning

    This book is about making weather warnings more effective in saving lives, property, infrastructure and livelihoods, but its underlying theme is partnership. The book represents the warning process as a pathway linking observations to weather forecasts to hazard forecasts to socio-economic impact forecasts to warning messages to the protective decision, via a set of five bridges that cross the divides between the relevant organisations and areas of expertise. Each bridge represents the communication, translation and interpretation of information as it passes from one area of expertise to another and ultimately to the decision maker, who may be a professional or a member of the public. The authors explore the partnerships upon which each bridge is built, assess the expertise and skills that each partner brings and the challenges of communication between them, and discuss the structures and methods of working that build effective partnerships. The book is ordered according to the "first mile" paradigm, in which the decision maker comes first and the production chain through the warning and forecast to the observations is considered second. This approach emphasizes the importance of co-design and co-production throughout the warning process. The book is targeted at professionals and trainee professionals with a role in the warning chain, i.e. in weather services, emergency management agencies, disaster risk reduction agencies, and the risk management sections of infrastructure agencies. This is an open access book.

    CIRA annual report FY 2011/2012

    Get PDF