
    Probabilistic Inference from Arbitrary Uncertainty using Mixtures of Factorized Generalized Gaussians

    This paper presents a general and efficient framework for probabilistic inference and learning from arbitrary uncertain information. It exploits the computational properties of finite mixture models, conjugate families, and factorization. Both the joint probability density of the variables and the likelihood function of the (objective or subjective) observation are approximated by a special mixture model, in such a way that any desired conditional distribution can be obtained directly, without numerical integration. We have developed an extended version of the expectation-maximization (EM) algorithm to estimate the parameters of mixture models from uncertain training examples (indirect observations). As a consequence, any piece of exact or uncertain information about both input and output values is handled consistently in the inference and learning stages. This ability, extremely useful in certain situations, is not found in most alternative methods. The proposed framework is formally justified from standard probabilistic principles, and illustrative examples are provided in the fields of nonparametric pattern classification, nonlinear regression, and pattern completion. Finally, experiments on a real application and comparative results over standard databases provide empirical evidence of the utility of the method in a wide range of applications.
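As a concrete baseline for the learning stage described above, the sketch below fits a one-dimensional Gaussian mixture with standard EM on exact observations; the paper's contribution is extending this to uncertain (indirect) observations. The function name and initialization are illustrative, not the authors' code.

```python
import math

def em_gmm_1d(data, k=2, iters=100):
    """Standard EM for a 1-D Gaussian mixture (the baseline the paper
    extends to uncertain observations). Illustrative sketch only."""
    # Deterministic init: spread the means over the data range, unit variances.
    lo, hi = min(data), max(data)
    mus = [lo + (hi - lo) * j / (k - 1) for j in range(k)] if k > 1 else [lo]
    sigmas = [1.0] * k
    weights = [1.0 / k] * k
    for _ in range(iters):
        # E-step: responsibilities r[n][j] proportional to w_j * N(x_n | mu_j, sigma_j^2)
        resp = []
        for x in data:
            probs = [w * math.exp(-((x - m) ** 2) / (2 * s * s)) / (s * math.sqrt(2 * math.pi))
                     for w, m, s in zip(weights, mus, sigmas)]
            z = sum(probs) or 1e-300  # guard against total underflow
            resp.append([p / z for p in probs])
        # M-step: re-estimate weights, means, and variances from the soft counts.
        for j in range(k):
            nj = max(sum(r[j] for r in resp), 1e-12)
            weights[j] = nj / len(data)
            mus[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            var = sum(r[j] * (x - mus[j]) ** 2 for r, x in zip(resp, data)) / nj
            sigmas[j] = math.sqrt(max(var, 1e-6))
    return weights, mus, sigmas
```

The paper's extension replaces the point observations x_n in the E-step with likelihood functions over possible values, so that uncertain inputs contribute soft evidence rather than a single number.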

    A Unifying review of linear gaussian models

    Factor analysis, principal component analysis, mixtures of gaussian clusters, vector quantization, Kalman filter models, and hidden Markov models can all be unified as variations of unsupervised learning under a single basic generative model. This is achieved by collecting together disparate observations and derivations made by many previous authors and introducing a new way of linking discrete and continuous state models using a simple nonlinearity. Through the use of other nonlinearities, we show how independent component analysis is also a variation of the same basic generative model. We show that factor analysis and mixtures of gaussians can be implemented in autoencoder neural networks and learned using squared error plus the same regularization term. We introduce a new model for static data, known as sensible principal component analysis, as well as a novel concept of spatially adaptive observation noise. We also review some of the literature involving global and local mixtures of the basic models and provide pseudocode for inference and learning for all the basic models.
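The single basic generative model that the review builds on is a linear Gaussian state-space model; the scalar sketch below simulates it (an illustrative simplification, not the paper's pseudocode). Setting A = 0 gives the static case (factor analysis / sensible PCA), while a nonzero A gives a Kalman filter model.

```python
import random

def simulate_lgm(A, C, Q, R, x0, steps, rng):
    """Simulate the basic linear Gaussian generative model:
        x_t = A x_{t-1} + w_t,  w_t ~ N(0, Q)
        y_t = C x_t + v_t,      v_t ~ N(0, R)
    Scalar state and observation for brevity."""
    x, ys = x0, []
    for _ in range(steps):
        x = A * x + rng.gauss(0.0, Q ** 0.5)   # state transition plus process noise
        ys.append(C * x + rng.gauss(0.0, R ** 0.5))  # noisy linear observation
    return ys
```

The unification in the review comes from which pieces are restricted: discrete versus continuous states, zero versus nonzero dynamics, and the structure of the noise covariances.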

    Integrated Data Acquisition for State-of-the-Art Large-Bore Engine Test Cell

    Abstract: Internal combustion engines will play an important role on the road to decarbonization and a sustainable powertrain system in the maritime sector. Electrification of the maritime sector is currently difficult because of its very high energy demands, so internal combustion engines will remain the primary power source for ships in the near future. A novel combustion concept, reactivity-controlled compression ignition (RCCI), is one of the promising combustion technologies that enables simultaneously ultra-low NOx and soot emissions and high thermal efficiency. Although the concept has been under development for a long time, its feasibility for large-bore engine applications has not been publicly demonstrated. The goal of this thesis was to design and implement a new data acquisition system for the large-bore RCCI test bench in the University of Vaasa's VEBIC engine laboratory, as part of work package 3 (novel combustion and advanced aftertreatment) of the Clean Propulsion Technologies (CPT) project. The test bench was instrumented with new sensors, analyzers, and data acquisition hardware. The devices required to build the system were acquired, and their installation and electrical connections were carried out and supervised. Additionally, a data-storage workflow suitable for the new system was developed. Because the engine could not be started during the thesis work, a partial system test was carried out to validate system performance. The results of the partial test showed that the new data acquisition system can measure high-sampling-frequency signals and record them with reference to crank angle. The system designed and implemented in this thesis provides several improvements over its predecessor: the number of available high-sampling-frequency channels increased from 8 to 16, real-time post-processing is more flexible, and integration is significantly improved because high-speed and low-speed measurements can be recorded into a single file. Beyond these immediate improvements, the new system can be expanded to meet the future requirements of the test bench.

    Parsimonious Mahalanobis Kernel for the Classification of High Dimensional Data

    The classification of high dimensional data with kernel methods is considered in this article. Exploiting the emptiness property of high dimensional spaces, a kernel based on the Mahalanobis distance is proposed. The computation of the Mahalanobis distance requires the inversion of a covariance matrix. In high dimensional spaces, the estimated covariance matrix is ill-conditioned and its inversion is unstable or impossible. Using a parsimonious statistical model, namely the High Dimensional Discriminant Analysis model, the specific signal and noise subspaces are estimated for each considered class, making the inverse of the class-specific covariance matrix explicit and stable and leading to the definition of a parsimonious Mahalanobis kernel. An SVM-based framework is used for selecting the hyperparameters of the parsimonious Mahalanobis kernel by optimizing the so-called radius-margin bound. Experimental results on three high dimensional data sets show that the proposed kernel is suitable for classifying high dimensional data, providing better classification accuracies than the conventional Gaussian kernel.
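The core kernel construction above can be sketched with a diagonal inverse covariance, a simplification of the class-specific inverse covariance that the HDDA model makes explicit and stable; the function name and the diagonal form are illustrative, not the paper's full subspace construction.

```python
import math

def mahalanobis_kernel(x, y, inv_cov, scale=1.0):
    """Mahalanobis-distance kernel k(x, y) = exp(-d_M(x, y)^2 / (2 * scale^2)),
    where d_M^2 = (x - y)^T Sigma^{-1} (x - y). Here inv_cov is a per-feature
    inverse variance (diagonal Sigma^{-1}); the paper instead builds a stable
    full inverse from per-class signal and noise subspaces."""
    d2 = sum(ic * (a - b) ** 2 for ic, a, b in zip(inv_cov, x, y))
    return math.exp(-d2 / (2.0 * scale ** 2))
```

With inv_cov set to all ones this reduces to the conventional Gaussian kernel, which is the baseline the paper compares against.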

    Index to NASA Tech Briefs, 1975

    This index contains abstracts and four indexes (subject, personal author, originating Center, and Tech Brief number) for 1975 Tech Briefs.

    Dielectric Spectroscopy in Biomaterials: Agrophysics

    Dielectric properties depend on temperature and frequency and are related to various types of food. Predicting multiple physical characteristics of agri-food products has been the main objective of the non-destructive assessment approaches pursued in many studies on horticultural products and food materials. This review covers the fundamentals of dielectric properties, along with their concepts and principles. The different factors affecting the behavior of dielectric properties are examined, and applications to different products aimed at characterizing a diversity of chemical and physical properties are pointed out and referenced with their conclusions. Throughout the review, a detailed description of the various measurement techniques and the most popular equipment is presented. This compiled review thus serves as an updated reference for dielectric spectroscopy as applied in the agrophysics field.

    Soil Moisture Sensing via Swept Frequency Based Microwave Sensors

    There is a need for low-cost, high-accuracy measurement of water content in various materials. This study assesses the performance of a new microwave swept frequency domain instrument (SFI) that promises a low-cost, high-accuracy alternative to the traditional and more expensive time domain reflectometry (TDR). The technique obtains permittivity measurements of soils in the frequency domain using a through-transmission configuration, providing a frequency domain transmissometry (FDT) measurement. The measurement is comparable to time domain transmissometry (TDT), with the added advantage of separately quantifying the real and imaginary parts of the complex permittivity; the measured bulk permittivity is therefore more accurate than that provided by TDR, whose apparent permittivity is affected by signal loss, which can be significant in heavier soils. The experimental SFI was compared with a high-end 12 GHz TDR/TDT system across a range of soils at varying soil water contents and densities. Because propagation delay is the fundamental measurement of interest in the well-established TDR and TDT techniques, the first set of tests used precision propagation delay lines to assess the SFI instrument's ability to resolve propagation delays across the range a soil probe would present over the soil types and moisture levels typical of an agronomic cropping system. The results of the precision delay-line testing suggest the instrument can predict propagation delays across the range of 0 to 12,000 ps with an RMSE of ±105 ps and a coefficient of determination of r2 = 0.998.
    The second phase of tests leveraged the rich history of TDR for predicting soil moisture by using TDT measurements from a high-end Hewlett-Packard TDR/TDT instrument to benchmark the SFI instrument directly over a range of soil types at varying moisture levels. Comparing propagation delay of TDT to FDT, rather than using soil moisture as the benchmark, removes errors due to variations in packing density between soil water content levels, which are known to affect the calibration from TDR's measured propagation delay (converted to an apparent permittivity) to estimated soil water content, and so provides a direct comparison between the SFI instrument and the time domain technique of TDT. The tests used three soils (a sand, an Acuff loam, and an Olton clay loam) packed to varying bulk densities and prepared to provide a range of water contents and electrical conductivities by which to compare the performance of the SFI technology to TDT measurements of propagation delay. For each sample tested, the SFI and TDT instruments performed their measurements on the exact same probe, so both measured the same soil/soil-probe response, ensuring the most accurate possible comparison between the SFI instrument and a high-end TDT instrument. The test results give an estimated instrumental accuracy for the SFI of ±0.98% of full scale (RMSE basis) on the precision delay lines, and ±1.32% when the SFI was evaluated on the loam and clay-loam soils with TDT as the benchmark. Results from both experiments provide evidence that the low-cost SFI approach is a viable alternative to conventional TDR/TDT for high-accuracy applications.
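The frequency-domain delay estimate at the heart of the comparison above can be sketched simply: for a pure delay line the transmission phase is -2*pi*f*tau, so the delay tau is recoverable from the slope of phase versus frequency. The sketch below assumes already-unwrapped phase samples and is an illustration of the idea, not the SFI instrument's actual processing.

```python
import math

def delay_from_phase(freqs_hz, phases_rad):
    """Estimate propagation delay from a swept-frequency transmission
    measurement: phase(f) = -2*pi*f*tau for an ideal delay, so tau is the
    negative least-squares slope of phase vs. frequency divided by 2*pi.
    Assumes phases are unwrapped (no 2*pi jumps)."""
    n = len(freqs_hz)
    fm = sum(freqs_hz) / n
    pm = sum(phases_rad) / n
    num = sum((f - fm) * (p - pm) for f, p in zip(freqs_hz, phases_rad))
    den = sum((f - fm) ** 2 for f in freqs_hz)
    return -num / den / (2 * math.pi)
```

A real soil measurement adds dispersion and loss, which is why the study also separates the real and imaginary parts of the permittivity rather than fitting a single slope.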

    The role of wireless sensor networks (WSNs) in industrial oil and gas condition monitoring

    Wireless sensor networks have a vast range of applications, including environmental monitoring, military, ecology, agriculture, inventory control, robotics, and health care. This paper focuses on the monitoring and protection of oil and gas operations using wireless sensor networks that are optimized to decrease installation and maintenance costs and energy requirements while increasing reliability and communication efficiency. In addition, simulation experiments using the proposed model are presented. Such models could provide new tools for research in predictive maintenance and condition-based monitoring of factory machinery in general, and of “open architecture machining systems” in particular. Wireless sensing no longer needs to be relegated to locations where access is difficult or cabling is impractical. Wireless condition monitoring systems can be cost-effectively implemented in extensive applications that were historically handled by running routes with data collectors. The result would be a lower-cost program with more frequent data collection, increased safety, and lower spare-parts inventories. Facilities would be able to run leaner because they would have more confidence in their ability to avoid downtime.

    Component-level aggregation of probabilistic PCA mixtures using variational-Bayes

    Technical Report. This report is an extended version of our ICPR'2010 paper. This paper proposes a technique for aggregating mixtures of probabilistic principal component analyzers, which are a powerful probabilistic generative model for coping with high-dimensional, nonlinear data sets. Aggregation is carried out through Bayesian estimation with a specific prior and an original variational scheme. We demonstrate how such models may be aggregated by accessing model parameters only, rather than the original data, which can be advantageous for learning from distributed data sets. Experimental results illustrate the effectiveness of the proposal.
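The key idea above, combining models through their parameters alone, can be illustrated in its simplest form: a moment-preserving merge of two Gaussian components given only their weights, means, and variances. The paper's actual scheme is a variational-Bayes aggregation over full PPCA mixtures; this scalar merge is only an illustrative sketch.

```python
def merge_components(w1, m1, v1, w2, m2, v2):
    """Moment-preserving merge of two 1-D Gaussian components, using only
    their parameters (weight, mean, variance) and never the original data.
    The merged component matches the total weight, mean, and variance of
    the two-component mixture."""
    w = w1 + w2
    m = (w1 * m1 + w2 * m2) / w
    # Law of total variance: within-component variance plus between-component spread.
    v = (w1 * (v1 + (m1 - m) ** 2) + w2 * (v2 + (m2 - m) ** 2)) / w
    return w, m, v
```

Operating on parameters rather than data is what makes this style of aggregation attractive for distributed data sets, as the abstract notes.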