
    The composite load spectra project

    Probabilistic methods and generic load models capable of simulating the load spectra induced in space propulsion system components are being developed. Four engine component types (transfer ducts, turbine blades, liquid oxygen posts, and the turbopump oxidizer discharge duct) were selected as representative hardware examples. The composite load spectra that simulate the probabilistic loads for these components are typically used as input loads for a probabilistic structural analysis. The knowledge-based system approach used for the composite load spectra project provides an ideal environment for incremental development. The intelligent-database paradigm employed in developing the expert system provides a smooth coupling between numerical processing and symbolic (information) processing. Large volumes of engine load information and engineering data are stored in database format and managed by a database management system. Numerical procedures for probabilistic load simulation, as well as database management functions, are controlled by rule modules; rules are hard-wired as decision trees into these modules to perform process-control tasks. Some modules retrieve load information and models; others select loads and models to carry out quick load calculations or to prepare an input file for a full duty-cycle, time-dependent load simulation. The composite load spectra load expert system implemented today is capable of performing intelligent rocket engine load spectra simulation. Further development will add a tutorial capability so that users can learn from the system.
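    The process-control rule modules described above can be pictured as hard-wired decision trees. A minimal sketch, with entirely hypothetical component and model names (the actual rule base is far larger):

```python
# Hypothetical sketch of a hard-wired decision-tree rule module of the
# kind used for process control: each rule tests facts about the
# requested analysis and routes to a load-retrieval action. All names
# (component types, model names) are illustrative, not from the system.

def select_load_model(component: str, full_duty_cycle: bool) -> str:
    """Route a load request to a load model, decision-tree style."""
    if component == "turbine_blade":
        # Blade loads are dominated by centrifugal and thermal terms.
        return "duty_cycle_simulation" if full_duty_cycle else "quick_blade_loads"
    elif component in ("transfer_duct", "oxidizer_discharge_duct"):
        # Duct loads come from internal pressure and flow fluctuation.
        return "duct_pressure_model"
    elif component == "lox_post":
        return "lox_post_vibration_model"
    else:
        raise ValueError(f"no rule for component {component!r}")

print(select_load_model("turbine_blade", full_duty_cycle=True))
```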

    The NESSUS finite element code

    The objective of this development is to provide a new analysis tool that integrates the structural modeling versatility of a modern finite element code with the latest advances in probabilistic modeling and structural reliability. Version 2.0 of the NESSUS finite element code was released last February and is currently being exercised on a set of problems representative of typical Space Shuttle Main Engine (SSME) applications. NESSUS 2.0 allows linear elastostatic and eigenvalue analysis of structures with uncertain geometry, material properties and boundary conditions, subjected to a random mechanical and thermal loading environment. The NESSUS finite element code is a key component in a broader software system consisting of five major modules. NESSUS/EXPERT is an expert system under development at Southwest Research Institute, with the objective of centralizing all component-specific knowledge useful for conducting probabilistic analysis of typical SSME components. NESSUS/FEM contains the finite element code used for structural analysis and parameter sensitivity evaluation of these components. The task of parametrizing a finite element mesh in terms of the random variables present is facilitated by the probabilistic data preprocessor NESSUS/PRE. An external database file manages the bulk of the data generated by NESSUS/FEM.
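    A toy illustration of the kind of analysis described above: a Monte Carlo sketch of a linear elastostatic problem with an uncertain material property and a random load. The bar geometry, distributions and sample count are illustrative assumptions, not values from NESSUS:

```python
# Minimal Monte Carlo sketch of probabilistic linear elastostatics:
# a bar with uncertain Young's modulus E and random axial load F;
# we estimate the distribution of tip displacement u = F*L/(E*A).
# All numbers are illustrative only.
import random
import statistics

random.seed(0)
L, A = 1.0, 1e-4                     # bar length [m], cross-section [m^2]
samples = []
for _ in range(10_000):
    E = random.gauss(200e9, 10e9)    # uncertain material property [Pa]
    F = random.gauss(1e4, 1e3)       # random mechanical load [N]
    samples.append(F * L / (E * A))  # deterministic "solve" per sample

mean_u = statistics.mean(samples)
std_u = statistics.pstdev(samples)
print(f"mean displacement {mean_u:.2e} m, std {std_u:.2e} m")
```

    In a real probabilistic finite element run the inner "solve" is a full FE analysis and smarter sampling (e.g. fast probability integration) replaces brute-force Monte Carlo, but the structure of the computation is the same.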

    Scalable Statistical Modeling and Query Processing over Large Scale Uncertain Databases

    The past decade has witnessed a large number of novel applications that generate imprecise, uncertain and incomplete data. Examples include monitoring infrastructures such as RFIDs, sensor networks and web-based applications such as information extraction, data integration, social networking and so on. In my dissertation, I addressed several challenges in managing such data and developed algorithms for efficiently executing queries over large volumes of such data. Specifically, I focused on the following challenges. First, for meaningful analysis of such data, we need the ability to remove noise and infer useful information from uncertain data. To address this challenge, I first developed a declarative system for applying dynamic probabilistic models to databases and data streams. The output of such probabilistic modeling is probabilistic data, i.e., data annotated with probabilities of correctness/existence. Often, the data also exhibits strong correlations. Although there is prior work in managing and querying such probabilistic data using probabilistic databases, those approaches largely assume independence and cannot handle probabilistic data with rich correlation structures. Hence, I built a probabilistic database system that can manage large-scale correlations and developed algorithms for efficient query evaluation. Our system allows users to provide uncertain data as input and to specify arbitrary correlations among the entries in the database. In the back end, we represent correlations as a forest of junction trees, an alternative representation for probabilistic graphical models (PGM). We execute queries over the probabilistic database by transforming them into message passing algorithms (inference) over the junction tree. However, traditional algorithms over junction trees typically require accessing the entire tree, even for small queries. 
Hence, I developed an index data structure over the junction tree, called INDSEP, that allows us to circumvent this process and thereby scalably evaluate inference queries, aggregation queries and SQL queries over the probabilistic database. Finally, query evaluation in probabilistic databases typically returns output tuples along with their probability values. However, the existing query evaluation model provides very little intuition to the users: for instance, a user might want to know "Why is this tuple in my result?", "Why does this output tuple have such a high probability?", or "Which are the most influential input tuples for my query?". Hence, I designed a query evaluation model, and a suite of algorithms, that provide users with explanations for query results and enable them to perform sensitivity analysis to better understand those results.
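    The junction-tree inference described above can be illustrated at toy scale: two cliques sharing one separator variable, with a single message pass yielding a marginal. The binary variables and factor values below are made up for illustration:

```python
# Toy message passing over a two-clique junction tree, the kind of
# inference run under the hood when evaluating a query over correlated
# probabilistic tuples. Variables are binary; factors are illustrative.
from itertools import product

# Cliques {A,B} and {B,C} share separator {B}.
phi_ab = {(a, b): [[0.6, 0.1], [0.1, 0.2]][a][b] for a, b in product((0, 1), repeat=2)}
phi_bc = {(b, c): [[0.5, 0.5], [0.2, 0.8]][b][c] for b, c in product((0, 1), repeat=2)}

# Message from clique {A,B} to clique {B,C}: marginalize A out.
msg_b = {b: sum(phi_ab[(a, b)] for a in (0, 1)) for b in (0, 1)}

# Marginal of C: combine the incoming message with phi_bc, sum out B.
unnorm = {c: sum(phi_bc[(b, c)] * msg_b[b] for b in (0, 1)) for c in (0, 1)}
z = sum(unnorm.values())
p_c = {c: unnorm[c] / z for c in (0, 1)}
print(p_c)
```

    The point of INDSEP, per the abstract, is precisely to avoid touching every clique for such a query when the junction tree is large.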

    Text Dependent Speaker Verification

    The goal of this Bachelor's thesis was to design a text-dependent speaker recognition system. Several approaches were tested on the MIT database, which contains recordings with an average length of 0.46 s. The best-performing approach combines a DTW system that uses posterior probability estimates of phonemes (posteriograms) produced by a phoneme recognizer with an acoustic SID system based on iVectors and PLDA (Probabilistic Linear Discriminant Analysis). Fusing these two systems with a neural network gives the best results: an EER of 17.84% for women and 16.38% for men, a relative improvement of 49.9% for women and 54.2% for men over the acoustic recognizer alone.
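    The DTW component can be sketched in a few lines. In the actual system each frame is a phoneme-posteriogram vector; here scalar sequences and absolute difference stand in for frames and their distance:

```python
# Classic dynamic time warping (DTW): the minimal cumulative cost of a
# monotone alignment between two sequences. Scalars replace posteriogram
# frames for brevity; swap in a vector distance for the real case.
def dtw(x, y):
    INF = float("inf")
    n, m = len(x), len(y)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])       # local frame distance
            D[i][j] = cost + min(D[i - 1][j],     # insertion
                                 D[i][j - 1],     # deletion
                                 D[i - 1][j - 1]) # match
    return D[n][m]

print(dtw([1, 2, 3, 4], [1, 2, 2, 3, 4]))  # identical content, warped timing
```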

    ELISA: Structure-Function Inferences based on statistically significant and evolutionarily inspired observations

    The problem of functional annotation based on homology modeling is central to current bioinformatics research. Researchers have noted regularities in sequence, structure and even chromosome organization that allow valid functional cross-annotation. However, these methods produce many false negatives due to the limited specificity inherent in the system. We want to create an evolutionarily inspired organization of data that approaches the issue of structure-function correlation from a new, probabilistic perspective. Such an organization has possible applications in phylogeny, modeling of functional evolution and structural determination. ELISA (Evolutionary Lineage Inferred from Structural Analysis) is an online database that combines functional annotation with structure and sequence homology modeling to place proteins into sequence-structure-function "neighborhoods". The atomic unit of the database is a set of sequences together with the structural templates those sequences encode. The graph built from the structural comparison of these templates is called the PDUG (protein domain universe graph). We introduce a method of functional inference through a probabilistic calculation performed on an arbitrary set of PDUG nodes. Further, all PDUG structures are mapped onto all fully sequenced proteomes, providing an easy interface for evolutionary analysis and research into comparative proteomics. ELISA is the first database designed with applicability to evolutionary structural genomics explicitly in mind. Availability: The database is available at
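    A hedged sketch of what a neighborhood-based probabilistic inference over PDUG nodes might look like: estimate P(function | node) from the annotations of a node's structural neighbors, with Laplace smoothing. The graph, annotations and smoothing choice are illustrative assumptions, not the actual ELISA calculation:

```python
# Toy neighborhood vote over a structural-similarity graph: the function
# distribution of a node is estimated from its annotated neighbors.
# Graph, annotations and smoothing are hypothetical, for illustration.
from collections import Counter

neighbors_of = {"domainX": ["d1", "d2", "d3", "d4"]}
annotation = {"d1": "kinase", "d2": "kinase", "d3": "phosphatase", "d4": "kinase"}

def function_probabilities(node, alpha=1.0):
    """P(function | node) from neighbor annotations, Laplace-smoothed."""
    counts = Counter(annotation[n] for n in neighbors_of[node])
    total = sum(counts.values()) + alpha * len(counts)
    return {f: (c + alpha) / total for f, c in counts.items()}

print(function_probabilities("domainX"))
```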

    The MAPPER2 Database: a multi-genome catalog of putative transcription factor binding sites

    The MAPPER2 database (http://genome.ufl.edu/mapperdb) is a component of MAPPER2, a web-based system for the analysis of transcription factor binding sites in multiple genomes. The database contains predicted binding sites identified in the promoters of all human, mouse and Drosophila genes using 1017 probabilistic models representing over 600 different transcription factors. In this article we outline the current contents of the database and describe its web-based user interface in detail. We then discuss ongoing work to extend the database contents with experimental data and to add analysis capabilities. Finally, we provide information about recent improvements to the hardware and software platform that MAPPER2 is based on.
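    A position weight matrix (PWM) is the simplest probabilistic model of a binding site and illustrates the kind of scoring such models perform. The 3-bp motif below is made up, and MAPPER2's own models are more elaborate; scores are log-likelihood ratios against a uniform background:

```python
# Toy PWM scan: score every window of a sequence against a (made-up)
# 3-bp motif model and report the best-scoring hit and its offset.
import math

pwm = [  # P(base at position i); one dict per motif position
    {"A": 0.7, "C": 0.1, "G": 0.1, "T": 0.1},
    {"A": 0.1, "C": 0.1, "G": 0.7, "T": 0.1},
    {"A": 0.1, "C": 0.7, "G": 0.1, "T": 0.1},
]
BACKGROUND = 0.25  # uniform background base frequency

def score(site):
    """Log2 likelihood ratio of a window under the motif vs background."""
    return sum(math.log2(pwm[i][b] / BACKGROUND) for i, b in enumerate(site))

def best_hit(sequence):
    """(best score, offset) over all windows of motif length."""
    k = len(pwm)
    return max((score(sequence[i:i + k]), i)
               for i in range(len(sequence) - k + 1))

print(best_hit("TTAGCAA"))  # the AGC window should win
```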

    Conversion of Piano Recording from WAV to MIDI

    The aim of this thesis is to propose a system for the automatic conversion of polyphonic piano recordings from the WAV audio format to MIDI. The thesis describes the problems of tone recognition in music recordings and proposes a solution built on a probabilistic model that uses the Probabilistic Latent Component Analysis (PLCA) method. Recordings of individual digital piano tones were used to train the model. The proposed system was then tested on a set of synthesized recordings of classical music from the Classical Piano Midi database and on a set of recordings of a Korg SP-250 piano, and evaluated using several metrics. Finally, the recognition results are compared with those of other existing systems.
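    The core of PLCA is an EM decomposition of a magnitude spectrogram V(f, t) into spectral bases P(f|z) and time activations P(t|z). A minimal sketch on a toy matrix, assuming random initialization and toy dimensions (in the real system each component z would correspond to a trained piano-tone template):

```python
# Minimal PLCA sketch: factor a toy "spectrogram" V[f][t] into K
# components via EM. Matrix values, dimensions and iteration count are
# illustrative; this is the bare algorithm, not the thesis system.
import random

random.seed(1)
V = [[4, 0, 4], [0, 3, 0], [4, 0, 4]]       # 3 freq bins x 3 time frames
F, T, K = 3, 3, 2                           # bins, frames, components
norm = lambda v: [x / sum(v) for x in v]
pz = [1.0 / K] * K                                                   # P(z)
pfz = [norm([random.random() for _ in range(F)]) for _ in range(K)]  # P(f|z)
ptz = [norm([random.random() for _ in range(T)]) for _ in range(K)]  # P(t|z)

for _ in range(100):                        # EM iterations
    nz = [0.0] * K
    nfz = [[0.0] * F for _ in range(K)]
    ntz = [[0.0] * T for _ in range(K)]
    for f in range(F):
        for t in range(T):
            # E-step: posterior P(z | f, t) under the current model.
            post = [pz[z] * pfz[z][f] * ptz[z][t] for z in range(K)]
            s = sum(post) or 1.0
            for z in range(K):
                w = V[f][t] * post[z] / s   # expected mass assigned to z
                nz[z] += w
                nfz[z][f] += w
                ntz[z][t] += w
    pz = norm(nz)                           # M-step: renormalize counts
    pfz = [norm(r) for r in nfz]
    ptz = [norm(r) for r in ntz]

print([round(p, 3) for p in pz])            # learned component weights
```

    For transcription, P(f|z) would be fixed to the trained tone templates and only P(t|z) and P(z) updated, so the activations directly indicate which notes sound when.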