2,105 research outputs found

    BMC Genomics

    Background: Poxviruses constitute one of the largest and most complex animal virus families known. The notorious smallpox disease has been eradicated and the virus contained, but its simian sister, monkeypox, is an emerging, untreatable infectious disease that kills 1 to 10 % of its human victims. The emergence of monkeypox outbreaks in humans and the need to monitor potential malicious release of smallpox virus require the development of methods for rapid virus identification. Whole-genome sequencing (WGS) is an emerging technology with increasing application to the diagnosis of diseases and the identification of outbreak pathogens, but "finishing" such a genome is a laborious and time-consuming process that is not easily automated. To date, the large, complete poxvirus genomes have not been studied comprehensively in terms of applying WGS techniques and evaluating genome assembly algorithms.
    Results: To explore the limitations to finishing a poxvirus genome from short reads, we first analyze the repetitive regions in a monkeypox genome and evaluate genome assembly on the simulated reads. We also report on procedures and insights relevant to the assembly of genomes from realistically short reads. Finally, we propose a neural network method (Neural-KSP) to "finish" the process by closing the gaps remaining after conventional assembly, as the final stage in a protocol to elucidate clinical poxvirus genomic sequences.
    Conclusions: The protocol may prove useful for any clinical viral isolate (regardless of whether a reference-strain sequence is available) and especially useful for genomes confounded by many global and local repetitive sequences embedded in them. This work highlights the feasibility of finishing real, complex genomes by systematically analyzing genetic characteristics, thus remedying existing assembly shortcomings with a neural network method.
    Such finished sequences may enable clinicians to track the genetic distance between viral isolates, which provides a powerful epidemiological tool. (Published 2016-08-31; PMID: 27585810; PMCID: PMC5009526.)
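The repeat analysis described in the Results can be illustrated with a short, self-contained sketch: counting k-mers that occur more than once shows why reads shorter than a repeat unit cannot be placed uniquely during assembly. The toy genome, the value of k, and the function name below are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch: count exact k-mer repeats in a toy genome to show why
# short-read assembly stalls at repetitive regions. The genome string,
# k value, and repeat unit are invented for illustration.
from collections import Counter

def repeated_kmers(genome: str, k: int) -> dict:
    """Return k-mers occurring more than once, with their counts."""
    counts = Counter(genome[i:i + k] for i in range(len(genome) - k + 1))
    return {kmer: n for kmer, n in counts.items() if n > 1}

# A toy sequence with an embedded tandem repeat ("ATGC" x 3).
genome = "TTACG" + "ATGC" * 3 + "GGTAC"
repeats = repeated_kmers(genome, k=4)
print(repeats)  # any read shorter than the full repeat cannot place it uniquely
```

In a real poxvirus genome the repeats are far longer than the read length, which is exactly the situation where a conventional assembler leaves gaps for a finishing step to close.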

    Machine learning and its applications in reliability analysis systems

    In this thesis, we are interested in exploring some aspects of Machine Learning (ML) and its application in Reliability Analysis systems (RAs). We begin by investigating some ML paradigms and their techniques, go on to discuss the possible applications of ML in improving RAs performance, and lastly give guidelines for the architecture of learning RAs. Our survey of ML covers both Neural Network learning and Symbolic learning. In symbolic learning, five types of learning and their applications are discussed: rote learning, learning from instruction, learning from analogy, learning from examples, and learning from observation and discovery. The Reliability Analysis systems (RAs) presented in this thesis are mainly designed for maintaining plant safety, supported by two functions: a risk analysis function, i.e., failure mode effect analysis (FMEA); and a diagnosis function, i.e., real-time fault location (RTFL). Three approaches to creating RAs have been discussed. According to the results of our survey, we suggest that currently the best design of RAs is to embed model-based RAs, i.e., MORA (as software), in a neural-network-based computer system (as hardware). However, there are still improvements that can be made through the application of Machine Learning. By implanting a 'learning element', MORA becomes the learning MORA (La MORA) system: a learning Reliability Analysis system with the power of automatic knowledge acquisition, inconsistency checking, and more. To conclude the thesis, we propose an architecture for La MORA.
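As an illustration of the rote-learning paradigm listed above, the following minimal sketch shows a 'learning element' that acquires case-conclusion pairs and performs a simple inconsistency check. The class and method names (`RoteLearner`, `learn`, `recall`) and the example diagnosis are hypothetical and not taken from the thesis.

```python
# Minimal sketch of rote learning: the learning element stores solved
# cases verbatim and replays them, rejecting contradictory knowledge.
class RoteLearner:
    def __init__(self):
        self.knowledge = {}  # acquired case -> conclusion pairs

    def learn(self, case, conclusion):
        # Inconsistency check: refuse a conclusion that contradicts
        # previously acquired knowledge for the same case.
        if case in self.knowledge and self.knowledge[case] != conclusion:
            raise ValueError(f"inconsistent knowledge for {case!r}")
        self.knowledge[case] = conclusion

    def recall(self, case):
        return self.knowledge.get(case)  # None if never encountered

learner = RoteLearner()
learner.learn("pump vibration high", "bearing wear")
print(learner.recall("pump vibration high"))  # bearing wear
```

Rote learning is the weakest of the five paradigms (no generalisation), but even this form gives a diagnosis system the automatic knowledge acquisition and consistency checking the thesis attributes to La MORA.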

    Oscillatory dynamics as a mechanism of integration in complex networks of neurons

    The large-scale integrative mechanisms of the brain, the means by which the activity of functionally segregated neuronal regions is combined, are not well understood. There is growing agreement that a flexible mechanism of integration must be present in order to support the myriad changing cognitive demands placed upon us. Neuronal communication through phase-coherent oscillation stands as the prominent theory of cognitive integration. The work presented in this thesis explores the role of oscillation and synchronisation in the transfer and integration of information in the brain. It is first shown that complex metastable dynamics suitable for modelling phase-coherent neuronal synchronisation emerge from modularity in networks of delay- and pulse-coupled oscillators. Within a restricted parameter regime these networks display a constantly changing set of partially synchronised states in which some modules remain highly synchronised while others desynchronise. An examination of network phase dynamics shows increasing coherence with increasing connectivity between modules. The metastable chimera states that emerge from the activity of modular oscillator networks are demonstrated to be synchronous with a constant phase relationship, as would be required of a mechanism of large-scale neural integration. A specific example of functional phase-coherent synchronisation within a spiking neural system is then developed. Competitive stimulus selection between converging population-encoded stimuli is demonstrated through entrainment of oscillation in receiving neurons. The behaviour of the model is shown to be analogous to well-known competitive processes of stimulus selection, such as binocular rivalry, matching key experimentally observed properties for the distribution and correlation of periods of entrainment under differing stimulus strengths.
    Finally, two new measures of network centrality, knotty-centrality and set betweenness centrality, are developed and applied to empirically derived human structural brain connectivity data. It is shown that human brain organisation exhibits a topologically central core network within a modular structure, consistent with the generation of synchronous oscillation with functional phase dynamics.
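The modular oscillator dynamics described above can be sketched with a toy two-module Kuramoto model: strong intra-module and weak inter-module coupling lets each module synchronise internally. This is a simplified stand-in that assumes instantaneous sinusoidal coupling rather than the delay- and pulse-coupled oscillators used in the thesis; all parameter values are illustrative.

```python
# Hedged sketch: two-module Kuramoto network with strong intra-module
# and weak inter-module coupling, integrated with the Euler method.
import cmath
import math
import random

random.seed(0)
N, half = 16, 8                       # oscillators in total / per module
omega = [random.gauss(0.0, 0.1) for _ in range(N)]        # natural frequencies
theta = [random.uniform(0, 2 * math.pi) for _ in range(N)]  # initial phases
k_in, k_out, dt = 2.0, 0.05, 0.05     # intra/inter coupling, Euler step

def coupling(i):
    """Mean-field pull on oscillator i from all others."""
    same = range(0, half) if i < half else range(half, N)
    return sum((k_in if j in same else k_out) * math.sin(theta[j] - theta[i])
               for j in range(N)) / N

for _ in range(2000):                 # integrate d(theta)/dt = omega + coupling
    dtheta = [omega[i] + coupling(i) for i in range(N)]
    theta = [(theta[i] + dt * dtheta[i]) % (2 * math.pi) for i in range(N)]

def order(phases):
    """Kuramoto order parameter r in [0, 1]; r near 1 means synchrony."""
    return abs(sum(cmath.exp(1j * p) for p in phases)) / len(phases)

r1, r2 = order(theta[:half]), order(theta[half:])
print(round(r1, 2), round(r2, 2))     # each module synchronises internally
```

Sweeping `k_out` upward in this sketch increases coherence between the modules, mirroring the thesis observation that inter-module connectivity raises network phase coherence; the chimera regime sits at intermediate coupling, which this fixed-parameter toy does not explore.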

    Path Representation Learning in Road Networks


    Developing of Ultrasound Experimental Methods using Machine Learning Algorithms for Application of Temperature Monitoring of Nano-Bio-Composites Extrusion

    In industry, fiber degradation during processing of biocomposites in the extruder is a problem that requires a reliable solution to save the time and money wasted on producing damaged material. In this thesis, we focus on a practical solution that can monitor the temperature change that causes fiber degradation and material damage, so that the process can be stopped when damage occurs. Ultrasound can be used to detect the temperature change inside the material during extrusion. A monitoring approach for the extruder process has been developed using an ultrasound system and machine learning techniques. A measurement cell was built to form a dataset of ultrasound signals at different temperatures for analysis. Machine learning algorithms were applied through a machine learning platform to classify the dataset based on temperature. The dataset was classified with 97 % accuracy into two categories representing ultrasound signals above and below the damage temperature (190 °C). This approach could be used in industry to send an alarm or a temperature-control signal when material damage is detected. Biocomposites are at the core of materials research and development in the automotive industry. A melt-mixing process was used to mix the biocomposite material with multi-walled carbon nanotubes (MWCNTs) for the purpose of enhancing the mechanical and thermal properties of the biocomposite. The resulting nano-bio-composite was tested via different types of thermal and mechanical tests to evaluate its performance relative to the plain biocomposite. The developed material showed enhanced mechanical and thermal properties, indicating high potential for future applications.
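The above/below-threshold classification step can be sketched as follows, assuming synthetic features (time of flight and amplitude) and a nearest-centroid rule. The thesis does not specify which features or classifier were used, so everything here, including the toy temperature dependence of the signal, is an illustrative stand-in; only the 190 °C threshold comes from the abstract.

```python
# Hedged sketch: classify synthetic "ultrasound features" as above or
# below the 190 degC damage threshold with a nearest-centroid rule.
import math
import random

random.seed(1)
THRESHOLD = 190.0  # degC, damage temperature from the abstract

def features(temp):
    # Toy physics (assumed, not from the thesis): time of flight drifts
    # with temperature, amplitude attenuates; noise keeps it non-trivial.
    tof = 10.0 + 0.02 * temp + random.gauss(0, 0.1)
    amp = 1.0 - 0.002 * temp + random.gauss(0, 0.02)
    return (tof, amp)

train = [(features(t), t > THRESHOLD)
         for t in (random.uniform(150, 230) for _ in range(200))]

def centroid(cls):
    pts = [x for x, y in train if y == cls]
    return tuple(sum(c) / len(pts) for c in zip(*pts))

centroids = {cls: centroid(cls) for cls in (False, True)}

def predict(x):
    """True = above damage temperature, False = below."""
    return min(centroids, key=lambda c: math.dist(x, centroids[c]))

test = [(features(t), t > THRESHOLD)
        for t in (random.uniform(150, 230) for _ in range(100))]
accuracy = sum(predict(x) == y for x, y in test) / len(test)
print(f"accuracy: {accuracy:.0%}")
```

In a deployed monitor, a `predict(...) == True` result would be the event that triggers the alarm or temperature-control signal described above.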

    Hardware Considerations for Signal Processing Systems: A Step Toward the Unconventional.

    As we progress into the future, signal processing algorithms are becoming more computationally intensive and power hungry, while the desire for mobile products and low-power devices is also increasing. An integrated ASIC solution is one of the primary ways chip developers can improve performance and add functionality while keeping the power budget low. This work discusses ASIC hardware for both conventional and unconventional signal processing systems, and how integration, error resilience, emerging devices, and new algorithms can be leveraged by signal processing systems to further improve performance and enable new applications. Specifically, this work presents three case studies: 1) a conventional and highly parallel mixed-signal cross-correlator ASIC for a weather satellite performing real-time synthetic aperture imaging, 2) an unconventional native stochastic computing architecture enabled by memristors, and 3) two unconventional sparse neural network ASICs for feature extraction and object classification. As improvements from technology scaling alone slow down, and the demand for energy-efficient mobile electronics increases, such optimization techniques at the device, circuit, and system level will become more critical to advancing signal processing capabilities in the future.
    PhD thesis, Electrical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/116685/1/knagphil_1.pd
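The stochastic computing architecture in case study 2 rests on an idea simple enough to sketch directly: a value in [0, 1] is encoded as the probability of a 1 in a random bitstream, after which a single AND gate multiplies two values. The stream length and inputs below are illustrative assumptions; the memristor implementation itself is not modelled.

```python
# Hedged sketch of unipolar stochastic computing: multiplication via a
# bitwise AND of two random bitstreams.
import random

random.seed(42)
LENGTH = 100_000  # longer streams -> lower encoding variance

def encode(p):
    """Unipolar stochastic bitstream with P(bit = 1) = p."""
    return [1 if random.random() < p else 0 for _ in range(LENGTH)]

def decode(stream):
    """Recover the encoded value as the fraction of 1s."""
    return sum(stream) / len(stream)

a, b = encode(0.8), encode(0.5)
product = [x & y for x, y in zip(a, b)]  # one AND gate per bit position
print(round(decode(product), 2))  # approximately 0.8 * 0.5 = 0.40
```

The appeal for hardware is that an operation costing a full multiplier array in binary arithmetic shrinks to one gate per bit, at the price of precision that grows only with stream length, which is why error-resilient signal processing workloads are the natural fit.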