46 research outputs found

    Accelerated face detector training using the PSL framework

    We train a face detection system using the PSL framework [1], which combines the AdaBoost learning algorithm and Haar-like features. We demonstrate the ability of this framework to overcome some of the challenges inherent in training classifiers structured as cascades of boosted ensembles (CoBE). The PSL classifiers are compared to Viola-Jones-type cascaded classifiers. We establish the ability of the PSL framework to produce classifiers in a complex domain in a significantly reduced time frame. They also comprise fewer boosted ensembles, albeit at the price of increased false detection rates on our test dataset. We also report results from a broader range of experiments carried out on the PSL framework to shed more light on the effects of variations in its adjustable training parameters.
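
    As a concrete illustration of the cascade structure mentioned above, the sketch below shows how a cascade of boosted ensembles evaluates a candidate window: each stage is a weighted vote of Haar-feature weak classifiers, and a window is accepted only if it passes every stage. The stage weights, thresholds, and feature values here are invented for illustration and are not the PSL framework's actual model.

        # Minimal sketch of a cascade of boosted ensembles (CoBE) in the
        # Viola-Jones style. All numbers below are hypothetical.

        def weak_classifier(feature_value, threshold, polarity):
            """Haar-like weak learner: vote +1/-1 by thresholding one feature."""
            return 1 if polarity * feature_value < polarity * threshold else -1

        def stage_passes(features, stage):
            """A boosted ensemble: weighted vote of weak learners vs. a stage threshold."""
            score = sum(alpha * weak_classifier(features[i], t, p)
                        for alpha, (i, t, p) in zip(stage["alphas"], stage["weak"]))
            return score >= stage["threshold"]

        def detect(features, cascade):
            """A window is reported as a face only if it survives every stage."""
            return all(stage_passes(features, s) for s in cascade)

        # Hypothetical two-stage cascade over three Haar-like feature values.
        cascade = [
            {"alphas": [0.8, 0.5], "weak": [(0, 0.3, 1), (1, 0.6, -1)], "threshold": 0.0},
            {"alphas": [1.2], "weak": [(2, 0.4, 1)], "threshold": 0.0},
        ]
        print(detect([0.2, 0.7, 0.3], cascade))  # True: passes both stages

    Early stages with few weak learners reject most candidate windows cheaply; the PSL framework's reported gains concern how such cascades are trained, which this evaluation sketch does not cover.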

    Exploring a Modelling Method with Semantic Link Network and Resource Space Model

    To model complex reality, it is necessary to develop a powerful semantic model. A rational approach is to integrate a relational view and a multi-dimensional view of reality. The Semantic Link Network (SLN) is a semantic model based on a relational view, and the Resource Space Model (RSM) is a multi-dimensional model for managing, sharing and specifying versatile resources with a universal resource observation. The motivation of this research consists of four aspects: (1) verify the roles of the Semantic Link Network and the Resource Space Model in effectively managing various types of resources, (2) demonstrate the advantages of the Resource Space Model and the Semantic Link Network, (3) uncover the underlying rules through applications, and (4) generalize a methodology for modelling complex reality and managing various resources. The main contribution of this work consists of the following aspects: 1. A new text summarization method is proposed by segmenting a document into clauses based on semantic discourse relations and then ranking and extracting the informative clauses according to their relations and roles. The method benefits from using the semantic link network, ranking techniques and language characteristics. Compared with other summarization approaches, the proposed approach based on semantic relations achieves a higher recall score. Three implications are obtained from this research. 2. An SLN-based model for recommending research collaboration is proposed by extracting a semantic link network of different types of semantic nodes and different types of semantic links from scientific publications. Experiments on three data sets of scientific publications show that the model achieves good performance in predicting future collaborators. This research further unveils that different semantic links play different roles in representing texts. 3. A multi-dimensional method for managing software engineering processes is developed. Software engineering processes are mapped into multiple dimensions to support the analysis, development and maintenance of software systems. It can be used to uniformly classify and manage software methods and models through multiple dimensions so that software systems can be developed with appropriate methods. Interfaces for visualizing the Resource Space Model are developed to support the proposed method by keeping consistency among the interface, the structure of the model and faceted navigation.
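
    As a rough, hypothetical illustration of the clause-ranking idea in contribution 1 (the segmentation rule and relation weights below are our own simplification; the thesis segments on semantic discourse relations rather than punctuation), one can score each clause by the semantic links it participates in and extract the top-ranked clauses:

        # Toy sketch of extractive summarization by clause ranking.
        # Clause segmentation and relation weights are deliberately
        # simplistic stand-ins, not the thesis's method.
        import re

        RELATION_WEIGHTS = {"cause": 2.0, "sequence": 1.0, "elaboration": 0.5}  # assumed

        def segment_clauses(text):
            # Crude stand-in for discourse-based segmentation: split on punctuation.
            return [c.strip() for c in re.split(r"[.;,]", text) if c.strip()]

        def rank_clauses(clauses, links):
            # links: (clause_index, clause_index, relation) semantic links.
            scores = [0.0] * len(clauses)
            for i, j, rel in links:
                w = RELATION_WEIGHTS.get(rel, 0.0)
                scores[i] += w
                scores[j] += w
            return sorted(range(len(clauses)), key=lambda k: -scores[k])

        def summarize(text, links, k=2):
            clauses = segment_clauses(text)
            top = sorted(rank_clauses(clauses, links)[:k])  # restore document order
            return ". ".join(clauses[i] for i in top)

        text = "The storm cut the power, so the pumps stopped; engineers restarted them"
        links = [(0, 1, "cause"), (1, 2, "sequence")]
        print(summarize(text, links, k=2))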

    Nonlinear wave-particle resonance in deterministic and stochastic kinetic plasmas

    In kinetic plasma physics, BGK modes are ubiquitous solutions to the Vlasov equation, with particles travelling along orbits on which the single-particle energy is conserved. Approximate extensions of these exact solutions have been used successfully in the past to understand the formation and evolution of ‘holes’ and ‘clumps’, coherent structures in the particle distribution function which, under certain conditions, form in the nonlinear phase of the evolution of kinetic plasmas. In this thesis, analytical results are presented which consider perturbations and deformations to BGK orbits, allowing one to robustly construct more exotic orbits that support mode growth and frequency chirping. Computational results produced using the DARK code are presented, examining stochastic and deterministic populations in a 1D electrostatic plasma, and how they affect electrostatic waves exhibiting Landau resonance, based on Berk-Breizman models. A model is presented for parametric mode-mode destabilisation via holes and clumps interacting through the background distribution. Finally, work using the machine learning framework ERICSON is presented, analysing frequency spectrograms of magnetic perturbations in Alfvénic and sub-Alfvénic frequency ranges.
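
    For context, the conserved quantity referred to above takes the standard form (notation assumed here rather than taken from the thesis): in a BGK mode travelling at phase velocity $v_\phi$, a particle of mass $m$ and charge $q$ conserves the single-particle energy in the wave frame,

        E = \frac{1}{2} m \left( v - v_\phi \right)^2 + q\,\phi\!\left( x - v_\phi t \right),

    where $\phi$ is the electrostatic potential of the wave; the ‘holes’ and ‘clumps’ discussed above live on orbits of constant $E$ near the resonance $v \approx v_\phi$.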

    Bioaerosol detection through simultaneous measurement of particle intrinsic fluorescence and spatial light scattering

    Interest in the role and detection of airborne biological micro-organisms has increased dramatically in recent years, in part through heightened fears of bioterrorism. Traditional bio-detection methods generally have slow response times and require the use of reagents. Conversely, techniques based on light scattering phenomena are reagent-free and are able to operate in real-time. Previous research has established that classification of certain types of airborne particles on the basis of shape and size may be achieved through the analysis of the spatial light scattering patterns produced by individual particles. Similarly, other research has shown that the intrinsic fluorescence of particles excited by radiation of an appropriate wavelength can be used to establish the presence of biological particles, provided background particles with similar fluorescence properties are not present. This is often not the case. This thesis, therefore, describes the design, development, and testing of a new type of bioaerosol detection instrument in which the advantages of both particle spatial light scattering analysis and intrinsic fluorescence are exploited. The instrument, referred to as the Multi-Parameter Aerosol Monitor (MPAM), is unique in simultaneously recording data relating to the size, shape, and fluorescence properties of individual airborne particles at rates up to several thousand particles per second. The MPAM uses a continuous-wave frequency-quadrupled Nd:YVO4 laser to produce both spatial scattering and fluorescence data from particles carried in single-file through the laser beam. The use of a CW laser leads to opto-mechanical simplicity and reduces fluorescence bleaching effects. A custom-designed multi-pixel Hybrid Photodiode (HPD) detector records the spatial scattering data in the forward scattering plane, whilst particle fluorescence is recorded via a large solid-angle ellipsoidal reflector and a single photomultiplier detector. Calibration tests and experimental trials involving a range of both biological and non-biological aerosols have shown that the MPAM, when supported by appropriate data analysis algorithms, is capable of achieving enhanced levels of discrimination between biological and non-biological particles down to submicrometre sizes and, in some cases, enhanced discrimination between classes of biological particle.
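
    The discrimination principle can be caricatured with a simple decision rule combining the two measurement channels (all feature names and thresholds below are invented for illustration; the MPAM's actual data analysis algorithms are not described at this level in the abstract):

        # Caricature of joint scattering/fluorescence discrimination. Size
        # is inferred from scattered intensity, shape from the asymmetry
        # of the spatial pattern, and the fluorescence channel flags
        # biological material. All thresholds here are invented.

        def classify_particle(size_um, shape_asymmetry, fluorescence_signal):
            if fluorescence_signal < 0.2:          # no intrinsic fluorescence
                return "non-biological"
            if size_um < 0.5 or size_um > 15.0:    # outside typical bioaerosol sizes
                return "non-biological"
            if shape_asymmetry > 0.8:              # fibrous/irregular, e.g. dust
                return "likely non-biological"
            return "candidate bioaerosol"

        print(classify_particle(2.0, 0.3, 0.7))    # "candidate bioaerosol"

    The point of measuring both channels at once is visible even in this toy rule: fluorescence alone cannot reject fluorescent non-biological background particles, while scattering alone cannot establish biological origin.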

    Reining in the Functional Verification of Complex Processor Designs with Automation, Prioritization, and Approximation

    Our quest for faster and more efficient computing devices has led us to processor designs of enormous complexity. As a result, functional verification, which is the process of ascertaining the correctness of a processor design, takes up the lion's share of the time and cost spent on making processors. Unfortunately, functional verification is only a best-effort process that cannot completely guarantee the correctness of a design, often resulting in defective products that may have devastating consequences. Functional verification, as practiced today, is unable to cope with the complexity of current and future processor designs. In this dissertation, we identify extensive automation as the essential step towards scalable functional verification of complex processor designs. Moreover, recognizing that a complete guarantee of design correctness is impossible, we argue for systematic prioritization and prudent approximation to realize fast and far-reaching functional verification solutions. We partition the functional verification effort into three major activities: planning and test generation, test execution and bug detection, and bug diagnosis. Employing a perspective we refer to as the automation, prioritization, and approximation (APA) approach, we develop solutions that tackle challenges across these three major activities. In pursuit of efficient planning and test generation for modern systems-on-chips, we develop an automated process for identifying high-priority design aspects for verification. In addition, we enable the creation of compact test programs, which, in our experiments, were up to 11 times smaller than what would otherwise be available at the beginning of the verification effort. To tackle challenges in test execution and bug detection, we develop a group of solutions that enable the deployment of automatic and robust mechanisms for catching design flaws during high-speed functional verification. By trading accuracy for speed, these solutions allow us to unleash functional verification platforms that are over three orders of magnitude faster than traditional platforms, unearthing design flaws that are otherwise impossible to reach. Finally, we address challenges in bug diagnosis through a solution that fully automates the process of pinpointing flawed design components after detecting an error. Our solution, which identifies flawed design units with over 70% accuracy, eliminates weeks of diagnosis effort for every detected error.
    Ph.D., Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/137057/1/birukw_1.pd

    Using drones to improve wildlife monitoring in a changing climate

    This thesis advances knowledge of wildlife monitoring techniques and demonstrates the potential of high-resolution, remotely sensed data to inform species conservation, improve ecosystem management and assess mitigation strategies for biodiversity loss. Drones can easily collect systematic data at high spatial and temporal resolution to detect fluctuations in key parameters such as abundance, range and condition of some species. Advances in drone-facilitated wildlife monitoring of sentinel species will provide rapid, efficient insights into ecosystem-level changes. This thesis focused on resolving knowledge gaps within three key areas of wildlife drone-ecology: disturbance, population monitoring and body condition. From the outset, we recognised drones might have undesirable or unforeseen behavioural and physiological effects on wildlife. To address this, I led a time-critical publication that advocated researchers adopt a precautionary approach given the limited understanding of the impacts. It also provided recommendations for conducting drone-facilitated research around wildlife as the basis for a code of best practice. Then, using colonial birds as a study group, we tested the utility of drone-derived data for population monitoring. First, life-sized replica seabird colonies containing a known number of fake birds were used to robustly assess the accuracy of our intended approach compared to the traditional ground-based counting method. Drone-derived abundance data were, on average, between 43% and 96% more accurate, as well as more precise, than estimates from the traditional approach. Our open-source, semi-automated detection algorithm produced abundance estimates 94% similar to manual counts from the remotely sensed imagery. To apply this in the field, we collected drone-derived abundance data by repeatedly surveying representative wild colonial birds (a tern, a cormorant and a pelican species). We used these data to develop a transferable technique requiring minimal user input for adaptable population monitoring at high spatiotemporal resolution. Finally, to investigate the use of drone-facilitated photogrammetry, we used a representative pinniped species to test whether non-invasively acquired morphometric data could infer body condition. Drone-derived measurements of endangered Australian sea lions (Neophoca cinerea) of known size and mass were precise and without bias. These two- and three-dimensional measurements from orthomosaics and digital elevation models were highly correlated with animal mass and body condition indices and not significantly different from those generated from ground-collected data. This work addresses and informs a range of issues arising from human activity in the Anthropocene, including rapid habitat loss, species extinctions and an altered climate. We have shown that using technology for wildlife monitoring enables timely, proactive environmental and conservation management. Thesis (Ph.D.) -- University of Adelaide, School of Biological Sciences, 202
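
    As an aside on how such accuracy figures are conventionally computed against replica colonies with a known number of birds (the percent-error convention below is an assumption, not a formula quoted from the thesis):

        # Percent accuracy of a count against a known ground truth, the
        # usual convention when comparing drone-derived and ground-based
        # counts to a replica colony of known size.
        def count_accuracy(estimated, true_count):
            return 100.0 * (1.0 - abs(estimated - true_count) / true_count)

        # Hypothetical example: 940 birds detected in a replica colony of 1000.
        print(count_accuracy(940, 1000))  # 94.0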

    Uncertainty in Artificial Intelligence: Proceedings of the Thirty-Fourth Conference


    Safety and Reliability - Safe Societies in a Changing World

    The contributions cover a wide range of methodologies and application areas for safety and reliability that contribute to safe societies in a changing world. These methodologies and applications include:
    - foundations of risk and reliability assessment and management
    - mathematical methods in reliability and safety
    - risk assessment
    - risk management
    - system reliability
    - uncertainty analysis
    - digitalization and big data
    - prognostics and system health management
    - occupational safety
    - accident and incident modeling
    - maintenance modeling and applications
    - simulation for safety and reliability analysis
    - dynamic risk and barrier management
    - organizational factors and safety culture
    - human factors and human reliability
    - resilience engineering
    - structural reliability
    - natural hazards
    - security
    - economic analysis in risk management

    Optimal transport based simulation methods for deep probabilistic models

    Deep probabilistic models have emerged as state-of-the-art for high-dimensional, multi-modal data synthesis and density estimation tasks. By combining abstract probabilistic formulations with the expressivity and scalability of neural networks, deep probabilistic models have become a fundamental component of the machine learning toolbox. Such models still have a number of limitations, however. For example, deep probabilistic models are often limited to gradient-based training and hence struggle to incorporate non-differentiable operations; they are expensive to train and sample from; and they often do not leverage prior geometric and problem-specific structural knowledge. This thesis consists of four contributing pieces of work and advances the field of deep probabilistic models through optimal transport based simulation methods. First, by using regularized optimal transport via the Sinkhorn algorithm, we provide a theoretically grounded and differentiable approximation to resampling within particle filtering. This allows one to perform gradient-based training of state space models, a class of sequential probabilistic models, with end-to-end differentiable particle filtering. Next, we explore initialization strategies for the Sinkhorn algorithm to address speed issues. We show that careful initializations result in dramatic acceleration of the Sinkhorn algorithm. This has applications in differentiable sorting, in clustering within the latent space of a variational autoencoder, and within particle filtering. The remaining two works contribute to the field of diffusion-based generative modelling through the Schrödinger Bridge. First, we connect diffusion models to the Schrödinger Bridge, a methodology we coin the Diffusion Schrödinger Bridge. This methodology enables accelerated sampling, data-to-data simulation, and a novel way to compute regularized optimal transport for high-dimensional, continuous state-space problems. Finally, we extend the Diffusion Schrödinger Bridge to the Riemannian manifold setting. This allows one to incorporate prior geometric knowledge and hence enables more efficient training and inference for diffusion models on Riemannian-manifold-valued data. This has applications in climate and Earth science.
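
    To make the recurring ingredient concrete, here is a minimal NumPy sketch of the Sinkhorn iterations for entropy-regularized optimal transport (a generic textbook version; the thesis's differentiable-resampling and initialization contributions build on, but are not shown by, this basic loop):

        # Minimal Sinkhorn iterations for entropy-regularized optimal
        # transport between histograms a and b with cost matrix C.
        import numpy as np

        def sinkhorn(a, b, C, eps=0.1, n_iter=200):
            K = np.exp(-C / eps)                 # Gibbs kernel
            u = np.ones_like(a)
            for _ in range(n_iter):
                v = b / (K.T @ u)                # rescale columns to match marginal b
                u = a / (K @ v)                  # rescale rows to match marginal a
            return u[:, None] * K * v[None, :]   # transport plan P = diag(u) K diag(v)

        a = np.array([0.5, 0.5])
        b = np.array([0.25, 0.75])
        C = np.array([[0.0, 1.0], [1.0, 0.0]])
        P = sinkhorn(a, b, C)
        print(P.sum(axis=1), P.sum(axis=0))      # approximately a and b

    Each iteration alternately rescales the rows and columns of the Gibbs kernel until the plan matches both marginals; roughly speaking, the initialization strategies mentioned above amount to starting from a better scaling vector than all-ones.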

    High-Performance Modelling and Simulation for Big Data Applications

    This open access book was prepared as the Final Publication of the COST Action IC1406 “High-Performance Modelling and Simulation for Big Data Applications (cHiPSet)” project. Long considered important pillars of the scientific method, Modelling and Simulation have evolved from traditional discrete numerical methods to complex data-intensive continuous analytical optimisations. Resolution, scale, and accuracy have become essential to predict and analyse natural and complex systems in science and engineering. As their level of abstraction rises to afford better discernment of the domain at hand, their representations become increasingly demanding of computational and data resources. On the other hand, High Performance Computing typically entails the effective use of parallel and distributed processing units coupled with efficient storage, communication and visualisation systems to underpin complex data-intensive applications in distinct scientific and technical domains. A seamless interaction of High Performance Computing with Modelling and Simulation is therefore arguably required in order to store, compute, analyse, and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest for these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics, and telecommunications.