
    Single-particle levitation system for automated study of homogeneous solute nucleation

    We present an instrument that addresses two critical requirements for quantitative measurements of the homogeneous crystal nucleation rate in supersaturated aqueous solution. First, the need to perform repeated measurements of nucleation incubation times is met by automating experiments to enable programmable cycling of thermodynamic conditions. Second, the need for precise and robust control of the chemical potential in supersaturated aqueous solution is met by implementing a novel technique for regulating relative humidity. The apparatus levitates and weighs micron-sized samples in an electric field, providing access to highly supersaturated states. We report repeated observations of the crystal nucleation incubation time in a supersaturated aqueous sodium chloride droplet, from which we infer the nucleation rate.
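    The abstract does not give the estimator used to turn the repeated incubation times into a rate, but under the standard assumption that steady-state nucleation events follow Poisson statistics, incubation times are exponentially distributed with mean 1/(J·V). The sketch below illustrates that inference only; the droplet volume and sample size are made-up numbers, not values from the paper.

```python
import numpy as np

def nucleation_rate(incubation_times_s, droplet_volume_m3):
    """Estimate a homogeneous nucleation rate J (events per m^3 per s).

    Assumes steady-state nucleation, so incubation times are exponentially
    distributed with mean 1 / (J * V); the maximum-likelihood estimate of J
    is then 1 / (mean(tau) * V).
    """
    tau = np.asarray(incubation_times_s, dtype=float)
    J = 1.0 / (tau.mean() * droplet_volume_m3)
    # Rough 1-sigma relative uncertainty for n exponential samples: 1/sqrt(n)
    return J, J / np.sqrt(tau.size)

# Hypothetical example: 20 incubation times around 100 s for a droplet of
# roughly 10 micron radius (V ~ 4e-15 m^3).
times = np.random.default_rng(0).exponential(100.0, size=20)
print(nucleation_rate(times, 4e-15))
```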

    Reducing Spatial Data Complexity for Classification Models

    Intelligent data analytics is gradually becoming a day-to-day reality of today's businesses. However, despite rapidly increasing storage and computational power, current state-of-the-art predictive models still cannot handle massive and noisy corporate data warehouses. What is more, adaptive and real-time operational environments require multiple models to be frequently retrained, which further hinders their use. Various data reduction techniques, ranging from data sampling to density retention models, attempt to address this challenge by capturing a summarised data structure, yet they either do not account for labelled data or degrade the classification performance of the model trained on the condensed dataset. Our response is a new general framework for reducing the complexity of labelled data by means of controlled spatial redistribution of class densities in the input space. Using the Parzen Labelled Data Compressor (PLDC) as an example, we demonstrate a simulated data condensation process directly inspired by electrostatic field interaction, in which data points are moved and merged following attracting and repelling interactions with the other labelled data. The process is controlled by the class density function built on the original data, which acts as a class-sensitive potential field ensuring preservation of the original class density distributions, yet allowing data to rearrange and merge, joining together their soft class partitions. As a result we obtain a model that reduces labelled datasets much further than competitive approaches, yet with maximum retention of the original class densities and hence of the classification performance. PLDC leaves the reduced dataset with soft accumulative class weights, allowing for efficient online updates, and, as shown in a series of experiments, when coupled with the Parzen Density Classifier (PDC) it significantly outperforms competitive data condensation methods in terms of classification performance at comparable compression levels.
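    PLDC's exact update and merge rules are not given in the abstract, so the sketch below is only a toy illustration of the general idea under the assumption of Gaussian Parzen kernels: points drift up their own class's kernel density and down the other classes' density, then nearby same-class points are merged into prototypes carrying accumulated weights. All function names, parameters and defaults are illustrative, not PLDC's.

```python
import numpy as np

def condense(X, y, h=0.5, step=0.1, n_iter=15, merge_radius=0.3):
    """Toy class-sensitive condensation: each point drifts up its own class's
    Parzen (Gaussian-kernel) density and down the other classes' density,
    then nearby same-class points are merged into weighted prototypes."""
    Z = X.astype(float).copy()
    for _ in range(n_iter):
        drift = np.zeros_like(Z)
        for c in np.unique(y):
            idx = np.where(y == c)[0]
            for ref_idx, sign in ((idx, +1.0), (np.where(y != c)[0], -1.0)):
                diff = X[ref_idx][None, :, :] - Z[idx][:, None, :]  # (n_c, n_ref, d)
                w = np.exp(-(diff ** 2).sum(-1) / (2 * h * h))      # kernel weights
                drift[idx] += sign * (w[..., None] * diff).sum(axis=1)
        # unit-length steps keep the toy simulation stable
        Z += step * drift / (np.linalg.norm(drift, axis=1, keepdims=True) + 1e-12)
    # greedily merge nearby same-class points into prototypes with summed weights
    protos, weights, labels, used = [], [], [], np.zeros(len(Z), dtype=bool)
    for i in range(len(Z)):
        if used[i]:
            continue
        group = np.where((y == y[i]) & ~used &
                         (np.linalg.norm(Z - Z[i], axis=1) < merge_radius))[0]
        used[group] = True
        protos.append(Z[group].mean(axis=0))
        weights.append(len(group))
        labels.append(y[i])
    return np.array(protos), np.array(weights), np.array(labels)

# Two illustrative 2D Gaussian blobs, one per class.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.repeat([0, 1], 50)
P, w, lab = condense(X, y)
print(len(P), "prototypes from", len(X), "points")
```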

    Development and validation of 'AutoRIF': Software for the automated analysis of radiation-induced foci

    Copyright © 2012 McVean et al.; licensee BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. This article has been made available through the Brunel Open Access Publishing Fund. Background: The quantification of radiation-induced foci (RIF) to investigate the induction and subsequent repair of DNA double-strand breaks is now commonplace. Over the last decade, systems specific for the automatic quantification of RIF have been developed for this purpose; however, to ask more mechanistic questions on the spatio-temporal aspects of RIF, an automated RIF analysis platform that also quantifies RIF size/volume and the relative three-dimensional (3D) distribution of RIF within individual nuclei is required. Results: A Java-based image analysis system has been developed (AutoRIF) that quantifies the number, size/volume and relative nuclear locations of RIF within 3D nuclear volumes. Our approach identifies nuclei using the dynamic Otsu threshold, and RIF by enhanced Laplacian filtering and maximum entropy thresholding steps, and has a 'batch optimisation' process to ensure reproducible quantification of RIF. AutoRIF was validated by comparing its output against manual quantification of the same 2D and 3D image stacks, with results showing excellent concordance over a whole range of sample time points (and therefore range of total RIF/nucleus) after low-LET radiation exposure. Conclusions: This high-throughput automated RIF analysis system generates data with greater depth of information and reproducibility than can be achieved manually and may contribute toward the standardisation of RIF analysis. In particular, AutoRIF is a powerful tool for studying spatio-temporal relationships of RIF using a range of DNA damage response markers and can be run independently of other software, enabling most personal computers to perform image analysis. Future considerations for AutoRIF will likely include more complex algorithms that enable multiplex analysis for increasing combinations of cellular markers.
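    AutoRIF itself is Java-based and operates on 3D image stacks; the 2D Python sketch below only illustrates the kind of pipeline the abstract names (Otsu thresholding for nuclei, Laplacian enhancement plus maximum-entropy thresholding for foci), using scikit-image. The function names and the sharpening step are assumptions for illustration, not AutoRIF's implementation.

```python
import numpy as np
from skimage import filters, measure

def max_entropy_threshold(img, nbins=256):
    """Kapur maximum-entropy threshold computed on the image histogram."""
    hist, edges = np.histogram(img.ravel(), bins=nbins)
    p = hist.astype(float) / hist.sum()
    best_t, best_h = edges[1], -np.inf
    for t in range(1, nbins):
        pb, pf = p[:t].sum(), p[t:].sum()
        if pb == 0 or pf == 0:
            continue
        b, f = p[:t][p[:t] > 0] / pb, p[t:][p[t:] > 0] / pf
        h = -(b * np.log(b)).sum() - (f * np.log(f)).sum()
        if h > best_h:
            best_h, best_t = h, edges[t]
    return best_t

def count_foci(nucleus_img, foci_img):
    """Segment nuclei with an Otsu threshold, enhance foci with a Laplacian
    sharpening step, threshold them by maximum entropy, and count the foci
    that fall inside each labelled nucleus."""
    nuclei = measure.label(nucleus_img > filters.threshold_otsu(nucleus_img))
    enhanced = foci_img - filters.laplace(foci_img)   # simple Laplacian enhancement
    foci = measure.label(enhanced > max_entropy_threshold(enhanced))
    counts = {}
    for region in measure.regionprops(foci):
        r, c = (int(v) for v in region.centroid)
        label = nuclei[r, c]
        if label:                                     # ignore foci outside nuclei
            counts[label] = counts.get(label, 0) + 1
    return counts
```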

    Southern Hemisphere automated supernova search

    The Perth Astronomy Research Group has developed an automated supernova search program, using the 61 cm Perth–Lowell reflecting telescope at Perth Observatory in Western Australia, equipped with a CCD camera. The system is currently capable of observing about 15 objects per hour, using 3 min exposures, and has a detection threshold of 18th–19th magnitude. The entire system has been constructed using low-cost IBM-compatible computers. Two original discoveries (SN 1993K, SN 1994R) have so far been made during automated search runs. This paper describes the hardware and software used for the supernova search program, and shows some preliminary results from the search system.
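    The abstract describes the hardware rather than the detection algorithm, but the core of an automated search of this kind is flagging new sources against a reference frame. The minimal difference-imaging sketch below is only an illustration of that step, with illustrative thresholds; registration, PSF matching and human vetting are omitted and nothing here is taken from the paper's software.

```python
import numpy as np
from scipy import ndimage as ndi

def find_transients(new_frame, reference, nsigma=5.0, min_pixels=4):
    """Flag candidate transients: connected groups of pixels that stand out in
    the difference image by more than `nsigma` robust standard deviations."""
    diff = new_frame.astype(float) - reference.astype(float)
    # robust noise estimate from the median absolute deviation
    noise = 1.4826 * np.median(np.abs(diff - np.median(diff)))
    labels, n = ndi.label(diff > nsigma * noise)
    candidates = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        if ys.size >= min_pixels:
            candidates.append((xs.mean(), ys.mean(), diff[ys, xs].sum()))
    return candidates  # (x, y, summed excess counts) per candidate
```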

    Automated counter-terrorism

    We present a holistic systems view of automated intelligence analysis for counter-terrorism, with a focus on the behavioural attributes of terrorist groups.

    Identifying how automation can lose its intended benefit along the development process : a research plan

    Doctoral Consortium Presentation. © The Authors 2009. Automation is usually considered to improve performance in virtually any domain. However, it can fail to deliver the benefit intended by the managers and designers advocating the introduction of the tool. In safety-critical domains this problem is significant, not only because the unexpected effects of automation might prevent its widespread usage but also because they might turn out to be a contributor to incidents and accidents. Research on failures of automation to deliver the intended benefit has focused mainly on human-automation interaction. This paper presents a PhD research plan that aims at characterizing the decisions taken under productive pressure by those involved in the development process of automation for safety-critical domains, in order to identify where and when the benefit the automation was initially intended to deliver can be lost along the development process. We tentatively call such decisions drift, and the final objective is to develop principles that make it possible to identify and compensate for possible sources of drift in the development of new automation. The research is based on case studies and is currently entering Year 2.

    ReSHAPE: A Framework for Dynamic Resizing and Scheduling of Homogeneous Applications in a Parallel Environment

    Applications in science and engineering often require huge computational resources for solving problems within a reasonable time frame. Parallel supercomputers provide the computational infrastructure for solving such problems. A traditional application scheduler running on a parallel cluster only supports static scheduling, where the number of processors allocated to an application remains fixed throughout the lifetime of execution of the job. Due to the unpredictability in job arrival times and varying resource requirements, static scheduling can result in idle system resources, thereby decreasing the overall system throughput. In this paper we present a prototype framework called ReSHAPE, which supports dynamic resizing of parallel MPI applications executed on distributed memory platforms. The framework includes a scheduler that supports resizing of applications, an API to enable applications to interact with the scheduler, and a library that makes resizing viable. Applications executed using the ReSHAPE scheduler framework can expand to take advantage of additional free processors or can shrink to accommodate a high-priority application, without getting suspended. In our research, we have mainly focused on structured applications that have two-dimensional data arrays distributed across a two-dimensional processor grid. The resize library includes algorithms for processor selection and processor mapping. Experimental results show that the ReSHAPE framework can improve individual job turn-around time and overall system throughput. Comment: 15 pages, 10 figures, 5 tables. Submitted to the International Conference on Parallel Processing (ICPP'07).
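    The abstract mentions processor selection and processor mapping for 2D block-distributed arrays without spelling them out, so the sketch below illustrates one plausible scheme, assuming a near-square processor grid and a standard block distribution. None of these function names belong to the actual ReSHAPE library or its API.

```python
import math

def nearly_square_grid(p):
    """Pick a P x Q processor grid for p processes, as close to square as possible."""
    q = int(math.isqrt(p))
    while p % q:
        q -= 1
    return p // q, q

def block_range(n, nprocs, rank):
    """Index range owned by `rank` in a 1D block distribution of n items."""
    base, extra = divmod(n, nprocs)
    start = rank * base + min(rank, extra)
    return start, start + base + (1 if rank < extra else 0)

def layout(n_rows, n_cols, nprocs):
    """Map each rank to the (row range, column range) of a 2D array it owns on
    a near-square grid; after a resize, data held under the old layout would
    be redistributed to match the new one."""
    P, Q = nearly_square_grid(nprocs)
    return {r: (block_range(n_rows, P, r // Q), block_range(n_cols, Q, r % Q))
            for r in range(nprocs)}

# Example: the same 1024 x 1024 array before and after expanding from 4 to 6 processes.
print(layout(1024, 1024, 4))
print(layout(1024, 1024, 6))
```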