
    Revealing Fundamental Physics from the Daya Bay Neutrino Experiment using Deep Neural Networks

    Experiments in particle physics produce enormous quantities of data that must be analyzed and interpreted by teams of physicists. This analysis is often exploratory, where scientists are unable to enumerate the possible types of signal prior to performing the experiment. Thus, tools for summarizing, clustering, visualizing and classifying high-dimensional data are essential. In this work, we show that meaningful physical content can be revealed by transforming the raw data into a learned high-level representation using deep neural networks, with measurements taken at the Daya Bay Neutrino Experiment as a case study. We further show how convolutional deep neural networks can provide an effective classification filter with greater than 97% accuracy across different classes of physics events, significantly better than other machine learning approaches.
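    To give a concrete sense of the kind of convolutional classifier described above, the sketch below defines a small network over single-channel detector images and runs one training step. The 24x24 input size, the two-block architecture, the five event classes and all hyperparameters are placeholder assumptions for illustration, not details taken from the Daya Bay analysis.

```python
# Minimal sketch of a CNN event classifier (all shapes and sizes are assumed).
import torch
import torch.nn as nn

class EventCNN(nn.Module):
    def __init__(self, n_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                  # 24x24 -> 12x12
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                  # 12x12 -> 6x6
        )
        self.classifier = nn.Linear(32 * 6 * 6, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(start_dim=1))

# One training step on a random batch, just to show that the shapes fit together.
model = EventCNN()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
images = torch.randn(8, 1, 24, 24)       # batch of fake single-channel event images
labels = torch.randint(0, 5, (8,))       # fake class labels
optimiser.zero_grad()
loss = nn.functional.cross_entropy(model(images), labels)
loss.backward()
optimiser.step()
```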

    Citizen Science 2.0: Data Management Principles to Harness the Power of the Crowd

    Citizen science refers to voluntary participation by the general public in scientific endeavors. Although citizen science has a long tradition, the rise of online communities and user-generated web content has the potential to greatly expand its scope and contributions. Citizens spread across a large area will collect more information than an individual researcher can. Because citizen scientists tend to make observations about areas they know well, data are likely to be very detailed. Although the potential for engaging citizen scientists is extensive, there are challenges as well. In this paper we consider one such challenge – creating an environment in which non-experts in a scientific domain can provide appropriate and accurate data regarding their observations. We describe the problem in the context of a research project that includes the development of a website to collect citizen-generated data on the distribution of plants and animals in a geographic region. We propose an approach that can improve the quantity and quality of data collected in such projects by organizing data using instance-based data structures. Potential implications of this approach are discussed and plans for future research to validate the design are described.
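    As a rough illustration of what an instance-based organization of citizen observations might look like, the sketch below stores each sighting as its own record carrying whatever attributes the observer reported. The field names and example values are hypothetical and are not the project's actual schema.

```python
# Hypothetical instance-based record for one citizen-submitted sighting.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Observation:
    observer_id: str
    latitude: float
    longitude: float
    observed_at: datetime
    reported_name: str                              # whatever the citizen called the organism
    attributes: dict = field(default_factory=dict)  # free-form traits, notes, photo links

# Each observation is kept as its own instance rather than being forced
# into a fixed species-level schema up front.
sighting = Observation(
    observer_id="user-123",
    latitude=44.05,
    longitude=-123.09,
    observed_at=datetime(2011, 5, 14, 9, 30),
    reported_name="trillium",
    attributes={"flower_colour": "white", "habitat": "forest edge"},
)
print(sighting)
```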

    The Critical Incident Technique

    {Excerpt} Organizations are often challenged to identify and resolve workplace problems. The Critical Incident Technique gives them a starting point and a process for advancing organizational development through learning experiences. It helps them study “what people do” in various situations. One might think there are no answers to the following questions: How fast can you think on your feet? How do you react in the face of the unexpected? How can you prepare if you cannot predict? And yet, there are. Evidently, some behaviors contribute to the success or failure of individuals—and organizations—in specific situations. And so, responses to the unforeseen lie in identifying, before the fact, events or circumstances, or series of them, that are outside the range of ordinary human experiences. The questions posed earlier are as old as mankind; but our ability to address them owes largely to the relatively recent work of John Flanagan. These days critical incidents can be harvested to provide a rich, personal perspective of life that facilitates understanding of the issues and obstacles people face every now and then and illuminates avenues for improvement (or replication if outcomes are effective)—avenues that may not be apparent through purely quantitative methods of data collection. This should matter to high-performance organizations.

    Using worker flows in the analysis of establishment turnover: evidence from German administrative data

    "Economists have long been interested in the determinants and components of job creation and destruction. In many countries administrative datasets provide an excellent source for detailed analysis at a fine, disaggregated level. However, administrative datasets are not without problems: restructuring and relabelling of firms is often poorly measured and can potentially create large biases. We provide evidence of the extent of this bias and offer a new solution to deal with it using the German Establishment History Panel (BHP). While previous research has relied on the first and last appearance of the establishment identifier (EID) to identify openings and closings, we improve on this approach using a new dataset containing all worker flows between establishments in Germany. This allows us to credibly identify establishment births and deaths from 1975 to 2004. We show that the misclassification bias of using only the EID is very severe: only about 35 to 40 percent of new and disappearing EIDs with more than 3 employees correspond unambiguously to real establishment entries and exits. Among larger establishments misclassification is even more common. We show that many new establishment IDs appear to be 'spin-offs', and these have become increasingly common over time. We then demonstrate that using only EID entries and exits may dramatically overstate, by as much as 100 percent, the role of establishment turnover in job creation and destruction. Furthermore, correcting job creation and destruction measures for spurious EID entries and exits reduces these measures and aligns them more closely with the business cycle." (Author's abstract, IAB-Doku)
    Keywords: labour turnover, job change, inter-firm mobility, labour mobility, firm start-ups, additional jobs, job cuts, job turnover, plant closures, IAB-Betriebs-Historik-Panel

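    A toy version of the worker-flow idea described above, purely for illustration: a newly appearing establishment identifier whose initial workforce comes overwhelmingly from a single predecessor establishment is flagged as a likely spin-off or relabelling rather than a genuine entry. The column names and the 80 percent threshold are invented assumptions, not the BHP's actual classification rule.

```python
# Toy worker-flow classification of newly appearing establishment IDs (EIDs).
import pandas as pd

# One row per worker hired into a new EID; prev_eid is the previous employer
# (None means no prior establishment in the data).
flows = pd.DataFrame({
    "new_eid":  ["A", "A", "A", "B", "B", "B", "B"],
    "prev_eid": ["X", "X", "X", None, "Y", "Z", None],
})

def classify_new_eid(group: pd.DataFrame, threshold: float = 0.8) -> str:
    # Share of the new EID's initial workforce coming from each predecessor EID.
    shares = group["prev_eid"].value_counts(dropna=True) / len(group)
    if not shares.empty and shares.iloc[0] >= threshold:
        return "likely spin-off / relabelled"
    return "likely genuine entry"

print({eid: classify_new_eid(g) for eid, g in flows.groupby("new_eid")})
# {'A': 'likely spin-off / relabelled', 'B': 'likely genuine entry'}
```
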
    Machine learning for automatic prediction of the quality of electrophysiological recordings

    The quality of electrophysiological recordings varies considerably owing to technical and biological variability, and neuroscientists inevitably have to select “good” recordings for further analyses. This procedure is time-consuming and prone to selection biases. Here, we investigate replacing human decisions with a machine learning approach. We define 16 features, such as spike height and width, select the most informative ones using a wrapper method and train a classifier to reproduce the judgement of one of our expert electrophysiologists. Generalisation performance is then assessed on unseen data, classified by the same or by another expert. We observe that the learning machine can be at least as consistent in its judgements as individual experts are with one another. Best performance is achieved with a limited number of informative features, the optimal feature set differing from one data set to another. With 80–90% of judgements correct, the performance of the system is very promising within the data sets of each expert, but judgements are less reliable when it is used across sets of recordings from different experts. We conclude that the proposed approach is relevant to the selection of electrophysiological recordings, provided parameters are adjusted to different types of experiments and to individual experimenters.
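    A minimal, hypothetical version of such a pipeline, using scikit-learn's sequential (wrapper-style) feature selection around a simple classifier; the synthetic data, the assumption of 16 numeric features per recording and the choice of logistic regression stand in for the real features and expert labels described above.

```python
# Sketch: wrapper feature selection + classifier trained to reproduce expert labels.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))      # 16 features (e.g. spike height, width) per recording
y = rng.integers(0, 2, size=200)    # expert judgement: 1 = "good" recording (synthetic here)

clf = LogisticRegression(max_iter=1000)
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("select", SequentialFeatureSelector(clf, n_features_to_select=5, direction="forward")),
    ("classify", clf),
])

# Cross-validated agreement with the expert's labels on held-out recordings.
print(cross_val_score(pipe, X, y, cv=5).mean())
```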

    What sorts of worlds do we live in nowadays? Teaching biology in a post-modern age.

    Most historians of science, sociologists of science, philosophers of science and science educators now accept that there is no such thing as 'the scientific method'. We explore the implications of this view of the nature of science for biology education in particular. Accepting that there is no single way of investigating and describing the world scientifically presents both challenges and opportunities, especially when teaching biology. We illustrate these opportunities by suggesting fresh approaches to the teaching of drawing in biology, the teaching of classification and the teaching of human biology.