
    Accelerated particle beams in a 3D simulation of the quiet Sun

    Observational and theoretical evidence suggests that beams of accelerated particles are produced in flaring events of all sizes in the solar atmosphere, from X-class flares down to nanoflares. Current models of these particles in flaring loops assume an isolated 1D atmosphere. A more realistic environment for modelling accelerated particles can be provided by 3D radiative magnetohydrodynamics codes. Here, we present a simple model for particle acceleration and propagation in the context of a 3D simulation of the quiet solar atmosphere, spanning from the convection zone to the corona. We then examine the additional transport of energy introduced by the particle beams. The locations of particle acceleration associated with magnetic reconnection were identified by detecting changes in magnetic topology. At each location, the parameters of the accelerated particle distribution were estimated from local conditions. The particle distributions were then propagated along the magnetic field, and the energy deposition due to Coulomb collisions with the ambient plasma was computed. We find that particle beams originate in extended acceleration regions that are distributed across the corona. Upon reaching the transition region, they converge and produce strands of intense heating that penetrate the chromosphere. Within these strands, beam heating consistently dominates conductive heating below the bottom of the transition region. This indicates that particle beams qualitatively alter the energy transport even outside of active regions.

    Comment: Accepted for publication in A&
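    The collisional energy-deposition step mentioned in the abstract can be illustrated with a toy calculation. This is a hedged sketch, not the paper's model: it assumes only the standard thick-target approximation that an electron loses energy to Coulomb collisions at a rate dE/dN = -K/E with column depth N, so an electron injected with energy E0 stops at N = E0^2/(2K). All units, grid sizes, and the power-law index below are made up for illustration.

```python
import numpy as np

# Toy beam-heating sketch (normalized units): electrons with a power-law
# energy spectrum propagate into increasing column depth N, each losing
# energy as dE/dN = -K/E, i.e. E(N) = sqrt(E0^2 - 2*K*N) while positive.
K = 1.0                                   # normalized collisional constant
energies = np.linspace(1.0, 10.0, 200)    # injected energies E0 (normalized)
weights = energies ** -4.0                # power-law spectrum, index 4 (assumed)
N = np.linspace(0.0, 60.0, 600)           # column-depth grid

# Analytic remaining energy of each electron at each depth; stopped
# electrons are clipped to zero energy.
E_sq = energies[None, :] ** 2 - 2.0 * K * N[:, None]
E_of_N = np.sqrt(np.clip(E_sq, 0.0, None))

beam_energy = (E_of_N * weights).sum(axis=1)   # total beam energy vs depth
deposition = -np.gradient(beam_energy, N)      # local heating rate vs depth

# Heating is strongest at shallow depths, where the abundant low-energy
# electrons stop first; the beam is fully thermalized by N = 50.
assert deposition[0] > deposition[-1]
```

    Summing the analytic loss curves over the spectrum shows the heating concentrating at shallow column depths, qualitatively like the intense strands near the transition region described in the abstract.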

    Comparing efficiency of health systems across industrialized countries: a panel analysis.

    Background: Rankings from the World Health Organization (WHO) place the US health care system as one of the least efficient among Organization for Economic Cooperation and Development (OECD) countries. Researchers have questioned this, noting simplistic or inappropriate methodologies, poor measurement choices, and poor control variables. Our objective is to revisit this question using newer modeling techniques and a large panel of OECD data.

    Methods: We primarily use the OECD Health Data for 25 OECD countries. We compare results from stochastic frontier analysis (SFA) and fixed effects models. We estimate total life expectancy as well as life expectancy at age 60. We explore a combination of control variables reflecting health care resources, health behaviors, and economic and environmental factors.

    Results: The US never ranks higher than fifth across all 36 models, but it is also never the last-ranked country, though it came close in several models. The SFA estimation approach produces the most consistent lead country, but the remaining countries did not maintain a steady rank.

    Discussion: Our study sheds light on the fragility of health system rankings by using a large panel and applying the latest efficiency modeling techniques. The rankings are not robust to different statistical approaches, nor to variable inclusion decisions.

    Conclusions: Future international comparisons should employ a range of methodologies to generate a more nuanced portrait of health care system efficiency.
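    As a minimal illustration of the fixed-effects side of the comparison (a sketch on synthetic data, not the study's OECD panel), the "within" estimator removes each country's time-invariant effect by demeaning before ordinary least squares:

```python
import numpy as np

# Synthetic panel: 25 "countries" observed over 20 "years" (made-up numbers).
rng = np.random.default_rng(0)
n_countries, n_years = 25, 20
country_effect = rng.normal(0.0, 2.0, n_countries)   # unobserved heterogeneity
x = rng.normal(8.0, 1.0, (n_countries, n_years))     # e.g. log health spending
y = 70.0 + 1.5 * x + country_effect[:, None] + rng.normal(0.0, 0.5, x.shape)

# Within (fixed-effects) estimator: demeaning each country's series removes
# the time-invariant country effect before fitting the slope.
xd = x - x.mean(axis=1, keepdims=True)
yd = y - y.mean(axis=1, keepdims=True)
beta_fe = float((xd * yd).sum() / (xd ** 2).sum())   # close to the true 1.5
```

    Pooled OLS on the raw data would push the country effects into the error term; the within transformation is what distinguishes the fixed-effects model from the frontier-based SFA approach the abstract compares it with.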

    Mouse Behavior Recognition with The Wisdom of Crowd

    In this thesis, we designed and implemented a crowdsourcing system to annotate mouse behaviors in videos. This involved the development of a novel clip-based video labeling tool, which is more efficient than traditional labeling tools on crowdsourcing platforms, as well as the design of probabilistic inference algorithms that predict the true labels and the workers' expertise from multiple workers' responses. Our algorithms are shown to perform better than the majority-vote heuristic. We also carried out extensive experiments to determine the effectiveness of our labeling tool, our inference algorithms, and the overall system.
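    The difference between a majority-vote baseline and jointly inferring labels and worker expertise can be sketched in a few lines. This is a generic "one-coin" EM illustration (a simplification of Dawid-Skene-style models) on synthetic binary labels, not the thesis's actual algorithm:

```python
import numpy as np

def majority_vote(labels):
    """labels: (n_items, n_workers) array of binary annotations in {0, 1}."""
    return (labels.mean(axis=1) >= 0.5).astype(int)

def one_coin_em(labels, n_iter=30):
    """Jointly infer true labels and a single accuracy per worker with EM."""
    z = labels.mean(axis=1)  # soft initial estimate of P(true label = 1)
    for _ in range(n_iter):
        # M-step: each worker's accuracy is their expected agreement with z.
        acc = (labels * z[:, None] + (1 - labels) * (1 - z[:, None])).mean(axis=0)
        acc = np.clip(acc, 1e-3, 1 - 1e-3)
        # E-step: combine votes weighted by each worker's log-odds accuracy.
        w = np.log(acc / (1 - acc))
        z = 1.0 / (1.0 + np.exp(-((2 * labels - 1) * w).sum(axis=1)))
    return (z >= 0.5).astype(int), acc

# Synthetic check: worker 0 is always right; workers 1 and 2 each err on
# two disjoint items, so no item has a majority of wrong votes.
truth = np.array([1, 0, 1, 0, 1, 0])
votes = np.column_stack([truth, truth, truth])
votes[[0, 1], 1] = 1 - votes[[0, 1], 1]   # worker 1 wrong on items 0, 1
votes[[2, 3], 2] = 1 - votes[[2, 3], 2]   # worker 2 wrong on items 2, 3
labels_em, acc = one_coin_em(votes)
assert (labels_em == truth).all() and acc.argmax() == 0
```

    Beyond aggregated labels, the accuracy vector provides the per-worker expertise estimate that a plain majority vote cannot, which is the core idea the thesis's richer models build on.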

    Learning with a Wasserstein loss

    Learning to predict multi-label outputs is challenging, but in many problems there is a natural metric on the outputs that can be used to improve predictions. In this paper we develop a loss function for multi-label learning based on the Wasserstein distance, which provides a natural notion of dissimilarity between probability measures. Although optimizing with respect to the exact Wasserstein distance is costly, recent work has described a regularized approximation that can be computed efficiently. We describe an efficient learning algorithm based on this regularization, as well as a novel extension of the Wasserstein distance from probability measures to unnormalized measures. We also derive a statistical learning bound for the loss. The Wasserstein loss can encourage smoothness of the predictions with respect to a chosen metric on the output space. We demonstrate this property on a real-data tag prediction problem, using the Yahoo Flickr Creative Commons dataset, outperforming a baseline that does not use the metric.
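    The regularized approximation referred to above is commonly computed with Sinkhorn iterations (Cuturi-style entropic regularization). The following is a minimal numpy sketch under that assumption, not the paper's implementation:

```python
import numpy as np

def sinkhorn_distance(p, q, C, reg=0.1, n_iter=200):
    """Entropy-regularized Wasserstein distance between histograms p and q
    with ground cost matrix C, via Sinkhorn matrix scaling."""
    K = np.exp(-C / reg)                 # Gibbs kernel from the ground metric
    u = np.ones_like(p)
    for _ in range(n_iter):
        v = q / (K.T @ u)                # scale columns to match marginal q
        u = p / (K @ v)                  # scale rows to match marginal p
    T = u[:, None] * K * v[None, :]      # approximate optimal transport plan
    return float((T * C).sum())

# Three ordered labels with |i - j| as the ground metric: predicting an
# adjacent label is penalized less than predicting a distant one.
C = np.abs(np.arange(3)[:, None] - np.arange(3)[None, :]).astype(float)
p = np.array([1.0, 0.0, 0.0])
assert sinkhorn_distance(p, np.array([0.0, 1.0, 0.0]), C) < \
       sinkhorn_distance(p, np.array([0.0, 0.0, 1.0]), C)
```

    Unlike a per-label loss, this cost is sensitive to how far a wrong prediction lands from the truth under the chosen metric, which is exactly the smoothness property the abstract describes.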

    Automated fault detection without seismic processing

    For hydrocarbon exploration, large volumes of data are acquired and used in physical modeling-based workflows to identify geologic features of interest such as fault networks, salt bodies, or, in general, elements of petroleum systems. The adjoint modeling step, which transforms the data into the model space, and the subsequent interpretation can be very expensive, both in terms of computing resources and domain-expert time. We propose and implement a unique approach that bypasses these demanding steps, directly assisting interpretation. We do this by training a deep neural network to learn a mapping relationship between the data space and the final output (specifically, spatial points indicating fault presence). The key to obtaining accurate predictions is the use of the Wasserstein loss function, which properly handles the structured output — in our case, by exploiting fault surface continuity. The promising results shown here for synthetic data demonstrate a new way of using seismic data and suggest more direct methods to identify key elements in the subsurface.

    A New Measure to Assess Psychopathic Personality in Children: The Child Problematic Traits Inventory

    Understanding the development of psychopathic personality from childhood to adulthood is crucial for understanding the development and stability of severe and long-lasting conduct problems and criminal behavior. This paper describes the development of a new teacher-rated instrument to assess psychopathic personality from ages three to 12, the Child Problematic Traits Inventory (CPTI). The reliability and validity of the CPTI were tested in a Swedish general population sample of 2,056 3- to 5-year-olds (mean age = 3.86; SD = .86; 53% boys). The CPTI items loaded distinctively on three theoretically proposed factors: a Grandiose-Deceitful factor, a Callous-Unemotional factor, and an Impulsive-Need for Stimulation factor. The three CPTI factors showed reliability in terms of internal consistency, and external validity in terms of expected correlations with theoretically relevant constructs (e.g., fearlessness). The interaction between the three CPTI factors was a stronger predictor of concurrent conduct problems than any of the three individual factors, showing that it is important to assess all three facets of the psychopathic personality construct in early childhood. In conclusion, the CPTI appears to reliably and validly assess a constellation of traits similar to psychopathic personality as manifested in adolescence and adulthood.