128 research outputs found

    Global Qualitative Flow-Path Modeling for Local State Determination in Simulation and Analysis

    For qualitative modeling and analysis, we discuss a general qualitative abstraction of power transmission variables (flow and effort) for elements of flow paths that includes information on resistance, net flow, permissible directions of flow, and qualitative potential. Each type of component model has flow-related variables and an associated internal flow map, connected into an overall flow network of the system. For storage devices, the implicit power transfer to the environment is represented by "virtual" circuits that include an environmental junction. A heterogeneous aggregation method simplifies the path structure. A method determines global flow-path changes during dynamic simulation and analysis, and identifies corresponding local flow state changes that are effects of global configuration changes. Flow-path determination is triggered by any change in a flow-related device variable in a simulation or analysis. Components (path elements) that may be affected are identified, and flow-related attributes favoring flow in the two possible directions are collected for each of them. Next, flow-related attributes are determined for each affected path element, based on possibly conflicting indications of flow direction. Spurious qualitative ambiguities are minimized by using relative magnitudes and permissible directions of flow, and by favoring flow sources over effort sources when comparing flow tendencies. The results are output to local flow states of affected components.
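The direction-resolution step described above (collect evidence for both directions, prune by permissible directions, then compare relative magnitudes) can be sketched in Python. This is a minimal illustration of the general idea only; the names `PathElement` and `resolve_direction` and the evidence representation are invented here, not taken from the paper.

```python
# Hypothetical sketch of qualitative flow-direction resolution.
# All names and the integer "evidence" encoding are illustrative.

from dataclasses import dataclass, field


@dataclass
class PathElement:
    name: str
    permissible: set = field(default_factory=lambda: {"+", "-"})  # allowed flow directions
    forward_evidence: int = 0   # attributes favoring flow in the "+" direction
    reverse_evidence: int = 0   # attributes favoring flow in the "-" direction


def resolve_direction(elem: PathElement) -> str:
    """Pick a qualitative flow direction, using permissible directions
    to prune spurious ambiguity when indications conflict."""
    candidates = set()
    if elem.forward_evidence > 0:
        candidates.add("+")
    if elem.reverse_evidence > 0:
        candidates.add("-")
    candidates &= elem.permissible           # discard impossible directions
    if len(candidates) == 1:
        return candidates.pop()
    if not candidates:
        return "0"                           # no flow
    # Conflicting indications: fall back on relative magnitudes.
    if elem.forward_evidence > elem.reverse_evidence:
        return "+"
    if elem.reverse_evidence > elem.forward_evidence:
        return "-"
    return "?"                               # genuinely ambiguous
```

A check valve, say, would have `permissible={"+"}`, so reverse-favoring evidence is pruned before any magnitude comparison, removing a spurious ambiguity.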

    Predicting System Accidents with Model Analysis During Hybrid Simulation

    Standard discrete event simulation is commonly used to identify system bottlenecks and starving and blocking conditions in resources and services. The CONFIG hybrid discrete/continuous simulation tool can simulate such conditions in combination with inputs external to the simulation. This provides a means for evaluating the vulnerability of a system's design, operating procedures, and control software to system accidents. System accidents are brought about by complex unexpected interactions among multiple system failures, faulty or misleading sensor data, and inappropriate responses of human operators or software. The flows of resource and product materials play a central role in the hazardous situations that may arise in fluid transport and processing systems. We describe the capabilities of CONFIG for simulation-time linear circuit analysis of fluid flows in the context of model-based hazard analysis. We focus on how CONFIG simulates the static stresses in flow systems. Unlike other flow-related properties, static stresses (or static potentials) cannot be represented by a set of state equations. The distribution of static stresses is dependent on the specific history of operations performed on a system. We discuss the use of this type of information in hazard analysis of system designs.

    Semantic Annotation of Complex Text Structures in Problem Reports

    Text analysis is important for effective information retrieval from databases where the critical information is embedded in text fields. Aerospace safety depends on effective retrieval of relevant and related problem reports for the purpose of trend analysis. The complex text syntax in problem descriptions has limited statistical text mining of problem reports. The presentation describes an intelligent tagging approach that applies syntactic and then semantic analysis to overcome this problem. The tags identify types of problems and equipment that are embedded in the text descriptions. The power of these tags is illustrated in a faceted searching and browsing interface for problem report trending that combines automatically generated tags with database code fields and temporal information.
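The two-stage idea (a syntactic pass over tokens, then semantic pattern matching) can be sketched as follows. The tag vocabulary, lexicon entries, and patterns here are invented for illustration and do not come from the described system.

```python
# Illustrative sketch of two-stage tag assignment for problem-report text.
# The lexicon, patterns, and tag names are hypothetical examples.
import re

EQUIPMENT_LEXICON = {"valve": "EQUIPMENT/valve", "pump": "EQUIPMENT/pump"}
PROBLEM_PATTERNS = [
    (re.compile(r"\bleak(ed|ing)?\b", re.IGNORECASE), "PROBLEM/leak"),
    (re.compile(r"\bfail(ed|ure)?\b", re.IGNORECASE), "PROBLEM/failure"),
]


def tag_report(text: str) -> set:
    """Assign equipment and problem-type tags to a report description."""
    tags = set()
    # Syntactic pass: tokenize and look up equipment terms.
    for token in re.findall(r"[a-z]+", text.lower()):
        if token in EQUIPMENT_LEXICON:
            tags.add(EQUIPMENT_LEXICON[token])
    # Semantic pass: match problem-type patterns over the full text.
    for pattern, tag in PROBLEM_PATTERNS:
        if pattern.search(text):
            tags.add(tag)
    return tags
```

Tags produced this way can then be combined with database code fields and dates to drive a faceted search interface.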

    The New ‘Hidden Abode’: Reflections on Value and Labour in the New Economy

    In a pivotal section of Capital, volume 1, Marx (1976: 279) notes that, in order to understand the capitalist production of value, we must descend into the ‘hidden abode of production’: the site of the labour process conducted within an employment relationship. In this paper we argue that by remaining wedded to an analysis of labour that is confined to the employment relationship, Labour Process Theory (LPT) has missed a fundamental shift in the location of value production in contemporary capitalism. We examine this shift through the work of Autonomist Marxists like Hardt and Negri, Lazzarato and Arvidsson, who offer theoretical leverage to prize open a new ‘hidden abode’ outside employment, for example in the ‘production of organization’ and in consumption. Although they can open up this new ‘hidden abode’, without LPT's fine-grained analysis of control/resistance, indeterminacy and structured antagonism, these theorists risk succumbing to empirically naive claims about the ‘new economy’. Through developing an expanded conception of a ‘new hidden abode’ of production, the paper demarcates an analytical space in which both LPT and Autonomist Marxism can expand and develop their understanding of labour and value production in today's economy.

    From the open road to the high seas? Piracy, damnation and resistance in academic consumption of publishing

    Armin Beverungen conducts research on how universities retain their charitable status in a market environment, and on the teaching of ethics in business schools. Steffen Böhm has a particular interest in the economics and management of sustainability. He has also founded an open access journal and an open access press, MayFlyBooks. Christopher Land works on artists and the management of their creativity.

    Array comparative genomic hybridisation (aCGH) analysis of premenopausal breast cancers from a nuclear fallout area and matched cases from Western New York

    High-resolution array comparative genomic hybridisation (aCGH) analysis of DNA copy number aberrations (CNAs) was performed on breast carcinomas in premenopausal women from Western New York (WNY) and from Gomel, Belarus, an area exposed to fallout from the 1986 Chernobyl nuclear accident. Genomic DNA was isolated from 47 frozen tumour specimens from 42 patients and hybridised to arrays spotted with more than 3000 BAC clones. In all, 20 samples were from WNY and 27 were from Belarus. In total, 34 samples were primary tumours and 13 were lymph node metastases, including five matched pairs from Gomel. The average number of total CNAs per sample was 76 (range 35–134). We identified 152 CNAs (92 gains and 60 losses) occurring in more than 10% of the samples. The most common amplifications included gains at 8q13.2 (49%), at 1p21.1 (36%), and at 8q24.21 (36%). The most common deletions were at 1p36.22 (26%), at 17p13.2 (26%), and at 8p23.3 (23%). Belarusian tumours had more amplifications and fewer deletions than WNY breast cancers. HER2/neu negativity and younger age were also associated with a higher number of gains and fewer losses. In the five paired samples, we observed more discordant than concordant DNA changes. Unsupervised hierarchical cluster analysis revealed two distinct groups of tumours: one comprised predominantly of Belarusian carcinomas and the other largely consisting of WNY cases. In total, 50 CNAs occurred significantly more commonly in one cohort vs the other, and these included some candidate signature amplifications in the breast cancers in women exposed to significant radiation. In conclusion, our high-density aCGH study has revealed a large number of genetic aberrations in individual premenopausal breast cancer specimens, some of which had not been reported before. We identified a distinct CNA profile for carcinomas from a nuclear fallout area, suggesting a possible molecular fingerprint of radiation-associated breast cancer.

    Identification and reconstruction of low-energy electrons in the ProtoDUNE-SP detector

    Measurements of electrons from νe interactions are crucial for the Deep Underground Neutrino Experiment (DUNE) neutrino oscillation program, as well as searches for physics beyond the standard model, supernova neutrino detection, and solar neutrino measurements. This article describes the selection and reconstruction of low-energy (Michel) electrons in the ProtoDUNE-SP detector. ProtoDUNE-SP is one of the prototypes for the DUNE far detector, built and operated at CERN as a charged particle test beam experiment. A sample of low-energy electrons produced by the decay of cosmic muons is selected with a purity of 95%. This sample is used to calibrate the low-energy electron energy scale with two techniques. An electron energy calibration based on a cosmic ray muon sample uses calibration constants derived from measured and simulated cosmic ray muon events. Another calibration technique makes use of the theoretically well-understood Michel electron energy spectrum to convert reconstructed charge to electron energy. In addition, the effects of detector response on the low-energy electron energy scale and its resolution, including readout electronics threshold effects, are quantified. Finally, the relation between the theoretical and reconstructed low-energy electron energy spectrum is derived and the energy resolution is characterized. The low-energy electron selection presented here accounts for about 75% of the total electron deposited energy. After the addition of lost energy using a Monte Carlo simulation, the energy resolution improves from about 40% to 25% at 50 MeV. These results are used to validate the expected capabilities of the DUNE far detector to reconstruct low-energy electrons.
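The core of the charge-based calibration described above is a linear conversion from reconstructed charge to electron energy via a constant derived from a reference sample. A minimal sketch of that arithmetic, with made-up function names and example numbers (the constant 0.05 MeV/ADC and the values in the usage are illustrative, not measured):

```python
# Minimal sketch of charge-to-energy calibration and fractional resolution.
# The calibration constant and all numeric values are invented examples.

def calibrate_energy(reconstructed_charge_adc: float, adc_to_mev: float) -> float:
    """Convert reconstructed charge (ADC counts) to electron energy (MeV)
    using a calibration constant derived from a reference sample."""
    return reconstructed_charge_adc * adc_to_mev


def fractional_resolution(sigma_e_mev: float, energy_mev: float) -> float:
    """Energy resolution expressed as a fraction of the deposited energy."""
    return sigma_e_mev / energy_mev
```

For example, with a hypothetical constant of 0.05 MeV per ADC count, a reconstructed charge of 1000 ADC counts corresponds to 50 MeV, and a spread of 12.5 MeV at that energy is a 25% fractional resolution, matching the scale of the numbers quoted in the abstract.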

    Low exposure long-baseline neutrino oscillation sensitivity of the DUNE experiment

    The Deep Underground Neutrino Experiment (DUNE) will produce world-leading neutrino oscillation measurements over the lifetime of the experiment. In this work, we explore DUNE's sensitivity to observe charge-parity violation (CPV) in the neutrino sector, and to resolve the mass ordering, for exposures of up to 100 kiloton-megawatt-years (kt-MW-yr). The analysis includes detailed uncertainties on the flux prediction, the neutrino interaction model, and detector effects. We demonstrate that DUNE will be able to unambiguously resolve the neutrino mass ordering at a 3σ (5σ) level, with a 66 (100) kt-MW-yr far detector exposure, and has the ability to make strong statements at significantly shorter exposures depending on the true value of other oscillation parameters. We also show that DUNE has the potential to make a robust measurement of CPV at a 3σ level with a 100 kt-MW-yr exposure for the maximally CP-violating values δ_CP = ±π/2. Additionally, the dependence of DUNE's sensitivity on the exposure taken in neutrino-enhanced and antineutrino-enhanced running is discussed. An equal fraction of exposure taken in each beam mode is found to be close to optimal when considered over the entire space of interest.

    A Gaseous Argon-Based Near Detector to Enhance the Physics Capabilities of DUNE

    This document presents the concept and physics case for a magnetized gaseous argon-based detector system (ND-GAr) for the Deep Underground Neutrino Experiment (DUNE) Near Detector. This detector system is required in order for DUNE to reach its full physics potential in the measurement of CP violation and in delivering precision measurements of oscillation parameters. In addition to its critical role in the long-baseline oscillation program, ND-GAr will extend the overall physics program of DUNE. The LBNF high-intensity proton beam will provide a large flux of neutrinos that is sampled by ND-GAr, enabling DUNE to discover new particles and search for new interactions and symmetries beyond those predicted in the Standard Model.