
    Quantitative Verification: Formal Guarantees for Timeliness, Reliability and Performance

    Computerised systems appear in almost all aspects of our daily lives, often in safety-critical scenarios such as embedded control systems in cars and aircraft or medical devices such as pacemakers and sensors. We are thus increasingly reliant on these systems working correctly, despite often operating in unpredictable or unreliable environments. Designers of such devices need ways to guarantee that they will operate in a reliable and efficient manner. Quantitative verification is a technique for analysing quantitative aspects of a system's design, such as timeliness, reliability or performance. It applies formal methods, based on a rigorous analysis of a mathematical model of the system, to automatically prove certain precisely specified properties, e.g. "the airbag will always deploy within 20 milliseconds after a crash" or "the probability of both sensors failing simultaneously is less than 0.001". The ability to formally guarantee quantitative properties of this kind is beneficial across a wide range of application domains. For example, in safety-critical systems, it may be essential to establish credible bounds on the probability with which certain failures or combinations of failures can occur. In embedded control systems, it is often important to comply with strict constraints on timing or resources. More generally, being able to derive guarantees on precisely specified levels of performance or efficiency is a valuable tool in the design of, for example, wireless networking protocols, robotic systems or power management algorithms, to name but a few. This report gives a short introduction to quantitative verification, focusing in particular on a widely used technique called model checking, and its generalisation to the analysis of quantitative aspects of a system such as timing, probabilistic behaviour or resource usage. The intended audience is industrial designers and developers of systems such as those highlighted above who could benefit from the application of quantitative verification, but lack expertise in formal verification or modelling.
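    As a rough illustration of the kind of property quoted above, the sketch below (not taken from the report; the states, the per-step failure rate and the step bound are hypothetical) computes a bounded reachability probability on a tiny Markov chain and checks it against a threshold, which is the essence of probabilistic model checking.

        # Minimal sketch: does "the probability of both sensors having failed
        # within 1000 steps" stay below 0.001?  All numbers are hypothetical.
        import numpy as np

        p_fail = 1e-5  # per-step failure probability of one sensor (hypothetical)

        # States: 0 = both sensors OK, 1 = one failed, 2 = both failed (absorbing).
        P = np.array([
            [(1 - p_fail) ** 2, 2 * p_fail * (1 - p_fail), p_fail ** 2],
            [0.0, 1 - p_fail, p_fail],
            [0.0, 0.0, 1.0],
        ])

        def prob_reach(P, target, steps, start=0):
            """Probability of having reached `target` from `start` within `steps` steps."""
            dist = np.zeros(P.shape[0])
            dist[start] = 1.0
            for _ in range(steps):
                dist = dist @ P  # propagate the state distribution one step
            return dist[target]

        p = prob_reach(P, target=2, steps=1000)
        print(f"P(both sensors failed within 1000 steps) = {p:.2e}")
        print("property P < 0.001 holds" if p < 0.001 else "property violated")

    Probabilistic model checkers automate this style of computation for much richer models and property logics than the toy chain above.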

    The NASA SBIR product catalog

    The purpose of this catalog is to assist small business firms in making the community aware of products emerging from their efforts in the Small Business Innovation Research (SBIR) program. It contains descriptions of some products that have advanced into Phase 3 and others that are identified as prospective products. Both lists of products in this catalog are based on information supplied by NASA SBIR contractors in responding to an invitation to be represented in this document. Generally, all products suggested by the small firms were included in order to meet the goals of information exchange for SBIR results. Of the 444 SBIR contractors NASA queried, 137 provided information on 219 products. The catalog presents the product information in the technology areas listed in the table of contents. Within each area, the products are listed in alphabetical order by product name and are given identifying numbers. Also included is an alphabetical listing of the companies that have products described. This listing cross-references the product list and provides information on the business activity of each firm. In addition, there are three indexes: one listing firms by state, one listing the products by the NASA Centers that managed the SBIR projects, and one listing the products by the relevant Technical Topics in NASA's annual program solicitation under which each SBIR project was selected.

    Detector Description and Performance for the First Coincidence Observations between LIGO and GEO

    For 17 days in August and September 2002, the LIGO and GEO interferometer gravitational wave detectors were operated in coincidence to produce their first data for scientific analysis. Although the detectors were still far from their design sensitivity levels, the data can be used to place better upper limits on the flux of gravitational waves incident on the Earth than previous direct measurements. This paper describes the instruments and the data in some detail, as a companion to analysis papers based on the first data.

    Signal processing for improved MPEG-based communication systems


    Integrating the Supply Chain with RFID: A Technical and Business Analysis

    This paper presents an in-depth analysis of the technical and business implications of adopting Radio Frequency Identification (RFID) in organizational settings. The year 2004 marked a significant shift toward adopting RFID because of mandates by large retailers and government organizations. The use of RFID technology is expected to increase rapidly in the next few years. At present, however, initial barriers against widespread adoption include standards, interoperability, costs, forward compatibility, and lack of familiarity. This paper describes the basic components of an RFID system, including tags, readers, and antennas, and how they work together using an integrated supply chain model. Our analysis suggests that businesses need to overcome human resource scarcity and security, legal, and financial challenges, and make informed decisions regarding standards and process reengineering. The technology is not fully mature and suffers from issues of attenuation and interference. A laboratory experiment conducted by the authors shows that the middleware is not yet at a plug-and-play stage, which means that initial adopters need to spend considerable effort to integrate RFID into their existing business processes. Appendices contain a glossary of common RFID terms, a list of RFID vendors, and detailed findings of the laboratory experiment.
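    As a hedged illustration of the middleware layer discussed above (not the authors' experiment; the event format, tag value and smoothing window are hypothetical), the sketch below shows one basic function such middleware performs: collapsing the flood of duplicate raw tag reads into per-tag observation events before they reach supply-chain applications.

        # Minimal sketch of read smoothing/deduplication in an RFID middleware layer.
        from collections import defaultdict
        from dataclasses import dataclass

        @dataclass
        class TagRead:
            epc: str        # Electronic Product Code reported by the tag
            reader_id: str  # which reader/antenna saw the tag
            timestamp: float

        def smooth_reads(reads, window=2.0):
            """Merge raw reads of the same tag at the same reader that fall
            within `window` seconds into a single observation event."""
            events = []
            last_seen = defaultdict(lambda: float("-inf"))
            for r in sorted(reads, key=lambda r: r.timestamp):
                key = (r.epc, r.reader_id)
                if r.timestamp - last_seen[key] > window:
                    events.append(r)           # new observation of this tag here
                last_seen[key] = r.timestamp   # extend the smoothing window
            return events

        # A tag sitting near a dock-door reader is typically reported many times
        # per second; downstream logic only cares that the case arrived once.
        raw = [TagRead("urn:epc:id:sgtin:0614141.107346.2017",  # hypothetical EPC
                       "dock-door-1", t / 10) for t in range(30)]
        print(len(raw), "raw reads ->", len(smooth_reads(raw)), "observation event(s)")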

    JUNO Conceptual Design Report

    The Jiangmen Underground Neutrino Observatory (JUNO) is proposed to determine the neutrino mass hierarchy using an underground liquid scintillator detector. It is located 53 km away from both the Yangjiang and Taishan Nuclear Power Plants in Guangdong, China. The experimental hall, spanning more than 50 meters, is under a granite mountain with over 700 m of overburden. Within six years of running, the detection of reactor antineutrinos can resolve the neutrino mass hierarchy at a confidence level of 3-4σ, and determine the neutrino oscillation parameters sin²θ₁₂, Δm²₂₁, and |Δm²ₑₑ| to an accuracy of better than 1%. The JUNO detector can also be used to study terrestrial and extra-terrestrial neutrinos and new physics beyond the Standard Model. The central detector contains 20,000 tons of liquid scintillator in an acrylic sphere 35 m in diameter. About 17,000 PMTs of 508 mm diameter with high quantum efficiency provide roughly 75% optical coverage. The current choice of liquid scintillator is linear alkyl benzene (LAB) as the solvent, plus PPO as the scintillation fluor and Bis-MSB as a wavelength shifter. The number of detected photoelectrons per MeV is larger than 1,100, and the energy resolution is expected to be 3% at 1 MeV. The calibration system is designed to deploy multiple sources to cover the entire energy range of reactor antineutrinos and to achieve full-volume position coverage inside the detector. The veto system is used for muon detection and for the study and reduction of muon-induced backgrounds. It consists of a Water Cherenkov detector and a Top Tracker system. The readout system, the detector control system and the offline system ensure efficient and stable data acquisition and processing.
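    The quoted light yield and resolution are mutually consistent under simple counting statistics: if the energy resolution is dominated by photoelectron statistics, σ(E)/E ≈ 1/√Npe. A quick check (my own back-of-the-envelope assumption, ignoring non-stochastic terms) is sketched below.

        # Back-of-the-envelope check (not from the report): photostatistics-limited
        # energy resolution from the quoted light yield of >1,100 p.e./MeV.
        import math

        npe_per_mev = 1100   # detected photoelectrons per MeV (from the abstract)
        energy_mev = 1.0
        resolution = 1.0 / math.sqrt(npe_per_mev * energy_mev)
        print(f"stochastic resolution at {energy_mev} MeV: {resolution:.1%}")  # about 3%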

    In Datacenter Performance, The Only Constant Is Change

    All computing infrastructure suffers from performance variability, be it bare-metal or virtualized. This phenomenon originates from many sources: some transient, such as noisy neighbors, and others more permanent but sudden, such as changes or wear in hardware, changes in the underlying hypervisor stack, or even undocumented interactions between the policies of the computing resource provider and the active workloads. Thus, performance measurements obtained on clouds, HPC facilities, and, more generally, datacenter environments are almost guaranteed to exhibit performance regimes that evolve over time, which leads to undesirable nonstationarities in application performance. In this paper, we present our analysis of the performance of the bare-metal hardware available on the CloudLab testbed, where we focus on quantifying the evolving performance regimes using changepoint detection. We describe our findings, backed by a dataset with nearly 6.9M benchmark results collected from over 1600 machines over a period of 2 years and 9 months. These findings yield a comprehensive characterization of real-world performance variability patterns in one computing facility, provide a methodology for studying such patterns on other infrastructures, and contribute to a better understanding of performance variability in general.
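    The paper's segmentation of evolving performance regimes relies on changepoint detection. As a hedged sketch (not the authors' implementation; the synthetic "benchmark runtimes" and the single-changepoint search are my own simplification), the following finds the mean-shift split point that best explains a series of measurements.

        # Minimal mean-shift changepoint sketch on synthetic benchmark runtimes.
        import numpy as np

        def best_split(x):
            """Return the split index minimising within-segment squared error."""
            n = len(x)
            best_i, best_cost = None, np.inf
            for i in range(2, n - 1):
                left, right = x[:i], x[i:]
                cost = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
                if cost < best_cost:
                    best_i, best_cost = i, cost
            return best_i, best_cost

        rng = np.random.default_rng(0)
        # Synthetic runtimes with a regime change (e.g. a firmware update) at run 300.
        runtimes = np.concatenate([rng.normal(10.0, 0.3, 300), rng.normal(11.5, 0.3, 200)])

        split, cost = best_split(runtimes)
        no_split_cost = ((runtimes - runtimes.mean()) ** 2).sum()
        print(f"most likely changepoint at run {split} "
              f"(cost {cost:.1f} vs {no_split_cost:.1f} with no split)")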

    Ultra-Low Power Wake Up Receiver For Medical Implant Communications Service Transceiver

    This thesis explores the specific requirements and challenges in the design of a dedicated wake-up receiver for the Medical Implant Communications Service, built around a novel “uncertain-IF” architecture combined with a high-Q filtering MEMS resonator and a free-running CMOS ring oscillator as the RF local oscillator. The receiver prototype, implemented in IBM 0.18 μm mixed-signal 7ML RF CMOS technology, achieves a sensitivity of -62 dBm at 404 MHz while consuming less than 100 μW from a 1 V supply.

    Inductive activation of magnetite filled shape memory polymers

    Thermally activated shape memory polymers are a desirable material for use in dynamic structures due to their large strain recovery, light weight, and tunable activation. The addition of ferromagnetic susceptor particles to a polymer matrix provides the ability to heat volumetrically and remotely via induction. Here, remote induction heating of magnetite filler particles dispersed in a thermoset matrix is used to activate shape memory polymer as both solid and foam composites. Bulk material properties and performance are characterized and compared over a range of filler parameters, induction parameters, and packaging configurations. Magnetite filler particles are investigated over a range of power input in order to understand the effects of particle size and shape on heat generation and flux into the matrix. This investigation successfully activates shape memory polymers in 10 to 20 seconds, with no significant impact of filler particles up to 10 wt% on the mechanical properties of shape memory foam. Performance of different particle materials depends on the amplitude of the driving magnetic field. There is a general improvement in heating performance with increased filler particle content. Characterization indicates that heat transfer between the filler nanoparticles and the foam is the primary constraint on improved heating performance. The use of smaller, acicular particles as one way to improve heat transfer, by increasing the interfacial area between filler and matrix, is examined further.
    M.S. thesis. Committee Chair: Garmestani, Hamid; Committee Members: Gall, Ken; Thadhani, Nares
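    The closing point about acicular particles can be made concrete with a quick geometric comparison (my own illustration, not a calculation from the thesis): for equal particle volume, a needle-like cylinder exposes substantially more surface to the surrounding matrix than a sphere.

        # Surface area of a rod vs a sphere of equal volume, for several aspect ratios.
        import math

        def sphere_area(volume):
            r = (3 * volume / (4 * math.pi)) ** (1 / 3)
            return 4 * math.pi * r ** 2

        def rod_area(volume, aspect_ratio):
            """Surface area of a cylinder whose length is aspect_ratio * diameter."""
            # volume = pi r^2 L with L = aspect_ratio * 2r  ->  volume = 2 pi aspect_ratio r^3
            r = (volume / (2 * math.pi * aspect_ratio)) ** (1 / 3)
            length = aspect_ratio * 2 * r
            return 2 * math.pi * r * (r + length)

        v = 1.0  # arbitrary particle volume; units cancel in the ratio
        for ar in (5, 10, 20):
            print(f"aspect ratio {ar:2d}: rod/sphere area ratio = "
                  f"{rod_area(v, ar) / sphere_area(v):.2f}")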

    Taking “Data” (as a Topic): The Working Policies of Indifference, Purification and Differentiation

    The recent surge of interest in e-science presents an opportune moment to re-examine the fundamental idea of “data”. This paper explores the topic by reporting on the different ways in which the idea of data is handled across many disciplines. From the accounts the various disciplines themselves provide, these ways can be portrayed as the pursuit of three broad policies. The first policy is one of Indifference, which assumes the coherence of the data-concept, so that there is no need to explicate it further. The second policy is Purification, which identifies the essential characteristics of data according to the conventions of a particular discipline, with other modes systematically suppressed. The third policy allows for the Differentiation that is evident in the manifestations of data in the various disciplines that utilise information systems. Greater appreciation among information professionals of these alternative approaches to data will, it is hoped, enhance policy formulation and systems design.