
    TRACTABLE DATA-FLOW ANALYSIS FOR DISTRIBUTED SYSTEMS

    Automated behavior analysis is a valuable technique in the development and maintenance of distributed systems. In this paper, we present a tractable data-flow analysis technique for the detection of unreachable states and actions in distributed systems. The technique follows an approximate approach described by Reif and Smolka, but delivers a more accurate result in assessing unreachable states and actions. The higher accuracy is achieved by the use of two concepts: action dependency and history sets. Although the technique does not exhaustively detect all possible errors, it detects nontrivial errors with a worst-case complexity quadratic in the system size. It can be automated and applied to systems with arbitrary loops and nondeterministic structures. The technique thus provides practical and tractable behavior analysis for preliminary designs of distributed systems, making it an ideal candidate for an interactive checker in software development tools. The technique is illustrated with case studies of a pump control system and an erroneous distributed program, and results from a prototype implementation are presented.
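
    To make the problem concrete, here is a minimal sketch of detecting unreachable states and actions in two communicating state machines by exhaustive exploration of their product graph. It illustrates only the problem the paper addresses: the paper's approximate, quadratic-time technique based on action dependency and history sets is not reproduced here, and the machines, state names, and actions below are hypothetical examples.

        from collections import deque

        # Two communicating finite-state machines, written as
        # {state: [(action, successor), ...]}. Actions appearing in both
        # machines synchronize (rendezvous); all other actions interleave.
        P = {"p0": [("req", "p1")],
             "p1": [("ack", "p0"), ("log", "p1")]}
        Q = {"q0": [("req", "q1")],
             "q1": [("ack", "q0")],
             "q2": [("halt", "q2")]}   # q2 and "halt" are deliberately unreachable

        def actions(m):
            return {a for outs in m.values() for a, _ in outs}

        def analyse(p, q, start=("p0", "q0")):
            sync = actions(p) & actions(q)
            seen, fired, work = {start}, set(), deque([start])
            while work:
                s, t = work.popleft()
                succs = []
                for a, s2 in p[s]:
                    if a in sync:      # rendezvous: both machines move together
                        succs += [(a, (s2, t2)) for b, t2 in q[t] if b == a]
                    else:              # local move of P
                        succs.append((a, (s2, t)))
                for b, t2 in q[t]:
                    if b not in sync:  # local move of Q
                        succs.append((b, (s, t2)))
                for a, nxt in succs:
                    fired.add(a)
                    if nxt not in seen:
                        seen.add(nxt)
                        work.append(nxt)
            reached = {name for pair in seen for name in pair}
            dead_states = (set(p) | set(q)) - reached
            dead_actions = (actions(p) | actions(q)) - fired
            return dead_states, dead_actions

        print(analyse(P, Q))           # -> ({'q2'}, {'halt'})

    Exhaustive exploration like this grows exponentially with the number of processes, which is exactly the cost the paper's approximate analysis is designed to avoid.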

    Analytical Challenges in Modern Tax Administration: A Brief History of Analytics at the IRS


    Why (and How) Networks Should Run Themselves

    The proliferation of networked devices, systems, and applications that we depend on every day makes managing networks more important than ever. The growing security, availability, and performance demands of these applications mean that increasingly difficult network management problems must be solved in real time, across a complex web of interacting protocols and systems. Alas, just as the importance of network management has increased, the network has grown so complex that it is seemingly unmanageable. In this new era, network management requires a fundamentally new approach. Instead of optimizations based on closed-form analysis of individual protocols, network operators need data-driven, machine-learning-based models of end-to-end and application performance, grounded in high-level policy goals and a holistic view of the underlying components. Instead of anomaly detection algorithms that operate on offline analysis of network traces, operators need classification and detection algorithms that can make real-time, closed-loop decisions. Networks should learn to drive themselves. This paper explores this concept, discussing how we might attain this ambitious goal by more closely coupling measurement with real-time control and by relying on learning for inference and prediction about a networked application or system, as opposed to closed-form analysis of individual protocols.
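
    As a toy illustration of the measurement-inference-control loop the paper advocates, the sketch below closes the loop with a simple online predictor. The EWMA model is only a stand-in for the learned, data-driven models the paper calls for, and the metric, threshold, and act() hook are all hypothetical.

        import random

        class EwmaModel:
            """Toy online predictor; large residuals signal trouble."""
            def __init__(self, alpha=0.2):
                self.alpha, self.mean = alpha, None

            def update(self, x):
                self.mean = x if self.mean is None else (
                    self.alpha * x + (1 - self.alpha) * self.mean)

        def measure():
            # Stand-in for live telemetry, e.g. per-flow RTT samples (ms).
            return random.gauss(50.0, 5.0)

        def act(decision):
            # Stand-in for a control action (reroute, rate-limit, ...).
            print("control:", decision)

        model = EwmaModel()
        for _ in range(100):
            rtt = measure()
            predicted = model.mean if model.mean is not None else rtt
            model.update(rtt)
            # Closed loop: decide in real time from the model's prediction,
            # rather than from offline analysis of recorded traces.
            act("reroute" if rtt > 1.5 * predicted else "steady")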

    A Study of Representational Properties of Unsupervised Anomaly Detection in Brain MRI

    Anomaly detection in MRI is of high clinical value in imaging and diagnosis. Unsupervised methods for anomaly detection provide interesting formulations based on reconstruction or latent embedding, offering a way to observe properties related to factorization. We study four existing modeling methods and report our empirical observations, using simple data-science tools, from the perspective of factorization as it is most relevant to unsupervised anomaly detection, considering the case of brain structural MRI. Our study indicates that anomaly detection algorithms exhibiting factorization-related properties are well equipped to distinguish between normal and anomalous data. We have validated our observations on multiple anomalous and normal datasets. Comment: Accepted at the MICCAI Medical Applications with Disentanglements (MAD) Workshop 2022, https://mad.ikim.nrw
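
    The following sketch shows reconstruction-based anomaly scoring, the family of formulations the paper studies, using linear PCA as a stand-in for the deep models evaluated there; the data shapes, latent dimension, and threshold rule are hypothetical.

        import numpy as np

        rng = np.random.default_rng(0)
        normal = rng.normal(0, 1, size=(500, 64))           # "healthy" training vectors
        test = np.vstack([rng.normal(0, 1, size=(5, 64)),
                          rng.normal(4, 1, size=(5, 64))])  # last 5 rows anomalous

        # Fit a low-dimensional latent space on normal data only.
        mean = normal.mean(axis=0)
        _, _, Vt = np.linalg.svd(normal - mean, full_matrices=False)
        basis = Vt[:8]                                   # 8 latent dimensions

        def score(x):
            z = (x - mean) @ basis.T                     # encode
            recon = z @ basis + mean                     # decode
            return np.linalg.norm(x - recon, axis=-1)    # reconstruction error

        threshold = score(normal).mean() + 3 * score(normal).std()
        # Last five rows (the anomalies) exceed the threshold.
        print((score(test) > threshold).astype(int))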

    Report of the panel on earth structure and dynamics, section 6

    The panel identified problems related to the dynamics of the core and mantle that should be addressed by NASA programs. These include investigating the geodynamo based on observations of the Earth's magnetic field, determining the rheology of the mantle from geodetic observations of post-glacial vertical motions and changes in the gravity field, and determining the coupling between plate motions and mantle flow from geodetic observations of plate deformation. Also emphasized is the importance of support for interdisciplinary research that combines various data sets with models coupling rheology, structure, and dynamics.

    Methane Mitigation: Methods to Reduce Emissions, on the Path to the Paris Agreement

    The atmospheric methane burden is increasing rapidly, contrary to pathways compatible with the goals of the 2015 United Nations Framework Convention on Climate Change Paris Agreement. Urgent action is required to bring methane back to a pathway more in line with the Paris goals. Emission reduction from “tractable” (easier to mitigate) anthropogenic sources such as the fossil fuel industries and landfills has been greatly facilitated by technical advances of the past decade, which have radically improved our ability to locate, identify, quantify, and reduce emissions. Measures to reduce emissions from “intractable” (harder to mitigate) anthropogenic sources such as agriculture and biomass burning have received less attention but are also becoming more feasible, including removal of methane from elevated-methane ambient air near sources. The wider effort to use microbiological and dietary intervention to reduce emissions from cattle (and humans) is not addressed in detail in this essentially geophysical review. Though they cannot replace the need to reach “net-zero” emissions of CO2, significant reductions in the methane burden will ease the timescales needed to reach the CO2 reduction targets required for any particular future temperature limit. There is no single magic bullet, but implementation of a wide array of mitigation and emission reduction strategies could substantially cut the global methane burden, at a cost that is relatively low compared to the parallel and necessary measures to reduce CO2, thereby moving the atmospheric methane burden back toward pathways consistent with the goals of the Paris Agreement.

    Integrated testing and verification system for research flight software design document

    The NASA Langley Research Center is developing the MUST (Multipurpose User-oriented Software Technology) program to cut the cost of producing research flight software through a system of software support tools. The HAL/S language is the primary subject of the design. Boeing Computer Services Company (BCS) has designed an integrated verification and testing capability as part of MUST. Documentation, verification, and test options are provided, with special attention to real-time, multiprocessing issues. The needs of the entire software production cycle have been considered, with effective management and reduced life-cycle costs as foremost goals. Capabilities have been included in the design for static detection of data-flow anomalies involving communicating concurrent processes. Some types of ill-formed process synchronization and deadlock are also detected statically.
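
    As a small illustration of one kind of static concurrency check described above, the sketch below flags potential deadlock from statically extracted lock-acquisition orders, without executing the processes. Process and lock names are hypothetical, and the MUST design's actual analyses (for HAL/S) are far more extensive.

        from itertools import combinations

        # Lock-acquisition order for each process, extracted statically.
        acquisition_order = {
            "task_a": ["ch1", "ch2"],
            "task_b": ["ch2", "ch1"],   # reversed order: classic deadlock risk
        }

        def lock_order_edges(order):
            # Edge (a, b) means the process may hold a while acquiring b.
            return set(combinations(order, 2))

        def potential_deadlock(processes):
            edges = set().union(*(lock_order_edges(o) for o in processes.values()))
            # A cycle in the lock-order graph signals possible deadlock; this
            # pairwise check finds 2-cycles only, and a full tool would also
            # search for longer cycles.
            return any((b, a) in edges for a, b in edges)

        print(potential_deadlock(acquisition_order))   # -> True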