517 research outputs found

    Essays on modeling and analysis of dynamic sociotechnical systems

    Get PDF
    A sociotechnical system is a collection of humans and algorithms that interact under the partial supervision of a decentralized controller. These systems often display intricate dynamics and can be characterized by their unique emergent behavior. In this work, we describe, analyze, and model aspects of three distinct classes of sociotechnical systems: financial markets, social media platforms, and elections. Though our work is diverse in subject matter, it is unified through the study of evolution- and adaptation-driven change in social systems and the development of methods used to infer this change.

    We first analyze evolutionary financial market microstructure dynamics in the context of an agent-based model (ABM). The ABM's matching engine implements a frequent batch auction, a recently-developed type of price-discovery mechanism. We subject simple agents to evolutionary pressure using a variety of selection mechanisms, demonstrating that quantile-based selection mechanisms are associated with lower market-wide volatility. We then evolve deep neural networks in the ABM and demonstrate that elite individuals are profitable in backtesting on real foreign exchange data, even though their fitness had never been evaluated on any real financial data during evolution.

    We then turn to the extraction of multi-timescale functional signals from large panels of timeseries generated by sociotechnical systems. We introduce the discrete shocklet transform (DST) and an associated similarity search algorithm, the shocklet transform and ranking (STAR) algorithm, to accomplish this task. We empirically demonstrate the STAR algorithm's invariance to quantitative functional parameterization and provide use case examples. The STAR algorithm compares favorably with Twitter's anomaly detection algorithm on a feature extraction task. We close by using STAR to automatically construct a narrative timeline of societally-significant events using a panel of Twitter word usage timeseries.

    Finally, we model strategic interactions between the foreign intelligence service (Red team) of a country that is attempting to interfere with an election occurring in another country, and the domestic intelligence service of the country in which the election is taking place (Blue team). We derive subgame-perfect Nash equilibrium strategies for both Red and Blue and demonstrate the emergence of arms race interference dynamics when either player has “all-or-nothing” attitudes about the result of the interference episode. We then confront our model with data from the 2016 U.S. presidential election contest, in which Russian military intelligence interfered. We demonstrate that our model captures the qualitative dynamics of this interference for most of the time under study.
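
    As an illustration of the quantile-based selection mechanism mentioned in the abstract, the following Python sketch keeps only agents whose fitness clears a chosen quantile and refills the population with mutated copies of the survivors. It is a minimal sketch under assumed representations (agents as parameter vectors, a toy fitness standing in for trading profit); the name quantile_select and the Gaussian mutation scheme are illustrative, not taken from the thesis.

    import numpy as np

    rng = np.random.default_rng(0)

    def quantile_select(population, fitness, q=0.5):
        """Keep agents whose fitness is at or above the q-th quantile, then
        refill the population with mutated copies of the survivors."""
        cutoff = np.quantile(fitness, q)
        survivors = [a for a, f in zip(population, fitness) if f >= cutoff]
        refilled = list(survivors)
        while len(refilled) < len(population):
            parent = survivors[rng.integers(len(survivors))]
            refilled.append(parent + rng.normal(scale=0.05, size=parent.shape))
        return refilled

    # Toy usage: agents are parameter vectors; the fitness here (negative
    # squared distance from a target) stands in for trading profit in the ABM.
    population = [rng.normal(size=4) for _ in range(50)]
    for generation in range(20):
        fitness = [-np.sum((a - 1.0) ** 2) for a in population]
        population = quantile_select(population, fitness, q=0.5)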

    Adaptive heterogeneous parallelism for semi-empirical lattice dynamics in computational materials science.

    Get PDF
    With the variability in performance of the multitude of parallel environments available today, the conceptual overhead created by the need to anticipate runtime information to make design-time decisions has become overwhelming. Performance-critical applications and libraries carry implicit assumptions based on incidental metrics that are not portable to emerging computational platforms or even alternative contemporary architectures. Furthermore, the significance of runtime concerns such as makespan, energy efficiency and fault tolerance depends on the situational context. This thesis presents a case study in the application of both Mattson's prescriptive pattern-oriented approach and the more principled structured parallelism formalism to the computational simulation of inelastic neutron scattering spectra on hybrid CPU/GPU platforms. The original ad hoc implementation as well as new pattern-based and structured implementations are evaluated for relative performance and scalability. Two new structural abstractions are introduced to facilitate adaptation by lazy optimisation and runtime feedback. A deferred-choice abstraction represents a unified space of alternative structural program variants, allowing static adaptation through model-specific exhaustive calibration with regard to the extrafunctional concerns of runtime, average instantaneous power and total energy usage. Instrumented queues serve as a mechanism for structural composition and provide a representation of extrafunctional state that allows realisation of a market-based decentralised coordination heuristic for competitive resource allocation and the Lyapunov drift algorithm for cooperative scheduling.
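
    A deferred-choice abstraction of the kind described above can be sketched as follows: hold several variants of the same computation and commit to one only after an exhaustive calibration pass has measured each. This minimal Python illustration rests on assumed interfaces; the class name DeferredChoice, the wall-clock metric, and the toy variants are not from the thesis, which targets hybrid CPU/GPU codes and also calibrates against power and energy.

    import functools
    import time

    class DeferredChoice:
        """Holds alternative implementations of one computation and defers the
        choice among them until a calibration pass has measured each one."""

        def __init__(self, variants):
            self.variants = dict(variants)  # name -> callable
            self.chosen = None

        def calibrate(self, sample_input, metric=None):
            """Run every variant on a sample input and keep the lowest-cost one
            (wall-clock time by default; power or energy would slot in here)."""
            metric = metric or (lambda name, elapsed: elapsed)
            costs = {}
            for name, fn in self.variants.items():
                start = time.perf_counter()
                fn(sample_input)
                costs[name] = metric(name, time.perf_counter() - start)
            self.chosen = min(costs, key=costs.get)
            return costs

        def __call__(self, x):
            if self.chosen is None:
                raise RuntimeError("call calibrate() before use")
            return self.variants[self.chosen](x)

    # Toy usage: two reduction variants standing in for alternative CPU/GPU paths.
    choice = DeferredChoice({
        "builtin_sum": sum,
        "fold_sum": lambda xs: functools.reduce(lambda a, b: a + b, xs, 0),
    })
    choice.calibrate(list(range(100_000)))
    total = choice(list(range(10)))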

    MLPerf Inference Benchmark

    Full text link
    Machine-learning (ML) hardware and software system demand is burgeoning. Driven by ML applications, the number of different ML inference systems has exploded. Over 100 organizations are building ML inference chips, and the systems that incorporate existing models span at least three orders of magnitude in power consumption and five orders of magnitude in performance; they range from embedded devices to data-center solutions. Fueling the hardware are a dozen or more software frameworks and libraries. The myriad combinations of ML hardware and ML software make assessing ML-system performance in an architecture-neutral, representative, and reproducible manner challenging. There is a clear need for industry-wide standard ML benchmarking and evaluation criteria. MLPerf Inference answers that call. In this paper, we present our benchmarking method for evaluating ML inference systems. Driven by more than 30 organizations as well as more than 200 ML engineers and practitioners, MLPerf prescribes a set of rules and best practices to ensure comparability across systems with wildly differing architectures. The first call for submissions garnered more than 600 reproducible inference-performance measurements from 14 organizations, representing over 30 systems that showcase a wide range of capabilities. The submissions attest to the benchmark's flexibility and adaptability.
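
    The measurement discipline such a benchmark standardizes can be illustrated, very roughly, with a single-stream-style harness: warm up, issue queries back-to-back, and report tail latency and throughput. The Python sketch below is not the MLPerf LoadGen API; the function name, warm-up count, and reported statistics are illustrative assumptions only.

    import statistics
    import time

    def benchmark_single_stream(infer, queries, warmup=10):
        """Issue queries back-to-back (single-stream style) and report latency
        statistics plus effective throughput."""
        for q in queries[:warmup]:          # warm-up runs are not timed
            infer(q)
        latencies = []
        for q in queries:
            start = time.perf_counter()
            infer(q)
            latencies.append(time.perf_counter() - start)
        latencies.sort()
        return {
            "mean_ms": 1000 * statistics.mean(latencies),
            "p90_ms": 1000 * latencies[int(0.90 * len(latencies)) - 1],
            "queries_per_second": len(latencies) / sum(latencies),
        }

    # Stand-in model; a real submission runs a trained model on a defined
    # dataset under the benchmark's accuracy, duration, and scenario rules.
    report = benchmark_single_stream(lambda x: sum(x), [list(range(1000))] * 200)
    print(report)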

    ViSUS: Visualization Streams for Ultimate Scalability

    Full text link

    A hybrid algorithm for Bayesian network structure learning with application to multi-label learning

    Get PDF
    We present a novel hybrid algorithm for Bayesian network structure learning, called H2PC. It first reconstructs the skeleton of a Bayesian network and then performs a Bayesian-scoring greedy hill-climbing search to orient the edges. The algorithm is based on divide-and-conquer constraint-based subroutines to learn the local structure around a target variable. We conduct two series of experimental comparisons of H2PC against Max-Min Hill-Climbing (MMHC), which is currently the most powerful state-of-the-art algorithm for Bayesian network structure learning. First, we use eight well-known Bayesian network benchmarks with various data sizes to assess the quality of the learned structure returned by the algorithms. Our extensive experiments show that H2PC outperforms MMHC in terms of goodness of fit to new data and quality of the network structure with respect to the true dependence structure of the data. Second, we investigate H2PC's ability to solve the multi-label learning problem. We provide theoretical results to characterize and identify graphically the so-called minimal label powersets that appear as irreducible factors in the joint distribution under the faithfulness condition. The multi-label learning problem is then decomposed into a series of multi-class classification problems, where each multi-class variable encodes a label powerset. H2PC is shown to compare favorably to MMHC in terms of global classification accuracy over ten multi-label data sets covering different application domains. Overall, our experiments support the conclusions that local structural learning with H2PC in the form of local neighborhood induction is a theoretically well-motivated and empirically effective learning framework that is well suited to multi-label learning. The source code (in R) of H2PC as well as all data sets used for the empirical tests are publicly available.
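
    The two-phase hybrid strategy described above (constraint-based skeleton discovery followed by score-based hill climbing restricted to that skeleton) can be sketched schematically in Python as below. The correlation-threshold skeleton and Gaussian BIC used here are simplistic stand-ins for H2PC's actual constraint-based subroutines and Bayesian scoring; all function names are illustrative, not from the paper's R implementation.

    from itertools import combinations, permutations

    import numpy as np

    def learn_skeleton(data, threshold=0.1):
        """Phase 1 stand-in: keep an undirected edge i - j when |corr| exceeds a
        threshold (a crude proxy for constraint-based neighbourhood learning)."""
        corr = np.corrcoef(data, rowvar=False)
        return {frozenset((i, j))
                for i, j in combinations(range(corr.shape[0]), 2)
                if abs(corr[i, j]) > threshold}

    def bic_score(data, parents_of):
        """Gaussian BIC of a DAG given its parent sets (stand-in scoring function)."""
        n, score = data.shape[0], 0.0
        for child, parents in parents_of.items():
            y = data[:, child]
            X = np.column_stack([np.ones(n)] + [data[:, p] for p in parents])
            resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
            score -= n * np.log(resid.var() + 1e-12) / 2 + X.shape[1] * np.log(n) / 2
        return score

    def _has_cycle(parents_of):
        """Detect a directed cycle in the graph encoded as child -> parents."""
        children_of = {v: [] for v in parents_of}
        for child, parents in parents_of.items():
            for p in parents:
                children_of[p].append(child)
        state = {v: 0 for v in parents_of}  # 0 unvisited, 1 on stack, 2 done

        def visit(v):
            if state[v] == 1:
                return True
            if state[v] == 2:
                return False
            state[v] = 1
            if any(visit(c) for c in children_of[v]):
                return True
            state[v] = 2
            return False

        return any(visit(v) for v in parents_of)

    def hill_climb(data, skeleton):
        """Phase 2: greedily add directed edges, restricted to skeleton edges,
        keeping an addition only when it improves the score without a cycle."""
        parents_of = {v: [] for v in range(data.shape[1])}
        improved = True
        while improved:
            improved = False
            current = bic_score(data, parents_of)
            for u, v in permutations(range(data.shape[1]), 2):
                if frozenset((u, v)) in skeleton and u not in parents_of[v]:
                    parents_of[v].append(u)
                    if bic_score(data, parents_of) > current and not _has_cycle(parents_of):
                        improved = True
                        break
                    parents_of[v].remove(u)
        return parents_of

    # Toy usage on synthetic data with chain-like dependence between columns.
    rng = np.random.default_rng(0)
    x0 = rng.normal(size=500)
    x1 = 0.8 * x0 + rng.normal(scale=0.5, size=500)
    x2 = 0.8 * x1 + rng.normal(scale=0.5, size=500)
    print(hill_climb(np.column_stack([x0, x1, x2]),
                     learn_skeleton(np.column_stack([x0, x1, x2]))))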

    Vision 2040: A Roadmap for Integrated, Multiscale Modeling and Simulation of Materials and Systems

    Get PDF
    Over the last few decades, advances in high-performance computing, new materials characterization methods, and, more recently, an emphasis on integrated computational materials engineering (ICME) and additive manufacturing have been a catalyst for multiscale modeling and simulation-based design of materials and structures in the aerospace industry. While these advances have driven significant progress in the development of aerospace components and systems, that progress has been limited by persistent technology and infrastructure challenges that must be overcome to realize the full potential of integrated materials and systems design and simulation modeling throughout the supply chain. As a result, NASA's Transformational Tools and Technology (TTT) Project sponsored a study (performed by a diverse team led by Pratt & Whitney) to define the potential 25-year future state required for integrated multiscale modeling of materials and systems (e.g., load-bearing structures) to accelerate the pace and reduce the expense of innovation in future aerospace and aeronautical systems. This report describes the findings of this 2040 Vision study (e.g., the 2040 vision state; the required interdependent core technical work areas, or Key Elements (KEs); identified gaps and actions to close those gaps; and major recommendations), which constitutes a community consensus document, as it is the result of input from over 450 professionals obtained via: 1) four society workshops (AIAA, NAFEMS, and two TMS), 2) a community-wide survey, and 3) the establishment of 9 expert panels (one per KE), consisting on average of 10 non-team members from academia, government, and industry, to review and update content and to prioritize gaps and actions. The study envisions the development of a cyber-physical-social ecosystem comprising experimentally verified and validated computational models, tools, and techniques, along with the associated digital tapestry, that impacts the entire supply chain to enable cost-effective, rapid, and revolutionary design of fit-for-purpose materials, components, and systems. Although the vision focused on aeronautics and space applications, it is believed that other engineering communities (e.g., automotive and biomedical) can benefit as well from the proposed framework with only minor modifications. Finally, it is TTT's hope and desire that this vision provides the strategic guidance to both public and private research and development decision makers to make the proposed 2040 vision state a reality and thereby provide a significant advancement in the United States' global competitiveness.

    Macro- and Micro-Level Effects on Responsive Financial Regulation

    Get PDF
    We are approaching the 20th anniversary of Ian Ayres’ and John Braithwaite’s 1992 book, Responsive Regulation. This paper, which was prepared for a September 2010 workshop at UBC, considers the implications of the recent financial crisis for Ayres’ and Braithwaite’s concept of “enforced self-regulation.” Its main thesis is that flexible and iterative regulatory strategies, such as enforced self-regulation and its progeny, are more porous to influence from different planes of action than prescriptive regulation would be. When focusing on technical regulatory design strategies, scholars should therefore be cautious about bracketing or underestimating the problem of power operating at the “macro level” of political and economic influence. In the context of financial regulation, this background power framework contributed to an under-ambitious regulatory agenda framed around the inevitability of complexity, the clear value of innovation, and the need to minimize the regulatory burden. Also, at the “micro” plane of implementation, the content of regulatory principles was poorly specified because of the lack of a robust regulatory presence, pervasive (and predictable) over-optimism, and an excessive reliance on computer modeling and code to assess risk and compliance. The paper argues that considerable regulatory autonomy and internal analytical capacity is required to make the precise form that flexible regulation takes reflect actual regulatory intention, rather than the influence of these “macro” and “micro” level forces. Among the various contemporary forms of flexible regulation, only meta-regulation – an approach that includes the updated version of Responsive Regulation, as articulated by John Braithwaite in a new article to be published in this volume – has been designed with this kind of systematic learning at the core of its regulatory project.