
    On the importance of nonlinear modeling in computer performance prediction

    Full text link
    Computers are nonlinear dynamical systems that exhibit complex and sometimes even chaotic behavior. The models used in the computer systems community, however, are linear. This paper is an exploration of that disconnect: when linear models are adequate for predicting computer performance and when they are not. Specifically, we build linear and nonlinear models of the processor load of an Intel i7-based computer as it executes a range of different programs. We then use those models to predict the processor loads forward in time and compare those forecasts to the true continuations of the time series.
    Comment: Appeared in "Proceedings of the 12th International Symposium on Intelligent Data Analysis"
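
The linear-versus-nonlinear comparison the abstract describes can be illustrated with a minimal sketch: fit a linear autoregressive (AR) model to a time series by least squares, then iterate it forward to produce a forecast that can be compared against the true continuation. The helper names and the synthetic chaotic series below are illustrative assumptions, not the paper's i7 processor-load data.

```python
import numpy as np

def fit_ar(series, order):
    """Least-squares fit of a linear AR(order) model: x_t ~ sum_i a_i * x_{t-1-i}."""
    X = np.column_stack(
        [series[order - 1 - i : len(series) - 1 - i] for i in range(order)]
    )
    y = series[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def forecast_ar(history, coeffs, steps):
    """Iterate the fitted linear model forward to predict future values."""
    buf = list(history[-len(coeffs):])
    preds = []
    for _ in range(steps):
        nxt = sum(c * buf[-(i + 1)] for i, c in enumerate(coeffs))
        buf.append(nxt)
        preds.append(nxt)
    return np.array(preds)

# A chaotic logistic-map series stands in for the processor-load signal:
# a linear model fit to it loses predictive power within a few steps.
x = [0.4]
for _ in range(499):
    x.append(3.9 * x[-1] * (1.0 - x[-1]))
x = np.array(x)
preds = forecast_ar(x[:400], fit_ar(x[:400], 4), steps=20)
```

For a signal that really is generated by a linear process, the same fit recovers the generating coefficients; for the chaotic map, forecast error grows quickly with the horizon, which is exactly the disconnect the paper examines.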

    Layered architecture for quantum computing

    Full text link
    We develop a layered quantum computer architecture, which is a systematic framework for tackling the individual challenges of developing a quantum computer while constructing a cohesive device design. We discuss many of the prominent techniques for implementing circuit-model quantum computing and introduce several new methods, with an emphasis on employing surface code quantum error correction. In doing so, we propose a new quantum computer architecture based on optical control of quantum dots. The timescales of physical hardware operations and logical, error-corrected quantum gates differ by several orders of magnitude. By dividing functionality into layers, we can design and analyze subsystems independently, demonstrating the value of our layered architectural approach. Using this concrete hardware platform, we provide resource analysis for executing fault-tolerant quantum algorithms for integer factoring and quantum simulation, finding that the quantum dot architecture we study could solve such problems on the timescale of days.
    Comment: 27 pages, 20 figures

    Time Series Analysis using Embedding Dimension on Heart Rate Variability

    Get PDF
    Heart Rate Variability (HRV) is a measured sequence of one or more observable variables of an underlying dynamical system whose state changes with time. In practice, it is difficult to know which variables govern the actual dynamical system. In this research, the Embedding Dimension (ED) is used to probe the nature of the underlying dynamical system. The False Nearest Neighbour (FNN) method of estimating the ED has been adapted for analysing and predicting the variables responsible for the HRV time series. It shows that the ED can provide evidence of the dynamic variables that contribute to the HRV time series. Also, embedding the HRV time series into a four-dimensional space produced the smallest number of FNNs. This result strongly suggests that the Autonomic Nervous System that drives the heart is a two-feature dynamical system: the sympathetic and the parasympathetic nervous systems.
    Peer reviewed. Final published version.
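
The false-nearest-neighbour test the abstract relies on can be sketched directly: embed the scalar series in d dimensions with delay vectors, find each point's nearest neighbour, and flag the pair as "false" if adding the (d+1)-th delay coordinate pushes the pair far apart. The function names, the tolerance `rtol`, and the synthetic sinusoid below are illustrative assumptions in the spirit of the standard FNN criterion, not the authors' exact implementation.

```python
import numpy as np

def delay_embed(x, dim, tau=1):
    """Stack delayed copies of x into dim-dimensional state vectors."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def fnn_fraction(x, dim, tau=1, rtol=10.0):
    """Fraction of nearest neighbours in dim dimensions that are 'false',
    i.e. that separate sharply when the (dim+1)-th coordinate is added."""
    emb = delay_embed(x, dim + 1, tau)
    pts = emb[:, :dim]                      # neighbours judged in dim dims
    false = 0
    for i in range(len(pts)):
        d = np.linalg.norm(pts - pts[i], axis=1)
        d[i] = np.inf                       # exclude the point itself
        j = int(np.argmin(d))
        if d[j] > 0 and abs(emb[i, dim] - emb[j, dim]) / d[j] > rtol:
            false += 1
    return false / len(pts)

# The FNN fraction drops once the embedding dimension is large enough:
x = np.sin(0.37 * np.arange(800))
fractions = [fnn_fraction(x, d, tau=5) for d in (1, 2, 3)]
```

One picks the smallest dimension at which the fraction is near zero; for the HRV series in the paper, that dimension was four.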

    Reduced order modeling of fluid flows: Machine learning, Kolmogorov barrier, closure modeling, and partitioning

    Full text link
    In this paper, we put forth a long short-term memory (LSTM) nudging framework for the enhancement of reduced order models (ROMs) of fluid flows utilizing noisy measurements. We build on the fact that in a realistic application, there are uncertainties in initial conditions, boundary conditions, model parameters, and/or field measurements. Moreover, conventional nonlinear ROMs based on Galerkin projection (GROMs) suffer from imperfection and solution instabilities due to the modal truncation, especially for advection-dominated flows with slow decay in the Kolmogorov width. In the presented LSTM-Nudge approach, we fuse forecasts from a combination of imperfect GROM and uncertain state estimates with sparse Eulerian sensor measurements to provide more reliable predictions in a dynamical data assimilation framework. We illustrate the idea with the viscous Burgers problem, a benchmark test bed with quadratic nonlinearity and Laplacian dissipation. We investigate the effects of measurement noise and state-estimate uncertainty on the performance of the LSTM-Nudge approach. We also demonstrate that it can handle different levels of temporal and spatial measurement sparsity. This first step in our assessment of the proposed model shows that LSTM nudging could represent a viable real-time predictive tool in emerging digital twin systems.
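
The nudging mechanism itself is classical and easy to sketch: run the (imperfect) model forward and, whenever a measurement arrives, pull the state toward it with a gain term. The paper learns this correction with an LSTM; the constant-gain linear sketch below, on a toy rotation system rather than the Burgers ROM, only illustrates the mechanism being learned. All names and parameters are illustrative assumptions.

```python
import numpy as np

def rotation(theta):
    """Discrete-time rotation: a stand-in 'model' with a tunable frequency."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def run(model, x0, steps, obs=None, H=None, gain=0.0):
    """Propagate the model; nudge toward observations y_k when available:
    x <- x + gain * H^T (y_k - H x)."""
    x = np.asarray(x0, dtype=float)
    traj = [x.copy()]
    for k in range(steps):
        x = model @ x
        if obs is not None and k in obs:
            x = x + gain * H.T @ (obs[k] - H @ x)
        traj.append(x.copy())
    return np.array(traj)

# Truth rotates at 0.10 rad/step; the model wrongly assumes 0.12. Sparse,
# noisy observations of the first component keep the nudged run on track.
rng = np.random.default_rng(0)
H = np.array([[1.0, 0.0]])
truth = run(rotation(0.10), [1.0, 0.0], 200)
obs = {k: H @ truth[k + 1] + rng.normal(0.0, 0.05, 1) for k in range(0, 200, 5)}
free = run(rotation(0.12), [1.0, 0.0], 200)
nudged = run(rotation(0.12), [1.0, 0.0], 200, obs=obs, H=H, gain=0.5)
```

Averaged over the trajectory, the nudged run stays closer to the truth than the free model run, even though only one of the two state components is ever observed and the observations are both sparse and noisy.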

    Recursive Motion and Structure Estimation with Complete Error Characterization

    Get PDF
    We present an algorithm that performs recursive estimation of ego-motion and ambient structure from a stream of monocular perspective images of a number of feature points. The algorithm is based on an Extended Kalman Filter (EKF) that integrates over time the instantaneous motion and structure measurements computed by a two-perspective-views step. Key features of our filter are (1) global observability of the model, and (2) complete on-line characterization of the uncertainty of the measurements provided by the two-views step. The filter is thus guaranteed to be well-behaved regardless of the particular motion undergone by the observer. Regions of motion space that do not allow recovery of structure (e.g. pure rotation) may be crossed while maintaining good estimates of structure and motion; whenever reliable measurements are available, they are exploited. The algorithm works well for arbitrary motions with minimal smoothness assumptions and no ad hoc tuning. Simulations are presented that illustrate these characteristics.
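
The core recursion of such a filter is compact. The sketch below is a generic extended Kalman filter step (predict with the nonlinear model and its Jacobian, then correct with the innovation), not the paper's specific motion-and-structure state; all function and parameter names are illustrative assumptions.

```python
import numpy as np

def ekf_step(x, P, z, f, F_jac, h, H_jac, Q, R):
    """One EKF cycle: linearize f and h at the current estimate,
    propagate the covariance, and correct with the measurement z."""
    # Predict
    x_pred = f(x)
    F = F_jac(x)
    P_pred = F @ P @ F.T + Q
    # Update
    H = H_jac(x_pred)
    innovation = z - h(x_pred)
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ innovation
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

Roughly, the "complete error characterization" of the two-views step enters through R: feeding the filter the measurement covariance it actually achieves at each step, rather than a hand-tuned constant, is what removes the ad hoc tuning.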

    Computation-Communication Trade-offs and Sensor Selection in Real-time Estimation for Processing Networks

    Full text link
    Recent advances in electronics are enabling substantial processing to be performed at each node (robots, sensors) of a networked system. Local processing enables data compression and may mitigate measurement noise, but it is still slower than a central computer (it entails a larger computational delay). However, while nodes can process the data in parallel, centralized computation is sequential in nature. On the other hand, if a node sends raw data to a central computer for processing, it incurs a communication delay. This leads to a fundamental communication-computation trade-off, where each node has to decide on the optimal amount of preprocessing in order to maximize the network performance. We consider a network in charge of estimating the state of a dynamical system and provide three contributions. First, we provide a rigorous problem formulation for optimal real-time estimation in processing networks in the presence of delays. Second, we show that, in the case of a homogeneous network (where all sensors have the same computation) that monitors a continuous-time scalar linear system, the optimal amount of local preprocessing maximizing the network estimation performance can be computed analytically. Third, we consider the realistic case of a heterogeneous network monitoring a discrete-time multi-variate linear system and provide algorithms to decide on suitable preprocessing at each node, and to select a sensor subset when computational constraints make using all sensors suboptimal. Numerical simulations show that selecting the sensors is crucial. Moreover, we show that if the nodes apply the preprocessing policy suggested by our algorithms, they can substantially improve the network estimation performance.
    Comment: 15 pages, 16 figures. Accepted journal version.
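
For the sensor-selection part, a common and simple baseline is greedy selection on an information-theoretic score: repeatedly add the sensor that most increases the log-determinant of the accumulated information matrix (D-optimality). This is a standard heuristic sketch, not necessarily the algorithm the paper proposes; the names and the toy sensors are assumptions.

```python
import numpy as np

def greedy_sensor_selection(H_rows, noise_vars, k, prior_info=None):
    """Greedily pick k sensor rows, maximizing log det of the information
    matrix sum_i h_i h_i^T / sigma_i^2 (D-optimality heuristic)."""
    n = H_rows[0].shape[0]
    info = 1e-6 * np.eye(n) if prior_info is None else prior_info.copy()
    chosen, remaining = [], list(range(len(H_rows)))
    for _ in range(k):
        best, best_score = None, -np.inf
        for i in remaining:
            h = H_rows[i].reshape(-1, 1)
            score = np.linalg.slogdet(info + (h @ h.T) / noise_vars[i])[1]
            if score > best_score:
                best, best_score = i, score
        chosen.append(best)
        h = H_rows[best].reshape(-1, 1)
        info += (h @ h.T) / noise_vars[best]
        remaining.remove(best)
    return chosen

# Three candidate sensors for a 2-D state: two accurate ones covering
# different axes, and a noisy duplicate of the first.
H_rows = [np.array([1.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
picked = greedy_sensor_selection(H_rows, noise_vars=[1.0, 100.0, 1.0], k=2)
```

The greedy pick covers both axes with the accurate sensors rather than duplicating one axis, consistent with the abstract's finding that which sensors are selected matters, not just how many.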