2,651 research outputs found

    On the calculation of time alignment errors in data management platforms for distribution grid data

    The operation and planning of distribution grids require the joint processing of measurements from different grid locations. Since measurement devices in low- and medium-voltage grids lack precise clock synchronization, data management platforms of distribution system operators must be able to account for the impact of nonideal clocks on measurement data. This paper formally introduces a metric, termed the Additive Alignment Error, that captures the impact of misaligned averaging intervals of electrical measurements. A trace-driven computation of this metric would be computationally costly for measurement devices, so an online estimation procedure in the data collection platform is needed. To avoid transmitting high-resolution measurement data, this paper proposes and assesses an extension of a Markov-modulated process for modeling electrical traces, from which a closed-form matrix-analytic formula for the Additive Alignment Error is derived. A trace-driven assessment confirms the accuracy of the model-based approach. In addition, the paper describes practical settings in which the model can be used in data management platforms, with significant reductions in the computational demands on measurement devices.
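    As an illustration of the trace-driven approach described above, the following sketch computes a simple alignment error between interval averages of a measurement trace. The metric shown (mean absolute difference between aligned and clock-shifted interval averages) is an assumption for illustration; the paper's formal Additive Alignment Error definition may differ.

```python
# Illustrative trace-driven alignment-error computation. The exact
# "Additive Alignment Error" is defined in the paper; here it is assumed
# to be the mean absolute difference between interval averages computed
# with an aligned clock and with a clock offset by `shift` samples.

def interval_averages(trace, width, offset=0):
    """Average the trace over consecutive intervals of `width` samples,
    starting at sample index `offset`."""
    avgs = []
    for start in range(offset, len(trace) - width + 1, width):
        window = trace[start:start + width]
        avgs.append(sum(window) / width)
    return avgs

def additive_alignment_error(trace, width, shift):
    """Mean absolute difference between aligned and shifted interval
    averages (illustrative stand-in for the paper's metric)."""
    aligned = interval_averages(trace, width, 0)
    shifted = interval_averages(trace, width, shift)
    n = min(len(aligned), len(shifted))
    return sum(abs(a - s) for a, s in zip(aligned, shifted)) / n

# Example: a synthetic step-like load trace (values in watts)
trace = [100.0] * 30 + [250.0] * 30 + [80.0] * 30
err = additive_alignment_error(trace, width=15, shift=3)
```

    With a zero clock shift the error vanishes; any shift that moves averaging windows across load steps yields a positive error, which is what an online estimator would have to predict without seeing the high-resolution trace.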

    Approximation of the time alignment error for measurements in electricity grids


    Impact of time interval alignment on data quality in electricity grids


    Activity recognition from videos with parallel hypergraph matching on GPUs

    In this paper, we propose a method for activity recognition from videos based on sparse local features and hypergraph matching. We exploit special properties of the temporal domain of the data to derive a sequential and fast graph matching algorithm for GPUs. Traditionally, graphs and hypergraphs are frequently used to recognize complex and often non-rigid patterns in computer vision, either through graph matching or point-set matching with graphs. Most formulations resort to the minimization of a difficult discrete energy function that mixes geometric or structural terms with data-attached terms involving appearance features. Traditional methods solve this minimization problem approximately, for instance with spectral techniques. In this work, instead of solving the problem approximately, the exact solution for the optimal assignment is calculated in parallel on GPUs. The graphical structure is simplified and regularized, which allows us to derive an efficient recursive minimization algorithm. The algorithm distributes subproblems over the calculation units of a GPU, which solves them in parallel, allowing the system to run faster than real time on medium-end GPUs.
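    The temporal structure exploited above can be illustrated with a Viterbi-style recursion: because model features are ordered in time, the optimal assignment decomposes into per-step subproblems, and the inner loop over candidate scene points is the part that would run in parallel on a GPU. The energy terms, function names, and chain-structured simplification below are illustrative assumptions, not the paper's actual formulation.

```python
# Sketch of a sequential exact-matching recursion over a chain-structured
# (simplified) graph. `unary` scores appearance, `pairwise` scores the
# temporal/geometric relation between consecutive matches; both are
# illustrative placeholders for the paper's energy terms.

def match_sequences(model, scene, unary, pairwise):
    """Assign each (time-ordered) model feature to a scene feature,
    minimizing the total unary + pairwise energy exactly."""
    n, m = len(model), len(scene)
    # cost[j]: best energy of a partial assignment whose latest model
    # feature is matched to scene feature j
    cost = [unary(model[0], s) for s in scene]
    back = []
    for i in range(1, n):
        new_cost, choices = [], []
        for j, s in enumerate(scene):  # on a GPU, this loop runs in parallel
            best_k = min(range(m),
                         key=lambda k: cost[k] + pairwise(scene[k], s))
            choices.append(best_k)
            new_cost.append(cost[best_k] + pairwise(scene[best_k], s)
                            + unary(model[i], s))
        cost = new_cost
        back.append(choices)
    # backtrack the globally optimal assignment
    j = min(range(m), key=cost.__getitem__)
    path = [j]
    for choices in reversed(back):
        j = choices[j]
        path.append(j)
    return path[::-1], min(cost)

path, energy = match_sequences(
    [1, 2, 3], [1.1, 2.0, 2.9],
    unary=lambda mp, sp: abs(mp - sp),            # appearance distance
    pairwise=lambda a, b: 0.0 if b > a else 1.0,  # favor temporal order
)
```

    The recursion is exact for the simplified chain structure; the speedup comes from evaluating all candidates for one step concurrently rather than from approximation.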

    Survivability model for security and dependability analysis of a vulnerable critical system

    This paper aims to analyze the transient security and dependability of a vulnerable critical system, under vulnerability-related attack and two reactive defense strategies, from a severe vulnerability announcement until the vulnerability is fully removed from the system. By severe, we mean that the vulnerability-based malware could cause significant damage to the infected system in terms of security and dependability while infecting more and more new vulnerable computer systems. We propose a Markov chain-based survivability model for capturing the vulnerable critical system's behavior during the vulnerability elimination process. A high-level formalism based on Stochastic Reward Nets is applied to automatically generate and solve the survivability model. Survivability metrics are defined to quantify system attributes. The proposed model and metrics not only enable us to quantitatively assess system survivability in terms of security risk and dependability, but also provide insight into the system investment decision. Numerical experiments are conducted to study the impact of key parameters on system security, dependability and profit.
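    The transient analysis underlying such a survivability model can be sketched with a small continuous-time Markov chain solved by uniformization. The states and rates below (vulnerable, compromised, patched) are invented for illustration and are not taken from the paper.

```python
# Transient (time-dependent) state probabilities of a small CTMC via
# uniformization: p(t) = sum_k Poisson(k; lam*t) * p0 * P^k, where
# P = I + Q/lam. States (illustrative): 0 = vulnerable, 1 = compromised,
# 2 = patched (absorbing).
import math

def transient_probs(Q, p0, t, eps=1e-10):
    """Transient distribution p(t) = p0 * exp(Q t) by uniformization."""
    n = len(Q)
    lam = max(-Q[i][i] for i in range(n)) * 1.05  # uniformization rate
    # DTMC kernel P = I + Q / lam (entries nonnegative since lam is large)
    P = [[(1.0 if i == j else 0.0) + Q[i][j] / lam for j in range(n)]
         for i in range(n)]
    p = list(p0)               # running p0 * P^k
    out = [0.0] * n
    k, poisson, remaining = 0, math.exp(-lam * t), 1.0
    while remaining > eps:     # truncate once the Poisson tail is negligible
        for i in range(n):
            out[i] += poisson * p[i]
        remaining -= poisson
        k += 1
        poisson *= lam * t / k
        p = [sum(p[i] * P[i][j] for i in range(n)) for j in range(n)]
    return out

# Illustrative rates per hour: attack 0.5, proactive patch 0.2,
# recovery/patch after compromise 1.0
Q = [[-0.7, 0.5, 0.2],
     [0.0, -1.0, 1.0],
     [0.0,  0.0, 0.0]]
probs = transient_probs(Q, [1.0, 0.0, 0.0], t=4.0)
```

    Survivability metrics of the kind the paper defines (e.g. probability of having eliminated the vulnerability by time t) are then read off the transient distribution, here `probs[2]`.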

    A Markov-Based Update Policy for Constantly Changing Database Systems

    In order to maximize the value of an organization's data assets, it is important to keep data in its databases up-to-date. In the era of big data, however, constantly changing data sources make it a challenging task to assure data timeliness in enterprise systems. For instance, due to the high frequency of purchase transactions, purchase data stored in an enterprise resource planning system can easily become outdated, affecting the accuracy of inventory data and the quality of inventory replenishment decisions. Despite the importance of data timeliness, updating a database as soon as new data arrives is typically not optimal because of the high update cost. Therefore, a critical problem in this context is to determine the optimal update policy for database systems. In this study, we develop a Markov decision process model, solved via dynamic programming, to derive the optimal update policy that minimizes the sum of data staleness cost and update cost. Based on real-world enterprise data, we conduct experiments to evaluate the performance of the proposed update policy in relation to benchmark policies analyzed in the prior literature. The experimental results show that the proposed update policy outperforms fixed-interval update policies and can lead to significant cost savings.
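    The structure of such a Markov decision process can be sketched with plain value iteration. In this sketch the state counts pending (unapplied) source changes; the actions are to wait (incurring a staleness cost per pending change) or to update (incurring a fixed update cost and resetting the state). All costs, probabilities, and the discount factor are illustrative assumptions, not values taken from the study.

```python
# Value iteration for an illustrative database-update MDP: trade off a
# per-period staleness cost against a fixed update cost.

def optimal_update_policy(max_pending=20, p=0.6, stale_cost=1.0,
                          update_cost=5.0, discount=0.95, iters=1000):
    """Return (policy, value) indexed by the number of pending changes.
    Each period a new source change arrives with probability p."""
    V = [0.0] * (max_pending + 1)
    policy = ["wait"] * (max_pending + 1)
    for _ in range(iters):
        newV = [0.0] * (max_pending + 1)
        for s in range(max_pending + 1):
            nxt = min(s + 1, max_pending)  # truncate the state space
            # wait: pay staleness on every pending change; maybe one more arrives
            wait = stale_cost * s + discount * (p * V[nxt] + (1 - p) * V[s])
            # update: pay the fixed cost; the database becomes fresh (state 0)
            update = update_cost + discount * (p * V[1] + (1 - p) * V[0])
            if update < wait:
                newV[s], policy[s] = update, "update"
            else:
                newV[s], policy[s] = wait, "wait"
        V = newV
    return policy, V

policy, V = optimal_update_policy()
```

    For this cost structure the optimal policy is a threshold rule: wait while few changes are pending, update once the pending count crosses a level, which is the qualitative behavior that makes such policies cheaper than fixed-interval updating.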