Analysis of Distributed Systems Dynamics with Erlang Performance Lab
Modern, highly concurrent, large-scale systems require new methods for design, testing, and monitoring. Their dynamics and scale call for real-time tools that provide a holistic view of the whole system and the ability to show a more detailed view when needed. Such tools can help identify the causes of unwanted states, which is hardly possible with static analysis or a metrics-based approach. This paper presents a new tool for the analysis of distributed systems written in Erlang. It provides real-time monitoring of system dynamics at different levels of abstraction. The tool has been used to analyze a large-scale urban traffic simulation system running on a cluster of 20 computing nodes.
Behavior and event detection for annotation and surveillance
Visual surveillance and activity analysis is an active research field of computer vision, and several different algorithms have been produced for this purpose. To obtain more robust systems, it is desirable to integrate the different algorithms. To achieve this goal, the paper presents results in automatic event detection in surveillance videos, and a distributed application framework for supporting these methods. Results in motion analysis for static and moving cameras, automatic fight detection, shadow segmentation, discovery of unusual motion patterns, and indexing and retrieval are presented. These applications run in real time and are suitable for real-life use.
Distributed Sensing of a Cantilever Beam and Plate Using a Fiber Optic Sensing System
As the capabilities of Fiber Optic Sensing Systems continue to improve, their application to real-time distributed sensing for structural analysis and control of flexible systems is increasingly feasible. This paper reports experimental results on the use of a Fiber Optic Sensing System for static and dynamic shape estimation of a cantilever beam and plate. Demonstrating the use of this sensor technology in benchtop experiments is the first step in effectively incorporating fiber optic sensors in the Integrated Adaptive Wing Technology Maturation aeroelastic half-span wind tunnel model for real-time shape sensing and feedback for drag optimization, maneuver load alleviation, gust load alleviation, and flutter suppression control laws. The effectiveness of the sensing system is analyzed, and the application of these results to aeroelasticity experimentation is discussed.
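The static shape-estimation step described in the abstract can be illustrated with a simple strain-to-deflection computation. The sketch below is a hypothetical baseline, not the paper's method: it assumes the fiber reports surface strain at stations along a cantilever, converts strain to curvature via the fiber's offset `c` from the neutral axis, and double-integrates with clamped-root boundary conditions.

```python
def estimate_deflection(x, strain, c):
    """Estimate cantilever deflection from distributed surface strain.

    x      -- station positions along the beam, from the clamped root
    strain -- measured surface strain at each station
    c      -- distance of the sensing fiber from the neutral axis

    Uses curvature kappa(x) = strain(x) / c and trapezoidal double
    integration with clamped-end conditions w(0) = w'(0) = 0.
    """
    kappa = [e / c for e in strain]
    slope = [0.0]
    for i in range(1, len(x)):
        dx = x[i] - x[i - 1]
        slope.append(slope[-1] + 0.5 * (kappa[i] + kappa[i - 1]) * dx)
    w = [0.0]
    for i in range(1, len(x)):
        dx = x[i] - x[i - 1]
        w.append(w[-1] + 0.5 * (slope[i] + slope[i - 1]) * dx)
    return w
```

For a tip-loaded cantilever, strain varies linearly from root to tip, and this double integration recovers the classic cubic deflection shape.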
Extending Traditional Static Analysis Techniques to Support Development, Testing and Maintenance of Component-Based Solutions
Traditional static code analysis encompasses a mature set of techniques for helping understand and optimize programs, such as dead code elimination, program slicing, and partial evaluation (code specialization). It is well understood that compared to other program analysis techniques (e.g., dynamic analysis), static analysis techniques do a reasonable job for the cost associated with implementing them. Industry and government are moving away from more ‘traditional’ development approaches towards component-based approaches as ‘the norm.’ Component-based applications most often comprise a collection of distributed object-oriented components such as forms, code snippets, reports, modules, databases, objects, containers, and the like. These components are glued together by code typically written in a visual language. Some industrial experience shows that component-based development and the subsequent use of visual development environments, while reducing an application's total development time, actually increase certain maintenance problems. This provides a motivation for using automated analysis techniques on such systems. The results of this research show that traditional static analysis techniques may not be sufficient for analyzing component-based systems. We examine closely the characteristics of a component-based system and document many of the issues that we feel make the development, analysis, testing and maintenance of such systems more difficult. By analyzing additional summary information for the components as well as any available source code for an application, we show ways in which traditional static analysis techniques may be augmented, thereby increasing the accuracy of static analysis results and ultimately making the maintenance of component-based systems a manageable task.
We develop a technique to use semantic information about component properties obtained from type library and interface definition language files, and demonstrate this technique by extending a traditional unreachable code algorithm. To support more complex analysis, we then develop a technique for component developers to provide summary information about a component. This information can be integrated with several traditional static analysis techniques to analyze component-based systems more precisely. We then demonstrate the effectiveness of these techniques on several real Department of Defense (DoD) COTS component-based systems.
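The idea of augmenting an unreachable-code analysis with component summaries can be sketched as follows (all names are illustrative assumptions, not the paper's algorithm): procedures with available source expose their call edges directly, while blackbox components contribute a developer-supplied summary of the callees they may invoke, making reachability more precise than conservatively assuming a component can call anything.

```python
def reachable(call_graph, summaries, entry):
    """Compute the set of procedures reachable from `entry`.

    call_graph -- proc -> set of callees, for procedures with source code
    summaries  -- component -> set of callees declared in its summary
    """
    seen, work = set(), [entry]
    while work:
        p = work.pop()
        if p in seen:
            continue
        seen.add(p)
        if p in call_graph:                  # source available: exact edges
            work.extend(call_graph[p])
        else:                                # blackbox: rely on its summary
            work.extend(summaries.get(p, set()))
    return seen

def unreachable(all_procs, call_graph, summaries, entry):
    """Dead-code candidates: procedures never reached from the entry."""
    return set(all_procs) - reachable(call_graph, summaries, entry)
```

Without the summary for the blackbox component, a sound analysis would have to treat any procedure the component could reference as reachable; the summary is what restores precision.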
A Generic Framework for Blackbox Components in WCET Computation
Validation of embedded hard real-time systems requires the computation of the Worst Case Execution Time (WCET). Although these systems make more and more use of Components Off The Shelf (COTS), current WCET computation methods are usually applied to whole programs: these analysis methods require access to the whole system code, which is incompatible with the use of COTS. In this paper, after discussing the specific cases of loop bound estimation and instruction cache analysis, we show in a generic way how the static analyses involved in WCET computation can be pre-computed on COTS in order to obtain component partial results. These partial results can be distributed with the COTS, in order to compute the WCET in the context of a full application. We also describe the information items to include in the partial results, and we propose an XML exchange format to represent these data. Additionally, we show that the partial analysis enables us to reduce the analysis time while introducing very little pessimism.
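The composition step can be sketched as follows (a deliberately simplified, hypothetical model, not the paper's actual partial-result format or XML schema): each COTS component ships a partial WCET result consisting of a fixed cost plus per-iteration costs for loops whose bounds depend on the calling context, and the application-level analysis fills in those bounds.

```python
def component_wcet(partial, context_bounds):
    """WCET of one component, given application-supplied loop bounds.

    partial        -- {'base': fixed cycles,
                       'loops': {loop_id: cycles per iteration}}
    context_bounds -- {loop_id: iteration bound known at the call site}
    """
    wcet = partial["base"]
    for loop_id, per_iter in partial["loops"].items():
        wcet += per_iter * context_bounds[loop_id]
    return wcet

def application_wcet(call_sequence, partials, context_bounds):
    """WCET of a straight-line sequence of component calls."""
    return sum(component_wcet(partials[name], context_bounds)
               for name in call_sequence)
```

The point of the scheme is that `partials` can be computed once by the component vendor and shipped without source code; only `context_bounds` needs the application's knowledge.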
Schedulability analysis for systems with data and control dependencies
In this paper we present an approach to schedulability analysis for hard real-time systems with control and data dependencies. We consider distributed architectures consisting of multiple programmable processors, and the scheduling policy is based on a static priority preemptive strategy. Our model of the system captures both data and control dependencies, and the schedulability approach is able to reduce the pessimism of the analysis by using the knowledge about control and data dependencies. Extensive experiments as well as a real-life example demonstrate the efficiency of our approach.
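The baseline that such dependency-aware approaches refine is the classical fixed-point response-time analysis for static priority preemptive scheduling, which treats tasks as independent and is therefore pessimistic. A minimal sketch, assuming periodic tasks with deadlines equal to periods:

```python
import math

def response_time(tasks, i):
    """Worst-case response time of task i, or None if it misses its deadline.

    tasks -- list of (C, T) = (worst-case execution time, period),
             ordered highest priority first; deadline = period.
    Iterates R = C_i + sum over higher-priority j of ceil(R / T_j) * C_j
    to a fixed point.
    """
    C, T = tasks[i]
    R = C
    while True:
        interference = sum(math.ceil(R / Tj) * Cj for Cj, Tj in tasks[:i])
        R_next = C + interference
        if R_next == R:
            return R
        if R_next > T:          # response time exceeds deadline (= period)
            return None
        R = R_next

def schedulable(tasks):
    """True if every task meets its deadline under the analysis above."""
    return all(response_time(tasks, i) is not None for i in range(len(tasks)))
```

A dependency-aware analysis like the paper's reduces the `interference` term by ruling out preemption patterns that the control and data dependencies make impossible.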
Profiling Distributed Virtual Environments by Tracing Causality
Real-time interactive systems such as virtual environments have high performance requirements, and profiling is a key part of the optimisation process to meet them. Traditional techniques based on metadata and static analysis have difficulty following causality in asynchronous systems. In this paper we explore a new technique for such systems. Timestamped samples of the system state are recorded at instrumentation points at runtime. These are assembled into a graph, and edges between dependent samples are recovered. This approach minimises the invasiveness of the instrumentation while retaining high accuracy. We describe how our instrumentation can be implemented natively in common environments, how its output can be processed into a graph describing causality, and how heterogeneous data sources can be incorporated to maximise the scope of the profiling. Across three case studies, we demonstrate the efficacy of this approach and how it supports a variety of metrics for comprehensively benchmarking distributed virtual environments.
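The sample-assembly step can be sketched as follows (field names are illustrative assumptions, not the paper's instrumentation format): each instrumentation point emits a timestamped sample carrying an identifier such as a frame or message id, and samples sharing an identifier are linked in time order to recover causal edges.

```python
from collections import defaultdict

def build_causality_graph(samples):
    """Recover causal edges from timestamped instrumentation samples.

    samples -- list of (timestamp, site, token) tuples, where `token`
               identifies the unit of work (e.g. a frame or message id)
    Returns a list of (earlier_sample, later_sample) edges: consecutive
    samples that share a token, ordered by timestamp.
    """
    by_token = defaultdict(list)
    for s in samples:
        by_token[s[2]].append(s)
    edges = []
    for group in by_token.values():
        group.sort(key=lambda s: s[0])       # time-order within one token
        edges.extend(zip(group, group[1:]))  # link consecutive samples
    return edges
```

Because the linking happens offline, the runtime cost at each instrumentation point is just recording a tuple, which is what keeps the instrumentation minimally invasive.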
Orientation measurement based on magnetic inductance by the extended distributed multi-pole model
This paper presents a novel method to calculate magnetic inductance with a fast-computing magnetic field model referred to as the extended distributed multi-pole (eDMP) model. The concept of mutual inductance has been widely applied in position/orientation tracking systems, yet it remains challenging due to the high demands on robust modeling and efficient computation in real-time applications. Numerical methods have been utilized in the design and analysis of magnetic fields, but they often require heavy computation, and their accuracy relies on geometric modeling and meshing, which limits their usage. Analytical methods, on the other hand, provide simple and fast-computing solutions but have difficulty handling realistic, complex geometries and boundary conditions. In this paper, the extended distributed multi-pole (eDMP) model is developed to characterize a time-varying magnetic field, based on an existing DMP model for static magnetic fields. The method is further exploited to compute the mutual inductance between coils at arbitrary locations and orientations. Simulation and experimental results for various coil configurations are presented. Comparison with previously published data shows not only good accuracy but also computational efficiency.
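For context, the quantity the eDMP model computes efficiently can also be obtained by brute force from the Neumann double line integral, M = (μ0/4π) ∮∮ (dl1 · dl2)/r. The sketch below discretizes that integral for two coaxial circular loops; it is a slow numerical baseline useful for sanity-checking results, not the eDMP method itself.

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability (H/m)

def mutual_inductance_coaxial(a, b, d, n=200):
    """Mutual inductance of two coaxial circular loops via the Neumann
    formula, discretized with n midpoint segments per loop.

    a, b -- loop radii (m); d -- axial separation (m), d > 0
    """
    total = 0.0
    dth = 2 * math.pi / n
    for i in range(n):
        t1 = (i + 0.5) * dth
        p1 = (a * math.cos(t1), a * math.sin(t1), 0.0)
        dl1 = (-a * math.sin(t1) * dth, a * math.cos(t1) * dth)
        for j in range(n):
            t2 = (j + 0.5) * dth
            p2 = (b * math.cos(t2), b * math.sin(t2), d)
            dl2 = (-b * math.sin(t2) * dth, b * math.cos(t2) * dth)
            r = math.dist(p1, p2)
            total += (dl1[0] * dl2[0] + dl1[1] * dl2[1]) / r
    return MU0 / (4 * math.pi) * total
```

The O(n²) double sum is exactly the kind of cost that motivates fast analytical models such as eDMP for real-time tracking; here it serves only as a reference computation.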
Computational Intelligence Applications in Smart Grids: Enabling Methodologies for Proactive and Self Organizing Power Systems
This book considers the emerging technologies and methodologies of the application of computational intelligence to smart grids.
From a conceptual point of view, the smart grid is the convergence of information and operational technologies applied to the electric grid, allowing sustainable options for customers and improved levels of security. Smart grid technologies include advanced sensing systems, two-way high-speed communications, monitoring and enterprise analysis software, and related services used to obtain location-specific and real-time actionable data for the provision of enhanced services for both system operators (e.g. distribution automation, asset management, advanced metering infrastructure) and end-users (e.g. demand side management, demand response).
In this context, a crucial issue is how to support the evolution of existing electrical grids from static hierarchical systems to self-organizing, highly scalable and pervasive networks. Modern trends are oriented toward the employment of computational intelligence techniques for deploying advanced control, protection and monitoring architectures that move away from the older centralized paradigm to systems distributed across the field with an increasing pervasion of intelligent devices. The large-scale deployment of computational intelligence technologies in smart grids could lead to a more efficient distribution of tasks amongst energy resources and, consequently, to a considerable improvement in the flexibility of the electrical grid.