447 research outputs found
METrICS: a Measurement Environment For Multi-Core Time Critical Systems
With the upcoming shift from single-core to multi-core COTS processors for safety-critical products such as avionics, railway, or space computer subsystems, the safety-critical industry faces a trade-off between performance and predictability. In multi-core processors, concurrent accesses to shared hardware resources generate inter-task or inter-application timing interference, breaking the timing-isolation principles required by the standards governing such critical software. Several solutions have been proposed in the literature to control or regulate this timing interference, but most of them require some level of profiling, monitoring, or dimensioning. Because time-critical software runs on top of Real-Time Operating Systems (RTOS), classical profiling techniques relying on interrupts, multi-threading, or OS modules are either unavailable or prohibited for predictability, safety, or security reasons. In this paper we present METrICS, a measurement environment for multi-core time-critical systems running on top of the industry-standard PikeOS RTOS. Our framework provides accurate real-time runtime and resource-usage measurement with negligible impact on timing behaviour, allowing us to fully observe and characterize timing interference. Beyond characterizing timing interference, we evaluated METrICS in terms of accuracy of the timing and resource-usage measurements, intrusiveness both in timing and in impact on legacy code, and adherence to the hardware. We also present a portfolio of the kinds of measurements METrICS provides.
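The abstract emphasizes accurate measurement with negligible timing impact. As a minimal sketch of that general idea (not the METrICS implementation; the names here are hypothetical and an x86 target is assumed), a probe can read the hardware timestamp counter directly and write into a preallocated buffer, so no OS service, allocation, or I/O occurs on the measured path:

    #include <cstdint>
    #include <x86intrin.h>   // __rdtsc(); assumes an x86 target

    // Preallocated sample buffer: no allocation, locking, or I/O on the
    // measured path keeps the probe's own timing impact minimal.
    constexpr int kMaxSamples = 4096;
    static uint64_t g_samples[kMaxSamples];
    static int g_count = 0;

    inline uint64_t probe_now() { return __rdtsc(); }

    void measured_region() {
        uint64_t t0 = probe_now();
        // ... code under measurement ...
        uint64_t t1 = probe_now();
        if (g_count < kMaxSamples)
            g_samples[g_count++] = t1 - t0;   // elapsed cycles
    }

Samples would be drained and analysed offline, after the time-critical phase, so observation and execution stay decoupled.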
Helios: A Scalable 3D Plant and Environmental Biophysical Modeling Framework.
This article presents an overview of Helios, a new three-dimensional (3D) plant and environmental modeling framework. Helios is a model coupling framework designed to provide maximum flexibility in integrating and running arbitrary 3D environmental system models. Users interact with Helios through a well-documented open-source C++ API. Version 1.0 comes with model plug-ins for radiation transport, the surface energy balance, stomatal conductance, photosynthesis, solar position, and procedural tree generation. Additional plug-ins are also available for visualizing model geometry and data and for processing and integrating LiDAR scanning data. Many of the plug-ins perform calculations on the graphics processing unit, which allows for efficient simulation of very large domains with high detail. An example modeling study is presented in which leaf-level heterogeneity in water usage and photosynthesis of an orchard is examined to understand how this leaf-scale variability contributes to whole-tree and whole-canopy fluxes.
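To illustrate the model-coupling pattern the overview describes (a generic sketch, not the actual Helios C++ API; the class and function names are hypothetical), plug-ins can share a common context and be run in sequence for each simulated step:

    #include <memory>
    #include <vector>

    // Shared state (3D geometry, primitive data) that every plug-in
    // reads and writes.
    struct Context { /* geometry, primitive data, timestep state... */ };

    // Each model (radiation transport, energy balance, photosynthesis,
    // ...) is a plug-in advancing its part of the coupled system.
    class ModelPlugin {
    public:
        virtual ~ModelPlugin() = default;
        virtual void run(Context& ctx) = 0;
    };

    class RadiationModel : public ModelPlugin {
    public:
        void run(Context& ctx) override { /* radiation transport pass */ }
    };

    int main() {
        Context ctx;
        std::vector<std::unique_ptr<ModelPlugin>> plugins;
        plugins.push_back(std::make_unique<RadiationModel>());
        for (auto& p : plugins) p->run(ctx);   // one coupled step
    }

This keeps individual models decoupled from one another while letting users compose arbitrary subsets, which is the flexibility the framework advertises.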
WallMon: Interactive distributed monitoring of process-level resource usage on display and compute clusters
To achieve low overhead, traditional cluster monitoring systems sample data at low frequencies and with coarse granularity. However, interactive monitoring requires frequent (up to 60 Hz) sampling of fine-grained data and visualization tools that can explore and display the data in near real-time. This makes traditional cluster monitoring systems unsuited for interactive monitoring of distributed cluster applications: they fail to capture short-duration events, making it difficult to understand the performance relationship between processes on the same or different nodes. To address this issue, WallMon was developed, a tool for interactive visual exploration of performance behaviors in distributed systems. For data gathering, WallMon is centered around an abstraction of collectors and handlers; collectors gather data of interest, such as CPU and memory usage, and forward it to handlers in a push-based fashion, while handlers act upon the data. WallMon captures and visualizes data for every process on every node, as well as overall node statistics. Data is visualized using a technique inspired by the concept of information flocking. WallMon's design is based on the client-server model, and it is extensible through a module system that encapsulates functionality specific to monitoring (collectors) and visualization (handlers). A set of experiments was carried out on a cluster of 29 nodes with 180 processes per node. Performance results show 7% (of 100%) CPU usage at a 64 Hz sampling rate when performing process-level monitoring with WallMon. Using WallMon's interactive visualization, we have observed interesting patterns in different parallel and distributed systems, such as an unexpected ratio of user- to kernel-level execution among processes in a particular distributed system.
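The collector/handler abstraction can be sketched in a few lines of C++ (a hypothetical illustration, not WallMon's actual code): collectors push samples to whatever handlers have subscribed:

    #include <functional>
    #include <iostream>
    #include <string>
    #include <vector>

    // One per-process resource-usage sample pushed by a collector.
    struct Sample {
        std::string process;
        double cpu_percent;
        long rss_kb;
    };

    using Handler = std::function<void(const Sample&)>;

    class Collector {
    public:
        void subscribe(Handler h) { handlers_.push_back(std::move(h)); }
        void collect() {
            Sample s{"worker", 12.5, 34816};   // placeholder reading
            for (auto& h : handlers_) h(s);    // push-based delivery
        }
    private:
        std::vector<Handler> handlers_;
    };

    int main() {
        Collector cpu;
        cpu.subscribe([](const Sample& s) {
            std::cout << s.process << ": " << s.cpu_percent << "% CPU\n";
        });
        cpu.collect();   // WallMon runs collectors at up to 64 Hz
    }

Push-based delivery avoids per-handler polling, which matters at the high sampling rates interactive monitoring requires.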
µProfiler: A Concurrent Profiler for Concurrent C++ (µC++)
A concurrent program, unlike a sequential program, has multiple threads of execution, resulting in numerous advantages (e.g., faster execution), but also in complex and unpredictable interaction. As a consequence, a concurrent program can easily underutilize available parallelism, and performance can be extremely difficult for users to predict and analyze on their own.
A profiler is a tool that can help a user identify as well as locate potential performance problems in a program. Profiling is accomplished through monitoring of the program execution, and analyzing and visualizing the collected performance data. A profiler must display useful information in a way that allows a user to effectively and efficiently understand and analyze a program's behaviour.
This thesis describes advancements in the design and implementation of µProfiler, a profiler for sequential and concurrent programs written in µC++. µC++ is a concurrent dialect of the C++ programming language, which executes in uni-processor and multi-processor shared-memory environments. Major advancements to three µProfiler metrics are presented: the Execution State, the Exact Routine Call-Graph, and the Statistical Routine Call-Graph. The Execution State metric charts each state for every thread over the entire execution of the program. With high overhead and perfect accuracy, the Exact Routine Call-Graph metric provides an exact call-graph profile of the program's dynamic execution, describing the control flow among routines. With low overhead and less accuracy, the Statistical Routine Call-Graph metric provides a statistical call-graph profile of the program's dynamic execution. For each metric, advancements were made throughout the profiling process (i.e., monitoring, analysis, and visualization), addressing goals such as scalability, functionality, usability, and performance. The metrics incur reasonable memory overhead and, based on comparison to related work, are state-of-the-art in functionality while providing similar run-time performance.
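The trade-off between the exact and statistical call-graph metrics comes down to instrumentation versus sampling. As a sketch of the statistical side (plain C++/POSIX, not µProfiler's implementation), a profiling timer delivers a periodic signal, and each tick is a point at which a real profiler would record the current call stack:

    #include <csignal>
    #include <cstdio>
    #include <sys/time.h>

    // On each SIGPROF tick a real profiler would walk and record the
    // current call stack; here we only count ticks to show the mechanism.
    static volatile sig_atomic_t g_ticks = 0;
    static void on_prof(int) { ++g_ticks; }

    int main() {
        std::signal(SIGPROF, on_prof);
        itimerval iv{};
        iv.it_interval.tv_usec = 10000;   // sample every 10 ms of CPU time
        iv.it_value = iv.it_interval;
        setitimer(ITIMER_PROF, &iv, nullptr);

        volatile double x = 0;
        for (long i = 0; i < 200000000; ++i) x += i;   // work to profile

        std::printf("samples taken: %d\n", (int)g_ticks);
    }

Sampling keeps overhead low but can miss short-lived calls; instrumenting every routine entry and exit, as the exact metric does, captures everything at a much higher cost.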
Interactive visualization of big data leveraging databases for scalable computation
Thesis (S.M.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2013. By Leilani Marie Battle. Modern database management systems (DBMS) have been designed to efficiently store, manage, and perform computations on massive amounts of data. In contrast, many existing visualization systems do not scale seamlessly from small data sets to enormous ones. We have designed a three-tiered visualization system called ScalaR to address this issue. ScalaR dynamically performs resolution reduction when the expected result of a DBMS query is too large to be effectively rendered on the available screen real estate. Instead of running the original query, ScalaR inserts aggregation, sampling, or filtering operations to reduce the size of the result. This thesis presents the design and implementation of ScalaR and shows results for two example applications, visualizing earthquake records and satellite imagery data, with SciDB as the back-end DBMS.
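The core mechanism, resolution reduction, can be sketched as a query rewrite (an illustrative fragment, not ScalaR's code; the SQL and names are hypothetical): when the estimated result exceeds what the screen can show, wrap the query in a sampling step instead of running it unmodified:

    #include <iostream>
    #include <string>

    // Rewrite the query with uniform row sampling when its estimated
    // result would overwhelm the available pixels.
    std::string reduce_resolution(const std::string& query,
                                  long estimated_rows, long pixel_budget) {
        if (estimated_rows <= pixel_budget)
            return query;   // small enough to render directly
        double fraction = double(pixel_budget) / estimated_rows;
        return "SELECT * FROM (" + query + ") AS q WHERE random() < " +
               std::to_string(fraction);
    }

    int main() {
        std::cout << reduce_resolution("SELECT * FROM quakes",
                                       10000000, 2000000) << "\n";
    }

Aggregation (e.g., binning points into a grid at screen resolution) and filtering are the alternative reductions the thesis mentions; which one applies depends on the visualization.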
ASCI visualization tool evaluation, Version 2.0
The charter of the ASCI Visualization Common Tools subgroup was to investigate and evaluate 3D scientific visualization tools. As part of that effort, a Tri-Lab evaluation was launched in February 1996. The first step was to agree on a thoroughly documented list of 32 features against which all tool candidates would be evaluated. These evaluation criteria were gleaned from a user survey and from informed extrapolation into the future, particularly concerning the 3D nature and extremely large size of ASCI data sets. The second step was to winnow a field of 41 candidate tools down to 11. The selection principle was to be as inclusive as practical, retaining every tool that seemed to hold any promise of fulfilling all of ASCI's visualization needs. These 11 tools were then closely investigated by volunteer evaluators distributed across LANL, LLNL, and SNL. This report contains the results of those evaluations, as well as a discussion of the evaluation philosophy and criteria.