A Prototype Toolkit For Evaluating Indoor Environmental Quality In Commercial Buildings
Measurement of building environmental parameters is often complex, expensive, and difficult to proceduralize in a manner that covers all commercial buildings. Evaluating the indoor environmental quality performance of buildings is therefore not standard practice. This project developed a prototype toolkit that addresses existing barriers to widespread indoor environmental quality performance evaluation. A toolkit with both hardware and software elements was designed for practitioners around the indoor environmental quality requirements of the American Society of Heating, Refrigerating and Air-Conditioning Engineers / Chartered Institution of Building Services Engineers / United States Green Building Council Performance Measurement Protocols. This unique toolkit was built on a wireless mesh network with a web-based data collection, analysis, and reporting application. The toolkit provided fast, robust deployment of sensors, real-time data analysis, Performance Measurement Protocols-based analysis methods, and scorecard and report-generation tools. A web-enabled Geographic Information System-based metadata collection system further reduced field-study deployment time. The toolkit was evaluated through three case studies, which are discussed in this report.
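The scorecard idea above can be sketched in a few lines: sensor readings are compared against comfort thresholds and each measured parameter receives a pass/fail mark. The parameter names, thresholds, and 80% compliance rule below are hypothetical illustrations, not the actual Performance Measurement Protocols criteria.

```python
# Illustrative sketch of a PMP-style indoor environmental quality
# scorecard. THRESHOLDS holds assumed acceptable ranges; the real
# protocols define their own criteria per parameter.
THRESHOLDS = {
    "temperature_c": (20.0, 25.0),
    "co2_ppm": (0.0, 1000.0),
    "illuminance_lux": (300.0, 2000.0),
}

def scorecard(readings):
    """Mark each parameter 'pass' if at least 80% of readings are in range."""
    card = {}
    for name, values in readings.items():
        lo, hi = THRESHOLDS[name]
        in_range = sum(lo <= v <= hi for v in values)
        card[name] = "pass" if in_range / len(values) >= 0.8 else "fail"
    return card

readings = {
    "temperature_c": [21.4, 22.0, 23.1, 26.5, 22.8],
    "co2_ppm": [620, 710, 1250, 1840, 900],
    "illuminance_lux": [450, 500, 480, 520, 510],
}
print(scorecard(readings))
```

A real toolkit would feed such a function from the wireless mesh network's time-series store rather than hand-entered lists.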
Performance Annotation Framework
Developers of large-scale applications have many tools at their disposal to optimize and verify their software. One of these is Caliper, an annotation-based performance measurement tool. Caliper is powerful and versatile but can be cumbersome to apply to complex applications. To address this problem, we have created a framework that automatically prepares an application for performance measurement. This framework provides a layer of abstraction between the user and the source-code annotations and library linking. As a result, the process of measuring the performance of an application can be fully automated, a significant step toward automatic software optimization.
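The core idea of such a framework — inserting annotations so the user never touches them — can be sketched as a source rewriter. The macro name follows Caliper's `CALI_MARK_BEGIN`; everything else (the regex-based rewriting, the one-line C input) is a deliberate simplification, since a real tool would use a compiler front end and also insert the matching end markers and handle library linking.

```python
import re

# Match a simple C function definition: return type, name, parameter
# list, opening brace. Group 2 captures the function name.
FUNC_DEF = re.compile(r"(\w+\s+(\w+)\s*\([^)]*\)\s*\{)")

def annotate(source):
    """Insert a CALI_MARK_BEGIN marker after each function's opening brace."""
    return FUNC_DEF.sub(
        lambda m: m.group(1) + f' CALI_MARK_BEGIN("{m.group(2)}");',
        source,
    )

code = "int add(int a, int b) { return a + b; }"
print(annotate(code))
```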
Software Engineering Laboratory Ada performance study: Results and implications
The SEL is an organization sponsored by NASA/GSFC to investigate the effectiveness of software engineering technologies applied to the development of applications software. The SEL was created in 1977 and has three organizational members: NASA/GSFC, Systems Development Branch; the University of Maryland, Computer Sciences Department; and Computer Sciences Corporation, Systems Development Operation. The goals of the SEL are: (1) to understand the software development process in the GSFC environment; (2) to measure the effects of various methodologies, tools, and models on this process; and (3) to identify and then apply successful development practices. The activities, findings, and recommendations of the SEL are recorded in the Software Engineering Laboratory Series, a continuing series of reports that includes the Ada Performance Study Report. This paper describes the background of Ada in the Flight Dynamics Division (FDD), the objectives and scope of the Ada Performance Study, the measurement approach used, the performance tests performed, the major test results, and the implications for future FDD Ada development efforts.
Extending the Functionality of Score-P through Plugins: Interfaces and Use Cases
Performance measurement and runtime tuning tools are both vital in the HPC software ecosystem and use similar techniques: the analyzed application is interrupted at specific events, and information on the current system state is gathered to be either recorded or used for tuning. One established performance measurement tool is Score-P, which supports numerous HPC platforms and parallel programming paradigms. To extend Score-P with support for different back-ends, to create a common framework for measurement and tuning of HPC applications, and to enable the re-use of common software components such as implemented instrumentation techniques, this paper makes the following contributions: (I) we describe the Score-P metric plugin interface, which enables programmers to augment the event stream with metric data from supplementary data sources that are otherwise not accessible to Score-P; (II) we introduce the flexible Score-P substrate plugin interface, which can be used for custom processing of the event stream according to the specific requirements of measurement, analysis, or runtime tuning tasks; and (III) we provide examples for both interfaces that extend Score-P's functionality for monitoring and tuning purposes.
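The substrate-plugin design can be illustrated conceptually: a measurement core fans every region event out to registered plugins, each of which records or analyzes independently. This is a sketch of the design pattern only; Score-P's actual plugin interfaces are C APIs, and all class and method names here are invented for illustration.

```python
# Conceptual sketch of an event-stream substrate design: the core
# forwards enter/exit events for code regions to every plugin.
class Tracer:
    """A plugin that records the raw event stream."""
    def __init__(self):
        self.events = []
    def on_event(self, kind, region, timestamp):
        self.events.append((kind, region, timestamp))

class RegionCounter:
    """A plugin that counts how often each region is entered."""
    def __init__(self):
        self.counts = {}
    def on_event(self, kind, region, timestamp):
        if kind == "enter":
            self.counts[region] = self.counts.get(region, 0) + 1

class MeasurementCore:
    def __init__(self, plugins):
        self.plugins = plugins
        self.clock = 0          # logical timestamp counter
    def emit(self, kind, region):
        self.clock += 1
        for plugin in self.plugins:   # fan out to all substrates
            plugin.on_event(kind, region, self.clock)

tracer, counter = Tracer(), RegionCounter()
core = MeasurementCore([tracer, counter])
for _ in range(3):
    core.emit("enter", "solve")
    core.emit("exit", "solve")
print(counter.counts)
```

The same fan-out point is where a tuning plugin would hook in, reacting to events rather than just recording them.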
Competency assessment : integrating COCOMO II and people-CMM for estimation improvement
The "human factor" is one of the most relevant and crucial aspects of software development project management. Aiming at performance improvement for software processes in organizations, a new model, People-CMM, has been developed to diagnose people-related processes; it represents a complementary solution to CMM. Existing estimation models in software engineering, on the other hand, integrate aspects related to personnel's technical and general competence, but fail to incorporate competence and performance measurement instruments when it comes to determining a precise value for each of the factors involved in the estimation process. After reviewing already-deployed initiatives and recommendations for competence measurement in industry, as well as the most relevant estimation methods for personnel factors used in software development projects, this article presents a recommendation for integrating each of the "human factor" metrics in COCOMO II with the management tools proposed by People-CMM, which are widely implemented by existing commercial tools.
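To see where the personnel factors enter the estimate, recall the COCOMO II post-architecture effort equation: PM = A · Size^E · Π EM_i, with A = 2.94 and E = 0.91 + 0.01 · Σ SF_j. The sketch below computes it; the scale-factor and effort-multiplier values are illustrative ratings, not calibrated People-CMM measurements.

```python
# Minimal COCOMO II post-architecture effort computation, showing
# where "human factor" cost drivers (ACAP, PCAP, APEX, ...) enter
# as effort multipliers. Constants A and B are the COCOMO II.2000
# calibration; the ratings below are hypothetical.
A, B = 2.94, 0.91

def cocomo_effort(ksloc, scale_factors, effort_multipliers):
    """Effort in person-months: A * Size^E * product of multipliers."""
    e = B + 0.01 * sum(scale_factors)
    pm = A * ksloc ** e
    for em in effort_multipliers.values():
        pm *= em
    return pm

personnel = {            # hypothetical personnel-factor ratings
    "ACAP": 0.85,        # analyst capability rated high
    "PCAP": 0.88,        # programmer capability rated high
    "APEX": 1.00,        # nominal applications experience
}
pm = cocomo_effort(ksloc=50,
                   scale_factors=[3.72, 3.04, 4.24, 3.29, 4.68],
                   effort_multipliers=personnel)
print(round(pm, 1))
```

The article's point is precisely that the multiplier values fed into such a computation should come from measured competence data rather than subjective ratings.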
Automation of current academic performance measurement based on the model of course assessment tools
The article demonstrates the relevance of automating the measurement of students' current academic progress and presents a model of course assessment tools that allocates each assessment component to a particular competence acquired by students while studying the course. This model served as the basis for developing software for current academic performance measurement comprising a common database together with desktop and mobile applications. The collected data help to monitor the academic process and assess the level of competence acquisition.
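The described model — each assessment component allocated to one competence — can be sketched as a mapping plus an aggregation. The component names, competence codes, and the averaging rule below are hypothetical; the article's software would define its own allocation and scoring scheme.

```python
# Sketch of a course-assessment-tools model: components map to
# competences, and a competence-acquisition level is the average
# score over the components allocated to that competence.
COMPONENT_TO_COMPETENCE = {
    "quiz_1": "C1", "lab_1": "C1",
    "quiz_2": "C2", "project": "C2", "exam": "C2",
}

def competence_levels(scores):
    """Average a student's scores per competence (0-100 scale)."""
    totals, counts = {}, {}
    for component, score in scores.items():
        comp = COMPONENT_TO_COMPETENCE[component]
        totals[comp] = totals.get(comp, 0) + score
        counts[comp] = counts.get(comp, 0) + 1
    return {c: totals[c] / counts[c] for c in totals}

scores = {"quiz_1": 80, "lab_1": 90, "quiz_2": 70, "project": 85, "exam": 76}
print(competence_levels(scores))   # {'C1': 85.0, 'C2': 77.0}
```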
ReMo3D – an open-source Python package for 2D and 3D simulation of normal and lateral resistivity logs
ReMo3D, an open-source Python package, is presented that allows the generation of synthetic normal and lateral resistivity logs for 2D and 3D models. The package is built around the finite element mesh generator Gmsh and the high-performance multiphysics finite element software Netgen/NGSolve, and it supports distributed-memory parallel computation. The examples included in the paper show that the developed software can accurately simulate the measurement process and produce detailed synthetic normal and lateral resistivity logs. In addition, basic information about normal and lateral tools, such as tool configurations, measurement principles, nomenclature, and a brief history of their use, is included in the paper.
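The normal tool's measurement principle admits a back-of-the-envelope check: in an infinite homogeneous medium, a point current source produces the potential U = ρI/(4πr) at distance r, so applying the normal tool's geometric factor K = 4π·AM to the measured voltage recovers the true resistivity. Realistic 2D/3D models with a borehole and layered formations require a finite-element solver, which is what ReMo3D provides; the spacing value below is merely an illustrative choice.

```python
from math import pi

def potential(rho, current, r):
    """Potential (V) at distance r (m) from a point current source
    in a homogeneous medium of resistivity rho (ohm-m)."""
    return rho * current / (4 * pi * r)

def apparent_resistivity(am, voltage, current):
    """Normal-tool apparent resistivity: rho_a = 4*pi*AM * U / I."""
    return 4 * pi * am * voltage / current

rho, current, am = 20.0, 1.0, 0.4   # ohm-m, A, m (roughly a 16-inch short normal)
u = potential(rho, current, am)
print(apparent_resistivity(am, u, current))   # recovers 20.0 ohm-m
```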
An LLVM Instrumentation Plug-in for Score-P
Reducing application runtime, scaling parallel applications to higher numbers of processes/threads, and porting applications to new hardware architectures are necessary tasks in the software development process. Developers therefore have to investigate and understand application runtime behavior. Tools such as monitoring infrastructures that capture performance-relevant data during application execution assist in this task. The measured data form the basis for identifying bottlenecks and optimizing the code. Monitoring infrastructures need mechanisms to record application activities in order to conduct measurements. Automatic instrumentation of the source code is the preferred method in most application scenarios. We introduce a plug-in for the LLVM infrastructure that enables automatic source code instrumentation at compile time. In contrast to available instrumentation mechanisms in LLVM/Clang, our plug-in can selectively include/exclude individual application functions. This enables developers to fine-tune the measurement to the required level of detail while avoiding large runtime overheads due to excessive instrumentation.
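The selective include/exclude idea can be sketched as a per-function filter over glob patterns. The precedence rule used here (an exclude match always wins) is an assumption for illustration, not the plug-in's documented semantics.

```python
from fnmatch import fnmatch

def should_instrument(name, include=("*",), exclude=()):
    """Decide whether a function gets instrumentation hooks.
    Exclude patterns take precedence over include patterns."""
    if any(fnmatch(name, pat) for pat in exclude):
        return False
    return any(fnmatch(name, pat) for pat in include)

functions = ["main", "solver_step", "mpi_wrapper", "log_debug"]
kept = [f for f in functions
        if should_instrument(f, include=("main", "solver_*"),
                             exclude=("log_*",))]
print(kept)   # ['main', 'solver_step']
```

Applying such a filter at compile time, before hooks are emitted, is what keeps the runtime overhead proportional to the functions the developer actually cares about.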