
    Change Request Prediction and Effort Estimation in an Evolving Software System

    Prediction of software defects has been the focus of many researchers in empirical software engineering and software maintenance because of its significance in providing quality estimates, from the project management perspective, for an evolving legacy system. Software Reliability Growth Models (SRGM) have been used to predict future defects in a software release. Modern software engineering databases contain Change Requests (CR), which include both defects and other maintenance requests. Our goal is to use defect prediction methods to help predict CRs in an evolving legacy system. Limited research has been done on defect prediction using curve-fitting methods for evolving software systems with one or more change-points. Curve-fitting approaches have been successfully used to select a fitted reliability model among candidate models for defect prediction. This work demonstrates the use of curve-fitting defect prediction methods to predict CRs. It focuses on providing a curve-fit solution that deals with evolutionary software changes yet still considers long-term prediction of data across the full release. We compare three curve-fit solutions in terms of their ability to predict CRs. Our data show that the Time Transformation (TT) approach provides more accurate CR predictions and fewer under-predicted Change Requests than the other curve-fitting methods. In addition to CR prediction, we investigated the possibility of estimating effort as well. We found that Lines of Code (added, deleted, modified, and auto-generated) associated with CRs do not necessarily predict the actual effort spent on CR resolution.
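
    As an illustration of the curve-fitting idea described above, the following minimal Python sketch fits a Goel-Okumoto exponential SRGM mean-value function to cumulative CR counts and extrapolates it; the synthetic data, parameter names and choice of model are assumptions for demonstration only, not the models or data used in the study.

        import numpy as np
        from scipy.optimize import curve_fit

        # Goel-Okumoto mean-value function: expected cumulative CRs by time t.
        # a = total expected CRs, b = detection rate (illustrative names).
        def goel_okumoto(t, a, b):
            return a * (1.0 - np.exp(-b * t))

        # Hypothetical cumulative CR counts observed week by week.
        weeks = np.arange(1, 13)
        cum_crs = np.array([5, 11, 16, 22, 26, 30, 33, 35, 37, 38, 39, 40])

        # Fit the model and extrapolate to forecast CRs at a future week.
        (a_hat, b_hat), _ = curve_fit(goel_okumoto, weeks, cum_crs, p0=(50.0, 0.1))
        forecast_week = 20
        print(f"fitted a={a_hat:.1f}, b={b_hat:.3f}")
        print(f"predicted cumulative CRs at week {forecast_week}: "
              f"{goel_okumoto(forecast_week, a_hat, b_hat):.1f}")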

    Langley aerospace test highlights, 1985

    The role of the Langley Research Center is to perform basic and applied research necessary for the advancement of aeronautics and space flight, to generate new and advanced concepts for the accomplishment of related national goals, and to provide research advice, technological support, and assistance to other NASA installations, other government agencies, and industry. Significant tests performed during calendar year 1985 in Langley test facilities are highlighted. Both the broad range of the research and technology activities at the Langley Research Center and the contributions of this work toward maintaining United States leadership in aeronautics and space research are illustrated. Other highlights of Langley research and technology for 1985 are described in Research and Technology - 1985 Annual Report of the Langley Research Center.

    Cross-Layer Cloud Performance Monitoring, Analysis and Recovery

    The basic idea of Cloud computing is to offer software and hardware resources as services. These services are provided at different layers: Software (Software as a Service: SaaS), Platform (Platform as a Service: PaaS) and Infrastructure (Infrastructure as a Service: IaaS). In such a complex environment, performance issues are the norm rather than the exception, and performance-related problems may frequently occur at all layers. Thus, it is necessary to monitor all Cloud layers and analyze their performance parameters to detect and rectify related problems. This thesis presents a novel cross-layer reactive performance monitoring approach for Cloud computing environments, based on the methodology of Complex Event Processing (CEP). The proposed approach is called CEP4Cloud. It analyzes monitored events to detect performance-related problems and performs actions to fix them. The proposal is based on the use of (1) a novel multi-layer monitoring approach, (2) a new cross-layer analysis approach and (3) a novel recovery approach. The proposed monitoring approach operates at all Cloud layers while collecting related parameters. It makes use of existing monitoring tools and a new monitoring approach for Cloud services at the SaaS layer. The proposed SaaS monitoring approach is called AOP4CSM. It is based on aspect-oriented programming and monitors quality-of-service parameters of the SaaS layer in a non-invasive manner, modifying neither the server implementation nor the client implementation. The defined cross-layer analysis approach is called D-CEP4CMA. It is based on the methodology of Complex Event Processing (CEP). Instead of requiring continuous queries over the monitored event streams to be specified manually, CEP queries are derived from analyzing the correlations between monitored metrics across multiple Cloud layers. The results of the correlation analysis allow us to reduce the number of monitored parameters and enable us to perform a root cause analysis to identify the causes of performance-related problems. The derived analysis rules are implemented as queries in a CEP engine. D-CEP4CMA is designed to dynamically switch between different centralized and distributed CEP architectures depending on the load/memory of the CEP machine and the network traffic conditions in the observed Cloud environment. The proposed recovery approach is based on a novel action manager framework. It applies recovery actions at all Cloud layers, assigns a set of repair actions to each performance-related problem and checks the success of the applied action. The results of several experiments illustrate the merits of the reactive performance monitoring approach and its main components (i.e., monitoring, analysis and recovery). First, experimental results show the efficiency of AOP4CSM (very low overhead). Second, the obtained results demonstrate the benefits of the analysis approach in terms of precision and recall compared to threshold-based methods. They also show the accuracy of the analysis approach in identifying the causes of performance-related problems. Furthermore, experiments illustrate the efficiency of D-CEP4CMA and its performance in terms of precision and recall compared to centralized and distributed CEP architectures. Moreover, experimental results indicate that the time needed to fix a performance-related problem is reasonably short. They also show that the CPU overhead of using CEP4Cloud is negligible. Finally, experimental results demonstrate the merits of CEP4Cloud in terms of speeding up repairs and reducing the number of triggered alarms compared to baseline methods.
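
    As a purely illustrative sketch of the correlation-driven metric selection described above (not the actual D-CEP4CMA implementation), the following Python fragment keeps only the lower-layer metrics that correlate strongly with a SaaS-level symptom; all metric names, values and the threshold are assumptions.

        import numpy as np

        # Hypothetical monitored time series from different Cloud layers.
        metrics = {
            "saas_response_time": np.array([120, 135, 150, 300, 310, 320, 140.0]),
            "paas_queue_length":  np.array([10, 12, 14, 40, 42, 41, 13.0]),
            "iaas_cpu_util":      np.array([0.4, 0.45, 0.5, 0.95, 0.93, 0.96, 0.48]),
            "iaas_disk_io":       np.array([5, 6, 5, 7, 6, 5, 6.0]),
        }

        target = "saas_response_time"
        CORRELATION_THRESHOLD = 0.8  # assumed cut-off for keeping a metric

        # Keep only metrics strongly correlated with the SaaS-level symptom;
        # these become candidates for root-cause analysis and CEP queries.
        candidates = {}
        for name, series in metrics.items():
            if name == target:
                continue
            r = np.corrcoef(metrics[target], series)[0, 1]
            if abs(r) >= CORRELATION_THRESHOLD:
                candidates[name] = r

        print("metrics to monitor in CEP queries:", candidates)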

    Platform-based design, test and fast verification flow for mixed-signal systems on chip

    This research provides methodologies to enhance the design phase, from architectural space exploration and system study to verification of the whole mixed-signal system. At the beginning of the work, some innovative digital IPs were designed to provide efficient signal conditioning for sensor systems on-chip; these have been included in commercial products. The main focus then moved to the creation of a re-usable and versatile post-tape-out test of the device, which is close to becoming one of the major cost factors for IC companies, strongly linking it to the model's test-benches to avoid re-design phases and multi-environment scenarios and producing a very effective approach: a single, fast and reliable multi-level verification environment. This work generated several publications in the scientific literature. The complex scenario concerning the development of sensor systems is presented in Chapter 1, together with an overview of the related market, with a particular focus on the latest MEMS and MOEMS technology devices and their applications in various segments. Chapter 2 introduces the state of the art for sensor interfaces: the generic sensor interface concept (based on sharing the same electronics among similar applications, achieving cost savings at the expense of area and performance loss) versus the Platform Based Design methodology, which overcomes the drawbacks of the classic solution by keeping generality at the highest design layers and customizing the platform for a target sensor, achieving optimized performance. An evolution of Platform Based Design, achieved by implementing the ISIF (Intelligent Sensor InterFace) platform in silicon, is then presented. ISIF is a highly configurable mixed-signal chip which allows designers to perform an effective design space exploration and to evaluate system performance directly on silicon, avoiding the critical and time-consuming analysis required by a standard platform-based approach. Chapter 3 describes the design of a smart sensor interface for conditioning next-generation MOEMS. The adoption of a new, high-performance and highly integrated technology allows us to integrate not only a versatile platform but also a powerful ARM processor and various IPs, providing the possibility to use the platform not only for conditioning but also as a processing unit for the application. In this chapter a description of the various blocks is given, with particular emphasis on the IP developed to achieve the highest degree of flexibility with minimum area occupation. The architectural space evaluation and the application prototyping with ISIF have enabled an effective, rapid and low-risk development of a new high-performance platform, achieving a flexible sensor system for MEMS and MOEMS monitoring and conditioning. The platform has been designed to cover very challenging test-benches, such as a laser-based projector device. In this way the platform can effectively handle not only the sensor but also the whole system built around it, reducing the need for further electronics and resulting in an efficient test bench for the algorithms developed to drive the system. The high costs in ASIC development are mainly related to re-design phases caused by missing complete top-level tests, since the design flows of the analog and digital parts are verified separately. Starting from these considerations, in the last chapter a complete test environment for complex mixed-signal chips is presented. A semi-automatic VHDL-AMS flow that provides a fully matching top-level is described, and then an evolution toward fast self-checking test development for both model and real-chip verification is proposed. Through the introduction of a Python interface, the designer can easily perform interactive tests that cover the verification of all features (e.g. calibration and trimming) during the design phase and check them all with the same environment on the real chip after tape-out. This strategy has been tested on a 3D gyroscope for consumer applications, in collaboration with SensorDynamics AG.
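
    The self-checking flow outlined above reuses the same test description for the VHDL-AMS model and the fabricated chip. The Python sketch below only illustrates that idea: every class, method and register name is hypothetical, and the real flow drives an AMS simulator and lab instruments rather than the stub shown here.

        # Minimal sketch of a self-checking test reused for model and silicon.
        # Both targets are assumed to expose the same read/write interface.

        class SimulatedDevice:
            """Stub standing in for the VHDL-AMS top-level model."""
            def __init__(self):
                self.regs = {}

            def write(self, reg, value):
                self.regs[reg] = value

            def read(self, reg):
                return self.regs.get(reg, 0)

        def trimming_test(dev):
            """Write a trimming code and check that it is stored correctly."""
            dev.write("TRIM_CODE", 0x3A)
            assert dev.read("TRIM_CODE") == 0x3A, "trimming register mismatch"
            return "PASS"

        if __name__ == "__main__":
            # The same function would later be pointed at a driver for the real chip.
            print(trimming_test(SimulatedDevice()))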

    Green Low-Carbon Technology for Metalliferous Minerals

    Metalliferous minerals play a central role in the global economy and will continue to provide the raw materials needed for industrial processes. Significant challenges will likely emerge if the climate-driven green and low-carbon transition of metalliferous mineral exploitation is not managed responsibly and sustainably. Green low-carbon technology is vital to shift the development of metalliferous mineral resources from extensive and destructive mining to clean and energy-saving mining in future decades. Mining scientists and engineers worldwide have conducted extensive research in related fields, such as green mining, ecological mining, energy-saving mining, and mining solid-waste recycling, and have made a great deal of innovative progress. This Special Issue intends to collect the latest developments in the green low-carbon mining field, written by well-known researchers who have contributed to the innovation of new technologies, process optimization methods, or energy-saving techniques in metalliferous mineral development.

    Syndromics: A Bioinformatics Approach for Neurotrauma Research

    Substantial scientific progress has been made in the past 50 years in delineating many of the biological mechanisms involved in the primary and secondary injuries following trauma to the spinal cord and brain. These advances have highlighted numerous potential therapeutic approaches that may help restore function after injury. Despite these advances, bench-to-bedside translation has remained elusive. Translational testing of novel therapies requires standardized measures of function for comparison across different laboratories, paradigms, and species. Although numerous functional assessments have been developed in animal models, it remains unclear how best to integrate this information to describe the complete translational “syndrome” produced by neurotrauma. The present paper describes a multivariate statistical framework for integrating diverse neurotrauma data and reviews the few papers to date that have taken an information-intensive approach to basic neurotrauma research. We argue that these papers can be described as the seminal works of a new field that we call “syndromics”, which aims to apply informatics tools to disease models to characterize the full set of mechanistic inter-relationships from multi-scale data. In the future, centralized databases of raw neurotrauma data will enable better syndromic approaches and aid future translational research, leading to more efficient testing regimens and more clinically relevant findings.
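
    Principal component analysis is one common multivariate technique for this kind of data integration; the short Python sketch below applies it to a hypothetical matrix of outcome measures and is illustrative only, not the specific framework used in the papers reviewed.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.preprocessing import StandardScaler

        # Hypothetical outcome matrix: rows = animals, columns = heterogeneous
        # neurotrauma measures (e.g., locomotor score, lesion volume, grip strength).
        rng = np.random.default_rng(0)
        outcomes = rng.normal(size=(30, 6))

        # Standardize measures with different units, then extract shared
        # "syndromic" dimensions that summarize correlated outcomes.
        pca = PCA(n_components=2)
        scores = pca.fit_transform(StandardScaler().fit_transform(outcomes))

        print("variance explained:", pca.explained_variance_ratio_)
        print("first animal's syndromic scores:", scores[0])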

    Mathematical statistics vs machine learning: towards an intelligent modeling framework for soil and plant growth processes

    Double-degree master's programme with the Kuban State Agrarian University. The work described in this dissertation focuses on the Mathematical Statistics (MS) and Machine Learning (ML) methods used in Precision Farming (PF). The purpose of the work is to investigate these methods through their practical application to a specific dataset. In the course of the work, the following tasks were completed: the current state of affairs in the field of PF was surveyed, and the theoretical foundations of MS and ML methods were studied and then subjected to practical tests on a specific dataset. Conclusions were drawn about the advantages and disadvantages of these methods. A selection of works by scientists engaged in research on the introduction of a specific set of nutrients into the soil was also reviewed. The most important contributions of this work are the practical application of various methods of analysis, as well as the design of a Decision Support Tool (DST) intended to help farmers integrate PF into their pilot training farms.
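
    As a concrete illustration of contrasting a classical statistical model with a machine-learning model on agronomic data, a minimal Python sketch might look as follows; the features, the synthetic data and the choice of linear regression versus random forest are assumptions, not the exact methods or dataset of the dissertation.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import cross_val_score

        # Hypothetical soil/nutrient features and crop yield target.
        rng = np.random.default_rng(42)
        X = rng.uniform(size=(200, 4))        # e.g., N, P, K, soil moisture
        y = 3 * X[:, 0] + 2 * X[:, 1] ** 2 + rng.normal(0, 0.1, 200)

        # Compare a classical statistical model with an ML model via cross-validated R^2.
        for name, model in [("linear regression", LinearRegression()),
                            ("random forest", RandomForestRegressor(random_state=0))]:
            r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
            print(f"{name}: mean R^2 = {r2:.2f}")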

    Aeronautical engineering: A continuing bibliography with indexes (supplement 278)

    This bibliography lists 414 reports, articles, and other documents introduced into the NASA scientific and technical information system in April 1992.

    Nuclear Power - Control, Reliability and Human Factors

    Advances in reactor designs, materials and human-machine interfaces guarantee the safety and reliability of emerging reactor technologies, eliminating the possibility of high-consequence human errors such as those that have occurred in the past. New instrumentation and control technologies based on digital systems, novel sensors and measurement approaches facilitate the safety, reliability and economic competitiveness of nuclear power options. Autonomous operation scenarios are becoming increasingly popular to consider for small modular systems. This book belongs to a series of books on nuclear power published by InTech. It consists of four major sections and contains twenty-one chapters on topics from key subject areas pertinent to instrumentation and control, operation reliability, system aging and human-machine interfaces. The book targets a broad potential readership - students, researchers and specialists in the field - who are interested in learning about nuclear power.

    REMOTE SENSING DATA ANALYSIS FOR ENVIRONMENTAL AND HUMANITARIAN PURPOSES. The automation of information extraction from free satellite data.

    This work is aimed at investigating technical possibilities to provide information on environmental parameters that can be used for risk management. The World Food Programme (WFP) is the United Nations agency involved in risk management for fighting hunger in least-developed and low-income countries, where victims of natural and man-made disasters, refugees, displaced people and the hungry poor suffer from severe food shortages. Risk management includes three different phases (pre-disaster, response and post-disaster) to be managed through different activities and actions. Pre-disaster activities are meant to develop and deliver risk assessment, establish prevention actions and prepare the operational structures for managing an eventual emergency or disaster. In the response and post-disaster phases, the actions planned in the pre-disaster phase are executed, focusing first on saving lives and secondly on social and economic recovery. In order to optimally manage its operations in the response and post-disaster phases, WFP needs to know, as soon as possible, the areas affected by the natural disaster, the number of affected people, and the effects that the event can cause to vegetation, so that it can estimate the impact the event will have on future food security. For this, providing easy-to-consult thematic maps of the affected areas and population, with adequate spatial resolution, time frequency and regular updating, can be decisive. Satellite remote sensing data have increasingly been used in recent decades to provide updated information about the land surface with an acceptable time frequency. Furthermore, satellite images can be processed by automatic procedures to extract synthetic information about ground conditions in a very short time, and the results can easily be shared on the web. The work of this thesis, focused on the analysis and processing of satellite data, was carried out in cooperation with the association ITHACA (Information Technology for Humanitarian Assistance, Cooperation and Action), a research center which works in cooperation with the WFP to provide IT products and tools for the management of food emergencies caused by natural disasters. These products should facilitate the forecasting of the effects of catastrophic events, the estimation of the extension and location of the areas hit by the event and of the affected population, and thereby the planning of interventions in the areas that could be affected by food insecurity. The required features of the instruments are regular updating, a spatial resolution suitable for synoptic analysis, low cost, and easy consultation. ITHACA is developing different activities to provide georeferenced thematic data to WFP users, such as a spatial data infrastructure for storing, querying and manipulating large amounts of global geographic information and for sharing it within a large and differentiated community; a flood early warning system; a drought monitoring tool; procedures for rapid mapping in the response phase after a natural disaster; and web GIS tools to distribute and share georeferenced information that can be consulted by means of a web browser alone. The work of the thesis is aimed at providing applications for the automatic production of base georeferenced thematic data from free global satellite data, which have characteristics suitable for analysis at a regional scale. In particular, the main themes of the applications are water bodies and vegetation phenology.
The first application aims at providing procedures for the automatic extraction of water bodies and leads to the creation and updating of a historical archive, which can be analyzed to capture the seasonality of water bodies and delineate scenarios of historically flooded areas. The automatic extraction of phenological parameters from satellite data allows the existing drought monitoring system to be integrated with information on vegetation seasonality and provides further information for the evaluation of food insecurity in the post-disaster phase. The thesis describes the activities carried out to develop procedures for the automatic processing of free satellite data in order to produce layers customized to the format and distribution requirements of the final users. The main activities, which focused on the development of an automated procedure for the extraction of flooded areas, include the search for an algorithm for the classification of water bodies from satellite data, an important theme in the management of emergencies due to flood events. Two main technologies are generally used: active sensors (radar) and passive sensors (optical data). Active sensors can acquire measurements at any time, regardless of the time of day or season, while passive sensors can only be used in daytime, cloud-free conditions. Even though radar technologies can provide information on the ground in all weather conditions, radar data cannot be used to build a continuous archive of flooded areas, because of the lack of a predetermined frequency in the acquisition of the images. For this reason the choice of the dataset went in favor of MODIS (Moderate Resolution Imaging Spectroradiometer) optical data, with a daily frequency, a spatial resolution of 250 meters and a historical archive of 10 years. Cloud coverage prevents the acquisition of the earth surface, and cloud shadows can be wrongly classified as water bodies because their spectral response is very similar to that of water. After an analysis of the state of the art of algorithms for the automated classification of water bodies in images derived from optical sensors, the author developed an algorithm that classifies the reflectivity data and composites them temporally in order to obtain flooded-area scenarios for each event. This procedure was tested in Bangladesh, providing encouraging classification accuracies. For the vegetation theme, the main activities performed, described here, include the review of existing methodologies for phenological studies and the automation of the data flow between inputs and outputs using different global free satellite datasets. In the literature, many studies have demonstrated the utility of the NDVI (Normalized Difference Vegetation Index) for monitoring vegetation dynamics, studying cultivations and surveying vegetation water stress. The author developed a procedure for creating layers of phenological parameters which integrates the TIMESAT software, produced by Lars Eklundh and Per Jönsson, to process NDVI indices derived from different satellite sensors: MODIS (Moderate Resolution Imaging Spectroradiometer), AVHRR (Advanced Very High Resolution Radiometer) and SPOT (Système Pour l'Observation de la Terre) VEGETATION. The automated procedure starts from data downloading, calls the software in batch mode and provides customized layers of phenological parameters such as the start of the season, the length of the season and many others.
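
    As a purely illustrative Python sketch of threshold-based water detection on optical reflectance data (not the author's actual algorithm, index or thresholds), a normalized water index can be combined with a cloud mask as follows.

        import numpy as np

        def classify_water(red, nir, cloud_mask, ndwi_threshold=0.0):
            """Very simplified water classification on MODIS-like reflectance bands.

            red, nir: 2D arrays of surface reflectance (0..1); cloud_mask: True where
            cloudy. The index variant and threshold are illustrative assumptions only.
            """
            # Normalized-difference water index variant using the red and NIR bands
            # available at 250 m resolution; water reflects little in the NIR.
            ndwi = (red - nir) / np.clip(red + nir, 1e-6, None)
            water = ndwi > ndwi_threshold
            # Pixels under cloud (or cloud shadow) are left unclassified.
            water[cloud_mask] = False
            return water

        # Tiny synthetic example: one water pixel, one land pixel, one cloudy pixel.
        red = np.array([[0.02, 0.10, 0.05]])
        nir = np.array([[0.01, 0.30, 0.04]])
        clouds = np.array([[False, False, True]])
        print(classify_water(red, nir, clouds))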