
    Monitoring evolution of code complexity and magnitude of changes

    Complexity management has become a crucial activity in continuous software development. While the overall perceived complexity of a product grows rather insignificantly, small units such as functions and files can show noticeable complexity growth with every increment of product features. This kind of evolution raises the risks of escalating fault-proneness and deteriorating maintainability. The goal of this research was to develop a measurement system that enables effective monitoring of complexity evolution. An action research study was conducted in two large software development organizations, in which we measured three complexity properties and two change properties of code for two large industrial products. Complexity growth was measured across five consecutive releases of the products, and different growth patterns were identified and evaluated with software engineers in industry. The results show that monitoring the cyclomatic complexity evolution of functions and the number of revisions of files focuses designers’ attention on potentially problematic files and functions for manual assessment and improvement. A measurement system was developed at Ericsson to support the monitoring process.
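    As a rough illustration of the kind of monitoring described above (not the authors’ Ericsson measurement system), the sketch below combines per-function cyclomatic complexity from the third-party `radon` library with per-file revision counts from `git log` to flag candidates for manual review; the thresholds and the example path are assumptions.

```python
# Hypothetical sketch: flag files whose functions are complex AND frequently revised.
# Assumes a local git repository and the `radon` package; thresholds are illustrative.
import subprocess
from radon.complexity import cc_visit

COMPLEXITY_THRESHOLD = 10   # McCabe cyclomatic complexity per function (assumed)
REVISION_THRESHOLD = 20     # commits touching the file (assumed)

def revision_count(path: str) -> int:
    """Count commits that touched `path` (a proxy for change frequency)."""
    out = subprocess.run(
        ["git", "log", "--oneline", "--follow", "--", path],
        capture_output=True, text=True, check=True,
    )
    return len(out.stdout.splitlines())

def flag_file(path: str) -> list:
    """Return (function name, complexity) pairs worth manual assessment."""
    with open(path, encoding="utf-8") as fh:
        source = fh.read()
    complex_funcs = [(block.name, block.complexity)
                     for block in cc_visit(source)
                     if block.complexity >= COMPLEXITY_THRESHOLD]
    if complex_funcs and revision_count(path) >= REVISION_THRESHOLD:
        return complex_funcs
    return []

if __name__ == "__main__":
    for name, cc in flag_file("src/example_module.py"):   # hypothetical path
        print(f"review candidate: {name} (cyclomatic complexity {cc})")
```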

    The WHO Global Code of Practice on the International Recruitment of Health Personnel: The Evolution of Global Health Diplomacy

    The May 2010 adoption of the World Health Organization Global Code of Practice on the International Recruitment of Health Personnel created a global architecture, including ethical norms and institutional and legal arrangements, to guide international cooperation and serve as a platform for continuing dialogue on the critical problem of health worker migration. Highlighting the contribution of non-binding instruments to global health governance, this article describes the Code negotiation process from its early stages to the formal adoption of the final text. It details the vigorous negotiations amongst key stakeholders, including the active role of non-governmental organizations. The article emphasizes that political leadership, appropriate sequencing, and support for building developing countries’ negotiating capacity are essential to successful global health negotiations. It also reflects on how the dynamics of the Code negotiation process evidence an evolution in global health negotiations amongst the WHO Secretariat, civil society, and WHO Member States.

    A framework for effective management of condition based maintenance programs in the context of industrial development of E-Maintenance strategies

    CBM (Condition Based Maintenance) solutions are increasingly present in industrial systems due to two main circumstances: an unprecedented pace of evolution in the capture and analysis of data, and a significant reduction in the cost of supporting technologies. CBM programs in industrial systems can become extremely complex, especially when considering the effective introduction of the new capabilities provided by the PHM (Prognostics and Health Management) and E-maintenance disciplines. In this scenario, any CBM solution involves the management of numerous technical aspects that the maintenance manager needs to understand in order to implement the solution properly and effectively, in line with the company’s strategy. This paper provides a comprehensive representation of the key components of a generic CBM solution, presented as a framework, or supporting structure, for effective management of CBM programs. The concept of a “symptom of failure”, its corresponding analysis techniques (introduced by ISO 13379-1 and linked with RCM/FMEA analysis), and other international standards for CBM open-software application development (for instance, ISO 13374 and OSA-CBM) are used in the paper to develop the framework. An original template, adopting the formal structure of RCM analysis templates, has been developed to integrate information on the PHM techniques used to capture failure mode behaviour and to manage maintenance. Finally, a case study describes the framework using this template. Funding: Gobierno de Andalucía P11-TEP-7303 M
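    As a rough illustration of the kind of template described above (not the authors’ actual template; all field names and values below are assumptions), the sketch models one row that links an RCM/FMEA failure mode to its symptom, the descriptor that measures it, and the PHM technique used to detect it, following the ISO 13379-1 “symptom of failure” idea.

```python
# Hypothetical sketch of an RCM-style CBM template row; field names and the
# example values are assumptions, not taken from the paper or the ISO 13379-1 text.
from dataclasses import dataclass

@dataclass
class CbmTemplateRow:
    asset: str                  # equipment or component under monitoring
    failure_mode: str           # from the RCM/FMEA analysis
    symptom: str                # observable symptom of the failure mode
    descriptor: str             # measured quantity capturing the symptom
    detection_technique: str    # PHM technique used to track the descriptor
    alarm_threshold: float      # condition level that triggers maintenance
    maintenance_action: str     # action planned when the threshold is crossed

# Illustrative row (values invented for the example):
row = CbmTemplateRow(
    asset="centrifugal pump P-101",
    failure_mode="bearing wear",
    symptom="increasing vibration level",
    descriptor="vibration velocity RMS [mm/s]",
    detection_technique="vibration spectrum trending",
    alarm_threshold=7.1,
    maintenance_action="schedule bearing replacement",
)
print(f"{row.asset}: {row.failure_mode} monitored via {row.detection_technique}")
```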

    Tracer test modeling for characterizing heterogeneity and local-scale residence time distribution in an artificial recharge site

    Artificial recharge of aquifers is a technique for improving water quality and increasing groundwater resources. Understanding the fate of a potential contaminant requires knowledge of the residence time distribution (RTD) of the recharged water in the aquifer beneath. A simple way to obtain the RTDs is to perform a tracer test. We performed a pulse injection tracer test in an artificial recharge system operated through an infiltration basin to obtain the breakthrough curves, which directly yield the RTDs. The RTDs turned out to be very broad, and we used a numerical model to interpret them, to characterize heterogeneity, and to extend the model to other flow conditions. The model comprised nine layers at the site scale to emulate the layering of the aquifer deposits. Two types of hypotheses were considered: homogeneous (all flow and transport parameters identical for every layer) and heterogeneous (different parameters for each layer). The parameters of both model types were calibrated against head and concentration data, and the models were validated quite satisfactorily against 1,1,2-trichloroethane and electrical conductivity data collected over a long period with highly varying flow conditions. We found that the broad RTDs can be attributed to the complex flow structure generated under the basin by three-dimensionality and time fluctuations (the homogeneous model produced broad RTDs) and to the heterogeneity of the media (the heterogeneous model yielded much better fits). We conclude that heterogeneity must be acknowledged to properly assess mixing and broad RTDs, which are required to explain the water quality improvement in artificial recharge basins.
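    As a minimal illustration of how a pulse-injection breakthrough curve yields an RTD, the sketch below normalizes measured concentrations to unit area and computes the mean residence time and variance; the time and concentration arrays are placeholders, not measurements from the study site.

```python
# Minimal sketch: estimate the residence time distribution (RTD) from a
# pulse-injection breakthrough curve. Arrays are illustrative placeholders.
import numpy as np

t = np.array([0.0, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0])   # days since injection
c = np.array([0.0, 0.1, 0.6, 1.0, 0.7, 0.3, 0.1, 0.0])      # tracer concentration

# For a pulse input, the RTD is the breakthrough curve normalized to unit area:
# E(t) = c(t) / integral of c dt
E = c / np.trapz(c, t)

mean_residence_time = np.trapz(t * E, t)                        # first moment
variance = np.trapz((t - mean_residence_time) ** 2 * E, t)      # spread (broadness)

print(f"mean residence time ~ {mean_residence_time:.1f} days")
print(f"RTD variance ~ {variance:.1f} days^2")
```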

    Big Universe, Big Data: Machine Learning and Image Analysis for Astronomy

    Astrophysics and cosmology are rich with data. The advent of wide-area digital cameras on large-aperture telescopes has led to ever more ambitious surveys of the sky. Data volumes that a decade ago made up entire surveys can now be acquired in a single night, and real-time analysis is often desired. Modern astronomy therefore requires big-data know-how; in particular, it demands highly efficient machine learning and image analysis algorithms. But scalability is not the only challenge: astronomy applications touch on several current machine learning research questions, such as learning from biased data and dealing with label and measurement noise. We argue that this makes astronomy a great domain for computer science research, as it pushes the boundaries of data analysis. In the following, we present this exciting application area for data scientists, focusing on exemplary results, discussing the main challenges, and highlighting some recent methodological advances in machine learning and image analysis triggered by astronomical applications.
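    The abstract mentions learning from biased (non-representative) training data; one common, generic remedy, sketched below using scikit-learn on synthetic data, is importance weighting: training examples are reweighted by how likely they are under the target distribution relative to the labelled training distribution. This is an illustrative technique, not a method proposed in the paper.

```python
# Minimal sketch of importance weighting for sample-selection bias.
# Uses scikit-learn; the synthetic data stands in for labelled/unlabelled surveys.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Labelled (training) sample is biased relative to the unlabelled target survey.
X_train = rng.normal(loc=1.0, scale=1.0, size=(1000, 2))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] > 1.0).astype(int)
X_target = rng.normal(loc=0.0, scale=1.0, size=(1000, 2))   # unlabelled target sample

# 1) Train a domain classifier to distinguish training from target examples.
domain_X = np.vstack([X_train, X_target])
domain_y = np.concatenate([np.zeros(len(X_train)), np.ones(len(X_target))])
domain_clf = LogisticRegression().fit(domain_X, domain_y)

# 2) Importance weight ~ p(target | x) / p(train | x) for each training example.
p_target = domain_clf.predict_proba(X_train)[:, 1]
weights = p_target / np.clip(1.0 - p_target, 1e-6, None)

# 3) Fit the actual classifier with these weights to correct for the bias.
clf = LogisticRegression().fit(X_train, y_train, sample_weight=weights)
print("weighted classifier trained; mean importance weight:", weights.mean().round(2))
```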

    Radio Emission from Supernovae

    Study of radio supernovae over the past 27 years includes more than three dozen detected objects and more than 150 upper limits. From this work it is possible to identify classes of radio properties, demonstrate conformance to and deviations from existing models, estimate the density and structure of the circumstellar material and, by inference, the evolution of the presupernova stellar wind, and reveal the last stages of stellar evolution before explosion. It is also possible to detect ionized hydrogen along the line of sight, to demonstrate binary properties of the presupernova stellar system, and to detect clumpiness of the circumstellar material. Along with reviewing these general properties of the radio emission from supernovae, we present our extensive observations of the radio emission from supernova (SN) 1993J in M 81 (NGC 3031) made with the Very Large Array and other radio telescopes. The SN 1993J radio emission evolves regularly in both time and frequency, and the usual interpretation in terms of shock interaction with a circumstellar medium (CSM) formed by a pre-supernova stellar wind describes the observations rather well, considering the complexity of the phenomenon. However: 1) The highest frequency measurements at 85 - 110 GHz at early times (< 40 days) are not well fitted by the parameterization which describes the cm wavelength measurements rather well. 2) At mid-cm wavelengths there is often deviation from the fitted radio light curves, particularly near the peak flux density, and considerable shorter-term deviation in the declining portion once the emission has become optically thin. 3) At a time ~3100 days after shock breakout, the decline rate of the radio emission steepens from (t^(+β)) β ~ -0.7 to β ~ -2.7 without change in the spectral index (ν^(+α); α ~ -0.81). However, this decline is best described not as a power law but as an exponential decay starting at day ~3100 with an e-folding time of ~1100 days. 4) The best overall fit to all of the data is a model including both non-thermal synchrotron self-absorption (SSA) and thermal free-free absorption (FFA) components at early times, evolving to a constant spectral index and optically thin decline rate until the break in that decline rate at day ~3100 mentioned above. Moreover, neither a purely SSA nor a purely FFA absorbing model can provide a fit that simultaneously reproduces the light curves, the spectral index evolution, and the brightness temperature evolution.
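    As a simplified illustration of the late-time behaviour quoted above, the sketch below evaluates an optically thin power-law light curve F ∝ ν^α t^β that switches to the reported exponential decay (e-folding time ~1100 days) after the day ~3100 break; the flux normalization is arbitrary and the early-time SSA/FFA absorption terms are omitted, so this is not the authors’ full model.

```python
# Simplified sketch of the optically thin radio light curve described above:
# F(nu, t) ∝ nu^alpha * t^beta before the break at ~3100 days, switching to an
# exponential decay (e-folding time ~1100 days) afterwards. The normalization K
# is arbitrary and the early-time SSA/FFA absorption terms are omitted.
import numpy as np

ALPHA = -0.81       # spectral index (quoted in the abstract)
BETA = -0.7         # pre-break decline rate (quoted in the abstract)
T_BREAK = 3100.0    # days after shock breakout
TAU_E = 1100.0      # e-folding time of the late exponential decay, days
K = 1.0             # arbitrary flux normalization (placeholder)

def model_flux(nu_ghz: float, t_days: np.ndarray) -> np.ndarray:
    """Model flux density (arbitrary units) at frequency nu_ghz [GHz]."""
    pre_break = K * (nu_ghz / 5.0) ** ALPHA * (t_days / 100.0) ** BETA
    flux_at_break = K * (nu_ghz / 5.0) ** ALPHA * (T_BREAK / 100.0) ** BETA
    post_break = flux_at_break * np.exp(-(t_days - T_BREAK) / TAU_E)
    return np.where(t_days > T_BREAK, post_break, pre_break)

t = np.array([200.0, 1000.0, 3100.0, 4200.0, 5300.0])
print(model_flux(8.4, t))   # e.g. at 8.4 GHz
```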

    Systems approaches to animal disease surveillance and resource allocation: methodological frameworks for behavioral analysis

    While demands for animal disease surveillance systems are growing, there has been little applied research examining the interactions between resource allocation, cost-effectiveness, and the behavioral considerations of actors throughout the livestock supply chain in a surveillance system context. These interactions are important, as feedbacks between surveillance decisions and disease evolution may be modulated by their contextual drivers, influencing the cost-effectiveness of a given surveillance system. This paper identifies a number of key behavioral aspects involved in animal health surveillance systems and reviews some novel methodologies for their analysis. A generic framework for analysis is discussed, with exemplar results provided to demonstrate the utility of such an approach in guiding better disease control and surveillance decisions.

    Rightsizing LISA

    The LISA science requirements and conceptual design have been fairly stable for over a decade. In the interest of reducing costs, the LISA Project at NASA has looked at simplifications of the architecture, at downsizing of subsystems, and at descopes of the entire mission. This is a natural activity of the formulation phase, and one that is particularly timely in the current NASA budgetary context. There is, and will continue to be, enormous pressure for cost reduction from ESA and NASA, from reviewers, and from the broader research community. Here, the rationale for the baseline architecture is reviewed, and recent efforts to find simplifications and other reductions that might lead to savings are reported. A few possible simplifications have been found in the LISA baseline architecture. In the interest of exploring cost sensitivity, one moderate and one aggressive descope have been evaluated; the cost savings are modest and the loss of science is not. Comment: To be published in Classical and Quantum Gravity; Proceedings of the Seventh International LISA Symposium, Barcelona, Spain, 16-20 Jun. 2008; 10 pages, 1 figure, 3 tables.