59 research outputs found

    Analysis of Snow Cover in the Sibillini Mountains in Central Italy

    Research on solid precipitation and snow cover, especially in mountainous areas, suffers from a lack of on-site observations and from measurements of low reliability, often because the instruments are not suited to the environmental conditions. The study area, the Monti Sibillini National Park in central Italy, is no exception: measurements there are scarce and fragmented. The purpose of this research is to characterize the snow cover in the Monti Sibillini National Park area in terms of maximum annual snow depth, average snow depth during the snowy period, and days with snow cover on the ground, by means of ground weather stations, and to analyze any trends over the last 30 years. To obtain reliable snow cover data, only data from weather stations equipped with a sonar system and from manual weather stations, where a surveyor visits the site each morning to measure and record the thickness of the snowpack, were collected. The data were collected from 1 November to 30 April each year for 30 years, from 1991 to 2020; six weather stations were taken into account, and four more were added as of 1 January 2010. The longer period was used to assess possible ongoing trends, which proved very heterogeneous: predominantly negative for days with snow cover on the ground, predominantly positive for maximum annual snow depth, and distributed between positive and negative for average annual snow depth. The shorter period, 2010–2022, on the other hand, ensured the presence of a larger number of weather stations and was used to assess the correlation and the presence of clusters among the weather stations and, consequently, in the study area.
    Furthermore, in this way an up-to-date nivometric classification of the study area was obtained (in terms of days with snow on the ground, maximum snowpack height, and average snowpack height), filling a gap, as no nivometric study of the area existed. The interpolations were processed using geostatistical techniques, namely co-kriging with altitude as an independent variable, which allowed a fairly precise spatialization, assessed through cross-validation. This analysis could be a useful tool for hydrological modeling of the area, and it has clear applications for tourism and for vegetation, whose phenology is strongly influenced by the nivometric variables. In addition, this analysis could also be considered a starting point for the calibration of more recent satellite products dedicated to snow cover detection, in order to further improve the compiled climate characterization.
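    The station-by-station trend assessment described above can be illustrated with the Mann-Kendall S statistic, a standard non-parametric trend indicator for hydro-nivological series. The abstract does not name the test actually used, so this is an assumption, and the annual snow-cover-day values below are invented for illustration:

```python
def mann_kendall_s(series):
    """Mann-Kendall S statistic: number of increasing pairs minus
    number of decreasing pairs. S < 0 suggests a decreasing trend."""
    s = 0
    n = len(series)
    for i in range(n - 1):
        for j in range(i + 1, n):
            diff = series[j] - series[i]
            s += (diff > 0) - (diff < 0)
    return s

# Hypothetical annual days with snow cover on the ground at one station
days = [80, 75, 78, 60, 65, 55, 70, 50, 45, 48]
s = mann_kendall_s(days)
trend = "decreasing" if s < 0 else "increasing" if s > 0 else "no trend"
print("S =", s, "->", trend)  # S = -33 -> decreasing
```

In practice S is paired with a significance test (e.g., the normalized Z statistic) before declaring a trend; the sketch shows only the raw indicator.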

    Automatic detection and repair of directive defects of Java APIs documentation

    Application Programming Interfaces (APIs) are key tools for software developers building complex software systems. However, several studies have revealed that even major API providers tend to have incomplete or inconsistent API documentation. This can severely hamper API comprehension and, as a consequence, the quality of the software built on those APIs. In this paper, we propose DRONE (Detect and Repair of dOcumentatioN dEfects), a framework to automatically detect and repair defects in API documents by leveraging techniques from program analysis, natural language processing, and constraint solving. Specifically, we target the directives of API documents, which relate to parameter constraints and exception-handling declarations. Furthermore, in the presence of defects, we also provide a prototypical repair recommendation system. We evaluate our approach on parts of the well-documented APIs of JDK 1.8 (including JavaFX) and Android 7.0 (API level 24). Across the empirical studies, our approach detects API defects with average F-measures of 79.9%, 71.7%, and 81.4%, respectively. The repair capability has also been evaluated on the generated recommendations in a further experiment: user judgements indicate that the constraint information is addressed correctly and concisely in the rendered directives.
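    A directive defect of the kind targeted above is, for instance, an exception thrown on a violated parameter constraint but never declared in the Javadoc. The toy sketch below illustrates only the detection idea; DRONE itself relies on program analysis and constraint solving rather than pattern matching, and the method and exception names here are hypothetical:

```python
import re

def detect_missing_throws(javadoc: str, body: str) -> list:
    """Flag exception types thrown in a method body but absent from the
    Javadoc's @throws directives. A regex-based stand-in for real
    directive-defect detection, which analyzes code and constraints."""
    thrown = set(re.findall(r'throw new (\w+)', body))
    documented = set(re.findall(r'@throws\s+(\w+)', javadoc))
    return sorted(thrown - documented)

javadoc = """/**
 * Sets the timeout.
 * @param millis the timeout in milliseconds
 */"""
body = """void setTimeout(long millis) {
    if (millis < 0) throw new IllegalArgumentException("negative");
}"""
print(detect_missing_throws(javadoc, body))  # ['IllegalArgumentException']
```

A repair step in the spirit of the paper would then render the recovered constraint ("millis must be non-negative") as a new @throws directive.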

    Quality of life assessment in amyloid transthyretin (ATTR) amyloidosis

    Background: Amyloid transthyretin (ATTR) amyloidosis is caused by the systemic deposition of transthyretin molecules, either normal (wild-type ATTR, ATTRwt) or mutated (variant ATTR, ATTRv). ATTR amyloidosis is a disease with a severe impact on patients’ quality of life (QoL). Nonetheless, limited attention has been paid to QoL so far, and no specific tools for QoL assessment in ATTR amyloidosis currently exist. QoL can be evaluated through patient-reported outcome measures (PROMs), which are completed by patients, or through scales, which are compiled by clinicians. The scales investigate QoL either directly or indirectly, i.e., by assessing the degree of functional impairment and the limitations imposed by the disease. Design: Search for the measures of QoL evaluated in phase 2 and phase 3 clinical trials on ATTR amyloidosis. Results: Clinical trials on ATTR amyloidosis have used measures of general health status, such as the Short Form 36 Health Survey (SF-36); tools developed in other disease settings, such as the Kansas City Cardiomyopathy Questionnaire (KCCQ); or adaptations of other scales, such as the modified Neuropathy Impairment Score +7 (mNIS+7). Conclusions: Scales or PROMs specific to ATTR amyloidosis would be useful to better characterize newly diagnosed patients and to assess disease progression and response to treatment. The ongoing ITALY (Impact of Transthyretin Amyloidosis on Life qualitY) study aims to develop and validate two PROMs encompassing the whole phenotypic spectrum of ATTRwt and ATTRv amyloidosis, which might be helpful for patient management and may serve as surrogate endpoints for clinical trials.

    LIPS vs MOSA: a Replicated Empirical Study on Automated Test Case Generation

    Replication is a fundamental pillar in the construction of scientific knowledge. Test data generation for procedural programs can be tackled using a single-target or a many-objective approach. The proponents of LIPS, a novel single-target test generator, conducted a preliminary empirical study to compare their approach with MOSA, an alternative many-objective test generator. However, their empirical investigation suffers from several external and internal validity threats, does not consider complex programs with many branches, and does not include any qualitative analysis to interpret the results. In this paper, we report the results of a replication of the original study, designed to address its major limitations and threats to validity. The new findings draw a completely different picture of the pros and cons of single-target vs. many-objective approaches to test case generation.
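    The single-target vs. many-objective distinction can be sketched with branch distances, the standard fitness in search-based test generation: a single-target generator optimizes one branch's distance at a time, while a many-objective one treats the whole distance vector as simultaneous objectives for Pareto-based selection. A minimal sketch with two hypothetical branches (not the actual LIPS or MOSA implementations):

```python
def branch_distances(x):
    """Normalized branch distances for two hypothetical target branches
    of a program under test:
      branch 1: the true edge of `x == 10`
      branch 2: the true edge of `x > 100`
    Distance 0 means the branch is covered by input x."""
    d1 = abs(x - 10)            # distance to satisfying x == 10
    d2 = max(0, 101 - x)        # distance to satisfying x > 100
    norm = lambda d: d / (d + 1)  # common normalization into [0, 1)
    return [norm(d1), norm(d2)]

# Single-target (LIPS-style): one scalar objective per search round
single = branch_distances(12)[0]
# Many-objective (MOSA-style): the whole vector drives selection at once
many = branch_distances(12)
print(single, many)
```

The replicated comparison hinges on how these two search formulations scale when the vector has many entries, i.e., programs with many branches.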

    Applying a Smoothing Filter to improve IR-based Traceability Recovery Processes: An Empirical Investigation

    No full text
    Context: Traceability relations among software artifacts often tend to be missing, outdated, or lost. For this reason, various traceability recovery approaches—based on Information Retrieval (IR) techniques—have been proposed. The performance of such approaches is often influenced by ‘‘noise’’ contained in software artifacts (e.g., recurring words in document templates or other words that do not contribute to the retrieval itself). Aim: As a complement and alternative to stop word removal, this paper proposes the use of a smoothing filter to remove ‘‘noise’’ from the textual corpus of artifacts to be traced. Method: We evaluate the effect of a smoothing filter in traceability recovery tasks involving different kinds of artifacts from five software projects, applying three different IR methods, namely Vector Space Models, Latent Semantic Indexing, and the Jensen–Shannon similarity model. Results: Our study indicates that, with the exception of some specific kinds of artifacts (i.e., tracing test cases to source code), the proposed approach is able to significantly improve the performance of traceability recovery and to remove ‘‘noise’’ that simple stop word filters cannot remove. Conclusions: The obtained results not only help to develop traceability recovery approaches able to work in the presence of noisy artifacts, but also suggest that smoothing filters can be used to improve the performance of other software engineering approaches based on textual analysis.
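    The smoothing-filter idea can be sketched as subtracting the corpus-average term vector from each artifact's term vector before computing similarities, which damps template words that recur in every artifact. This is an illustrative simplification of the technique, not the paper's exact formulation, and the toy artifacts below are invented:

```python
import math
from collections import Counter

def tf_vectors(docs, vocab):
    """Raw term-frequency vectors over a fixed vocabulary."""
    return [[Counter(d.split())[w] for w in vocab] for d in docs]

def smooth(vectors):
    """Smoothing filter: subtract the corpus-average vector from each
    document vector, so terms shared by all artifacts (e.g. template
    words) contribute little to subsequent similarity scores."""
    n = len(vectors)
    mean = [sum(col) / n for col in zip(*vectors)]
    return [[v - m for v, m in zip(vec, mean)] for vec in vectors]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# "requirement" plays the role of a recurring template word
docs = ["requirement login user password",
        "requirement report export pdf",
        "login user password check"]
vocab = sorted({w for d in docs for w in d.split()})
raw = tf_vectors(docs, vocab)
flt = smooth(raw)
# Spurious similarity between unrelated artifacts 0 and 1 drops
print(cosine(raw[0], raw[1]), cosine(flt[0], flt[1]))
```

In the paper's setting the filter is applied before indexing with VSM, LSI, or the Jensen–Shannon model; here plain term frequencies and cosine stand in for those methods.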