67 research outputs found

    Archival News on Folk Drama in Central Dalmatia

    Get PDF
    © 2017 Association for Computing Machinery. Context: The research literature on software development projects usually assumes that effort is a good proxy for cost. Practice, however, suggests that there are circumstances in which costs and effort should be distinguished. Objectives: We determine similarities and differences between size, effort, cost, duration, and number of defects of software projects. Method: We compare two established repositories (ISBSG and EBSPM) comprising almost 700 projects from industry. Results: We demonstrate a (log)-linear relation between cost on the one hand, and size, duration and number of defects on the other. This justifies conducting linear regression for cost. We establish that ISBSG differs substantially from EBSPM in terms of cost (cheaper), duration (faster), and the relation between cost and effort. We show that while effort is the most important cost factor in ISBSG, this is not the case in other repositories, such as EBSPM, in which size is the dominant factor. Conclusion: Practitioners and researchers alike should be cautious when drawing conclusions from a single repository.
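    A (log)-linear relation of this kind means a multiplicative cost model that becomes linear after a log transform, so ordinary least squares applies. The sketch below illustrates the idea on synthetic data; the variable names, generated values and fitted coefficients are assumptions for illustration only, not the ISBSG or EBSPM data.

```python
# Minimal sketch of a log-linear cost model: cost ~ size, duration, defects.
# Synthetic data only; column names and coefficients are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 200
size = rng.lognormal(mean=5.0, sigma=1.0, size=n)      # e.g. function points
duration = rng.lognormal(mean=2.0, sigma=0.5, size=n)  # e.g. months
defects = rng.poisson(lam=20, size=n) + 1              # +1 avoids log(0)
cost = 50.0 * size**0.8 * duration**0.3 * np.exp(rng.normal(0.0, 0.2, n))

# Log-transform both sides so the multiplicative model becomes linear.
X = np.log(np.column_stack([size, duration, defects]))
y = np.log(cost)

model = LinearRegression().fit(X, y)
print("elasticities (size, duration, defects):", model.coef_)
print("in-sample R^2:", model.score(X, y))
```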

    Improvement Opportunities and Suggestions for Benchmarking

    Full text link
    During the past 10 years, the amount of effort put into setting up benchmarking repositories has increased considerably at the organizational, national and even international levels, to help software managers determine the performance of software activities and make better software estimates. This has enabled a number of studies with an emphasis on the relationship between software product size, effort and cost factors, in order to either measure the average performance for similar software projects or develop reliable estimation models and then refine them using the collected data. However, despite these efforts, none of these methods is yet deemed universally applicable and there is still no agreement on which cost factors are significant in the estimation process. This study discusses some of the possible reasons why, in software engineering, practitioners and researchers have not yet been able to come up with well-defined relationships between effort and cost drivers, although considerable amounts of data on software projects have been collected.

    Contribution of Somatic Ras/Raf/Mitogen-Activated Protein Kinase Variants in the Hippocampus in Drug-Resistant Mesial Temporal Lobe Epilepsy

    Get PDF
    Importance: Mesial temporal lobe epilepsy (MTLE) is the most common focal epilepsy subtype and is often refractory to antiseizure medications. While most patients with MTLE do not have pathogenic germline genetic variants, the contribution of postzygotic (ie, somatic) variants in the brain is unknown. Objective: To test the association between pathogenic somatic variants in the hippocampus and MTLE. Design, Setting, and Participants: This case-control genetic association study analyzed the DNA derived from hippocampal tissue of neurosurgically treated patients with MTLE and age-matched and sex-matched neurotypical controls. Participants treated at level 4 epilepsy centers were enrolled from 1988 through 2019, and clinical data were collected retrospectively. Whole-exome and gene-panel sequencing (each genomic region sequenced more than 500 times on average) were used to identify candidate pathogenic somatic variants. A subset of novel variants was functionally evaluated using cellular and molecular assays. Patients with nonlesional and lesional (mesial temporal sclerosis, focal cortical dysplasia, and low-grade epilepsy-associated tumors) drug-resistant MTLE who underwent anterior medial temporal lobectomy were eligible. All patients with available frozen tissue and appropriate consents were included. Control brain tissue was obtained from neurotypical donors at brain banks. Data were analyzed from June 2020 to August 2022. Exposures: Drug-resistant MTLE. Main Outcomes and Measures: Presence and abundance of pathogenic somatic variants in the hippocampus vs the unaffected temporal neocortex. Results: Of 105 included patients with MTLE, 53 (50.5%) were female, and the median (IQR) age was 32 (26-44) years; of 30 neurotypical controls, 11 (36.7%) were female, and the median (IQR) age was 37 (18-53) years. Eleven pathogenic somatic variants enriched in the hippocampus relative to the unaffected temporal neocortex (median [IQR] variant allele frequency, 1.92 [1.5-2.7] vs 0.3 [0-0.9]; P =.01) were detected in patients with MTLE but not in controls. Ten of these variants were in PTPN11, SOS1, KRAS, BRAF, and NF1, all predicted to constitutively activate Ras/Raf/mitogen-activated protein kinase (MAPK) signaling. Immunohistochemical studies of variant-positive hippocampal tissue demonstrated increased Erk1/2 phosphorylation, indicative of Ras/Raf/MAPK activation, predominantly in glial cells. Molecular assays showed abnormal liquid-liquid phase separation for the PTPN11 variants as a possible dominant gain-of-function mechanism. Conclusions and Relevance: Hippocampal somatic variants, particularly those activating Ras/Raf/MAPK signaling, may contribute to the pathogenesis of sporadic, drug-resistant MTLE. These findings may provide a novel genetic mechanism and highlight new therapeutic targets for this common indication for epilepsy surgery

    Proceedings - Asia-Pacific Software Engineering Conference, APSEC

    Full text link
    CONTEXT: Several studies in effort estimation have found that it can be effective to use only recent project data for building an effort estimation model. The generality of this time-aware approach has been explored across a variety of effort estimation model approaches, organizations and definitions of recency. However, other studies have shown that it is not always helpful. A question arises: how can one tell whether the approach would be effective for a given target project? OBJECTIVE: To investigate a potential method to decide between selecting recent or all project data. METHOD: Using a single-company ISBSG data set studied previously in similar research, we propose and evaluate a selection method. The method utilizes a variant of cross-validation based on recent projects to make the decision. RESULTS: There are significant differences in the estimation accuracy between using the proposed method and using the growing portfolio (always using all available data). The method could also select the better approach on average. However, the difference in estimation accuracy between using the proposed method and always using moving windows was not statistically significant. CONCLUSIONS: The selection method could select the better approach on average. The results contribute to developing a method for suggesting a better approach for practitioners.
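    The abstract does not spell out the cross-validation variant, so the following is only a plausible sketch of such a selection method: estimate effort for each of the most recent completed projects twice, once from a moving window and once from the growing portfolio (using only projects that precede it), and adopt whichever policy gives the lower mean absolute error. The `choose_policy` helper, window size and synthetic data are assumptions, not the paper's procedure or the ISBSG data.

```python
# Hedged sketch of recency-based selection between a moving window and the
# growing portfolio. The exact validation variant used in the paper is not
# reproduced here; this is an illustrative assumption.
import numpy as np
from sklearn.linear_model import LinearRegression

def predict_one(train_X, train_y, test_x):
    model = LinearRegression().fit(train_X, train_y)
    return model.predict(test_x)[0]

def choose_policy(X, y, window=30, n_recent=10):
    """X, y are assumed to be sorted chronologically (oldest first)."""
    errors = {"moving window": [], "growing portfolio": []}
    for i in range(len(y) - n_recent, len(y)):
        # Growing portfolio: train on all projects completed before project i.
        errors["growing portfolio"].append(
            abs(predict_one(X[:i], y[:i], X[i:i + 1]) - y[i]))
        # Moving window: train only on the most recent `window` projects before i.
        start = max(0, i - window)
        errors["moving window"].append(
            abs(predict_one(X[start:i], y[start:i], X[i:i + 1]) - y[i]))
    # Adopt the policy with the lower mean absolute error on recent projects.
    return min(errors, key=lambda k: np.mean(errors[k]))

# Synthetic example: effort driven by size, with productivity drift over time.
rng = np.random.default_rng(1)
n = 120
size = rng.lognormal(5.0, 1.0, n)
effort = 3.0 * size * (1 + 0.002 * np.arange(n)) + rng.normal(0.0, 50.0, n)
print("selected policy:", choose_policy(size.reshape(-1, 1), effort))
```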

    On the effectiveness of weighted moving windows: Experiment on linear regression based software effort estimation

    Full text link
    In the construction of an effort estimation model, it seems effective to use a window of training data so that the model is trained with only recent projects. Taking the chronological order of projects within the window into account, by weighting projects according to their order within the window, may also affect estimation accuracy. In this study, we examined the effects of weighted moving windows on effort estimation accuracy. We compared weighted and non-weighted moving windows under the same experimental settings. We confirmed that the weighting methods significantly improved estimation accuracy with larger windows, although they also significantly worsened accuracy with smaller windows. This result contributes to understanding the properties of moving windows. Copyright © 2014 John Wiley & Sons, Ltd.
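    As a concrete illustration, assuming a simple linearly increasing weighting scheme (the study compared several weighting functions that are not detailed here), a weighted moving window can be plugged into ordinary linear regression via sample weights; the `weighted_window_estimate` helper and the synthetic data below are illustrative assumptions.

```python
# Hedged sketch: linear regression over a moving window in which more recent
# projects receive larger weights. The linear weighting scheme is only one
# plausible choice, not necessarily the one evaluated in the study.
import numpy as np
from sklearn.linear_model import LinearRegression

def weighted_window_estimate(X, y, new_x, window=40):
    """X, y sorted chronologically; estimate effort for a new project new_x."""
    win_X, win_y = X[-window:], y[-window:]
    # Linearly increasing weights: the oldest project in the window gets
    # weight 1, the most recent gets weight `window`.
    weights = np.arange(1, len(win_y) + 1, dtype=float)
    model = LinearRegression().fit(win_X, win_y, sample_weight=weights)
    return model.predict(new_x.reshape(1, -1))[0]

rng = np.random.default_rng(2)
size = rng.lognormal(5.0, 1.0, 100)
effort = 2.5 * size + rng.normal(0.0, 40.0, 100)
estimate = weighted_window_estimate(size.reshape(-1, 1), effort, np.array([200.0]))
print("estimate for a 200-unit project:", estimate)
```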

    Dynamic stopping criteria for search-based test data generation for path testing

    Full text link
    Context: Evolutionary algorithms have proved to be successful for generating test data for path coverage testing. However, in this approach the set of target paths to be covered may include some that are infeasible, and it is impossible to find test data to cover those paths. Rather than searching indefinitely, or until a fixed limit of generations is reached, it would be desirable to stop searching as soon as it seems likely that all feasible paths have been covered and the remaining uncovered target paths are infeasible. Objective: The objective is to develop criteria to halt the evolutionary test data generation process as soon as it seems not worth continuing, without compromising the testing confidence level. Method: Drawing on software reliability growth models as an analogy, this paper proposes and evaluates a method for determining when it is no longer worthwhile to continue searching for test data to cover uncovered target paths. We outline the method, its key parameters, and how it can be used as the basis for different decision rules for early termination of a search. Twenty-one test programs from the SBSE path testing literature are used to evaluate the method. Results: Compared to searching for a standard number of generations, an average of 30-75% of total computation was avoided in test programs with infeasible paths, and no feasible paths were missed due to early termination. The extra computation in programs with no infeasible paths was negligible. Conclusions: The method is effective and efficient. It avoids the need to specify a limit on the number of generations for searching, and it can help to overcome problems caused by infeasible paths in search-based test data generation for path testing. © 2014 Elsevier B.V. All rights reserved.
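    The abstract does not give the concrete decision rule, so the sketch below is only a hedged illustration of the reliability-growth analogy: fit a Goel-Okumoto-style curve to cumulative path coverage per generation and stop once fewer than one additional coverable path is predicted to remain. The `should_stop` helper, its tolerance, and the example history are assumptions, not the paper's criterion.

```python
# Hedged sketch of an early-stopping rule inspired by software reliability
# growth models. Cumulative covered paths are fitted with a Goel-Okumoto-style
# curve m(t) = a * (1 - exp(-b * t)); `a` estimates the asymptotic number of
# coverable (feasible) paths. The rule and parameters are illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def goel_okumoto(t, a, b):
    return a * (1.0 - np.exp(-b * t))

def should_stop(cumulative_covered, tolerance=1.0):
    """cumulative_covered[i] = total target paths covered up to generation i."""
    t = np.arange(1, len(cumulative_covered) + 1, dtype=float)
    m = np.asarray(cumulative_covered, dtype=float)
    try:
        (a, _b), _ = curve_fit(goel_okumoto, t, m,
                               p0=[m[-1] + 1.0, 0.1], maxfev=10000)
    except RuntimeError:
        return False  # fit did not converge; keep searching
    # Stop when fewer than `tolerance` additional paths are predicted to remain.
    return a - m[-1] < tolerance

# Example: coverage grows quickly, then plateaus (remaining paths infeasible).
history = [3, 7, 10, 12, 13, 13, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14]
print("stop searching?", should_stop(history))
```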

    Realistic assessment of software effort estimation models

    Get PDF
    Context: It is unclear whether current approaches to evaluating or comparing competing software cost or effort models give a realistic picture of how they would perform in actual use. Specifically, we're concerned that the usual practice of using all data with some holdout strategy is at variance with the reality of a data set growing as projects complete. Objective: This study investigates the impact of using unrealistic, though possibly convenient to the researchers, ways to compare models on commercial data sets. Our questions are: does this lead to different conclusions in terms of the comparisons and, if so, are the results biased, e.g., more optimistic than those that might realistically be achieved in practice? Method: We compare a traditional approach based on leave-one-out cross-validation with growing the data set chronologically, using the Finnish and Desharnais data sets. Results: Our realistic, time-based approach to validation is significantly more conservative than leave-one-out cross-validation (LOOCV) for both data sets. Conclusion: If we want our research to lead to actionable findings, it is incumbent upon researchers to evaluate their models in realistic ways. This means a departure from LOOCV techniques, while further investigation is needed for other validation techniques, such as k-fold validation.
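    The contrast between LOOCV and a chronologically growing data set can be sketched as follows; the drifting synthetic data and the minimum training-set size are assumptions for illustration, not the Finnish or Desharnais data.

```python
# Hedged sketch contrasting leave-one-out cross-validation with a chronological
# ("growing data set") evaluation. Synthetic data only; the actual data sets
# and models from the study are not reproduced here.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(3)
n = 80
size = rng.lognormal(5.0, 1.0, n)
# Productivity drifts over time, which LOOCV quietly ignores.
effort = 2.0 * size * (1 + 0.003 * np.arange(n)) + rng.normal(0.0, 60.0, n)
X, y = size.reshape(-1, 1), effort

# LOOCV: each project is predicted from all the others, including later ones.
loo_errors = []
for train_idx, test_idx in LeaveOneOut().split(X):
    model = LinearRegression().fit(X[train_idx], y[train_idx])
    loo_errors.append(abs(model.predict(X[test_idx])[0] - y[test_idx][0]))

# Chronological evaluation: each project is predicted only from earlier ones.
chrono_errors = []
for i in range(20, n):  # require a minimum training set of 20 projects
    model = LinearRegression().fit(X[:i], y[:i])
    chrono_errors.append(abs(model.predict(X[i:i + 1])[0] - y[i]))

print("mean absolute error, LOOCV:        ", np.mean(loo_errors))
print("mean absolute error, chronological:", np.mean(chrono_errors))
```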