
    Effort estimation of FLOSS projects: A study of the Linux kernel

    This is the post-print version of the article; copyright 2011 Springer. Empirical research on Free/Libre/Open Source Software (FLOSS) has shown that developers tend to cluster around two main roles: “core” contributors differ from “peripheral” developers in terms of a larger number of responsibilities and a higher productivity pattern. A further, cross-cutting characterization of developers could be achieved by associating developers with “time slots”, and different patterns of activity and effort could be associated with such slots. Such analysis, if replicated, could be used not only to compare different FLOSS communities and to evaluate their stability and maturity, but also to determine, within projects, how the effort is distributed in a given period, and to estimate future needs with respect to key points in the software life-cycle (e.g., major releases). This study analyses the activity patterns within the Linux kernel project, first focusing on the overall distribution of effort and activity within weeks and days, then dividing each day into three 8-hour time slots and focusing on effort and activity around major releases. These analyses have the objective of evaluating effort, productivity and types of activity globally and around major releases. They enable a comparison of these releases and patterns of effort and activity with traditional software products and processes and, in turn, the identification of company-driven projects (i.e., working mainly during office hours) among FLOSS endeavors. The results of this research show that, overall, the effort within the Linux kernel community is constant (albeit at different levels) throughout the week, signalling the need for updated estimation models, different from those used in traditional 9am–5pm, Monday-to-Friday commercial companies. It also becomes evident that the activity before a release is vastly different from the activity after it, and that the changes show an increase in code complexity in specific time slots (notably in the late night hours), which will later require additional maintenance efforts.
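
    As a rough illustration of the time-slot analysis described above, the sketch below buckets commit timestamps into weekdays and into three 8-hour slots. It is not the authors' pipeline: the input file name, the slot boundaries starting at midnight, and the use of UTC (kernel commits carry the author's local offset) are all illustrative assumptions.

        # Minimal sketch (not the paper's tooling): count kernel commits per
        # weekday and per 8-hour slot from a list of Unix timestamps, e.g.
        # exported with `git log --format=%at > timestamps.txt`.
        from collections import Counter
        from datetime import datetime, timezone

        SLOTS = ["00:00-07:59", "08:00-15:59", "16:00-23:59"]  # assumed boundaries

        def activity_profile(path="timestamps.txt"):
            by_day, by_slot = Counter(), Counter()
            with open(path) as fh:
                for line in fh:
                    # UTC is a simplification; author-local time would be
                    # needed to reason about "office hours" properly.
                    dt = datetime.fromtimestamp(int(line.strip()), tz=timezone.utc)
                    by_day[dt.strftime("%A")] += 1
                    by_slot[SLOTS[dt.hour // 8]] += 1
            return by_day, by_slot

        if __name__ == "__main__":
            by_day, by_slot = activity_profile()
            for day, n in by_day.most_common():
                print(f"{day:10s} {n}")
            for slot in SLOTS:
                print(f"{slot}  {by_slot[slot]}")

    Comparing the per-slot counts in windows before and after each major release tag would then give the release-centred view the study describes.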

    On the Importance of Countergradients for the Development of Retinotopy: Insights from a Generalised Gierer Model

    During the development of the topographic map from vertebrate retina to superior colliculus (SC), EphA receptors are expressed in a gradient along the nasotemporal retinal axis. Their ligands, ephrin-As, are expressed in a gradient along the rostrocaudal axis of the SC. Countergradients of ephrin-As in the retina and EphAs in the SC are also expressed. Disruption of any of these gradients leads to mapping errors. Gierer's (1981) model, which uses well-matched pairs of gradients and countergradients to establish the mapping, can account for the formation of wild-type maps, but not the double maps found in EphA knock-in experiments. I show that these maps can be explained by models, such as Gierer's (1983), which have gradients and no countergradients, together with a powerful compensatory mechanism that helps to distribute connections evenly over the target region. However, this type of model cannot explain mapping errors found when the countergradients are knocked out partially. I examine the relative importance of countergradients as against compensatory mechanisms by generalising Gierer's (1983) model so that the strength of compensation is adjustable. Either matching gradients and countergradients alone or poorly matching gradients and countergradients together with a strong compensatory mechanism are sufficient to establish an ordered mapping. With a weaker compensatory mechanism, gradients without countergradients lead to a poorer map, but the addition of countergradients improves the mapping. This model produces the double maps in simulated EphA knock-in experiments and a map consistent with the Math5 knock-out phenotype. Simulations of a set of phenotypes from the literature substantiate the finding that countergradients and compensation can be traded off against each other to give similar maps. I conclude that a successful model of retinotopy should contain countergradients and some form of compensation mechanism, but not in the strong form put forward by Gierer.
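
    A toy illustration of the trade-off the abstract describes, not the generalised Gierer model itself: exponential gradient/countergradient pairs on a 1-D retina and target, plus an adjustable crowding penalty standing in for the compensatory mechanism. All functional forms, parameter values and the greedy update rule are assumptions made only for this sketch.

        # Toy sketch (assumed forms, not the paper's model): 1-D retina and
        # colliculus with exponential gradients/countergradients and an
        # adjustable compensation term gamma that penalises crowded targets.
        import numpy as np

        rng = np.random.default_rng(0)
        N, alpha = 50, 2.0
        r = np.linspace(0.0, 1.0, N)   # retinal nasotemporal axis (1 = temporal)
        t = np.linspace(0.0, 1.0, N)   # collicular rostrocaudal axis (0 = rostral)

        EphA_ret    = np.exp( alpha * r)            # retinal EphA, high temporally
        ephrinA_ret = np.exp(-alpha * r)            # countergradient, high nasally
        ephrinA_tgt = np.exp( alpha * (t - 1.0))    # collicular ephrin-A, high caudally
        EphA_tgt    = np.exp(-alpha * (t - 1.0))    # countergradient, high rostrally

        def map_axons(gamma, use_counter=True, sweeps=20):
            """Assign each retinal cell a target site by iteratively minimising
            a matching potential plus gamma times the local occupancy."""
            pot = np.outer(EphA_ret, ephrinA_tgt)             # gradient term
            if use_counter:
                pot = pot + np.outer(ephrinA_ret, EphA_tgt)   # countergradient term
            assign = rng.integers(0, N, size=N)
            for _ in range(sweeps):
                occupancy = np.bincount(assign, minlength=N).astype(float)
                for i in rng.permutation(N):
                    occupancy[assign[i]] -= 1.0
                    assign[i] = int(np.argmin(pot[i] + gamma * occupancy))
                    occupancy[assign[i]] += 1.0
            return assign

        # Matched countergradients alone (gamma = 0) order the map; gradients
        # alone need strong compensation to avoid piling up at the rostral pole.
        for gamma, counter in [(0.0, True), (5.0, False), (0.2, False)]:
            a = map_axons(gamma, use_counter=counter)
            print(f"gamma={gamma}, countergradients={counter}: "
                  f"corr(retina, target) = {np.corrcoef(r, t[a])[0, 1]:+.2f}")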

    Clonal hematopoiesis and therapy-related myeloid neoplasms following neuroblastoma treatment.

    Therapy-related myeloid neoplasms (TMN) constitute one of the most challenging complications of cancer treatment [1]. Whilst understanding of TMN pathogenesis remains fragmentary, genomic studies in adults have thus far refuted the notion that TMN simply result from cytotoxin-induced DNA damage [2–4]. Analysis of the preclinical evolution of a limited number of adult TMN has retraced the majority of cases to clonal haematopoiesis (CH) that predates cytotoxic treatment and lacks the mutational footprint of genotoxic therapies [2–6]. Balanced translocations, generally attributed to treatment with topoisomerase II inhibitors, are implicated in a minority of TMN [1]. TMN is a leading cause of premature death in childhood cancer survivors, and affects 7-11% of children treated for high-risk neuroblastoma and sarcoma [7,8]. However, the origin of pediatric TMN remains unclear. Targeted sequencing of known cancer genes detects CH in ~4% of children following cytotoxic treatment [6,9], whereas CH is vanishingly rare in young individuals in the general population [10,11]. Moreover, to our knowledge, no cases of childhood TMN have been retraced to pretreatment CH. In light of these observations, we asked whether a broader driver landscape had eluded targeted CH screens in pediatric cancer patients and/or whether therapy-induced mutagenesis may be an under-recognised catalyst of CH and TMN in this patient group.

    A teleofunctional account of evolutionary mismatch.

    This is the final version of the article. It first appeared from Springer via http://dx.doi.org/10.1007/s10539-016-9527-1. When the environment in which an organism lives deviates in some essential way from that to which it is adapted, this is described as "evolutionary mismatch," or "evolutionary novelty." The notion of mismatch plays an important role, explicitly or implicitly, in evolution-informed cognitive psychology, clinical psychology, and medicine. The evolutionary novelty of our contemporary environment is thought to have significant implications for our health and well-being. However, scientists have generally been working without a clear definition of mismatch. This paper defines mismatch as deviations in the environment that render biological traits unable, or impaired in their ability, to produce their selected effects (i.e., to perform their proper functions in Neander's sense). The machinery developed by Millikan in connection with her account of proper function, and with her related teleosemantic account of representation, is used to identify four major types, and several subtypes, of evolutionary mismatch. While the taxonomy offered here does not in itself resolve any scientific debates, the hope is that it can be used to better formulate empirical hypotheses concerning the effects of mismatch. To illustrate, it is used to show that the controversial hypothesis that general intelligence evolved as an adaptation to handle evolutionary novelty can, contra some critics, be formulated in a conceptually coherent way.

    Z' Bosons at Colliders: a Bayesian Viewpoint

    We revisit the CDF data on di-muon production to impose constraints on a large class of Z' bosons occurring in a variety of E_6 GUT based models. We analyze the dependence of these limits on various factors contributing to the production cross-section, showing that currently systematic and theoretical uncertainties play a relatively minor role. Driven by this observation, we emphasize the use of the Bayesian statistical method, which allows us to straightforwardly (i) vary the gauge coupling strength, g', of the underlying U(1)'; (ii) include interference effects with the Z' amplitude (which are especially important for large g'); (iii) smoothly vary the U(1)' charges; (iv) combine these data with the electroweak precision constraints as well as with other observables obtained from colliders such as LEP 2 and the LHC; and (v) find preferred regions in parameter space once an excess is seen. We adopt this method as a complementary approach for a couple of sample models and find limits on the Z' mass, generally differing by only a few percent from the corresponding CDF ones when we follow their approach. Another general result is that the interference effects are quite relevant if one aims at discriminating between models. Finally, the Bayesian approach frees us of any ad hoc assumptions about the number of events needed to constitute a signal or exclusion limit for various actual and hypothetical reference energies and luminosities at the Tevatron and the LHC. Comment: PDFLaTeX, 24 pages, 7 figures. Version with improved tables and figure.
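
    The Bayesian logic referred to above can be illustrated with a bare-bones counting experiment. This is only a sketch of the statistical idea, not the paper's di-muon analysis: in the real case the expected signal yield depends on the Z' mass, the coupling g' and interference terms, and nuisance parameters are marginalised, whereas here everything is reduced to a single bin with invented numbers.

        # Sketch of a Bayesian 95% CL upper limit on a signal yield s, given
        # n observed events and an expected background b, with a flat prior
        # on s >= 0. Numbers are invented; this is not the paper's analysis.
        import numpy as np
        from scipy.stats import poisson

        def bayesian_upper_limit(n_obs, b, cl=0.95, s_max=50.0, steps=20001):
            s = np.linspace(0.0, s_max, steps)
            post = poisson.pmf(n_obs, b + s)   # Poisson likelihood x flat prior
            post /= post.sum()                 # normalise on the grid
            return s[np.searchsorted(np.cumsum(post), cl)]

        # e.g. 3 events observed over an expected background of 2.5
        print(f"s_95 = {bayesian_upper_limit(n_obs=3, b=2.5):.2f} events")

    An excess would instead show up as a posterior peaked away from s = 0, which is how preferred regions in parameter space arise within the same framework.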

    Strategic Planning for Local Tourism Destinations: An Analysis of Tourism

    This paper reports on a study of the planning practices of local tourism destinations. The tourism plans of 30 local tourism destinations in Queensland, Australia were analyzed to determine the extent to which sustainability principles, namely strategic planning and stakeholder participation, were integrated into the planning process. Utilizing a tourism planning process evaluation instrument developed by Simpson (2001), the study found that local tourism destinations are not integrating sustainability principles into their planning processes.

    Perspective from a Younger Generation -- The Astro-Spectroscopy of Gisbert Winnewisser

    Gisbert Winnewisser's astronomical career was practically coextensive with the whole development of molecular radio astronomy. Here I would like to pick out a few of his many contributions, which I, personally, find particularly interesting, and put them in the context of newer results. Comment: 14 pages. (Co)authored by members of the MPIfR (Sub)millimeter Astronomy Group. To appear in the Proceedings of the 4th Cologne-Bonn-Zermatt-Symposium "The Dense Interstellar Medium in Galaxies", eds. S. Pfalzner, C. Kramer, C. Straubmeier, & A. Heithausen (Springer: Berlin).

    Detection of Gamma-Ray Emission from the Starburst Galaxies M82 and NGC 253 with the Large Area Telescope on Fermi

    We report the detection of high-energy gamma-ray emission from two starburst galaxies using data obtained with the Large Area Telescope on board the Fermi Gamma-ray Space Telescope. Steady point-like emission above 200 MeV has been detected at significance levels of 6.8 sigma and 4.8 sigma, respectively, from sources positionally coincident with the locations of the starburst galaxies M82 and NGC 253. The total fluxes of the sources are consistent with gamma-ray emission originating from the interaction of cosmic rays with local interstellar gas and radiation fields, and constitute evidence for a link between massive star formation and gamma-ray emission in star-forming galaxies. Comment: Submitted to ApJ Letter.
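
    The quoted significances come from a maximum-likelihood analysis of the LAT data; the sketch below only illustrates the generic likelihood-ratio logic (a test statistic TS = 2 Δ lnL, with sqrt(TS) read as an approximate significance) on an invented binned Poisson example, and is not the Fermi science tools.

        # Generic binned-Poisson sketch of a likelihood-ratio detection
        # significance; counts, background and source template are invented.
        import numpy as np

        def log_likelihood(counts, model):
            # Poisson log-likelihood, dropping the constant log(n!) term.
            return float(np.sum(counts * np.log(model) - model))

        def test_statistic(counts, bkg, shape, max_amp=100.0, steps=1001):
            # Fit the source amplitude by a brute-force scan, then form
            # TS = 2 * (max lnL_source - lnL_background_only).
            amps = np.linspace(0.0, max_amp, steps)
            ll_src = max(log_likelihood(counts, bkg + a * shape) for a in amps)
            return 2.0 * (ll_src - log_likelihood(counts, bkg))

        counts = np.array([260, 140, 80, 35, 12])            # observed per bin
        bkg    = np.array([250.0, 130.0, 70.0, 30.0, 10.0])  # background model
        shape  = np.array([0.5, 0.25, 0.15, 0.07, 0.03])     # source template

        ts = test_statistic(counts, bkg, shape)
        print(f"TS = {ts:.1f}  ->  approx {np.sqrt(max(ts, 0.0)):.1f} sigma")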