
    Coordination Implications of Software Coupling in Open Source Projects

    The effect of software coupling on software quality has been studied widely since Parnas's seminal paper on software modularity [1]. However, the effect of increased software coupling on developer coordination has received far less attention. In commercial software development environments, coordination mechanisms are normally in place to manage the coordination requirements arising from software dependencies. In Open Source software, however, such mechanisms are harder to implement, as the developers tend to rely solely on electronic means of communication. An understanding of changing coordination requirements is therefore essential to the management of an Open Source project. In this paper we study the effect of changes in software coupling on coordination requirements in a case study of JBoss, a popular Open Source project.
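
    One common way to quantify such coordination requirements is the matrix formulation from the socio-technical congruence literature (Cataldo et al.): a developer-to-file assignment matrix combined with a file-to-file coupling matrix yields a developer-to-developer coordination-requirements matrix. The sketch below illustrates that formulation with made-up matrices; it is not code or data from the JBoss study.

        import numpy as np

        # Illustrative developer-to-file assignment matrix:
        # T_A[i, j] = 1 if developer i works on file j.
        T_A = np.array([
            [1, 1, 0, 0],   # dev 0 touches files 0 and 1
            [0, 1, 1, 0],   # dev 1 touches files 1 and 2
            [0, 0, 1, 1],   # dev 2 touches files 2 and 3
        ])

        # Illustrative file-to-file coupling matrix:
        # T_D[j, k] = 1 if file j depends on (is coupled to) file k.
        T_D = np.array([
            [0, 1, 0, 0],
            [1, 0, 1, 0],
            [0, 1, 0, 1],
            [0, 0, 1, 0],
        ])

        # CR[i, k] > 0 means developer i has a coordination requirement
        # with developer k: i's files are coupled to files k works on.
        CR = T_A @ T_D @ T_A.T
        np.fill_diagonal(CR, 0)   # ignore self-coordination
        print(CR)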

    Robots that can adapt like animals

    As robots leave the controlled environments of factories to autonomously function in more complex, natural environments, they will have to respond to the inevitable fact that they will become damaged. However, while animals can quickly adapt to a wide variety of injuries, current robots cannot "think outside the box" to find a compensatory behavior when damaged: they are limited to their pre-specified self-sensing abilities, can diagnose only anticipated failure modes, and require a pre-programmed contingency plan for every type of potential damage, an impracticality for complex robots. Here we introduce an intelligent trial-and-error algorithm that allows robots to adapt to damage in less than two minutes, without requiring self-diagnosis or pre-specified contingency plans. Before deployment, a robot exploits a novel algorithm to create a detailed map of the space of high-performing behaviors: this map represents the robot's intuitions about what behaviors it can perform and their value. If the robot is damaged, it uses these intuitions to guide a trial-and-error learning algorithm that conducts intelligent experiments to rapidly discover a compensatory behavior that works in spite of the damage. Experiments reveal successful adaptations for a legged robot injured in five different ways, including damaged, broken, and missing legs, and for a robotic arm with joints broken in 14 different ways. This new technique will enable more robust, effective, autonomous robots, and suggests principles that animals may use to adapt to injury.
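
    A minimal sketch of the adaptation loop described above: a pre-built map of behaviors and their predicted performance guides trials on the damaged robot, with predictions revised after each trial. Everything here (the map contents, the execute_on_robot stand-in, the update rule) is an illustrative simplification; the paper's method updates a Gaussian process over the whole map rather than discarding tried behaviors.

        import random

        # Pre-deployment map: behavior descriptor -> predicted performance
        # (illustrative random values standing in for the real map).
        behavior_map = {"gait_%d" % i: random.uniform(0.2, 1.0) for i in range(50)}

        def execute_on_robot(behavior):
            """Stand-in for running a behavior on the damaged robot and
            measuring its real performance (e.g., walking speed)."""
            return behavior_map[behavior] * random.uniform(0.3, 1.0)

        def adapt(predictions, good_enough=0.6, max_trials=10):
            prior = dict(predictions)
            best = (None, 0.0)
            for _ in range(max_trials):
                candidate = max(prior, key=prior.get)  # most promising untried behavior
                observed = execute_on_robot(candidate)
                if observed >= good_enough:
                    return candidate, observed         # compensatory behavior found
                if observed > best[1]:
                    best = (candidate, observed)
                # The full method would instead update a Gaussian process,
                # lowering predictions for behaviors similar to this one.
                del prior[candidate]
            return best

        print(adapt(behavior_map))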

    Constraint Handling in Efficient Global Optimization

    This is the author accepted manuscript. The final version is available from ACM via the DOI in this record. Real-world optimization problems are often subject to several constraints which are expensive to evaluate in terms of cost or time. Although much effort has been devoted to using surrogate models for expensive optimization tasks, few strong surrogate-assisted algorithms can address challenging constrained problems. Efficient Global Optimization (EGO) is a Kriging-based surrogate-assisted algorithm. It was originally proposed for unconstrained problems and was later modified to solve constrained problems. However, these types of algorithms still suffer from several issues, mainly: (1) early stagnation, (2) problems with multiple active constraints and (3) frequent crashes. In this work, we introduce a new EGO-based algorithm which tries to overcome these common issues with Kriging optimization algorithms. We apply the proposed algorithm to problems of dimension d ≤ 4 from the G-function suite [16] and to an airfoil shape example. This research was partly funded by Tekes, the Finnish Funding Agency for Innovation (the DeCoMo project), and by the Engineering and Physical Sciences Research Council [grant numbers EP/N017195/1, EP/N017846/1].
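
    For context, the sketch below shows one iteration of a textbook EGO-style step with a common constraint handler: expected improvement weighted by the Kriging-estimated probability of feasibility. This is a generic formulation for illustration, not the specific algorithm proposed in the paper; the objective and constraint are toy stand-ins.

        import numpy as np
        from scipy.stats import norm
        from sklearn.gaussian_process import GaussianProcessRegressor

        # Toy stand-ins for an expensive objective and constraint
        # (feasible where constraint(x) <= 0).
        def objective(x):
            return (x - 0.6) ** 2

        def constraint(x):
            return 0.3 - x

        # Designs evaluated so far; fit one Kriging (GP) model per output.
        X = np.linspace(0.0, 1.0, 8).reshape(-1, 1)
        gp_f = GaussianProcessRegressor().fit(X, objective(X).ravel())
        gp_c = GaussianProcessRegressor().fit(X, constraint(X).ravel())

        # Best feasible objective value observed so far.
        feasible = constraint(X).ravel() <= 0
        f_best = objective(X).ravel()[feasible].min()

        # Score a dense set of candidate designs.
        X_cand = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
        mu_f, sd_f = gp_f.predict(X_cand, return_std=True)
        mu_c, sd_c = gp_c.predict(X_cand, return_std=True)

        # Expected improvement on the objective ...
        z = (f_best - mu_f) / np.maximum(sd_f, 1e-9)
        ei = (f_best - mu_f) * norm.cdf(z) + sd_f * norm.pdf(z)
        # ... weighted by the probability that the constraint is satisfied.
        pof = norm.cdf((0.0 - mu_c) / np.maximum(sd_c, 1e-9))

        x_next = X_cand[np.argmax(ei * pof)]   # next expensive evaluation
        print("next design point:", x_next)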

    The Secret to Successful User Communities: An Analysis of Computer Associates’ User Groups

    This paper provides the first large-scale study that examines the impact of both individual- and group-specific factors on the benefits users obtain from their user communities. By empirically analysing 924 survey responses from individuals in 161 Computer Associates' user groups, this paper aims to identify the determinants of successful user communities. Success is measured as the amount of time individual members save through having access to their user networks. As firms can significantly profit from successful user communities, this study proposes four key implications of the empirical results for the management of user communities.

    The Comparative Toxicogenomics Database: update 2011

    The Comparative Toxicogenomics Database (CTD) is a public resource that promotes understanding of the interactions of environmental chemicals with gene products, and their effects on human health. Biocurators at CTD manually curate a triad of chemical–gene, chemical–disease and gene–disease relationships from the literature. These core data are then integrated to construct chemical–gene–disease networks and to predict many novel relationships using different types of associated data. Since 2009, we dramatically increased the content of CTD to 1.4 million chemical–gene–disease data points and added many features, statistical analyses and analytical tools, including GeneComps and ChemComps (to find comparable genes and chemicals that share toxicogenomic profiles), enriched Gene Ontology terms associated with chemicals, statistically ranked chemical–disease inferences, Venn diagram tools to discover overlapping and unique attributes of any set of chemicals, genes or diseases, and enhanced gene pathway data content, among other features. Together, this wealth of expanded chemical–gene–disease data continues to help users generate testable hypotheses about the molecular mechanisms of environmental diseases. CTD is freely available at http://ctd.mdibl.org

    What Does It Take to Develop a Million Lines of Open Source Code?

    This article presents a preliminary and exploratory study of the relationship between size, on the one hand, and effort, duration and team size, on the other, for 11 Free/Libre/Open Source Software (FLOSS) projects with current size ranging between 0.6 and 5.3 million lines of code (MLOC). Effort was operationalised based on the number of active committers per month. The extracted data did not fit well an early version of the COCOMO cost estimation model for proprietary software, overall suggesting that, at least to some extent, FLOSS communities are more productive than closed-source teams. This also motivated the need for FLOSS-specific effort models. As a first approximation, we evaluated 16 linear regression models involving different pairs of attributes. One of our experiments was to calculate the net size, that is, to remove any suspiciously large outliers or jumps in the growth trends. The best model we found involved effort against net size, accounting for 79 percent of the variance. This model was based on data excluding a possible outlier (Eclipse), the largest project in our sample. This suggests that different effort models may be needed for certain categories of FLOSS projects. Incidentally, for each of the 11 individual FLOSS projects we were able to model the net size trends with very high accuracy (R² ≥ 0.98). Of the 11 projects, 3 have grown superlinearly, 5 linearly and 3 sublinearly, suggesting that in the majority of the cases accumulated complexity is either well controlled or does not constitute a growth-constraining factor.
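
    To illustrate the kind of model evaluated, the sketch below fits a simple linear regression of effort on net size and reports the share of variance explained (R²); the data points are fabricated placeholders, not the projects studied in the article.

        import numpy as np

        # Fabricated placeholder data, NOT the 11 projects studied:
        # net size in MLOC and effort in active committer-months.
        size_mloc = np.array([0.6, 1.1, 1.8, 2.4, 3.0, 3.7, 4.5, 5.3])
        effort = np.array([40.0, 70.0, 95.0, 150.0, 160.0, 210.0, 250.0, 300.0])

        # Fit effort = slope * size + intercept by least squares.
        slope, intercept = np.polyfit(size_mloc, effort, 1)
        predicted = slope * size_mloc + intercept

        # Share of variance explained (R^2).
        ss_res = np.sum((effort - predicted) ** 2)
        ss_tot = np.sum((effort - effort.mean()) ** 2)
        r2 = 1.0 - ss_res / ss_tot
        print("effort = %.1f * size + %.1f, R^2 = %.2f" % (slope, intercept, r2))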

    PD-1 Dynamically Regulates Inflammation and Development of Brain-Resident Memory CD8 T Cells During Persistent Viral Encephalitis

    Programmed cell death-1 (PD-1) receptor signaling dampens the functionality of T cells faced with repetitive antigenic stimulation from chronic infections or tumors. Using intracerebral (i.c.) inoculation with mouse polyomavirus (MuPyV), we have shown that CD8 T cells establish a PD-1^hi, tissue-resident memory population in the brains (bTRM) of mice with a low-level persistent infection. In MuPyV encephalitis, PD-L1 was expressed on infiltrating myeloid cells, microglia and astrocytes, but not on oligodendrocytes. Engagement of PD-1 on anti-MuPyV CD8 T cells limited their effector activity. NanoString gene expression analysis showed that neuroinflammation was higher in PD-L1−/− than in wild-type mice at day 8 post-infection, the peak of the MuPyV-specific CD8 response. During the persistent phase of infection, however, the absence of PD-1 signaling was found to be associated with a lower inflammatory response than in wild-type mice. Genetic disruption and intracerebroventricular blockade of PD-1 signaling resulted in an increase in the number of MuPyV-specific CD8 bTRM cells and in the fraction of these cells expressing CD103, the αE integrin commonly used to define tissue-resident T cells. However, PD-L1−/− mice persistently infected with MuPyV showed impaired virus control upon i.c. re-infection with MuPyV. Collectively, these data reveal a temporal duality in PD-1-mediated regulation of MuPyV-associated neuroinflammation: PD-1 signaling limited the severity of neuroinflammation during acute infection but sustained a level of inflammation during persistent infection for maintaining control of virus re-infection.

    Autonomous discovery in the chemical sciences part II: Outlook

    This two-part review examines how automation has contributed to different aspects of discovery in the chemical sciences. In this second part, we reflect on a selection of exemplary studies. It is increasingly important to articulate what the role of automation and computation has been in the scientific process and how that has or has not accelerated discovery. One can argue that even the best automated systems have yet to "discover" despite being incredibly useful as laboratory assistants. We must carefully consider how they have been and can be applied to future problems of chemical discovery in order to effectively design and interact with future autonomous platforms. The majority of this article defines a large set of open research directions, including improving our ability to work with complex data, build empirical models, automate both physical and computational experiments for validation, select experiments, and evaluate whether we are making progress toward the ultimate goal of autonomous discovery. Addressing these practical and methodological challenges will greatly advance the extent to which autonomous systems can make meaningful discoveries.

    Comment: Revised version available at 10.1002/anie.20190998