
    Transcriptomic Evidence That Longevity of Acquired Plastids in the Photosynthetic Slugs Elysia timida and Plakobranchus ocellatus Does Not Entail Lateral Transfer of Algal Nuclear Genes

    Sacoglossan sea slugs are unique in the animal kingdom in that they sequester and maintain active plastids that they acquire from the siphonaceous algae upon which they feed, making the animals photosynthetic. Although most sacoglossan species digest their freshly ingested plastids within hours, four species from the family Plakobranchidae retain their stolen plastids (kleptoplasts) in a photosynthetically active state on timescales of weeks to months. The molecular basis of plastid maintenance within the cytosol of digestive gland cells in these photosynthetic metazoans is as yet unknown, but it is widely thought to involve gene transfer from the algal food source to the slugs, based upon previous investigations of single genes. Indeed, normal plastid development requires hundreds of nuclear-encoded proteins, with protein turnover in photosystem II in particular known to be rapid under various conditions. Moreover, only algal plastids, not the algal nuclei, are sequestered by the animals during feeding. If algal nuclear genes are transferred to the animal either during feeding or in the germ line, and if they are expressed, then they should be readily detectable with deep-sequencing methods. We have sequenced expressed mRNAs from actively photosynthesizing, starved individuals of two photosynthetic sea slug species, Plakobranchus ocellatus Van Hasselt, 1824 and Elysia timida Risso, 1818. We find that nuclear-encoded, algal-derived genes specific to photosynthetic function are expressed neither in P. ocellatus nor in E. timida. Despite their dramatic plastid longevity, these photosynthetic sacoglossan slugs do not express genes acquired from algal nuclei in order to maintain plastid function.
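    The screening step such a transcriptome survey implies can be illustrated with a minimal, hypothetical Python sketch: flag assembled transcripts whose best database hit is an algal nuclear-encoded photosynthesis gene. The record fields, gene set, and example data below are invented for illustration and are not taken from the study.

```python
# Hypothetical sketch: screen assembled slug transcripts for expressed
# algal nuclear-encoded photosynthesis genes, given per-transcript
# best-hit annotations (e.g. from a similarity search against algal
# proteins). Field names and example records are illustrative only.
from dataclasses import dataclass

@dataclass
class BestHit:
    transcript_id: str
    hit_taxon: str         # taxonomic origin of the best database hit
    hit_gene: str          # gene name of the best hit
    nuclear_encoded: bool  # True if the gene resides in the algal nucleus

# Nuclear-encoded genes specific to photosynthetic function (illustrative set).
PHOTOSYNTHESIS_GENES = {"psbO", "lhcA", "rbcS", "petC"}

def algal_photosynthesis_candidates(hits):
    """Return transcripts that would indicate algal-to-slug gene transfer."""
    return [
        h.transcript_id
        for h in hits
        if h.hit_taxon == "algae"
        and h.nuclear_encoded
        and h.hit_gene in PHOTOSYNTHESIS_GENES
    ]

hits = [
    BestHit("contig_0001", "metazoa", "actin", False),
    BestHit("contig_0002", "algae", "rbcL", False),  # plastid-encoded: expected
    BestHit("contig_0003", "algae", "psbO", True),   # would signal gene transfer
]
print(algal_photosynthesis_candidates(hits))  # ['contig_0003']
```

    The study's negative result corresponds to this function returning an empty list for the slug transcriptomes.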

    RobotPerf: An Open-Source, Vendor-Agnostic, Benchmarking Suite for Evaluating Robotics Computing System Performance

    We introduce RobotPerf, a vendor-agnostic benchmarking suite designed to evaluate robotics computing performance across a diverse range of hardware platforms using ROS 2 as its common baseline. The suite encompasses ROS 2 packages covering the full robotics pipeline and integrates two distinct benchmarking approaches: black-box testing, which measures performance by eliminating upper layers and replacing them with a test application, and grey-box testing, an application-specific measure that observes internal system states with minimal interference. Our benchmarking framework provides ready-to-use tools and is easily adaptable for the assessment of custom ROS 2 computational graphs. Drawing from the knowledge of leading robot architects and system architecture experts, RobotPerf establishes a standardized approach to robotics benchmarking. As an open-source initiative, RobotPerf remains committed to evolving with community input to advance the future of hardware-accelerated robotics.
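    To make the black-box idea concrete, the following is a minimal sketch, not RobotPerf's actual tooling: a probe node publishes to the input topic of a ROS 2 graph under test and times how long a result takes to appear on its output topic. The topic names, message type, and sample count are assumptions.

```python
# Hypothetical black-box latency probe in the spirit of RobotPerf:
# the layers above the graph under test are replaced by this test
# application, which stimulates the input and times the output.
import time
import rclpy
from std_msgs.msg import String

def measure_round_trip(samples: int = 50) -> list:
    rclpy.init()
    node = rclpy.create_node("blackbox_probe")
    latencies, t_sent = [], 0.0

    def on_output(msg: String) -> None:
        # Record elapsed time since the matching probe message was sent.
        latencies.append(time.perf_counter() - t_sent)

    pub = node.create_publisher(String, "/graph_under_test/input", 10)
    node.create_subscription(String, "/graph_under_test/output", on_output, 10)

    for i in range(samples):
        t_sent = time.perf_counter()
        pub.publish(String(data=f"probe-{i}"))
        rclpy.spin_once(node, timeout_sec=1.0)  # wait for one output message

    node.destroy_node()
    rclpy.shutdown()
    return latencies

if __name__ == "__main__":
    lat = measure_round_trip()
    if lat:
        print(f"mean latency: {1000 * sum(lat) / len(lat):.2f} ms")
```

    Grey-box testing would instead instrument points inside the graph's own nodes, trading generality for finer-grained visibility with minimal interference.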

    The Pervasiveness of Global Data in Evolving Software Systems

    In this research, we investigate the role of common coupling in evolving software systems. It can be argued that most software developers understand that the use of global data has many harmful side-effects, and thus should be avoided. We are therefore interested in the answer to the following question: if global data does exist within a software project, how does global data usage evolve over a software project's lifetime? Perhaps constant refactoring and perfective maintenance eliminate global data usage, or conversely, perhaps the constant addition of features and rapid development introduce an increasing reliance on global data? We are also interested in identifying whether global data usage patterns are useful as a software metric indicative of an interesting or significant event in the software's lifetime. The focus of this research is twofold: first, to develop an effective and automatic technique for studying global data usage over the lifetime of large software systems, and second, to leverage this technique in a case study of global data usage for several large and evolving software systems in an effort to answer these questions.
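    As a rough illustration of what an automatic technique for tracking global data over a project's lifetime might look like, the sketch below counts file-scope variable definitions in C sources at each tagged release of a Git repository. The regex heuristic and the use of Git tags as sampling points are assumptions made for illustration, not the technique developed in the paper.

```python
# Minimal sketch: track a crude count of global (file-scope) variable
# definitions in C sources across the releases of a Git repository.
import re
import subprocess

# Matches simple file-scope definitions like "int counter;" while skipping
# indented (block-scope) lines. A real analysis would need a C parser.
GLOBAL_DEF = re.compile(r"^(?!\s)(?:static\s+)?(?:unsigned\s+)?"
                        r"(?:int|long|char|float|double)\s+\w+(\s*=\s*[^;]+)?;")

def count_globals(source: str) -> int:
    return sum(1 for line in source.splitlines() if GLOBAL_DEF.match(line))

def globals_per_release(repo: str, tags: list) -> dict:
    counts = {}
    for tag in tags:
        # List every file tracked at this release.
        files = subprocess.run(
            ["git", "-C", repo, "ls-tree", "-r", "--name-only", tag],
            capture_output=True, text=True, check=True,
        ).stdout.splitlines()
        total = 0
        for path in files:
            if path.endswith(".c"):
                blob = subprocess.run(
                    ["git", "-C", repo, "show", f"{tag}:{path}"],
                    capture_output=True, text=True, check=True,
                ).stdout
                total += count_globals(blob)
        counts[tag] = total
    return counts

# Example: globals_per_release("/path/to/repo", ["v1.0", "v2.0"])
```

    Plotting such per-release counts is one way to spot the kind of significant lifetime events the paper asks about, such as a sudden jump in reliance on global data.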

    Can developer-module networks predict failures?

    Software teams should follow a well-defined goal and keep their work focused. Work fragmentation is bad for efficiency and quality. In this paper we empirically investigate the relationship between the fragmentation of developer contributions and the number of post-release failures. Our approach is to represent developer contributions with a developer-module network that we call a contribution network. We use network centrality measures to measure the degree of fragmentation of developer contributions. Fragmentation is determined by the centrality of software modules in the contribution network. Our claim is that central software modules are more likely to be failure-prone than modules located in surrounding areas of the network. We analyze this hypothesis by exploring the network centrality of Microsoft Windows Vista binaries using several network centrality measures as well as linear and logistic regression analysis. In particular, we investigate which centrality measures are significant for predicting the probability and number of post-release failures. The results of our experiments show that central modules are more failure-prone than modules located in surrounding areas of the network. The results further confirm that the number of authors and the number of commits are significant predictors of the probability of post-release failures. For predicting the number of post-release failures, the closeness centrality measure is most significant.
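    The contribution-network construction can be illustrated with a small, invented example: a bipartite graph links developers to the modules they touched, and modules are then ranked by closeness centrality, the measure the authors found most significant for predicting the number of post-release failures. The edge list and module names below are fabricated; the study itself used Windows Vista binaries.

```python
# Toy sketch of a contribution network: developers and modules form a
# bipartite graph, and a module's centrality serves as its fragmentation
# signal. Data is invented for illustration.
import networkx as nx

# Edges connect developers to the modules they contributed to.
contributions = [
    ("alice", "kernel.dll"), ("alice", "net.dll"),
    ("bob", "net.dll"), ("bob", "ui.dll"),
    ("carol", "net.dll"), ("carol", "gfx.dll"),
]
modules = {m for _, m in contributions}

G = nx.Graph()
G.add_edges_from(contributions)

# Rank modules by closeness centrality in the contribution network.
closeness = nx.closeness_centrality(G)
for m in sorted(modules, key=lambda m: closeness[m], reverse=True):
    print(f"{m:10s} closeness={closeness[m]:.3f}")

# net.dll sits centrally (three contributors touch it), so under the
# paper's hypothesis it would be flagged as more failure-prone than
# peripheral modules such as kernel.dll or gfx.dll.
```

    In the paper this centrality score feeds linear and logistic regression models of failure probability and failure counts, alongside predictors such as the number of authors and commits.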

    Change analysis with Evolizer and ChangeDistiller


    Code of conduct in open source projects

    Open source projects rely on collaboration of members from all around the world using web technologies like GitHub and Gerrit. This mixture of people with a wide range of backgrounds, including minorities like women, ethnic minorities, and people with disabilities, may increase the risk of offensive and destructive behaviours in the community, potentially leading affected project members to leave for a more welcoming and friendly environment. To counter these effects, open source projects are increasingly turning to codes of conduct in an attempt to promote their expectations and standards of ethical behaviour. In this first-of-its-kind empirical study of codes of conduct in open source software projects, we investigated the role, scope and influence of codes of conduct through a mixture of quantitative and qualitative analysis, supported by interviews with practitioners. We found that the top codes of conduct are adopted by hundreds to thousands of projects, and that all of them share five common dimensions.

    Interactive views for analyzing problem reports

    Note: Accepted for publication in the Proceedings of the International Conference on Software Maintenance (ICSM), 2009, IEEE Computer Society.

    Issue tracking repositories contain a wealth of information for reasoning about various aspects of software development processes. In this paper, we focus on bug triaging and provide visual means to explore the effort estimation quality and the bug life-cycle of reported problems. Our approach follows the Micro/Macro reading technique and uses a combination of graphical views to investigate details of individual problem reports while maintaining the context provided by the surrounding data population. This enables the detection and detailed analysis of hidden patterns and facilitates the analysis of problem report outliers. In an industrial study, we use our approach in various problem report analysis scenarios and answer questions related to effort estimation and resource planning.
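    A static approximation of the Micro/Macro reading idea, using entirely fabricated effort data, might look like the sketch below: a scatter plot provides the macro context of the whole report population, while annotating outliers supplies the micro detail. The paper's actual views are interactive and considerably richer.

```python
# Illustrative Micro/Macro sketch: show all problem reports (macro
# context) and annotate individual outliers (micro detail).
import matplotlib.pyplot as plt

# (report id, estimated hours, actual hours) -- fabricated records.
reports = [
    ("PR-101", 4, 5), ("PR-102", 8, 7), ("PR-103", 2, 2),
    ("PR-104", 6, 30), ("PR-105", 10, 12), ("PR-106", 3, 4),
]

est = [e for _, e, _ in reports]
act = [a for _, _, a in reports]

fig, ax = plt.subplots()
ax.scatter(est, act, label="problem reports")
ax.plot([0, 35], [0, 35], linestyle="--", label="perfect estimate")

# Micro reading: label reports whose actual effort far exceeds the estimate.
for rid, e, a in reports:
    if a > 3 * e:
        ax.annotate(rid, (e, a))

ax.set_xlabel("estimated effort (h)")
ax.set_ylabel("actual effort (h)")
ax.legend()
plt.show()
```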

    A framework for semi-automated software evolution analysis composition

    Software evolution data stored in repositories such as version control, bug and issue tracking, or mailing lists is crucial to better understand a software system and assess its quality. A myriad of analyses exploiting such data have been proposed throughout the years. However, easy and straightforward synergies between these analyses rarely exist. To tackle this problem we have investigated the concept of Software Analysis as a Service and devised SOFAS, a distributed and collaborative software evolution analysis platform. Software analyses are offered as services that can be accessed, composed into workflows, and executed over the Internet. This paper presents our framework for composing these analyses into workflows, consisting of a custom-made modeling language and a composition infrastructure for the service offerings. The framework exploits the RESTful nature of our analysis service architecture and comes with a service composer to enable semi-automated service compositions by a user. We validate our framework by showcasing two different approaches built on top of it that support different stakeholders in gaining deeper insight into a project's history and evolution. As a result, our framework has shown its applicability to deliver diverse, complex analyses across system and tool boundaries.
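    The composition concept can be sketched as follows. The endpoints, payloads, and polling protocol are hypothetical stand-ins, not SOFAS's actual RESTful interface: the point is only that one analysis service's output feeds the next, which is the chaining the framework automates.

```python
# Minimal sketch of composing RESTful analysis services into a workflow,
# in the spirit of SOFAS. All URLs and response fields are hypothetical.
import time
import requests

BASE = "https://analysis.example.org"  # hypothetical service host

def run_service(path: str, payload: dict) -> dict:
    """Start an analysis job and poll until it reports completion."""
    job = requests.post(f"{BASE}/{path}", json=payload).json()
    while True:
        status = requests.get(f"{BASE}/{path}/{job['id']}").json()
        if status["state"] == "finished":
            return status["result"]
        time.sleep(5)

# Workflow: extract the version history first, then feed it into a
# change-coupling analysis -- the output of one service becomes the
# input of the next, which is what the composition framework automates.
history = run_service("history-extraction",
                      {"repository": "https://example.org/project.git"})
coupling = run_service("change-coupling", {"history": history})
print(coupling)
```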