    Asterias: a parallelized web-based suite for the analysis of expression and aCGH data

    Asterias (http://www.asterias.info) is an integrated collection of freely accessible web tools for the analysis of gene expression and aCGH data. Most of the tools use parallel computing (via MPI), and most allow the user to obtain additional information for user-selected genes through clickable links in tables and figures. Our tools include: normalization of expression and aCGH data; conversion between different types of gene/clone and protein identifiers; filtering and imputation; finding differentially expressed genes related to patient class and survival data; searching for models of class prediction; using random forests to search for minimal models for class prediction or for large subsets of genes with predictive capacity; searching for molecular signatures and predictive genes with survival data; and detecting regions of genomic DNA gain or loss. The capability to send results between different applications, access to additional functional information, and parallelized computation make our suite unique, exploiting features only available to web-based applications.
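    The random-forest step can be pictured with a short sketch. The Python code below is illustrative only (Asterias itself runs parallelized R code over MPI): it ranks genes by random-forest variable importance and re-scores a classifier on the top-ranked subset, using a synthetic expression matrix in place of real data.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(60, 2000))   # 60 samples x 2000 genes (synthetic)
        y = rng.integers(0, 2, size=60)   # two patient classes (synthetic)

        rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)

        # Keep the most important genes and re-score: a crude stand-in for
        # iterative backward-elimination searches for minimal gene sets.
        top = np.argsort(rf.feature_importances_)[-50:]
        score = cross_val_score(
            RandomForestClassifier(n_estimators=500, random_state=0),
            X[:, top], y, cv=5).mean()
        print(f"CV accuracy with {top.size} selected genes: {score:.2f}")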

    Faults in Linux 2.6

    In August 2011, Linux entered its third decade. Ten years earlier, Chou et al. published a study of faults found by applying a static analyzer to Linux versions 1.0 through 2.4.1. A major result of their work was that the drivers directory contained up to 7 times more of certain kinds of faults than other directories. This result inspired numerous efforts to improve the reliability of driver code. Today, Linux is used in a wider range of environments, provides a wider range of services, and has adopted a new development and release model. What has been the impact of these changes on code quality? To answer this question, we have transported Chou et al.'s experiments to all versions of Linux 2.6, released between 2003 and 2011. We find that Linux has more than doubled in size during this period, but the number of faults per line of code has been decreasing. Moreover, the fault rate of drivers is now below that of other directories, such as arch. These results can guide further development and research efforts for the decade to come. To allow these results to be updated as Linux evolves, we define our experimental protocol and make our checkers available.
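    The drivers-versus-arch comparison rests on a simple metric: faults per line of code in each top-level directory. A minimal Python sketch of that computation follows; the counts are hypothetical placeholders, whereas the paper's numbers come from running its static checkers over each Linux 2.6 release.

        # (directory, faults found, lines of code) -- hypothetical numbers
        results = [
            ("drivers", 700, 4_800_000),
            ("arch",    280, 1_600_000),
            ("fs",      120, 1_000_000),
            ("net",      90,   900_000),
        ]

        # Rank directories by fault density (faults per thousand lines).
        for directory, faults, loc in sorted(results, key=lambda r: -r[1] / r[2]):
            print(f"{directory:8s} {1000 * faults / loc:.3f} faults/KLOC")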

    A Screening Test for Disclosed Vulnerabilities in FOSS Components

    Free and Open Source Software (FOSS) components are ubiquitous in both proprietary and open source applications. Each time a vulnerability is disclosed in a FOSS component, a software vendor using that component in an application must decide whether to update the component, patch the application itself, or do nothing because the vulnerability does not apply to the older version of the component in use. This is particularly challenging for enterprise software vendors that consume thousands of FOSS components and offer more than a decade of support and security fixes for their applications. Moreover, customers expect vendors to react quickly to disclosed vulnerabilities; in the case of widely discussed vulnerabilities such as Heartbleed, within hours. To address this challenge, we propose a screening test: a novel, automatic method based on thin slicing for quickly estimating whether a given vulnerability is present in a consumed FOSS component by looking across its entire repository. We show that our screening test scales to large open source projects (e.g., Apache Tomcat, Spring Framework, Jenkins) that are routinely used by large software vendors, scanning thousands of commits and hundreds of thousands of lines of code in a matter of minutes. Further, we provide insights on the empirical probability that, in the above-mentioned projects, a potentially vulnerable component might not actually be vulnerable after all.
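    A much-simplified sketch of the idea follows. The paper's method is based on thin slicing; the Python sketch below substitutes a far cruder signal, checking whether any line deleted by the security-fix commit still occurs in the release a vendor ships, and all repository paths and identifiers are placeholders.

        import subprocess

        def deleted_lines(repo, fix_commit):
            """Lines the security fix removed ('-' lines in its diff)."""
            diff = subprocess.run(
                ["git", "-C", repo, "show", "--unified=0", fix_commit],
                capture_output=True, text=True, check=True).stdout
            return {line[1:].strip() for line in diff.splitlines()
                    if line.startswith("-") and not line.startswith("---")
                    and line[1:].strip()}

        def possibly_vulnerable(repo, fix_commit, release_tag, path):
            """True if any fixed-away line is still present at release_tag."""
            blob = subprocess.run(
                ["git", "-C", repo, "show", f"{release_tag}:{path}"],
                capture_output=True, text=True, check=True).stdout
            shipped = {line.strip() for line in blob.splitlines()}
            return bool(deleted_lines(repo, fix_commit) & shipped)

        # Hypothetical usage against a local clone:
        # possibly_vulnerable("tomcat", "abc1234", "TOMCAT_8_5_0",
        #                     "java/org/apache/catalina/SomeClass.java")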

    Open Source Software: From Open Science to New Marketing Models

    Keywords: Open Source Software; Intellectual Property; Licensing; Business Model.

    National Security Space Launch

    The United States Space Force’s National Security Space Launch (NSSL) program, formerly known as the Evolved Expendable Launch Vehicle (EELV) program, was first established in 1994 by President William J. Clinton’s National Space Transportation Policy. The policy assigned responsibility for expendable launch vehicles to the Department of Defense (DoD), with the goals of lowering launch costs and ensuring national security access to space. As such, the United States Air Force Space and Missile Systems Center (SMC) started the EELV program to acquire more affordable and reliable launch capability for valuable U.S. military satellites, such as national reconnaissance satellites that cost billions of dollars each. In March 2019, the program name was changed from EELV to NSSL, reflecting several important features: (1) the emphasis on “assured access to space”; (2) the transition from the Russian-made RD-180 rocket engine used on the Atlas V to a US-sourced engine (now scheduled to be complete by 2022); (3) adaptation to manifest changes (such as enabling satellite swaps and returning the manifest to normal operations, both within 12 months of a need or an anomaly); and (4) the potential use of reusable launch vehicles. As of August 2019, Blue Origin, Northrop Grumman Innovation Systems, SpaceX, and United Launch Alliance (ULA) had all submitted proposals. From these, the U.S. Air Force will select two companies to fulfill approximately 34 launches over a period of five years, beginning in 2022. This paper will therefore first examine the objectives for the NSSL as presented in the 2017 National Security Strategy; the Fiscal Year 2019, Fiscal Year 2020, and Fiscal Year 2021 National Defense Authorization Acts (NDAA); and National Presidential Directive No. 40. The paper will then identify areas of potential weakness and gaps that exist in space launch programs as a whole and explore the security implications that affect the NSSL specifically. Finally, the paper will examine how the trajectory of the NSSL program could be adjusted to facilitate a smooth transition to new launch vehicles while maintaining mission success, minimizing national security vulnerabilities, and clarifying the defense acquisition process.

    Proceedings of the ECCS 2005 satellite workshop: embracing complexity in design - Paris 17 November 2005

    Embracing complexity in design is one of the critical issues and challenges of the 21st century. As the realization grows that design activities and artefacts display properties associated with complex adaptive systems, so grows the need to use complexity concepts and methods to understand these properties and inform the design of better artefacts. It is a great challenge because complexity science represents an epistemological and methodological shift that promises a holistic approach to the understanding and operational support of design. But design is also a major contributor to complexity research. Design science is concerned with problems that are fundamental in the sciences in general and the complexity sciences in particular. For instance, design has been perceived and studied as a ubiquitous activity inherent in every human activity, as the art of generating hypotheses, as a type of experiment, or as a creative co-evolutionary process. Design science and its established approaches and practices can be a great source of advancement and innovation in complexity science. These proceedings are the result of a workshop organized as part of the activities of a UK government AHRB/EPSRC funded research cluster called Embracing Complexity in Design (www.complexityanddesign.net) and the European Conference on Complex Systems (complexsystems.lri.fr).

    Large-Scale Identification and Analysis of Factors Impacting Simple Bug Resolution Times in Open Source Software Repositories

    One of the most prominent issues the ever-growing open-source software community faces is the abundance of buggy code. Well-established version control systems and repository hosting services such as GitHub and Maven provide a checks-and-balances structure that minimizes the amount of buggy code introduced. Although these platforms are effective in mitigating the problem, it still remains. To further the efforts toward a more effective and quicker response to bugs, we must understand the factors that affect the time it takes to fix one. We apply a custom traversal algorithm to commits made to open source repositories to determine when “simple stupid bugs” were first introduced and explore the factors that drive the time it takes to fix them. Using the commit history from the main development branch, we identify the commits that first introduced 13 different types of simple stupid bugs in 617 of the top Java projects on GitHub. Leveraging a statistical survival model and other non-parametric statistical tests, we found two main categories of variables that affect a bug’s lifetime: Time Factors and Author Factors. We find that bugs are fixed more quickly if they are introduced and resolved by the same developer. Further, we discuss how the day of the week and time of day at which buggy code was written and fixed affect its resolution time. These findings provide vital insight to help the open-source community mitigate the abundance of buggy code and can be used in future research to aid bug-finding programs.
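    The survival-model comparison can be sketched briefly. The Python code below is not the paper's analysis; it uses the lifelines library with synthetic bug lifetimes to show how Kaplan-Meier curves and a log-rank test would compare same-author against different-author fixes.

        import numpy as np
        from lifelines import KaplanMeierFitter
        from lifelines.statistics import logrank_test

        rng = np.random.default_rng(1)
        # Synthetic bug lifetimes in days (stand-ins for mined data).
        same_author = rng.exponential(scale=20, size=200)
        diff_author = rng.exponential(scale=45, size=200)

        kmf = KaplanMeierFitter()
        kmf.fit(same_author, label="introduced and fixed by same developer")
        print("median lifetime (same author):", kmf.median_survival_time_)
        kmf.fit(diff_author, label="fixed by a different developer")
        print("median lifetime (different author):", kmf.median_survival_time_)

        # Log-rank test for a difference between the two survival curves.
        result = logrank_test(same_author, diff_author)
        print("log-rank p-value:", result.p_value)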