
    DeSyRe: on-Demand System Reliability

    The DeSyRe project builds on-demand adaptive and reliable Systems-on-Chips (SoCs). As fabrication technology scales down, chips are becoming less reliable, thereby incurring increased power and performance costs for fault tolerance. To make matters worse, power density is becoming a significant limiting factor in SoC design in general. In the face of such changes in the technological landscape, current solutions for fault tolerance are expected to introduce excessive overheads in future systems. Moreover, attempting to design and manufacture a totally defect- and fault-free system would impact heavily, even prohibitively, the design, manufacturing, and testing costs, as well as the system performance and power consumption. In this context, DeSyRe delivers a new generation of systems that are reliable by design at well-balanced power, performance, and design costs. In our attempt to reduce the overheads of fault tolerance, only a small fraction of the chip is built to be fault-free. This fault-free part is then employed to manage the remaining fault-prone resources of the SoC. The DeSyRe framework is applied to two medical systems with high safety requirements (measured using the IEC 61508 functional safety standard) and tight power and performance constraints.
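
    As a rough illustration of the central idea -- a small fault-free region managing the remaining fault-prone resources -- the sketch below remaps work away from processing elements as faults are reported. It is a minimal, assumed model in plain Python; names such as FaultFreeManager and ProcessingElement are hypothetical and not taken from the DeSyRe deliverables.

    from dataclasses import dataclass, field

    @dataclass
    class ProcessingElement:
        name: str
        faulty: bool = False
        tasks: list = field(default_factory=list)

    class FaultFreeManager:
        """Hypothetical controller residing in the chip's small fault-free region."""
        def __init__(self, elements):
            self.elements = elements

        def assign(self, task):
            healthy = [pe for pe in self.elements if not pe.faulty]
            if not healthy:
                raise RuntimeError("no healthy processing elements left")
            target = min(healthy, key=lambda pe: len(pe.tasks))  # least-loaded element
            target.tasks.append(task)
            return target.name

        def report_fault(self, pe_name):
            # Mark the element faulty and migrate its tasks to healthy ones.
            pe = next(p for p in self.elements if p.name == pe_name)
            pe.faulty = True
            orphaned, pe.tasks = pe.tasks, []
            for task in orphaned:
                self.assign(task)

    pes = [ProcessingElement(f"PE{i}") for i in range(4)]
    manager = FaultFreeManager(pes)
    for job in ["decode", "filter", "classify", "log"]:
        manager.assign(job)
    manager.report_fault("PE0")  # surviving elements absorb PE0's work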

    Vendors are Undermining the Structure of U.S. Elections

    As we approach the 2008 general election, the structure of elections in the United States -- once reliant on local representatives accountable to the public -- has become almost wholly dependent on large corporations, which are not accountable to the public. Most local officials charged with running elections are now unable to administer elections without the equipment, services, and trade-secret software of a small number of corporations. If the vendors withdrew their support for elections now, our election structure would collapse. Case studies presented in this report give examples of the pervasive control voting system vendors now have over election administration in almost every state, and the consequences some jurisdictions are already experiencing. However, some states and localities are recognizing the threat that vendor dependency poses to elections. They are using ingenuity and determination to begin reversing the direction. This report examines the situation, how we got here, and steps we can take to limit corporate control of our elections in 2008 and reduce it even further in the future.

    Change decision support: extraction and analysis of late architecture changes using change characterization and software metrics

    Software maintenance is one of the most crucial aspects of software development. Software engineering researchers must develop practical solutions to handle the challenges presented in maintaining mature software systems. Research that addresses practical means of mitigating the risks involved when changing software, reducing the complexity of mature software systems, and eliminating the introduction of preventable bugs is paramount to today’s software engineering discipline. Giving software developers the information that they need to make quality decisions about changes that will negatively affect their software systems is a key aspect of mitigating those risks. This dissertation presents work performed to assist developers in collecting and processing data that plays a role in change decision-making during the maintenance phase. To address these problems, developers need a way to better understand the effects of a change prior to making it. This research addresses the problems associated with increasing architectural complexity caused by software change using a twofold approach. The first approach is to characterize software changes to assess their architectural impact prior to their implementation. The second approach is to identify a set of architecture metrics that correlate with system quality and maintainability and to use these metrics to determine the level of difficulty involved in making a change. The two approaches have been combined, and the results presented provide developers with a beneficial analysis framework that offers insight into the change process.
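
    As a hedged sketch of the second approach (architecture metrics used to judge how hard a change will be), the fragment below derives fan-in, fan-out, and impact-set measures from a made-up module dependency map. The DEPENDS_ON data and the choice of metrics are illustrative assumptions, not the dissertation's actual metric set.

    from collections import defaultdict

    # Hypothetical "module -> modules it depends on" map.
    DEPENDS_ON = {
        "ui":     {"core", "report"},
        "report": {"core", "db"},
        "core":   {"db"},
        "db":     set(),
    }

    def fan_in(graph):
        counts = defaultdict(int)
        for deps in graph.values():
            for dep in deps:
                counts[dep] += 1
        return counts

    def impact_set(graph, changed):
        """Modules that directly or transitively depend on the changed module."""
        reverse = defaultdict(set)
        for mod, deps in graph.items():
            for dep in deps:
                reverse[dep].add(mod)
        seen, stack = set(), [changed]
        while stack:
            for dependant in reverse[stack.pop()]:
                if dependant not in seen:
                    seen.add(dependant)
                    stack.append(dependant)
        return seen

    incoming = fan_in(DEPENDS_ON)
    for module, deps in DEPENDS_ON.items():
        print(module, "fan-out:", len(deps),
              "fan-in:", incoming[module],
              "impact set:", sorted(impact_set(DEPENDS_ON, module)))

    A module with a high fan-in or a large impact set would flag a proposed change against it as more difficult and more likely to ripple through the architecture.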

    A review of information flow diagrammatic models for product-service systems

    A product-service system (PSS) is a combination of products and services that creates value for both customers and manufacturers. Modelling a PSS based on function orientation offers a useful way to distinguish system inputs and outputs with regard to how data are consumed and information is used, i.e. information flow. This article presents a review of diagrammatic information flow tools, which are designed to describe a system through its functions. The origin, concept and applications of these tools are investigated, followed by an analysis of information flow modelling with regard to key PSS properties. A case study of selective laser melting technology implemented as a PSS is then used to show the application of information flow modelling for PSS design. A discussion of the usefulness of the tools in modelling the key elements of a PSS and possible future research directions are also presented.
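
    A small, assumed example of what function-oriented information flow modelling captures (the function names and information items below are invented for illustration, not taken from the case study): each PSS function declares the information it consumes and produces, and the flows between functions then follow mechanically.

    # Each function declares the information it consumes ("in") and produces ("out").
    FUNCTIONS = {
        "monitor machine":   {"in": {"sensor data"},      "out": {"machine status"}},
        "plan maintenance":  {"in": {"machine status"},   "out": {"service schedule"}},
        "dispatch engineer": {"in": {"service schedule"}, "out": {"service report"}},
    }

    def information_flows(functions):
        """Yield (producer, information item, consumer) triples."""
        for producer, spec in functions.items():
            for item in spec["out"]:
                for consumer, other in functions.items():
                    if item in other["in"]:
                        yield producer, item, consumer

    for src, item, dst in information_flows(FUNCTIONS):
        print(f"{src} --[{item}]--> {dst}")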

    An Empirical Study of Cohesion and Coupling: Balancing Optimisation and Disruption

    Search-based software engineering has been extensively applied to the problem of finding improved modular structures that maximise cohesion and minimise coupling. However, there has, hitherto, been no longitudinal study of developers’ implementations over a series of sequential releases. Moreover, results validating whether developers respect the fitness functions are scarce, and the potentially disruptive effect of search-based remodularisation is usually overlooked. We present an empirical study of 233 sequential releases of 10 different systems, the largest empirical study reported in the literature so far and the first longitudinal one. Our results provide evidence that developers do, indeed, respect the fitness functions used to optimise cohesion/coupling (they are statistically significantly better than arbitrary choices with p << 0.01), yet they also leave considerable room for further improvement (cohesion/coupling can be improved by 25% on average). However, we also report that optimising the structure is highly disruptive (on average more than 57% of the structure must change), while our results reveal that developers tend to avoid such disruption. Therefore, we introduce and evaluate a multi-objective evolutionary approach that minimises disruption while maximising cohesion/coupling improvement. This allows developers to balance their reticence to disrupt existing modular structure against their competing need to improve cohesion and coupling. The multi-objective approach is able to find modular structures that improve the cohesion of developers’ implementations by 22.52%, while causing an acceptably low level of disruption (within that already tolerated by developers).
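
    A minimal sketch of the two competing objectives follows; it is not the paper's tool or its fitness functions, and the dependency data, module labels, and naive random search are invented for illustration. Candidate remodularisations are scored on a simple cohesion/coupling proxy and on disruption relative to the developers' current structure, and only the non-dominated trade-offs are kept.

    import random

    DEPS = [("a", "b"), ("a", "c"), ("b", "c"), ("d", "e"), ("e", "f"), ("c", "d")]
    FILES = sorted({f for edge in DEPS for f in edge})
    CURRENT = {"a": 0, "b": 0, "c": 0, "d": 1, "e": 1, "f": 1}  # developers' structure

    def quality(assign):
        # Share of dependencies kept inside a module: a stand-in for
        # cohesion/coupling fitness functions such as MQ.
        return sum(assign[x] == assign[y] for x, y in DEPS) / len(DEPS)

    def disruption(assign):
        # Fraction of files moved away from the developers' current structure.
        return sum(assign[f] != CURRENT[f] for f in FILES) / len(FILES)

    def random_candidate(base, moves, modules=2):
        child = dict(base)
        for _ in range(moves):
            child[random.choice(FILES)] = random.randrange(modules)
        return child

    def pareto_front(candidates):
        def dominated(c):
            return any(quality(o) >= quality(c) and disruption(o) <= disruption(c)
                       and (quality(o), disruption(o)) != (quality(c), disruption(c))
                       for o in candidates)
        return [c for c in candidates if not dominated(c)]

    random.seed(0)
    population = [CURRENT] + [random_candidate(CURRENT, random.randrange(1, len(FILES) + 1))
                              for _ in range(300)]
    printed = set()
    for sol in pareto_front(population):
        point = (round(quality(sol), 2), round(disruption(sol), 2))
        if point not in printed:
            printed.add(point)
            print(f"quality={point[0]:.2f}  disruption={point[1]:.2f}")

    Choosing a point from the printed front corresponds to trading a cohesion/coupling gain against how much of the existing structure a team is willing to move.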

    Modeling User-Affected Software Properties for Open Source Software Supply Chains

    Background: The Open Source Software development community relies heavily on users of the software and contributors outside of the core developers to produce top-quality software and provide long-term support. However, the relationship between a software package and its contributors, in terms of exactly how they are related through dependencies, and how the users of a package affect many of its properties are not very well understood. Aim: My research covers a number of aspects related to answering the overarching question of modeling the software properties affected by users and the supply chain structure of software ecosystems, viz. 1) Understanding how software usage affects its perceived quality; 2) Estimating the effects of indirect usage (e.g. dependent packages) on software popularity; 3) Investigating the patch submission and issue creation patterns of external contributors; 4) Examining how the patch acceptance probability is related to the contributors' characteristics. 5) A related topic, the identification of bots that commit code, aimed at improving the accuracy of these and other similar studies, was also investigated. Methodology: Most of the Research Questions are addressed by studying the NPM ecosystem, with data from various sources like the World of Code, GHTorrent, and the GitHub API. Different supervised and unsupervised machine learning models, including Regression, Random Forest, Bayesian Networks, and clustering, were used to answer appropriate questions. Results: 1) Software usage affects its perceived quality even after accounting for code complexity measures. 2) The number of dependents and dependencies of a package were observed to be able to predict the change in its popularity with good accuracy. 3) Users interact (contribute issues or patches) primarily with their direct dependencies, and rarely with transitive dependencies. 4) A user's earlier interaction with the repository to which they are contributing a patch, and their familiarity with related topics, were important predictors of the chance of a pull request getting accepted. 5) Developed BIMAN, a systematic methodology for identifying bots. Conclusion: Different aspects of how users and their characteristics affect different software properties were analyzed, which should lead to a better understanding of the complex interaction between software developers and users/contributors.
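
    A toy sketch of the second aspect (predicting popularity change from dependents and dependencies) is given below. The data are synthetic and the assumed relationship is invented; the real study used NPM and World of Code measurements, not this generated sample.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 500
    dependents   = rng.poisson(20, n)   # packages that depend on this one
    dependencies = rng.poisson(10, n)   # packages this one depends on
    # Assumed relationship, for illustration only: more dependents -> more growth.
    popularity_change = (0.8 * np.log1p(dependents)
                         - 0.2 * np.log1p(dependencies)
                         + rng.normal(0, 0.3, n))

    X = np.column_stack([dependents, dependencies])
    X_train, X_test, y_train, y_test = train_test_split(X, popularity_change,
                                                        random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)
    print("R^2 on held-out packages:", round(model.score(X_test, y_test), 2))
    print("importances (dependents, dependencies):",
          model.feature_importances_.round(2))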