
    Preliminary design of the redundant software experiment

    The goal of the present experiment is to characterize the fault distributions of highly reliable software replicates, constructed using techniques and environments similar to those used in contemporary industrial software facilities. The fault distributions and their effect on the reliability of fault-tolerant configurations of the software will be determined through extensive life testing of the replicates against carefully constructed, randomly generated test data. Each detected error will be carefully analyzed to provide insight into its nature and cause. A direct objective is to develop techniques for reducing the intensity of coincident errors, thus increasing the reliability gain that can be achieved with fault tolerance. Data on the reliability gains realized and the cost of the fault-tolerant configurations can be used to design a companion experiment to determine the cost-effectiveness of the fault-tolerant strategy. Finally, the data and analysis produced by this experiment will be valuable to the software engineering community as a whole because they will provide useful insight into the nature and cause of hard-to-find, subtle faults which escape standard software engineering validation techniques and thus persist far into the software life cycle.
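    The fault-tolerant configurations referred to above are typically voted N-version ensembles: several independently built replicates run on the same input and a voter masks a minority fault. A minimal sketch, assuming hypothetical replicate functions (not the experiment's actual software):

```python
# Minimal sketch of majority voting across N software replicates.
# The replicates here are hypothetical stand-ins for independently
# developed implementations of one specification.
from collections import Counter

def majority_vote(replicates, test_input):
    """Run each replicate on the same input and return the majority answer.

    A coincident error -- several replicates failing identically on one
    input -- defeats the voter, which is why reducing the intensity of
    coincident errors increases the achievable reliability gain.
    """
    outputs = [r(test_input) for r in replicates]
    value, count = Counter(outputs).most_common(1)[0]
    if count > len(replicates) // 2:
        return value
    raise RuntimeError("no majority: replicates disagree")

# Three hypothetical replicates, one with a seeded fault on input 3:
r1 = lambda x: x * x
r2 = lambda x: x * x
r3 = lambda x: x * x if x != 3 else -1  # faulty replicate
print(majority_vote([r1, r2, r3], 3))   # → 9 (the voter masks the fault)
```

    Life testing in this style simply repeats the vote over many generated inputs and logs every disagreement for fault analysis.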

    User engineering: A new look at system engineering

    User Engineering is a new System Engineering perspective responsible for defining and maintaining the user view of the system. Its elements are a process to guide the project and customer, a multidisciplinary team including hard and soft sciences, rapid prototyping tools to build user interfaces quickly and modify them frequently at low cost, and a prototyping center for involving users and designers in an iterative way. The main consideration is reducing the risk that the end user will not or cannot effectively use the system. The process begins with user analysis to produce cognitive and work-style models, and task analysis to produce user work functions and scenarios. These become major drivers of the human-computer interface design, which is presented to and reviewed by users as an interactive prototype. Feedback is rapid and productive, and user effectiveness can be measured and observed before the system is built and fielded. Requirements are derived via the prototype and baselined early to serve as an input to the architecture and software design.

    Development of a Modular Biosensor System for Rapid Pathogen Detection

    Progress in the field of pathogen detection relies on at least one of the following three qualities: selectivity, speed, and cost-effectiveness. Here, we demonstrate a proof of concept for an optical biosensing system for the detection of the opportunistic human pathogen Pseudomonas aeruginosa while addressing the abovementioned traits through a modular design. The biosensor detects pathogen-specific quorum sensing molecules and generates a fluorescence signal via an intracellular amplifier. Using a tailored measurement device built from low-cost components, the image analysis software detected the presence of P. aeruginosa within 42 min of incubation. Due to its modular design, individual components can be optimized or modified to specifically detect a variety of different pathogens. This biosensor system represents a successful integration of synthetic biology with software and hardware engineering.

    Integrating the semantics of events, processes and tasks across requirements engineering layers

    Today, software must be more flexible, adaptable, and cost-effective than ever before. There are indications that event-based architectures improve the flexibility, adaptability, and cost-effectiveness of software. Events are crucial concepts in event-based architectures; however, the concept of event has different interpretations across modeling techniques, which makes it difficult to integrate the use of different techniques during early and late requirements engineering. This paper outlines a PhD intended to develop an event-based requirements engineering methodology which supports the specification, development, and verification of event-based systems. More specifically, this PhD strives to further develop the concept of event in requirements engineering and provide it with a formally defined semantics. The event concept is positioned with respect to existing concepts for modeling dynamic aspects of a system. A major goal is to keep the complexity of the modeling method at an acceptable level and enable a smooth transition of event-based architectures from requirements to implementation level. Finally, by performing an ontological analysis using the BWW ontology and UFO, a set of orthogonal dimensions of the concept of event could be found.
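    For context, the event-based architectural style whose flexibility the abstract refers to can be sketched as a minimal publish/subscribe bus; every name below is illustrative and not part of the proposed methodology:

```python
# Minimal publish/subscribe event bus: publishers and subscribers are
# decoupled through named event types, so components can be added or
# replaced without touching each other -- the source of the flexibility
# attributed to event-based architectures.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._handlers[event_type].append(handler)

    def publish(self, event_type, payload):
        for handler in self._handlers[event_type]:
            handler(payload)

log = []
bus = EventBus()
# Two independent subscribers react to the same hypothetical event:
bus.subscribe("order_placed", lambda p: log.append(f"billing: {p}"))
bus.subscribe("order_placed", lambda p: log.append(f"shipping: {p}"))
bus.publish("order_placed", "order-42")
print(log)  # → ['billing: order-42', 'shipping: order-42']
```

    The requirements-engineering question the PhD raises is precisely what such an "event" means semantically, since pub/sub middleware, process models, and task models each interpret the term differently.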

    Communicative Agents for Software Development

    Software engineering is a domain characterized by intricate decision-making processes, often relying on nuanced intuition and consultation. Recent advancements in deep learning have started to revolutionize software engineering practices through elaborate designs implemented at various stages of software development. In this paper, we present an innovative paradigm that leverages large language models (LLMs) throughout the entire software development process, streamlining and unifying key processes through natural language communication, thereby eliminating the need for specialized models at each phase. At the core of this paradigm lies ChatDev, a virtual chat-powered software development company that mirrors the established waterfall model, meticulously dividing the development process into four distinct chronological stages: designing, coding, testing, and documenting. Each stage engages a team of agents, such as programmers, code reviewers, and test engineers, fostering collaborative dialogue and facilitating a seamless workflow. The chat chain acts as a facilitator, breaking down each stage into atomic subtasks. This enables dual roles, allowing for proposing and validating solutions through context-aware communication, leading to efficient resolution of specific subtasks. The instrumental analysis of ChatDev highlights its remarkable efficacy in software generation, enabling the completion of the entire software development process in under seven minutes at a cost of less than one dollar. It not only identifies and alleviates potential vulnerabilities but also rectifies potential hallucinations while maintaining commendable efficiency and cost-effectiveness. The potential of ChatDev unveils fresh possibilities for integrating LLMs into the realm of software development. Comment: 25 pages, 9 figures, 2 tables.
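    The chat-chain decomposition described above can be sketched schematically. The stage/subtask lists and the ask() call below are placeholders standing in for LLM-backed agents, not ChatDev's actual API:

```python
# Schematic chat chain: each waterfall stage is broken into atomic
# subtasks, and each subtask is resolved by a dual-role exchange
# (one agent proposes, the other validates) over a shared context.

STAGES = {
    "designing":   ["choose architecture"],
    "coding":      ["write module", "review code"],
    "testing":     ["run tests", "fix failures"],
    "documenting": ["write user manual"],
}

def ask(role, subtask, context):
    # Placeholder for an LLM call; a real system would send the
    # context as a prompt and return the model's reply.
    return f"[{role}] resolved '{subtask}'"

def chat_chain(task):
    context = [task]
    for stage, subtasks in STAGES.items():
        for subtask in subtasks:
            proposal = ask("instructor", subtask, context)            # role 1: propose
            verdict = ask("assistant", subtask, context + [proposal])  # role 2: validate
            context.append(verdict)
    return context

artifacts = chat_chain("build a todo app")
print(len(artifacts))  # → 7: the task plus one artifact per subtask
```

    The accumulated context plays the role of the shared dialogue history that lets later stages build on earlier decisions.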

    Enhancing the ESIM (Embedded Systems Improving Method) by Combining Information Flow Diagram with Analysis Matrix for Efficient Analysis of Unexpected Obstacles in Embedded Software

    In order to improve the quality of embedded software, this paper proposes an enhancement to the ESIM (Embedded Systems Improving Method) that combines an IFD (Information Flow Diagram) with an Analysis Matrix to analyze unexpected obstacles in the software. These obstacles are difficult to predict from the software specification. Recently, embedded systems have become larger and more complicated; in theory, therefore, their development cycles should be longer, but in practice they have been shortened. This trend in industry has resulted in the oversight of unexpected obstacles and has consequently affected the quality of embedded software. To prevent such oversights, we have already proposed two methods for requirements analysis: the ESIM using an Analysis Matrix, and a method that uses an IFD. In order to improve the efficiency of unexpected-obstacle analysis at reasonable cost, we now enhance the ESIM by combining an IFD with an Analysis Matrix. The enhancement is studied from three viewpoints. First, a conceptual model comprising both the Analysis Matrix and the IFD is defined. Then, a requirements analysis procedure is proposed that uses both the Analysis Matrix and the IFD, and assigns each specific role to either an expert or a non-expert engineer. Finally, to confirm the effectiveness of this enhancement, we carry out a description experiment using an IFD.
    14th Asia-Pacific Software Engineering Conference (APSEC'07), 4-7 Dec. 2007, Aichi, Japan.

    Alignment of requirements and services with user feedback

    University of Technology Sydney, Faculty of Engineering and Information Technology. It is widely acknowledged that software reuse reduces the cost and effort of software development. Over the years many solutions have emerged that propose methodologies to support software reusability. Service oriented software engineering (SOSE) advocates software reuse while aiming to achieve better alignment of software solutions to business requirements. Service orientation has evolved from Object Oriented Analysis and Design (OOAD) and Component Based Software Development (CBSD), the major difference being that reusable artefacts are in the form of services rather than objects or packaged components. Although SOSE is considered a new architectural style of software development that addresses some of the shortcomings of previous approaches, it has also inherited some of the challenges of CBSD and OOAD, in particular in the requirements engineering process. In Service Oriented Requirements Engineering (SORE), an analyst has the additional challenging task of aligning requirements and services in order to select the optimally matched service from an increasingly large set of available online services. Much of the existing empirical research in SORE has focused mainly on the technical aspects, while the human-related issues are yet to be fully explored and addressed. The lack of empirical evidence to investigate the human-related issues in SORE provides the overall motivation for the research covered in this thesis. User involvement in software development has been the focus of significant research and has been intuitively and axiomatically accepted to play a positive role in users' satisfaction, thus leading to system success. More recently, past users' feedback, reviews and comments from online sources are considered a form of user involvement. These offer valuable information to assist analysts in increasing their knowledge for making more informed decisions in service selection.
In the service-oriented paradigm, the full extent of the benefits of this form of user involvement has not been empirically investigated. This thesis addresses three important high-level research goals: (1) to investigate and identify the most important challenges of SORE, (2) to design an innovative and flexible method to address the top challenge of SORE, focusing specifically on the important relationship between user involvement and system success, and (3) to evaluate the applicability and effectiveness of the proposed method in an empirical study. This thesis presents research conducted in three parts for achieving each of the stated goals respectively: problem analysis, solution analysis and implementation analysis. For problem analysis a mixed-method approach is used, i.e. literature review, quantitative online survey, and qualitative industrial interview study. For solution analysis a Systematic Literature Review (SLR) is conducted to analyse the existing empirical studies about the relationship between user involvement and system success. Inspired by the results of this SLR, I designed the ARISE (Alignment of RequIrement and SErvices) method, following Situational Method Engineering to make it flexible for adoption in various project contexts. The ARISE method aims to exploit the benefits of the experiences of past users for service selection. For implementation analysis, the ARISE method was instantiated in a case study with real-life data with two objectives in mind: (1) validation of the effectiveness of ARISE in overcoming the challenges of alignment, and (2) improvement and refinement of the ARISE method. Analysis of the results of this validation revealed the need for automated tool support for the ARISE method. This automation is achieved through the design and implementation of software tools created for supporting the analysts in service selection.
The systematic, mixed-method research approach of the problem analysis phase identified that alignment of requirements and services was the top challenge for practitioners in SORE. It also increased our understanding of why this alignment is considered the most challenging task. The findings of the SLR confirmed that effective user involvement in software development in general, and in requirements engineering in particular, can lead to system success. In SORE, the past users of services can be involved through their feedback and sentiments about the services gathered from online sources. These concepts formed the basis for the design of the ARISE method. The results of the case study, complemented by the experimentation with the automated tools, revealed that past users' feedback and sentiments are indeed valuable sources of information that can assist analysts in overcoming the challenges of alignment between requirements and services, thus enabling more informed decisions in service selection.
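The core idea of using past users' feedback for service selection can be sketched as follows. The word lists and scoring below are illustrative placeholders, not the sentiment model actually used by ARISE:

```python
# Toy ranking of candidate services by aggregated past-user sentiment.
# A real pipeline would use a trained sentiment classifier; here a
# trivial word-list score stands in for it.
POSITIVE = {"reliable", "fast", "easy"}
NEGATIVE = {"slow", "buggy", "confusing"}

def sentiment(review):
    """Crude sentiment score: positive words minus negative words."""
    words = review.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def rank_services(feedback):
    """feedback: {service_name: [review, ...]} -> names, best first."""
    scores = {name: sum(map(sentiment, reviews))
              for name, reviews in feedback.items()}
    return sorted(scores, key=scores.get, reverse=True)

ranked = rank_services({
    "svcA": ["fast and reliable", "easy to integrate"],
    "svcB": ["slow and buggy"],
})
print(ranked)  # → ['svcA', 'svcB']
```

The analyst would combine such a feedback-derived ranking with the functional match between requirements and service descriptions before making a selection.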

    Static Analysis in Practice

    Static analysis tools search software looking for defects that may cause an application to deviate from its intended behavior. These include defects that compute incorrect values, cause runtime exceptions or crashes, expose applications to security vulnerabilities, or lead to performance degradation. In an ideal world, the analysis would precisely identify all possible defects. In reality, it is not always possible to infer the intent of a software component or code fragment, and static analysis tools sometimes output spurious warnings or miss important bugs. As a result, tool makers and researchers focus on developing heuristics and techniques to improve speed and accuracy. But, in practice, speed and accuracy are not sufficient to maximize the value received by software makers using static analysis. Software engineering teams need to make static analysis an effective part of their regular process. In this dissertation, I examine the ways static analysis is used in practice by commercial and open source users. I observe that effectiveness is hampered, not only by false warnings, but also by true defects that do not affect software behavior in practice. Indeed, mature production systems are often littered with true defects that do not prevent them from functioning, mostly correctly. To understand why this occurs, observe that developers inadvertently create both important and unimportant defects when they write software, but most quality assurance activities are directed at finding the important ones. By the time the system is mature, there may still be a few consequential defects that can be found by static analysis, but they are drowned out by the many true but low impact defects that were never fixed. An exception to this rule is certain classes of subtle security, performance, or concurrency defects that are hard to detect without static analysis. 
Software teams can use static analysis to find defects very early in the process, when they are cheapest to fix, and in so doing increase the effectiveness of later quality assurance activities. But this effort comes with costs that must be managed to ensure static analysis is worthwhile. The cost-effectiveness of static analysis also depends on the nature of the defect being sought, the nature of the application, the infrastructure supporting the tools, and the policies governing their use. Through this research, I interact with real users through surveys, interviews, lab studies, and community-wide reviews, to discover their perspectives and experiences, and to understand the costs and challenges incurred when adopting static analysis tools. I also analyze the defects found in real systems and make observations about which ones are fixed, why some seemingly serious defects persist, and what considerations static analysis tools and software teams should make to increase effectiveness. Ultimately, my interaction with real users confirms that static analysis is well received and useful in practice, but the right environment is needed to maximize its return on investment.
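As a concrete illustration of the kind of defect static analysis finds, here is a toy checker (not any of the tools studied in the dissertation) that flags bare `except:` handlers in Python source, a defect that can silently mask errors yet rarely stops a mature system from mostly working:

```python
# Toy static analysis: walk a program's abstract syntax tree and report
# bare `except:` clauses, which swallow every exception including ones
# the author never intended to catch.
import ast

def find_bare_excepts(source):
    """Return line numbers of bare `except:` handlers in `source`."""
    tree = ast.parse(source)
    return [node.lineno
            for node in ast.walk(tree)
            if isinstance(node, ast.ExceptHandler) and node.type is None]

code = """\
try:
    risky()
except:
    pass
"""
print(find_bare_excepts(code))  # → [3]
```

Even this trivial checker exhibits the trade-off the dissertation examines: every warning it emits is a true positive syntactically, but whether each one matters enough to fix depends on the surrounding system.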