
    Formalization of the prediction and ranking of software development life cycle models

    The study of software engineering professional practices includes the use of formal methodologies in software development. Identifying the appropriate methodology not only reduces the risk of software failure but also helps deliver the software within the predetermined budget and schedule. In the literature, few works have developed a tool for predicting the most appropriate methodology for a specific software project. In this paper, a method for selecting an appropriate software development life cycle (SDLC) model based on ranking from the highest to the lowest score is presented. The selection and ranking of an appropriate SDLC draws on the SDLC's critical factors; these factors are given different weights according to the SDLC, and these weights are then used by the proposed mathematical method. The proposed approach has been extensively evaluated on a dataset collected from software practitioners working in the software industry. Experimental results show that the proposed method is an applicable tool for predicting and ranking suitable SDLC models for various types of projects, such as life-critical systems, commercial-use systems, and entertainment applications.
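    The abstract does not reproduce the paper's scoring formula. The following is a minimal sketch of the weighted-sum ranking it describes; the factor names, weights, and ratings are hypothetical, not the paper's dataset or method.

```python
# Minimal sketch of weighted-sum ranking of SDLC models.
# Factor names, weights, and ratings below are hypothetical
# illustrations, not the paper's actual dataset or formula.

# Per-model weights for each critical factor (higher = more relevant).
sdlc_weights = {
    "waterfall": {"requirements_stability": 0.9, "team_experience": 0.5, "time_to_market": 0.3},
    "spiral":    {"requirements_stability": 0.4, "team_experience": 0.8, "time_to_market": 0.5},
    "agile":     {"requirements_stability": 0.2, "team_experience": 0.7, "time_to_market": 0.9},
}

def rank_sdlc(project_factors: dict[str, float]) -> list[tuple[str, float]]:
    """Score each SDLC model as the weighted sum of the project's
    factor ratings, then rank from highest to lowest score."""
    scores = {
        model: sum(w * project_factors.get(f, 0.0) for f, w in weights.items())
        for model, weights in sdlc_weights.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Example: a life-critical project with stable requirements.
print(rank_sdlc({"requirements_stability": 1.0, "team_experience": 0.6, "time_to_market": 0.2}))
```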

    Design-time performance analysis of component-based real-time systems

    In current real-time systems, performance metrics are among the most challenging properties to specify, predict and measure. Performance properties depend on various factors, like environmental context, load profile, middleware, operating system, hardware platform and sharing of internal resources. Performance failures and unsatisfied performance requirements cause delays, cost overruns, and even abandonment of projects. To avoid such performance-related project failures, the performance properties should be obtained and analyzed as early as the design phase of a project.

    In this thesis we employ principles of component-based software engineering (CBSE), which enable building software systems from individual components. The advantage of CBSE is that individual components can be modeled, reused and traded. The main objective of this thesis is to develop a method that enables prediction of the performance properties of a system, based on the performance properties of the involved individual components. The prediction method serves rapid prototyping and performance analysis of the architecture or related alternatives, without performing the usual testing and implementation stages. The involved research questions are as follows. How should the behaviour and performance properties of individual components be specified in order to enable automated composition of these properties into an analyzable model of a complete system? How can the models of individual components be synthesized into a model of a complete system in an automated way, such that the resulting system model can be analyzed against the performance properties?

    The thesis presents a new framework called DeepCompass, which realizes the concept of predictable assembly throughout all phases of the system design. The cornerstones of the framework are the composable models of individual software components and hardware blocks. The models are specified at component development time and shipped in a component package. At the component composition phase, the models of the constituent components are synthesized into an executable system model. Since the thesis focuses on performance properties, we introduce performance-related types of component models, such as behaviour, performance and resource models. The dynamics of the system execution are captured in scenario models. The essential advantage of the introduced models is that, through the behaviour of individual components and the scenario models, the behaviour of the complete system is synthesized in the executable system model. Further simulation-based analysis of the obtained executable system model provides application-specific and system-specific performance property values.

    To support the performance analysis, we have developed the CARAT software toolkit, which provides and automates the algorithms for model synthesis and simulation. Besides this, the toolkit provides graphical tools for designing alternative architectures and for visualizing the obtained performance properties. We have also conducted an empirical case study on the use of scenarios in industry to analyze system performance at the early design phase. It was found that industrial architects make extensive use of scenarios for performance evaluation. Based on the inputs of the architects, we have provided a set of guidelines for the identification and use of performance-critical scenarios.
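    Neither the DeepCompass models nor the CARAT API is reproduced in the abstract; the following is a minimal sketch of the composition idea only, with hypothetical component and scenario structures and made-up figures.

```python
# Minimal sketch of synthesizing component performance models into a
# system model and simulating it. Class names, fields, and numbers are
# hypothetical illustrations of the idea, not the thesis' actual models.
from dataclasses import dataclass

@dataclass
class ComponentModel:
    name: str
    wcet_ms: float      # performance model: worst-case execution time per call
    memory_kb: int      # resource model: memory claim

@dataclass
class Scenario:
    name: str
    call_sequence: list[str]   # behaviour model: order of component invocations

def synthesize_and_simulate(components: dict[str, ComponentModel],
                            scenario: Scenario) -> dict[str, float]:
    """Compose component models along the scenario's call sequence and
    derive system-level performance properties."""
    latency = sum(components[c].wcet_ms for c in scenario.call_sequence)
    memory = sum(components[c].memory_kb for c in set(scenario.call_sequence))
    return {"end_to_end_latency_ms": latency, "memory_kb": memory}

# Example: a toy video-decoder pipeline.
decoder = {
    "parse":  ComponentModel("parse", wcet_ms=2.0, memory_kb=64),
    "idct":   ComponentModel("idct", wcet_ms=5.5, memory_kb=128),
    "render": ComponentModel("render", wcet_ms=3.0, memory_kb=256),
}
frame = Scenario("decode_frame", ["parse", "idct", "render"])
print(synthesize_and_simulate(decoder, frame))
```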
    At the end of this thesis, we validate the DeepCompass framework by performing three case studies on performance prediction of real-time systems: an MPEG-4 video decoder, a Car Radio Navigation system and a JPEG application. For each case study, we constructed models of the individual components, defined the SW/HW architecture, and used the CARAT toolkit to synthesize and simulate the executable system model. The simulation provided the predicted performance properties, which we later compared with the actual performance properties of the realized systems. With respect to resource-usage properties and average task latencies, the prediction error stayed within 30% of the actual performance. Concerning the peak loads on the processor nodes, the actual values were sometimes three times larger than the predicted values. In conclusion, the framework has proven to be effective for rapid architecture prototyping and performance analysis of a complete system: in the case studies we spent no more than 4-5 days on average for a complete iteration cycle, including the design of several architecture alternatives. The framework can handle different architectural styles, which makes it widely applicable. A conceptual limitation of the framework is the assumption that the models of individual components are already available at the design phase.
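    Read as a relative-error check, the reported accuracy figures can be illustrated as follows; the numbers are made up, not the case-study data.

```python
# Illustration of the validation criterion: relative prediction error
# of a performance property. Figures are invented, not the case-study data.
def relative_error(predicted: float, actual: float) -> float:
    return abs(predicted - actual) / actual

# e.g. a predicted average task latency of 8.1 ms vs. a measured 10.0 ms
# gives a 19% error, inside the reported 30% band; a peak load predicted
# at 40% CPU but measured at 120% would be the threefold underestimate case.
print(relative_error(8.1, 10.0))   # 0.19
print(120 / 40)                    # 3.0
```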

    Analytical Survey of Construction Change Systems: Gaps & Opportunities

    This paper surveys the studies on construction change systems and reveals some potential future work. We select the critical works in order to derive an accurate timeline of these systems. The findings show that the leap from best-practice guides in the late 1990s and generic process models in the early 2000s to very advanced modelling environments in the mid-2000s and early 2010s has created gaps, along with opportunities for change researchers to develop simpler and more applicable models. Another finding is that there is a compelling similarity between change and risk prediction models, so integrating these two concepts, specifically from a proactive management point of view, may lead to synergy and help project teams avoid rework. The findings also show that the exploitation of cause-effect relationship models to facilitate dispute resolution appears to be an interesting field for future work.

    Analysis of operations headcount in the new product introduction of servers and workstations

    Thesis (S.M.)--Massachusetts Institute of Technology, Sloan School of Management; and, (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2000. Includes bibliographical references (p. 40).
    Optimal coordination between a design team and a manufacturing team is necessary to minimize the overall cost of a project and to remain competitive. The type of coordination can range from one-way communication to highly interactive teams. Within the workstation development group at Intel, a dedicated operations team coordinates the activity between the design team and the manufacturing team during a new product introduction. The goal of this thesis is to examine that role, with particular attention to understanding the operations staffing level required to support a given development effort. This project analyzed the operations team's implementation of the coordination mechanism and derived a methodology for estimating the appropriate staffing level of the operations team. This methodology combined the experiences of the senior members of the group into a single objective representation. The model found that project complexity was the primary driver for determining staffing levels. It also found a trend for future projects to be staffed at lower levels than similar past projects. This thesis also presents an academic framework for characterizing the mechanisms used to coordinate activity between a design group and a manufacturing group, based on the level of interaction between the two groups. It casts the present activities of the operations group onto this framework to identify potential areas for improvement. Using this framework, we find that the complexity of the project determines not only the operations effort level required to support a project, but also the type of activity that is optimal for supporting that project. From this we conclude that different projects require different implementations of the product development process.
    by Vincent Eugene Hummel. S.M.
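    The abstract does not give the staffing model itself; the sketch below illustrates a complexity-driven headcount estimate in that spirit, with entirely hypothetical factors and coefficients.

```python
# Minimal sketch of a complexity-driven operations-headcount estimate in
# the spirit of the thesis' methodology. The factors and coefficients
# are hypothetical, not the model derived at Intel.
def operations_headcount(complexity: float, new_suppliers: int,
                         derivative: bool) -> float:
    """Estimate operations staffing for a new product introduction.
    complexity: 1 (derivative refresh) .. 5 (new platform)."""
    base = 1.0 + 0.8 * complexity        # complexity is the primary driver
    base += 0.3 * new_suppliers          # coordination overhead per new supplier
    if derivative:
        base *= 0.6                      # reuse lowers the required effort
    return round(base, 1)

# Example: a fairly complex new platform with two new suppliers.
print(operations_headcount(complexity=4, new_suppliers=2, derivative=False))  # 4.8
```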

    and Cost/Benefits Opportunities

    Acquisition Research Program Sponsored Report Series.
    The acquisition of artificial intelligence (AI) systems is a relatively new challenge for the U.S. Department of Defense (DoD). Given the potential for high-risk failures of AI system acquisitions, it is critical for the acquisition community to examine new analytical and decision-making approaches to managing the acquisition of these systems, in addition to the existing approaches (i.e., Earned Value Management, or EVM). In addition, many of these systems reside in small start-ups or relatively immature system development companies, further clouding the acquisition process, because their business processes are unlike those of the large defense contractors. This can lead to limited access to the data, information, and processes that are required in the standard DoD acquisition approach (i.e., the 5000 series). The well-known recurring problems in acquiring information technology automation within the DoD will likely be exacerbated in acquiring complex and risky AI systems. Therefore, more robust, agile, and analytically driven acquisition methodologies will be required to help avoid costly disasters in acquiring these kinds of systems. This research provides a set of analytical tools for acquiring organically developed AI systems through a comparison and contrast of the proposed methodologies, demonstrating when and how each method can be applied to improve the acquisition lifecycle for AI systems, and providing additional insights and examples of how some of these methods can be applied. This research identifies, reviews, and proposes advanced quantitative, analytically based methods within the integrated risk management (IRM) and knowledge value added (KVA) methodologies to complement the current EVM approach. It examines whether the various methodologies (EVM, KVA, and IRM) could be used within the Defense Acquisition System (DAS) to improve the acquisition of AI. While this paper does not recommend one methodology over the others, certain methodologies, specifically IRM, may be more beneficial when used throughout the entire acquisition process instead of within only a portion of it. Due to the complexity of AI systems, this research looks at AI as a whole rather than at specific types of AI.
    Approved for public release; distribution is unlimited.
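    EVM, the baseline approach the report compares against, rests on standard earned-value formulas; a minimal sketch follows, with made-up input figures.

```python
# Standard Earned Value Management (EVM) metrics, the baseline approach
# the report compares IRM and KVA against. Input figures are invented.
def evm_metrics(bac: float, pv: float, ev: float, ac: float) -> dict[str, float]:
    """bac: budget at completion; pv: planned value; ev: earned value;
    ac: actual cost; all in the same currency units."""
    cpi = ev / ac                 # cost performance index (<1 = over budget)
    spi = ev / pv                 # schedule performance index (<1 = behind)
    return {
        "cost_variance": ev - ac,
        "schedule_variance": ev - pv,
        "CPI": cpi,
        "SPI": spi,
        "estimate_at_completion": bac / cpi,   # common CPI-based EAC
    }

# Example: a program that is behind schedule and over cost.
print(evm_metrics(bac=10_000_000, pv=4_000_000, ev=3_200_000, ac=4_500_000))
```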

    The 1990 progress report and future plans

    This document describes the progress and plans of the Artificial Intelligence Research Branch (RIA) at ARC in 1990. Activities span a range from basic scientific research through engineering development to fielded NASA applications, particularly those applications that are enabled by the basic research carried out at RIA. Work is conducted in-house and through collaborative partners in academia and industry. Our major focus is on a limited number of research themes with a dual commitment to technical excellence and proven applicability to NASA's short-, medium-, and long-term problems. RIA acts as the Agency's lead organization for research aspects of artificial intelligence, working closely with a second research laboratory at JPL and with AI applications groups at all NASA centers.