Development of the Integrated Model of the Automotive Product Quality Assessment
Issues in building an integrated model of automotive product quality assessment are studied herein, based on widely applicable quality assessment methods and models. A conceptual model of an automotive product quality system that meets customer requirements has been developed. Typical characteristics of modern industrial production are an increase in the dynamism of production that determines product properties, a continuous growth in the volume of information required for decision-making, and an increased role of knowledge and high technologies implementing fundamentally new scientific and technical ideas. To address the problem of improving automotive product quality, a conceptual structural and hierarchical model is proposed that ensures quality as a closed system with feedback between the regulatory, manufacturing, and information modules responsible for forming product quality at all stages of its life cycle. The three-module model of the industrial product quality assurance system is considered universal, offering the opportunity to explore processes of any complexity while solving theoretical and practical problems of quality assessment and prediction for products of various purposes, including automotive products.
Warranty Data Analysis: A Review
Warranty claims and supplementary data contain useful information about product quality and reliability. Analysing such data can therefore benefit manufacturers by identifying early warnings of abnormalities in their products, providing useful information about failure modes to aid design modification, estimating product reliability to inform warranty policy, and forecasting the future warranty claims needed for preparing fiscal plans. In the last two decades, considerable research has been conducted in warranty data analysis (WDA) from several different perspectives. This article attempts to summarise and review the research and developments in WDA, with emphasis on models, methods and applications. It concludes with a brief discussion of current practices and possible future trends in WDA.
Definition of the on-time delivery indicator in rapid software development
Rapid software development (RSD) is an approach for developing software in rapid iterations. One of the critical success factors of an RSD project is delivering product releases on time and with the planned features. In this paper, we elaborate an exploratory definition of the On-Time Delivery strategic indicator in RSD, based on the literature and on interviews with four companies. This indicator supports decision-makers in detecting development problems in order to avoid delays, and in estimating the additional time needed when requirements, and specifically quality requirements, are considered.
Measuring Software Process: A Systematic Mapping Study
Context: Measurement is essential to reach predictable performance and high-capability processes. It provides support for better understanding, evaluation, management, and control of the development process and project, as well as the resulting product. It also enables organizations to improve and predict their processes' performance, which places them in a better position to make appropriate decisions. Objective: This study aims to understand the measurement of the software development process, to identify relevant studies, to create a classification scheme based on the identified studies, and then to map those studies into the scheme to answer the research questions. Method: Systematic mapping is the selected research methodology for this study. Results: A total of 462 studies are included and classified into four topics with respect to their focus, and into three groups based on publishing date. Five abstractions and 64 attributes were identified; 25 methods/models and 17 contexts were distinguished. Conclusion: Capability and performance were the most measured process attributes, while effort and performance were the most measured project attributes. Goal Question Metric and Capability Maturity Model Integration were the main methods and models used in the studies, whereas agile/lean development and small/medium-size enterprises were the most frequently identified research contexts.
Modeling Adoption and Usage of Competing Products
The emergence and widespread use of online social networks has led to a dramatic increase in the availability of social activity data. Importantly, this data can be exploited to investigate, at a microscopic level, problems that have captured the attention of economists, marketers and sociologists for decades, such as product adoption, usage and competition.

In this paper, we propose a continuous-time probabilistic model, based on temporal point processes, for the adoption and frequency of use of competing products, where the frequency of use of one product can be modulated by those of others. This model allows us to efficiently simulate the adoption and recurrent usage of competing products, and to generate traces in which we can easily recognize the effects of social influence, recency and competition. We then develop an inference method that efficiently fits the model parameters by solving a convex program. The problem decouples into a collection of smaller subproblems, thus scaling easily to networks with hundreds of thousands of nodes. We validate our model on synthetic and real diffusion data gathered from Twitter, and show that the proposed model not only provides a good fit to the data and more accurate predictions than alternatives, but also yields interpretable model parameters, which allow us to gain insights into some of the factors driving product adoption and frequency of use.
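The abstract does not reproduce the authors' exact formulation, but the core machinery can be illustrated. Below is a minimal Python sketch of simulating mutually modulated usage events for two competing products with a temporal point process; the exponential kernels, the parameter values, and the use of Ogata-style thinning are illustrative assumptions rather than the paper's model.

```python
import numpy as np

# Minimal sketch (NOT the paper's exact model): two competing products whose
# usage intensities are mutually modulated through exponential kernels,
# simulated with Ogata's thinning algorithm. All parameters are illustrative.
MU = np.array([0.2, 0.3])            # baseline usage rates per product
ALPHA = np.array([[0.5, -0.2],       # ALPHA[i, j]: influence of product j's
                  [-0.3, 0.4]])      # events on product i (negative = competition)
BETA = 1.0                           # kernel decay rate
T_END = 100.0

def intensity(t, events):
    """Conditional intensity of each product at time t, floored at zero."""
    lam = MU.copy()
    for s, j in events:              # (event time, product index)
        lam += ALPHA[:, j] * np.exp(-BETA * (t - s))
    return np.maximum(lam, 0.0)

def bound(t, events):
    """Valid thinning bound: count only excitatory terms, which decay in t."""
    lam = MU.copy()
    for s, j in events:
        lam += np.maximum(ALPHA[:, j], 0.0) * np.exp(-BETA * (t - s))
    return lam.sum()

def simulate(rng):
    events, t = [], 0.0
    while True:
        b = bound(t, events)
        t += rng.exponential(1.0 / b)             # candidate event time
        if t >= T_END:
            return events
        lam = intensity(t, events)
        if rng.uniform() < lam.sum() / b:         # accept candidate by ratio
            j = rng.choice(2, p=lam / lam.sum())  # attribute event to a product
            events.append((t, j))

trace = simulate(np.random.default_rng(0))
print(f"simulated {len(trace)} usage events")
```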
Estimating software project effort using analogies
Accurate project effort prediction is an important goal for the software engineering community. To date, most work has focused upon building algorithmic models of effort, for example COCOMO, which can be calibrated to local environments. We describe an alternative approach to estimation based upon the use of analogies. The underlying principle is to characterise projects in terms of features (for example, the number of interfaces, the development method or the size of the functional requirements document). Completed projects are stored, and the problem then becomes one of finding the most similar projects to the one for which a prediction is required. Similarity is defined as Euclidean distance in n-dimensional space, where n is the number of project features; each dimension is standardised so that all dimensions have equal weight. The known effort values of the nearest neighbours to the new project are then used as the basis for the prediction. The process is automated using a PC-based tool known as ANGEL. The method is validated on nine different industrial datasets (a total of 275 projects), and in all cases analogy outperforms algorithmic models based upon stepwise regression. From this work we argue that estimation by analogy is a viable technique that, at the very least, can be used by project managers to complement current estimation techniques.
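The analogy procedure described above is straightforward to sketch. The following Python illustration assumes k nearest neighbours and the mean of their known efforts as the prediction; the feature names, dataset values, and choice of k are hypothetical and not taken from the ANGEL tool itself.

```python
import numpy as np

# A minimal sketch of estimation by analogy: standardise each feature so all
# dimensions carry equal weight, find the k most similar completed projects
# by Euclidean distance, and predict with the mean of their known efforts.
completed = np.array([
    # [n_interfaces, team_size, func_req_size_pages]  (illustrative features)
    [12, 4,  80],
    [30, 9, 210],
    [ 7, 3,  55],
    [22, 6, 150],
], dtype=float)
efforts = np.array([420.0, 1650.0, 300.0, 980.0])  # known effort (person-hours)

def estimate_by_analogy(new_project, projects, efforts, k=2):
    """Predict effort as the mean effort of the k most similar past projects."""
    mu, sigma = projects.mean(axis=0), projects.std(axis=0)
    z = (projects - mu) / sigma                # standardise each dimension
    z_new = (new_project - mu) / sigma
    dist = np.linalg.norm(z - z_new, axis=1)   # Euclidean distance in feature space
    nearest = np.argsort(dist)[:k]             # indices of the k closest analogies
    return efforts[nearest].mean()

print(estimate_by_analogy(np.array([18.0, 5.0, 120.0]), completed, efforts))
```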
How reliable are systematic reviews in empirical software engineering?
BACKGROUND – The systematic review is becoming a more commonly employed research instrument in empirical software engineering. Before undue reliance is placed on the outcomes of such reviews, it would seem useful to consider the robustness of the approach in this particular research context.
OBJECTIVE – The aim of this study is to assess the reliability of systematic reviews as a research instrument. In particular, we wish to investigate the consistency of process and the stability of outcomes.
METHOD – We compare the results of two independent reviews undertaken with a common research question.
RESULTS – The two reviews find similar answers to the research question, although the means of arriving at those answers vary.
CONCLUSIONS – In addressing a well-bounded research question, groups of researchers with similar domain experience can arrive at the same review outcomes, even though they may do so in different ways. This provides evidence that, in this context at least, the systematic review is a robust research method.
Prediction intervals for reliability growth models with small sample sizes
Engineers and practitioners contribute to society through their ability to apply basic scientific principles to real problems in an effective and efficient manner. They must collect data to test their products every day as part of the design and testing process, and also after the product or process has been rolled out, to monitor its effectiveness. Model building, data collection, data analysis and data interpretation form the core of sound engineering practice. After the data has been gathered, the engineer must be able to sift and interpret it correctly so that meaning can be extracted from a mass of undifferentiated numbers or facts. To do this he or she must be familiar with the fundamental concepts of correlation, uncertainty, variability and risk in the face of uncertainty. In today's global and highly competitive environment, continuous improvement in the processes and products of any field of engineering is essential for survival. Many organisations have shown that the first step to continuous improvement is to integrate the widespread use of statistics and basic data analysis into the manufacturing development process, as well as into the day-to-day business decisions taken in regard to engineering processes. The Springer Handbook of Engineering Statistics gathers together the full range of statistical techniques required by engineers from all fields to gain sensible statistical feedback on how their processes or products are functioning and to give them realistic predictions of how these could be improved.
Predicting with sparse data
It is well known that effective prediction of project cost related factors is an important aspect of software engineering. Unfortunately, despite extensive research over more than 30 years, this remains a significant problem for many practitioners. A major obstacle is the absence of reliable and systematic historic data, yet this is a sine qua non for almost all proposed methods: statistical, machine learning or calibration of existing models. In this paper we describe our sparse data method (SDM), based upon a pairwise comparison technique and Saaty's Analytic Hierarchy Process (AHP). Our minimum data requirement is a single known point. The technique is supported by a software tool known as DataSalvage. We show, for data from two companies, how our approach adds value to expert judgement by producing significantly more accurate and less biased results. A sensitivity analysis shows that our approach is robust to pairwise comparison errors. We then describe the results of a small usability trial with a practising project manager. From this empirical work we conclude that the technique is promising and may help overcome some of the present barriers to effective project prediction.
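The pairwise-comparison core of SDM can be sketched briefly. The Python example below, under assumed details, builds an AHP-style reciprocal judgement matrix, takes its principal eigenvector as relative effort weights, and anchors the scale with a single known effort value; the matrix entries and the anchor are hypothetical, and the DataSalvage tool itself may work differently.

```python
import numpy as np

# A minimal sketch of AHP-based effort estimation from expert judgements.
# A[i, j] = expert's estimate of how many times more effort project i needs
# than project j; the matrix is reciprocal: A[j, i] = 1 / A[i, j].
A = np.array([
    [1.0, 3.0, 0.50],    # project 0 compared with projects 0, 1, 2
    [1/3, 1.0, 0.25],
    [2.0, 4.0, 1.00],
])

# The principal eigenvector of A gives the relative effort weights.
eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w = w / w.sum()                      # normalised relative effort weights

# A single known point fixes the scale: suppose project 2 took 500 person-hours.
known_idx, known_effort = 2, 500.0
estimates = w * (known_effort / w[known_idx])
print(estimates)                     # estimated effort for all three projects
```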