
    Uncertainty analysis in product service system: Bayesian network modelling for availability contract

    There is an emerging trend of manufacturing companies offering combined products and services to customers as integrated solutions. Availability contracts are an apt instance of such offerings, where product use is guaranteed to the customer and enforced by incentive-penalty schemes. Uncertainty is heightened in such an industry setting, where all stakeholders strive to achieve their respective performance goals while collaborating intensively. Understanding through-life uncertainties and their impact on cost is critical to ensuring the sustainability and profitability of the industries offering such solutions. To address this challenge, the aim of this research study is to provide an approach for the analysis of uncertainties in a Product Service System (PSS) delivered in business-to-business applications, by specifying a procedure to identify, characterise and model uncertainties, with an emphasis on providing decision support and prioritising the key uncertainties affecting performance outcomes. The thesis presents a literature review of research areas at the interface of uncertainty, PSS and availability contracts. From this, seven requirements that are vital to enhancing the understanding and quantification of uncertainties in a Product Service System are drawn. These requirements are synthesised into a conceptual uncertainty framework. The framework prescribes four elements: identifying a set of uncertainties, discerning the relationships between uncertainties, tools and techniques to treat uncertainties and, finally, results that could ease uncertainty management and analysis efforts. The conceptual uncertainty framework was applied to an industry case study in availability contracts, where each of the four elements was realised. This application phase of the research included the identification of uncertainties in the PSS, the development of a multi-layer uncertainty classification, deriving the structure of a Bayesian Network and, finally, the evaluation and validation of the Bayesian Network. The findings suggest that understanding uncertainties from a system perspective is essential to capture the network aspect of PSS. This network comprises several stakeholders with an increased flux of information and material flows, and it can be effectively represented using Bayesian Networks.
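    The abstract does not give the network used in the case study, so the following is only a minimal sketch of the kind of Bayesian network it describes: two illustrative uncertainties (spare-part delay, maintenance quality) feed an availability node that drives a contract penalty. All variable names and probabilities are hypothetical, and the marginal is computed by brute-force enumeration to keep the example dependency-free.

```python
from itertools import product

# Minimal discrete Bayesian network for an availability contract.
# All names and probabilities below are illustrative assumptions.
P_delay = {True: 0.2, False: 0.8}        # spare-part delivery delayed?
P_poor_maint = {True: 0.1, False: 0.9}   # maintenance poorly executed?

def p_low_availability(delay: bool, poor_maint: bool) -> float:
    """P(availability falls below the contracted level | parents)."""
    table = {(True, True): 0.90, (True, False): 0.50,
             (False, True): 0.40, (False, False): 0.05}
    return table[(delay, poor_maint)]

def p_penalty(low_availability: bool) -> float:
    """P(the penalty clause is triggered | availability state)."""
    return 0.80 if low_availability else 0.02

# Marginal probability of incurring a penalty, summed over every combination
# of the upstream uncertainties.
p_pen = 0.0
for delay, poor, low in product([True, False], repeat=3):
    p_low = p_low_availability(delay, poor)
    p_joint = P_delay[delay] * P_poor_maint[poor] * (p_low if low else 1 - p_low)
    p_pen += p_joint * p_penalty(low)

print(f"P(penalty triggered) = {p_pen:.3f}")  # ~0.157 with these numbers
```

    Re-running such a query with individual uncertainties clamped to their worst state is one simple way to prioritise which uncertainties contribute most to the penalty risk, which is the kind of decision support the framework aims at.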

    Using Bayesian belief networks for reliability management: construction and evaluation: a step by step approach

    In the capital goods industry, there is a growing need to manage reliability throughout the product development process. A number of trends can be identified that have a strong effect on the way in which reliability prediction and management are approached:
    - The lifecycle costs approach that is becoming increasingly important for original equipment manufacturers
    - The increasing product complexity
    - The growth in customer demands
    - The pressure of shortening times to market
    - The increasing globalization of markets and production
    Reliability management is typically based on the insights, views, and perceptions of the real world held by the people involved in the decision-making process. These views are unique and specific to each individual who looks at the management process, and can be represented using soft systems methodology. Since soft systems methodology is based on insights, views and perceptions, it is especially suitable in the context of reliability prediction and management early in the product development process as studied in this thesis, where no objective data is available (yet). Two research objectives are identified through examining market trends and applying soft systems methodology. The first research objective focuses on the identification or development of a method for reliability prediction and management that meets the following criteria:
    - It should support decision making for reliability management
    - It should also be able to take non-technical factors into account
    - It has to be usable throughout the product development process, and especially in the early phases of the process
    - It should be able to capture and handle uncertainty
    This first research objective is addressed through a literature study of traditional approaches (failure mode and effects analysis, fault tree analysis and database methods) and more recent approaches to reliability prediction and reliability management (REMM, PREDICT and TRACS). The conclusion of the literature study is that traditional methods, although able to support decision making to some extent, take a technical point of view and are usable only in a limited part of the product development process. The traditional methods are capable of taking uncertainty into account, but only uncertainty about the occurrence of single faults or failure modes. The recent approaches meet the criteria to a greater extent: REMM is able to provide decision support, but mainly on a technical level, by prioritizing the elimination of design concerns. The reliability estimate provided by REMM can be updated over time and is clearly usable throughout the product development process. Uncertainty is incorporated in the reliability estimate as well as in the occurrence of concerns. PREDICT provides decision support for processes as well as components, but it focuses on the technical contribution of the component or process to reliability. As in REMM, PREDICT provides an updateable estimate and incorporates uncertainty as a probability. TRACS uses Bayesian belief networks and provides decision support in both technical and non-technical terms. In the TRACS tool, estimates can be updated and uncertainty is incorporated using probabilities. Since TRACS was developed for one specific case, and an extensive discussion of the implementation process is missing, it is not readily applicable for reliability management in general.
The discussion of the literature leads to the choice of Bayesian belief networks as an effective modelling technique for reliability prediction and management. It also indicates that Bayesian belief networks are particularly well suited to the early stages of the product development process, because they can make the influences of the product development process on reliability explicit from those early stages onwards. The second research objective is the development of a clear, systematic approach to build and use Bayesian belief networks in the context of reliability prediction and management. Although Bayesian belief network construction is widely described in the literature as having three generic steps (problem structuring, instantiation and inference), how the steps are to be carried out in practice is described only summarily. No systematic, coherent and structured approach for the construction of a Bayesian belief network can be found in the literature. The second objective therefore concerns the identification and definition of model boundaries, model variables, and model structure. The methodology developed to meet this second objective is an adaptation of Grounded Theory, a method widely used in the social sciences. Grounded Theory is an inductive rather than deductive method (focusing on building rather than testing theory). Grounded Theory is adapted by adopting Bayesian network idioms (Neil, Fenton & Nielson, 2000) into the approach. Furthermore, the canons of the Grounded Theory methodology (Corbin & Strauss, 1990) were not strictly followed, because of their limited suitability for the subject and for practical reasons. Grounded Theory has thus been adapted as a methodology for structuring problems modelled with Bayesian belief networks. The adapted Grounded Theory approach is applied in a case study in a business unit of a company that develops and produces medical scanning equipment. Once the Bayesian belief network model variables, structure and boundaries have been determined, the network must be instantiated. For instantiation, a probability elicitation protocol has been developed. This protocol includes training, preparation for the elicitation, a direct elicitation process, and feedback on the elicitation. The instantiation is illustrated as part of the case study. The combination of the adapted Grounded Theory method for problem structuring and the probability elicitation protocol for instantiation together forms an algorithm for Bayesian belief network construction (consisting of data gathering, problem structuring, instantiation, and feedback) with the following 9 steps (see Table 1).
Table 1: Bayesian belief network construction algorithm
1. Gather information regarding the way in which the topic under discussion is influenced, by conducting interviews
2. Identify the factors (i.e. nodes) that influence the topic, by analyzing and coding the interviews
3. Define the variables by identifying the different possible states (state-space) of the variables, through coding and direct conversation with experts
4. Characterize the relationships between the different nodes using the idioms, through analysis and coding of the interviews
5. Control the number of conditional probabilities that has to be elicited, using the definitional/synthesis idiom (Neil, Fenton & Nielson, 2000)
6. Evaluate the Bayesian belief network, possibly leading to a repetition of (a number of) the first 5 steps
7. Identify and define the conditional probability tables that define the relationships in the Bayesian belief network
8. Fill in the conditional probability tables, in order to define the relationships in the Bayesian belief network
9. Evaluate the Bayesian belief network, possibly leading to a repetition of (a number of) earlier steps
A Bayesian belief network for reliability prediction and management was constructed using the algorithm. The model's problem structure and the model behaviour were validated during and at the end of the construction process. A survey was used to validate the problem structure, and the model behaviour was validated through a focus group meeting. Unfortunately, the results of the survey were limited because of the low response rate (35%). The results of the focus group meeting indicated that the model behaviour was realistic, implying that application of the adapted Grounded Theory approach results in a realistic model for reliability management. The adapted Grounded Theory approach developed in this thesis provides a scientific and practical contribution to model building and use in the face of limited availability of information. The scientific contribution lies in the provision of the systematic and coherent approach to Bayesian belief network construction described above. The practical contribution lies in the application of this approach in the context of reliability prediction and management, and in the structured and algorithmic approach to model building. The case study in this thesis shows the construction and use of an effective model that enables reliability prediction and provides decision support for reliability management throughout the product development process, from its earliest stages. Bayesian belief networks provide a strong basis for reliability management, giving qualitative and quantitative insights into the relationships between influential variables and reliability.
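The thesis does not tie the algorithm to a particular software tool, so the sketch below is only an illustration of steps 7-8 (defining and filling conditional probability tables) and the kind of query used in the step 9 evaluation, using the open-source pgmpy library on a deliberately tiny network. The variables, structure and probabilities are hypothetical, not taken from the case study.

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Hypothetical structure; in the thesis this would come from the interview
# coding and idioms of steps 1-6.
model = BayesianNetwork([("DesignMaturity", "Reliability"),
                         ("TeamExperience", "Reliability")])

# Steps 7-8: define and fill the conditional probability tables with elicited
# probabilities (all numbers here are illustrative; state 0 = low, 1 = high).
cpd_design = TabularCPD("DesignMaturity", 2, [[0.6], [0.4]])
cpd_team = TabularCPD("TeamExperience", 2, [[0.7], [0.3]])
cpd_reliability = TabularCPD(
    "Reliability", 2,
    values=[[0.90, 0.60, 0.70, 0.20],   # P(Reliability = low  | parents)
            [0.10, 0.40, 0.30, 0.80]],  # P(Reliability = high | parents)
    evidence=["DesignMaturity", "TeamExperience"],
    evidence_card=[2, 2],
)
model.add_cpds(cpd_design, cpd_team, cpd_reliability)
assert model.check_model()

# Step 9: evaluate the network by checking whether queries such as this one
# behave as the experts expect.
inference = VariableElimination(model)
print(inference.query(["Reliability"], evidence={"TeamExperience": 1}))
```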

    Automating lead scoring with machine learning: An experimental study

    Companies often gather a tremendous amount of data, such as browsing behavior, email activities and other contact data. This data can be a source of important competitive advantage when used to estimate a contact's purchase probability with predictive analytics. The calculated purchase probability can then be used by companies to solve different business problems, such as optimizing their sales processes. The purpose of this article is to study how machine learning can be used to perform lead scoring as a special application of purchase probability estimation. Historical behavioral data is used as training data for the classification algorithm, and purchase moments are used to limit the behavioral data for the contacts that have purchased a product in the past. Different ways of aggregating time-series data are tested to ensure that limiting the activities for buyers does not introduce model bias. The results suggest that it is possible to estimate the purchase probability of leads using supervised learning algorithms, such as random forest, and that business insights can be obtained from the results using visual analytics.
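    The article's features and pre-processing are not reproduced here, so the following is a minimal sketch of the reported setup: a random forest trained on aggregated behavioural features whose predicted class probability serves as the lead score. The data is synthetic and the feature names are hypothetical stand-ins for the real contact data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical aggregated behavioural features per contact:
# [page_views, email_opens, days_since_last_activity]
n = 1_000
X = np.column_stack([
    rng.poisson(5, n),
    rng.poisson(2, n),
    rng.integers(0, 90, n),
])
# Synthetic label: more (and more recent) engagement makes a purchase more likely.
y = (X[:, 0] + 2 * X[:, 1] - 0.05 * X[:, 2] + rng.normal(0, 2, n) > 8).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Lead score = estimated purchase probability for each contact.
scores = model.predict_proba(X_test)[:, 1]
print("AUC:", round(roc_auc_score(y_test, scores), 3))
print("Highest-scoring leads:", np.argsort(scores)[::-1][:5])
```

    Ranking contacts by this probability, rather than thresholding it, is what turns the classifier into a lead-scoring tool for prioritising sales effort.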

    Challenges in Survey Research

    While it is an important and often used research method, survey research has been discussed less often on a methodological level in empirical software engineering than other types of research. This chapter compiles a set of important and challenging issues in survey research, based on experiences with several large-scale international surveys. The chapter covers theory building, sampling, invitation and follow-up, statistical as well as qualitative analysis of survey data, and the usage of psychometrics in software engineering surveys.

    Emotions in context: examining pervasive affective sensing systems, applications, and analyses

    Pervasive sensing has opened up new opportunities for measuring our feelings and understanding our behavior by monitoring our affective states while mobile. This review paper surveys pervasive affect sensing by examining three major elements of affective pervasive systems, namely "sensing", "analysis", and "application". Sensing investigates the different sensing modalities used in existing real-time affective applications; Analysis explores different approaches to emotion recognition and visualization based on different types of collected data; and Application investigates the leading areas of affective applications. For each of the three aspects, the paper includes an extensive survey of the literature and outlines some of the challenges and future research opportunities of affective sensing in the context of pervasive computing.

    Improving water asset management when data are sparse

    Ensuring the high serviceability of assets in water utilities is critically important and requires continuous improvement. This is due to the need to minimise the risk of harm to human health and the environment from contaminated drinking water. Continuous improvement and innovation in water asset management are therefore necessary and are driven by (i) increased regulatory requirements on serviceability; (ii) high maintenance costs; (iii) higher customer expectations; and (iv) enhanced environmental and health/safety requirements. High quality data on asset failures, maintenance, and operations are key requirements for developing reliability models. However, a literature search revealed that, in practice, data is sometimes limited in water utilities - particularly for over-ground assets. Perhaps surprisingly, there is often a mismatch between the ambitions of sophisticated reliability tools and the asset data that water utilities are able to draw upon to implement them in practice. This research provides models to support decision-making in water utility asset management when data are limited. Three approaches were developed for assessing asset condition, assessing maintenance effectiveness, and selecting maintenance regimes for specific asset groups. Expert elicitation was used to test and apply the developed decision-support tools. A major regional water utility in England was used as a case study to investigate and test the developed approaches. The new approach achieved improved precision in asset condition assessment (Figure 3-3a), supporting the requirements of the UK Capital Maintenance Planning Common Framework. Critically, the thesis demonstrated that assets were sometimes misallocated by more than 50% between condition grades when using current approaches. Expert opinions were also sought for assessing maintenance effectiveness, and a new approach was tested with over-ground assets. The new approach's value was demonstrated by its capability to account for finer measurements (as low as 10%) of maintenance effectiveness (Table 4-4). An asset maintenance regime selection approach was developed to support decision-making when data are sparse. The value of the approach is its versatility in selecting different regimes for different asset groups, while specifically accounting for each asset group's unique performance variables.
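    The thesis's exact elicitation and aggregation procedure is not given in this abstract, so the sketch below shows one common way to combine elicited expert judgements about an asset's condition grade, a weighted linear opinion pool. The 1 (best) to 5 (worst) grading scale is illustrative, and the distributions and weights are hypothetical.

```python
import numpy as np

# Illustrative condition grades, 1 (best) .. 5 (worst).
grades = np.array([1, 2, 3, 4, 5])

# Hypothetical elicited judgements: each row is one expert's probability
# distribution over the five grades for the same over-ground asset.
expert_distributions = np.array([
    [0.05, 0.20, 0.50, 0.20, 0.05],
    [0.00, 0.10, 0.40, 0.40, 0.10],
    [0.10, 0.30, 0.40, 0.15, 0.05],
])

# Hypothetical weights reflecting each expert's familiarity with this asset group.
weights = np.array([0.5, 0.3, 0.2])

# Linear opinion pool: weighted average of the individual distributions.
pooled = weights @ expert_distributions
expected_grade = float(grades @ pooled)

print("Pooled distribution over grades:", np.round(pooled, 3))
print("Expected condition grade:", round(expected_grade, 2))
```

    Working with a pooled distribution rather than a single point grade is one way to support finer-grained condition and maintenance-effectiveness statements when hard failure data are sparse.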