
    A Risk-Based IoT Decision-Making Framework Based on Literature Review with Human Activity Recognition Case Studies

    The Internet of Things (IoT) is a key and growing technology for many critical real-life applications, where it can be used to improve decision making. The existence of several sources of uncertainty in the IoT infrastructure, however, can lead decision makers into taking inappropriate actions. The present work proposes a risk-based IoT decision-making framework to manage uncertainties effectively and to integrate domain knowledge into the decision-making process. A structured literature review of the risks and sources of uncertainty in IoT decision-making systems is the basis for the development of the framework and the Human Activity Recognition (HAR) case studies. More specifically, as one of the main targeted challenges, the potential sources of uncertainty in an IoT framework, at different levels of abstraction, are first reviewed and then summarized. The modules included in the framework are detailed, with the main focus on a novel risk-based analytics module, in which an ensemble-based data analytic approach, called Calibrated Random Forest (CRF), is proposed to extract useful information while quantifying and managing the uncertainty associated with predictions by using confidence scores. Its output is subsequently integrated with domain-knowledge-based action rules to perform decision making in a cost-sensitive and rational manner. The proposed CRF method is first evaluated and demonstrated on a HAR scenario in a Smart Home environment in case study I, and is further evaluated and illustrated with a remote health monitoring scenario for a diabetes use case in case study II. The experimental results indicate that, using the framework, raw sensor data can be converted into meaningful actions despite several sources of uncertainty. A comparison of the proposed framework to existing approaches highlights the key metrics that make decision making more rational and transparent.
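    The abstract names the CRF approach but not its implementation. As a rough illustration, a random forest can be wrapped in probability calibration so that its scores behave as confidence values, with low-confidence predictions deferred rather than acted upon. The sketch below uses scikit-learn and synthetic data; the confidence threshold and the dataset are illustrative assumptions, not the paper's.

```python
# Hedged sketch of a calibrated random forest with confidence-gated decisions.
# Assumes scikit-learn; threshold and data are illustrative, not the paper's.
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Calibrate the forest's raw scores into usable confidence estimates.
forest = RandomForestClassifier(n_estimators=200, random_state=0)
crf = CalibratedClassifierCV(forest, method="isotonic", cv=5)
crf.fit(X_tr, y_tr)

# Act only when confidence clears a cost-sensitive threshold; otherwise defer.
THRESHOLD = 0.8  # illustrative; in practice derived from misclassification costs
for scores in crf.predict_proba(X_te)[:5]:
    label, confidence = scores.argmax(), scores.max()
    if confidence >= THRESHOLD:
        action = f"apply action rule for class {label}"
    else:
        action = "defer to a human or gather more sensor data"
    print(f"confidence={confidence:.2f} -> {action}")
```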

    Risk Evaluation in Failure Mode and Effects Analysis Based on D Numbers Theory

    Failure mode and effects analysis (FMEA) is a useful technique for identifying potential faults or errors in a system and preventing them from occurring. In FMEA, risk evaluation is a vital procedure. Many methods have been proposed to address this issue, but they have deficiencies such as complex calculation and treating two adjacent evaluation ratings as mutually exclusive. Aiming at these problems, this paper proposes a novel method for risk evaluation based on D numbers theory. In the proposed method, the assessments of each failure mode are first aggregated through D numbers theory. The combined use of the risk priority number (RPN) and a newly defined risk coefficient then not only achieves lower computational complexity than other methods but also overcomes the shortcomings of the classical RPN. Furthermore, a numerical example is presented to demonstrate the effectiveness and superiority of the proposed method.
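    For context, the classical risk priority number the paper improves on is the product of three 1 to 10 ratings: RPN = severity × occurrence × detection. The minimal sketch below computes and ranks classical RPNs for invented failure modes; the D-numbers aggregation and the newly defined risk coefficient are specific to the paper and not reproduced here.

```python
# Minimal sketch of the classical RPN that the paper improves on:
# RPN = severity * occurrence * detection, each rated 1-10.
# The failure modes below are invented examples, not from the paper.
from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    severity: int    # 1 (negligible) .. 10 (catastrophic)
    occurrence: int  # 1 (remote) .. 10 (almost certain)
    detection: int   # 1 (almost certain detection) .. 10 (undetectable)

    @property
    def rpn(self) -> int:
        return self.severity * self.occurrence * self.detection

modes = [
    FailureMode("seal leak", severity=7, occurrence=4, detection=5),
    FailureMode("sensor drift", severity=5, occurrence=6, detection=7),
]
# Rank failure modes by RPN, highest risk first.
for fm in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"{fm.name}: RPN={fm.rpn}")
```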

    Optimisation Method for Training Deep Neural Networks in Classification of Non-functional Requirements

    Non-functional requirements (NFRs) are regarded as critical to a software system's success. The majority of NFR detection and classification solutions have relied on supervised machine learning models, which are hindered by the lack of labelled data for training and necessitate a significant amount of time spent on feature engineering. In this work, we explore emerging deep learning techniques to reduce the burden of feature engineering. The goal of this study is to develop an autonomous system that can classify NFRs into multiple classes based on a labelled corpus. In the first section of the thesis, we standardise the NFR ontology and annotations to produce a corpus based on five attributes: usability, reliability, efficiency, maintainability, and portability. In the second section, the design and implementation of four neural networks, including the artificial neural network, convolutional neural network, long short-term memory, and gated recurrent unit, are examined to classify NFRs. These models necessitate a large corpus. To overcome this limitation, we propose a new paradigm for data augmentation. This method uses a sort-and-concatenate strategy to combine two phrases from the same class, resulting in a two-fold increase in data size while keeping the domain vocabulary intact. We compared our method to a baseline (no augmentation) and an existing approach, Easy Data Augmentation (EDA), with pre-trained word embeddings. All training was performed under two modifications to the data: augmentation on the entire data before the train/validation split versus augmentation on the train set only. Our findings show that, compared to EDA and the baseline, the NFR classification models improved greatly, and the CNN outperformed the other models when trained using our suggested technique in the first setting. However, we saw only a slight boost in the second experimental setup with train-set augmentation alone. As a result, we can determine that augmentation of the validation set is required to achieve acceptable results with our proposed approach. We hope that our ideas will inspire new data augmentation techniques, whether generic or task-specific. Furthermore, it would also be useful to apply this strategy in other languages.
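    The sort-and-concatenate idea can be sketched directly from the description above: group phrases by class, sort them, and concatenate each phrase with a neighbour to create a synthetic sample of the same class, roughly doubling the data without introducing new vocabulary. The exact sort key and pairing scheme are not given in the abstract, so the alphabetical sorting and cyclic-neighbour pairing below are assumptions.

```python
# Hedged sketch of sort-and-concatenate augmentation; sort key and pairing
# scheme are assumed, and the example phrases are invented.
from collections import defaultdict

def sort_and_concatenate(samples):
    """samples: list of (phrase, label) -> originals plus synthetic samples."""
    by_class = defaultdict(list)
    for phrase, label in samples:
        by_class[label].append(phrase)

    augmented = list(samples)
    for label, phrases in by_class.items():
        ordered = sorted(phrases)  # assumed sort key: alphabetical
        for i, phrase in enumerate(ordered):
            partner = ordered[(i + 1) % len(ordered)]  # cyclic-neighbour pairing
            augmented.append((f"{phrase} {partner}", label))
    return augmented  # two-fold increase, domain vocabulary unchanged

data = [("system shall respond within two seconds", "efficiency"),
        ("pages must load quickly under peak traffic", "efficiency"),
        ("interface should be easy to learn", "usability")]
print(len(sort_and_concatenate(data)))  # 6: doubled from 3
```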

    Human-AI Collaboration to Mitigate Decision Noise in Financial Underwriting: A Study on FinTech Innovation in a Lending Firm

    Financial institutions have recognized the value of combining human expertise and AI to create high-performance augmented decision-support systems. Stakeholders at lending firms have increasingly acknowledged that plugging data into AI algorithms and eliminating the role of human underwriters through automation, in the expectation of immediate returns on investment from business process automation, is a flawed strategy. This research emphasizes the necessity of auditing the consistency of decisions (or professional judgment) made by human underwriters and monitoring the ability of data to capture a firm's lending policies, so as to lay a strong foundation for a legitimate system before investing millions in AI projects. The judgments made by experts in the past re-emerge in the future as outcomes or labels in the data used to train and evaluate algorithms. This paper presents Evidential Reasoning-eXplainer, a methodology to estimate probability mass as the extent of support for a given decision on a loan application by jointly assessing multiple independent and conflicting pieces of evidence. It quantifies variability in past decisions by comparing the subjective judgments of underwriters during manual financial underwriting with outcomes estimated from data. The consistency analysis improves decision quality by bridging the gap between past inconsistent decisions and the desired, ultimately true decisions. A case study on a specialist lending firm demonstrates the strategic work plan adopted to configure underwriters and developers to capture the correct data and audit the quality of decisions.
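    Evidential Reasoning-eXplainer itself is not specified in the abstract, but the underlying idea of assigning probability mass to independent, conflicting pieces of evidence and fusing them can be illustrated with Dempster's rule of combination, a standard mechanism in this family of methods. The evidence sources and mass values below are invented for illustration; the paper's actual algorithm is more elaborate and not reproduced here.

```python
# Hedged illustration of fusing two conflicting pieces of evidence with
# Dempster's rule of combination; evidence names and masses are invented.
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions whose focal elements are frozensets."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass assigned to contradictory evidence
    # Renormalise by the non-conflicting mass.
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

APPROVE, DECLINE = frozenset({"approve"}), frozenset({"decline"})
EITHER = APPROVE | DECLINE  # ignorance: evidence does not discriminate

credit_history = {APPROVE: 0.6, DECLINE: 0.1, EITHER: 0.3}  # illustrative
affordability = {APPROVE: 0.2, DECLINE: 0.5, EITHER: 0.3}   # illustrative
print(dempster_combine(credit_history, affordability))
# -> roughly {approve: 0.53, decline: 0.34, either: 0.13}
```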

    Evidential reasoning for preprocessing uncertain categorical data for trustworthy decisions: An application on healthcare and finance

    The uncertainty attributed by discrepant data in AI-enabled decisions is a critical challenge in highly regulated domains such as healthcare and finance. Ambiguity and incompleteness, due to missing values in output and input attributes respectively, are ubiquitous in these domains. They can have an adverse impact on sets of people underrepresented in the training data, even without any intention by the developer to discriminate. The inherently non-numerical nature of categorical attributes, compared with numerical attributes, and the presence of incomplete and ambiguous categorical attributes in a dataset increase the uncertainty in decision-making. This paper addresses the challenges in handling categorical attributes, which have not been addressed comprehensively in previous research. Three sources of uncertainty in categorical attributes are recognised in this research: informational uncertainty, unforeseeable uncertainty in the decision-task environment, and uncertainty due to a lack of pre-modelling explainability. These are addressed by the proposed methodology based on maximum likelihood evidential reasoning (MAKER), which can transform and impute incomplete and ambiguous categorical attributes into interpretable numerical features. It utilises notions of weight and reliability to capture, respectively, subjective expert preference over a piece of evidence and the quality of the evidence in a categorical attribute. The MAKER framework strives to integrate the recognised uncertainties into the transformed input data, allowing a model to perceive data limitations during the training regime and acknowledge doubtful predictions, thereby supporting trustworthy pre-modelling and post-modelling explainability. The ability to handle uncertainty, and its impact on explainability, is demonstrated on real-world healthcare and finance data for different missing-data scenarios in three types of AI algorithms: deep-learning, tree-based, and rule-based models.
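    The MAKER transformation is not detailed in the abstract. The sketch below illustrates the general idea in a deliberately simplified form: estimate per-category class likelihoods from labelled data and route missing values to an explicit "unknown" mass, so the downstream model sees the data limitation directly. The attribute names, data, and two-class setup are illustrative assumptions, not a faithful implementation of the paper's method.

```python
# Simplified, hedged sketch in the spirit of likelihood-based evidential
# transformation of a categorical attribute; not the MAKER algorithm itself.
from collections import Counter

def fit_belief_table(values, labels, classes=("good", "bad")):
    """Per category, estimate P(class | category) from labelled training data."""
    counts = {c: Counter() for c in classes}
    for v, y in zip(values, labels):
        if v is not None:            # skip missing values when estimating
            counts[y][v] += 1
    table = {}
    for v in {v for v in values if v is not None}:
        total = sum(counts[c][v] for c in classes)
        table[v] = [counts[c][v] / total for c in classes]
    return table

def transform(value, table, n_classes=2):
    if value is None or value not in table:
        # Full mass on "unknown": the model perceives the limitation explicitly.
        return [0.0] * n_classes + [1.0]
    return table[value] + [0.0]      # belief per class plus zero unknown mass

employment = ["salaried", "self-employed", None, "salaried"]  # illustrative
outcome = ["good", "bad", "good", "good"]
table = fit_belief_table(employment, outcome)
print([transform(v, table) for v in employment])
```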