68 research outputs found

    Decision Support Systems

    Decision support systems (DSS) have evolved over the past four decades from theoretical concepts into real-world computerized applications. A DSS architecture contains three key components: a knowledge base, a computerized model, and a user interface. DSS simulate the cognitive decision-making functions of humans using artificial intelligence methodologies (including expert systems, data mining, machine learning, connectionism, and logical reasoning) in order to perform decision support functions. The applications of DSS cover many domains, ranging from aviation monitoring, transportation safety, clinical diagnosis, weather forecasting, and business management to internet search strategy. By combining knowledge bases with inference rules, DSS are able to provide suggestions to end users that improve decisions and outcomes. This book is written as a textbook so that it can be used in formal courses examining decision support systems. It may be used by both undergraduate and graduate students from diverse computer-related fields. It will also be of value to established professionals as a text for self-study or reference.
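The knowledge-base-plus-inference-rules design described in this abstract can be sketched in a few lines. The facts and if-then rules below are hypothetical illustrations, not taken from the book:

```python
# Minimal sketch of the three-component DSS architecture: a knowledge base of
# facts, an inference component of if-then rules, and a text-based interface.
# All facts and rules here are hypothetical examples.

def infer(facts, rules):
    """Forward-chain over the rules until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in derived and all(c in derived for c in conditions):
                derived.add(conclusion)
                changed = True
    return derived

# Knowledge base: observed facts about a (hypothetical) patient.
facts = {"fever", "cough"}

# Inference rules: (conditions, conclusion) pairs.
rules = [
    ({"fever", "cough"}, "suspect_infection"),
    ({"suspect_infection"}, "recommend_lab_test"),
]

suggestions = infer(facts, rules) - facts
print(sorted(suggestions))  # ['recommend_lab_test', 'suspect_infection']
```

Forward chaining like this is the simplest way a knowledge base plus rules yields suggestions to an end user; production systems add conflict resolution and explanation facilities on top.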

    Decision support systems for adoption in dental clinics: a survey

    While most dental clinicians use some sort of information system, these systems are used mainly for administrative functions, despite the advisory potential of some of them. This paper outlines some current decision support systems (DSS) and the common barriers facing dentists in adopting them within their workflow. These barriers include a lack of perceived usefulness, complicated social and economic factors, and the difficulty users face in interpreting the advice given by the system. A survey of current systems found that although there are systems that suggest treatment options, there is no real-time integration with other knowledge bases. Additionally, advice on drug prescription at the point of care is absent from such systems, a significant omission given that disease management and drug prescription are common in the workflow of a dentist. This paper also addresses future trends in the research and development of dental clinical DSS, with specific emphasis on big data, standards, and privacy issues, to fulfil the vision of a robust, user-friendly and scalable personalised DSS for dentists. The findings of this study offer strategies for the design, research and development of a DSS with sufficient perceived usefulness to attract adoption and integration by dentists within their routine clinical workflow, thus resulting in better health outcomes for patients and increased productivity for the clinic.

    Data quality assurance for strategic decision making in Abu Dhabi's public organisations

    “A thesis submitted to the University of Bedfordshire, in partial fulfilment of the requirements for the degree of Master of Philosophy.” Data quality is an important aspect of an organisation’s strategies for supporting decision makers in reaching the best decisions possible and consequently attaining the organisation’s objectives. In the case of public organisations, decisions ultimately concern the public, and hence further diligence is required to make sure that these decisions do, for instance, preserve economic resources, maintain public health, and provide national security. The decision making process requires a wealth of information in order to achieve efficient results. Public organisations typically acquire great amounts of data generated by public services. However, the vast amount of data stored in public organisations’ databases may be one of the main reasons for inefficient decision making. Processing vast amounts of data and extracting accurate information are not easy tasks. Although technology helps in this respect, for example through the use of decision support systems, it is not sufficient to improve decisions to a significant level of assurance. The research proposed using data mining to improve the results obtained by decision support systems. However, more than merely technological considerations are needed. The research argues that a complete data quality framework is needed in order to improve data quality and consequently the decision making process in public organisations. A series of surveys conducted in seven public organisations in the Abu Dhabi Emirate of the United Arab Emirates contributed to the design of a data quality framework. The framework comprises elements found necessary to attain the quality of data reaching decision makers.
The framework comprises seven elements, ranging from technical to human-based, found to be important for attaining data quality in public organisations, taking Abu Dhabi's public organisations as the case. The interaction and integration of these elements contribute to the quality of data reaching decision makers and hence to the efficiency of decisions made by public organisations. The framework suggests that public organisations may need to adopt a methodological basis to support the decision making process. This includes more training courses and supportive organisational units, such as decision support centres, information security and strategic management. The framework also underscores the importance of acknowledging the human and cultural factors involved in the decision making process. Such factors have implications for how training and awareness raising are implemented to lead to effective methods of system development.

    Efficient Decision Support Systems

    This series is directed to diverse managerial professionals who are leading the transformation of individual domains by using expert information and domain knowledge to drive decision support systems (DSSs). The series offers a broad range of subjects addressed in specific areas such as health care, business management, banking, agriculture, environmental improvement, natural resource and spatial management, aviation administration, and hybrid applications of information technology aimed at interdisciplinary issues. This book series is composed of three volumes: Volume 1 covers the general concepts and methodology of DSSs; Volume 2 covers applications of DSSs in the biomedical domain; Volume 3 covers hybrid applications of DSSs in multidisciplinary domains. The series shapes decision support strategies in the new infrastructure, assisting readers in making full use of creative technology to manipulate input data and to transform information into useful decisions for decision makers.

    A reliable neural network-based decision support system for breast cancer prediction

    PhD thesis. Axillary lymph node (ALN) metastasis status is an important prognostic marker in breast cancer and is widely employed for tumour staging and defining adjuvant therapy. In an attempt to avoid the invasive procedures currently employed for the diagnosis of nodal metastasis, several markers have been identified and tested for the prediction of ALN metastasis status in recent years. However, the nonlinear and complex relationship between these markers and nodal status has limited the effectiveness of conventional statistical methods as classification tools for diagnosing metastasis to ALNs. The aim of this study is to propose a reliable artificial neural network (ANN) based decision support system for ALN metastasis status prediction. ANNs have been chosen in this study for their special characteristics, including nonlinear modelling, robustness to inter-class variability and adaptable weights, which make them suitable for data-driven analysis without any prior assumptions about the underlying data distributions. To achieve this aim, the probabilistic neural network (PNN) evaluated with the .632 bootstrap is investigated and proposed as an effective and reliable tool for the prediction of ALN metastasis. For this purpose, results are compared with the multilayer perceptron (MLP) neural network and two network evaluation methods: holdout and cross validation (CV). A set of six markers has been identified and analysed in detail for this purpose. These markers include tumour size, oestrogen receptor (ER), progesterone receptor (PR), p53, Ki-67 and age. The outcome of each patient is defined as metastasis or non-metastasis, as diagnosed by surgery.
This study makes three contributions: firstly, it suggests the application of the PNN as a classifier for predicting ALN metastasis; secondly, it proposes the .632 bootstrap evaluation of the ANN outcome as a reliable tool for ALN status prediction; and thirdly, it proposes a novel set of markers for accurately predicting the state of nodal metastasis in breast cancer. Results reveal that the PNN provides better sensitivity, specificity and accuracy in most marker combinations compared to the MLP. The comparison of evaluation methods demonstrates high variability and the existence of outliers when using the holdout and 5-fold CV methods. This variability is reduced when using the .632 bootstrap. The best prediction accuracy, obtained by combining ER, p53, Ki-67 and age, was 69%, while tumour size and p53 were the most significant individual markers. The classification accuracy of this panel of markers emphasises their potential for predicting nodal spread in individual patients. This approach could significantly reduce the need for invasive procedures, and reduce post-operative stress and morbidity. Moreover, it can reduce the time lag between investigation and decision making in patient management.
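The .632 bootstrap estimate the thesis relies on weights the optimistic resubstitution (training) error against the pessimistic out-of-bag error as 0.368 * err_train + 0.632 * err_oob. A minimal sketch, using a toy nearest-centroid classifier and hypothetical one-marker data in place of the PNN and real patient markers:

```python
# Sketch of the .632 bootstrap error estimate. The nearest-centroid
# "classifier" and the tiny dataset are illustrative stand-ins only.
import random

def fit(X, y):
    """Nearest-centroid model: mean feature vector per class."""
    cents = {}
    for label in set(y):
        pts = [x for x, l in zip(X, y) if l == label]
        cents[label] = [sum(col) / len(pts) for col in zip(*pts)]
    return cents

def predict(cents, x):
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    return min(cents, key=lambda lab: dist(cents[lab], x))

def err(cents, X, y):
    return sum(predict(cents, x) != l for x, l in zip(X, y)) / len(y)

def bootstrap_632(X, y, B=100, seed=0):
    rng = random.Random(seed)
    n = len(X)
    err_train = err(fit(X, y), X, y)  # optimistic resubstitution error
    oob_errs = []
    for _ in range(B):
        idx = [rng.randrange(n) for _ in range(n)]   # resample with replacement
        oob = [i for i in range(n) if i not in set(idx)]  # left-out samples
        if not oob:
            continue
        model = fit([X[i] for i in idx], [y[i] for i in idx])
        oob_errs.append(err(model, [X[i] for i in oob], [y[i] for i in oob]))
    err_oob = sum(oob_errs) / len(oob_errs)  # pessimistic out-of-bag error
    return 0.368 * err_train + 0.632 * err_oob

# Hypothetical one-marker values for six patients, two outcome classes.
X = [[0.1], [0.2], [0.3], [1.1], [1.2], [1.3]]
y = [0, 0, 0, 1, 1, 1]
print(round(bootstrap_632(X, y), 3))
```

The weighting reduces the variance and outlier sensitivity that the abstract reports for holdout and 5-fold CV on small clinical datasets.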

    Prescriptive Analytics: A Survey of Emerging Trends and Technologies


    A framework for managing global risk factors affecting construction cost performance

    Poor cost performance of construction projects has been a major concern for both contractors and clients. The effective management of risk is thus critical to the success of any construction project, and the importance of risk management has grown as projects have become more complex and competition has increased. Contractors have traditionally used financial mark-ups to cover the risk associated with construction projects, but as competition increases and margins become tighter they can no longer rely on this strategy and must improve their ability to manage risk. Furthermore, the construction industry has witnessed significant changes, particularly in procurement methods, with clients allocating greater risks to contractors. Evidence shows that there is a gap between existing risk management techniques and tools, mainly built on normative statistical decision theory, and their practical application by construction contractors. The main reason behind the lack of use is that risk decision making within construction organisations is heavily based upon experience, intuition and judgement, not on mathematical models. This thesis presents a model for managing global risk factors affecting the cost performance of construction projects. The model has been developed using a behavioural decision approach, fuzzy logic, and artificial intelligence technology. The methodology adopted to conduct the research involved a thorough literature survey on risk management, informal and formal discussions with construction practitioners to assess the extent of the problem, a questionnaire survey to evaluate the importance of global risk factors and, finally, repertory grid interviews aimed at eliciting relevant knowledge. There are several approaches to categorising the risks permeating construction projects. This research groups risks into three main categories, namely organisation-specific, global and Acts of God.
It focuses on global risk factors because they are ill-defined, less understood by contractors and difficult to model, assess and manage, although they have a huge impact on cost performance. Generally, contractors, especially in developing countries, have insufficient experience and knowledge to manage them effectively. The research identified the following groups of global risk factors as having a significant impact on cost performance: estimator-related, project-related, fraudulent practice-related, competition-related, construction-related, economy-related and politics-related factors. The model was tested for validity through a panel of validators (experts) and cross-sectional case studies, and the general conclusion was that it could provide valuable assistance in the management of global risk factors since it is effective, efficient, flexible and user-friendly. The findings stress the need to depart from traditional approaches and to explore new directions in order to equip contractors with effective risk management tools.
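The fuzzy-logic component mentioned in this abstract rates ill-defined risk factors through linguistic terms rather than crisp numbers. A minimal sketch of that idea, with hypothetical triangular membership functions over a 0-10 risk score (the term boundaries are illustrative, not taken from the thesis):

```python
# Sketch of fuzzy rating of a risk factor: a crisp expert score is mapped to
# degrees of membership in linguistic terms. Boundaries are hypothetical.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_risk(score):
    """Membership of a 0-10 risk score in each linguistic term."""
    return {
        "low": tri(score, -1, 0, 5),
        "medium": tri(score, 2, 5, 8),
        "high": tri(score, 5, 10, 11),
    }

# Example: an estimator-related risk factor judged as 7 out of 10.
print(fuzzy_risk(7))
```

A score of 7 belongs partly to "medium" and partly to "high", which mirrors how contractors express risk judgements based on experience and intuition rather than point estimates.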

    A risk management system for healthcare facilities service operators

    The 24-hour post-modern society in which the NHS delivers healthcare today in the UK as a business has resulted in purchasers and providers of non-clinical/FM services continuing to face more and more service delivery and operational risks (Payne and Rees, 1999). These business risks are mainly caused by uncertainties in the customer supply and demand service chain, limited support resources (human, capital, modern healthcare facilities and information technology) and the dynamic NHS servicescape (environment). This has resulted in non-clinical service decisions being reached in an ad-hoc manner and often with no effective business strategy. Furthermore, this approach has led to disastrous business planning and caring consequences, particularly in a highly politicised and consumer-sensitive environment like healthcare service provision (Wagstaff, 1997). These risks are also largely attributed to the apparent lack of best practice guidelines available to assist FM service operators in identifying and managing non-clinical service operations effectively. In addition, there is evidence from NHS literature that clearly indicates the lack of best practice models for managing business risks associated with hotel, estates and site (non-clinical/FM) services delivery (Okoroh et al., 2000; DoH, 1999; CFM, 1993; Smith, 1997; Featherstone, 1999; HFN 17, 1998). To date, no research has been carried out in the NHS using FM service operators' (domain experts') knowledge to develop an integrated risk management system for managing non-clinical services using modern business approaches. This thesis presents research findings from healthcare executives and FM experts on the business risks faced by service operators (purchasers and providers) when managing non-clinical services effectively in the UK NHS.
The research methods used were a detailed analysis of a best-practice hospital case study, structured interviews with domain healthcare FM experts, pilot and major questionnaire surveys, and repertory grid interviews. The research has established that in managing non-clinical/FM services in the NHS there are seven major management-related risk classes identified as critical: customer care; financial and economic; commercial; legal; facility-transmitted; business transfer; and corporate. Further research using a second factor analysis established that these classical non-clinical risk factors could be further subdivided into forty-eight (48) constructs/sub-attributes highly rated by healthcare facilities executives. Using these risk factors and sub-attributes, the research has developed a decision support system for risk management that can be used by FM operators to manage business risks in NHS trust hospitals.

    Temporal Information in Data Science: An Integrated Framework and its Applications

    Data science is a well-known buzzword that is in fact composed of two distinct keywords, i.e., data and science. Data itself is of great importance: each analysis task begins from a set of examples. Based on this consideration, the present work starts with the analysis of a real-case scenario: the development of a data warehouse-based decision support system for an Italian contact center company. Then, relying on the information collected in the developed system, a set of machine learning-based analysis tasks were developed to answer specific business questions, such as employee work anomaly detection and automatic call classification. Although these initial applications rely on already available algorithms, as we shall see, some clever analysis workflows also had to be developed. Afterwards, continuously driven by real data and real-world applications, we turned to the question of how to handle temporal information within classical decision tree models. Our research led to the development of J48SS, a decision tree induction algorithm based on Quinlan's C4.5 learner, which is capable of dealing with temporal (e.g., sequential and time series) as well as atemporal (such as numerical and categorical) data within the same execution cycle. The decision tree has been applied to several real-world analysis tasks, proving its worth. A key characteristic of J48SS is its interpretability, an aspect that we specifically addressed through the study of an evolutionary decision tree pruning technique. Next, since a lot of work concerning the management of temporal information has already been done in the automated reasoning and formal verification fields, a natural direction in which to proceed was to investigate how such solutions may be combined with machine learning, following two main tracks.
First, we show, through the development of an enriched decision tree capable of encoding temporal information by means of interval temporal logic formulas, how a machine learning algorithm can successfully exploit temporal logic to perform data analysis. Then, we focus on the opposite direction, i.e., that of employing machine learning techniques to generate temporal logic formulas, considering a natural language processing scenario. Finally, as a conclusive development, the architecture of a system is proposed in which formal methods and machine learning techniques are seamlessly combined to perform anomaly detection and predictive maintenance tasks. Such an integration represents an original, thrilling research direction that may open up new ways of dealing with complex, real-world problems.
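J48SS consumes time series natively during tree induction; a rough sketch of the underlying idea is to summarise each series into atemporal features and let an ordinary tree-style split act on them alongside numerical and categorical attributes. The one-level stump learner and the toy call-volume series below are illustrative stand-ins, not the J48SS algorithm itself:

```python
# Sketch: reduce each time series to atemporal features, then learn a single
# threshold split (a decision stump) over those features. Hypothetical data.

def features(series):
    """Summarise a series into two atemporal features: mean level and trend."""
    mean = sum(series) / len(series)
    trend = series[-1] - series[0]  # crude temporal summary
    return [mean, trend]

def best_stump(X, y):
    """Exhaustively pick the (feature, threshold) split with fewest errors."""
    best = None
    for f in range(len(X[0])):
        for t in sorted({x[f] for x in X}):
            pred = [int(x[f] > t) for x in X]
            errs = min(sum(p != l for p, l in zip(pred, y)),   # direct labelling
                       sum(p == l for p, l in zip(pred, y)))   # flipped labelling
            if best is None or errs < best[0]:
                best = (errs, f, t)
    return best

# Toy labelled series: rising call volume (class 1) vs flat (class 0).
series = [[1, 2, 4, 8], [2, 2, 3, 2], [1, 3, 6, 9], [5, 5, 4, 5]]
labels = [1, 0, 1, 0]
X = [features(s) for s in series]
errs, feat, thresh = best_stump(X, labels)
print(errs, feat, thresh)  # → 0 1 0 (the trend feature separates the classes)
```

The appeal of J48SS, as the abstract notes, is that it avoids this lossy two-stage reduction by handling the temporal data within the same execution cycle as the atemporal attributes.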