Approaches to decision making
This book is designed as a brief introduction to decision making in work settings. It is intended for use in graduate courses and should be supported by a wide range of additional reading materials and practical exercises. The approach is multi-disciplinary and pluralistic: there are many perspectives from which decision making may be viewed, and decision making differs widely between individuals and between contexts.
The book is intended to raise awareness of the many issues and the high complexity attaching to important decisions. It may or may not help the reader become a better decision maker; that outcome depends as much on personal desire, the availability of resources such as time, and the pressures of the situation as on anything else. However, it is hoped that readers accustomed to the traditional focus on 'rational' decision making will quickly learn that decision making is a complex, many-faceted activity.
The text is divided into six modules or parts, each looking at a specific aspect of decision making in organisations. Module 1 looks at some important philosophical issues and introduces the 'conventional' theories based in economics and sociology. Theoretical and empirical explanations of the decision process are examined in Module 2. Module 3 explores some of the aids to decision making. The individual as decision maker is the subject of Module 4, and Module 5 examines group decision making behaviours. Module 6 is a review, and suggests some of the implications and consequences of a course of study into decision making.
Interpretable Deep Learning: Beyond Feature-Importance with Concept-based Explanations
Deep Neural Network (DNN) models are challenging to interpret because of their highly complex and non-linear nature. This lack of interpretability (1) inhibits adoption within safety-critical applications, (2) makes it challenging to debug existing models, and (3) prevents us from extracting valuable knowledge. Explainable AI (XAI) research aims to increase the transparency of DNN model behaviour to improve interpretability. Feature importance explanations are the most popular interpretability approaches. They show the importance of each input feature (e.g., pixel, patch, word vector) to the model’s prediction. However, we hypothesise that feature importance explanations have two main shortcomings: they cannot describe the complexity of DNN behaviour with sufficient (1) fidelity and (2) richness. Fidelity and richness are essential because different tasks, users, and data types require specific levels of trust and understanding.
The goal of this thesis is to showcase the shortcomings of feature importance explanations and to develop explanation techniques that describe the DNN behaviour with greater richness. We design an adversarial explanation attack to highlight the infidelity and inadequacy of feature importance explanations. Our attack modifies the parameters of a pre-trained model. It uses fairness as a proxy measure for the fidelity of an explanation method to demonstrate that the apparent importance of a feature does not reveal anything reliable about the fairness of a model. Hence, regulators or auditors should not rely on feature importance explanations to measure or enforce standards of fairness.
As one solution, we formulate five different levels of the semantic richness of explanations to evaluate explanations and propose two function decomposition frameworks (DGINN and CME) to extract explanations from DNNs at a semantically higher level than feature importance explanations. Concept-based approaches provide explanations in terms of atomic human-understandable units (e.g., wheel or door) rather than individual raw features (e.g., pixels or characters). Our function decomposition frameworks can extract specific class representations from 5% of the network parameters and concept representations with an average per-concept F1 score of 86%. Finally, the CME framework makes it possible to compare concept-based explanations, contributing to the scientific rigour of evaluating interpretability methods. The author gratefully acknowledges the generous sponsorship of the Engineering and Physical Sciences Research Council (EPSRC), the Department of Computer Science and Technology at the University of Cambridge, and Tenyks, Inc.
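For context on what a feature importance explanation looks like in practice, the following is a minimal sketch of one common variant, gradient saliency; the classifier, input, and target class are illustrative placeholders, and this is not the concept-based DGINN/CME approach developed in the thesis.

```python
# Minimal sketch of a feature-importance (gradient saliency) explanation in
# PyTorch. Model, input, and target class are illustrative placeholders; this
# is not the thesis's concept-based DGINN/CME method.
import torch
import torchvision.models as models

model = models.resnet18(weights=None).eval()          # any image classifier
x = torch.randn(1, 3, 224, 224, requires_grad=True)   # stand-in for an image
target_class = 0

score = model(x)[0, target_class]   # class score for the chosen target
score.backward()

# Per-pixel importance: magnitude of the gradient of the class score,
# reduced over colour channels so each pixel gets a single value.
saliency = x.grad.abs().max(dim=1).values   # shape (1, 224, 224)
```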
From Data to Software to Science with the Rubin Observatory LSST
The Vera C. Rubin Observatory Legacy Survey of Space and Time (LSST) dataset will dramatically alter our understanding of the Universe, from the origins of the Solar System to the nature of dark matter and dark energy. Much of this research will depend on the existence of robust, tested, and scalable algorithms, software, and services. Identifying and developing such tools ahead of time has the potential to significantly accelerate the delivery of early science from LSST. Developing these collaboratively, and making them broadly available, can enable more inclusive and equitable collaboration on LSST science.
To facilitate such opportunities, a community workshop entitled "From Data to Software to Science with the Rubin Observatory LSST" was organized by the LSST Interdisciplinary Network for Collaboration and Computing (LINCC) and partners, and held at the Flatiron Institute in New York, March 28-30, 2022. The workshop included over 50 in-person attendees invited from over 300 applications. It identified seven key software areas of need: (i) scalable cross-matching and distributed joining of catalogs, (ii) robust photometric redshift determination, (iii) software for determination of selection functions, (iv) frameworks for scalable time-series analyses, (v) services for image access and reprocessing at scale, (vi) object image access (cutouts) and analysis at scale, and (vii) scalable job execution systems.
This white paper summarizes the discussions of this workshop. It considers the motivating science use cases, identified cross-cutting algorithms, software, and services, their high-level technical specifications, and the principles of inclusive collaborations needed to develop them. We provide it as a useful roadmap of needs, as well as to spur action and collaboration between groups and individuals looking to develop reusable software for early LSST science.
Comment: White paper from the "From Data to Software to Science with the Rubin Observatory LSST" workshop.
Deep Interpretability Methods for Neuroimaging
Brain dynamics are highly complex and yet hold the key to understanding brain function and dysfunction. The dynamics captured by resting-state functional magnetic resonance imaging data are noisy, high-dimensional, and not readily interpretable. The typical approach of reducing this data to low-dimensional features and focusing on the most predictive features comes with strong assumptions and can miss essential aspects of the underlying dynamics. In contrast, introspection of discriminatively trained deep learning models may uncover disorder-relevant elements of the signal at the level of individual time points and spatial locations. Nevertheless, the difficulty of reliable training on high-dimensional but small-sample datasets and the unclear relevance of the resulting predictive markers prevent the widespread use of deep learning in functional neuroimaging. In this dissertation, we address these challenges by proposing a deep learning framework that learns from high-dimensional dynamical data while maintaining stable, ecologically valid interpretations. The developed model is pre-trainable and alleviates the need to collect an enormous number of neuroimaging samples to achieve optimal training.
We also provide a quantitative validation module, Retain and Retrain (RAR), that can objectively verify the higher predictability of the dynamics learned by the model. Results demonstrate that the proposed framework enables learning fMRI dynamics directly from small data and capturing compact, stable interpretations of features predictive of function and dysfunction. We also comprehensively review the deep interpretability literature in the neuroimaging domain. Our analysis reveals ongoing trends in interpretability practice in neuroimaging studies and identifies the gaps that should be addressed for effective human-machine collaboration in this domain.
This dissertation also proposes a post hoc interpretability method, Geometrically Guided Integrated Gradients (GGIG), that leverages geometric properties of the functional space as learned by a deep learning model. With extensive experiments and quantitative validation on the MNIST and ImageNet datasets, we demonstrate that GGIG outperforms integrated gradients (IG), a popular interpretability method in the literature. Because GGIG can identify the contours of the discriminative regions in the input space, it may be useful in various medical imaging tasks where fine-grained localization as an explanation is beneficial.
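For context, the following is a minimal sketch of the baseline method, integrated gradients, against which GGIG is compared; it is a generic Riemann-sum approximation of the published IG estimator, and the model, baseline, and target class are assumptions rather than the dissertation's own code.

```python
# Minimal sketch of vanilla integrated gradients (IG) in PyTorch:
# IG_i(x) = (x_i - x0_i) * integral_0^1 dF_c/dx_i(x0 + a(x - x0)) da,
# approximated with a Riemann sum. Model, baseline, and target are placeholders.
import torch

def integrated_gradients(model, x, baseline, target, steps=50):
    # Interpolate along the straight path from the baseline x0 to the input x.
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, *([1] * x.dim()))
    path = (baseline + alphas * (x - baseline)).detach().requires_grad_(True)
    scores = model(path)[:, target]                   # class score along the path
    grads = torch.autograd.grad(scores.sum(), path)[0]
    return (x - baseline) * grads.mean(dim=0)         # attribution, same shape as x

# e.g. for a single image tensor `image` of shape (3, 224, 224):
# attribution = integrated_gradients(model, image, torch.zeros_like(image), target=7)
```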
Human Factors Considerations in System Design
Human factors considerations in systems design are examined. Human factors in automated command and control, in the efficiency of the human-computer interface, and in system effectiveness are outlined. The following topics are discussed: human factors aspects of control room design; design of interactive systems; human-computer dialogue, interaction tasks, and techniques; guidelines on ergonomic aspects of control rooms and highly automated environments; system engineering for control by humans; conceptual models of information processing; and information display and interaction in real-time environments.
Security, Privacy, and Transparency Guarantees for Machine Learning Systems
Machine learning (ML) is transforming a wide range of applications, promising to bring immense economic and social benefits. However, it also raises substantial security, privacy, and transparency challenges. ML workloads push companies toward aggressive data collection and loose data access policies, placing troves of sensitive user information at risk if the company is hacked. ML also introduces new attack vectors, such as adversarial example attacks, which can completely nullify a model’s accuracy under attack. Finally, ML models make complex data-driven decisions that are opaque to end-users and difficult for programmers to inspect. In this dissertation we describe three systems we developed. Each system addresses one dimension of these challenges by combining new practical systems techniques with rigorous theory to achieve a guaranteed level of protection and to make systems easier to understand. First, we present Sage, a differentially private ML platform that enforces a meaningful protection semantic for the troves of personal information amassed by today’s companies. Second, we describe PixelDP, a defense against adversarial examples that leverages differential privacy theory to provide a guaranteed level of accuracy under attack. Third, we introduce Sunlight, a tool to enhance the transparency of opaque targeting services, using rigorous causal inference theory to explain targeting decisions to end-users.
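As background for the differential privacy guarantees mentioned above, here is a minimal sketch of the Laplace mechanism, the standard additive-noise building block of differential privacy; it only illustrates the underlying theory and is not the actual mechanism implemented in Sage or PixelDP.

```python
# Minimal sketch of the Laplace mechanism, the basic building block of
# epsilon-differential privacy. Illustrative only; not Sage's or PixelDP's code.
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    # Noise scale grows with query sensitivity and shrinks with the privacy
    # budget epsilon: smaller epsilon means stronger privacy but noisier answers.
    rng = rng or np.random.default_rng()
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# e.g. release a count query (sensitivity 1) under a privacy budget of 0.1
private_count = laplace_mechanism(true_value=1234, sensitivity=1.0, epsilon=0.1)
```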