10,909 research outputs found
Optimization under Uncertainty: Machine Learning Approach
Data is the new oil. Since the beginning of the 21st century, data has been to our era what oil was to the 18th century: an immensely valuable, largely untapped asset. This paper reviews recent advances in the field of optimization under uncertainty through a modern data lens and highlights key research challenges and the promise of data-driven optimization that organically integrates machine learning and mathematical programming for decision-making under uncertainty. A brief review of classical mathematical programming techniques for hedging against uncertainty is first presented, along with their wide spectrum of applications in Process Systems Engineering. We then provide an introduction to the topic of uncertainty in machine learning, as well as an overview of attempts so far at handling uncertainty in general and formalizing this distinction (between aleatoric and epistemic uncertainty) in particular. In line with the statistical tradition, uncertainty has long been perceived as almost synonymous with standard probability and probabilistic predictions. Yet, due to the steadily increasing relevance of machine learning for practical applications and related issues such as safety requirements, new problems and challenges have recently been identified by machine learning scholars, and these problems may call for new methodological developments.
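A minimal sketch of the data-driven optimization idea this abstract describes, using the classic newsvendor problem and sample average approximation: instead of assuming a known demand distribution, historical observations stand in for it, and the decision minimises average cost over those scenarios. All numbers and function names below are hypothetical illustrations, not from the paper.

```python
# Data-driven decision-making under uncertainty (sample average approximation).
# A newsvendor picks an order quantity q; demand is uncertain, so past
# demand observations act as scenarios approximating its distribution.

def expected_cost(q, demand_samples, overage=1.0, underage=3.0):
    """Average cost of ordering q units across the observed demand scenarios."""
    total = 0.0
    for d in demand_samples:
        total += overage * max(q - d, 0) + underage * max(d - q, 0)
    return total / len(demand_samples)

def best_order(demand_samples, overage=1.0, underage=3.0):
    """Choose the candidate order quantity that minimises average scenario cost.

    For this piecewise-linear cost, an optimum always lies at one of the
    observed demand values, so those suffice as candidates.
    """
    candidates = sorted(set(demand_samples))
    return min(candidates,
               key=lambda q: expected_cost(q, demand_samples, overage, underage))

demand = [40, 55, 60, 62, 70, 75, 80]  # hypothetical past observations
q_star = best_order(demand)            # -> 75 for these samples and costs
```

Because lost sales cost three times as much as leftovers here, the data-driven optimum sits at a high empirical quantile of demand rather than at its mean, which is exactly the kind of hedging behaviour classical stochastic programming formalises.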
The Intuitive Appeal of Explainable Machines
Algorithmic decision-making has become synonymous with inexplicable decision-making, but what makes algorithms so difficult to explain? This Article examines what sets machine learning apart from other ways of developing rules for decision-making and the problem these properties pose for explanation. We show that machine learning models can be both inscrutable and nonintuitive and that these are related, but distinct, properties. Calls for explanation have treated these problems as one and the same, but disentangling the two reveals that they demand very different responses. Dealing with inscrutability requires providing a sensible description of the rules; addressing nonintuitiveness requires providing a satisfying explanation for why the rules are what they are. Existing laws like the Fair Credit Reporting Act (FCRA), the Equal Credit Opportunity Act (ECOA), and the General Data Protection Regulation (GDPR), as well as techniques within machine learning, are focused almost entirely on the problem of inscrutability. While such techniques could allow a machine learning system to comply with existing law, doing so may not help if the goal is to assess whether the basis for decision-making is normatively defensible. In most cases, intuition serves as the unacknowledged bridge between a descriptive account and a normative evaluation. But because machine learning is often valued for its ability to uncover statistical relationships that defy intuition, relying on intuition is not a satisfying approach. This Article thus argues for other mechanisms for normative evaluation. To know why the rules are what they are, one must seek explanations of the process behind a model's development, not just explanations of the model itself.
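One common technical response to inscrutability, of the kind the Article says existing law and techniques focus on, is to approximate an opaque model with a simple, human-readable surrogate rule. The sketch below is a hypothetical illustration (not from the Article): a made-up "black-box" credit scorer is mimicked by a single income threshold, which yields a sensible description of the rules while saying nothing about whether those rules are normatively defensible.

```python
# Surrogate explanation sketch: approximate an opaque scorer with one
# human-readable threshold rule. The black box and applicants are invented.

def black_box(income, debt_ratio):
    """Stand-in for an inscrutable model: opaque internal scoring logic."""
    return 1 if (0.4 * income - 25 * debt_ratio) > 10 else 0

def fit_threshold_rule(samples):
    """Find the income cutoff that best agrees with the black box's decisions."""
    best = (None, -1)
    for cutoff, _ in samples:
        agree = sum(
            (1 if inc >= cutoff else 0) == black_box(inc, dr)
            for inc, dr in samples
        )
        if agree > best[1]:
            best = (cutoff, agree)
    return best  # (threshold, number of matching decisions)

# Hypothetical applicants: (income in $1000s, debt-to-income ratio)
applicants = [(20, 0.5), (30, 0.2), (45, 0.4), (60, 0.1), (80, 0.6), (100, 0.3)]
threshold, agreements = fit_threshold_rule(applicants)
# For this data the rule "approve if income >= 60" matches all 6 decisions.
```

Note what the surrogate does and does not deliver: "approve if income is at least 60" is a perfectly sensible description, yet it offers no account of why income should be the decisive factor, which is the Article's nonintuitiveness problem.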
Overcoming the Digital Tsunami in e-Discovery: is Visual Analysis the Answer?
New technologies are generating potentially discoverable evidence in electronic form in ever increasing volumes. As a result, traditional techniques of document search and retrieval in pursuit of electronic discovery in litigation are becoming less viable. One potential new technological solution to the e-discovery search and retrieval challenge is Visual Analysis (VA). VA is a technology that combines the computational power of the computer with graphical representations of large datasets to enable interactive analytic capabilities. This article provides an overview of VA technology and how it is being applied in the analysis of e-mail and other electronic documents in the field of e-discovery, as well as discussing several challenges and limitations of the technology. The article concludes that VA has the potential to overcome some of the limitations of current search and retrieval techniques, but that addressing the digital tsunami is more likely to be achieved by using VA in combination with other search and retrieval technologies in the context of creating an effective data governance program.
A Nine Month Report on Progress Towards a Framework for Evaluating Advanced Search Interfaces considering Information Retrieval and Human Computer Interaction
This is a nine month progress report detailing my research into supporting users in their search for information, where the questions, results or even their …
Modelling prognostic trajectories in Alzheimer's disease
Progression to dementia due to Alzheimer's Disease (AD) is a long and protracted process that involves multiple pathways of disease pathophysiology. Predicting these dynamic changes has major implications for timely and effective clinical management in AD. There are two reasons why we currently lack appropriate tools to make such predictions. First, a key feature of AD is the interactive nature of the relationships between biomarkers, such as accumulation of β-amyloid (a peptide that builds plaques between nerve cells), tau (a protein found in the axons of nerve cells) and widespread neurodegeneration. Current models fail to capture these relationships because they are unable to successfully reduce the high dimensionality of biomarkers while exploiting informative multivariate relationships. Second, current models focus on simply predicting in a binary manner whether or not an individual will develop dementia due to AD, without informing clinicians about the predicted disease trajectory. This can result in inefficient treatment plans and hinder appropriate stratification for clinical trials. In this thesis, we overcome these challenges by using applied machine learning to build predictive models of patient disease trajectories in the earliest stages of AD. Specifically, to exploit the multi-dimensionality of biomarker data, we used a novel feature generation methodology, Partial Least Squares regression with recursive feature elimination (PLSr-RFE). This hybrid feature selection and feature construction method captures co-morbidities in cognition and pathophysiology, resulting in an index of Alzheimer's disease atrophy from structural MRI. We validated our choice of biomarker and the efficacy of our methodology by showing that the learnt pattern of grey matter atrophy is highly predictive of tau accumulation in an independent sample.
Next, to go beyond predicting binary outcomes and derive individualised prognostic scores of cognitive decline due to AD, we used a novel trajectory modelling approach (Generalised Metric Learning Vector Quantization with scalar projection) that mines multimodal data from large AD research cohorts. Using this approach, we derived individualised prognostic scores of cognitive decline due to AD, revealing interactive cognitive and biological factors that improve prediction accuracy. We then extended our machine learning framework to classify and stage early AD individuals based on future pathological tau accumulation. Our results show that the characteristic spreading pattern of tau in early AD can be predicted from baseline biomarkers, particularly when stratifying groups using multimodal data. Further, we showed that our prognostic index predicts individualised rates of future tau accumulation with high accuracy and regional specificity in an independent sample of cognitively unimpaired individuals. Overall, our work used machine learning to combine continuous information from AD biomarkers, predicting pathophysiological changes at different stages in the AD cascade. The approaches presented in this thesis provide an excellent framework to support personalised clinical interventions and guide effective drug discovery trials.
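A minimal sketch of the Learning Vector Quantization family behind the thesis's trajectory model. This toy version is plain LVQ1 with a fixed Euclidean distance; the thesis's GMLVQ additionally learns a metric and projects onto a prognostic axis, which is not reproduced here. The two synthetic "biomarker" clusters and their labels are invented for illustration.

```python
# LVQ1 sketch: class prototypes are pulled toward same-class points and
# pushed away from other-class points; prediction is nearest-prototype.

def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def train_lvq(data, labels, prototypes, proto_labels, lr=0.1, epochs=50):
    """Update the winning prototype for each sample (LVQ1 rule)."""
    for _ in range(epochs):
        for x, y in zip(data, labels):
            k = min(range(len(prototypes)), key=lambda i: dist2(x, prototypes[i]))
            sign = 1.0 if proto_labels[k] == y else -1.0
            prototypes[k] = [p + sign * lr * (xi - p)
                             for p, xi in zip(prototypes[k], x)]
    return prototypes

def predict(x, prototypes, proto_labels):
    """Label of the nearest prototype."""
    k = min(range(len(prototypes)), key=lambda i: dist2(x, prototypes[i]))
    return proto_labels[k]

# Two made-up "biomarker" clusters: stable vs declining trajectories.
data = [[0.1, 0.2], [0.2, 0.1], [0.15, 0.15],
        [0.9, 0.8], [0.8, 0.9], [0.85, 0.85]]
labels = ["stable", "stable", "stable", "decline", "decline", "decline"]
protos = train_lvq(data, labels, [[0.3, 0.3], [0.7, 0.7]], ["stable", "decline"])
```

After training, `predict([0.0, 0.1], protos, ["stable", "decline"])` returns `"stable"`. GMLVQ's appeal for prognosis is that the learned prototypes and metric remain interpretable, unlike many black-box classifiers.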
Games of Collaboration: An Ethnographic Examination of Experts Acting Seriously
This paper looks at the theme of collaboration through the prism of game design, and especially the example of serious games. At its heart is a consideration of two collaborative projects between experts. The first is a current collaboration between computer scientists, game designers and a theatre company in Scotland, in which the author is also a collaborator and the project's ethnographer. The second is perhaps the largest and most high-profile collaborative project recently led and documented by anthropologists, Meridian 180, which aims to experiment with the norms of collaboration itself, and which has already been theorised and extensively reflected upon by one of its founders, Annelise Riles. The paper aims to put these two collaborations into some kind of conversation in order to throw each into productive relief and to ask some new questions about how we think about both the exercise of collaboration and the deliberate subversion of its norms.
Keywords: collaboration, serious games, co-operation, experts, rules, friendship
Peer reviewed
A framework for knowledge discovery within business intelligence for decision support
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University.
Business Intelligence (BI) techniques provide the potential not only to efficiently manage but also to analyse and apply the collected information in an effective manner. Benefiting from research within both industry and academia, BI provides functionality for accessing, cleansing, transforming, analysing and reporting organisational datasets. This provides further opportunities for the data to be explored, assisting organisations in the discovery of the correlations, trends and patterns that lie hidden within the data. This hidden information can be employed to provide an insight into opportunities to make an organisation more competitive by allowing managers to make more informed decisions and, as a result, corporate resources to be optimally utilised. This potential insight provides organisations with an unrivalled opportunity to remain abreast of market trends. Consequently, BI techniques provide significant opportunity for integration with Decision Support Systems (DSS). The gap identified within the current body of knowledge, which motivated this research, is that no suitable framework for BI exists that can be applied at a meta-level and is therefore tool, technology and domain independent. To address this gap, this study proposes a meta-level framework, "KDDS-BI", which can be applied at an abstract level to structure a BI investigation, irrespective of the end user. KDDS-BI not only facilitates the selection of suitable techniques for BI investigations, reducing the reliance upon ad-hoc investigative approaches based on "trial and error", but further integrates Knowledge Management (KM) principles to ensure the retention and transfer of knowledge, providing a structured approach to building DSS upon the principles of BI.
In order to evaluate and validate the framework, KDDS-BI has been investigated through three distinct case studies. First, KDDS-BI facilitates the integration of BI within direct marketing to provide innovative solutions for analysis based upon the most suitable BI technique. Secondly, KDDS-BI is investigated within sales promotion, to facilitate the selection of tools and techniques for more focused in-store marketing campaigns and to increase revenue through the discovery of hidden data. Finally, operations management is analysed within the highly dynamic and unstructured environment of the London Underground Ltd. network through a unique BI solution for organising and managing resources, thereby increasing the efficiency of business processes. The three case studies provide insight into how KDDS-BI lends structure to the integration of BI within business processes, and the opportunity to analyse the performance of KDDS-BI within three independent environments for distinct purposes, thereby validating and corroborating the proposed framework and adding value to business processes.