6,552 research outputs found

    Augmented Human Machine Intelligence for Distributed Inference

    Get PDF
    With the advent of the internet of things (IoT) era and the extensive deployment of smart devices and wireless sensor networks (WSNs), interactions between humans and machine data are everywhere. In numerous applications, humans are essential parts of the decision making process, where they may either serve as information sources or act as the final decision makers. For various tasks including detection and classification of targets, detection of outliers, generation of surveillance patterns and interactions between entities, seamless integration of human and machine expertise is required, where they work simultaneously within the same modeling environment to understand and solve problems. Efficient fusion of information from both human and sensor sources is expected to improve system performance and enhance situational awareness. Such human-machine inference networks seek to build an interactive human-machine symbiosis by merging the best of the human with the best of the machine, and to achieve higher performance than either humans or machines by themselves. In this dissertation, we consider that people often have a number of biases and rely on heuristics when exposed to different kinds of uncertainties, e.g., limited information versus unreliable information. We develop novel theoretical frameworks for collaborative decision making in complex environments when the observers may include both humans and physics-based sensors. We address fundamental concerns such as uncertainties and cognitive biases in human decision making, and derive human decision rules for binary decision making. We model decision making by generic humans working in complex networked environments that feature uncertainties, and develop new approaches and frameworks facilitating collaborative human decision making and cognitive multi-modal fusion. The first part of this dissertation exploits the behavioral economics concept of Prospect Theory to study the behavior of human binary decision making under cognitive biases. Several decision making systems involving human participation are discussed, and we show the impact of human cognitive biases on decision making performance. We analyze how heterogeneity can affect the performance of collaborative human decision making in the presence of complex correlation relationships among the behavior of humans, and design a human selection strategy at the population level. Next, we employ Prospect Theory to model the rationality of humans and accurately characterize their behavior in answering binary questions. We design a weighted majority voting rule to solve classification problems via crowdsourcing while considering that the crowd may include some spammers. We also propose a novel sequential task ordering algorithm to improve system performance for classification in crowdsourcing composed of unreliable human workers. In the second part of the dissertation, we study the behavior of cognitive, memory-limited humans in binary decision making and develop efficient approaches to help memory-constrained humans make better decisions. We show that the order in which information is presented to humans impacts their decision making performance. Next, we consider the selfish behavior of humans and construct a unified incentive mechanism for IoT-based inference systems while addressing the selfish concerns of the participants. We derive the optimal amount of energy that a selfish sensor involved in the signal detection task must spend in order to maximize a certain utility function, in the presence of buyers who value the result of the signal detection carried out by the sensor. Finally, we design a human-machine collaboration framework that blends both machine observations and human expertise to solve binary hypothesis testing problems semi-autonomously. In networks featuring human-machine teaming/collaboration, it is critical to coordinate and synthesize the operations of the humans and machines (e.g., robots and physical sensors). Machine measurements affect human behaviors, actions, and decisions; human behavior, in turn, defines the optimal decision-making algorithm for human-machine networks. In today's era of artificial intelligence, we not only aim to exploit augmented human-machine intelligence to ensure accurate decision making, but also to expand intelligent systems so as to assist and improve such intelligence.
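    To make the crowdsourcing fusion rule mentioned in this abstract concrete, here is a minimal, hypothetical sketch: a Prelec-style probability weighting function (one common Prospect Theory form) and a log-odds weighted majority vote over binary worker answers. The function names, parameter values, and reliability estimates are illustrative assumptions, not the dissertation's actual algorithm.

```python
import numpy as np

def prelec_weight(p, alpha=0.65, beta=1.0):
    """Prospect-theoretic probability weighting (Prelec form): overweights
    small probabilities and underweights large ones. Parameters are illustrative."""
    p = np.clip(p, 1e-12, 1.0)
    return np.exp(-beta * (-np.log(p)) ** alpha)

def weighted_majority_vote(answers, reliabilities):
    """Fuse binary answers (+1/-1) from crowd workers, weighting each worker
    by the log-odds of its estimated reliability; spammers (reliability near
    0.5) receive weights near zero."""
    r = np.clip(np.asarray(reliabilities, dtype=float), 1e-6, 1 - 1e-6)
    w = np.log(r / (1 - r))
    return 1 if np.dot(w, np.asarray(answers, dtype=float)) >= 0 else -1

# Example: a small probability is overweighted, and three fairly reliable
# workers outweigh two coin-flipping spammers.
print(prelec_weight(0.05))
print(weighted_majority_vote([+1, +1, -1, -1, +1], [0.9, 0.8, 0.5, 0.5, 0.85]))
```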

    Beliefs and expertise in sequential decision making

    Full text link
    This work explores a sequential decision making problem with agents having diverse expertise and mismatched beliefs. We consider an N-agent sequential binary hypothesis test in which each agent sequentially makes a decision based not only on a private observation, but also on previous agents' decisions. In addition, the agents have their own beliefs instead of the true prior, and have varying expertise in terms of the noise variance in the private signal. We focus on the risk of the last-acting agent, where the preceding agents are selfish. Thus, we call this advisor(s)-advisee sequential decision making. We first derive the optimal decision rule by recursive belief update and conclude, counterintuitively, that beliefs deviating from the true prior could be optimal in this setting. The impact of diverse noise levels (i.e., diverse expertise levels) in the two-agent case is also considered, and the analytical properties of the optimal belief curves are given. These curves, for certain cases, resemble probability weighting functions from cumulative prospect theory, and so we also discuss the choice of Prelec weighting functions as an approximation for the optimal beliefs, and the possible psychophysical optimality of human beliefs. Next, we consider an advisor selection problem wherein an advisee with a certain belief chooses an advisor from a set of candidates with varying beliefs. We characterize the decision region for choosing such an advisor and argue that an advisee with beliefs varying from the true prior often ends up selecting a suboptimal advisor, indicating the need for a social planner. We close with a discussion on the implications of the study toward designing artificial intelligence systems for augmenting human intelligence. https://arxiv.org/abs/1812.04419 First author draft
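    As a rough illustration of the advisor-advisee structure described above, the following sketch simulates a two-agent sequential binary hypothesis test with Gaussian observations: the advisor decides from its private signal using a possibly mismatched belief, and the advisee fuses that decision with its own signal via a Bayesian update. This is a toy model under assumed parameters (unit-energy hypotheses, advisee using the true prior), not the paper's derivation; function names are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def advisor_threshold(belief, s):
    """Advisor declares H1 when x exceeds this likelihood-ratio threshold
    (H0: x ~ N(0, s^2) vs H1: x ~ N(1, s^2))."""
    return 0.5 - s**2 * np.log(belief / (1 - belief))

def advisee_risk(advisor_belief, true_prior=0.5, s1=1.0, s2=1.0,
                 n=200_000, seed=0):
    """Monte Carlo error probability of the last-acting (advisee) agent."""
    rng = np.random.default_rng(seed)
    h = (rng.random(n) < true_prior).astype(int)
    x1 = h + s1 * rng.standard_normal(n)            # advisor's private signal
    x2 = h + s2 * rng.standard_normal(n)            # advisee's private signal

    t = advisor_threshold(advisor_belief, s1)
    u1 = (x1 > t).astype(int)                       # advisor's decision

    # Likelihood of the advisor's decision under each hypothesis.
    p1 = np.where(u1 == 1, norm.sf((t - 1) / s1), norm.cdf((t - 1) / s1))
    p0 = np.where(u1 == 1, norm.sf(t / s1), norm.cdf(t / s1))

    # Advisee's log-posterior odds: true prior + advisor decision + own signal.
    log_odds = (np.log(true_prior / (1 - true_prior))
                + np.log(p1 / p0)
                + (x2 - 0.5) / s2**2)
    return np.mean((log_odds > 0).astype(int) != h)

# Compare the advisee's risk for several advisor beliefs; the paper shows that
# beliefs deviating from the true prior can be optimal in such settings.
for b in (0.3, 0.5, 0.7):
    print(f"advisor belief {b:.1f}: advisee risk {advisee_risk(b):.4f}")
```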

    Internet source evaluation: The role of implicit associations and psychophysiological self-regulation

    Get PDF
    This study focused on middle school students' source evaluation skills as a key component of digital literacy. Specifically, it examined the role of two unexplored individual factors that may affect the evaluation of sources providing information about the controversial topic of the health risks associated with the use of mobile phones. The factors were the implicit association of the mobile phone with health or no health, and psychophysiological self-regulation as reflected in basal Heart Rate Variability (HRV). Seventy-two seventh graders read six webpages that provided contrasting information on the unsettled topic of the potential health risks related to the use of mobile phones. They were then asked to rank-order the six websites along the dimension of reliability (source evaluation). Findings revealed that students were able to discriminate between the most and least reliable websites, justifying their ranking in light of different criteria. Overall, however, they were not very accurate in rank-ordering all six Internet sources. Both implicit associations and HRV correlated with source evaluation. The interaction between the two individual variables was a significant predictor of participants' performance in rank-ordering the websites for reliability. A slope analysis revealed that when students had average psychophysiological self-regulation, the stronger their association of the mobile phone with health, the better their performance on source evaluation. The theoretical and educational significance of the study is discussed.

    Types of approximation for probabilistic cognition : sampling and variational

    Get PDF
    A basic challenge for probabilistic models of cognition is explaining how probabilistically correct solutions are approximated by the limited brain, and how mismatches with human behavior can be explained. An emerging approach to solving this problem is to use the same approximation algorithms that have been developed in computer science and statistics for working with complex probabilistic models. Two types of approximation algorithms have been used for this purpose: sampling algorithms, such as importance sampling and Markov chain Monte Carlo, and variational algorithms, such as mean-field approximations and assumed density filtering. Here I briefly review this work, outlining how the algorithms work, how they can explain behavioral biases, and how they might be implemented in the brain. There are characteristic differences between how these two types of approximation are applied in brain and behavior, which points to how they could be combined in future research.
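    The two algorithm families reviewed here can be contrasted on a toy problem. The sketch below is illustrative only: it approximates the non-conjugate posterior of a one-parameter logistic regression with (i) self-normalized importance sampling and (ii) a Gaussian variational fit chosen by a crude grid search over a Monte Carlo ELBO. The model, data, and grid are assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 1-D logistic regression y ~ Bernoulli(sigmoid(w * x)),
# Gaussian prior w ~ N(0, 1); the posterior over w is non-conjugate.
x = rng.standard_normal(50)
w_true = 1.5
y = (rng.random(50) < 1 / (1 + np.exp(-w_true * x))).astype(float)

def log_joint(w):
    """log p(y, w) = Bernoulli log-likelihood + Gaussian log-prior (up to a constant)."""
    logits = np.outer(np.atleast_1d(w), x)
    ll = np.sum(y * logits - np.logaddexp(0.0, logits), axis=1)
    return ll - 0.5 * np.atleast_1d(w) ** 2

# --- Sampling approximation: self-normalized importance sampling ---
w_samp = rng.standard_normal(20_000)                # proposal = prior N(0, 1)
log_w = log_joint(w_samp) - (-0.5 * w_samp**2)      # weights = likelihood only
weights = np.exp(log_w - log_w.max())
weights /= weights.sum()
is_mean = np.sum(weights * w_samp)

# --- Variational approximation: Gaussian q(w) = N(m, s^2), fit by grid-search ELBO ---
eps = rng.standard_normal(500)                      # common base samples
best, best_elbo = None, -np.inf
for m in np.linspace(-1, 3, 41):
    for s in np.linspace(0.05, 1.0, 20):
        elbo = np.mean(log_joint(m + s * eps)) + 0.5 * np.log(2 * np.pi * np.e * s**2)
        if elbo > best_elbo:
            best, best_elbo = (m, s), elbo

print(f"importance-sampling posterior mean: {is_mean:.3f}")
print(f"variational posterior: N({best[0]:.3f}, {best[1]:.3f}^2)")
```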

    The dual process account of reasoning: historical roots, problems and perspectives.

    Get PDF
    Despite the great effort that has been dedicated to the attempt to redefine expected utility theory on the grounds of new assumptions, modifying or moderating some axioms, none of the alternative theories propounded so far has received statistical confirmation over the full domain of applicability. Moreover, the discrepancy between prescriptions and behaviors is not limited to expected utility theory. In two other fundamental fields, probability and logic, substantial evidence shows that human activities deviate from the prescriptions of the theoretical models. The paper suggests that the discrepancy cannot be ascribed to an imperfect axiomatic description of human choice, but to some more general features of human reasoning, and adopts the “dual-process account of reasoning” as a promising explanatory key. This line of thought is based on the distinction between the process of deliberate reasoning and that of intuition, where, to a first approximation, “intuition” denotes a mental activity that is largely automatized and inaccessible to conscious mental activity. The analysis of the interactions between these two processes provides the basis for explaining the persistence of the gap between normative and behavioral patterns. This view will be explored in the following pages: central consideration will be given to the problem of the interactions between rationality and intuition, and to the related “modularity” of thought.

    Training methods for facial image comparison: a literature review

    Get PDF
    This literature review was commissioned to explore the psychological literature relating to facial image comparison, with a particular emphasis on whether individuals can be trained to improve performance on this task. Surprisingly few studies have addressed this question directly. As a consequence, this review has been extended to cover training of face recognition and training of different kinds of perceptual comparisons where we are of the opinion that the methodologies or findings of such studies are informative. The majority of studies of face processing have examined face recognition, which relies heavily on memory. This may be memory for a face that was learned recently (e.g. minutes or hours previously) or for a face learned longer ago, perhaps after many exposures (e.g. friends, family members, celebrities). Successful face recognition, irrespective of the type of face, relies on the ability to retrieve the to-be-recognised face from long-term memory. This memory is then compared to the physically present image to reach a recognition decision. In contrast, in a face matching task two physical representations of a face (live, photographs, movies) are compared, and so long-term memory is not involved. Because the comparison is between two present stimuli rather than between a present stimulus and a memory, one might expect that face matching, even if not an easy task, would be easier to do and easier to learn than face recognition. In support of this, there is evidence that judgment tasks where a presented stimulus must be judged against a remembered standard are generally more cognitively demanding than judgments that require comparing two presented stimuli (Davies & Parasuraman, 1982; Parasuraman & Davies, 1977; Warm & Dember, 1998). Is there enough overlap between face recognition and matching that it is useful to look at the recognition literature? No study has directly compared face recognition and face matching, so we turn to research in which people decided whether two non-face stimuli were the same or different. In these studies, accuracy of comparison is not always better when the comparator is present than when it is remembered. Further, all perceptual factors that were found to affect comparisons of simultaneously presented objects also affected comparisons of successively presented objects in qualitatively the same way. Those studies involved judgments about colour (Newhall, Burnham & Clark, 1957; Romero, Hita & Del Barco, 1986) and shape (Larsen, McIlhagga & Bundesen, 1999; Lawson, Bülthoff & Dumbell, 2003; Quinlan, 1995). Although one must be cautious in generalising from studies of object processing to studies of face processing (see, e.g., the section comparing face processing to object processing), these kinds of studies provide no evidence to suggest that there are qualitative differences in the perceptual aspects of how recognition and matching are done. As a result, this review will include studies of face recognition skill as well as face matching skill. The distinction between face recognition involving memory and face matching not involving memory is clouded in many recognition studies that require observers to decide which of many presented faces matches a remembered face (e.g., eyewitness studies). And of course there are other forensic face-matching tasks that will require comparison to both presented and remembered comparators (e.g., deciding whether any person in a video showing a crowd is the target person). For this reason, too, we choose to include studies of face recognition as well as face matching in our review.

    Efficient Beam Alignment in Millimeter Wave Systems Using Contextual Bandits

    Full text link
    In this paper, we investigate the problem of beam alignment in millimeter wave (mmWave) systems and design an optimal algorithm to reduce the overhead. Specifically, due to directional communications, the transmitter and receiver beams need to be aligned, which incurs high delay overhead since, without a priori knowledge of the transmitter/receiver location, the search space spans the entire angular domain. This is further exacerbated under dynamic conditions (e.g., moving vehicles) where access to the base station (access point) is highly dynamic, with intermittent on-off periods, requiring more frequent beam alignment and signal training. To mitigate this issue, we consider an online stochastic optimization formulation where the goal is to maximize the directivity gain (i.e., received energy) of the beam alignment policy within a time period. We exploit the inherent correlation and unimodality properties of the model, and demonstrate that contextual information improves performance. To this end, we propose an equivalent structured multi-armed bandit model to optimally exploit the exploration-exploitation tradeoff. In contrast to classical MAB models, the contextual information makes the lower bound on regret (i.e., the performance loss compared with an oracle policy) independent of the number of beams. This is a crucial property since the number of possible beam pattern combinations can be large in transceiver antenna arrays, especially in massive MIMO systems. We further provide an asymptotically optimal beam alignment algorithm and investigate its performance via simulations. Comment: To appear in IEEE INFOCOM 2018. arXiv admin note: text overlap with arXiv:1611.05724 by other authors
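    For orientation, the following sketch runs a plain UCB1 bandit over a small set of candidate beams with a unimodal gain profile. It is a generic baseline under assumed parameters (16 beams, Gaussian noise on the received-energy samples), not the structured contextual algorithm proposed in the paper, whose regret bound is independent of the number of beams.

```python
import numpy as np

rng = np.random.default_rng(1)

K, T = 16, 5_000
true_gain = np.exp(-0.5 * ((np.arange(K) - 9) / 2.0) ** 2)   # unimodal beam gain profile

counts = np.zeros(K)
means = np.zeros(K)
reward_total = 0.0

for t in range(1, T + 1):
    if t <= K:                                    # play each beam once to initialize
        arm = t - 1
    else:                                         # UCB1 index: mean + exploration bonus
        ucb = means + np.sqrt(2 * np.log(t) / counts)
        arm = int(np.argmax(ucb))
    reward = true_gain[arm] + 0.1 * rng.standard_normal()    # noisy received-energy sample
    counts[arm] += 1
    means[arm] += (reward - means[arm]) / counts[arm]
    reward_total += reward

regret = T * true_gain.max() - reward_total
print(f"best beam found: {int(np.argmax(means))}, empirical regret: {regret:.1f}")
```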

    Faster than thought: Detecting sub-second activation sequences with sequential fMRI pattern analysis

    No full text

    Computational Rationality: Linking Mechanism and Behavior Through Bounded Utility Maximization

    Full text link
    We propose a framework for including information‐processing bounds in rational analyses. It is an application of bounded optimality (Russell & Subramanian, 1995) to the challenges of developing theories of mechanism and behavior. The framework is based on the idea that behaviors are generated by cognitive mechanisms that are adapted to the structure of not only the environment but also the mind and brain itself. We call the framework computational rationality to emphasize the incorporation of computational mechanism into the definition of rational action. Theories are specified as optimal program problems, defined by an adaptation environment, a bounded machine, and a utility function. Such theories yield different classes of explanation, depending on the extent to which they emphasize adaptation to bounds and adaptation to some ecology that differs from the immediate local environment. We illustrate this variation with examples from three domains: visual attention in a linguistic task, manual response ordering, and reasoning. We explore the relation of this framework to existing “levels” approaches to explanation, and to other optimality‐based modeling approaches. Peer reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/106911/1/tops12086.pd
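    A toy version of an "optimal program problem" may help fix ideas: select, from a small space of candidate programs runnable on a bounded machine, the one maximizing expected utility over an environment distribution. The task, program space, and cost figures below are hypothetical illustrations, not the framework's own examples.

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_utility(n_samples, threshold, cost_per_sample=0.01, trials=20_000):
    """Monte Carlo expected utility of one candidate program: a noisy binary
    judgment decided by averaging n_samples evidence samples against a threshold,
    with utility = accuracy minus a per-sample time cost."""
    h = rng.random(trials) < 0.5                        # environment: true state
    mean = np.where(h, 0.3, -0.3)
    evidence = mean[:, None] + rng.standard_normal((trials, n_samples))
    decide_h1 = evidence.mean(axis=1) > threshold
    accuracy = np.mean(decide_h1 == h)
    return accuracy - cost_per_sample * n_samples

# Program space: (number of samples, decision threshold) pairs.
programs = [(n, th) for n in (1, 2, 4, 8, 16) for th in (-0.1, 0.0, 0.1)]
best = max(programs, key=lambda p: expected_utility(*p))
print("bounded-optimal program (samples, threshold):", best)
```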