
    Do Healthcare Workers Need Cognitive Computing Technologies? A Qualitative Study Involving IBM Watson and Dutch Professionals

    The healthcare ecosystem continually produces huge volumes of structured and unstructured data. Cognitive computing, a new computing paradigm, promises to help healthcare researchers and practitioners derive valuable information from these data. Arguably the best-known cognitive computing system is IBM Watson, which has been adapted to several domains, including healthcare. In this paper, we investigate whether there is a natural demand for cognitive computing systems among healthcare workers. Specifically, using the technology acceptance model to guide our efforts, we study how healthcare professionals from the Netherlands perceive IBM Watson. The results from our interviews show that virtually all the perceptions are very negative. We list several reasons underlying these perceptions alongside potential ways of changing them. We believe our results are of great value to health information technology professionals trying to introduce a potentially groundbreaking product and to organizations contemplating investing in such technologies.

    What the History of Linux Says About the Future of Cryptocurrencies

    Since Bitcoin’s meteoric rise, hundreds of publicly traded cryptocurrencies have emerged. As such, the question naturally arises: how have cryptocurrencies evolved over time? Drawing on the theory of polycentric information commons and cryptocurrencies’ historical similarities with another popular information commons, namely Linux, we make predictions about what cryptocurrencies may look like in the future. Specifically, we focus on four important historical similarities: 1) support from online hacker communities, 2) pursuit of freedom, 3) criticism about features and use, and 4) proliferation of forks. We then predict that: 1) cryptocurrencies will become more pragmatic rather than ideological, 2) cryptocurrencies will become more diverse in terms of not only the underlying technology but also the intended audience, and 3) the core technology behind cryptocurrencies, called blockchain, will be successfully used beyond cryptocurrencies.

    Patient Consent for Health Information Exchange: Blockchain-driven Innovation

    Health information exchange (HIE) is vital to improving care delivery and outcomes, and patient consent is an important component of HIE. Existing consent processes, which involve completing forms at a provider, along with poor interoperability between HIEs, give patients limited control over their consent management. We developed and deployed a survey to assess how people perceive the value of HIE, the importance of controlling access to their protected health information (PHI), and how they would prefer to manage consent for the exchange of their PHI. Given the option, 70% of the participants would prefer to use a consent application (app) to manage their consent. Based on the current U.S. HIE environment, we argue that the most viable architecture for implementing an HIE consent app is a permissioned blockchain. We describe and illustrate a blockchain-based consent management app prototype as an effective alternative to current HIE consent practices.

    Tailored proper scoring rules elicit decision weights

    Proper scoring rules are scoring methods that incentivize honest reporting of subjective probabilities: an agent strictly maximizes his expected score by reporting his true belief. The implicit assumption behind proper scoring rules is that agents are risk neutral. Such an assumption is often unrealistic when agents are human beings. Modern theories of choice under uncertainty based on rank-dependent utilities assert that human beings weight nonlinear utilities using decision weights, which are differences between weighting functions applied to cumulative probabilities. In this paper, I investigate the reporting behavior of an agent with a rank-dependent utility when he is rewarded using a proper scoring rule tailored to his utility function. I show that such an agent misreports his true belief by reporting a vector of decision weights. My findings thus highlight the risk of using proper scoring rules without prior knowledge of all the components that drive an agent's attitude towards uncertainty. On the positive side, I discuss how tailored proper scoring rules can effectively elicit weighting functions. Moreover, I show how to recover an agent's true belief from his misreported belief once the weighting functions are known.
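    The defining property of a proper scoring rule can be sketched in a few lines. The example below uses the quadratic (Brier) rule for a binary event and shows, by grid search, that a risk-neutral agent maximizes its expected score exactly by reporting its true belief; the tailored rules discussed above adjust the payoffs so that this optimum behaves differently for agents who weight probabilities nonlinearly. The numbers here are purely illustrative.

```python
# Sketch: the quadratic (Brier) scoring rule is strictly proper for a
# risk-neutral agent -- expected score is uniquely maximized by reporting
# the true belief. Binary-event case, checked by grid search.

def quadratic_score(report, outcome):
    """Score for reporting probability `report` of an event, given outcome in {0, 1}."""
    return 1.0 - (outcome - report) ** 2

def expected_score(report, belief):
    """Expected score under the agent's true belief."""
    return belief * quadratic_score(report, 1) + (1 - belief) * quadratic_score(report, 0)

true_belief = 0.7
reports = [i / 100 for i in range(101)]          # candidate reports 0.00 .. 1.00
best = max(reports, key=lambda r: expected_score(r, true_belief))
print(best)  # 0.7 -- honest reporting maximizes the expected score
```

    A risk-averse or rank-dependent agent effectively maximizes a transformed version of this objective, which is what lets a tailored rule elicit decision weights instead of raw beliefs.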

    Advancements in the Elicitation and Aggregation of Private Information

    There are many situations where one might be interested in eliciting and aggregating the private information of a group of agents. For example, a recommendation system might suggest recommendations based on the aggregate opinions of a group of like-minded agents, or a decision maker might take a decision based on the aggregate forecasts from a group of experts. When agents are self-interested, they are not necessarily honest when reporting their private information. For example, agents who have a reputation to protect might tend to produce forecasts near the most likely group consensus, whereas agents who have a reputation to build might tend to overstate the probabilities of outcomes they feel will be understated in a possible consensus. Therefore, economic incentives are necessary to induce self-interested agents to honestly report their private information. Our first contribution in this thesis is a scoring method to induce honest reporting of an answer to a multiple-choice question. We formally show that, in the presence of social projection, one can induce honest reporting in this setting by comparing reported answers and rewarding agreements. Our experimental results show that encouraging honest reporting through the proposed scoring method results in more accurate answers than when agents have no direct incentives for expressing their true answers. Our second contribution concerns how to incentivize honest reporting when the private information consists of subjective probabilities (beliefs). Proper scoring rules are traditional scoring methods that incentivize honest reporting of subjective probabilities, where the expected score received by an agent is maximized when that agent reports his true belief. An implicit assumption behind proper scoring rules is that agents are risk neutral. In an experiment involving proper scoring rules, we find that human beings fail to be risk neutral. We then discuss how to adapt proper scoring rules to cumulative prospect theory, a modern theory of choice under uncertainty. We explain why a property called comonotonicity is a sufficient condition for proper scoring rules to remain proper under cumulative prospect theory. Moreover, we show how to construct a comonotonic proper scoring rule from any traditional proper scoring rule. We also propose a new approach that uses non-deterministic payments based on proper scoring rules to elicit an agent's true belief when the components that drive the agent's attitude towards uncertainty are unknown. After agents report their private information, there remains the question of how to aggregate the reported information. Our third contribution in this thesis is an empirical study on the influence of the number of agents on the quality of the aggregate information in a crowdsourcing setting. We find that both the expected error in the aggregate information and the risk of a poor combination of agents decrease as the number of agents increases. Moreover, we find that the top-performing agents are consistent across multiple tasks, whereas the worst-performing agents tend to be inconsistent. Our final contribution in this thesis is a pooling method to aggregate reported beliefs. Intuitively, the proposed method works as if the agents were continuously updating their beliefs to accommodate the expertise of others. Each updated belief takes the form of a linear opinion pool, where the weight that an agent assigns to a peer's belief is inversely related to the distance between their beliefs. In other words, agents are assumed to prefer beliefs that are close to their own. We prove that such an updating process leads to consensus, i.e., the agents all converge towards the same belief. Further, we show that if risk-neutral agents are rewarded using the quadratic scoring rule, then the assumption that they prefer beliefs close to their own follows naturally. We empirically demonstrate the effectiveness of the proposed method using real-world data. In particular, the results of our experiment show that the proposed method outperforms the traditional unweighted average approach and another distance-based method in terms of both overall accuracy and absolute error.
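    The distance-based pooling idea can be illustrated with a small simulation. The particular weighting scheme below (weight proportional to 1/(1 + distance)) and the number of rounds are illustrative assumptions rather than the thesis's exact specification; the point is only that repeatedly pooling beliefs, with weights that decay in the distance between beliefs, drives all agents toward a common belief.

```python
# Sketch: iterative linear opinion pooling in which each agent weights a
# peer's belief inversely to the L1 distance between their beliefs.
# Repeated updates contract the spread of opinions toward a consensus.

def pool_step(beliefs):
    """One round: every agent replaces its belief with a weighted pool."""
    new_beliefs = []
    for own in beliefs:
        # weight each peer (and self) inversely to distance from own belief
        weights = [1.0 / (1.0 + sum(abs(x - y) for x, y in zip(own, other)))
                   for other in beliefs]
        total = sum(weights)
        pooled = [sum(w * other[k] for w, other in zip(weights, beliefs)) / total
                  for k in range(len(own))]
        new_beliefs.append(pooled)
    return new_beliefs

# three agents, beliefs over a binary event
beliefs = [[0.9, 0.1], [0.6, 0.4], [0.2, 0.8]]
for _ in range(50):
    beliefs = pool_step(beliefs)

spread = max(abs(beliefs[i][0] - beliefs[j][0])
             for i in range(3) for j in range(3))
print(spread < 1e-3)  # True -- the agents have converged to a consensus
```

    Because every update is a convex combination that puts strictly positive weight on every agent, the range of opinions contracts geometrically, which is the intuition behind the consensus result stated above.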

    Sharing Rewards Based on Subjective Opinions

    Fair division is the problem of dividing one or several goods among a set of agents in a way that satisfies a suitable fairness criterion. Traditionally studied in economics, philosophy, and political science, fair division has drawn a lot of attention from the multiagent systems community, since this field is strongly concerned with how a surplus (or a cost) should be divided among a group of agents. Arguably, the Shapley value is the single most important contribution to the problem of fair division. It assigns to each agent a share of the resource equal to the expected marginal contribution of that agent. Thus, it implicitly assumes that individual marginal contributions can be objectively computed. In this thesis, we propose a game-theoretic model for sharing a joint reward when the quality of individual contributions is subjective. In detail, we consider scenarios where a group has been formed and has accomplished a task for which it is granted a reward that must be shared among the group members. After observing the contributions of their peers in accomplishing the task, the agents are asked to evaluate one another. Mainly to facilitate the sharing process, agents can also be requested to provide predictions about how their peers are evaluated. These subjective opinions are elicited and aggregated by a central, trusted entity, called the mechanism, which is also responsible for sharing the reward based exclusively on the received opinions. Besides the formal game-theoretic model for sharing rewards based on subjective opinions, we propose three different mechanisms in this thesis. Our first mechanism, the peer-evaluation mechanism, divides the reward proportionally to the evaluations received by the agents. We show that this mechanism is fair, budget-balanced, individually rational, and strategy-proof, but that it can be collusion-prone. Our second mechanism, the peer-prediction mechanism, shares the reward by considering two aspects: the evaluations received by the agents and their truth-telling scores. To compute these scores, the mechanism uses a strictly proper scoring rule. Under the assumption that agents are Bayesian decision-makers, we show that this mechanism is weakly budget-balanced, individually rational, and incentive-compatible. Further, we present approaches that guarantee that the mechanism is collusion-resistant and fair. Our last mechanism, the BTS mechanism, is the only one to elicit both evaluations and predictions from the agents. It considers the evaluations received by the agents and their truth-telling scores when sharing the reward. To compute the scores, it uses the Bayesian truth serum method, a powerful scoring method based on the surprisingly common criterion. Under the assumptions that agents are Bayesian decision-makers and that the population of agents is large enough that a single evaluation cannot significantly affect the empirical distribution of evaluations, we show that this mechanism is incentive-compatible, budget-balanced, individually rational, and fair.
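    The peer-evaluation mechanism's proportional sharing rule admits a minimal sketch. The evaluation scale (values in [0, 1]) and the example numbers below are assumptions for illustration; the sketch only shows how dividing the reward in proportion to the evaluations each agent receives yields a budget-balanced split.

```python
# Sketch of proportional reward sharing from peer evaluations: each
# agent's share is proportional to the total evaluation it received
# from its peers. Budget-balanced by construction: shares sum to the reward.

def peer_evaluation_shares(evaluations, reward):
    """
    evaluations[i][j]: agent i's evaluation of agent j (i != j), in [0, 1].
    Returns each agent's share of `reward`.
    """
    n = len(evaluations)
    received = [sum(evaluations[i][j] for i in range(n) if i != j)
                for j in range(n)]
    total = sum(received)
    return [reward * r / total for r in received]

evals = [
    [None, 0.8, 0.4],   # agent 0's evaluations of agents 1 and 2
    [0.9, None, 0.5],   # agent 1's evaluations of agents 0 and 2
    [0.7, 0.6, None],   # agent 2's evaluations of agents 0 and 1
]
shares = peer_evaluation_shares(evals, 100.0)
print(shares)  # sums to 100; higher-rated agents receive larger shares
```

    The collusion-proneness noted above is visible even in this sketch: two agents who agree to inflate each other's evaluations increase their joint share at the third agent's expense.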