7,865 research outputs found

    Influence of warmth and competence on the promotion of safe in-group selection. Stereotype content model and social categorization of faces

    Get PDF
    Categorizing an individual as a friend or foe plays a pivotal role in navigating the social world. According to the Stereotype Content Model (SCM), social perception relies on two fundamental dimensions, Warmth and Competence, which allow us to process the intentions of others and their ability to enact those intentions, respectively. Social cognition research indicates that, in categorization tasks, people tend to classify other individuals as more likely to belong to the out-group than the in-group (In-group Overexclusion Effect, IOE) when lacking diagnostic information, probably with the aim of protecting in-group integrity. Here, we explored the role of Warmth and Competence in group-membership decisions by testing 62 participants in a social-categorization task consisting of 150 neutral faces. We assessed whether (i) Warmth and Competence ratings could predict in-group/out-group categorization, and (ii) the reliance on these two dimensions differed in low-IOE vs. high-IOE participants. Data showed that high ratings of Warmth and Competence were necessary to categorize a face as in-group. Moreover, while low-IOE participants relied on Warmth, high-IOE participants relied on Competence. This finding suggests that the proneness to include/exclude unknown identities in/from one's own in-group is related to individual differences in the reliance on the SCM social dimensions. Furthermore, the primacy-of-Warmth effect seems not to represent a universal phenomenon in the context of social evaluation.

    Human-Machine Collaborative Optimization via Apprenticeship Scheduling

    Full text link
    Coordinating agents to complete a set of tasks with intercoupled temporal and resource constraints is computationally challenging, yet human domain experts can solve these difficult scheduling problems using paradigms learned through years of apprenticeship. A process for manually codifying this domain knowledge within a computational framework is necessary to scale beyond the "single-expert, single-trainee" apprenticeship model. However, human domain experts often have difficulty describing their decision-making processes, causing the codification of this knowledge to become laborious. We propose a new approach for capturing domain-expert heuristics through a pairwise ranking formulation. Our approach is model-free and does not require enumerating or iterating through a large state space. We empirically demonstrate that this approach accurately learns multifaceted heuristics on a synthetic data set incorporating job-shop scheduling and vehicle routing problems, as well as on two real-world data sets consisting of demonstrations of experts solving a weapon-to-target assignment problem and a hospital resource allocation problem. We also demonstrate that policies learned from human scheduling demonstrations via apprenticeship learning can substantially improve the efficiency of a branch-and-bound search for an optimal schedule. We employ this human-machine collaborative optimization technique on a variant of the weapon-to-target assignment problem. We demonstrate that this technique generates solutions substantially superior to those produced by human domain experts at a rate up to 9.5 times faster than an optimization approach, and can be applied to optimally solve problems twice as complex as those solved by a human demonstrator.
    Comment: Portions of this paper were published in the Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI) in 2016 and in the Proceedings of Robotics: Science and Systems (RSS) in 2016. The paper consists of 50 pages with 11 figures and 4 tables.
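    The core of the pairwise ranking formulation described in this abstract can be sketched in a few lines: at each decision point in a demonstration, the action the expert chose is paired against the actions passed over, and a scoring function is trained to rank the chosen action higher. The following is a minimal illustrative sketch, not the authors' implementation; the two task features (deadline urgency, travel cost) and the perceptron-style update are hypothetical stand-ins.

    ```python
    # Minimal sketch of learning a scheduling heuristic from pairwise
    # (chosen, rejected) action comparisons, avoiding any enumeration
    # of the scheduling state space. Features here are hypothetical.
    import random

    def train_pairwise_ranker(pairs, dim, epochs=50, lr=0.1, seed=0):
        """Learn weights w so that w . chosen > w . rejected for each pair."""
        rng = random.Random(seed)
        w = [0.0] * dim
        for _ in range(epochs):
            shuffled = pairs[:]
            rng.shuffle(shuffled)
            for chosen, rejected in shuffled:
                diff = [c - r for c, r in zip(chosen, rejected)]
                margin = sum(wi * di for wi, di in zip(w, diff))
                if margin <= 0:  # misranked pair: perceptron-style update
                    w = [wi + lr * di for wi, di in zip(w, diff)]
        return w

    def score(w, features):
        """Rank candidate scheduling actions by this learned score."""
        return sum(wi * fi for wi, fi in zip(w, features))

    # Toy demonstrations: the "expert" prefers urgent, low-travel-cost tasks.
    # Each pair is (features of chosen action, features of a rejected action).
    pairs = [
        ([0.9, 0.1], [0.2, 0.8]),
        ([0.8, 0.3], [0.3, 0.7]),
        ([0.7, 0.2], [0.4, 0.9]),
    ]
    w = train_pairwise_ranker(pairs, dim=2)
    assert score(w, [0.9, 0.1]) > score(w, [0.2, 0.8])
    ```

    At schedule-construction time, such a learned scorer can order candidate actions, which is also how a learned heuristic could guide node ordering in a branch-and-bound search as the abstract describes.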

    Using Noninvasive Brain Measurement to Explore the Psychological Effects of Computer Malfunctions on Users during Human-Computer Interactions

    Full text link
    In today’s technologically driven world, there is a need to better understand how common computer malfunctions affect computer users. These malfunctions may have measurable influences on users’ cognitive, emotional, and behavioral responses. An experiment was conducted in which participants performed a series of web search tasks while wearing functional near-infrared spectroscopy (fNIRS) and galvanic skin response sensors. Two computer malfunctions were introduced during the sessions that had the potential to influence correlates of user trust and suspicion. Surveys were given after each session to measure users’ perceived emotional state, cognitive load, and perceived trust. Results suggest that fNIRS can be used to measure the different cognitive and emotional responses associated with computer malfunctions. These cognitive and emotional changes were correlated with users’ self-reported levels of suspicion and trust, and they suggest future work that further explores the capability of fNIRS for measuring user experience during human-computer interactions.

    Using mobile technology to engage sexual and gender minorities in clinical research.

    Get PDF
    Introduction: Historical and current stigmatizing and discriminatory experiences drive sexual and gender minority (SGM) people away from health care and clinical research. Being medically underserved, they face numerous disparities that make them vulnerable to poor health outcomes. Effective methods to engage and recruit SGM people into clinical research studies are needed. Objectives: To promote health equity and understand SGM health needs, we sought to design an online, national, longitudinal cohort study entitled The PRIDE (Population Research in Identity and Disparities for Equality) Study that enabled SGM people to safely participate, provide demographic and health data, and generate SGM health-related research ideas. Methods: We developed an iPhone mobile application ("app") to engage and recruit SGM people to The PRIDE Study-Phase 1. Participants completed demographic and health surveys and joined in asynchronous discussions about SGM health-related topics important to them for future study. Results: The PRIDE Study-Phase 1 consented 18,099 participants. Of them, 16,394 provided data. More than 98% identified as a sexual minority, and more than 15% identified as a gender minority. The sample was diverse in terms of sexual orientation, gender identity, age, race, ethnicity, geographic location, education, and individual income. Participants completed 24,022 surveys, provided 3,544 health topics important to them, and cast 60,522 votes indicating their opinion of a particular health topic. Conclusions: We developed an iPhone app that recruited SGM adults and collected demographic and health data for a new national online cohort study. Digital engagement features empowered participants to become committed stakeholders in the research development process. We believe this is the first time that a mobile app has been used to specifically engage and recruit large numbers of an underrepresented population for clinical research. Similar approaches may be successful, convenient, and cost-effective at engaging and recruiting other vulnerable populations into clinical research studies.

    Machine learning for prognosis of oral cancer : What are the ethical challenges?

    Get PDF
    Background: Machine learning models have shown high performance, particularly in the diagnosis and prognosis of oral cancer. However, in everyday clinical practice, diagnosis and prognosis using these models remain limited, because these models raise several ethical and morally laden dilemmas. Purpose: This study aims to provide a systematic state-of-the-art review of the ethical and social implications of machine learning models in oral cancer management. Methods: We searched the OvidMedline, PubMed, Scopus, Web of Science and Institute of Electrical and Electronics Engineers databases for articles examining the ethical issues of machine learning or artificial intelligence in medicine, healthcare or care providers. The Preferred Reporting Items for Systematic Review and Meta-Analysis guidelines were used in the searching and screening processes. Findings: A total of 33 studies examined the ethical challenges of machine learning models or artificial intelligence in medicine, healthcare or diagnostic analytics. Recurring ethical concerns included data privacy and confidentiality, peer disagreement (contradictory diagnostic or prognostic opinions between the model and the clinician), possible violation of the patient’s liberty to decide which treatment to follow, possible changes to the patient–clinician relationship, and the need for ethical and legal frameworks. Conclusion: Government, ethicists, clinicians, legal experts, patients’ representatives, data scientists and machine learning experts need to be involved in developing internationally standardised and structured ethical review guidelines for machine learning models to be beneficial in daily clinical practice.
    Copyright © 2020 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

    Evaluating 'Prefer not to say' Around Sensitive Disclosures

    Get PDF
    As people's offline and online lives become increasingly entwined, the sensitivity of personal information disclosed online is increasing. Disclosures often occur through structured disclosure fields (e.g., drop-down lists). Prior research suggests these fields may limit privacy, with non-disclosing users being presumed to be hiding undesirable information. We investigated this around HIV status disclosure in online dating apps used by men who have sex with men. Our online study asked participants (N=183) to rate profiles where HIV status was either disclosed or undisclosed. We tested three designs for displaying undisclosed fields. Visibility of undisclosed fields had a significant effect on the way profiles were rated, and other profile information (e.g., ethnicity) could affect inferences that develop around undisclosed information. Our research highlights complexities around designing for non-disclosure and questions the voluntary nature of these fields. Further work is outlined to ensure disclosure control is appropriately implemented around online disclosures of sensitive information.

    A Novel Validation Algorithm Allows for Automated Cell Tracking and the Extraction of Biologically Meaningful Parameters

    Get PDF
    Automated microscopy is currently the only method to non-invasively and label-free observe complex multi-cellular processes, such as cell migration, cell cycle, and cell differentiation. Extracting biological information from a time-series of micrographs requires each cell to be recognized and followed through sequential microscopic snapshots. Although recent attempts to automatize this process resulted in ever improving cell detection rates, manual identification of identical cells is still the most reliable technique. However, its tedious and subjective nature prevented tracking from becoming a standardized tool for the investigation of cell cultures. Here, we present a novel method to accomplish automated cell tracking with a reliability comparable to manual tracking. Previously, automated cell tracking could not rival the reliability of manual tracking because, in contrast to the human way of solving this task, none of the algorithms had an independent quality control mechanism; they lacked validation. Thus, instead of trying to improve the cell detection or tracking rates, we proceeded from the idea of automatically inspecting the tracking results and accepting only those of high trustworthiness, while rejecting all others. This validation algorithm works independently of the quality of cell detection and tracking through a systematic search for tracking errors. It is based only on very general assumptions about the spatiotemporal contiguity of cell paths. While traditional tracking often aims to yield genealogic information about single cells, the natural outcome of a validated cell tracking algorithm turns out to be a set of complete, but often unconnected cell paths, i.e., records of cells from mitosis to mitosis. This is a consequence of the fact that the validation algorithm takes complete paths as the unit of rejection/acceptance.
The resulting set of complete paths can be used to automatically extract important biological parameters with high reliability and statistical significance. These include the distributions of life/cycle times and cell areas, as well as the symmetry of cell divisions, and motion analyses. The new algorithm thus allows for the quantification and parameterization of cell culture with unprecedented accuracy. To evaluate our validation algorithm, two large reference data sets were manually created. These data sets comprise more than 320,000 unstained adult pancreatic stem cells from rat, including 2592 mitotic events. The reference data sets specify every cell position and shape, and assign each cell to the correct branch of its genealogic tree. We provide these reference data sets for free use by others as a benchmark for the future improvement of automated tracking methods.
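    The validation idea described here, accepting or rejecting a whole cell path based only on general spatiotemporal-contiguity assumptions, can be illustrated with a short sketch. This is not the authors' algorithm; the gap-free-frames check and the maximum per-frame displacement threshold are illustrative assumptions standing in for their systematic error search.

    ```python
    # Sketch of path-level validation by spatiotemporal contiguity:
    # a candidate cell track is accepted only as a whole, and only if
    # its frames are consecutive and no step exceeds a plausible
    # per-frame displacement. Thresholds are illustrative assumptions.
    import math

    def is_contiguous_path(path, max_step=25.0):
        """path: list of (frame_index, x, y) observations of one cell.
        Returns True only for complete, gap-free paths with bounded motion."""
        if len(path) < 2:
            return False
        for (f0, x0, y0), (f1, x1, y1) in zip(path, path[1:]):
            if f1 != f0 + 1:  # temporal gap -> reject the whole path
                return False
            if math.hypot(x1 - x0, y1 - y0) > max_step:  # implausible jump
                return False
        return True

    good = [(0, 10.0, 10.0), (1, 12.0, 11.0), (2, 14.5, 12.0)]
    bad = [(0, 10.0, 10.0), (2, 60.0, 80.0)]  # missing frame and a big jump
    assert is_contiguous_path(good)
    assert not is_contiguous_path(bad)
    ```

    Treating the complete path as the unit of acceptance, as this check does, is what yields the set of trustworthy but possibly unconnected mitosis-to-mitosis records the abstract describes.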