9 research outputs found

    Comparative analysis for selecting an emotion recognition tool using the AHP model

    The importance of nonverbal behavior and what body language conveys plays a major role in our environment. This work focuses on emotion recognition for children aged 2 to 4, since it is important to assess the emotional state a child displays depending on the environment in which they are immersed. The objective of this article is to evaluate the efficiency of three emotion recognition tools (Face++, Microsoft Azure Emotion API and Google Vision API) when inferring attributes of children. To carry out this research, a comparative analysis of the tools was performed with the help of a specialist in child psychology. The sample was a group of 20 children from the “Los chavitos” foundation of the MIES in Ecuador. The experimental results showed that Face++ achieved higher precision than Microsoft Azure Emotion API and Google Vision API. We hope the dataset presented in the results can help pave the way for future research on the use of emotion recognition tools.
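    The AHP model mentioned in the title ranks alternatives from pairwise comparison judgments. A minimal sketch of the standard geometric-mean priority derivation follows; the matrix values below are illustrative placeholders, not the study's actual judgments:

```python
import math

def ahp_priorities(matrix):
    """Derive AHP priority weights via the geometric-mean method.

    matrix[i][j] holds the pairwise judgment of alternative i over j
    (a reciprocal matrix: matrix[j][i] == 1 / matrix[i][j]).
    """
    geo_means = [math.prod(row) ** (1.0 / len(row)) for row in matrix]
    total = sum(geo_means)
    return [g / total for g in geo_means]

# Illustrative judgments for three tools (hypothetical, not the paper's data):
# rows/cols: Face++, Microsoft Azure Emotion API, Google Vision API
judgments = [
    [1.0,   3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
]
weights = ahp_priorities(judgments)  # normalized priorities, summing to 1
```

    With these (made-up) judgments, the first alternative receives the largest weight, mirroring how AHP would surface a preferred tool.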

    Multilingual Twitter Corpus and Baselines for Evaluating Demographic Bias in Hate Speech Recognition

    Existing research on fairness evaluation of document classification models mainly uses synthetic monolingual data without ground truth for author demographic attributes. In this work, we assemble and publish a multilingual Twitter corpus for the task of hate speech detection with four inferred author demographic factors: age, country, gender and race/ethnicity. The corpus covers five languages: English, Italian, Polish, Portuguese and Spanish. We evaluate the inferred demographic labels with a crowdsourcing platform, Figure Eight. To examine factors that can cause bias, we carry out an empirical analysis of demographic predictability on the English corpus. We measure the performance of four popular document classifiers and evaluate the fairness and bias of the baseline classifiers on the author-level demographic attributes. Comment: Accepted at LREC 202
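    One common way to quantify the kind of demographic bias this corpus is built to surface is the gap in classifier accuracy across author groups. A minimal sketch (the group labels and toy data are hypothetical, and this is only one of several fairness metrics the paper could use):

```python
def group_accuracy(y_true, y_pred, groups):
    """Classifier accuracy broken down by a demographic attribute."""
    acc = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        acc[g] = sum(y_true[i] == y_pred[i] for i in idx) / len(idx)
    return acc

def fairness_gap(y_true, y_pred, groups):
    """Largest difference in per-group accuracy; 0 means equal accuracy."""
    acc = group_accuracy(y_true, y_pred, groups)
    return max(acc.values()) - min(acc.values())

# Toy example: a hate-speech classifier evaluated on two author age groups.
labels = [1, 0, 1, 0, 1, 0]
preds  = [1, 0, 0, 0, 1, 0]
ages   = ["<30", "<30", "<30", "30+", "30+", "30+"]
gap = fairness_gap(labels, preds, ages)
```

    Here the classifier is perfect on the "30+" group but misses one "<30" example, giving a nonzero gap; a fairness audit would report such gaps per attribute.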

    Image conditions for machine-based face recognition of juvenile faces

    Machine-based facial recognition could help law enforcement and other organisations match juvenile faces more efficiently. It is especially important when dealing with indecent images of children, to minimise the workload and to address the moral and stamina challenges of human recognition. With growth-related changes, juvenile face recognition is challenging. The challenge relates not only to the growth of the child’s face, but also to face recognition in the wild with unconstrained images. The aim of the study was to evaluate how different image conditions (i.e. black and white, cropped, blur and resolution reduction) affect machine-based facial recognition across juvenile age progression. The study used three off-the-shelf facial recognition algorithms (Microsoft Face API, Amazon Rekognition, and Face++) and compared the original images and the age-progression images under the four image conditions against an older image of the child. The results showed a decrease in facial similarity as the age gap increased. Compared with Microsoft, Amazon and Face++ produced higher confidence scores and were more resilient to a change in image condition. The image conditions ‘black and white’ and ‘cropped’ had a negative effect across all three systems. The relationship between age-progression images and the younger original image was also explored. The results suggest that manual age-progression images are no more useful than the original image for facial identification of missing children, and that Amazon and Face++ performed better with the original image.
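    The four image conditions the study applies can be sketched as simple pixel-grid transforms. The pure-Python version below is illustrative only; a real pipeline would use a library such as Pillow or OpenCV, and the study's exact parameters (blur radius, crop fraction, downsampling factor) are not given in the abstract:

```python
def to_grayscale(img):
    """'Black and white': rows of (r, g, b) tuples -> rows of luma values."""
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in img]

def center_crop(img, frac=0.5):
    """'Cropped': keep the central frac*height x frac*width region."""
    h, w = len(img), len(img[0])
    ch, cw = max(1, int(h * frac)), max(1, int(w * frac))
    top, left = (h - ch) // 2, (w - cw) // 2
    return [row[left:left + cw] for row in img[top:top + ch]]

def downsample(img, factor=2):
    """'Resolution reduction': keep every factor-th pixel in each axis."""
    return [row[::factor] for row in img[::factor]]

def box_blur(img):
    """'Blur': 3x3 box blur on a grayscale image, clamping at the edges."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = round(sum(vals) / 9)
    return out
```

    Each condition would be applied to a probe image before it is sent to the recognition API, so that confidence scores under each condition can be compared against the unmodified original.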

    Algorithmic and human prediction of success in human collaboration from visual features

    This is the final version, available on open access from Nature Research via the DOI in this record. Data availability: the full dataset, including aggregated features of each of the >43K groups used in our analyses, is available at https://doi.org/10.7910/DVN/HDT2RN. All photos used in this work are publicly available, posted on public Facebook pages; however, we do not release the raw images or the individual-level raw features extracted using the Face++ API. More details are provided at the link above. The publisher correction to this article is available in ORE at http://hdl.handle.net/10871/124927.
    As groups increasingly take over from individual experts in many tasks, it is ever more important to understand the determinants of group success. In this paper, we study the patterns of group success in Escape The Room, a physical adventure game in which a group is tasked with escaping a maze by collectively solving a series of puzzles. We investigate (1) the characteristics of successful groups, and (2) how accurately humans and machines can spot them from a group photo. The relationship between these two questions rests on the hypothesis that the characteristics of successful groups are encoded in features that can be spotted in their photo. We analyze >43K group photos (one photo per group) taken after groups have completed the game, from which all explicit performance-signaling information has been removed. First, we find that groups that are larger, older, and more gender-diverse but less age-diverse are significantly more likely to escape. Second, we compare humans and off-the-shelf machine learning algorithms at predicting whether a group escaped based on the completion photo. We find that individual guesses by humans achieve 58.3% accuracy, better than random but worse than machines, which reach 71.6% accuracy. When humans are trained to guess by observing only four labeled photos, their accuracy increases to 64%. However, training humans on more labeled examples (eight or twelve) leads to a slight but statistically insignificant improvement in accuracy (67.4%). Humans in the best training condition perform on par with two, but worse than three, of the five machine learning algorithms we evaluated. Our work illustrates the potential and the limitations of machine learning systems in evaluating group performance and identifying success factors based on sparse visual cues.
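    The "better than random" comparison in this abstract reduces to checking an observed accuracy against the 50% coin-flip baseline on binary escape/no-escape guesses. A hedged sketch of that check (the sample size below is illustrative; the abstract does not state how many guesses each accuracy figure is based on):

```python
import math

def accuracy(preds, labels):
    """Fraction of escape/no-escape guesses matching the ground truth."""
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

def z_vs_random(acc, n):
    """Normal-approximation z-score of an observed accuracy against the
    50% random-guess baseline over n binary guesses; z > 1.96 indicates
    better-than-random performance at the usual 5% level."""
    return (acc - 0.5) / math.sqrt(0.25 / n)
```

    For example, 58.3% accuracy is only distinguishable from chance given enough guesses: with an assumed n of 1,000 the z-score clears 1.96, while a handful of guesses at the same rate would not.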

    Experimental research design applied to estimating the differences in precision and processing time between the image recognition algorithms of the main cloud service platforms.

    This work proposes comparing the metrics produced by the image processing algorithms of the main cloud platforms (Amazon Web Services, Google Cloud Platform and Microsoft Azure) in order to evaluate their differences in result precision and processing time. It is intended to serve as a guide for software architects and developers in the process of selecting cloud image analysis service providers, so that they can choose the most suitable service for a particular context.
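    The comparison described (precision plus processing time per provider) can be framed as one benchmarking harness applied to each service. The callable signature below is an assumption for illustration, not any provider's actual SDK; real runs would wrap the AWS Rekognition, Google Cloud Vision, or Azure client inside such a callable:

```python
import time

def benchmark(classify, images, truth):
    """Return (accuracy, mean seconds per image) for one vision service.

    `classify` is any callable mapping an image to a predicted label;
    timing wraps only the call itself, so network latency is included
    when the callable hits a remote API.
    """
    correct, elapsed = 0, 0.0
    for img, label in zip(images, truth):
        t0 = time.perf_counter()
        pred = classify(img)
        elapsed += time.perf_counter() - t0
        correct += (pred == label)
    return correct / len(images), elapsed / len(images)

# Dummy stand-in classifier, to show the harness shape:
acc, mean_s = benchmark(lambda img: "cat", ["img_a", "img_b"], ["cat", "dog"])
```

    Running the same harness with the same image set against each provider yields directly comparable (precision, latency) pairs.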

    The Influence of COVID-19 on the Well-Being of People: Big Data Methods for Capturing the Well-Being of Working Adults and Protective Factors Nationwide

    The COVID-19 outbreak has affected the lives of people across the globe. To investigate the mental impact of COVID-19, and to respond to researchers' call for unobtrusive and intensive measurement to capture time-sensitive psychological concepts (e.g., affect), we used big data methods to analyze 348,933 tweets posted from April 1, 2020 to April 24, 2020. The dataset covers 2,231 working adults from 454 counties across 48 states in the United States. In this study, we theorize the similarities and dissimilarities between COVID-19 and other common stressors. Similar to other stressors, pandemic severity negatively influenced people's well-being by increasing negative affect. However, we did not find an influence of pandemic severity on people's positive affect. Dissimilar to other stressors, the protective factors for people during COVID-19 are not the common factors that make people resilient to stress; rather, they echo the unique experience of COVID-19. Moreover, we analyzed the text content of the 348,933 tweets through Linguistic Inquiry and Word Count (LIWC) and word cloud analysis to further reveal the psychological impact of COVID-19 and why the protective factors make people resilient to its mental impact. These exploratory analyses revealed the specific emotions people experienced and the topics they were concerned about during the pandemic. The theoretical and practical implications are discussed.
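    LIWC-style analysis scores text by the fraction of words falling into category dictionaries. A minimal sketch of that idea follows; the tiny lexicons here are invented placeholders (LIWC's real dictionaries are proprietary and far larger):

```python
import re

# Tiny illustrative lexicons, NOT the actual LIWC dictionaries.
POSITIVE = {"happy", "hope", "grateful", "safe"}
NEGATIVE = {"afraid", "sick", "lonely", "worried"}

def affect_scores(tweet):
    """Return (positive, negative) affect as fractions of the tweet's words."""
    words = re.findall(r"[a-z']+", tweet.lower())
    if not words:
        return 0.0, 0.0
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return pos / len(words), neg / len(words)
```

    Aggregating such per-tweet scores by day and county is one way a corpus of this kind could be turned into the affect time series the study analyzes.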

    BALANCING THE ASSUMPTIONS OF CAUSAL INFERENCE AND NATURAL LANGUAGE PROCESSING

    Drawing conclusions about real-world relationships of cause and effect from data collected without randomization requires making assumptions about the true processes that generate the data we observe. Causal inference typically considers low-dimensional data such as categorical or numerical fields in structured medical records. Yet a restriction to such data excludes natural language texts -- including social media posts or clinical free-text notes -- that can provide a powerful perspective into many aspects of our lives. This thesis explores whether the simplifying assumptions we make in order to model human language and behavior can support the causal conclusions that are necessary to inform decisions in healthcare or public policy. An analysis of millions of documents must rely on automated methods from machine learning and natural language processing, yet trust is essential in many clinical or policy applications. We need to develop causal methods that can reflect the uncertainty of imperfect predictive models to inform robust decision-making. We explore several areas of research in pursuit of these goals. We propose a measurement error approach for incorporating text classifiers into causal analyses and demonstrate the assumption on which it relies. We introduce a framework for generating synthetic text datasets on which causal inference methods can be evaluated, and use it to demonstrate that many existing approaches make assumptions that are likely violated. We then propose a proxy model methodology that provides explanations for uninterpretable black-box models, and close by incorporating it into our measurement error approach to explore the assumptions necessary for an analysis of gender and toxicity on Twitter.