
    Data-driven Computational Social Science: A Survey

    Social science concerns individuals, their relationships, and society as a whole. The complexity of its research topics makes it an amalgamation of multiple disciplines, such as economics, political science, and sociology. For centuries, scientists have conducted studies to understand the mechanisms of society. However, due to the limitations of traditional research methods, many critical social issues remain unexplored. To address them, computational social science has emerged, driven by rapid advances in computing technologies and deep studies of social science. With the aid of advanced research techniques, diverse kinds of data from many areas can now be acquired, helping us examine social problems from a new perspective. As a result, using such data to investigate questions in computational social science has attracted growing attention. In this paper, to the best of our knowledge, we present the first survey on data-driven computational social science, focusing primarily on application domains involving human dynamics. The state-of-the-art research on human dynamics is reviewed from three aspects: individuals, relationships, and collectives. Specifically, the research methodologies used to address research challenges in the aforementioned application domains are summarized. In addition, important open challenges concerning both emerging research topics and research methods are discussed.
    Comment: 28 pages, 8 figures

    THE ROLE OF PROBABILISTIC INFORMATION ON AFFECTIVE PREDICTIONS: NEURAL AND SUBJECTIVE CORRELATES AS MODULATED BY INTOLERANCE OF UNCERTAINTY

    Emotions have recently been reconsidered as interoceptive predictive models, “constructed” by the brain on the basis of contextual information and prior experience, with the aim of predicting relevant stimuli or events and providing the organism with optimal resources for survival. Nevertheless, the specific mechanisms underlying the construction of affective predictions, at both the neural and the subjective experience level, remain unclear. More specifically, the role played by contextual information and prior experience on the one hand, and the potential interactions with dispositional characteristics such as Intolerance of Uncertainty (IU), considered a trans-diagnostic risk factor for affective disorders, on the other, have yet to be unraveled. The present thesis aimed to answer these open questions. First, we investigated how contextual information of different predictive value modulates the neural correlates of affective prediction construction. Second, we explored how prior probabilistic experience affects the construction of affective predictions at the subjective experience level. Third and last, we studied how individual differences in IU affect the construction of affective predictions as a function of contextual information and prior experience. Taken together, this thesis contributes to untangling the dynamics of affective prediction construction at the neural and subjective experience levels. Contextual information and prior experience were found to differently influence the neural correlates and the subjective experience of emotion during the construction of affective predictions, depending on their predictive value, and to interact with IU in shaping them. Thus, this work offers both a theoretical contribution to predictive models of emotion, by clarifying the mechanisms underlying prediction construction at the neural and subjective experience levels, and potential clinical implications for the prevention and treatment of anxiety disorders, given the trans-diagnostic nature of IU as a risk factor for the development of affective psychopathology.

    xxAI - Beyond Explainable AI

    This is an open access book. Statistical machine learning (ML) has triggered a renaissance of artificial intelligence (AI). While the most successful ML models, including Deep Neural Networks (DNNs), have achieved ever better predictive performance, they have become increasingly complex, at the expense of human interpretability (correlation vs. causality). The field of explainable AI (xAI) has emerged with the goal of creating tools and models that are both predictive and interpretable and understandable for humans. Explainable AI is receiving strong interest in the machine learning and AI research communities, across academia, industry, and government, and there is now an excellent opportunity to push towards successful explainable AI applications. This volume will help the research community to accelerate this process, to promote a more systematic use of explainable AI to improve models in diverse applications, and ultimately to better understand how current explainable AI methods need to be improved and what kind of theory of explainable AI is needed. After overviews of current methods and challenges, the editors include chapters that describe new developments in explainable AI. The contributions are from leading researchers in the field, drawn from both academia and industry, and many of the chapters take a clear interdisciplinary approach to problem-solving. The concepts discussed include explainability, causability, and AI interfaces with humans, and the applications include image processing, natural language, law, fairness, and climate science.
