1,886 research outputs found

    Educational Theories and Learning Analytics : From Data to Knowledge

    Get PDF
    Under embargo until 17.01.21 (accepted version).

    Explorative Graph Visualization

    Get PDF
    Network structures (graphs) have become a natural part of everyday life, and their analysis helps to gain an understanding of their inherent structure and the real-world aspects thereby expressed. The exploration of graphs is largely supported and driven by visual means. The aim of this thesis is to give a comprehensive view of the problems associated with these visual means and to detail concrete solution approaches to them. Concrete visualization techniques are introduced to underline the value of this comprehensive discussion for supporting explorative graph visualization.
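
    The thesis itself ships no code in this listing; as a minimal sketch of the kind of visual graph exploration it discusses, the example below lays out a small graph with a force-directed algorithm. The use of networkx and matplotlib, and the example graph, are assumptions for illustration, not part of the original work.

```python
# Minimal sketch (assumed tooling, not from the thesis): explorative
# visualization of a graph with a force-directed layout.
import networkx as nx
import matplotlib.pyplot as plt

# Small example network; in practice this would be real data.
G = nx.karate_club_graph()

# A spring (force-directed) layout places strongly connected nodes close
# together, a common starting point for visual graph exploration.
pos = nx.spring_layout(G, seed=42)

# Size nodes by degree so structurally important nodes stand out.
degrees = dict(G.degree())
nx.draw_networkx(
    G, pos,
    node_size=[50 + 30 * degrees[n] for n in G.nodes()],
    with_labels=False,
)
plt.axis("off")
plt.show()
```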

    The eyes know it: FakeET -- An Eye-tracking Database to Understand Deepfake Perception

    Full text link
    We present FakeET, an eye-tracking database to understand human visual perception of deepfake videos. Given that the principal purpose of deepfakes is to deceive human observers, FakeET is designed to understand and evaluate the ease with which viewers can detect synthetic video artifacts. FakeET contains viewing patterns compiled from 40 users via the Tobii desktop eye-tracker for 811 videos from the Google Deepfake dataset, with a minimum of two viewings per video. Additionally, EEG responses acquired via the Emotiv sensor are also available. The compiled data confirms (a) distinct eye movement characteristics for real vs fake videos; (b) the utility of eye-track saliency maps for spatial forgery localization and detection; and (c) Error Related Negativity (ERN) triggers in the EEG responses, and the ability of the raw EEG signal to distinguish between real and fake videos.
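
    As a generic illustration of how gaze data can become a saliency map of the kind the abstract mentions (this is not the authors' pipeline; the array layout and smoothing bandwidth are assumptions), one can accumulate fixation positions and blur them with a Gaussian:

```python
# Minimal sketch (not the authors' pipeline): turn eye-tracking fixations into
# a saliency map by accumulating fixation positions and smoothing with a Gaussian.
import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_saliency(fix_x, fix_y, width, height, sigma=30.0):
    """fix_x, fix_y: fixation coordinates in pixels (assumed input format)."""
    density = np.zeros((height, width), dtype=np.float64)
    xs = np.clip(np.round(fix_x).astype(int), 0, width - 1)
    ys = np.clip(np.round(fix_y).astype(int), 0, height - 1)
    np.add.at(density, (ys, xs), 1.0)           # fixation count per pixel
    saliency = gaussian_filter(density, sigma)  # smooth counts into a heat map
    return saliency / (saliency.max() + 1e-12)  # normalize to [0, 1]

# Example with synthetic fixations on a 1920x1080 frame.
rng = np.random.default_rng(0)
smap = fixation_saliency(rng.uniform(0, 1920, 200), rng.uniform(0, 1080, 200), 1920, 1080)
```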

    Harnessing the power of the general public for crowdsourced business intelligence: a survey

    Get PDF
    Crowdsourced business intelligence (CrowdBI), which leverages crowdsourced user-generated data to extract useful knowledge about business and to create marketing intelligence for excelling in the business environment, has become a surging research topic in recent years. Compared with traditional business intelligence, which is based on firm-owned data and survey data, CrowdBI raises numerous unique issues in areas such as customer behavior analysis, brand tracking and product improvement, demand forecasting and trend analysis, competitive intelligence, business popularity analysis and site recommendation, and urban commercial analysis. This paper first characterizes the concept model and unique features of CrowdBI and presents a generic framework for it. It also investigates novel application areas as well as the key challenges and techniques of CrowdBI. Furthermore, we discuss future research directions for CrowdBI.
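
    Purely as an illustration of the kind of crowdsourced marketing intelligence the survey describes (none of this code or data comes from the paper; the file and column names are assumptions), user-generated reviews could be aggregated into a per-brand sentiment trend:

```python
# Illustrative sketch only: brand tracking from crowdsourced reviews.
# Assumed input: a CSV with columns brand, date, and a sentiment score in [-1, 1].
import pandas as pd

reviews = pd.read_csv("reviews.csv", parse_dates=["date"])

# Monthly average sentiment per brand, a simple brand-tracking signal.
trend = (
    reviews
    .set_index("date")
    .groupby("brand")["sentiment"]
    .resample("MS")      # month-start buckets
    .mean()
    .unstack("brand")
)
print(trend.tail())
```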

    Can AI Moderate Online Communities?

    Full text link
    The task of cultivating healthy communication in online communities becomes increasingly urgent as gaming and social media experiences become progressively more immersive and life-like. We approach the challenge of moderating online communities by training student models using a large language model (LLM). We use zero-shot learning models to distill and expand datasets, followed by few-shot learning and a fine-tuning approach, leveraging open-access generative pre-trained transformer (GPT) models from OpenAI. Our preliminary findings suggest that, when properly trained, LLMs can excel at identifying actor intentions, moderating toxic comments, and rewarding positive contributions. The student models perform above expectation on non-contextual assignments such as identifying classically toxic behavior, and perform sufficiently on contextual assignments such as identifying positive contributions to online discourse. Further, using open-access models like OpenAI's GPT, we experience a step change in the development process for what has historically been a complex modeling task. We contribute to the information systems (IS) discourse with a rapid development framework for applying generative AI to online content moderation and the management of culture in decentralized, pseudonymous communities, by providing a sample suite of industry-ready generative AI models based on open-access LLMs.
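
    The abstract gives no code; the sketch below shows one way the zero-shot labeling step used to build a distillation dataset could look, using OpenAI's Python client. The model name, prompt, and label set are assumptions, not the authors' setup.

```python
# Hypothetical sketch of zero-shot labeling to build a distillation dataset;
# the model name, prompt, and label set are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

LABELS = ["toxic", "neutral", "positive_contribution"]

def zero_shot_label(comment: str) -> str:
    """Ask the LLM to classify a community comment into one of LABELS."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[
            {"role": "system",
             "content": "Classify the user comment as exactly one of: "
                        + ", ".join(LABELS) + ". Reply with the label only."},
            {"role": "user", "content": comment},
        ],
        temperature=0,
    )
    label = response.choices[0].message.content.strip().lower()
    return label if label in LABELS else "neutral"  # fall back on unexpected output

# Labels produced this way could then supervise a smaller student classifier.
print(zero_shot_label("Thanks for the detailed walkthrough, this fixed my issue!"))
```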

    AI-Generated Voice in Short Videos: A Digital Consumer Engagement Perspective

    Get PDF
    The AI-generated voice (AIGV) has been widely applied in the short-video industry to facilitate video creation. However, the impact of AIGV on digital consumer engagement (DCE) remains unclear. In light of this, the study investigates the effect of AIGV on DCE by analyzing a panel dataset of 21,541 videos from 3,647 content creators on TikTok. Preliminary results of a series of fixed-effects panel regressions reveal that using AIGV has a significantly negative effect on DCE (a 5.4% reduction in the number of likes, a 5.2% reduction in the number of comments, and a 7.4% reduction in the number of shares). Our further analyses show that this negative effect is particularly significant at the rising-action stage of short videos. With these findings, this study is expected to make theoretical contributions to the literature on short videos and to offer practical implications for the appropriate use of AIGV.
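
    For readers unfamiliar with the estimation strategy, a fixed-effects panel regression of engagement on AIGV usage might be sketched as follows; the linearmodels dependency, variable names, and index layout are assumptions rather than the authors' actual specification.

```python
# Hypothetical sketch of a creator- and period-fixed-effects regression;
# variable names, the linearmodels dependency, and the index layout are assumptions.
import numpy as np
import pandas as pd
from linearmodels.panel import PanelOLS

# Assumed long-format panel: one row per (creator_id, period) observation.
df = pd.read_csv("videos_panel.csv").set_index(["creator_id", "period"])

# Log-transform the engagement count so the coefficient reads roughly as a percent change.
df["log_likes"] = np.log1p(df["likes"])

# aigv: 1 if the video uses an AI-generated voice, 0 otherwise.
model = PanelOLS(df["log_likes"], df[["aigv"]], entity_effects=True, time_effects=True)
result = model.fit(cov_type="clustered", cluster_entity=True)
print(result.summary)
```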

    Visual Event Cueing in Linked Spatiotemporal Data

    Get PDF
    The media disperses a large amount of information daily pertaining to political events, social movements, and societal conflicts. Media coverage of these topics, no matter the format of publication, is framed in a particular way. Framing is used not just to guide audiences toward desired beliefs, but also to fuel societal change or to legitimize/delegitimize social movements. For this reason, tools that can help clarify when changes in social discourse occur and identify their causes are of great use. This thesis presents a visual analytics framework that allows for the exploration and visualization of changes that occur in the social climate with respect to space and time. Focusing on the links between data from the Armed Conflict Location and Event Data Project (ACLED) and a streaming RSS news dataset, users can be cued into interesting events, enabling them to form and explore hypotheses. The framework also focuses on improving intervention detection, allows users to hypothesize about correlations between events and happiness levels, and supports collaborative analysis.
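
    As a rough illustration of the kind of temporal event cueing described (not the framework's actual implementation; the ACLED column names, rolling window, and threshold are assumptions), one could flag days whose conflict-event counts spike well above a rolling baseline and surface those as cues to cross-reference with the news stream:

```python
# Illustrative sketch of temporal event cueing on ACLED-style data; column
# names, the rolling window, and the z-score threshold are assumptions.
import pandas as pd

events = pd.read_csv("acled_events.csv", parse_dates=["event_date"])

# Daily event counts per country.
daily = (
    events.groupby(["country", pd.Grouper(key="event_date", freq="D")])
    .size()
    .rename("count")
    .reset_index()
)

def cue_spikes(group: pd.DataFrame, window: int = 28, z: float = 3.0) -> pd.DataFrame:
    """Flag days whose count exceeds the rolling mean by z rolling standard deviations."""
    baseline = group["count"].rolling(window, min_periods=7).mean()
    spread = group["count"].rolling(window, min_periods=7).std()
    group["cue"] = group["count"] > baseline + z * spread
    return group

cued = daily.groupby("country", group_keys=False).apply(cue_spikes)
print(cued[cued["cue"]].head())  # candidate dates to cross-reference with RSS news
```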

    Multimodal Based Audio-Visual Speech Recognition for Hard-of-Hearing: State of the Art Techniques and Challenges

    Get PDF
    Multimodal Integration (MI) is the study of merging the knowledge acquired by the nervous system through sensory modalities such as speech, vision, touch, and gesture. The applications of MI span the areas of Audio-Visual Speech Recognition (AVSR), Sign Language Recognition (SLR), Emotion Recognition (ER), Biometric Applications (BMA), Affect Recognition (AR), Multimedia Retrieval (MR), etc. Fusions of modality pairs such as hand gesture and facial expression, or lip and hand position, are the sensory combinations most commonly used in the development of multimodal systems for the hearing impaired. This paper gives an overview of the multimodal systems available in the literature for hearing-impaired studies and also discusses some of the studies related to hearing-impaired acoustic analysis. It is observed that far fewer algorithms have been developed for hearing-impaired AVSR than for normal hearing. Thus, the study of audio-visual speech recognition systems for the hearing impaired is in high demand, especially for people who are trying to communicate in natively spoken languages. This paper also highlights the state-of-the-art techniques in AVSR and the challenges faced by researchers in the development of AVSR systems.
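
    The survey is about fusing audio and visual modalities; purely for orientation, a minimal feature-level fusion classifier is sketched below in PyTorch. The encoder sizes, feature dimensions, and concatenation-based fusion are assumptions, not any specific system from the survey.

```python
# Minimal illustrative audio-visual fusion model; dimensions and the simple
# concatenation-based fusion are assumptions, not a system from the survey.
import torch
import torch.nn as nn

class AVFusionClassifier(nn.Module):
    def __init__(self, audio_dim=40, visual_dim=128, hidden=64, num_classes=30):
        super().__init__()
        self.audio_enc = nn.Sequential(nn.Linear(audio_dim, hidden), nn.ReLU())
        self.visual_enc = nn.Sequential(nn.Linear(visual_dim, hidden), nn.ReLU())
        # Feature-level fusion: concatenate the two modality embeddings.
        self.classifier = nn.Linear(2 * hidden, num_classes)

    def forward(self, audio_feats, visual_feats):
        fused = torch.cat([self.audio_enc(audio_feats), self.visual_enc(visual_feats)], dim=-1)
        return self.classifier(fused)

# Example: batch of 8 utterances with per-utterance pooled audio and visual features.
model = AVFusionClassifier()
logits = model(torch.randn(8, 40), torch.randn(8, 128))
print(logits.shape)  # torch.Size([8, 30])
```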