
    Interactive Search and Exploration in Online Discussion Forums Using Multimodal Embeddings

    In this paper we present a novel interactive multimodal learning system, which facilitates search and exploration in large networks of social multimedia users. It allows the analyst to identify and select users of interest, and to find similar users in an interactive learning setting. Our approach is based on novel multimodal representations of users, words and concepts, which we simultaneously learn by deploying a general-purpose neural embedding model. We show these representations to be useful not only for categorizing users, but also for automatically generating user and community profiles. Inspired by traditional summarization approaches, we create the profiles by selecting diverse and representative content from all available modalities, i.e. the text, image and user modality. The usefulness of the approach is evaluated using artificial actors, which simulate user behavior in a relevance feedback scenario. Multiple experiments were conducted in order to evaluate the quality of our multimodal representations, to compare different embedding strategies, and to determine the importance of different modalities. We demonstrate the capabilities of the proposed approach on two different multimedia collections originating from the violent online extremism forum Stormfront and the microblogging platform Twitter, which are particularly interesting due to the high semantic level of the discussions they feature.
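
The core mechanism the abstract describes, ranking users by similarity in a shared embedding space and refining the query via relevance feedback, can be sketched as follows. The vectors, user names, and Rocchio-style update are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy user embeddings in a shared multimodal space (dimensions are arbitrary).
users = {
    "user_a": np.array([0.9, 0.1, 0.0]),
    "user_b": np.array([0.8, 0.3, 0.1]),
    "user_c": np.array([0.0, 0.2, 0.9]),
}

def rank_similar(query, users):
    """Rank users by cosine similarity to a query vector, most similar first."""
    return sorted(users, key=lambda u: cosine(query, users[u]), reverse=True)

def update_query(query, relevant, alpha=0.5):
    """Rocchio-style relevance feedback: pull the query toward users marked relevant."""
    centroid = np.mean([users[u] for u in relevant], axis=0)
    return (1 - alpha) * query + alpha * centroid

query = np.array([1.0, 0.0, 0.0])
print(rank_similar(query, users))        # user_a ranked first
query = update_query(query, ["user_b"])  # analyst marks user_b as relevant
print(rank_similar(query, users))
```

The same ranking would apply to words and concepts, since the paper embeds all of them in one space.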

    Creating Interaction Scenarios With a New Graphical User Interface

    The field of human-centered computing has made major progress in recent years. It is widely accepted that this field is multidisciplinary and that the human is at the core of the system. This highlights two concerns: multidisciplinarity and the human. The first reveals that each discipline plays an important role in the overall research and that collaboration between all of them is needed. The second reflects a growing body of research that aims to increase the human's degree of commitment by giving him or her a decisive role in human-machine interaction. This paper addresses both concerns and presents MICE (Machines Interaction Control in their Environment), a system in which the human makes the decisions that manage the interaction with the machines. In an ambient context, the human can decide on the objects' actions by creating interaction scenarios with a new visual programming language: scenL. Comment: 5th International Workshop on Intelligent Interfaces for Human-Computer Interaction, Palermo, Italy (2012).
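
An interaction scenario of the kind the abstract describes can be thought of as a set of trigger-action rules over ambient objects. The rule structure, names, and dispatch function below are assumptions for illustration, not the actual scenL language.

```python
# Hypothetical trigger-action scenario: each rule maps a sensed event
# to a device action the user has authored.
scenario = [
    {"when": {"sensor": "door", "event": "opened"},
     "then": {"device": "lamp", "action": "turn_on"}},
    {"when": {"sensor": "room", "event": "empty"},
     "then": {"device": "lamp", "action": "turn_off"}},
]

def dispatch(scenario, sensor, event):
    """Return the device actions triggered by an incoming sensor event."""
    return [rule["then"] for rule in scenario
            if rule["when"] == {"sensor": sensor, "event": event}]

print(dispatch(scenario, "door", "opened"))  # lamp turns on
```

A graphical editor like the one described would let the user compose such rules visually rather than writing them out.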

    Towards a framework for investigating tangible environments for learning

    External representations have been shown to play a key role in mediating cognition. Tangible environments offer the opportunity for novel representational formats and combinations, potentially increasing representational power for supporting learning. However, we currently know little about the specific learning benefits of tangible environments, and have no established framework within which to analyse the ways that external representations work in tangible environments to support learning. Taking external representation as the central focus, this paper proposes a framework for investigating the effect of tangible technologies on interaction and cognition. Key artefact-action-representation relationships are identified, and classified to form a structure for investigating the differential cognitive effects of these features. An example scenario from our current research is presented to illustrate how the framework can be used as a method for investigating the effectiveness of differential designs for supporting science learning.

    Proceedings of the international conference on cooperative multimodal communication CMC/95, Eindhoven, May 24-26, 1995


    Predictive biometrics: A review and analysis of predicting personal characteristics from biometric data

    Interest in the exploitation of soft biometrics information has continued to develop over the last decade or so. In comparison with traditional biometrics, which focuses principally on person identification, the idea of soft biometrics processing is to study the utilisation of more general information regarding a system user, which is not necessarily unique. There are increasing indications that this type of data will have great value in providing complementary information for user authentication. However, the authors have also seen a growing interest in broadening the predictive capabilities of biometric data, encompassing both easily definable characteristics such as subject age and, most recently, 'higher level' characteristics such as emotional or mental states. This study presents a selective review of the predictive capabilities, in the widest sense, of biometric data processing, providing an analysis of the key issues still to be adequately addressed if this concept of predictive biometrics is to be fully exploited in the future.
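
The basic idea of predicting a soft-biometric characteristic from biometric measurements can be sketched as a simple classifier. The feature vectors, the "age group" labels, and the nearest-centroid rule below are synthetic assumptions for illustration, not a method from the review.

```python
import numpy as np

# Synthetic 2-D biometric feature vectors with an assumed soft label.
train_X = np.array([[0.2, 0.1], [0.3, 0.2], [0.8, 0.9], [0.7, 0.8]])
train_y = ["young", "young", "older", "older"]

def centroids(X, y):
    """Mean feature vector per label."""
    return {label: X[[i for i, l in enumerate(y) if l == label]].mean(axis=0)
            for label in set(y)}

def predict(x, cents):
    """Assign the label whose centroid is nearest to the sample."""
    return min(cents, key=lambda label: np.linalg.norm(x - cents[label]))

cents = centroids(train_X, train_y)
print(predict(np.array([0.25, 0.15]), cents))  # "young"
```

Real predictive-biometrics systems replace this toy rule with trained models over much richer features, but the input-to-soft-label mapping is the same shape.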

    Tuning an HCI Curriculum for Master Students to Address Interactive Critical Systems Aspects

    This paper presents the need for specific curricula to address the training of specialists in the area of Interactive Critical Systems. Indeed, while curricula are usually built to produce specialists in one discipline (e.g. computer science), dealing with systems or products requires training in multiple disciplines. The area of Interactive Critical Systems requires deep knowledge in computer science, dependability, Human-Computer Interaction and safety engineering. We report in this paper how these various disciplines have been integrated in a master program at Université Toulouse III, France, and highlight the career paths followed by the graduates and how these careers are oriented towards aeronautics and space application domains.

    Emerging spaces for language learning: AI bots, ambient intelligence, and the metaverse

    Looking at human communication from the perspective of semiotics extends our view beyond verbal language to consider other sign systems and meaning-making resources. Those include gestures, body language, images, and sounds. From this perspective, the communicative process expands from individual mental processes of verbalizing to include features of the environment, the place and space in which the communication occurs. It may be, and it is increasingly the case today, that language is mediated through digital networks. Online communication has become multimodal in virtually all platforms. At the same time, mobile devices have become indispensable digital companions, extending our perceptive and cognitive abilities. Advances in artificial intelligence are enabling tools that have considerable potential for language learning, as well as creating more complexity in the relationship between humans and the material world. In this column, we will be looking at changing perspectives on the role of place and space in language learning, as mobile, embedded, virtual, and reality-augmenting technologies play an ever-increasing role in our lives. Understanding that dynamic is aided by theories and frameworks such as 4E cognition and sociomaterialism, which posit closer connections between human cognition/language and the world around us.

    The VoiceApp System: Speech Technologies to Access the Semantic Web

    Proceedings of: 14th Conference of the Spanish Association for Artificial Intelligence, CAEPIA 2011, La Laguna, Spain, November 7-11, 2011. Maximizing accessibility is not always the main objective in the design of web applications, especially when it comes to facilitating access for disabled people. In this paper we present the VoiceApp multimodal dialog system, which enables users to access and browse the Internet by means of speech. The system consists of several modules that provide different user experiences on the web. Voice Dictionary allows multimodal access to the Wikipedia encyclopedia, Voice Pronunciations has been developed to facilitate the learning of new languages by means of games with words and images, whereas Voice Browser provides a fast and effective multimodal interface to the Google web search engine. All the applications in the system can be accessed multimodally using traditional graphical user interfaces such as keyboard and mouse, and/or by means of voice commands. Thus, the results are also accessible to motor-handicapped and visually impaired users, and are easier to access by any user on small hand-held devices, where graphical interfaces are in some cases difficult to employ. Research funded by projects CICYT TIN 2008-06742-C02-02/TSI, CICYT TEC 2008-06732-C02-02/TEC, CAM CONTEXTS (S2009/TIC-1485), and DPS 2008-07029-C02-02.
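
A system like VoiceApp needs to route each recognized utterance to the right module. The module names below come from the abstract, but the command grammar and the routing function are assumptions for illustration, not the system's actual interface.

```python
def route(command):
    """Map a recognized spoken utterance to a (module, query) pair."""
    cmd = command.lower().strip()
    if cmd.startswith("define "):
        return ("Voice Dictionary", cmd[len("define "):])      # Wikipedia lookup
    if cmd.startswith("pronounce "):
        return ("Voice Pronunciations", cmd[len("pronounce "):])  # language games
    if cmd.startswith("search "):
        return ("Voice Browser", cmd[len("search "):])         # Google search
    return ("Voice Browser", cmd)  # fall back to a plain web search

print(route("Define semantic web"))  # ('Voice Dictionary', 'semantic web')
```

In the real system the same routing would sit behind a speech recognizer, and each module would render its result both graphically and as synthesized speech.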