International SERIES on Information Systems and Management in Creative eMedia (CreMedia)
    147 research outputs found

    Interacting with Intelligent Characters in AR

    In this paper, we explore interacting with virtual characters in AR within real-world environments. Our vision is that virtual characters will be able to understand the real-world environment and interact with it in an intelligent and realistic manner. For example, a character can walk around uneven stairs and slopes, or be pushed away by collisions with real-world objects such as a ball. We describe how to automatically animate a new character, and imbue its motion with adaptation to environments and reactions to perturbations from the real world.
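    The "pushed away by collisions" reaction can be pictured with a minimal 2D stand-in: both the character and the real-world object are approximated as circles, and an overlap is resolved along the collision normal. This is an illustrative sketch, not the paper's actual method; all shapes and numbers are assumptions.

    ```python
    import math

    def push_out(char_pos, char_r, obj_pos, obj_r):
        """Push a virtual character out of overlap with a real-world object.

        Both shapes are approximated as 2D circles; on overlap, the character
        is displaced along the collision normal until the circles just touch.
        """
        dx = char_pos[0] - obj_pos[0]
        dy = char_pos[1] - obj_pos[1]
        dist = math.hypot(dx, dy)
        overlap = char_r + obj_r - dist
        if overlap <= 0 or dist == 0:
            return char_pos  # no collision (or degenerate case): no reaction
        nx, ny = dx / dist, dy / dist  # unit collision normal
        return (char_pos[0] + nx * overlap, char_pos[1] + ny * overlap)

    print(push_out((1.0, 0.0), 1.0, (0.0, 0.0), 1.0))  # → (2.0, 0.0)
    ```

    A full system would feed the resulting displacement into the character's motion controller so the animation blends with the push rather than teleporting.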

    Combining Intelligent Recommendation and Mixed Reality in Itineraries for Urban Exploration

    Exploration of points of interest (POI) in urban environments is challenging because of the large number of items near or reachable by the user, and because of modality hindrances due to reduced manual flexibility and competing visual attention. We propose to combine different modalities (VR, AR, and haptic-audio interfaces) with intelligent recommendation based on a computational method that combines different data graph overlays: social, personal, and search-time user input. We integrate these features into flexible itineraries that aid different phases and aspects of exploration.
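    The combination of graph overlays can be sketched as a weighted score fusion: each overlay assigns a relevance score to a POI, and a weighted sum produces the final ranking. The overlay names match the abstract, but the weights, scores, and fusion rule are illustrative assumptions, not the authors' actual method.

    ```python
    def recommend(pois, overlays, weights):
        """Rank POIs by a weighted sum of per-overlay relevance scores.

        pois     : list of POI identifiers
        overlays : dict overlay_name -> {poi: score}
        weights  : dict overlay_name -> float
        """
        def total(poi):
            return sum(weights[name] * overlay.get(poi, 0.0)
                       for name, overlay in overlays.items())
        return sorted(pois, key=total, reverse=True)

    overlays = {
        "social":   {"museum": 0.9, "cafe": 0.4},   # popularity among peers
        "personal": {"museum": 0.2, "cafe": 0.8},   # user's own history
        "search":   {"museum": 0.5, "cafe": 0.5},   # search-time input
    }
    weights = {"social": 0.3, "personal": 0.5, "search": 0.2}
    print(recommend(["museum", "cafe"], overlays, weights))  # → ['cafe', 'museum']
    ```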

    Artificial Intelligence Meets Virtual and Augmented Worlds (AIVR), in conjunction with SIGGRAPH Asia


    Visualisation Methods of Hierarchical Biological Data: A Survey and Review

    The sheer amount of high-dimensional biomedical data requires machine learning and advanced data visualization techniques to make the data understandable for human experts. Most biomedical data today lies in arbitrarily high-dimensional spaces and is not directly accessible to the human expert for a visual and interactive analysis process. To cope with this challenge, the application of machine learning and knowledge extraction methods is indispensable throughout the entire data analysis workflow. Nevertheless, human experts need to understand and interpret the data and experimental results. Appropriate understanding is typically supported by visualizing the results adequately, which is not a simple task. Consequently, data visualization is one of the most crucial steps in conveying biomedical results; it can and should be considered a critical part of the analysis pipeline. Still, as of today, 2D representations dominate, and human perception is limited to this lower dimension when trying to understand the data. This makes visualizing the results in an understandable and comprehensive manner a grand challenge. This paper reviews the current state of visualization methods in a biomedical context, focusing on hierarchical biological data as a source for visualization, and gives a comprehensive review.
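    As a trivial illustration of hierarchical biological data as a visualization source, a nested hierarchy can be rendered as an indented text tree — the simplest hierarchical layout. The toy taxonomy below is only an example; the survey itself covers far richer methods.

    ```python
    def render_tree(node, depth=0):
        """Render a (name, children) hierarchy as indented text lines."""
        name, children = node
        lines = ["  " * depth + name]
        for child in children:
            lines.extend(render_tree(child, depth + 1))
        return lines

    taxonomy = ("Eukaryota", [
        ("Animalia", [("Chordata", []), ("Arthropoda", [])]),
        ("Plantae", []),
    ])
    print("\n".join(render_tree(taxonomy)))
    ```

    The same recursive traversal underlies richer layouts such as dendrograms, treemaps, and sunburst charts: only the drawing step changes.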

    Deep Learning for Classification of Peak Emotions within Virtual Reality Systems

    Research has demonstrated well-being benefits from positive, ‘peak’ emotions such as awe and wonder, prompting the HCI community to utilize affective computing and AI modelling for elicitation and measurement of those target emotional states. The immersive nature of virtual reality (VR) content and systems can lead to feelings of awe and wonder, especially with a responsive, personalized environment based on biosignals. However, an accurate model is required to differentiate between emotional states that have similar biosignal input, such as awe and fear. Deep learning may provide a solution, since the subtleties of these emotional states and affect may be recognized, with biosignal data viewed as a time series so that researchers and designers can understand which features of the system may have influenced target emotions. The deep learning fusion system proposed in this paper will use data collected from a corpus, created through collection of physiological biosignals and ranked qualitative data, and will classify these multimodal signals into target outputs of affect. The model will run in real time for the evaluation of VR system features which influence awe/wonder, using a bio-responsive environment. Since biosignal data will be collected through wireless, wearable sensor technology, and modelled on the same computer powering the VR system, it can be used in field research and studios.
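    The classification step can be pictured with a drastically simplified stand-in for the deep fusion model: summarize each biosignal window with two statistics and assign it to the nearest affect centroid in feature space. The centroid values and signal traces are invented for illustration; the paper's actual system is a learned deep model, not this heuristic.

    ```python
    import statistics

    def features(signal):
        """Summarize one biosignal window by its mean and standard deviation."""
        return (statistics.mean(signal), statistics.pstdev(signal))

    def classify(window, centroids):
        """Assign the window to the nearest affect centroid in feature space."""
        f = features(window)
        def dist(label):
            return sum((a - b) ** 2 for a, b in zip(f, centroids[label]))
        return min(centroids, key=dist)

    # Invented centroids: awe as a steady trace, fear as a spikier one.
    centroids = {"awe": (0.7, 0.05), "fear": (0.9, 0.4)}
    print(classify([0.68, 0.72, 0.70], centroids))  # → awe
    ```

    This also shows why similar signals are hard to separate: awe and fear centroids differ mainly in variability, so windows near the boundary need a model that captures temporal subtleties — the motivation for deep learning here.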

    Adaptive Tutoring on a Virtual Reality Driving Simulator

    We propose a system for a VR driving simulator including an intelligent tutoring system (ITS) to train the user's driving skills. The VR driving simulator comprises a detailed model of a city, VR traffic, and a physical driving engine, interacting with the driver. In a physical mockup of a car cockpit, the driver operates the vehicle through the virtual environment by controlling a steering wheel, pedals, and a gear lever. Using an HMD, the driver observes the scene from within the car. The realism of the simulation is enhanced by a 6-DOF motion platform, capable of simulating forces experienced when accelerating, braking, or turning the car. Based on a pre-defined list of driving-related activities, the ITS permanently assesses the quality of driving during the simulation and suggests an optimal path through the city to the driver in order to improve the driving skills. A user study revealed that most drivers experience presence in the virtual world and are proficient in operating the car.
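    The continuous assessment step can be sketched as a simple scoring pass over the pre-defined activity list: score driving quality as the fraction of activities performed correctly and flag the failed ones for the tutor to target on the next route. The activity names and the scoring rule are hypothetical assumptions, not the simulator's actual ITS logic.

    ```python
    def assess(events, activities):
        """Score driving quality and list activities the tutor should target.

        events     : dict activity -> observed outcome ("ok" or "violation")
        activities : pre-defined list of driving-related activities
        """
        failed = [a for a in activities if events.get(a) != "ok"]
        score = (len(activities) - len(failed)) / len(activities)
        return score, failed

    activities = ["signal_before_turn", "keep_speed_limit", "check_mirrors"]
    events = {"signal_before_turn": "ok",
              "keep_speed_limit": "violation",
              "check_mirrors": "ok"}
    print(assess(events, activities))  # score 2/3; next target: keep_speed_limit
    ```

    A route planner could then weight city segments by the failed activities (e.g. more speed-limited zones) when suggesting the next path.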

    Data-Driven Approach to Human-Engaged Computing

    This paper presents an overview of the research landscape of data-driven human-engaged computing in the Human-Computer Interaction Initiative at the Hong Kong University of Science and Technology.

    DupRobo: An Interactive Robotic Platform for Physical Block-Based Autocompletion

    In this paper, we present DupRobo, an interactive robotic platform for tangible block-based design and construction. DupRobo supports user-customisable exemplars, repetition control, and tangible autocompletion through computer vision and robotic techniques. With DupRobo, we aim to reduce users’ workload in repetitive block-based construction, yet preserve the direct manipulability and intuitiveness of tangible model design, such as product design and architectural design.
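    The repetition-control idea can be reduced to a one-dimensional sketch: given an exemplar of block positions laid out with a constant stride, extrapolate the next placements for the robot to complete. This is a deliberate simplification under the assumption of a uniform offset; DupRobo's actual vision-based exemplar detection is far more general.

    ```python
    def autocomplete(positions, n):
        """Propose n further block positions by repeating the exemplar's stride.

        positions : exemplar block positions with an assumed constant offset
        n         : number of additional placements to propose
        """
        step = positions[1] - positions[0]      # stride inferred from exemplar
        return [positions[-1] + step * i for i in range(1, n + 1)]

    print(autocomplete([0, 2, 4], 3))  # → [6, 8, 10]
    ```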

    Chalktalk VR/AR

    When people want to brainstorm ideas, they currently often draw their ideas on paper or on a whiteboard. But the result of those drawings is a static visual representation. Alternatively, people often use various tools to prepare animations and simulations to express their ideas. But those animations and simulations must be created beforehand, and therefore cannot be easily modified dynamically in the course of the brainstorming process. Chalktalk VR/AR is a paradigm for creating drawings in the context of a face-to-face brainstorming session that is happening with the support of VR or AR. Participants draw their ideas in the form of simple sketched simulation elements, which can appear to be floating in the air between participants. Those elements are then recognized by a simple AI recognition system, and can be interactively incorporated by participants into an emerging simulation that builds more complex simulations by linking together these simulation elements in the course of the discussion.

    AI, You're Fired! Artwork

    The paper is a text-based artwork representing the initial conceptualization, or contemplative phase, of a media-art and contemporary-art performance and installation. The objective of the long-term art project is to further examine the potential of engaging advanced technology within the context of artistic research and contemporary art practice, with the specific postulate that the potential product of the artwork is expected to be imperceptible. The artistic research refers to the philosophical and metaphysical idea that the alleged real reality cannot be perceived or defined via any concept. The question is whether, if that is so, art or the artist is capable of successfully illustrating the undetectable real reality, even with the most advanced technological instruments employed. The text-based contemporary artwork partly refers to another segment that can also be observed within the context of contemporary art: text-based computer adventure games. More specifically, the method implemented for establishing the artwork’s concept uses some aspects similar to those used in early text-based computer games. There are several stages in which the long-term artwork will progress. The initial form is designed in such a manner as to confirm that this segment of the artwork not only serves as a foundation for the other parts to unfold, but is also autonomous and already complete in terms of contemporary art. This stance applies to all the consecutive stages: each segment is both independent and contextual. The following stages will include interactivity between the author and the art audience, but also with the devices applied in producing the artwork, such as advanced technological instruments, e.g. augmented reality (AR), virtual reality (VR), and mixed reality (MR) devices, as well as interactive 3D technology and artificial intelligence (AI), plus interactivity with the no-reality (reality in spiritual and philosophical contexts).

    108 full texts
    147 metadata records
    Updated in the last 30 days.
    International SERIES on Information Systems and Management in Creative eMedia (CreMedia) is based in Finland.