176 research outputs found

    Artificial Intelligence to fight COVID-19 outbreak impact: an overview

    Artificial Intelligence (AI) is showing its strength worldwide in the healthcare sector. Today, in the aftermath of the COVID-19 pandemic, technology appears relevant to keeping the increase in new infections stable and to helping medical staff with treatment. Therefore, this paper investigates how AI can be employed against the COVID-19 outbreak. Using a multiple case study approach, the researchers derive the following insights. First, AI can be used for drug discovery and knowledge sharing, tracking and prediction, clinical decision making and diagnosis, social distancing, and medical chatbots. Second, the paper provides an in-depth analysis of international best practices for contact-tracing and social-distancing applications. Third, AI technologies could have a transversal impact, also supporting prevention strategies as a new vein of corporate social responsibility. Finally, the paper has both theoretical and managerial implications. On the theoretical side, it contributes to the extensive discussion on AI and healthcare in light of the COVID-19 outbreak. On the practical side, it provides medical personnel and policymakers with a tool for understanding artificial intelligence and for focusing investment choices on the practical applications analysed.

    Comparing technologies for conveying emotions through realistic avatars in virtual reality-based metaverse experiences

    With the development of metaverse(s), industry and academia are searching for the best ways to represent users' avatars in shared Virtual Environments (VEs), where real-time communication between users is required. The expressiveness of avatars is crucial for transmitting emotions, which are key to social presence and user experience and are conveyed via verbal and non-verbal facial and body signals. In this paper, two real-time modalities for conveying expressions in Virtual Reality (VR) via realistic, full-body avatars are compared by means of a user study. The first modality uses dedicated hardware (i.e., eye and facial trackers) to map the user's facial expressions and eye movements onto the avatar model. The second modality relies on an algorithm that, starting from an audio clip, approximates the facial motion by generating plausible lip and eye movements. For both modalities, the participants were asked to observe the avatar of an actor performing six scenes, each involving one of the six basic emotions. The evaluation considered mainly social presence and emotion conveyance. Results showed a clear superiority of facial tracking over lip sync in conveying sadness and disgust; the advantage was less evident for happiness and fear, and no differences were observed for anger and surprise.

    Improving AR-powered remote assistance: A new approach aimed to foster operator’s autonomy and optimize the use of skilled resources

    Augmented Reality (AR) has a number of applications in industry, and remote assistance represents one of the most prominent and widely studied use cases. However, although the set of functionalities supporting communication between remote experts and on-site operators has grown over time, the way in which remote assistance is delivered has not yet evolved to unleash the full potential of AR technology. The expert typically guides the operator step by step, basically using AR-based hints to visually support voice instructions. With this approach, skilled human resources may be under-utilized, as the time an expert invests in the assistance corresponds to the time the operator needs to execute the requested operations. The goal of this work is to introduce a new approach to remote assistance that takes advantage of AR functionalities separately proposed in academic works and commercial products to re-organize the guidance workflow, with the aim of increasing the operator's autonomy and, thus, optimizing the use of the expert's time. An AR-powered remote assistance platform able to support the devised approach is also presented. By means of a user study, this approach was compared to traditional step-by-step guidance in order to estimate the still-unexploited potential of AR. Results showed that the new approach can reduce the expert's time investment, allowing the operator to autonomously complete the assigned tasks in a time comparable to step-by-step guidance, with a negligible need for further support.

    On the usability of consumer locomotion techniques in serious games: Comparing arm swinging, treadmills and walk-in-place

    Locomotion in Virtual Reality (VR) encompasses a vast and variegated range of investigations, solutions, and devices from both research and industry. Despite this richness, a consolidated methodology for evaluating the many available locomotion techniques is still lacking. The present paper extends a previous work in which the authors performed a user study comparing two common locomotion techniques, i.e., Arm Swinging and an omni-directional treadmill with a containment ring. In that study, users were engaged in a realistic immersive VR scenario depicting a fire in a road tunnel. Remaining adherent to the previously defined methodology, the current work widens the comparison to two further locomotion methods (keeping the results obtained with the former techniques for reference), namely, a different treadmill that constrains the user through a top-mounted independent support structure, and Walk-in-Place, a technique that lets the user move through the virtual environment by performing a natural marching gesture, detected by two sensors placed on his or her legs.
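The core of a Walk-in-Place technique like the one described above is turning leg-sensor motion into discrete steps that drive forward movement. A minimal sketch of such step detection is shown below; the threshold values and the simple upward-crossing heuristic are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch: counting marching steps from one leg-mounted tracker.
# Threshold and sample values are assumptions for illustration only.

STEP_THRESHOLD = 0.15  # foot height (m) above which a lift counts as a step

def detect_steps(heights, threshold=STEP_THRESHOLD):
    """Count steps as upward crossings of the height threshold."""
    steps = 0
    above = False
    for h in heights:
        if not above and h > threshold:
            steps += 1       # foot lifted past the threshold: new step
            above = True
        elif above and h <= threshold:
            above = False    # foot lowered: ready for the next step
    return steps

# Example stream of tracker heights containing two distinct foot lifts.
samples = [0.0, 0.1, 0.2, 0.1, 0.0, 0.05, 0.18, 0.02]
print(detect_steps(samples))  # 2
```

In a real system, the detected step rate would then be mapped to a forward velocity in the gaze or body direction; a hysteresis band around the threshold, as used here, avoids counting sensor jitter as extra steps.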

    Comparison of hands-free speech-based navigation techniques for virtual reality training

    In Virtual Reality (VR) training, the depicted scenarios can be characterized by a high level of complexity and extent. Speech-based interaction techniques can provide an intuitive, natural, and effective way to navigate large Virtual Environments (VEs) without handheld controllers, which may impair the execution of manual tasks or prevent the use of wearable haptic devices. In this study, three hands-free speech-based navigation techniques for VR are compared: a speech-only technique, a speech-with-gaze variant (gaze to point at the destination, speech as the trigger), and a combination of the first two. The techniques were deployed in a large VE representing a common industrial setting (a hangar), and a within-subjects user study was carried out to assess their usability and performance.
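The speech-with-gaze variant described above can be pictured as two cooperating inputs: the gaze ray selects a destination on the floor, and a recognized voice command triggers the move. The sketch below illustrates this division of labor under simple assumptions (a flat floor plane, a single trigger word); the names and geometry are illustrative and not taken from the study's implementation.

```python
# Hypothetical sketch of gaze-to-point + speech-trigger navigation.
# The flat-floor intersection and the "go" trigger word are assumptions.

from dataclasses import dataclass

@dataclass
class Vec3:
    x: float
    y: float
    z: float

def gaze_hit_point(origin, direction, floor_y=0.0):
    """Intersect the gaze ray with a flat floor plane to get a destination."""
    if direction.y >= 0:
        return None  # looking level or upward: no floor intersection
    t = (floor_y - origin.y) / direction.y
    return Vec3(origin.x + t * direction.x, floor_y, origin.z + t * direction.z)

def navigate(user_pos, gaze_origin, gaze_dir, command):
    """Move the user to the gazed point only when the trigger word is spoken."""
    target = gaze_hit_point(gaze_origin, gaze_dir)
    if command == "go" and target is not None:
        return target
    return user_pos  # no trigger (or no valid target): stay in place

# Eyes at 1.7 m looking forward and down: destination lands 1.7 m ahead.
dest = navigate(Vec3(0, 0, 0), Vec3(0, 1.7, 0), Vec3(0, -1.0, 1.0), "go")
print(round(dest.z, 2))  # 1.7
```

Separating the pointing channel (gaze) from the confirmation channel (speech) is what keeps both hands free, which is the motivation the abstract gives for avoiding handheld controllers.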

    Re-contextualizing the standing Sekhmet statues in the Temple of Ptah at Karnak through digital reconstruction and VR experience

    Recent trends in the Digital Humanities – conceived as new modalities of collaborative, transdisciplinary and computational research and presentation – also strongly influence research approaches and presentation practices in museums. Indeed, ongoing museum projects have considerably expanded digital access to data and information and the documentation and visualization of ancient ruins and objects. In addition, 3D modelling and eXtended Reality have opened up new avenues for interacting with a wider public through digital reconstructions that allow both objects and sites to be presented through visual narratives based on multidisciplinary scholarly research. The article illustrates the use of 3D digital reconstruction and virtual reality to recontextualise standing statues of Sekhmet in the Temple of Ptah at Karnak, where they were found in 1818. Today, they are on display at Museo Egizio, Turin. The theoretical framework of the research and the operational workflow – based on the study of the available archaeological, textual, and pictorial data – are presented here.