
    Multimodal interaction in connected automated vehicles

    Electric vehicles and automated vehicles are becoming more pervasive in our everyday lives. Ideally, fully automated vehicles that drivers can completely trust would be the best solution. However, due to technical limitations and human factors issues, fully automated vehicles are still under test, and no concrete evidence has yet shown that their capabilities are superior to human cognition and operation. In the Mind Music Machine Lab, we are actively conducting research on connected and automated vehicles, mainly using driving simulators. This talk focuses on multimodal interactions between a driver and a vehicle, as well as between the driver and nearby drivers. In this autonomous driving context, we facilitate collaborative driving by estimating the driver’s cognitive and affective states using multiple sensors (e.g., computer vision, physiological devices) and by communicating via auditory and gestural channels. Future work includes refining our designs for diverse populations, including drivers with difficulties/disabilities, passengers, and pedestrians.
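    The abstract above describes combining several sensor channels into a single estimate of the driver's state. The sketch below is a minimal illustration of that idea only; the sensor names, weights, and threshold are hypothetical and are not taken from the lab's actual models.

```python
# Minimal sketch: fuse per-sensor estimates of driver arousal into one score
# that could then trigger auditory or gestural feedback. All names and values
# here are illustrative assumptions, not the study's implementation.

from typing import Dict


def fuse_driver_state(estimates: Dict[str, float],
                      weights: Dict[str, float]) -> float:
    """Weighted average of per-sensor arousal estimates, each in [0, 1]."""
    total_weight = sum(weights[name] for name in estimates)
    return sum(estimates[name] * weights[name] for name in estimates) / total_weight


if __name__ == "__main__":
    estimates = {"facial_expression": 0.7, "heart_rate": 0.6, "skin_conductance": 0.8}
    weights = {"facial_expression": 0.5, "heart_rate": 0.3, "skin_conductance": 0.2}
    state = fuse_driver_state(estimates, weights)
    print("fused arousal estimate:", round(state, 2))
    if state > 0.65:                       # illustrative threshold only
        print("trigger calming auditory feedback")
```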

    Robotic arts: Current practices, potentials, and implications

    Given that the origin of the “robot” comes from efforts to create a worker to help people, there has been relatively little research on making robots for non-work purposes. However, some researchers have explored robotic arts since Leonardo da Vinci. Many questions can be posed about the potential of robotic arts: (1) Is there anything we can call machine creativity? (2) Can robots improvise artworks on the fly? and (3) Can art robots pass the Turing test? To ponder these questions and assess the current status of robotic arts, the present paper surveys the contributions of robotics to diverse forms of art, including drawing, theater, music, and dance. The paper describes selected projects in each genre, their core procedures, and their possibilities and limitations within the aesthetic computing framework. It then discusses the implications of these robotic arts for both robot research and art research, followed by conclusions that include answers to the questions posed at the outset.

    Regulating drivers’ aggressiveness by sonifying emotional data

    There have been efforts within the cognitive and behavioral sciences to mitigate drivers’ emotions in order to decrease the associated traffic accidents, injuries, fatalities, and property damage. In this study, we targeted aggressive drivers and tried to regulate their emotions by sonifying their emotional data. Results are discussed in terms of an affect regulation model and directions for future research.
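    As a rough illustration of what "sonifying emotional data" can mean in practice, the sketch below maps an estimated arousal level to simple sound parameters. The mapping, ranges, and function names are assumptions for illustration; the study's actual sonification design is not shown here.

```python
# Minimal sketch: map an arousal estimate in [0, 1] to pitch and tempo.
# Higher arousal -> higher pitch and faster tempo; a calming strategy could
# instead invert the mapping to counteract aggressive states.


def arousal_to_sound_params(arousal: float) -> dict:
    """Return illustrative pitch (Hz) and tempo (BPM) for an arousal level."""
    arousal = min(max(arousal, 0.0), 1.0)      # clamp to the valid range
    pitch_hz = 220.0 * 2 ** (arousal * 2)      # 220 Hz (A3) up to 880 Hz (A5)
    tempo_bpm = 60 + arousal * 90              # 60 to 150 beats per minute
    return {"pitch_hz": pitch_hz, "tempo_bpm": tempo_bpm}


if __name__ == "__main__":
    for a in (0.1, 0.5, 0.9):
        print(a, arousal_to_sound_params(a))
```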

    Robotic motion learning framework to promote social engagement

    Imitation is a powerful component of communication between people, and it has important implications for improving the quality of interaction in the field of human–robot interaction (HRI). This paper discusses a novel framework designed to improve human–robot interaction through robotic imitation of a participant’s gestures. In our experiment, a humanoid robotic agent socializes and plays games with a participant. For the experimental group, the robot additionally imitates one of the participant’s novel gestures during a play session. We hypothesize that the robot’s use of imitation will increase the participant’s openness towards engaging with the robot. Experimental results from a user study of 12 subjects show that, post-imitation, experimental subjects displayed a more positive emotional state, showed higher instances of mood contagion towards the robot, and interpreted the robot to have a higher level of autonomy than their control group counterparts did. These results point to an increased participant interest in engagement, fueled by personalized imitation during interaction.

    “Musical Exercise” for people with visual impairments: A preliminary study with the blindfolded

    Performing independent physical exercise is critical to maintaining one’s good health, but it is especially hard for people with visual impairments. To address this problem, we have developed a Musical Exercise platform for people with visual impairments so that they can consistently perform exercises with good form. We designed six conditions: blindfolded or visual without audio, and blindfolded or visual with two different types of audio feedback (continuous vs. discrete). Eighteen sighted participants took part in the experiment, performing two exercises (squat and wall sit) under all six conditions. The results show that Musical Exercise is a usable exercise assistance system with no adverse effect on exercise completion time or perceived workload. The results also show that, with a specific sound design (i.e., discrete feedback), participants in the blindfolded condition can exercise as consistently as participants in the non-blindfolded condition. This implies that not all sounds work equally well and, thus, care is required in refining auditory displays. The potential and limitations of Musical Exercise and future work are discussed in light of the results.
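    The continuous vs. discrete feedback distinction above can be illustrated with a small sketch: both variants react to deviation from a target joint angle, but one varies a tone smoothly while the other quantizes the deviation into a few named cues. The angles, thresholds, and cue names below are assumptions for illustration, not the platform's actual design.

```python
# Minimal sketch of continuous vs. discrete audio feedback for squat form,
# driven by deviation from an assumed target knee angle.


def continuous_feedback(knee_angle_deg: float, target_deg: float = 90.0) -> float:
    """Return a pitch (Hz) that varies smoothly with deviation from the target angle."""
    deviation = abs(knee_angle_deg - target_deg)
    return 440.0 + deviation * 4.0   # 440 Hz on target, rising as form degrades


def discrete_feedback(knee_angle_deg: float, target_deg: float = 90.0) -> str:
    """Return one of three cue names depending on how far the angle is from target."""
    deviation = abs(knee_angle_deg - target_deg)
    if deviation < 5.0:
        return "on_target_chime"
    if deviation < 15.0:
        return "adjust_tone"
    return "off_target_buzz"


if __name__ == "__main__":
    for angle in (88.0, 100.0, 120.0):
        print(angle, continuous_feedback(angle), discrete_feedback(angle))
```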

    Towards an in-vehicle sonically-enhanced gesture control interface: A pilot study

    A pilot study was conducted to explore the potential of sonically-enhanced gestures as controls for future in-vehicle information systems (IVIS). Four concept menu systems were developed using a LEAP Motion and Pure Data: (1) 2x2 with auditory feedback, (2) 2x2 without auditory feedback, (3) 4x4 with auditory feedback, and (4) 4x4 without auditory feedback. Seven participants drove in a simulator while completing simple target-acquisition tasks with each of the four prototype systems. Driving performance and eye glance behavior were collected, as well as subjective ratings of workload and system preference. Results from driving performance and eye tracking measures strongly indicate that the 2x2 grids yield better driving safety outcomes than the 4x4 grids. Subjective ratings show similar patterns for driver workload and preference. Auditory feedback led to similar improvements in driving performance and eye glance behavior, as well as in subjective ratings of workload and preference, compared to the visual-only conditions.
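    The core interaction described above (a tracked hand position selecting a cell in a 2x2 or 4x4 grid, with an auditory cue on selection changes) can be sketched as below. In the study this role was played by a LEAP Motion sensor feeding Pure Data; the normalized coordinates, function names, and earcon naming here are illustrative assumptions only.

```python
# Minimal sketch: map a normalized hand position to a menu grid cell and
# emit an auditory cue name only when the highlighted cell changes.

from typing import Optional, Tuple


def hand_to_cell(x: float, y: float, grid_size: int) -> Tuple[int, int]:
    """Map a hand position normalized to [0, 1) x [0, 1) to a (row, col) cell."""
    col = min(int(x * grid_size), grid_size - 1)
    row = min(int(y * grid_size), grid_size - 1)
    return row, col


def cue_on_cell_change(prev: Optional[Tuple[int, int]],
                       current: Tuple[int, int]) -> Optional[str]:
    """Return an auditory cue name when the highlighted cell changes, else None."""
    if current != prev:
        return f"earcon_r{current[0]}_c{current[1]}"
    return None


if __name__ == "__main__":
    prev = None
    for x, y in [(0.1, 0.2), (0.15, 0.22), (0.8, 0.7)]:
        cell = hand_to_cell(x, y, grid_size=2)
        print(cell, cue_on_cell_change(prev, cell))
        prev = cell
```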

    Anger effects on driver situation awareness and driving performance

    Research has suggested that emotional states have critical effects on various cognitive processes, which are important components of situation awareness (Endsley, 1995b). Evidence from driving studies has also emphasized the importance of driver situation awareness for performance and safety. However, to date, little research has investigated the relationship between emotional effects and driver situation awareness. In our experiment, 30 undergraduates drove in a simulator after induction of either anger or neutral affect. Results showed that an induced angry state can degrade driver situation awareness as well as driving performance compared to a neutral state. However, the angry state did not have an impact on participants’ subjective judgment or perceived workload, which might imply that the effects of anger occurred below their level of conscious awareness. One reason participants failed to compensate for their performance deficits might be that they were not aware of the severity of anger’s effects on their driving performance.

    From rituals to magic: Interactive art and HCI of the past, present, and future

    The connection between art and technology is much tighter than is commonly recognized. The emergence of aesthetic computing in the early 2000s has brought renewed focus on this relationship. In this article, we articulate how art and Human–Computer Interaction (HCI) are compatible with each other and, in fact, essential to advancing each other in this era, by briefly addressing interconnected components of both areas: interaction, creativity, embodiment, affect, and presence. After briefly introducing the history of interactive art, we discuss how art and HCI can contribute to one another by illustrating contemporary examples of art in immersive environments, robotic art, and machine intelligence in art. We then identify challenges and opportunities for collaborative efforts between art and HCI. Finally, we reiterate important implications and pose future directions. This article is intended as a catalyst to facilitate discussion of the mutual benefits of collaboration between the art and HCI communities. It also aims to provide artists and researchers in this domain with suggestions about where to go next.

    Examining the learnability of auditory displays: Music, earcons, spearcons, and lyricons

    Auditory displays are a useful means of conveying information to users for a variety of reasons. The present study examined different types of sounds that can be used in auditory displays—music, earcons, spearcons, and lyricons—to determine which have the highest learnability when presented in sequences. Participants self-trained on sound meanings and were then asked to recall meanings after listening to sequences of varying lengths. The relatedness of sounds and their attributed meanings, or the intuitiveness of the sounds, was also examined. The results show that participants learned and recalled lyricons and spearcons best, and that related meaning is an important contributing variable to the learnability and memorability of all sound types. This should open the door for future research and experimentation on lyricons and spearcons presented in auditory streams.
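    For readers unfamiliar with spearcons: they are typically created by time-compressing a spoken phrase until it is no longer heard as speech but still maps to its meaning. The sketch below shows the compression idea only, using naive resampling with NumPy (which also shifts pitch); pitch-preserving time compression would normally be used, and the waveform here is a stand-in rather than recorded speech.

```python
# Minimal sketch of time-compressing a waveform, as a rough stand-in for
# spearcon generation. Uses linear interpolation; not pitch-preserving.

import numpy as np


def compress_waveform(samples: np.ndarray, factor: float) -> np.ndarray:
    """Shorten a mono waveform by the given factor via linear interpolation."""
    n_out = max(1, int(len(samples) / factor))
    old_idx = np.linspace(0, len(samples) - 1, num=len(samples))
    new_idx = np.linspace(0, len(samples) - 1, num=n_out)
    return np.interp(new_idx, old_idx, samples)


if __name__ == "__main__":
    sr = 16000
    t = np.linspace(0, 1.0, sr, endpoint=False)
    speech_like = np.sin(2 * np.pi * 200 * t)   # stand-in for a recorded phrase
    spearcon = compress_waveform(speech_like, factor=2.5)
    print(len(speech_like), "->", len(spearcon), "samples")
```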