26 research outputs found

    Haptic Media Scenes

    The aim of this thesis is to apply new media phenomenology and enactive embodied cognition approaches to explain the role of haptic sensitivity and communication in personal computer environments for productivity. Prior theory has given little attention to the role of the haptic senses in influencing cognitive processes, and does not frame the richness of haptic communication in interaction design: haptic interactivity in HCI has historically been designed and analyzed from a perspective on communication as transmission, the sending and receiving of haptic signals. The haptic sense can mediate not only contact confirmation and affirmation but also rich semiotic and affective messages, yet there is a strong contrast between this inherent ability of haptic perception and current support for such haptic communication in interfaces. I therefore ask: How do the haptic senses (touch and proprioception) impact our cognitive faculties when mediated through digital and sensor technologies? How may these insights be employed in interface design to facilitate rich haptic communication? To answer these questions, I use theoretical close readings that embrace two research fields, new media phenomenology and enactive embodied cognition. The theoretical discussion is supported by neuroscientific evidence and tested empirically through case studies centered on digital art. I use these insights to develop the concept of the haptic figura, an analytical tool for framing the communicative qualities of haptic media. The concept gauges rich machine-mediated haptic interactivity and communication in systems with a material solution that supports active haptic perception and the mediation of semiotic and affective messages that are both understood and felt. As such, the concept may function as a design tool for developers, but also for media critics evaluating haptic media.
The tool is used to frame a discussion of the opportunities and shortcomings of haptic interfaces for productivity, differentiating between media systems for the hand and for the full body. The significance of this investigation lies in demonstrating that haptic communication is an underutilized element in personal computer environments for productivity, and in providing an analytical framework for a more nuanced understanding of haptic communication as enabling the mediation of a range of semiotic and affective messages beyond notification and confirmation interactivity.

    All Hands on Deck: Choosing Virtual End Effector Representations to Improve Near Field Object Manipulation Interactions in Extended Reality

    Extended reality, or XR, is the widely adopted umbrella term that collectively describes Virtual reality (VR), Augmented reality (AR), and Mixed reality (MR) technologies. Together, these technologies extend the reality we experience, either by creating a fully immersive experience, as in VR, or by blending the virtual and real worlds, as in AR and MR. The sustained success of XR in the workplace largely hinges on its ability to facilitate efficient user interactions. As when interacting with objects in the real world, users in XR typically interact with virtual elements such as objects, menus, windows, and information that together form the overall experience. Most of these interactions involve near-field object manipulation, for which users are generally provided with visual representations of themselves, also called self-avatars. Representations that involve only the distal entity are called end-effector representations, and they shape how users perceive XR experiences. Through a series of investigations, this dissertation evaluates the effects of virtual end-effector representations on near-field object retrieval interactions in XR settings. Through studies conducted in virtual, augmented, and mixed reality, implications for the virtual representation of end effectors are discussed, and inferences are drawn for the future of near-field interaction in XR. This body of research aids technologists and designers by providing details that help in tailoring the right end-effector representation to improve near-field interactions, collectively establishing knowledge that shapes the future of interactions in XR.

    Real-Time Collision Imminent Steering Using One-Level Nonlinear Model Predictive Control

    Automotive active safety features are designed to complement or intervene in a human driver's actions in safety-critical situations. Existing active safety features, such as adaptive cruise control and lane keep assist, are able to exploit the ever-growing sensor and computing capabilities of modern automobiles. An emerging feature, collision imminent steering, is designed to perform an evasive lane change to avoid collision when the vehicle determines that collision cannot be avoided by braking alone. This is a challenging maneuver, as the expected highway setting is characterized by high speeds, narrow lane restrictions, and hard safety constraints. To perform such a maneuver, the vehicle may be required to operate at its nonlinear dynamics limits, necessitating advanced control strategies to enforce safety and drivability constraints. This dissertation presents a one-level nonlinear model predictive controller formulation for performing a collision imminent steering maneuver in a highway setting at high speed, with direct consideration of the safety criteria of the highway environment and the nonlinearities characteristic of such a potentially aggressive maneuver. The controller is cognizant of highway sizing constraints, vehicle handling capability and stability limits, and time latency when calculating the control action. In simulated testing, it is shown that the controller can avoid collision by conducting a lane change in roughly half the distance required to avoid collision by braking alone. In preliminary vehicle testing, it is shown that the control formulation is compatible with the existing perception pipeline, and that the prescribed control action can safely perform a lane change at low speed. Further, the controller must be suitable for real-time implementation and compatible with expected automotive control architectures. Collision imminent steering, and collision avoidance control more broadly, is a computationally challenging problem.
At highway speeds, the required time for action is on the order of hundreds of milliseconds, requiring a control formulation capable of operating at tens of Hertz. To this end, this dissertation investigates the computational expense of such a controller and presents a framework for designing real-time-compatible nonlinear model predictive controllers. Specifically, methods for numerically simulating the predicted vehicle response and response sensitivities are compared, their interaction with the trajectory optimization strategy is considered, and the resulting mapping to a parallel computing hardware architecture is investigated. The framework systematically evaluates the underlying numerical optimization problem for bottlenecks, for which it provides alternative solution strategies to achieve real-time performance. As applied to the baseline collision imminent steering controller, the procedure results in an approximately three-order-of-magnitude reduction in compute wall time, supporting real-time performance and enabling preliminary testing on automotive-grade hardware. PhD, Mechanical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/163063/1/jbwurts_1.pd
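The receding-horizon idea behind such a controller can be sketched in miniature. The following is a minimal, illustrative shooting-style example, not the dissertation's one-level NMPC formulation: it forward-simulates a simple kinematic bicycle model over a short horizon, scores candidate piecewise-constant steering sequences against a lane-change objective, and applies the first input of the cheapest sequence. All parameters (speed, wheelbase, steering levels, cost weights) are assumptions for illustration only.

```python
# Minimal receding-horizon (shooting-style) sketch of an evasive lane change,
# illustrating the general idea of predicting the vehicle response over a
# horizon and selecting a control sequence. Illustrative only; a real NMPC
# solver would use gradient-based trajectory optimization, not enumeration.
import itertools
import math

DT = 0.05          # integration step [s]
SPEED = 30.0       # constant forward speed [m/s] (assumed)
WHEELBASE = 2.7    # kinematic bicycle wheelbase [m] (assumed)
LANE_WIDTH = 3.7   # target lateral offset for the lane change [m]

def simulate(state, steer_seq):
    """Forward-simulate the kinematic bicycle under a steering sequence."""
    x, y, yaw = state
    for steer in steer_seq:
        x += SPEED * math.cos(yaw) * DT
        y += SPEED * math.sin(yaw) * DT
        yaw += SPEED / WHEELBASE * math.tan(steer) * DT
    return x, y, yaw

def cost(state, steer_seq):
    """Penalize terminal distance from the target lane and heading error."""
    _, y, yaw = simulate(state, steer_seq)
    return (y - LANE_WIDTH) ** 2 + 10.0 * yaw ** 2

def best_first_steer(state):
    """Enumerate coarse steering sequences (three levels held over four
    5-step sub-intervals, a 1 s horizon) and return the first input of the
    cheapest one -- a crude stand-in for trajectory optimization."""
    levels = (-0.05, 0.0, 0.05)
    candidates = itertools.product(levels, repeat=4)
    seqs = ([s for s in cand for _ in range(5)] for cand in candidates)
    return min(seqs, key=lambda seq: cost(state, seq))[0]

state = (0.0, 0.0, 0.0)   # x [m], y [m], yaw [rad]
u0 = best_first_steer(state)   # steer toward the adjacent lane (positive y)
```

In a true receding-horizon loop, only `u0` is applied before re-solving from the newly measured state at the next control instant; the dissertation's contribution concerns making that re-solve fast enough to run at tens of Hertz.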

    Bimodal Audiovisual Perception in Interactive Application Systems of Moderate Complexity

    This dissertation deals with aspects of the quality perception of interactive audiovisual application systems of moderate complexity, as defined, for example, in the MPEG-4 standard. Because the available computing power in these systems is limited, it is crucial to know which factors influence the perceived quality. Only then can the available computing power be distributed in the most effective and efficient way for the simulation and display of audiovisual 3D scenes. Whereas quality factors for unimodal auditory and visual stimuli are well known, and respective models of perception have been successfully devised based on this knowledge, this is not true for bimodal audiovisual perception. For the latter, it is only known that some kind of interdependency between auditory and visual perception exists; the exact mechanisms of human audiovisual perception have not been described. It is assumed that interaction with an application or scene has a major influence on the perceived overall quality. The goal of this work was to devise a system capable of performing subjective audiovisual assessments in the given context in a largely automated way, and by applying the system to collect first evidence regarding audiovisual interdependency and the influence of interaction on perception. This work was therefore composed of three fields of activity: the creation of a test bench based on the available but (regarding its audio functionality) somewhat restricted MPEG-4 player; the development of methods and framework requirements that ensure comparability and reproducibility of audiovisual assessments and their results; and the performance of a series of coordinated experiments, including the analysis and interpretation of the collected data. An object-based modular audio rendering engine was co-designed and co-implemented that allows simple room-acoustic simulations based on the MPEG-4 scene description paradigm to be performed in real time.
Apart from the MPEG-4 player, the test bench consists of a haptic input device used by test subjects to enter their quality ratings, and a logging tool that records all relevant events during an assessment session. The collected data can be conveniently exported for further analysis using appropriate statistical tools. A thorough analysis of the well-established test methods and recommendations for unimodal subjective assessments was performed to find out whether a transfer to the bimodal audiovisual case is easily possible. It became evident that, due to the limited knowledge about the underlying perceptual processes, a novel categorization of experiments according to their goals could help organize research in the field. Furthermore, a number of influencing factors were identified that govern bimodal perception in the given context. Performing the perceptual experiments with the devised system verified its functionality and ease of use. Beyond that, some first indications of the role of interaction in perceived overall quality were collected: interaction in the auditory modality reduces a human's ability to correctly rate the audio quality, whereas visually based (cross-modal) interaction does not necessarily produce this effect.
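The session-logging and export workflow described above can be sketched as follows. This is a minimal illustration assuming a CSV export format; the class name and the event fields (timestamp, subject, event, value) are hypothetical, not the schema of the actual logging tool.

```python
# Minimal sketch of an assessment-session event logger with CSV export, in
# the spirit of the logging tool described above. Field names are assumed.
import csv
import io
import time

class SessionLogger:
    def __init__(self, subject_id):
        self.subject_id = subject_id
        self.events = []

    def log(self, event, value=None, timestamp=None):
        """Record one event, e.g. a quality rating or a scene change."""
        if timestamp is None:
            timestamp = time.time()
        self.events.append((timestamp, self.subject_id, event, value))

    def export_csv(self):
        """Serialize all logged events as CSV for later statistical analysis."""
        buf = io.StringIO()
        writer = csv.writer(buf)
        writer.writerow(["timestamp", "subject", "event", "value"])
        writer.writerows(self.events)
        return buf.getvalue()

logger = SessionLogger("S01")
logger.log("scene_start", "lobby", timestamp=0.0)
logger.log("quality_rating", 4, timestamp=2.5)
csv_text = logger.export_csv()
```

The exported text can be loaded directly by common statistics packages, which is the point of a flat, timestamped event log: every rating remains joinable to the scene state that was active when it was given.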

    To Affinity and Beyond: Interactive Digital Humans as a Human Computer Interface

    The field of human-computer interaction is increasingly exploring the use of more natural, human-like user interfaces to build intelligent agents that aid in everyday life. This is coupled with a move toward people using ever more realistic avatars to represent themselves in their digital lives. As the ability to produce emotionally engaging digital human representations is only now becoming technically possible, there is little research into how to approach such tasks, owing to both technical complexity and operational implementation cost. This is now changing, as we are at a nexus point, with new approaches, faster graphics processing, and enabling new technologies in machine learning and computer vision becoming available. I articulate what is required for such digital humans to be considered successfully located on the far side of the phenomenon known as the Uncanny Valley. My results show that a complex mix of perceived and contextual aspects affects sense-making about digital humans, and they highlight previously undocumented effects of interactivity on affinity. Users are willing to accept digital humans as a new form of user interface, and they react to them emotionally in previously unanticipated ways. My research shows that it is possible to build an effective interactive digital human that crosses the Uncanny Valley. I directly explore what is required to build a visually realistic digital human as a primary research question, and I explore whether such a realistic face provides sufficient benefit to justify the challenges involved in building it. I conducted a Delphi study to inform the research approaches and then produced a complex digital human character based on these insights. This interactive and realistic digital human avatar represents a major technical undertaking involving multiple teams around the world. Finally, I explored a framework for examining the ethical implications and signposted future research areas.

    Proceedings of the 2018 Canadian Society for Mechanical Engineering (CSME) International Congress

    Published proceedings of the 2018 Canadian Society for Mechanical Engineering (CSME) International Congress, hosted by York University, 27-30 May 2018.

    Cognitive Foundations for Visual Analytics
