
    The Neural Development of Visuohaptic Object Processing

    Thesis (Ph.D.) - Indiana University, Cognitive Science, 2015. Object recognition is ubiquitous and essential for interacting with, as well as learning about, the surrounding multisensory environment. The inputs from multiple sensory modalities converge quickly and efficiently to guide this interaction. Vision and haptics are two modalities in particular that offer redundant and complementary information regarding the geometrical (i.e., shape) properties of objects for recognition and perception. While the systems supporting visuohaptic object recognition in the brain, including the lateral occipital complex (LOC) and the intraparietal sulcus (IPS), are well studied in adults, there is currently a paucity of research on the neural development of visuohaptic processing in children. Little is known about how and when vision converges with haptics for object recognition. In this dissertation, I investigate the development of the neural mechanisms involved in multisensory processing. Using functional magnetic resonance imaging (fMRI) and generalized psychophysiological interaction (gPPI) methods of functional connectivity analysis in children (4 to 5.5 years, 7 to 8.5 years) and adults, I examine the developmental changes of the brain regions underlying the convergence of visual and haptic object perception, the neural substrates supporting crossmodal processing, and the interactions and functional connections between visuohaptic systems and other neural regions. Results suggest that the complexity of sensory inputs impacts the development of neural substrates. The more complicated forms of multisensory and crossmodal object processing show protracted developmental trajectories compared to the processing of simple, unimodal shapes. Additionally, the functional connections between visuohaptic areas weaken over time, which may facilitate the fine-tuning of other perceptual systems that occurs later in development. Overall, the findings indicate that multisensory object recognition cannot be described as a unitary process. Rather, it comprises several distinct sub-processes that follow different developmental timelines throughout childhood and into adulthood.
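    The gPPI analysis named above is, at its core, a regression with condition-specific seed interaction terms. The sketch below illustrates that idea on toy data; it is not the dissertation's pipeline, and it omits the HRF convolution and deconvolution a real gPPI analysis would include. All names (seed, visual, haptic) are illustrative.

```python
# A toy sketch of the gPPI design: task regressors, a seed timecourse, and one
# seed-by-condition interaction term per condition. Real analyses convolve with
# an HRF and deconvolve the seed to the neural level first; data here are toy.
import numpy as np

rng = np.random.default_rng(0)
n_vols = 240
t = np.arange(n_vols)

# Psychological regressors: boxcars for two conditions, with rest in between.
visual = ((t % 60) < 20).astype(float)
haptic = (((t % 60) >= 30) & ((t % 60) < 50)).astype(float)

seed = rng.standard_normal(n_vols)              # physiological: seed (e.g., LOC) signal
target = 0.8 * seed * haptic + 0.2 * rng.standard_normal(n_vols)  # toy target (e.g., IPS)

# gPPI design matrix: each condition gets its own interaction column.
X = np.column_stack([
    np.ones(n_vols),   # intercept
    visual, haptic,    # task (psychological) effects
    seed,              # baseline coupling with the seed
    seed * visual,     # coupling specific to the visual condition
    seed * haptic,     # coupling specific to the haptic condition
])
betas, *_ = np.linalg.lstsq(X, target, rcond=None)
print("condition-specific coupling (visual, haptic):", betas[4], betas[5])
```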

    A vision-based approach for human hand tracking and gesture recognition.

    Hand gesture interfaces have become an active topic in human-computer interaction (HCI). The use of hand gestures in a human-computer interface enables human operators to interact with computer environments in a natural and intuitive manner. In particular, bare-hand interpretation techniques free users from the cumbersome but typically required devices for communicating with computers, offering ease and naturalness in HCI. Meanwhile, virtual assembly (VA) applies virtual reality (VR) techniques to mechanical assembly. It constructs computer tools that help product engineers plan, evaluate, optimize, and verify the assembly of mechanical systems without the need for physical objects. However, traditional devices such as keyboards and mice are no longer adequate due to their inefficiency in handling three-dimensional (3D) tasks, so special VR devices, such as data gloves, have been mandatory in VA. This thesis proposes a novel gesture-based interface for the application of VA. It develops a hybrid approach that combines an appearance-based hand localization technique with a skin-tone filter to support gesture recognition and hand tracking in 3D space. With this interface, bare hands become a convenient substitute for special VR devices. Experimental results demonstrate the flexibility and robustness that the proposed method brings to HCI. Dept. of Computer Science. Paper copy at Leddy Library: Theses & Major Papers - Basement, West Bldg. / Call Number: Thesis2004 .L8. Source: Masters Abstracts International, Volume: 43-03, page: 0883. Adviser: Xiaobu Yuan. Thesis (M.Sc.)--University of Windsor (Canada), 2004.
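    As a rough illustration of the kind of skin-tone filtering the thesis combines with appearance-based localization, the following OpenCV sketch segments skin-colored pixels in YCrCb space and returns the largest blob as a hand candidate. The threshold values are common textbook defaults, not the thesis's tuned parameters.

```python
# A minimal sketch of skin-tone filtering for hand localization with OpenCV.
import cv2
import numpy as np

def locate_hand(frame_bgr):
    """Return a bounding box (x, y, w, h) for the largest skin-colored blob."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    # Classic skin range in the Cr/Cb chrominance plane (textbook values).
    mask = cv2.inRange(ycrcb, np.array([0, 133, 77]), np.array([255, 173, 127]))
    # Morphological opening/closing removes speckle and fills small holes.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)   # assume the hand is the largest blob
    return cv2.boundingRect(hand)

# Example usage: localize the hand in one webcam frame.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    print(locate_hand(frame))
cap.release()
```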

    United States Department of Energy Integrated Manufacturing & Processing Predoctoral Fellowships. Final Report


    Sculpting Unrealities: Using Machine Learning to Control Audiovisual Compositions in Virtual Reality

    This thesis explores the use of interactive machine learning (IML) techniques to control audiovisual compositions within the emerging medium of virtual reality (VR). Accompanying the text is a portfolio of original compositions and open-source software. These research outputs represent the practical elements of the project that help to shed light on the core research question: how can IML techniques be used to control audiovisual compositions in VR? To find answers to this question, it was broken down into its constituent elements. To situate the research, an exploration of the contemporary field of audiovisual art locates the practice between the areas of visual music and generative AV. This exploration of the field results in a new method of categorising the constituent practices. The practice of audiovisual composition is then explored, focusing on the concept of equality. It is found that, throughout the literature, audiovisual artists aim to treat audio and visual material equally. This is interpreted as a desire for balance between the audio and visual material. This concept is then examined in the context of VR. A feeling of presence is found to be central to this new medium and is identified as an important consideration for the audiovisual composer, in addition to the senses of sight and sound. Several new terms are formulated which provide the means by which the compositions within the portfolio are analysed. A control system based on IML techniques, called the Neural AV Mapper, is developed. It is used to develop a compositional methodology through the creation of several studies. The outcomes from these studies are incorporated into two live performance pieces, Ventriloquy I and Ventriloquy II. These pieces showcase the use of IML techniques to control audiovisual compositions in a live performance context. The lessons learned from these pieces are incorporated into the development of the ImmersAV toolkit. This open-source software toolkit was built specifically to allow for the exploration of the IML control paradigm within VR. The toolkit provides the means by which the immersive audiovisual compositions Obj_#3 and Ag Fás Ar Ais Arís are created. Obj_#3 takes the form of an immersive audiovisual sculpture that can be manipulated in real time by the user. The title of the thesis references the physical act of sculpting audiovisual material. It also refers to the ability of VR to create alternate realities that are not bound by the physics of real life. This exploration of unrealities emerges as an important aspect of the medium. The final piece in the portfolio, Ag Fás Ar Ais Arís, takes the knowledge gained from the earlier work and pushes the boundaries to maximise the potential of the medium and the material.
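    The IML control paradigm described here (as popularized by tools such as Wekinator) follows a record-train-run loop: the artist demonstrates input-to-parameter pairs, a model learns the mapping, and the model then drives the composition live. The sketch below illustrates that loop with scikit-learn; it is not the Neural AV Mapper itself, and the dimensions, names, and data are illustrative.

```python
# A minimal sketch of the IML record-train-run loop, assuming a Wekinator-style
# regression mapping from controller input to audiovisual parameters.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)

# Record: the artist pairs controller poses (e.g., 6-D hand/head data from a
# VR rig) with the audiovisual parameters they chose for those poses.
poses = rng.uniform(0.0, 1.0, size=(50, 6))        # recorded inputs
av_params = rng.uniform(0.0, 1.0, size=(50, 4))    # matching parameter settings

# Train: a small neural network learns the pose-to-parameter mapping.
mapper = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
mapper.fit(poses, av_params)

# Run: live controller input is mapped to parameters every frame; the outputs
# would then drive the audio engine and renderer (e.g., over OSC).
live_pose = rng.uniform(0.0, 1.0, size=(1, 6))
print(mapper.predict(live_pose))                   # four audiovisual parameters
```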

    A Biosymtic (Biosymbiotic Robotic) Approach to Human Development and Evolution. The Echo of the Universe.

    In the present work we demonstrate that the current Child-Computer Interaction paradigm is not potentiating human development to its fullest: it is associated with several physical and mental health problems and appears not to maximize children’s cognitive performance and cognitive development. To potentiate children’s physical and mental health (including cognitive performance and cognitive development), we have developed a new approach to human development and evolution. This approach proposes a particular synergy between the developing human body, computing machines, and natural environments. It emphasizes that children should be encouraged to interact with challenging physical environments that offer multiple possibilities for sensory stimulation and increase physical and mental stress on the organism. To operationalize our approach, we created and tested a new class of computing devices: Biosymtic (Biosymbiotic Robotic) devices, “Albert” and “Cratus”. Two initial studies suggest that the main goal of our approach is being achieved. Interaction with the Biosymtic device “Albert” in a natural environment triggered a different neurophysiological response (increases in sustained attention levels) and tended to optimize episodic memory performance in children, compared to interaction with a sedentary screen-based computing device in an artificially controlled indoor environment, making it a promising solution for promoting cognitive performance and development. Interaction with the Biosymtic device “Cratus” in a natural environment instilled vigorous physical activity levels in children, making it a promising solution for promoting physical and mental health.

    Productive Vision: Methods for Automatic Image Comprehension

    Image comprehension is the ability to summarize, translate, and answer basic questions about images. Using original techniques for scene object parsing, material labeling, and activity recognition, a system can gather information about the objects and actions in a scene. When this information is integrated into a deep knowledge base capable of inference, the system becomes capable of performing tasks that, when performed by students, are considered by educators to demonstrate comprehension. The vision components of the system consist of the following: object scene parsing by means of visual filters, material scene parsing by superpixel segmentation and kernel descriptors, and activity recognition by action grammars. These techniques are characterized and compared with the state of the art in their respective fields. The output of the vision components is a list of assertions in a Cyc microtheory. By reasoning on these assertions and the rest of the Cyc knowledge base, the system is able to perform a variety of tasks, including the following: recognize that essential parts of objects are likely present in the scene despite not having an explicit detector for them; recognize the likely presence of objects due to the presence of their essential parts; improve estimates of both object and material labels by incorporating knowledge about typical pairings; label ambiguous objects with a more general label that encompasses both possible labelings; answer questions about the scene that require inference and give justifications for the answers in natural language; create a visual representation of the scene in a new medium; and recognize scene similarity even when there is little visual similarity.
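    As an illustration of the superpixel stage of material scene parsing, the sketch below segments an image with SLIC and computes one descriptor per superpixel. The dissertation uses kernel descriptors and a trained material classifier; here a mean-color feature stands in as a placeholder.

```python
# A minimal sketch of superpixel segmentation for material parsing: SLIC
# regions, then one feature vector per region.
import numpy as np
from skimage.data import astronaut
from skimage.segmentation import slic

image = astronaut() / 255.0                       # sample RGB image in [0, 1]
labels = slic(image, n_segments=200, compactness=10, start_label=0)

# One descriptor per superpixel (here: mean RGB within the region; a real
# system would compute kernel descriptors instead).
features = np.array([
    image[labels == seg_id].mean(axis=0)
    for seg_id in np.unique(labels)
])
print(features.shape)   # (number of superpixels, 3)

# Each row would be fed to a material classifier; the winning label per
# superpixel then becomes an assertion (e.g., "region 17 is wood") available
# to the knowledge base for inference.
```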

    Haptics: Science, Technology, Applications

    This open access book constitutes the proceedings of the 13th International Conference on Human Haptic Sensing and Touch Enabled Computer Applications, EuroHaptics 2022, held in Hamburg, Germany, in May 2022. The 36 regular papers included in this book were carefully reviewed and selected from 129 submissions. They were organized in topical sections as follows: haptic science; haptic technology; and haptic applications.

    Affective Computing

    This book provides an overview of state-of-the-art research in Affective Computing. It presents new ideas, original results, and practical experiences in this increasingly important research field. The book consists of 23 chapters categorized into four sections. Since one of the most important means of human communication is facial expression, the first section of this book (Chapters 1 to 7) presents research on the synthesis and recognition of facial expressions. Given that we use not only the face but also body movements to express ourselves, the second section (Chapters 8 to 11) presents research on the perception and generation of emotional expressions using full-body motion. The third section of the book (Chapters 12 to 16) presents computational models of emotion, as well as findings from neuroscience research. The last section of the book (Chapters 17 to 22) presents applications related to affective computing.

    Musical Haptics

    Haptic Musical Instruments; Haptic Psychophysics; Interface Design and Evaluation; User Experience; Musical Performance

    A perceptually motivated approach to timbre representation and visualisation.

    Musical timbre is a complex phenomenon and is often understood in relation to the separation and comparison of different sound categories. The representation of musical timbre has traditionally consisted of instrumentation category (e.g. violin, piano) and articulation technique (e.g. pizzicato, staccato). Electroacoustic music places more emphasis on timbre variation as musical structure, and has highlighted the need for better, more in-depth forms of representation of musical timbre. Similarly, research from experimental psychology and audio signal analysis has deepened our understanding of the perception, description, and measurement of musical timbre, suggesting the possibility of more exact forms of representation that directly reference low-level descriptors of the audio signal (rather than high-level categories of sound or instrumentation). Research into the perception of timbre has shown that ratings of similarity between sounds can be used to arrange sounds in an N-dimensional perceptual timbre space, where each dimension relates to a particular axis of differentiation between sounds. Similarly, research into the description of timbre has shown that verbal descriptors can often be clustered into a number of categories, resulting in an N-dimensional semantic timbre space. Importantly, these semantic descriptors are often physical, material, and textural in nature. Audio signal processing techniques can be used to extract numeric descriptors of the spectral and dynamic content of an audio signal. Research has suggested correlations between these audio descriptors and different semantic descriptors and perceptual dimensions in perceptual timbre spaces. This thesis aims to develop a perceptually motivated approach to timbre representation by making use of correlations between semantic and acoustic descriptors of timbre. User studies are discussed that explored participant preferences for different visual mappings of acoustic timbre features. The results of these studies, together with results from existing research, have been used in the design and development of novel systems for timbre representation. These systems were developed both in the context of digital interfaces for sound design and music production, and in the context of real-time performance and generative audio-reactive visualisation. A generalised approach to perceptual timbre representation is presented and discussed with reference to the experimentation and resulting systems. The use of semantic visual mappings for low-level audio descriptors in the representation of timbre suggests that timbre would be better defined with reference to individual audio features and their variation over time. The experimental user studies and research-led development have highlighted specific techniques and audio-visual mappings that would be very useful to practitioners and researchers in the area of audio analysis and representation.
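    As a small illustration of mapping low-level audio descriptors to visual parameters, the sketch below extracts frame-wise spectral centroid and RMS energy with librosa and maps them to a color and a size. The specific mappings (centroid to brightness, RMS to radius) are assumptions for illustration, not the mappings selected in the thesis's user studies.

```python
# A minimal sketch of audio-descriptor-to-visual mapping: spectral centroid
# drives color brightness, RMS drives shape size. Data is a synthesized sweep
# so the sketch runs without an audio file.
import colorsys
import numpy as np
import librosa

sr = 22050
t = np.linspace(0, 2.0, int(sr * 2.0), endpoint=False)
y = np.sin(2 * np.pi * (220 + 660 * t) * t) * np.linspace(0.1, 1.0, t.size)

# Frame-wise descriptors of spectral and dynamic content.
centroid = librosa.feature.spectral_centroid(y=y, sr=sr)[0]   # spectral "brightness"
rms = librosa.feature.rms(y=y)[0]                             # loudness envelope

def to_unit(x):
    """Normalize a descriptor track to [0, 1] for visual mapping."""
    return (x - x.min()) / (x.max() - x.min() + 1e-9)

bright, size = to_unit(centroid), to_unit(rms)
for frame in range(0, bright.size, 20):
    # Brighter spectra -> lighter color; louder frames -> larger shape.
    r, g, b = colorsys.hsv_to_rgb(0.6, 0.8, bright[frame])
    print(f"frame {frame:3d}: rgb=({r:.2f},{g:.2f},{b:.2f}) radius={size[frame]:.2f}")
```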