20 research outputs found

    HairBrush for Immersive Data-Driven Hair Modeling

    Get PDF
    While hair is an essential component of virtual humans, it is also one of the most challenging digital assets to create. Existing automatic techniques lack the generality and flexibility to create rich hair variations, while manual authoring interfaces often require considerable artistic skill and effort, especially for intricate 3D hair structures that can be difficult to navigate. We propose an interactive hair modeling system that can help create complex hairstyles in minutes or hours that would otherwise take much longer with existing tools. Modelers, including novice users, can focus on the overall hairstyle and local hair deformations, as our system intelligently suggests the desired hair parts. Our method combines the flexibility of manual authoring with the convenience of data-driven automation. Since hair contains intricate 3D structures such as buns, knots, and strands, it is inherently challenging to create using traditional 2D interfaces. Our system provides a new 3D hair authoring interface for immersive interaction in virtual reality (VR). Users draw high-level guide strips, from which our system predicts the most plausible hairstyles via a deep neural network trained on a professionally curated dataset. Each hairstyle in our dataset is composed of multiple variations, serving as blend-shapes to fit the user drawings via global blending and local deformation. The fitted hair models are visualized as interactive suggestions that the user can select, modify, or ignore. We conducted a user study confirming that our system significantly reduces manual labor while improving output quality for modeling a variety of head and facial hairstyles that are challenging to create via existing techniques.
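
    As a rough illustration of the global blending step, the following minimal sketch fits blend-shape weights to a flattened guide-strip vector with a ridge-regularized least-squares solve. The array layout, the fit_blend_weights helper, and the regularization term are assumptions for illustration; this stands in for, and does not reproduce, the paper's deep-network suggestion pipeline.

        import numpy as np

        def fit_blend_weights(variations, guide_strip, reg=1e-3):
            """Fit blend-shape weights to a user guide strip.

            variations : (k, n) array, k variations of one hairstyle, each
                         flattened to n coordinates (assumed layout).
            guide_strip: (n,) array, the user drawing resampled to match.
            """
            B = variations.T                        # (n, k) blend-shape basis
            A = B.T @ B + reg * np.eye(B.shape[1])  # ridge-regularized normal eqs.
            w = np.linalg.solve(A, B.T @ guide_strip)
            return w / w.sum()                      # normalize so weights blend to one

        # Global blending: the fitted model is the weighted sum of variations.
        # blended = fit_blend_weights(V, s) @ V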

    Intuitive, Interactive Beard and Hair Synthesis with Generative Models

    Full text link
    We present an interactive approach to synthesizing realistic variations in facial hair in images, ranging from subtle edits to existing hair to the addition of complex and challenging hair in images of clean-shaven subjects. To circumvent the tedious and computationally expensive tasks of modeling, rendering, and compositing the 3D geometry of the target hairstyle using the traditional graphics pipeline, we employ a neural network pipeline that synthesizes realistic and detailed images of facial hair directly in the target image in under one second. The synthesis is controlled by simple and sparse guide strokes from the user defining the general structural and color properties of the target hairstyle. We qualitatively and quantitatively evaluate our chosen method against several alternative approaches. We show compelling interactive editing results with a prototype user interface that allows novice users to progressively refine the generated image to match their desired hairstyle, and demonstrate that our approach also allows for flexible and high-fidelity scalp hair synthesis.

    Comment: To be presented at the 2020 Conference on Computer Vision and Pattern Recognition (CVPR 2020, oral presentation). Supplementary video: https://www.youtube.com/watch?v=v4qOtBATrv
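
    A minimal sketch, in PyTorch, of the kind of stroke-conditioned generator described: sparse guide strokes (a structure mask plus a color map) are stacked with the target image and mapped to an output image in one forward pass. The StrokeConditionedGenerator class, channel counts, and tiny convolutional stack are illustrative assumptions, not the authors' architecture.

        import torch
        import torch.nn as nn

        class StrokeConditionedGenerator(nn.Module):
            def __init__(self, in_ch=7, feat=64):      # 3 image + 1 mask + 3 color
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(feat, 3, 3, padding=1), nn.Tanh(),
                )

            def forward(self, image, stroke_mask, stroke_color):
                # Concatenate the target image and the user's sparse guides
                # along the channel axis, then synthesize in a single pass.
                cond = torch.cat([image, stroke_mask, stroke_color], dim=1)
                return self.net(cond)

        gen = StrokeConditionedGenerator()
        out = gen(torch.rand(1, 3, 128, 128), torch.rand(1, 1, 128, 128),
                  torch.rand(1, 3, 128, 128))   # (1, 3, 128, 128) synthesized image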

    Using visual feedback to guide movement: Properties of adaptation in changing environments and Parkinson's disease

    Get PDF
    On a day-to-day basis we use visual information to guide the execution of our movements with great ease. The use of vision allows us to guide and modify our movements by appropriately transforming external sensory information into proper motor commands. Current literature characterizes the process of visuomotor adaptation, but fails to consider the incremental response to sensed errors that comprises a fully adaptive process. We aimed to understand the properties of the trial-by-trial transformation of sensed visual error into subsequent motor adaptation. In this thesis we further aimed to understand how visuomotor learning changes as a function of the experienced environment and how it is impacted by Parkinson's disease. Recent experiments in force learning have shown that adaptive strategies can be flexibly and readily modified according to the demands of the environment a person experiences. In Chapter 2, we investigated the properties of visual feedback strategies in response to environments that changed daily. We introduced visual environments that could change as a function of the likelihood of experiencing a visual perturbation, or the direction of the visual perturbation bias across the workspace. By testing subjects in environments with changing statistics across several days, we were able to observe changes in visuomotor sensitivity across environments. We found that subjects experiencing changes in visual likelihood adopted strategies very similar to those seen in force field learning. However, unlike in haptic learning, we discovered that when subjects experienced different environmental biases, adaptive sensitivity could be affected both within a single training day and across training days. In Chapter 3, we investigated the properties of visuomotor adaptation in patients with Parkinson's disease. Previous experiments have suggested that patients with Parkinson's disease have impoverished visuomotor learning when compared to healthy age-matched controls. We tested two aspects of visuomotor adaptation to determine the contribution of visual feedback in Parkinson's disease: visual extent (thought to be mediated by the basal ganglia) and visual direction (thought to be cortically mediated). We found that patients with Parkinson's disease fully adapted to changes in visual direction and showed more complete adaptation compared to control subjects, but adaptation in Parkinson's disease patients was impaired during changes of visual extent. Our results confirm the idea that basal ganglia deficits can alter aspects of visuomotor adaptation. However, we have shown that part of this adaptive process remains intact, in accordance with hypotheses that state that visuomotor control of direction and extent are separable processes.
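
    The trial-by-trial transformation studied here is commonly formalized with a linear state-space model of adaptation. The sketch below uses that standard form with made-up retention and sensitivity values, purely to make the error-to-update mapping concrete; it is not fitted to the thesis data.

        import numpy as np

        def simulate_adaptation(perturbations, retention=0.95, sensitivity=0.2):
            """Standard trial-by-trial state-space model (illustrative values).

            x[t+1] = retention * x[t] + sensitivity * e[t],  e[t] = p[t] - x[t]
            where x is the internal compensation and p the visual perturbation.
            'sensitivity' is the error-to-update gain this kind of study probes.
            """
            x = np.zeros(len(perturbations) + 1)
            for t, p in enumerate(perturbations):
                error = p - x[t]                    # sensed visual error on trial t
                x[t + 1] = retention * x[t] + sensitivity * error
            return x

        # A 30-degree visual rotation for 100 trials, then washout:
        trace = simulate_adaptation(np.r_[np.full(100, 30.0), np.zeros(50)])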

    Autocomplete element fields and interactive synthesis system development for aggregate applications.

    Get PDF
    Aggregate elements are ubiquitous in natural and man-made objects and play an important role in graphics, design, and visualization applications. However, efficiently arranging aggregate elements with varying anisotropy and deformability remains challenging, particularly in 3D environments. To address this, we introduce autocomplete element fields: an element distribution formulation that can effectively handle diverse output compositions with controllable element distributions at high quality and efficiency, together with an element field formulation that smoothly orients all synthesized elements to follow given inputs, such as scalar or direction fields. The proposed formulations can not only synthesize distinct types of aggregate elements across various domain spaces without any extra process, but also directly compute complete element fields from partial specifications, without requiring fully specified inputs at any algorithmic step. Furthermore, to reduce input workload and enhance output quality for better usability and interactivity, we develop an interactive synthesis system, centered on the idea of our autocomplete element fields, to facilitate the creation of element aggregations within different output domains. Analogous to conventional painting workflows, through a palette-based brushing interface, users can interactively mix and place a few aggregate elements over a brushing canvas and let our system automatically populate more aggregate elements with the intended orientations and scales for the rest of the outcome. The system empowers users to iteratively design a variety of novel mixtures with reduced workload and enhanced quality through an intuitive, user-friendly brushing workflow, without requiring a great deal of manual labor or technical expertise. We validate our prototype system with a pilot user study and exhibit its application in 2D graphic design, 3D surface collage, and 3D aggregate modeling.
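
    One way to picture "complete element fields from partial specifications" is a harmonic fill that diffuses sparse user-given directions over the whole domain. The sketch below does this on a 2D grid with a Jacobi iteration; the grid layout and the complete_direction_field helper are illustrative assumptions that stand in for, rather than reproduce, the paper's formulation.

        import numpy as np

        def complete_direction_field(field, known_mask, iters=500):
            """Diffuse sparse user-specified directions over a 2D grid so every
            cell receives a smooth orientation (a generic harmonic fill).

            field     : (H, W, 2) array, unit directions where known_mask is True.
            known_mask: (H, W) boolean array of user-constrained cells.
            """
            f = field.copy()
            for _ in range(iters):
                # Average the four neighbours (one Jacobi step on the Laplacian).
                avg = (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                       np.roll(f, 1, 1) + np.roll(f, -1, 1)) / 4.0
                f = np.where(known_mask[..., None], field, avg)  # keep constraints
            norm = np.linalg.norm(f, axis=-1, keepdims=True)
            return f / np.maximum(norm, 1e-8)   # renormalize to unit directions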

    Near-Infrared Spectroscopy for Brain Computer Interfacing

    Get PDF
    A brain-computer interface (BCI) gives those suffering from neuromuscular impairments a means to interact and communicate with their surrounding environment. A BCI translates physiological signals, typically electrical, detected from the brain to control an output device. A significant problem with current BCIs is the lengthy training period involved for proficient usage, which can often lead to frustration and anxiety on the part of the user and may even lead to abandonment of the device. A more suitable and usable interface is needed to measure cognitive function more directly. In order to do this, new measurement modalities, signal acquisition and processing, and translation algorithms need to be addressed. This work implements a novel approach to BCI design, using non-invasive near-infrared spectroscopic (NIRS) techniques to develop a user-friendly optical BCI. NIRS is a practical non-invasive optical technique that can detect characteristic haemodynamic responses relating to neural activity. This thesis describes the use of NIRS to develop an accessible BCI system requiring very little user training. In harnessing the optical signal for BCI control, an assessment of NIRS signal characteristics is carried out and detectable physiological effects are identified for BCI development. The investigations into various mental tasks for controlling the BCI show that motor imagery functions can be detected using NIRS. The optical BCI (OBCI) system operates in real time, characterising the occurrence of motor imagery functions and allowing users to control a switch - a “Mindswitch”. This work demonstrates the great potential of optical imaging methods for BCI development and brings to light an innovative approach to this field of research.
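
    As a toy illustration of how a haemodynamic response might drive a binary "Mindswitch", the sketch below z-scores a window of oxyhaemoglobin samples against a rest baseline and fires above a threshold. The single-channel setup, the window/baseline inputs, and the threshold value are assumptions for illustration, not the thesis's actual signal-processing chain.

        import numpy as np

        def mindswitch_state(hbo_window, baseline, threshold=2.0):
            """Toggle decision for a simple one-channel optical switch.

            hbo_window: recent oxyhaemoglobin samples over motor cortex.
            baseline  : samples recorded during a rest period.
            Returns True when the response during motor imagery rises more
            than `threshold` standard deviations above the rest baseline.
            """
            z = (np.mean(hbo_window) - np.mean(baseline)) / (np.std(baseline) + 1e-9)
            return z > threshold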

    Video Conferencing: Infrastructures, Practices, Aesthetics

    Get PDF
    The COVID-19 pandemic has reorganized existing methods of exchange, turning comparatively marginal technologies into the new normal. Multipoint videoconferencing in particular has become a favored means for web-based forms of remote communication and collaboration without physical copresence. Taking the recent mainstreaming of videoconferencing as its point of departure, this anthology examines the complex mediality of this new form of social interaction. Connecting theoretical reflection with material case studies, the contributors question the practices, politics, and aesthetics of videoconferencing and the specific meanings it acquires in different historical, cultural, and social contexts.

    Change blindness: eradication of gestalt strategies

    Get PDF
    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task where there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation [Landman et al., 2003, Vision Research 43, 149–164]. Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
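
    The spoke manipulation can be made concrete with a little geometry: each rectangle is moved radially toward or away from fixation by one degree of visual angle. The sketch below reconstructs that shift under an assumed coordinate layout (positions in degrees relative to fixation); the helper name and random sign choice are illustrative.

        import numpy as np

        def shift_along_spokes(positions, fixation, delta_deg=1.0, rng=None):
            """Move each rectangle along the imaginary spoke joining it to the
            central fixation point, by +/- delta_deg of visual angle.

            positions: (n, 2) array of rectangle centres in degrees.
            fixation : (2,) array, the central fixation point.
            """
            rng = np.random.default_rng() if rng is None else rng
            offsets = positions - fixation
            r = np.maximum(np.linalg.norm(offsets, axis=1, keepdims=True), 1e-9)
            signs = rng.choice([-1.0, 1.0], size=(len(positions), 1))
            # Rescale each offset so its length changes by exactly delta_deg.
            return fixation + offsets * (r + signs * delta_deg) / r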

    Mapping Brain Development and Decoding Brain Activity with Diffuse Optical Tomography

    Get PDF
    Functional neuroimaging has been used to map brain function as well as decode information from brain activity. However, applications like studying early brain development or enabling augmentative communication in patients with severe motor disabilities have been constrained by extant imaging modalities, which can be challenging to use in young children and entail major tradeoffs between logistics and image quality. Diffuse optical tomography (DOT) is an emerging method combining the logistical advantages of optical imaging with enhanced image quality. Here, we developed one of the world's largest DOT systems for high-performance optical brain imaging in children. From visual cortex activity in adults, we decoded the locations of checkerboard visual stimuli, e.g., localizing a 60-degree wedge rotating through 36 positions with an error of 25.8±24.7 degrees. Using animated movies as more child-friendly stimuli, we mapped reproducible responses to speech and faces with DOT in awake, typically developing 1- to 7-year-old children and adults. We then decoded, with accuracy significantly above chance, which movie a participant was watching or listening to from DOT data. This work lays a valuable foundation for ongoing research with wearable imaging systems and increasingly complex algorithms to map atypical brain development and decode covert semantic information in clinical populations.
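
    A localization error like 25.8±24.7 degrees is naturally scored as a circular angular difference between the decoded and true wedge positions. The sketch below shows one plausible way to compute it; the decoder itself is not reproduced, and the helper is an assumption for illustration.

        import numpy as np

        def circular_error_deg(predicted, actual):
            """Smallest angular difference, in degrees, between decoded and
            true stimulus positions on a circle (wrap-aware)."""
            diff = np.abs(np.asarray(predicted) - np.asarray(actual)) % 360.0
            return np.minimum(diff, 360.0 - diff)

        errs = circular_error_deg([10, 350], [20, 10])   # -> [10., 20.]
        print(errs.mean(), errs.std())                   # mean +/- sd error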

    Embodied-self-monitoring

    Get PDF