    Factors influencing visual attention switch in multi-display user interfaces: a survey

    Multi-display User Interfaces (MDUIs) enable people to take advantage of the distinct characteristics of different display categories. For example, combining mobile and large displays within the same system lets users interact with interface elements locally while having a large display space to show data. Although there is a large potential gain in performance and comfort, there is at least one main drawback that can override the benefits of MDUIs: the visual and physical separation between displays requires users to switch visual attention between displays. In this paper, we present a survey and analysis of existing data and classifications to identify factors that can affect visual attention switches in MDUIs. Our analysis and taxonomy draw attention to the often ignored implications of visual attention switches and collect existing evidence to facilitate research and implementation of effective MDUIs.

    Mirror Adaptation in Sensory-Motor Simultaneity

    Background: When one watches a sports game, one may feel her/his own muscles moving in synchrony with the player's. Such parallels between observed actions of others and one's own actions are well supported by recent progress in neuroscience and have been termed the “mirror system.” It is likely that, owing to such phenomena, we are able to learn motor skills just by observing an expert's performance. Yet it is unknown whether such indirect learning occurs only at higher cognitive levels, or also at basic sensorimotor levels where sensorimotor delay is compensated and the timing of sensory feedback is constantly calibrated. Methodology/Principal Findings: Here, we show that passive observation of an actor manipulating a computer mouse with delayed auditory feedback led to shifts in the observers' subjective simultaneity of their own mouse manipulation and the auditory stimulus. Likewise, self-adaptation to the delayed feedback modulated subjects' simultaneity judgments of another person's mouse manipulation and an auditory stimulus. Meanwhile, subjective simultaneity of a simple visual disc and the auditory stimulus (flash test) was affected neither by observation of an actor nor by self-adaptation. Conclusions/Significance: The lack of shift in the flash test for both conditions indicates that the recalibration transfer is specific to the action domain and is not due to general sensory adaptation. This points to the involvement of a system for the temporal monitoring of actions, one that processes both one's own actions and those of others.
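
    A hedged sketch of how such a shift in subjective simultaneity could be quantified: fit a simple tuning curve to the proportion of "simultaneous" responses across audio delays and compare the fitted centre (the point of subjective simultaneity, PSS) before and after adaptation. The delay values, response rates, and function names are illustrative assumptions, not the authors' analysis.

        # Hypothetical sketch: estimate the point of subjective simultaneity (PSS)
        # before and after adaptation by fitting a Gaussian to simultaneity judgments.
        import numpy as np
        from scipy.optimize import curve_fit

        def gaussian(delay_ms, pss, width, peak):
            """Proportion of 'simultaneous' responses as a function of audio delay (ms)."""
            return peak * np.exp(-((delay_ms - pss) ** 2) / (2.0 * width ** 2))

        delays = np.array([-300, -200, -100, 0, 100, 200, 300])          # audio delays (ms)
        p_before = np.array([0.05, 0.20, 0.60, 0.90, 0.65, 0.25, 0.05])  # made-up response rates
        p_after = np.array([0.05, 0.10, 0.40, 0.80, 0.85, 0.45, 0.10])   # made-up, shifted rates

        (pss_b, _, _), _ = curve_fit(gaussian, delays, p_before, p0=(0, 100, 1))
        (pss_a, _, _), _ = curve_fit(gaussian, delays, p_after, p0=(0, 100, 1))
        print(f"PSS shift after adaptation: {pss_a - pss_b:.1f} ms")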

    Characterization of spatio-temporal epidural event-related potentials for mouse models of psychiatric disorders.

    Distinctive features in sensory event-related potentials (ERPs) are endophenotypic biomarkers of psychiatric disorders, widely studied using electroencephalographic (EEG) methods in humans and model animals. Despite the popularity and unique significance of the mouse as a model species in basic research, existing EEG methods applicable to mice are far less powerful than those available for humans and large animals. We developed a new method for multi-channel epidural ERP characterization in behaving mice with high precision, reliability, and convenience, and report an application to time-domain ERP feature characterization of the Sp4 hypomorphic mouse model for schizophrenia. Compared to previous methods, our spatio-temporal ERP measurement robustly improved the resolving power of key signatures characteristic of the disease model. The high performance and low cost of this technique make it suitable for high-throughput behavioral and pharmacological studies.
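
    As a rough illustration of the time-domain ERP feature characterization mentioned above, the sketch below averages stimulus-locked epochs per channel and reads out a peak amplitude and latency in a post-stimulus window. The array shapes, 1 kHz sampling, and the 50-150 ms window are assumptions for illustration, not the authors' pipeline.

        # Illustrative sketch: average stimulus-locked epochs into an ERP per channel
        # and extract a simple time-domain feature (peak amplitude and latency).
        import numpy as np

        # epochs: (n_trials, n_channels, n_samples), assumed cut from -100 ms to +400 ms
        # around the stimulus at 1 kHz sampling (placeholder random data).
        epochs = np.random.randn(200, 8, 500)
        t = np.arange(-100, 400)                 # time axis in ms

        erp = epochs.mean(axis=0)                # (n_channels, n_samples) average ERP
        window = (t >= 50) & (t <= 150)          # assumed post-stimulus window of interest
        for ch in range(erp.shape[0]):
            i = np.argmax(np.abs(erp[ch, window]))
            print(f"channel {ch}: peak {erp[ch, window][i]:.2f} at {t[window][i]} ms")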

    Vision systems with the human in the loop

    The emerging cognitive vision paradigm deals with vision systems that apply machine learning and automatic reasoning in order to learn from what they perceive. Cognitive vision systems can rate the relevance and consistency of newly acquired knowledge and can adapt to their environment, and thus exhibit high robustness. This contribution presents vision systems that aim at flexibility and robustness. One is tailored for content-based image retrieval; the others are cognitive vision systems that constitute prototypes of visual active memories, which evaluate, gather, and integrate contextual knowledge for visual analysis. All three systems are designed to interact with human users. After discussing adaptive content-based image retrieval and object and action recognition in an office environment, we raise the issue of assessing cognitive systems. We report experiences from psychologically evaluated human-machine interactions and stress the promising potential of psychologically based usability experiments.

    ImageSpirit: Verbal Guided Image Parsing

    Humans describe images in terms of nouns and adjectives, while algorithms operate on images represented as sets of pixels. Bridging this gap between how humans would like to access images and their typical representation is the goal of image parsing, which involves assigning object and attribute labels to pixels. In this paper we propose treating nouns as object labels and adjectives as visual attribute labels. This allows us to formulate the image parsing problem as one of jointly estimating per-pixel object and attribute labels from a set of training images. We propose an efficient (interactive-time) solution. Using the extracted labels as handles, our system empowers a user to verbally refine the results. This enables hands-free parsing of an image into pixel-wise object/attribute labels that correspond to human semantics. Verbally selecting objects of interest enables a novel and natural interaction modality that could be used to interact with new-generation devices (e.g. smart phones, Google Glass, living room devices). We demonstrate our system on a large number of real-world images of varying complexity. To help understand the tradeoffs compared to traditional mouse-based interactions, results are reported for both a large-scale quantitative evaluation and a user study. Comment: http://mmcheng.net/imagespirit
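
    A minimal sketch of the joint per-pixel object/attribute labeling idea, using unary scores only (a full model such as the paper's would also include pairwise smoothing terms): pick the best object label per pixel, keep any attribute whose score clears a threshold, and let a verbal phrase select the matching pixels. The label names, score arrays, and threshold are hypothetical.

        # Hypothetical sketch of joint per-pixel object/attribute labeling and
        # verbal selection (unary scores only; no pairwise CRF terms).
        import numpy as np

        objects = ["bed", "cabinet", "wall"]            # noun labels (illustrative)
        attributes = ["wooden", "white", "textured"]    # adjective labels (illustrative)

        H, W = 4, 6
        obj_scores = np.random.rand(H, W, len(objects))      # per-pixel object scores
        attr_scores = np.random.rand(H, W, len(attributes))  # per-pixel attribute scores

        obj_map = obj_scores.argmax(axis=-1)       # exactly one object label per pixel
        attr_map = attr_scores > 0.5               # any number of attributes per pixel

        # Verbal refinement, e.g. "the wooden cabinet": select pixels whose current
        # labels agree with the spoken object+attribute pair.
        sel = (obj_map == objects.index("cabinet")) & attr_map[..., attributes.index("wooden")]
        print("pixels matching 'wooden cabinet':", int(sel.sum()))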

    In vivo investigation of hyperpolarized [1,3-13C2]acetoacetate as a metabolic probe in normal brain and in glioma.

    Dysregulation in NAD+/NADH levels is associated with increased cell division and elevated levels of reactive oxygen species in rapidly proliferating cancer cells. Conversion of the ketone body acetoacetate (AcAc) to β-hydroxybutyrate (β-HB) by the mitochondrial enzyme β-hydroxybutyrate dehydrogenase (BDH) depends upon NADH availability. The β-HB-to-AcAc ratio is therefore expected to reflect mitochondrial redox. Previous studies reported the potential of hyperpolarized 13C-AcAc to monitor mitochondrial redox in cells, perfused organs, and in vivo. However, the ability of hyperpolarized 13C-AcAc to cross the blood-brain barrier (BBB) and its potential to monitor brain metabolism remained unknown. Our goal was to assess the value of hyperpolarized [1,3-13C2]AcAc in healthy and tumor-bearing mice in vivo. Following hyperpolarized [1,3-13C2]AcAc injection, production of [1,3-13C2]β-HB was detected in normal and tumor-bearing mice. Significantly higher levels of [1-13C]AcAc and lower [1-13C]β-HB-to-[1-13C]AcAc ratios were observed in tumor-bearing mice. These results were consistent with decreased BDH activity in tumors and associated with increased total cellular NAD+/NADH. Our study confirmed that AcAc crosses the BBB and can be used to monitor metabolism in the brain. It highlights the potential of AcAc for future clinical translation and its utility for monitoring metabolic changes associated with glioma and other neurological disorders.
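
    A hedged sketch of the ratio readout described above: integrate the [1-13C]AcAc and [1-13C]β-HB peaks of a hyperpolarized 13C spectrum over chemical-shift windows and form the β-HB/AcAc ratio. The ppm axis, peak windows, and placeholder spectrum are illustrative assumptions, not the acquisition or quantification procedure used in the study.

        # Illustrative sketch: beta-HB / AcAc ratio from integrated peaks of a 13C spectrum.
        import numpy as np

        ppm = np.linspace(160, 190, 3000)       # assumed chemical-shift axis (ppm)
        spectrum = np.random.rand(3000)         # placeholder magnitude spectrum

        def integrate(ppm_axis, spec, lo, hi):
            """Sum spectral intensity within a chemical-shift window (ppm)."""
            mask = (ppm_axis >= lo) & (ppm_axis <= hi)
            return spec[mask].sum()

        # Peak windows below are assumptions for illustration only.
        acac = integrate(ppm, spectrum, 174.5, 176.5)   # assumed [1-13C]AcAc window
        bhb = integrate(ppm, spectrum, 180.5, 182.5)    # assumed [1-13C]beta-HB window
        print(f"beta-HB / AcAc ratio: {bhb / acac:.2f}")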

    Integrating Multiple 3D Views through Frame-of-reference Interaction

    Frame-of-reference interaction consists of a unified set of 3D interaction techniques for exploratory navigation of large virtual spaces in non-immersive environments. It is based on a conceptual framework that considers navigation from a cognitive perspective, as a way of facilitating changes in user attention from one reference frame to another, rather than from the mechanical perspective of moving a camera between different points of interest. All of our techniques link multiple frames of reference in some meaningful way. Some techniques link multiple windows within a zooming environment, while others allow seamless changes of user focus between static objects, moving objects, and groups of moving objects. We present our techniques as they are implemented in GeoZui3D, a geographic visualization system for ocean data.
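
    As a rough illustration of attaching the view to a frame of reference rather than moving a free camera, the sketch below expresses a fixed camera offset inside a tracked object's frame and composes it with the object's world transform, so the view follows the object as it moves. The transform conventions and function names are assumptions, not the GeoZui3D API.

        # Hypothetical sketch: keep a camera linked to a moving object's frame of
        # reference by composing the object's world transform with a fixed local offset.
        import numpy as np

        def translation(x, y, z):
            """4x4 homogeneous translation matrix."""
            m = np.eye(4)
            m[:3, 3] = (x, y, z)
            return m

        # Camera offset inside the object's frame: 10 units behind and 3 above it.
        local_offset = translation(0.0, 3.0, -10.0)

        def camera_world(object_world):
            """World-space camera pose that follows the object's frame of reference."""
            return object_world @ local_offset

        obj_pose = translation(100.0, 0.0, 25.0)   # object's current world transform
        print(camera_world(obj_pose)[:3, 3])       # camera position moves with the object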