
    Aerospace medicine and biology: A continuing bibliography with indexes (supplement 359)

    This bibliography lists 164 reports, articles, and other documents introduced into the NASA Scientific and Technical Information System during Jan. 1992. Subject coverage includes: aerospace medicine and physiology, life support systems and man/system technology, protective clothing, exobiology and extraterrestrial life, planetary biology, and flight crew behavior and performance.

    Complexer-YOLO: Real-Time 3D Object Detection and Tracking on Semantic Point Clouds

    Accurate detection of 3D objects is a fundamental problem in computer vision and has an enormous impact on autonomous cars, augmented/virtual reality, and many applications in robotics. In this work we present a novel fusion of a state-of-the-art neural-network-based 3D detector and visual semantic segmentation in the context of autonomous driving. Additionally, we introduce the Scale-Rotation-Translation score (SRTs), a fast and highly parameterizable evaluation metric for comparing object detections, which speeds up our inference time by up to 20% and halves training time. On top of this, we apply state-of-the-art online multi-target feature tracking to the object measurements to further increase accuracy and robustness by exploiting temporal information. Our experiments on KITTI show that we match state-of-the-art results in all related categories while maintaining the performance/accuracy trade-off and still running in real time. Furthermore, our model is the first to fuse visual semantic segmentation with 3D object detection.
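
    The abstract does not spell out how the SRT score is computed; the sketch below only illustrates the general idea of comparing a predicted and a ground-truth 3D box by separate scale, rotation, and translation terms with tunable weights. All parameter names, weights, and thresholds here are assumptions, not the paper's values.

    ```python
    # Hypothetical sketch of a Scale-Rotation-Translation (SRT) style similarity
    # between a predicted and a ground-truth 3D box. Weights and thresholds are
    # illustrative only; the paper's exact formulation is not given in the abstract.
    import numpy as np

    def srt_score(pred, gt, w_s=0.3, w_r=0.3, w_t=0.4, t_max=2.0):
        """pred/gt: dicts with 'size' (l, w, h), 'yaw' (rad), 'center' (x, y, z)."""
        # Scale term: symmetric ratio of box volumes, 1 when volumes match.
        v_p = np.prod(pred["size"])
        v_g = np.prod(gt["size"])
        s = min(v_p, v_g) / max(v_p, v_g)

        # Rotation term: heading agreement, 1 when aligned, 0 at 180 degrees.
        d_yaw = abs((pred["yaw"] - gt["yaw"] + np.pi) % (2 * np.pi) - np.pi)
        r = 1.0 - d_yaw / np.pi

        # Translation term: center distance, clipped at t_max metres.
        d = np.linalg.norm(np.asarray(pred["center"]) - np.asarray(gt["center"]))
        t = max(0.0, 1.0 - d / t_max)

        return w_s * s + w_r * r + w_t * t
    ```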

    Deriving a holistic cognitive fit model for an optimal visualization of data for management decisions

    Research shows that managerial decision making is directly correlated with both the swift availability and the ease of interpretation of the relevant information. Visualizations are already widely used to transform raw data into a more understandable format and to compress the constantly growing amount of information produced. However, research in this area is highly fragmented and its results are contradictory. This paper proposes a preliminary model based on an extensive literature review that includes current research on cognition theory. Furthermore, an early-stage validation of this model through experimental research using structural equation modeling is presented. The authors identify task complexity as one of the most important predictors of information perception of visual data; however, other influences are significant as well (data density, domain expertise, working memory capacity, and subjective visual complexity).

    Predicting student performance in an augmented reality learning environment using eye-tracking data

    This paper investigates the use of eye-tracking data as a predictor of student performance in an augmented reality (AR) learning environment. Thirty-three undergraduate students enrolled in an ergonomics course at the University of Missouri-Columbia participated in an AR biomechanics lecture consisting of 14 modules. Following each module, students answered learning comprehension questions to test their understanding of the lecture material. An additional dataset was recorded for each module in which the participant perfectly follows the virtual instructor throughout the learning space. This dataset, referred to as the baseline, serves as a comparison tool to gauge how well students follow the lecture material. Two methods are proposed to quantify each student's attention level for each module. The average difference method calculates the average distance between the student and baseline coordinates for each module. The distraction rate method expands upon the average difference method and aims to reduce the amount of noise detected; it does so by incorporating a minimum distance threshold, a binary detection signal, and a moving average window. Both metrics are tested as factors in a set of logistic regression models to determine whether they can accurately predict the correctness of students' answers. Average difference showed a correlation with answer correctness, but with an underwhelming level of significance. Distraction rate outperformed average difference and proved to be a strong and statistically significant predictor of answer correctness. Finally, two feedback systems are proposed which use distraction rate to detect when students have become distracted so that their attention can be regained, either through module-based feedback or a real-time attention guidance system.
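
    As a rough illustration of the two metrics described above, the following sketch assumes time-aligned 2D gaze coordinates for the student and the baseline; the distance threshold and window size are placeholder values, not those used in the study.

    ```python
    # Minimal sketch of the average difference and distraction rate metrics,
    # assuming per-module gaze coordinates for the student and the "perfect
    # follower" baseline are already aligned sample-by-sample.
    import numpy as np

    def average_difference(student_xy, baseline_xy):
        """Mean Euclidean distance between student and baseline gaze points."""
        d = np.linalg.norm(np.asarray(student_xy) - np.asarray(baseline_xy), axis=1)
        return d.mean()

    def distraction_rate(student_xy, baseline_xy, min_dist=0.15, window=30):
        """Fraction of samples flagged as distracted after smoothing.

        A sample counts as distracted when the moving-average of the binary
        'too far from baseline' signal exceeds 0.5.
        """
        d = np.linalg.norm(np.asarray(student_xy) - np.asarray(baseline_xy), axis=1)
        binary = (d > min_dist).astype(float)      # binary detection signal
        kernel = np.ones(window) / window          # moving-average window
        smoothed = np.convolve(binary, kernel, mode="same")
        return (smoothed > 0.5).mean()
    ```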

    Contextual Encoder-Decoder Network for Visual Saliency Prediction

    Predicting salient regions in natural images requires the detection of objects that are present in a scene. To develop robust representations for this challenging task, high-level visual features at multiple spatial scales must be extracted and augmented with contextual information. However, existing models aimed at explaining human fixation maps do not incorporate such a mechanism explicitly. Here we propose an approach based on a convolutional neural network pre-trained on a large-scale image classification task. The architecture forms an encoder-decoder structure and includes a module with multiple convolutional layers at different dilation rates to capture multi-scale features in parallel. Moreover, we combine the resulting representations with global scene information to accurately predict visual saliency. Our model achieves competitive and consistent results across multiple evaluation metrics on two public saliency benchmarks, and we demonstrate the effectiveness of the suggested approach on five datasets and selected examples. Compared to state-of-the-art approaches, the network is based on a lightweight image classification backbone and hence presents a suitable choice for applications with limited computational resources, such as (virtual) robotic systems, to estimate human fixations across complex natural scenes.
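
    The abstract describes a module with parallel convolutional layers at different dilation rates; below is a minimal PyTorch-style sketch of such a block. Channel counts and dilation rates are assumptions for illustration, not the paper's configuration.

    ```python
    # Minimal sketch of a block with parallel convolutions at different dilation
    # rates, fused by concatenation (channel counts and rates are assumed).
    import torch
    import torch.nn as nn

    class MultiDilationBlock(nn.Module):
        def __init__(self, in_ch=256, branch_ch=64, rates=(1, 4, 8, 12)):
            super().__init__()
            # One 3x3 branch per dilation rate; padding = rate keeps spatial size.
            self.branches = nn.ModuleList([
                nn.Conv2d(in_ch, branch_ch, kernel_size=3, padding=r, dilation=r)
                for r in rates
            ])
            self.fuse = nn.Conv2d(branch_ch * len(rates), in_ch, kernel_size=1)

        def forward(self, x):
            # Capture multi-scale context in parallel, then fuse the branches.
            feats = [torch.relu(b(x)) for b in self.branches]
            return torch.relu(self.fuse(torch.cat(feats, dim=1)))
    ```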

    What User Behaviors Make the Differences During the Process of Visual Analytics?

    Understanding the visual analytics process can benefit visualization researchers in multiple ways, including improving visual designs and developing advanced interaction functions. However, log files of user behaviors remain hard to analyze due to the complexity of sensemaking and our lack of knowledge about the related user behaviors. This work presents a study on a comprehensive collection of user behaviors, together with our analysis approach based on time-series classification methods. We have chosen a classical visualization application, Covid-19 data analysis, with common analysis tasks covering geo-spatial, time-series, and multi-attribute data. Our user study collects user behaviors on a diverse set of visualization tasks with two comparable systems, desktop and immersive visualizations. We summarize the classification results of three time-series machine learning algorithms at two scales and explore the influence of behavior features. Our results reveal that user behaviors can be distinguished during the process of visual analytics and that there is a potentially strong association between the physical behaviors of users and the visualization tasks they perform. We also demonstrate the use of our models by interpreting open sessions of visual analytics, which provides an automatic way to study sensemaking without tedious manual annotations.
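
    As a simplified illustration of classifying sessions by task from behavior logs, the sketch below reduces each behavior time series to summary features and applies a standard classifier; the paper's actual time-series algorithms and behavior features differ, and all names here are assumptions.

    ```python
    # Illustrative sketch: predict which visualization task a session belongs to
    # from a per-timestep behavior log, via simple summary features and a
    # generic classifier (not the paper's dedicated time-series methods).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    def summarize(session):
        """session: array of shape (T, D) with per-timestep behavior features."""
        s = np.asarray(session, dtype=float)
        return np.concatenate([
            s.mean(axis=0),                        # average behavior
            s.std(axis=0),                         # variability
            np.abs(np.diff(s, axis=0)).mean(axis=0),  # movement/change rate
        ])

    def evaluate(sessions, task_labels):
        X = np.stack([summarize(s) for s in sessions])
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        return cross_val_score(clf, X, task_labels, cv=5).mean()
    ```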

    Attention and visual memory in visualization and computer graphics

    A fundamental goal of visualization is to produce images of data that support visual analysis, exploration, and discovery of novel insights. An important consideration during visualization design is the role of human visual perception. How we “see” details in an image can directly impact a viewer’s efficiency and effectiveness. This paper surveys research on attention and visual perception, with a specific focus on results that have direct relevance to visualization and visual analytics. We discuss theories of low-level visual perception, then show how these findings form a foundation for more recent work on visual memory and visual attention. We conclude with a brief overview of how knowledge of visual attention and visual memory is being applied in visualization and graphics. We also discuss how challenges in visualization are motivating research in psychophysics.

    Effects of individuality, education, and image on visual attention: Analyzing eye-tracking data using machine learning

    Machine learning, particularly classification algorithms, constructs mathematical models from labeled data that can predict labels for new data. Using its capability to identify distinguishing patterns in multi-dimensional data, we investigated the impact of three factors on the observation of architectural scenes: individuality, education, and image stimuli. An analysis of the eye-tracking data revealed that (1) the velocity histogram was unique to each individual, (2) students of architecture and of other disciplines could be distinguished via endogenous parameters, but (3) they were more distinct in terms of seeking structural versus symbolic elements. Because of the data-driven nature of classification algorithms, which learn automatically from data, we could identify relevant parameters and distinguishing eye-tracking patterns that have not been reported in previous studies.
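
    The velocity-histogram feature mentioned in finding (1) can be illustrated roughly as follows; the sampling rate, bin count, and clipping value are assumptions rather than the study's settings.

    ```python
    # Sketch of a gaze velocity histogram: turn a gaze trace into a fixed-length
    # histogram of point-to-point speeds, which can then feed a classifier.
    import numpy as np

    def velocity_histogram(gaze_xy, sample_rate_hz=60, bins=20, v_max=500.0):
        """gaze_xy: array of shape (T, 2) in pixels; returns a normalized histogram."""
        xy = np.asarray(gaze_xy, dtype=float)
        # Speed between consecutive samples, in pixels per second.
        v = np.linalg.norm(np.diff(xy, axis=0), axis=1) * sample_rate_hz
        hist, _ = np.histogram(np.clip(v, 0, v_max), bins=bins, range=(0, v_max))
        return hist / max(hist.sum(), 1)
    ```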

    Cognitive process modeling of spatial ability: a construct validity study of an assembling object task

    M.A. thesis, University of Kansas, Psychology, 2002. The purpose of this study was to examine the cognitive processes involved in completing a spatial task in which a participant must mentally assemble two-dimensional objects. Such tasks are used to measure spatial ability on tests such as the Revised Minnesota Paper Form Board Test. Two studies were completed to support a cognitive processing model, previously proposed by Embretson and Gorin (2001), for the stages a participant must go through to solve this problem type. In the first study, data from a large group of students from the University of Kansas were used to discover which variables could be manipulated within each item to affect item difficulty and mean response time. Multiple regression models and linear logistic latent trait models were used to measure the impact of each variable on its respective cognitive processing stage. Finally, an eye-tracking study was conducted with ten students from the University of Kansas to further support the proposed cognitive processing model. A qualitative analysis of the data generally supported the proposed cognitive model, but also indicated necessary revisions.
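
    A minimal sketch of the first study's regression idea, assuming per-item design variables and an observed difficulty measure are available; the actual item features and the linear logistic latent trait models are not reproduced here, and the function and variable names are hypothetical.

    ```python
    # Sketch: regress an item-level outcome (e.g., proportion incorrect or mean
    # response time) on manipulated item features to gauge each feature's impact.
    import numpy as np
    import statsmodels.api as sm

    def fit_item_model(item_features, item_outcome):
        """item_features: (n_items, k) array of design variables per item."""
        X = sm.add_constant(np.asarray(item_features, dtype=float))
        model = sm.OLS(np.asarray(item_outcome, dtype=float), X).fit()
        return model.params, model.rsquared
    ```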