3,260 research outputs found

    Understanding Minds in Real-World Environments: Toward a Mobile Cognition Approach

    There is a growing body of evidence that important aspects of human cognition have been marginalized or overlooked by traditional cognitive science. In particular, the use of laboratory-based experiments, in which stimuli are artificial and response options are fixed, inevitably results in findings that are less ecologically valid with respect to real-world behavior. In the present review we highlight the opportunities provided by a range of new mobile technologies that allow traditionally lab-bound measurements to be collected during natural interactions with the world. We begin by outlining the theoretical support that mobile approaches receive from the development of embodied accounts of cognition, and we review the widening evidence illustrating the importance of examining cognitive processes in context. As we acknowledge, in practice the development of mobile approaches brings fresh challenges and will undoubtedly require innovation in paradigm design and analysis. If successful, however, the mobile cognition approach will offer novel insights in a range of areas, including the cognitive processes underlying navigation through space and the role of attention during natural behavior. We argue that the development of real-world mobile cognition offers both increased ecological validity and the opportunity to examine the interactions between perception, cognition and action, rather than examining each in isolation.

    An informatics system for exploring eye movements in reading

    Eye tracking techniques have been widely used in many research areas, including cognitive science, psychology, human-computer interaction, marketing research and medical research. Many computer programs have emerged to help researchers design experiments, present visual stimuli and process the large quantity of numerical data produced by the eye tracker. However, most applications, especially commercial products, are designed for a particular tracking device and tend to be general purpose; few are designed specifically for reading research. This can be inconvenient when dealing with complex experimental designs, multi-source data collection and text-based data analysis, which together span almost every aspect of a reading study's lifecycle. A flexible and powerful system that manages the lifecycle of different reading studies is required to fulfill these demands. We therefore created an informatics system, designed specifically for reading research, with two major software suites: Experiment Executor and EyeMap. Experiment Executor helps reading researchers build complex experimental environments, which can rapidly present display changes and support the co-registration of eye tracking information with other data collection devices such as EEG (electroencephalography) amplifiers. The EyeMap component helps researchers visualize and analyze a wide range of writing systems, including spaced and unspaced scripts, presented in proportional or non-proportional fonts. The aim of the system is to accelerate the life cycle of a reading experiment from design through analysis. Several experiments conducted on the system confirmed its effectiveness and capability, and yielded several new reading research findings on the visual information processing stages of reading.
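    The text-based analysis this kind of system performs typically begins by assigning raw fixations to word-level interest areas on the displayed text. A minimal sketch in Python, assuming a hypothetical data layout (this is illustrative only, not EyeMap's actual API or file format):

    ```python
    # Hypothetical sketch: assigning raw fixations to word bounding boxes
    # ("interest areas") for a reading study. Data layout is illustrative.

    def assign_fixations_to_words(fixations, word_boxes):
        """Map each fixation (x, y, duration_ms) to the index of the word
        whose bounding box (x0, y0, x1, y1) contains it, or None."""
        assigned = []
        for fx, fy, dur in fixations:
            hit = None
            for i, (x0, y0, x1, y1) in enumerate(word_boxes):
                if x0 <= fx <= x1 and y0 <= fy <= y1:
                    hit = i
                    break
            assigned.append((hit, dur))
        return assigned

    words = [(0, 0, 50, 20), (55, 0, 120, 20)]      # two word boxes on one line
    fixes = [(20, 10, 210), (80, 12, 250), (300, 10, 180)]
    print(assign_fixations_to_words(fixes, words))  # [(0, 210), (1, 250), (None, 180)]
    ```

    From such word-level assignments, standard reading measures (first-fixation duration, gaze duration, skipping rate) can then be aggregated per word.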

    Ubiquitous Integration and Temporal Synchronisation (UbiITS) framework: a solution for building complex multimodal data capture and interactive systems

    Contemporary Data Capture and Interactive Systems (DCIS) are beset by technical complexities such as multimodal data types, diverse hardware and software components, time synchronisation issues and distributed deployment configurations. Building these systems is inherently difficult and requires addressing these complexities before the intended and purposeful functionality can be attained. The technical issues are often common across diverse applications. This thesis presents the Ubiquitous Integration and Temporal Synchronisation (UbiITS) framework, a generic solution to the technical complexities of building DCISs. The proposed solution is an abstract software framework that can be extended and customised to any application's requirements. UbiITS brings together the fundamental software components, techniques, system-level layer abstractions and a reference architecture needed for the systematic construction of complex DCISs. This work details four case studies that showcase the versatility and extensibility of the UbiITS framework and demonstrate how it was employed to solve a range of technical requirements; in each case UbiITS operated as the core element of the application. These case studies are, moreover, novel systems in their own right within their respective domains. Longstanding technical issues, such as flexibly integrating and interoperating multimodal tools and achieving precise time synchronisation, were resolved in each application by employing UbiITS. The framework enabled a functional system infrastructure in these cases, opening up new lines of research in each discipline that would not have been possible without it. The thesis further presents a sample implementation of the framework in device firmware, exhibiting its capability to be deployed directly on a hardware platform. Summary metrics are also produced to establish the complexity, reusability, extensibility, implementation and maintainability characteristics of the framework. Funding: Engineering and Physical Sciences Research Council (EPSRC) grants EP/F02553X/1, 114433 and 11394
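    One of the framework's core concerns, precise time synchronisation across devices, is commonly handled by fitting a linear clock map between pairs of shared sync events recorded on both clocks. A hedged Python sketch of that general technique (not necessarily the exact method UbiITS uses):

    ```python
    # Illustrative sketch of cross-device time synchronisation: fit a linear
    # map t_ref ≈ a * t_dev + b from paired sync marks, correcting both clock
    # offset (b) and drift (a). A common technique, shown here from scratch.

    def fit_clock_map(t_device, t_reference):
        """Least-squares linear map from device time to reference time."""
        n = len(t_device)
        mx = sum(t_device) / n
        my = sum(t_reference) / n
        sxx = sum((x - mx) ** 2 for x in t_device)
        sxy = sum((x - mx) * (y - my) for x, y in zip(t_device, t_reference))
        a = sxy / sxx          # drift (clock-rate ratio)
        b = my - a * mx        # offset at device time zero
        return a, b

    # Example: an EEG clock with slight drift and offset relative to an
    # eye tracker's clock (values in seconds, hypothetical):
    a, b = fit_clock_map([0.0, 10.0, 20.0], [5.0, 15.02, 25.04])
    print(round(a, 4), round(b, 2))  # 1.002 5.0
    ```

    Once fitted, every device timestamp can be remapped onto the reference timeline before streams are merged.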

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz) resulting in reduced motion blur. Hence, event cameras have large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as low-latency, high-speed and high-dynamic-range settings. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available and the tasks they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
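    The event stream described above, a sequence of (timestamp, x, y, polarity) tuples, can be turned into a crude image by accumulating signed events over a time window, one of the simplest and most common event-processing strategies. A minimal illustrative sketch in Python (data and dimensions are made up):

    ```python
    # Minimal sketch of "event frame" accumulation: sum event polarities
    # per pixel over a time window. Each event is (timestamp_s, x, y, polarity).

    def events_to_frame(events, width, height, t0, t1):
        """Sum event polarities per pixel over the window [t0, t1)."""
        frame = [[0] * width for _ in range(height)]
        for t, x, y, pol in events:
            if t0 <= t < t1:
                frame[y][x] += 1 if pol > 0 else -1
        return frame

    events = [(0.001, 1, 0, +1), (0.002, 1, 0, +1),
              (0.003, 0, 1, -1), (0.020, 1, 1, +1)]   # last event outside window
    print(events_to_frame(events, 2, 2, 0.0, 0.010))  # [[0, 2], [-1, 0]]
    ```

    Richer representations surveyed in the paper (time surfaces, voxel grids, learned embeddings) refine this idea by also preserving the timing information that plain accumulation discards.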

    How to Build an Embodiment Lab: Achieving Body Representation Illusions in Virtual Reality

    Advances in computer graphics algorithms and virtual reality (VR) systems, together with the reduction in cost of associated equipment, have led scientists to consider VR as a useful tool for conducting experimental studies in fields such as neuroscience and experimental psychology. In particular virtual body ownership, where the feeling of ownership over a virtual body is elicited in the participant, has become a useful tool for studying body representation (how the brain represents the body) in cognitive neuroscience and psychology. Although VR has been shown to be a useful tool for exploring body ownership illusions, integrating the various technologies necessary for such a system can be daunting. In this paper we discuss the technical infrastructure necessary to achieve virtual embodiment. We describe a basic VR system and how it may be used for this purpose, and then extend this system with the introduction of real-time motion capture, a simple haptics system and the integration of physiological and brain electrical activity recordings.

    NASA space station automation: AI-based technology review

    Research and development projects in automation for the Space Station are discussed. Artificial Intelligence (AI) based automation technologies are planned to enhance crew safety through a reduced need for EVA (extravehicular activity), increase crew productivity through the reduction of routine operations, increase Space Station autonomy, and augment Space Station capability through the use of teleoperation and robotics. AI technology will also be developed for the servicing of satellites at the Space Station, system monitoring and diagnosis, space manufacturing, and the assembly of large space structures.

    Geosynchronous Meteorological Satellite Data Seminar

    A seminar was organized by NASA to acquaint the meteorological community with data now available, and data scheduled to be available in the future, from geosynchronous meteorological satellites. The twenty-four papers were presented in three half-day sessions, in addition to tours of the Image Display and LANDSAT Processing Facilities during the afternoon of the second day.