1,861 research outputs found

    Brain-Computer Interfaces for Detection and Localization of Targets in Aerial Images

    Objective. The N2pc event-related potential (ERP) appears on the side of the scalp opposite to the visual hemifield where an object of interest is located. We explored the feasibility of using it to extract information on the spatial location of targets in aerial images shown by means of a rapid serial visual presentation (RSVP) protocol using single-trial classification. Methods. Images were shown to 11 participants at a presentation rate of 5 Hz while recording electroencephalographic signals. With the resulting ERPs, we trained linear classifiers for single-trial detection of target presence and location. We analyzed the classifiers' decisions and their raw output scores on independent test sets, as well as the averages and voltage distributions of the ERPs. Results. The N2pc is elicited in RSVP presentation of complex images and can be recognized in single trials (the median area under the receiver operating characteristic curve was 0.76 for left-versus-right classification). Moreover, the peak amplitude of this ERP correlates with the horizontal position of the target within an image. The N2pc varies significantly depending on handedness, and these differences can be used to discriminate participants in terms of their preferred hand. Conclusion and Significance. The N2pc is elicited during RSVP presentation of real complex images and contains analogue information that can be used to roughly infer the horizontal position of targets. Furthermore, differences in the N2pc due to handedness should be taken into account when creating collaborative brain-computer interfaces.
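
    The abstract does not spell out the classification pipeline, so below is a minimal Python sketch of single-trial left-versus-right classification scored with the area under the ROC curve. The synthetic data, the feature layout, and the choice of shrinkage LDA are illustrative assumptions, not the authors' actual method.

        # Minimal sketch: single-trial left-vs-right classification of N2pc epochs.
        # Assumes epochs are already baseline-corrected and flattened into feature
        # vectors (e.g., mean amplitudes per channel/time window); the synthetic
        # data below merely stands in for real EEG features.
        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n_trials, n_features = 400, 64          # hypothetical sizes
        X = rng.normal(size=(n_trials, n_features))
        y = rng.integers(0, 2, size=n_trials)   # 0 = target left, 1 = target right
        X[y == 1, :8] += 0.5                    # inject a weak lateralized effect

        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.25, random_state=0, stratify=y)

        clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
        clf.fit(X_train, y_train)
        scores = clf.decision_function(X_test)  # raw classifier output per trial
        print("AUC:", roc_auc_score(y_test, scores))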

    The multisensory body revealed through its cast shadows

    One key issue when conceiving the body as a multisensory object is how the cognitive system integrates visible instances of the self and other bodies with one's own somatosensory processing, to achieve self-recognition and body ownership. Recent research has strongly suggested that shadows cast by our own body have a special status for cognitive processing, directing attention to the body in a fast and highly specific manner. The aim of the present article is to review the most recent scientific contributions addressing how body shadows affect both sensory/perceptual and attentional processes. The review examines three main points: (1) body shadows as a special window to investigate the construction of multisensory body perception; (2) experimental paradigms and related findings; (3) open questions and future trajectories. The reviewed literature suggests that shadows cast by one's own body promote binding between personal and extrapersonal space and elicit automatic orienting of attention toward the body part casting the shadow. Future research should address whether the effects exerted by body shadows are similar to those observed when observers are exposed to other visual instances of their body. The results will further clarify the processes underlying the merging of vision and somatosensation when creating body representations.

    The Integration Of Audio Into Multimodal Interfaces: Guidelines And Applications Of Integrating Speech, Earcons, Auditory Icons, and Spatial Audio (SEAS)

    The current research aims to provide validated guidelines for the integration of audio into human-system interfaces. This work first discusses the utility of integrating audio to support multimodal human-information processing. Next, an auditory interactive computing paradigm utilizing Speech, Earcons, Auditory icons, and Spatial audio (SEAS) cues is proposed, and guidelines for the integration of SEAS cues into multimodal systems are presented. Finally, the results of two studies are presented that evaluate the utility of SEAS cues, developed following the proposed guidelines, in relieving perceptual and attentional processing bottlenecks when conducting Unmanned Air Vehicle (UAV) control tasks. The results demonstrate that SEAS cues significantly enhance human performance on UAV control tasks, particularly response accuracy and reaction time on a secondary monitoring task. The results suggest that SEAS cues may be effective in overcoming perceptual and attentional bottlenecks, with the advantages most pronounced under high-workload conditions. The theories and principles provided in this paper should be of interest to audio system designers and anyone involved in the design of multimodal human-computer systems.
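
    As an illustration of one SEAS component (the specific cue designs used in the studies are not reproduced here), the following Python sketch synthesizes a simple two-note rising earcon, the kind of abstract tonal motif typically mapped to an interface event. The pitches, durations, and envelope are arbitrary assumptions.

        # Minimal sketch: synthesize a two-note rising earcon and save it as WAV.
        # The pitches, durations, and envelope are illustrative choices only.
        import wave
        import numpy as np

        SR = 44100  # sample rate in Hz

        def tone(freq_hz, dur_s, amp=0.4):
            t = np.linspace(0.0, dur_s, int(SR * dur_s), endpoint=False)
            env = np.minimum(1.0, 10.0 * np.minimum(t, dur_s - t) / dur_s)  # fade in/out
            return amp * env * np.sin(2.0 * np.pi * freq_hz * t)

        # A rising two-note motif, e.g. to signal a new UAV status message.
        earcon = np.concatenate([tone(523.25, 0.15), tone(783.99, 0.20)])  # C5 then G5
        pcm = (earcon * 32767).astype(np.int16)

        with wave.open("earcon.wav", "wb") as w:
            w.setnchannels(1)       # mono
            w.setsampwidth(2)       # 16-bit samples
            w.setframerate(SR)
            w.writeframes(pcm.tobytes())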

    Target detection and localization using thermal camera, mmWave radar and deep learning

    Reliable detection and localization of tiny unmanned aerial vehicles (UAVs), birds, and other aerial vehicles with small cross-sections is an ongoing challenge. The detection task becomes even more challenging in harsh weather conditions such as snow, fog, and dust. RGB camera-based sensing is widely used for some tasks, especially navigation, but RGB camera performance degrades in poor lighting conditions. mmWave radars, on the other hand, perform very well in harsh weather, and thermal cameras perform reliably in low lighting. The combination of these two sensors therefore makes an excellent choice for many of these applications. In this work, a model to detect and localize UAVs is built using an integrated system of a thermal camera and mmWave radar. Data collected with the integrated sensors are used to train an object-detection model based on the YOLOv5 algorithm. The model detects and classifies objects such as humans, cars, and UAVs. The images from the thermal camera are used in combination with the trained model to localize UAVs in the camera's field of view (FOV).
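
    The abstract gives no implementation details, so the following is a minimal Python sketch of the inference step only: it assumes a YOLOv5 model fine-tuned on the thermal data (loaded via torch.hub from a hypothetical weights file 'thermal_uav.pt') and uses a simple linear mapping from bounding-box center to azimuth within an assumed horizontal FOV.

        # Minimal sketch: run a (hypothetically fine-tuned) YOLOv5 model on a
        # thermal frame and convert each detection's bounding-box center into an
        # approximate azimuth angle within the camera's horizontal field of view.
        import torch

        HFOV_DEG = 57.0          # assumed horizontal FOV of the thermal camera
        IMG_WIDTH = 640          # assumed frame width in pixels

        # 'thermal_uav.pt' is a placeholder for custom-trained weights.
        model = torch.hub.load("ultralytics/yolov5", "custom", path="thermal_uav.pt")

        results = model("thermal_frame.png")          # one inference pass
        for x1, y1, x2, y2, conf, cls in results.xyxy[0].tolist():
            cx = 0.5 * (x1 + x2)                      # bbox center, pixels
            azimuth = (cx / IMG_WIDTH - 0.5) * HFOV_DEG
            print(f"{model.names[int(cls)]}: conf={conf:.2f}, azimuth={azimuth:+.1f} deg")

    A real deployment would presumably fuse the mmWave radar's range measurement with this azimuth to recover a full position; that fusion step is beyond this sketch.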

    Autonomous Quadrocopter for Search, Count and Localization of Objects

    This chapter describes and evaluates the design and implementation of a new fully autonomous quadrocopter, which is capable of self-reliant search, count and localization of a predefined object on the ground inside a room.
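
    The chapter's control and vision design is not reproduced here; as a toy illustration of the "search" component, the Python sketch below generates a boustrophedon ("lawnmower") coverage path over a room floor, a common way to realize exhaustive indoor search. Room dimensions, camera footprint, and altitude are arbitrary assumptions.

        # Toy sketch: generate a boustrophedon ("lawnmower") sweep over a room
        # floor so a quadrocopter's downward camera covers the whole area.
        # Room size, camera footprint, and altitude are illustrative values.
        def sweep_waypoints(room_w, room_d, footprint, alt):
            """Yield (x, y, z) waypoints covering a room_w x room_d floor."""
            y, going_right = 0.0, True
            while y <= room_d:
                xs = (0.0, room_w) if going_right else (room_w, 0.0)
                yield (xs[0], y, alt)                 # enter the current lane
                yield (xs[1], y, alt)                 # fly to the lane's far end
                y += footprint                        # advance by one camera footprint
                going_right = not going_right         # alternate sweep direction

        for wp in sweep_waypoints(room_w=6.0, room_d=4.0, footprint=1.0, alt=1.5):
            print(wp)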