
    Technical Workshop: Advanced Helicopter Cockpit Design

    Information-processing demands on both civilian and military aircrews have increased enormously as rotorcraft have come to be used for adverse-weather, day/night, and remote-area missions. Issues in applied psychology, engineering, and operational research bearing on future helicopter cockpit design criteria were identified. Three areas were addressed: (1) operational requirements, (2) advanced avionics, and (3) man-system integration

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4) = 2.565, p = 0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task
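    A minimal illustration (not from the paper) of how the radial-shift manipulation could be computed, assuming rectangle centres are expressed in degrees of visual angle relative to the central fixation point; the function name and example positions are hypothetical:

```python
import math
import random

def radial_shift(x, y, shift_deg=1.0):
    """Shift a point along the imaginary 'spoke' running from central
    fixation (0, 0) through (x, y), by +/- shift_deg of visual angle."""
    eccentricity = math.hypot(x, y)      # distance from fixation, in degrees
    angle = math.atan2(y, x)             # direction of the spoke
    new_ecc = eccentricity + random.choice([-shift_deg, shift_deg])
    return new_ecc * math.cos(angle), new_ecc * math.sin(angle)

# Example: displace eight rectangle centres for the second presentation
centres = [(4.0, 0.0), (2.8, 2.8), (0.0, 4.0), (-2.8, 2.8),
           (-4.0, 0.0), (-2.8, -2.8), (0.0, -4.0), (2.8, -2.8)]
shifted = [radial_shift(x, y) for x, y in centres]
```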

    Aerospace medicine and biology. A continuing bibliography (supplement 231)

    This bibliography lists 284 reports, articles, and other documents introduced into the NASA scientific and technical information system in March 1982

    Intelligent computational techniques and virtual environment for understanding cerebral visual impairment patients

    Cerebral Visual Impairment (CVI) is a medical area concerned with the effects of brain damage on the visual field (VF). People with CVI cannot construct a complete 3-dimensional view in their brain of what they see through their eyes. They therefore have difficulties with mobility, and behaviours that others find hard to understand because of their visual impairment. One branch of Artificial Intelligence (AI) is the simulation of behaviour by building computational models that help to explain how people solve problems or why they behave in a certain way. This project describes a novel intelligent system that simulates the navigation problems faced by people with CVI. It will help relatives, friends, and ophthalmologists of CVI patients understand more about their difficulties in navigating their everyday environment. The navigation simulation system is implemented using the Unity3D game engine, and virtual scenes of different living environments are created with the Unity modelling tools. The vision of the avatar in the virtual environment is implemented using a camera provided by the 3D game engine. Given the visual field chart of a CVI patient, the system automatically creates a filter (mask) that mimics the visual defect and places it in front of the avatar's visual field. The filters are created by extracting, classifying, and converting the symbols of the defective areas in the visual field chart into numerical values, which are then converted into textures that mask the vision. Each numerical value represents a level of transparency or opacity according to the severity of the visual defect in that region; the filters represent the vision masks. Unity3D's physics support is used to represent the VF defects as structures of rays, where the length of each ray depends on the VF defect's numerical value: greater values (a greater percentage of opacity) are represented by shorter rays, while smaller values (a greater percentage of transparency) are represented by longer rays. Together, the ray lengths form the vision map (how far the patient can see). Navigation algorithms based on the generated rays have been developed to enable the avatar to move around in given virtual environments. The avatar depends on the generated vision map and exhibits different behaviours to simulate the navigation problems of real patients; its navigation behaviour differs from patient to patient according to their different defects. An experiment on navigating virtual environments (scenes) with the HTC Vive headset was conducted using different scenarios, designed to combine different VF defects with different scenes. The experiment simulates the patient's navigation in virtual environments with static objects (rooms) and in virtual environments with moving objects. The participants' actions (avoid/bump) matched the avatar's in the same scenario. This project has created a system that helps the parents and relatives of CVI patients understand what the patient encounters. It also helps specialists and educators take into account the difficulties that patients experience, and then design and develop appropriate educational programmes for each individual patient
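    A minimal sketch, not taken from the project's code, of the mapping described above: each visual-field value (a percentage of opacity) is turned into a viewing-ray length, with more opaque regions producing shorter rays. The linear mapping, function names, and maximum range are assumptions for illustration:

```python
def vf_value_to_ray_length(opacity_pct, max_range_m=10.0):
    """Map a visual-field defect value (0 = fully transparent region,
    100 = fully opaque) to a viewing-ray length: the more severe the
    defect, the shorter the ray and the less far the avatar can see."""
    opacity_pct = max(0.0, min(100.0, opacity_pct))
    return max_range_m * (1.0 - opacity_pct / 100.0)


def vf_chart_to_vision_map(vf_grid, max_range_m=10.0):
    """Convert a 2-D grid of defect values (one per visual-field region)
    into the grid of ray lengths ('vision map') used for navigation."""
    return [[vf_value_to_ray_length(v, max_range_m) for v in row]
            for row in vf_grid]


# Example: a small 2x3 patch of a visual-field chart
vision_map = vf_chart_to_vision_map([[0, 50, 100],
                                     [25, 75, 100]])
# -> [[10.0, 5.0, 0.0], [7.5, 2.5, 0.0]]
```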

    Event-Driven Technologies for Reactive Motion Planning: Neuromorphic Stereo Vision and Robot Path Planning and Their Application on Parallel Hardware

    Robotics is increasingly becoming a key factor in technological progress. Despite impressive advances over recent decades, mammalian brains still outperform even the most powerful machines in vision and motion planning. Industrial robots are very fast and precise, but their planning algorithms are not capable enough for highly dynamic environments such as those required for human-robot collaboration (HRC). Without fast and adaptive motion planning, safe HRC cannot be guaranteed. Neuromorphic technologies, including visual sensors and hardware chips, operate asynchronously and therefore process spatio-temporal information very efficiently. Event-based visual sensors in particular are already superior to conventional, synchronous cameras in many applications. Event-based methods therefore have great potential to enable faster and more energy-efficient motion-control algorithms for HRC. This thesis presents an approach to flexible, reactive motion control of a robot arm, in which exteroception is achieved through event-based stereo vision and path planning is implemented in a neural representation of the configuration space. The multi-view 3D reconstruction is evaluated through a qualitative analysis in simulation and transferred to a stereo setup of event-based cameras. A demonstrator with an industrial robot is used to evaluate the reactive, collision-free online planning and for a comparative study against sampling-based planners. This is complemented by a benchmark of parallel hardware solutions, with robotic path planning chosen as the test scenario. The results show that the proposed neural solutions are an effective way to realize robot control for dynamic scenarios. This work lays a foundation for neural solutions in adaptive manufacturing processes, including collaboration with humans, without sacrificing speed or safety, and thus paves the way for integrating brain-inspired hardware and algorithms into industrial robotics and HRC
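    The thesis's neural planner is not reproduced here; as a rough, hypothetical illustration of planning over a grid-like configuration-space map (the basic idea that neural activity-propagation planners build on), the sketch below propagates a wavefront outward from the goal and reads off a path by descending the distance values. The grid encoding, neighbourhood, and function name are assumptions:

```python
from collections import deque

def wavefront_plan(grid, start, goal):
    """Propagate a wavefront from `goal` through free cells (0) of `grid`
    and read off a shortest path from `start` by descending the distances."""
    rows, cols = len(grid), len(grid[0])
    dist = {goal: 0}
    queue = deque([goal])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (r + dr, c + dc)
            if (0 <= nb[0] < rows and 0 <= nb[1] < cols
                    and grid[nb[0]][nb[1]] == 0 and nb not in dist):
                dist[nb] = dist[(r, c)] + 1
                queue.append(nb)
    if start not in dist:
        return None                      # goal unreachable from start
    path, cell = [start], start
    while cell != goal:                  # follow decreasing distance values
        cell = min(((cell[0] + dr, cell[1] + dc)
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if (cell[0] + dr, cell[1] + dc) in dist),
                   key=dist.get)
        path.append(cell)
    return path

# 0 = free cell, 1 = obstacle
grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(wavefront_plan(grid, start=(0, 0), goal=(2, 0)))
```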

    Three-dimensional media for mobile devices

    This paper provides an overview of the core technologies enabling the delivery of 3-D media to next-generation mobile devices. Designing such a system requires profound knowledge of the human visual system and the visual cues that form the perception of depth, combined with an understanding of the user requirements for mobile 3-D media experiences. These aspects are addressed first and related to the critical parts of the generic system within a novel user-centered research framework. Next-generation mobile devices are characterized by their portable 3-D displays, as these are considered critical for enabling a genuine 3-D experience on mobiles. The quality of 3-D content is emphasized as the most important factor for the adoption of the new technology. Quality is characterized through the most typical 3-D-specific visual artifacts on portable 3-D displays and through subjective tests addressing the acceptance of, and satisfaction with, different 3-D video representation, coding, and transmission methods. Emphasis is placed on 3-D video broadcast over digital video broadcasting-handheld (DVB-H) to illustrate the importance of joint source-channel optimization of 3-D video for efficient compression and robust transmission over error-prone channels. The comparative results identify the best coding and transmission approaches and illuminate the interaction between video quality and depth perception, along with the influence of the context of media use. Finally, the paper speculates on the role and place of 3-D multimedia mobile devices in the future internet continuum, involving users in the co-creation and refinement of rich 3-D media content

    Vision-based navigation system for unmanned aerial vehicles

    The main objective of this dissertation is to provide Unmanned Aerial Vehicles (UAVs) with a robust navigation system that allows them to perform complex tasks autonomously and in real time. The proposed algorithms address the navigation problem in outdoor as well as indoor environments, based mainly on visual information captured by monocular cameras. In addition, this dissertation presents the advantages of using visual sensors as the main source of data, or as a complement to other sensors, in order to improve the accuracy and robustness of sensing. The dissertation covers several research topics based on computer vision techniques: (I) Pose Estimation, which provides a solution for estimating the 6D pose of the UAV. The algorithm is based on the combination of the SIFT detector and the FREAK descriptor, which maintains the performance of feature-point matching while decreasing the computational time. The pose estimation problem is then solved through the decomposition of the world-to-frame and frame-to-frame homographies. (II) Obstacle Detection and Collision Avoidance, in which the UAV senses and detects frontal obstacles situated in its path. The detection algorithm mimics human behaviour for detecting approaching obstacles by analyzing the size changes of the detected feature points, combined with the expansion ratios of the convex hull constructed around those feature points in consecutive frames. By comparing the area ratio of the obstacle with the position of the UAV, the method decides whether the detected obstacle may cause a collision. Finally, the algorithm extracts the collision-free zones around the obstacle and, combined with the tracked waypoints, the UAV performs the avoidance maneuver. (III) Navigation Guidance, which generates the waypoints that determine the flight path based on the environment and the detected obstacles, and provides a strategy to follow the path segments efficiently and perform the flight maneuver smoothly. (IV) Visual Servoing, which offers different control solutions (Fuzzy Logic Control (FLC) and PID) based on the obtained visual information, in order to achieve flight stability, perform the correct maneuvers, avoid possible collisions, and track the waypoints. All the proposed algorithms have been verified in real flights in both indoor and outdoor environments, taking into consideration visual conditions such as illumination and texture. The obtained results have been validated against other systems, such as the VICON motion-capture system and DGPS in the case of the pose estimation algorithm. In addition, the proposed algorithms have been compared with several previous works in the state of the art, and the results demonstrate improvements in the accuracy and robustness of the proposed algorithms. Finally, this dissertation concludes that visual sensors are lightweight, have low power consumption, and provide reliable information, making them a powerful tool in navigation systems for increasing the autonomy of UAVs in real-world applications.
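    As a rough illustration of the SIFT-plus-FREAK feature pipeline mentioned in the abstract (not the dissertation's own code), the following OpenCV sketch detects SIFT keypoints, describes them with FREAK, matches them between two frames, and estimates the frame-to-frame homography; it assumes opencv-contrib-python is installed and that two grayscale frames are available:

```python
import cv2
import numpy as np

def frame_to_frame_homography(prev_gray, curr_gray):
    """Detect SIFT keypoints, describe them with FREAK (binary descriptor),
    match with Hamming distance, and estimate the homography with RANSAC."""
    detector = cv2.SIFT_create()
    freak = cv2.xfeatures2d.FREAK_create()          # requires opencv-contrib

    kp1 = detector.detect(prev_gray, None)
    kp2 = detector.detect(curr_gray, None)
    kp1, des1 = freak.compute(prev_gray, kp1)
    kp2, des2 = freak.compute(curr_gray, kp2)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H
```

    A homography obtained this way could then be decomposed (for example with cv2.decomposeHomographyMat, given the camera intrinsics) to recover the relative rotation and translation used in the pose estimation step.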

    Spatial Displays and Spatial Instruments

    The conference proceedings topics are divided into two main areas: (1) issues of spatial and picture perception raised by graphical electronic displays of spatial information; and (2) design questions raised by the practical experience of designers actually defining new spatial instruments for use in new aircraft and spacecraft. Each topic is considered from both a theoretical and an applied direction. Emphasis is placed on discussion of phenomena and determination of design principles

    Light environment - A. Visible light. B. Ultraviolet light

    Visible and ultraviolet light environment as related to human performance and safety during space missions