
    Ocular attention-sensing interface system

    The purpose of the research was to develop an innovative human-computer interface based on eye movement and voice control. By eliminating the manual interface (keyboard, joystick, etc.), the Ocular Attention-Sensing Interface System (OASIS) provides a control mechanism that is natural, efficient, accurate, and low in workload.

    Holographic enhanced remote sensing system

    The Holographic Enhanced Remote Sensing System (HERSS) consists of three primary subsystems: (1) an Image Acquisition System (IAS); (2) a Digital Image Processing System (DIPS); and (3) a Holographic Generation System (HGS) which multiply exposes a thermoplastic recording medium with sequential 2-D depth slices that are displayed on a Spatial Light Modulator (SLM). Full-parallax holograms were successfully generated by superimposing SLM images onto the thermoplastic and photopolymer. An improved HGS configuration utilizes the phase conjugate recording configuration, the 3-SLM-stacking technique, and the photopolymer. The holographic volume size is currently limited to the physical size of the SLM. A larger-format SLM is necessary to meet the desired 6 inch holographic volume. A photopolymer with an increased photospeed is required to ultimately meet a display update rate of less than 30 seconds. It is projected that the latter two technology developments will occur in the near future. While the IAS and DIPS subsystems were unable to meet NASA goals, an alternative technology is now available to perform the IAS/DIPS functions. Specifically, a laser range scanner can be utilized to build the HGS numerical database of the objects at the remote work site
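The HGS described above reduces a 3-D numerical database (e.g. from a laser range scanner) to sequential 2-D depth slices that are exposed one at a time on the SLM. The function below is a minimal sketch of that slicing step; the name `depth_slices` and the toy point cloud are illustrative, not taken from the HERSS software.

```python
def depth_slices(points, n_slices):
    """Bin 3-D points (x, y, z) into sequential 2-D depth slices,
    nearest plane first, as fed one at a time to an SLM."""
    zs = [p[2] for p in points]
    zmin, zmax = min(zs), max(zs)
    width = (zmax - zmin) / n_slices or 1.0  # avoid /0 for flat clouds
    slices = [[] for _ in range(n_slices)]
    for x, y, z in points:
        # clamp the deepest point into the last slice
        k = min(int((z - zmin) / width), n_slices - 1)
        slices[k].append((x, y))
    return slices

# toy range-scanner cloud: three points at distinct depths
cloud = [(0.0, 0.0, 0.0), (1.0, 1.0, 0.5), (2.0, 2.0, 1.0)]
slices = depth_slices(cloud, 2)  # two exposure planes
```

Each returned slice would then be rendered on the SLM and superimposed onto the recording medium in turn.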

    Conceptual design study for an advanced cab and visual system, volume 2

    The performance, design, construction and testing requirements are defined for developing an advanced cab and visual system. The rotorcraft system integration simulator is composed of the advanced cab and visual system and the rotorcraft system motion generator, and is part of an existing simulation facility. User applications for the simulator include rotorcraft design development, product improvement, threat assessment, and accident investigation.

    Eye Tracking: A Perceptual Interface for Content Based Image Retrieval

    In this thesis, visual search experiments are devised to explore the feasibility of an eye-gaze-driven search mechanism. The thesis first explores gaze behaviour on images possessing different levels of saliency. Eye behaviour was predominantly attracted to salient locations, but also required frequent reference to non-salient background regions, which indicated that information from scan paths might prove useful for image search. The thesis then investigates the benefits of eye tracking as an image retrieval interface, in terms of speed relative to selection by mouse and of the efficiency of eye tracking mechanisms in the task of retrieving target images. Results are analysed using ANOVA and significant findings are discussed. They show that eye selection was faster than a computer mouse, and that experience gained during visual tasks carried out using a mouse would benefit users subsequently transferred to an eye tracking system. Results of the image retrieval experiments show that users are able to navigate to a target image within a database, confirming the feasibility of an eye-gaze-driven search mechanism. Additional histogram analysis of the fixations, saccades and pupil diameters in the human eye movement data revealed a new method of extracting intentions from gaze behaviour for image search, of which the user is not aware and which promises even faster search performance. The research has two implications for Content Based Image Retrieval: (i) improvements in query formulation for visual search and (ii) new methods for visual search using attentional weighting. Furthermore, it was demonstrated that users are able to find target images at sufficient speeds, indicating that pre-attentive activity plays a role in visual search. A review of current eye tracking technology, applications, visual perception research, and models of visual attention is included, together with a review of the technology's potential for commercial exploitation.
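Histogram analysis of fixations and saccades presupposes segmenting the raw gaze stream into those events. A common way to do this, and a reasonable stand-in for whatever the thesis used (the source does not name its algorithm), is dispersion-threshold identification (I-DT):

```python
def dispersion(window):
    """Spatial spread of a gaze window: (max-min in x) + (max-min in y)."""
    xs = [p[0] for p in window]
    ys = [p[1] for p in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(gaze, max_disp=0.05, min_len=5):
    """Dispersion-threshold (I-DT) fixation detection.
    gaze: (x, y) samples at a fixed rate; returns (start, end) index
    pairs (end exclusive) of windows that stay within max_disp."""
    fixations, i = [], 0
    while i + min_len <= len(gaze):
        j = i + min_len
        if dispersion(gaze[i:j]) <= max_disp:
            # grow the window while it stays spatially compact
            while j < len(gaze) and dispersion(gaze[i:j + 1]) <= max_disp:
                j += 1
            fixations.append((i, j))
            i = j
        else:
            i += 1
    return fixations

# two stable clusters separated by a saccade-like jump
trace = [(0.0, 0.0)] * 6 + [(1.0, 1.0)] * 6
fixes = detect_fixations(trace)  # -> [(0, 6), (6, 12)]
```

Everything between consecutive fixations is then treated as a saccade, and the resulting durations and amplitudes can be histogrammed as in the thesis.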

    Handheld image acquisition with real-time vision for human-computer interaction on mobile applications

    Integrated master's thesis, Biomedical Engineering and Biophysics (Clinical Engineering and Medical Instrumentation), Universidade de Lisboa, Faculdade de Ciências, 2019. Many important diseases manifest themselves in the retina, both primary retinal conditions and systemic disorders. Diabetic retinopathy, glaucoma and age-related macular degeneration are among the most frequent ocular disorders and the leading causes of blindness in developed countries. Since these disorders are becoming increasingly prevalent, there has been a need to encourage high-coverage screening among the most susceptible population. Because vision requires the retina to receive light from the outside world, the optical components in front of it must be transparent. This makes the retinal tissue, and thereby brain tissue, accessible for imaging in a non-invasive manner. There are several approaches to visualising the retina, including fluorescein angiography, optical coherence tomography and fundus photography. Fraunhofer's EyeFundusScope (EFS) prototype is a handheld, smartphone-based fundus camera that does not require pupil dilation. It employs machine learning algorithms to search the image for lesions that are often associated with diabetic retinopathy, making it a pre-diagnostic tool. The robustness of this computer vision algorithm, as well as the diagnostic performance of ophthalmologists and neurologists, is strongly related to the quality of the images acquired, and the consistency of handheld capture depends heavily on proper human interaction.
    To improve the user's contribution to the retinal acquisition procedure, a new graphical user interface was designed and implemented in the EFS Acquisition App. The intended approach is to make the EFS easier to use for non-ophthalmic trained personnel, in both clinical and non-clinical environments. Comprising several interaction elements created to suit the needs of the acquisition procedure, the graphical user interface should help the user position and align the EFS illumination beam with the patient's pupil, as well as keep track of the time between acquisitions on the same eye. Initially, several versions of rotational interaction elements were designed and later implemented in the EFS Acquisition App. These use data from the smartphone's inertial sensors to give real-time feedback to the user while moving the EFS. Besides the rotational interaction elements, a time-lapse counter and an eye indicator were also designed and implemented. Usability tests took place after three assemblies were implemented and corrected with the help of a model eye ophthalmoscope trainer. A protocol for the different use-case scenarios was also elaborated, and a tutorial was created. Results from the usability tests show that the new graphical user interface had a very positive outcome: the majority of users adapted quickly to the new interface, and for many it contributed to a successful acquisition task. In the future, combining inertial sensor data with image recognition may prove to be the foundation of a more efficient interaction technique in clinical practice. Furthermore, the new graphical user interface could provide the EFS with an application for educational purposes.
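The rotational interaction elements derive device orientation from the smartphone's inertial sensors. A minimal sketch of that step, computing pitch and roll from a gravity-dominated accelerometer sample; the function names and the 3-degree tolerance are illustrative assumptions, not taken from the EFS app:

```python
import math

def tilt_angles(ax, ay, az):
    """Pitch and roll (degrees) from a gravity-dominated accelerometer
    sample (m/s^2), driving a rotational alignment widget."""
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

def alignment_ok(ax, ay, az, tol_deg=3.0):
    """Signal 'ready' when the device is within tol_deg of level."""
    pitch, roll = tilt_angles(ax, ay, az)
    return abs(pitch) <= tol_deg and abs(roll) <= tol_deg

level = alignment_ok(0.0, 0.0, 9.81)   # device flat -> True
tilted = alignment_ok(0.0, 9.81, 0.0)  # rolled 90 degrees -> False
```

In practice the raw samples would be low-pass filtered (or a fused rotation vector used) before feeding the widget, so the indicator does not jitter with hand tremor.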

    Heritage documentation techniques and methods

    This methodology notebook series, "Heritage documentation techniques and methods", contains:
    • 3D modelling, digital photography and information dissemination
    • Creation of 3D models by using scanners
    • Low-cost desktop scanner
    • Photography notes: Exposure
    • Photography notes: Focal length, lenses and cross-polarization
    • White adjustment and colour calibration
    • Image-Based Modelling Systems
    • Focus stacking technique
    • Rollout photography and DStretch filter
    • Information dissemination
    • 3D diagram blocks
    • Simple animations of 3D models
    This series of notebooks aims to describe a set of techniques used mainly to construct and document three-dimensional (3D) models and high-resolution photographs of archaeological objects. These techniques can be used to build models with verified metric quality, calibrated colour and high resolution, to be disseminated on the Internet using various platforms and web services. Part of the production of these notebooks was financed through project GR18028 (research group RNM026), co-financed by the European Regional Development Funds (FEDER) and the Government of Extremadura.
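One of the techniques listed, focus stacking, merges a series of exposures focused at different depths into a single image that is sharp everywhere. A toy sketch of the core idea on small grayscale pixel grids (real pipelines align the frames first and smooth the sharpness maps; the function names here are illustrative):

```python
def laplacian(img, x, y):
    """Discrete Laplacian magnitude: a local sharpness cue."""
    return abs(4 * img[y][x] - img[y - 1][x] - img[y + 1][x]
               - img[y][x - 1] - img[y][x + 1])

def focus_stack(frames):
    """Merge same-size grayscale frames, keeping per pixel the value
    from the frame with the strongest local contrast (sharpest focus)."""
    h, w = len(frames[0]), len(frames[0][0])
    out = [row[:] for row in frames[0]]  # borders fall back to frame 0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            best = max(frames, key=lambda f: laplacian(f, x, y))
            out[y][x] = best[y][x]
    return out

# a frame focused on the subject vs. a defocused (flat) frame
sharp = [[0, 0, 0], [0, 10, 0], [0, 0, 0]]
blurry = [[5, 5, 5], [5, 5, 5], [5, 5, 5]]
merged = focus_stack([blurry, sharp])  # centre pixel taken from `sharp`
```

The same select-the-sharpest-source principle underlies the dedicated focus-stacking tools used for macro photography of small archaeological objects.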

    A comparison of actual ergonomic usage using a statistical survey to prescribed ergonomic usage by experts

    The purpose of this research is to show how factual data can be combined with statistical evidence to make people more aware that having knowledge of ergonomic workstation design is not necessarily the same as putting that knowledge to work, and further that continuous bad computer workstation habits can lead to permanent disability. A survey model is presented and used to validate the hypothesis that permanent disability occurs. Data were collected from 250 individual, anonymous surveys. The statistical evidence is then compared with the factual evidence presented in the literature review to identify differences and similarities. This research draws on an array of medical and professional people who have worked, or are currently working, in the field of computer workstation ergonomics. Statistical analysis was the major factor in determining whether the computer workstation population is using the information supplied to them regarding correct computer ergonomics. The survey used to obtain the data can be adjusted to fit any ergonomics professional's quest for data; the outcomes will depend on how the input is arranged and the number of participants.
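Comparing actual survey counts against expert-prescribed usage is a natural fit for a goodness-of-fit test. A minimal sketch with a Pearson chi-square statistic; the 140/110 split and the 80/20 expert expectation are invented illustrative numbers, not results from the survey:

```python
def chi_square(observed, expected):
    """Pearson chi-square statistic for matched category counts."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical split of the 250 respondents on one guideline
# (follows prescribed monitor-height guidance vs. does not),
# against the 80/20 split experts might prescribe.
observed = [140, 110]
expected = [200, 50]
stat = chi_square(observed, expected)  # -> 90.0
# df = 1; the critical value at alpha = 0.05 is 3.841
significant = stat > 3.841
```

A statistic this far above the critical value would indicate that actual usage deviates significantly from the prescribed pattern, which is the kind of gap between knowledge and practice the thesis sets out to demonstrate.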

    Engineering Data Compendium. Human Perception and Performance, Volume 1

    The Engineering Data Compendium was the product of an R&D program (the Integrated Perceptual Information for Designers project) aimed at facilitating the application of basic research findings in human performance to the design of military crew systems. The principal objective was to develop a workable strategy for: (1) identifying and distilling information of potential value to system design from the existing research literature, and (2) presenting this technical information in a way that would aid its accessibility, interpretability, and applicability for system designers. The present four volumes of the Engineering Data Compendium represent the first implementation of this strategy. This is Volume 1, which contains sections on Visual Acquisition of Information, Auditory Acquisition of Information, and Acquisition of Information by Other Senses.

    Designing biomimetic vehicle-to-pedestrian communication protocols for autonomously operating & parking on-road electric vehicles

    Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2012. Cataloged from PDF version of thesis. Includes bibliographical references (p. 125-127). With research institutions from various private, government and academic sectors performing research into autonomous vehicle deployment strategies, the way we think about vehicles must adapt. But what happens when the driver, the main conduit of information transaction between the vehicle and its surroundings, is removed? The EVITA system aims to fill this communication void by giving the autonomous vehicle the means to sense others around it and react to various stimuli in as intuitive a way as possible, taking design cues from the living world. The system comprises various types of sensors (computer vision, UWB beacon tracking, sonar) and actuators (light, sound, mechanical) in order to express recognition of others, announce intentions, and portray the vehicle's general state. All systems are built on the 2nd version of the 1/2-scale CityCar concept vehicle, featuring advanced mixed materials (CFRP + aluminum) and a significantly more modularized architecture. By Nicholas Pennycooke. S.M.
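The recognize/announce/state behaviours lend themselves to a small event-driven state machine mapping sensed stimuli to expressive outputs. The sketch below is hypothetical: the state, event and actuation names are invented for illustration and are not taken from the EVITA implementation.

```python
# (state, event) -> (next_state, actuation)
SIGNALS = {
    ("idle", "pedestrian_detected"): ("acknowledging", "turn_lights_toward"),
    ("acknowledging", "pedestrian_crossing"): ("yielding", "dim_lights_nod"),
    ("yielding", "path_clear"): ("idle", "resume_chime"),
}

def step(state, event):
    """Advance the vehicle's expressive state; unknown events
    leave the state unchanged with no actuation."""
    return SIGNALS.get((state, event), (state, None))

state, action = step("idle", "pedestrian_detected")
```

Keeping the protocol as an explicit table makes it easy to audit which stimulus produces which signal, a useful property when the signals must be legible to pedestrians.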

    Manoeuvring drone (Tello Talent) using eye gaze and or fingers gestures

    The project aims to combine hands and eyes to control a Tello Talent drone using computer vision, machine learning and an eye-tracking device for gaze detection and interaction. The main purpose of this project is gaming, experimentation and education for the coming generation; in addition, it is very useful for people who cannot use their hands, who can manoeuvre the drone with their eye movements, and hopefully this will bring them some fun. The idea is inspired by progress in innovative technologies such as machine learning, computer vision and object detection, which offer a large field of applications in diverse domains; many researchers are improving on and innovating new intelligent ways of controlling drones by combining computer vision, machine learning, artificial intelligence and related techniques. This project can help anyone, even people with no prior knowledge of programming, computer vision or eye-tracking theory, to learn the basics of drone operation, object detection, programming, and the integration of the different hardware and software involved, and then to play. As a final objective, they will be able to build a simple application that controls the drone using movements of the hands, the eyes or both; during practice they should take into consideration the operating conditions and safety requirements set by the manufacturers of the drone and the eye-tracking device. The Tello Talent drone is built around a series of features, functions and scripts that have already been developed, are embedded in the autopilot memory, and are accessible to users via an SDK protocol. The SDK serves as an easy guide to developing simple and complex applications, allowing the user to develop several flying-mission programs.
    Different experiments were carried out to determine which scenario best detects hand movements and extracts the key points in real time on a computer with low computing power. As a result, the Google artificial intelligence research group was found to offer an open-source platform well suited to this application: MediaPipe, a customizable machine learning solution for live streaming video. In this project, MediaPipe and the eye-tracking module are the fundamental tools for developing the application.
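The control loop ultimately has to translate a tracked hand or gaze position into a Tello SDK command. The Tello SDK's `rc` command takes four stick values in [-100, 100] (left/right, forward/back, up/down, yaw); the sketch below maps a normalized offset from the frame centre to such a command string. The gain and the axis mapping are illustrative choices, not taken from the project code:

```python
def rc_command(dx, dy, gain=100):
    """Map a normalized offset (dx, dy in [-1, 1]) between the tracked
    hand (or gaze point) and the frame centre to a Tello SDK command
    'rc <left/right> <forward/back> <up/down> <yaw>'."""
    clamp = lambda v: max(-100, min(100, int(v * gain)))
    lr, ud = clamp(dx), clamp(-dy)  # image y grows downward
    return f"rc {lr} 0 {ud} 0"

# target detected slightly right of and above the frame centre
cmd = rc_command(0.2, -0.1)  # -> "rc 20 0 10 0"
```

In the full application the offset would come from a MediaPipe hand landmark or the eye tracker's gaze point, and the string would be sent to the drone over its UDP command port.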