
    Optimizations and applications in head-mounted video-based eye tracking

    Video-based eye tracking techniques have become increasingly attractive in many research fields, such as visual perception and human-computer interface design. The technique primarily relies on the positional difference between the center of the eye's pupil and the first-surface reflection at the cornea, the corneal reflection (CR). This difference vector is mapped to determine an observer's point of regard (POR). Current head-mounted video-based eye trackers are limited in several respects, such as inadequate measurement range and misdetection of eye features (pupil and CR). This research first proposes a new 'structured illumination' configuration, using multiple IREDs to illuminate the eye, to ensure that eye positions can still be tracked even during extreme eye movements (up to ±45° horizontally and ±25° vertically). Eye features are then detected by a two-stage processing approach. First, potential CRs and the pupil are isolated based on statistical information in an eye image. Second, genuine CRs are distinguished by a novel CR location prediction technique based on the well-correlated relationship between the offset of the pupil and that of the CR. The optical relationship between the pupil and CR offsets derived in this thesis can be applied to the two typical illumination configurations, collimated and near-source, in video-based eye tracking systems. The relationship from the optical derivation and that from an experimental measurement match well. Two application studies of smooth pursuit dynamics were conducted, one in a controlled static (laboratory) environment and one in an unconstrained vibrating (car) environment. In the first study, extended stimuli (color photographs subtending 2° and 17°, respectively) were found to enhance smooth pursuit movements induced by realistic images: the eye velocity for tracking a small dot (subtending <0.1°) saturated at about 64 deg/sec, while the saturation velocity for the extended images occurred at higher velocities. The difference in gain due to target size was significant between the dot and the two extended stimuli, while no statistical difference existed between the two extended stimuli. In the second study, two visual stimuli, the same as in the first study, were used. Visual performance was impaired dramatically by the whole-body motion in the car, even when tracking a slowly moving target (2 deg/sec); the eye was unable to perform a pursuit task as smoothly as in the static environment, even though the unconstrained head motion in the unstable condition was expected to enhance visual performance.
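
    The abstract does not give the mapping itself, but the pupil-CR difference vector is commonly mapped to the point of regard with a low-order polynomial fitted during calibration. The sketch below assumes that convention (a second-order polynomial and a least-squares fit); the function names and the choice of terms are illustrative, not taken from the thesis.

```python
import numpy as np

def fit_por_mapping(diff_vectors, screen_points):
    """Fit a second-order polynomial mapping from pupil-CR difference
    vectors (dx, dy) to point-of-regard coordinates on a calibration plane.

    diff_vectors: (N, 2) array of pupil-center minus CR-center offsets.
    screen_points: (N, 2) array of known calibration target positions.
    Returns one coefficient vector per axis, solved by least squares.
    """
    dx, dy = diff_vectors[:, 0], diff_vectors[:, 1]
    # Design matrix with constant, linear, cross, and quadratic terms.
    A = np.column_stack([np.ones_like(dx), dx, dy, dx * dy, dx**2, dy**2])
    coeff_x, *_ = np.linalg.lstsq(A, screen_points[:, 0], rcond=None)
    coeff_y, *_ = np.linalg.lstsq(A, screen_points[:, 1], rcond=None)
    return coeff_x, coeff_y

def map_por(diff_vector, coeff_x, coeff_y):
    """Map a single pupil-CR difference vector to a point of regard."""
    dx, dy = diff_vector
    a = np.array([1.0, dx, dy, dx * dy, dx**2, dy**2])
    return a @ coeff_x, a @ coeff_y
```

    With a set of calibration fixations at known target positions, fit_por_mapping would be run once, and map_por applied to each subsequent difference vector during tracking.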

    Collision detection for UAVs using Event Cameras

    This dissertation explores the use of event cameras for collision detection in unmanned aerial vehicles (UAVs). Traditional cameras have been widely used in UAVs for obstacle avoidance and navigation, but they suffer from high latency and low dynamic range. Event cameras, on the other hand, capture only the changes in the scene and can operate at high speeds with low latency. The goal of this research is to investigate the potential of event cameras for UAV collision detection, which is crucial for safe operation in complex and dynamic environments. The dissertation presents a review of the current state of the art in the field and evaluates a developed algorithm for event-based collision detection for UAVs. The performance of the algorithm was tested through practical experiments in which 9 sequences of events were recorded using an event camera, depicting different scenarios with stationary and moving objects as obstacles. Simultaneously, inertial measurement unit (IMU) data was collected to provide additional information about the UAV’s movement. The recorded data was then processed using the proposed event-based collision detection algorithm for UAVs, which consists of four components: ego-motion compensation, normalized mean timestamp, morphological operations, and clustering. First, the ego-motion component compensates for the UAV’s motion by estimating its rotational movement from the IMU data. Next, the normalized mean timestamp component calculates the mean timestamp of each event and normalizes it, helping to reduce noise in the event data and improving the accuracy of collision detection. The morphological operations component applies operations such as erosion and dilation to the event data to remove small noise and enhance the edges of objects. Finally, the last component uses a clustering method called DBSCAN to group the events, allowing objects to be detected and their positions estimated. This step provides the final output of the collision detection algorithm, which can be used for obstacle avoidance and navigation in UAVs. The algorithm was evaluated on its accuracy, latency, and computational efficiency. The findings demonstrate that event-based collision detection has the potential to be an effective and efficient method for detecting collisions in UAVs, with high accuracy and low latency. These results suggest that event cameras could be beneficial for enhancing the safety and dependability of UAVs in challenging situations. Moreover, the datasets and algorithm developed in this research are made publicly available, facilitating the evaluation and enhancement of the algorithm for specific applications. This approach could encourage collaboration among researchers and enable further comparisons and investigations.
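    As a rough illustration of the four-step pipeline described above (ego-motion compensation, normalized mean timestamp, morphological operations, DBSCAN clustering), the following sketch strings the steps together with NumPy, OpenCV, and scikit-learn. The camera intrinsics, sensor size, thresholds, and DBSCAN parameters are placeholders, and the per-pixel mean-timestamp formulation is one plausible reading of the abstract rather than the authors' exact implementation.

```python
import numpy as np
import cv2
from sklearn.cluster import DBSCAN

# Hypothetical pinhole intrinsics and sensor size for a small event camera.
K = np.array([[320.0, 0.0, 160.0],
              [0.0, 320.0, 120.0],
              [0.0, 0.0, 1.0]])
SENSOR_SHAPE = (240, 320)  # rows, cols

def compensate_ego_motion(xs, ys, R):
    """Warp event pixel coordinates by the camera rotation R (3x3),
    estimated from the IMU gyroscope over the event window."""
    xy1 = np.column_stack([xs, ys, np.ones_like(xs, dtype=float)])
    rays = np.linalg.inv(K) @ xy1.T   # back-project pixels to bearing rays
    warped = K @ R.T @ rays           # undo the rotation, re-project
    warped /= warped[2]
    return warped[0], warped[1]

def normalized_mean_timestamp(xs, ys, ts, shape=SENSOR_SHAPE):
    """Accumulate the mean event timestamp per pixel and normalize to [0, 1];
    after ego-motion compensation, pixels dominated by events that rotation
    cannot explain (independently moving objects) stand out."""
    mean_t = np.zeros(shape)
    count = np.zeros(shape)
    xi = np.clip(np.round(xs).astype(int), 0, shape[1] - 1)
    yi = np.clip(np.round(ys).astype(int), 0, shape[0] - 1)
    np.add.at(mean_t, (yi, xi), ts)
    np.add.at(count, (yi, xi), 1)
    mean_t[count > 0] /= count[count > 0]
    return mean_t / mean_t.max() if mean_t.max() > 0 else mean_t

def detect_obstacles(mean_t, threshold=0.6):
    """Threshold the timestamp image, clean it with erosion then dilation
    (morphological opening), and cluster the remaining pixels with DBSCAN."""
    mask = (mean_t > threshold).astype(np.uint8)
    kernel = np.ones((3, 3), np.uint8)
    mask = cv2.dilate(cv2.erode(mask, kernel), kernel)
    pts = np.column_stack(np.nonzero(mask))  # (row, col) coordinates
    if len(pts) == 0:
        return []
    labels = DBSCAN(eps=5, min_samples=10).fit_predict(pts)
    # Centroid of each cluster (label -1 marks noise) as the obstacle position.
    return [pts[labels == k].mean(axis=0) for k in set(labels) if k != -1]
```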

    ISAR Autofocus Imaging Algorithm for Maneuvering Targets Based on Phase Retrieval and Gabor Wavelet Transform

    Imaging a rotating, maneuvering target with a large rotation angle and a high translational speed has been a challenging problem in inverse synthetic aperture radar (ISAR) autofocus imaging, in particular when the target has both radial and angular accelerations. In this paper, on the basis of the phase retrieval algorithm and the Gabor wavelet transform (GWT), we propose a new method for phase error correction. The approach first performs range compression on the ISAR raw data to obtain range profiles, and then applies the GWT as the time-frequency analysis tool to meet the rotational motion compensation (RMC) requirement. The time-varying terms in the Doppler frequency shift caused by rotational motion can be eliminated at the selected time frame. The processed backscattered signal is then transformed to the frequency domain, where phase retrieval is applied to perform translational motion compensation (TMC). Phase retrieval plays an important role in range tracking because the modulus of the ISAR echo is affected by neither the radial velocity nor the acceleration of the target. Finally, after the removal of both the rotational and translational motion errors, a time-invariant Doppler shift is obtained, and radar returns from the same scatterer always remain in the same range cell. The unwanted motion effects can therefore be removed by applying this approach, yielding an autofocused ISAR image of the maneuvering target. Moreover, the method does not need to estimate any motion parameters of the maneuvering target, which proves very effective for ideal range–Doppler processing. Experimental and simulation results verify the feasibility of this approach.
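
    The GWT step amounts to a time-frequency analysis of the slow-time signal in each range cell after range compression. The following is a minimal sketch of such a transform built from Gaussian-windowed complex exponentials (Gabor atoms); the window width, frequency grid, and implementation by convolution are illustrative choices, not the paper's.

```python
import numpy as np

def gabor_wavelet_transform(signal, fs, freqs, sigma=0.05):
    """Time-frequency magnitude map of a slow-time signal from one range cell.

    signal: complex slow-time samples after range compression.
    fs: pulse repetition frequency (slow-time sampling rate, Hz).
    freqs: Doppler frequencies (Hz) at which to evaluate the transform.
    sigma: temporal width of the Gaussian window in seconds.
    Returns an array of shape (len(freqs), len(signal)).
    """
    n = len(signal)
    tfr = np.zeros((len(freqs), n), dtype=complex)
    half = int(4 * sigma * fs)              # truncate the atom at +/- 4 sigma
    tau = np.arange(-half, half + 1) / fs
    for i, f in enumerate(freqs):
        # Gabor atom: Gaussian window times a complex exponential at frequency f.
        atom = np.exp(-tau**2 / (2 * sigma**2)) * np.exp(-2j * np.pi * f * tau)
        tfr[i] = np.convolve(signal, atom, mode="same")
    return np.abs(tfr)
```

    Peaks in the resulting map indicate the instantaneous Doppler content of the range cell, which is the information the RMC step uses to select a time frame where the rotational terms can be removed.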

    VISUAL ATTITUDE PROPAGATION FOR SMALL SATELLITES

    As electronics become smaller and more capable, it has become possible to conduct meaningful and sophisticated satellite missions in a small form factor. However, the capability of small satellites and the range of possible applications are limited by the capabilities of several technologies, including attitude determination and control systems. This dissertation evaluates the use of image-based visual attitude propagation as a complement or alternative to other attitude determination technologies that are suitable for miniature satellites. The concept lies in using miniature cameras to track image features across frames and extracting the underlying rotation. The problem of visual attitude propagation as a small satellite attitude determination system is addressed from several aspects: related work, algorithm design, hardware and performance evaluation, possible applications, and on-orbit experimentation. These areas of consideration reflect the organization of this dissertation. A “stellar gyroscope” is developed: a visual, star-based attitude propagator that uses the relative motion of stars in an imager’s field of view to infer attitude changes. The device generates spacecraft relative-attitude estimates in three degrees of freedom. Algorithms for star detection, correspondence, and attitude propagation are presented. The Random Sample Consensus (RANSAC) approach is applied to the correspondence problem to pair stars across frames while mitigating false-positive and false-negative star detections. This approach provides tolerance to the noise levels expected when using miniature optics without baffling, as well as to the noise caused by radiation dose on orbit. The hardware design and algorithms are validated using test images of the night sky. The application of the stellar gyroscope as part of a CubeSat attitude determination and control system is described. The stellar gyroscope is used to augment a MEMS gyroscope attitude propagation algorithm to minimize drift in the absence of an absolute attitude sensor. The stellar gyroscope is a technology demonstration experiment on KySat-2, a 1-Unit CubeSat being developed in Kentucky that is slated to launch through the NASA ELaNa CubeSat Launch Initiative. It has also been adopted by industry as a sensor for CubeSat Attitude Determination and Control Systems (ADCS).
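
    As an illustration of the correspondence-and-propagation step, the sketch below fits a relative rotation to matched star direction vectors (Wahba's problem solved via SVD) inside a simple RANSAC loop that rejects false pairings. It assumes star detections have already been converted to unit vectors in the camera frame; the minimal sample size, iteration count, and angular tolerance are placeholder values, not those used in the dissertation.

```python
import numpy as np

def rotation_from_pairs(v_prev, v_curr):
    """Least-squares rotation (Wahba/Kabsch via SVD) taking unit star vectors
    observed in the previous frame onto those observed in the current frame."""
    H = v_prev.T @ v_curr
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

def ransac_attitude(v_prev, v_curr, iters=200, tol_rad=0.002):
    """RANSAC over candidate star pairings: repeatedly fit a rotation to a
    minimal sample and keep the model with the largest consensus set."""
    n = len(v_prev)
    best_inliers = np.zeros(n, dtype=bool)
    rng = np.random.default_rng(0)
    for _ in range(iters):
        idx = rng.choice(n, size=2, replace=False)
        R = rotation_from_pairs(v_prev[idx], v_curr[idx])
        # Angular residual between predicted and observed star directions.
        pred = (R @ v_prev.T).T
        err = np.arccos(np.clip(np.sum(pred * v_curr, axis=1), -1.0, 1.0))
        inliers = err < tol_rad
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on all inliers for the final relative-attitude estimate.
    return rotation_from_pairs(v_prev[best_inliers], v_curr[best_inliers]), best_inliers
```

    Chaining the per-frame rotations propagates the attitude between absolute fixes, which is how such a visual propagator could limit the drift of a MEMS gyroscope solution.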

    Creating Bright Shadows: Visual Music Using Immersion, Stereography, and Computer Animation

    This thesis outlines the research and process of creating an immersive audiovisual work titled “Bright Shadows,” an 11-minute, three-dimensional animation of dynamic, colorful abstractions choreographed to instrumental music. The piece belongs to a long tradition of visual art aspiring to musical analogy, known as “visual music,” and draws heavily on the two-dimensional aesthetic stylings of time-based visual music works made in the early to mid-twentieth century. Among the topics discussed in this paper is an overview of the artistic and technical challenges associated with translating the visual grammar of these two-dimensional works to three-dimensional computer graphics while establishing a unique aesthetic style. This paper also presents a framework for creating a digital, synthetic space using a large-format immersive theater, stereoscopic imaging, and static framing of the digital environment.

    Correction of spherical single lens aberration using digital image processing for cellular phone camera

    Degree system: New; Report number: Kou 3276; Degree type: Doctor of Engineering; Date conferred: 2011/2/21; Waseda University degree record number: Shin 558

    Visual and Camera Sensors

    This book includes 13 papers published in the Special Issue "Visual and Camera Sensors" of the journal Sensors. The goal of this Special Issue was to invite high-quality, state-of-the-art research papers dealing with challenging issues in visual and camera sensors.