7 research outputs found

    Real-time On-board Object Tracking for Cooperative Flight Control

    One possible cooperative flight situation is a scenario in which the decision on a new path is taken by a certain fleet member, called the leader. The update on the new path is transmitted to the fleet members via a communication channel that can be noisy. An optical sensor can be used as a back-up for re-estimating the path parameters based on visual information. For a certain topology, the issue can be solved by continuously tracking the leader of the fleet in the video sequence and re-adjusting the flight parameters accordingly. To solve this problem, a real-time system has been developed for recognizing and tracking 3D objects. Any change in the 3D position of the leading object is determined by the on-board system, and the speed, pitch, yaw and roll angles are adjusted to sustain the topology. Given a 2D image acquired by an on-board camera, the system has to perform background subtraction, recognize the object, track it and evaluate the relative rotation, scale and translation of the object. In this paper, a comparative study of different algorithms is carried out based on time and accuracy constraints. The solution for 3D pose estimation is based on a system of invariant Zernike moments. The candidate techniques solving the complete set of procedures have been implemented on a Texas Instruments TMS320DM642 EVM board. It is shown that 14 frames per second can be processed, which supports the real-time implementation of the tracking system with reasonable accuracy.
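    The abstract does not give implementation details, but the rotation property it relies on is standard: rotating an image by an angle alpha multiplies the Zernike moment Z(n,m) by exp(-i*m*alpha), so the phase difference between the moments of a reference patch and the current patch recovers the in-plane rotation. The Python sketch below illustrates that property; the function names and the choice of order (n=2, m=2) are illustrative assumptions, not a description of the authors' DSP code.

        # Minimal sketch: in-plane rotation estimation from the phase of a
        # Zernike moment, assuming square grayscale patches centred on the
        # tracked object.
        import numpy as np
        from math import factorial

        def zernike_moment(img, n, m):
            """Complex Zernike moment Z(n,m) of a square grayscale patch."""
            N = img.shape[0]
            ys, xs = np.mgrid[0:N, 0:N]
            x = (2.0 * xs - N + 1) / (N - 1)   # map pixels onto the unit disc
            y = (2.0 * ys - N + 1) / (N - 1)
            rho = np.hypot(x, y)
            theta = np.arctan2(y, x)
            inside = rho <= 1.0
            # Radial polynomial R(n,|m|)(rho).
            R = np.zeros_like(rho)
            for k in range((n - abs(m)) // 2 + 1):
                c = ((-1) ** k * factorial(n - k)
                     / (factorial(k)
                        * factorial((n + abs(m)) // 2 - k)
                        * factorial((n - abs(m)) // 2 - k)))
                R += c * rho ** (n - 2 * k)
            V = R * np.exp(-1j * m * theta)    # conjugated basis function
            return (n + 1) / np.pi * np.sum(img[inside] * V[inside])

        def relative_rotation(patch_ref, patch_cur, n=2, m=2):
            """Rotation (radians, modulo 2*pi/m) of patch_cur vs. patch_ref."""
            z_ref = zernike_moment(patch_ref, n, m)
            z_cur = zernike_moment(patch_cur, n, m)
            return np.angle(z_ref / z_cur) / m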

    On-board three-dimensional object tracking: Software and hardware solutions

    We describe a real-time system for recognition and tracking of 3D objects such as UAVs, airplanes and fighters using an optical sensor. Given a 2D image, the system has to perform background subtraction and recognize the relative rotation, scale and translation of the object in order to sustain a prescribed topology of the fleet. In the thesis, a comparative study of different algorithms and a performance evaluation are carried out based on time and accuracy constraints. For the background subtraction task we evaluate frame differencing, the approximate median filter and the mixture of Gaussians, and propose a classification based on neural network methods. For object detection we analyze the performance of invariant moments, the scale invariant feature transform and the affine scale invariant feature transform. Various tracking algorithms, such as mean shift with variable- and fixed-size windows, the scale invariant feature transform, Harris corners and fast full search based on the fast Fourier transform, are evaluated. We develop an algorithm for calculating the relative rotations and the scale change based on Zernike moments. Based on the design criteria, a selection is made for on-board implementation. The candidate techniques have been implemented on the Texas Instruments TMS320DM642 EVM board. It is shown in the thesis that 14 frames per second can be processed, which supports the real-time implementation of the tracking system under reasonable accuracy limits.
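    Of the background subtraction candidates listed above, frame differencing and the approximate median filter are the cheapest and map naturally onto a fixed-point DSP. The Python sketch below shows both in their textbook form; the threshold and update step are illustrative values, not figures from the thesis.

        # Minimal sketch of two of the evaluated background-subtraction
        # methods in textbook form: frame differencing and the approximate
        # median filter. Threshold and step size are illustrative.
        import numpy as np

        DIFF_THRESHOLD = 25  # 8-bit intensity threshold (illustrative)

        def frame_difference(prev_frame, frame, thresh=DIFF_THRESHOLD):
            """Foreground mask: pixels that changed by more than thresh."""
            diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
            return diff > thresh

        def approximate_median_update(background, frame, step=1):
            """Nudge each background pixel one step toward the current frame.

            Over many frames the model converges to the per-pixel median,
            which is robust to short-lived foreground objects.
            """
            bg = background.astype(np.int16)
            bg += step * np.sign(frame.astype(np.int16) - bg)
            return np.clip(bg, 0, 255).astype(np.uint8)

        def foreground_mask(background, frame, thresh=DIFF_THRESHOLD):
            """Foreground mask against the running background model."""
            return frame_difference(background, frame, thresh)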

    Real-Time Edge Detection using Sundance Video and Image Processing System

    Edge detection is one of the most important concerns in digital image and video processing. Advances in technology have greatly benefited edge detection and opened up new avenues for research, one such field being real-time video and image processing. This work consists of the implementation of various image processing algorithms for edge detection using the Sobel, Prewitt, Canny and Laplacian operators, and a different technique is reported to increase the performance of the edge detection. The algorithmic computations in real time may have a high level of time complexity, and hence the use of the Sundance video and image processing module for the implementation of such algorithms is proposed here. This module is based on the Sundance SMT339 processor, a dedicated high-speed image processing module for use in a wide range of image analysis systems, which combines a DSP and an FPGA. The image processing engine is based upon the Texas Instruments TMS320DM642 video digital signal processor, and a powerful Virtex-4 FPGA (XC4VFX60-10) is used on board as the FPGA processing unit for image data. It is observed that techniques which follow the staged process of detecting noise and then filtering the noisy pixels achieve better performance than others. In this thesis, such schemes for the Sobel, Prewitt, Canny and Laplacian detectors are proposed.
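    As an illustration of the simplest of the operators named above, the following Python sketch implements standard Sobel edge detection; the gradient-magnitude threshold is an illustrative value, and the DSP/FPGA partitioning described in the abstract is not reproduced here.

        # Minimal sketch of standard Sobel edge detection.
        import numpy as np

        SOBEL_X = np.array([[-1, 0, 1],
                            [-2, 0, 2],
                            [-1, 0, 1]], dtype=np.float32)
        SOBEL_Y = SOBEL_X.T  # vertical-gradient kernel

        def filter3x3(img, kernel):
            """Apply a 3x3 kernel over the valid region (no padding)."""
            h, w = img.shape
            out = np.zeros((h - 2, w - 2), dtype=np.float32)
            for i in range(3):
                for j in range(3):
                    out += kernel[i, j] * img[i:i + h - 2, j:j + w - 2]
            return out

        def sobel_edges(img, thresh=100.0):
            """Binary edge map from the Sobel gradient magnitude."""
            f = img.astype(np.float32)
            gx = filter3x3(f, SOBEL_X)
            gy = filter3x3(f, SOBEL_Y)
            return np.hypot(gx, gy) > thresh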

    DESIGN AND CONSTRUCTION OF A REMOTELY CONTROLLED VEHICLE ANTI-THEFT SYSTEM VIA GSM NETWORK

    The remotely controlled vehicle anti-theft system via GSM network is a system that exploits the GSM network in order to produce a reliable and efficient vehicle security system. The design project can be viewed from two perspectives, namely the hardware consideration and the software consideration. Minicom, a terminal emulation program on Linux, was utilized for the configuration of the modem used in this project work due to its inherent advantages. Communication between the user and the vehicle subsystem is via SMS (Short Messaging Service) messaging. SMS commands are sent to the GSM/GPRS modem module, which interprets the message and performs the necessary control actions. Also, SMS messages are sent from the GSM/GPRS modem module to the user's mobile phone whenever an alarm situation occurs. A toy car was used as a prototype display of this project work, and the prototype car was immobilized and demobilized from a mobile phone via SMS.
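    The abstract does not list the modem commands used, but GSM/GPRS modems are conventionally driven with standard AT commands over a serial line, which is also what Minicom exposes for manual configuration. The Python sketch below shows how an alarm SMS could be sent that way; the serial device path and phone number are placeholders, not values from the project.

        # Minimal sketch: sending an alert SMS through a GSM/GPRS modem with
        # standard AT commands (AT+CMGF, AT+CMGS). Device path and phone
        # number are placeholders, not values from the project.
        import time
        import serial  # pyserial

        def send_sms(port, number, text):
            """Send text to number via a GSM modem attached at port."""
            with serial.Serial(port, baudrate=9600, timeout=5) as modem:
                modem.write(b"AT+CMGF=1\r")               # select text mode
                time.sleep(0.5)
                modem.write(f'AT+CMGS="{number}"\r'.encode())
                time.sleep(0.5)
                modem.write(text.encode() + b"\x1a")      # Ctrl-Z ends message
                time.sleep(2)
                return modem.read(modem.in_waiting or 1)  # e.g. b"OK"

        # Example: alert the owner when the alarm input fires.
        # send_sms("/dev/ttyUSB0", "+10000000000", "ALARM: intrusion detected")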

    Real-time scalable video coding for surveillance applications on embedded architectures


    Design of a High-Speed Architecture for Stabilization of Video Captured Under Non-Uniform Lighting Conditions

    Video captured under shaky conditions suffers from unwanted vibrations. A robust algorithm to stabilize the video by compensating for the vibrations arising from the physical setting of the camera is presented in this dissertation. A very high-performance hardware architecture on Field Programmable Gate Array (FPGA) technology is also developed for the implementation of the stabilization system. Stabilization of video sequences captured under non-uniform lighting conditions begins with a nonlinear enhancement process. This improves the visibility of the scene captured by physical sensing devices, which have a limited dynamic range. This physical limitation causes the saturated region of the image to shadow out the rest of the scene. It is therefore desirable to bring back a more uniform scene which eliminates the shadows to a certain extent. Stabilization of video requires the estimation of global motion parameters. By obtaining reliable background motion, the video can be spatially transformed to the reference sequence, thereby eliminating the unintended motion of the camera. A reflectance-illuminance model for video enhancement is used in this research work to improve the visibility and quality of the scene. With fast color space conversion, the computational complexity is reduced to a minimum. The basic video stabilization model is formulated and configured for hardware implementation. Such a model involves the evaluation of reliable features for tracking, motion estimation, and an affine transformation to map the display coordinates of the stabilized sequence. The multiplications, divisions and exponentiations are replaced by simple arithmetic and logic operations using improved log-domain computations in the hardware modules. On Xilinx's Virtex II 2V8000-5 FPGA platform, the prototype system consumes 59% of the logic slices, 30% of the flip-flops, 34% of the lookup tables, 35% of the embedded RAMs and two ZBT frame buffers. The system is capable of rendering 180.9 million pixels per second (mpps) and consumes approximately 30.6 watts of power at 1.5 volts. With a 1024×1024 frame, this throughput is equivalent to 172 frames per second (fps). Future work will optimize the performance-resource trade-off to meet the specific needs of applications. It further extends the model for extraction and tracking of moving objects, as our model inherently encapsulates the attributes of spatial distortion and motion prediction to reduce complexity. With these parameters to narrow down the processing range, it is possible to achieve a minimum of 20 fps on desktop computers with Intel Core 2 Duo or Quad Core CPUs and 2 GB of DDR2 memory without dedicated hardware.
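    The log-domain remark above can be made concrete: once operands are mapped through a logarithm, multiplication, division and exponentiation reduce to addition, subtraction and scaling, which is why they are cheap to realize with adders and small lookup tables on an FPGA. The Python sketch below shows the idea in floating point; the fixed-point lookup-table details of the dissertation's hardware modules are not reproduced.

        # Minimal sketch of log-domain arithmetic: after x -> log2(x),
        # multiply, divide and exponentiate become add, subtract and scale.
        # Real FPGA modules would use fixed-point log/antilog lookup tables.
        import math

        def to_log(x):
            return math.log2(x)   # hardware: small LUT plus interpolation

        def from_log(lx):
            return 2.0 ** lx      # hardware: antilog LUT

        def log_mul(lx, ly):
            return lx + ly        # multiply -> add

        def log_div(lx, ly):
            return lx - ly        # divide -> subtract

        def log_pow(lx, p):
            return p * lx         # exponentiate -> scale

        # Example: (6 * 7) / 3 computed entirely in the log domain.
        result = from_log(log_div(log_mul(to_log(6), to_log(7)), to_log(3)))
        print(round(result, 6))   # 14.0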

    Effects of errorless learning on the acquisition of velopharyngeal movement control

    Session 1pSC - Speech Communication: Cross-Linguistic Studies of Speech Sound Learning of the Languages of Hong Kong (Poster Session)
    The implicit motor learning literature suggests a benefit for learning if errors are minimized during practice. This study investigated whether the same principle holds for learning velopharyngeal movement control. Normal-speaking participants learned to produce hypernasal speech in either an errorless learning condition (in which the possibility for errors was limited) or an errorful learning condition (in which the possibility for errors was not limited). The nasality level of the participants' speech was measured by nasometer and reflected by nasalance scores (in %). Errorless learners practiced producing hypernasal speech with a threshold nasalance score of 10% at the beginning, which gradually increased to a threshold of 50% at the end. The same set of threshold targets was presented to errorful learners, but in reversed order. Errors were defined by the proportion of speech with a nasalance score below the threshold. The results showed that, relative to errorful learners, errorless learners displayed fewer errors (50.7% for errorful vs. 17.7% for errorless) and a higher mean nasalance score (31.3% for errorful vs. 46.7% for errorless) during the acquisition phase. Furthermore, errorless learners outperformed errorful learners in both retention and novel transfer tests. Acknowledgment: Supported by The University of Hong Kong Strategic Research Theme for Sciences of Learning. © 2012 Acoustical Society of America
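    The error definition above is purely computational: a production counts toward the error proportion when its nasalance score falls below the current threshold, with the errorless group's thresholds rising from 10% to 50% and the errorful group's running in reverse. The Python sketch below is an illustrative reconstruction of that bookkeeping with synthetic scores, not the study's analysis code; the trial count and linear schedules are assumptions.

        # Illustrative reconstruction of the error bookkeeping: a production
        # is an error when its nasalance score is below the threshold for
        # that trial. Trial count and the linear schedules are assumptions.
        import numpy as np

        N_TRIALS = 40  # illustrative number of acquisition trials

        errorless_thresholds = np.linspace(10, 50, N_TRIALS)  # easy -> hard
        errorful_thresholds = errorless_thresholds[::-1]      # hard -> easy

        def error_proportion(nasalance_scores, thresholds):
            """Fraction of productions whose nasalance fell below threshold."""
            scores = np.asarray(nasalance_scores, dtype=float)
            return float(np.mean(scores < thresholds))

        # Synthetic scores drifting upward as a learner adapts:
        rng = np.random.default_rng(0)
        scores = np.linspace(15, 55, N_TRIALS) + rng.normal(0, 5, N_TRIALS)
        print(error_proportion(scores, errorless_thresholds))
        print(error_proportion(scores, errorful_thresholds))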