
    Lane Departure and Front Collision Warning System Using Monocular and Stereo Vision

    Driving assistance systems such as lane departure and front collision warning have attracted great attention for their promising use in road driving. Thus, this research focuses on implementing lane departure and front collision warning at the same time. To make the system useful in real situations, it is critical that the whole process run in near real time, so we chose the Hough Transform as the main algorithm for detecting lanes on the road. The Hough Transform was chosen because it is a fast and robust algorithm, which makes it possible to process as many frames per second as possible. It is used to extract lane boundary information, so that we can decide whether the car is departing its lane based on the car's position within it. We then use the front car's symmetry to perform front car detection, and combine it with the Camshift tracking algorithm to fill the gaps left by detection failures. Finally, we introduce camera calibration, stereo calibration, and how to calculate real distance from a depth map.

    An Immersive Telepresence System using RGB-D Sensors and Head Mounted Display

    We present a tele-immersive system that enables people to interact with each other in a virtual world using body gestures in addition to verbal communication. Beyond the obvious applications, including general online conversations and gaming, we hypothesize that our proposed system would be particularly beneficial to education by offering rich visual content and interactivity. One distinct feature is the integration of egocentric pose recognition that allows participants to use their gestures to demonstrate and manipulate virtual objects simultaneously. This functionality enables the instructor to effectively and efficiently explain and illustrate complex concepts or sophisticated problems in an intuitive manner. The highly interactive and flexible environment can capture and sustain more student attention than the traditional classroom setting and thus delivers a compelling experience to the students. Our main focus here is to investigate possible solutions for the system design and implementation and to devise strategies for fast, efficient computation suitable for visual data processing and network transmission. We describe the techniques and experiments in detail and provide quantitative performance results, demonstrating that our system can be run comfortably and reliably in different application scenarios. Our preliminary results are promising and demonstrate the potential for more compelling directions in cyberlearning.
    Comment: IEEE International Symposium on Multimedia 201

    Gravity optimised particle filter for hand tracking

    This paper presents a gravity-optimised particle filter (GOPF) in which the magnitude of the gravitational force on every particle is proportional to its weight. GOPF attracts nearby particles and replicates new particles, as if moving the particles towards the peak of the likelihood distribution, improving sampling efficiency. GOPF is incorporated into a technique for hand feature tracking. A fast approach to hand feature detection and labelling using convexity defects is also presented. Experimental results show that GOPF outperforms the standard particle filter and its variants, as well as the state-of-the-art CamShift-guided particle filter, using a significantly reduced number of particles.
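As background for the comparison above, the baseline that GOPF improves upon, a bootstrap particle filter for a 1-D position, can be sketched as follows (the gravity-like attraction step that distinguishes GOPF is not shown; all noise parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_1d(observations, n_particles=500,
                       motion_std=1.0, obs_std=2.0):
    """Standard bootstrap particle filter tracking a 1-D position under a
    random-walk motion model and Gaussian observation noise."""
    particles = rng.normal(observations[0], obs_std, n_particles)
    estimates = []
    for z in observations:
        # Predict: diffuse particles under the motion model.
        particles += rng.normal(0.0, motion_std, n_particles)
        # Weight: Gaussian likelihood of the observation per particle.
        w = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)
        w /= w.sum()
        # Resample: replicate particles in proportion to their weights.
        particles = particles[rng.choice(n_particles, n_particles, p=w)]
        estimates.append(particles.mean())
    return estimates
```

GOPF's contribution, per the abstract, is to move particles towards the likelihood peak before resampling, so fewer particles are needed to cover the posterior.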

    The effect of illumination compensation methods with histogram back projection for camshift application

    This paper presents the results of a factorial experiment performed to determine the effect of illumination compensation methods with histogram back projection to be used for the object tracking algorithm continuously adaptive mean-shift (Camshift). Since Camshift tracking can be used for distance approximation of an object, a precise tracker algorithm is required. This study compared two types of illumination compensation methods using Design of Experiments (DOE) in the presence of illumination inconsistency. Based on the results, it was found that selecting two channels as reference in histogram back projection weakens the accuracy of Camshift tracking, whereas the combination of both methods produces more desirable results.
    Keywords: object tracking; camshift; vision system; DOE

    Improved Face Tracking Thanks to Local Features Correspondence

    In this paper, we propose a technique to enhance the quality of detected face tracks in videos. In particular, we present a tracking algorithm that can improve the temporal localization of the tracks, remedying the unavoidable failures of face detection algorithms. Local features are extracted and tracked to “fill the gaps” left by missed detections. The principal aim of this work is to provide robust and well-localized face tracks to a system of Interactive Movietelling, but the concepts can be extended wherever there is a need to localize the presence of a given face, even in environments where face detection is, for any reason, difficult. We test the effectiveness of the proposed algorithm in terms of face localization both in space and time, first assessing the performance in an ad-hoc simulation scenario and then showing output examples from some real-world video sequences.

    Real-Time, Multiple Pan/Tilt/Zoom Computer Vision Tracking and 3D Positioning System for Unmanned Aerial System Metrology

    The study of the structural characteristics of Unmanned Aerial Systems (UASs) continues to be an important field of research for developing state-of-the-art nano/micro systems. Development of a metrology system using computer vision (CV) tracking and 3D point extraction would provide an avenue for advancing these theoretical developments. This work provides a portable, scalable system capable of real-time tracking, zooming, and 3D position estimation of a UAS using multiple cameras. Current state-of-the-art photogrammetry systems use retro-reflective markers or single-point lasers to obtain object poses and/or positions over time. Using a CV pan/tilt/zoom (PTZ) system has the potential to circumvent their limitations. The system developed in this paper exploits parallel processing and the GPU for CV tracking, using optical flow and known camera motion, in order to capture a moving object with two PTU cameras. The parallel-processing technique developed in this work is versatile, allowing other CV methods to be tested with a PTZ system using known camera motion. Utilizing known camera poses, the object's 3D position is estimated, and focal lengths are estimated to fill the image to a desired amount. This system is tested against truth data obtained using an industrial system.

    A Novel System for Non-Invasive Method of Animal Tracking and Classification in Designated Area Using Intelligent Camera System

    This paper proposes a novel system for a non-invasive method of animal tracking and classification in a designated area. The system is based on intelligent devices with cameras, which are situated in a designated area, and a main computing unit (MCU) acting as the system master. The intelligent devices track animals and then send data to the MCU for evaluation. The main purpose of the system is the detection and classification of moving animals in a designated area and the subsequent creation of migration corridors for wild animals. In the intelligent devices, a background subtraction method and the CAMShift algorithm are used to detect and track animals in the scene. Then, visual descriptors are used to create representations of unknown objects. To achieve the best classification accuracy, a key frame extraction method is used to filter objects from the detection module. Afterwards, a Support Vector Machine is used to classify unknown moving animals.

    Fusing face and body gesture for machine recognition of emotions

    Research shows that humans are more likely to consider computers to be human-like when those computers understand and display appropriate nonverbal communicative behavior. Most existing systems attempting to analyze human nonverbal behavior focus only on the face; research that aims to integrate gesture as an expressive means has only recently emerged. This paper presents an approach to automatic visual recognition of expressive face and upper-body action units (FAUs and BAUs) suitable for use in a vision-based affective multimodal framework. After describing the feature extraction techniques, classification results from three subjects are presented. First, individual classifiers are trained separately with face and body features for classification into FAU and BAU categories. Second, the same procedure is applied for classification into labeled emotion categories. Finally, we fuse face and body information for classification into combined emotion categories. In our experiments, emotion classification using the two modalities achieved better recognition accuracy, outperforming classification using the face modality alone. © 2005 IEEE
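The fusion step can be illustrated with feature-level fusion (concatenating the per-modality feature vectors) and a toy nearest-centroid classifier on synthetic data; this is a stand-in for the trained classifiers in the paper, not its actual method or features:

```python
import numpy as np

def fuse(face_feats, body_feats):
    """Feature-level fusion: concatenate face and body feature vectors
    column-wise so one classifier sees both modalities."""
    return np.hstack([face_feats, body_feats])

def nearest_centroid_fit(X, y):
    """Per-class mean feature vectors (a minimal classifier)."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, x):
    """Assign x to the class with the nearest centroid."""
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))
```

Decision-level fusion, combining the outputs of separately trained face and body classifiers, is the main alternative scheme in multimodal emotion recognition.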

    Low vision assistance with mobile devices

    Low vision affects many people, both young and old. Low vision conditions can range from near- and far-sightedness to conditions such as blind spots and tunnel vision. With the growing popularity of mobile devices such as smartphones, there is a large opportunity to use these multipurpose devices to provide low vision assistance. Furthermore, Google's Android operating system provides a robust environment for applications in various fields, including low vision assistance. The objective of this thesis research is to develop a system for low vision assistance that displays important information at the preferred location of the user's visual field. To that end, a first release of a prototype blind spot/tunnel vision assistance system was created and demonstrated on an Android smartphone. Various algorithms for face detection and face tracking were implemented on the Android platform, and their performance was assessed with regard to metrics such as throughput and battery usage. Specifically, Viola-Jones, Support Vector Machines, and a color-based method from Pai et al. were used for face detection. Template matching, CAMShift, and Lucas-Kanade methods were used for face tracking. It was found that face detection and tracking could be successfully executed within acceptable bounds of time and battery usage, and in some cases ran faster than a comparable cloud-based system that offloads the algorithms would take to complete execution.