
    OPTIMIZED ARCHITECTURE DESIGN AND IMPLEMENTATION OF OBJECT TRACKING ALGORITHM ON FPGA

    FPGA-based object tracking is one of the most recent video surveillance applications in embedded systems. In general, an FPGA implementation attains higher throughput than a general-purpose computer because of its parallelism and execution speed. The system needs to be designed around a standard frame rate so as to achieve optimal performance in a real-time environment. An optimal design minimizes cost, area (device utilization) and power while achieving the required speed. Past research on FPGA implementations of object tracking achieved significantly high throughput but showed high device utilization. This work aims to optimize device utilization under real-time constraints. The Adaptive Hybrid Difference (AHD) algorithm, used to detect moving objects, was chosen for FPGA implementation because of its computational simplicity and efficiency in hardware. AHD adapts automatically to varying lighting conditions by recomputing an adaptive threshold in every time period.
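The abstract does not give the AHD equations, but the general idea it describes (hybrid frame/background differencing with a threshold recomputed from the current difference statistics) can be sketched as follows; the combination rule and the mean-plus-k-sigma threshold are assumptions for illustration, not the published algorithm:

```python
import numpy as np

def adaptive_hybrid_difference(prev, curr, background, k=2.0):
    """Sketch of adaptive-threshold motion detection. Combines frame
    differencing (prev vs. curr) with background subtraction
    (curr vs. background); the threshold adapts to the statistics of
    the difference image, so global lighting changes shift it
    automatically."""
    frame_diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    bg_diff = np.abs(curr.astype(np.int16) - background.astype(np.int16))
    hybrid = np.minimum(frame_diff, bg_diff)    # agree on motion only
    thresh = hybrid.mean() + k * hybrid.std()   # adaptive threshold
    return (hybrid > thresh).astype(np.uint8)   # binary motion mask
```

Recomputing `thresh` once per frame period keeps the per-pixel work to a compare, which is what makes this style of detector attractive for hardware.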

    Monocular line tracking for the reduction of vibration induced during image acquisition

    This article details our research into the use of monocular cameras mounted on moving vehicles such as quadcopters and similar unmanned aerial vehicles (UAVs). These cameras are subject to vibration because of the constant movement of the vehicle, so the captured images are often distorted. Our approach uses the Hough transform for line detection, which can be hampered when the surface of the objects being captured is highly reflective. We therefore combine two key algorithms to detect and reduce both glare and the vibration induced during image acquisition from a moving platform.
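For reference, the standard Hough transform for lines that the article builds on works by letting every edge pixel vote for all (rho, theta) lines passing through it; a minimal NumPy version (the paper's glare and vibration handling is not reproduced here) might look like:

```python
import numpy as np

def hough_lines(edge_mask, n_theta=180):
    """Accumulate Hough votes for lines. edge_mask is a binary image;
    returns the accumulator plus the theta grid and the rho offset."""
    h, w = edge_mask.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag, n_theta), dtype=np.int32)
    ys, xs = np.nonzero(edge_mask)
    for theta_idx, theta in enumerate(thetas):
        # rho = x*cos(theta) + y*sin(theta), shifted so indices are >= 0
        rhos = np.round(xs * np.cos(theta) + ys * np.sin(theta)).astype(int) + diag
        np.add.at(acc, (rhos, theta_idx), 1)
    return acc, thetas, diag
```

Peaks in the accumulator correspond to dominant lines; strong glare adds spurious edge pixels, which is why it pollutes the vote space and motivates the glare-reduction step.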

    Video Surveillance for Road Traffic Monitoring

    This project addresses improving the road traffic monitoring process currently implemented in Malaysia. In the current system, video feeds from a particular road are relayed to a control room where personnel monitor the traffic condition. The personnel then manually report the traffic condition to radio and television networks throughout the country for broadcast. FM radio is a popular channel for traffic updates since nearly every vehicle is equipped with a receiver. This project provides a real-time update of the current traffic condition.

    A WSN approach to unmanned aerial surveillance of traffic anomalies: Some challenges and potential solutions

    Stationary CCTV cameras are often used to monitor car movements and detect anomalies such as accidents, speeding, or driving under the influence of alcohol. The height of the cameras can limit their effectiveness and the types of image processing algorithms that can be used. With advancements in inexpensive aerial flying objects and wireless devices, these two technologies can be coupled to support enhanced surveillance. The flying objects can carry multiple cameras and be sent well above the ground to capture video and image information and feed it back to a ground station. Moreover, because of the altitude these objects can reach, they can capture videos and images that lend themselves more suitably to a variety of video and image processing algorithms that assist analysts in detecting anomalies. In this paper, we examine some of the main challenges of using flying objects for surveillance purposes and propose potential solutions to these challenges. In doing so, we attempt to provide the basis for a framework for building a viable system for improved surveillance based on low-cost equipment. © 2013 IEEE.

    ENERGY-EFFICIENT LIGHTWEIGHT ALGORITHMS FOR EMBEDDED SMART CAMERAS: DESIGN, IMPLEMENTATION AND PERFORMANCE ANALYSIS

    An embedded smart camera is a stand-alone unit that not only captures images but also includes a processor, memory and a communication interface. Battery-powered embedded smart cameras introduce many additional challenges because they have very limited resources, such as energy, processing power and memory; adding camera sensors to an embedded system makes the problem of limited resources even more pronounced. Computer vision algorithms running on these camera boards should therefore be lightweight and efficient. This thesis is about designing and developing computer vision algorithms that are aware of, and successfully overcome, the limitations of embedded platforms in terms of power consumption and memory usage. We are particularly interested in object detection and tracking methodologies and their impact on the performance and battery life of the CITRIC camera, the embedded smart camera employed in this research. This thesis aims to prolong the lifetime of the embedded smart platform without affecting the reliability of the system during surveillance tasks. The reader is therefore walked through the whole design process, from development and simulation, through implementation and optimization, to testing and performance analysis. The work presented in this thesis carries out not only software optimization but also hardware-level operations during the object detection and tracking stages. The performance of the algorithms introduced in this thesis is comparable to state-of-the-art object detection and tracking methods, such as Mixture of Gaussians, eigen-segmentation, and color and coordinate tracking. Unlike the traditional methods, the newly designed algorithms notably reduce the memory requirements as well as the number of memory accesses per pixel.
    To accomplish these goals, this work interconnects different levels of the embedded system architecture to make the platform more efficient in terms of energy and resource savings. The proposed algorithms are optimized at the API, middleware and hardware levels to access the pixel information of the CMOS sensor directly; only the required pixels are acquired, reducing unnecessary communication overhead. Experimental results show that exploiting the architectural capabilities of an embedded platform yields a 41.24% decrease in energy consumption and a 107.2% increase in battery life. Compared to traditional object detection and tracking methods, the proposed work provides an additional 8 hours of continuous processing on 4 AA batteries, increasing the lifetime of the camera to 15.5 hours.
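To illustrate the memory trade-off the thesis contrasts against Mixture of Gaussians, a lightweight alternative keeps a single running-average value per pixel instead of several Gaussian components per pixel; this sketch is a generic example of that class of detector, not the thesis's exact algorithm:

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Running-average background model: one float per pixel, updated
    with exponential decay, versus the per-pixel Gaussian mixtures
    (mean, variance, weight for each component) kept by MoG."""
    return (1.0 - alpha) * bg + alpha * frame

def foreground_mask(bg, frame, thresh=25):
    """Per-pixel work is one subtract and one compare."""
    return (np.abs(frame.astype(np.float32) - bg) > thresh).astype(np.uint8)
```

The accuracy cost is that a single average cannot model multi-modal backgrounds (e.g. swaying trees), which MoG handles; the win is a fraction of the memory footprint and far fewer memory accesses per pixel.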

    Optical Tracking and Spectral Characterization of Cubesats for Operational Missions

    Orbital debris in low Earth orbit is of growing concern to operational government and commercial satellites. With an uptick in worldwide satellite launches and the growing adoption of the CubeSat standard, the number of small objects in orbit is increasing at a faster pace than ever, and a cascading collision event seems inevitable in the near future. United States Strategic Command tracks and determines the orbits of resident space objects using a worldwide network of radar and optical sensors. However, to better protect space assets, there is increasing interest in knowing not just where a space object is, but what the object is. The optical and spectral characteristics of sunlight reflected off satellites or debris can provide information on the physical state or identity of the object. These same optical signatures can be used to support operational satellite missions, down to satellites as small as CubeSats. Optical observation of CubeSats could provide independent monitoring of spin rate and deployable status, identification of individual CubeSats in a swarm, or possibly attitude information. This thesis first reviews available observation techniques and the basics of observational astronomy relevant to satellite tracking. It then presents OSCOM, a system for Optical tracking and Spectral characterization of CubeSats for Operational Missions. OSCOM is a ground-based system capable of observing and characterizing small debris and CubeSats with commercially available optical telescopes and detectors; it is equally applicable to larger satellites, which have a higher signal-to-noise ratio. The OSCOM system has been used to successfully collect time-series photometry of more than 60 unique satellites of all sizes.
    Selected photometry results are presented along with a discussion of the technical details required for optical observation of small satellites.
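Time-series photometry of the kind described reduces each frame to one brightness sample; a minimal aperture-photometry sketch (simplified, and not OSCOM's actual pipeline, whose details the abstract does not give) is:

```python
import numpy as np

def aperture_photometry(frames, cx, cy, r_ap=5, r_bg=8):
    """For each frame, sum counts inside a circular aperture around
    the target and subtract the sky background estimated as the median
    of an annulus around the aperture. Returns the light curve."""
    h, w = frames[0].shape
    yy, xx = np.mgrid[:h, :w]
    d2 = (xx - cx) ** 2 + (yy - cy) ** 2
    aperture = d2 <= r_ap ** 2
    annulus = (d2 > r_ap ** 2) & (d2 <= r_bg ** 2)
    light_curve = []
    for f in frames:
        sky = np.median(f[annulus])  # per-pixel background estimate
        light_curve.append(float(f[aperture].sum() - sky * aperture.sum()))
    return np.array(light_curve)
```

Periodic structure in the resulting light curve is what reveals spin rate, and step changes can indicate deployable status.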

    CITRIC: A low-bandwidth wireless camera network platform

    In this paper, we propose and demonstrate a novel wireless camera network system called CITRIC. The core component of this system is a new hardware platform that integrates a camera, a frequency-scalable (up to 624 MHz) CPU, 16 MB of flash, and 64 MB of RAM onto a single device. The device then connects to a standard sensor network mote to form a camera mote. The design enables in-network processing of images to reduce communication requirements, which have traditionally been high in existing camera networks with centralized processing. We also propose a back-end client/server architecture that provides a user interface to the system and supports further centralized processing for higher-level applications. Our camera mote enables a wider variety of distributed pattern recognition applications than traditional platforms because it provides more computing power and tighter integration of physical components while still consuming relatively little power. Furthermore, the mote integrates easily with existing low-bandwidth sensor networks because it can communicate over the IEEE 802.15.4 protocol with other sensor network platforms. We demonstrate our system on three applications: image compression, target tracking, and camera localization.
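A back-of-the-envelope calculation shows why in-network processing matters at these link rates (the frame size, field widths, and frame rate below are illustrative assumptions, not measurements from the paper): transmitting a tracked target's bounding box instead of raw frames shrinks the payload by orders of magnitude, which is decisive when the radio is IEEE 802.15.4 at roughly 250 kbit/s.

```python
# Illustrative bandwidth comparison: raw frames vs. extracted features.
RAW_FRAME_BYTES = 320 * 240 * 1   # assumed 8-bit grayscale QVGA frame
BBOX_BYTES = 4 * 2                # x, y, w, h as 16-bit integers

frames_per_sec = 5                # assumed tracking rate
raw_kbps = RAW_FRAME_BYTES * frames_per_sec * 8 / 1000
bbox_kbps = BBOX_BYTES * frames_per_sec * 8 / 1000
print(f"raw: {raw_kbps:.0f} kbit/s  vs  bbox: {bbox_kbps:.2f} kbit/s")
```

Under these assumptions the raw stream (about 3 Mbit/s) cannot fit the radio at all, while the feature stream uses a negligible fraction of it.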

    Embedded Vision Systems: A Review of the Literature

    Over the past two decades, the use of low-power Field Programmable Gate Arrays (FPGAs) to accelerate vision systems, mainly on embedded devices, has become widespread. The reconfigurable and parallel nature of the FPGA opens up new opportunities to speed up computationally intensive vision and neural algorithms on embedded and portable devices. This paper presents a comprehensive review of embedded vision algorithms and applications over the past decade. The review discusses vision-based systems and approaches and how they have been implemented on embedded devices. Topics covered include image acquisition, preprocessing, object detection and tracking, and recognition as well as high-level classification. This is followed by an outline of the advantages and disadvantages of the various embedded implementations. Finally, an overview of the challenges in the field and future research trends is presented. This review is expected to serve as a tutorial and reference source for embedded computer vision systems.