    Flexi-WVSNP-DASH: A Wireless Video Sensor Network Platform for the Internet of Things

    Video capture, storage, and distribution in wireless video sensor networks (WVSNs) depend critically on the resources of the nodes forming the network. In the era of big data, the Internet of Things (IoT), and distributed demand and solutions, multi-dimensional data must become part of sensor network data that is easily accessible and consumable by humans as well as machines. Images and video are expected to become as ubiquitous as scalar data is in traditional sensor networks. The inception of video streaming over the Internet heralded relentless research into effective ways of distributing video scalably and cost-effectively. There have been novel implementation attempts across several network layers. Owing to the inherent complications of backward compatibility and the need for standardization across network layers, attention has refocused on addressing video distribution at the application layer. As a result, several video streaming solutions over the Hypertext Transfer Protocol (HTTP) have been proposed, most notably Apple's HTTP Live Streaming (HLS) and the Moving Picture Experts Group's Dynamic Adaptive Streaming over HTTP (MPEG-DASH). These frameworks do not address typical and future WVSN use cases. A highly flexible Wireless Video Sensor Network Platform and compatible DASH framework (WVSNP-DASH) are introduced. The platform's goal is to usher in video as a data element that can be integrated into traditional and non-Internet networks. A low-cost, scalable node is built from the ground up to be fully compatible with the IoT machine-to-machine (M2M) concept and to be easily re-targeted to new applications in a short time. The Flexi-WVSNP design includes a multi-radio node, middleware for sensor operation and communication, a cross-platform client-facing data retriever/player framework, scalable security, and a cohesive but decoupled hardware and software design. (Doctoral Dissertation, Electrical Engineering)
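
    MPEG-DASH's core client mechanism, parsing a manifest (MPD) and picking the representation that fits the measured bandwidth, can be illustrated with a minimal sketch. This is a hedged example, not part of the WVSNP-DASH platform itself: the MPD URL and the fixed bandwidth estimate are placeholders, and only the standard 2011 MPD namespace is assumed.

        # Minimal sketch: pick the highest-bitrate DASH representation that
        # fits an estimated bandwidth. The MPD URL below is hypothetical.
        import urllib.request
        import xml.etree.ElementTree as ET

        NS = {"mpd": "urn:mpeg:dash:schema:mpd:2011"}  # standard DASH namespace

        def select_representation(mpd_url: str, bandwidth_bps: int) -> str:
            """Return the id of the best representation within bandwidth_bps."""
            with urllib.request.urlopen(mpd_url) as resp:
                root = ET.fromstring(resp.read())
            reps = root.findall(".//mpd:Representation", NS)
            # Each Representation advertises its required bandwidth in bits/s.
            fitting = [r for r in reps
                       if int(r.get("bandwidth", "0")) <= bandwidth_bps]
            if not fitting:
                raise RuntimeError("no representation fits the available bandwidth")
            return max(fitting, key=lambda r: int(r.get("bandwidth"))).get("id")

        # Usage (hypothetical manifest, 2 Mbit/s bandwidth estimate):
        # print(select_representation("http://example.com/video.mpd", 2_000_000))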

    Miniaturized embedded stereo vision system (MESVS)

    Stereo vision is one of the fundamental problems of computer vision and one of the oldest and most heavily investigated areas of 3D vision. Recent advances in stereo matching methodologies, the availability of high-performance and efficient algorithms, and fast, affordable hardware have allowed researchers to develop several stereo vision systems capable of operating in real time. Although a multitude of such systems exist in the literature, the majority concentrate only on raw performance and quality rather than on factors such as dimensions and power requirements, which are of significant importance in embedded settings. In this thesis a new miniaturized embedded stereo vision system (MESVS) is presented, which fits within a 5 x 5 cm package and is power-efficient and cost-effective. Furthermore, through application of embedded programming techniques and careful optimization, MESVS achieves real-time performance of 20 frames per second. This work discusses the various challenges involved in the design and implementation of this system and the measures taken to address them.
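
    The core of most real-time embedded stereo matchers is local block matching; the following is a minimal sum-of-absolute-differences (SAD) sketch of that idea. The window size and disparity range are illustrative assumptions, not MESVS's actual parameters, and an embedded implementation would replace these nested loops with fixed-point pipelines, which is where the optimization effort described above goes.

        # Minimal SAD block-matching stereo sketch (illustrative parameters).
        import numpy as np

        def sad_disparity(left: np.ndarray, right: np.ndarray,
                          max_disp: int = 32, win: int = 5) -> np.ndarray:
            """Per-pixel disparity from two grayscale images via SAD matching."""
            h, w = left.shape
            half = win // 2
            disp = np.zeros((h, w), dtype=np.uint8)
            for y in range(half, h - half):
                for x in range(half + max_disp, w - half):
                    patch = left[y-half:y+half+1, x-half:x+half+1].astype(np.int32)
                    best_cost, best_d = None, 0
                    for d in range(max_disp):
                        cand = right[y-half:y+half+1,
                                     x-d-half:x-d+half+1].astype(np.int32)
                        cost = int(np.abs(patch - cand).sum())  # SAD cost
                        if best_cost is None or cost < best_cost:
                            best_cost, best_d = cost, d
                    disp[y, x] = best_d  # lowest-cost disparity wins
            return disp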

    An Event-Driven Multiple Objects Surveillance System

    Traditional surveillance systems are constrained by a fixed, preset pattern of monitoring. This can reduce the reliability of the system and increase the generation of false alarms, which in turn raises the system's processing activity and its consumption of resources and power. Within this framework, a human surveillance system based on event-driven awakening and the self-organization principle is proposed. The proposed system overcomes these downsides to a significant extent. It does so by intelligently merging an assembly of sensors with two cameras, actuators, a lighting module, and cost-effective embedded processors. With the exception of the low-power event detectors, all system modules remain in sleep mode; they are activated only upon detection of an event and as a function of the sensed environmental conditions. This reduces the power consumption and processing activity of the proposed system. An effective combination of a sensor assembly and a robust classifier suppresses false alarms and improves system reliability. An experimental setup was realized to verify the functionality of the proposed system, and the results confirm that the implemented system functions properly. A 62.3-fold reduction in system memory utilization and bandwidth consumption compared to traditional counterparts is achieved, a result of the proposed system's self-organization and event-driven awakening features. This confirms that the proposed system outperforms its classical counterparts in terms of processing activity, power consumption, and resource usage.
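
    The event-driven awakening principle amounts to keeping everything except a low-power detector asleep and waking the rest of the pipeline on demand. A minimal sketch follows; the sensor and camera interfaces (read_pir, Camera, classify) are entirely hypothetical placeholders, not the paper's implementation.

        # Sketch of event-driven awakening: only the low-power detector runs
        # continuously; the camera and classifier wake on demand.
        import time

        def read_pir() -> bool:
            """Poll a low-power motion detector; replace with real GPIO access."""
            return False  # placeholder

        class Camera:
            def wake(self) -> None: ...      # power up the imaging module
            def capture(self): ...           # grab one frame
            def sleep(self) -> None: ...     # return to low-power state

        def surveillance_loop(camera: Camera, classify) -> None:
            while True:
                if read_pir():               # low-power event detector fires
                    camera.wake()            # activate modules only on an event
                    frame = camera.capture()
                    if classify(frame):      # robust classifier gates alarms
                        print("verified event: raising alarm")
                    camera.sleep()
                time.sleep(0.1)              # detector polling interval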

    ENERGY-EFFICIENT LIGHTWEIGHT ALGORITHMS FOR EMBEDDED SMART CAMERAS: DESIGN, IMPLEMENTATION AND PERFORMANCE ANALYSIS

    An embedded smart camera is a stand-alone unit that not only captures images but also includes a processor, memory, and a communication interface. Battery-powered embedded smart cameras introduce many additional challenges since they have very limited resources, such as energy, processing power, and memory. When camera sensors are added to an embedded system, the problem of limited resources becomes even more pronounced. Hence, computer vision algorithms running on these camera boards should be lightweight and efficient. This thesis is about designing and developing computer vision algorithms that are aware of, and successfully overcome, the limitations of embedded platforms in terms of power consumption and memory usage. In particular, we are interested in object detection and tracking methodologies and their impact on the performance and battery life of the CITRIC camera, the embedded smart camera employed in this research. This thesis aims to prolong the lifetime of the embedded smart-camera platform without affecting the reliability of the system during surveillance tasks. The reader is therefore walked through the whole design process, from development and simulation, through implementation and optimization, to testing and performance analysis. The work presented in this thesis carries out not only software optimization but also hardware-level operations during the stages of object detection and tracking. The performance of the algorithms introduced in this thesis is comparable to state-of-the-art object detection and tracking methods, such as Mixture of Gaussians, Eigen segmentation, and color and coordinate tracking. Unlike the traditional methods, the newly designed algorithms notably reduce memory requirements as well as memory accesses per pixel. To accomplish the proposed goals, this work interconnects different levels of the embedded system architecture to make the platform more efficient in terms of energy and resource savings. Thus, the proposed algorithms are optimized at the API, middleware, and hardware levels to access the pixel information of the CMOS sensor directly, acquiring only the required pixels in order to reduce unnecessary communication overhead. Experimental results show that, by exploiting the architectural capabilities of an embedded platform, a 41.24% decrease in energy consumption and a 107.2% increase in battery life can be accomplished. Compared to traditional object detection and tracking methods, the proposed work provides an additional 8 hours of continuous processing on 4 AA batteries, increasing the lifetime of the camera to 15.5 hours.
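
    The reported figures are mutually consistent: a 107.2% battery-life increase to 15.5 hours implies a baseline of roughly 15.5 / 2.072, about 7.5 hours, which matches the stated 8 additional hours of continuous processing. A quick check, using only the numbers quoted above:

        # Consistency check of the reported battery-life figures.
        total_hours = 15.5        # lifetime with the proposed algorithms
        increase = 1.072          # reported 107.2% improvement
        baseline = total_hours / (1 + increase)
        print(f"baseline   = {baseline:.1f} h")                # ~7.5 h
        print(f"extra life = {total_hours - baseline:.1f} h")  # ~8 h, as stated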

    Optically Powered Highly Energy-efficient Sensor Networks

    In optically powered networks, both communication signals and power for remotely located sensor nodes are transmitted over an optical fiber. Key features of optically powered networks are node operation without local power supplies or batteries, as well as operation with negligible susceptibility to electromagnetic interference and lightning. In this book, different kinds of optically powered devices and networks are investigated, and selected applications are demonstrated.
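
    The viability of such a node rests on a simple optical link budget: the electrical power available at the node is the laser power reduced by fiber attenuation and by the photovoltaic converter's efficiency. A sketch with illustrative values follows; the specific numbers are assumptions, not figures from the book.

        # Optical power-over-fiber link budget (illustrative numbers only).
        def delivered_power_w(laser_w: float, fiber_km: float,
                              loss_db_per_km: float, pv_efficiency: float) -> float:
            """Electrical power at the node after fiber loss and PV conversion."""
            loss_db = loss_db_per_km * fiber_km
            optical_at_node = laser_w * 10 ** (-loss_db / 10)  # dB attenuation
            return optical_at_node * pv_efficiency

        # e.g. a 1 W laser, 5 km of fiber at 0.5 dB/km, 25% PV efficiency:
        print(f"{delivered_power_w(1.0, 5.0, 0.5, 0.25) * 1e3:.0f} mW")  # ~141 mW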

    Smart vision in system-on-chip applications

    In the last decade, the ability to design and manufacture integrated circuits with higher transistor densities has led to the integration of complete systems on a single silicon die, commonly referred to as a System-on-Chip (SoC). As SoC processes can incorporate multiple technologies, it is now feasible to produce single-chip camera systems with embedded image processing, known as Imagers-on-Chip (IoCs). The development of IoCs is complicated by the mixture of digital and analog components and the high cost of prototyping these designs on silicon processes. There are currently no re-usable prototyping platforms that specifically address the needs of IoC development. This thesis details a new prototyping platform specifically for use in the development of low-cost, mass-market IoC applications. FPGA technology was utilised to implement a frame-based processing architecture suitable for supporting a range of real-time imaging and machine vision applications. To demonstrate the effectiveness of the prototyping platform, an example object counting and highlighting application was developed and functionally verified in real time. A high-level IoC cost model was formulated to calculate the cost of manufacturing prototyped applications as a single IoC. This highlighted the need for careful analysis of optical issues, embedded imager array size, and the silicon process used, to ensure the desired IoC unit cost is achieved. A modified version of the FPGA architecture that would improve DSP performance is also proposed.
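
    A high-level cost model of the kind described typically combines dies-per-wafer geometry with a defect-driven yield model. The sketch below uses the standard textbook formulas; the wafer size, wafer cost, and defect density are illustrative assumptions, not the thesis's figures.

        # Classic die-cost model (dies per wafer x yield); numbers illustrative.
        import math

        def dies_per_wafer(wafer_diam_mm: float, die_area_mm2: float) -> int:
            r = wafer_diam_mm / 2
            return int(math.pi * r ** 2 / die_area_mm2
                       - math.pi * wafer_diam_mm / math.sqrt(2 * die_area_mm2))

        def die_yield(die_area_cm2: float, defects_per_cm2: float,
                      alpha: float = 4.0) -> float:
            return (1 + defects_per_cm2 * die_area_cm2 / alpha) ** -alpha

        def cost_per_good_die(wafer_cost: float, wafer_diam_mm: float,
                              die_area_mm2: float, defects_per_cm2: float) -> float:
            n = dies_per_wafer(wafer_diam_mm, die_area_mm2)
            y = die_yield(die_area_mm2 / 100, defects_per_cm2)  # mm^2 -> cm^2
            return wafer_cost / (n * y)

        # e.g. a 25 mm^2 imager, 200 mm wafer, $2000/wafer, 0.4 defects/cm^2:
        print(f"${cost_per_good_die(2000, 200, 25.0, 0.4):.2f} per good die")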

    Reconfigurable FPGA Architecture for Computer Vision Applications in Smart Camera Networks

    Smart Camera Networks (SCNs) are an emerging research field representing the natural evolution of centralized computer vision applications towards fully distributed and pervasive systems. In this vision, one of the biggest efforts lies in the definition of a flexible and reconfigurable SCN node architecture able to remotely update, at run-time, both the application parameters and the computer vision application being executed. In this respect, we present a novel SCN node architecture based on a device in which a microcontroller manages all the network functionality as well as the remote configuration, while an FPGA implements all the necessary modules of a full computer vision pipeline. In this work the envisioned architecture is first detailed in general terms; then a real implementation is presented to show the feasibility and the benefits of the proposed solution. Finally, performance evaluation results underline the potential of a hardware/software co-design approach in achieving flexibility and reduced processing time.
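
    The described split, a microcontroller handling networking and remote configuration while the FPGA runs the vision pipeline, implies a small on-node protocol for pushing parameter updates into FPGA registers. The following is a purely illustrative sketch; the message layout and register interface are assumptions, not the paper's protocol.

        # Hypothetical remote-configuration handler (microcontroller side).
        # Message layout assumed: 2-byte register address + 4-byte value.
        import struct

        def write_fpga_register(addr: int, value: int) -> None:
            """Write a 32-bit value to an FPGA config register (placeholder I/O)."""
            print(f"FPGA[0x{addr:04x}] <= 0x{value:08x}")

        def handle_config_message(payload: bytes) -> None:
            addr, value = struct.unpack(">HI", payload)  # big-endian fields
            write_fpga_register(addr, value)             # e.g. a threshold update

        # e.g. set hypothetical register 0x0010 to 42 at run-time:
        handle_config_message(struct.pack(">HI", 0x0010, 42))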

    FPGA-based stereo vision system for autonomous driving

    The project consists of the design and implementation of a real-time stereo vision image sensor for autonomous driving systems using an FPGA. The function of this sensor is to output a real-time depth image from an input of two grayscale luminance images, which can make further processing much easier and faster. The final objective of the project is to develop a standalone prototype for deployment on an autonomous vehicle, but it will be developed on an existing FPGA platform to prove its viability. Two low-cost digital cameras will be used as input sensors, and the output image will be transmitted to a PC.
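
    The depth image such a sensor outputs follows directly from triangulation: Z = f * B / d, where f is the focal length in pixels, B the baseline between the two cameras, and d the disparity. A small sketch follows; the focal length and baseline values are illustrative, not the project's calibration.

        # Depth from disparity via stereo triangulation: Z = f * B / d.
        import numpy as np

        def disparity_to_depth(disp: np.ndarray, focal_px: float,
                               baseline_m: float) -> np.ndarray:
            """Convert a disparity map (pixels) to metric depth (meters)."""
            depth = np.full(disp.shape, np.inf)
            valid = disp > 0                  # zero disparity -> infinite depth
            depth[valid] = focal_px * baseline_m / disp[valid]
            return depth

        # e.g. f = 700 px, B = 12 cm: a 35-pixel disparity lies 2.4 m away.
        print(disparity_to_depth(np.array([[35.0]]), 700.0, 0.12))  # [[2.4]]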

    A high-performance hardware architecture of an image matching system based on the optimised SIFT algorithm

    The Scale-Invariant Feature Transform (SIFT) is one of the most popular matching algorithms in the field of computer vision. It outperforms many other algorithms because the features it detects are fully invariant to image scaling and rotation, and are also shown to be robust to changes in 3D viewpoint, addition of noise, changes in illumination, and a substantial range of affine distortion. However, its computational complexity is high, which prevents it from achieving real-time performance. The aim of this project, therefore, is to develop a high-performance image matching system based on an optimised SIFT algorithm to perform real-time feature detection, description, and matching. This thesis presents the stages of the development of the system. To reduce computational complexity, an alternative to the grid layout of standard SIFT is proposed, termed SRI-DAISY (Scale- and Rotation-Invariant DAISY). The SRI-DAISY achieves performance comparable to the standard SIFT descriptor but is more efficient to implement in hardware, in terms of both computational complexity and memory usage. The design takes only 7.57 µs to generate a descriptor at a system frequency of 100 MHz, equivalent to approximately 132,100 descriptors per second, the highest throughput among existing designs. A novel keypoint matching strategy is also presented in this thesis, which achieves higher precision than the widely applied distance-ratio-based matching and is computationally more efficient. All phases of the SIFT algorithm have been investigated, including feature detection, descriptor generation, and descriptor matching. Each individual part of the design is characterised and compared with software simulation results. A fully stand-alone image matching system has been developed, consisting of a CMOS camera front-end for image capture, a SIFT processing core embedded in a Field-Programmable Gate Array (FPGA) device, and a USB back-end for data transfer. Experiments with real-world images verify the system performance, and the system has been tested by integration into two practical applications. The resulting image matching system eliminates the bottlenecks that limit overall throughput, allowing images to be processed in real time without interruption. The design can be modified to process images at higher resolutions while still achieving real-time performance.
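
    For context on the baseline that the thesis's matching strategy improves upon, the widely applied distance-ratio test accepts a match only when the nearest descriptor is clearly closer than the second nearest. Below is a minimal sketch of that baseline (not the thesis's novel strategy); the 0.8 threshold is Lowe's commonly cited value.

        # Baseline distance-ratio matching (Lowe's ratio test), for comparison.
        import numpy as np

        def ratio_test_matches(desc_a: np.ndarray, desc_b: np.ndarray,
                               ratio: float = 0.8) -> list:
            """Match rows of desc_a to rows of desc_b, keeping unambiguous hits."""
            matches = []
            for i, d in enumerate(desc_a):
                dists = np.linalg.norm(desc_b - d, axis=1)  # Euclidean distances
                nearest, second = np.argsort(dists)[:2]     # two closest rows
                if dists[nearest] < ratio * dists[second]:  # unambiguous match
                    matches.append((i, int(nearest)))
            return matches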