
    Smart Visual Beacons with Asynchronous Optical Communications using Event Cameras

    Full text link
    Event cameras are bio-inspired dynamic vision sensors that respond to changes in image intensity with high temporal resolution, high dynamic range and low latency. These sensor characteristics are ideally suited to enable visual target tracking in concert with a broadcast visual communication channel for smart visual beacons with applications in distributed robotics. Visual beacons can be constructed by high-frequency modulation of Light Emitting Diodes (LEDs) such as vehicle headlights, Internet of Things (IoT) LEDs, smart building lights, etc., that are already present in many real-world scenarios. The high temporal resolution of event cameras allows them to capture visual signals at far higher data rates than classical frame-based cameras. In this paper, we propose a novel smart visual beacon architecture with both LED modulation and event camera demodulation algorithms. We quantitatively evaluate the relationship between LED transmission rate, communication distance and message transmission accuracy for the smart visual beacon communication system that we prototyped. The proposed method achieves up to 4 kbps in an indoor environment and lossless transmission over a distance of 100 meters, at a transmission rate of 500 bps, in full sunlight, demonstrating the potential of the technology in an outdoor environment. Comment: 7 pages, 8 figures, accepted by the IEEE International Conference on Intelligent Robots and Systems (IROS) 2022.
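
    As a rough illustration of the LED-modulation/event-camera-demodulation pipeline described above, the sketch below encodes bits as on-off keying (OOK) of an LED and recovers them from polarity events. The symbol rate, function names and the one-event-per-toggle channel are illustrative assumptions, not the paper's actual algorithms.

```python
# Hypothetical OOK beacon modulator and a naive event-based demodulator.
# Not the paper's method; symbol rate and event model are assumptions.

SYMBOL_RATE = 500.0          # bits per second (cf. the 500 bps outdoor case)
T_SYM = 1.0 / SYMBOL_RATE    # symbol duration in seconds

def modulate_ook(bits):
    """Return (timestamp, led_state) pairs, one LED state per bit."""
    return [(i * T_SYM, b) for i, b in enumerate(bits)]

def demodulate_events(events, n_bits):
    """Recover bits from polarity events (t, polarity): +1 = ON edge, -1 = OFF edge.
    Assumes the event camera reports one edge per LED toggle (ideal channel)."""
    bits, state = [], 0
    for i in range(n_bits):
        # events inside the i-th symbol window decide the LED state for that bit
        window = [p for (t, p) in sorted(events) if i * T_SYM <= t < (i + 1) * T_SYM]
        if window:
            state = 1 if window[-1] > 0 else 0
        bits.append(state)
    return bits

if __name__ == "__main__":
    tx_bits = [1, 0, 1, 1, 0, 0, 1, 0]
    # ideal channel: one event at each symbol boundary where the LED state changes
    events, prev = [], 0
    for t, b in modulate_ook(tx_bits):
        if b != prev:
            events.append((t, +1 if b else -1))
        prev = b
    print("recovered:", demodulate_events(events, len(tx_bits)))  # matches tx_bits
```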

    Visible light communication system based on software defined radio: Performance study of intelligent transportation and indoor applications

    Get PDF
    In this paper, our first attempt at a visible light communication system, based on software-defined radio (SDR) and implemented in LabVIEW, is introduced. This paper mainly focuses on the two most commonly used types of LED lights: ceiling lights and LED car lamps/tail-lights. The primary focus of this study is to determine the basic parameters of a real implementation of a visible light communication (VLC) system, such as transmission speed, communication errors (bit-error ratio, error vector magnitude, energy per bit to noise power spectral density ratio) and the highest reachable distance. This work focuses on testing various multistate quadrature amplitude modulation (M-QAM) schemes. We used a Skoda Octavia III tail-light and a Philips indoor ceiling light as transmitters and a Thorlabs Si PIN photodetector as the receiver. The testing method for each light was different. When testing the ceiling light, we focused on the reachable distance for each M-QAM variant. The Octavia tail-light, on the other hand, was tested under variable weather conditions (such as thermal turbulence, rain and fog) simulated in a special testing box. This work presents our solution, the measured parameters and possible weak spots, which will be addressed in the future.
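
    The abstract names bit-error ratio (BER) and error vector magnitude (EVM) as its error metrics for M-QAM. The sketch below shows how such metrics are commonly computed for a square M-QAM constellation; it is a generic illustration, not the authors' LabVIEW/SDR implementation, and the noise level and symbol count are arbitrary.

```python
# Generic M-QAM error-metric sketch (illustrative; not the paper's code).
import numpy as np

def qam_constellation(M):
    """Unit-average-power square M-QAM constellation (M = 4, 16, 64, ...)."""
    k = int(np.sqrt(M))
    levels = np.arange(-(k - 1), k, 2)
    points = np.array([x + 1j * y for y in levels for x in levels])
    return points / np.sqrt(np.mean(np.abs(points) ** 2))

def evm_percent(rx, ref):
    """Error vector magnitude (%) of received symbols vs. their ideal references."""
    return 100.0 * np.sqrt(np.mean(np.abs(rx - ref) ** 2) / np.mean(np.abs(ref) ** 2))

if __name__ == "__main__":
    M, n = 16, 10_000
    const = qam_constellation(M)
    idx = np.random.randint(0, M, n)                                 # transmitted symbols
    tx = const[idx]
    rx = tx + 0.05 * (np.random.randn(n) + 1j * np.random.randn(n))  # additive noise
    det = np.argmin(np.abs(rx[:, None] - const[None, :]), axis=1)    # nearest-point detection
    print("EVM = %.2f %%" % evm_percent(rx, tx))
    print("symbol-error rate = %.4f (BER follows from the bit mapping)" % np.mean(det != idx))
```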

    Augmented and Virtual Reality techniques for footwear

    Get PDF
    3D imaging techniques were adopted early in the footwear industry. In particular, 3D imaging can be used to aid commerce and improve the quality and sales of shoes. Footwear customisation is an added value aimed not only at improving product quality, but also consumer comfort. Moreover, customisation implies a new business model that avoids competing with the mass production of new manufacturers based mainly in Asian countries. However, footwear customisation implies a significant effort at different levels. In manufacturing, rapid and virtual prototyping is required; indeed, the prototype is intended to become the final product. The whole design procedure must be validated using exclusively virtual techniques to ensure the feasibility of this process, since physical prototypes should be avoided. With regard to commerce, it would be desirable for the consumer to choose any model of shoes from a large 3D database and try them on by looking at a magic mirror. This would probably reduce costs and increase sales, since shops would not need to stock every shoe model and trying several models on would be easier and faster for the consumer. In this paper, new advances in 3D techniques coming from experience in cinema, TV and games are successfully applied to footwear. Firstly, the characteristics of a high-quality stereoscopic vision system for footwear are presented. Secondly, a system for interaction with virtual footwear models based on 3D gloves is detailed. Finally, an augmented reality system (magic mirror) is presented, implemented with low-cost computational elements, that allows a hypothetical customer to check in real time the aesthetic suitability of a given virtual footwear model.

    Augmented Reality

    Get PDF
    Augmented Reality (AR) is a natural development from virtual reality (VR), which was developed several decades earlier, and it complements VR in many ways. Because the user can see both real and virtual objects simultaneously, AR is far more intuitive, although it is not free from human factors and other restrictions. AR applications also demand less time and effort, since the entire virtual scene and environment need not be constructed. In this book, several new and emerging application areas of AR are presented, divided into three sections. The first section contains applications in outdoor and mobile AR, such as construction, restoration, security and surveillance. The second section deals with AR in medical and biological applications and the human body. The third and final section contains a number of new and useful applications in daily living and learning.

    Mobile Robot Navigation

    Get PDF

    Advances in Human Robot Interaction for Cloud Robotics applications

    Get PDF
    This thesis analyzes different and innovative techniques for Human-Robot Interaction, with a focus on interaction with flying robots. The first part is a preliminary description of state-of-the-art interaction techniques. The first project is Fly4SmartCity, which analyzes the interaction between humans (the citizen and the operator) and drones mediated by a cloud robotics platform. This is followed by an application of the sliding autonomy paradigm and an analysis of the different degrees of autonomy supported by a cloud robotics platform. The last part is dedicated to the most innovative technique for human-drone interaction, the User’s Flying Organizer project (UFO project). This project aims to develop a flying robot able to project information into the environment, exploiting concepts of Spatial Augmented Reality.

    A systematic review of perception system and simulators for autonomous vehicles research

    Get PDF
    This paper presents a systematic review of the perception systems and simulators for autonomous vehicles (AV). The work is divided into three parts. In the first part, perception systems are categorized as environment perception systems and positioning estimation systems. The paper presents the physical fundamentals, operating principles, and electromagnetic spectrum used by the most common sensors in perception systems (ultrasonic, RADAR, LiDAR, cameras, IMU, GNSS, RTK, etc.). Furthermore, their strengths and weaknesses are shown, and their features are quantified using spider charts, allowing proper selection of different sensors according to 11 features. In the second part, the main elements to be taken into account in the simulation of a perception system for an AV are presented. For this purpose, the paper describes simulators for model-based development, the main game engines that can be used for simulation, simulators from the robotics field, and lastly simulators used specifically for AVs. Finally, the current state of regulations being applied in different countries around the world concerning the implementation of autonomous vehicles is presented. This work was partially supported by DGT (ref. SPIP2017-02286) and GenoVision (ref. BFU2017-88300-C2-2-R) Spanish Government projects, and the “Research Programme for Groups of Scientific Excellence in the Region of Murcia” of the Seneca Foundation (Agency for Science and Technology in the Region of Murcia – 19895/GERM/15).
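
    The review's spider charts amount to scoring each sensor on a fixed set of features and comparing the profiles. The snippet below is a made-up illustration of that idea: the feature names, scores and weights are invented for demonstration, and the review itself uses 11 features rather than the five shown here.

```python
# Hypothetical feature-based sensor comparison (all numbers are illustrative).
import numpy as np

FEATURES = ["range", "resolution", "weather robustness", "cost", "frame rate"]
SENSORS = {
    "RADAR":  np.array([0.9, 0.4, 0.9, 0.6, 0.7]),
    "LiDAR":  np.array([0.7, 0.9, 0.5, 0.3, 0.6]),
    "Camera": np.array([0.5, 0.9, 0.4, 0.9, 0.8]),
}

def rank_sensors(weights):
    """Rank sensors by the weighted sum of their per-feature scores."""
    w = np.asarray(weights, dtype=float)
    scores = {name: float(feat @ w) for name, feat in SENSORS.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    # e.g. an application that prioritises weather robustness and range
    print(rank_sensors([0.3, 0.1, 0.4, 0.1, 0.1]))
```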

    Pixel-level Image Fusion Algorithms for Multi-camera Imaging System

    Get PDF
    This thesis work is motivated by the potential and promise of image fusion technologies in multi-sensor image fusion systems and applications. With a specific focus on pixel-level image fusion, the stage that follows image registration, we develop a graphical user interface for multi-sensor image fusion software using Microsoft Visual Studio and the Microsoft Foundation Class library. In this thesis, we propose and present image fusion algorithms with low computational cost, based upon spatial mixture analysis. Segment weighted-average image fusion combines several low-spatial-resolution data sources from different sensors to create a high-resolution, large fused image. This research includes developing a segment-based step, based upon a stepwise divide-and-combine process. In the second stage of the process, linear interpolation optimization is used to sharpen the image resolution. These image fusion algorithms are implemented on top of the graphical user interface we developed. Multiple-sensor image fusion is easily accommodated by the algorithm, and the results are demonstrated at multiple scales. Using quantitative measures such as mutual information, we obtain quantifiable experimental results. We also use the image morphing technique to generate fused image sequences, simulating the results of image fusion. While deploying our pixel-level image fusion approaches, we observed several challenges with popular image fusion methods: although their high computational cost and complex processing steps provide accurate fused results, they are also hard to deploy in systems and applications that require real-time feedback, high flexibility and low computation ability.
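
    As a minimal sketch of the kind of pixel-level fusion and mutual-information evaluation the abstract refers to (assuming two already-registered grayscale inputs), the code below fuses images by a weighted average and scores the result with mutual information. It illustrates the general technique only, not the thesis' segment-based algorithm or its MFC-based GUI.

```python
# Minimal pixel-level fusion and mutual-information sketch (illustrative only).
import numpy as np

def weighted_average_fusion(img_a, img_b, w=0.5):
    """Fuse two registered images pixel by pixel; w is the weight of img_a."""
    return w * img_a.astype(np.float64) + (1.0 - w) * img_b.astype(np.float64)

def mutual_information(img_x, img_y, bins=64):
    """Mutual information (bits) between two images from their joint histogram."""
    joint, _, _ = np.histogram2d(img_x.ravel(), img_y.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of img_x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of img_y
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

if __name__ == "__main__":
    a = np.random.randint(0, 256, (128, 128))   # stand-ins for two sensor images
    b = np.random.randint(0, 256, (128, 128))
    fused = weighted_average_fusion(a, b)
    print("MI(fused, a) =", mutual_information(fused, a))
    print("MI(fused, b) =", mutual_information(fused, b))
```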

    Multiple-input multiple-output visible light communication receivers for high data-rate mobile applications

    Full text link
    Visible light communication (VLC) is an emerging form of optical wireless communication that transmits data by modulating light in the visible spectrum. To meet the growing demand for wireless communication capacity from mobile devices, we investigate multiple-input multiple-output (MIMO) VLC to achieve multiplexing capacity gains and to allow multiple users to transmit simultaneously without disrupting each other. Previous approaches to receiving VLC signals have either been unable to simultaneously receive multiple independent signals from multiple transmitters, unable to adapt to moving transmitters and receivers, or unable to sample the received signals fast enough for high-speed VLC. In this dissertation, we develop and evaluate two novel approaches to receive high-speed MIMO VLC signals from mobile transmitters that can be practically scaled to support additional transmitters. The first approach, Token-Based Pixel Selection (TBPS), exploits the redundancy and sparsity of high-resolution transmitter images in imaging VLC receivers to greatly increase the rate at which complementary metal-oxide-semiconductor (CMOS) active pixel sensor (APS) image sensors can sample VLC signals; through improved signal routing, it enables such high-resolution image sensors to capture high-speed VLC signals. We further model the CMOS APS pixel as a linear shift-invariant system, investigate how it scales to support additional transmitters and higher resolutions, and investigate how noise affects its performance. The second approach, a spatial light modulator (SLM)-based VLC receiver, uses an SLM to dynamically control the resulting wireless channel matrix, enabling relatively few photodetectors to reliably receive from multiple transmitters despite their movement. As part of our analysis, we develop a MIMO VLC channel capacity model that accounts for the non-negativity and peak-power constraints of VLC systems to evaluate the performance of the SLM VLC receiver and to facilitate the optimization of the channel matrix through the SLM.
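
    A hedged sketch of the MIMO VLC signal model implied by this abstract appears below: non-negative, peak-limited transmit intensities, a channel matrix shaped by the receiver optics, and a simple least-squares recovery at the receiver. The dimensions, noise model and recovery step are illustrative assumptions, not the dissertation's capacity model or receiver designs.

```python
# Illustrative MIMO VLC signal-model sketch (y = Hx + n); not the dissertation's method.
import numpy as np

def vlc_transmit(bits, peak=1.0):
    """Map bits to non-negative, peak-constrained OOK intensities, one per transmitter."""
    return peak * np.asarray(bits, dtype=float)

def receive(H, x, noise_std=0.01):
    """One symbol interval: y = Hx + n, with n standing in for shot/thermal noise."""
    return H @ x + noise_std * np.random.randn(H.shape[0])

def recover(H, y, peak=1.0):
    """Least-squares channel inversion followed by hard OOK decisions."""
    x_hat, *_ = np.linalg.lstsq(H, y, rcond=None)
    return (x_hat > peak / 2).astype(int)

if __name__ == "__main__":
    n_tx, n_rx = 4, 4
    # a well-conditioned H stands in for what an SLM-shaped channel would provide
    H = 0.5 * np.eye(n_rx, n_tx) + 0.1 * np.random.rand(n_rx, n_tx)
    tx_bits = np.random.randint(0, 2, n_tx)
    y = receive(H, vlc_transmit(tx_bits))
    print("tx:", tx_bits, "rx:", recover(H, y), "cond(H): %.2f" % np.linalg.cond(H))
```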