1,002 research outputs found

    Connected cars: communication between vehicles and infrastructure through visible light

    Get PDF
    Dissertation for the degree of Master in Electronics and Telecommunications Engineering. Due to increased traffic demand, the Radio Frequency (RF) band is currently in short supply. Since Visible Light Communication (VLC) employs the visible light spectrum to transmit and encode data, it is a potential alternative wireless technology to consider. In recent years, this kind of technology has changed and advanced. This thesis aims to characterize and test VLC-based communication links for use in traffic management systems. Traffic lights, the primary infrastructure for regulating access to roads, will shortly be replaced by more effective alternatives to enhance traffic management. Tetrachromatic white LEDs serve as the VLC link's transmitters and are used for both data transmission and lighting. The receiver is built on SiC:H/a-Si:H photodiodes, which have selective spectral sensitivity. On-Off Keying (OOK) modulation is used to transmit the message in a seven-cell intersection scenario with a 64-bit frame structure and LEDs at each corner, creating a nine-footprint cell. Using a simulation tool, the footprints of each cell and the coverage map were obtained. To demonstrate how the frame structure is constructed and sent, a trajectory was tested with the vehicle travelling from East to South.
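
    As a rough illustration of the transmission scheme described above, the Python sketch below assembles a 64-bit frame and maps it to an On-Off Keying waveform for the LED driver. The frame layout (sync header, cell identifier, payload, padding), the bit allocation within the 64 bits, and the samples-per-bit value are invented for illustration and are not the thesis's actual frame format.

```python
# Illustrative OOK modulation of a 64-bit frame (not the thesis code).
# The frame layout and samples_per_bit value below are assumptions.
import numpy as np

def ook_modulate(frame_bits: np.ndarray, samples_per_bit: int = 8) -> np.ndarray:
    """Map each bit to an LED on/off level, held for samples_per_bit samples."""
    assert frame_bits.size == 64, "the abstract describes a 64-bit frame"
    return np.repeat(frame_bits.astype(float), samples_per_bit)

# Hypothetical 64-bit frame: sync pattern + cell ID + payload + padding.
sync    = np.array([1, 0, 1, 0, 1, 0, 1, 0])                 # 8-bit synchronisation header
cell_id = np.array([int(b) for b in np.binary_repr(5, 8)])   # 8-bit cell identifier
payload = np.random.randint(0, 2, 40)                        # 40 bits of traffic data
pad     = np.zeros(8, dtype=int)                             # 8 bits of padding
frame   = np.concatenate([sync, cell_id, payload, pad])

waveform = ook_modulate(frame)     # 512 samples driving the LED on/off
print(frame.size, waveform.size)   # 64 512
```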

    Investigation of advanced navigation and guidance system concepts for all-weather rotorcraft operations

    Get PDF
    Results are presented from a survey of active helicopter operators, conducted to determine the extent to which they wish to operate in IMC conditions, the visibility limits under which they would operate, the revenue benefits to be gained, and the percentage of aircraft cost they would pay for such increased capability. Candidate systems were examined for their capability to meet the requirements of a mission model constructed to represent the modes of flight normally encountered in low-visibility conditions. Recommendations are made for the development of high-resolution radar, simulation of the control-display system for steep approaches, and development of an obstacle-sensing system for detecting wires. A cost feasibility analysis is included.

    Robot guidance using machine vision techniques in industrial environments: A comparative review

    Get PDF
    In the factory of the future, most operations will be done by autonomous robots that need visual feedback to move around the working space avoiding obstacles, to work collaboratively with humans, to identify and locate the working parts, to complement the information provided by other sensors to improve their positioning accuracy, etc. Different vision techniques, such as photogrammetry, stereo vision, structured light, time of flight and laser triangulation, among others, are widely used for inspection and quality control processes in industry and now for robot guidance. Choosing which type of vision system to use depends strongly on the parts that need to be located or measured. Thus, in this paper a comparative review of different machine vision techniques for robot guidance is presented. This work analyzes the accuracy, range and weight of the sensors, safety, processing time and environmental influences. Researchers and developers can use it as background information for their future work.
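
    As a rough, hypothetical illustration of the kind of comparison the review carries out, the snippet below encodes a few technique attributes and filters them against application requirements. The attribute values and field names are placeholders and are not figures taken from the paper.

```python
# Hypothetical comparison of vision techniques against application needs.
# Attribute values are placeholders, not numbers from the review.
from dataclasses import dataclass

@dataclass
class VisionTechnique:
    name: str
    accuracy_mm: float      # typical measurement error
    range_m: float          # maximum working distance
    weight_kg: float        # sensor head weight
    outdoor_capable: bool   # robustness to ambient light

candidates = [
    VisionTechnique("stereo vision",       1.00,  5.0, 0.5, True),
    VisionTechnique("structured light",    0.10,  2.0, 0.8, False),
    VisionTechnique("time of flight",      5.00, 10.0, 0.4, True),
    VisionTechnique("laser triangulation", 0.05,  1.0, 1.2, False),
]

def shortlist(max_error_mm, min_range_m, needs_outdoor):
    """Return techniques meeting accuracy, range and lighting requirements."""
    return [t.name for t in candidates
            if t.accuracy_mm <= max_error_mm
            and t.range_m >= min_range_m
            and (t.outdoor_capable or not needs_outdoor)]

print(shortlist(max_error_mm=2.0, min_range_m=2.0, needs_outdoor=True))
```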

    Intelligent thermal image-based sensor for affordable measurement of crop canopy temperature

    Get PDF
    Crop canopy temperature measurement is necessary for monitoring water stress indicators such as the Crop Water Stress Index (CWSI). Water stress indicators are very useful for irrigation strategy management in the precision agriculture context. For this purpose, one of the techniques used is thermography, which allows remote temperature measurement. However, the applicability of these techniques depends on their affordability and on allowing continuous monitoring over multiple field measurements. In this article, the development of a sensor capable of automatically measuring the crop canopy temperature by means of a low-cost thermal camera and the implementation of artificial intelligence-based image segmentation models are presented. In addition, we provide results on almond trees comparing our system with a commercial thermal camera, obtaining an R-squared of 0.75. This research was funded by the Agencia Estatal de Investigación (AEI) under project numbers AGL2016-77282-C3-3-R and PID2019-106226-C22 AEI/https://doi.org//10.13039/501100011033. Grants FPU17/05155 and FPU19/00020 were awarded by the Ministerio de Educación y Formación Profesional. The authors would like to acknowledge the support of Miriam Montoya Gómez in language assistance.
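
    For context, the CWSI mentioned above is commonly computed from the measured canopy temperature together with wet and dry reference temperatures. The sketch below shows that standard formulation as a generic illustration; it is not necessarily the exact index computation used in this article, and the sample temperatures are invented.

```python
# Generic CWSI computation from canopy temperature (illustrative values only).
# CWSI = (T_canopy - T_wet) / (T_dry - T_wet), clipped to [0, 1]:
# 0 means a fully transpiring (unstressed) canopy, 1 a non-transpiring (stressed) one.

def cwsi(t_canopy: float, t_wet: float, t_dry: float) -> float:
    value = (t_canopy - t_wet) / (t_dry - t_wet)
    return min(max(value, 0.0), 1.0)

# Invented example: canopy at 31 C, wet reference at 27 C, dry reference at 37 C.
print(cwsi(31.0, 27.0, 37.0))  # 0.4
```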

    Functional mobile-based two-factor authentication by photonic physical unclonable functions

    Get PDF
    Given the rapid expansion of the Internet of Things and because of the concerns around counterfeited goods, secure and resilient cryptographic systems are in high demand. Due to the development of digital ecosystems, mobile applications for transactions require fast and reliable methods to generate secure cryptographic keys, such as Physical Unclonable Functions (PUFs). We demonstrate a compact and reliable photonic PUF device able to be applied in mobile-based authentication. A miniaturized, energy-efficient, and low-cost token was forged from flexible luminescent organic–inorganic hybrid materials doped with lanthanides, displaying unique challenge–response pairs (CRPs) for two-factor authentication. Under laser irradiation in the red spectral region, a speckle pattern is attained and accessed through conventional charge-coupled cameras, and under ultraviolet light-emitting diodes, it displays a luminescent pattern accessed through hyperspectral imaging and converted to a random intensity-based pattern, ensuring the two-factor authentication. This methodology features the use of a discrete cosine transform to enable a low-cost and semi-compact encryption system suited for speckle and luminescence-based CRPs. The PUF evaluation and the authentication protocol required the analysis of multiple CRPs from different tokens, establishing an optimal cryptographic key size (128 bits) and an optimal decision threshold level that minimizes the error probability.
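
    The abstract mentions a discrete cosine transform step for turning speckle or luminescence patterns into keys. The sketch below illustrates one generic way a 128-bit response could be derived (2D DCT of the pattern, low-frequency coefficients binarized against their median) and compared by Hamming distance; it is an illustrative reconstruction, and the coefficient selection, binarization rule and acceptance threshold are assumptions rather than the paper's protocol.

```python
# Illustrative DCT-based response extraction for a PUF-like pattern.
# Generic sketch, not the authors' algorithm: coefficient selection,
# median binarization and the acceptance threshold are assumptions.
import numpy as np
from scipy.fft import dctn

def pattern_to_key(pattern: np.ndarray, n_bits: int = 128) -> np.ndarray:
    """Derive an n_bits binary response from a 2D intensity pattern."""
    coeffs = dctn(pattern.astype(float), norm="ortho")
    block = coeffs[:16, :16].ravel()[1:n_bits + 1]   # low-frequency coefficients, skip DC
    return (block > np.median(block)).astype(np.uint8)

def hamming_fraction(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.count_nonzero(a != b)) / a.size

rng = np.random.default_rng(0)
enrolled = rng.random((256, 256))                             # stand-in for a speckle image
probe    = enrolled + 0.01 * rng.standard_normal((256, 256))  # noisy re-measurement
key_a, key_b = pattern_to_key(enrolled), pattern_to_key(probe)
print(hamming_fraction(key_a, key_b) < 0.25)                  # accept below an assumed threshold
```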

    Multiframe visual-inertial blur estimation and removal for unmodified smartphones

    Get PDF
    Pictures and videos taken with smartphone cameras often suffer from motion blur due to hand shake during the exposure time. Recovering a sharp frame from a blurry one is an ill-posed problem, but in smartphone applications additional cues can aid the solution. We propose a blur removal algorithm that exploits information from subsequent camera frames and the built-in inertial sensors of an unmodified smartphone. We extend the fast non-blind uniform blur removal algorithm of Krishnan and Fergus to non-uniform blur and to multiple input frames. We estimate piecewise-uniform blur kernels from the gyroscope measurements of the smartphone and adaptively steer our multiframe deconvolution framework towards the sharpest input patches. We show in qualitative experiments that our algorithm can remove synthetic and real blur from individual frames of a degraded image sequence within a few seconds.
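
    The core idea of estimating blur from the built-in gyroscope can be sketched as follows: integrate the angular-velocity samples over the exposure, project the resulting rotations through the focal length into pixel displacements, and rasterize the displacement path into a kernel. The snippet below is a simplified, rotation-only illustration with an assumed focal length, sample rate and kernel size; it is not the paper's piecewise-uniform, patch-wise implementation.

```python
# Simplified gyro-to-blur-kernel sketch (rotation about x/y only, central patch,
# assumed focal length and sample rate; not the paper's piecewise-uniform method).
import numpy as np

def gyro_blur_kernel(omega_xy, dt, focal_px, ksize=31):
    """Rasterize the pixel path traced during exposure into a blur kernel."""
    angles = np.cumsum(np.asarray(omega_xy) * dt, axis=0)   # integrate angular velocity
    shifts = focal_px * np.tan(angles)                      # small-rotation pixel shifts
    shifts -= shifts.mean(axis=0)                           # centre the path in the kernel
    kernel = np.zeros((ksize, ksize))
    for dx, dy in shifts:
        x = int(round(dx)) + ksize // 2
        y = int(round(dy)) + ksize // 2
        if 0 <= x < ksize and 0 <= y < ksize:
            kernel[y, x] += 1.0
    return kernel / kernel.sum()

# Invented gyro trace: 40 samples at 200 Hz with a slow drift about both axes.
t = np.linspace(0, 0.2, 40)
omega = np.stack([0.05 * np.sin(8 * t), 0.03 * np.cos(8 * t)], axis=1)  # rad/s
k = gyro_blur_kernel(omega, dt=1 / 200, focal_px=1500.0)
print(k.shape, k.sum())  # (31, 31) ~1.0
```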

    Infrared based monocular relative navigation for active debris removal

    No full text
    In space, vision-based relative navigation systems suffer from the harsh illumination conditions of the target (e.g. eclipse conditions, solar glare, etc.). In current Rendezvous and Docking (RvD) missions, most of these issues are addressed by advanced mission planning techniques (e.g. strict manoeuvre timings). However, such planning would not always be feasible for Active Debris Removal (ADR) missions, which have more unknowns. Fortunately, thermal infrared technology can operate under any lighting conditions and therefore has the potential to be exploited in the ADR scenario. In this context, this study investigates the benefits and the challenges of infrared-based relative navigation. The infrared environment of ADR is very different from that of terrestrial applications. This study proposes a methodology for modelling this environment in a computationally cost-effective way to create a simulation environment in which the navigation solution can be tested. Through an intelligent classification of possible target surface coatings, the study is generalised to simulate the thermal environment of space debris in different orbit profiles. Through modelling various scenarios, the study also discusses the possible challenges of the infrared technology. These theoretical findings were replicated in laboratory conditions that provided the thermal-vacuum environment of ADR. By use of this novel space debris set-up, the study investigates the behaviour of infrared cues extracted by different techniques and identifies the issue of short-lifespan features in ADR scenarios. Based on these findings, the study suggests two different relative navigation methods based on the degree of target cooperativeness: partially cooperative targets and uncooperative targets. Both algorithms provide the navigation solution with respect to an online reconstruction of the target. The method for partially cooperative targets provides a solution for smooth trajectories by exploiting the subsequent image tracks of features extracted from the first frame. The second algorithm is for uncooperative targets and exploits the target motion (e.g. tumbling) by formulating the problem in terms of a static target and a moving map (i.e. target structure) within a filtering framework. The optical flow information is related to the target motion derivatives and the target structure. A novel technique that uses the quality of the infrared cues to improve the algorithm performance is introduced. The problem of short measurement duration due to the target's tumbling motion is addressed by an innovative smart initialisation procedure. Both navigation solutions were tested in a number of different scenarios by using computer simulations and a specific laboratory set-up with a real infrared camera. It is shown that these methods can perform well as infrared-based navigation solutions using monocular cameras when knowledge of the infrared appearance of the target is limited.
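
    One element of the abstract, weighting infrared cues by their quality inside the filtering framework, can be illustrated generically: in a Kalman-style update, a lower-quality feature is assigned a larger measurement-noise covariance so it pulls the state estimate less. The sketch below is an invented, generic illustration of that weighting idea; the state, measurement model and quality-to-noise mapping are not taken from the thesis.

```python
# Generic illustration of quality-weighted measurement updates in a Kalman filter:
# lower infrared cue quality -> inflated measurement noise -> smaller correction.
# The state, measurement model and quality-to-noise mapping are all invented here.
import numpy as np

def kalman_update(x, P, z, H, R):
    """Standard linear Kalman measurement update."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x + K @ (z - H @ x)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

def quality_scaled_noise(base_sigma_px, quality):
    """Map a cue quality in (0, 1] to a per-feature measurement-noise covariance."""
    sigma = base_sigma_px / max(quality, 1e-3)
    return np.eye(2) * sigma**2

x = np.zeros(4)                                 # toy state: 2D relative position and velocity
P = np.eye(4)
H = np.hstack([np.eye(2), np.zeros((2, 2))])    # we observe position only
z = np.array([1.0, -0.5])                       # one feature-derived measurement

for q in (0.9, 0.2):                            # strong cue vs. weak, short-lived cue
    x_q, _ = kalman_update(x, P, z, H, quality_scaled_noise(1.0, q))
    print(q, np.round(x_q[:2], 3))
```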

    DISeR: Designing Imaging Systems with Reinforcement Learning

    Full text link
    Imaging systems consist of cameras to encode visual information about the world and perception models to interpret this encoding. Cameras contain (1) illumination sources, (2) optical elements, and (3) sensors, while perception models use (4) algorithms. Directly searching over all combinations of these four building blocks to design an imaging system is challenging due to the size of the search space. Moreover, cameras and perception models are often designed independently, leading to sub-optimal task performance. In this paper, we formulate these four building blocks of imaging systems as a context-free grammar (CFG), which can be automatically searched over with a learned camera designer to jointly optimize the imaging system with task-specific perception models. By transforming the CFG to a state-action space, we then show how the camera designer can be implemented with reinforcement learning to intelligently search over the combinatorial space of possible imaging system configurations. We demonstrate our approach on two tasks, depth estimation and camera rig design for autonomous vehicles, showing that our method yields rigs that outperform industry-wide standards. We believe that our proposed approach is an important step towards automating imaging system design. Comment: ICCV 2023. Project Page: https://tzofi.github.io/dise
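
    To make the grammar idea concrete, the snippet below sketches a toy context-free grammar over the four building blocks and expands it with random choices standing in for the learned camera designer. The production rules and component names are invented for illustration and are not DISeR's actual grammar or search space.

```python
# Toy context-free grammar over imaging-system building blocks, expanded with
# random choices as a stand-in for the learned RL designer. The rules and
# component names are invented for illustration; they are not DISeR's grammar.
import random

GRAMMAR = {
    "SYSTEM":       [["ILLUMINATION", "OPTICS", "SENSOR", "ALGORITHM"]],
    "ILLUMINATION": [["passive"], ["active_ir_flood"], ["structured_pattern"]],
    "OPTICS":       [["wide_fov_lens"], ["telephoto_lens"], ["stereo_baseline", "OPTICS"]],
    "SENSOR":       [["rgb_sensor"], ["monochrome_sensor"], ["spad_array"]],
    "ALGORITHM":    [["monocular_depth_net"], ["stereo_matching_net"]],
}

def expand(symbol, rng):
    """Expand a non-terminal by choosing productions (random policy stands in for RL)."""
    if symbol not in GRAMMAR:            # terminal: a concrete design choice
        return [symbol]
    production = rng.choice(GRAMMAR[symbol])
    design = []
    for s in production:
        design.extend(expand(s, rng))
    return design

rng = random.Random(0)
print(expand("SYSTEM", rng))   # one sampled imaging-system configuration
```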