59 research outputs found

    Passive visible light detection of humans

    Get PDF
    This paper experimentally investigates passive human visible light sensing (VLS). The tested passive VLS system consists of one light-emitting diode (LED) and one photodiode-based receiver, both ceiling-mounted. There is no line of sight between the LED and the receiver, so only reflected light is considered. The influence of a human is investigated based on the received signal strength (RSS) of the ambient-light reflections at the photodiode. Depending on the situation, this influence can reach up to +/- 50%. The experimental results show the influence of three different clothing colors, four different walking directions and four different layouts. Based on the obtained results, a human pass-by detection system is proposed and tested. The system achieves a detection rate of 100% across 21 experiments in a controlled environment, and maintains its 100% detection rate across 19 experiments in a realistic corridor.
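The pass-by detection idea described above can be sketched as a simple threshold test on RSS deviation from the ambient baseline. This is an illustrative reconstruction, not the paper's implementation; the threshold value and sample data are invented, motivated by the reported deviations of up to +/- 50%.

```python
def detect_pass_by(rss_samples, baseline, threshold=0.2):
    """Return indices where |RSS - baseline| / baseline exceeds the threshold,
    i.e., where reflected ambient light deviates enough to suggest a pass-by."""
    events = []
    for i, rss in enumerate(rss_samples):
        deviation = abs(rss - baseline) / baseline
        if deviation > threshold:
            events.append(i)
    return events

baseline = 1.00                                   # RSS with an empty room
samples = [1.01, 0.99, 0.70, 0.55, 0.68, 1.02]    # dip while a person passes
print(detect_pass_by(samples, baseline))          # → [2, 3, 4]
```

A real system would additionally debounce consecutive detections into a single pass-by event.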

    Occupancy Estimation Through Visible Light Sensing (VLS)

    Get PDF
    Visible light is all around us in daily life. The part of the electromagnetic spectrum visible to the human eye, roughly 350-750 nm in wavelength (about 400-750 THz in frequency), is called visible light. Visible light sensing and communication (VLS and VLC) are considered two of the most rapidly emerging fields in sensing and wireless communication, where light-emitting diodes (LEDs) are used as the transmission unit (Tx). LEDs have a number of advantages, including extended life expectancy, illumination, lower energy dissipation, and eco-friendliness. As a result, visible light can be used in several sensing applications, such as occupancy estimation. In this thesis, a new occupancy estimation method based on VLS is presented. A visible light source (e.g., an LED) is utilized as the transmitter and a photo-detector (PD) as the receiver, forming a visible light sensing system. Depending on the number of people in the room crossing the line of sight (LOS) between the light source and the PD, the received power at the receiver changes; consequently, the probability density function (PDF) and cumulative distribution function (CDF) of the received power change as well. First, a theoretical analysis of the received power is developed to incorporate the impact of room occupancy on its PDF and CDF. Second, these PDF and CDF expressions are compared with simulation results. Both are in perfect agreement, which verifies the theoretical analysis. In addition, a Kullback-Leibler divergence (KL-divergence) method is used to analyze measurement data and detect the number of occupants in an environment. In this method, the stored PDF of the received power in the database is compared with the PDF of the measured received power, which reveals the estimated room occupancy. It was shown how a slight variation in room occupancy can dramatically alter the statistics of the received power.
Theoretical analysis and simulations are performed. In addition, we have conducted experiments in the Wireless Communication and Sensing Research Lab (WCSRL), located in Engineering South (ES) 408 at Oklahoma State University. As future work, we plan to study the impact of scattering on estimation accuracy. Multiple LEDs and PDs (i.e., multiple transmitters and receivers) can also be considered in future tests. Developing a complete system that can control and regulate buildings' HVAC (heating, ventilation, and air conditioning) and lighting to improve sustainability and energy efficiency will be one of our promising research directions.
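The KL-divergence matching step can be sketched as follows: the measured PDF of received power is compared against stored per-occupancy PDFs, and the occupancy whose stored PDF has the smallest divergence is reported. This is a hedged illustration; the discrete bins and stored histograms below are invented, not taken from the thesis.

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """D_KL(p || q) for two discrete PDFs defined over the same bins."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def estimate_occupancy(measured_pdf, stored_pdfs):
    """Return the occupancy count whose stored PDF is closest in the KL sense."""
    return min(stored_pdfs, key=lambda n: kl_divergence(measured_pdf, stored_pdfs[n]))

# Hypothetical database of received-power PDFs, indexed by occupancy count.
stored = {
    0: [0.7, 0.2, 0.1],   # empty room: power concentrated in the high bin
    1: [0.3, 0.5, 0.2],
    2: [0.1, 0.3, 0.6],   # more occupants: more LOS blockage, lower power
}
measured = [0.25, 0.55, 0.20]
print(estimate_occupancy(measured, stored))  # → 1
```

In practice the measured PDF would be a histogram of received-power samples collected over a time window.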

    Positional estimation techniques for an autonomous mobile robot

    Get PDF
    Techniques for positional estimation of a mobile robot navigating in an indoor environment are described. A comprehensive review of the various positional estimation techniques studied in the literature is first presented. The techniques are divided into four different types, each of which is discussed briefly. Two different kinds of environments are considered for positional estimation: mountainous natural terrain and an urban, man-made environment with polyhedral buildings. In both cases, the robot is assumed to be equipped with a single visual camera that can be panned and tilted, and a 3-D description (world model) of the environment is given. Such a description could be obtained from a stereo pair of aerial images or from the architectural plans of the buildings. Techniques for positional estimation using the camera input and the world model are presented.

    Robust Fusion of LiDAR and Wide-Angle Camera Data for Autonomous Mobile Robots

    Get PDF
    Autonomous robots that assist humans in day-to-day living tasks are becoming increasingly popular. Autonomous mobile robots operate by sensing and perceiving their surrounding environment to make accurate driving decisions. A combination of several different sensors such as LiDAR, radar, ultrasound sensors and cameras is utilized to sense the surrounding environment of autonomous vehicles. These heterogeneous sensors simultaneously capture various physical attributes of the environment. Such multimodality and redundancy of sensing need to be positively utilized for reliable and consistent perception of the environment through sensor data fusion. However, these multimodal sensor data streams differ from each other in many ways, such as temporal and spatial resolution, data format, and geometric alignment. For the subsequent perception algorithms to utilize the diversity offered by multimodal sensing, the data streams need to be spatially, geometrically and temporally aligned with each other. In this paper, we address the problem of fusing the outputs of a Light Detection and Ranging (LiDAR) scanner and a wide-angle monocular image sensor for free space detection. The outputs of the LiDAR scanner and the image sensor are of different spatial resolutions and need to be aligned with each other. A geometrical model is used to spatially align the two sensor outputs, followed by a Gaussian Process (GP) regression-based resolution matching algorithm to interpolate the missing data with quantifiable uncertainty. The results indicate that the proposed sensor data fusion framework significantly aids the subsequent perception steps, as illustrated by the performance improvement of an uncertainty-aware free space detection algorithm.
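The GP-regression resolution-matching step can be sketched in one dimension: sparse LiDAR depth samples (after projection into the image frame) are interpolated onto the denser camera pixel grid, with the predictive variance providing the "quantifiable uncertainty" mentioned above. This is a minimal sketch assuming an RBF kernel with unit amplitude; all values are toy data, not the paper's setup.

```python
import numpy as np

def rbf(a, b, length=1.0):
    """RBF (squared-exponential) kernel between two 1-D coordinate arrays."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)

def gp_interpolate(x_train, y_train, x_query, noise=1e-4, length=1.0):
    """Return the GP predictive mean and variance at the query locations."""
    K = rbf(x_train, x_train, length) + noise * np.eye(len(x_train))
    Ks = rbf(x_query, x_train, length)
    mean = Ks @ np.linalg.solve(K, y_train)
    # Predictive variance: k(x*,x*) - k* K^-1 k*^T, diagonal only.
    var = 1.0 - np.einsum("ij,ij->i", Ks, np.linalg.solve(K, Ks.T).T)
    return mean, var

# Sparse LiDAR depths along one scan line, queried on the camera's pixel grid.
x_lidar = np.array([0.0, 2.0, 4.0])
depth = np.array([5.0, 5.5, 6.0])
x_pixels = np.linspace(0.0, 4.0, 9)
mean, var = gp_interpolate(x_lidar, depth, x_pixels)
```

Note how the variance grows between LiDAR samples and shrinks at them, which is exactly what a downstream uncertainty-aware detector can exploit.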

    Sensing Mountains

    Get PDF
    Sensing mountains by close-range and remote techniques is a challenging task. The 4th edition of the international Innsbruck Summer School of Alpine Research 2022 – Close-range Sensing Techniques in Alpine Terrain brings together early-career and experienced scientists from technical, geo- and environment-related research fields. The interdisciplinary setting of the summer school creates a space for exchanging and learning new concepts and solutions for mapping, monitoring and quantifying mountain environments under ongoing conditions of change.

    FDLA: A Novel Frequency Diversity and Link Aggregation Solution for Handover in an Indoor Vehicular VLC Network

    Get PDF
    Visible light communication (VLC) has been introduced as a complementary wireless technology that can be widely used in industrial indoor environments, where automated guided vehicles aim to ease and accelerate logistics. Despite its advantages, one significant drawback of using an indoor VLC network is its high handover outage duration. In line-of-sight VLC links, such handovers occur frequently due to mobility, shadowing, and obstacles. In this paper, we propose a frequency diversity and link aggregation (FDLA) solution, a novel data-link-layer technique to tackle the handover challenge in indoor VLC networks. We have developed a small-scale prototype and experimentally evaluated its performance for a variety of scenarios, comparing the results with other handover techniques. We also assessed the configuration options in more detail, in particular focusing on different network traffic types and various address resolution protocol (ARP) intervals. The measurement results demonstrate the advantages of our approach for low-outage-duration handovers in indoor VLC networks. The proposed idea is able to decrease the handover outage duration in a two-dimensional network to about 0.2 s, which is considerably lower than previous solutions.
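The benefit of bonding two frequency-diverse links at the data link layer can be illustrated with a toy delivery model: a handover outage on one link does not interrupt the aggregate as long as the other link remains up. This is an illustrative sketch of the general link-aggregation idea, not the authors' FDLA implementation; the frame timeline and link states are invented.

```python
def aggregate_delivery(frames, link_a_up, link_b_up):
    """Deliver each frame over whichever bonded link is up; count drops.
    An outage occurs only when both links are down simultaneously."""
    delivered, dropped = [], 0
    for i, frame in enumerate(frames):
        if link_a_up[i] or link_b_up[i]:
            delivered.append(frame)
        else:
            dropped += 1
    return delivered, dropped

frames = list(range(6))
link_a = [True, True, False, False, True, True]   # link A in a handover gap
link_b = [True, True, True, True, False, True]    # link B covers that gap
out, lost = aggregate_delivery(frames, link_a, link_b)
print(lost)  # → 0: no frame sees both links down at once
```

With a single link, the same timeline would drop the two frames in link A's handover gap, which is the outage the paper's aggregation scheme is designed to avoid.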

    Convergence of Intelligent Data Acquisition and Advanced Computing Systems

    Get PDF
    This book is a collection of published articles from the Sensors Special Issue on "Convergence of Intelligent Data Acquisition and Advanced Computing Systems". It includes extended versions of the conference contributions from the 10th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS’2019), Metz, France, as well as external contributions

    Multi-environment Georeferencing of RGB-D Panoramic Images from Portable Mobile Mapping – a Perspective for Infrastructure Management

    Get PDF
    High-resolution, accurately georeferenced RGB-D images are the basis for 3D image spaces and 3D street-view web services, which are already used commercially for infrastructure management. Mobile mapping systems (MMS) enable fast and efficient data acquisition of infrastructure. Most MMS used outdoors rely on direct georeferencing, which achieves absolute accuracies at the centimeter level in open areas. Under GNSS shadowing, however, the accuracy of direct georeferencing quickly degrades to the decimeter or even the meter level. MMS used indoors, by contrast, are mostly based on SLAM. Most SLAM algorithms, however, were optimized for low latency and real-time performance and therefore sacrifice accuracy, map quality, and maximum extent. The goal of this work is to capture high-resolution RGB-D images in various environments and to georeference them accurately and reliably. For data acquisition, a powerful, image-focused, backpack-carried MMS was developed. It consists of a multi-head panoramic camera, two multi-beam LiDAR scanners, and a tactical-grade combined GNSS/IMU navigation unit. All sensors are precisely synchronized and provide access to the raw data. The overall system was calibrated in test fields using bundle-block-based as well as feature-based methods, a prerequisite for the integration of kinematic sensor data. For accurate and reliable georeferencing in various environments, a multi-stage georeferencing approach was developed that combines different sensor data and georeferencing methods. Direct and LiDAR-SLAM-based georeferencing provide initial poses for the subsequent image-based georeferencing via an extended SfM pipeline.
The image-based georeferencing yields a precise but sparse trajectory, suitable for georeferencing images. To obtain a dense trajectory that is also suitable for georeferencing LiDAR data, the direct georeferencing was supported with poses from the image-based georeferencing. Comprehensive performance evaluations in three extensive, challenging test areas show the possibilities and limits of our georeferencing approach. The three test areas, in a city center, in a forest, and inside a building, represent real-world conditions with limited GNSS reception, poor lighting, moving objects, and repetitive geometric patterns. The image-based georeferencing achieved the best accuracies, with mean precision in the range of 5 mm to 7 mm. The absolute accuracy was 85 mm to 131 mm, an improvement by a factor of 2 to 7 over direct and LiDAR-SLAM-based georeferencing. Direct georeferencing with CUPT support from image poses of the image-based georeferencing led to a slightly degraded mean precision in the range of 13 mm to 16 mm, while the mean absolute accuracy did not differ significantly from that of the image-based georeferencing. The accuracies achieved in challenging environments confirm earlier investigations under optimal conditions and are of the same order of magnitude as the results of other research groups. They can be used to create street-view services for infrastructure management in challenging environments. Accurately and reliably georeferenced RGB-D images have great potential for future visual localization and AR applications.
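The coordinate-update (CUPT) support described above can be sketched in one dimension: a dense but drifting direct-georeferencing trajectory is corrected by interpolating the offsets observed at the sparse, precise image-based poses, yielding a dense trajectory usable for georeferencing LiDAR data. This is an illustrative sketch with invented numbers, not the thesis implementation.

```python
import numpy as np

def cupt_correct(t_dense, x_dense, t_sparse, x_sparse):
    """Correct a dense trajectory using sparse reference poses:
    compute the offset at each sparse pose, interpolate it over time,
    and apply it to the dense trajectory."""
    offsets = x_sparse - np.interp(t_sparse, t_dense, x_dense)
    correction = np.interp(t_dense, t_sparse, offsets)
    return x_dense + correction

t_dense = np.linspace(0.0, 10.0, 11)      # 1 Hz direct trajectory (1-D toy)
x_dense = 1.05 * t_dense                  # with a linear 5% drift
t_sparse = np.array([0.0, 5.0, 10.0])     # precise image-based poses
x_sparse = 1.0 * t_sparse                 # drift-free reference positions
corrected = cupt_correct(t_dense, x_dense, t_sparse, x_sparse)
```

Because the drift here is linear, the interpolated corrections remove it exactly; real trajectories would retain residual error between the sparse update points.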