9 research outputs found

    Two-Dimensional RSSI-Based Indoor Localization Using Multiple Leaky Coaxial Cables With a Probabilistic Neural Network

    Received signal strength indicator (RSSI) based indoor localization technology has irreplaceable advantages for many location-aware applications. It is becoming clear that, with the development of fifth-generation (5G) and future communication technology, indoor localization will play a key role in location-based application scenarios including smart home systems, manufacturing automation, health care, and robotics. Compared with wireless coverage using a conventional monopole antenna, leaky coaxial cables (LCX) can generate uniform and stable wireless coverage over a long, narrow linear cell or an irregular environment such as a railway station or underground shopping mall, and especially in manufacturing factories with wireless dead zones caused by a large number of metal machines. This paper presents a localization method using multiple LCX for an indoor multipath-rich environment. Unlike conventional localization methods based on time of arrival (TOA) or time difference of arrival (TDOA), we improve localization accuracy by applying machine learning to the RSSI measured from the LCX. We present a probabilistic neural network (PNN) approach that utilizes RSSI from the LCX, aimed at two-dimensional (2-D) localization along a trajectory. In addition, we compare the performance of the RSSI-based PNN (RSSI-PNN) method and the conventional TDOA method in the same environment. The results show the RSSI-PNN method is promising: more than 90% of its localization errors are within 1 m. Compared with the conventional TDOA method, the RSSI-PNN method has better localization performance, especially in the middle area of the wireless coverage of the LCXs in the indoor environment.
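The RSSI-fingerprinting idea behind an RSSI-PNN can be illustrated with a minimal Parzen-window probabilistic neural network classifier. The grid cells, two-cable fingerprints, and kernel width below are invented for illustration and are not the paper's parameters.

```python
import numpy as np

def pnn_predict(X_train, y_train, x_query, sigma=1.0):
    """Parzen-window PNN: each class (grid cell) is scored by the mean
    Gaussian kernel over that class's training RSSI fingerprints; the
    predicted position is the highest-scoring cell."""
    classes = np.unique(y_train)
    scores = []
    for c in classes:
        Xc = X_train[y_train == c]
        d2 = np.sum((Xc - x_query) ** 2, axis=1)
        scores.append(np.mean(np.exp(-d2 / (2 * sigma ** 2))))
    return classes[int(np.argmax(scores))]

# Toy fingerprint map: RSSI (dBm) from two LCX cables at three grid cells
X = np.array([[-40., -70.], [-41., -69.],   # cell 0
              [-55., -55.], [-54., -56.],   # cell 1
              [-70., -40.], [-69., -41.]])  # cell 2
y = np.array([0, 0, 1, 1, 2, 2])
print(pnn_predict(X, y, np.array([-54., -55.])))  # 1 (closest to cell 1)
```

In a real deployment the classes would be a dense grid of reference points along the trajectory, and the fingerprints would come from a site survey.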

    INTERNET DAS COISAS APLICADA AO GERENCIAMENTO DE PRESENÇA E ENCONTROS DE PESSOAS EM PRÉDIOS INTELIGENTES

    With technological advances and the popularization of the Internet of Things (IoT), Smart City concepts have increasingly been adopted in large urban centers, in particular in smart buildings, which use sensors to better control their facilities. One area of smart buildings is presence and meeting management, which tracks the displacement and location of occupants in the building. Detecting people in indoor environments is becoming increasingly useful, especially in times of pandemic, when it is important to identify which people were close to each other in the same place and for how long. Considering this scenario, this study presents a distributed software architecture composed of four components that provide services for storing and querying data, meeting notifications, and identifying the devices involved. In a flexible way, Bluetooth beacons and Android devices can be used to represent both people and physical spaces. In addition, the proposed architecture makes it possible to determine attendance in real time, calculate the total time of stay, and check meetings between people. The effectiveness of the proposed solution was demonstrated through an experimental evaluation that compared the actual time a person stayed in a physical space with the time computed by the architecture.
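The dwell-time and meeting services described above can be sketched with a small interval-merging routine over beacon sightings. The gap threshold, names, and timestamps below are hypothetical and not from the paper's architecture.

```python
from collections import defaultdict
from itertools import combinations

def presence_intervals(sightings, gap=60):
    """sightings: iterable of (timestamp, person, space) beacon detections.
    Merges detections of the same person in the same space that are at most
    `gap` seconds apart into continuous presence intervals."""
    by_key = defaultdict(list)
    for t, person, space in sorted(sightings):
        by_key[(person, space)].append(t)
    intervals = defaultdict(list)
    for key, times in by_key.items():
        start = prev = times[0]
        for t in times[1:]:
            if t - prev > gap:            # too long a silence: close interval
                intervals[key].append((start, prev))
                start = t
            prev = t
        intervals[key].append((start, prev))
    return intervals

def meetings(intervals):
    """Pairs of people whose presence intervals overlap in the same space."""
    met = set()
    for (p1, s1), (p2, s2) in combinations(list(intervals), 2):
        if s1 != s2:
            continue
        for a0, a1 in intervals[(p1, s1)]:
            for b0, b1 in intervals[(p2, s2)]:
                if a0 <= b1 and b0 <= a1:
                    met.add((min(p1, p2), max(p1, p2), s1))
    return met

data = [(0, "ana", "lab"), (30, "ana", "lab"), (40, "bob", "lab"),
        (90, "ana", "lab"), (200, "bob", "hall")]
iv = presence_intervals(data)
print(sum(b - a for a, b in iv[("ana", "lab")]))  # total dwell: 90
print(meetings(iv))  # {('ana', 'bob', 'lab')}
```

A production system would additionally filter weak RSSI readings before counting a detection as presence.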

    Generalizable Deep-Learning-Based Wireless Indoor Localization

    The growing interest in indoor localization has been driven by its wide range of applications in areas such as smart homes, industrial automation, and healthcare. With the increasing reliance on wireless devices for location-based services, accurate estimation of device positions within indoor environments has become crucial. Deep learning approaches have shown promise in leveraging wireless parameters like Channel State Information (CSI) and Received Signal Strength Indicator (RSSI) to achieve precise localization. However, despite their success in achieving high accuracy, these deep learning models suffer from limited generalizability, making them unsuitable for deployment in new or dynamic environments without retraining. To address the generalizability challenge faced by conventionally trained deep learning localization models, we propose the use of meta-learning-based approaches. By leveraging meta-learning, we aim to improve the models' ability to adapt to new environments without extensive retraining. Additionally, since meta-learning algorithms typically require diverse datasets from various scenarios, which can be difficult to collect specifically for localization tasks, we introduce a novel meta-learning algorithm called TB-MAML (Task Biased Model Agnostic Meta Learning). This algorithm is specifically designed to enhance generalization when dealing with limited datasets. Finally, we conduct an evaluation to compare the performance of TB-MAML-based localization with conventionally trained localization models and other meta-learning algorithms in the context of indoor localization.
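The meta-learning premise, an initialization that adapts to a new environment with only a few gradient steps, can be sketched with a Reptile-style first-order update on toy linear "environments". This is a generic illustration of the idea, not the authors' TB-MAML algorithm, and the task distribution below is invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def task():
    """A toy 'environment': a linear signal-to-position map y = w*x + b
    whose parameters differ per environment (a stand-in for buildings)."""
    w, b = rng.uniform(0.5, 1.5), rng.uniform(-1.0, 1.0)
    x = rng.uniform(-1.0, 1.0, 10)
    return x, w * x + b

def adapt(theta, x, y, lr=0.1, steps=3):
    """Inner loop: a few MSE gradient steps on one environment's data."""
    w, b = theta
    for _ in range(steps):
        err = (w * x + b) - y
        w = w - lr * 2 * np.mean(err * x)
        b = b - lr * 2 * np.mean(err)
    return np.array([w, b])

# Outer loop (Reptile-style first-order meta-update): nudge the shared
# initialization toward each environment's adapted parameters, so a few
# inner steps suffice on environments never seen during meta-training.
theta = np.array([0.0, 0.0])
for _ in range(500):
    x, y = task()
    theta = theta + 0.1 * (adapt(theta, x, y) - theta)
```

MAML proper would differentiate through the inner loop; the first-order variant above keeps the sketch short while preserving the adapt-then-update structure.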

    Constrained Localization: A Survey

    Indoor localization techniques have been extensively studied in the last decade. Well-established technologies enable the development of Real-Time Location Systems (RTLS). A good body of publications has emerged, with several survey papers that provide a deep analysis of the research advances. Existing survey papers focus either on a specific technique and technology or on a general overview of indoor localization research. However, there is a need for a use-case-driven survey of both recent academic research and commercial trends, as well as a hands-on evaluation of commercial solutions. This work aims at helping researchers select the technology and technique suitable for developing low-cost, low-power localization systems capable of providing centimeter-level accuracy. The article is both a survey of recent academic research and a hands-on evaluation of commercial solutions. We introduce a specific use case as a guiding application throughout this article: localizing low-cost, low-power miniature wireless swarm robots. We define a taxonomy and classify academic research according to five criteria: Line of Sight (LoS) requirement, accuracy, update rate, battery life, and cost. We discuss localization fundamentals, the different technologies and techniques, as well as recent commercial developments and trends. Besides the traditional taxonomy and survey, this article also presents a hands-on evaluation of popular commercial localization solutions based on Bluetooth Angle of Arrival (AoA) and Ultra-Wideband (UWB). We conclude by discussing the five most important open research challenges: lightweight filtering algorithms, zero infrastructure dependency, low-power operation, security, and standardization.
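As an example of the range-based localization fundamentals such surveys cover (e.g., positioning from UWB range measurements), a 2-D position can be recovered from distances to known anchors by linearizing the range equations against one reference anchor and solving a least-squares system. The anchor layout below is arbitrary.

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Least-squares 2-D position from distances to known anchors.
    Subtracting the last anchor's equation ||p - a_n||^2 = r_n^2 from each
    ||p - a_i||^2 = r_i^2 cancels ||p||^2 and leaves a linear system."""
    A = 2 * (anchors[:-1] - anchors[-1])
    b = (np.sum(anchors[:-1] ** 2, axis=1) - np.sum(anchors[-1] ** 2)
         - ranges[:-1] ** 2 + ranges[-1] ** 2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

anchors = np.array([[0., 0.], [4., 0.], [0., 4.]])
true = np.array([1., 2.])
ranges = np.linalg.norm(anchors - true, axis=1)  # noiseless ranges
print(trilaterate(anchors, ranges))  # ≈ [1. 2.]
```

With noisy ranges and more than three anchors, the same least-squares form averages out measurement error; real systems typically follow it with a Kalman or particle filter.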

    Autonomous Sensing Nodes for IoT Applications

    This doctoral thesis fits into the energy harvesting framework, presenting the development of low-power nodes that meet the energy autonomy requirement and share common technologies and architectures, but are based on different energy sources and sensing mechanisms. The adopted approach evaluates multiple aspects of the system in its entirety (i.e., the energy harvesting mechanism, the choice of the harvester, the study of the sensing process, the selection of the electronic devices for processing, acquisition, and measurement, the electronic design, and the microcontroller unit (MCU) programming techniques), accounting for very challenging constraints such as the low amounts of harvested power (in the μW to mW range), the careful management of the available energy, and the coexistence of sensing and radio transmitting features with ultra-low power requirements. Commercial sensors are mainly used to meet the cost-effectiveness and large-scale reproducibility requirements; however, customized sensors for a specific application (soil moisture measurement), together with appropriate characterization and reading circuits, are also presented. Two different strategies have been pursued, which led to the development of two types of sensor nodes, referred to as 'sensor tags' and 'self-sufficient sensor nodes'. The first term refers to completely passive sensor nodes without an on-board battery as a storage element, which operate only in the presence of the energy source, drawing energy from it. In this thesis, an RFID (Radio Frequency Identification) sensor tag for soil moisture monitoring powered by the impinging electromagnetic field is presented. The second term identifies sensor nodes equipped with a battery rechargeable through energy scavenging, which works as a secondary reserve in the absence of the primary energy source.
In this thesis, quasi-real-time multi-purpose monitoring LoRaWAN nodes harvesting energy from thermoelectricity, diffused solar light, indoor white light, and artificial colored light are presented.
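A back-of-the-envelope power budget helps judge whether a harvested μW-to-mW supply can sustain a duty-cycled node of this kind. The sleep, burst, and period figures below are illustrative and not taken from the thesis.

```python
def avg_current_ua(sleep_ua, active_ma, active_ms, period_s):
    """Average supply current of a duty-cycled node (in microamps):
    the always-on sleep current plus the active burst (radio + sensing)
    amortized over the wake-up period."""
    return sleep_ua + (active_ma * 1000.0) * (active_ms / 1000.0) / period_s

# e.g. 2 uA sleep, a 40 mA burst for 100 ms every 60 s
print(round(avg_current_ua(2, 40, 100, 60), 2))  # 68.67 uA
```

At a 3 V supply, ~69 μA average is ~0.21 mW, which fixes the minimum sustained harvester output (plus conversion losses) the storage element must be able to cover.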

    Indoor Mapping and Reconstruction with Mobile Augmented Reality Sensor Systems

    Augmented Reality (AR) makes it possible to display virtual, three-dimensional content directly within the real environment. Rather than showing arbitrary virtual objects at arbitrary locations, AR technology can also be used to present geodata in situ, at the very place the data refer to. AR thus opens up the possibility of enriching the real world with virtual, location-based information. In this thesis, this flavor of AR is defined and discussed in depth as "Fused Reality". The practical value offered by the Fused Reality concept can be demonstrated well by its application to digital building models, where building-specific information (for example, the routing of pipes and cables inside the walls) can be displayed in the correct position on the real object. To realize such an indoor Fused Reality application, several basic requirements must be met. A given building can only be augmented with location-based information if a digital model of that building is available. While larger construction projects nowadays are often planned and executed with the aid of Building Information Modelling (BIM), so that a digital model is created together with the real building, digital models are usually not available for older existing buildings. Creating a digital model of an existing building manually is possible, but involves great effort. Furthermore, if a suitable building model is available, an AR device must be able to determine its own position and orientation in the building relative to this model in order to display augmentations in the correct place. This thesis examines and discusses various aspects of these problems.
First, different ways of capturing indoor building geometry with sensor systems are discussed. Subsequently, an investigation is presented into how well modern AR devices, which typically also feature a multitude of sensors, are themselves suited for use as indoor mapping systems. The resulting indoor mapping datasets can then be used to automatically reconstruct building models. To this end, an automated, voxel-based indoor reconstruction method is presented and quantitatively evaluated on four datasets captured for this purpose, with corresponding reference data. Furthermore, different approaches for localizing mobile AR devices within a building and the corresponding building model are discussed. In this context, the evaluation of a marker-based indoor localization method is also presented. Finally, a new approach for aligning indoor mapping datasets with the axes of the coordinate system is introduced.
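The first step of any voxel-based indoor reconstruction, quantizing a captured point cloud into an occupancy grid, can be sketched as follows. The voxel size and points are arbitrary, and this is only the entry step, not the thesis's full reconstruction method.

```python
import numpy as np

def voxelize(points, voxel=0.1):
    """Occupied-voxel set from a 3-D point cloud: each point is mapped to
    the integer index of the `voxel`-sized cube containing it. Later
    reconstruction stages classify these voxels into walls/floors/etc."""
    idx = np.floor(points / voxel).astype(int)
    return set(map(tuple, idx))

pts = np.array([[0.01, 0.02, 0.03],   # same 10 cm voxel as the next point
                [0.04, 0.05, 0.06],
                [0.95, 0.00, 0.00]])  # a different voxel
print(len(voxelize(pts)))  # 2 occupied voxels
```

The set representation makes occupancy queries and neighborhood checks O(1), which is why voxel grids are popular for indoor reconstruction from noisy mobile-mapping data.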

    A Mixed Reality application for Object detection with audiovisual feedback through MS HoloLenses

    As computer science develops and progresses, new technologies emerge. Recent advances in augmented reality and artificial intelligence have made these technologies pioneers of innovation and change in every field and industry. The fast-paced developments in computer vision and augmented reality have made it easier to analyze and understand the surrounding environment. Mixed and Augmented Reality can greatly extend a user's capabilities and experiences by bringing digital data directly into the physical world where and when it is most needed. Current smart glasses such as the Microsoft HoloLens excel at positioning within the physical environment; however, object recognition is still relatively primitive. With an additional semantic understanding of the wearer's physical context, intelligent digital agents can assist workers in warehouses, factories, greenhouses, etc., or guide consumers through the completion of physical tasks.
We present a mixed reality system that, using the sensors mounted on the Microsoft HoloLens headset and a cloud service, acquires and processes data in real time to detect and track different kinds of objects, and finally superimposes geographically coherent holographic tooltips and bounding boxes on the detected objects. This goal has been achieved despite the intrinsic headset hardware limitations by performing part of the overall computation in an edge/cloud environment. In particular, the heavier object detection algorithms, based on Deep Neural Networks (DNNs), are executed in a cloud RESTful system hosted by a server running on an NVIDIA Jetson TX2, a fast and power-efficient embedded AI computing device. We apply YOLOv3 (You Only Look Once) as the deep learning algorithm on the server side to process the data from the user side, using a model trained on the public MS COCO dataset. This algorithm improves detection speed and provides accurate results with minimal background errors. At the same time, we compensate for cloud transmission and computation latencies by running a similarity check between the current and previous HoloLens camera frames before applying the object detection algorithms to a frame, so as to avoid running the object detection task when the user's surrounding environment is largely unchanged and to limit complex computations as much as possible. This application also aims to use modern technology to help people with visual impairment or blindness. The user can issue a voice command to initiate an environment scan. Apart from the visual feedback provided for the detected objects, the application can read out the name of each detected object along with its relative position in the user's view. A distance announcement for the detected objects is also derived using the HoloLens's spatial model.
The wearable solution offers the opportunity to efficiently locate objects and support orientation without extensive training of the user.
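The frame-similarity gate described above can be approximated with a simple mean-absolute-difference check between consecutive grayscale frames. The threshold, frame size, and metric below are hypothetical; the paper does not specify its exact similarity measure.

```python
import numpy as np

def should_detect(prev, curr, threshold=12.0):
    """Skip the expensive cloud-side object detection when the current
    camera frame is nearly identical to the last processed one, judged by
    mean absolute pixel difference between grayscale frames."""
    if prev is None:            # nothing processed yet: always detect
        return True
    diff = np.mean(np.abs(curr.astype(np.float32) - prev.astype(np.float32)))
    return diff > threshold

rng = np.random.default_rng(1)
frame = rng.integers(0, 256, (48, 64), dtype=np.uint8)
print(should_detect(frame, frame))        # False: identical frames, skip
print(should_detect(frame, 255 - frame))  # True: scene changed, detect
```

Downsampling frames before the comparison keeps the check far cheaper than the detection round-trip it is meant to avoid.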