266 research outputs found

    POINTING, ACQUISITION, AND TRACKING FOR DIRECTIONAL WIRELESS COMMUNICATIONS NETWORKS

    Directional wireless communications networks (DWNs) are expected to become a workhorse of the military, as they provide large network capacity in hostile areas where omnidirectional RF systems can put their users in harm's way. These networks will also be able to adapt to new missions, change topologies, and use different communications technologies, yet still reliably serve all their terminal users. DWNs also have the potential to greatly expand the capacity of civilian and commercial wireless communication. The inherently narrow beams in these types of systems require a means of steering them, acquiring the links, and tracking to maintain connectivity. This area of technological challenges encompasses all the issues of pointing, acquisition, and tracking (PAT). The two main technologies for DWNs are free-space optical (FSO) and millimeter-wave RF (mmW). FSO offers tremendous bandwidths and long ranges, and uses existing fiber-based technologies. However, it suffers from severe turbulence effects when passing through long (multi-kilometer) atmospheric paths, and can be severely affected by obscuration. MmW systems suffer far less from atmospheric effects, use much more sensitive coherent receivers, and have wider beam divergences allowing for easier pointing. They do, however, suffer from a lack of available small-sized power amplifiers, complicated RF infrastructure that must be steered with a platform, and the requirement that all acquisition and tracking be done with the data beam, as opposed to FSO, which uses a beacon laser for acquisition and a fast steering mirror for tracking. This thesis analyzes the many considerations required for designing and implementing an FSO PAT system, and extends this work to the rapidly expanding area of mmW DWN systems. Different types of beam acquisition methods are simulated and tested, and the tradeoffs between various design specifications are analyzed and simulated to give insight into how best to implement a transceiver platform. An experimental test-bed of six FSO platforms is also designed and constructed to test some of these concepts, along with the implementation of a three-node biconnected network. Finally, experiments have been conducted to assess the performance of fixed infrastructure routing hardware when operating with a physically reconfigurable RF network.
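
    As a concrete illustration of the acquisition problem, a spiral scan is one common way to search a pointing-uncertainty region with a narrow beam. The Python sketch below is illustrative only, not the thesis's simulation; the overlap factor and the dwell-step heuristic are assumptions. It counts how many dwell points an Archimedean spiral needs before the beam first covers a given target offset.

```python
import numpy as np

def spiral_acquisition(uncertainty_radius, beam_radius, target_offset, overlap=0.8):
    """Count dwells until an Archimedean spiral scan covers the target.

    The spiral r = pitch * theta / (2*pi) advances one overlapped beam
    width per revolution, so successive turns leave no coverage gaps.
    """
    pitch = 2.0 * beam_radius * overlap
    arc_step = beam_radius * overlap            # spacing between dwell points
    theta, dwells = 0.0, 0
    while True:
        r = pitch * theta / (2.0 * np.pi)
        if r > uncertainty_radius:
            return None                         # whole region scanned, no hit
        dwell = r * np.array([np.cos(theta), np.sin(theta)])
        dwells += 1
        if np.linalg.norm(dwell - target_offset) <= beam_radius:
            return dwells                       # beacon would be detected here
        theta += arc_step / max(r, arc_step)    # roughly constant arc length

# Example: average dwell count over Gaussian pointing errors (arbitrary units).
rng = np.random.default_rng(0)
offsets = rng.normal(scale=3.0, size=(1000, 2))
hits = [spiral_acquisition(10.0, 0.5, o) for o in offsets]
print(np.mean([h for h in hits if h is not None]))
```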

    Real-Time Human Detection Using Deep Learning on Embedded Platforms: A Review

    The detection of objects such as humans is very important for image understanding in the field of computer vision. Human detection in images can provide essential information for a wide variety of applications in intelligent systems. In this paper, human detection is carried out using deep learning, which has developed rapidly and achieved extraordinary success in various object detection implementations. Recently, several embedded systems have emerged as powerful computing boards that provide high processing capability using the graphics processing unit (GPU). This paper aims to provide a comprehensive survey of the latest achievements in this field brought about by deep learning techniques on embedded platforms. The NVIDIA Jetson was chosen as a low-power platform designed to accelerate deep learning applications. This review highlights the performance of human detection models such as PedNet, MultiPed, SSD MobileNet V1, SSD MobileNet V2, and SSD Inception V2 on edge computing. This survey aims to provide an overview of these methods and compare their performance in accuracy and computation time for real-time applications. The experimental results show that the SSD MobileNet V2 model provides the highest accuracy with the fastest computation time compared to the other models on our video datasets with several scenarios.
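
    To make the comparison concrete, the sketch below shows one typical way to run the best-performing surveyed detector, SSD MobileNet V2, through OpenCV's DNN module. It is a minimal illustration, not the paper's benchmark code; the model file names are placeholders for a frozen TensorFlow graph and its matching OpenCV config, and COCO class 1 is assumed to be "person".

```python
import cv2

# Placeholder file names for an exported SSD MobileNet V2 detector.
net = cv2.dnn.readNetFromTensorflow("frozen_inference_graph.pb",
                                    "ssd_mobilenet_v2_coco.pbtxt")

def detect_people(frame, conf_threshold=0.5):
    """Return (x1, y1, x2, y2, score) boxes for detected people."""
    h, w = frame.shape[:2]
    # SSD MobileNet V2 expects a 300x300 RGB input blob.
    blob = cv2.dnn.blobFromImage(frame, size=(300, 300), swapRB=True)
    net.setInput(blob)
    detections = net.forward()          # shape (1, 1, N, 7)
    boxes = []
    for det in detections[0, 0]:
        class_id, score = int(det[1]), float(det[2])
        if class_id == 1 and score >= conf_threshold:   # class 1 = person
            x1, y1, x2, y2 = (det[3:7] * [w, h, w, h]).astype(int)
            boxes.append((x1, y1, x2, y2, score))
    return boxes
```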

    Autonomous navigation and mapping of mobile robots based on 2D/3D cameras combination

    Due to the steadily increasing demand for systems that support everyday living, there is currently great interest in autonomous systems. Autonomous systems are used in houses, offices, and museums, as well as in factories. They can perform several kinds of tasks, such as cleaning, household assistance, transportation, security, education, and shop assistance, because they can be used to control processing time and to provide precise, reliable output. One research field of autonomous systems is mobile robot navigation and map generation: the mobile robot should carry out its tasks autonomously while generating a map of the environment in order to navigate. The main issue is that the mobile robot has to explore an unknown environment, in which no additional reference information is available, and generate a three-dimensional map of it. The robot has to estimate its position and pose within the map, which requires finding distinguishable objects; therefore, the selected sensors and the registration algorithm play a significant role. Sensors that can provide both depth and image data are still deficient. A new 3D sensor, the Photonic Mixer Device (PMD), captures the surrounding scene in real time at a high frame rate and delivers depth and gray-scale data. However, a higher-quality three-dimensional exploration of the environment requires details and textures of surfaces, which can only be obtained from a high-resolution CCD camera. This work therefore presents mobile robot exploration using the combination of a CCD and a PMD camera in order to create a three-dimensional map of the environment. In addition, a high-performance algorithm for 3D mapping and pose estimation in real time, using the Simultaneous Localization and Mapping (SLAM) technique, is proposed. The autonomously operating mobile robot should also handle tasks such as the recognition of objects in its environment, in order to accomplish various practical missions. The visual data from the CCD camera not only delivers high-resolution texture for the depth data, but is also used for object recognition. The Iterative Closest Point (ICP) algorithm uses two point clouds to determine the translation and rotation between two scans. Finally, the evaluation of the correspondences and the reconstruction of the map to resemble the real environment are included in this thesis.
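
    Since ICP is central to the registration step, the following sketch shows one ICP iteration in Python with NumPy and SciPy: nearest-neighbour matching followed by the closed-form (Kabsch/SVD) solution for the rigid transform. It is a textbook formulation under simplified assumptions (full overlap, no outlier rejection), not the thesis's optimized real-time implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(src, dst):
    """One ICP iteration for (N, 3) point clouds src and dst.

    Matches each source point to its nearest destination point, then
    solves for the rotation R and translation t minimizing the summed
    squared distances between the matched pairs.
    """
    matches = dst[cKDTree(dst).query(src)[1]]        # nearest neighbours
    c_src, c_dst = src.mean(axis=0), matches.mean(axis=0)
    H = (src - c_src).T @ (matches - c_dst)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                         # guard against reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return src @ R.T + t, R, t                       # aligned cloud, increment
```

    Iterating this step until the pose increment becomes small yields the scan-to-scan motion estimate used for pose tracking.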

    Dataset of Panoramic Images for People Tracking in Service Robotics

    In this thesis we provide a framework for constructing a guide robot for use in hospitals. The omnidirectional camera on the robot allows it to recognize and track the person who is following it. Furthermore, when guiding the individual to their desired location in the hospital, the robot must be aware of its surroundings and avoid collisions with other people or objects. To train and evaluate the robot's performance, we developed an auto-labeling framework for creating a dataset of panoramic videos captured by the robot's omnidirectional camera. We labeled each person in the video together with their real position in the robot's frame, enabling us to evaluate the accuracy of our tracking system and guide the development of the robot's navigation algorithms. Our research expands on earlier work that established a framework for tracking individuals using omnidirectional cameras. By developing a benchmark dataset, we want to contribute to the continuing effort to enhance the precision and dependability of these tracking systems, which is essential for the creation of effective guide robots in healthcare facilities. Our research has the potential to improve the patient experience and increase the efficiency of healthcare institutions by reducing staff time spent guiding patients through the facility.
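
    As an illustration of how panoramic detections can map into the robot frame, the sketch below converts an image column of an equirectangular panorama into a bearing and combines it with a range estimate to place the tracked person around the robot. The 360-degree span and the forward-facing zero column are assumptions about the camera mounting, not specifics of this dataset.

```python
import numpy as np

def column_to_bearing(x_pixel, image_width):
    """Bearing (rad) of a panorama column, assuming the equirectangular
    image spans a full 360 degrees and column 0 looks straight ahead."""
    theta = (x_pixel / image_width) * 2.0 * np.pi
    return np.arctan2(np.sin(theta), np.cos(theta))   # wrap to (-pi, pi]

def person_position(x_pixel, image_width, range_m):
    """2D position of the tracked person in the robot frame (x forward)."""
    theta = column_to_bearing(x_pixel, image_width)
    return range_m * np.cos(theta), range_m * np.sin(theta)
```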

    Comprehensive review of vision-based fall detection systems

    Vision-based fall detection systems have experienced fast development over recent years. To trace the course of their evolution and to help new researchers, the main audience of this paper, a comprehensive review has been made of all articles published in the main scientific databases in this area during the last five years. After a selection process, detailed in the Materials and Methods Section, eighty-one systems were thoroughly reviewed. Their characterization and classification techniques were analyzed and categorized. Their performance data were also studied, and comparisons were made to determine which classification methods work best in this field. The evolution of artificial vision technology, very positively influenced by the incorporation of artificial neural networks, has made fall characterization more robust to noise resulting from illumination phenomena or occlusion. Classification has also taken advantage of these networks, and the field is starting to use robots to make these systems mobile. However, the datasets used to train them lack real-world data, raising doubts about their performance on real falls of elderly people. In addition, there is no evidence of strong connections between the elderly community and the research community.

    Real Time Stereo Cameras System Calibration Tool and Attitude and Pose Computation with Low Cost Cameras

    Engineering of autonomous systems has many strands. The area in which this work falls, artificial vision, has become one of great interest in multiple contexts, with a strong focus on robotics. This work seeks to address and overcome some real difficulties encountered when developing technologies based on artificial vision systems: the calibration process and the real-time pose computation of robots. Initially, it aims to perform real-time calibration of the camera intrinsics (3.2.1) and extrinsics (3.3) of stereo camera systems, as needed for the main goal of this work: the real-time pose (position and orientation) computation of an active colored target with stereo vision systems. Designed to be intuitive, easy to use, and able to run in real-time applications, this work was developed for use either with low-cost, easy-to-acquire stereo vision systems or with more complex, high-resolution ones, in order to compute all the parameters inherent to such a system, namely the intrinsic values of each of the cameras and the extrinsic matrices relating the two cameras. The work is oriented towards underwater environments, which are visually very dynamic and computationally more complex owing to particularities such as light reflections and poor visibility. The available calibration information, whether generated by this tool or loaded from configurations produced by other tools, allows one, in a simple way, to proceed to the calibration of an environment colorspace and of the detection parameters of a specific target with active visual markers (4.1.1), useful within unstructured environments. With a calibrated system and environment, it is possible to detect and compute, in real time, the pose of a target of interest; the combination of position and orientation, or attitude, is referred to as the pose of an object. For analysis of performance and of the quality of the information obtained, these tools are compared with existing ones.
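
    For the pose computation step, the sketch below shows the standard way to triangulate the 3D position of one detected marker from a calibrated stereo pair with OpenCV. It is a generic illustration, not this tool's implementation; K1, K2, R, and t are assumed to come from a prior stereo calibration (e.g. cv2.stereoCalibrate), and pt1/pt2 are the marker's pixel coordinates in each camera.

```python
import cv2
import numpy as np

def triangulate_marker(K1, K2, R, t, pt1, pt2):
    """3D point (camera-1 frame) of a marker seen at pt1/pt2 (pixels)."""
    P1 = K1 @ np.hstack([np.eye(3), np.zeros((3, 1))])   # camera 1 at origin
    P2 = K2 @ np.hstack([R, t.reshape(3, 1)])            # camera 2 pose
    pts1 = np.asarray(pt1, dtype=float).reshape(2, 1)
    pts2 = np.asarray(pt2, dtype=float).reshape(2, 1)
    X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)      # homogeneous 4x1
    return (X_h[:3] / X_h[3]).ravel()
```

    Triangulating three or more non-collinear markers on the target in this way fixes its full pose, both position and orientation.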

    Instrumentation and validation of a robotic cane for transportation and fall prevention in patients with affected mobility

    Integrated master's dissertation in Engineering Physics (specialization in Devices, Microsystems and Nanotechnologies). Walking is known to be the primitive form of human locomotion, and it brings many benefits that encourage a healthy and active lifestyle. However, there are health conditions that make walking difficult, which can consequently result in worsening health and, in addition, lead to a greater risk of falls. Thus, the development of a fall detection and prevention system integrated into a walking aid would be essential to reduce these fall events and improve people's quality of life. To address these needs and limitations, this dissertation aims to validate and instrument a cane-type robot, called the Anti-fall Robotic Cane (ARCane), designed to incorporate a fall detection system and an actuation mechanism that enable the prevention of falls while assisting gait. To that end, a state-of-the-art review of robotic canes was carried out to acquire a broad and in-depth knowledge of the components, mechanisms, and strategies used, as well as the experimental protocols, main results, limitations, and challenges of existing devices. In a first stage, the objectives were to (i) refine the product's mission statement; (ii) study consumer needs; and (iii) update the target specifications of the ARCane, extending previous teamwork, to obtain a product with market-compatible design and engineering that meets the needs and desires of ARCane users. The hardware architecture of the ARCane was then established, and the electronic components that instrument the control, sensory, actuator, and power units were discussed; these were afterwards subjected to interoperability tests to validate the individual and collective functioning of the cane components. Regarding motion control, an innovative, low-cost, and intuitive motion control system was developed, capable of detecting the user's movement intention and recognizing the user's gait phases. This implementation was validated with six healthy volunteers who carried out gait trials with the ARCane to test its operability in a real-world environment. With the proposed motion control system, an accuracy of 97% was achieved for user motion intention detection and 90% for user gait phase recognition. Finally, a fall detection method and a fall prevention mechanism were designed for future implementation in the ARCane, based on methods applied to robotic canes in the literature. An improvement of the fall detection method was also proposed to overcome its associated limitations, together with detection devices to be implemented in the ARCane to achieve a complete fall detection system.
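
    As a sketch of how threshold-based motion intention detection can work on a robotic cane, the snippet below applies simple hysteresis to a forward force reading from the handle. The thresholds, units, and sensor interface are illustrative assumptions, not the ARCane's actual values or method.

```python
class IntentionDetector:
    """Hysteresis on the handle's forward force: motion is declared when
    the force exceeds start_n and retracted when it falls below stop_n,
    so sensor noise near a single threshold cannot rapidly toggle the
    drive motor on and off."""

    def __init__(self, start_n=5.0, stop_n=2.0):   # assumed thresholds, newtons
        self.start_n, self.stop_n = start_n, stop_n
        self.moving = False

    def update(self, forward_force_n):
        """Feed one force sample; returns True while motion is intended."""
        if not self.moving and forward_force_n > self.start_n:
            self.moving = True
        elif self.moving and forward_force_n < self.stop_n:
            self.moving = False
        return self.moving
```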

    Mobile Robots Navigation

    Mobile robot navigation includes different interrelated activities: (i) perception, as obtaining and interpreting sensory information; (ii) exploration, as the strategy that guides the robot to select the next direction to go; (iii) mapping, involving the construction of a spatial representation using the sensory information perceived; (iv) localization, as the strategy to estimate the robot's position within the spatial map; (v) path planning, as the strategy to find a path towards a goal location, optimal or not; and (vi) path execution, where motor actions are determined and adapted to environmental changes. The book addresses these activities by integrating results from the research work of several authors all over the world. Research cases are documented in 32 chapters organized into seven categories, described next.
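
    As a minimal concrete example of the path-planning activity (v), the sketch below runs breadth-first search on a 4-connected occupancy grid; it is a toy stand-in for the planners covered in the book's chapters, not taken from any of them.

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest cell path on a grid of 0 (free) / 1 (obstacle), or None."""
    rows, cols = len(grid), len(grid[0])
    parents = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:                     # reconstruct path back to start
            path = []
            while cell is not None:
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        r, c = cell
        for step in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = step
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and step not in parents:
                parents[step] = cell
                queue.append(step)
    return None

print(bfs_path([[0, 0, 1], [1, 0, 0], [0, 0, 0]], (0, 0), (2, 2)))
```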

    Global Shipping Container Monitoring Using Machine Learning with Multi-Sensor Hubs and Catadioptric Imaging

    We describe a framework for global shipping container monitoring using machine learning with multi-sensor hubs and infrared catadioptric imaging. A wireless mesh radio satellite tag architecture provides connectivity anywhere in the world, a significant improvement over legacy methods. We discuss the design and testing of a low-cost long-wave infrared catadioptric imaging device and multi-sensor hub combination as an intelligent edge computing system that, when equipped with physics-based machine learning algorithms, can interpret the scene inside a shipping container to make efficient use of expensive communications bandwidth. The histogram of oriented gradients and T-channel (HOG+) feature, introduced for human detection on low-resolution infrared catadioptric images, is shown to be effective for various mirror shapes designed to give wide volume coverage with controlled distortion. Initial results for through-metal communication with ultrasonic guided waves show promise in using the Dynamic Wavelet Fingerprint Technique (DWFT) to identify Lamb waves in a complicated ultrasonic signal.
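
    For reference, the baseline HOG descriptor that the HOG+ feature builds on can be computed with scikit-image as below. This sketch covers only the standard gradient-orientation part; the paper's T-channel extension for LWIR imagery is not reproduced, and the file name and window size are placeholders.

```python
from skimage import io, transform
from skimage.feature import hog

image = io.imread("container_lwir_frame.png", as_gray=True)  # placeholder file
window = transform.resize(image, (128, 64))     # person-sized detection window
descriptor = hog(window, orientations=9, pixels_per_cell=(8, 8),
                 cells_per_block=(2, 2), block_norm="L2-Hys")
print(descriptor.shape)                          # flattened feature vector
```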