
    AWARE: Platform for Autonomous self-deploying and operation of Wireless sensor-actuator networks cooperating with unmanned AeRial vehiclEs

    This paper presents the AWARE platform, which seeks to enable the cooperation of autonomous aerial vehicles with ground wireless sensor-actuator networks comprising both static nodes and mobile nodes carried by vehicles or people. In particular, the paper presents the middleware, the wireless sensor network, node deployment by means of an autonomous helicopter, and the surveillance and tracking functionalities of the platform. Furthermore, the paper presents the first general experiments of the AWARE project, which took place in March 2007 with the assistance of the Seville fire brigades.

    3D Indoor Positioning in 5G networks

    Over the past two decades, the challenge of accurately positioning objects or users indoors, especially in areas where Global Navigation Satellite Systems (GNSS) are not available, has been a significant focus for the research community. With the rise of 5G IoT networks, the quest for precise 3D positioning in various industries has driven researchers to explore a range of machine learning-based positioning techniques. Within this context, researchers are leveraging a mix of existing and emerging wireless communication technologies such as cellular, Wi-Fi, Bluetooth, Zigbee, Visible Light Communication (VLC), etc., as well as integrating any available useful data to enhance the speed and accuracy of indoor positioning. Methods for indoor positioning involve combining various parameters such as received signal strength (RSS), time of flight (TOF), time of arrival (TOA), time difference of arrival (TDOA), direction of arrival (DOA) and more. Among these, fingerprint-based positioning stands out as a popular technique in Real Time Localisation Systems (RTLS) due to its simplicity and cost-effectiveness. Positioning systems based on fingerprint maps or other relevant methods find applications in diverse scenarios, including malls for indoor navigation and geo-marketing, hospitals for monitoring patients, doctors, and critical equipment, logistics for asset tracking and optimising storage spaces, and homes for providing Ambient Assisted Living (AAL) services. A significant challenge facing all indoor positioning systems is the objective evaluation of their performance. This challenge is compounded by the coexistence of heterogeneous technologies and the rapid advancement of computational methods. There is vast potential for information fusion to be explored. These observations motivate our work. As a result, two novel algorithms and a framework are introduced in this thesis.
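The fingerprint-based positioning technique highlighted in this abstract can be sketched as follows: a radiomap of RSS vectors collected at known reference points is matched against a live RSS measurement with k-nearest neighbours, and the position estimate is the centroid of the closest reference points. All beacon values, positions, and the choice of k below are invented for illustration, not taken from the thesis.

```python
import numpy as np

# Hypothetical radiomap: RSS (dBm) from 3 beacons at 4 known reference points.
radiomap_rss = np.array([
    [-40.0, -70.0, -60.0],
    [-55.0, -50.0, -65.0],
    [-70.0, -45.0, -55.0],
    [-60.0, -60.0, -42.0],
])
radiomap_xy = np.array([
    [0.0, 0.0],
    [5.0, 0.0],
    [5.0, 5.0],
    [0.0, 5.0],
])

def knn_position(rss, k=2):
    """Estimate 2D position as the centroid of the k closest fingerprints."""
    dists = np.linalg.norm(radiomap_rss - rss, axis=1)
    nearest = np.argsort(dists)[:k]
    return radiomap_xy[nearest].mean(axis=0)

# A live measurement is matched against the map in the online phase.
print(knn_position(np.array([-50.0, -60.0, -62.0])))
```

The simplicity of this matching step is what the abstract credits for the technique's popularity; the cost lies in the offline survey that builds the radiomap.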

    Information Fusion for 5G IoT: An Improved 3D Localisation Approach Using K-DNN and Multi-Layered Hybrid Radiomap

    Indoor positioning is a core enabler for various 5G identity- and context-aware applications requiring precise and real-time simultaneous localisation and mapping (SLAM). In this work, we propose a K-nearest neighbours and deep neural network (K-DNN) algorithm to improve 3D indoor positioning. Our implementation uses a novel data-augmentation concept for the received signal strength (RSS)-based fingerprint technique to produce a 3D fused hybrid radiomap. In the offline phase, a machine learning (ML) approach is used to train a model on the collected radiomap dataset. The proposed algorithm is applied to the constructed hybrid multi-layered radiomap to improve 3D localisation accuracy. In our implementation, the proposed approach is based on the fusion of the prominent 5G IoT signals of Bluetooth Low Energy (BLE) and the ubiquitous WLAN. As a result, we achieved a 91% classification accuracy in 1D and submeter accuracy in 2D.
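A minimal sketch of the hybrid multi-layered radiomap idea as we read it: per reference point, RSS vectors from two technologies (WLAN and BLE) are fused into one feature vector, and simple midpoint interpolation augments the map with synthetic reference points. All values are invented, and the paper's actual augmentation scheme may well differ; this only illustrates the fusion concept.

```python
import numpy as np

wlan_layer = np.array([[-45.0, -60.0], [-55.0, -50.0]])  # 2 ref points x 2 APs
ble_layer  = np.array([[-70.0, -65.0], [-62.0, -72.0]])  # 2 ref points x 2 beacons
positions  = np.array([[0.0, 0.0], [4.0, 0.0]])

# Fuse the layers: each row is now [WLAN RSS..., BLE RSS...].
hybrid = np.hstack([wlan_layer, ble_layer])

# Midpoint augmentation between adjacent reference points densifies the map.
aug_rss = (hybrid[:-1] + hybrid[1:]) / 2
aug_pos = (positions[:-1] + positions[1:]) / 2

radiomap_rss = np.vstack([hybrid, aug_rss])
radiomap_pos = np.vstack([positions, aug_pos])
print(radiomap_rss.shape, radiomap_pos.shape)  # (3, 4) (3, 2)
```

The fused rows can then feed either the KNN stage or the DNN stage of a K-DNN-style pipeline.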

    Medical SLAM in an autonomous robotic system

    One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for registering multi-modal patient-specific data, both to enhance the surgeon's navigation capabilities by observing beyond exposed tissue surfaces and to provide intelligent control of robot-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This thesis addresses the ambitious goal of achieving surgical autonomy through the study of the anatomical environment, starting with the technology needed to analyse the scene: vision sensors. The first part of this thesis presents a novel endoscope for autonomous surgical task execution, which combines a standard stereo camera with a depth sensor. This solution introduces several key advantages, such as the possibility of reconstructing the 3D surface at a greater distance than traditional endoscopes. The problem of hand-eye calibration is then tackled, uniting the vision system and the robot in a single reference frame and increasing the accuracy of the surgical work plan. The second part of the thesis addresses the problem of 3D reconstruction and the algorithms currently in use. In MIS, simultaneous localization and mapping (SLAM) can be used to localize the pose of the endoscopic camera and build a 3D model of the tissue surface. Another key element for MIS is real-time knowledge of the pose of surgical tools with respect to the surgical camera and the underlying anatomy. Starting from the ORB-SLAM algorithm, we modified the architecture to make it usable in an anatomical environment by registering pre-operative information about the intervention to the map obtained from SLAM. Once the SLAM algorithm was proven usable in an anatomical environment, it was improved by adding semantic segmentation to distinguish dynamic features from static ones. All the results in this thesis are validated on training setups that mimic some of the challenges of real surgery, and on setups that simulate the human body, within the Autonomous Robotic Surgery (ARS) and Smart Autonomous Robotic Assistant Surgeon (SARAS) projects.
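The idea of using semantic segmentation to reject dynamic features before SLAM tracking can be sketched as follows: keypoints that fall on pixels labelled as dynamic (e.g. a moving instrument) are discarded, and only static-tissue features are passed on to pose estimation. The mask and keypoints below are invented toy data, not the thesis' actual pipeline.

```python
import numpy as np

# Toy semantic mask: True marks a dynamic region (e.g. an instrument).
mask = np.zeros((4, 4), dtype=bool)
mask[0:2, 0:2] = True  # top-left quadrant is "instrument"

# Detected keypoints as (row, col) pixel coordinates.
keypoints = [(0, 0), (1, 3), (3, 1), (1, 1)]

# Keep only keypoints on static tissue; these feed pose estimation.
static = [kp for kp in keypoints if not mask[kp]]
print(static)  # [(1, 3), (3, 1)]
```

Filtering before tracking prevents features on moving instruments from corrupting the camera-pose estimate and the tissue map.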

    GUARDIANS final report

    Emergencies in industrial warehouses are a major concern for firefighters. The large dimensions, together with the development of dense smoke that drastically reduces visibility, represent major challenges. The GUARDIANS robot swarm is designed to assist firefighters in searching a large warehouse. In this report we discuss the technology developed for a swarm of robots searching for and assisting firefighters. We explain the swarming algorithms which provide the functionality by which the robots react to and follow humans while no communication is required. Next we discuss the wireless communication system, which is a so-called mobile ad-hoc network. The communication network also provides one of the means to locate the robots and humans. Thus the robot swarm is able to locate itself and provide guidance information to the humans. Together with the firefighters we explored how the robot swarm should feed information back to the human firefighter. We have designed and experimented with interfaces for presenting swarm-based information to human beings.
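A communication-free follow behaviour of the kind the report describes is commonly built from attraction and repulsion terms computed from local sensing alone: each robot steers toward the sensed human position while being pushed away from neighbours that come too close. The gains, positions, and safety distance below are invented for illustration and are not the GUARDIANS algorithm itself.

```python
import numpy as np

def follow_step(robot, human, neighbours, k_att=0.5, k_rep=1.0, safe=2.0):
    """One displacement step: attraction to the human, repulsion from close robots."""
    step = k_att * (human - robot)          # attraction toward the human
    for n in neighbours:
        d = robot - n
        dist = np.linalg.norm(d)
        if 0 < dist < safe:                 # repel only inside the safety radius
            step += k_rep * (safe - dist) * d / dist
    return step

robot = np.array([0.0, 0.0])
human = np.array([4.0, 0.0])
neighbours = [np.array([-1.0, 0.0])]
print(follow_step(robot, human, neighbours))
```

Because both terms use only locally sensed positions, no inter-robot communication is needed, matching the requirement stated in the report.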

    Estimation of Scalar Field Distribution in the Fourier Domain

    In this paper we consider the problem of estimating a scalar field distribution from noisy measurements. The field is modelled as a sum of Fourier components/modes, where the number of modes retained and estimated determines the approximation quality in a natural way. An algorithm for estimating the modes using an online optimization approach is presented, under the assumption that the noisy measurements are quantized. The algorithm can estimate time-varying fields through the introduction of a forgetting factor. Simulation studies demonstrate the effectiveness of the proposed approach.
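The truncated Fourier-mode model can be illustrated with a batch least-squares fit (a simpler stand-in for the paper's online optimization): a field f(x) = a0 + Σ_k (a_k cos kx + b_k sin kx) is recovered from noisy, quantized samples. The true coefficients, noise level, and quantization step below are invented for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def design_matrix(x, n_modes):
    """Columns: 1, cos(x), sin(x), cos(2x), sin(2x), ... up to n_modes."""
    cols = [np.ones_like(x)]
    for k in range(1, n_modes + 1):
        cols += [np.cos(k * x), np.sin(k * x)]
    return np.column_stack(cols)

x = rng.uniform(0, 2 * np.pi, 200)
true_field = 1.0 + 2.0 * np.cos(x) - 0.5 * np.sin(2 * x)

# Noisy measurements quantized to a step of 0.25, as in the paper's setting.
measurements = np.round((true_field + 0.05 * rng.standard_normal(200)) * 4) / 4

theta, *_ = np.linalg.lstsq(design_matrix(x, 2), measurements, rcond=None)
print(theta)  # close to [1.0, 2.0, 0.0, 0.0, -0.5]
```

An online variant would update theta recursively per sample, down-weighting old data with a forgetting factor to track a time-varying field.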

    Autonomous Tissue Scanning under Free-Form Motion for Intraoperative Tissue Characterisation

    In Minimally Invasive Surgery (MIS), tissue scanning with imaging probes is required for subsurface visualisation to characterise the state of the tissue. However, scanning of large tissue surfaces in the presence of deformation is a challenging task for the surgeon. Recently, robot-assisted local tissue scanning has been investigated for motion stabilisation of imaging probes to facilitate the capturing of good quality images and reduce the surgeon's cognitive load. Nonetheless, these approaches require the tissue surface to be static or deform with periodic motion. To eliminate these assumptions, we propose a visual servoing framework for autonomous tissue scanning, able to deal with free-form tissue deformation. The 3D structure of the surgical scene is recovered and a feature-based method is proposed to estimate the motion of the tissue in real-time. A desired scanning trajectory is manually defined on a reference frame and continuously updated using projective geometry to follow the tissue motion and control the movement of the robotic arm. The advantage of the proposed method is that it does not require learning the tissue motion prior to scanning and can deal with free-form deformation. We deployed this framework on the da Vinci surgical robot using the da Vinci Research Kit (dVRK) for Ultrasound tissue scanning. Since the framework does not rely on information from the Ultrasound data, it can be easily extended to other probe-based imaging modalities.
    Comment: 7 pages, 5 figures, ICRA 202
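The projective-geometry update described in the abstract can be sketched as re-projecting trajectory points from the reference frame into the current frame with a homography estimated from tracked tissue features. The trajectory points and the homography below (an in-plane rotation plus translation) are made up for illustration; the paper's estimation from features is not reproduced here.

```python
import numpy as np

# A scanning trajectory defined in the reference image (pixel coordinates).
trajectory_ref = np.array([[10.0, 10.0], [20.0, 10.0], [30.0, 10.0]])

# Invented homography: 5-degree in-plane rotation plus a small translation.
theta = np.deg2rad(5.0)
H = np.array([
    [np.cos(theta), -np.sin(theta), 3.0],
    [np.sin(theta),  np.cos(theta), 1.0],
    [0.0,            0.0,           1.0],
])

def reproject(points, H):
    """Apply homography H to 2D points via homogeneous coordinates."""
    homog = np.hstack([points, np.ones((len(points), 1))])
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

print(reproject(trajectory_ref, H))
```

Repeating this per frame keeps the commanded probe path attached to the moving tissue, which is what allows the method to cope with free-form deformation without a prior motion model.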
