
    Sub-Nanosecond Time of Flight on Commercial Wi-Fi Cards

    Time-of-flight, i.e., the time a signal takes to travel from transmitter to receiver, is perhaps the most intuitive way to measure distances using wireless signals. It is used in major positioning systems such as GPS, RADAR, and SONAR. However, attempts at using time-of-flight for indoor localization have failed to deliver acceptable accuracy due to fundamental limitations in measuring time on Wi-Fi and other consumer RF technologies. While the research community has developed alternatives for RF-based indoor localization that do not require time-of-flight, those approaches have their own limitations that hamper their use in practice. In particular, many existing approaches need receivers with large antenna arrays, while commercial Wi-Fi nodes have two or three antennas. Other systems require fingerprinting the environment to create signal maps. More fundamentally, none of these methods supports indoor positioning between a pair of Wi-Fi devices without third-party support. In this paper, we present a set of algorithms that measure time-of-flight to sub-nanosecond accuracy on commercial Wi-Fi cards. We implement these algorithms and demonstrate a system that achieves accurate device-to-device localization, i.e., it enables a pair of Wi-Fi devices to locate each other without any support from the infrastructure, not even the location of the access points
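The ranging relation underlying time-of-flight systems is simple to state: a one-way flight time t corresponds to a distance c·t, so a 1 ns timing error already means roughly 0.3 m of round-trip distance error. A minimal sketch of the idea (our illustration, not the paper's algorithm; the round-trip and processing-delay handling here is an assumption):

```python
# Minimal time-of-flight ranging sketch (illustrative, not the paper's method).
# Distance is recovered from a round-trip measurement so the two devices do
# not need synchronized clocks; the responder's turnaround delay is assumed known.
C = 299_792_458.0  # speed of light in m/s

def tof_distance(t_round_ns, t_proc_ns=0.0):
    """Estimate one-way distance in meters from a round-trip time in ns."""
    t_flight_s = (t_round_ns - t_proc_ns) * 1e-9 / 2.0  # one-way flight time
    return C * t_flight_s

# A 100 ns round trip with no processing delay corresponds to ~15 m,
# which is why sub-nanosecond resolution matters for indoor accuracy.
```

The division by two reflects that the signal traverses the link twice; any unmodeled turnaround delay at the responder shows up directly as a distance bias.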

    SANTO: Social Aerial NavigaTion in Outdoors

    In recent years, advances in remote connectivity, the miniaturization of electronic components, and computing power have led to the integration of these technologies into everyday devices such as cars and aerial vehicles. Among these, a consumer-grade option that has gained popularity is the drone, or unmanned aerial vehicle, namely the quadrotor. Although drones have only recently been used for commercial applications, their inherent potential for tasks where small, intelligent devices are needed is huge. However, while the integrated hardware has advanced exponentially, the software used for these applications has not yet been exploited to the same extent. Recently, this shift has become visible in the improvement of common robotics tasks such as object tracking and autonomous navigation. Moreover, these challenges grow when the dynamic nature of the real world is taken into account, where knowledge of the current environment is constantly changing. Such settings arise in the improvement of human-robot interaction, where the potential use of these devices is clear and algorithms are being developed to exploit it. Using the latest advances in artificial intelligence, human brain behavior is approximated by so-called neural networks, so that the computing system performs as similarly as possible to a human. To this end, the system learns by trial and error, which, much like human learning, requires a considerable set of prior experiences for the algorithm to retain what it has learned. Applying these technologies to human-robot interaction narrows the gap. Even so, from a bird's-eye view, a noticeable share of the time spent applying these technologies goes into curating a high-quality dataset, to ensure that the learning process is optimal and no wrong actions are retained.
It is therefore essential to have a development platform in place that enforces these principles throughout the whole process of creating and optimizing the algorithm. In this work, several existing shortcomings found in pipelines of this computational scale are exposed, each approached in an independent and simple manner, so that the proposed solutions can be leveraged by as many workflows as possible. On one side, this project concentrates on reducing the number of bugs introduced by flawed data, helping researchers focus on developing more sophisticated models. On the other side, it addresses the shortage of integrated development systems for this kind of pipeline, with special care for those using simulated or controlled environments, with the goal of easing the continuous iteration of these pipelines. Thanks to the increasing popularity of drones, the research and development of autonomous capabilities has become easier. However, due to the challenge of integrating multiple technologies, the available software stack for this task is limited. In this thesis, we highlight the divergences among unmanned-aerial-vehicle simulators and propose a platform that allows faster and more in-depth prototyping of machine learning algorithms for these drones

    Advanced crime scene mapping and technology course design

    The purpose of this project is to develop an Advanced Crime Scene Mapping and Technology course that aims to strengthen crime scene documentation and mapping skills, enhance cognitive abilities, and introduce students to advanced digital technologies that are gaining popularity in several forensic science disciplines. In particular, recent advancements in 3D laser scanning, mapping, and drone technology have presented the fields of crime scene investigation and reconstruction with many exciting new possibilities. However, due to several limitations regarding the cost of equipment and training, the availability of resources, time constraints, and limited knowledge, it is often difficult for agencies to integrate new tools into their investigative processes. This course endeavors to help alleviate some of these issues by providing students with a basic knowledge and understanding of relevant new technologies while keeping them firmly grounded in the fundamental principles of crime scene processing and reconstruction. The content and structure of this course are designed to be flexible so it can accommodate rapid changes in technological advancements and device regulations. As such, complete instructions and tutorials are not included for specific brands of equipment and software; instead, the course focuses on general concepts and procedures that can be applied to most similar devices

    Design of a Drone-Flight-Enabled Wireless Isolation Chamber

    The next wave of drone applications is moving from repeatable, single-drone activities such as evaluating propagation environments to team-based, multi-drone objectives such as drone-based emergency services. In parallel, testbeds have sought to evaluate emerging concepts such as highly-directional and distributed wireless communications. However, there is a lack of intersection between these two lines of work to characterize the impact of the drone body, antenna placement, swarm topologies, and multi-dimensional connectivity needs, which requires in-flight experimentation within a surrounding testbed infrastructure. In this work, we design a drone-flight-enabled isolation chamber to capture the complex spatial wireless channel relationships that drone links experience as applications scale from single-drone to swarm-level networks within a shared three-dimensional space. Driven by the challenges of outdoor experimentation, we identify the need for a highly-controlled indoor environment where external factors can be mitigated. To do so, we first build an open-source drone platform that provides programmable control with visibility into the internal flight control system and sensors, enabling specialized coordination and accurate, repeatable positioning within the isolated environment. We then design a wireless data acquisition system and integrate distributed software-defined radios (SDRs) in order to inspect multi-dimensional wireless behavior from the surrounding area. Finally, we demonstrate the value of measurement perspectives from diverse altitudes and spatial locations under a shared notion of time

    Social-aware drone navigation using social force model

    Robot navigation is one of the hardest challenges to deal with, because real environments involve highly dynamic objects moving in all directions. The ideal goal is safe navigation within the environment, avoiding obstacles and reaching the proposed goal. Nowadays, with the latest advances in technology, we can see robots almost everywhere, which leads us to think about the robot's role in the future and where we will find them; it is no exaggeration to say that flying and land-based robots will practically live together with people, interacting in our houses, streets, and shopping centers. Moreover, we will notice their presence gradually inserted into our human societies, performing ever more human tasks that only a few years ago were unthinkable. Therefore, if we think about robots moving or flying around us, we must consider safety, the distance the robot should keep to make humans feel comfortable, and the different reactions people may have. The main goal of this work is to accompany people using a flying robot. The term social navigation points the way when we talk about a social environment: robots must be able to navigate among humans while giving a sense of security to those walking close to them. In this work, we present the Social Force Model, which states that the social interaction between persons and objects is inspired by the fluid dynamics defined by Newton's equations, and we also introduce the extended version, which complements the initial method with a human-robot interaction force. In the robotics field, tools that support development and implementation are crucial. The fast advance of technology gives the international community access to cheaper and more compact hardware and software than a decade ago.
It is becoming more and more usual to have access to powerful technology that helps us run complex algorithms; because of that, we can fit larger systems into a reduced space, making robots more intelligent, more compact, and more robust against failures. Our case was no exception: in the next chapters we present the procedure we followed to implement the approaches, supported by different simulation tools and software. Because of the nature of the problem we were facing, we used the Robot Operating System (ROS) along with Gazebo, which helped us get a good outlook on how the code would work in real-life experiments. In this work, both real and simulated experiments are presented, in which we show the interaction produced by the 3D Aerial Social Force Model between humans, objects, and, in this case, the AR.Drone, a flying drone belonging to the Instituto de Robótica e Informática Industrial. We focus on making the drone's navigation more socially acceptable to the humans around it; the main purpose of the drone is to accompany a person, whom we call the "main" person in this work, navigating side-by-side with a behavior dictated by forces exerted by the environment, while remaining as socially acceptable as possible to the remaining humans around. We also present a comparison between the 3D Aerial Social Force Model and the Artificial Potential Fields method, a well-known approach widely used in robot navigation, describing both methods and the forces each one involves. Along with these two models, there is another important topic to introduce. As we said, the robot must be able to accompany a pedestrian on his or her way; for that reason, forecasting capacity is an important feature, since the robot does not know the final destination of the human it accompanies, and it is essential to give it the ability to predict human movements.
In this work, we used the differences between past position values to estimate how much the position changes over time. This gives us an accurate idea of how the human will behave and which direction he or she will take next. Furthermore, we present a description of the human motion prediction model based on linear regression. The motivation behind building a regression model was the simplicity of the implementation, its robustness, and the very accurate results of the approach. The previous positions of the main human are taken in order to forecast his or her position over the next seconds. This is done with the main purpose of letting the drone know the direction the human is taking, so that it can move forward beside the human, as if accompanying him or her. The optimization of the linear regression model, to find the right weights, was carried out by gradient descent, also implementing the RMSprop variant in order to reach convergence faster. The strategy followed to build the prediction model is explained in detail later in this work. The presence of social robots has grown during the past years; many researchers have contributed, and many techniques are being used to give robots the capacity to interact safely and effectively with people. It is a hot topic that has matured a lot, but much research remains to be done
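The prediction scheme described above can be sketched in a few lines: a linear model maps the last two observed positions to the next one, with weights fitted by gradient descent using the RMSprop update. This is a minimal illustration of the idea, not the thesis implementation; all hyperparameter values are assumptions:

```python
# Linear-regression motion prediction trained with RMSprop (illustrative sketch).
import math

def rmsprop_fit(X, y, lr=0.003, decay=0.9, eps=1e-8, steps=6000):
    """Fit weights w so that dot(x, w) ~ y, via RMSprop-scaled gradient descent."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    cache = [0.0] * d
    for _ in range(steps):
        # Mean-squared-error gradient over the whole dataset.
        grad = [0.0] * d
        for xi, yi in zip(X, y):
            err = sum(wj * xij for wj, xij in zip(w, xi)) - yi
            for j in range(d):
                grad[j] += 2.0 * err * xi[j] / n
        for j in range(d):
            # RMSprop: scale each step by a running RMS of past gradients.
            cache[j] = decay * cache[j] + (1 - decay) * grad[j] ** 2
            w[j] -= lr * grad[j] / (math.sqrt(cache[j]) + eps)
    return w

# Toy example: a pedestrian walking in a straight line at constant speed.
pos = [0.5 * t for t in range(20)]
X = [[pos[i], pos[i + 1]] for i in range(18)]  # features: last two positions
y = [pos[i + 2] for i in range(18)]            # target: the next position
w = rmsprop_fit(X, y)
pred = w[0] * pos[-2] + w[1] * pos[-1]         # forecast the next step
```

For this toy walk the exact solution is next = 2·last − previous (constant velocity), and the fitted weights converge toward it; per-coordinate gradient scaling is what lets RMSprop traverse the ill-conditioned loss valley faster than plain gradient descent.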

    A prospective geoinformatic approach to indoor navigation for Unmanned Air System (UAS) by use of quick response (QR) codes

    Dissertation submitted in partial fulfilment of the requirements for the degree of Master of Science in Geospatial Technologies. This research study explores a navigation system for autonomous indoor flight of Unmanned Aircraft Systems (UAS) that combines dead reckoning with an Inertial Navigation System (INS) and the use of low-cost artificial landmarks, Quick Response (QR) codes placed on the floor, and allows for fully autonomous flight with all computation done onboard the UAS on embedded hardware. We provide a detailed description of all system components and the application. Additionally, we show how the system is integrated with a commercial UAS and provide results of experimental autonomous flight tests. To our knowledge, this system is one of the first to allow complete closed-loop control and goal-driven navigation of a UAS in an indoor setting without requiring a connection to any external infrastructure
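The navigation idea can be sketched as dead reckoning that drifts between landmark sightings and is corrected whenever a surveyed QR code is decoded. The following is a hypothetical 2-D illustration; the function names and the blending weight are our assumptions, not the thesis implementation:

```python
# Dead reckoning with periodic QR-landmark corrections (illustrative sketch).

def dead_reckon(pos, vel, dt):
    """Propagate the (x, y) position estimate with velocity over dt seconds."""
    return (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)

def landmark_correction(est, qr_pos, weight=1.0):
    """Blend the drifting estimate with a surveyed QR-code position.
    weight=1.0 fully trusts the landmark, i.e. a hard reset of the estimate."""
    return (est[0] + weight * (qr_pos[0] - est[0]),
            est[1] + weight * (qr_pos[1] - est[1]))

# Integrate for two seconds, then correct when a QR code is decoded.
est = dead_reckon((0.0, 0.0), (1.0, 0.5), 2.0)
est = landmark_correction(est, (1.9, 1.1))  # snap to the known QR position
```

Because INS drift grows without bound, the accuracy of such a system is governed by the spacing of the landmarks rather than by the quality of the inertial sensors alone.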

    Interface iOS for control of an unmanned helicopter in ROS

    This work aims at controlling an unmanned helicopter with a mobile device running the iOS operating system. The goal is to implement a solution that can command the helicopter in three modes. The first is joystick control, the second sets a trajectory via Google Maps, and the last is indoor navigation, for which a precise map must be obtained. The work builds on the research of the Multi-Robotic Systems group at the Department of Cybernetics of the Czech Technical University in Prague

    Development of a Neural Network-Based Object Detection for Multirotor Target Tracking

    Unmanned aerial vehicles (UAVs) have, for the past few decades, seen increased popularity in industry and research centres. Despite this intense utilization by both markets, there exists an active demand for the development of autonomous guidance, navigation, and control strategies. One need relates to achieving a high level of autonomy to identify and track a target object. An effective technique for this set of tasks is neural networks. In the development and study of these networks, there is a distinct lack of substantive validation techniques to qualify network performance when implemented on a multirotor UAV. This thesis first describes the development of a neural network-based object detection subsystem for use in target tracking with an autonomous multirotor UAV. The second part of this thesis then utilizes a developed indoor multirotor testbed to externally verify the tracking performance of the multirotor UAV during an object-following maneuver
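A common way to close the loop from detections to flight commands is a proportional controller on the bounding box: yaw to center the target horizontally, and move forward or back to hold a desired apparent size. The sketch below is a hypothetical illustration; the gains, frame size, and function names are assumptions, not the thesis design:

```python
# Proportional tracking commands from a detector's bounding box (illustrative).

def track_command(bbox, frame_w=640, frame_h=480,
                  target_area_frac=0.05, k_yaw=0.002, k_fwd=4.0):
    """bbox = (x, y, w, h) in pixels; returns (yaw_rate, forward_velocity).

    yaw_rate turns the vehicle to center the target horizontally;
    forward_velocity holds the target's apparent size (a distance proxy).
    """
    x, y, w, h = bbox
    cx = x + w / 2.0                                # horizontal box center
    yaw_rate = k_yaw * (cx - frame_w / 2.0)         # steer target to center
    area_frac = (w * h) / float(frame_w * frame_h)  # apparent size of target
    forward_vel = k_fwd * (target_area_frac - area_frac)
    return yaw_rate, forward_vel

# A target right of center with a small apparent area yields a rightward yaw
# and a positive forward velocity (close the distance).
```

An external verification testbed, as used in the thesis, can then compare the commanded motion against ground-truth positions of both the vehicle and the target, isolating detector error from control error.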