
    Uncertainty Characterisation of Mobile Robot Localisation Techniques using Optical Surveying Grade Instruments

    Recent developments in localisation systems for autonomous robotic technology have been a driving factor in the deployment of robots in a wide variety of environments. Estimating sensor measurement noise is an essential step when producing uncertainty models for state-of-the-art robotic positioning systems. In this paper, a surveying-grade optical instrument, a Trimble S7 Robotic Total Station, is utilised to dynamically characterise the error of the positioning sensors of a ground-based unmanned robot. The error characteristics are used as inputs to the construction of a localisation Extended Kalman Filter which fuses Pozyx Ultra-wideband range measurements with odometry to obtain an optimal position estimate, while the path generated by the remote tracking feature of the Robotic Total Station serves as the ground-truth metric. Experiments show that the proposed method yields an improved position estimate compared to the Pozyx system's native firmware algorithm, as well as producing a smoother trajectory.
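    To make the fusion step concrete, the following is a minimal sketch of one Extended Kalman Filter cycle that predicts with a unicycle odometry model and corrects with a single ultra-wideband range to a known anchor. The anchor position, noise covariances, and motion model are illustrative assumptions, not values or code from the paper.

        # Sketch only: anchor position, covariances, and motion model are assumptions.
        import numpy as np

        def ekf_predict(x, P, v, w, dt, Q):
            """Propagate the state [x, y, theta] with a unicycle odometry model."""
            px, py, th = x
            x_pred = np.array([px + v * dt * np.cos(th),
                               py + v * dt * np.sin(th),
                               th + w * dt])
            F = np.array([[1.0, 0.0, -v * dt * np.sin(th)],
                          [0.0, 1.0,  v * dt * np.cos(th)],
                          [0.0, 0.0,  1.0]])
            return x_pred, F @ P @ F.T + Q

        def ekf_update_range(x, P, z, anchor, R):
            """Correct the state with one UWB range measurement to a known anchor."""
            dx, dy = x[0] - anchor[0], x[1] - anchor[1]
            r = np.hypot(dx, dy)
            H = np.array([[dx / r, dy / r, 0.0]])    # Jacobian of the range model
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x_new = x + (K @ np.array([z - r])).ravel()
            P_new = (np.eye(3) - K @ H) @ P
            return x_new, P_new

        # One filter cycle: odometry prediction, then a range correction.
        x, P = np.zeros(3), np.eye(3) * 0.1
        Q = np.diag([0.02, 0.02, 0.01])              # assumed process noise
        R = np.array([[0.05]])                       # assumed UWB range variance
        x, P = ekf_predict(x, P, v=0.5, w=0.1, dt=0.1, Q=Q)
        x, P = ekf_update_range(x, P, z=3.2, anchor=(2.0, 2.5), R=R)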

    Robots learn to behave: improving human-robot collaboration in flexible manufacturing applications

    The abstract is provided in the attachment.

    Safe navigation and human-robot interaction in assistant robotic applications

    The abstract is provided in the attachment.

    Mechatronic Systems

    Mechatronics, the synergistic blend of mechanics, electronics, and computer science, has evolved over the past twenty-five years, leading to a novel stage of engineering design. By integrating the best design practices with the most advanced technologies, mechatronics aims at realizing high-quality products while guaranteeing a substantial reduction in the time and costs of manufacturing. Mechatronic systems are manifold and range from machine components, motion generators, and power-producing machines to more complex devices, such as robotic systems and transportation vehicles. With its twenty chapters, which collect contributions from many researchers worldwide, this book provides an excellent survey of recent work in the field of mechatronics, with applications in various fields such as robotics, medical and assistive technology, human-machine interaction, unmanned vehicles, manufacturing, and education. We would like to thank all the authors who have invested a great deal of time to write such interesting chapters, which we are sure will be valuable to the readers. Chapters 1 to 6 deal with applications of mechatronics for the development of robotic systems. Medical and assistive technologies and human-machine interaction systems are the topic of chapters 7 to 13. Chapters 14 and 15 concern mechatronic systems for autonomous vehicles. Chapters 16 to 19 deal with mechatronics in manufacturing contexts. Chapter 20 concludes the book, describing a method for introducing mechatronics education in schools.

    Ad hoc Acoustic Network Aided Localization for micro-AUVs

    The navigation of Autonomous Underwater Vehicles (AUVs) is still an open research problem. This is further exacerbated when vehicles can only carry limited sensors, as is typically the case with micro-AUVs, which need to survey large marine areas that can be characterized by high currents and dynamic environments. To address this problem, this work investigates the use of ad hoc acoustic networks that can be established by a set of cooperating vehicles. Leveraging the network structure makes it possible to greatly improve the navigation of the vehicles and, as a result, to enlarge the operational envelope of vehicles with limited capabilities. The paper details the design and implementation of the network, together with specifics of the localization and navigation services made available to the vehicles by the network stack. Results are provided from a sea trial undertaken in Croatia in October 2019. They validate the approach, demonstrating the increased flexibility of the system and the navigational performance obtained: the deployed network was able to support long-range navigation of vehicles with no inertial navigation or Doppler Velocity Log (DVL) during a 9.5 km channel crossing, reducing the navigation error from approximately 7% to 0.27% of the distance traveled.
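    As a rough illustration of how ranges obtained over such an acoustic network can support navigation, the sketch below turns one-way travel times from nodes at known positions into a position fix via Gauss-Newton least squares. The node layout, sound speed, and travel times are hypothetical, and the paper's actual navigation service is considerably richer.

        # Sketch only: node positions, sound speed, and travel times are hypothetical.
        import numpy as np

        SOUND_SPEED = 1500.0                          # nominal speed of sound in seawater (m/s)

        def range_fix(node_xy, travel_times, x0, iters=10):
            """Estimate a 2-D position from one-way acoustic travel times via Gauss-Newton."""
            ranges = SOUND_SPEED * np.asarray(travel_times)
            x = np.asarray(x0, dtype=float)
            for _ in range(iters):
                diffs = x - node_xy                   # vectors from each node to the estimate
                pred = np.linalg.norm(diffs, axis=1)  # predicted ranges
                J = diffs / pred[:, None]             # Jacobian of the range model
                dx, *_ = np.linalg.lstsq(J, ranges - pred, rcond=None)
                x = x + dx
            return x

        nodes = np.array([[0.0, 0.0], [800.0, 0.0], [400.0, 900.0]])  # assumed node positions (m)
        travel_times = [0.45, 0.52, 0.38]                             # assumed one-way travel times (s)
        print(range_fix(nodes, travel_times, x0=[300.0, 300.0]))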

    Integrated HBIM-GIS Models for Multi-Scale Seismic Vulnerability Assessment of Historical Buildings

    The complexity of historical urban centres increasingly demands a strategic improvement in the methods and scale of knowledge concerning the vulnerability component of seismic risk. The scientific literature and Italian regulation policies increasingly favour a geographical, multi-scale point of view that considers systemic behaviour in damage and vulnerability assessment from an urban perspective, according to the scale of the data, rather than analysing the damage of single buildings. In this sense, a geospatial data science approach can contribute to generating, integrating, and establishing productive relations between urban databases and emergency-related data, in order to constitute a multi-scale 3D database supporting strategies for conservation and risk-assessment scenarios. The proposed approach developed a vulnerability-oriented GIS/HBIM integration in an urban 3D geodatabase, based on multi-scale data derived from urban cartography and 3D emergency-mapping data. Geometric and semantic information related to historical masonry buildings (specifically churches), together with structural data about architectural elements and damage, was integrated in the approach. This contribution aimed to support the levels of knowledge required by directives and vulnerability assessment studies, addressing the generative workflow phase, the role of HBIM models in GIS environments, and user-oriented webGIS solutions for sharing and public use, exploiting the database for expert operators involved in heritage preservation.

    MAC layer assisted localization in wireless environments with multiple sensors and multiple emitters

    Extreme emitter density (EED) RF environments, defined as 10k-100k emitters within a footprint of less than 1 square kilometer, are becoming increasingly common with the proliferation of personal devices containing myriad communication standards (e.g., WLAN, Bluetooth, 4G). Attendees at concerts, sporting events, and other large-scale events want to be connected at all times, creating tremendous spectrum-management challenges, especially in unlicensed frequencies such as the 2.4 GHz, 5 GHz, and 900 MHz Industrial, Scientific, and Medical (ISM) bands. In licensed bands, there are often critical communication systems, such as two-way radios for emergency personnel, which must be free from interference. Identification and localization of a non-conforming or interfering Emitter of Interest (EoI) is important for these critical systems. In this dissertation, research is conducted to improve localization in these EED RF environments by exploiting side information available at the Medium Access Control (MAC) layer. The primary contributions of this research are: (1) a testbed in Bobby Dodd football stadium consisting of three spatially distributed, time-synchronized RF Sensor Nodes (RFSN) collecting and archiving complex baseband samples for algorithm development and validation; (2) a modeling framework and analytical results on the benefits of exploiting the structure of the MAC layer for associating physical-layer measurements, such as Time Difference of Arrivals (TDoA), to emitters; and (3) a three-stage localization algorithm exploiting the time between packets and a constrained geometry to shrink the error ellipse of the emitter position estimate. The results are expected to improve localization accuracy in wireless environments where multiple sensors observe multiple emitters using a known communications protocol within a constrained geometry.
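    For orientation, the sketch below shows the core geometric step such a system relies on: estimating an emitter position from time differences of arrival measured against a reference sensor, solved by Gauss-Newton least squares. The sensor layout and TDoA values are invented, and the dissertation's MAC-layer association and constrained-geometry stages are not reproduced here.

        # Sketch only: sensor layout and TDoA values are invented.
        import numpy as np

        C = 3.0e8                                     # propagation speed (m/s)

        def tdoa_solve(sensors, tdoas, x0, iters=20):
            """Gauss-Newton estimate of an emitter position from TDoAs w.r.t. sensor 0."""
            sensors = np.asarray(sensors, dtype=float)
            rdiff = C * np.asarray(tdoas)             # convert TDoAs to range differences
            x = np.asarray(x0, dtype=float)
            for _ in range(iters):
                d = np.linalg.norm(x - sensors, axis=1)
                pred = d[1:] - d[0]                   # predicted range differences
                J = (x - sensors[1:]) / d[1:, None] - (x - sensors[0]) / d[0]
                dx, *_ = np.linalg.lstsq(J, rdiff - pred, rcond=None)
                x = x + dx
            return x

        sensors = [[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]]  # assumed RFSN layout (m)
        tdoas = [5.2e-8, -1.1e-8, 3.4e-8]                                   # assumed TDoAs (s)
        print(tdoa_solve(sensors, tdoas, x0=[50.0, 50.0]))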

    Probabilistic Human-Robot Information Fusion

    This thesis is concerned with combining the perceptual abilities of mobile robots and human operators to execute tasks cooperatively. It is generally agreed that a synergy of human and robotic skills offers an opportunity to enhance the capabilities of today’s robotic systems, while also increasing their robustness and reliability. Systems which incorporate both human and robotic information sources have the potential to build complex world models, essential for both automated and human decision making. In this work, humans and robots are regarded as equal team members who interact and communicate on a peer-to-peer basis. Human-robot communication is addressed using probabilistic representations common in robotics. While communication can in general be bidirectional, this work focuses primarily on human-to-robot information flow. More specifically, the approach advocated in this thesis is to let robots fuse their sensor observations with observations obtained from human operators. While robotic perception is well-suited for lower level world descriptions such as geometric properties, humans are able to contribute perceptual information on higher abstraction levels. Human input is translated into the machine representation via Human Sensor Models. A common mathematical framework for humans and robots reinforces the notion of true peer-to-peer interaction. Human-robot information fusion is demonstrated in two application domains: (1) scalable information gathering, and (2) cooperative decision making. Scalable information gathering is experimentally demonstrated on a system comprised of a ground vehicle, an unmanned air vehicle, and two human operators in a natural environment. Information from humans and robots was fused in a fully decentralised manner to build a shared environment representation on multiple abstraction levels. Results are presented in the form of information exchange patterns, qualitatively demonstrating the benefits of human-robot information fusion. The second application domain adds decision making to the human-robot task. Rational decisions are made based on the robots’ current beliefs which are generated by fusing human and robotic observations. Since humans are considered a valuable resource in this context, operators are only queried for input when the expected benefit of an observation exceeds the cost of obtaining it. The system can be seen as adjusting its autonomy at run-time based on the uncertainty in the robots’ beliefs. A navigation task is used to demonstrate the adjustable autonomy system experimentally. Results from two experiments are reported: a quantitative evaluation of human-robot team effectiveness, and a user study to compare the system to classical teleoperation. Results show the superiority of the system with respect to performance, operator workload, and usability
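    A hedged sketch of the two central ideas is given below: Bayesian fusion of a human observation through a human sensor model, and a value-of-information test for deciding when the operator should be queried. The class labels, likelihood table, and query cost are invented for illustration and are not taken from the thesis.

        # Sketch only: labels, likelihoods, and the query cost are invented.
        import numpy as np

        labels = ["victim", "debris"]
        belief = np.array([0.5, 0.5])                    # robot's prior over the labels

        # Human sensor model: P(answer | true label), an assumed likelihood table.
        human_model = {"victim": np.array([0.9, 0.2]),
                       "debris": np.array([0.1, 0.8])}

        def fuse(belief, likelihood):
            """Bayes update: weight the belief by the observation likelihood and renormalise."""
            posterior = belief * likelihood
            return posterior / posterior.sum()

        def entropy(p):
            return float(-(p * np.log(p + 1e-12)).sum())

        def expected_entropy_after_query(belief):
            """Expected posterior entropy if the operator is asked (benefit of querying)."""
            return sum(float(belief @ lik) * entropy(fuse(belief, lik))
                       for lik in human_model.values())

        query_cost = 0.1                                 # assumed cost of interrupting the operator
        benefit = entropy(belief) - expected_entropy_after_query(belief)
        if benefit > query_cost:                         # only ask when it is worth it
            belief = fuse(belief, human_model["victim"]) # pretend the operator answered "victim"
        print(dict(zip(labels, belief.round(3))))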

    Deep Learning Based Methods for Outdoor Robot Localization and Navigation

    The number of elderly people is increasing around the globe. To support the growing ageing society, mobile robots are a viable choice for assisting the elderly in their daily activities. These activities take place anywhere, either indoors or outdoors. Although outdoor activities benefit the elderly in many ways, outdoor environments are challenging because of their unpredictable nature. Mobile robots supporting humans in outdoor environments must automatically traverse the various difficulties in these environments using suitable navigation systems. A navigation system is a core component of any mobile robot: it guides the robot to the destination where it performs its designated tasks. Various tools can be chosen for navigation systems, and outdoor environments are mostly open to conventional tools such as Global Positioning System (GPS) devices. In this thesis, three systems for localization and navigation of mobile robots based on visual data and deep learning algorithms are proposed.
    The first localization system is based on landmark detection. A Faster Regional Convolutional Neural Network (Faster R-CNN) detects landmarks and signs in the captured image, and a Feed-Forward Neural Network (FFNN) is trained to determine the robot's location coordinates and compass orientation from the detected landmarks. The dataset consists of images, geolocation data, and labeled bounding boxes used to train and test the two proposed localization methods. Results are reported as absolute errors between the localization results and the reference geolocation data in the dataset.
    The second system is a navigation system based on visual data and a deep reinforcement learning algorithm called Deep Q Network (DQN). The DQN automatically guides the mobile robot using visual data in the form of images received from the single Universal Serial Bus (USB) camera attached to the robot. A DQN consists of a deep neural network, namely a convolutional neural network (CNN), and a reinforcement learning algorithm named Q-Learning; it makes decisions from visual input using experience gathered from the consequences of trial-and-error attempts. Our DQN agents are trained in simulation environments provided by a platform based on the First-Person Shooter (FPS) game ViZDoom. Training in simulation avoids any possible damage to the real robot during the trial-and-error process, and the simulated perspective matches that of a camera attached to the front of the mobile robot. Because there are many differences between the simulation and the real world, we apply a marker-based Augmented Reality (AR) algorithm that reduces these differences by altering the visual data from the camera with resources from the simulation. The second system is assigned a simple navigation task in which the starting location is fixed but the goal location is random within a designated zone. The robot must detect and track the goal object using the USB camera as its only sensor and, once started, move from its starting location to the designated goal object. Our DQN navigation method is tested both in simulation and on the real robot, and its performance is measured quantitatively via average total scores and the number of successful navigation attempts. The results show that our DQN can effectively guide a mobile robot to the goal object of the simple navigation task, both in simulation and in the real world.
    The third system employs a Transfer Learning (TL) strategy to reduce the training time and resources required when a new task is added to a DQN agent. The new task is to reach the goal while also avoiding obstacles, with the starting and goal locations both random within specified areas. The TL strategy uses the whole network of the DQN agent trained for the first, simple navigation task as the base for training the DQN agent for the second task. Training with our TL strategy decreases the exploration factor, which causes the agent to rely on the existing knowledge in the base network rather than randomly selecting actions during training. This reduces training time, allowing optimal solutions to be found faster than when training from scratch. We evaluate our TL strategy by comparing DQN agents trained with TL at different exploration-factor values against a DQN agent trained from scratch; additionally, the TL agents are trained with a reduced number of episodes to further demonstrate their performance. All DQN agents for the second navigation task are tested in simulation to avoid any possible and uncontrollable damage from the obstacles. Performance is measured through successful attempts and average total scores, as in the first navigation task. The results show that DQN agents trained via the TL strategy greatly outperform the agent trained from scratch, despite the lower number of training episodes.
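    The following is a minimal sketch, not the thesis implementation: a small CNN-based Q-network with epsilon-greedy action selection, and the transfer step of reusing the weights of the simple-navigation agent while starting from a lower exploration factor. The network shape, action set, image size, and epsilon values are assumptions.

        # Sketch only: network shape, action set, and epsilon values are assumptions.
        import random
        import torch
        import torch.nn as nn

        class QNetwork(nn.Module):
            """Small convolutional Q-network mapping a camera frame to Q-values."""
            def __init__(self, n_actions=3):              # e.g. forward, turn left, turn right
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(3, 16, kernel_size=8, stride=4), nn.ReLU(),
                    nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
                    nn.Flatten(),
                )
                self.head = nn.LazyLinear(n_actions)       # one Q-value per action

            def forward(self, x):
                return self.head(self.features(x))

        def select_action(net, frame, epsilon, n_actions=3):
            """Epsilon-greedy policy over the Q-values of one camera frame."""
            if random.random() < epsilon:
                return random.randrange(n_actions)
            with torch.no_grad():
                return int(net(frame.unsqueeze(0)).argmax(dim=1))

        base = QNetwork()
        base(torch.zeros(1, 3, 84, 84))                    # initialise the lazy layer
        # Transfer step: reuse the simple-navigation weights and shrink exploration
        # instead of training the obstacle-avoidance agent from scratch.
        transfer_agent = QNetwork()
        transfer_agent(torch.zeros(1, 3, 84, 84))
        transfer_agent.load_state_dict(base.state_dict())  # stand-in for loading saved weights
        epsilon_scratch, epsilon_transfer = 1.0, 0.3       # assumed starting exploration factors
        action = select_action(transfer_agent, torch.zeros(3, 84, 84), epsilon_transfer)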

    Wireless Network Analytics for Next Generation Spectrum Awareness

    The importance of networks, in their broad sense, is rapidly and massively growing in modern-day society thanks to unprecedented communication capabilities offered by technology. In this context, the radio spectrum will be a primary resource to be preserved and not wasted. Therefore, the need for intelligent and automatic systems for in-depth spectrum analysis and monitoring will pave the way for a new set of opportunities and potential challenges. This thesis proposes a novel framework for automatic spectrum patrolling and the extraction of wireless network analytics. It aims to enhance the physical layer security of next generation wireless networks through the extraction and the analysis of dedicated analytical features. The framework consists of a spectrum sensing phase, carried out by a patrol composed of numerous radio-frequency (RF) sensing devices, followed by the extraction of a set of wireless network analytics. The methodology developed is blind, allowing spectrum sensing and analytics extraction of a network whose key features (i.e., number of nodes, physical layer signals, medium access protocol (MAC) and routing protocols) are unknown. Because of the wireless medium, over-the-air signals captured by the sensors are mixed; therefore, blind source separation (BSS) and measurement association are used to estimate the number of sources and separate the traffic patterns. After the separation, we put together a set of methodologies for extracting useful features of the wireless network, i.e., its logical topology, the application-level traffic patterns generated by the nodes, and their position. The whole framework is validated on an ad-hoc wireless network accounting for MAC protocol, packet collisions, nodes mobility, the spatial density of sensors, and channel impairments, such as path-loss, shadowing, and noise. The numerical results obtained by extensive and exhaustive simulations show that the proposed framework is consistent and can achieve the required performance
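    As a toy illustration of the blind source separation stage, the sketch below separates synthetic traffic patterns mixed across several sensors using FastICA. The source patterns and mixing matrix are fabricated, and the thesis's measurement-association and analytics-extraction steps are not shown.

        # Sketch only: the source traffic patterns and mixing matrix are fabricated.
        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(0)
        t = np.arange(2000)

        # Two hypothetical emitter activity patterns (bursty vs. periodic traffic).
        s1 = (rng.random(t.size) < 0.05).astype(float)
        s2 = ((t % 100) < 10).astype(float)
        S = np.column_stack([s1, s2])

        # Each RF sensor observes a different linear mix of the emitters plus noise.
        A = np.array([[1.0, 0.6],
                      [0.4, 1.0],
                      [0.8, 0.8]])                        # assumed 3-sensor mixing matrix
        X = S @ A.T + 0.05 * rng.standard_normal((t.size, 3))

        ica = FastICA(n_components=2, random_state=0)
        S_hat = ica.fit_transform(X)                      # recovered patterns, up to scale and order
        print(S_hat.shape)                                # (2000, 2)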