
    Indoor Navigation and Manipulation using a Segway RMP

    This project utilized a Segway RMP as an assistive-technology platform, encompassing both the navigation and manipulation aspects of robotics. First, background research was conducted to develop a blueprint for the robot. The hardware, software, and configuration of the RMP were updated, and a robotic arm was designed to extend the RMP's capabilities. The robot was programmed to accomplish autonomous multi-floor navigation through the use of the ROS navigation stack, image detection, and a GUI; it can navigate the hallways of the building, using the elevator to move between floors. The robotic arm was designed to accomplish tasks such as pressing a button and picking an object up from a table. The Segway RMP is intended to be used and expanded upon as a robotics research platform.
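The multi-floor behaviour described in the abstract (hallway navigation plus elevator use, with the arm pressing buttons) can be sketched as a simple action sequencer. Everything below is illustrative: the waypoint names and the action vocabulary are hypothetical, not the project's actual ROS code, where each navigation action would instead be a goal sent to the navigation stack.

```python
# Illustrative sketch: sequencing a multi-floor task as discrete steps.
# A real implementation would send each "navigate" action to the ROS
# navigation stack; here we only build the ordered list of actions.

def plan_multi_floor_route(start_floor, goal_floor, goal_room):
    """Return the ordered actions needed to reach a room, using the
    elevator when the goal is on another floor."""
    actions = [("navigate", f"elevator_floor_{start_floor}")]
    if start_floor != goal_floor:
        actions += [
            ("press_button", "call_elevator"),        # robotic-arm task
            ("enter_elevator", start_floor),
            ("press_button", f"floor_{goal_floor}"),  # arm presses panel button
            ("exit_elevator", goal_floor),
        ]
    actions.append(("navigate", goal_room))
    return actions

route = plan_multi_floor_route(1, 3, "room_305")
print(route[0])   # ('navigate', 'elevator_floor_1')
print(route[-1])  # ('navigate', 'room_305')
```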

    Stereo vision-based obstacle avoidance module on 3D point cloud data

    This paper presents a 3D vision-based obstacle avoidance and navigation module. For an autonomous system to operate in real-world conditions, it must be able to acquire data about its surroundings, interpret those data, and take appropriate action. In particular, it must be able to navigate cluttered, unstructured environments, avoid collision with any obstacle present (defined here as any data with a vertical orientation), and make decisions when the environment changes. This work proposes a two-step strategy. First, obstacle position and orientation are extracted from point cloud data using plane-based segmentation, and the segmented obstacle points are mapped, relative to the camera, into an occupancy grid to obtain obstacle cluster positions; the grid map is recorded for future use and for global navigation. Second, the obstacle positions in the grid map are used to plan a navigation path to the target goal that avoids occupied cells, and the path is modified, using the timed elastic band method, to avoid collision when the environment is updated or when the platform's motion deviates from the planned path.
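The first step of the strategy, projecting vertically oriented obstacle points into an occupancy grid around the camera, can be sketched as follows. This is a minimal illustration only; the cell size, the verticality threshold, and the point format with precomputed normals are assumptions, not the paper's actual parameters.

```python
# Minimal sketch: mark point-cloud points with near-vertical surface
# orientation as obstacles in a 2-D occupancy grid centred on the camera.
import math

CELL = 0.25          # grid resolution in metres (assumed)
VERTICAL_DOT = 0.3   # |normal . up| below this => surface is near-vertical

def to_cell(x, y):
    return (int(math.floor(x / CELL)), int(math.floor(y / CELL)))

def occupancy_from_points(points):
    """points: iterable of (x, y, z, nx, ny, nz) with unit normals.
    Returns the set of grid cells considered occupied."""
    occupied = set()
    for x, y, z, nx, ny, nz in points:
        # A wall-like (vertical) surface has an almost horizontal normal,
        # i.e. a small dot product with the up axis (0, 0, 1).
        if abs(nz) < VERTICAL_DOT:
            occupied.add(to_cell(x, y))
    return occupied

cloud = [
    (1.0, 0.0, 0.5, 1.0, 0.0, 0.0),  # wall point -> obstacle
    (2.0, 2.0, 0.0, 0.0, 0.0, 1.0),  # floor point -> free
]
print(occupancy_from_points(cloud))  # {(4, 0)}
```

A planner can then treat the occupied cell set as forbidden when searching for a path to the goal.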

    Geometric and Bayesian models for safe navigation in dynamic environments

    Autonomous navigation in open and dynamic environments is an important challenge, requiring the solution of several difficult research problems at the cutting edge of the state of the art. Broadly, these problems fall into three main categories: (a) SLAM in dynamic environments; (b) detection, characterization, and behavior prediction of potential moving obstacles; and (c) online motion planning and safe navigation decisions based on world-state predictions. This paper addresses some aspects of these problems and presents our latest approaches and results. The solutions we have implemented are mainly based on the following paradigms: a multiscale world representation of static obstacles based on the wavelet occupancy grid; adaptive clustering for moving-obstacle detection inspired by Kohonen networks and the growing neural gas algorithm; and characterization and motion prediction of the observed moving entities using Hidden Markov Models coupled with a novel algorithm for structure and parameter learning.
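The HMM-based motion prediction mentioned above can be illustrated with a single forward-filtering step over discrete motion behaviours. The behaviours, transition matrix, and observation likelihoods below are made-up toy values, not the models learned in the paper.

```python
# Toy forward step of an HMM over an obstacle's motion behaviour.
# States: 0 = "going straight", 1 = "turning". All numbers illustrative.

TRANS = [[0.9, 0.1],   # P(next state | current state)
         [0.3, 0.7]]

def forward_step(belief, likelihood):
    """One predict+update step; belief and likelihood are per-state lists."""
    # Predict: propagate the belief through the transition model.
    predicted = [sum(belief[i] * TRANS[i][j] for i in range(2))
                 for j in range(2)]
    # Update: weight by the observation likelihood, then normalise.
    unnorm = [predicted[j] * likelihood[j] for j in range(2)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

belief = [0.5, 0.5]
# An observation strongly consistent with "turning":
belief = forward_step(belief, likelihood=[0.2, 0.8])
print(belief)  # probability mass shifts toward state 1
```

Iterating this step over successive observations yields the behaviour estimate that the planner can use to predict where the obstacle is likely to move.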

    Multi-Support Gaussian Processes for Continuous Occupancy Mapping

    Robotic mapping enables an autonomous agent to build a representation of its environment from sensor information. In particular, occupancy mapping aims at classifying regions of space according to whether or not they are occupied (and therefore inaccessible to the agent). Traditional techniques rely on discretisation to perform this task. The problems tackled by this thesis stem from the discretisation of continuous phenomena and from the inherently inaccurate and large datasets typically created by state-of-the-art robotic sensors. To approach this challenge, we use statistical modelling to handle the noise in the data and create continuous occupancy maps. The proposed approach uses Gaussian processes, a non-parametric Bayesian inference framework based on kernels, to handle sensor noise and learn the dependencies among data points. The main drawback is the method's computational complexity, which grows cubically with the number of input points. The contributions of this work are twofold. First, we generalise kernels to handle inputs in the form of areas as well as points. This allows groups of spatially correlated data points to be condensed into a single entry, considerably reducing the size of the covariance matrix and enabling the method to deal efficiently with large amounts of data. Second, we create a mapping algorithm that uses Gaussian processes equipped with this kernel to build continuous occupancy maps. Experiments were conducted, using both synthetic and publicly available real data, to compare the presented algorithm with a similar previous method; they show it to be comparably accurate, yet considerably faster when dealing with large datasets.
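The area-support idea, generalising a point kernel so that one covariance entry can stand for a whole region of space, can be sketched by averaging an RBF kernel over samples of each input's support. This is a conceptual illustration under assumed hyperparameters; the thesis's actual kernel construction is not reproduced here.

```python
# Sketch: a kernel whose inputs may be points (floats) or areas
# (intervals), with intervals handled by averaging the point kernel.
import math

def k_point(x, y, ls=1.0):
    """Standard RBF (squared-exponential) kernel between two points."""
    return math.exp(-0.5 * ((x - y) / ls) ** 2)

def samples(a, n=8):
    """A point is used as-is; an interval (lo, hi) yields n midpoints."""
    if isinstance(a, tuple):
        lo, hi = a
        return [lo + (hi - lo) * (i + 0.5) / n for i in range(n)]
    return [a]

def k_multi(a, b):
    """Multi-support kernel: mean of the point kernel over both supports."""
    sa, sb = samples(a), samples(b)
    return sum(k_point(x, y) for x in sa for y in sb) / (len(sa) * len(sb))

# An interval correlates with a point inside it almost as strongly as
# the point does with itself, and the kernel stays symmetric.
print(round(k_multi((0.0, 1.0), 0.5), 3))
assert abs(k_multi((0.0, 1.0), 0.5) - k_multi(0.5, (0.0, 1.0))) < 1e-12
```

Because one interval entry replaces many point entries, the covariance matrix the Gaussian process must invert shrinks accordingly, which is the source of the speed-up the abstract describes.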

    Explorative coastal oceanographic visual analytics: oceans of data

    The widely acknowledged challenge to data analysis and understanding, resulting from the exponential increase in volumes of data generated by increasingly complex modelling and sampling systems, is a problem experienced by many researchers, including ocean scientists. This thesis explores a visualization and visual analytics solution for predictive studies of modelled coastal-shelf and estuarine hydrodynamics, undertaken to understand sea-level rise as a contribution to wider climate-change studies, and to underpin coastal-zone planning, flood prevention, and extreme-event management. These studies are complex and require numerous simulations of estuarine hydrodynamics, generating extremely large multi-field datasets. This type of data is acknowledged as difficult to visualize and analyse: its numerous attributes present significant computational challenges and ideally require a wide range of approaches to provide the necessary insight. These challenges are not easily overcome with the current visualization and analysis methodologies employed by coastal-shelf hydrodynamics researchers, who use several software systems to generate graphs, each taking considerable time to operate; it is therefore difficult to explore different scenarios and to examine the data interactively and visually. The thesis accordingly develops novel visualization and visual analytics techniques to help researchers overcome the limitations of existing methods (for example, in understanding key tidal components), analyse data in a timely manner, and explore different scenarios. There were a number of challenges: the size of the data, resulting in lengthy computing time, and many data values being plotted on a single pixel (overplotting).
    The thesis presents: (1) a new visualization framework (VINCA) that uses caching and hierarchical aggregation techniques to make the data more interactive, plus explorative coordinated multiple views to enable the scientists to explore the data. (2) A novel estuarine transect profiler and flux tool, which provides instantaneous flux calculations across an estuary. Measures of flux are of great significance in oceanographic studies, yet are notoriously difficult and time-consuming to calculate with the commonly used tools; this derived data is added back into the database for further investigation and analysis. (3) New views, including novel, dynamic, spatially aggregated Parallel Coordinate Plots (Sa-PCP), developed to provide different perspectives on the spatial, time-dependent data, along with methodologies for producing high-quality (journal-ready) output from the visualization tool. Finally, (4) the dissertation explores the use of hierarchical data structures and caching techniques to enable fast analysis on a desktop computer and to overcome the overplotting challenge for this data.
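The aggregation idea used against overplotting can be sketched as pre-binning values before plotting, so each screen position draws one summary instead of thousands of overlapping points. The bin count and the choice of min/mean/max as the kept statistics are illustrative assumptions, not VINCA's actual design.

```python
# Sketch: aggregate many samples per screen bin so each bin draws one
# summary (min / mean / max) instead of thousands of overplotted points.

def aggregate(xs, ys, n_bins):
    """Bin (x, y) samples by x and keep min/mean/max of y per bin."""
    lo, hi = min(xs), max(xs)
    width = (hi - lo) / n_bins or 1.0
    bins = {}
    for x, y in zip(xs, ys):
        b = min(int((x - lo) / width), n_bins - 1)  # clamp the top edge
        bins.setdefault(b, []).append(y)
    return {b: (min(v), sum(v) / len(v), max(v)) for b, v in bins.items()}

xs = [i / 1000 for i in range(10000)]
ys = [x * x for x in xs]
summary = aggregate(xs, ys, n_bins=10)
print(len(summary))  # 10 summaries instead of 10000 points
```

Caching such summaries at several bin resolutions gives the hierarchical structure that keeps interaction fast regardless of the raw dataset size.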

    Multiresolution Techniques for Real-Time Visualization of Urban Environments and Terrains

    In recent times we are witnessing a steep increase in the availability of data coming from real-life environments. Nowadays, virtually everyone connected to the Internet has instant access to a tremendous amount of data coming from satellite elevation maps, airborne time-of-flight scanners, digital cameras, street-level photographs, and even cadastral maps. As with other, more traditional types of media such as pictures and videos, users of digital exploration software expect commodity hardware to exhibit good performance for interactive purposes, regardless of the dataset size. In this thesis we propose novel solutions to the problem of rendering large terrain and urban models on commodity platforms, both for local and remote exploration. Our solutions build on the concept of multiresolution representation, where alternative representations of the same data with different accuracy are used to selectively distribute the computational power, and consequently the visual accuracy, where it is most needed based on the user's point of view. In particular, we introduce an efficient multiresolution data compression technique for planar and spherical surfaces applied to terrain datasets, which is able to handle huge amounts of information at a planetary scale. We also describe a novel data structure for compact storage and rendering of urban entities such as buildings, allowing real-time exploration of cityscapes from a remote online repository. Moreover, we show how recent technologies can be exploited to transparently integrate virtual exploration and general computer graphics techniques with web applications.
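The view-dependent refinement described above, spending detail only where the viewpoint needs it, is commonly driven by a screen-space-error test; a generic sketch of that heuristic (not the thesis's actual data structures, and with invented error values) looks like this:

```python
# Sketch: pick a level of detail per chunk so that its geometric error,
# projected to the screen, stays below a pixel tolerance.

def projected_error(geometric_error, distance,
                    screen_height=1080, fov_factor=1.0):
    """Approximate on-screen size (in pixels) of a world-space error."""
    return geometric_error / max(distance, 1e-6) * screen_height * fov_factor

def choose_lod(errors_by_level, distance, tolerance_px=2.0):
    """errors_by_level[0] is coarsest. Return the coarsest level whose
    projected error is within tolerance, else the finest level."""
    for level, err in enumerate(errors_by_level):
        if projected_error(err, distance) <= tolerance_px:
            return level
    return len(errors_by_level) - 1

errors = [8.0, 2.0, 0.5, 0.1]   # metres of error per LOD (coarse -> fine)
print(choose_lod(errors, distance=5000.0))  # far away: coarse level 0
print(choose_lod(errors, distance=100.0))   # close up: finest level 3
```

The same test decides what to fetch next during remote exploration: chunks whose coarse representation already satisfies the tolerance never need to be downloaded at full resolution.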

    Deep learning-based artifacts removal in video compression

    Title from PDF of title page, viewed December 15, 2021. Dissertation advisor: Zhu Li. Vita. Includes bibliographical references (pages 112-129). Thesis (Ph.D.)--School of Computing and Engineering, University of Missouri--Kansas City, 2021.
    The block-based coding structure in the hybrid video coding framework inevitably introduces compression artifacts such as blocking, ringing, etc. To compensate for these artifacts, extensive filtering techniques have been proposed in the loop of video codecs, capable of boosting the subjective and objective quality of reconstructed videos. Recently, neural network-based filters have been presented, drawing on the power of deep learning from large amounts of data. Although coding efficiency has improved over traditional methods in High-Efficiency Video Coding (HEVC), the rich features and information generated by the compression pipeline have not been fully utilized in the design of neural networks. Therefore, we propose a learning-based method to further improve coding efficiency. In addition, the point cloud is an essential format for three-dimensional (3-D) object capture and communication in Augmented Reality (AR) and Virtual Reality (VR) applications. In the current state-of-the-art video-based point cloud compression (V-PCC), a dynamic point cloud is projected onto geometry and attribute videos patch by patch, each represented by its texture, depth, and occupancy map for reconstruction. To deal with occlusion, each patch is projected onto near and far depth fields in the geometry video. If there are artifacts in the compressed two-dimensional (2-D) geometry video, they are propagated to the 3-D point cloud frames. In addition, in lossy compression there always exists a tradeoff between the rate of the bitstream and distortion (RD).
    Although some methods have been proposed to attenuate these artifacts and improve coding efficiency, the non-linear representation ability of the Convolutional Neural Network (CNN) has not been fully exploited. Therefore, we propose a learning-based approach to remove the geometry artifacts and improve compression efficiency. In addition, we propose using a CNN to improve the accuracy of the occupancy map video in V-PCC. To the best of our knowledge, these are the first learning-based solutions for geometry artifact removal in HEVC and occupancy map enhancement in V-PCC. Extensive experimental results show that the proposed approaches achieve significant gains in HEVC and V-PCC compared to state-of-the-art schemes. Contents: Residual-Guided In-Loop Filter Using Convolution Neural Network -- Deep learning geometry compression artifacts removal for video-based point cloud compression -- Convolutional Neural Network-Based Occupancy Map Accuracy Improvement for Video-based Point Cloud Compression.
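The rate-distortion tradeoff noted in the abstract is conventionally handled by minimising a Lagrangian cost J = D + lambda * R over the available coding options; a toy mode decision under that rule looks like this (the candidate modes and their numbers are invented for illustration):

```python
# Toy rate-distortion mode decision: pick the coding mode that
# minimises the Lagrangian cost J = D + lambda * R.

def best_mode(candidates, lam):
    """candidates: list of (name, distortion, rate_bits) tuples."""
    return min(candidates, key=lambda c: c[1] + lam * c[2])[0]

modes = [
    ("skip", 40.0, 2),    # cheap but distorted
    ("intra", 8.0, 60),   # accurate but expensive
    ("inter", 12.0, 20),  # middle ground
]
print(best_mode(modes, lam=0.05))  # low lambda favours low distortion: intra
print(best_mode(modes, lam=5.0))   # high lambda favours low rate: skip
```

An in-loop filter that lowers D at a small bit cost shifts this balance, which is how the learning-based filters in the dissertation translate into coding-efficiency gains.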

    Robottimanipulaattorin anturipohjainen liikesuunnittelu (Sensor-based motion planning for a robot manipulator)

    When teleoperating a manipulator, obtaining sufficient situational awareness is often difficult. The operator's task can be made easier by automatically taking care of supporting tasks, such as collision avoidance, enabling the operator to concentrate on the manipulation task. In this work, planning-based collision avoidance methods for a robot manipulator in a changing, unstructured environment are studied, and a sensor-based motion planning system is developed. The system perceives the environment using a three-dimensional range sensor and produces an occupancy grid map. Three motion planning algorithms based on rapidly exploring random trees (RRT) are compared on the planning time of the algorithm, the execution time of the planned motion, and the end-effector movement caused by the motion. For the experiments, the motion planning system was implemented using a Kinova JACO robot arm as the manipulator and a Microsoft Kinect as the sensor. RRT-based algorithms were found to be suitable for this kind of motion planning system. Of the algorithms compared, RRT-Connect was the fastest, but RRT* produced the best solution paths. The choice of algorithm depends on the relative value of path quality and solution speed, and is therefore application dependent.
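The RRT family compared above shares one core step: extending a tree toward a random sample while rejecting configurations in collision. A minimal 2-D version of that step, with a hypothetical box obstacle unrelated to the thesis's arm setup, is sketched below (for brevity only new nodes are collision-checked; a real planner also checks the connecting motion):

```python
# Minimal 2-D RRT sketch: grow a tree toward random samples, rejecting
# samples inside a box obstacle, until the goal region is reached.
import math
import random

OBSTACLE = (4.0, 6.0, 0.0, 6.0)  # axis-aligned box: x_min, x_max, y_min, y_max
STEP = 0.5

def in_collision(p):
    x, y = p
    x0, x1, y0, y1 = OBSTACLE
    return x0 <= x <= x1 and y0 <= y <= y1

def steer(a, b):
    """Move from a toward b by at most STEP."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    d = math.hypot(dx, dy)
    if d <= STEP:
        return b
    return (a[0] + dx / d * STEP, a[1] + dy / d * STEP)

def rrt(start, goal, iters=5000, seed=1):
    rng = random.Random(seed)
    nodes, parent = [start], {start: None}
    for _ in range(iters):
        # Goal bias: sample the goal itself 10% of the time.
        sample = goal if rng.random() < 0.1 else (rng.uniform(0, 10),
                                                  rng.uniform(0, 10))
        nearest = min(nodes,
                      key=lambda n: math.hypot(n[0] - sample[0],
                                               n[1] - sample[1]))
        new = steer(nearest, sample)
        if in_collision(new):
            continue
        nodes.append(new)
        parent[new] = nearest
        if math.hypot(new[0] - goal[0], new[1] - goal[1]) < STEP:
            path, n = [], new
            while n is not None:       # walk parents back to the start
                path.append(n)
                n = parent[n]
            return path[::-1]
    return None

path = rrt((1.0, 1.0), (9.0, 1.0))
print(path is not None and not any(in_collision(p) for p in path))
```

RRT-Connect speeds this up by growing trees from both start and goal, while RRT* additionally rewires the tree to shorten paths, which matches the speed-versus-quality tradeoff the thesis reports.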