
    Wo bin ich? Beiträge zum Lokalisierungsproblem mobiler Roboter (Where Am I? Contributions to the Localization Problem of Mobile Robots)

    Self-localization addresses the problem of estimating the pose of a mobile robot with respect to a coordinate system of its workspace. It is needed for many mobile robot applications, such as material handling in industry, disaster-zone operations, vacuum cleaning, or even the exploration of foreign planets, and is thus an essential capability. The problem has received considerable attention over the last decades and can be decomposed into localization on a global and a local level. Global techniques localize the robot with respect to an a priori known map without any prior knowledge about its pose. In contrast, local techniques aim to correct the so-called odometry errors that accumulate during robot motion. This thesis mainly addresses the global localization problem. The proposed method is based on matching an incrementally built local map to an a priori known global map. This approach is time and memory efficient and robust both to structural ambiguity and to dynamic obstacles in non-static environments. The algorithm consists of several components, such as ego-motion estimation and global point cloud matching. Since most computers nowadays feature multi-core processors, map matching is performed by a parallelized variant of the Random Sample Matching (pRANSAM) approach originally devised for solving the 3D-puzzle problem. pRANSAM provides a set of hypotheses representing alleged robot poses. Techniques are discussed to postprocess these hypotheses, e.g. to decide when the robot pose has been determined with sufficient accuracy. Furthermore, runtime aspects are considered in order to facilitate localization in real time. Finally, experimental results demonstrate the robustness of the proposed method.

    The localization problem for mobile robots is the task of determining a robot's pose with respect to a given world coordinate system. The ability to self-localize is required in many application areas of mobile robots, such as material transport in industrial manufacturing, operations in disaster areas, or even the exploration of foreign planets. Existing approaches are divided according to whether localization takes place on a local or a global level. Global localization algorithms determine the robot's pose with respect to a world coordinate system without any prior knowledge, whereas local methods start from a rough pose estimate, e.g. from the robot's odometry data. This dissertation presents a new approach to the global localization problem. The underlying idea is to bring a local map and a global map into registration. The approach is highly robust both to ambiguities of the robot pose and to dynamic obstacles in non-static environments. The algorithm consists of three main components: a scan matcher that builds the local map, a method for matching the local map against the global map, and a component that decides when the robot is localized with sufficient confidence. The matching of local and global maps is performed by a parallelized variant of Random Sample Matching (pRANSAM), which yields a set of pose hypotheses. These hypotheses are then analyzed in a further step to determine the correct robot pose once it is sufficiently unambiguous. Extensive experiments demonstrate the reliability and accuracy of the method presented in this dissertation.
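The pair-sampling idea behind RANSAM-style map matching can be sketched as follows. This is a minimal 2D illustration with invented function names, tolerances, and scoring, not the thesis's parallelized implementation: repeatedly draw a point pair from the local map and a distance-compatible pair from the global map, compute the rigid transform they imply, and score it by how many local points it maps onto the global map.

```python
import numpy as np

def ransac_pose_hypotheses(local_pts, global_pts, n_iter=8000,
                           inlier_tol=0.1, pair_tol=0.05, rng=None):
    """Sample-based 2D map matching: draw point pairs, match them by
    pairwise distance, score the implied rigid transform by inliers."""
    rng = rng if rng is not None else np.random.default_rng(0)
    hypotheses = []
    for _ in range(n_iter):
        i, j = rng.choice(len(local_pts), size=2, replace=False)
        d_local = np.linalg.norm(local_pts[i] - local_pts[j])
        k, l = rng.choice(len(global_pts), size=2, replace=False)
        if abs(np.linalg.norm(global_pts[k] - global_pts[l]) - d_local) > pair_tol:
            continue  # pairwise distances incompatible: reject early
        # Rigid transform aligning local pair (i, j) onto global pair (k, l)
        a = local_pts[j] - local_pts[i]
        b = global_pts[l] - global_pts[k]
        theta = np.arctan2(b[1], b[0]) - np.arctan2(a[1], a[0])
        R = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
        t = global_pts[k] - R @ local_pts[i]
        moved = local_pts @ R.T + t
        # Inlier count: local points landing near some global point
        d = np.linalg.norm(moved[:, None, :] - global_pts[None, :, :], axis=2)
        inliers = int((d.min(axis=1) < inlier_tol).sum())
        hypotheses.append((inliers, theta, t))
    hypotheses.sort(key=lambda h: -h[0])
    return hypotheses
```

The returned list plays the role of the set of pose hypotheses mentioned above; deciding when the best hypothesis is unambiguous enough is the separate postprocessing step the thesis discusses.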

    Modeling Boundaries of Influence among Positional Uncertainty Fields

    Within a GIS environment, the proper use of information requires identifying the uncertainty associated with it. As such, a substantial amount of research has been dedicated to describing and quantifying spatial data uncertainty. Recent advances in sensor technology and image analysis techniques are making image-derived geospatial data increasingly popular. Along with developments in sensor and image analysis technologies have come departures from conventional point-by-point measurements: current advances support the transition from traditional point measures to techniques that extract complex objects as single entities (e.g., road outlines, buildings). As the methods of data extraction advance, so too must the methods of estimating the uncertainty associated with the data. Not only must object uncertainties be modeled, but the connections between these uncertainties must also be estimated. Current methods for determining spatial accuracy for lines and areas typically define a zone of uncertainty around the measured line, within which the actual line exists with some probability; yet the proper shape of this 'uncertainty band' remains a topic of much dissent within the research community. Less contemplated is the manner in which such areas of uncertainty interact and influence one another. The development of positional error models, from the epsilon band and error band to the rigorous G-band, has focused on statistical models for estimating independent line features, but these models are not suited to modeling the interactions between the uncertainty fields of adjacent features. At some point, the distributed areas of uncertainty around neighboring features intersect and overlap one another; in such instances, a feature's uncertainty zone is defined not only by its own measurement but also by the uncertainty associated with neighboring features.
    It is therefore useful to understand and model the interactions between adjacent uncertainty fields. This thesis presents an analysis of estimation and modeling techniques for spatial uncertainty, focusing on the interactions among fields of positional uncertainty for image-derived linear features. Such interactions are assumed to occur between linear features derived from varying methods and sources, allowing the application of an independent error model. A synthetic uncertainty map is derived for a set of linear and areal features, containing distributed fields of uncertainty for individual features. These uncertainty fields are shown to be advantageous for communication and user understanding, as well as being conducive to a variety of image processing techniques. Such techniques can combine overlapping uncertainty fields to model the interaction between them. Deformable contour models are used to extract sets of continuous uncertainty boundaries for linear features, and are subsequently applied to extract the boundary of influence shared by two uncertainty fields. These methods are then applied to a complex scene of uncertainties, modeling the interactions of multiple objects within the scene. The resulting boundary representations are distinct from previous independent error models, which do not take neighboring influences into account. By modeling the boundary of interaction among the uncertainties of neighboring features, a more integrated approach to error modeling and analysis can be developed for complex spatial scenes and datasets.
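The notion of interacting uncertainty fields can be made concrete with a toy sketch. All names and parameters here are invented, and a simple thresholded equality test stands in for the thesis's deformable contour models: two Gaussian positional-uncertainty fields are rasterized around line segments, and their shared boundary of influence is taken to be the pixels where the overlapping fields balance.

```python
import numpy as np

def uncertainty_field(grid_x, grid_y, segment, sigma):
    """Gaussian positional-uncertainty field around a line segment:
    field value decays with distance from the measured feature."""
    (x0, y0), (x1, y1) = segment
    px, py = grid_x - x0, grid_y - y0
    dx, dy = x1 - x0, y1 - y0
    # Parameter of the closest point on the segment, clipped to [0, 1]
    t = np.clip((px * dx + py * dy) / (dx * dx + dy * dy), 0.0, 1.0)
    dist = np.hypot(px - t * dx, py - t * dy)  # distance to the segment
    return np.exp(-0.5 * (dist / sigma) ** 2)

def boundary_of_influence(f1, f2, eps=0.02, floor=0.05):
    """Pixels where two fields are both non-negligible and nearly equal:
    a discrete boundary of influence between neighboring features."""
    overlap = np.minimum(f1, f2) > floor
    return overlap & (np.abs(f1 - f2) < eps)
```

For two parallel segments with equal sigma, the extracted boundary sits midway between them, as expected for features of equal positional quality; unequal sigmas would pull the boundary toward the better-measured feature.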

    Hardware neural systems for applications: a pulsed analog approach


    An overview of decision table literature 1982-1995.

    This report gives an overview of the literature on decision tables over the past 15 years. As far as possible, each reference is provided with an author-supplied abstract, a number of keywords, and a classification; in some cases our own comments are added, whose purpose is to show where, how and why decision tables are used. The literature is classified according to application area, theoretical versus practical character, year of publication, country of origin (not necessarily the country of publication), and the language of the document. After a description of the scope of the review, the classification results and the classification by topic are presented. The main body of the paper is the ordered list of publications with abstract, classification and comments.
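To illustrate where and how decision tables are used in code, here is a minimal sketch of the limited-entry decision-table idea, with hypothetical conditions and actions: condition outcomes ('Y'/'N') select a rule, and '-' marks a don't-care entry.

```python
# A limited-entry decision table: each rule pairs a tuple of condition
# outcomes ('Y', 'N', or '-' for don't-care) with an action.
# The conditions and actions below are hypothetical examples.

CONDITIONS = [
    ("is_member", lambda order: order["member"]),
    ("big_order", lambda order: order["total"] >= 100),
]

RULES = [  # (outcome per condition, in CONDITIONS order) -> action
    (("Y", "Y"), "20% discount"),
    (("Y", "N"), "10% discount"),
    (("N", "Y"), "5% discount"),
    (("N", "N"), "no discount"),
]

def decide(order):
    """Evaluate all conditions, then return the first matching rule's action."""
    outcomes = tuple("Y" if pred(order) else "N" for _, pred in CONDITIONS)
    for pattern, action in RULES:
        if all(p == "-" or p == o for p, o in zip(pattern, outcomes)):
            return action
    raise ValueError("decision table is incomplete for " + str(outcomes))
```

Because the four rules above cover every Y/N combination exactly once, the table is complete and unambiguous, which is precisely the property decision-table verification tools check for.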

    Predictive Performance Of Machine Learning Algorithms For Ore Reserve Estimation In Sparse And Imprecise Data

    Thesis (Ph.D.) University of Alaska Fairbanks, 2006

    Traditional geostatistical estimation techniques have been used predominantly in the mining industry for ore reserve estimation. Determining mineral reserves has always posed a considerable challenge to mining engineers because of the geological complexities generally associated with ore body formation. Considerable research over the years has produced a number of state-of-the-art methods for predictive spatial mapping tasks such as ore reserve estimation, and recent advances in machine learning algorithms (MLA) provide a new approach to this age-old problem. This thesis therefore focuses on two MLA, the neural network (NN) and the support vector machine (SVM), for ore reserve estimation. Their application is illustrated with two complex drill hole datasets: the first is placer gold drill hole data characterized by a high degree of spatial variability, sparseness, and noise, while the second is obtained from a continuous lode deposit. The success of models built with these MLA depends to a large extent on the data subsets on which they are trained and, subsequently, on the selection of appropriate model parameters. Model data subsets obtained by random division are undesirable under sparse data conditions, since random division usually yields statistically dissimilar subsets and thereby reduces their applicability. A suitable technique for data subdivision is therefore suggested in the thesis, and issues pertaining to optimum model development are also discussed. To investigate the accuracy and applicability of the MLA for ore reserve estimation, their generalization ability was compared with the geostatistical ordinary kriging (OK) method. The analysis of the mean square error (MSE), mean absolute error (MAE), mean error (ME), and coefficient of determination (R2) as indices of model performance indicated that the MLA may significantly improve predictive ability and thereby reduce the inherent risk in ore reserve estimation.
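The four performance indices named above are standard and can be computed directly; this sketch (with an invented function name) uses one common sign convention for the mean error, noting that some authors define it with the opposite sign.

```python
import numpy as np

def reserve_metrics(y_true, y_pred):
    """Model-performance indices: MSE, MAE, ME (signed bias) and R^2."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_pred - y_true
    mse = np.mean(err ** 2)          # mean square error
    mae = np.mean(np.abs(err))       # mean absolute error
    me = np.mean(err)                # mean error: >0 means over-estimation
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot       # coefficient of determination
    return {"MSE": mse, "MAE": mae, "ME": me, "R2": r2}
```

Comparing these indices between, say, an SVM estimator and ordinary kriging on a held-out set of drill hole grades is exactly the kind of evaluation the thesis describes.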

    Data Mining and Machine Learning in Astronomy

    We review the current state of data mining and machine learning in astronomy. 'Data mining' can have a somewhat mixed connotation from the point of view of a researcher in this field. If used correctly, it can be a powerful approach, holding the potential to fully exploit the exponentially increasing amount of available data and promising great scientific advances. However, if misused, it can be little more than the black-box application of complex computing algorithms that gives little physical insight and provides questionable results. Here, we give an overview of the entire data mining process, from data collection through to the interpretation of results. We cover common machine learning algorithms, such as artificial neural networks and support vector machines; applications from a broad range of astronomy, emphasizing those in which data mining techniques directly resulted in improved science; and important current and future directions, including probability density functions, parallel algorithms, petascale computing, and the time domain. We conclude that, so long as one carefully selects an appropriate algorithm and is guided by the astronomical problem at hand, data mining can be very much the powerful tool, and not the questionable black box.

    Comment: Published in IJMPD. 61 pages, uses ws-ijmpd.cls. Several extra figures, some minor additions to the text.

    Experimental Study on 164 Algorithms Available in Software Tools for Solving Standard Non-Linear Regression Problems

    In the specialized literature, researchers can find a large number of proposals for solving regression problems that come from different research areas. However, researchers tend to use only proposals from the area in which they are experts. This paper analyses the performance of a large number of the available regression algorithms from some of the best-known and most widely used software tools, in order to help non-expert users from other areas properly solve their own regression problems, and to help specialized researchers develop well-founded future proposals by properly comparing and identifying algorithms that will enable them to focus on significant further developments. In total, we have analyzed 164 algorithms from 14 main families available in 6 software tools (Neural Networks, Support Vector Machines, Regression Trees, Rule-Based Methods, Stacking, Random Forests, Model Trees, Generalized Linear Models, Nearest Neighbor methods, Partial Least Squares and Principal Component Regression, Multivariate Adaptive Regression Splines, Bagging, Boosting, and other methods) over 52 datasets. A new measure is also proposed to show the goodness of each algorithm with respect to the others. Finally, a statistical analysis by non-parametric tests has been carried out over all the algorithms and over the best 30 algorithms, both with and without bagging. Results show that algorithms from the Random Forest, Model Tree and Support Vector Machine families obtain the best positions in the rankings produced by the statistical tests when bagging is not considered. In addition, the use of bagging techniques significantly improves the performance of the algorithms without an excessive increase in computational times.

    This work was supported in part by the University of Córdoba under the project PPG2019-UCOSOCIAL-03, and in part by the Spanish Ministry of Science, Innovation and Universities under Grant TIN2015-68454-R and Grant TIN2017-89517-P.
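The bagging effect reported above can be illustrated with a small self-contained sketch. The class names are invented and the base learner is a deliberately crude one-split "stump" (the actual study used the algorithms shipped with the six software tools): bootstrap aggregation fits the base learner on resampled training sets and averages the predictions.

```python
import numpy as np

class StumpRegressor:
    """One-split regression tree: high variance, so it benefits from bagging."""
    def fit(self, X, y):
        x = X[:, 0]
        best = (np.inf, None)
        for thr in np.unique(x)[:-1]:
            left, right = y[x <= thr], y[x > thr]
            sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
            if sse < best[0]:
                best = (sse, (thr, left.mean(), right.mean()))
        self.thr, self.left, self.right = best[1]
        return self

    def predict(self, X):
        return np.where(X[:, 0] <= self.thr, self.left, self.right)

class BaggedRegressor:
    """Bootstrap aggregation: average base learners fit on resampled data."""
    def __init__(self, base_factory, n_estimators=30, rng=None):
        self.base_factory = base_factory
        self.n_estimators = n_estimators
        self.rng = rng if rng is not None else np.random.default_rng(0)

    def fit(self, X, y):
        n = len(X)
        self.models = []
        for _ in range(self.n_estimators):
            idx = self.rng.integers(0, n, size=n)  # bootstrap sample
            self.models.append(self.base_factory().fit(X[idx], y[idx]))
        return self

    def predict(self, X):
        return np.mean([m.predict(X) for m in self.models], axis=0)
```

Averaging many stumps, each trained on a slightly different bootstrap sample, smooths the single step into a staircase and typically lowers test error, which is the variance-reduction mechanism behind the bagging gains measured in the paper.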