
    Prediction Interval Estimation Techniques for Empirical Modeling Strategies and their Applications to Signal Validation Tasks

    The basis of this work was to evaluate both parametric and non-parametric empirical modeling strategies applied to signal validation, or on-line monitoring, tasks. On-line monitoring methods assess signal channel performance to aid in making instrument calibration decisions, enabling the use of condition-based calibration schedules. The three non-linear empirical modeling strategies studied were: artificial neural networks (ANN), neural network partial least squares (NNPLS), and local polynomial regression (LPR). These three types are the most common nonlinear models applied to signal validation tasks. Of the class of local polynomials (for LPR), two were studied in this work: zero-order (kernel regression) and first-order (local linear regression). The evaluation of the empirical modeling strategies includes the presentation and derivation of prediction intervals for each of the three model types studied, so that estimations could be made with an associated prediction interval. An estimate and its corresponding prediction interval contain the measurements with a specified certainty, usually 95%. The prediction interval estimates were compared to results obtained from bootstrapping via Monte Carlo resampling to validate their expected accuracy. The estimation of prediction intervals applied to on-line monitoring systems is essential if widespread use of these empirically based systems is to be attained. In response to the topical report On-Line Monitoring of Instrument Channel Performance, published by the Electric Power Research Institute [Davis 1998], the NRC issued a safety evaluation report that identified the need to evaluate the associated uncertainty of empirical model estimations from all contributing sources. This need forms the basis for the research completed and reported in this dissertation. The focus of this work, and the basis of its original contributions, was to provide an accurate prediction interval estimation method for each of the mentioned empirical modeling techniques, and to verify the results via bootstrap simulation studies. Properly determined prediction interval estimates were obtained that consistently captured the uncertainty of the given model, such that the stated certainty level of the intervals closely matched the observed level of coverage of the prediction intervals over the measured values. In most cases the expected level of coverage of the measured values within the prediction intervals was 95%, meaning that the probability that an estimate and its associated prediction interval contain the corresponding measured observation was 95%. The results also indicate that instrument channel drifts are identifiable through the use of the developed prediction intervals, by observing the drop in the level of coverage of the prediction intervals to relatively low values, e.g., 30%. While all empirical models exhibit optimal performance for a given set of specifications, this optimal set may be difficult to identify. The developed methods of prediction interval estimation were shown to perform as expected over a wide range of model specifications, including misspecification. Model misspecification occurs through different mechanisms depending on the type of empirical model. The main mechanisms through which model misspecification occurs for each empirical model studied are: ANN – through architecture selection, NNPLS – through latent variable selection, and LPR – through bandwidth selection.
In addition, all of the above empirical models are susceptible to misspecification due to inadequate data and the presence of erroneous predictor variables in the set of predictors. A study was completed to verify that the presence of erroneous variables, i.e., variables unrelated to the desired response or pure random noise components, resulted in increases in the prediction interval magnitudes while maintaining the appropriate level of coverage for the response measurements. In addition to considering the resultant prediction intervals and coverage values, a comparative evaluation of the different empirical models was performed. The evaluation considers the average estimation errors and the stability of the models under repeated Monte Carlo resampling. The results indicate the large uncertainty of ANN models applied to collinear data, and the utility of the NNPLS model for the same purpose. The results from the LPR models, by contrast, remained consistent for data with or without collinearity, provided proper regularization was applied. The quantification of the uncertainty of an empirical model's estimations is a necessary task for promoting the use of on-line monitoring systems in the nuclear power industry. All of the methods studied herein were applied to a simulated data set for an initial evaluation, and to data from two different U.S. nuclear power plants for the purposes of signal validation in on-line monitoring tasks.
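    To make the bootstrap validation step concrete, the following is a minimal sketch of estimating a 95% prediction interval for a kernel regression (zero-order LPR) model via Monte Carlo resampling and checking its empirical coverage against held-out measurements. The synthetic sine-wave data, Gaussian kernel, bandwidth, and residual-resampling scheme are illustrative assumptions, not the dissertation's actual settings.

```python
import numpy as np

def kernel_regression(x_train, y_train, x_eval, bandwidth=0.3):
    """Zero-order local polynomial (Nadaraya-Watson) estimate."""
    w = np.exp(-0.5 * ((x_eval[:, None] - x_train[None, :]) / bandwidth) ** 2)
    return (w * y_train).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 2 * np.pi, 200)
y = np.sin(x) + rng.normal(0.0, 0.2, x.size)               # noisy "channel" data
x_new = np.linspace(0.5, 5.5, 50)
y_new = np.sin(x_new) + rng.normal(0.0, 0.2, x_new.size)   # held-out measurements

B = 500                                                    # bootstrap replicates
preds = np.empty((B, x_new.size))
for b in range(B):
    idx = rng.integers(0, x.size, x.size)                  # resample with replacement
    fit = kernel_regression(x[idx], y[idx], x_new)
    resid = y[idx] - kernel_regression(x[idx], y[idx], x[idx])
    preds[b] = fit + rng.choice(resid, x_new.size)         # add resampled noise

lo, hi = np.percentile(preds, [2.5, 97.5], axis=0)         # 95% prediction interval
coverage = np.mean((y_new >= lo) & (y_new <= hi))
print(f"empirical coverage: {coverage:.1%}")               # should sit near 95%
```

    A drifting channel would show up here as a coverage value falling well below the nominal 95%, which is exactly the detection mechanism the abstract describes.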

    Towards the automatic evaluation of stylistic quality of natural texts: constructing a special-purpose corpus of stylistic edits from the Wikipedia revision history

    This thesis proposes an approach to the automatic evaluation of the stylistic quality of natural texts through data-driven methods of Natural Language Processing. The advantages of data-driven methods and their dependency on the size of training data are discussed, as are the advantages of using Wikipedia as a source for textual data mining. The method in this project crucially involves a program for quick automatic extraction of user-edited sentences from the Wikipedia Revision History. The resulting edits have been compiled into a large-scale corpus of examples of stylistic editing. The complete modular structure of the extraction program is described and its performance is analyzed. Furthermore, the need to separate stylistic edits from factual ones is discussed, and a number of Machine Learning classification algorithms for this task are proposed and tested. The program developed in this project was able to process approximately 10% of the whole Russian Wikipedia Revision History (200 gigabytes of textual data) in one month, resulting in the extraction of more than two million user edits. The best algorithm for the classification of edits into factual and stylistic ones achieved 86.2% cross-validation accuracy, which is comparable with the state-of-the-art performance of similar models described in published papers.
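    As a toy illustration of the classification step, the sketch below cross-validates a simple bag-of-words classifier on hypothetical edit pairs. The features, examples, and choice of logistic regression are assumptions made for illustration; they do not reproduce the thesis's actual feature set or its best-performing algorithm.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Each sample is "old sentence ||| new sentence"; label 1 = stylistic edit,
# label 0 = factual edit. These four pairs are invented examples.
edits = [
    "the car which was red ||| the red car",
    "he was born in 1990 ||| he was born in 1992",
    "it is very unique ||| it is unique",
    "the population is 10000 ||| the population is 12000",
]
labels = [1, 0, 1, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
scores = cross_val_score(clf, edits, labels, cv=2)
print("cross-validation accuracy:", scores.mean())
```

    With a real corpus of extracted edits, the same pipeline shape scales to millions of examples; the 86.2% figure above was obtained with the thesis's own features and data, not with this sketch.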

    Emerging Technologies - NanoMagnets Logic (NML)

    In the last decades CMOS technology has ruled the electronic scenario thanks to the constant scaling of transistor sizes. With the reduction of transistor sizes, circuit area decreases, clock frequency increases, and power consumption decreases accordingly. However, CMOS scaling is now approaching its physical limits, and many believe that CMOS technology will not be able to reach the end of the Roadmap. This is mainly due to increasing difficulties in the fabrication process, which is becoming very expensive, and to the unavoidable impact of leakage losses, particularly the gate tunneling current. In this scenario many alternative technologies are studied to overcome the limitations of CMOS transistors. Among these possibilities, magnetic technologies like NanoMagnet Logic (NML) are some of the most interesting. The reason for this interest lies in their magnetic nature, which opens up entirely new possibilities in the design of logic circuits, like the possibility to mix logic and memory in the same device. Moreover, they have no standby power consumption and potentially a much lower power consumption than CMOS transistors. In the literature NML logic is well studied, and theoretical and experimental proofs of concept have already been presented. However, two important points are not sufficiently considered in the analysis approach followed by most of the work in the literature. First of all, no complex circuits are analyzed. NML logic is very different from CMOS technology, so to completely understand the potential of this technology it is mandatory to investigate complex architectures. Secondly, most of the solutions proposed do not take into account the constraints imposed by the fabrication process, making them unrealistic and difficult to fabricate experimentally. This thesis therefore focuses on NML logic while keeping these two important limitations of the literature's research approach in mind. The aim is to obtain a complete and accurate overview of NML logic, finding realistic circuit solutions while trying at the same time to improve their performance. After a brief but complete introduction (Chapter 1), the thesis is divided into two parts, which cover the two fundamental threads of these three years of research: a circuit architecture analysis and a technological analysis. In the architecture analysis, an innovative VHDL model is first described in Chapter 2. This model is used extensively in the analysis because it allows fast simulation of complex circuits and, at the same time, estimation of circuit performance figures like area and power consumption. In Chapter 3 the problem of signal synchronization in complex NML circuits is analyzed and solved, using as a benchmark a simple but complete NML microprocessor. Different solutions based on asynchronous logic are studied, and a new asynchronous solution, specifically designed to exploit the potential of NML logic, is developed. In Chapter 4 the layout of NML circuits is studied on a more physical level, considering the limitations of fabrication processes, and is changed according to these constraints. Secondly, CMOS circuit architectures are compared to simpler architectures to evaluate which are better suited for NML logic. Finally, the problem of interconnections in NML technology is analyzed and solutions to improve them are found. In Chapter 5 the problem of feedback signals in heavily pipelined technologies, like NML, is studied.
Solutions to improve performance and synchronize signals are developed. Systolic arrays are then analyzed as possible candidates for exploiting NML's potential. Finally, Chapter 6 describes ToPoliNano, a simulator we are developing dedicated to NML and other emerging technologies. This simulator makes it possible to follow the same top-down design approach used for CMOS technology; its layout generator and simulation engine are described in detail. In the first chapter of the technological analysis (Chapter 7), the performance of NML logic is explored through low-level simulations. The aim is to understand whether these circuits can be fabricated with optical lithography, which would allow the commercial development of NML logic. Basic logic gates and the clock system are analyzed there from a low-level perspective. In Chapter 8 an innovative electric clock system for NML technology is shown and the first experimental results are reported. This clock system achieves truly low power for NML technology, with a 20-fold reduction in power consumption compared to the best CMOS transistors available; this figure accounts for all losses, including those of the clock system. Moreover, the solution presented can be fabricated with current technological processes. The research work behind this thesis represents an important breakthrough in NML logic. The solutions presented here allow the design and fabrication of complex NML circuits, considering the particular characteristics of this technology and considerably improving performance. Moreover, the technological solutions presented allow the design and fabrication of circuits with available fabrication processes, with a considerable advantage over CMOS in terms of power consumption. This thesis therefore represents a considerable step forward in the study and development of NML technology.
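    For readers unfamiliar with NML, its fundamental combinational element is the three-input majority voter, from which AND and OR gates are obtained by pinning one input to a fixed logic value. The sketch below is a purely behavioral Python analogue of that idea, with magnetization states encoded as +1/-1; the thesis's own behavioral model is written in VHDL and also captures timing, area, and power, which this toy omits.

```python
# Behavioral toy model of NanoMagnet Logic gates (illustrative only).
# A magnet's state is +1 (logic 1) or -1 (logic 0); the majority voter
# is the basic NML gate, and AND/OR follow from pinning one input.
def majority(a: int, b: int, c: int) -> int:
    return 1 if (a + b + c) > 0 else -1

def nml_and(a: int, b: int) -> int:
    return majority(a, b, -1)   # third input pinned to logic 0

def nml_or(a: int, b: int) -> int:
    return majority(a, b, +1)   # third input pinned to logic 1

for a in (-1, 1):
    for b in (-1, 1):
        print(a, b, "AND:", nml_and(a, b), "OR:", nml_or(a, b))
```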

    An action selection architecture for autonomous virtual humans in persistent worlds

    Nowadays, virtual humans such as non-player characters in computer games need strong autonomy in order to live their own lives in persistent virtual worlds. When designing autonomous virtual humans, the action selection problem needs to be considered, as it is responsible for decision making at each moment in time. Indeed, action selection architectures for autonomous virtual humans need to be reactive, proactive, motivational, and emotional to obtain a high degree of autonomy and individuality. The thesis can be divided into three parts. In the first part, we define each word of our title to clarify its meaning and state the problem addressed by this work. We also describe inspirations from several domains that informed the design of our model, as this thesis is highly multi-disciplinary. Indeed, decision making is essential for every autonomous entity and is studied in ethology, robotics, computer graphics, computer science, and cognitive science. We have chosen two specific techniques to implement our model: hierarchical classifier systems and a free-flow hierarchy. The second part of this thesis describes in detail our model of action selection for autonomous virtual humans. We use overlapping hierarchical classifier systems, working in parallel, to generate coherent behavioral plans. They are combined with the functionalities of a free-flow hierarchy for the spreading of activation, to give reactivity and flexibility to the hierarchical system. Moreover, several functionalities are added to enhance and facilitate the choice of the most appropriate action at each moment, according to internal and external influences. Finally, in the third part of this thesis, a complex simulated environment is created for testing the model and its functionalities with many conflicting motivations. Results demonstrate that the model is sufficiently efficient, robust, and flexible for designing motivational autonomous virtual humans in persistent worlds. We have also just started to investigate the emotional level, which must be developed further to obtain more subjective and adaptive behaviors and to manage social interactions with other virtual humans or users. Applied to video games, non-player characters become more interesting and believable because they live their own lives even when players do not interact with them.
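    To illustrate the free-flow principle in miniature, the sketch below lets activation flow from motivations down to action nodes without any intermediate winner-take-all decision, choosing only at the leaves. The two-level hierarchy, node names, and weights are invented for illustration and are far simpler than the overlapping hierarchical classifier systems used in the thesis.

```python
# Minimal free-flow activation spreading (illustrative assumption: a
# two-level motivation -> action tree with fixed weights).
hierarchy = {
    "hunger":  {"go_to_food": 0.9, "eat": 1.0},
    "fatigue": {"go_to_bed": 0.8, "sleep": 1.0},
}

def select_action(motivations: dict) -> str:
    # Activation flows from every motivation to its child actions and is
    # summed at the leaves; the decision is taken only at the action level,
    # so competing motivations can jointly favor a compromise action.
    activation = {}
    for motive, level in motivations.items():
        for action, weight in hierarchy[motive].items():
            activation[action] = activation.get(action, 0.0) + level * weight
    return max(activation, key=activation.get)

print(select_action({"hunger": 0.7, "fatigue": 0.4}))  # -> "eat"
```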

    Multiple instance fuzzy inference.

    A novel fuzzy learning framework that employs fuzzy inference to solve the problem of multiple instance learning (MIL) is presented. The framework introduces a new class of fuzzy inference systems called Multiple Instance Fuzzy Inference Systems (MI-FIS). Fuzzy inference is a powerful modeling framework that can effectively handle computing with knowledge uncertainty and measurement imprecision. It performs a non-linear mapping from an input space to an output space by deriving conclusions from a set of fuzzy if-then rules and known facts; rules can be identified from expert knowledge or learned from data. In multiple instance problems, the training data is ambiguously labeled: instances are grouped into bags, and the labels of bags are known but not those of individual instances. MIL deals with learning a classifier at the bag level. Over the years, many solutions to this problem have been proposed; however, no MIL formulation employing fuzzy inference exists in the literature. In this dissertation, we introduce multiple instance fuzzy logic, which enables fuzzy reasoning with bags of instances. Accordingly, different multiple instance fuzzy inference styles are proposed. The Multiple Instance Mamdani style fuzzy inference (MI-Mamdani) extends the standard Mamdani style inference to compute with multiple instances. The Multiple Instance Sugeno style fuzzy inference (MI-Sugeno) is an extension of the standard Sugeno style inference to handle reasoning with multiple instances. In addition to the MI-FIS inference styles, one of the main contributions of this work is an adaptive neuro-fuzzy architecture designed to handle bags of instances as input and capable of learning from ambiguously labeled data. The proposed architecture, called Multiple Instance-ANFIS (MI-ANFIS), extends the standard Adaptive Neuro-Fuzzy Inference System (ANFIS). We also propose different methods to identify and learn fuzzy if-then rules in the context of MIL. In particular, a novel learning algorithm for MI-ANFIS is derived; learning is achieved by using the backpropagation algorithm to identify the premise and consequent parameters of the network. The proposed framework is tested and validated using synthetic and benchmark datasets suitable for MIL problems. Additionally, we apply the proposed multiple instance inference to the problem of region-based image categorization, as well as to fusing the outputs of multiple discrimination algorithms for the purpose of landmine detection using Ground Penetrating Radar.
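    As a rough sketch of Sugeno-style inference extended to bags, the code below evaluates standard zero-order Sugeno rules on each instance and aggregates the instance-level outputs with a max. The rules are invented, and max-aggregation is only one plausible reading of bag-level reasoning; the dissertation defines MI-Sugeno precisely.

```python
import numpy as np

def gauss(x, c, s):
    """Gaussian membership function."""
    return np.exp(-0.5 * ((x - c) / s) ** 2)

# Two zero-order Sugeno rules over one feature, invented for illustration:
#   R1: IF x is LOW  THEN y = 0.1      R2: IF x is HIGH THEN y = 0.9
rules = [((0.2, 0.2), 0.1), ((0.8, 0.2), 0.9)]

def sugeno(x):
    """Standard Sugeno inference for a single instance."""
    w = np.array([gauss(x, c, s) for (c, s), _ in rules])   # firing strengths
    z = np.array([out for _, out in rules])                 # rule outputs
    return float((w * z).sum() / w.sum())                   # weighted average

def mi_sugeno(bag):
    """Bag-level output: max over instance-level outputs (an assumed
    aggregation; the dissertation's MI-Sugeno is defined in the text)."""
    return max(sugeno(x) for x in bag)

# One "positive" instance is enough to drive the bag-level response high.
print(mi_sugeno([0.10, 0.15, 0.85]))   # ~0.9
print(mi_sugeno([0.10, 0.15, 0.20]))   # ~0.1
```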

    Customizable Feature based Design Pattern Recognition Integrating Multiple Techniques

    Recovering design information from legacy applications is a complex, expensive, quite challenging, and time-consuming task, due to the ever-increasing complexity of software and the advent of modern technologies.
With the growing demand for maintenance of legacy systems that can cope with the latest technologies and new business requirements, the reuse of artifacts from existing legacy applications for new developments has become very important, even vital, for the software industry. Due to the constant evolution of their architecture, legacy systems often have incomplete, inconsistent and obsolete documents which do not provide enough information about their structure. Mostly, the source code is the only reliable source of information for recovering artifacts from legacy systems. Extraction of design artifacts from the source code of existing legacy systems supports program comprehension, maintenance, code refactoring, reverse engineering, redocumentation and reengineering methodologies. The objective of the approach used in this thesis is to recover design information from legacy code, with a particular focus on the recovery of design patterns. Design patterns are key artifacts for recovering design decisions from legacy source code. Patterns have been extensively tested in different applications, and reusing them yields quality software with reduced cost and time frame. Different techniques, methodologies and tools have been used in the past to recover patterns from legacy applications. Each technique recovers patterns with different precision and recall rates, due to differing specifications and implementations of the same pattern. The approach used in this thesis is based on customizable and reusable feature types which use static and dynamic parameters to define variant pattern definitions. Each feature type allows the user to select among multiple searching techniques (SQL queries, regular expressions and source code parsers) which are used to match features of patterns with source code artifacts. The technique focuses on detecting variants of different design patterns by using static, dynamic and semantic analysis techniques. The integrated use of SQL queries, source code parsers, regular expressions and annotations improves the precision and recall of pattern extraction from different legacy systems. The approach introduces new semantics for annotations to be used in the source code of legacy applications, which reduce the search space and time for detecting patterns. The prototypical implementation of the approach, called UDDPRT, is used to recognize different design patterns from the source code of multiple languages (Java, C/C++, C#). The prototype is flexible and customizable, so that a novice user can change the SQL queries and regular expressions for detecting implementation variants of design patterns. Experiments on a number of open-source systems, taken as baselines for comparison, show that the approach significantly improves the precision and recall of pattern extraction.
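    As a small illustration of a feature type backed by the regular-expression search technique, the sketch below checks a Java source string for three features commonly used to recognize the Singleton pattern. The feature names, regexes, and API are hypothetical and far simpler than UDDPRT's configurable queries.

```python
import re

# Hypothetical "feature types" for the Singleton pattern, each realized
# here with the regular-expression search technique (one of the three
# techniques named above; SQL queries and parsers are not shown).
FEATURES = {
    "private_constructor": r"private\s+(\w+)\s*\(",
    "static_instance":     r"private\s+static\s+(\w+)\s+\w+\s*;",
    "static_accessor":     r"public\s+static\s+\w+\s+getInstance\s*\(",
}

def match_features(java_source: str) -> dict:
    """Report which Singleton features the source exhibits."""
    return {name: bool(re.search(rx, java_source)) for name, rx in FEATURES.items()}

source = """
public class Config {
    private static Config instance;
    private Config() { }
    public static Config getInstance() { return instance; }
}
"""
hits = match_features(source)
print(hits)                 # all three features found
print(all(hits.values()))   # -> True: Singleton candidate
```

    A customizable system lets the user swap any one of these regexes, or replace a regex-backed feature with an SQL- or parser-backed one, to catch implementation variants of the same pattern.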

    Planning and Navigation in Dynamic Environments for Mobile Robots and Micro Aerial Vehicles

    Reliable and robust navigation planning and obstacle avoidance are key for the autonomous operation of mobile robots. In contrast to stationary industrial robots that often operate in controlled spaces, planning for mobile robots has to take changing environments and uncertainties into account during plan execution. In this thesis, planning and obstacle avoidance techniques are proposed for a variety of ground and aerial robots. Common to most of the presented approaches is the exploitation of the nature of the underlying problem to achieve short planning times by using multiresolution or hierarchical approaches. Short planning times allow for continuous and fast replanning to take the uncertainty in the environment and robot motion execution into account. The proposed approaches are evaluated in simulation and real-world experiments. The first part of this thesis addresses planning for mobile ground robots. One contribution is an approach to grasp and object removal planning to pick objects from a transport box with a mobile manipulation robot. In a multistage process, infeasible grasps are pruned in offline and online processing steps. Collision-free endeffector trajectories are planned to the remaining grasps until a valid removal trajectory can be found. An object-centric local multiresolution representation accelerates trajectory planning. The mobile manipulation components are evaluated in an integrated mobile bin-picking system. Local multiresolution planning is employed for path planning for humanoid soccer robots as well. The Nao robot used has only relatively low computing power, so a resource-efficient path planner that includes the anticipated movements of opponents on the field is developed as part of this thesis. In soccer games an important subproblem is to reach a position behind the ball to dribble or kick it towards the goal. Under the assumption that the opponents have the same intention, an explicit representation of their movements is possible. This leads to paths that let the robot reach its target position with a higher probability of not being disturbed by the other robot. The planner is evaluated in a physics-based soccer simulation. The second part of this thesis covers planning and obstacle avoidance for micro aerial vehicles (MAVs), in particular multirotors. To reduce the planning complexity, the planning problem is split into a hierarchy of planners running on different levels of abstraction, i.e., from abstract to detailed environment descriptions and from coarse to fine plans. A complete planning hierarchy for MAVs is presented, from mission planners for multiple application domains to low-level obstacle avoidance. Missions planned on the top layer are executed by means of coupled allocentric and egocentric path planning. Planning is accelerated by global and local multiresolution representations. The planners can take multiple objectives into account in addition to obstacle costs and path length, e.g., sensor constraints. The path planners are supplemented by trajectory optimization to achieve dynamically feasible trajectories that can be executed by the underlying controller at higher velocities. With the initialization techniques presented in this thesis, the convergence of the optimization problem is expedited. Furthermore, frequent reoptimization of the initial trajectory allows reacting to changes in the environment without planning and optimizing a completely new trajectory.
Fast, reactive obstacle avoidance based on artificial potential fields acts as a safety layer in the presented hierarchy. The obstacle avoidance layer employs egocentric sensor data and can operate at the data acquisition frequency of up to 40 Hz. It can slow down and stop the MAVs in front of obstacles as well as avoid approaching dynamic obstacles. We evaluate our planning and navigation hierarchy in simulation and with a variety of MAVs in real-world applications, especially outdoor mapping missions, chimney and building inspection, and automated stocktaking.
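    The principle behind the reactive safety layer, artificial potential fields, can be sketched in a few lines: an attractive term pulls the vehicle toward its goal while nearby obstacles repel it, and the resulting velocity command is saturated. The 2-D simplification, gains, and influence radius below are illustrative assumptions, not the parameters used on the real MAVs.

```python
import numpy as np

def potential_field_velocity(pos, goal, obstacles,
                             k_att=1.0, k_rep=0.5, influence=2.0, v_max=1.0):
    """One velocity command from artificial potential fields (2-D sketch)."""
    force = k_att * (goal - pos)                     # attractive pull to goal
    for obs in obstacles:
        d = np.linalg.norm(pos - obs)
        if 1e-6 < d < influence:                     # repel only when nearby
            force += k_rep * (1.0 / d - 1.0 / influence) / d**2 * (pos - obs) / d
    speed = np.linalg.norm(force)
    if speed > v_max:                                # saturate the command
        force *= v_max / speed
    return force

pos = np.array([0.0, 0.0])
goal = np.array([5.0, 0.0])
obstacles = [np.array([1.5, 0.3])]                   # one obstacle near the path
for _ in range(5):                                   # a few 10 Hz control steps
    pos = pos + 0.1 * potential_field_velocity(pos, goal, obstacles)
    print(pos)                                       # veers away from the obstacle
```

    Because each command is a purely local function of the current sensor view, such a layer can run at the full data acquisition rate, which is what makes it suitable as the lowest level of the hierarchy.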

    Machine Learning As Tool And Theory For Computational Neuroscience

    Computational neuroscience is in the midst of constructing a new framework for understanding the brain based on the ideas and methods of machine learning. This effort has been encouraged, in part, by recent advances in neural network models. It is also driven by a recognition of the complexity of neural computation and the challenges this poses for neuroscience's methods. In this dissertation, I first work to describe these problems of complexity that have prompted a shift in focus. In particular, I develop machine learning tools for neurophysiology that help test whether tuning curves and other statistical models in fact capture the meaning of neural activity. Then, taking up a machine learning framework for understanding, I consider theories about how neural computation emerges from experience. Specifically, I develop hypotheses about the potential learning objectives of sensory plasticity, the potential learning algorithms in the brain, and finally the consequences for sensory representations of learning with such algorithms. These hypotheses draw on advances in several areas of machine learning, including optimization, representation learning, and deep learning theory. Each of these subfields has insights for neuroscience, offering links in a chain of knowledge about how we learn and think. Together, this dissertation helps to further an understanding of the brain through the lens of machine learning.
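    One way to read the test of whether tuning curves capture the meaning of neural activity is as a held-out model comparison: if a flexible estimator predicts spike counts markedly better than the parametric tuning curve, the curve is missing structure. The sketch below implements that idea on synthetic data; the Poisson spiking model, cosine tuning form, and binned alternative are illustrative assumptions, not the dissertation's actual tools.

```python
import numpy as np

rng = np.random.default_rng(1)
theta = rng.uniform(-np.pi, np.pi, 400)                    # stimulus orientation
true_rate = np.exp(1.5 * np.cos(theta) + 0.5 * np.cos(2 * theta))
spikes = rng.poisson(true_rate)                            # synthetic spike counts
train, test = np.arange(300), np.arange(300, 400)

def fit_cosine_curve(th, y):
    """Poisson regression on (1, cos th, sin th): a classic tuning curve."""
    X = np.column_stack([np.ones_like(th), np.cos(th), np.sin(th)])
    w = np.zeros(3)
    for _ in range(25):                                    # Newton iterations
        mu = np.exp(X @ w)
        w += np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (y - mu))
    return lambda t: np.exp(
        np.column_stack([np.ones_like(t), np.cos(t), np.sin(t)]) @ w)

# Flexible alternative: a 20-bin running mean of the training responses.
order = np.argsort(theta[train])
bins = np.array_split(order, 20)
centers = np.array([theta[train][b].mean() for b in bins])
means = np.array([spikes[train][b].mean() for b in bins])
flexible = lambda t: np.interp(t, centers, means)

def heldout_ll(y, mu):
    """Mean Poisson log-likelihood (up to a constant), floored for safety."""
    mu = np.maximum(mu, 1e-6)
    return np.mean(y * np.log(mu) - mu)

curve = fit_cosine_curve(theta[train], spikes[train])
print("tuning curve held-out LL:", heldout_ll(spikes[test], curve(theta[test])))
print("flexible fit held-out LL:", heldout_ll(spikes[test], flexible(theta[test])))
# A clearly better flexible fit signals structure the tuning curve misses
# (here, the cos(2*theta) component deliberately left out of the curve).
```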