
    Applied Visualization in the Neurosciences and the Enhancement of Visualization through Computer Graphics

    The complexity and size of measured and simulated data in many fields of science are increasing constantly. Technical evolution allows for capturing smaller features and more complex structures in the data. To make these data accessible to scientists, efficient and specialized visualization techniques are required. Maximum efficiency and value for the user can only be achieved by adapting visualization to the specific application area and the specific requirements of the scientific field.

    Part I: In the first part of my work, I address visualization in the neurosciences. Neuroscience tries to understand the human brain, from its smallest parts up to its global infrastructure. To achieve this ambitious goal, neuroscience uses a combination of three-dimensional data from a myriad of sources, such as MRI, CT, or functional MRI. To handle this diversity of data types and sources, neuroscience needs specialized and well-evaluated visualization techniques. As a start, I introduce an extensive software package called "OpenWalnut". It forms the common base for developing and using visualization techniques with our neuroscientific collaborators. Through OpenWalnut, both standard and novel visualization approaches become available to neuroscientific researchers. Afterwards, I introduce a very specialized method for illustrating the causal relations between brain areas, which were previously representable only via abstract graph models. I conclude the first part of my work with an evaluation of several standard visualization techniques in the context of simulated electrical fields in the brain. The goal of this evaluation was to clarify the advantages and disadvantages of these visualization techniques for the neuroscientific community, exemplified using clinically relevant scenarios.

    Part II: Besides data preprocessing, which plays a tremendous role in visualization, the final graphical representation of the data is essential for understanding its structure and features. The graphical representation of data can be seen as the interface between the data and the human mind. The second part of my work focuses on improving the structural and spatial perception of visualizations -- on improving the interface. Unfortunately, visual improvements based on computer graphics methods from the computer game industry are often viewed sceptically. In the second part, I show that such methods can be applied to existing visualization techniques to improve spatiality and to emphasize structural details in the data. I use a computer graphics paradigm called "screen space rendering". Its advantage, among others, is its seamless applicability as a post-processing step to nearly every visualization technique. I start with two methods that improve the perception of mesh-like structures on arbitrary surfaces. These mesh structures represent second-order tensors and are generated by a method named "TensorMesh". Afterwards, I present a novel approach to optimally shading line and point data renderings. With this technique, it is possible for the first time to emphasize both local details and global spatial relations in dense line and point data.
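
    To give a flavour of the screen-space paradigm (a generic illustration, not the thesis's actual shading method), the following NumPy sketch post-processes a depth buffer -- the only input such passes need, which is why they compose with nearly any rendering technique. All names here are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def screen_space_shading(depth, radius=8, strength=1.5):
    """Darken pixels that lie behind their local neighbourhood.

    A crude ambient-occlusion-style pass: it reads only the depth
    buffer of a finished rendering, so it can be layered onto any
    existing visualization technique as a post-process.
    depth: 2D float array, larger values = farther from the camera.
    """
    local_mean = uniform_filter(depth, size=2 * radius + 1)
    # Positive where the pixel is deeper than its surroundings,
    # i.e. likely tucked inside a crease or behind nearby geometry.
    occlusion = np.clip((depth - local_mean) * strength, 0.0, 1.0)
    return 1.0 - occlusion  # multiplicative shading factor in [0, 1]

# Usage: modulate the rendered RGB image per pixel.
# shaded = image * screen_space_shading(depth)[..., None]
```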

    Robot Navigation in Distorted Magnetic Fields

    This thesis investigates the utilization of magnetic field distortions for the localization and navigation of robotic systems. The work comprehensively illuminates the various aspects that are relevant in this context. Among other things, the characteristics of magnetic field environments are assessed and examined for their usability for robot navigation in various typical mobile robot deployment scenarios. A strong focus of this work lies on the self-induced static and dynamic magnetic field distortions of complex kinematic robots, which could hinder the use of magnetic fields because of their interference with the ambient magnetic field. In addition to the examination of typical distortions in robots of different classes, solutions for compensation and concrete tools are developed both in hardware (distributed magnetometer sensor systems) and in software. In this context, machine learning approaches for learning static and dynamic system distortions are explored and contrasted with classical methods for calibrating magnetic field sensors. In order to extend probabilistic state estimation methods towards localization in magnetic fields, a measurement model based on von Mises-Fisher distributions is developed in this thesis. Finally, the approaches of this work are evaluated in practice, inside and outside the laboratory, in different environments and domains (e.g. office, subsea, desert) with different types of robot systems.
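
    The thesis's concrete measurement model is not reproduced here, but the following Python sketch shows how a von Mises-Fisher likelihood over magnetic field direction could weight pose hypotheses in a particle filter. The map lookup, particle structure, and the kappa value are assumptions.

```python
import numpy as np

def vmf_pdf(x, mu, kappa):
    """von Mises-Fisher density on the unit sphere (p = 3):
    f(x; mu, kappa) = kappa / (4*pi*sinh(kappa)) * exp(kappa * mu.x).
    For very large kappa, evaluate in log space to avoid overflow."""
    return kappa / (4.0 * np.pi * np.sinh(kappa)) * np.exp(kappa * np.dot(mu, x))

def reweight(weights, predicted_dirs, measured_dir, kappa=50.0):
    """Particle-filter measurement update: each particle predicts a
    field direction (unit 3-vector) from the magnetic map at its pose
    hypothesis; the vMF likelihood scores it against the magnetometer
    reading. Larger kappa = more trust in the sensor."""
    lik = np.array([vmf_pdf(measured_dir, mu, kappa) for mu in predicted_dirs])
    w = weights * lik
    return w / w.sum()
```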

    Research and Technology, 1998

    This report selectively summarizes the NASA Lewis Research Center's research and technology accomplishments for fiscal year 1998. It comprises 134 short articles submitted by the staff scientists and engineers. The report is organized into five major sections: Aeronautics, Research and Technology, Space, Engineering and Technical Services, and Commercial Technology. A table of contents and an author index have been developed to assist readers in finding articles of special interest. This report is not intended to be a comprehensive summary of all the research and technology work done over the past fiscal year. Most of the work is reported in Lewis-published technical reports, journal articles, and presentations prepared by Lewis staff and contractors. In addition, university grants have enabled faculty members and graduate students to engage in sponsored research that is reported at technical meetings or in journal articles. For each article in this report, a Lewis contact person has been identified, and where possible, reference documents are listed so that additional information can be easily obtained. The diversity of topics attests to the breadth of research and technology being pursued and to the skill mix of the staff that makes it possible. At the time of publication, NASA Lewis was undergoing a name change to the NASA John H. Glenn Research Center at Lewis Field.

    Spokane Intercollegiate Research Conference 2017


    Exploring Potentials in Mobile Phone GPS Data Collection and Analysis

    In order to support efficient transportation planning decisions, household travel survey data with high levels of accuracy are essential. Due to a number of issues associated with conventional household travel surveys, including high cost, low response rates, trip misreporting, and respondents' self-reporting bias, government and private agencies are actively searching for alternative data collection methods. Recent advancements in smartphone and Global Positioning System (GPS) technologies present new opportunities to track travelers' trips. Considering the high penetration rate of smartphones, it seems reasonable to use smartphone data as a reliable source of individual travel diaries. Many studies have applied GPS-based data in planning and demand analysis, but mobile phone GPS data have not received much attention. Google Location History (GLH) data provide an opportunity to explore the potential of these data. This research presents a study using GLH data, including the data processing algorithm for deriving travel information and potential applications in understanding travel patterns. The main goal of this study is to explore the potential of using cell phone GPS data to advance the understanding of mobility and travel behavior. The objectives of the study are: a) to assess the technical feasibility of using smartphones in transportation planning as a substitute for traditional household surveys; b) to develop algorithms and procedures to derive travel information from smartphones; and c) to identify applications in mobility and travel behavior studies that could take advantage of smartphone GPS data and that would not have been possible with conventional data collection methods. This research aims to demonstrate how accurate travel information can be collected and analyzed at lower cost using smartphone GPS data, and what analysis applications this new data source makes possible. Moreover, the framework developed in this study can provide valuable insights for others who are interested in using cell phone data. GLH data were obtained from 45 participants over a two-month period for the study. The results show great promise for using GLH data as a supplement or complement to conventional travel diary data. They show that GLH provides sufficiently high-resolution data to study people's movement without respondent burden, and that it can potentially be scaled to large studies with ease. The algorithms developed in this study work well with the data. This study supports the claim that transportation data can be collected with smartphones less expensively and more accurately than by traditional household travel surveys. These data provide the opportunity to investigate various issues, such as less frequent long-distance travel, hourly variations in travel behavior, and daily variations in travel behavior.
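
    As an illustration of the kind of processing involved in deriving trips from GLH traces (the dissertation's actual algorithm, thresholds, and data schema are not reproduced here), the following Python sketch segments a time-ordered GPS trace into trips using a common dwell-time heuristic:

```python
from dataclasses import dataclass
from math import asin, cos, radians, sin, sqrt

@dataclass
class Fix:
    t: float      # unix timestamp in seconds
    lat: float    # degrees
    lon: float    # degrees

def haversine_m(a: Fix, b: Fix) -> float:
    """Great-circle distance between two fixes in metres."""
    dlat, dlon = radians(b.lat - a.lat), radians(b.lon - a.lon)
    h = (sin(dlat / 2) ** 2
         + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2)
    return 2 * 6_371_000 * asin(sqrt(h))

def split_trips(fixes, dwell_s=300, radius_m=100):
    """Segment a trace into trips: a trip ends once the device has
    stayed within radius_m of one spot for at least dwell_s seconds."""
    trips, current, anchor = [], [], None
    for f in fixes:
        if anchor is None or haversine_m(anchor, f) > radius_m:
            anchor = f                      # still moving: new dwell anchor
            current.append(f)
        elif f.t - anchor.t >= dwell_s:     # dwelling long enough: trip over
            if current:
                trips.append(current)
            current, anchor = [], f
        else:
            current.append(f)               # pausing, but not yet a stop
    if current:
        trips.append(current)
    return trips
```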

    Intelligent Navigation Service Robot Working in a Flexible and Dynamic Environment

    Numerous sensor fusion techniques have been reported in the literature for a number of robotics applications. These techniques involve the use of different sensors in different configurations. However, in the case of food delivery, the possibility of such an implementation has been overlooked. In restaurants and food delivery spots, reliably transferring food to the correct table, without running into other robots or diners or toppling over, is essential. In this project, a particular algorithm module has been proposed and implemented to enhance the robot driving methodology and maximize robot functionality, accuracy, and the food transfer experience. The emphasis has been on enhancing movement accuracy to reach the targeted table from start to end. Four major elements were designed to complete this project: mechanical, electrical, electronic, and programming. Since the floor condition greatly affects the wheels and the selection of turning angles, movement accuracy was improved over the course of the project. The robot was successfully able to receive a command from the restaurant and deliver the food to the customers' tables while avoiding any obstacles on the way. The robot is equipped with two trays to carry the food and with well-configured voices to welcome and greet the customer. The performance has been evaluated using routine robot movement tests. As part of this study, the designed wheeled service robot required a high-performance real-time processor. As long as the processor was adequate, the experimental results showed a highly effective search robot methodology. The study concluded that a minimal number of sensors suffices if they are placed appropriately and used effectively on the robot's body, as navigation can be performed with a small set of sensors. The Arduino Due was used to provide a real-time operating system. It provided very reliable data processing and transfer throughout regular operation. Furthermore, an easy-to-use application was developed to improve the user experience, so that the operator can interact directly with the robot via a special settings screen. Using this feature, it is possible to modify advanced settings such as voice commands or the IP address without having to return to the code.
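
    As a rough sketch of the delivery behaviour described above, in Python (the project's actual Arduino Due firmware is not reproduced; the `robot` interface and the stop distance are hypothetical):

```python
import time

OBSTACLE_STOP_M = 0.35   # assumed safety distance; not from the thesis

def deliver(robot, table_pose):
    """Drive to a table, pausing while something blocks the path.

    `robot` is a hypothetical driver object (range_m, drive_toward,
    at, stop, play_greeting); the project's real sensor layout and
    firmware are not reproduced here.
    """
    while not robot.at(table_pose):
        if robot.range_m() < OBSTACLE_STOP_M:
            robot.stop()             # let diners or other robots pass
            time.sleep(0.2)
            continue
        robot.drive_toward(table_pose)
    robot.stop()
    robot.play_greeting()            # the configured welcome voice
```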

    Exploring Visual Programming Concepts for Probabilistic Programming Languages

    Probabilistic programming is a way to create systems that help us make decisions in the face of uncertainty. Lots of everyday decisions involve judgment in determining relevant factors that we do not directly observe. Historically, one way to help make decisions under uncertainty has been to use a probabilistic reasoning system. Probabilistic reasoning combines our knowledge of a situation with the laws of probability to determine those unobserved factors that are critical to the decision. Typically, the several observations are combined through the use of Bayesian statistics, in which existing knowledge (priors) is combined with observations in order to gather evidence towards competing hypotheses. When compared to other machine learning methods (such as random forests, neural networks, or linear regression), which take homogeneous data as input (requiring the user to separate their domain into different models), probabilistic programming is used to leverage the data's original structure. Furthermore, it provides full probability distributions over both the predictions and the parameters of the model, whereas ML methods can only give the user a certain degree of confidence in the predictions.

    Until recently, probabilistic reasoning systems have been limited in scope and hard to apply to many real-world situations. Models are communicated using a mix of natural language, pseudocode, and mathematical formulae, and solved using special-purpose, one-off inference methods. Rather than precise specifications suitable for automatic inference, graphical models typically serve as coarse, high-level descriptions, eliding critical aspects such as fine-grained independence, abstraction, and recursion. Probabilistic programming is a new approach that makes probabilistic reasoning systems easier to build and more widely applicable. A probabilistic programming language (PPL) is a programming language designed to describe probabilistic models, in such a way that the program itself is the model, and then to perform inference in those models. PPLs have seen recent interest from the artificial intelligence, programming languages, cognitive science, and natural language communities. By empowering users with a common dialect in the form of a programming language, rather than requiring each of them to undertake the non-trivial and error-prone task of writing their own models and hand-tailored inference algorithms for the problem at hand, it encourages exploration, since different models require less time to set up and evaluate, and enables sharing knowledge in the form of best practices, patterns, and tools such as optimized compilers or interpreters, debuggers, IDEs, optimizers, and profilers.

    PPLs are closely related to graphical models and Bayesian networks, but are more expressive and flexible. One can easily see this by looking at the reusable components PPLs offer, one of them being the inference engine, which can be plugged into different models. For instance, it is easy to replace exact inference in traditional Bayesian networks, which requires time exponential in the number of variables, with approximation algorithms such as Markov Chain Monte Carlo (MCMC) or Variational Message Passing (VMP), which make it possible to compute large hierarchical models by resorting to sampling and approximation. PPLs often extend from a basic language (i.e., they are embedded in a host language like R, Java, or Scala), although some PPLs, such as WinBUGS and Stan, offer a self-contained language with no obvious origin in another language.

    There have been successful applications of visual programming in several domains, including education (MIT's Scratch and Microsoft's VPL), general-purpose programming (NoFlo), 3D modeling (Blender), and data science (RapidMiner and Weka Knowledge Flow). The latter, being popular products, have shown that there is added value in providing a graphical representation for working with data. However, as of today, no tool provides a graphical representation for a PPL. DARPA, the main funder behind PPL research, considers one of the key goals of its Probabilistic Programming for Advancing Machine Learning program to be making models easier to write (reducing development time, encouraging experimentation, and reducing the level of expertise required to develop such models). Visual programming suits this kind of objective, so building upon the enormous flexibility of PPLs and the advantages of probabilistic models, we want to take advantage of the graphical intuition given by the data visualization that data scientists are now accustomed to, and attempt to provide model and algorithmic visualization by rethinking how to capture the (usually textual) programmatic formalisms in a graphical manner.

    The goal of this dissertation is thus to explore graphical representations of a probabilistic programming language through the use of node-based programming. The hypothesis under consideration is that graphical representations (not to be confused with Bayesian graphical models) are more intuitive and easier to learn than full-blown PPLs. We intend to validate this hypothesis by ensuring that classical problems solved in the literature by PPLs are also supported by our graphical representation, and then measuring how quickly a group of people trained in statistics can produce a viable model in each alternative.
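
    To make the "program is the model" idea concrete, here is a minimal, self-contained Python sketch (not taken from the dissertation): a coin-bias model expressed as a log-density, with a generic Metropolis sampler acting as the pluggable inference engine described above.

```python
import math
import random

def model_loglik(theta, flips):
    """The 'program as model': a coin with unknown bias theta,
    uniform prior on (0, 1), Bernoulli likelihood for each flip."""
    if not 0.0 < theta < 1.0:
        return -math.inf
    return sum(math.log(theta if x else 1.0 - theta) for x in flips)

def metropolis(loglik, data, steps=10_000, scale=0.1):
    """Generic MCMC engine: it only ever sees a log-density, which
    is what lets PPLs swap inference engines under the same model."""
    theta, samples = 0.5, []
    for _ in range(steps):
        prop = theta + random.gauss(0.0, scale)
        delta = loglik(prop, data) - loglik(theta, data)
        if delta >= 0.0 or random.random() < math.exp(delta):
            theta = prop
        samples.append(theta)
    return samples

flips = [1, 0, 1, 1, 1, 0, 1, 1]               # 6 heads in 8 flips
post = metropolis(model_loglik, flips)[2000:]  # drop burn-in
print(sum(post) / len(post))                   # posterior mean, close to 0.7
```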