733 research outputs found

    A Factor Graph Approach to Automated Design of Bayesian Signal Processing Algorithms

    The benefits of automating design cycles for Bayesian inference-based algorithms are becoming increasingly recognized by the machine learning community. As a result, interest in probabilistic programming frameworks has grown considerably over the past few years. This paper explores a specific probabilistic programming paradigm, namely message passing in Forney-style factor graphs (FFGs), in the context of the automated design of efficient Bayesian signal processing algorithms. To this end, we developed "ForneyLab" (https://github.com/biaslab/ForneyLab.jl), a Julia toolbox for message passing-based inference in FFGs. We show by example how ForneyLab enables the automatic derivation of Bayesian signal processing algorithms, including algorithms for parameter estimation and model comparison. Crucially, due to the modular makeup of the FFG framework, both the model specification and the inference methods are readily extensible in ForneyLab. To test this framework, we compared variational message passing as implemented by ForneyLab with automatic differentiation variational inference (ADVI) and Monte Carlo methods as implemented by the state-of-the-art tools "Edward" and "Stan". In terms of performance, extensibility, and stability, ForneyLab appears to enjoy an edge over its competitors for automated inference in state-space models. Comment: Accepted for publication in the International Journal of Approximate Reasoning
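ForneyLab itself is a Julia toolbox, but the sum-product message passing it automates can be sketched in a few lines. The following Python fragment (an illustration of the core idea, not ForneyLab's API) computes the posterior over a binary hidden variable by multiplying the incoming messages from a prior factor and a likelihood factor:

```python
import numpy as np

# Illustrative sum-product message passing on a minimal factor graph:
# prior factor --> hidden variable x --> likelihood factor (observed y).
# Not ForneyLab's API (ForneyLab is Julia); the numbers are made up.

prior_msg = np.array([0.5, 0.5])          # message from the prior factor to x
likelihood = np.array([[0.9, 0.1],        # p(y | x); rows index x, columns y
                       [0.2, 0.8]])

def posterior(y_obs):
    """Multiply the incoming messages at x and normalize (sum-product rule)."""
    lik_msg = likelihood[:, y_obs]        # message from the likelihood factor
    unnormalized = prior_msg * lik_msg
    return unnormalized / unnormalized.sum()

# posterior over x after observing y = 0: roughly [0.818, 0.182]
print(posterior(0))
```

Because every factor only ever exchanges such local messages, swapping a prior or likelihood node changes the derived algorithm without touching the rest of the graph, which is the modularity the abstract refers to.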

    Experiences on a motivational learning approach for robotics in undergraduate courses

    This paper presents an educational experience carried out in robotics undergraduate courses from two different degrees, Computer Science and Industrial Engineering, with students of diverse capabilities and motivations. The experience compares two learning strategies for the practical lessons of such courses: the first relies on code snippets in Matlab to cope with typical robotic problems like robot motion, localization, and mapping, while the second opts for using the ROS framework to develop algorithms for a competitive challenge, e.g. exploration algorithms. The students' opinions were instructive, reporting, for example, that although they consider ROS harder to master than Matlab, they find it more useful for their (robotics-related) professional careers, which enhanced their disposition to study it. They also considered that the challenge exercises, in addition to motivating them, helped to develop their skills as engineers to a greater extent than the skeleton-code-based ones. These and other conclusions will be useful in subsequent courses to boost the interest and motivation of the students.
    Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech

    Distributed simultaneous task allocation and motion coordination of autonomous vehicles using a parallel computing cluster

    Task allocation and motion coordination are the main factors that should be considered in the coordination of multiple autonomous vehicles in material handling systems. Presently, these factors are handled in separate stages, leading to a reduction in the optimality and efficiency of the overall coordination. If these issues are instead solved simultaneously, near-optimal results can be obtained. However, the simultaneous approach introduces additional algorithmic complexity, which increases computation time in the simulation environment. This work aims to reduce the computation time by adopting a parallel and distributed computation strategy for Simultaneous Task Allocation and Motion Coordination (STAMC). In the simulation experiments, each cluster node executes the motion coordination algorithm for one autonomous vehicle. This arrangement enables parallel computation of the expensive STAMC algorithm. Parallel and distributed computation is performed directly within the interpreted MATLAB environment. Results show the parallel and distributed approach provides sub-linear speedup compared to a single centralised computing node. © 2007 Springer-Verlag Berlin Heidelberg
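The per-vehicle decomposition described above can be sketched as follows. In this Python fragment (illustrative only: the positions, the brute-force search, and the Manhattan-distance cost are stand-ins for the paper's actual motion-coordination algorithm), every candidate task assignment is scored by computing each vehicle's motion cost in a separate worker process:

```python
from multiprocessing import Pool
from itertools import permutations

def motion_cost(args):
    # Stand-in for the real motion-coordination algorithm:
    # Manhattan distance from the vehicle to its assigned task.
    vehicle, task = args
    return abs(vehicle[0] - task[0]) + abs(vehicle[1] - task[1])

def best_assignment(vehicles, tasks):
    """Enumerate assignments; score each one with per-vehicle costs in parallel."""
    best, best_cost = None, float("inf")
    with Pool() as pool:
        for perm in permutations(range(len(tasks))):
            pairs = [(vehicles[i], tasks[perm[i]]) for i in range(len(vehicles))]
            cost = sum(pool.map(motion_cost, pairs))  # one worker per vehicle
            if cost < best_cost:
                best, best_cost = perm, cost
    return best, best_cost

if __name__ == "__main__":
    vehicles = [(0, 0), (5, 5)]
    tasks = [(1, 1), (6, 6)]
    print(best_assignment(vehicles, tasks))  # → ((0, 1), 4)
```

The inner `pool.map` is the part that maps onto cluster nodes in the paper's setting: the expensive per-vehicle computation runs concurrently while the joint assignment search stays on the coordinator.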

    Cost-effective HPC clustering for computer vision applications

    We present a cost-effective and flexible realization of high performance computing (HPC) clustering and its potential in solving computationally intensive problems in computer vision. The software foundation supporting the parallel programming is the GNU parallel Knoppix package, with message passing interface (MPI) based Octave, Python and C interface capabilities. The implementation is of particular interest in applications where the main objective is to reuse the existing hardware infrastructure and to keep within the overall budget. We present benchmark results and compare and contrast the performance of Octave and MATLAB.
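The scatter/gather message-passing pattern such MPI-based toolboxes expose can be mimicked with Python's standard library alone. The sketch below is an illustration of the pattern, not the Knoppix/MPI stack itself: a coordinator sends chunks of work to worker processes over pipes and sums the partial results they send back:

```python
from multiprocessing import Process, Pipe

def worker(conn):
    # Receive one chunk of work, process it, send the partial result back.
    chunk = conn.recv()
    conn.send(sum(x * x for x in chunk))
    conn.close()

def parallel_sum_of_squares(data, n_workers=4):
    """Scatter data across workers, gather and combine the partial sums."""
    chunks = [data[i::n_workers] for i in range(n_workers)]  # round-robin split
    pipes, procs = [], []
    for chunk in chunks:
        parent, child = Pipe()
        p = Process(target=worker, args=(child,))
        p.start()
        parent.send(chunk)       # "MPI_Send" analogue
        pipes.append(parent)
        procs.append(p)
    total = sum(conn.recv() for conn in pipes)  # "MPI_Recv" + reduce analogue
    for p in procs:
        p.join()
    return total

if __name__ == "__main__":
    print(parallel_sum_of_squares(list(range(10))))  # → 285
```

In the MPI setting the same roles are played by send/receive (or a collective reduce) across cluster nodes rather than local processes.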

    mGrid: A load-balanced distributed computing environment for the remote execution of the user-defined Matlab code

    BACKGROUND: Matlab, a powerful and productive language that allows for rapid prototyping, modeling and simulation, is widely used in computational biology. Modeling and simulation of large biological systems often require more computational resources than are available on a single computer. Existing distributed computing environments like the Distributed Computing Toolbox, MatlabMPI, Matlab*G and others allow for the remote (and possibly parallel) execution of Matlab commands, with varying support for features like an easy-to-use application programming interface, load-balanced utilization of resources, extensibility over the wide area network, and minimal system administration skill requirements. However, all of these environments require some level of access to the participating machines to manually distribute the user-defined libraries that a remote call may invoke. RESULTS: mGrid augments the usual process distribution seen in other similar distributed systems by adding facilities for user code distribution. mGrid's client-side interface is an easy-to-use native Matlab toolbox that transparently executes user-defined code on remote machines (i.e. the user is unaware that the code is executing somewhere else). Run-time variables are automatically packed and distributed with the user-defined code, and automated load balancing of remote resources enables smooth concurrent execution. mGrid is an open source environment. Apart from the programming language itself, all other components are also open source, freely available tools: light-weight PHP scripts and the Apache web server. CONCLUSION: Transparent, load-balanced distribution of user-defined Matlab toolboxes and rapid prototyping of many simple parallel applications can now be done with a single easy-to-use Matlab command. Because mGrid utilizes only Matlab, light-weight PHP scripts and the Apache web server, installation and configuration are very simple. Moreover, the web-based infrastructure of mGrid allows it to be easily extended over the Internet.

    HPCmatlab: A Framework for Fast Prototyping of Parallel Applications in Matlab

    The HPCmatlab framework has been developed for distributed memory programming in Matlab/Octave using the Message Passing Interface (MPI). The communication routines in the MPI library are implemented using MEX wrappers. Point-to-point, collective, and one-sided communication are supported. Benchmarking results show better performance than the MathWorks Distributed Computing Server. HPCmatlab has been used to successfully parallelize and speed up Matlab applications developed for scientific computing. The application results show good scalability while preserving ease of programmability. HPCmatlab also enables shared memory programming using Pthreads and parallel I/O using the ADIOS package.

    Long-Term Neuroadaptations Produced by Withdrawal from Repeated Cocaine Treatment: Role of Dopaminergic Receptors in Modulating Cortical Excitability

    Dopamine (DA) modulates neuronal activity in the prefrontal cortex (PFC) and is necessary for optimal cognitive function. Dopamine transmission in the PFC is also important for the behavioral adaptations produced by repeated exposure to cocaine. Therefore, we investigated the effects of repeated cocaine treatment followed by withdrawal (2–4 weeks) on the responsivity of cortical cells to electrical stimulation of the ventral tegmental area (VTA) and to systemic administration of DA D1 or D2 receptor antagonists. Cortical cells in cocaine- and saline-treated animals exhibited a similar decrease in excitability after the administration of D1 receptor antagonists. In contrast, cortical neurons from cocaine-treated rats exhibited a lack of D2-mediated regulation relative to saline-treated rats. Furthermore, in contrast to saline-treated animals, VTA stimulation did not increase cortical excitability in the cocaine group. These data suggest that withdrawal from repeated cocaine administration elicits long-term neuroadaptations in the PFC, including (1) reduced D2-mediated regulation of cortical excitability, (2) reduced responsivity of cortical cells to phasic increases in DA, and (3) a trend toward an overall decrease in excitability of PFC neurons.

    DISROPT: a Python Framework for Distributed Optimization

    This result is part of a project that has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 638992 - OPT4SMART). In this paper we introduce disropt, a Python package for distributed optimization over networks. We focus on cooperative setups in which an optimization problem must be solved by peer-to-peer processors (without central coordinators) that have access only to partial knowledge of the entire problem. To reflect this, agents in disropt are modeled as entities that are initialized with their local knowledge of the problem. Agents then run local routines and communicate with each other to solve the global optimization problem. A simple syntax has been designed to allow for easy modeling of the problems. The package comes with many distributed optimization algorithms already embedded. Moreover, the package provides full-fledged functionalities for communication and local computation, which can be used to design and implement new algorithms. disropt is available at github.com/disropt/disropt under the GPL license, with complete documentation and many examples.
    Farina F.; Camisa A.; Testa A.; Notarnicola I.; Notarstefano G.
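A minimal flavor of the peer-to-peer setting disropt targets, sketched with plain NumPy rather than disropt's own syntax: agents on a ring cooperatively solve min_x sum_i (x - a_i)^2, whose solution is the average of the locally held a_i, using only neighbor-to-neighbor averaging with a doubly stochastic mixing matrix:

```python
import numpy as np

# Illustrative consensus sketch, not disropt's API: each agent i knows only
# its own a_i and talks only to its two ring neighbors, yet all local
# estimates converge to the global minimizer of sum_i (x - a_i)^2 = mean(a).

def ring_weights(n):
    # Doubly stochastic mixing matrix: self-weight 0.5, neighbors 0.25 each.
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = 0.5
        W[i, (i - 1) % n] = 0.25
        W[i, (i + 1) % n] = 0.25
    return W

def consensus_optimize(a, n_iters=300):
    x = np.array(a, dtype=float)   # each agent starts from its local data
    W = ring_weights(len(a))
    for _ in range(n_iters):
        x = W @ x                  # one round of neighbor-only communication
    return x

print(consensus_optimize([1.0, 2.0, 3.0, 4.0, 5.0]))  # all entries → 3.0
```

Double stochasticity of W is what preserves the average at every round; the library's embedded algorithms generalize this building block to constrained and non-quadratic problems.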

    Parallel scientific computing with message-passing toolboxes

    Users of Scientific Computing Environments (SCE) always demand more computing power for their CPU-intensive SCE applications. Using the proposed toolboxes, users of the well-known Matlab® and Octave platforms in a computer cluster can parallelize their interpreted applications using the native multi-computer programming paradigm of message passing, such as that provided by PVM (Parallel Virtual Machine) and MPI (Message Passing Interface). For many SCE applications, a parallelization scheme can be found so that the resulting speedup is nearly linear in the number of computers used. The toolboxes are almost comprehensive interfaces to the corresponding libraries; they support all the compatible data types in the base SCE and have been designed with performance and maintainability in mind. In this paper, we summarize our previous work, its repercussions, and some results obtained by end-users. Focusing on our most recent MPI Toolbox for Octave, we briefly describe its main features and introduce a case study: the Mandelbrot set.
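The Mandelbrot set parallelizes naturally because every image row is an independent task. The stdlib Python sketch below illustrates the same decomposition (the paper's implementation uses the MPI Toolbox for Octave; the image bounds and sizes here are arbitrary):

```python
from multiprocessing import Pool

def mandel_row(args):
    # Iterate z <- z^2 + c for every pixel in one row; return escape counts.
    y, width, max_iter = args
    row = []
    for i in range(width):
        c = complex(-2.0 + 3.0 * i / width, y)
        z, n = 0j, 0
        while abs(z) <= 2 and n < max_iter:
            z = z * z + c
            n += 1
        row.append(n)
    return row

def mandelbrot(width=60, height=40, max_iter=50):
    """Compute the escape-count image, one row per worker task."""
    ys = [-1.5 + 3.0 * j / height for j in range(height)]
    with Pool() as pool:   # rows are distributed across worker processes
        return pool.map(mandel_row, [(y, width, max_iter) for y in ys])

if __name__ == "__main__":
    img = mandelbrot()
```

Because rows never communicate, the speedup of this scheme is close to linear in the number of workers, which is exactly the behavior the toolboxes report for such embarrassingly parallel SCE workloads.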