
    Spatial Aggregation: Theory and Applications

    Visual thinking plays an important role in scientific reasoning. Based on research into automating diverse reasoning tasks about dynamical systems, nonlinear controllers, kinematic mechanisms, and fluid motion, we have identified a style of visual thinking, imagistic reasoning. Imagistic reasoning organizes computations around image-like, analogue representations so that perceptual and symbolic operations can be brought to bear to infer structure and behavior. Programs incorporating imagistic reasoning have been shown to perform at an expert level in domains that defy current analytic or numerical methods. We have developed a computational paradigm, spatial aggregation, to unify the description of a class of imagistic problem solvers. A program written in this paradigm has the following properties. It takes a continuous field and optional objective functions as input, and produces high-level descriptions of structure, behavior, or control actions. It computes multiple layers of intermediate representations, called spatial aggregates, by forming equivalence classes and adjacency relations. It employs a small set of generic operators, such as aggregation, classification, and localization, to perform bidirectional mappings between the information-rich field and successively more abstract spatial aggregates. It uses a data structure, the neighborhood graph, as a common interface to modularize computations. To illustrate our theory, we describe the computational structure of three implemented problem solvers (KAM, MAPS, and HIPAIR) in terms of the spatial aggregation generic operators by mixing and matching a library of commonly used routines.
    Comment: See http://www.jair.org/ for any accompanying file
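    As a rough illustration of the paradigm described in this abstract, the sketch below builds a neighborhood graph over sampled field points, classifies its edges with an equivalence predicate, and aggregates the surviving edges into higher-level spatial aggregates. The field, the distance-based neighborhood relation, and the value-equality predicate are illustrative assumptions, not the actual routines used in KAM, MAPS, or HIPAIR.

```python
# Minimal sketch of the spatial-aggregation loop: build a neighborhood
# graph, classify edges by an equivalence predicate, aggregate the
# resulting equivalence classes. All choices here are illustrative.
from itertools import combinations

def build_neighborhood_graph(points, radius):
    """Connect field samples that lie within `radius` of each other."""
    edges = set()
    for (i, p), (j, q) in combinations(enumerate(points), 2):
        if sum((a - b) ** 2 for a, b in zip(p, q)) <= radius ** 2:
            edges.add((i, j))
    return edges

def classify(values, edges, tol):
    """Keep only edges whose endpoints carry approximately equal field values."""
    return {(i, j) for (i, j) in edges if abs(values[i] - values[j]) <= tol}

def aggregate(n, edges):
    """Group nodes into equivalence classes = connected components of kept edges."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i, j in edges:
        parent[find(i)] = find(j)
    classes = {}
    for i in range(n):
        classes.setdefault(find(i), []).append(i)
    return list(classes.values())

# Usage: a 1-D "field" sampled at four points; two aggregates emerge.
pts = [(0.0,), (1.0,), (5.0,), (6.0,)]
vals = [1.0, 1.1, 3.0, 3.05]
g = build_neighborhood_graph(pts, radius=1.5)
kept = classify(vals, g, tol=0.2)
print(aggregate(len(pts), kept))   # e.g. [[0, 1], [2, 3]]
```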

    A modular modeling approach to simulate interactively multibody systems with Baumgarte/Uzawa formulation

    In this paper, a modular modeling approach to multibody systems adapted to interactive simulation is presented. This work is based on a study of the stability of two Differential Algebraic Equation solvers: the first is based on the acceleration-based augmented Lagrangian formulation and the second on the Baumgarte formulation. We show that these two solvers give the same results and must satisfy the same criteria to stabilize the algebraic constraint acceleration error. For a modular modeling approach, we propose to use the Baumgarte formulation together with an iterative Uzawa algorithm to solve for the external constraint forces. This work is also a first step toward validating the concept of two types of numerical components for Object-Oriented Programming.
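    For intuition, the sketch below shows one common way of combining the two ingredients named in this abstract; it is an assumed, simplified analogue, not the paper's implementation. The constraint acceleration target is replaced by the Baumgarte-stabilized right-hand side -2*alpha*Cdot - beta^2*C, and the constraint forces are obtained by a fixed-point (Uzawa) iteration on the dual variables. The matrices, gains, and step size are toy values.

```python
# Rough sketch of a Baumgarte-stabilized constraint solve with an Uzawa
# iteration for the constraint forces. M, J, f, and the gains are toy
# values, not taken from the paper.
import numpy as np

def uzawa_baumgarte(M, J, f, C, Cdot, alpha=10.0, beta=10.0,
                    rho=0.5, iters=200):
    """Solve M qdd + J.T lam = f subject to the Baumgarte-stabilized
    constraint J qdd = -2*alpha*Cdot - beta**2 * C (drift correction)."""
    rhs = -2.0 * alpha * Cdot - beta**2 * C      # stabilized target acceleration
    lam = np.zeros(J.shape[0])
    Minv = np.linalg.inv(M)
    for _ in range(iters):
        qdd = Minv @ (f - J.T @ lam)             # primal step: acceleration for current forces
        lam = lam + rho * (J @ qdd - rhs)        # dual (Uzawa) step on the constraint residual
    return qdd, lam

# Toy example: a 2-DOF system with one constraint q1 - q2 = 0.
M = np.diag([1.0, 2.0])
J = np.array([[1.0, -1.0]])
f = np.array([1.0, 0.0])
C = np.array([0.0]); Cdot = np.array([0.0])
qdd, lam = uzawa_baumgarte(M, J, f, C, Cdot)
print(qdd, lam)    # the two accelerations converge to the same value
```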

    A Vision-based Scheme for Kinematic Model Construction of Re-configurable Modular Robots

    Re-configurable modular robotic (RMR) systems are advantageous for their reconfigurability and versatility. A new modular robot can be built for a specific task by using modules as building blocks. However, constructing a kinematic model for a newly conceived robot requires significant work. Because there is only a finite number of module types, models of all module types can be built individually and stored in a database beforehand. With this a priori knowledge, the model construction process can be automated by detecting the modules and their corresponding interconnections. Previous literature proposed theoretical frameworks for constructing kinematic models of modular robots, assuming that such information was known a priori. While well-devised mechanisms and built-in sensors can be employed to detect these parameters automatically, they significantly complicate the module design and are thus expensive. In this paper, we propose a vision-based method to identify kinematic chains and automatically construct robot models for modular robots. Each module is affixed with augmented reality (AR) tags that are encoded with unique IDs. An image of the modular robot is taken, and the detected modules are recognized by querying a database that maintains all module information. The poses of the detected modules are used to compute (i) the connections between modules and (ii) the joint angles of joint-modules. Finally, the robot's serial-link chain is identified, and the kinematic model is constructed and visualized. Our experimental results validate the effectiveness of our approach. While implementation is shown only with our RMR, our method can be applied to other RMRs where self-identification is not possible.
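    The sketch below illustrates the two pose-based computations mentioned in this abstract, using homogeneous transforms of detected tags: a distance test against an expected connector offset to decide connectivity, and the relative rotation between module frames to recover a revolute joint angle. The frames, connector offset, tolerance, and z-axis joint convention are illustrative assumptions, not the paper's actual pipeline.

```python
# Minimal sketch of the two pose-based computations:
# (i) decide whether two detected modules are connected and
# (ii) recover the joint angle of a joint-module from tag poses.
import numpy as np

def relative_transform(T_cam_a, T_cam_b):
    """Pose of module b expressed in module a's frame."""
    return np.linalg.inv(T_cam_a) @ T_cam_b

def are_connected(T_cam_a, T_cam_b, expected_offset, tol=0.01):
    """Modules count as connected if b sits at a's expected connector offset."""
    T_ab = relative_transform(T_cam_a, T_cam_b)
    return np.linalg.norm(T_ab[:3, 3] - expected_offset) < tol

def joint_angle_about_z(T_cam_a, T_cam_b):
    """Rotation of b relative to a about a's z-axis (revolute joint)."""
    R = relative_transform(T_cam_a, T_cam_b)[:3, :3]
    return np.arctan2(R[1, 0], R[0, 0])

# Toy poses: module b is 0.10 m along a's x-axis and rotated 30 deg about z.
th = np.deg2rad(30.0)
Rz = np.array([[np.cos(th), -np.sin(th), 0.0],
               [np.sin(th),  np.cos(th), 0.0],
               [0.0,         0.0,        1.0]])
T_a = np.eye(4)
T_b = np.eye(4); T_b[:3, :3] = Rz; T_b[:3, 3] = [0.10, 0.0, 0.0]
print(are_connected(T_a, T_b, expected_offset=np.array([0.10, 0.0, 0.0])))  # True
print(np.rad2deg(joint_angle_about_z(T_a, T_b)))                            # ~30.0
```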

    On the Development of a Generic Multi-Sensor Fusion Framework for Robust Odometry Estimation

    In this work we review the design choices and the mathematical and software engineering techniques employed in the development of the ROAMFREE sensor fusion library, a general, open-source framework for pose tracking and sensor parameter self-calibration in mobile robotics. In ROAMFREE, a comprehensive logical sensor library makes it possible to abstract from the actual sensor hardware and processing while preserving model accuracy, thanks to a rich set of calibration parameters such as biases, gains, distortion matrices, and geometric placement dimensions. The modular formulation of the sensor fusion problem, which is based on state-of-the-art factor graph inference techniques, allows an arbitrary number of multi-rate sensors to be handled and adapts to virtually any kind of mobile robot platform, such as Ackermann-steering vehicles, quadrotor unmanned aerial vehicles, and omni-directional mobile robots. Different solvers are available to target high-rate online pose tracking tasks as well as offline accurate trajectory smoothing and parameter calibration. The modularity, versatility, and out-of-the-box functioning of the resulting framework came at the cost of an increased complexity of the software architecture with respect to an ad-hoc implementation of a platform-dependent sensor fusion algorithm, and required careful design of abstraction layers and decoupling interfaces between solvers, state variable representations, and sensor error models. However, we review how a high-level, clean C++/Python API, as well as ROS interface nodes, hides the complexity of sensor fusion tasks from the end user, making ROAMFREE an ideal choice for new and existing mobile robot projects.
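    As a toy illustration of the factor-graph formulation this abstract refers to (the snippet does not use ROAMFREE's API; it is an assumed, simplified analogue), the sketch below fuses a high-rate odometry stream with low-rate absolute position fixes over a shared set of 1-D pose states by stacking every measurement as a residual row and solving the joint least-squares problem. In a real factor graph the residuals would additionally be weighted by their measurement covariances.

```python
# Toy illustration of the factor-graph idea behind frameworks like
# ROAMFREE (this is NOT ROAMFREE's API): each measurement becomes a
# residual on a shared set of pose states, and all residuals are
# minimized jointly, regardless of sensor rate.
import numpy as np

N = 10                              # number of 1-D pose states x_0..x_9
rng = np.random.default_rng(0)
truth = np.cumsum(np.full(N, 1.0))  # true positions 1, 2, ..., 10

# High-rate odometry factors: x_i - x_{i-1} = odo_i (one per step, x_{-1} := 0).
odo = np.diff(np.concatenate(([0.0], truth))) + rng.normal(0, 0.05, N)
# Low-rate GPS-like factors: x_i = gps_k (every 3rd state only).
gps_idx = np.arange(0, N, 3)
gps = truth[gps_idx] + rng.normal(0, 0.2, gps_idx.size)

rows, rhs = [], []
for i in range(N):                  # odometry rows
    row = np.zeros(N)
    row[i] = 1.0
    if i > 0:
        row[i - 1] = -1.0
    rows.append(row)
    rhs.append(odo[i])
for k, i in enumerate(gps_idx):     # absolute-position rows
    row = np.zeros(N)
    row[i] = 1.0
    rows.append(row)
    rhs.append(gps[k])

A = np.vstack(rows); b = np.array(rhs)
x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(x_hat - truth, 2))   # small residual errors
```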