
    Semi-automatic geometry-driven reassembly of fractured archeological objects

    3D laser scanning of broken cultural-heritage objects is becoming increasingly popular, resulting in large collections of detailed fractured archeological 3D objects that have to be reassembled virtually. In this paper, we present a new semi-automatic reassembly approach for pairwise matching of fragments that takes into account both the archeologist's expertise and the power of automatic geometry-driven matching algorithms. Our approach is based on a real-time interaction loop: an expert user steadily specifies approximate initial relative positions and orientations between two fragments by means of a bimanual tangible user interface. These initial poses are continuously corrected and validated in real time by an algorithm based on the Iterative Closest Point (ICP) method: the potential contact surface of the two fragments is identified by efficiently pruning insignificant areas of a pair of bounding-sphere hierarchies, combined with a k-d tree for closest-vertex queries. The locally optimal relative pose for the best match is robustly estimated by taking into account both the distances of the closest vertices and their normals. We provide feedback to the user through a visual representation of the locally optimal best match and its associated error. Our first results on a concrete dataset show that our system is capable of assisting an expert user in real time during the pairwise matching of downsampled 3D fragments.
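    The pose-correction step described above can be sketched as a single point-to-point ICP iteration: k-d-tree closest-vertex queries followed by a least-squares rigid-pose estimate. This is a minimal sketch, not the paper's implementation: it omits the bounding-sphere contact-surface pruning and the normal-based weighting, and all names and data are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(src, dst):
    """One ICP iteration: match each source vertex to its closest
    destination vertex via a k-d tree, then estimate the rigid
    transform minimising the summed squared distances (Kabsch/SVD)."""
    tree = cKDTree(dst)
    _, idx = tree.query(src)
    matched = dst[idx]
    # Centre both point sets, then solve for the optimal rotation.
    mu_s, mu_d = src.mean(axis=0), matched.mean(axis=0)
    H = (src - mu_s).T @ (matched - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Toy usage: align a slightly rotated copy of a small point cloud.
rng = np.random.default_rng(0)
dst = rng.random((50, 3))
a = 0.1
Rz = np.array([[np.cos(a), -np.sin(a), 0],
               [np.sin(a),  np.cos(a), 0],
               [0, 0, 1]])
src = dst @ Rz.T
R, t = icp_step(src, dst)
```

In the full system this step runs repeatedly inside the interaction loop, restricted to the pruned contact-surface candidates rather than whole fragments.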

    Multi-Objective Calibration For Agent-Based Models

    Agent-based modelling is already proving to be an immensely useful tool for scientific and industrial modelling applications. Whilst the building of such models will always be somewhere between an art and a science, once a detailed model has been built, the process of parameter calibration should be performed as precisely as possible. This task is often made difficult by the proliferation of model parameters with non-linear interactions. In addition, these models generate a large number of outputs, and their ‘accuracy’ can be measured by many different, often conflicting, criteria. In this paper we demonstrate the use of multi-objective optimisation tools to calibrate just such an agent-based model. Using an agent-based model of a financial market as an exemplar, we calibrate the model with a multi-objective genetic algorithm. The technique is automated and requires no explicit weighting of criteria prior to calibration; the final choice of parameter set can be made after calibration with the additional input of the domain expert.
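    The Pareto-dominance selection at the heart of such a multi-objective genetic algorithm can be sketched as follows, assuming two hypothetical, conflicting calibration criteria; the parameter encoding, objectives, and variation operators of the actual model are not reproduced here.

```python
import random

def dominates(a, b):
    """a dominates b if it is no worse on every objective and strictly
    better on at least one (minimisation)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(scored):
    """Return the non-dominated (params, objectives) pairs."""
    return [(p, f) for p, f in scored
            if not any(dominates(g, f) for _, g in scored)]

def objectives(params):
    # Hypothetical conflicting criteria, e.g. calibration error on two
    # different market statistics; pushing x toward 0 helps the first
    # objective and hurts the second.
    x, y = params
    return (x ** 2 + y, (1 - x) ** 2 + y)

random.seed(1)
population = [(random.random(), random.random()) for _ in range(20)]
scored = [(p, objectives(p)) for p in population]
front = pareto_front(scored)
```

A GA would use such a front for selection each generation; the expert then picks a final parameter set from the front after calibration, which is why no prior weighting of criteria is needed.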

    Portable inference engine: An extended CLIPS for real-time production systems

    The present C-Language Integrated Production System (CLIPS) architecture has not been optimized to deal with the constraints of real-time production systems. Matching in CLIPS is based on the Rete Net algorithm, whose assumption of working-memory stability might fail to be satisfied in a system subject to real-time dataflow. Further, the CLIPS forward-chaining control mechanism with a predefined conflict resolution strategy may not effectively focus the system's attention on situation-dependent current priorities, or appropriately address the different kinds of knowledge that might appear in a given application. Portable Inference Engine (PIE) is a production system architecture based on CLIPS which attempts to create a more general tool while addressing the problems of real-time expert systems. Features of the PIE design include a modular knowledge base, a modified Rete Net algorithm, a bi-directional control strategy, and multiple user-defined conflict resolution strategies. Problems associated with real-time applications are analyzed, and an explanation is given of how the PIE architecture addresses them.
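    The idea of multiple user-defined conflict resolution strategies can be illustrated with a toy conflict set and pluggable selection functions; this is an illustrative sketch, not PIE's or CLIPS's actual machinery, and the rule names are invented.

```python
from dataclasses import dataclass

@dataclass
class Activation:
    rule: str        # name of the rule whose conditions are satisfied
    salience: int    # static priority assigned to the rule
    timestamp: int   # recency of the facts that matched it

# Pluggable conflict-resolution strategies: each maps the conflict set
# (all current activations) to the single activation that fires next.
STRATEGIES = {
    "salience": lambda cs: max(cs, key=lambda a: a.salience),
    "recency":  lambda cs: max(cs, key=lambda a: a.timestamp),
}

def resolve(conflict_set, strategy="salience"):
    return STRATEGIES[strategy](conflict_set)

conflict_set = [
    Activation("raise-alarm", salience=10, timestamp=3),
    Activation("log-event",   salience=1,  timestamp=7),
]
```

Switching strategies changes which activation fires without touching the rules themselves, which is the flexibility a real-time system needs to refocus on situation-dependent priorities.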

    Using Monte Carlo Search With Data Aggregation to Improve Robot Soccer Policies

    RoboCup soccer competitions are considered among the most challenging multi-robot adversarial environments, due to their high dynamism and the partial observability of the environment. In this paper we introduce a method based on a combination of Monte Carlo search and data aggregation (MCSDA) to adapt discrete-action soccer policies for a defender robot to the strategy of the opponent team. By exploiting a simple representation of the domain, a supervised learning algorithm is trained over an initial collection of data consisting of several simulations of human expert policies. Monte Carlo policy rollouts are then generated and aggregated with the previous data to improve the learned policy over multiple epochs and games. The proposed approach has been extensively tested both on a soccer-dedicated simulator and on real robots. Using this method, our learning robot soccer team achieves an improvement in ball interceptions, as well as a reduction in the number of opponents' goals. Alongside this better performance, the whole team achieves an overall more efficient positioning within the field.
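    The data-aggregation loop can be sketched as follows, with a trivial 1-nearest-neighbour learner standing in for the supervised algorithm and plain policy rollouts standing in for the Monte Carlo search step; all states, labels, and epoch counts here are illustrative, not taken from the paper.

```python
import random

def train(dataset):
    """Placeholder supervised learner: memorise (state, action) pairs
    and act by the action of the nearest stored state (1-NN).
    Stands in for any classifier."""
    def policy(state):
        s, a = min(dataset, key=lambda sa: abs(sa[0] - state))
        return a
    return policy

def rollout(policy, states):
    """Run the current policy and record the (state, action) pairs it
    produces; in MCSDA these would first be scored by Monte Carlo search."""
    return [(s, policy(s)) for s in states]

random.seed(0)
# Toy expert demonstrations: act (1) when the scalar state exceeds 0.5.
expert_data = [(s, int(s > 0.5)) for s in (0.1, 0.4, 0.6, 0.9)]

dataset = list(expert_data)
for epoch in range(3):
    policy = train(dataset)
    new_states = [random.random() for _ in range(5)]
    dataset += rollout(policy, new_states)  # aggregate rollouts with prior data
```

The key structural point is the aggregation: each epoch retrains on the union of expert data and all previous rollouts, so the policy improves over epochs rather than drifting away from the demonstrations.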

    Parallel processing and expert systems

    Whether it be monitoring the thermal subsystem of Space Station Freedom or controlling the navigation of an autonomous rover on Mars, NASA missions in the 1990s cannot enjoy an increased level of autonomy without the efficient implementation of expert systems. Merely increasing the computational speed of uniprocessors may not guarantee that real-time demands are met for larger systems. Speedup via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial laboratories in the U.S. and Japan. This survey covers the state-of-the-art research in progress on parallel execution of expert systems: multiprocessors for expert systems, parallel languages for symbolic computations, and mapping expert systems to multiprocessors. Results to date indicate that the parallelism achieved for these systems is small. The main reasons are (1) the body of knowledge applicable in any given situation, and the amount of computation executed by each rule firing, are small; (2) dividing the problem-solving process into relatively independent partitions is difficult; and (3) implementation decisions that enable expert systems to be incrementally refined hamper compile-time optimization. To obtain greater speedups, data parallelism and application parallelism must be exploited.
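    The data parallelism mentioned above can be sketched by matching each rule against working memory in a separate worker. This is a toy illustration, thread-based for brevity and not drawn from any of the surveyed systems; the facts and rules are invented.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy working memory and rules: each rule's condition is checked
# against every fact, so the match phase parallelises across rules.
working_memory = [{"temp": t} for t in (40, 75, 103, 98)]

rules = {
    "overheat":  lambda f: f["temp"] > 100,
    "nominal":   lambda f: 60 <= f["temp"] <= 100,
    "undercool": lambda f: f["temp"] < 60,
}

def match(rule_item):
    """Match one rule against all of working memory
    (one unit of data parallelism)."""
    name, cond = rule_item
    return name, [f for f in working_memory if cond(f)]

with ThreadPoolExecutor() as pool:
    conflict_set = dict(pool.map(match, rules.items()))
```

As the survey notes, the payoff of such parallel matching is limited when few rules are applicable at once and each firing does little work, which is why partitioning the problem itself matters more than the match phase alone.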