
    Autonomous vehicle navigation with real-time 3D laser based positioning for construction

    Automated Guided Vehicles (AGVs) are a way of life in manufacturing, where navigation can be done in a structured environment. Construction is an unstructured environment and requires a different type of navigation system to deal with three-dimensional control and rough terrain. This paper reviews navigation systems that combine dead reckoning with absolute referencing systems such as beacon-based, vision-based, and map-based systems. The use of a real-time laser-based technology is demonstrated as a new form of navigation. This technology does not rely on dead reckoning. The paper outlines the issues and strategies in guiding an autonomous vehicle using only the laser-based positioning system. Algorithms were developed to provide real-time control of the AGV. The laser-based positioning system is unique in that it provides three-dimensional position data at five updates per second; no other system provides this level of performance. This allows control of end effectors and autonomous vehicles in complex, unstructured three-dimensional environments. This new type of navigation makes possible the automation of large, complex assemblies in rough terrain such as construction.
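    The control idea in the abstract, steering from absolute 3D position fixes alone with no dead reckoning, can be sketched as a simple waypoint-tracking loop. The function names and the proportional stepping scheme below are illustrative assumptions, not the paper's actual algorithms:

```python
import math

def steer_to_waypoint(position, waypoint, max_step=0.5):
    """One control update toward a 3D waypoint using only an absolute
    position fix (no odometry or dead reckoning). Returns the commanded
    displacement for this cycle."""
    dx = [w - p for w, p in zip(waypoint, position)]
    dist = math.sqrt(sum(d * d for d in dx))
    if dist <= max_step:
        return tuple(dx)  # final approach: move exactly onto the waypoint
    scale = max_step / dist
    return tuple(d * scale for d in dx)

def navigate(position, waypoint, tolerance=0.05, max_updates=1000):
    """Iterate position-fix/steer cycles (the paper's system delivers
    five fixes per second) until within tolerance of the waypoint."""
    pos = list(position)
    for _ in range(max_updates):
        if math.dist(pos, waypoint) <= tolerance:
            return tuple(pos)
        step = steer_to_waypoint(pos, waypoint)
        pos = [p + s for p, s in zip(pos, step)]
    return tuple(pos)
```

    Because each cycle starts from a fresh absolute fix, positioning error does not accumulate over time, which is the key advantage over dead reckoning on rough terrain.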

    A simulation tool for automated guided vehicle systems


    Accelerating the BSM interpretation of LHC data with machine learning

    The interpretation of Large Hadron Collider (LHC) data in the framework of Beyond the Standard Model (BSM) theories is hampered by the need to run computationally expensive event generators and detector simulators. Performing statistically convergent scans of high-dimensional BSM theories is consequently challenging, and in practice infeasible for very high-dimensional BSM theories. We present here a new machine learning method that accelerates the interpretation of LHC data, by learning the relationship between BSM theory parameters and data. As a proof-of-concept, we demonstrate that this technique accurately predicts natural SUSY signal events in two signal regions at the High Luminosity LHC, up to four orders of magnitude faster than standard techniques. The new approach makes it possible to rapidly and accurately reconstruct the theory parameters of complex BSM theories, should an excess in the data be discovered at the LHC.
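    The surrogate-model idea described above can be sketched in miniature: run the expensive simulation on a modest scan of parameter points, fit a cheap regressor, then query the regressor instead of the simulator. Everything here is a toy assumption: `expensive_simulator` is a synthetic stand-in for an event generator plus detector simulation, and a quadratic least-squares fit stands in for the paper's machine learning model:

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_simulator(theta):
    """Synthetic stand-in for event generation + detector simulation:
    maps two hypothetical theory parameters to a signal-event yield."""
    t1, t2 = theta
    return 100.0 * np.exp(-t1 / 2.0) * (1.0 + 0.5 * t2)

def features(thetas):
    """Quadratic feature map used by the cheap surrogate."""
    t1, t2 = thetas[:, 0], thetas[:, 1]
    return np.column_stack([np.ones_like(t1), t1, t2, t1 * t2, t1**2, t2**2])

# "Run the simulator" once on a training scan of parameter points...
train = rng.uniform(0.0, 1.0, size=(200, 2))
y = np.array([expensive_simulator(t) for t in train])

# ...fit the surrogate; afterwards every prediction is just a dot product,
# orders of magnitude cheaper than re-running the simulation chain.
coef, *_ = np.linalg.lstsq(features(train), y, rcond=None)

def surrogate(theta):
    return features(np.atleast_2d(theta)) @ coef
```

    The speed-up comes from amortization: the simulator cost is paid once at training time, and parameter scans afterwards only evaluate the learned map.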

    Published abstracts

    In most real-world information processing problems, data is not a free resource; its acquisition is rather time-consuming and/or expensive. We investigate how these two factors can be included in supervised classification tasks by framing classification as a sequential decision process and making it accessible to Reinforcement Learning. Our method performs a sequential feature selection that learns which features are most informative at each timestep, choosing the next feature depending on the already selected features and the internal belief of the classifier. Experiments on a handwritten digits classification task show significant reduction in required data for correct classification, while a medical diabetes prediction task illustrates variable feature cost minimization as a further property of our algorithm.
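    The acquisition loop described above can be illustrated with a drastically simplified, non-learning sketch: features are bought one at a time (here cheapest-first, whereas the paper learns the acquisition policy with RL), the classifier's belief is updated by Bayes' rule after each observation, and acquisition stops as soon as the belief is confident. All likelihoods, costs, and feature names are made-up toy values:

```python
# Toy problem: two classes, three binary features with known likelihoods.
LIKELIHOOD = {  # P(feature = 1 | class 0), P(feature = 1 | class 1)
    "f0": (0.9, 0.2),
    "f1": (0.7, 0.4),
    "f2": (0.6, 0.5),
}
COST = {"f0": 3.0, "f1": 1.0, "f2": 0.5}

def update_belief(belief, feat, value):
    """Bayes update of P(class = 0) after observing one feature value."""
    p0, p1 = LIKELIHOOD[feat]
    l0 = p0 if value else 1 - p0
    l1 = p1 if value else 1 - p1
    num = belief * l0
    return num / (num + (1 - belief) * l1)

def classify(sample, threshold=0.9):
    """Acquire features cheapest-first, stopping once the belief is
    confident. Returns (predicted class, total acquisition cost paid)."""
    belief, paid = 0.5, 0.0
    for feat in sorted(COST, key=COST.get):
        if belief >= threshold or 1 - belief >= threshold:
            break
        belief = update_belief(belief, feat, sample[feat])
        paid += COST[feat]
    return (0 if belief >= 0.5 else 1), paid
```

    The quantity the paper's RL agent optimizes corresponds to trading off the cost paid in this loop against the accuracy of the final decision.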

    Feature Selection for Value Function Approximation Using Bayesian Model Selection

    Feature selection in reinforcement learning (RL), i.e. choosing basis functions such that useful approximations of the unknown value function can be obtained, is one of the main challenges in scaling RL to real-world applications. Here we consider the Gaussian process based framework GPTD for approximate policy evaluation, and propose feature selection through marginal likelihood optimization of the associated hyperparameters. Our approach has two appealing benefits: (1) given just sample transitions, we can solve the policy evaluation problem fully automatically (without looking at the learning task, and, in theory, independent of the dimensionality of the state space), and (2) model selection allows us to consider more sophisticated kernels, which in turn enable us to identify relevant subspaces and eliminate irrelevant state variables such that we can achieve substantial computational savings and improved prediction performance.
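    The model-selection mechanism the abstract relies on can be shown in isolation: with an automatic relevance determination (ARD) kernel, giving an irrelevant input dimension a very large lengthscale effectively removes it, and the Gaussian process marginal likelihood prefers that simpler model. The sketch below uses a plain GP regression toy rather than GPTD itself; the data, lengthscale values, and noise level are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf_kernel(X, lengthscales):
    """ARD squared-exponential kernel; a large lengthscale in one
    dimension effectively eliminates that state variable."""
    Z = X / np.asarray(lengthscales)
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2)

def log_marginal_likelihood(X, y, lengthscales, noise=0.1):
    """GP log marginal likelihood, computed via a Cholesky factorization."""
    K = rbf_kernel(X, lengthscales) + noise**2 * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha
            - np.log(np.diag(L)).sum()
            - 0.5 * len(X) * np.log(2 * np.pi))

# Synthetic target: depends on x0 only; x1 is an irrelevant state variable.
X = rng.uniform(-1, 1, size=(60, 2))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.standard_normal(60)

# Candidate hyperparameter settings: treat x1 as relevant vs. irrelevant.
lml_both_relevant = log_marginal_likelihood(X, y, [0.3, 0.3])
lml_x1_eliminated = log_marginal_likelihood(X, y, [0.3, 30.0])
```

    Maximizing this marginal likelihood over the lengthscales (e.g. by gradient ascent) drives the lengthscale of irrelevant dimensions upward automatically, which is the relevance-determination effect the abstract exploits.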