    A two teraflop swarm

    © 2018 Jones, Studley, Hauert and Winfield. We introduce the Xpuck swarm, a research platform with an aggregate raw processing power in excess of two teraflops. The swarm uses 16 e-puck robots augmented with custom hardware that exploits the substantial CPU and GPU processing power available from modern mobile system-on-chip devices. The augmented robots, called Xpucks, have at least an order of magnitude greater performance than previous swarm robotics platforms. The platform enables new experiments that require high individual robot computation and multiple robots. Uses include online evolution or learning of swarm controllers, simulation for answering what-if questions about possible actions, distributed supercomputing for mobile platforms, and real-world applications of swarm robotics that require image processing or SLAM. The teraflop swarm could also be used to explore swarming in nature by providing platforms with computational power similar to that of simple insects. We demonstrate the computational capability of the swarm by implementing a fast physics-based robot simulator and using this within a distributed island model evolutionary system, all hosted on the Xpucks.
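
The closing claim, that a distributed island-model evolutionary system can run entirely on the robots, follows the kind of loop sketched below. This is a minimal, illustrative sketch of the island-model idea only, not the Xpuck implementation: the genome length, population sizes, migration interval and toy fitness function are assumptions standing in for the on-board physics-simulator evaluation described in the abstract.

```python
# Minimal island-model evolutionary sketch (illustrative only; not the Xpuck code).
# Each "island" evolves its own population and periodically exchanges individuals
# with a neighbour, mimicking migration between robots.
import random

GENOME_LEN = 8          # assumed controller parameter count (illustrative)
N_ISLANDS = 4           # stand-in for a subset of the 16 Xpucks
POP_SIZE = 20
MIGRATE_EVERY = 10      # generations between migrations (assumption)

def fitness(genome):
    # Placeholder fitness: on the real platform this would be the score of a
    # swarm controller evaluated in the on-board physics simulator.
    return -sum((g - 0.5) ** 2 for g in genome)

def mutate(genome, sigma=0.1):
    return [g + random.gauss(0, sigma) for g in genome]

islands = [[[random.random() for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
           for _ in range(N_ISLANDS)]

for gen in range(100):
    for pop in islands:
        pop.sort(key=fitness, reverse=True)       # rank by fitness
        elite = pop[: POP_SIZE // 2]              # truncation selection
        pop[:] = elite + [mutate(random.choice(elite))
                          for _ in range(POP_SIZE - len(elite))]
    if gen % MIGRATE_EVERY == 0:
        # Ring migration: each island injects its neighbour's best individual
        # in place of one of its own offspring.
        bests = [max(pop, key=fitness) for pop in islands]
        for i, pop in enumerate(islands):
            pop[-1] = list(bests[(i - 1) % N_ISLANDS])

print("best fitness:", max(fitness(max(pop, key=fitness)) for pop in islands))
```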

    Spartan Daily, February 2, 1938

    Volume 26, Issue 76

    Linear Regression and Unsupervised Learning For Tracking and Embodied Robot Control.

    Computer vision problems, such as tracking and robot navigation, tend to be solved using models of the objects of interest to the problem. These models are often either hard-coded or learned in a supervised manner. In either case, an engineer is required to identify the visual information that is important to the task, which is both time-consuming and problematic. Issues with these engineered systems relate to the ungrounded nature of the knowledge imparted by the engineer, where the systems have no meaning attached to the representations. This leads to systems that are brittle and prone to failure when expected to act in environments not envisaged by the engineer. The work presented in this thesis removes the need for hard-coded or engineered models of either visual information representations or behaviour. This is achieved by developing novel approaches for learning from example, in both input (percept) and output (action) spaces. This approach leads to the development of novel feature tracking algorithms and methods for robot control.

    Applying this approach to feature tracking, unsupervised learning is employed, in real time, to build appearance models of the target that represent the input space structure, and this structure is exploited to partition banks of computationally efficient, linear regression based target displacement estimators. This thesis presents the first application of regression-based methods to the problem of simultaneously modelling and tracking a target object. The computationally efficient Linear Predictor (LP) tracker is investigated, along with methods for combining and weighting flocks of LPs. The tracking algorithms developed operate with accuracy comparable to other state-of-the-art online approaches and with a significant gain in computational efficiency. This is achieved as a result of two specific contributions. First, novel online approaches for the unsupervised learning of modes of target appearance that identify aspects of the target are introduced. Second, a general tracking framework is developed within which the identified aspects of the target are adaptively associated with subsets of a bank of LP trackers. This results in the partitioning of LPs and the online creation of aspect-specific LP flocks that facilitate tracking through significant appearance changes.

    Applying the approach to the percept-action domain, unsupervised learning is employed to discover the structure of the action space, and this structure is used in the formation of meaningful perceptual categories and to facilitate the use of localised input-output (percept-action) mappings. This approach provides a realisation of an embodied and embedded agent that organises its perceptual space, and hence its cognitive process, based on interactions with its environment. Central to the proposed approach is the technique of clustering an input-output exemplar set, based on output similarity, and using the resultant input exemplar groupings to characterise a perceptual category. All input exemplars that are coupled to a certain class of outputs form a category: the category of a given affordance, action or function. In this sense the formed perceptual categories have meaning and are grounded in the embodiment of the agent. The approach is shown to identify the relative importance of perceptual features and is able to solve percept-action tasks, defined only by demonstration, in previously unseen situations.

    Within this percept-action learning framework, two alternative approaches are developed. The first employs hierarchical output-space clustering of point-to-point mappings to achieve search efficiency and input and output space generalisation, as well as a mechanism for identifying the important variance and invariance in the input space. The exemplar hierarchy provides, in a single structure, a mechanism for classifying previously unseen inputs and generating appropriate outputs. The second approach integrates the regression mappings used in the feature tracking domain with the action space clustering and imitation learning techniques developed in the percept-action domain. These components are utilised within a novel percept-action data mining methodology that is able to discover the visual entities that are important to a specific problem and to map from these entities onto the action space. Applied to the robot control task, this approach allows for real-time generation of continuous action signals, without the use of any supervision or definition of representations or rules of behaviour.
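
The Linear Predictor (LP) tracker mentioned above learns a linear regression from intensity differences at a set of support pixels to a 2D displacement of the target. The sketch below illustrates only that core idea; the synthetic image, support-pixel offsets, and training ranges are illustrative assumptions and not taken from the thesis, which additionally clusters appearance modes and weights flocks of such predictors.

```python
# Minimal Linear Predictor (LP) sketch: learn a linear map from intensity
# differences at support pixels to a 2D target displacement (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic image with smooth structure so intensity differences carry
# displacement information (stand-in for a real video frame).
xs, ys = np.meshgrid(np.arange(200), np.arange(200))
image = np.sin(xs / 7.0) + np.cos(ys / 5.0) + 0.05 * rng.random((200, 200))

def sample(img, centre, offsets):
    # Nearest-pixel sampling of img at centre + offsets.
    pts = np.round(centre + offsets).astype(int)
    return img[pts[:, 1], pts[:, 0]].astype(float)

centre = np.array([100.0, 100.0])                 # assumed target location
offsets = rng.integers(-10, 11, size=(50, 2))     # support-pixel offsets
ref = sample(image, centre, offsets)              # reference template

# Training: apply random displacements d, record the intensity differences they
# induce, and solve least squares for the regression matrix P so that d ≈ dI @ P.
D, DI = [], []
for _ in range(500):
    d = rng.uniform(-5, 5, size=2)
    DI.append(sample(image, centre + d, offsets) - ref)
    D.append(d)
P, *_ = np.linalg.lstsq(np.asarray(DI), np.asarray(D), rcond=None)

# Tracking step: estimate an unknown displacement from intensity differences.
true_d = np.array([3.0, -2.0])
est_d = (sample(image, centre + true_d, offsets) - ref) @ P
print("true:", true_d, "estimated:", est_d)
```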

    Focal Spot, Winter 2007/2008


    Bayesian cosmic density field inference from redshift space dark matter maps

    We present a self-consistent Bayesian formalism to sample the primordial density fields compatible with a set of dark matter density tracers after cosmic evolution observed in redshift space. Previous works on density reconstruction either did not self-consistently consider redshift-space distortions or included an additional iterative distortion-correction step. We present here the analytic solution of coherent flows within a Hamiltonian Monte Carlo posterior sampling of the primordial density field. We test our method within the Zel'dovich approximation, also presenting an analytic solution including tidal fields and spherical collapse on small scales using augmented Lagrangian perturbation theory. Our resulting reconstructed fields are isotropic and their power spectra are unbiased compared to the true one defined by our mock observations. Novel algorithmic implementations are introduced regarding the mass-assignment kernels when defining the dark matter density field and the optimization of the time step in the Hamiltonian equations of motion. Our algorithm, dubbed barcode, promises to be especially well suited for analysis of the dark matter cosmic web down to scales of a few megaparsecs. This large-scale structure is implied by the observed spatial distribution of galaxy clusters (such as obtained from X-ray, SZ or weak-lensing surveys) as well as that of the intergalactic medium sampled by the Lyman-alpha forest or perhaps even by deep hydrogen intensity mapping. In these cases, virialized motions are negligible, and the tracers cannot be modeled as point-like objects. It could be used in all of these contexts as a baryon acoustic oscillation reconstruction algorithm. Comment: 34 pages, 25 figures, 1 table. Submitted to MNRAS. Accompanying code at https://github.com/egpbos/barcod
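
For readers unfamiliar with the sampling machinery, the sketch below shows a generic Hamiltonian Monte Carlo update (leapfrog integration plus a Metropolis acceptance on the total energy) applied to a toy Gaussian target. It is illustrative only: barcode's actual target is the posterior over the primordial density field, with a structure-formation model and redshift-space distortions in the likelihood, and the step size and trajectory length here are arbitrary assumptions.

```python
# Minimal Hamiltonian Monte Carlo sketch on a toy Gaussian target (illustrative).
import numpy as np

rng = np.random.default_rng(1)

def potential(q):                # U(q) = -log posterior; toy standard Gaussian here
    return 0.5 * np.dot(q, q)

def grad_potential(q):
    return q

def hmc_step(q, step=0.1, n_leapfrog=20):
    p = rng.standard_normal(q.shape)                 # resample momenta
    q_new, p_new = q.copy(), p.copy()
    p_new -= 0.5 * step * grad_potential(q_new)      # half kick
    for _ in range(n_leapfrog - 1):
        q_new += step * p_new                        # drift
        p_new -= step * grad_potential(q_new)        # full kick
    q_new += step * p_new
    p_new -= 0.5 * step * grad_potential(q_new)      # final half kick
    # Metropolis acceptance on the change in total energy H = U + K.
    dH = (potential(q_new) + 0.5 * p_new @ p_new) - (potential(q) + 0.5 * p @ p)
    return q_new if np.log(rng.random()) < -dH else q

q, samples = np.zeros(3), []
for _ in range(2000):
    q = hmc_step(q)
    samples.append(q.copy())
print("per-dimension sample variance (should be near 1):", np.var(samples, axis=0))
```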

    PonderV2: Pave the Way for 3D Foundation Model with A Universal Pre-training Paradigm

    In contrast to the numerous foundation models in NLP and 2D computer vision, learning a robust and highly generalizable 3D foundation model poses considerably greater challenges. This is primarily due to the inherent data variability and the diversity of downstream tasks. In this paper, we introduce a comprehensive 3D pre-training framework designed to facilitate the acquisition of efficient 3D representations, thereby establishing a pathway to 3D foundation models. Motivated by the fact that informative 3D features should be able to encode rich geometry and appearance cues that can be utilized to render realistic images, we propose a novel universal paradigm to learn point cloud representations by differentiable neural rendering, serving as a bridge between the 3D and 2D worlds. We train a point cloud encoder within a devised volumetric neural renderer by comparing the rendered images with the real images. Notably, our approach demonstrates the seamless integration of the learned 3D encoder into diverse downstream tasks. These tasks encompass not only high-level challenges such as 3D detection and segmentation but also low-level objectives like 3D reconstruction and image synthesis, spanning both indoor and outdoor scenarios. Moreover, we illustrate the capability of pre-training a 2D backbone using the proposed universal methodology, surpassing conventional pre-training methods by a large margin. For the first time, PonderV2 achieves state-of-the-art performance on 11 indoor and outdoor benchmarks. The consistent improvements in various settings demonstrate the effectiveness of the proposed method. Code and models will be made available at https://github.com/OpenGVLab/PonderV2. Comment: arXiv admin note: text overlap with arXiv:2301.0015
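
The pre-training recipe described here (encode a point cloud into a feature volume, render it with a differentiable volumetric renderer, and supervise the rendering against real images) can be caricatured in a few dozen lines. The PyTorch sketch below reproduces only that training-loop shape under heavy simplifying assumptions; the tiny encoder, crude front-to-back alpha compositing, grid size, and synthetic data are stand-ins, not PonderV2's architecture.

```python
# Conceptual sketch of rendering-based point-cloud pre-training (not PonderV2's code).
import torch
import torch.nn as nn

class ToyEncoder(nn.Module):
    """Stand-in point-cloud encoder: scatters per-point MLP features into a voxel grid."""
    def __init__(self, grid=8, feat=16):
        super().__init__()
        self.grid = grid
        self.mlp = nn.Sequential(nn.Linear(6, feat), nn.ReLU(), nn.Linear(feat, feat))
    def forward(self, pts):                        # pts: (N, 6) = xyz in [0,1] + rgb
        f = self.mlp(pts)                          # per-point features (N, F)
        idx = (pts[:, :3] * (self.grid - 1)).long()
        lin = (idx[:, 0] * self.grid + idx[:, 1]) * self.grid + idx[:, 2]
        flat = torch.zeros(self.grid ** 3, f.shape[1])
        flat = flat.scatter_add(0, lin.unsqueeze(1).expand(-1, f.shape[1]), f)
        return flat.view(self.grid, self.grid, self.grid, -1)

class ToyRenderer(nn.Module):
    """Maps voxel features to colour + density and composites crudely along one axis."""
    def __init__(self, feat=16):
        super().__init__()
        self.head = nn.Linear(feat, 4)             # per-voxel (r, g, b, sigma)
    def forward(self, vol):                        # vol: (G, G, G, F)
        out = self.head(vol)
        rgb = torch.sigmoid(out[..., :3])
        alpha = 1.0 - torch.exp(-torch.relu(out[..., 3:]))
        # Front-to-back alpha compositing along the last spatial axis.
        trans = torch.cumprod(
            torch.cat([torch.ones_like(alpha[..., :1, :]), 1.0 - alpha[..., :-1, :]], dim=-2),
            dim=-2)
        return (trans * alpha * rgb).sum(dim=-2)   # rendered image (G, G, 3)

encoder, renderer = ToyEncoder(), ToyRenderer()
opt = torch.optim.Adam(list(encoder.parameters()) + list(renderer.parameters()), lr=1e-3)

points = torch.rand(1024, 6)                       # synthetic coloured point cloud
target = torch.rand(8, 8, 3)                       # stand-in for a posed real image
for step in range(200):
    pred = renderer(encoder(points))
    loss = nn.functional.mse_loss(pred, target)    # photometric pre-training loss
    opt.zero_grad()
    loss.backward()
    opt.step()
print("final photometric loss:", loss.item())
```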

    Facilitating Understanding of Movements in Dynamic Visualizations: An Embodied Perspective

    Learners studying mechanical or technical processes via dynamic visualizations often fail to build an accurate mental representation of the system's movements. Based on embodied theories of cognition, which assume that action, perception, and cognition are closely intertwined, this paper proposes that the learning effectiveness of dynamic visualizations could be enhanced by grounding the movements of the presentation in people's own bodily experiences during learning. We discuss recent research on embodied cognition and provide specific strategies for how the body can be used to ground movements during the learning process: (1) making or observing gestures, (2) manipulating and interacting with objects, (3) using body metaphors, and (4) using eye movements as retrieval cues. Implications for the design of dynamic visualizations as well as directions for future research are presented.

    Factors Affecting the Performance of Photo-voltaic Solar Energy Storage

    One of the most important factors in a nation's development is energy availability. All aspects of its economy depend directly on its energy resources. Oil is currently one of the most sought-after energy resources, and solar energy is one of the most important renewable sources of energy available to us. With oil deposits depleting and global warming ongoing, there is a growing emphasis on using renewable and clean energy. This has led to extensive research on solar cells and how they can best be used to obtain maximum output. Energy storage is another widely studied aspect, as stored energy can be used as and when required. This study examines the factors that affect the performance of solar energy storage. It is conducted by identifying and analyzing the different factors that influence solar energy storage. The goal of this research is to find the factors that affect energy storage, identify which factors have the greatest effect on its efficiency, and suggest innovative ways that could improve energy storage.