    Fusion of aerial images and sensor data from a ground vehicle for improved semantic mapping

    This work investigates the use of semantic information to link ground-level occupancy maps and aerial images. A ground-level semantic map, which shows open ground and indicates the probability of cells being occupied by walls of buildings, is obtained by a mobile robot equipped with an omnidirectional camera, GPS and a laser range finder. This semantic information is used for local and global segmentation of an aerial image. The result is a map in which the semantic information has been extended beyond the range of the robot's sensors and which predicts where the mobile robot can find buildings and potentially driveable ground.
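
    As a rough illustration of the idea in this abstract (not the paper's actual algorithm), the sketch below uses robot-labelled cells as seeds and extends their label across a co-registered aerial image by simple colour similarity; the function name, the tolerance, and the toy data are assumptions.

```python
# Hypothetical sketch of the fusion idea: robot-labelled cells act as seeds,
# and their label is extended across the aerial image by colour similarity.
import numpy as np

def extend_semantics(aerial_rgb: np.ndarray, seed_mask: np.ndarray,
                     colour_tolerance: float = 20.0) -> np.ndarray:
    """Label every aerial pixel whose colour is close to the mean colour of the
    robot-labelled seed region (a crude stand-in for local/global segmentation)."""
    seed_colour = aerial_rgb[seed_mask].astype(float).mean(axis=0)
    distance = np.linalg.norm(aerial_rgb.astype(float) - seed_colour, axis=-1)
    return distance < colour_tolerance

# Toy example: a 100x100 aerial image and a small patch of seed pixels
# corresponding to ground the robot has already classified as driveable.
rng = np.random.default_rng(0)
aerial = rng.integers(0, 255, size=(100, 100, 3))
seeds = np.zeros((100, 100), dtype=bool)
seeds[40:45, 40:45] = True
driveable_prediction = extend_semantics(aerial, seeds)
```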

    Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age

    Simultaneous Localization and Mapping (SLAM) consists in the concurrent construction of a model of the environment (the map) and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and as a tutorial for users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions during robotics conferences: Do robots need SLAM? and Is SLAM solved?
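
    For reference, the "de-facto standard formulation" mentioned above is usually written as maximum-a-posteriori estimation over a factor graph; the notation below is the generic textbook form under Gaussian noise assumptions, not a quotation from the paper.

```latex
% MAP formulation of SLAM over a factor graph (generic form):
%   X = robot poses and map variables, Z = {z_k} measurements,
%   h_k = measurement model of factor k, Omega_k = its information matrix.
\mathcal{X}^{\star}
  = \arg\max_{\mathcal{X}} \, p(\mathcal{X} \mid \mathcal{Z})
  = \arg\min_{\mathcal{X}} \sum_{k} \bigl\lVert h_k(\mathcal{X}_k) - z_k \bigr\rVert^{2}_{\Omega_k}
```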

    Phase Locked Loop Test Methodology

    Phase locked loops are incorporated into almost every large-scale mixed-signal and digital system on chip (SOC). Various types of PLL architectures exist, including fully analogue, fully digital, semi-digital, and software based. Currently the most commonly used PLL architecture for SOC environments and chipset applications is the Charge-Pump (CP) semi-digital type. This architecture is commonly used for clock synthesis applications, such as the supply of a high-frequency on-chip clock derived from a low-frequency board-level clock. In addition, CP-PLL architectures are now frequently used for demanding RF (Radio Frequency) synthesis and data synchronization applications. On-chip system blocks that rely on correct PLL operation may include third-party IP cores, ADCs, DACs and user-defined logic (UDL). Essentially, any on-chip function that requires a stable clock is reliant on correct PLL operation. As a direct consequence, it is essential that the PLL function is reliably verified both during the design and debug phase and through production testing. This chapter focuses on test approaches for embedded CP-PLLs used for clock generation in SOCs; however, the methods discussed generally apply to CP-PLLs used for other applications.
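
    As a small illustration of the clock-synthesis role described above, the sketch below computes the synthesized output frequency of an integer-N CP-PLL; the function name and the integer-N assumption are illustrative and not taken from the chapter.

```python
# Minimal sketch of the clock-synthesis relationship in a CP-PLL, assuming a
# plain integer-N architecture; names and numbers here are illustrative.
def synthesized_clock_hz(f_ref_hz: float, n_feedback: int, m_input_div: int = 1) -> float:
    """In lock, the VCO output satisfies f_out = f_ref * N / M, where N is the
    feedback divider and M the optional input (reference) divider."""
    return f_ref_hz * n_feedback / m_input_div

# Example: a 25 MHz board-level clock multiplied up to a 1 GHz on-chip clock.
print(synthesized_clock_hz(25e6, 40))  # 1000000000.0
```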

    Ultrasoft Amplitudes in Hot QCD

    By using the Boltzmann equation describing the relaxation of colour excitations in the QCD plasma, we obtain effective amplitudes for the ultrasoft colour fields carrying momenta of order g^2 T. These amplitudes are of the same order in g as the hard thermal loops (HTLs), which they generalize by including the effects of collisions among the hard particles. The ultrasoft amplitudes share many of the remarkable properties of the HTLs: they are gauge invariant, obey simple Ward identities, and, in the static limit, reduce to the usual Debye mass for the electric fields. However, unlike the HTLs, which correspond effectively to one-loop diagrams, the ultrasoft amplitudes resum an infinite number of diagrams of the bare perturbation theory. By solving the linearized Boltzmann equation, we obtain a formula for the colour conductivity which accounts for the contributions of the hard and soft modes beyond the leading logarithmic approximation. Comment: 38 pages, 10 figures, LaTeX. An extension of the previous results is included (in Sec. 4.2); also, some minor modifications here and there. Final version, to appear in Nucl. Phys.
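
    For orientation, the momentum scales and the static limit mentioned in the abstract can be summarized as follows; the Debye-mass expression is the standard hard-thermal-loop result quoted as background, not a formula extracted from the paper.

```latex
% Momentum scales in the weakly coupled quark-gluon plasma (g << 1):
%   hard:      p ~ T        (plasma constituents)
%   soft:      p ~ g T      (HTL collective excitations)
%   ultrasoft: p ~ g^2 T    (amplitudes derived in this paper)
% Static limit of the electric sector: the standard Debye mass with
% N_c colours and N_f quark flavours,
m_D^{2} \;=\; \frac{g^{2} T^{2}}{3}\left(N_c + \frac{N_f}{2}\right).
```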

    Influence of conformational fluctuations on enzymatic activity: modelling the functional motion of beta-secretase

    Considerable insight into the functional activity of proteins and enzymes can be obtained by studying the low-energy conformational distortions that the biopolymer can sustain. We carry out the characterization of these large-scale structural changes for a protein of considerable pharmaceutical interest, the human β-secretase. Starting from the crystallographic structure of the protein, we use the recently introduced beta-Gaussian model to identify, with negligible computational expenditure, the most significant distortions occurring in thermal equilibrium and the associated time scales. The application of this strategy allows one to gain considerable insight into the putative functional movements and, furthermore, helps to identify a handful of key regions in the protein which have an important mechanical influence on the enzymatic activity despite being spatially distant from the active site. The results obtained within the Gaussian model are validated through an extensive comparison against an all-atom Molecular Dynamics simulation. Comment: To be published in a special issue of J. Phys.: Cond. Mat. (Bedlewo Workshop)
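
    As background for readers unfamiliar with network models of proteins, the sketch below implements the plain Gaussian network model, a simpler relative of the beta-Gaussian model used in the paper; the 7 Å cutoff, the toy chain, and all names are illustrative assumptions.

```python
# Background sketch: the plain Gaussian network model (GNM), not the paper's
# beta-Gaussian model; cutoff and toy coordinates are illustrative.
import numpy as np

def gnm_slow_modes(ca_coords: np.ndarray, cutoff: float = 7.0, n_modes: int = 5):
    """Build the Kirchhoff (connectivity) matrix from C-alpha positions and return
    the lowest non-zero modes, which approximate the slow, large-scale
    conformational fluctuations of the structure."""
    dists = np.linalg.norm(ca_coords[:, None, :] - ca_coords[None, :, :], axis=-1)
    contacts = (dists < cutoff) & ~np.eye(len(ca_coords), dtype=bool)
    kirchhoff = -contacts.astype(float)
    np.fill_diagonal(kirchhoff, contacts.sum(axis=1))
    eigvals, eigvecs = np.linalg.eigh(kirchhoff)  # ascending eigenvalues
    # Skip the zero eigenvalue (overall translation); the next, smallest
    # eigenvalues correspond to the softest collective motions.
    return eigvals[1:1 + n_modes], eigvecs[:, 1:1 + n_modes]

# Toy "backbone": a random walk with ~3.8 Angstrom steps (roughly C-alpha
# spacing), which keeps the contact graph connected.
rng = np.random.default_rng(0)
steps = rng.normal(size=(100, 3))
coords = np.cumsum(3.8 * steps / np.linalg.norm(steps, axis=1, keepdims=True), axis=0)
slow_eigvals, slow_eigvecs = gnm_slow_modes(coords)
```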

    A cognitive ego-vision system for interactive assistance

    With increasing computational power and decreasing size, computers nowadays are already wearable and mobile, and they are becoming companions in people's everyday lives. Personal digital assistants and mobile phones equipped with adequate software attract a lot of public interest, although the assistance they provide is little more than a mobile database for appointments, addresses, to-do lists and photos. Compared to the assistance a human can provide, such systems can hardly be called real assistants. The motivation to construct more human-like assistance systems that develop a certain level of cognitive capabilities leads to the exploration of two central paradigms in this work. The first paradigm is termed cognitive vision systems. Such systems take human cognition as a design principle for their underlying concepts and develop learning and adaptation capabilities to be more flexible in their application. They are embodied, active, and situated. Second, the ego-vision paradigm is introduced as a very tight interaction scheme between a user and a computer system that especially facilitates close collaboration and assistance between the two. Ego-vision systems (EVS) take the user's (visual) perspective and integrate the human into the system's processing loop by means of a shared perception and augmented reality. EVSs adopt techniques of cognitive vision to identify objects, interpret actions, and understand the user's visual perception, and they articulate their knowledge and interpretations as augmentations of the user's own view. These two paradigms are studied as rather general concepts, but always with the goal of realizing more flexible assistance systems that closely collaborate with their users. This work provides three major contributions. First, a definition and explanation of ego-vision as a novel paradigm is given, and the benefits and challenges of this paradigm are discussed. Second, a configuration of different approaches that permit an ego-vision system to perceive its environment and its user is presented, in terms of object and action recognition, head gesture recognition, and mosaicing. These account for the specific challenges identified for ego-vision systems, whose perception capabilities are based on wearable sensors only. Finally, a visual active memory (VAM) is introduced as a flexible conceptual architecture for cognitive vision systems in general, and for assistance systems in particular. It adopts principles of human cognition to develop a representation for the information stored in this memory. So-called memory processes continuously analyze, modify, and extend the content of this VAM, and the functionality of the integrated system emerges from the coordinated interplay of these memory processes. An integrated assistance system applying the approaches and concepts outlined above is implemented on the basis of the visual active memory. The system architecture is discussed and some exemplary processing paths in this system are presented. The system assists users in object-manipulation tasks and has reached a maturity level that allows user studies to be conducted. Quantitative results for the different integrated memory processes are presented, together with an assessment of the interactive system based on these user studies.
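
    To make the visual active memory idea more concrete, here is a purely illustrative sketch (class and function names are hypothetical, not the thesis' API) of a shared store whose content is continuously analyzed, modified, and extended by registered memory processes.

```python
# Purely illustrative sketch of a visual active memory; names are hypothetical.
from typing import Callable, Dict, List

class VisualActiveMemory:
    def __init__(self) -> None:
        self.elements: List[Dict] = []       # percepts, hypotheses, context, ...
        self.processes: List[Callable] = []  # registered memory processes

    def register(self, process: Callable) -> None:
        self.processes.append(process)

    def step(self) -> None:
        # The integrated system's behaviour emerges from the coordinated
        # interplay of the registered processes acting on shared content.
        for process in self.processes:
            process(self)

def object_recognizer(vam: VisualActiveMemory) -> None:
    # Stand-in for a perception process that adds a new hypothesis.
    vam.elements.append({"type": "object", "label": "cup", "confidence": 0.8})

def forgetting(vam: VisualActiveMemory) -> None:
    # Stand-in for a maintenance process that removes unreliable content.
    vam.elements = [e for e in vam.elements if e.get("confidence", 1.0) > 0.5]

vam = VisualActiveMemory()
vam.register(object_recognizer)
vam.register(forgetting)
vam.step()
print(vam.elements)
```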

    STV-based Video Feature Processing for Action Recognition

    In comparison to still-image-based processes, video features can provide rich and intuitive information about dynamic events occurring over a period of time, such as human actions, crowd behaviours, and other subject pattern changes. Although substantial progress has been made on image processing in the last decade, with successful applications in face matching and object recognition, video-based event detection remains one of the most difficult challenges in computer vision research due to its complex continuous or discrete input signals, arbitrary dynamic feature definitions, and often ambiguous analytical methods. In this paper, a Spatio-Temporal Volume (STV) and region intersection (RI) based 3D shape-matching method is proposed to facilitate the definition and recognition of human actions recorded in videos. The distinctive characteristics and the performance gain of the devised approach stem from a coefficient factor-boosted 3D region intersection and matching mechanism developed in this research. This paper also reports an investigation into techniques for efficient STV data filtering to reduce the number of voxels (volumetric pixels) that need to be processed in each operational cycle of the implemented system. The encouraging features and the improvements in operational performance observed in the experiments are discussed at the end.
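
    As a minimal sketch of region-intersection matching between spatio-temporal volumes (a simplification under stated assumptions, not the paper's exact coefficient-boosted formulation), two actions voxelized into boolean (x, y, t) volumes can be compared by their overlap:

```python
# Minimal sketch of region-intersection matching between two actions voxelized
# as boolean spatio-temporal volumes; coefficient and toy data are assumptions.
import numpy as np

def stv_region_intersection_score(stv_a: np.ndarray, stv_b: np.ndarray,
                                  coefficient: float = 1.0) -> float:
    """Intersection-over-union of two boolean voxel volumes, scaled by a boosting
    coefficient; higher scores indicate more similar action shapes."""
    intersection = np.logical_and(stv_a, stv_b).sum()
    union = np.logical_or(stv_a, stv_b).sum()
    return float(coefficient * intersection / union) if union else 0.0

# Toy example: two 32x32x16 (height x width x frames) action volumes.
rng = np.random.default_rng(1)
action_a = rng.random((32, 32, 16)) > 0.7
action_b = rng.random((32, 32, 16)) > 0.7
print(stv_region_intersection_score(action_a, action_b))
```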