
    Serve or Skip: The Power of Rejection in Online Bottleneck Matching

    We consider the online matching problem, where n server-vertices lie in a metric space and n request-vertices that arrive over time each must immediately be permanently assigned to a server-vertex. We focus on the egalitarian bottleneck objective, where the goal is to minimize the maximum distance between any request and its server. It has been demonstrated that while there are effective algorithms for the utilitarian objective (minimizing total cost) in the resource augmentation setting where the offline adversary has half the resources, these are not effective for the egalitarian objective. Thus, we propose a new Serve-or-Skip bicriteria analysis model, where the online algorithm may reject or skip up to a specified number of requests, and propose two greedy algorithms: GRINN(t) and GRIN(t). We show that the Serve-or-Skip model of resource augmentation analysis can essentially simulate the doubled-server-capacity model, and then examine the performance of GRINN(t) and GRIN(t).
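    A minimal sketch of the general idea (greedy online matching with a skip budget) is given below. The greedy rule, the function names, and the threshold handling are illustrative assumptions for this listing; they are not the paper's GRINN(t)/GRIN(t) definitions.

```python
# Illustrative sketch only: assign each arriving request to its nearest free
# server, but skip up to `skip_budget` requests whose nearest free server is
# farther than a threshold t. Not the paper's GRINN(t)/GRIN(t) algorithms.
import math

def greedy_serve_or_skip(servers, requests, t, skip_budget):
    free = set(range(len(servers)))
    assignments, skipped, bottleneck = {}, [], 0.0

    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    for r_idx, r in enumerate(requests):
        s_idx = min(free, key=lambda s: dist(r, servers[s]))  # nearest free server
        d = dist(r, servers[s_idx])
        if d > t and len(skipped) < skip_budget:
            skipped.append(r_idx)            # reject ("skip") a costly request
            continue
        free.remove(s_idx)                   # assignment is permanent
        assignments[r_idx] = s_idx
        bottleneck = max(bottleneck, d)      # egalitarian objective: max distance
    return assignments, skipped, bottleneck

# Example usage with three servers and three requests in the plane
servers = [(0, 0), (5, 0), (10, 0)]
requests = [(1, 1), (9, 1), (50, 50)]
print(greedy_serve_or_skip(servers, requests, t=5.0, skip_budget=1))
```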

    An in-band full-duplex radio receiver with a passive vector modulator downmixer for self-interference cancellation

    In-band full-duplex (FD) wireless, i.e., simultaneous transmission and reception at the same frequency, introduces strong self-interference (SI) that masks the signal to be received. This paper proposes a receiver in which a copy of the transmit signal is fed through a switched-resistor vector modulator (VM) that provides simultaneous downmixing, phase shift, and amplitude scaling, and subtracts it in the analog baseband for up to 27 dB of SI cancellation. Cancelling before active baseband amplification avoids self-blocking, and highly linear mixers keep SI-induced distortion low, for a receiver SI-to-noise-and-distortion ratio (SINDR) of up to 71.5 dB in 16.25 MHz bandwidth. When combined with a two-port antenna with only 20 dB isolation, the low RX distortion theoretically allows sufficient digital cancellation for a link budget of over 90 dB, enough for short-range, low-power FD links.
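    As a back-of-the-envelope illustration, the suppression stages named in the abstract add up in dB. The digital-cancellation figure below is an assumed placeholder, not a value reported by the paper.

```python
# SI budget sketch: antenna isolation + analog (VM) cancellation + digital
# cancellation, all in dB. The digital value is an assumed placeholder.
antenna_isolation_db = 20.0     # two-port antenna isolation (from the abstract)
analog_cancellation_db = 27.0   # switched-resistor VM canceller (from the abstract)
digital_cancellation_db = 45.0  # assumed; whatever the digital stage must supply

total_si_suppression_db = (antenna_isolation_db
                           + analog_cancellation_db
                           + digital_cancellation_db)
link_budget_target_db = 90.0

print(f"Total SI suppression: {total_si_suppression_db:.1f} dB")
print("Meets >90 dB target:", total_si_suppression_db > link_budget_target_db)
```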

    Reconfigurable nanoelectronics using graphene based spintronic logic gates

    This paper presents a novel design concept for spintronic nanoelectronics that emphasizes a seamless integration of spin-based memory and logic circuits. The building blocks are magneto-logic gates based on a hybrid graphene/ferromagnet material system. We use network search engines as a technology demonstration vehicle and present a spin-based circuit design with smaller area, faster speed, and lower energy consumption than state-of-the-art CMOS counterparts. The design also applies to data compression, coding, and image recognition. In the proposed scheme, over 100 spin-based logic operations are carried out before any spin-charge conversion is needed. Consequently, the supporting CMOS electronics consume little power. The spintronic-CMOS integrated system can be implemented on a single 3-D chip. These nonvolatile logic circuits hold potential for a paradigm shift in computing applications. Comment: 14 pages (single column), 6 figures.

    Leveraging Deep Visual Descriptors for Hierarchical Efficient Localization

    Many robotics applications require precise pose estimates despite operating in large and changing environments. This can be addressed by visual localization, using a pre-computed 3D model of the surroundings. Pose estimation then amounts to finding correspondences between 2D keypoints in a query image and 3D points in the model using local descriptors. However, computational power is often limited on robotic platforms, making this task challenging in large-scale environments. Binary feature descriptors significantly speed up this 2D-3D matching and have become popular in the robotics community, but they also strongly impair robustness to perceptual aliasing and to changes in viewpoint, illumination, and scene structure. In this work, we propose to leverage recent advances in deep learning to perform efficient hierarchical localization. We first localize at the map level using learned image-wide global descriptors, and subsequently estimate a precise pose from 2D-3D matches computed in the candidate places only. This restricts the local search and thus makes it possible to efficiently exploit powerful non-binary descriptors usually dismissed on resource-constrained devices. Our approach results in state-of-the-art localization performance while running in real time on a popular mobile platform, enabling new prospects for robotics research. Comment: CoRL 2018 camera-ready (fixes typos and updates citations).
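    A minimal sketch of the two-stage idea (coarse place retrieval, then pose from 2D-3D matches restricted to the retrieved places) follows. Descriptor extraction and local matching are abstracted away; the array layout and parameter choices are assumptions for illustration, not the authors' implementation.

```python
# Hierarchical localization sketch: (1) rank map places by global-descriptor
# similarity, (2) solve PnP + RANSAC on 2D-3D matches from those places only.
import numpy as np
import cv2

def retrieve_candidates(query_desc, place_descs, k=5):
    """Top-k places by dot-product similarity of L2-normalized global descriptors."""
    sims = place_descs @ query_desc          # (num_places,) similarity scores
    return np.argsort(sims)[::-1][:k]        # indices of the k best places

def estimate_pose(pts2d, pts3d, K):
    """6-DoF pose from 2D-3D correspondences via OpenCV's PnP + RANSAC."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts3d.astype(np.float64), pts2d.astype(np.float64), K, None)
    return (rvec, tvec, inliers) if ok else None
```

    In this sketch the expensive local matching is only run against the candidate places returned by `retrieve_candidates`, which is what makes non-binary local descriptors affordable on a constrained platform.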

    Structured Light-Based 3D Reconstruction System for Plants.

    Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends of recent years. Many systems have been built to model different real-world subjects, but there is a lack of a completely robust system for plants. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). It demonstrates the ability to produce 3D models of whole plants from multiple pairs of stereo images taken at different viewing angles, without the need to destructively cut away any parts of a plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size, and internode distances, is also demonstrated. Experimental results show that, for plants having a range of leaf sizes and a distance between leaves appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection and less than a 13 mm error for plant size, leaf size, and internode distance.
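    As a toy illustration of reading simple phenotyping features off a registered point cloud, the sketch below measures height as vertical extent and counts leaf-like clusters with DBSCAN. Both choices are illustrative assumptions, not the measurement pipeline used in the paper.

```python
# Toy phenotyping sketch on an N x 3 point cloud (z up, millimetres).
import numpy as np
from sklearn.cluster import DBSCAN

def plant_height_mm(points):
    """Plant height as the vertical extent of the registered cloud."""
    return float(points[:, 2].max() - points[:, 2].min())

def count_leaf_like_clusters(leaf_points, eps_mm=10.0, min_points=50):
    """Count dense clusters among points already segmented as leaf surface."""
    labels = DBSCAN(eps=eps_mm, min_samples=min_points).fit_predict(leaf_points)
    return len(set(labels)) - (1 if -1 in labels else 0)  # drop the noise label
```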

    A Parallel Monte Carlo Code for Simulating Collisional N-body Systems

    We present a new parallel code for computing the dynamical evolution of collisional N-body systems with up to N ~ 10^7 particles. Our code is based on the Hénon Monte Carlo method for solving the Fokker-Planck equation, and makes assumptions of spherical symmetry and dynamical equilibrium. The principal algorithmic developments involve optimizing data structures and introducing a parallel random number generation scheme, as well as a parallel sorting algorithm, required to find nearest neighbors for interactions and to compute the gravitational potential. The new algorithms we introduce, along with our choice of decomposition scheme, minimize communication costs and ensure optimal distribution of data and workload among the processing units. The implementation uses the Message Passing Interface (MPI) library for communication, which makes it portable to many different supercomputing architectures. We validate the code by calculating the evolution of clusters with initial Plummer distribution functions up to core collapse, with the number of stars, N, spanning three orders of magnitude, from 10^5 to 10^7. We find that our results are in good agreement with self-similar core-collapse solutions, and the core-collapse times generally agree with expectations from the literature. We also observe good total energy conservation, to within 0.04% throughout all simulations. We analyze the performance of the code and demonstrate near-linear scaling of the runtime with the number of processors, up to 64 processors for N = 10^5, 128 for N = 10^6, and 256 for N = 10^7. Beyond these limits the runtime saturates as more processors are added, a characteristic of the parallel sorting algorithm. The resulting maximum speedups we achieve are approximately 60x, 100x, and 220x, respectively. Comment: 53 pages, 13 figures, accepted for publication in ApJ Supplement.
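    For context, speedup is the serial runtime divided by the parallel runtime, and parallel efficiency is speedup divided by the processor count. The short sketch below applies that definition to the speedups quoted in the abstract; the pairing of speedups with processor counts follows the abstract's ordering.

```python
# Parallel efficiency = speedup / number of processors (fraction of ideal
# linear scaling). Speedup/processor pairs are taken from the abstract.
def parallel_efficiency(speedup, n_procs):
    return speedup / n_procs

for s, p in [(60, 64), (100, 128), (220, 256)]:
    print(f"{p} procs: {s}x speedup, efficiency {parallel_efficiency(s, p):.2f}")
```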

    Topological Data Analysis on Simple English Wikipedia Articles

    Single-parameter persistent homology, a key tool in topological data analysis, has been widely applied to data problems along with statistical techniques that quantify the significance of the results. In contrast, statistical techniques for two-parameter persistence, while highly desirable for real-world applications, have scarcely been considered. We present three statistical approaches for comparing geometric data using two-parameter persistent homology; these approaches rely on the Hilbert function, matching distance, and barcodes obtained from two-parameter persistence modules computed from the point-cloud data. Our statistical methods are broadly applicable for analysis of geometric data indexed by a real-valued parameter. We apply these approaches to analyze high-dimensional point-cloud data obtained from Simple English Wikipedia articles. In particular, we show how our methods can be utilized to distinguish certain subsets of the Wikipedia data and to compare with random data. These results yield insights into the construction of null distributions and stability of our methods with respect to noisy data. Comment: 17 pages, 13 figures.
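    For readers unfamiliar with the single-parameter baseline that the abstract contrasts against, here is a minimal sketch using the `ripser` package: two point clouds are compared by a crude summary of their H1 persistence diagrams. The summary statistic is an illustrative choice; the paper's two-parameter methods (Hilbert function, matching distance, barcodes of two-parameter modules) require dedicated tools and are not sketched here.

```python
# Single-parameter persistence baseline (illustrative, not the paper's method):
# total H1 persistence of a Vietoris-Rips filtration, computed with ripser.
import numpy as np
from ripser import ripser

def total_persistence_h1(points):
    """Sum of (death - birth) over finite H1 bars of the Rips filtration."""
    dgm_h1 = ripser(points, maxdim=1)['dgms'][1]
    finite = dgm_h1[np.isfinite(dgm_h1[:, 1])]
    return float((finite[:, 1] - finite[:, 0]).sum())

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 200)
circle = np.column_stack([np.cos(theta), np.sin(theta)])  # has a prominent loop
blob = rng.normal(size=(200, 2))                          # no prominent loop
print(total_persistence_h1(circle), ">", total_persistence_h1(blob))
```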

    On the Design and Analysis of Multiple View Descriptors

    We propose an extension of popular descriptors based on gradient orientation histograms (HOG, computed in a single image) to multiple views. It hinges on interpreting HOG as a conditional density in the space of sampled images, where the effects of nuisance factors such as viewpoint and illumination are marginalized. However, such marginalization is performed with respect to a very coarse approximation of the underlying distribution. Our extension leverages the fact that multiple views of the same scene allow separating intrinsic from nuisance variability, and thus afford better marginalization of the latter. The result is a descriptor that has the same complexity as single-view HOG and can be compared in the same manner, but exploits multiple views to better trade off insensitivity to nuisance variability against specificity to intrinsic variability. We also introduce a novel multi-view wide-baseline matching dataset, consisting of a mixture of real and synthetic objects with ground-truthed camera motion and dense three-dimensional geometry.
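    As a crude illustration of pooling a single-view descriptor across views, the sketch below computes HOG (scikit-image) per view and averages. Plain averaging is a naive stand-in for the marginalization over nuisance variability discussed in the abstract, not the descriptor proposed by the authors.

```python
# Average single-view HOG descriptors over several aligned views and
# L2-normalize the result so it can be compared like an ordinary HOG vector.
import numpy as np
from skimage.feature import hog

def multiview_hog(views):
    """views: iterable of 2D grayscale arrays of identical shape."""
    descs = [hog(v, orientations=9, pixels_per_cell=(8, 8),
                 cells_per_block=(2, 2)) for v in views]
    d = np.mean(descs, axis=0)                  # naive pooling across views
    return d / (np.linalg.norm(d) + 1e-12)      # L2-normalize for comparison
```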

    New developments in prosthetic arm systems

    Absence of an upper limb leads to severe impairments in everyday life, which can further affect a person's social and mental state. For these reasons, early developments in cosmetic and body-driven prostheses date back several centuries, and they have been evolving ever since. Following the end of the Second World War, rapid developments in technology resulted in powered myoelectric hand prostheses. In the years that followed, these devices became common on the market, though they still suffered high user abandonment rates. The reasons for rejection were threefold: insufficient functionality of the hardware, fragile design, and cumbersome control. In the last decade, both academia and industry have made major improvements to the technical features of upper limb prosthetics and to the methods for interfacing and controlling them. Advanced robotic hands are offered by several vendors and research groups, with a variety of active and passive wrist options that can be articulated across several degrees of freedom. Elbow joint designs now include active solutions with different weight and power options. Control features are becoming progressively more sophisticated, offering options for multiple sensor integration and multi-joint articulation. The latest socket designs can accommodate implantable and multiple surface electromyography sensors in both traditional and osseointegration-based systems. Novel surgical techniques in combination with modern, sophisticated hardware are enabling the restoration of dexterous upper limb functionality. This article reviews the latest state of the upper limb prosthetic market, offering insights into the accompanying technologies and techniques. We also examine the capabilities and features of some of academia's flagship solutions and methods.