
    Measured pedestrian movement and bodyworn terminal effects for the indoor channel at 5.2 GHz

    [Summary]: Human body effects such as antenna-body interaction and scattering caused by pedestrian movement are important indoor radio propagation phenomena at microwave frequencies. This paper reports measurements and statistical analysis of the indoor narrowband propagation channel at 5.2 GHz for two scenarios: a fixed line-of-sight (LOS) link perturbed by pedestrian movement, and a mobile link incorporating a moving bodyworn terminal. Two indoor environments were considered for both types of measurement: an 18 m long corridor and a 42 m² office. The fixed-link results show that the statistical distribution of the received envelope depended on the number of pedestrians present. However, fading was slower than expected, with an average fade duration of more than 100 ms for a Doppler frequency of 8.67 Hz. For the bodyworn terminal, mean received power depended on whether or not the user's body obstructed the LOS. For example, in the corridor the average non-line-of-sight (NLOS) path loss was 5.4 dB greater than with LOS.
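The "slower than expected" finding can be put in context with the classical Rayleigh-fading average fade duration (AFD) formula. A minimal sketch, assuming Rayleigh statistics and a 0 dB threshold (both our illustrative choices, not the paper's):

```python
import math

def rayleigh_afd(f_d, rho):
    """Average fade duration (s) for classical Rayleigh fading at
    normalized threshold rho = R / R_rms and maximum Doppler f_d (Hz)."""
    return (math.exp(rho ** 2) - 1.0) / (rho * f_d * math.sqrt(2.0 * math.pi))

# At the reported Doppler frequency of 8.67 Hz and a 0 dB threshold,
# the classical model predicts an AFD of roughly 79 ms -- shorter than
# the >100 ms measured, i.e. the observed fading is indeed slower.
afd = rayleigh_afd(f_d=8.67, rho=1.0)
print(f"Rayleigh AFD at 0 dB threshold: {afd * 1e3:.1f} ms")
```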

    SeGAN: Segmenting and Generating the Invisible

    Objects often occlude each other in scenes; inferring their appearance beyond their visible parts plays an important role in scene understanding, depth estimation, and object interaction and manipulation. In this paper, we study the challenging problem of completing the appearance of occluded objects. Doing so requires knowing which pixels to paint (segmenting the invisible parts of objects) and what color to paint them (generating the invisible parts). Our proposed novel solution, SeGAN, jointly optimizes for both segmentation and generation of the invisible parts of objects. Our experimental results show that: (a) SeGAN can learn to generate the appearance of the occluded parts of objects; (b) SeGAN outperforms state-of-the-art segmentation baselines for the invisible parts of objects; (c) trained on synthetic photorealistic images, SeGAN can reliably segment natural images; (d) by reasoning about occluder-occludee relations, our method can infer depth layering.
    Comment: Accepted to CVPR18 as spotlight
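A minimal numpy sketch of the kind of joint objective described, pairing a segmentation term with a generation term over the invisible region. The loss forms (pixel-wise cross-entropy plus masked L1), shapes, and weight `lam` are illustrative assumptions; the actual SeGAN model is adversarially trained:

```python
import numpy as np

rng = np.random.default_rng(0)

H, W = 8, 8
mask_logits = rng.normal(size=(H, W))        # predicted invisible-region mask (logits)
gen_rgb = rng.uniform(size=(H, W, 3))        # generated appearance of the full object
mask_gt = (rng.uniform(size=(H, W)) > 0.5).astype(float)
rgb_gt = rng.uniform(size=(H, W, 3))

def joint_loss(mask_logits, gen_rgb, mask_gt, rgb_gt, lam=10.0):
    """Segmentation term (pixel-wise BCE on the invisible mask) plus a
    generation term (L1 on appearance, restricted to the ground-truth
    invisible region), combined into one jointly optimized objective."""
    p = 1.0 / (1.0 + np.exp(-mask_logits))   # sigmoid
    bce = -np.mean(mask_gt * np.log(p + 1e-8)
                   + (1 - mask_gt) * np.log(1 - p + 1e-8))
    l1 = (np.sum(np.abs(gen_rgb - rgb_gt) * mask_gt[..., None])
          / (mask_gt.sum() * 3 + 1e-8))
    return bce + lam * l1

print(joint_loss(mask_logits, gen_rgb, mask_gt, rgb_gt))
```

Coupling the two terms this way is what lets the mask prediction ("which pixels to paint") and the appearance prediction ("what color to paint them") inform one another during training.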

    Reconstructive Sparse Code Transfer for Contour Detection and Semantic Labeling

    We frame the task of predicting a semantic labeling as a sparse reconstruction procedure that applies a target-specific learned transfer function to a generic deep sparse code representation of an image. This strategy partitions training into two distinct stages. First, in an unsupervised manner, we learn a set of generic dictionaries optimized for sparse coding of image patches. We train a multilayer representation via recursive sparse dictionary learning on pooled codes output by earlier layers. Second, we encode all training images with the generic dictionaries and learn a transfer function that optimizes reconstruction of patches extracted from annotated ground-truth given the sparse codes of their corresponding image patches. At test time, we encode a novel image using the generic dictionaries and then reconstruct using the transfer function. The output reconstruction is a semantic labeling of the test image. Applying this strategy to the task of contour detection, we demonstrate performance competitive with state-of-the-art systems. Unlike almost all prior work, our approach obviates the need for any form of hand-designed features or filters. To illustrate general applicability, we also show initial results on semantic part labeling of human faces. The effectiveness of our approach opens new avenues for research on deep sparse representations. Our classifiers utilize this representation in a novel manner. Rather than acting on nodes in the deepest layer, they attach to nodes along a slice through multiple layers of the network in order to make predictions about local patches. Our flexible combination of a generatively learned sparse representation with discriminatively trained transfer classifiers extends the notion of sparse reconstruction to encompass arbitrary semantic labeling tasks.
    Comment: to appear in Asian Conference on Computer Vision (ACCV), 201
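The two-stage pipeline (generic dictionary, then a learned transfer function from codes to labels) can be sketched in a few lines of numpy. Everything here is a stand-in: a random unit-norm dictionary replaces the learned one, soft-thresholded correlations replace proper sparse coding, a least-squares map replaces the paper's transfer classifiers, and the toy labels are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage 1 stand-in: a generic dictionary of unit-norm atoms.
n_atoms, patch_dim = 64, 25                      # 5x5 image patches
D = rng.normal(size=(n_atoms, patch_dim))
D /= np.linalg.norm(D, axis=1, keepdims=True)

def encode(X, D, thresh=0.5):
    """Crude sparse coding: correlate with atoms, then soft-threshold."""
    C = X @ D.T
    return np.sign(C) * np.maximum(np.abs(C) - thresh, 0.0)

# Toy data: image patches and hypothetical per-pixel label patches.
X = rng.normal(size=(500, patch_dim))
Y = (X > 0).astype(float)

# Stage 2: learn a linear transfer function from generic codes to labels.
codes = encode(X, D)
W, *_ = np.linalg.lstsq(codes, Y, rcond=None)

# Test time: encode a novel patch with the same generic dictionary, then
# reconstruct its semantic labeling through the transfer function.
x_new = rng.normal(size=(1, patch_dim))
labeling = encode(x_new, D) @ W
print(labeling.shape)                            # one 25-dim label patch
```

The key property the sketch preserves is that the dictionary is task-agnostic: only the transfer map `W` is trained against annotations, so the same codes could feed transfer functions for contours, face parts, or other labelings.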

    The Use of Multi-beam Sonars to Image Bubbly Ship Wakes

    During the past five years, researchers at Penn State University (PSU) have used upward-looking multi-beam (MB) sonar to image the bubbly wakes of surface ships. In 2000, a 19-beam, 5° beam width, 120° sector, 250 kHz MB sonar integrated into an autonomous vehicle was used to obtain a first-of-a-kind look at the three-dimensional variability of bubbles in a large ship wake. In 2001 we acquired a Reson 8101 MB sonar, which operates at 240 kHz and features 101 beams of 1.5° width spanning a 150° sector. In July 2002, the Reson sonar was deployed looking upward from a 1.4 m diameter buoy moored at 29.5 m depth in 550 m of water using three anchor lines. A fiber optic cable connected the sonar to a support ship 500 m away. Images of the wake of a small research vessel provided new information about the persistence of bubble clouds in the ocean. An important goal is to use the MB sonar to estimate wake bubble distributions, as has been done with single-beam sonar. Here we show that multipath interference and strong, specular reflections from the sea surface adversely affect the use of MB sonars to unambiguously estimate wake bubble distributions.
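The multipath problem the abstract names can be illustrated with image-source geometry: an upward-looking sonar hears both the direct return from a wake bubble and a slightly later copy reflected off the sea surface. A minimal sketch using the deployment's 29.5 m sonar depth; the sound speed, bubble depth, and horizontal range are our assumed example values:

```python
import math

C = 1500.0  # assumed sound speed (m/s)

def multipath_delay(sonar_depth, bubble_depth, horiz_range, c=C):
    """Arrival-time difference (s) between the direct return from a
    near-surface scatterer and its surface-bounce return, computed by
    the image-source method (reflect the scatterer across the surface)."""
    direct = math.hypot(horiz_range, sonar_depth - bubble_depth)
    bounce = math.hypot(horiz_range, sonar_depth + bubble_depth)
    return (bounce - direct) / c

# A wake bubble 2 m below the surface, 5 m off the sonar's vertical axis,
# seen from the buoy-mounted sonar at 29.5 m depth:
dt = multipath_delay(sonar_depth=29.5, bubble_depth=2.0, horiz_range=5.0)
print(f"surface bounce arrives {dt * 1e3:.2f} ms after the direct return")
```

As the bubble depth shrinks toward zero the delay vanishes, so for the shallowest wake bubbles the bounce overlaps the direct return and cannot be gated out in time, which is one reason the surface reflections make unambiguous bubble-distribution estimates difficult.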