
    Towards Opportunistic Data Dissemination in Mobile Phone Sensor Networks

    Recently, there has been growing interest within the research community in developing opportunistic routing protocols. Many schemes have been proposed; however, they differ greatly in their assumptions and in the type of network for which they are evaluated. As a result, researchers have an unclear picture of how these schemes compare against each other in specific applications. To investigate the performance of existing opportunistic routing algorithms in realistic scenarios, we propose a heterogeneous architecture comprising fixed infrastructure, mobile infrastructure, and mobile nodes. The proposed architecture focuses on how to exploit the available, low-cost, short-range radios of mobile phones for data gathering and dissemination. We also propose a new realistic mobility model and new evaluation metrics. Existing opportunistic routing protocols are simulated and evaluated under the proposed heterogeneous architecture, mobility models, and transmission interfaces. Results show that some protocols perform poorly with long time-to-live (TTL) values, while others perform poorly with short ones. We show that heterogeneous sensor network architectures need heterogeneous routing algorithms, such as a combination of Epidemic and Spray and Wait.
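    Since the abstract names Epidemic and Spray and Wait, a minimal sketch of the binary Spray and Wait forwarding rule may help: each message carries a copy budget that is halved at every encounter, which bounds the flooding that plain Epidemic routing would produce. The names and structure below are illustrative assumptions, not the paper's code.

```python
# Minimal sketch of binary Spray and Wait forwarding. Illustrative only;
# identifiers and structure are assumptions, not the paper's implementation.

from dataclasses import dataclass

@dataclass
class Message:
    msg_id: str
    destination: str
    copies: int  # remaining copy budget ("spray" tokens)

def on_encounter(msg: Message, peer_id: str) -> str:
    """Decide what to do when a node carrying `msg` meets `peer_id`."""
    if peer_id == msg.destination:
        return "deliver"                  # direct delivery always wins
    if msg.copies > 1:
        # Spray phase: hand half of the remaining copies to the peer.
        handed = msg.copies // 2
        msg.copies -= handed
        return f"forward {handed} copies"
    # Wait phase: keep the last copy until the destination is met.
    return "wait"

if __name__ == "__main__":
    m = Message("m1", destination="sink", copies=8)
    for peer in ["a", "b", "c", "sink"]:
        print(peer, "->", on_encounter(m, peer))
```

    Epidemic routing, by contrast, replicates a message to every encountered node; a heterogeneous combination could spray aggressively near fixed infrastructure and fall back to the single-copy wait phase elsewhere.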

    Modeling Camera Effects to Improve Visual Learning from Synthetic Data

    Recent work has focused on generating synthetic imagery to increase the size and variability of training data for learning visual tasks in urban scenes, for example by increasing the occurrence of occlusions or by varying environmental and weather effects. However, few have addressed modeling variation in the sensor domain. Sensor effects can degrade real images, limiting the generalizability of networks that are trained on synthetic data and tested in real environments. This paper proposes an efficient, automatic, physically based augmentation pipeline that varies sensor effects (chromatic aberration, blur, exposure, noise, and color cast) in synthetic imagery. In particular, this paper illustrates that augmenting synthetic training datasets with the proposed pipeline reduces the domain gap between the synthetic and real domains for the task of object detection in urban driving scenes.
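    To make the listed sensor effects concrete, here is a small NumPy/SciPy sketch of a randomized augmentation of this general kind. The effect ranges and the crude channel-shift model of chromatic aberration are assumptions for illustration, not the paper's calibrated, physically based pipeline.

```python
# Illustrative sketch of randomized sensor-effect augmentation; the
# effect ranges are assumptions, not the paper's calibrated parameters.

import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

def augment(img: np.ndarray) -> np.ndarray:
    """Apply random sensor effects to a float RGB image with values in [0, 1]."""
    out = img.copy()
    # Crude chromatic aberration: shift the red channel a few pixels sideways.
    out[..., 0] = np.roll(out[..., 0], rng.integers(-2, 3), axis=1)
    # Blur: Gaussian filter over the spatial axes only (sigma 0 on channels).
    out = gaussian_filter(out, sigma=(rng.uniform(0.0, 1.5),) * 2 + (0.0,))
    # Exposure: global gain.
    out = out * rng.uniform(0.7, 1.3)
    # Sensor noise: additive Gaussian.
    out = out + rng.normal(0.0, 0.02, out.shape)
    # Color cast: independent per-channel gain.
    out = out * rng.uniform(0.9, 1.1, size=(1, 1, 3))
    return np.clip(out, 0.0, 1.0)

if __name__ == "__main__":
    frame = rng.random((64, 64, 3))  # stand-in for a rendered synthetic frame
    print(augment(frame).shape)
```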

    Domain Randomization and Generative Models for Robotic Grasping

    Deep learning-based robotic grasping has made significant progress thanks to algorithmic improvements and increased data availability. However, state-of-the-art models are often trained on as few as hundreds or thousands of unique object instances, and as a result generalization can be a challenge. In this work, we explore a novel data generation pipeline for training a deep neural network to perform grasp planning that applies the idea of domain randomization to object synthesis. We generate millions of unique, unrealistic, procedurally generated objects and train a deep neural network to perform grasp planning on them. Since the distribution of successful grasps for a given object can be highly multimodal, we propose an autoregressive grasp planning model that maps sensor inputs of a scene to a probability distribution over possible grasps. This model allows us to sample grasps efficiently at test time (or avoid sampling entirely). We evaluate our model architecture and data generation pipeline in simulation and in the real world. We find we can achieve a >90% success rate on previously unseen realistic objects at test time in simulation, despite having been trained only on random objects. We also demonstrate an 80% success rate on real-world grasp attempts despite having been trained only on random simulated objects.

    Comment: 8 pages, 11 figures. Submitted to the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2018).
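    The autoregressive idea in this abstract is that a multimodal grasp distribution can be factored into a chain of simpler conditionals, e.g. p(x) p(y | x) p(theta | x, y), so a grasp is sampled one coordinate at a time. The toy sketch below shows only that sampling structure; the discretization into bins and the uniform stand-in "head" are assumptions, not the paper's network architecture.

```python
# Toy sketch of autoregressive grasp sampling over discretized bins.
# The `head` function is a random placeholder for a learned conditional
# distribution; it is NOT the paper's model.

import numpy as np

N_BINS = 32
rng = np.random.default_rng(0)

def head(conditioning: tuple[int, ...]) -> np.ndarray:
    """Stand-in for a learned conditional head; returns bin probabilities.

    A real model would condition on the scene observation and on the
    previously sampled coordinates; here the logits are just random.
    """
    logits = rng.normal(size=N_BINS)
    e = np.exp(logits - logits.max())
    return e / e.sum()

def sample_grasp() -> tuple[int, int, int]:
    """Draw (x, y, theta) bins sequentially from the factored model."""
    x = int(rng.choice(N_BINS, p=head(())))
    y = int(rng.choice(N_BINS, p=head((x,))))
    theta = int(rng.choice(N_BINS, p=head((x, y))))
    return x, y, theta

if __name__ == "__main__":
    print([sample_grasp() for _ in range(3)])
```

    Sampling coordinate by coordinate in this way stays cheap at test time, which matches the abstract's claim that grasps can be sampled efficiently (or, if the heads are evaluated greedily, sampling can be avoided entirely).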