
    EV-FlowNet: Self-Supervised Optical Flow Estimation for Event-based Cameras

    Event-based cameras have shown great promise in a variety of situations where frame-based cameras suffer, such as high-speed motions and high dynamic range scenes. However, developing algorithms for event measurements requires a new class of hand-crafted algorithms. Deep learning has shown great success in providing model-free solutions to many problems in the vision community, but existing networks have been developed with frame-based images in mind, and there does not exist the wealth of labeled data for events that there is for images for supervised training. To these points, we present EV-FlowNet, a novel self-supervised deep learning pipeline for optical flow estimation for event-based cameras. In particular, we introduce an image-based representation of a given event stream, which is fed into a self-supervised neural network as the sole input. The corresponding grayscale images captured from the same camera at the same time as the events are then used as a supervisory signal to provide a loss function at training time, given the estimated flow from the network. We show that the resulting network is able to accurately predict optical flow from events only in a variety of different scenes, with performance competitive to image-based networks. This method not only allows for accurate estimation of dense optical flow, but also provides a framework for the transfer of other self-supervised methods to the event-based domain.
    Comment: 9 pages, 5 figures, 1 table. Accompanying video: https://youtu.be/eMHZBSoq0sE. Dataset: https://daniilidis-group.github.io/mvsec/. Robotics: Science and Systems 2018
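    As a rough illustration of the kind of image-based event representation the abstract refers to, here is a minimal NumPy sketch (not the authors' code) assuming a four-channel encoding: per-pixel counts of positive/negative events plus the most recent normalized timestamp per polarity. The function name, array layout, and normalization are illustrative assumptions.

```python
import numpy as np

def events_to_image(xs, ys, ts, ps, height, width):
    """Encode an event stream as a 4-channel image:
    channels 0/1: per-pixel counts of positive/negative events,
    channels 2/3: per-pixel timestamp of the most recent positive/negative event.
    xs, ys: pixel coordinates; ts: timestamps; ps: polarities (+1/-1)."""
    img = np.zeros((4, height, width), dtype=np.float32)
    # Normalize timestamps to [0, 1] over the event window.
    t_norm = (ts - ts.min()) / max(ts.max() - ts.min(), 1e-9)
    for x, y, t, p in zip(xs, ys, t_norm, ps):
        if p > 0:
            img[0, y, x] += 1.0                   # positive event count
            img[2, y, x] = max(img[2, y, x], t)   # latest positive timestamp
        else:
            img[1, y, x] += 1.0                   # negative event count
            img[3, y, x] = max(img[3, y, x], t)   # latest negative timestamp
    return img

# Tiny synthetic usage example
xs = np.array([3, 3, 7]); ys = np.array([2, 2, 5])
ts = np.array([0.00, 0.01, 0.02]); ps = np.array([1, -1, 1])
print(events_to_image(xs, ys, ts, ps, height=10, width=10).shape)  # (4, 10, 10)
```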

    An Improved Bernstein-type Inequality for C-Mixing-type Processes and Its Application to Kernel Smoothing

    There are many processes, particularly dynamical systems, that cannot be described as strong mixing processes. \citet{maume2006exponential} introduced a new mixing coefficient called C-mixing, which includes a large class of dynamical systems. Based on this, \citet{hang2017bernstein} obtained a Bernstein-type inequality for a geometric C-mixing process, which, modulo a logarithmic factor and some constants, coincides with the standard result for the i.i.d. case. In order to honor this pioneering work, we conduct follow-up research in this paper and obtain an improved result under more general preconditions: we allow for a weaker semi-norm condition, full non-stationarity, and non-isotropic sampling behavior. Our result covers the case in which the index set of the processes lies in $\mathbf{Z}^{d+}$ for any given positive integer $d$, where $\mathbf{Z}^{d+}$ denotes the collection of all nonnegative integer-valued $d$-dimensional vectors. This setting of the index set takes both temporal and spatial data into consideration. As an application, we investigate the theoretical guarantees of several kernel-based nonparametric curve estimators for C-mixing-type processes. More specifically, we first obtain the $L^{\infty}$-convergence rate of the kernel density estimator and then discuss the attainability of optimality, which can also be regarded as an update of the result of \citet{hang2018kernel}. Furthermore, we investigate the uniform convergence of the kernel-based estimators of the conditional mean and variance functions in a heteroscedastic nonparametric regression model. Under a mild smoothness condition, these estimators are optimal. Finally, we obtain the uniform convergence rate of the conditional mode function.
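    For the kernel smoothing application, the sketch below shows a standard Gaussian-kernel density estimator and a Nadaraya-Watson conditional-mean estimator, i.e. the kind of estimators whose uniform convergence the abstract studies; it is not the paper's analysis, and the bandwidth and the AR(1)-style example sequence are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(u):
    """Standard Gaussian kernel."""
    return np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)

def kde(x_eval, x_sample, h):
    """Kernel density estimate: f_hat(x) = (1 / (n h)) * sum_i K((x - X_i) / h)."""
    u = (x_eval[:, None] - x_sample[None, :]) / h
    return gaussian_kernel(u).mean(axis=1) / h

def nadaraya_watson(x_eval, x_sample, y_sample, h):
    """Kernel estimate of the conditional mean m(x) = E[Y | X = x]."""
    w = gaussian_kernel((x_eval[:, None] - x_sample[None, :]) / h)
    return (w * y_sample[None, :]).sum(axis=1) / np.maximum(w.sum(axis=1), 1e-12)

# Illustrative weakly dependent sequence (AR(1)) with a noisy regression target
rng = np.random.default_rng(0)
x = np.zeros(500)
for t in range(1, 500):
    x[t] = 0.5 * x[t - 1] + rng.normal()
y = np.sin(x) + 0.1 * rng.normal(size=500)
grid = np.linspace(-3, 3, 50)
print(kde(grid, x, h=0.3).shape, nadaraya_watson(grid, x, y, h=0.3).shape)
```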

    Unsupervised Event-based Learning of Optical Flow, Depth, and Egomotion

    In this work, we propose a novel framework for unsupervised learning for event cameras that learns motion information from only the event stream. In particular, we propose an input representation of the events in the form of a discretized volume that maintains the temporal distribution of the events, which we pass through a neural network to predict the motion of the events. This motion is used to attempt to remove any motion blur in the event image. We then propose a loss function applied to the motion compensated event image that measures the motion blur in this image. We train two networks with this framework, one to predict optical flow, and one to predict egomotion and depths, and evaluate these networks on the Multi Vehicle Stereo Event Camera dataset, along with qualitative results from a variety of different scenes.Comment: 9 pages, 7 figure
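    A minimal sketch of a discretized event volume of the kind described above, in which each event's polarity is split between the two nearest temporal bins by linear interpolation so that timing information is preserved; the function name, bin count, and timestamp normalization are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def events_to_voxel_grid(xs, ys, ts, ps, num_bins, height, width):
    """Discretized event volume: each event's polarity is distributed over the two
    nearest temporal bins by linear interpolation, preserving temporal structure."""
    voxel = np.zeros((num_bins, height, width), dtype=np.float32)
    # Scale timestamps to the range [0, num_bins - 1].
    t = (ts - ts.min()) / max(ts.max() - ts.min(), 1e-9) * (num_bins - 1)
    left = np.floor(t).astype(int)
    right = np.minimum(left + 1, num_bins - 1)
    w_right = t - left
    for x, y, l, r, wr, p in zip(xs, ys, left, right, w_right, ps):
        voxel[l, y, x] += p * (1.0 - wr)   # share assigned to the earlier bin
        voxel[r, y, x] += p * wr           # share assigned to the later bin
    return voxel

# Tiny usage example
xs = np.array([1, 2, 3]); ys = np.array([1, 1, 2])
ts = np.array([0.0, 0.5, 1.0]); ps = np.array([1, -1, 1])
print(events_to_voxel_grid(xs, ys, ts, ps, num_bins=5, height=4, width=4).shape)
```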

    Domain-Indexing Variational Bayes: Interpretable Domain Index for Domain Adaptation

    Previous studies have shown that leveraging domain indices can significantly boost domain adaptation performance (arXiv:2007.01807, arXiv:2202.03628). However, such domain indices are not always available. To address this challenge, we first provide a formal definition of the domain index from a probabilistic perspective, and then propose an adversarial variational Bayesian framework that infers domain indices from multi-domain data, thereby providing additional insight into domain relations and improving domain adaptation performance. Our theoretical analysis shows that our adversarial variational Bayesian framework finds the optimal domain index at equilibrium. Empirical results on both synthetic and real data verify that our model can produce interpretable domain indices which enable us to achieve superior performance compared to state-of-the-art domain adaptation methods. Code is available at https://github.com/Wang-ML-Lab/VDI.
    Comment: ICLR 2023 Spotlight (notable-top-25%)
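    The following is a hypothetical PyTorch sketch (not the released VDI code) of the two ingredients the abstract combines: a variational encoder that infers a latent domain index per sample via the reparameterization trick, and an adversarial discriminator that tries to recover the domain label from that index. All layer sizes, names, and dimensions are assumptions chosen for illustration.

```python
import torch
import torch.nn as nn

# Hypothetical sketch, NOT the released VDI implementation: a variational encoder
# infers a latent "domain index" per sample, and an adversarial discriminator tries
# to predict the domain label from it; in a full pipeline the encoder would be
# trained against this discriminator.
feat_dim, index_dim, num_domains = 32, 4, 5

index_encoder = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 2 * index_dim))
domain_discriminator = nn.Sequential(nn.Linear(index_dim, 64), nn.ReLU(), nn.Linear(64, num_domains))

def infer_domain_index(features):
    """Encode features into mean/log-variance and sample with the reparameterization trick."""
    mu, log_var = index_encoder(features).chunk(2, dim=-1)
    z = mu + torch.randn_like(mu) * torch.exp(0.5 * log_var)
    return z, mu, log_var

features = torch.randn(8, feat_dim)               # stand-in for backbone features
z, mu, log_var = infer_domain_index(features)
domain_labels = torch.randint(0, num_domains, (8,))
# Adversarial term: discriminator loss for recovering the domain label from z.
adv_loss = nn.functional.cross_entropy(domain_discriminator(z), domain_labels)
# Standard KL term of a variational objective.
kl = -0.5 * torch.mean(1 + log_var - mu.pow(2) - log_var.exp())
print(float(adv_loss), float(kl))
```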

    Crocs: Cross-Technology Clock Synchronization for WiFi and ZigBee

    Clock synchronization is a key function in embedded wireless systems and networks. The issue is equally important and more challenging in today's IoT systems, which often include heterogeneous wireless devices that follow different wireless standards. Conventional solutions employ gateway-based indirect synchronization, which suffers from low accuracy. This paper is the first to study the problem of cross-technology clock synchronization. Our proposal, called Crocs, synchronizes WiFi and ZigBee devices by direct cross-technology communication. Crocs decouples the synchronization signal from the transmission of the timestamp. By incorporating a Barker-code-based beacon for time alignment and cross-technology transmission of timestamps, Crocs achieves robust and accurate synchronization among WiFi and ZigBee devices, with a synchronization error lower than 1 millisecond. We further implement different cross-technology communication methods in Crocs and provide insights into the achievable accuracy and expected overhead.
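    As a rough illustration of the beacon-based alignment idea (not the Crocs implementation), the sketch below detects a Barker-code beacon in a received sample stream by cross-correlation and then combines the beacon's local arrival time with a separately transmitted timestamp to estimate a clock offset, mirroring the decoupling the abstract describes. The sample rate, code length, and timestamp values are made-up assumptions.

```python
import numpy as np

# 11-bit Barker code used here purely as an example alignment sequence.
BARKER_11 = np.array([1, 1, 1, -1, -1, -1, 1, -1, -1, 1, -1], dtype=float)

def detect_beacon(samples, code=BARKER_11):
    """Return the sample index where the beacon correlation peaks."""
    corr = np.correlate(samples, code, mode="valid")
    return int(np.argmax(np.abs(corr)))

# Simulated receive buffer: noise with the beacon embedded at offset 40.
rng = np.random.default_rng(1)
rx = 0.2 * rng.normal(size=200)
rx[40:40 + len(BARKER_11)] += BARKER_11
beacon_index = detect_beacon(rx)

# Both radios align to the beacon; the actual timestamp arrives in a separate
# message (decoupled from the alignment signal), hypothetical values below.
local_clock_at_beacon = 12.345678    # seconds, local reading at beacon detection
sender_timestamp = 12.345100         # seconds, reported by the sender afterwards
offset = local_clock_at_beacon - sender_timestamp
print(beacon_index, f"estimated clock offset = {offset * 1e3:.3f} ms")
```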