
    Computational intelligence approaches to robotics, automation, and control [Volume guest editors]

    Get PDF
    No abstract available

    Adaptive multiscale detection of filamentary structures in a background of uniform random points

    Full text link
    We are given a set of $n$ points that might be uniformly distributed in the unit square $[0,1]^2$. We wish to test whether the set, although mostly consisting of uniformly scattered points, also contains a small fraction of points sampled from some (a priori unknown) curve with $C^\alpha$-norm bounded by $\beta$. An asymptotic detection threshold exists in this problem; for a constant $T_-(\alpha,\beta) > 0$, if the number of points sampled from the curve is smaller than $T_-(\alpha,\beta)\, n^{1/(1+\alpha)}$, reliable detection is not possible for large $n$. We describe a multiscale significant-runs algorithm that can reliably detect concentration of data near a smooth curve, without knowing the smoothness information $\alpha$ or $\beta$ in advance, provided that the number of points on the curve exceeds $T_*(\alpha,\beta)\, n^{1/(1+\alpha)}$. This algorithm therefore has an optimal detection threshold, up to a factor $T_*/T_-$. At the heart of our approach is an analysis of the data by counting membership in multiscale multianisotropic strips. The strips will have area $2/n$ and exhibit a variety of lengths, orientations and anisotropies. The strips are partitioned into anisotropy classes; each class is organized as a directed graph whose vertices all are strips of the same anisotropy and whose edges link such strips to their ``good continuations.'' The point-cloud data are reduced to counts that measure membership in strips. Each anisotropy graph is reduced to a subgraph that consists of strips with significant counts. The algorithm rejects $\mathbf{H}_0$ whenever some such subgraph contains a path that connects many consecutive significant counts.
    Comment: Published at http://dx.doi.org/10.1214/009053605000000787 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
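    The counting-and-runs idea at the heart of the algorithm can be illustrated compactly. The following is only a simplified, hypothetical sketch of that idea, not the paper's procedure: count memberships in a strip, flag strips whose counts sit in the upper Poisson tail of what $n$ uniform points would produce (a strip of area $2/n$ expects about 2 points under $\mathbf{H}_0$), and measure the longest chain of significant strips along good continuations. The strip enumeration, the per-strip threshold, and the graph construction are placeholders for the multiscale machinery developed in the paper.

```python
import numpy as np
from functools import lru_cache
from scipy.stats import poisson

def strip_count(points, center, direction, length, width):
    """Count the points falling in a rectangular strip of the given
    length and width, centered at `center` and aligned with the unit
    vector `direction`."""
    d = points - center
    along = d @ direction
    across = d @ np.array([-direction[1], direction[0]])
    return int(np.sum((np.abs(along) <= length / 2) &
                      (np.abs(across) <= width / 2)))

def is_significant(count, n, area, level=1e-3):
    """Under H0 (n uniform points), the count in a strip of this area
    is roughly Poisson(n * area); flag counts in the upper tail."""
    return poisson.sf(count - 1, mu=n * area) < level

def longest_significant_run(edges, significant):
    """Longest path through significant strips in the good-continuation
    graph, assumed acyclic (strips ordered along their direction)."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)

    @lru_cache(maxsize=None)
    def run_from(u):
        if not significant[u]:
            return 0
        return 1 + max((run_from(v) for v in adj.get(u, ())), default=0)

    return max((run_from(u) for u in significant), default=0)
```

    What this sketch leaves out is exactly where the paper's analysis lives: enumerating strips over all scales and anisotropy classes and calibrating the per-strip threshold jointly, which is how the stated $T_*(\alpha,\beta)\, n^{1/(1+\alpha)}$ detection threshold arises.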

    Analysing correlated noise on the surface code using adaptive decoding algorithms

    Get PDF
    Laboratory hardware is rapidly progressing towards a state where quantum error-correcting codes can be realised. As such, we must learn how to deal with the complex nature of the noise that may occur in real physical systems. Single-qubit Pauli errors are commonly used to study the behaviour of error-correcting codes, but in general we might expect the environment to introduce correlated errors to a system. Given some knowledge of the structures that errors commonly take, it may be possible to adapt the error-correction procedure to compensate for this noise, but performing full state tomography on a physical system to analyse this structure quickly becomes impossible as the size increases beyond a few qubits. Here we develop and test new methods to analyse a particular class of spatially correlated errors by making use of parametrised families of decoding algorithms. We demonstrate our method numerically using a diffusive noise model. We show that information can be learnt about the parameters of the noise model, and additionally that the logical error rates can be improved. We conclude by discussing how our method could be utilised in a practical setting, and propose extensions of our work to study more general error models.
    Comment: 19 pages, 8 figures, comments welcome; v2 - minor typos corrected, some references added; v3 - accepted to Quantum
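    As a hedged illustration of what a "parametrised family of decoders" means, the toy below replaces the surface code with a repetition code and the diffusive model with simple Markov-chain bit-flip noise; both are stand-ins, not the paper's setup. A single parameter eta makes bursts of adjacent flips cheaper for the decoder, and sweeping eta while watching the logical failure rate mimics, in miniature, tuning a decoder family against correlated noise.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_correlated_errors(L, p, p_given_prev, trials):
    """Markov-chain bit-flip noise: a flip at site i-1 raises the flip
    probability at site i (a crude stand-in for diffusive correlations)."""
    e = np.zeros((trials, L), dtype=bool)
    e[:, 0] = rng.random(trials) < p
    for i in range(1, L):
        pi = np.where(e[:, i - 1], p_given_prev, p)
        e[:, i] = rng.random(trials) < pi
    return e

def logical_failures(e, eta):
    """Parametrised decoder for the length-L repetition code.  The two
    errors consistent with the syndrome are e0 and its complement; pick
    the one of lower weight, where eta discounts adjacent flipped pairs
    (bursts are cheaper when the noise is positively correlated)."""
    syndrome = e[:, :-1] ^ e[:, 1:]
    e0 = np.concatenate([np.zeros((e.shape[0], 1), dtype=bool),
                         np.cumsum(syndrome, axis=1) % 2 == 1], axis=1)
    def weight(x):
        return x.sum(axis=1) - eta * (x[:, :-1] & x[:, 1:]).sum(axis=1)
    pick_complement = weight(~e0) < weight(e0)
    chosen = np.where(pick_complement[:, None], ~e0, e0)
    return np.any(chosen != e, axis=1)   # True = logical failure

errors = sample_correlated_errors(L=15, p=0.08, p_given_prev=0.5, trials=20000)
for eta in (0.0, 0.25, 0.5, 0.75):
    print(eta, logical_failures(errors, eta).mean())
```

    The point of the sweep is to see whether a nonzero eta lowers the failure rate relative to the uncorrelated decoder (eta = 0); that is the small-scale analogue of the improved logical error rates the paper reports for the surface code.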

    On the genericity properties in networked estimation: Topology design and sensor placement

    Full text link
    In this paper, we consider networked estimation of linear, discrete-time dynamical systems monitored by a network of agents. In order to minimize the power requirement at the (possibly battery-operated) agents, we require that the agents exchange information with their neighbors only \emph{once per dynamical-system time-step}, in contrast to consensus-based estimation, where the agents exchange information until they reach a consensus. It can be verified that, with this restriction on information exchange, measurement fusion alone results in an unbounded estimation error at every agent that does not have an observable set of measurements in its neighborhood. To overcome this challenge, state-estimate fusion has been proposed to recover system observability. However, we show that adding state-estimate fusion may not recover observability when the system matrix is structured-rank ($S$-rank) deficient. In this context, we characterize state-estimate fusion and measurement fusion under both full $S$-rank and $S$-rank deficient system matrices.
    Comment: submitted for IEEE journal publication
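    The $S$-rank condition is easy to probe numerically: the structural rank of a sparsity pattern equals the size of a maximum bipartite matching between its rows and columns, and SciPy computes it directly. Below is a minimal sketch with a made-up 4-state pattern; the matrix is purely illustrative, not taken from the paper.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import structural_rank

# Sparsity pattern of a hypothetical 4-state system matrix A.
# The structural rank (S-rank) is the size of a maximum bipartite
# matching between rows and columns of the nonzero pattern; it
# upper-bounds the numeric rank of every matrix with this pattern.
A_pattern = csr_matrix(np.array([
    [1, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 1, 0],   # rows 1 and 2 compete for the same column...
    [0, 0, 0, 1],
]))
print(structural_rank(A_pattern))  # 3 < 4: the pattern is S-rank deficient
```

    In the abstract's terms, a pattern like this is the problematic case: when the system matrix is $S$-rank deficient, adding state-estimate fusion may not recover observability.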

    Depth from Monocular Images using a Semi-Parallel Deep Neural Network (SPDNN) Hybrid Architecture

    Get PDF
    Deep neural networks have been applied to a wide range of problems in recent years. In this work, a Convolutional Neural Network (CNN) is applied to the problem of determining the depth from a single camera image (monocular depth). Eight different networks are designed to perform depth estimation, each of them suited to a particular feature level; networks with different pooling sizes capture different feature levels. After designing this set of networks, the models are combined into a single network topology using graph optimization techniques. This "Semi Parallel Deep Neural Network (SPDNN)" eliminates duplicated common network layers and can be further optimized by retraining, yielding an improved model compared to the individual topologies. In this study, four SPDNN models are trained and evaluated in two stages on the KITTI dataset. The ground-truth images in the first part of the experiment are provided by the benchmark, and for the second part the ground-truth images are the depth maps obtained by applying a state-of-the-art stereo matching method. The results of this evaluation demonstrate that using post-processing techniques to refine the target of the network increases the accuracy of depth estimation on individual mono images. The second evaluation shows that using segmentation data alongside the original data as the input can improve the depth estimation results to the point where performance is comparable with stereo depth estimation. The computational time is also discussed in this study.
    Comment: 44 pages, 25 figures
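    The layer-merging idea behind SPDNN can be pictured with a small, hypothetical model: branches that would otherwise duplicate the same early convolutions share one trunk, then diverge into parallel branches with different pooling sizes before being fused. The sketch below (in Keras, which the paper does not necessarily use, with a made-up input resolution and three branches instead of eight) only illustrates the topology, not the authors' exact architecture.

```python
from tensorflow.keras import layers, Model, Input

inp = Input(shape=(128, 416, 3))          # hypothetical input resolution

# Shared trunk: layers that were identical across the individual
# networks are kept once instead of being duplicated per branch.
x = layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)

# Parallel branches: different pooling sizes give different feature
# levels, mirroring the "eight networks" idea on a smaller scale.
branches = []
for pool in (2, 4, 8):
    b = layers.MaxPooling2D(pool)(x)
    b = layers.Conv2D(64, 3, padding="same", activation="relu")(b)
    b = layers.UpSampling2D(pool)(b)      # back to a common resolution
    branches.append(b)

merged = layers.Concatenate()(branches)
depth = layers.Conv2D(1, 3, padding="same", activation="sigmoid")(merged)

model = Model(inp, depth)                 # one network, retrainable end to end
model.summary()
```

    Because the merged model is a single graph, it can be retrained end to end, which is the step the abstract credits with improving on the individual topologies.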