81,757 research outputs found

    Multi-Scale Depth from Slope with Weights

    Multidecadal warming of Antarctic waters

    Decadal trends in the properties of seawater adjacent to Antarctica are poorly known, and the mechanisms responsible for such changes are uncertain. Antarctic ice sheet mass loss is largely driven by ice shelf basal melt, which is influenced by ocean-ice interactions and has been correlated with Antarctic Continental Shelf Bottom Water (ASBW) temperature. We document the spatial distribution of long-term, large-scale trends in temperature, salinity, and core depth over the Antarctic continental shelf and slope. Warming at the seabed in the Bellingshausen and Amundsen seas is linked to increased heat content and to a shoaling of the mid-depth temperature maximum over the continental slope, allowing warmer, saltier water greater access to the shelf in recent years. Regions of ASBW warming are those exhibiting increased ice shelf melt.
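
    As an illustration of how such decadal trends can be quantified, the Python sketch below fits an ordinary least-squares linear trend to a synthetic annual seabed temperature series. It is a minimal example under assumed inputs, not the authors' analysis, and the data values are invented.

        # Minimal sketch, not the paper's method: estimate a decadal warming
        # trend from an annual seabed temperature series via least squares.
        # The time series below is synthetic.
        import numpy as np

        rng = np.random.default_rng(0)
        years = np.arange(1975, 2013)                          # hypothetical observation years
        temps = -1.6 + 0.005 * (years - years[0]) + 0.05 * rng.standard_normal(years.size)

        slope, intercept = np.polyfit(years, temps, deg=1)     # slope in degC per year
        print(f"trend: {10 * slope:+.3f} degC per decade")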

    Accurate Light Field Depth Estimation with Superpixel Regularization over Partially Occluded Regions

    Depth estimation is a fundamental problem for light field photography applications. Numerous methods have been proposed in recent years, which either focus on crafting cost terms for more robust matching, or on analyzing the geometry of scene structures embedded in the epipolar-plane images. Significant improvements have been made in terms of overall depth estimation error; however, current state-of-the-art methods still show limitations in handling intricate occluding structures and complex scenes with multiple occlusions. To address these challenging issues, we propose a very effective depth estimation framework which focuses on regularizing the initial label confidence map and edge strength weights. Specifically, we first detect partially occluded boundary regions (POBR) via superpixel-based regularization. A series of shrinkage/reinforcement operations is then applied to the label confidence map and edge strength weights over the POBR. We show that, after weight manipulation, even a low-complexity weighted least squares model can produce much better depth estimation than state-of-the-art methods in terms of average disparity error rate, occlusion boundary precision-recall rate, and the preservation of intricate visual features.
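
    The weighted least-squares step described above can be pictured with the following Python sketch: a sparse WLS solve that pulls a refined disparity map toward the initial estimate in proportion to a confidence map, while edge-strength weights control smoothing across neighbouring pixels. The variable names, the 4-neighbour graph, and the single parameter lam are assumptions for illustration, not the paper's exact formulation.

        # Minimal WLS sketch under assumed inputs (not the paper's exact model):
        #   d0    initial disparity map, shape (h, w)
        #   conf  label confidence map, shape (h, w)
        #   w_h   edge-strength weights between horizontal neighbours, shape (h, w - 1)
        #   w_v   edge-strength weights between vertical neighbours, shape (h - 1, w)
        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import spsolve

        def wls_refine(d0, conf, w_h, w_v, lam=1.0):
            """Solve (C + lam * L_w) d = C d0 for the refined disparity d."""
            h, w = d0.shape
            idx = np.arange(h * w).reshape(h, w)
            rows, cols, vals = [], [], []

            def add_edge(a, b, weight):                  # weighted graph Laplacian entries
                rows.extend([a, b, a, b])
                cols.extend([b, a, a, b])
                vals.extend([-weight, -weight, weight, weight])

            for y in range(h):                           # horizontal neighbours
                for x in range(w - 1):
                    add_edge(idx[y, x], idx[y, x + 1], w_h[y, x])
            for y in range(h - 1):                       # vertical neighbours
                for x in range(w):
                    add_edge(idx[y, x], idx[y + 1, x], w_v[y, x])

            L = sp.csr_matrix((vals, (rows, cols)), shape=(h * w, h * w))
            C = sp.diags(conf.ravel())
            d = spsolve((C + lam * L).tocsc(), C @ d0.ravel())
            return d.reshape(h, w)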

    Localization of random walks to competing manifolds of distinct dimensions

    We consider localization of a random walk (RW) when attracted or repelled by multiple extended manifolds of different dimensionalities. In particular, we focus on (d-1)- and (d-2)-dimensional manifolds in d-dimensional space, where attractive interactions are (fully or marginally) relevant. The RW can then be in one of four phases where it is localized to neither, one, or both manifolds. The four phases merge at a special multi-critical point where (away from the manifolds) the RW spreads diffusively. Extensive numerical analyses on two-dimensional RWs confined inside or outside a rectangular wedge confirm general features expected from a continuum theory, but also exhibit unexpected attributes, such as a reentrant localization to the corner while repelled by it.
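
    A toy version of the confined-wedge setting can be written in a few lines of Python: a lattice walker restricted to the quadrant x >= 0, y >= 0, with a contact attraction of strength u to each boundary line (and hence a double attraction at the corner). The dynamics and observables below are illustrative assumptions, not the simulation used in the paper.

        # Minimal sketch (assumed model, not the paper's): a 2D walker confined
        # to the wedge x >= 0, y >= 0, stepping to neighbouring sites with
        # Boltzmann weights exp(u * contacts) on the destination site.
        import math
        import random

        def contacts(x, y):
            return (x == 0) + (y == 0)        # 0 in the bulk, 1 on a wall, 2 at the corner

        def simulate(u, steps=200_000, seed=1):
            rng = random.Random(seed)
            x = y = 0
            wall_frac = corner_frac = 0.0
            for _ in range(steps):
                moves = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                         if x + dx >= 0 and y + dy >= 0]          # stay inside the wedge
                weights = [math.exp(u * contacts(nx, ny)) for nx, ny in moves]
                x, y = rng.choices(moves, weights=weights)[0]
                wall_frac += (contacts(x, y) > 0) / steps
                corner_frac += ((x, y) == (0, 0)) / steps
            return wall_frac, corner_frac

        print(simulate(u=0.2))                # weak attraction: mostly delocalized
        print(simulate(u=1.5))                # strong attraction: pinned near the boundary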

    Searching for Exoplanets Using Artificial Intelligence

    In the last decade, over a million stars were monitored to detect transiting planets. Manual interpretation of potential exoplanet candidates is labor intensive and subject to human error, the results of which are difficult to quantify. Here we present a new method of detecting exoplanet candidates in large planetary search projects which, unlike current methods, uses a neural network. Neural networks, also called "deep learning" or "deep nets", are designed to give a computer perception into a specific problem by training it to recognize patterns. Unlike past transit detection algorithms, deep nets learn to recognize planet features instead of relying on hand-coded metrics that humans perceive as the most representative. Our convolutional neural network is capable of detecting Earth-like exoplanets in noisy time-series data with greater accuracy than a least-squares method. Deep nets are highly generalizable, allowing data from different time series to be evaluated after interpolation without compromising performance. As validated by our deep net analysis of Kepler light curves, we detect periodic transits consistent with the true period without any model fitting. Our study indicates that machine learning will facilitate the characterization of exoplanets in future analysis of large astronomy data sets.
    Comment: Accepted, 16 pages, 14 figures, https://github.com/pearsonkyle/Exoplanet-Artificial-Intelligenc
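
    A minimal example of the kind of network described above is a small 1D convolutional classifier that maps a fixed-length, normalized light-curve window to a transit probability. The layer sizes and input length in this Python/PyTorch sketch are placeholders, not the architecture reported in the paper.

        # Minimal sketch with assumed layer sizes (not the paper's architecture):
        # a 1D CNN that scores a normalized light-curve window for a transit.
        import torch
        import torch.nn as nn

        class TransitCNN(nn.Module):
            def __init__(self, length=256):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
                    nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
                )
                self.head = nn.Sequential(
                    nn.Flatten(),
                    nn.Linear(32 * (length // 4), 64), nn.ReLU(),
                    nn.Linear(64, 1),
                )

            def forward(self, flux):                      # flux: (batch, 1, length)
                return torch.sigmoid(self.head(self.features(flux)))

        model = TransitCNN()
        fake_curves = torch.randn(8, 1, 256)              # synthetic light-curve windows
        print(model(fake_curves).shape)                   # torch.Size([8, 1])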

    On The Robustness of a Neural Network

    With the development of neural-network-based machine learning and its use in mission-critical applications, voices are rising against the "black box" aspect of neural networks, as it becomes crucial to understand their limits and capabilities. With the rise of neuromorphic hardware, it is even more critical to understand how a neural network, as a distributed system, tolerates failures of its computing nodes (neurons) and of its communication channels (synapses). Experimentally assessing the robustness of neural networks involves the quixotic venture of testing all possible failures on all possible inputs, which runs into a combinatorial explosion for the former and the impossibility of gathering all possible inputs for the latter. In this paper, we prove an upper bound on the expected error of the output when a subset of neurons crashes. This bound involves dependencies on the network parameters that can be seen as being too pessimistic in the average case. It involves a polynomial dependency on the Lipschitz coefficient of the neurons' activation function, and an exponential dependency on the depth of the layer where a failure occurs. We back up our theoretical results with experiments illustrating the extent to which our prediction matches the dependencies between the network parameters and robustness. Our results show that the robustness of neural networks to the average crash can be estimated without the need either to test the network on all failure configurations or to access the training set used to train the network, both of which are practically impossible requirements.
    Comment: 36th IEEE International Symposium on Reliable Distributed Systems, 26-29 September 2017, Hong Kong, China
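
    The quantity bounded in the paper can be estimated empirically with a simple Monte Carlo experiment: crash each hidden neuron of a network independently with some probability and measure how far the output moves from the crash-free forward pass. The toy network, crash probability, and error metric in the Python sketch below are illustrative assumptions; the sketch estimates the expectation by sampling rather than reproducing the paper's analytical bound.

        # Minimal sketch under assumptions (not the paper's bound): Monte Carlo
        # estimate of the expected output error of a small MLP when each hidden
        # neuron independently crashes (outputs zero) with probability p.
        import torch
        import torch.nn as nn

        torch.manual_seed(0)
        net = nn.Sequential(nn.Linear(16, 32), nn.ReLU(),
                            nn.Linear(32, 32), nn.ReLU(),
                            nn.Linear(32, 4))

        def crashed_forward(x, p=0.05):
            h = x
            for layer in net:
                h = layer(h)
                if isinstance(layer, nn.ReLU):            # crash neurons after each hidden layer
                    h = h * (torch.rand_like(h) > p).float()
            return h

        x = torch.randn(256, 16)                          # synthetic inputs
        with torch.no_grad():
            clean = net(x)
            errs = torch.stack([(crashed_forward(x) - clean).norm(dim=1).mean()
                                for _ in range(100)])
        print("estimated expected output error:", errs.mean().item())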