
    Discrete anisotropic radiative transfer (DART 5) for modeling airborne and satellite spectroradiometer and LIDAR acquisitions of natural and urban landscapes

    Satellite and airborne optical sensors are increasingly used by scientists, policy makers, and managers for studying and managing forests, agricultural crops, and urban areas. Their data, acquired with given instrumental specifications (spectral resolution, viewing direction, sensor field-of-view, etc.) and for a specific experimental configuration (surface and atmosphere conditions, sun direction, etc.), are commonly translated into qualitative and quantitative Earth surface parameters. However, atmosphere properties and Earth surface 3D architecture often confound their interpretation. Radiative transfer models capable of simulating the Earth and atmosphere complexity are, therefore, ideal tools for linking remotely sensed data to the surface parameters. Still, many existing models oversimplify the Earth-atmosphere system interactions, and their parameterization of sensor specifications is often neglected or poorly considered. The Discrete Anisotropic Radiative Transfer (DART) model, developed since 1992, is one of the most comprehensive physically based 3D models simulating the Earth-atmosphere radiation interaction from visible to thermal infrared wavelengths. It models optical signals at the entrance of imaging radiometers and laser scanners on board satellites and airplanes, as well as the 3D radiative budget, of urban and natural landscapes, for any experimental configuration and instrumental specification. It is freely distributed for research and teaching activities. This paper presents DART's physical bases and its latest functionality for simulating imaging spectroscopy of natural and urban landscapes with atmosphere, including the perspective projection of airborne acquisitions and LIght Detection And Ranging (LIDAR) waveform and photon-counting signals.

    Manifold Forests: Closing the Gap on Neural Networks

    Decision forests (DFs), in particular random forests and gradient boosting trees, have demonstrated state-of-the-art accuracy compared to other methods in many supervised learning scenarios. In particular, DFs dominate other methods on tabular data, that is, when the feature space is unstructured, so that the signal is invariant to permuting feature indices. However, on structured data lying on a manifold---such as images, text, and speech---deep networks (DNs), specifically convolutional deep networks (ConvNets), tend to outperform DFs. We conjecture that at least part of the reason for this is that the input to DNs is not simply the feature magnitudes but also their indices (for example, the convolution operation uses feature locality). In contrast, naive DF implementations fail to explicitly consider feature indices. A recently proposed DF approach demonstrates that DFs, at each node, implicitly sample a random matrix from some specific distribution. These DFs, like some classes of DNs, learn by partitioning the feature space into convex polytopes corresponding to linear functions. We build on that approach and show that one can choose distributions in a manifold-aware fashion to incorporate feature locality. We demonstrate empirical performance on data whose features live on three different manifolds: a torus, images, and time series. In all simulations, our Manifold Oblique Random Forest (MORF) algorithm empirically dominates other state-of-the-art approaches that ignore feature space structure and challenges the performance of ConvNets. Moreover, MORF runs significantly faster than ConvNets and maintains interpretability and theoretical justification. This approach, therefore, has promise to enable DFs and other machine learning methods to close the gap to deep networks on manifold-valued data. Comment: 12 pages, 4 figures
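    The manifold-aware distributions described above can be illustrated with a small sketch (hypothetical code, not the authors' implementation; `patch_projection` and its parameters are invented for illustration): instead of sampling individual feature indices at random, each node samples a projection that sums a contiguous patch of pixels, so the split search respects image locality.

```python
import numpy as np

def patch_projection(image_shape, rng, min_size=2, max_size=4):
    """Sample one manifold-aware projection vector: a contiguous
    rectangular patch of pixels, summed into a single candidate
    split feature. Hypothetical sketch of the MORF idea."""
    h, w = image_shape
    ph = rng.integers(min_size, max_size + 1)   # patch height
    pw = rng.integers(min_size, max_size + 1)   # patch width
    r = rng.integers(0, h - ph + 1)             # top-left row
    c = rng.integers(0, w - pw + 1)             # top-left column
    proj = np.zeros(image_shape)
    proj[r:r + ph, c:c + pw] = 1.0              # locality: adjacent pixels only
    return proj.ravel()                         # acts on flattened images

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8 * 8))   # five flattened 8x8 "images"
a = patch_projection((8, 8), rng)
split_feature = X @ a             # candidate split values at one node
```

    A naive oblique forest would instead fill the projection with nonzeros at arbitrary indices; restricting support to a patch is what makes the sampled distribution manifold-aware.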

    Learning Prescriptive ReLU Networks

    We study the problem of learning an optimal policy from a set of discrete treatment options using observational data. We propose a piecewise linear neural network model that balances strong prescriptive performance and interpretability, which we refer to as the prescriptive ReLU network, or P-ReLU. We show analytically that this model (i) partitions the input space into disjoint polyhedra, where all instances that belong to the same partition receive the same treatment, and (ii) can be converted into an equivalent prescriptive tree with hyperplane splits for interpretability. We demonstrate the flexibility of the P-ReLU network, as constraints can be easily incorporated with minor modifications to the architecture. Through experiments, we validate the superior prescriptive accuracy of P-ReLU against competing benchmarks. Lastly, we present examples of interpretable prescriptive trees extracted from trained P-ReLUs on a real-world dataset, for both the unconstrained and constrained scenarios. Comment: 17 pages, 6 figures, accepted at ICML 2
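    Property (i) above — a ReLU network partitions the input space into disjoint polyhedra on which the prescription is fixed — can be sketched with a toy one-hidden-layer network (assumed weights and dimensions, not the paper's model). Each hidden-unit on/off pattern identifies one polyhedron; within it the network is a single affine map, and the argmax output is read as the prescribed treatment.

```python
import numpy as np

# Toy prescriptive ReLU network: 2 inputs, 3 hidden units, 2 treatments.
# Weights are made up for illustration only.
W1 = np.array([[1.0, -1.0], [0.5, 2.0], [-1.5, 0.3]])
b1 = np.array([0.0, -0.5, 0.2])
W2 = np.array([[1.0, -1.0, 0.5], [-0.5, 1.0, 1.0]])
b2 = np.array([0.0, 0.1])

def activation_pattern(x):
    """Which polyhedron x lies in: the sign pattern of the hidden layer."""
    return tuple((W1 @ x + b1 > 0).astype(int))

def treatment(x):
    """Prescribed treatment: argmax of the network output."""
    h = np.maximum(W1 @ x + b1, 0.0)   # ReLU
    return int(np.argmax(W2 @ h + b2))

x1 = np.array([0.2, 0.3])
x2 = np.array([0.25, 0.35])
# Nearby points with the same activation pattern lie in one polyhedron,
# where the network is one affine map, hence one treatment rule.
same_region = activation_pattern(x1) == activation_pattern(x2)
```

    Enumerating the distinct activation patterns reachable from the data is also how such a network can be unrolled into an equivalent tree with hyperplane splits.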

    A Novel Split Selection of a Logistic Regression Tree for the Classification of Data with Heterogeneous Subgroups

    A logistic regression tree (LRT) is a hybrid machine learning method that combines a decision tree model and logistic regression models. An LRT recursively partitions the input data space through splitting and learns multiple logistic regression models, each optimized for one subpopulation. Split selection is a critical procedure for improving the predictive performance of the LRT. In this paper, we present a novel separability-based split selection method for the construction of an LRT. The separability measure, defined on the feature space of the logistic regression models, evaluates the performance of potential child models without fitting them, and the optimal split is selected based on the results. Heterogeneous subgroups that have different class-separating patterns can be identified in the split process when they exist in the data. In addition, we compare the performance of our proposed method with benchmark algorithms through experiments on both synthetic and real-world datasets. The experimental results indicate the effectiveness and generality of our proposed method.
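    A separability-based split search of the kind described could look roughly like this (a hypothetical Fisher-style measure standing in for the paper's definition; `best_split` and the data are invented for illustration): candidate splits are ranked by the separability of their children, with no logistic model fitted during the search.

```python
import numpy as np

def separability(X, y):
    """Fisher-style separability of a two-class subset: squared distance
    between class means over pooled within-class variance. A hypothetical
    stand-in for the paper's measure, used only to rank candidate splits."""
    m0, m1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    s = X[y == 0].var(axis=0).sum() + X[y == 1].var(axis=0).sum() + 1e-12
    return float(((m0 - m1) ** 2).sum() / s)

def best_split(X, y, feature, thresholds):
    """Pick the threshold whose children are most separable overall,
    without fitting a logistic model in either child."""
    best_t, best_score = None, -np.inf
    for t in thresholds:
        left, right = X[:, feature] <= t, X[:, feature] > t
        if len(set(y[left])) < 2 or len(set(y[right])) < 2:
            continue  # a child with one class cannot support a child model
        score = separability(X[left], y[left]) + separability(X[right], y[right])
        if score > best_score:
            best_t, best_score = t, score
    return best_t, best_score

# Two subgroups with opposite class-separating patterns on x1,
# distinguishable by the sign of x0: the split at 0 untangles them.
X = np.array([[-1.0, -1.0], [-1.2, -0.8], [-1.0, 1.0], [-0.8, 1.2],
              [1.0, 1.0], [1.2, 0.8], [1.0, -1.0], [0.8, -1.2]])
y = np.array([0, 0, 1, 1, 0, 0, 1, 1])
t, score = best_split(X, y, feature=0, thresholds=[-0.9, 0.0, 0.9])
```

    In this toy data the classes overlap before splitting, but each child found at threshold 0 is cleanly separable on the second feature, which is exactly the heterogeneous-subgroup situation the method targets.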

    New SAR Target Imaging Algorithm based on Oblique Projection for Clutter Reduction

    We have developed a new Synthetic Aperture Radar (SAR) algorithm based on physical models for the detection of a Man-Made Target (MMT) embedded in strong clutter (trunks in a forest). The physical models for the MMT and the clutter are represented by low-rank subspaces and are based on scattering and polarimetric properties. Our SAR algorithm applies the oblique projection of the received signal along the clutter subspace onto the target subspace. We compute its statistical performance in terms of probabilities of detection and false alarm. The performance of the proposed SAR algorithm improves on that of existing SAR algorithms: MMT detection is greatly improved and the clutter is rejected. We also study the robustness of the new SAR algorithm to interference modeling errors. Results on real FoPen (Foliage Penetration) data show the usefulness of this approach.
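    The oblique projection at the heart of the algorithm is standard linear algebra and can be sketched as follows (illustrative random subspaces, not the paper's SAR data model): the projector E = H (Hᵀ P H)⁻¹ Hᵀ P, with P the orthogonal projector onto the complement of the clutter subspace ⟨S⟩, passes any component in the target subspace ⟨H⟩ unchanged and nulls any component in ⟨S⟩.

```python
import numpy as np

rng = np.random.default_rng(1)
H = rng.normal(size=(8, 2))   # target (MMT) subspace basis, illustrative
S = rng.normal(size=(8, 3))   # clutter subspace basis, illustrative

# Orthogonal projector onto the complement of <S>, then the oblique
# projector onto <H> along <S>.
P_S_perp = np.eye(8) - S @ np.linalg.solve(S.T @ S, S.T)
E = H @ np.linalg.solve(H.T @ P_S_perp @ H, H.T @ P_S_perp)

# A noise-free received signal: target component plus clutter component.
target = H @ np.array([1.0, -2.0])
clutter = S @ np.array([0.5, 1.0, -0.3])
recovered = E @ (target + clutter)   # clutter is rejected, target kept
```

    Since P_S_perp annihilates every column of S, E maps any clutter-subspace vector to zero while acting as the identity on the target subspace, which is the clutter-rejection property the abstract describes.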