
    Three-dimensional multifractal analysis of trabecular bone under clinical computed tomography

    Purpose: An adequate understanding of bone structural properties is critical for predicting fragility conditions caused by diseases such as osteoporosis, and for gauging the success of fracture prevention treatments. In this work we aim to develop multiresolution image analysis techniques to extrapolate the predictive power of high-resolution images to images acquired under clinical conditions. Methods: We performed multifractal analysis (MFA) on clinical CT scans of 17 ex vivo human vertebrae. The vertebral failure loads (FFailure) were experimentally measured. We combined bone mineral density (BMD) with different multifractal dimensions, and BMD with multiresolution statistics (e.g., skewness, kurtosis) of MFA curves, to obtain linear models to predict FFailure. Furthermore, we obtained short- and long-term precisions from simulated in vivo scans using a clinical CT scanner. Ground-truth data (high-resolution images) were obtained with a High-Resolution Peripheral Quantitative Computed Tomography (HRpQCT) scanner. Results: At the same level of detail, BMD combined with traditional multifractal descriptors (Lipschitz-Hölder exponents) and BMD combined with monofractal features showed similar power in predicting FFailure (87%, adj. R2). However, across different levels of detail, the predictive power of BMD with multifractal features rises to 92% (adj. R2) of FFailure. Our main finding is that a simpler but slightly less accurate model, combining BMD and the skewness of the resulting multifractal curves, predicts 90% (adj. R2) of FFailure. Conclusions: Compared to monofractal and standard bone measures, multifractal analysis captured key insights into the conditions leading to FFailure. Instead of raw multifractal descriptors, the statistics of multifractal curves can be used in several other contexts, facilitating further research.
    Authors: Baravalle, Rodrigo Guillermo (CONICET; CIFASIS, Universidad Nacional de Rosario; Argentina); Thomsen, Felix Sebastian Leo (CONICET; Universidad Nacional del Sur; Argentina); Delrieux, Claudio Augusto (Universidad Nacional del Sur; CONICET; Argentina); Lu, Yongtao (Dalian University of Technology; China); Gómez, Juan Carlos (CONICET; CIFASIS, Universidad Nacional de Rosario; Argentina); Stošić, Borko (Universidade Federal Rural de Pernambuco; Brazil); Stošić, Tatijana (Universidade Federal Rural de Pernambuco; Brazil)
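    The best-performing simple model in this abstract is a linear regression of FFailure on BMD and the skewness of the multifractal curve. A minimal sketch of that modeling step is shown below; all data, coefficients, and curve shapes are illustrative assumptions, not values from the study:

```python
import numpy as np

# Synthetic stand-ins for the study's measurements (illustrative only).
rng = np.random.default_rng(0)
n = 17                                            # number of vertebrae
bmd = rng.uniform(0.6, 1.4, size=n)               # bone mineral density per specimen
mfa_curves = rng.normal(2.5, 0.3, size=(n, 50))   # sampled multifractal curve per specimen

def skewness(x, axis=-1):
    """Sample skewness of each multifractal curve."""
    m = x.mean(axis=axis, keepdims=True)
    s = x.std(axis=axis, keepdims=True)
    return (((x - m) / s) ** 3).mean(axis=axis)

skew = skewness(mfa_curves)

# Synthetic failure loads generated from BMD and skewness plus noise.
f_failure = 3.0 * bmd + 0.8 * skew + rng.normal(0, 0.05, size=n)

# Ordinary least squares: FFailure ~ intercept + BMD + skewness.
X = np.column_stack([np.ones(n), bmd, skew])
coef, *_ = np.linalg.lstsq(X, f_failure, rcond=None)
pred = X @ coef

# Adjusted R^2, the goodness-of-fit measure quoted in the abstract.
ss_res = ((f_failure - pred) ** 2).sum()
ss_tot = ((f_failure - f_failure.mean()) ** 2).sum()
r2 = 1 - ss_res / ss_tot
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - X.shape[1])
print(f"adjusted R^2 = {adj_r2:.3f}")
```

    With synthetic data built from the same two predictors, the fit is necessarily strong; the point is only the shape of the pipeline (curve statistic, design matrix, adjusted R^2), not the reported numbers.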

    Unsupervised Visual Feature Learning with Spike-timing-dependent Plasticity: How Far are we from Traditional Feature Learning Approaches?

    Spiking neural networks (SNNs) equipped with latency coding and spike-timing-dependent plasticity rules offer an alternative to solve the data and energy bottlenecks of standard computer vision approaches: they can learn visual features without supervision and can be implemented by ultra-low-power hardware architectures. However, their performance in image classification has never been evaluated on recent image datasets. In this paper, we compare SNNs to auto-encoders on three visual recognition datasets, and extend the use of SNNs to color images. The analysis of the results helps us identify some bottlenecks of SNNs: the limits of on-center/off-center coding, especially for color images, and the ineffectiveness of current inhibition mechanisms. These issues should be addressed to build effective SNNs for image recognition.
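    A pair-based STDP update combined with latency coding, the two ingredients named above, can be sketched as follows. The amplitudes, time constant, and latency mapping are generic textbook choices, not parameters from this paper:

```python
import numpy as np

# Pair-based STDP: weight change depends on the pre/post spike-time difference.
A_PLUS, A_MINUS = 0.01, 0.012   # potentiation / depression amplitudes (assumed)
TAU = 20.0                      # exponential time constant in ms (assumed)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair."""
    dt = t_post - t_pre
    if dt >= 0:   # pre fires before post -> potentiate
        return A_PLUS * np.exp(-dt / TAU)
    else:         # post fires before pre -> depress
        return -A_MINUS * np.exp(dt / TAU)

def pixel_to_latency(intensity, t_max=100.0):
    """Latency coding: intensity in [0, 1], brighter pixels spike earlier."""
    return t_max * (1.0 - intensity)

w = 0.5
t_pre = pixel_to_latency(0.9)   # bright pixel -> early spike at t = 10 ms
t_post = 15.0                   # postsynaptic spike shortly after
w += stdp_dw(t_pre, t_post)     # causal pairing, so the weight grows
print(round(w, 4))
```

    Unsupervised feature learning emerges from repeating this local update over many input presentations; no label ever enters the rule.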

    Pedestrian Attribute Recognition: A Survey

    Recognizing pedestrian attributes is an important task in the computer vision community because it plays a key role in video surveillance. Many algorithms have been proposed to handle this task. The goal of this paper is to review existing works, covering both traditional methods and those based on deep learning networks. Firstly, we introduce the background of pedestrian attribute recognition (PAR, for short), including the fundamental concepts of pedestrian attributes and the corresponding challenges. Secondly, we introduce existing benchmarks, including popular datasets and evaluation criteria. Thirdly, we analyse the concepts of multi-task learning and multi-label learning, and explain the relations between these two learning paradigms and pedestrian attribute recognition. We also review some popular network architectures that have been widely applied in the deep learning community. Fourthly, we analyse popular solutions for this task, such as attribute grouping and part-based methods. Fifthly, we show some applications that take pedestrian attributes into consideration and achieve better performance. Finally, we summarize this paper and give several possible research directions for pedestrian attribute recognition. The project page of this paper can be found at \url{https://sites.google.com/view/ahu-pedestrianattributes/}.
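    The multi-label formulation the survey discusses (one independent binary decision per attribute, rather than a single softmax over mutually exclusive classes) can be sketched as below. The attribute names, feature dimension, and linear heads are illustrative assumptions, not taken from any PAR benchmark:

```python
import numpy as np

# Multi-label PAR head: one sigmoid output per attribute, trained with
# per-attribute binary cross-entropy. Names and sizes are illustrative.
ATTRIBUTES = ["male", "backpack", "hat", "long_hair"]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_loss(logits, targets):
    """Mean binary cross-entropy across all attributes (not softmax)."""
    p = sigmoid(logits)
    eps = 1e-12
    return -np.mean(targets * np.log(p + eps) + (1 - targets) * np.log(1 - p + eps))

rng = np.random.default_rng(0)
features = rng.normal(size=8)                          # backbone output for one crop (assumed)
W = rng.normal(scale=0.1, size=(len(ATTRIBUTES), 8))   # one linear head per attribute

logits = W @ features
targets = np.array([1.0, 0.0, 1.0, 0.0])               # ground-truth labels for this crop
predicted = [a for a, p in zip(ATTRIBUTES, sigmoid(logits)) if p > 0.5]
print("loss:", round(float(bce_loss(logits, targets)), 4))
print("predicted attributes:", predicted)
```

    Because every attribute gets its own sigmoid, any subset of attributes can be active at once, which is exactly what distinguishes multi-label PAR from ordinary single-label classification.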

    The Deep Weight Prior

    Bayesian inference is known to provide a general framework for incorporating prior knowledge or specific properties into machine learning models via a carefully chosen prior distribution. In this work, we propose a new type of prior distribution for convolutional neural networks, the deep weight prior (DWP), which exploits generative models to encourage a specific structure in trained convolutional filters, e.g., spatial correlations of weights. We define the DWP in the form of an implicit distribution and propose a method for variational inference with this type of implicit prior. In experiments, we show that the DWP improves the performance of Bayesian neural networks when training data are limited, and that initializing weights with samples from the DWP accelerates training of conventional convolutional neural networks. TL;DR: the deep weight prior learns a generative model over kernels of convolutional neural networks that acts as a prior distribution when training on a new dataset.
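    The initialization use case from the abstract (drawing conv kernels from a generative prior over filters rather than i.i.d. noise) can be illustrated with a toy stand-in: here a fixed, spatially correlated Gaussian replaces the learned implicit distribution of the paper, which it only approximates in spirit:

```python
import numpy as np

# Toy sketch of the DWP idea: sample conv-kernel initializations from a prior
# that encodes spatial correlation between neighboring weights. The "prior"
# below is a hand-built Gaussian, standing in for the paper's learned model.
rng = np.random.default_rng(0)
K = 3  # kernel size

def correlated_kernel_prior(n_filters):
    """Sample 3x3 kernels whose entries are spatially correlated."""
    # Covariance decays with Euclidean distance between kernel positions.
    coords = np.array([(i, j) for i in range(K) for j in range(K)], dtype=float)
    d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    cov = np.exp(-d / 1.5)
    flat = rng.multivariate_normal(np.zeros(K * K), cov, size=n_filters)
    return flat.reshape(n_filters, K, K)

filters = correlated_kernel_prior(256)   # initialization for a 256-filter conv layer
print(filters.shape)

# Adjacent weights drawn from this prior are correlated, unlike i.i.d. init.
corr = np.corrcoef(filters[:, 0, 0], filters[:, 0, 1])[0, 1]
print("neighbor correlation:", round(corr, 2))
```

    In the paper the prior is learned from kernels of networks trained on source datasets and used inside variational inference; the sketch only shows why structured samples differ from standard i.i.d. Gaussian initialization.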