
    CayleyNets: Graph Convolutional Neural Networks with Complex Rational Spectral Filters

    The rise of graph-structured data such as social networks, regulatory networks, citation graphs, and functional brain networks, in combination with the resounding success of deep learning in various applications, has brought interest in generalizing deep learning models to non-Euclidean domains. In this paper, we introduce a new spectral-domain convolutional architecture for deep learning on graphs. The core ingredient of our model is a new class of parametric rational complex functions (Cayley polynomials) that allow spectral filters specializing on frequency bands of interest to be computed efficiently on graphs. Our model generates rich spectral filters that are localized in space, scales linearly with the size of the input data for sparsely connected graphs, and can handle different constructions of Laplacian operators. Extensive experimental results show the superior performance of our approach, in comparison to other spectral-domain convolutional architectures, on spectral image classification, community detection, vertex classification and matrix completion tasks.
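
    As an illustration of the idea (not the authors' code), the sketch below applies an order-r Cayley filter g(L)x = Re(c0) x + 2 Re( sum_j c_j ((hL - iI)(hL + iI)^{-1})^j x ) to a graph signal with plain NumPy; the function name cayley_filter and the arguments c and h are illustrative, and the dense linear solve stands in for the sparse iterative solver that would be needed for the linear-scaling behavior claimed in the abstract.

        import numpy as np

        def cayley_filter(L, x, c, h):
            # Apply an order-r Cayley filter to a real graph signal x:
            #   g(L) x = Re(c[0]) * x + 2 * Re( sum_j c[j] * C(hL)^j x ),
            # where C(hL) = (hL - iI)(hL + iI)^{-1} is the Cayley transform
            # of the Laplacian L and h is the spectral "zoom" parameter.
            n = L.shape[0]
            I = np.eye(n)
            y = x.astype(complex)
            out = np.real(c[0]) * x
            for j in range(1, len(c)):
                # one more application of the Cayley transform per order
                y = np.linalg.solve(h * L + 1j * I, (h * L - 1j * I) @ y)
                out = out + 2.0 * np.real(c[j] * y)
            return out

    On a sparse Laplacian the dense solve above would be replaced by a few fixed-point (e.g. Jacobi) iterations per order, so that the cost stays linear in the number of edges rather than cubic in the number of vertices.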

    Password-based group key exchange in a constant number of rounds

    With the development of grids, distributed applications are spread across multiple computing resources and require efficient security mechanisms among the processes. Although authenticated group Diffie-Hellman key exchange protocols seem to be the natural mechanism for supporting these applications, current solutions are limited either by their reliance on public key infrastructures or by their scalability, requiring a number of rounds linear in the number of group members. To overcome these shortcomings, we propose in this paper the first provably-secure password-based constant-round group key exchange protocol. It is based on the protocol of Burmester and Desmedt and is provably secure in the random-oracle and ideal-cipher models, under the Decisional Diffie-Hellman assumption. The new protocol is very efficient and fully scalable, since it only requires four rounds of communication and four multi-exponentiations per user. Moreover, the new protocol avoids intricate authentication infrastructures by relying on passwords for authentication.
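
    For intuition, the toy sketch below runs the underlying unauthenticated Burmester-Desmedt exchange that the protocol builds on: two broadcast rounds after which every member derives the same group element. The parameter names (p, q, g for a prime-order-q subgroup of Z_p* with generator g) are illustrative assumptions, and the sketch deliberately omits the password-based authentication, the ideal-cipher encryption of the flows, and the key derivation that the paper adds on top.

        import secrets

        def bd_group_key(n, p, q, g):
            # Toy Burmester-Desmedt core for n >= 3 members.  Every party
            # ends up with g^(x1*x2 + x2*x3 + ... + xn*x1) mod p.
            x = [secrets.randbelow(q - 1) + 1 for _ in range(n)]   # secret exponents
            z = [pow(g, xi, p) for xi in x]                        # round 1: broadcast z_i = g^x_i
            X = [pow(z[(i + 1) % n] * pow(z[(i - 1) % n], -1, p) % p, x[i], p)
                 for i in range(n)]                                # round 2: broadcast X_i = (z_{i+1}/z_{i-1})^x_i
            keys = []
            for i in range(n):
                # K_i = z_{i-1}^(n*x_i) * X_i^(n-1) * X_{i+1}^(n-2) * ... * X_{i+n-2}
                k = pow(z[(i - 1) % n], n * x[i], p)
                for j in range(1, n):
                    k = k * pow(X[(i + j - 1) % n], n - j, p) % p
                keys.append(k)
            return keys                                            # all n entries are identical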

    Submicrometric Films of Surface-Attached Polymer Network with Temperature-Responsive Properties

    Temperature-responsive properties of surface-attached poly(N-isopropylacrylamide) (PNIPAM) network films with well-controlled chemistry are investigated. The synthesis consists of cross-linking and grafting preformed ene-reactive polymer chains through thiol-ene click chemistry. The formation of surface-attached and cross-linked polymer films in this way has the advantage of being well controlled, without requiring an oxygen-free atmosphere or the addition of initiators. PNIPAM hydrogel films with the same cross-link density are synthesized over a wide range of thicknesses, from nanometers to micrometers. The swelling-collapse transition with temperature is studied using ellipsometry, neutron reflectivity, and atomic force microscopy as complementary surface-probing techniques. A sharp, high-amplitude temperature-induced phase transition is observed for all submicrometric PNIPAM hydrogel films. For temperatures above the LCST, surface-attached PNIPAM hydrogels collapse similarly but without complete expulsion of water. For temperatures below the LCST, the swelling of PNIPAM hydrogels depends on the film thickness. It is shown that the swelling is strongly affected by the surface attachment for ultrathin films below approximately 150 nm. Thicker films, from 150 nm up to micrometers, with the same cross-link density swell equally. The density profile of the hydrogel films in the direction normal to the substrate is compared with the in-plane topography of the free surface. The results show that the free interface width is much larger than the roughness of the hydrogel film, suggesting pendant chains at the free surface. (Published in Langmuir, American Chemical Society, 2015, 31 (42), pp. 11516-1152)

    Homogeneous, heterogeneous or shrinkage estimators? Some empirical evidence from French regional gasoline consumption

    This paper contrasts the performance of heterogeneous and shrinkage estimators with that of the more traditional homogeneous panel data estimators. The analysis utilizes a panel data set from 21 French regions over the period 1973-1998 and a dynamic demand specification to study gasoline demand in France. Out-of-sample forecast performance and the plausibility of the various estimators are contrasted.
    Keywords: Panel data; French gasoline demand; Error components; Heterogeneous estimators; Shrinkage estimators
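
    The abstract does not spell out the specification, but a dynamic log-linear gasoline demand equation of the kind typically estimated on such regional panels, together with the shrinkage idea of pulling region-specific estimates toward the pooled one, can be sketched as follows (variable names are illustrative, not taken from the paper):

        \ln C_{it} = \alpha_i + \gamma_i \ln C_{i,t-1} + \beta_{P,i} \ln P_{it} + \beta_{Y,i} \ln Y_{it} + \varepsilon_{it}

        \hat{\theta}_i^{\text{shrink}} = w_i\, \hat{\theta}_i + (1 - w_i)\, \hat{\theta}_{\text{pooled}}, \qquad 0 \le w_i \le 1

    Homogeneous estimators impose equal slope coefficients across regions, heterogeneous estimators fit each region separately, and shrinkage estimators interpolate between the two; the paper compares these alternatives on out-of-sample forecasts.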

    When more does not necessarily mean better: Health-related illfare comparisons with non-monotone wellbeing relationships

    Most welfare studies assume that wellbeing is monotonically related to the variables used for the analysis. While this assumption is reasonable for many dimensions of wellbeing, such as income, education, or empowerment, there are cases where it is definitely not relevant, in particular with respect to health. For instance, health status is often proxied by the Body Mass Index (BMI). Low BMI values can capture undernutrition or the incidence of severe illness, yet a high BMI is not desirable either, as it indicates obesity. The usual illfare indices derived from poverty measurement are then not appropriate. This paper proposes illfare indices that are consistent with such situations of non-monotonic wellbeing relationships and examines the partial orderings of different distributions derived from various classes of illfare indices. An illustration is provided for child health, as proxied by a weight-for-age indicator, using DHS data for Bangladesh, Colombia and Egypt over the last few decades.
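
    One simple way to see how an index can respect non-monotonicity (a hedged illustration in the spirit of FGT poverty measures, not necessarily the exact family proposed in the paper) is to measure shortfall from a healthy band [z^-, z^+] on both sides:

        I_\alpha = \frac{1}{n} \sum_{i=1}^{n}
                   \left[ \max\!\left( \frac{z^- - h_i}{z^-},\; \frac{h_i - z^+}{z^+},\; 0 \right) \right]^{\alpha}

    Individuals whose health indicator h_i (e.g. BMI or weight-for-age) lies inside the band contribute nothing, while both abnormally low and abnormally high values increase measured illfare.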

    Dense Deformation Field Estimation for Atlas Registration using the Active Contour Framework

    In this paper, we propose a new paradigm to carry out the registration task with a dense deformation field derived from the optical flow model and the active contour method. The proposed framework merges different tasks such as segmentation, regularization, incorporation of prior knowledge and registration into a single framework. The active contour model is at the core of our framework, even if it is used in a different way than the standard approaches. Indeed, active contours are a well-known technique for image segmentation. This technique consists in finding the curve which minimizes an energy functional designed to be minimal when the curve has reached the object contours. That way, we get accurate and smooth segmentation results. So far, the active contour model has been used to segment objects lying in images from boundary-based, region-based or shape-based information. Our registration technique benefits from all these families of active contours to determine a dense deformation field defined on the whole image. A well-suited application of our model is atlas registration in medical imaging, which consists in automatically delineating anatomical structures. We present results on 2D synthetic images to show the performance of our non-rigid deformation field based on a natural registration term. We also present registration results on real 3D medical data with a large space-occupying tumor substantially deforming surrounding structures, which constitutes a highly challenging problem.
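
    Schematically (the terms and weights here are illustrative; the paper's exact functional differs), the approach can be pictured as minimizing, over a dense deformation field u, an energy that couples an optical-flow-like registration term with regularization and active-contour terms:

        E(u) = \int_{\Omega} \big( I(x) - I_{\text{atlas}}(x + u(x)) \big)^{2}\, dx
               + \lambda \int_{\Omega} \lVert \nabla u(x) \rVert^{2}\, dx
               + \mu\, E_{\text{contour}}\big( C \circ u \big)

    The last term is where boundary-, region- or shape-based active-contour information can drive the deformed atlas contours C toward the structures in the target image.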

    Functional correspondence by matrix completion

    In this paper, we consider the problem of finding dense intrinsic correspondence between manifolds using the recently introduced functional framework. We pose the functional correspondence problem as matrix completion with manifold geometric structure, inducing functional localization with the L1 norm. We discuss efficient numerical procedures for the solution of our problem. Our method compares favorably in accuracy to state-of-the-art correspondence algorithms on non-rigid shape matching benchmarks, and is especially advantageous in settings where only scarce data are available.
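
    In the functional-maps language, a hedged sketch of the kind of problem posed (with illustrative penalty weights, not the paper's exact objective) is the recovery of the functional map matrix C from corresponding descriptor coefficients A and B expressed in the Laplace-Beltrami eigenbases of the two shapes:

        \min_{C}\; \lVert C A - B \rVert_F^{2} + \mu_1 \lVert C \rVert_{*} + \mu_2 \lVert C \rVert_{1}

    Here the nuclear-norm term plays the completion/low-rank role and the L1 term induces the functional localization mentioned in the abstract.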

    Decreasing time consumption of microscopy image segmentation through parallel processing on the GPU

    The computational performance of graphics processing units (GPUs) has improved significantly. Speedup factors of more than 50x compared to single-threaded CPU execution are not uncommon thanks to parallel processing. This makes GPUs very appealing for high-throughput microscopy image analysis. Unfortunately, GPU programming is not straightforward and requires considerable programming skill and effort. Additionally, the attainable speedup factor is hard to predict, since it depends on the type of algorithm, the input data and the way in which the algorithm is implemented. In this paper, we identify the characteristic algorithm- and data-dependent properties that significantly relate to the achievable GPU speedup. We find that the overall GPU speedup depends on three major factors: (1) the coarse-grained parallelism of the algorithm, (2) the size of the data and (3) the computation/memory-transfer ratio. This is illustrated on two types of well-known segmentation methods that are extensively used in microscopy image analysis: SLIC superpixels and high-level geometric active contours. In particular, we find that the geometric active contour segmentation algorithm we used is very suitable for parallel processing, resulting in acceleration factors of 50x for 0.1-megapixel images and 100x for 10-megapixel images.
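
    A back-of-the-envelope model (illustrative numbers, not the paper's measurements) shows how the three factors interact: even a kernel that is very fast on the device yields a modest end-to-end speedup when host-to-device transfers are not amortized over enough computation.

        def effective_gpu_speedup(cpu_time, gpu_kernel_time, transfer_time,
                                  serial_fraction=0.0):
            # Crude model of the three factors named above (all times in seconds):
            #   - serial_fraction: part of the algorithm with no coarse-grained parallelism
            #   - gpu_kernel_time: shrinks relative to cpu_time as the data grows
            #   - transfer_time:   host<->device copies that must be amortized
            gpu_total = serial_fraction * cpu_time + gpu_kernel_time + transfer_time
            return cpu_time / gpu_total

        # A kernel 100x faster than the CPU loop, but paying 30 ms of transfer
        # against 5 ms of compute, gives only about a 14x end-to-end speedup:
        print(effective_gpu_speedup(cpu_time=0.5, gpu_kernel_time=0.005,
                                    transfer_time=0.030))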

    Leaf segmentation and tracking using probabilistic parametric active contours

    Active contours, or snakes, are widely used for segmentation and tracking. These techniques require the minimization of an energy function, which is generally a linear combination of a data-fit term and a regularization term. This energy function can be adjusted to exploit the intrinsic object and image features by changing the weighting parameters of the data-fit and regularization terms. There is, however, no rule to set these parameters optimally for a given application, which results in trial-and-error parameter estimation. In this paper, we propose a new active contour framework defined using probability theory. With this new technique there is no need for ad hoc parameter setting, since it uses probability distributions, which can be learned from a given training dataset.
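
    The link between the two formulations can be sketched as a standard MAP reading (hedged, rather than the paper's exact derivation): the weighted energy corresponds to a posterior over contours, so the hand-tuned weights are replaced by distributions estimated from training data:

        E(C) = \alpha\, E_{\text{data}}(C) + \beta\, E_{\text{reg}}(C)
        \quad\longleftrightarrow\quad
        E(C) = -\log P(\text{image} \mid C) - \log P(C)

    Minimizing E(C) is then equivalent to maximizing the posterior P(C | image), proportional to P(image | C) P(C), with the likelihood and the contour prior learned from the training dataset instead of set by trial and error.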