
    Socially Constrained Structural Learning for Groups Detection in Crowd

    Modern crowd theories agree that collective behavior is the result of the underlying interactions among small groups of individuals. In this work, we propose a novel algorithm for detecting social groups in crowds by means of a Correlation Clustering procedure on people trajectories. The affinity between crowd members is learned through an online formulation of the Structural SVM framework and a set of specifically designed features characterizing both their physical and social identity, inspired by Proxemic theory, Granger causality, DTW and heat maps. To adhere to sociological observations, we introduce a loss function (G-MITRE) able to deal with the complexity of evaluating group detection performance. We show that our algorithm achieves state-of-the-art results when relying both on ground-truth trajectories and on tracklets previously extracted by available detector/tracker systems.
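
    To make the clustering step concrete, here is a minimal Python sketch of greedy pivot-based correlation clustering over a pairwise affinity matrix built from toy trajectories. The distance-based affinity, the zero threshold and all names are assumptions for illustration only; the paper's learned Structural SVM affinity and the G-MITRE loss are not reproduced here.

        import numpy as np

        def trajectory_affinity(trajectories, sigma=1.0):
            # Toy affinity: exp(-mean distance) shifted so that positive values
            # mean "probably the same group"; stands in for the learned affinity.
            n = len(trajectories)
            aff = np.zeros((n, n))
            for i in range(n):
                for j in range(n):
                    d = np.linalg.norm(trajectories[i] - trajectories[j], axis=1).mean()
                    aff[i, j] = np.exp(-d / sigma) - 0.5
            return aff

        def pivot_correlation_clustering(affinity, seed=0):
            # Greedy pivot scheme: pick a random unassigned member and group it
            # with every other unassigned member sharing a positive affinity.
            rng = np.random.default_rng(seed)
            unassigned = list(range(affinity.shape[0]))
            groups = []
            while unassigned:
                pivot = unassigned.pop(rng.integers(len(unassigned)))
                group = [pivot] + [j for j in unassigned if affinity[pivot, j] > 0]
                unassigned = [j for j in unassigned if j not in group]
                groups.append(group)
            return groups

        # Three toy trajectories (10 time steps, x-y): two walk side by side, one far away.
        t = np.linspace(0, 1, 10)[:, None]
        trajs = [np.hstack([t, 0 * t]), np.hstack([t, 0 * t + 0.2]), np.hstack([t, 0 * t + 5.0])]
        print(pivot_correlation_clustering(trajectory_affinity(trajs)))   # two walkers grouped, the third alone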

    End-to-End Kernel Learning with Supervised Convolutional Kernel Networks

    In this paper, we introduce a new image representation based on a multilayer kernel machine. Unlike traditional kernel methods where data representation is decoupled from the prediction task, we learn how to shape the kernel with supervision. We proceed by first proposing improvements of the recently-introduced convolutional kernel networks (CKNs) in the context of unsupervised learning; then, we derive backpropagation rules to take advantage of labeled training data. The resulting model is a new type of convolutional neural network, where optimizing the filters at each layer is equivalent to learning a linear subspace in a reproducing kernel Hilbert space (RKHS). We show that our method achieves reasonably competitive performance for image classification on some standard "deep learning" datasets such as CIFAR-10 and SVHN, and also for image super-resolution, demonstrating the applicability of our approach to a large variety of image-related tasks. Comment: to appear in Advances in Neural Information Processing Systems (NIPS).
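
    As a rough illustration of the idea that supervision can shape a kernel representation, the PyTorch sketch below implements a single kernel-style layer: patches are l2-normalized, compared with trainable unit-norm filters, and passed through a Gaussian-like non-linearity before pooling. The layer structure, the exp(alpha * (sim - 1)) non-linearity and all names are assumptions for illustration, not the authors' CKN implementation.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class KernelLayer(nn.Module):
            # One convolutional kernel-style layer: trainable filters play the role
            # of a learned subspace that the kernel feature map is projected onto.
            def __init__(self, in_channels, n_filters, patch_size, alpha=2.0):
                super().__init__()
                self.patch_size = patch_size
                self.alpha = alpha
                self.unfold = nn.Unfold(kernel_size=patch_size)
                self.filters = nn.Parameter(torch.randn(n_filters, in_channels * patch_size ** 2))

            def forward(self, x):
                b, c, h, w = x.shape
                out_h, out_w = h - self.patch_size + 1, w - self.patch_size + 1
                patches = F.normalize(self.unfold(x), dim=1)          # unit-norm patches
                filters = F.normalize(self.filters, dim=1)            # unit-norm filters
                sim = torch.einsum('fd,bdl->bfl', filters, patches)   # cosine similarities
                feat = torch.exp(self.alpha * (sim - 1.0))            # Gaussian-like kernel on the sphere
                return F.avg_pool2d(feat.reshape(b, -1, out_h, out_w), 2)

        layer = KernelLayer(in_channels=3, n_filters=64, patch_size=3)
        x = torch.randn(8, 3, 32, 32)
        print(layer(x).shape)   # torch.Size([8, 64, 15, 15])
        # Because the filters are nn.Parameters, a task loss backpropagates into them,
        # i.e. the kernel itself is shaped with supervision.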

    SurfNet: Generating 3D shape surfaces using deep residual networks

    3D shape models are naturally parameterized using vertices and faces, i.e., composed of polygons forming a surface. However, current 3D learning paradigms for predictive and generative tasks using convolutional neural networks focus on a voxelized representation of the object. Lifting convolution operators from the traditional 2D to 3D results in high computational overhead with little additional benefit, as most of the geometry information is contained on the surface boundary. Here we study the problem of directly generating the 3D shape surface of rigid and non-rigid shapes using deep convolutional neural networks. We develop a procedure to create consistent 'geometry images' representing the shape surface of a category of 3D objects. We then use this consistent representation for category-specific shape surface generation from a parametric representation or an image by developing novel extensions of deep residual networks for the task of geometry image generation. Our experiments indicate that our network learns a meaningful representation of shape surfaces, allowing it to interpolate between shape orientations and poses, invent new shape surfaces and reconstruct 3D shape surfaces from previously unseen images. Comment: CVPR 2017 paper.
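
    To make the 'geometry image' representation concrete, the NumPy sketch below treats an H x W x 3 image of per-pixel surface coordinates as a regular grid and connects neighbouring pixels into triangles. The format and all names are assumptions for illustration; the paper's mesh-to-geometry-image construction and the residual generator are not shown.

        import numpy as np

        def geometry_image_to_mesh(geom_img):
            # Turn an H x W x 3 geometry image (per-pixel surface xyz) into a
            # triangle mesh by connecting each pixel to its grid neighbours.
            h, w, _ = geom_img.shape
            vertices = geom_img.reshape(-1, 3)
            idx = np.arange(h * w).reshape(h, w)
            a, b, c, d = idx[:-1, :-1], idx[:-1, 1:], idx[1:, :-1], idx[1:, 1:]
            faces = np.concatenate([np.stack([a, b, c], axis=-1).reshape(-1, 3),
                                    np.stack([b, d, c], axis=-1).reshape(-1, 3)])
            return vertices, faces

        # Toy geometry image sampling a unit sphere on a 32 x 32 grid.
        u, v = np.meshgrid(np.linspace(0, np.pi, 32), np.linspace(0, 2 * np.pi, 32), indexing='ij')
        sphere_img = np.stack([np.sin(u) * np.cos(v), np.sin(u) * np.sin(v), np.cos(u)], axis=-1)
        verts, faces = geometry_image_to_mesh(sphere_img)
        print(verts.shape, faces.shape)   # (1024, 3) (1922, 3)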

    Recent Advances in Graph Partitioning

    We survey recent trends in practical algorithms for balanced graph partitioning, together with applications and future research directions.
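
    For readers new to the problem, a minimal example of balanced two-way partitioning with the Kernighan-Lin heuristic shipped in NetworkX (one classical approach among the many surveyed; the random graph is purely illustrative):

        import networkx as nx
        from networkx.algorithms.community import kernighan_lin_bisection

        # Bisect a random graph into two equally sized halves while keeping the cut small.
        G = nx.gnm_random_graph(100, 400, seed=0)
        part_a, part_b = kernighan_lin_bisection(G, seed=0)
        print(len(part_a), len(part_b), "cut edges:", nx.cut_size(G, part_a, part_b))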

    Response-based methods to measure road surface irregularity: a state-of-the-art review

    "jats:sec" "jats:title"Purpose"/jats:title" "jats:p"With the development of smart technologies, Internet of Things and inexpensive onboard sensors, many response-based methods to evaluate road surface conditions have emerged in the recent decade. Various techniques and systems have been developed to measure road profiles and detect road anomalies for multiple purposes such as expedient maintenance of pavements and adaptive control of vehicle dynamics to improve ride comfort and ride handling. A holistic review of studies into modern response-based techniques for road pavement applications is found to be lacking. Herein, the focus of this article is threefold: to provide an overview of the state-of-the-art response-based methods, to highlight key differences between methods and thereby to propose key focus areas for future research."/jats:p" "/jats:sec" "jats:sec" "jats:title"Methods"/jats:title" "jats:p"Available articles regarding response-based methods to measure road surface condition were collected mainly from “Scopus” database and partially from “Google Scholar”. The search period is limited to the recent 15 years. Among the 130 reviewed documents, 37% are for road profile reconstruction, 39% for pothole detection and the remaining 24% for roughness index estimation."/jats:p" "/jats:sec" "jats:sec" "jats:title"Results"/jats:title" "jats:p"The results show that machine-learning techniques/data-driven methods have been used intensively with promising results but the disadvantages on data dependence have limited its application in some instances as compared to analytical/data processing methods. Recent algorithms to reconstruct/estimate road profiles are based mainly on passive suspension and quarter-vehicle-model, utilise fewer key parameters, being independent on speed variation and less computation for real-time/online applications. On the other hand, algorithms for pothole detection and road roughness index estimation are increasingly focusing on GPS accuracy, data aggregation and crowdsourcing platform for large-scale application. However, a novel and comprehensive system that is comparable to existing International Roughness Index and conventional Pavement Management System is still lacking."/jats:p" "/jats:sec Document type: Articl

    An integrated workflow for robust alignment and simplified quantitative analysis of NMR spectrometry data

    Background: Nuclear magnetic resonance (NMR) spectroscopy is a powerful technique to reveal and compare quantitative metabolic profiles of biological tissues. However, chemical and physical sample variations make the analysis of the data challenging, and typically require the application of a number of preprocessing steps prior to data interpretation. For example, noise reduction, normalization, baseline correction, peak picking, spectrum alignment and statistical analysis are indispensable components in any NMR analysis pipeline.
    Results: We introduce a novel suite of informatics tools for the quantitative analysis of NMR metabolomic profile data. The core of the processing cascade is a novel peak alignment algorithm, called hierarchical Cluster-based Peak Alignment (CluPA). The algorithm aligns a target spectrum to the reference spectrum in a top-down fashion by building a hierarchical cluster tree from peak lists of reference and target spectra and then dividing the spectra into smaller segments based on the most distant clusters of the tree. To reduce the computational time needed to estimate the spectral misalignment, the method makes use of Fast Fourier Transform (FFT) cross-correlation. Since the method returns a high-quality alignment, we can propose a simple methodology to study the variability of the NMR spectra. For each aligned NMR data point, the ratio of the between-group and within-group sum of squares (BW-ratio) is calculated to quantify the difference in variability between and within predefined groups of NMR spectra. This differential analysis is related to the calculation of the F-statistic or a one-way ANOVA, but without distributional assumptions. Statistical inference based on the BW-ratio is achieved by bootstrapping the null distribution from the experimental data.
    Conclusions: The workflow performance was evaluated using a previously published dataset. Correlation maps, spectral and grey-scale plots show clear improvements in comparison to other methods, and the down-to-earth quantitative analysis works well for the CluPA-aligned spectra. The whole workflow is embedded into a modular and statistically sound framework that is implemented as an R package called "speaq" ("spectrum alignment and quantitation"), which is freely available from http://code.google.com/p/speaq/.
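
    The two core ingredients mentioned above, shift estimation by FFT cross-correlation and the per-point BW-ratio, can be sketched in a few lines of NumPy. This is an illustration of the concepts only, not the speaq code, and the toy spectra are made up.

        import numpy as np

        def estimate_shift(reference, target):
            # Misalignment of `target` relative to `reference` (in data points) as the
            # lag maximising the circular cross-correlation computed via FFT;
            # np.roll(target, -lag) then aligns the two spectra.
            n = len(reference)
            xcorr = np.fft.ifft(np.conj(np.fft.fft(reference)) * np.fft.fft(target)).real
            lag = int(np.argmax(xcorr))
            return lag if lag <= n // 2 else lag - n

        def bw_ratio(spectra, labels):
            # Per-data-point ratio of between-group to within-group sum of squares.
            spectra, labels = np.asarray(spectra), np.asarray(labels)
            grand_mean = spectra.mean(axis=0)
            between = np.zeros(spectra.shape[1])
            within = np.zeros(spectra.shape[1])
            for g in np.unique(labels):
                grp = spectra[labels == g]
                between += len(grp) * (grp.mean(axis=0) - grand_mean) ** 2
                within += ((grp - grp.mean(axis=0)) ** 2).sum(axis=0)
            return between / (within + 1e-12)

        # Toy data: two groups of noisy spectra whose main peak is shifted and intensified.
        rng = np.random.default_rng(0)
        ppm = np.linspace(0, 10, 500)
        group1 = np.exp(-(ppm - 4.0) ** 2) + 0.05 * rng.standard_normal((10, 500))
        group2 = 1.5 * np.exp(-(ppm - 4.2) ** 2) + 0.05 * rng.standard_normal((10, 500))
        print("estimated shift:", estimate_shift(group1[0], group2[0]))
        print("most discriminative point:", int(np.argmax(bw_ratio(np.vstack([group1, group2]),
                                                                   np.repeat([0, 1], 10)))))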

    Shaping the Place - A Digital Design Heuristics Tool to Support Creation of Urban Design Proposals by Non-professionals

    This paper explores a solution to foster civic engagement in urban design projects by applying the concepts of creativity to ICT tools. We propose a framework to support interactions between non-professionals and professionals that eases the understanding of urban design and the creation of design proposals for non-trained people while, on the other hand, offering valuable propositions and inspiration to experts. Such a tool should provide the creativity affordances known as fluency, flexibility and originality during the divergent phase of the creation process. We propose to implement a 3D collage metaphor to facilitate creative expression with 3D models. An underlying technical challenge of our application is to provide an interactive 3D mesh cutting tool to help users express their creative potential in urban design projects. We present a non-exhaustive survey of mesh segmentation and cutting methodologies and, finally, first results from the implementation of a cutting algorithm.
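
    A toy illustration of the kind of mesh cutting involved (a naive split by a plane in NumPy that assigns whole faces to one side or the other; a real interactive tool would re-triangulate along the cut, and every name and value here is an assumption for illustration):

        import numpy as np

        def cut_mesh_by_plane(vertices, faces, plane_point, plane_normal):
            # Split a triangle mesh into two face sets according to which side of
            # the plane each face centroid lies on (no re-triangulation at the cut).
            normal = np.asarray(plane_normal, dtype=float)
            normal /= np.linalg.norm(normal)
            centroids = vertices[faces].mean(axis=1)                          # (n_faces, 3)
            side = (centroids - np.asarray(plane_point, dtype=float)) @ normal
            return faces[side >= 0], faces[side < 0]

        # Toy mesh: a flat 4 x 4 vertex grid triangulated and cut by the plane x = 0.5.
        gx, gy = np.meshgrid(np.linspace(0, 1, 4), np.linspace(0, 1, 4), indexing='ij')
        verts = np.stack([gx.ravel(), gy.ravel(), np.zeros(16)], axis=1)
        idx = np.arange(16).reshape(4, 4)
        quads = np.stack([idx[:-1, :-1], idx[:-1, 1:], idx[1:, 1:], idx[1:, :-1]], axis=-1).reshape(-1, 4)
        tris = np.concatenate([quads[:, [0, 1, 2]], quads[:, [0, 2, 3]]])
        kept, removed = cut_mesh_by_plane(verts, tris, plane_point=[0.5, 0.0, 0.0], plane_normal=[1.0, 0.0, 0.0])
        print(len(kept), "faces on one side,", len(removed), "on the other")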