
    Crosstalk Cascades for Frame-rate Pedestrian Detection

    Cascades help make sliding-window object detection fast; nevertheless, computational demands remain prohibitive for numerous applications. Currently, evaluation of adjacent windows proceeds independently; this is suboptimal, as detector responses at nearby locations and scales are correlated. We propose to exploit these correlations by tightly coupling detector evaluation of nearby windows. We introduce two opposing mechanisms: detector excitation of promising neighbors and inhibition of inferior neighbors. By enabling neighboring detectors to communicate, crosstalk cascades achieve major gains (a 4-30x speedup) over cascades evaluated independently at each image location. Combined with recent advances in fast multi-scale feature computation, for which we provide an optimized implementation, our approach runs at 35-65 fps on 640 x 480 images while attaining state-of-the-art accuracy.
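    The excitation/inhibition coupling described above can be illustrated with a toy sketch. This is not the paper's implementation; the function name, the 1-D neighborhood, and the threshold values are all hypothetical, chosen only to show the idea of neighboring windows suppressing or promoting each other's evaluation.

```python
def crosstalk_cascade(scores, excite_thresh=0.8, inhibit_thresh=0.2):
    """Toy 1-D crosstalk pass. scores: per-window partial cascade
    responses in [0, 1]. Returns indices of windows kept for full
    evaluation; the rest are rejected early."""
    n = len(scores)
    evaluate = [True] * n              # start with every window active
    for i, s in enumerate(scores):
        if s <= inhibit_thresh:        # inhibition: suppress weak neighbours
            for j in (i - 1, i + 1):
                if 0 <= j < n and scores[j] < excite_thresh:
                    evaluate[j] = False
        elif s >= excite_thresh:       # excitation: keep promising neighbours
            for j in (i - 1, i + 1):
                if 0 <= j < n:
                    evaluate[j] = True
    return [i for i in range(n) if evaluate[i]]
```

    In this sketch a strong response protects its neighbors from inhibition, while runs of weak responses are pruned together, which is the source of the speedup: most image locations never receive a full cascade evaluation.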

    Better understanding the role of the social economy in social and health services: selected examples from France and Canada

    This paper deconstructs the claim that organisations in the field of the social economy (associations in France; community organisations and the voluntary sector in North America) are a solution to the crisis of social protection models and to the growing scarcity of resources in health systems. This deconstruction rests on a theoretical stance at the intersection of the territorial dimension (whether rural, urban, or metropolitan, for example) and the sectoral dimension (health, social services, etc.), which must in turn be crossed with the field of the social and solidarity economy (SSE). After a general presentation of the context, this contribution proposes a theoretical model as a framework for reading realities at the local scale. We then present the results of our fieldwork (two case studies: one in Ontario, on the importance of the voluntary sector in services for the elderly in rural communities, and one in western France), highlighting the role of SSE actors in the medico-social sector in shaping many issues of sustainable territorial development.

    Real-time Person Re-identification at the Edge: A Mixed Precision Approach

    A critical part of multi-person, multi-camera tracking is the person re-identification (re-ID) algorithm, which recognizes and retains the identities of all detected, unknown people throughout the video stream. Many re-ID algorithms today achieve state-of-the-art results, but little work has explored deploying such algorithms in computation- and power-constrained real-time scenarios. In this paper, we study the effect of using a lightweight model, MobileNet-v2, for re-ID, and investigate the impact of single (FP32) versus half (FP16) precision for training on the server and inference on the edge nodes. We further compare the results with a baseline model that uses ResNet-50 on state-of-the-art benchmarks including CUHK03, Market-1501, and Duke-MTMC. The MobileNet-v2 mixed-precision training method improves inference throughput on the edge node by 3.25x (reaching 27.77 fps) and training time on the server by 1.75x, and decreases power consumption on the edge node by 1.45x, while degrading accuracy by only 5.6% on average over the three datasets with respect to ResNet-50 in single precision. The code and pre-trained networks are publicly available at https://github.com/TeCSAR-UNCC/person-reid.
    Comment: This is a pre-print of an article published in the International Conference on Image Analysis and Recognition (ICIAR 2019), Lecture Notes in Computer Science. The final authenticated version is available online at https://doi.org/10.1007/978-3-030-27272-2_
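    A quick back-of-envelope check of the figures reported in the abstract. Only the 27.77 fps, 3.25x, and 1.45x numbers come from the text; the FP32 baseline and the energy-per-frame ratio below are derived under the stated assumption that energy per frame scales as power divided by throughput.

```python
# Numbers reported in the abstract:
fp16_fps = 27.77          # FP16 MobileNet-v2 inference on the edge node
speedup = 3.25            # reported inference speedup over FP32
power_reduction = 1.45    # reported edge-node power reduction

# Derived (not reported): implied FP32 baseline throughput.
fp32_fps = fp16_fps / speedup
print(f"implied FP32 baseline: {fp32_fps:.2f} fps")

# Derived (assumption): energy per frame ~ power / throughput,
# so mixed precision costs (1/1.45)/3.25 of the FP32 energy per frame.
energy_ratio = (1 / power_reduction) / speedup
print(f"energy per frame vs FP32: {energy_ratio:.1%}")
```

    Under that assumption, the combined throughput and power gains put the per-frame energy on the edge node at roughly a fifth of the FP32 baseline.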

    Spin physics at A Fixed-Target ExpeRiment at the LHC (AFTER@LHC)

    We outline the opportunities for spin physics offered by a next-generation, multi-purpose fixed-target experiment exploiting the LHC proton beam extracted by a bent crystal. In particular, we focus on the study of single transverse-spin asymmetries with the polarisation of the target.
    Comment: Contributed to the 20th International Spin Physics Symposium, SPIN2012, 17-22 September 2012, Dubna, Russia; 4 pages, LaTeX

    Prospectives for A Fixed-Target ExpeRiment at the LHC: AFTER@LHC

    We argue that the concept of a multi-purpose fixed-target experiment with the proton or lead-ion LHC beams extracted by a bent crystal would offer a number of ground-breaking precision-physics opportunities. The multi-TeV LHC beams will allow for the most energetic fixed-target experiments ever performed. The fixed-target mode has the advantage of allowing for high luminosities, spin measurements with a polarised target, and access over the full backward rapidity domain --uncharted until now-- up to x_F ~ -1.
    Comment: 6 pages, 1 table, LaTeX. Proceedings of the 36th International Conference on High Energy Physics (ICHEP2012), 4-11 July 2012, Melbourne, Australia
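    The fixed-target kinematics behind these two AFTER@LHC proposals follow from a standard relation: for a beam of energy E per nucleon striking a nucleon at rest, the centre-of-mass energy per nucleon pair is sqrt(s_NN) = sqrt(2 m_N E + 2 m_N^2). A minimal sketch (the function name is mine; the beam energies are the nominal LHC values for protons and for lead ions per nucleon):

```python
import math

M_N = 0.938  # nucleon mass in GeV (approximate)

def sqrt_s_nn(beam_energy_gev):
    """Centre-of-mass energy per nucleon pair, in GeV, for a beam of
    the given energy per nucleon on a fixed nucleon target."""
    return math.sqrt(2 * M_N * beam_energy_gev + 2 * M_N ** 2)

print(f"7 TeV proton beam:  sqrt(s_NN) ~ {sqrt_s_nn(7000):.1f} GeV")
print(f"2.76 TeV/nucleon Pb: sqrt(s_NN) ~ {sqrt_s_nn(2760):.1f} GeV")
```

    This recovers the roughly 115 GeV (pp) and 72 GeV (Pb) centre-of-mass energies usually quoted for the AFTER@LHC fixed-target mode, far below the collider mode's multi-TeV energies but still the most energetic fixed-target setup ever proposed.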

    Resampling methods for parameter-free and robust feature selection with mutual information

    Combining the mutual information criterion with a forward feature-selection strategy offers a good trade-off between the optimality of the selected feature subset and computation time. However, it requires setting the parameter(s) of the mutual information estimator and determining when to halt the forward procedure. These two choices are difficult to make because, as the dimensionality of the subset increases, the estimation of the mutual information becomes less and less reliable. This paper proposes to use resampling methods, namely K-fold cross-validation and the permutation test, to address both issues. The resampling methods provide information about the variance of the estimator, which can then be used to automatically set the parameter and to compute a threshold for stopping the forward procedure. The procedure is illustrated on a synthetic dataset as well as on real-world examples.
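    The permutation-test idea can be sketched as follows. This is a minimal illustration, not the paper's estimator: it uses a plug-in MI estimate for discrete variables, and the function names, the number of permutations, and the significance level are all choices of this sketch. A candidate feature is kept only if its observed MI with the target exceeds the MI obtained for most random permutations of the target, i.e. what the estimator reports for an irrelevant feature.

```python
import math
import random
from collections import Counter

def mutual_information(x, y):
    """Plug-in mutual information estimate (in nats) for two
    discrete sequences of equal length."""
    n = len(x)
    px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
    return sum((c / n) * math.log((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

def permutation_threshold(x, y, n_perm=200, alpha=0.05, seed=0):
    """MI value exceeded by only an alpha fraction of random
    permutations of y; keeping a feature only when its observed MI
    beats this threshold gives a parameter-free stopping rule."""
    rng = random.Random(seed)
    y = list(y)                        # local copy, shuffled in place
    mis = []
    for _ in range(n_perm):
        rng.shuffle(y)
        mis.append(mutual_information(x, y))
    mis.sort()
    return mis[int((1 - alpha) * n_perm) - 1]
```

    For a genuinely informative feature the observed MI sits far above the permutation threshold; as the forward search adds dimensions and the estimate degrades, the gap shrinks, and the search halts once the observed value no longer clears the threshold.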

    Interpretations of J/ψ suppression

    We review the two main interpretations of J/ψ suppression proposed in the literature. The phase-transition (or deconfining) scenario assumes that below some critical value of the local energy density (or of some other geometrical quantity which depends both on the colliding systems and on the centrality of the collision), there is only nuclear absorption. Above this critical value the absorptive cross-section is taken to be infinite, i.e. no J/ψ can survive in this hot region. In the hadronic scenario, the J/ψ dissociates due both to nuclear absorption and to its interactions with co-moving hadrons produced in the collision. No discontinuity exists in physical observables. We show that an equally good description of the present data is possible in either scenario.
    Comment: 12 pages, LaTeX, uses epsfig and ioplppt; review talk given by A. Capella at the International Symposium on Strangeness in Quark Matter, Santorini (Greece), April 1997; Figs. 1 and 2 not available but can be found in Refs. 13 and 6 respectively
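    The contrast between the two scenarios can be caricatured numerically. This is only an illustrative sketch, not the review's fits: the survival probability from nuclear absorption is modelled with the standard exponential form S = exp(-rho0 * sigma_abs * L), and the deconfining scenario is represented by additionally forcing S to zero above a critical local energy density; all parameter values below are hypothetical.

```python
import math

RHO_0 = 0.17       # nuclear matter density, fm^-3 (standard value)
MB_TO_FM2 = 0.1    # unit conversion: 1 mb = 0.1 fm^2

def s_absorption(sigma_abs_mb, path_length_fm):
    """Survival probability from nuclear absorption alone:
    exp(-rho0 * sigma_abs * L)."""
    return math.exp(-RHO_0 * sigma_abs_mb * MB_TO_FM2 * path_length_fm)

def s_deconfining(sigma_abs_mb, path_length_fm, eps, eps_crit):
    """Deconfining scenario: the absorptive cross-section is taken
    infinite above the critical energy density, so no J/psi survives
    there; below it, only nuclear absorption acts."""
    if eps > eps_crit:
        return 0.0
    return s_absorption(sigma_abs_mb, path_length_fm)
```

    The hadronic scenario would instead multiply the absorption factor by a smooth co-mover dissociation term, so its observables stay continuous, which is exactly the distinction the review draws.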

    Predictions for p+Pb Collisions at √s_NN = 5 TeV: Comparison with Data

    Predictions made in Albacete et al. prior to the LHC p+Pb run at √s_NN = 5 TeV are compared to currently available data. Some of the predictions shown here have been updated by applying the same experimental cuts as the data. Additional predictions, especially for quarkonia, that were provided to the experiments before the data were made public but arrived too late for the original publication are also shown here.
    Comment: 55 pages, 35 figures

    Prioritising references for systematic reviews with RobotAnalyst: A user study

    Screening references is a time-consuming step necessary for systematic reviews and guideline development. Previous studies have shown that human effort can be reduced by using machine-learning software to prioritise large reference collections such that most of the relevant references are identified before screening is completed. We describe and evaluate RobotAnalyst, a Web-based software system that combines text-mining and machine-learning algorithms for organising references by their content and actively prioritising them based on a relevancy classification model trained and updated throughout the process. We report an evaluation over 22 reference collections (most related to public health topics) screened using RobotAnalyst, with a total of 43,610 abstract-level decisions. The number of references that needed to be screened to identify 95% of the abstract-level inclusions for the evidence review was reduced on 19 of the 22 collections. Significant gains over random sampling were achieved for all reviews conducted with active prioritisation, compared with only two of five when prioritisation was not used. RobotAnalyst's descriptive clustering and topic-modelling functionalities were also evaluated by public health analysts. Descriptive clustering provided more coherent organisation than topic modelling, and the content of the clusters was apparent to users across a varying number of clusters. This is the first large-scale study using technology-assisted screening to perform new reviews, and the positive results provide empirical evidence that RobotAnalyst can accelerate the identification of relevant studies. The results also highlight the issue of user complacency and the need for a stopping criterion to realise the work savings.
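    The active-prioritisation loop described above can be sketched in a few lines. This toy is not RobotAnalyst's code: it replaces the text-mining relevancy classifier with a crude per-word weight model, and the function names, batch size, and update rule are all hypothetical. It only illustrates the loop structure: rank the unscreened references with the current model, screen the top batch, fold the new decisions back into the model, and repeat.

```python
from collections import defaultdict

def prioritised_screening(references, is_relevant, batch=5):
    """references: list of token lists. is_relevant: the human
    screener's decision for a reference index. Returns the order in
    which references are screened under active prioritisation."""
    weights = defaultdict(float)          # crude relevancy model
    unscreened = list(range(len(references)))
    order = []
    while unscreened:
        # rank remaining references by the current model's score
        unscreened.sort(key=lambda i: -sum(weights[w] for w in references[i]))
        for i in unscreened[:batch]:      # screen the top batch
            order.append(i)
            delta = 1.0 if is_relevant(i) else -1.0
            for w in references[i]:       # update model from the decision
                weights[w] += delta
        unscreened = unscreened[batch:]
    return order
```

    Even this crude model pulls references sharing vocabulary with known inclusions to the front of the queue, which is why most relevant references surface long before the collection is fully screened; the open question the study raises, when it is safe to stop, is not answered by the loop itself.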