58 research outputs found

    RT3D: Achieving Real-Time Execution of 3D Convolutional Neural Networks on Mobile Devices

    Mobile devices are becoming an important carrier for deep learning tasks, as they are being equipped with powerful, high-end mobile CPUs and GPUs. However, executing 3D Convolutional Neural Networks (CNNs) in real time while maintaining high inference accuracy remains challenging, because their more complex model structure and higher dimensionality overwhelm the computation and storage resources available on mobile devices. A natural remedy is deep learning weight pruning, but directly generalizing existing 2D CNN weight pruning methods to 3D CNNs cannot fully exploit mobile parallelism while achieving high inference accuracy. This paper proposes RT3D, a model compression and mobile acceleration framework for 3D CNNs that seamlessly integrates neural network weight pruning and compiler code generation techniques. We propose and investigate two structured sparsity schemes that are friendly to mobile acceleration: vanilla structured sparsity and kernel group structured (KGS) sparsity. Vanilla sparsity removes whole kernel groups, while KGS sparsity is a more fine-grained structured sparsity that enjoys higher flexibility while still exploiting full on-device parallelism. We propose a reweighted regularization pruning algorithm to achieve the proposed sparsity schemes. The inference-time speedup due to sparsity approaches the pruning rate of the whole model's FLOPs (floating point operations). RT3D demonstrates up to 29.1× speedup in end-to-end inference time compared with current mobile frameworks supporting 3D CNNs, with a moderate 1%-1.5% accuracy loss. The end-to-end inference time for 16 video frames is within 150 ms when executing representative C3D and R(2+1)D models on a cellphone. For the first time, real-time execution of 3D CNNs is achieved on off-the-shelf mobile devices. Comment: To appear in Proceedings of the 35th AAAI Conference on Artificial Intelligence (AAAI-21).
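
    The kernel group structured (KGS) sparsity mentioned above can be illustrated with a small, hedged sketch. The code below (PyTorch; the group shape and hyperparameters are illustrative assumptions, not the paper's settings) shows the general idea of a reweighted group regularizer that pushes whole kernel groups of a 3D convolution weight toward zero. It is a sketch of the technique, not RT3D's implementation.

```python
# Minimal sketch (not the paper's exact algorithm): reweighted group
# regularization that encourages kernel-group structured (KGS) sparsity
# in a 3D convolution layer. Group size and penalty strength are
# illustrative assumptions.
import torch
import torch.nn as nn

conv3d = nn.Conv3d(in_channels=16, out_channels=32, kernel_size=3)

def kgs_group_norms(weight: torch.Tensor, group_size: int = 4) -> torch.Tensor:
    """L2 norm of each kernel group.

    weight: (out_ch, in_ch, kD, kH, kW). Here a "kernel group" is taken to be
    `group_size` consecutive input-channel kernels of one output filter.
    """
    out_ch, in_ch, kd, kh, kw = weight.shape
    assert in_ch % group_size == 0
    groups = weight.reshape(out_ch, in_ch // group_size, group_size * kd * kh * kw)
    return groups.norm(dim=-1)  # shape: (out_ch, num_groups)

def reweighted_group_penalty(weight: torch.Tensor, eps: float = 1e-3) -> torch.Tensor:
    """Reweighted group-sparsity penalty: groups that are already small get
    larger weights, pushing them further toward zero in later steps."""
    norms = kgs_group_norms(weight)
    reweights = 1.0 / (norms.detach() + eps)  # held fixed between re-weighting rounds
    return (reweights * norms).sum()

# Usage inside a training step (task_loss comes from the usual forward pass):
# loss = task_loss + 1e-4 * reweighted_group_penalty(conv3d.weight)
# loss.backward()
```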

    Methodology and experiences of rapid advice guideline development for children with COVID-19: responding to the COVID-19 outbreak quickly and efficiently.

    BACKGROUND: Rapid Advice Guidelines (RAGs) provide decision makers with guidance for responding to public health emergencies by developing evidence-based recommendations in a short period of time with a scientific and standardized approach. However, the experience gained from the development process of a RAG has so far not been systematically summarized. Our working group therefore takes the development of the RAG for children with COVID-19 as an example to systematically explore the methodology, advantages, and challenges of RAG development, and proposes suggestions and reflections for future research, in order to provide a more detailed reference for the future development of RAGs. RESULTS: The development of the RAG by a group of 67 researchers from 11 countries took 50 days from the official commencement of the work (January 28, 2020) to submission (March 17, 2020). A total of 21 meetings were held, with a total duration of 48 h (an average of 2.3 h per meeting) and an average of 16.5 participants attending. Only two of the ten recommendations were fully supported by direct evidence for COVID-19, three recommendations were supported by indirect evidence only, and the proportion of COVID-19 studies in the body of evidence for the remaining five recommendations ranged between 10% and 83%. Six of the ten recommendations used COVID-19 preprints as supporting evidence, and up to 50% of the studies with direct evidence on COVID-19 were preprints. CONCLUSIONS: In order to respond to public health emergencies, the development of a RAG also requires a clear and transparent formulation process, usually drawing on a large amount of indirect and non-peer-reviewed evidence to support the formation of recommendations. Strictly following the WHO RAG handbook not only enhances the transparency and clarity of the guideline but also speeds up the guideline development process, thereby saving time and labor costs.

    Spatial regularization of distributed word representations

    Driven by the intensive use of mobile phones, the joint exploitation of the textual and spatial data contained in spatio-textual objects (e.g., tweets) has become the cornerstone of many applications, such as searching for places of attraction. From a scientific point of view, these tasks rely critically on the representation of spatial objects and on the definition of matching functions between these objects. In this article, we address the problem of representing such objects. More specifically, encouraged by the success of distributed representations based on neural approaches, we propose to regularize distributed word representations (i.e., word embeddings), which can be combined to build object representations, using their spatial distributions. The underlying objective is to reveal possible local semantic relations between words as well as the multiple senses of a single word. Experiments based on an information retrieval task, which consists of returning the physical place that is the subject of a geo-text, show that integrating our spatial regularization method for distributed word representations into a basic matching model yields significant improvements over the baseline models.
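
    As a rough illustration of spatially regularizing word embeddings, the following sketch (PyTorch; the spatial-similarity matrix and the loss form are assumptions, not the paper's exact formulation) adds a term that penalizes embedding distance between words whose geographic distributions are similar.

```python
# Minimal sketch (assumptions, not the paper's exact method): a spatial
# regularization term for word embeddings that pulls together the vectors
# of words with similar geographic footprints.
import torch

vocab_size, dim = 1000, 100
embeddings = torch.nn.Parameter(torch.randn(vocab_size, dim) * 0.01)

# spatial_sim[i, j] in [0, 1]: similarity of the spatial footprints of words
# i and j (e.g. cosine similarity of their geo-cell occurrence histograms).
# Random placeholder data here.
spatial_sim = torch.rand(vocab_size, vocab_size)
spatial_sim = (spatial_sim + spatial_sim.t()) / 2

def spatial_regularizer(emb: torch.Tensor, sim: torch.Tensor) -> torch.Tensor:
    """Weighted sum of squared embedding distances: spatially similar words
    are penalized for being far apart in the embedding space."""
    sq_dists = torch.cdist(emb, emb).pow(2)
    return (sim * sq_dists).sum() / sim.numel()

# Usage alongside a standard embedding objective (names are illustrative):
# total_loss = skipgram_loss + lambda_spatial * spatial_regularizer(embeddings, spatial_sim)
```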

    Using Acoustic Signal and Image to Achieve Accurate Indoor Localization

    Location information plays a key role in pervasive computing and applications, especially indoor location-based services. Although a large number of systems have been proposed, an accurate and practical indoor localization system remains an open problem. To tackle this issue, in this paper we present a new localization scheme, SITE, which combines acoustic Signals and Images to achieve accurate and robust indoor locaTion sErvice. Relying on a pre-deployed platform of acoustic sources with different frequencies and proactively generated Doppler-effect signals, SITE tracks the relative directions between the phone and the sources. Given m (m≥5) relative directions, SITE uses the angle differences to compute a set of candidate locations corresponding to different subsets of sources. Then, based on a key observation, namely that when the locations simultaneously estimated from different sets of acoustic anchors fall within a small circle, the results converge to a point near the true location, SITE applies a decision scheme that checks whether these locations satisfy the required localization accuracy and can be used to determine the user's location. If not, SITE utilizes the VSFM (Visual Structure from Motion) technique to obtain a set of relative locations from images captured by the phone's camera. By exploiting the synergy between this set of relative locations and the set of initial locations computed from the relative directions, an optimal transformation is obtained and applied to refine the initially computed results; the refined result is taken as the user's location. In the evaluation, we implemented a prototype and deployed a real platform of acoustic sources in different scenarios. Experimental results show that SITE achieves excellent localization accuracy, robustness, and feasibility in practical applications.
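
    To make the direction-based localization step concrete, the following sketch shows a generic bearing-only least-squares triangulation from m measured directions toward anchors at known 2D positions. It is an assumption-laden stand-in for SITE's angle-difference scheme, not its actual algorithm.

```python
# Minimal sketch (a generic bearing-only triangulation, not SITE's exact
# angle-difference scheme): estimate a 2D position from directions measured
# toward anchors at known positions, via linear least squares.
import numpy as np

def locate_from_bearings(anchors: np.ndarray, bearings_rad: np.ndarray) -> np.ndarray:
    """anchors: (m, 2) known source positions; bearings_rad: (m,) direction
    from the user toward each anchor. Each bearing constrains the user to a
    line through the anchor; solve the (overdetermined) intersection."""
    # Normal to each bearing direction: the user lies where the perpendicular
    # component of (user - anchor) is zero, i.e. normals @ user = normals . anchors.
    normals = np.stack([-np.sin(bearings_rad), np.cos(bearings_rad)], axis=1)
    b = np.einsum("ij,ij->i", normals, anchors)
    user, *_ = np.linalg.lstsq(normals, b, rcond=None)
    return user

# Example with five anchors and a user truly located at (2.0, 1.5).
anchors = np.array([[0, 0], [5, 0], [5, 5], [0, 5], [2.5, -1]], dtype=float)
true_user = np.array([2.0, 1.5])
bearings = np.arctan2(anchors[:, 1] - true_user[1], anchors[:, 0] - true_user[0])
print(locate_from_bearings(anchors, bearings))  # approximately [2.0, 1.5]
```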