
    Physical Representation-based Predicate Optimization for a Visual Analytics Database

    Querying the content of images, video, and other non-textual data sources requires expensive content extraction methods. Modern extraction techniques are based on deep convolutional neural networks (CNNs) and can classify objects within images with astounding accuracy. Unfortunately, these methods are slow: processing a single image can take about 10 milliseconds on modern GPU-based hardware. As massive video libraries become ubiquitous, running a content-based query over millions of video frames is prohibitive. One promising approach to reducing the runtime cost of queries over visual content is to use a hierarchical model, such as a cascade, where simple cases are handled by an inexpensive classifier. Prior work has sought to design cascades that optimize the computational cost of inference by, for example, using smaller CNNs. However, we observe that there are critical factors besides inference time that dramatically impact the overall query time. Notably, by treating the physical representation of the input image as part of our query optimization (that is, by including image transforms, such as resolution scaling or color-depth reduction, within the cascade) we can optimize data handling costs and enable drastically more efficient classifier cascades. In this paper, we propose Tahoma, which generates and evaluates many potential classifier cascades that jointly optimize the CNN architecture and input data representation. Our experiments on a subset of ImageNet show that Tahoma's input transformations speed up cascades by up to 35 times. We also find up to a 98x speedup over the ResNet50 classifier with no loss in accuracy, and a 280x speedup if some accuracy is sacrificed.
    Comment: Camera-ready version of the paper submitted to ICDE 2019. In Proceedings of the 35th IEEE International Conference on Data Engineering (ICDE 2019).
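    To make the cascade idea concrete, below is a minimal sketch of a two-stage cascade that folds the input representation into the pipeline, in the spirit of Tahoma but not the authors' implementation. The model choices, the 64x64 resolution, the 3-bit color depth, and the 0.9 confidence threshold are all illustrative assumptions.

```python
# Two-stage classifier cascade sketch: the cheap stage sees a downscaled,
# color-reduced image; only low-confidence cases reach the expensive model.
import torch
import torchvision.transforms.functional as TF
from torchvision import models

cheap = models.squeezenet1_0(weights="DEFAULT").eval()   # small, fast CNN
expensive = models.resnet50(weights="DEFAULT").eval()    # accurate, slow CNN

@torch.no_grad()
def cascade_classify(image: torch.Tensor, threshold: float = 0.9) -> int:
    """image: float tensor of shape (3, H, W) in [0, 1].
    ImageNet normalization is omitted for brevity."""
    # Physical-representation transforms: downscale and reduce color depth
    # before the cheap stage, cutting data-handling and inference cost.
    small = TF.resize(image, [64, 64], antialias=True)
    small = torch.floor(small * 8) / 8          # crude 3-bit color depth
    probs = torch.softmax(cheap(small.unsqueeze(0)), dim=1)
    conf, label = probs.max(dim=1)
    if conf.item() >= threshold:                # easy case: stop early
        return int(label.item())
    # Hard case: fall back to the full-resolution expensive model.
    full = TF.resize(image, [224, 224], antialias=True)
    probs = torch.softmax(expensive(full.unsqueeze(0)), dim=1)
    return int(probs.argmax(dim=1).item())
```

    Tahoma's contribution is searching over many such (transform, architecture) combinations and picking cascades that meet an accuracy target at minimum cost; the sketch above fixes one combination by hand.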

    Deep Learning in the Automotive Industry: Applications and Tools

    Deep learning refers to a set of machine learning techniques that utilize neural networks with many hidden layers for tasks such as image classification, speech recognition, and language understanding. Deep learning has proven to be very effective in these domains and is pervasively used by many Internet services. In this paper, we describe different automotive use cases for deep learning, in particular in the domain of computer vision. We survey the current state of the art in libraries, tools, and infrastructures (e.g., GPUs and clouds) for implementing, training, and deploying deep neural networks. We particularly focus on convolutional neural networks and computer vision use cases, such as the visual inspection process in manufacturing plants and the analysis of social media data. To train neural networks, curated and labeled datasets are essential; however, both the availability and scope of such datasets are typically very limited. A main contribution of this paper is the creation of an automotive dataset that allows us to learn and automatically recognize different vehicle properties. We describe an end-to-end deep learning application utilizing a mobile app for data collection and process support, and an Amazon-based cloud backend for storage and training. For training, we evaluate the use of cloud and on-premises infrastructures (including multiple GPUs) in conjunction with different neural network architectures and frameworks. We assess both the training times and the accuracy of the classifier. Finally, we demonstrate the effectiveness of the trained classifier in a real-world setting during the manufacturing process.
    Comment: 10 pages
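    As a rough illustration of the training setup such an application involves, the following sketch fine-tunes an ImageNet-pretrained CNN on a labeled vehicle dataset. This is a generic transfer-learning recipe, not the paper's pipeline; the dataset path and the 20-class head are invented for the example.

```python
# Transfer-learning sketch: replace the classifier head of a pretrained
# CNN and train it on an ImageFolder-style vehicle-property dataset.
import torch
from torch import nn, optim
from torchvision import datasets, models, transforms

tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
# Hypothetical layout: vehicle_dataset/train/<class_name>/*.jpg
train_set = datasets.ImageFolder("vehicle_dataset/train", transform=tf)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet50(weights="DEFAULT")
model.fc = nn.Linear(model.fc.in_features, 20)  # assume 20 property classes

opt = optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)  # head only
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for images, labels in loader:
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()
```

    Whether such a loop runs faster on a multi-GPU on-premises machine or in the cloud is exactly the kind of trade-off the paper measures.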

    Towards automatic model specialization for edge video analytics

    The number of cameras deployed at the edge of the network increases by the day, while emerging use cases, such as smart cities or autonomous driving, grow to expect images to be analyzed in real time by increasingly accurate and complex neural networks. Unfortunately, state-of-the-art accuracy comes at a computational cost rarely available in the edge cloud. At the same time, due to strict latency constraints and the vast amount of bandwidth edge cameras generate, we can no longer rely on offloading the task to a centralized cloud. Consequently, there is a need for a meeting point between the resource-constrained edge cloud and accurate real-time video analytics. If state-of-the-art models are too expensive to run at the edge, and lightweight models are not accurate enough for edge use cases, one solution is to demand less from the lightweight model and specialize it in a narrower scope of the problem, a technique known as model specialization. By specializing a model to the context of a single camera, we can boost its accuracy while keeping its computational cost constant. However, this also involves one training run per camera, which quickly becomes infeasible unless the entire process is fully automated. In this paper, we present and evaluate COVA (Contextually Optimized Video Analytics), a framework to assist in the automatic specialization of models for video analytics on edge cameras. COVA aims to automatically improve the accuracy of lightweight models by specializing them to the context in which they will be deployed. Moreover, we discuss and analyze each step involved in the process to understand the different trade-offs that each one entails. Using COVA, we demonstrate that the whole pipeline can be effectively automated by leveraging large neural networks as teachers whose predictions are used to train and specialize lightweight neural networks. Results show that COVA can automatically improve pre-trained models by an average of 21% mAP on the different scenes of the VIRAT dataset.
    This work has been partially supported by the Spanish Government (contract PID2019-107255GB) and by Generalitat de Catalunya, Spain (contract 2014-SGR-1051).
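    A minimal sketch of the teacher-student idea behind this automation follows. It is not the authors' code, and it uses image classification for brevity where COVA targets detection; the teacher and student models and the confidence cutoff are illustrative assumptions.

```python
# Teacher-student specialization sketch: an expensive teacher pseudo-labels
# frames from one camera; a lightweight student trains on those labels,
# specializing to that camera's context.
import torch
from torch import nn, optim
from torchvision import models

teacher = models.resnet152(weights="DEFAULT").eval()    # accurate, slow
student = models.mobilenet_v3_small(weights="DEFAULT")  # cheap, per-camera

opt = optim.Adam(student.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

@torch.no_grad()
def pseudo_label(frames: torch.Tensor, min_conf: float = 0.8):
    """Keep only the frames the teacher labels confidently."""
    probs = torch.softmax(teacher(frames), dim=1)
    conf, labels = probs.max(dim=1)
    keep = conf >= min_conf
    return frames[keep], labels[keep]

def specialize_step(frames: torch.Tensor) -> None:
    """One student update on a batch of frames from a single camera."""
    kept, labels = pseudo_label(frames)
    if kept.shape[0] == 0:
        return  # teacher was unsure about every frame; skip the batch
    opt.zero_grad()
    loss = loss_fn(student(kept), labels)
    loss.backward()
    opt.step()
```

    Because the student only ever sees one camera's distribution, it can approach the teacher's accuracy on that scene at a fraction of the inference cost, which is the effect behind the reported average 21% mAP improvement.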