
    Physical Representation-based Predicate Optimization for a Visual Analytics Database

    Querying the content of images, video, and other non-textual data sources requires expensive content extraction methods. Modern extraction techniques are based on deep convolutional neural networks (CNNs) and can classify objects within images with astounding accuracy. Unfortunately, these methods are slow: processing a single image can take about 10 milliseconds on modern GPU-based hardware. As massive video libraries become ubiquitous, running a content-based query over millions of video frames is prohibitive. One promising approach to reducing the runtime cost of queries over visual content is to use a hierarchical model, such as a cascade, where simple cases are handled by an inexpensive classifier. Prior work has sought to design cascades that optimize the computational cost of inference by, for example, using smaller CNNs. However, we observe that there are critical factors besides inference time that dramatically impact overall query time. Notably, by treating the physical representation of the input image as part of our query optimization (that is, by including image transforms, such as resolution scaling or color-depth reduction, within the cascade), we can optimize data handling costs and enable drastically more efficient classifier cascades. In this paper, we propose Tahoma, which generates and evaluates many potential classifier cascades that jointly optimize the CNN architecture and input data representation. Our experiments on a subset of ImageNet show that Tahoma's input transformations speed up cascades by up to 35x. We also find up to a 98x speedup over the ResNet50 classifier with no loss in accuracy, and a 280x speedup if some accuracy is sacrificed.
    Comment: Camera-ready version of the paper submitted to ICDE 2019. In Proceedings of the 35th IEEE International Conference on Data Engineering (ICDE 2019).
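    To make the cascade idea concrete, the sketch below shows a minimal representation-aware cascade in Python/PyTorch. It illustrates the technique the abstract describes, not Tahoma's actual implementation: the names cheap_model and expensive_model stand for any two torch.nn.Module classifiers, and the 64x64 downscale and 0.9 confidence threshold are assumed values chosen for illustration.

        # Minimal sketch of a representation-aware classifier cascade.
        # Assumptions: cheap_model / expensive_model are pretrained
        # torch.nn.Module classifiers; low_res and confidence_threshold
        # are illustrative, not Tahoma's tuned values.
        import torch
        import torch.nn.functional as F

        def cascade_classify(image, cheap_model, expensive_model,
                             low_res=(64, 64), confidence_threshold=0.9):
            """Classify a CHW image tensor, escalating to the expensive
            model only when the cheap stage is unsure."""
            # Physical-representation transform: downscale before the cheap
            # stage, shrinking both data-handling and inference costs.
            small = F.interpolate(image.unsqueeze(0), size=low_res,
                                  mode="bilinear", align_corners=False)
            probs = F.softmax(cheap_model(small), dim=1)
            conf, label = probs.max(dim=1)
            if conf.item() >= confidence_threshold:
                return label.item()  # easy case: stop early
            # Hard case: fall back to the full-resolution, accurate model.
            probs = F.softmax(expensive_model(image.unsqueeze(0)), dim=1)
            return int(probs.argmax(dim=1).item())

    A system like Tahoma would search over many such transform/model combinations and thresholds rather than fixing one by hand; the point here is only that moving the resolution change inside the cascade lets the cheap stage operate on far less data.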

    New instruments and technologies for Cultural Heritage survey: full integration between point clouds and digital photogrammetry

    In recent years, the Geomatic Research Group of the Politecnico di Torino has addressed new research topics concerning instruments for point cloud generation (e.g., Time-of-Flight cameras) and the tight integration of multi-image matching techniques with 3D point cloud information, in order to resolve the ambiguities of known matching algorithms. ToF cameras can be a good low-cost alternative to LiDAR instruments for generating precise and accurate point clouds: their application range is still limited, but in the near future they should be able to satisfy most Cultural Heritage metric survey requirements. On the other hand, multi-image matching techniques with a correct and deep integration of point cloud information can give the correct solution for an "intelligent" survey of the geometric object break-lines, which are the proper starting point for a complete survey. These two research topics are strictly connected to a modern approach to Cultural Heritage 3D survey. In this paper, after a short analysis of the results achieved, a possible alternative scenario for the development of the metric survey approach within the wider topic of Cultural Heritage documentation is reported.

    Knowledge-based vision for space station object motion detection, recognition, and tracking

    Computer vision, especially color image analysis and understanding, has much to offer in automating Space Station tasks such as construction, satellite servicing, rendezvous and proximity operations, inspection, experiment monitoring, data management, and training. Knowledge-based techniques improve the performance of vision algorithms in unstructured environments because of their ability to deal with imprecise a priori information or inaccurately estimated feature data and still produce useful results. Conventional techniques using statistical and purely model-based approaches lack flexibility in dealing with the variabilities anticipated in the unstructured viewing environment of space. Algorithms developed under NASA sponsorship for Space Station applications, which demonstrate the value of a hypothesized architecture for a Video Image Processor (VIP), are presented. Approaches to enhancing the performance of these algorithms with knowledge-based techniques, and the potential for deploying highly parallel multiprocessor systems for these algorithms, are discussed.

    Perceptual Pluralism

    Perceptual systems respond to proximal stimuli by forming mental representations of distal stimuli. A central goal for the philosophy of perception is to characterize the representations delivered by perceptual systems. It may be that all perceptual representations are in some way proprietarily perceptual and differ from the representational format of thought (Dretske 1981; Carey 2009; Burge 2010; Block ms.). Or it may instead be that perception and cognition always trade in the same code (Prinz 2002; Pylyshyn 2003). This paper rejects both approaches in favor of perceptual pluralism, the thesis that perception delivers a multiplicity of representational formats, some proprietary and some shared with cognition. The argument for perceptual pluralism marshals a wide array of empirical evidence in favor of iconic (i.e., image-like, analog) representations in perception as well as discursive (i.e., language-like, digital) perceptual object representations.