Quantifying and Transferring Contextual Information in Object Detection
Data-Driven Shape Analysis and Processing
Data-driven methods play an increasingly important role in discovering
geometric, structural, and semantic relationships between 3D shapes in
collections, and applying this analysis to support intelligent modeling,
editing, and visualization of geometric data. In contrast to traditional
approaches, a key feature of data-driven approaches is that they aggregate
information from a collection of shapes to improve the analysis and processing
of individual shapes. In addition, they are able to learn models that reason
about properties and relationships of shapes without relying on hard-coded
rules or explicitly programmed instructions. We provide an overview of the main
concepts and components of these techniques, and discuss their application to
shape classification, segmentation, matching, reconstruction, modeling and
exploration, as well as scene analysis and synthesis, by reviewing the
literature and relating existing works through both qualitative and numerical
comparisons. We conclude our report with ideas that can inspire future research
in data-driven shape analysis and processing.
Comment: 10 pages, 19 figures
Automated reliability assessment for spectroscopic redshift measurements
We present a new approach to automate the spectroscopic redshift reliability
assessment based on machine learning (ML) and characteristics of the redshift
probability density function (PDF).
We propose to recast spectroscopic redshift estimation within a Bayesian
framework, in order to incorporate all sources of information and uncertainty
in the redshift estimation process and to produce a redshift posterior PDF that
serves as the starting point for ML algorithms to deliver an automated
assessment of redshift reliability.
As a use case, public data from the VIMOS VLT Deep Survey is exploited to
present and test this new methodology. We first tried to reproduce the existing
reliability flags using supervised classification to describe the different
types of redshift PDFs but, given the subjective definition of these flags, we
soon opted instead for a new homogeneous partitioning of the data into distinct
clusters via unsupervised classification. After assessing the accuracy of the new clusters
via resubstitution and test predictions, unlabelled data from preliminary mock
simulations for the Euclid space mission are projected into this mapping to
predict their redshift reliability labels.
Comment: Submitted on 02 June 2017 (v1). Revised on 08 September 2017 (v2). Latest version 28 September 2017 (this version, v3).
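The clustering step described above can be illustrated with a minimal sketch. All specifics here are assumptions for illustration, not the paper's actual pipeline: the PDF descriptors (peak strength, entropy, dispersion) are hypothetical feature choices, the mock Gaussian PDFs stand in for real redshift posteriors, and k-means stands in for whichever unsupervised method the authors used.

```python
# Sketch of partitioning redshift posterior PDFs into reliability clusters.
# Assumptions: descriptors and mock data are illustrative, not from the paper.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
z_grid = np.linspace(0.0, 2.0, 200)   # redshift grid the PDFs are sampled on
dz = z_grid[1] - z_grid[0]

def pdf_descriptors(pdf):
    """Summarize a redshift PDF with a few scalar shape features."""
    pdf = pdf / (pdf.sum() * dz)                          # normalize to unit area
    peak = pdf.max()                                      # strength of main mode
    entropy = -(pdf * np.log(pdf + 1e-12)).sum() * dz     # spread / ambiguity
    mean_z = (z_grid * pdf).sum() * dz
    sigma = np.sqrt((((z_grid - mean_z) ** 2) * pdf).sum() * dz)
    return [peak, entropy, sigma]

def gauss(mu, s):
    return np.exp(-0.5 * ((z_grid - mu) / s) ** 2)

# Mock PDFs: narrow unimodal (reliable), broad (uncertain), bimodal (ambiguous).
pdfs = ([gauss(rng.uniform(0.2, 1.8), 0.02) for _ in range(20)]
        + [gauss(rng.uniform(0.2, 1.8), 0.3) for _ in range(20)]
        + [gauss(rng.uniform(0.3, 0.8), 0.05) + gauss(rng.uniform(1.0, 1.7), 0.05)
           for _ in range(20)])

features = np.array([pdf_descriptors(p) for p in pdfs])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
print(len(labels), sorted({int(l) for l in labels}))
```

Unlabelled PDFs (e.g., from mock simulations) could then be assigned a reliability label by projecting their descriptors onto the fitted cluster centroids.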
A Novel Scheme for Intelligent Recognition of Pornographic Images
Harmful content on the internet is growing day by day, which motivates further
research into fast and reliable filtering of obscene and immoral material.
Pornographic image recognition is an important component of any such filtering
system. In this paper, a new approach for detecting pornographic images is
introduced, in which two new features are proposed. Combined with other simple
traditional features, these two features discriminate well between pornographic
and non-pornographic images. In addition, we apply fuzzy-integral-based
information fusion to combine the outputs of an MLP (Multi-Layer Perceptron)
and an NF (Neuro-Fuzzy) classifier. To evaluate the proposed method, the
system's performance was measured on 18,354 images downloaded from the
internet. The system achieved 93% true positives and 8% false positives on the
training dataset, and 87% and 5.5%, respectively, on the test dataset. These
results confirm the performance of the proposed system relative to other
related work.
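The fusion step described above can be sketched with a Sugeno fuzzy integral, one common form of fuzzy-integral fusion (the paper does not specify which variant it uses, and the fuzzy densities `g_mlp` and `g_nf` below are hypothetical values, not taken from the paper):

```python
# Illustrative sketch: fusing two classifier confidences with a Sugeno
# fuzzy integral. The densities g_mlp and g_nf are assumed, not the paper's.
def sugeno_fuse(score_mlp, score_nf, g_mlp=0.6, g_nf=0.5):
    """Fuse two [0, 1] classifier confidences with a Sugeno fuzzy integral."""
    # Sort the two sources by confidence, highest first.
    scored = sorted([(score_mlp, g_mlp), (score_nf, g_nf)], key=lambda t: -t[0])
    # Cumulative fuzzy measures: g({top source}), then g({both sources}) = 1.
    g_cum = [scored[0][1], 1.0]
    # Sugeno integral: max over sources of min(confidence, cumulative measure).
    return max(min(h, g) for (h, _), g in zip(scored, g_cum))

print(sugeno_fuse(0.9, 0.4))   # MLP very confident, NF less so
```

The fused score is then thresholded to decide pornographic vs. non-pornographic; a Choquet integral could be substituted for the `max`/`min` combination with the same measure.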
Spott : on-the-spot e-commerce for television using deep learning-based video analysis techniques
Spott is an innovative second-screen mobile multimedia application that offers viewers relevant information on objects (e.g., clothing, furniture, food) they see and like on their television screens. The application enables interaction between TV audiences and brands, so producers and advertisers can offer potential consumers tailored promotions, e-shop items, and/or free samples. In line with current views on innovation management, the technological excellence of the Spott application is coupled with iterative user involvement throughout the entire development process. This article discusses both of these aspects and how they impact each other. First, we focus on the technological building blocks that facilitate the (semi-)automatic interactive tagging of objects in video streams. The majority of these building blocks make extensive use of novel, state-of-the-art deep learning concepts and methodologies. We show how these deep-learning-based video analysis techniques facilitate video summarization, semantic keyframe clustering, and (similar-)object retrieval. Second, we provide insights into the user tests performed to evaluate and optimize the application's user experience. The lessons learned from these open field tests have already been an essential input to the technology development and will further shape future modifications to the Spott application.
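The similar-object retrieval mentioned above typically reduces to nearest-neighbour search over deep-network embeddings. A minimal sketch, assuming cosine similarity over unit-normalized feature vectors (the random vectors below merely stand in for embeddings a trained CNN would produce; this is not Spott's actual implementation):

```python
# Sketch of similarity retrieval over frame/object embeddings.
# Assumption: random vectors stand in for real deep-network features.
import numpy as np

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(100, 128))                  # 100 mock embeddings
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

def retrieve_similar(query, embeddings, k=5):
    """Return indices of the k embeddings most similar to the query."""
    q = query / np.linalg.norm(query)
    sims = embeddings @ q                                 # cosine similarity
    return np.argsort(-sims)[:k]

top = retrieve_similar(embeddings[0], embeddings, k=5)
print([int(i) for i in top])
```

With real embeddings, the same index lookup would map a tagged object in one keyframe to visually similar shop items or other frames; semantic keyframe clustering can reuse the same similarity matrix.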