9 research outputs found

    A Review on Video Search Engine Ranking

    Search reranking is regarded as a simple and effective approach to improving retrieval accuracy. Videos are typically retrieved using associated textual information, such as the surrounding text from the web page, so the performance of such systems depends primarily on the relevance between the text and the videos. However, the two may not always match well, which leads to noisy ranking results: visually similar videos, for example, may receive very different ranks. Reranking has been proposed to address this problem. Video reranking is an effective way to improve the results of web-based video search, but the problem is non-trivial, especially when multiple features or modalities are considered for search in video and video retrieval. This paper proposes a new kind of reranking algorithm, circular reranking, which supports the mutual exchange of information across multiple modalities to improve search performance, following the philosophy that a strongly performing modality can learn from weaker ones.
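    The abstract names circular reranking but does not spell out the algorithm. As a purely illustrative sketch of the general idea of mutual reinforcement across modalities, assume each modality supplies an item-item similarity matrix and that all modalities share one initial relevance score vector; the random-walk formulation and function names below are assumptions, not the paper's actual method.

```python
import numpy as np

def random_walk_rerank(sim, init_scores, alpha=0.8, iters=50):
    # Refine scores by a random walk on one modality's similarity graph:
    # each item inherits relevance from its neighbours, anchored to the
    # incoming scores by the restart term (1 - alpha).
    P = sim / sim.sum(axis=1, keepdims=True)  # row-stochastic transition matrix
    r = init_scores.copy()
    for _ in range(iters):
        r = alpha * (P.T @ r) + (1 - alpha) * init_scores
    return r

def circular_rerank(modality_sims, init_scores, rounds=3):
    # Cycle through the modalities so that each one refines the scores
    # produced by the previous one, letting strong modalities inform weak ones.
    r = np.asarray(init_scores, dtype=float)
    for _ in range(rounds):
        for sim in modality_sims:
            r = random_walk_rerank(sim, r)
    return r
```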

    A framework for automatic semantic video annotation

    The rapidly increasing quantity of publicly available videos has driven research into developing automatic tools for indexing, rating, searching and retrieval. Textual semantic representations, such as tagging, labelling and annotation, are often important factors in the process of indexing any video, because of their user-friendly way of representing semantics appropriate for search and retrieval. Ideally, this annotation should be inspired by the human cognitive way of perceiving and describing videos. The difference between the low-level visual content and the corresponding human perception is referred to as the ‘semantic gap’. Tackling this gap is even harder in the case of unconstrained videos, mainly due to the lack of any prior information about the analyzed video on the one hand, and the huge amount of generic knowledge required on the other. This paper introduces a framework for the Automatic Semantic Annotation of unconstrained videos. The proposed framework utilizes two non-domain-specific layers: low-level visual similarity matching, and an annotation analysis that employs commonsense knowledge bases. A commonsense ontology is created by incorporating multiple structured semantic relationships. Experiments and black-box tests are carried out on standard video databases for action recognition and video information retrieval; white-box tests examine the performance of the framework's individual intermediate layers. The evaluation of the results and the statistical analysis show that integrating visual similarity matching with commonsense semantic relationships provides an effective approach to automated video annotation.
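    To make the two layers concrete, here is a minimal illustrative sketch of one plausible arrangement: cosine k-NN label transfer for the visual similarity layer, followed by a re-scoring step that boosts mutually related labels. The `relatedness` mapping is a hypothetical stand-in for the paper's commonsense knowledge bases; none of the names below come from the paper.

```python
import numpy as np

def annotate(query_feat, db_feats, db_labels, relatedness, k=5):
    # Layer 1: low-level visual similarity matching (cosine k-NN).
    sims = db_feats @ query_feat / (
        np.linalg.norm(db_feats, axis=1) * np.linalg.norm(query_feat) + 1e-9)
    neighbors = np.argsort(-sims)[:k]
    # Collect candidate labels, weighted by the visual similarity of the
    # neighbours that carry them.
    votes = {}
    for i in neighbors:
        for lab in db_labels[i]:
            votes[lab] = votes.get(lab, 0.0) + sims[i]
    # Layer 2: annotation analysis -- boost labels that the commonsense
    # relationships connect to the other candidate labels.
    scored = {}
    for lab, v in votes.items():
        support = sum(relatedness.get((lab, other), 0.0)
                      for other in votes if other != lab)
        scored[lab] = v * (1.0 + support)
    return sorted(scored, key=scored.get, reverse=True)
```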

    Human-machine cooperation in large-scale multimedia retrieval: a survey

    Large-Scale Multimedia Retrieval (LSMR) is the task of quickly analyzing a large amount of multimedia data, such as images or videos, and accurately finding the items relevant to a certain semantic meaning. Although LSMR has been investigated for more than two decades in the fields of multimedia processing and computer vision, a more interdisciplinary approach is necessary to develop an LSMR system that is truly meaningful for humans. To this end, this paper aims to draw attention to the LSMR problem from diverse research fields. After explaining the basic terminology of LSMR, we survey several representative methods in chronological order. This survey reveals that, by prioritizing generality and scalability for large-scale data, recent methods interpret semantic meanings with a mechanism completely different from that of humans, although such human-like mechanisms were used in classical heuristic-based methods. Based on this, we discuss human-machine cooperation, which incorporates knowledge about human interpretation into LSMR without sacrificing generality or scalability. In particular, we present three approaches to human-machine cooperation (cognitive, ontological, and adaptive), which are attributed to cognitive science, ontology engineering, and metacognition, respectively. We hope that this paper will create a bridge that enables researchers in different fields to communicate about the LSMR problem and will lead to a ground-breaking next generation of LSMR systems.

    Learning Hierarchical Representations For Video Analysis Using Deep Learning

    With the exponential growth of digital data, video content analysis (e.g., action and event recognition) has been drawing increasing attention from computer vision researchers. Effective modeling of objects, scenes, and motions is critical for visual understanding. Recently there has been growing interest in bio-inspired deep learning models, which have shown impressive results in speech and object recognition. Deep learning models are formed by the composition of multiple non-linear transformations of the data, with the goal of yielding more abstract and ultimately more useful representations. The advantages of deep models are threefold: 1) they learn features directly from the raw signal, in contrast to hand-designed features; 2) the learning can be unsupervised, which is suitable for large datasets where labeling all the data is expensive and impractical; 3) they learn a hierarchy of features one level at a time, and this layerwise stacking of feature extraction often yields better representations. However, few deep learning models have been proposed to solve problems in video analysis, especially for videos “in the wild”. Most of them either deal with simple datasets or are limited to low-level local spatial-temporal feature descriptors for action recognition. Moreover, as the learning algorithms are unsupervised, the learned features preserve generative properties rather than the discriminative ones that are more favorable in classification tasks. In this context, the thesis makes two major contributions. First, we propose several formulations and extensions of deep learning methods that learn hierarchical representations for three challenging video analysis tasks: complex event recognition, object detection in videos, and measuring action similarity. The proposed methods are extensively evaluated on state-of-the-art challenging datasets. Besides learning low-level local features, higher-level representations are designed to be learned in the context of the applications: data-driven concept representations and sparse representations of events are learned for complex event recognition; representations of object body parts and structures are learned for object detection in videos; and relational motion features and similarity metrics between video pairs are learned simultaneously for action verification. Second, in order to learn discriminative and compact features, we propose a new feature learning method using a deep neural network based on autoencoders. It differs from existing unsupervised feature learning methods in two ways: first, it optimizes both the discriminative and the generative properties of the features simultaneously, which gives our features better discriminative ability; second, our learned features are more compact, whereas unsupervised feature learning methods usually learn a redundant set of over-complete features. Extensive experiments with quantitative and qualitative results on the tasks of human detection and action verification demonstrate the superiority of the proposed models.
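    As a toy illustration of the second contribution, the sketch below trains an autoencoder code for reconstruction (generative) and classification (discriminative) at the same time. The architecture, loss weighting, and names are assumptions for illustration, not the thesis's actual model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiscriminativeAutoencoder(nn.Module):
    # An autoencoder whose hidden code is trained both to reconstruct the
    # input (generative) and to predict its class (discriminative).
    def __init__(self, in_dim, code_dim, n_classes):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, code_dim), nn.ReLU())
        self.decoder = nn.Linear(code_dim, in_dim)
        self.classifier = nn.Linear(code_dim, n_classes)

    def forward(self, x):
        z = self.encoder(x)  # compact feature code
        return self.decoder(z), self.classifier(z)

def joint_loss(x, y, recon, logits, lam=0.5):
    # Reconstruction keeps the code generative; cross-entropy makes it
    # discriminative; lam balances the two objectives.
    return F.mse_loss(recon, x) + lam * F.cross_entropy(logits, y)
```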

    Integrating Deep Learning with Correlation-based Multimedia Semantic Concept Detection

    Rapid advances in technology have made the explosive growth of multimedia data possible and available to the public. Multimedia data can be defined as a data collection composed of various data types and different representations. Because multimedia data carries a wealth of information, it has been widely adopted in different applications, such as surveillance event detection, medical abnormality detection, and many others. To fulfil the varying requirements of different applications, it is important to effectively classify multimedia data into semantic concepts across multiple domains. In this dissertation, a correlation-based multimedia semantic concept detection framework is seamlessly integrated with deep learning techniques. The framework aims to explore implicit and explicit correlations among features and concepts while adopting different Convolutional Neural Network (CNN) architectures accordingly. First, the Feature Correlation Maximum Spanning Tree (FC-MST) is proposed to remove redundant and irrelevant features based on the correlations between the features and the positive concepts. FC-MST identifies the effective features and decides the dimension of the CNN's initial layer. Second, a negative-based sampling method is proposed to alleviate the data imbalance issue by keeping only the representative negative instances in the training process. To adjust to different sizes of training data, the number of iterations for the CNN is determined adaptively and automatically. Finally, an Indirect Association Rule Mining (IARM) approach and a correlation-based re-ranking method are proposed to reveal the implicit relationships from the correlations among concepts, which are further utilized together with the classification scores to enhance the re-ranking process. The framework is evaluated using two benchmark multimedia data sets, TRECVID and NUS-WIDE, which contain large amounts of multimedia data and various semantic concepts.
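    The abstract gives FC-MST only in outline. The sketch below shows one way a maximum-spanning-tree-based feature pruning could look, using SciPy's minimum spanning tree on negated feature correlations; the pruning rule and function name are guesses for illustration, not the dissertation's exact procedure.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def fc_mst_select(X, y, n_keep):
    # Hypothetical sketch of MST-based feature pruning (not the paper's FC-MST).
    d = X.shape[1]
    feat_corr = np.abs(np.corrcoef(X, rowvar=False))          # feature-feature
    concept_rel = np.abs(np.array(
        [np.corrcoef(X[:, j], y)[0, 1] for j in range(d)]))   # feature-concept
    # SciPy builds a minimum spanning tree, so negate to obtain a maximum one.
    tree = minimum_spanning_tree(-feat_corr).toarray()
    edges = [(i, j) for i in range(d) for j in range(d) if tree[i, j] != 0.0]
    keep = set(range(d))
    # Tree edges connect highly correlated (i.e. redundant) feature pairs:
    # along the strongest edges, drop the endpoint less relevant to the concept.
    for i, j in sorted(edges, key=lambda e: tree[e[0], e[1]]):  # most negative first
        if len(keep) <= n_keep:
            break
        if i in keep and j in keep:
            keep.discard(i if concept_rel[i] < concept_rel[j] else j)
    return sorted(keep)
```

    The number of features that survive would then set the dimension of the CNN's initial layer, as the abstract describes.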

    Concept-driven multi-modality fusion for video search

    No full text

    Use of context for the semantic indexing of images and videos

    The automated indexing of images and videos is a difficult problem because of the “distance” between the arrays of numbers encoding these documents and the concepts (e.g. people, places, events or objects) with which we wish to annotate them. Methods exist for this, but their results are far from satisfactory in terms of generality and accuracy. Existing methods typically use a single set of training examples per concept and treat it as uniform. This is not optimal, because the same concept may appear in very diverse contexts and its appearance may differ greatly depending on those contexts. In this thesis, we consider the use of context for indexing multimedia documents. Context has been widely used in the state of the art to address various problems. In our work, we use relationships between concepts as a source of semantic context. For videos, we also exploit the temporal context that models relationships between the shots of the same video. We propose several approaches using both types of context, as well as their combination, at different levels of an indexing system. We also address the problem of detecting groups of concepts simultaneously, which we consider to be related to the problem of context use: detecting a group of concepts amounts to detecting one or more concepts of the group in a context where the others are present. For this, we studied and compared two categories of approaches. All our proposals are generic and can be applied to any system for the detection of any concept. We evaluated our contributions on the TRECVID and VOC collections, which are international standards recognized by the community, and obtained good results, comparable to those of the best indexing systems evaluated in recent years in the aforementioned evaluation campaigns.
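    To make the two context sources concrete, the following illustrative sketch re-scores per-shot detector outputs with semantic context (scores of related concepts in the same shot) and temporal context (scores of the same concept in adjacent shots). The data layout, weights, and `relations` mapping are assumptions, not the thesis's actual models.

```python
def contextual_rescore(scores, relations, w_sem=0.3, w_temp=0.3):
    # scores[t][c]: initial detection score of concept c in shot t.
    # relations[c]: {related_concept: strength}, the semantic context source.
    T = len(scores)
    out = []
    for t in range(T):
        new = {}
        for c, s in scores[t].items():
            # Semantic context: related concepts detected in the same shot.
            sem = sum(w * scores[t].get(r, 0.0)
                      for r, w in relations.get(c, {}).items())
            # Temporal context: the same concept in the adjacent shots.
            neigh = [scores[u][c] for u in (t - 1, t + 1)
                     if 0 <= u < T and c in scores[u]]
            temp = sum(neigh) / len(neigh) if neigh else 0.0
            new[c] = (1 - w_sem - w_temp) * s + w_sem * sem + w_temp * temp
        out.append(new)
    return out
```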