6 research outputs found

    Exploring Hidden Coherent Feature Groups and Temporal Semantics for Multimedia Big Data Analysis

    Thanks to advanced technologies and social networks that allow data to be widely shared across the Internet, there has been an explosion of pervasive multimedia data, generating high demand for multimedia services and applications that let people easily access and manage multimedia data. To meet such demands, multimedia big data analysis has become an emerging hot topic in both industry and academia, ranging from basic infrastructure, management, search, and mining to security, privacy, and applications. Within the scope of this dissertation, a multimedia big data analysis framework is proposed for semantic information management and retrieval, with a focus on rare event detection in videos. The proposed framework is able to explore hidden semantic feature groups in multimedia data and incorporate temporal semantics, especially for video event detection. First, a hierarchical semantic data representation is presented to alleviate the semantic gap issue, and the Hidden Coherent Feature Group (HCFG) analysis method is proposed to capture the correlation between features and separate the original feature set into semantic groups, seamlessly integrating multimedia data in multiple modalities. Next, an Importance Factor based Temporal Multiple Correspondence Analysis (IF-TMCA) approach is presented for effective event detection. Specifically, the HCFG algorithm is integrated with the Hierarchical Information Gain Analysis (HIGA) method to generate the Importance Factor (IF) for producing the initial detection results. Then, the TMCA algorithm is proposed to efficiently incorporate temporal semantics for re-ranking and improving the final performance. Finally, a sampling-based ensemble learning mechanism is applied to further accommodate imbalanced datasets. In addition to the multimedia semantic representation and class imbalance problems, lack of organization is another critical issue for multimedia big data analysis.
In this framework, an affinity propagation-based summarization method is also proposed to transform the unorganized data into a better structure with clean and well-organized information. The whole framework has been thoroughly evaluated across multiple domains, such as soccer goal event detection and disaster information management.
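The sampling-based ensemble step for imbalanced data admits a simple illustration: train several base detectors, each on all rare-event examples plus a random undersample of the abundant negative class, then average their votes. The nearest-centroid base learner and all names below are illustrative assumptions for the sketch, not the dissertation's actual implementation.

```python
import random
from statistics import mean

def centroid(points):
    # component-wise mean of a list of feature vectors
    return [mean(c) for c in zip(*points)]

def dist2(a, b):
    # squared Euclidean distance
    return sum((x - y) ** 2 for x, y in zip(a, b))

def train_base(pos, neg):
    # toy base learner: nearest-centroid classifier
    cp, cn = centroid(pos), centroid(neg)
    return lambda x: 1 if dist2(x, cp) < dist2(x, cn) else 0

def ensemble_detect(pos, neg, x, n_models=5, seed=0):
    # undersample the (large) negative class once per base model,
    # then report the fraction of models voting "event"
    rng = random.Random(seed)
    votes = 0
    for _ in range(n_models):
        sample = rng.sample(neg, len(pos))
        votes += train_base(pos, sample)(x)
    return votes / n_models
```

Because every base model sees a balanced positive/negative sample, the rare class is not swamped during training, while averaging over differently sampled models keeps most of the information in the majority class.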

    A Systematic Review of Existing Data Mining Approaches Envisioned for Knowledge Discovery from Multimedia

    The extensive use of multimedia technologies has greatly extended the applicability of information technology, resulting in the enormous generation of complex multimedia content over the Internet. The amount of multimedia content available to users is therefore also increasing exponentially. In this digital era of the cloud-enabled Internet of Things (IoT), analysis of complex video and image data plays a crucial role. It aims to extract meaningful information, as distributed storage and processing elements within bandwidth-constrained networks seek optimal solutions to increase throughput along with an optimal trade-off between computational complexity and power consumption. However, due to the complex characteristics of visual patterns and variations across video frames, discovering meaningful information and correlations is not a trivial task. Hence, data mining has emerged as a field with diverse approaches for extracting meaningful hidden patterns from complex image and video data using different pattern classification techniques. This study investigates existing data-mining tools and their performance metrics for the purpose of reviewing this research track. It also highlights the relationship between frequent patterns and the discriminative features associated with a video object. Finally, the study addresses open research issues to strengthen the future direction of research towards video analytics and pattern recognition.

    Integrating Deep Learning with Correlation-based Multimedia Semantic Concept Detection

    The rapid advances in technologies make the explosive growth of multimedia data possible and available to the public. Multimedia data can be defined as a data collection composed of various data types and different representations. Because multimedia data carries rich information, it has been widely adopted in diverse domains, such as surveillance event detection, medical abnormality detection, and many others. To fulfil the requirements of different applications, it is important to effectively classify multimedia data into semantic concepts across multiple domains. In this dissertation, a correlation-based multimedia semantic concept detection framework is seamlessly integrated with deep learning techniques. The framework aims to explore implicit and explicit correlations among features and concepts while adopting different Convolutional Neural Network (CNN) architectures accordingly. First, the Feature Correlation Maximum Spanning Tree (FC-MST) is proposed to remove redundant and irrelevant features based on the correlations between the features and positive concepts. FC-MST identifies the effective features and determines the dimension of the initial layer in the CNNs. Second, a Negative-based Sampling method is proposed to alleviate the data imbalance issue by keeping only the representative negative instances in the training process. To adapt to different training data sizes, the number of iterations for the CNN is determined adaptively and automatically. Finally, an Indirect Association Rule Mining (IARM) approach and a correlation-based re-ranking method are proposed to reveal the implicit relationships from the correlations among concepts, which are further utilized together with the classification scores to enhance the re-ranking process. The framework is evaluated using two benchmark multimedia data sets, TRECVID and NUS-WIDE, which contain large amounts of multimedia data and various semantic concepts.
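The FC-MST step rests on a standard construction: treat features as graph nodes, weight edges by feature correlation, and build a maximum spanning tree so that redundant (highly correlated) features can be identified and pruned. A minimal sketch using Kruskal's algorithm follows; the use of absolute Pearson correlation as the edge weight and the function names are assumptions for illustration, not the dissertation's exact formulation.

```python
from itertools import combinations

def pearson(x, y):
    # Pearson correlation of two equal-length feature columns
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def max_spanning_tree(features):
    # features: list of equal-length columns (one list per feature).
    # Kruskal's algorithm on edges weighted by |correlation|, sorted
    # descending so that the *maximum* spanning tree is built.
    n = len(features)
    edges = sorted(
        ((abs(pearson(features[i], features[j])), i, j)
         for i, j in combinations(range(n), 2)),
        reverse=True)
    parent = list(range(n))
    def find(a):  # union-find with path halving
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    tree = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree.append((i, j, w))
    return tree
```

The heaviest tree edges then mark feature pairs that carry nearly the same information, so one endpoint of each such edge is a pruning candidate.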

    Multimodal Data Analytics and Fusion for Data Science

    Advances in technologies have rapidly accumulated a zettabyte of “new” data every two years. This huge amount of data has a powerful impact on various areas of science and engineering and generates enormous research opportunities, which calls for the design and development of advanced approaches to data analytics. Given such demands, data science has become an emerging hot topic in both industry and academia, ranging from basic business solutions, technological innovations, and multidisciplinary research to political decisions, urban planning, and policymaking. Within the scope of this dissertation, a multimodal data analytics and fusion framework is proposed for data-driven knowledge discovery and cross-modality semantic concept detection. The proposed framework can explore useful knowledge hidden in different formats of data and incorporate representation learning from data in multiple modalities, especially for disaster information management. First, a Feature Affinity-based Multiple Correspondence Analysis (FA-MCA) method is presented to analyze the correlations between low-level features from different modalities, and an MCA-based Neural Network (MCA-NN) is proposed to capture the high-level features from individual FA-MCA models and seamlessly integrate the semantic data representations for video concept detection. Next, a genetic algorithm-based approach is presented for deep neural network selection. Furthermore, the improved genetic algorithm is integrated with deep neural networks to generate populations for producing optimal deep representation learning models. Then, the multimodal deep representation learning framework is proposed to incorporate the semantic representations from data in multiple modalities efficiently. Finally, fusion strategies are applied to accommodate multiple modalities. In this framework, cross-modal mapping strategies are also proposed to organize the features in a better structure to improve the overall performance.
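A genetic algorithm for network selection of the kind described above can be sketched with bit-string genomes (e.g., each bit switching a candidate layer or hyperparameter on or off), tournament selection, one-point crossover, and per-bit mutation. Everything below, including the operators and parameter values, is a generic sketch rather than the dissertation's improved algorithm; in practice `fitness` would train and validate a network for each genome.

```python
import random

def genetic_search(fitness, n_bits, pop_size=20, generations=30, seed=1):
    # minimal generational GA: tournament selection of size 3,
    # one-point crossover, 5% per-bit mutation, plus elitism
    # (the best genome seen so far is always retained)
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            # two parents, each the winner of a 3-way tournament
            a, b = (max(rng.sample(pop, 3), key=fitness) for _ in range(2))
            cut = rng.randrange(1, n_bits)        # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g ^ (rng.random() < 0.05) for g in child]  # mutate
            nxt.append(child)
        pop = nxt
        best = max(pop + [best], key=fitness)
    return best
```

With a trivial fitness such as `sum` (count of set bits), the search converges toward the all-ones genome, which is a quick sanity check before plugging in an expensive network-evaluation fitness.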

    Partitioning of hyperspectral images of high spatial dimension by affinity propagation

    Interest in hyperspectral image data has been constantly increasing in recent years. Hyperspectral images provide more detailed information about the spectral properties of a scene and allow more precise discrimination of objects than traditional color images or even multispectral images. The high spatial and spectral resolutions of hyperspectral images make it possible to precisely characterize the information content of each pixel. Although the potential of hyperspectral technology is relatively broad, the analysis and processing of these data remain complex, and exploiting such large data sets presents a great challenge. In this thesis, we are mainly interested in the reduction and partitioning of hyperspectral images of high spatial dimension. The proposed approach consists essentially of two steps: feature extraction and classification of the pixels of an image. A new approach to feature extraction based on spatial and spectral tri-occurrence matrices defined on cubic neighborhoods is proposed. A comparative study shows the discrimination power of these new features over conventional ones as well as over spectral signatures. Concerning the classification step, this thesis focuses on the unsupervised and non-parametric classification approach because it has several advantages: no a priori knowledge, image partitioning for any application domain, and adaptability to the image's information content. A comparative study of the most well-known semi-supervised (knowledge of number of classes) and unsupervised non-parametric methods (K-means, FCM, ISODATA, AP) showed the superiority of affinity propagation (AP). Despite its high correct classification rate, affinity propagation has two major drawbacks. First, the number of classes is over-estimated when the preference parameter p is initialized to the median value of the similarity matrix.
Second, the partitioning of large hyperspectral images is hampered by its quadratic computational complexity, making its direct application to this data type impractical. To overcome these two drawbacks, we propose an approach that reduces the number of pixels to be classified before applying AP, by automatically grouping data points with high similarity. We also introduce a step to optimize the preference parameter value by maximizing a criterion related to the interclass variance, in order to correctly estimate the number of classes. The proposed approach was successfully applied to synthetic images, both mono-component and multi-component, and showed consistent discrimination of the obtained classes. It was also successfully applied and compared on hyperspectral images of high spatial dimension (1000 × 1000 pixels × 62 bands) in the context of a real application for the detection of invasive and non-invasive vegetation species.
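The first step of the proposed reduction, grouping pixels with very high similarity before running AP, can be sketched as follows; the per-band tolerance test and the running-mean representative are illustrative assumptions, not the thesis's exact aggregation rule.

```python
def aggregate_pixels(pixels, tol=1e-3):
    # merge pixels whose spectra differ by at most `tol` in every
    # band, so the clustering stage only sees one representative
    # (plus a weight) per group of near-identical pixels
    groups = []  # list of (representative_spectrum, pixel_count)
    for p in pixels:
        for i, (rep, n) in enumerate(groups):
            if all(abs(a - b) <= tol for a, b in zip(p, rep)):
                # running mean keeps the representative centred
                new = [(r * n + a) / (n + 1) for r, a in zip(rep, p)]
                groups[i] = (new, n + 1)
                break
        else:
            groups.append((list(p), 1))
    return groups
```

Since AP's cost is quadratic in the number of data points, shrinking a million pixels to a few thousand weighted representatives is what makes the 1000 × 1000 × 62 images tractable.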

    Term selection in information retrieval

    Systems trained on linguistically annotated data achieve strong performance for many language processing tasks. This encourages the idea that annotations can improve any language processing task if applied in the right way. However, despite widespread acceptance and availability of highly accurate parsing software, it is not clear that ad hoc information retrieval (IR) techniques using annotated documents and requests consistently improve search performance compared to techniques that use no linguistic knowledge. In many cases, retrieval gains made using language processing components, such as part-of-speech tagging and head-dependent relations, are offset by significant negative effects. This results in a minimal positive, or even negative, overall impact for linguistically motivated approaches compared to approaches that do not use any syntactic or domain knowledge. In some cases, it may be that syntax does not reveal anything of practical importance about document relevance. Yet without a convincing explanation for why linguistic annotations fail in IR, the intuitive appeal of search systems that ‘understand’ text can result in the repeated application, and mis-application, of language processing to enhance search performance. This dissertation investigates whether linguistics can improve the selection of query terms by better modelling the alignment process between natural language requests and search queries. It is the most comprehensive work on the utility of linguistic methods in IR to date. Term selection in this work focuses on identification of informative query terms of 1-3 words that both represent the semantics of a request and discriminate between relevant and non-relevant documents. Approaches to word association are discussed with respect to linguistic principles, and evaluated with respect to semantic characterization and discriminative ability. 
Analysis is organised around three theories of language that emphasise different structures for the identification of terms: phrase structure theory, dependency theory and lexicalism. The structures identified by these theories play distinctive roles in the organisation of language. Evidence is presented regarding the value of different methods of word association based on these structures, and the effect of method and term combinations. Two highly effective, novel methods for the selection of terms from verbose queries are also proposed and evaluated. The first method focuses on the semantic phenomenon of ellipsis with a discriminative filter that leverages diverse text features. The second method exploits a term ranking algorithm, PhRank, that uses no linguistic information and relies on a network model of query context. The latter focuses queries so that 1-5 terms in an unweighted model achieve better retrieval effectiveness than weighted IR models that use up to 30 terms. In addition, unlike models that use a weighted distribution of terms or subqueries, the concise terms identified by PhRank are interpretable by users. Evaluation with newswire and web collections demonstrates that PhRank-based query reformulation significantly improves the performance of verbose queries by up to 14% compared to highly competitive IR models, and is at least as good for short, keyword queries with the same models. Results illustrate that linguistic processing may help with the selection of word associations but does not necessarily translate into improved IR performance. Statistical methods are necessary to overcome the limits of syntactic parsing and word adjacency measures for ad hoc IR. As a result, probabilistic frameworks that discover, and make use of, many forms of linguistic evidence may deliver small improvements in IR effectiveness, but methods that use simple features can be substantially more efficient and equally, or more, effective. Various explanations for this finding are suggested, including the probabilistic nature of grammatical categories, a lack of homomorphism between syntax and semantics, the impact of lexical relations, variability in collection data, and systemic effects in language systems.
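As a rough illustration of a network model of query context (not the actual PhRank algorithm), terms can be ranked by a PageRank-style power iteration over a word co-occurrence graph, with no linguistic annotation involved:

```python
from collections import defaultdict

def rank_terms(docs, damping=0.85, iters=50):
    # build an undirected co-occurrence graph: words that share a
    # context (here, a whitespace-tokenised document) are linked
    links = defaultdict(set)
    for doc in docs:
        words = doc.split()
        for w in words:
            links[w].update(x for x in words if x != w)
    words = sorted(links)
    # PageRank-style power iteration over the graph
    score = {w: 1.0 / len(words) for w in words}
    for _ in range(iters):
        score = {w: (1 - damping) / len(words) + damping * sum(
                     score[v] / len(links[v]) for v in links[w])
                 for w in words}
    return sorted(words, key=score.get, reverse=True)
```

Terms that sit in many contexts accumulate score from their neighbours, so the top-ranked handful approximates the "concise, interpretable terms" the abstract describes, without any parsing or tagging.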