
    A tensor-based approach for big data representation and dimensionality reduction

    Journal Article. © 2013 IEEE. Variety and veracity are two distinct characteristics of large-scale and heterogeneous data. It has been a great challenge to efficiently represent and process big data with a unified scheme. In this paper, a unified tensor model is proposed to represent unstructured, semistructured, and structured data. With the tensor extension operator, various types of data are represented as subtensors and then merged into a unified tensor. In order to extract the core tensor, which is small but contains valuable information, an incremental high order singular value decomposition (IHOSVD) method is presented. By recursively applying the incremental matrix decomposition algorithm, IHOSVD is able to update the orthogonal bases and compute the new core tensor. Analyses of the proposed method in terms of time complexity, memory usage, and approximation accuracy are provided in this paper. A case study illustrates that approximate data reconstructed from a core set containing 18% of the elements can in general guarantee 93% accuracy. Theoretical analyses and experimental results demonstrate that the proposed unified tensor model and the IHOSVD method are efficient for big data representation and dimensionality reduction.
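    The core-tensor extraction in IHOSVD builds on the higher-order SVD: each mode unfolding is decomposed to obtain an orthogonal basis, and the core tensor is the projection of the data onto the truncated bases. The sketch below shows the plain (batch, non-incremental) truncated HOSVD in NumPy; the function names, the choice of target ranks, and the reconstruction helper are illustrative assumptions, not the paper's incremental update scheme.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move `mode` to the front and flatten the remaining modes."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def truncated_hosvd(T, ranks):
    """Batch truncated HOSVD: one orthogonal basis per mode plus the projected core.

    `ranks` gives the target multilinear rank per mode (an illustrative choice);
    this is the plain HOSVD, not the paper's incremental (IHOSVD) update.
    """
    factors = []
    for mode, r in enumerate(ranks):
        # leading left singular vectors of the mode unfolding span the mode subspace
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for mode, U in enumerate(factors):
        # project mode `mode` onto its truncated basis
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

def reconstruct(core, factors):
    """Approximate the original tensor from the small core and the mode bases."""
    T = core
    for mode, U in enumerate(factors):
        T = np.moveaxis(np.tensordot(U, np.moveaxis(T, mode, 0), axes=1), 0, mode)
    return T

# e.g. a 20x30x40 tensor compressed to a 5x5x5 core and then reconstructed
T = np.random.rand(20, 30, 40)
core, factors = truncated_hosvd(T, ranks=(5, 5, 5))
approx = reconstruct(core, factors)
```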

    Towards Addressing Key Visual Processing Challenges in Social Media Computing

    Abstract: Visual processing in social media platforms is a key step in gathering and understanding information in the era of the Internet and big data. Online data is rich in content, but its processing faces many challenges, including varying scales for objects of interest, unreliable and/or missing labels, the inadequacy of single-modal data, and the difficulty of analyzing high-dimensional data. Towards facilitating the processing and understanding of online data, this dissertation primarily focuses on three challenges that I feel are of great practical importance: handling scale differences in computer vision tasks, such as facial component detection and face retrieval; developing efficient classifiers using partially labeled data and noisy data; and employing multi-modal models and feature selection to improve multi-view data analysis. For the first challenge, I propose a scale-insensitive algorithm to detect facial landmarks quickly and accurately. For the second challenge, I propose two algorithms that can be used to learn from partially labeled data and noisy data, respectively. For the third challenge, I propose a new framework that incorporates feature selection modules into LDA models.
    Dissertation/Thesis: Doctoral Dissertation, Computer Science, 201
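    For the second challenge, learning from partially labeled data, a standard point of reference is graph-based label propagation, which spreads the labels of the few annotated points over a similarity graph. The sketch below is such a generic baseline, not the dissertation's proposed algorithm; the affinity matrix W, the propagation weight alpha, and the iteration count are illustrative assumptions.

```python
import numpy as np

def label_propagation(W, y, alpha=0.9, n_iter=50):
    """Graph-based label propagation over an affinity matrix W (n x n).

    y holds integer class labels, with -1 for unlabeled points; returns a label
    for every point. Generic baseline only, not the dissertation's algorithm.
    """
    n = W.shape[0]
    classes = np.unique(y[y >= 0])
    # symmetrically normalised affinity S = D^{-1/2} W D^{-1/2}
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(W.sum(axis=1), 1e-12))
    S = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    # one-hot seed matrix for the labeled points
    Y = np.zeros((n, classes.size))
    for j, c in enumerate(classes):
        Y[y == c, j] = 1.0
    F = Y.copy()
    for _ in range(n_iter):
        # spread label mass along the graph while clamping towards the seeds
        F = alpha * S @ F + (1.0 - alpha) * Y
    return classes[F.argmax(axis=1)]
```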

    Exploring geometrical structures in high-dimensional computer vision data

    In computer vision, objects such as local features, images, and video sequences are often represented as high-dimensional data points, although it is commonly believed that low-dimensional geometrical structures underlie the data set. This low-dimensional geometric information enables a better understanding of high-dimensional data sets and is useful in solving computer vision problems. In this thesis, the geometrical structures are investigated from different perspectives according to different computer vision applications. For spectral clustering, the distribution of data points in a local region is summarised by a covariance matrix, which is interpreted as a Mahalanobis distance. For the action recognition problem, we extract subspace information for each action class; the query video sequence is labeled according to its distances to the subspaces of the corresponding video classes. Three new algorithms are introduced for hashing-based approximate nearest neighbour (ANN) search: NOKMeans relaxes the orthogonality condition on the encoding functions of previous quantisation-error-based methods by representing data points in a new feature space; Auto-JacoBin uses a robust auto-encoder model to preserve the geometric information of the original space in the binary codes; and AGreedy assigns a score, reflecting the ability to preserve order information in local regions, to any set of encoding functions, with an alternating greedy method used to find a locally optimal solution. The geometric information has the potential to yield better solutions to computer vision problems. As shown in our experiments, the benefits include increased clustering accuracy, reduced computation for recognising actions in videos, and improved retrieval performance for ANN problems.
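    A minimal sketch of the spectral-clustering idea mentioned above: the neighbourhood of each point is summarised by a local covariance matrix, which induces a Mahalanobis distance used to build the affinity matrix. The neighbourhood size k, the regularisation, the Gaussian-style weighting, and the symmetrisation step are illustrative assumptions, not the thesis' exact construction.

```python
import numpy as np

def local_mahalanobis_affinity(X, k=10, reg=1e-3):
    """Affinity matrix where each point's neighbourhood supplies a local covariance.

    X: (n, d) data matrix. k, the regulariser, the Gaussian weighting, and the
    final symmetrisation are illustrative choices, not the thesis' exact recipe.
    """
    n, d = X.shape
    # Euclidean distances are only used to select the k nearest neighbours of each point
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(D[i])[1:k + 1]
        C = np.cov(X[nbrs].T) + reg * np.eye(d)        # local covariance, regularised
        C_inv = np.linalg.inv(C)
        diff = X - X[i]                                 # (n, d)
        # squared Mahalanobis distance from point i to all points under its local metric
        d2 = np.einsum('nd,de,ne->n', diff, C_inv, diff)
        W[i] = np.exp(-d2)
    return 0.5 * (W + W.T)                              # symmetric affinity for spectral clustering
```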

    Exploring sparsity, self-similarity, and low rank approximation in action recognition, motion retrieval, and action spotting

    This thesis consists of four major parts. In the first part (Chapters 1-2), we introduce the overview, motivation, and contributions of our work, and extensively survey the current literature on six related topics. In the second part (Chapters 3-7), we explore the concept of self-similarity in two challenging scenarios, namely action recognition and motion retrieval. We build three-dimensional volume representations for both scenarios and devise effective techniques that produce compact representations encoding the internal dynamics of the data. In the third part (Chapter 8), we explore the challenging action spotting problem and propose a feature-independent unsupervised framework that is effective in spotting actions under various real situations, even under heavily perturbed conditions. The final part (Chapter 9) is dedicated to conclusions and future work. For action recognition, we introduce a generic method that does not depend on one particular type of input feature vector. We make three main contributions: (i) we introduce the concept of the Joint Self-Similarity Volume (Joint SSV) for modeling dynamical systems, and show that by using a new optimized rank-1 tensor approximation of the Joint SSV one can obtain compact low-dimensional descriptors that very accurately preserve the dynamics of the original system, e.g. an action video sequence; (ii) the descriptor vectors derived from the optimized rank-1 approximation make it possible to recognize actions without explicitly aligning action sequences of varying execution speed or different frame rates; (iii) the method is generic and can be applied to different low-level features such as silhouettes, histograms of oriented gradients (HOG), etc., and hence does not necessarily require explicit tracking of features in the space-time volume. Our experimental results on five public datasets demonstrate that our method produces very good results and outperforms many baseline methods. For action recognition on incomplete videos, we determine whether incomplete videos, which are often discarded, carry useful information for action recognition, and if so, how one can represent such a mixed collection of video data (complete versus incomplete, and labeled versus unlabeled) in a unified manner. We propose a novel framework to handle incomplete videos in action classification and make three main contributions: (i) we cast the action classification problem for a mixture of complete and incomplete data as a semi-supervised learning problem over labeled and unlabeled data; (ii) we introduce a two-step approach to convert the mixed input data into a uniform compact representation; (iii) exhaustively scrutinizing 280 configurations, we experimentally show on two benchmarks we created that, even when the videos are extremely sparse and incomplete, it is still possible to recover useful information from them and to classify unknown actions with a graph-based semi-supervised learning framework. For motion retrieval, we present a framework that allows for flexible and efficient retrieval of motion capture data in huge databases. The method first converts an action sequence into a self-similarity matrix (SSM), which is based on the notion of self-similarity. This conversion of the motion sequences into compact, low-rank subspace representations greatly reduces the spatiotemporal dimensionality of the sequences.
The SSMs are then used to construct order-3 tensors, and we propose a low-rank decomposition scheme that converts the motion sequence volumes into compact, lower-dimensional representations without losing the nonlinear dynamics of the motion manifold. Thus, unlike existing linear dimensionality reduction methods that distort the motion manifold and lose very critical and discriminative components, the proposed method performs well even when inter-class differences are small or intra-class differences are large. In addition, the method allows for efficient retrieval and does not require time-alignment of the motion sequences. We evaluate the performance of our retrieval framework on the CMU mocap dataset under two experimental settings, both demonstrating very good retrieval rates. For action spotting, our framework does not depend on any specific feature (e.g. HOG/HOF, STIP, silhouettes, bag-of-words, etc.) and requires no human localization, segmentation, or framewise tracking. This is achieved by treating the problem holistically as one of extracting the internal dynamics of video cuboids by modeling them in their natural form as multilinear tensors. To extract their internal dynamics, we devise a novel Two-Phase Decomposition (TP-Decomp) of a tensor that generates very compact and discriminative representations that are robust to even heavily perturbed data. Technically, a Rank-based Tensor Core Pyramid (Rank-TCP) descriptor is generated by combining multiple tensor cores under multiple ranks, allowing video cuboids to be represented in a hierarchical tensor pyramid. The problem then reduces to a template matching problem, which is solved efficiently by using two boosting strategies: (i) to reduce the search space, we filter the dense trajectory cloud extracted from the target video; (ii) to boost the matching speed, we perform matching in an iterative coarse-to-fine manner. Experiments on five benchmarks show that our method outperforms the current state of the art under various challenging conditions. We also created a challenging dataset called Heavily Perturbed Video Arrays (HPVA) to validate the robustness of our framework under heavily perturbed situations.
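    The self-similarity representation at the heart of the action recognition and motion retrieval parts can be sketched as follows: a sequence of per-frame feature vectors is turned into a self-similarity matrix (SSM) of pairwise frame distances, and a stack of such matrices forms an order-3 volume whose rank-1 approximation yields a compact descriptor. The rank-1 routine below is a generic alternating (higher-order power method) update, not the optimized rank-1 scheme of the thesis; the distance metric, feature channels, and iteration count are assumptions.

```python
import numpy as np

def self_similarity_matrix(frames):
    """SSM of a sequence: frames is (T, d); entry (i, j) is the distance between frames i and j."""
    diff = frames[:, None, :] - frames[None, :, :]
    return np.linalg.norm(diff, axis=-1)

def rank1_descriptor(volume, n_iter=30):
    """Rank-1 approximation of an order-3 volume via alternating (higher-order power) updates.

    Returns one unit vector per mode; their outer product is the rank-1 approximation.
    Generic sketch only, not the thesis' optimized rank-1 scheme for the Joint SSV.
    """
    a = np.ones(volume.shape[0])
    b = np.ones(volume.shape[1])
    c = np.ones(volume.shape[2])
    for _ in range(n_iter):
        a = np.einsum('ijk,j,k->i', volume, b, c); a /= np.linalg.norm(a)
        b = np.einsum('ijk,i,k->j', volume, a, c); b /= np.linalg.norm(b)
        c = np.einsum('ijk,i,j->k', volume, a, b); c /= np.linalg.norm(c)
    return a, b, c

# e.g. stack SSMs of two hypothetical feature channels into an order-3 volume
chan1 = np.random.rand(120, 32)   # e.g. per-frame HOG features
chan2 = np.random.rand(120, 16)   # e.g. per-frame silhouette features
volume = np.stack([self_similarity_matrix(chan1), self_similarity_matrix(chan2)], axis=2)
a, b, c = rank1_descriptor(volume)
```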

    Advanced tensor based signal processing techniques for wireless communication systems and biomedical signal processing

    Many observed signals in signal processing applications, including wireless communications, biomedical signal processing, image processing, and machine learning, are multi-dimensional. Tensors preserve the multi-dimensional structure and provide a natural representation of these signals/data. Moreover, tensors often provide improved identifiability. Therefore, we benefit from using tensor algebra in the above-mentioned applications and many more. In this thesis, we present the benefits of utilizing tensor algebra in two signal processing areas: signal processing for MIMO (Multiple-Input Multiple-Output) wireless communication systems and biomedical signal processing. Moreover, we contribute to the theoretical aspects of tensor algebra by deriving new properties and ways of computing tensor decompositions. Often, we only have an element-wise or a slice-wise description of the signal model. This representation does not reveal the explicit tensor structure, so the derivation of all tensor unfoldings is not always obvious, and exploiting the multi-dimensional structure of these models is not always straightforward. We propose an alternative representation of the element-wise multiplication or the slice-wise multiplication based on the generalized tensor contraction operator. Later in this thesis, we exploit this novel representation and the properties of the contraction operator to derive the final tensor models. There exist a number of different tensor decompositions that describe different signal models, such as the HOSVD (Higher Order Singular Value Decomposition), the CP/PARAFAC (Canonical Polyadic / PARallel FACtors) decomposition, the BTD (Block Term Decomposition), the PARATUCK2 (PARAfac and TUCker2) decomposition, and the PARAFAC2 (PARAllel FACtors2) decomposition. Among these decompositions, the CP decomposition is the most widely used. Therefore, the development of algorithms for the efficient computation of the CP decomposition is important for many applications. The SECSI (Semi-Algebraic framework for approximate CP decomposition via SImultaneous matrix diagonalization) framework is an efficient and robust tool for the calculation of the approximate low-rank CP decomposition via simultaneous matrix diagonalizations. In this thesis, we present five extensions of the SECSI framework that reduce the computational complexity of the original framework and/or introduce constraints on the factor matrices. Moreover, the PARAFAC2 decomposition and the PARATUCK2 decomposition are usually described using a slice-wise notation that can be expressed in terms of the generalized tensor contraction proposed in this thesis. We exploit this novel representation to derive explicit tensor models for the PARAFAC2 decomposition and the PARATUCK2 decomposition. Furthermore, we use the PARAFAC2 model to derive an ALS (Alternating Least-Squares) algorithm for the computation of the PARAFAC2 decomposition. Moreover, we exploit the novel contraction properties for element-wise and slice-wise multiplications to model MIMO multi-carrier wireless communication systems. We show that this very general model can be used to derive the tensor model of the received signal for MIMO-OFDM (Multiple-Input Multiple-Output - Orthogonal Frequency Division Multiplexing), Khatri-Rao coded MIMO-OFDM, and randomly coded MIMO-OFDM systems.
We propose the transmission techniques of Khatri-Rao coding and random coding in order to impose an additional tensor structure on the transmit signal tensor, which otherwise does not have a particular structure. Moreover, we show that this model can be extended to other multi-carrier techniques such as GFDM (Generalized Frequency Division Multiplexing). Utilizing these models at the receiver side, we design several types of receivers for these systems that outperform the traditional matrix-based solutions in terms of the symbol error rate. In the last part of this thesis, we show the benefits of using tensor algebra in biomedical signal processing by jointly decomposing EEG (ElectroEncephaloGraphy) and MEG (MagnetoEncephaloGraphy) signals. EEG and MEG signals are usually acquired simultaneously, and they capture aspects of the same brain activity. Therefore, EEG and MEG signals can be decomposed using coupled tensor decompositions such as the coupled CP decomposition. We exploit the proposed coupled SECSI framework (one of the proposed extensions of the SECSI framework) for the computation of the coupled CP decomposition, first to validate and analyze the photic driving effect. Moreover, we validate the effects of skull defects on the measured EEG and MEG signals by means of a joint EEG-MEG decomposition using the coupled SECSI framework. Both applications show that we benefit from coupled tensor decompositions and that the coupled SECSI framework is a very practical tool for the analysis of biomedical data.
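    For reference, the CP/PARAFAC model that SECSI approximates expresses a third-order tensor as a sum of rank-one terms, T(i,j,k) ≈ Σ_r A(i,r) B(j,r) C(k,r). The sketch below computes this factorization with plain alternating least squares (CP-ALS); SECSI itself instead proceeds semi-algebraically via simultaneous matrix diagonalizations, which is not reproduced here. Rank, initialisation, and iteration count are illustrative assumptions.

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Khatri-Rao product: column r is kron(A[:, r], B[:, r])."""
    r = A.shape[1]
    return (A[:, None, :] * B[None, :, :]).reshape(-1, r)

def cp_als(T, rank, n_iter=100, seed=0):
    """CP/PARAFAC factors of a 3rd-order tensor via alternating least squares.

    Generic CP-ALS sketch; SECSI instead solves the problem semi-algebraically
    via simultaneous matrix diagonalizations.
    """
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    # mode unfoldings consistent with C-order reshaping
    T1 = T.reshape(I, -1)                       # columns indexed by (j, k)
    T2 = np.moveaxis(T, 1, 0).reshape(J, -1)    # columns indexed by (i, k)
    T3 = np.moveaxis(T, 2, 0).reshape(K, -1)    # columns indexed by (i, j)
    for _ in range(n_iter):
        A = T1 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = T2 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = T3 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C
```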

    Semantic multimedia analysis using knowledge and context

    PhD Thesis. The difficulty of semantic multimedia analysis can be attributed to the extended diversity in form and appearance exhibited by the majority of semantic concepts and the difficulty of expressing them using a finite number of patterns. In meeting this challenge there has been a scientific debate on whether the problem should be addressed from the perspective of using overwhelming amounts of training data to capture all possible instantiations of a concept, or from the perspective of using explicit knowledge about the concepts’ relations to infer their presence. In this thesis we address three problems of pattern recognition and propose solutions that combine the knowledge extracted implicitly from training data with the knowledge provided explicitly in structured form. First, we propose a Bayesian network (BN) modeling approach that defines a conceptual space where both domain-related evidence and evidence derived from content analysis can be jointly considered to support or disprove a hypothesis. The use of this space leads to significant gains in performance compared to analysis methods that cannot handle combined knowledge. Then, we present an unsupervised method that exploits the collective nature of social media to automatically obtain large amounts of annotated image regions. By proving that the quality of the obtained samples can be almost as good as manually annotated images when working with large datasets, we significantly contribute towards scalable object detection. Finally, we introduce a method that treats images, visual features, and tags as the three observable variables of an aspect model and extracts a set of latent topics that incorporates the semantics of both the visual and the tag information space. By showing that the cross-modal dependencies of tagged images can be exploited to increase the semantic capacity of the resulting space, we advocate the use of all existing information facets in the semantic analysis of social media.
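    The aspect model used in the last contribution has the classical two-mode pLSA as its starting point: observed co-occurrence counts are explained by a small set of latent topics estimated with EM. The sketch below implements that two-mode baseline on a document-by-word (here: image-by-tag) count matrix; the thesis' three-mode extension over images, visual features, and tags is not reproduced, and the variable names and smoothing constant are illustrative assumptions.

```python
import numpy as np

def plsa(counts, n_topics, n_iter=100, seed=0):
    """EM for the classical two-mode aspect model (pLSA) on a count matrix.

    counts: (n_docs, n_words) co-occurrence counts, e.g. images by tags.
    Two-mode baseline only; the thesis' three-mode model over images, visual
    features, and tags is not reproduced here.
    """
    rng = np.random.default_rng(seed)
    n_docs, n_words = counts.shape
    p_z_d = rng.random((n_docs, n_topics)); p_z_d /= p_z_d.sum(axis=1, keepdims=True)
    p_w_z = rng.random((n_topics, n_words)); p_w_z /= p_w_z.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        # E-step: posterior over latent topics for every (doc, word) pair
        joint = p_z_d[:, :, None] * p_w_z[None, :, :]            # (docs, topics, words)
        p_z_dw = joint / (joint.sum(axis=1, keepdims=True) + 1e-12)
        # M-step: re-estimate topic-word and doc-topic distributions from expected counts
        expected = counts[:, None, :] * p_z_dw
        p_w_z = expected.sum(axis=0)
        p_w_z /= p_w_z.sum(axis=1, keepdims=True) + 1e-12
        p_z_d = expected.sum(axis=2)
        p_z_d /= p_z_d.sum(axis=1, keepdims=True) + 1e-12
    return p_z_d, p_w_z
```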