
    Permutation distribution clustering and structural equation model trees

    The primary goal of this thesis is to present novel methodologies for the exploratory analysis of psychological data sets that support researchers in informed theory development. Psychological data analysis has a long tradition of confirming hypotheses generated prior to data collection. However, in practical research, the following two situations are commonly observed: In the first instance, there are no initial hypotheses about the data. In that case, there is no model available and one has to resort to uninformed methods to reveal structure in the data. In the second instance, existing models that reflect prior hypotheses need to be extended and improved, thereby altering and renewing hypotheses about the data and refining descriptions of the observed phenomena. This dissertation introduces a novel method for the exploratory analysis of psychological data sets for each of the two situations. Both methods focus on time series analysis, which is particularly interesting for the analysis of psychophysiological data and of longitudinal data typically collected by developmental psychologists. Nonetheless, the methods are generally applicable and useful for other fields that analyze time series data, e.g., sociology, economics, neuroscience, and genetics.
    The first part of the dissertation proposes a clustering method for time series. A dissimilarity measure of time series based on the permutation distribution is developed. Employing this measure in a hierarchical scheme yields a novel clustering method that groups time series by their relative complexity: Permutation Distribution Clustering (PDC). Two methods for determining the number of distinct clusters are discussed, based on a statistical and an information-theoretic criterion, respectively. Structural Equation Models (SEMs) constitute a versatile modeling technique, which is frequently employed in psychological research.
The second part of the dissertation introduces an extension of SEMs to Structural Equation Modeling Trees (SEM Trees). SEM Trees describe partitions of a covariate-space which explain differences in the model parameters. They can provide solutions in situations in which hypotheses in the form of a model exist but can potentially be refined by integrating further variables. By harnessing the full power of SEM, they represent a general data analysis technique that can be used for both time series and non-time-series data. SEM Trees algorithmically refine initial models of the sample and thus support researchers in theory development. This thesis includes demonstrations of the methods on simulated as well as on real data sets, including applications of SEM Trees to longitudinal models of cognitive development and cross-sectional cognitive factor models, and applications of PDC to psychophysiological data, including electroencephalographic, electrocardiographic, and genetic data.
The goal of this work is the design of exploratory analysis methods for psychological data sets, in order to support scientists in the development of well-founded theories. The work is motivated by the observation that the classical evaluation methods for psychological data sets are rooted in the tradition of testing hypotheses that were formulated before data collection. However, the following two situations frequently arise in everyday data analysis: (1) No hypotheses about the data exist, and hence no models either. The researcher must therefore resort to uninformed methods to uncover structures and similarities in the data. (2) Models that reflect hypotheses about the data are available but represent the sample only inadequately. In these cases the existing models, and with them the hypotheses, must be modified and extended in order to refine the description of the observed phenomena.
The present dissertation introduces one new method for each of the two cases, each tailored to the exploratory analysis of psychological data. At the same time, both methods are useful for all fields in which time series data are analyzed, such as sociology, economics, neuroscience, and genetics. The first part of the thesis proposes a clustering procedure for time series. It is based on a dissimilarity measure between time series that derives from the permutation distribution of the embedded time series. This measure is combined with a hierarchical clustering algorithm in order to group time series into homogeneous clusters according to their complexity, yielding the new method of Permutation Distribution Clustering (PDC). Two methods for determining the number of separate clusters are derived, one based on statistical tests and one on information-theoretic criteria. The second part of the thesis extends Structural Equation Models (SEMs), a versatile modeling technique that is widespread in psychology, to Structural Equation Model Trees (SEM Trees). SEM Trees describe recursive partitions of a space of observed variables with maximal differences in the model parameters of a SEM. In situations in which hypotheses exist in the form of a model, SEM Trees can refine them by automatically finding variables that explain differences in the model parameters. Owing to the great flexibility of SEMs, a large variety of models can be extended with SEM Trees, which makes the method suitable for the analysis of both time series and non-time-series data. SEM Trees algorithmically refine initial hypotheses and support researchers in the further development of their theories.
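The split criterion behind SEM Trees can be illustrated with a toy model. In the sketch below, a simple univariate Gaussian (MLE mean and variance) stands in for a full SEM, purely as a simplifying assumption; real SEM Trees evaluate likelihood-ratio improvements of a fitted SEM in each candidate partition. The function names are hypothetical.

```python
import math

def gauss_loglik(xs):
    """Maximised Gaussian log-likelihood of xs (MLE mean and variance)."""
    n = len(xs)
    mu = sum(xs) / n
    var = max(sum((v - mu) ** 2 for v in xs) / n, 1e-12)  # guard var == 0
    return -0.5 * n * (math.log(2 * math.pi * var) + 1)

def best_split(y, covariates):
    """Choose the binary covariate whose split most improves the fitted
    likelihood, mirroring the likelihood-ratio split criterion of SEM Trees."""
    base = gauss_loglik(y)
    best_name, best_gain = None, 0.0
    for name, c in covariates.items():
        left = [v for v, g in zip(y, c) if g == 0]
        right = [v for v, g in zip(y, c) if g == 1]
        if len(left) < 2 or len(right) < 2:
            continue  # refuse degenerate partitions
        gain = gauss_loglik(left) + gauss_loglik(right) - base
        if gain > best_gain:
            best_name, best_gain = name, gain
    return best_name, best_gain
```

Growing a tree then amounts to applying `best_split` recursively to each resulting subsample until no split yields a significant improvement.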
The present work includes demonstrations of the proposed methods on real data sets, among them applications of SEM Trees to a longitudinal growth model of cognitive abilities and to a cross-sectional cognitive factor model, as well as applications of PDC to various psychophysiological time series.
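The permutation-distribution dissimilarity at the heart of PDC can be sketched from scratch as follows. This is an illustrative reimplementation, not the thesis's published code; the embedding dimension `m = 3` and the squared Hellinger distance are choices assumed here for concreteness.

```python
import itertools
import math

def permutation_distribution(x, m=3):
    """Relative frequency of each ordinal pattern over all length-m windows."""
    counts = {p: 0 for p in itertools.permutations(range(m))}
    for i in range(len(x) - m + 1):
        window = x[i:i + m]
        # ordinal pattern: the ordering of indices that sorts the window
        pattern = tuple(sorted(range(m), key=lambda j: window[j]))
        counts[pattern] += 1
    total = len(x) - m + 1
    return {p: c / total for p, c in counts.items()}

def pdc_dissimilarity(x, y, m=3):
    """Squared Hellinger distance between the two permutation distributions."""
    px, py = permutation_distribution(x, m), permutation_distribution(y, m)
    return 0.5 * sum((math.sqrt(px[p]) - math.sqrt(py[p])) ** 2 for p in px)
```

Feeding the resulting pairwise dissimilarities into any hierarchical clustering routine then groups the series by relative complexity, as described above.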

    Fractal image compression and the self-affinity assumption : a stochastic signal modelling perspective

    Fractal image compression is a comparatively new technique which has gained considerable attention in the popular technical press, and more recently in the research literature. The most significant advantages claimed are high reconstruction quality at low coding rates, rapid decoding, and "resolution independence" in the sense that an encoded image may be decoded at a higher resolution than the original. While many of the claims published in the popular technical press are clearly extravagant, it appears from the rapidly growing body of published research that fractal image compression is capable of performance comparable with that of other techniques enjoying the benefit of a considerably more robust theoretical foundation. So called because of the similarities between the form of image representation and a mechanism widely used in generating deterministic fractal images, fractal compression represents an image by the parameters of a set of affine transforms on image blocks under which the image is approximately invariant. Although the conditions imposed on these transforms may be shown to be sufficient to guarantee that an approximation of the original image can be reconstructed, there is no obvious theoretical reason to expect this to represent an efficient representation for image coding purposes. The usual analogy with vector quantisation, in which each image is considered to be represented in terms of code vectors extracted from the image itself, is instructive, but it transforms the fundamental problem into one of understanding why this construction results in an efficient codebook. The signal property required for such a codebook to be effective, termed "self-affinity", is poorly understood. A stochastic signal model based examination of this property is the primary contribution of this dissertation.
The most significant findings (subject to some important restrictions) are that "self-affinity" is not a natural consequence of common statistical assumptions but requires particular conditions which are inadequately characterised by second order statistics, and that "natural" images are only marginally "self-affine", to the extent that fractal image compression is effective, but not more so than comparable standard vector quantisation techniques.
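The collage-coding search that underlies fractal compression can be sketched on flattened one-dimensional blocks. This is a deliberately minimal sketch under strong assumptions: real coders operate on 2-D blocks, spatially contract larger domain blocks onto smaller range blocks, and search over block isometries as well. All function names are hypothetical.

```python
def ls_affine(d, r):
    """Least-squares contrast s and brightness o mapping domain pixels d onto
    range pixels r, i.e. minimising sum((s*d_i + o - r_i)^2)."""
    n = len(d)
    sd, sr = sum(d), sum(r)
    sdd = sum(x * x for x in d)
    sdr = sum(x * y for x, y in zip(d, r))
    denom = n * sdd - sd * sd
    s = (n * sdr - sd * sr) / denom if denom else 0.0
    o = (sr - s * sd) / n
    return s, o

def encode_block(r, domains):
    """Return (error, index, s, o) for the domain block whose affine image
    best approximates the range block r -- one step of collage coding."""
    best = None
    for idx, d in enumerate(domains):
        s, o = ls_affine(d, r)
        err = sum((s * x + o - y) ** 2 for x, y in zip(d, r))
        if best is None or err < best[0]:
            best = (err, idx, s, o)
    return best
```

An image is "self-affine" in the sense discussed above exactly when, for most range blocks, some domain block in the same image achieves a small residual error in this search.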

    Discovery of low-dimensional structure in high-dimensional inference problems

    Many learning and inference problems involve high-dimensional data such as images, video or genomic data, which cannot be processed efficiently using conventional methods due to their dimensionality. However, high-dimensional data often exhibit an inherent low-dimensional structure; for instance, they can often be represented sparsely in some basis or domain. The discovery of an underlying low-dimensional structure is important to develop more robust and efficient analysis and processing algorithms. The first part of the dissertation investigates the statistical complexity of sparse recovery problems, including sparse linear and nonlinear regression models, feature selection and graph estimation. We present a framework that unifies sparse recovery problems and construct an analogy to channel coding in classical information theory. We perform an information-theoretic analysis to derive bounds on the number of samples required to reliably recover sparsity patterns, independent of any specific recovery algorithm. In particular, we show that sample complexity can be tightly characterized using a mutual information formula similar to channel coding results. Next, we derive major extensions to this framework, including dependent input variables and a lower bound for sequential adaptive recovery schemes, which helps determine whether adaptivity provides performance gains. We compute statistical complexity bounds for various sparse recovery problems, showing that our analysis improves upon the existing bounds and leads to intuitive results for new applications. In the second part, we investigate methods for improving the computational complexity of subgraph detection in graph-structured data, where we aim to discover anomalous patterns present in a connected subgraph of a given graph. This problem arises in many applications such as the detection of network intrusions, community detection, and the detection of anomalous events in surveillance videos or disease outbreaks.
Since optimization over connected subgraphs is a combinatorial and computationally difficult problem, we propose a convex relaxation that offers a principled approach to incorporating connectivity and conductance constraints on candidate subgraphs. We develop a novel nearly-linear-time algorithm to solve the relaxed problem, establish convergence and consistency guarantees, and demonstrate its feasibility and performance with experiments on real networks.
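The channel-coding analogy from the first part reduces to a simple counting skeleton: there are C(p, k) candidate sparsity patterns, and if each sample can convey at most I bits of information about the pattern, reliable recovery needs at least log2 C(p, k) / I samples. The function below shows only this skeleton; the thesis's actual bounds derive the per-sample mutual information for each specific observation model, and the parameter `bits_per_sample` is a placeholder assumption standing in for that quantity.

```python
import math

def pattern_lower_bound(p, k, bits_per_sample):
    """Channel-coding-style lower bound on the number of samples needed to
    distinguish one of C(p, k) possible sparsity patterns, when each sample
    conveys at most bits_per_sample bits about the pattern."""
    num_patterns = math.comb(p, k)          # size of the "codebook" of supports
    return math.log2(num_patterns) / bits_per_sample
```

For example, with p = 100 variables and sparsity k = 5, there are about 7.5e7 candidate supports, so a channel delivering one bit per sample needs at least ~27 samples regardless of the recovery algorithm.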

    Privacy-preserving information hiding and its applications

    The phenomenal advances in cloud computing technology have raised concerns about data privacy. Aided by modern cryptographic techniques such as homomorphic encryption, it has become possible to carry out computations in the encrypted domain and process data without compromising information privacy. In this thesis, we study various classes of privacy-preserving information hiding schemes and their real-world applications for cyber security, cloud computing, the Internet of Things, etc. A data breach is recognised as one of the most dreadful cyber security threats, in which private data is copied, transmitted, viewed, stolen, or used by unauthorised parties. Although encryption can obfuscate private information against unauthorised viewing, it may not stop data from illegitimate exportation. Privacy-preserving information hiding can serve as a potential solution to this issue, in such a manner that a permission code is embedded into the encrypted data and can be detected when transmissions occur. Digital watermarking is a technique that has been used for a wide range of intriguing applications such as data authentication and ownership identification. However, some of the algorithms are proprietary intellectual property, and thus their availability to the general public is rather limited. A possible solution is to outsource the task of watermarking to an authorised cloud service provider that has the legitimate right to execute the algorithms as well as high computational capacity. Privacy-preserving information hiding is well suited to this scenario since it operates in the encrypted domain and hence prevents private data from being collected by the cloud. The Internet of Things is a promising technology for the healthcare industry. A common framework consists of wearable devices for monitoring the health status of an individual, a local gateway device for aggregating the data, and a cloud server for storing and analysing the data.
However, there are risks that an adversary may attempt to eavesdrop on the wireless communication, attack the gateway device, or even gain access to the cloud server. Hence, it is desirable to produce and encrypt the data simultaneously and to incorporate secret sharing schemes to realise access control. Privacy-preserving secret sharing is a novel line of research for fulfilling this function. In summary, this thesis presents novel schemes and algorithms, including:
    • two privacy-preserving reversible information hiding schemes based upon symmetric cryptography, using arithmetic of quadratic residues and lexicographic permutations, respectively;
    • two privacy-preserving reversible information hiding schemes based upon asymmetric cryptography, using multiplicative and additive privacy homomorphisms, respectively;
    • four predictive models for assisting the removal of distortions inflicted by information hiding, based respectively upon the projection theorem, image gradients, total variation denoising, and Bayesian inference;
    • three privacy-preserving secret sharing algorithms with different levels of generality.
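As one concrete instance of an additive privacy homomorphism of the kind listed above, the sketch below implements a toy Paillier cryptosystem, a standard textbook scheme rather than the thesis's own constructions. The tiny primes and fixed blinding factors are for illustration only; a real system needs large, randomly generated primes, random blinding, and messages m < n.

```python
import math

# Toy Paillier setup: additively homomorphic public-key encryption.
p, q = 17, 19                                # assumption: tiny demo primes
n, n2 = p * q, (p * q) ** 2
g = n + 1                                    # standard choice of generator
lam = math.lcm(p - 1, q - 1)                 # Carmichael function of n
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)  # precomputed decryption factor

def encrypt(m, r):
    """E(m) = g^m * r^n mod n^2, with blinding factor r coprime to n."""
    return pow(g, m, n2) * pow(r, n, n2) % n2

def decrypt(c):
    """D(c) = L(c^lam mod n^2) * mu mod n, where L(x) = (x - 1) // n."""
    return (pow(c, lam, n2) - 1) // n * mu % n
```

Multiplying two ciphertexts adds the underlying plaintexts modulo n, which is exactly the property that lets an untrusted cloud embed or aggregate hidden data without ever seeing it.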

    Proceedings of the Second International Mobile Satellite Conference (IMSC 1990)

    Presented here are the proceedings of the Second International Mobile Satellite Conference (IMSC), held June 17-20, 1990 in Ottawa, Canada. Topics covered include future mobile satellite communications concepts, aeronautical applications, modulation and coding, propagation and experimental systems, mobile terminal equipment, network architecture and control, regulatory and policy considerations, vehicle antennas, and speech compression

    Modularity and Neural Integration in Large-Vocabulary Continuous Speech Recognition

    This thesis tackles the problem of modularity in Large-Vocabulary Continuous Speech Recognition with the use of Neural Networks.