    Interactive real-time three-dimensional visualisation of virtual textiles

    Virtual textile databases provide a cost-efficient alternative to hardcover sample catalogues. By taking advantage of the high-performance features offered by the latest generation of programmable graphics accelerator boards, it is possible to combine photometric stereo methods with 3D visualisation methods to implement a virtual textile database. In this thesis, we investigate and combine rotation-invariant texture retrieval with interactive visualisation techniques. We use a generic 3D surface representation that allows us to combine real-time interactive 3D visualisation methods with present-day texture retrieval methods. We begin by investigating the most suitable data format for the 3D surface representation, identify relief mapping combined with Bézier surfaces as the most suitable representation for our needs, and go on to describe how these representations can be combined for real-time rendering. We then investigate ten different methods of implementing rotation-invariant texture retrieval using feature vectors. The results show that first-order statistics in the form of histogram data are very effective for discriminating colour albedo information, while rotation-invariant gradient maps are effective for distinguishing between different types of micro-geometry using either first- or second-order statistics. Engineering and Physical Sciences Research Council (EPSRC)
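
    The retrieval side summarised above can be illustrated with a short sketch: a first-order histogram feature computed from a colour albedo map, rotation-invariant gradient statistics computed from a height map, and database entries ranked by distance to the query. This is a minimal illustration under assumed names and parameters (64 bins, Euclidean distance), not the implementation evaluated in the thesis.

    ```python
    import numpy as np

    def albedo_histogram(albedo, bins=64):
        """First-order statistics: normalised intensity histogram of a colour
        albedo map (values assumed in [0, 1]), used as a retrieval feature."""
        hist, _ = np.histogram(albedo, bins=bins, range=(0.0, 1.0))
        return hist / max(hist.sum(), 1)

    def gradient_features(height_map):
        """Rotation-invariant gradient map: the gradient magnitude of the
        surface height map is unchanged by in-plane rotation, so its first-
        and second-order statistics characterise micro-geometry."""
        gy, gx = np.gradient(height_map.astype(float))
        mag = np.hypot(gx, gy)
        return np.array([mag.mean(), mag.std()])

    def retrieve(query_vec, database_vecs):
        """Rank database entries by Euclidean distance to the query vector."""
        return np.argsort(np.linalg.norm(database_vecs - query_vec, axis=1))
    ```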

    Reconstructing Images from In Vivo Laser Scanning Microscope Data

    Two-photon laser-scanning microscopy can be used for in vivo neuro-imaging of small animals. Because of the very high resolution of the images, any brain motion can cause significant artifacts; the tissue is often displaced by 10 or more pixels from its rest position. Scanning an image of 512 lines takes about 1 s. During this time at least three heartbeats and one respiration cycle occur, moving the brain, so some tissue locations are scanned several times while others are missed. Consequently, although the images may appear reasonable, they can lead to incorrect conclusions about brain structure or function. As lines are scanned almost instantaneously (~1 ms), our problem reduces to relocating each line in a three-dimensional stack of images to its "correct" location. In order to model the movement process and quantify the effect of the physiological signal, we collected hybrid image data: fixing y and z, the microscope was set to scan in the x direction several thousand times. Classifying these lines with a Normalized Cross-Correlation kernel function, we were able to track the trajectory that a line follows due to brain motion. From this trajectory we can predict the number of replicates needed to reconstruct a reliable image and study how the motion relates to the physiological measurements. To address the motion effects, we describe a Semi-Hidden Markov Model that estimates the sequence of hidden states most likely to have generated the observations. The model considers that at scanning time the brain is either in a "near-to-rest" (S1) state or a "far-from-rest" (S2) state. Our algorithm assigns probabilities to each state based on concomitant physiological measurements. Using the Viterbi algorithm we estimate the most likely path of states and select the lines observed in S1. Because there is no gold standard, we suggest comparing our result with a stack of images collected after the animal is sacrificed. Conditioned on inherent experimental and technological limitations, the results of this work offer a description of the brain movement caused by physiology and a solution for reconstructing reliable images from in vivo microscopy.
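
    As a rough illustration of the two ingredients named above, the sketch below pairs a Normalized Cross-Correlation measure between scan lines with a two-state Viterbi decoder for the S1/S2 labels. It is a simplified stand-in (an ordinary HMM rather than the semi-hidden Markov model of the thesis), and the function names and emission inputs are assumptions.

    ```python
    import numpy as np

    def ncc(line_a, line_b):
        """Normalized Cross-Correlation between two scan lines; values near 1
        suggest the tissue was in (almost) the same position for both scans."""
        a = (line_a - line_a.mean()) / (line_a.std() + 1e-12)
        b = (line_b - line_b.mean()) / (line_b.std() + 1e-12)
        return float(np.mean(a * b))

    def viterbi_two_state(log_emission, log_trans, log_prior):
        """Most likely sequence of states S1 ("near-to-rest", index 0) and
        S2 ("far-from-rest", index 1), given per-line log-likelihoods derived,
        for example, from concomitant physiological measurements.
        log_emission: (T, 2), log_trans: (2, 2), log_prior: (2,)."""
        T = log_emission.shape[0]
        score = np.zeros((T, 2))
        back = np.zeros((T, 2), dtype=int)
        score[0] = log_prior + log_emission[0]
        for t in range(1, T):
            cand = score[t - 1][:, None] + log_trans   # cand[i, j]: from i to j
            back[t] = cand.argmax(axis=0)
            score[t] = cand.max(axis=0) + log_emission[t]
        path = np.empty(T, dtype=int)
        path[-1] = score[-1].argmax()
        for t in range(T - 2, -1, -1):
            path[t] = back[t + 1, path[t + 1]]
        return path   # keep only the lines decoded as S1 (path == 0)
    ```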

    Ensemble learning of high dimension datasets

    Ensemble learning, an approach in machine learning, makes decisions based on the collective decision of a committee of learners to solve complex tasks with minimal human intervention. Advances in computing technology have enabled researchers to build datasets with features numbering in the thousands and to train more accurate predictive models. Unfortunately, high-dimensional datasets are especially challenging for machine learning due to the phenomenon dubbed the "curse of dimensionality". One approach to overcoming this challenge is ensemble learning using the Random Subspace (RS) method, which has been shown empirically to perform very well, although there are few theoretical explanations for its effectiveness in classification tasks. In this thesis, we aim to provide theoretical insights into RS ensemble classifiers and thereby a more in-depth understanding of the theoretical foundations of ensemble classifiers in general. We investigate the conditions for norm preservation under RS projections; these provide the theoretical basis for using RS in algorithms based on the geometry of the data (e.g. clustering, nearest-neighbour). We then investigate guarantees for the dot product of two random vectors after RS projection, which are useful for capturing the geometric structure of a classification problem. We then investigate the accuracy of a majority-vote ensemble using a generalized Polya-urn model whose parameters are derived from diversity measures. We discuss the practical implications of the model, explore the noise tolerance of ensembles, and give a plausible explanation for the effectiveness of ensembles. We provide empirical corroboration for our main results with both synthetic and real-world high-dimensional data, and we discuss the implications of our theory for other applications (e.g. compressive sensing). Based on our results, we propose a method of building ensembles for deep neural network image classification using RS projections without retraining the network, which shows improved accuracy and very good robustness to adversarial examples. Ultimately, we hope that the insights gained in this thesis make inroads towards answering a key open question for ensemble classifiers: "When will an ensemble of weak learners outperform a single carefully tuned learner?"
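
    To make the last step concrete, the sketch below builds a Random Subspace ensemble over fixed feature vectors (for example, activations taken from a pretrained deep network) and combines the members by majority vote. The choice of a k-NN base learner, the subspace dimension, and the ensemble size are illustrative assumptions, not the configuration used in the thesis.

    ```python
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def fit_rs_ensemble(X, y, n_members=25, subspace_dim=64, seed=0):
        """Random Subspace ensemble: each member is trained on a random subset
        of the feature coordinates, so no member faces the full dimensionality.
        X: (n_samples, n_features); y: integer class labels (>= 0)."""
        rng = np.random.default_rng(seed)
        members = []
        for _ in range(n_members):
            idx = rng.choice(X.shape[1], size=subspace_dim, replace=False)
            clf = KNeighborsClassifier(n_neighbors=5).fit(X[:, idx], y)
            members.append((idx, clf))
        return members

    def predict_majority(members, X):
        """Majority vote over the members' predictions."""
        votes = np.stack([clf.predict(X[:, idx]) for idx, clf in members])
        return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
    ```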

    Secret-Shared Shuffle with Malicious Security

    A secret-shared shuffle (SSS) protocol permutes a secret-shared vector using a random secret permutation. It has found numerous applications; however, it is also an expensive operation and often a performance bottleneck. Chase et al. (Asiacrypt'20) recently proposed a highly efficient semi-honest two-party SSS protocol known as the CGP protocol. It utilizes purposely designed pseudorandom correlations that facilitate a communication-efficient online shuffle phase. That said, semi-honest security is insufficient in many real-world scenarios, since shuffles are usually used in highly sensitive applications. Considering this, recent works (CANS'21, NDSS'22) attempted to enhance the CGP protocol with malicious security over authenticated secret sharings. However, we find that these attempts are flawed, and malicious adversaries can still learn private information via malicious deviations, as demonstrated by the concrete attacks proposed in this paper. The question is then how to fill the gap and design a maliciously secure CGP shuffle protocol. We answer this question by introducing a set of lightweight correlation checks and a leakage reduction mechanism, and we apply these techniques with authenticated secret sharings to achieve malicious security. Notably, our protocol, while increasing security, remains efficient. In the two-party setting, experimental results show that our maliciously secure protocol introduces an acceptable overhead compared to its semi-honest version and is more efficient than the state-of-the-art maliciously secure SSS protocol from the MP-SPDZ library.
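
    For orientation, the sketch below shows only the functionality that an SSS protocol realises: additively secret-shared inputs go in, fresh shares of a secretly permuted vector come out. It uses a local trusted "dealer" for clarity and is emphatically not the CGP protocol, its pseudorandom correlations, or the maliciously secure construction of this paper.

    ```python
    import secrets

    MOD = 2 ** 64  # share arithmetic modulo 2^64

    def share(vec):
        """Additively secret-share a vector: x = s0 + s1 (mod 2^64)."""
        s0 = [secrets.randbelow(MOD) for _ in vec]
        s1 = [(x - r) % MOD for x, r in zip(vec, s0)]
        return s0, s1

    def reconstruct(s0, s1):
        return [(a + b) % MOD for a, b in zip(s0, s1)]

    def ideal_shuffle(s0, s1):
        """Ideal secret-shared shuffle: output fresh shares of a secretly
        permuted vector. A real SSS protocol (e.g. CGP) realises this with
        purpose-built pseudorandom correlations instead of a trusted dealer."""
        x = reconstruct(s0, s1)
        perm = list(range(len(x)))
        for i in range(len(perm) - 1, 0, -1):   # Fisher-Yates with a CSPRNG
            j = secrets.randbelow(i + 1)
            perm[i], perm[j] = perm[j], perm[i]
        return share([x[i] for i in perm])
    ```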

    Local selection of features and its applications to image search and annotation

    In multimedia applications, direct representations of data objects typically involve hundreds or thousands of features. Given a query object, the similarity between the query object and a database object can be computed as the distance between their feature vectors. The neighborhood of the query object consists of those database objects that are close to the query object. The semantic quality of the neighborhood, which can be measured as the proportion of neighboring objects that share the same class label as the query object, is crucial for many applications, such as content-based image retrieval and automated image annotation. However, noisy or irrelevant features introduce errors into similarity measurements that are detrimental to the neighborhood quality of data objects. One way to alleviate the negative impact of noisy features is to use feature selection techniques in data preprocessing. From the original vector space, feature selection techniques select a subset of features, which can be used subsequently in supervised or unsupervised learning algorithms for better performance. However, their effectiveness at improving the quality of data neighborhoods is rarely evaluated in the literature. In addition, most traditional feature selection techniques are global, in the sense that they compute a single set of features across the entire database. As a consequence, the possibility that feature importance may vary across different data objects or classes of objects is neglected. To compute a better neighborhood structure for objects in high-dimensional feature spaces, this dissertation proposes several techniques for selecting features that are important to the local neighborhood of individual objects. These techniques are then applied to image applications such as content-based image retrieval and image label propagation. Firstly, an iterative K-NN graph construction method for image databases is proposed. A local variant of the Laplacian Score is designed for the selection of features for individual images. Noisy features are detected and sparsified iteratively from the original standardized feature vectors. This technique is incorporated into an approximate K-NN graph construction method so as to improve the semantic quality of the graph. Secondly, in a content-based image retrieval system, a generalized version of the Laplacian Score is used to compute different feature subspaces for images in the database. For online search, a query image is ranked in the feature spaces of database images, and those database images for which the query image is ranked highly are selected as the query results. Finally, a supervised method for the local selection of image features is proposed for refining the similarity graph used in an image label propagation framework. By using only the selected features to compute the edges leading from labeled image nodes to unlabeled image nodes, better annotation accuracy can be achieved. Experimental results on several datasets are provided in this dissertation to demonstrate the effectiveness of the proposed techniques for the local selection of features, and for the image applications under consideration.
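
    As a point of reference for the feature-scoring step, the sketch below computes the standard (global) Laplacian Score of each feature from a k-NN affinity graph; smaller scores indicate features that better preserve neighborhood structure. The local and generalized variants proposed in the dissertation score features per image or per subspace, so this is only an assumed baseline with illustrative parameter choices.

    ```python
    import numpy as np
    from sklearn.neighbors import kneighbors_graph

    def laplacian_scores(X, k=10):
        """Global Laplacian Score of each feature (smaller = better at
        preserving the k-NN neighborhood structure of the data)."""
        W = np.asarray(kneighbors_graph(X, k, mode='connectivity',
                                        include_self=False).todense())
        W = np.maximum(W, W.T)              # symmetrise the k-NN graph
        d = W.sum(axis=1)                   # degree of each object
        L = np.diag(d) - W                  # graph Laplacian
        scores = np.empty(X.shape[1])
        for r in range(X.shape[1]):
            f = X[:, r] - (X[:, r] @ d) / d.sum()   # centre w.r.t. degrees
            denom = f @ (d * f)                     # f^T D f
            scores[r] = (f @ L @ f) / denom if denom > 1e-12 else np.inf
        return scores
    ```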

    Discovery as Regulation

    This article develops an approach to discovery that is grounded in regulatory theory and administrative subpoena power. The conventional judicial and scholarly view about discovery is that it promotes fair and accurate outcomes and nudges the parties toward settlement. While commonly held, however, this belief is increasingly outdated and suffers from limitations. Among them, it has generated endless controversy about the problem of discovery costs. Indeed, a growing chorus of scholars and courts has offered an avalanche of reforms, from cost shifting and bespoke discovery contracts to outright elimination. Recently, Judge Thomas Hardiman quipped that if he had absolute power, he would abolish discovery for cases involving less than $500,000. These debates, however, are at a standstill, and existing scholarship offers incomplete treatment of discovery theory that might move them forward. The core insight of the project is that in the private-enforcement context—where Congress deliberately employs private litigants as the main method of statutory enforcement—there is a surprisingly strong case that our current discovery system should be understood in part as serving regulatory goals analogous to administrative subpoena power. That is, discovery here can be seen as an extension of the subpoena power that agencies like the SEC, FTC, and EPA possess and is the lynchpin of a system that depends on private litigants to enforce our most important statutes. By forcing parties to disclose large amounts of information, discovery deters harm and, most importantly, shapes industry-wide practices and the primary behavior of regulated entities. This approach has a vast array of implications for the scope of discovery as well as the debate over costs. Scholars and courts should thus grapple with the consequences of what I call “regulatory discovery” for the entire legal system.

    Changing Priorities. 3rd VIBRArch

    In order to warrant a good present and future for people around the planet and to safeguard the planet itself, research in architecture has to realize its full potential. Therefore, the aims of the 3rd Valencia International Biennial of Research in Architecture are:
    - To focus on the most relevant needs of humanity and the planet and on what architectural research can do to address them.
    - To assess the evolution of architectural research in matters of traditional interest and the current state of these popular and widespread topics.
    - To deepen understanding of the current state and findings of architectural research on subjects akin to post-capitalism and frequently related to equal opportunities and the universal right to personal development and happiness.
    - To showcase all kinds of research related to the new and holistic concept of sustainability and to the climate emergency.
    - To place in the spotlight ongoing works or available proposals developed by architectural researchers to combat the effects of the COVID-19 pandemic.
    - To underline the capacity of architectural research to develop resilience and to adapt to changing priorities.
    - To highlight architecture's multidisciplinarity as a melting pot of multiple approaches, points of view and areas of expertise.
    - To open new perspectives for architectural research by promoting the development of multidisciplinary and inter-university networks and research groups.
    For all that, the 3rd Valencia International Biennial of Research in Architecture is open not only to architects, but also to any academic, practitioner, professional or student with a determination to develop research in architecture or neighboring fields.
    Cabrera Fausto, I. (2023). Changing Priorities. 3rd VIBRArch. Editorial Universitat Politècnica de València. https://doi.org/10.4995/VIBRArch2022.2022.1686

    Proceedings of DRS Learn X Design 2019: Insider Knowledge
