
    Revisiting data augmentation for subspace clustering

    Subspace clustering is the classical problem of clustering a collection of data samples that approximately lie around several low-dimensional subspaces. The current state-of-the-art approaches for this problem are based on the self-expressive model, which represents each sample as a linear combination of other samples. However, these approaches require sufficiently well-spread samples for accurate representation, which may not be available in many applications. In this paper, we shed light on this commonly neglected issue and argue that the data distribution within each subspace plays a critical role in the success of self-expressive models. Our proposed solution is motivated by the central role of data augmentation in the generalization power of deep neural networks. We propose two subspace clustering frameworks, for the unsupervised and semi-supervised settings, that use augmented samples as an enlarged dictionary to improve the quality of the self-expressive representation. For the semi-supervised problem, we present an automatic augmentation strategy that uses a few labeled samples and relies on the fact that the data samples lie in a union of multiple linear subspaces. Experimental results confirm the effectiveness of data augmentation, as it significantly improves the performance of general self-expressive models. Comment: 38 pages (including 10 pages of supplementary material).
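
    A minimal sketch of the self-expressive idea with an augmented dictionary, assuming a simple noise-based augmentation for concreteness; this is a generic illustration of the representation step only, not the paper's proposed frameworks, and the resulting coefficient matrix would normally feed spectral clustering.

```python
# Minimal sketch: each sample is written as a sparse linear combination of
# the other samples (original plus augmented copies).  The augmentation here
# (additive noise) is an illustrative assumption, not the paper's strategy.
import numpy as np
from sklearn.linear_model import Lasso

def self_expressive_coeffs(X, alpha=0.01):
    """Sparse coefficients C with X[i] ~ sum_j C[i, j] X[j], C[i, i] = 0."""
    n = X.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        mask = np.arange(n) != i                 # exclude the sample itself
        lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        lasso.fit(X[mask].T, X[i])               # dictionary atoms as columns
        C[i, mask] = lasso.coef_
    return C

rng = np.random.default_rng(0)
# Two 1-dimensional subspaces in R^5, each sampled sparsely (few samples).
basis1, basis2 = rng.normal(size=5), rng.normal(size=5)
X = np.vstack([np.outer(rng.normal(size=4), basis1),
               np.outer(rng.normal(size=4), basis2)])

# Enlarge the dictionary with augmented samples (small perturbations here).
X_aug = np.vstack([X, X + 0.01 * rng.normal(size=X.shape)])
C = self_expressive_coeffs(X_aug)
print(C.shape)   # (16, 16); each original sample now draws on 15 atoms
```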

    Towards Developing Computer Vision Algorithms and Architectures for Real-world Applications

    Computer vision technology automatically extracts high-level, meaningful information from visual data such as images or videos, and object recognition and detection algorithms are essential in most computer vision applications. In this dissertation, we focus on developing algorithms for real-life computer vision applications, presenting innovative algorithms for object segmentation and feature extraction for object and action recognition in video data, sparse feature selection algorithms for medical image analysis, and automated feature extraction using convolutional neural networks for blood cancer grading. To detect and classify objects in video, the objects have to be separated from the background, and discriminant features must be extracted from the region of interest before being fed to a classifier. Effective object segmentation and feature extraction are often application specific and pose major challenges for object detection and classification tasks. In this dissertation, we present an effective object-flow-based ROI generation algorithm for segmenting moving objects in video data, which can be applied in surveillance and self-driving vehicles. Optical flow can also be used as a feature for human action recognition, and we show how optical flow features combined with a pre-trained convolutional neural network improve the performance of human action recognition algorithms. Both algorithms outperformed the state of the art at the time. Medical images and videos pose unique challenges for image understanding, mainly because tissues and cells are often irregularly shaped, colored, and textured, and hand-selecting the most discriminant features is often difficult, so an automated feature selection method is desired. Sparse learning is a technique to extract the most discriminant and representative features from raw visual data. However, sparse learning with L1 regularization only takes sparsity along the feature dimension into consideration; we improve the algorithm so that it also selects the type of features, and less important or noisy feature types are entirely removed from the feature set. We demonstrate this algorithm on endoscopy images to detect unhealthy abnormalities in the esophagus and stomach, such as ulcers and cancer. Besides the sparsity constraint, other application-specific constraints and prior knowledge may also need to be incorporated into the loss function in sparse learning to obtain the desired results. We demonstrate how to incorporate a similar-inhibition constraint and gaze and attention priors in sparse dictionary selection for gastroscopic video summarization, enabling intelligent key frame extraction from gastroscopic video data. With recent advances in multi-layer neural networks, automatic end-to-end feature learning has become feasible. Convolutional neural networks mimic the mammalian visual cortex and can extract the most discriminant features automatically from training samples. We present a convolutional neural network with a hierarchical classifier to grade the severity of follicular lymphoma, a type of blood cancer, and it reaches 91% accuracy, on par with analysis by expert pathologists.
Developing real-world computer vision applications involves more than developing core vision algorithms to extract and understand information from visual data; it is also subject to many practical requirements and constraints, such as hardware and computing infrastructure, cost, robustness to lighting changes and deformation, and ease of use and deployment. The general processing pipeline and system architecture for computer vision based applications share many design principles. We developed common processing components and a generic framework for computer vision applications, and a versatile scale-adaptive template matching algorithm for object detection. We demonstrate the design principles and best practices by developing and deploying a complete computer vision application in real life, a multi-channel water level monitoring system, where the techniques and design methodology can be generalized to other real-life applications. General software engineering principles, such as modularity, abstraction, robustness to requirement changes, and generality, are all demonstrated in this research.
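
    A minimal sketch of the feature-type (group) sparsity idea mentioned in the abstract above, assuming a group-lasso penalty solved by proximal gradient descent; the grouping, penalty weight and solver are illustrative assumptions, not the dissertation's actual algorithm.

```python
# Minimal sketch: group-sparse feature selection, where each "group" is one
# feature type (e.g., color, texture, shape).  The group-lasso proximal step
# zeroes out entire feature types whose combined weight is small, rather than
# individual dimensions.
import numpy as np

def group_soft_threshold(w, groups, lam):
    """Proximal operator of the group-lasso penalty lam * sum_g ||w_g||_2."""
    w = w.copy()
    for idx in groups:                      # idx: indices of one feature type
        norm = np.linalg.norm(w[idx])
        scale = max(0.0, 1.0 - lam / norm) if norm > 0 else 0.0
        w[idx] *= scale                     # whole group shrinks, possibly to 0
    return w

def fit_group_sparse(X, y, groups, lam=0.5, lr=0.01, n_iter=500):
    """Squared loss plus group-lasso penalty, solved by proximal gradient."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n        # gradient of the squared loss
        w = group_soft_threshold(w - lr * grad, groups, lr * lam)
    return w

# Toy example: three feature types of 4 dimensions each; only the first type
# is informative, so the other two should be removed entirely.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))
y = X[:, :4] @ np.array([1.0, -2.0, 0.5, 1.5]) + 0.1 * rng.normal(size=200)
groups = [range(0, 4), range(4, 8), range(8, 12)]
w = fit_group_sparse(X, y, groups)
print([float(np.linalg.norm(w[list(g)])) for g in groups])  # last two norms ~ 0
```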

    Applications in security and evasions in machine learning : a survey

    In recent years, machine learning (ML) has become an important means of providing security and privacy in various applications. ML is used to address serious issues such as real-time attack detection, data leakage vulnerability assessment and many more. ML extensively supports the demanding requirements of the current security and privacy landscape across a range of areas such as real-time decision-making, big data processing, reduced cycle time for learning, cost-efficiency and error-free processing. Therefore, in this paper, we review the state-of-the-art approaches where ML can be applied effectively to fulfill current real-world security requirements. We examine different security applications where ML models play an essential role and compare their accuracy results along several possible dimensions. Analyzing ML algorithms in security applications provides a blueprint for an interdisciplinary research area. Even with the use of current sophisticated technology and tools, attackers can evade ML models by committing adversarial attacks. Therefore, the vulnerability of ML models to adversarial attacks needs to be assessed at development time. Accordingly, as a supplement to this point, we also analyze the different types of adversarial attacks on ML models. To give a proper visualization of security properties, we present the threat model and defense strategies against adversarial attack methods. Moreover, we illustrate adversarial attacks based on the attacker's knowledge of the model and identify the points in the model at which possible attacks may be committed. Finally, we also investigate different properties of adversarial attacks.
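
    A minimal sketch of the kind of white-box evasion attack discussed above, in the spirit of the standard fast gradient sign method (FGSM) against a simple logistic-regression model; this is a generic illustration, not a method proposed in the survey, and the model and step size are illustrative assumptions.

```python
# Minimal sketch: an attacker who knows the model perturbs an input in the
# direction that increases the model's loss (FGSM-style evasion).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps=0.5):
    """Adversarial example against a logistic-regression classifier.

    The gradient of the cross-entropy loss w.r.t. the input x is (p - y) * w,
    so the attack steps eps in its sign direction.
    """
    p = sigmoid(w @ x + b)          # model's predicted probability
    grad_x = (p - y) * w            # d(loss)/dx for logistic regression
    return x + eps * np.sign(grad_x)

# Toy demonstration: a benign sample (label 0) is nudged so that the same
# model assigns it a much higher probability of the positive class.
rng = np.random.default_rng(1)
w, b = rng.normal(size=5), 0.0
x = rng.normal(size=5)
x_adv = fgsm_perturb(x, y=0.0, w=w, b=b)
print(sigmoid(w @ x + b), sigmoid(w @ x_adv + b))  # probability increases
```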

    Cyber Security and Critical Infrastructures 2nd Volume

    The second volume of the book contains the manuscripts that were accepted for publication in the MDPI Special Topic "Cyber Security and Critical Infrastructure" after a rigorous peer-review process. Authors from academia, government and industry contributed innovative solutions, consistent with the interdisciplinary nature of cybersecurity. The book contains 16 articles: an editorial that discusses the current challenges, innovative solutions and real-world experiences involving critical infrastructure, and 15 original papers that present state-of-the-art solutions to attacks on critical systems.

    Data Reduction Algorithms in Machine Learning and Data Science

    Raw data usually need to be pre-processed for better representation or discrimination of classes. This pre-processing can be done by data reduction, i.e., either reduction in dimensionality or numerosity (cardinality). Dimensionality reduction can be used for feature extraction or data visualization. Numerosity reduction is useful for ranking data points or finding the most and least important data points. This thesis proposes several algorithms for data reduction, i.e., dimensionality and numerosity reduction, in machine learning and data science. Dimensionality reduction covers feature extraction and feature selection methods, while numerosity reduction includes prototype selection and prototype generation approaches. This thesis focuses on feature extraction and prototype selection for data reduction. Dimensionality reduction methods can be divided into three categories, i.e., spectral, probabilistic, and neural network-based methods. The spectral methods have a geometrical point of view and are mostly reduced to the generalized eigenvalue problem. Probabilistic and neural network-based methods have stochastic and information-theoretic foundations, respectively. Numerosity reduction methods can be divided into methods based on variance, geometry, and isolation. For dimensionality reduction, under the spectral category, I propose weighted Fisher discriminant analysis, Roweis discriminant analysis, and image quality aware embedding. I also propose quantile-quantile embedding as a probabilistic method where the distribution of the embedding is chosen by the user. Backprojection, Fisher losses, and dynamic triplet sampling using Bayesian updating are other proposed methods in the neural network-based category. Backprojection is for training shallow networks with a projection-based perspective in manifold learning. Two Fisher losses are proposed for training Siamese triplet networks for increasing and decreasing the inter- and intra-class variances, respectively. Two dynamic triplet mining methods, which are based on Bayesian updating to draw triplet samples stochastically, are proposed. For numerosity reduction, principal sample analysis and instance ranking by matrix decomposition are the proposed variance-based methods; these methods rank instances using inter-/intra-class variances and matrix factorization, respectively. Curvature anomaly detection, in which the points are assumed to be the vertices of a polyhedron, and isolation Mondrian forest are the proposed methods based on geometry and isolation, respectively. To assess the proposed tools developed for data reduction, I apply them to some applications in medical image analysis, image processing, and computer vision. Data reduction, used as a pre-processing tool, has different applications because it provides various ways of feature extraction and prototype selection for different types of data. Dimensionality reduction extracts informative features, and prototype selection selects the most informative data instances. For example, for medical image analysis, I use Fisher losses and dynamic triplet sampling to embed histopathology image patches and demonstrate how different the tumorous tissue types are from normal ones. I also propose offline/online triplet mining using extreme distances for this embedding.
In image processing and computer vision applications, I propose Roweisfaces and Roweisposes for face recognition and 3D action recognition, respectively, using my proposed Roweis discriminant analysis method. I also introduce the concepts of anomaly landscape and anomaly path using the proposed curvature anomaly detection and use them to denoise images and video frames. I report extensive experiments on different datasets to show the effectiveness of the proposed algorithms. Through these experiments, I demonstrate that the proposed methods are useful for extracting informative features and instances for better accuracy, representation, prediction, class separation, data reduction, and embedding. I show that the proposed dimensionality reduction methods can extract informative features for better separation of classes. An example is obtaining an embedding space that separates cancer histopathology patches from normal patches, which helps hospitals diagnose cancers more easily in an automatic way. I also show that the proposed numerosity reduction methods are useful for ranking data instances based on their importance and reducing data volumes without a significant drop in the performance of machine learning and data science algorithms.
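
    A minimal sketch of the claim above that spectral dimensionality reduction reduces to a generalized eigenvalue problem, using plain Fisher discriminant analysis as the example; this is standard FDA, not the weighted or Roweis variants proposed in the thesis, and the regularisation term is an illustrative assumption.

```python
# Minimal sketch: Fisher discriminant analysis as a generalized eigenvalue
# problem, maximizing between-class scatter S_B against within-class scatter
# S_W, i.e. solving S_B u = lambda S_W u and keeping the top eigenvectors.
import numpy as np
from scipy.linalg import eigh

def fda_projection(X, y, n_components=1):
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    S_W = np.zeros((d, d))          # within-class scatter
    S_B = np.zeros((d, d))          # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mean_c = Xc.mean(axis=0)
        S_W += (Xc - mean_c).T @ (Xc - mean_c)
        diff = (mean_c - mean_all)[:, None]
        S_B += len(Xc) * (diff @ diff.T)
    # Generalized eigenvalue problem; small ridge keeps S_W positive definite.
    eigvals, eigvecs = eigh(S_B, S_W + 1e-6 * np.eye(d))
    return eigvecs[:, ::-1][:, :n_components]   # descending eigenvalue order

# Toy usage: two Gaussian classes projected onto the most discriminative axis.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 3)), rng.normal(2, 1, (100, 3))])
y = np.array([0] * 100 + [1] * 100)
U = fda_projection(X, y)
print((X @ U).shape)  # (200, 1)
```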

    Advances in knowledge discovery and data mining Part II

    19th Pacific-Asia Conference, PAKDD 2015, Ho Chi Minh City, Vietnam, May 19-22, 2015, Proceedings, Part II

    Semi-supervised and unsupervised kernel-based novelty detection with application to remote sensing images

    The main challenge of new information technologies is to retrieve intelligible information from the large volume of digital data gathered every day. Among the variety of existing data sources, the satellites continuously observing the surface of the Earth are key to the monitoring of our environment. The new generation of satellite sensors is tremendously increasing the range of possible applications, but also the need for efficient processing methodologies that extract information relevant to the users' needs in an automatic or semi-automatic way. This is where machine learning comes into play to transform complex data into simplified products, such as maps of land-cover changes or classes, by learning from data examples annotated by experts. These annotations, also called labels, may be difficult or costly to obtain since they are established on the basis of ground surveys. As an example, it is extremely difficult to access a region recently flooded or affected by wildfires. In these situations, the detection of changes has to be done with only annotations from unaffected regions. Similarly, it is difficult to have information on all the land-cover classes present in an image while being interested in the detection of a single class of interest. These challenging situations are called novelty detection or one-class classification in machine learning. In these situations, the learning phase has to rely on only a very limited set of annotations, but can exploit the large set of unlabeled pixels available in the images. This setting, called semi-supervised learning, allows the detection to be significantly improved. In this thesis we address the development of methods for novelty detection and one-class classification with few or no labels. The proposed methodologies build upon kernel methods, which provide a principled but flexible framework for learning from data with potentially non-linear feature relations. The thesis is divided into two parts, each with a different assumption on the data structure and both addressing unsupervised (automatic) and semi-supervised (semi-automatic) learning settings. The first part assumes the data to be formed by arbitrarily-shaped and overlapping clusters and studies the use of kernel machines, such as Support Vector Machines or Gaussian Processes. An emphasis is put on robustness to noise and outliers and on the automatic retrieval of parameters. Experiments on multi-temporal multispectral images for change detection are carried out using only information from unchanged regions or none at all. The second part assumes high-dimensional data to lie on multiple low-dimensional structures, called manifolds. We propose a method seeking a sparse and low-rank representation of the data mapped in a non-linear feature space. This representation allows us to build a graph, which is cut into several groups using spectral clustering. For the semi-supervised case, where a few labels of one class of interest are available, we study several approaches incorporating the graph information. The class labels can either be propagated on the graph, constrain spectral clustering, or be used to train a one-class classifier regularized by the given graph. Experiments on the unsupervised and one-class classification of hyperspectral images demonstrate the effectiveness of the proposed approaches.
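
    A minimal sketch of kernel-based one-class classification in the unsupervised/novelty-detection setting described above, using scikit-learn's standard one-class SVM; the data, kernel and hyperparameters are illustrative assumptions, not the thesis' specific algorithms.

```python
# Minimal sketch: a one-class SVM with an RBF kernel is fit on "normal"
# samples only (e.g. pixels from unchanged regions), then flags samples that
# deviate from that class as novelties (e.g. changed or flooded pixels).
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# "Normal" training data: samples from the class of interest only.
X_train = rng.normal(loc=0.0, scale=1.0, size=(500, 4))

# Test data: mostly normal samples plus a block of anomalous ones with a
# shifted signature.
X_test = np.vstack([
    rng.normal(loc=0.0, scale=1.0, size=(100, 4)),
    rng.normal(loc=4.0, scale=1.0, size=(20, 4)),
])

# nu bounds the fraction of training points treated as outliers; gamma is the
# RBF kernel width.  Both would normally be tuned, e.g. for robustness to noise.
clf = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(X_train)
pred = clf.predict(X_test)          # +1 = normal, -1 = novelty
print("flagged as novel:", int((pred == -1).sum()), "of", len(X_test))
```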

    Developing an Effective Detection Framework for Targeted Ransomware Attacks in Brownfield Industrial Internet of Things

    The Industrial Internet of Things (IIoT) is being interconnected with many critical industrial activities, creating major cyber security concerns. The key concern is with edge systems of Brownfield IIoT, where new devices and technologies are deployed to interoperate with legacy industrial control systems and leverage the benefits of IoT. These edge devices, such as edge gateways, have opened the way to advanced attacks such as targeted ransomware. Various pre-existing security solutions can detect and mitigate such attacks but are often ineffective due to the heterogeneous nature of IIoT devices and protocols and their interoperability demands. Consequently, developing new detection solutions is essential. The key challenges in developing detection solutions for targeted ransomware attacks in IIoT systems include 1) understanding attacks and their behaviour, 2) designing accurate IIoT system models to test attacks, 3) obtaining realistic data representing IIoT systems' activities and connectivity, and 4) identifying attacks. This thesis makes important contributions to research by investigating targeted ransomware attacks against IIoT edge systems and developing a new detection framework. The first contribution is the development of the world's first example of ransomware specifically targeting IIoT edge gateways. The experimental results demonstrate that such an attack is now possible on edge gateways. Also, kernel-related activity parameters appear to be significant indicators of crypto-ransomware behaviour, much more so than for similar attacks on workstations. The second contribution is the development of a new holistic end-to-end IIoT security testbed (i.e., Brown-IIoTbed) that can be easily reproduced and reconfigured to support new processes and security scenarios. The results show that Brown-IIoTbed operates efficiently in terms of its functions and security testing. The third contribution is the generation of a first-of-its-kind dataset tailored for IIoT systems, called X-IIoTID, covering targeted ransomware attacks and their activities. The dataset includes connectivity- and device-agnostic features collected from various data sources. The final contribution is the development of a new asynchronous peer-to-peer federated deep learning framework tailored for IIoT edge gateways for detecting targeted ransomware attacks. The framework's effectiveness has been evaluated against pre-existing datasets and the newly developed X-IIoTID dataset.
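
    A minimal sketch of peer-to-peer federated parameter averaging between edge gateways, as a generic illustration of the decentralized learning idea named above; the gateway count, model, gossip rule and data are illustrative assumptions and this is not the thesis' actual framework.

```python
# Minimal sketch: each gateway trains a local model on its own traffic, then
# mixes parameters with one randomly chosen peer (pairwise averaging), so no
# central server ever sees the raw data.
import numpy as np

rng = np.random.default_rng(0)
N_GATEWAYS, N_FEATURES = 4, 8

# Local logistic-regression weights, one vector per gateway.
weights = [rng.normal(size=N_FEATURES) for _ in range(N_GATEWAYS)]

def local_update(w, X, y, lr=0.1):
    """One local gradient step of logistic regression on the gateway's data."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return w - lr * X.T @ (p - y) / len(y)

def gossip_round(weights, data):
    """Each gateway updates locally, then averages with one random peer."""
    for i in range(len(weights)):
        X, y = data[i]
        weights[i] = local_update(weights[i], X, y)
        j = rng.choice([k for k in range(len(weights)) if k != i])
        mixed = 0.5 * (weights[i] + weights[j])
        weights[i], weights[j] = mixed.copy(), mixed.copy()

# Toy per-gateway data (benign activity = 0 vs. ransomware-like activity = 1).
data = []
for _ in range(N_GATEWAYS):
    X = rng.normal(size=(200, N_FEATURES))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)
    data.append((X, y))

for _ in range(50):
    gossip_round(weights, data)
print(np.round(weights[0][:4], 2))  # gateways converge to similar weights
```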

    Regularised inference for changepoint and dependency analysis in non-stationary processes

    Multivariate correlated time series are found in many modern socio-scientific domains such as neurology, cyber-security, genetics and economics. The focus of this thesis is on efficiently modelling and inferring dependency structure both between data-streams and across points in time. In particular, it is considered that the generating processes may vary over time, and are thus non-stationary. For example, patterns of brain activity are expected to change when performing different tasks or thought processes. Models that can describe such behaviour must be adaptable over time. However, such adaptability creates challenges for model identification. In order to perform learning or estimation one must control how model complexity grows in relation to the volume of data. To this end, one of the main themes of this work is to investigate both the implementation and effect of assumptions on sparsity, relating to model parsimony at an individual time-point, and smoothness, i.e. how quickly a model may change over time. Throughout this thesis two basic classes of non-stationary model are studied. Firstly, a class of piecewise constant Gaussian graphical models (GGM) is introduced that can encode graphical dependencies between data-streams. In particular, a group-fused regulariser is examined that allows for the estimation of changepoints across graphical models. The second part of the thesis focuses on extending a class of locally-stationary wavelet (LSW) models. Unlike the raw GGM, this enables one to encode dependencies not only between data-streams, but also across time. A set of sparsity-aware estimators are developed for estimating the spectral parameters of such models, which are then compared to previous works in the domain.
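
    A toy sketch of the piecewise constant Gaussian graphical model idea described above, assuming a known changepoint and using scikit-learn's graphical lasso to fit a sparse dependency graph on each segment; this illustrates per-segment estimation only and is not the group-fused regulariser developed in the thesis.

```python
# Toy sketch: split a multivariate series at a (given) changepoint and fit a
# sparse precision (inverse covariance) matrix on each segment, then compare
# the two dependency graphs.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
T, p, tau = 400, 5, 200            # series length, dimension, true changepoint

# Regime 1: independent streams.  Regime 2: streams 0 and 1 become correlated.
cov1 = np.eye(p)
cov2 = np.eye(p)
cov2[0, 1] = cov2[1, 0] = 0.8
X = np.vstack([
    rng.multivariate_normal(np.zeros(p), cov1, size=tau),
    rng.multivariate_normal(np.zeros(p), cov2, size=T - tau),
])

# Fit a sparse GGM on each side of the (assumed known) changepoint.
gl1 = GraphicalLasso(alpha=0.1).fit(X[:tau])
gl2 = GraphicalLasso(alpha=0.1).fit(X[tau:])

# The (0, 1) entry of the precision matrix is ~0 before the change and clearly
# non-zero after it, i.e. the dependency graph gains an edge.
print(round(gl1.precision_[0, 1], 3), round(gl2.precision_[0, 1], 3))
```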

    CLASSIFYING AND RESPONDING TO NETWORK INTRUSIONS

    Intrusion detection systems (IDS) have been widely adopted within the IT community as passive monitoring tools that report security-related problems to system administrators. However, the increasing number and evolving complexity of attacks, along with the growth and complexity of networking infrastructures, have led to overwhelming numbers of IDS alerts, which leave a significantly smaller timeframe for a human to respond. The need for automated response is therefore very much evident. However, the adoption of such approaches has been constrained by practical limitations and administrators' consequent mistrust of systems' abilities to issue appropriate responses. The thesis presents a thorough analysis of the problem of intrusions and identifies false alarms as the main obstacle to the adoption of automated response. A critical examination of existing automated response systems is provided, along with a discussion of why a new solution is needed. The thesis determines that, while detection capabilities remain imperfect, the problem of false alarms cannot be eliminated. Automated response technology must take this into account and instead focus upon avoiding the disruption of legitimate users and services in such scenarios. The overall aim of the research has therefore been to enhance the automated response process by considering the context of an attack, and to investigate and evaluate a means of making intelligent response decisions. The realisation of this objective has included the formulation of a response-oriented taxonomy of intrusions, which is used as a basis to systematically study intrusions and understand the threats detected by an IDS. From this foundation, a novel Flexible Automated and Intelligent Responder (FAIR) architecture has been designed as the basis from which flexible and escalating levels of response are offered, according to the context of an attack. The thesis describes the design and operation of the architecture, focusing upon the contextual factors influencing the response process and the way they are measured and assessed to formulate response decisions. The architecture is underpinned by the use of response policies, which provide a means to reflect the changing needs and characteristics of organisations. The main concepts of the new architecture were validated via a proof-of-concept prototype system. A series of test scenarios were used to demonstrate how the context of an attack can influence the response decisions, and how the response policies can be customised and used to enable intelligent decisions. This helped to prove that the concept of flexible automated response is indeed viable, and that the research has provided a suitable contribution to knowledge in this important domain.
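
    A toy sketch of context-dependent, escalating automated response in the spirit of the policy-driven decisions described above; the alert fields, risk thresholds and response levels are hypothetical illustrations, not the FAIR architecture's actual policy language.

```python
# Toy sketch: a policy maps the context of an alert (detector confidence,
# criticality of the targeted asset, internal vs. external source) to one of
# several escalating response levels, so low-confidence alerts never disrupt
# legitimate users and services.
from dataclasses import dataclass

@dataclass
class AlertContext:
    confidence: float        # detector's confidence that the alert is real (0-1)
    target_criticality: int  # importance of the targeted asset (1 = low, 3 = high)
    source_internal: bool    # whether the suspected source is an internal host

RESPONSE_LEVELS = ["log only", "notify administrator",
                   "rate-limit source", "isolate source host"]

def choose_response(ctx: AlertContext) -> str:
    risk = ctx.confidence * ctx.target_criticality
    if ctx.source_internal:
        risk *= 0.8          # be more cautious about disrupting internal users
    if risk < 0.5:
        return RESPONSE_LEVELS[0]
    if risk < 1.0:
        return RESPONSE_LEVELS[1]
    if risk < 2.0:
        return RESPONSE_LEVELS[2]
    return RESPONSE_LEVELS[3]

# A low-confidence alert on a non-critical asset is only logged, while a
# high-confidence alert on a critical asset triggers host isolation.
print(choose_response(AlertContext(0.3, 1, True)))    # -> "log only"
print(choose_response(AlertContext(0.95, 3, False)))  # -> "isolate source host"
```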