
    New electronic tongue sensor array system for accurate liquor beverage classification

    The use of sensors in different applications is required to improve the monitoring of a process and its variables, as it enables information to be obtained directly from the process while ensuring its quality. This is now possible because of advances in the fabrication of sensors and the development of equipment with high processing capability. These elements enable the development of portable smart systems that can be used directly in the monitoring of the process and the testing of variables which, in some cases, must be evaluated by laboratory tests to ensure high-accuracy measurement results. One of these processes is taste recognition and, in general, the classification of liquids, where electronic tongues offer advantages over traditional monitoring: shorter analysis times, the possibility of online monitoring, and the use of artificial-intelligence strategies for data analysis. However, although some methods and strategies have been developed, further work is needed on strategies that improve the analysis of data from electrochemical sensors. Accordingly, this paper explores the application of an electronic tongue system to the classification of liquor beverages, applied directly to an alcoholic beverage found in specific regions of Colombia. The system uses eight commercial sensors and a data acquisition system, together with a machine-learning-based methodology developed for this aim. Results show the advantages of the system and its accuracy in the analysis and classification of this kind of alcoholic beverage. This research was funded by the Department of Science, Technology and Innovation of Colombia, grant 799, and Universidad Nacional de Colombia, grant 57399.
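    The pipeline described above (an eight-sensor array feeding a machine-learning classifier) can be sketched as follows. The synthetic data, the PCA-plus-SVM pipeline, and all parameter choices are assumptions made for the sketch, not the methodology reported in the paper.

```python
# Illustrative sketch of a sensor-array liquor classifier (not the paper's exact method).
# Assumes each sample is a feature vector built from the eight electrochemical sensor signals.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_per_class, n_sensors, n_classes = 40, 8, 3          # hypothetical acquisition campaign
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(n_per_class, n_sensors))
               for c in range(n_classes)])            # stand-in for sensor-derived features
y = np.repeat(np.arange(n_classes), n_per_class)      # beverage-type labels

clf = make_pipeline(StandardScaler(),                 # sensors have different ranges
                    PCA(n_components=4),              # compress correlated sensor responses
                    SVC(kernel="rbf", C=1.0))         # non-linear decision boundary
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```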

    Multi-Source Data Fusion for Cyberattack Detection in Power Systems

    Cyberattacks can cause a severe impact on power systems unless detected early. However, accurate and timely detection in critical infrastructure systems presents challenges, e.g., due to zero-day vulnerability exploitations and the cyber-physical nature of the system coupled with the need for high reliability and resilience of the physical system. Conventional rule-based and anomaly-based intrusion detection system (IDS) tools are insufficient for detecting zero-day cyber intrusions in industrial control system (ICS) networks. Hence, in this work, we show that fusing information from multiple data sources can help identify cyber-induced incidents and reduce false positives. Specifically, we present how to recognize and address the barriers that can prevent the accurate use of multiple data sources for fusion-based detection. We perform multi-source data fusion for training an IDS in a cyber-physical power system testbed, where we collect cyber- and physical-side data from multiple sensors emulating real-world data sources that would be found in a utility, and synthesize these into features for algorithms to detect intrusions. Results are presented using the proposed data fusion application to infer false data injection and command injection-based man-in-the-middle (MiTM) attacks. After collection, the data fusion application performs a time-synchronized merge and extracts features, followed by pre-processing such as imputation and encoding, before training supervised, semi-supervised, and unsupervised learning models to evaluate the performance of the IDS. A major finding is the improvement of detection accuracy by fusing features from the cyber, security, and physical domains. Additionally, we observed that the co-training technique performs on par with supervised learning methods when fed with our features.
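    A central step described above is the time-synchronized merge of cyber-side and physical-side streams before feature extraction and imputation. The sketch below illustrates that step with pandas; the column names, sampling rates, and 100 ms alignment tolerance are illustrative assumptions rather than details of the testbed.

```python
# Minimal sketch of time-synchronized fusion of cyber and physical data streams.
# Column names, sampling rates, and the 100 ms tolerance are illustrative assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
t0 = pd.Timestamp("2023-01-01")

phys = pd.DataFrame({                                  # e.g. relay / PMU measurements
    "time": t0 + pd.to_timedelta(np.arange(0, 10, 0.1), unit="s"),
    "bus_voltage": 1.0 + 0.01 * rng.standard_normal(100),
})
cyber = pd.DataFrame({                                 # e.g. network monitor packet counts
    "time": t0 + pd.to_timedelta(np.sort(rng.uniform(0, 10, 30)), unit="s"),
    "pkt_rate": rng.poisson(50, 30).astype(float),
})

# Align each physical sample with the most recent cyber observation (both frames sorted on time).
fused = pd.merge_asof(phys, cyber, on="time",
                      direction="backward",
                      tolerance=pd.Timedelta("100ms"))
fused["pkt_rate"] = fused["pkt_rate"].fillna(fused["pkt_rate"].median())  # simple imputation
print(fused.head())
```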

    Algorithms for feature selection and pattern recognition on Grassmann manifolds

    This dissertation presents three distinct application-driven research projects united by ideas and topics from geometric data analysis, optimization, computational topology, and machine learning. We first consider the hyperspectral band selection problem, solved using sparse support vector machines (SSVMs). A supervised embedded approach is proposed, using the property of SSVMs to exhibit a model structure that includes a clearly identifiable gap between zero and non-zero feature vector weights, which permits important bands to be definitively selected in conjunction with the classification problem. An SSVM is trained using bootstrap aggregating to obtain a sample of SSVM models and reduce variability in the band selection process. This preliminary sampling approach for band selection is followed by a secondary band selection, which involves retraining the SSVM to further reduce the set of bands retained. We propose and compare three adaptations of the SSVM band selection algorithm for the multiclass problem. We illustrate the performance of these methods on two benchmark hyperspectral data sets. Second, we propose an approach for capturing the signal variability in data using the framework of the Grassmann manifold (Grassmannian). Labeled points from each class are sampled and used to form abstract points on the Grassmannian. The resulting points have representations as orthonormal matrices and as such do not reside in Euclidean space in the usual sense. There are a variety of metrics which allow us to determine distance matrices that can be used to realize the Grassmannian as an embedding in Euclidean space. Multidimensional scaling (MDS) determines a low-dimensional Euclidean embedding of the manifold, preserving or approximating the Grassmannian geometry based on the distance measure. We illustrate that we can achieve an isometric embedding of the Grassmann manifold using the chordal metric, while this is not the case with other distances. However, non-isometric embeddings generated using the smallest principal angle pseudometric on the Grassmannian lead to the best classification results: we observe that as the dimension of the Grassmannian grows, the accuracy of the classification grows to 100% in binary classification experiments. To build a classification model, we use SSVMs to perform simultaneous dimension selection. The resulting classifier selects a subset of dimensions of the embedding without loss in classification performance. Lastly, we present an application of persistent homology to the detection of chemical plumes in hyperspectral movies. The pixels of the raw hyperspectral data cubes are mapped to the geometric framework of the Grassmann manifold, where they are analyzed, contrasting our approach with the more standard framework in Euclidean space. An advantage of this approach is that it allows the time slices in a hyperspectral movie to be collapsed to a sequence of points in such a way that some of the key structure within and between the slices is encoded by the points on the Grassmannian. This motivates the search for topological structure, associated with the evolution of the frames of a hyperspectral movie, within the corresponding points on the manifold. The proposed framework affords the processing of large data sets, such as the hyperspectral movies explored in this investigation, while retaining valuable discriminative information. For a particular choice of a distance metric on the Grassmannian, it is possible to generate topological signals that capture changes in the scene after a chemical release.
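    As a small illustration of the geometric machinery discussed above, the sketch below builds a few random subspaces as points on a Grassmannian, computes pairwise chordal distances from principal angles, and embeds the resulting distance matrix with metric MDS. The subspace dimensions, synthetic data, and use of scikit-learn's MDS are assumptions for the sketch, not the dissertation's experimental setup.

```python
# Sketch: pairwise distances between subspaces (points on a Grassmannian) and an MDS embedding.
# The subspace dimension, number of points, and choice of sklearn's MDS are illustrative.
import numpy as np
from scipy.linalg import subspace_angles
from sklearn.manifold import MDS

rng = np.random.default_rng(2)
ambient_dim, sub_dim, n_points = 20, 3, 12

# Random orthonormal bases representing points on Gr(3, 20).
points = [np.linalg.qr(rng.standard_normal((ambient_dim, sub_dim)))[0] for _ in range(n_points)]

def chordal_dist(A, B):
    # Chordal distance: sqrt of the sum of squared sines of the principal angles.
    theta = subspace_angles(A, B)
    return np.sqrt(np.sum(np.sin(theta) ** 2))

D = np.zeros((n_points, n_points))
for i in range(n_points):
    for j in range(i + 1, n_points):
        D[i, j] = D[j, i] = chordal_dist(points[i], points[j])

# Euclidean embedding computed from the precomputed distance matrix.
embedding = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(D)
print(embedding.shape)  # (12, 2)
```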

    Proceedings of the 2019 Joint Workshop of Fraunhofer IOSB and Institute for Anthropomatics, Vision and Fusion Laboratory

    In 2019, the annual workshop of the Fraunhofer IOSB and the Chair for Interactive Real-Time Systems of the Karlsruhe Institute of Technology took place again. The doctoral students of both institutions presented the progress of their research on the topics of machine learning, machine vision, metrology, network security, and usage control. The ideas from this workshop are collected in this book in the form of technical reports.

    Proceedings of the 2019 Joint Workshop of Fraunhofer IOSB and Institute for Anthropomatics, Vision and Fusion Laboratory

    In 2019, the annual joint workshop of the Fraunhofer IOSB and the Vision and Fusion Laboratory of the Karlsruhe Institute of Technology took place again. The doctoral students of both institutions presented extensive reports on the status of their research and discussed topics ranging from computer vision and optical metrology to network security, usage control, and machine learning. The results and ideas presented at the workshop are collected in this book in the form of technical reports.

    Spatio-spectral classification of hyperspectral images for brain cancer detection during surgical operations.

    Surgery for brain cancer is a major challenge in neurosurgery. The diffuse infiltration of these tumors into the surrounding normal brain makes their accurate identification by the naked eye difficult. Since surgery is the common treatment for brain cancer, an accurate radical resection of the tumor leads to improved survival rates for patients. However, identifying the tumor boundaries during surgery is challenging. Hyperspectral imaging is a non-contact, non-ionizing and non-invasive technique suitable for medical diagnosis. This study presents the development of a novel classification method that takes into account the spatial and spectral characteristics of the hyperspectral images to help neurosurgeons accurately determine the tumor boundaries during resection, avoiding excessive excision of normal tissue or unintentionally leaving residual tumor. The algorithm proposed in this study is a hybrid framework that combines supervised and unsupervised machine learning methods. First, a supervised pixel-wise classification using a Support Vector Machine classifier is performed. The generated classification map is spatially homogenized using a one-band representation of the hyperspectral (HS) cube, employing the Fixed Reference t-Stochastic Neighbors Embedding dimensionality reduction algorithm, and performing K-Nearest Neighbors filtering. The information generated by the supervised stage is combined with a segmentation map obtained via unsupervised clustering employing a Hierarchical K-Means algorithm. The fusion is performed using a majority voting approach that associates each cluster with a certain class. To evaluate the proposed approach, five in vivo hyperspectral images of the brain surface affected by glioblastoma, from five different patients, were used. The final classification maps obtained have been analyzed and validated by specialists. These preliminary results are promising, yielding an accurate delineation of the tumor area.
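    The fusion step described above (a supervised pixel-wise map combined with an unsupervised segmentation map by majority voting) can be illustrated in a much simplified form. In the sketch below, synthetic spectra replace real hyperspectral pixels, plain K-Means stands in for the hierarchical variant, and the spatial KNN homogenization stage is omitted.

```python
# Simplified sketch of the supervised/unsupervised fusion idea: a pixel-wise SVM map is
# combined with a K-Means segmentation map by majority voting. All data are synthetic.
import numpy as np
from sklearn.svm import SVC
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
n_pixels, n_bands, n_classes = 2000, 50, 3
means = 2.0 * rng.normal(size=(n_classes, n_bands))        # class-specific spectral signatures
labels = rng.integers(0, n_classes, n_pixels)              # ground truth (unknown in practice)
X = means[labels] + rng.normal(scale=1.0, size=(n_pixels, n_bands))

labeled = rng.choice(n_pixels, size=200, replace=False)    # pixels annotated by specialists
svm_map = SVC(kernel="rbf").fit(X[labeled], labels[labeled]).predict(X)

clusters = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit_predict(X)

# Majority voting: every cluster inherits the most frequent SVM label among its pixels.
fused_map = np.empty(n_pixels, dtype=int)
for c in np.unique(clusters):
    members = clusters == c
    fused_map[members] = np.bincount(svm_map[members]).argmax()

print("pixel agreement with ground truth:", (fused_map == labels).mean())
```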

    Probabilistic machine learning approaches in process systems engineering via parametric distribution approximation

    Doctoral dissertation -- Seoul National University, Graduate School, College of Engineering, School of Chemical and Biological Engineering, August 2021. Advisor: Jong Min Lee. With the rapid development of measurement technology, higher quality and vast amounts of process data become available. Nevertheless, process data are 'scarce' in many cases as they are sampled only at certain operating conditions while the dimensionality of the system is large. Furthermore, the process data are inherently stochastic due to the internal characteristics of the system or the measurement noises. For this reason, uncertainty is inevitable in process systems, and estimating it becomes a crucial part of engineering tasks as the prediction errors can lead to misguided decisions and cause severe casualties or economic losses. A popular approach to this is applying probabilistic inference techniques that can model the uncertainty in terms of probability. However, most of the existing probabilistic inference techniques are based on recursive sampling, which makes it difficult to use them for industrial applications that require processing a high-dimensional and massive amount of data. To address such an issue, this thesis proposes probabilistic machine learning approaches based on parametric distribution approximation, which can model the uncertainty of the system and circumvent the computational complexity as well. The proposed approach is applied to three major process engineering tasks: process monitoring, system modeling, and process design. First, a process monitoring framework is proposed that utilizes a probabilistic classifier for fault classification. To enhance the accuracy of the classifier and reduce the computational cost of its training, a feature extraction method called probabilistic manifold learning is developed and applied to the process data ahead of the fault classification. We demonstrate that this manifold approximation process not only reduces the dimensionality of the data but also casts the data into a clustered structure, making the classifier have a low dependency on the type and dimension of the data. By exploiting this property, non-metric information (e.g., fault labels) of the data is effectively incorporated and the diagnosis performance is drastically improved. Second, a probabilistic modeling approach based on Bayesian neural networks is proposed. The parameters of deep neural networks are transformed into Gaussian distributions and trained using variational inference. The redundancy of the parameters is autonomously inferred during the model training, and insignificant parameters are eliminated a posteriori. Through a verification study, we demonstrate that the proposed approach can not only produce high-fidelity models that describe the stochastic behaviors of the system but also produce the optimal model structure. Finally, a novel process design framework is proposed based on reinforcement learning. Unlike conventional optimization methods that recursively evaluate the objective function to find an optimal value, the proposed method approximates the objective function surface by parametric probabilistic distributions. This allows learning a continuous action policy without introducing any cumbersome discretization process. Moreover, the probabilistic policy gives means for effective control of the exploration and exploitation rates according to the certainty information. We demonstrate that the proposed framework can learn process design heuristics during the solution process and use them to solve similar design problems.
    Advances in measurement technology have made it possible to acquire high-quality process data in vast amounts. In many cases, however, only process data from a subset of operating conditions are available relative to the dimensionality of the system, so the process data are 'scarce'. Moreover, process data exhibit inherently stochastic behavior due to the system dynamics themselves as well as measurement noise. A predictive model of the system is therefore required to describe the uncertainty of its predictions quantitatively, which helps prevent misdiagnosis and avoid potential casualties and economic losses. A common approach is to quantify this uncertainty with probabilistic inference techniques, but existing techniques rely on recursive sampling and are therefore fundamentally difficult to apply to high-dimensional, large-scale process data. This thesis proposes computationally efficient approaches that apply probabilistic machine learning based on parametric distribution approximation to model the uncertainty inherent in the system. First, a probabilistic fault classification framework for process monitoring is proposed that uses a Gaussian mixture model as the classifier. To reduce the computational complexity of training the classifier, the data are projected to a lower dimension using a proposed probabilistic manifold learning method, which approximates the manifold of the data with a projection that preserves the pairwise likelihood between data points. This yields diagnosis results with low dependency on the type and dimension of the data while efficiently using non-metric information such as data labels to improve fault diagnosis performance. Second, a probabilistic process modeling methodology using Bayesian deep neural networks is presented. Each parameter of the neural network is replaced by a Gaussian distribution, and computationally efficient training is carried out via variational inference. After training, a posterior model compression method measures the significance of the parameters and eliminates unnecessary ones. A case study on a semiconductor process shows that the proposed method not only effectively models the complex behavior of the process but also derives the optimal model structure. Finally, a probabilistic process design framework based on reinforcement learning with distributional deep neural networks is proposed. Unlike conventional optimization methods that recursively evaluate the objective function to find an optimum, the objective function surface is approximated with parameterized probability distributions. On this basis, a continuous action policy is learned without discretization, and the exploration and exploitation rates are controlled efficiently according to the certainty information. Case study results show that process design heuristics can be learned and used to solve similar design problems.
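    As a toy illustration of the second contribution (network parameters replaced by Gaussian distributions and trained with variational inference), the sketch below implements a single mean-field Gaussian linear layer trained by minimizing a data-fit term plus a KL term. The thesis itself applies the idea to Bayesian LSTM models of a semiconductor plasma process; the layer sizes, prior scale, optimizer settings, and toy data here are illustrative assumptions only.

```python
# Sketch of one variational (mean-field Gaussian) linear layer, Bayes-by-backprop style.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesLinear(nn.Module):
    def __init__(self, n_in, n_out, prior_std=1.0):
        super().__init__()
        self.w_mu = nn.Parameter(torch.zeros(n_out, n_in))
        self.w_rho = nn.Parameter(torch.full((n_out, n_in), -3.0))  # softplus(-3) ~ small std
        self.b_mu = nn.Parameter(torch.zeros(n_out))
        self.b_rho = nn.Parameter(torch.full((n_out,), -3.0))
        self.prior_std = prior_std

    def forward(self, x):
        w_std, b_std = F.softplus(self.w_rho), F.softplus(self.b_rho)
        # Reparameterization trick: sample weights as mu + std * eps.
        w = self.w_mu + w_std * torch.randn_like(w_std)
        b = self.b_mu + b_std * torch.randn_like(b_std)
        # KL divergence between the Gaussian posterior and a zero-mean Gaussian prior.
        self.kl = self._kl(self.w_mu, w_std) + self._kl(self.b_mu, b_std)
        return F.linear(x, w, b)

    def _kl(self, mu, std):
        p = self.prior_std
        return (torch.log(p / std) + (std ** 2 + mu ** 2) / (2 * p ** 2) - 0.5).sum()

# One training step on toy data: data-fit loss + scaled KL (the variational free energy).
layer = BayesLinear(4, 1)
opt = torch.optim.Adam(layer.parameters(), lr=1e-2)
x, y = torch.randn(64, 4), torch.randn(64, 1)
opt.zero_grad()
loss = F.mse_loss(layer(x), y) + layer.kl / x.shape[0]
loss.backward()
opt.step()
print(float(loss))
```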

    Process Monitoring and Data Mining with Chemical Process Historical Databases

    Modern chemical plants have distributed control systems (DCS) that handle normal operations and quality control. However, the DCS cannot compensate for fault events such as fouling or equipment failures. When faults occur, human operators must rapidly assess the situation, determine causes, and take corrective action, a challenging task further complicated by the sheer number of sensors. This information overload, as well as measurement noise, can hide information critical to diagnosing and fixing faults. Process monitoring algorithms can highlight key trends in data and detect faults faster, reducing or even preventing the damage that faults can cause. This research improves tools for process monitoring on different chemical processes. Previously successful monitoring methods based on statistics can fail on non-linear processes and processes with multiple operating states. To address these challenges, we develop a process monitoring technique based on multiple self-organizing maps (MSOM) and apply it in industrial case studies including a simulated plant and a batch reactor. We also use a standard SOM to detect a novel event in a separation tower and produce contribution plots that help isolate the causes of the event. Another key challenge for any engineer designing a process monitoring system is that implementing most algorithms requires data organized into “normal” and “faulty”; however, data from faulty operations can be difficult to locate in databases storing months or years of operations. To assist in identifying faulty data, we apply data mining algorithms from computer science and compare how they cluster chemical process data from normal and faulty conditions. We identify several techniques that successfully reproduce the normal and faulty labels obtained from expert knowledge, and we introduce a process data mining software tool to make analysis simpler for practitioners. The research in this dissertation enhances chemical process monitoring tasks. MSOM-based process monitoring improves upon standard process monitoring algorithms in fault identification and diagnosis tasks, and the data mining research reduces a crucial barrier to the implementation of monitoring algorithms. The enhanced monitoring methods introduced here can help engineers develop effective and scalable process monitoring systems to improve plant safety and reduce losses from fault events.
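    To make the SOM-based monitoring idea concrete, the numpy sketch below fits a small self-organizing map to data from normal operation and flags new samples whose quantization error (distance to the best-matching unit) is unusually large. The grid size, learning schedule, and 99th-percentile threshold are illustrative choices, and the multiple-SOM (MSOM) extension used in the dissertation is not shown.

```python
# Minimal numpy sketch of SOM-based monitoring: fit a map to "normal" data and flag samples
# with unusually large quantization error (distance to the best-matching unit).
import numpy as np

rng = np.random.default_rng(4)

def train_som(X, grid=(6, 6), n_iter=3000, lr0=0.5, sigma0=2.0):
    rows, cols = grid
    weights = rng.normal(size=(rows * cols, X.shape[1]))
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], dtype=float)
    for t in range(n_iter):
        x = X[rng.integers(len(X))]
        bmu = np.argmin(((weights - x) ** 2).sum(axis=1))        # best-matching unit
        lr = lr0 * np.exp(-t / n_iter)                           # decaying learning rate
        sigma = sigma0 * np.exp(-t / n_iter)                     # shrinking neighborhood
        d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
        h = np.exp(-d2 / (2 * sigma ** 2))                       # Gaussian neighborhood weights
        weights += lr * h[:, None] * (x - weights)
    return weights

def quantization_error(X, weights):
    return np.sqrt(((X[:, None, :] - weights[None, :, :]) ** 2).sum(-1)).min(axis=1)

normal = rng.normal(size=(500, 5))                               # historical normal operation
som = train_som(normal)
threshold = np.percentile(quantization_error(normal, som), 99)

faulty = rng.normal(loc=3.0, size=(20, 5))                       # e.g. fouling shifts the sensors
print("flagged as faults:", (quantization_error(faulty, som) > threshold).sum(), "of 20")
```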

    Estimating user interaction probability for non-guaranteed display advertising

    Billions of advertisements are displayed to internet users every hour, a market worth approximately $110 billion in 2013. The process of displaying advertisements to internet users is managed by advertising exchanges, automated systems which match advertisements to users while balancing conflicting advertiser, publisher, and user objectives. Real-time bidding is a recent development in the online advertising industry that allows more than one exchange (or demand-side platform) to bid for the right to deliver an ad to a specific user while that user is loading a webpage, creating a liquid market for ad impressions. Real-time bidding accounted for around 10% of the German online advertising market in late 2013, a figure which is growing at an annual rate of around 40%. In this competitive market, accurately calculating the expected value of displaying an ad to a user is essential for profitability. In this thesis, we develop a system that significantly improves the existing method for estimating the value of displaying an ad to a user in a German advertising exchange and demand-side platform. The most significant calculation in this system is estimating the probability of a user interacting with an ad in a given context. We first implement a hierarchical main-effects and latent factor model which is similar enough to the existing exchange system to allow a simple and robust upgrade path, while improving performance substantially. We then use regularized generalized linear models to estimate the probability of an ad interaction occurring following an individual user impression event. We build a system capable of training thousands of campaign models daily, handling over 300 million events per day, 18 million recurrent users, and thousands of model dimensions. Together, these systems improve on the log-likelihood of the existing method by over 10%. We also provide an overview of the market microstructure of the German real-time bidding market in September and November 2013, and indicate potential areas for exploiting competitors' behaviour, including building user features from real-time bid responses. Finally, for personal interest, we experiment with scalable k-nearest neighbour search algorithms, nonlinear dimension reduction, manifold regularization, graph clustering, and stochastic block model inference using the large datasets from the linear model.
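    The per-impression interaction probability estimate described above can be sketched as a regularized logistic model over hashed categorical impression features, as below. The feature names, hash dimension, and regularization strength are assumptions for the sketch and do not reflect the production system built in the thesis.

```python
# Sketch of estimating per-impression interaction probability with a regularized logistic
# model (SGD-trained) over hashed categorical features. All features and data are synthetic.
import numpy as np
from sklearn.feature_extraction import FeatureHasher
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(5)
sites = ["news.example", "mail.example", "video.example"]
events = [{"site": str(rng.choice(sites)),
           "hour": str(rng.integers(24)),
           "ad_id": f"ad_{rng.integers(50)}"} for _ in range(5000)]
# Toy ground truth: one site interacts more often than the others.
y = np.array([1 if (e["site"] == "video.example" and rng.random() < 0.05) or rng.random() < 0.01
              else 0 for e in events])

hasher = FeatureHasher(n_features=2 ** 18, input_type="dict")     # sparse, memory-bounded
X = hasher.transform(events)

model = SGDClassifier(loss="log_loss", penalty="l2", alpha=1e-5)  # regularized logistic regression
model.fit(X, y)

p = model.predict_proba(hasher.transform(events[:3]))[:, 1]       # interaction probabilities
print(np.round(p, 4))
```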

    Validation of structural heterogeneity in cryo-EM data by ensembles of clustering algorithms

    Advisors: Fernando José Von Zuben, Rodrigo Villares Portugal. Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação. Single Particle Analysis is a technique that allows the study of the three-dimensional structure of proteins and other macromolecular assemblies of biological interest. Its primary data consist of transmission electron microscopy images of multiple copies of the molecule in random orientations. Such images are very noisy due to the low electron dose employed. A reconstruction of the macromolecule can be obtained by averaging many images of particles in similar orientations and estimating their relative angles. However, heterogeneous conformational states often co-exist in the sample, because the molecular complexes can be flexible and may also interact with other particles. Heterogeneity poses a challenge to the reconstruction of reliable 3D models and degrades their resolution. Among the most popular algorithms used for structural classification are k-means clustering, hierarchical clustering, self-organizing maps and maximum-likelihood estimators. Such approaches are usually interlaced with the reconstruction of the 3D models. Nevertheless, recent works indicate that it is possible to infer information about the structure of the molecules directly from the dataset of 2D projections. Among these findings is the relationship between structural variability and manifolds in a multidimensional feature space. This dissertation investigates whether an ensemble of unsupervised classification algorithms is able to separate these "conformational manifolds". Ensemble or "consensus" methods tend to provide more accurate classification and may achieve satisfactory performance across a wide range of datasets when compared with individual algorithms. We investigate the behavior of six clustering algorithms, both individually and combined in ensembles, for the task of structural heterogeneity classification. The approach was tested on synthetic and real datasets containing a mixture of images from the Mm-cpn chaperonin in the "open" and "closed" states. It is shown that cluster ensembles can provide useful information in validating the structural partitionings independently of 3D reconstruction methods.
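    The consensus idea investigated above can be illustrated with a co-association matrix: several clusterers vote on whether two samples belong together, and a final partition is cut from the averaged votes. In the sketch below, three scikit-learn clusterers and synthetic two-cluster data stand in for the dissertation's six-member committee and projection images.

```python
# Minimal sketch of consensus clustering via a co-association matrix.
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering, SpectralClustering
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=200, centers=2, n_features=30, random_state=0)  # stand-in data

partitions = [
    KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X),
    AgglomerativeClustering(n_clusters=2).fit_predict(X),
    SpectralClustering(n_clusters=2, random_state=0).fit_predict(X),
]

# Co-association matrix: fraction of clusterers that put samples i and j in the same group.
n = len(X)
coassoc = np.zeros((n, n))
for labels in partitions:
    coassoc += (labels[:, None] == labels[None, :]).astype(float)
coassoc /= len(partitions)

# Consensus partition: hierarchical clustering on the co-association "distance" 1 - coassoc.
consensus = AgglomerativeClustering(n_clusters=2, metric="precomputed",
                                    linkage="average").fit_predict(1.0 - coassoc)
print(np.bincount(consensus))
```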