357 research outputs found

    Connectionist systems for image processing and anomaly detection

    Master's dissertation (Mestrado Integrado em Engenharia Informática). Artificial Intelligence (AI) and Data Science (DS) have become increasingly present in our daily lives, and the benefits they have brought to society in recent years are remarkable. The success of AI was driven by the adaptive capacity that machines gained, which is closely related to their ability to learn. Connectionist systems, in the form of Artificial Neural Networks (ANNs) inspired by the human nervous system, are one of the principal models that enable learning. These models are used in several areas, such as forecasting and classification problems, presenting increasingly satisfactory results. One area in which this technology has excelled is Computer Vision (CV), allowing, for example, the localisation of objects in images and their correct identification. Anomaly Detection (AD) is another field where ANNs have been emerging as a problem-solving technology. In each area, different architectures are used according to the type of data and the problem to be solved. Combining image processing and the finding of anomalies in this type of data, there is a convergence of methodologies using convolutional modules in architectures dedicated to AD. The main objective of this dissertation is to study the existing techniques in these domains, developing different model architectures and applying them to practical case studies in order to compare the results obtained with each approach.
    The major practical use case consists of monitoring road pavements using images to automatically identify degraded areas. For that, two software prototypes are proposed to gather and visualise the acquired data. The study of ANN architectures to diagnose the asphalt condition through images is the central focus of this work. The experimented methods for AD in images include a binary classifier network as a baseline, Autoencoders (AEs) and Variational Autoencoders (VAEs). Supervised and unsupervised practices are also compared, proving their utility in scenarios where no labelled data is available. Used in a supervised setting, the VAE model achieves an excellent distinction between good and degraded pavement areas. When labelled data is not available, the best option is to use the AE, deriving the separation threshold from the distribution of similarities of good-pavement reconstructions, reaching both accuracy and precision above 94%. The full development process shows it is possible to build an alternative solution that decreases operation costs relative to expensive commercial systems and improves usability compared with traditional solutions.
    Additionally, two case studies demonstrate the versatility of connectionist systems in solving problems: in mechanical structural design, enabling the modelling of displacement and pressure fields in reinforced plates; and in using AD to identify crowded places through crowdsensing techniques.
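    The unsupervised thresholding idea described in this abstract (fitting a separation threshold to the distribution of reconstruction similarities on good-pavement images) can be sketched as follows. This is a minimal illustration, not the dissertation's exact procedure: the autoencoder itself is abstracted away, and the function names and the mean-plus-k-sigma rule are assumptions for the sake of the example.

```python
import numpy as np

def reconstruction_error(x, x_hat):
    # Mean squared error between an image and its autoencoder reconstruction.
    return float(np.mean((np.asarray(x) - np.asarray(x_hat)) ** 2))

def fit_threshold(errors, k=3.0):
    # Unsupervised thresholding: reconstruction errors on normal
    # (good-pavement) images cluster tightly, so anything beyond
    # mean + k * std is treated as anomalous.
    errors = np.asarray(errors, dtype=float)
    return float(errors.mean() + k * errors.std())

def is_anomalous(error, threshold):
    return error > threshold

# Toy demonstration with synthetic error values instead of a trained AE.
rng = np.random.default_rng(0)
normal_errors = rng.normal(0.02, 0.005, size=500)  # good pavement
threshold = fit_threshold(normal_errors)
```

    A degraded patch, reconstructing poorly (e.g. an error of 0.09 against a threshold near 0.035), would then be flagged by `is_anomalous`.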

    Algorithms in nature: the convergence of systems biology and computational thinking

    Biologists rely on computational methods to analyze and integrate large data sets, while several computational methods were inspired by the high-level design principles of biological systems. This Perspective discusses the recent convergence of these two ways of thinking.

    An overview on structural health monitoring: From the current state-of-the-art to new bio-inspired sensing paradigms

    In the last decades, the field of structural health monitoring (SHM) has grown exponentially. Yet, several technical constraints persist that are preventing the full realisation of its potential. To upgrade current state-of-the-art technologies, researchers have started to look at nature's creations, giving rise to a new field called 'biomimetics', which operates across the border between living and non-living systems. The highly optimised and time-tested performance of biological assemblies keeps inspiring the development of bio-inspired artificial counterparts that can potentially outperform conventional systems. After a critical appraisal of the current status of SHM, this paper presents a review of selected works related to neural-, cochlea- and immune-inspired algorithms implemented in the field of SHM, including a brief survey of the advancements of bio-inspired sensor technology for the purpose of SHM. In parallel to this engineering progress, a more in-depth understanding of the most suitable biological patterns to be transferred into multimodal SHM systems is fundamental to foster new scientific breakthroughs. Hence, grounded in the dissection of three selected human biological systems, a framework for new bio-inspired sensing paradigms, aimed at guiding the identification of tailored attributes to transplant from nature to SHM, is outlined.

    Network intrusion detection using genetic algorithm to find best DNA signature

    Bioinformatics is a branch of computer science that joins computer programming and molecular biology. DNA consists of long sequences of nucleotides, which make up the genome. Our method is to generate a normal signature sequence and an alignment threshold value by processing the system's training data, encode each observed network connection into a corresponding sequence of DNA nucleotides, and then align the signature sequence with the observed sequence to obtain a similarity degree and decide whether the connection is an attack or normal. A number of DNA sequences makes up each population, and new generations are then produced to select the signature with the best alignment value against normal network connection sequences. The paper concludes with an accuracy value and a threshold score for detecting network anomalies for which no known detection conditions exist, in addition to the percentages of false positive and true negative alarms.
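    The pipeline described in this abstract, encoding connections as nucleotide sequences and evolving a signature that aligns best with normal traffic, can be sketched roughly as follows. The encoding scheme (two base-4 digits per byte), the position-wise similarity measure, and all parameter values here are illustrative assumptions, not the paper's actual algorithm.

```python
import random

NUCLEOTIDES = "ACGT"

def encode_connection(features):
    # Map each byte-like feature value (0-255) to two nucleotides
    # (its two base-4 digits), yielding a DNA-style sequence.
    seq = []
    for v in features:
        seq.append(NUCLEOTIDES[(v >> 2) & 3])
        seq.append(NUCLEOTIDES[v & 3])
    return "".join(seq)

def similarity(a, b):
    # Fraction of matching positions: a crude stand-in for an alignment score.
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / max(len(a), len(b))

def evolve_signature(normal_seqs, generations=50, pop_size=20, seed=0):
    # Genetic algorithm: evolve a signature whose total similarity to the
    # normal-traffic sequences is maximal (elitist selection, one-point
    # crossover, single-point mutation).
    rng = random.Random(seed)
    length = len(normal_seqs[0])
    fitness = lambda sig: sum(similarity(sig, s) for s in normal_seqs)
    pop = ["".join(rng.choice(NUCLEOTIDES) for _ in range(length))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, length)      # one-point crossover
            child = list(a[:cut] + b[cut:])
            i = rng.randrange(length)           # point mutation
            child[i] = rng.choice(NUCLEOTIDES)
            children.append("".join(child))
        pop = parents + children
    return max(pop, key=fitness)

normal_traffic = [encode_connection([10, 20, 30, 40])] * 5
signature = evolve_signature(normal_traffic)
```

    An observed connection would then be flagged as an attack when `similarity(signature, observed)` falls below the threshold learned from the training data.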

    Anomaly detection in the dynamics of web and social networks

    In this work, we propose a new, fast and scalable method for anomaly detection in large time-evolving graphs. The graph may be static with dynamic node attributes (e.g. time series), or evolving in time, such as a temporal network. We define an anomaly as a localized increase in temporal activity in a cluster of nodes. The algorithm is unsupervised and is able to detect and track anomalous activity in a dynamic network despite the noise from multiple interfering sources. We use the Hopfield network model of memory to combine the graph and time information, and show that anomalies can be spotted with good precision using a memory network. The presented approach is scalable, and we provide a distributed implementation of the algorithm. To demonstrate its efficiency, we apply it to two datasets: the Enron email dataset and Wikipedia page views. We show that the anomalous spikes are triggered by real-world events that impact the network dynamics. Moreover, the structure of the clusters and the analysis of the time evolution associated with the detected events reveal interesting facts about how humans interact, exchange and search for information, opening the door to new quantitative studies of collective and social behavior on large and dynamic datasets. (The Web Conference 2019, 10 pages, 7 figures.)
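    The core idea of using a Hopfield memory to separate normal from anomalous activity patterns can be illustrated with a classical Hopfield network: store normal activity states via Hebbian learning, then score a new state by how far it sits from its recalled memory. This is a generic textbook sketch, not the paper's distributed algorithm; the pattern shapes and scoring rule below are assumptions for the example.

```python
import numpy as np

def train_hopfield(patterns):
    # Hebbian learning: store +/-1 patterns as a symmetric weight matrix.
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W / len(patterns)

def recall(W, state, steps=10):
    # Synchronous sign updates move the state toward the nearest stored memory.
    s = state.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)
    return s

def anomaly_score(W, state):
    # Hamming distance between an observed activity pattern and its recalled
    # memory: large distance means it matches no stored "normal" state well.
    return int(np.sum(recall(W, state) != state))

# Store two "normal" cluster-activity patterns (+1 = active, -1 = idle).
p1 = np.array([1, 1, 1, 1, -1, -1, -1, -1] * 2)
p2 = np.array([1, -1] * 8)
W = train_hopfield(np.stack([p1, p2]))
```

    A stored pattern scores 0 (it is a fixed point of the recall dynamics), while a perturbed pattern scores the number of flipped nodes, since recall restores the original memory.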

    HTM approach to image classification, sound recognition and time series forecasting

    Master's dissertation in Biomedical Engineering. The introduction of Machine Learning (ML) into the resolution of problems typically associated with human behaviour has brought great expectations for the future. Indeed, the possible development of machines capable of learning in a way similar to humans could open grand perspectives for diverse areas such as healthcare, the banking sector, retail, and any other area in which the constant attention of a person dedicated to solving a problem could be avoided; furthermore, problems that are still beyond human ability to solve could be placed at the disposal of intelligent machines, bringing new possibilities to human development. ML algorithms, specifically Deep Learning (DL) methods, still lack broader acceptance by part of the community, even though they are present in many systems we use daily. This lack of confidence, which is mandatory before systems are allowed to make big, important decisions with great impact on everyday life, is due to the difficulty of understanding the learning mechanisms and the predictions that result from them: some algorithms present themselves as "black boxes", translating an input into an output while not being fully transparent to the outside. Another complication arises when one takes into account that these algorithms are trained for a specific task, in accordance with the training cases found during their development, making them more susceptible to error in a real environment; one can argue that they do not constitute a true Artificial Intelligence (AI). Following this line of thought, this dissertation studies a new theory, Hierarchical Temporal Memory (HTM), which can be placed in the area of Machine Intelligence (MI), a field that studies how software systems can learn in a way identical to that of a human being.
    HTM is still a young theory, resting on the present understanding of the functioning of the human neocortex, and is under constant development; at the moment, the theory states that the neocortex zones are organised in a hierarchical structure, forming a memory system capable of recognising spatial and temporal patterns. In the course of this project, an analysis was made of the workings of the theory and of its applicability to various tasks typically solved with ML algorithms, such as image classification, sound recognition and time-series forecasting. After evaluating the different results obtained with the various approaches, it was possible to conclude that even though the results were positive, the theory still needs to mature, not only in its theoretical basis but also in the development of software libraries and frameworks, in order to capture the attention of the AI community.
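    The notion of a memory system that recognises temporal patterns, central to HTM as described above, can be illustrated with a drastically simplified first-order sequence memory: learn which symbol follows which, then report how unexpected each new transition is. This toy model is an assumption-laden stand-in, nothing like HTM's actual sparse distributed representations and columnar circuitry.

```python
from collections import defaultdict

class FirstOrderSequenceMemory:
    # A first-order simplification of HTM's temporal memory: remember the
    # set of observed successors for each symbol, then score transitions.
    def __init__(self):
        self.transitions = defaultdict(set)

    def learn(self, sequence):
        # Record every observed (symbol -> next symbol) transition.
        for a, b in zip(sequence, sequence[1:]):
            self.transitions[a].add(b)

    def surprise(self, sequence):
        # 1.0 for a transition never seen during learning, else 0.0.
        return [0.0 if b in self.transitions[a] else 1.0
                for a, b in zip(sequence, sequence[1:])]

memory = FirstOrderSequenceMemory()
memory.learn("ABCABCABC")
```

    After learning the repeating sequence, familiar transitions such as A to B score 0.0, while an unseen transition such as B to D scores 1.0, which is the essence of sequence-based anomaly detection.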