
    A Comparison of Nature Inspired Algorithms for Multi-threshold Image Segmentation

    In the field of image analysis, segmentation is one of the most important preprocessing steps. One way to achieve segmentation is by means of threshold selection, where each pixel that belongs to a given class is labeled according to the selected threshold, yielding groups of pixels that share visual characteristics in the image. Several methods have been proposed to solve threshold selection problems; in this work, a method based on a mixture of Gaussian functions is used to approximate the 1D histogram of a gray-level image, with the mixture parameters calculated using three nature-inspired algorithms (Particle Swarm Optimization, Artificial Bee Colony Optimization, and Differential Evolution). Each Gaussian function approximates the histogram, representing a pixel class and therefore a threshold point. Experimental results are presented, comparing the algorithms in quantitative and qualitative fashion and discussing the main advantages and drawbacks of each when applied to the multi-threshold problem.
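As a concrete illustration of the approach described above, here is a minimal sketch (not the authors' code) that fits a mixture of Gaussians to a 1D gray-level histogram with Differential Evolution, one of the three compared algorithms. SciPy's DE implementation stands in for the paper's optimizers, and the midpoint rule between adjacent class means stands in for the paper's threshold derivation.

```python
# Sketch: multi-threshold selection by fitting a Gaussian mixture to a histogram.
import numpy as np
from scipy.optimize import differential_evolution

def mixture(params, x, k):
    """Sum of k Gaussians; params = [A1, mu1, s1, ..., Ak, muk, sk]."""
    p = np.asarray(params).reshape(k, 3)
    return sum(A * np.exp(-((x - mu) ** 2) / (2 * s ** 2)) for A, mu, s in p)

def fit_thresholds(hist, k=3):
    x = np.arange(hist.size, dtype=float)
    h = hist / hist.sum()                                # normalized histogram
    def mse(params):                                     # fitness: mean squared error
        return np.mean((h - mixture(params, x, k)) ** 2)
    bounds = [(0, h.max() * 2), (0, 255), (1, 80)] * k   # per-Gaussian A, mu, sigma
    res = differential_evolution(mse, bounds, seed=0, maxiter=200, tol=1e-8)
    mus = np.sort(res.x.reshape(k, 3)[:, 1])             # sorted class means
    return ((mus[:-1] + mus[1:]) / 2).astype(int)        # thresholds between classes

# Usage: hist = np.bincount(gray_image.ravel(), minlength=256); fit_thresholds(hist)
```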

    RapidRadio: A Domain-Specific Productivity Enhancing Framework

    The RapidRadio framework for signal classification and receiver deployment is discussed. The framework is a productivity-enhancing tool that reduces the knowledge base required to implement a receiver on an FPGA-based SDR platform. The ultimate objective of this framework is to identify unknown signals and to build FPGA-based receivers capable of receiving them. The architecture of the receiver deployed by the framework and its implementation are discussed. The framework's capacity to classify a signal and deploy a functional receiver is validated with over-the-air experiments.

    Timely Classification of Encrypted or Protocol-Obfuscated Internet Traffic Using Statistical Methods

    Internet traffic classification aims to identify the type of application or protocol that generated a particular packet or stream of packets on the network. Through traffic classification, Internet Service Providers (ISPs), governments, and network administrators can access basic functions and several solutions, including network management, advanced network monitoring, network auditing, and anomaly detection. Traffic classification is essential as it ensures the Quality of Service (QoS) of the network, as well as allowing efficient resource planning. With the increase of encrypted or obfuscated protocol traffic on the Internet and multilayer data encapsulation, some classical classification methods have lost the interest of the scientific community. The limitations of traditional classification methods based on port numbers and payload inspection to classify encrypted or obfuscated Internet traffic have led to significant research efforts focused on Machine Learning (ML) based classification approaches using statistical features from the transport layer. In an attempt to increase classification performance, Machine Learning strategies have gained interest from the scientific community and have shown promise in the future of traffic classification, especially for recognizing encrypted traffic. However, the ML approach also has its own limitations, as some of these methods have a high computational resource consumption, which limits their application when classifying large traffic volumes or real-time flows. The limitations of ML application have led to the investigation of alternative approaches, including feature-based procedures and statistical methods. In this sense, statistical analysis methods, such as distances and divergences, have been used to classify traffic in large flows and in real time. The main objective of a statistical distance is to differentiate flows and find a pattern in traffic characteristics through statistical properties, which enable classification. Divergences are functional expressions often related to information theory, which measure the degree of discrepancy between any two distributions. This thesis focuses on proposing a new methodological approach to classify encrypted or obfuscated Internet traffic based on statistical methods that enable the evaluation of network traffic classification performance, including the use of computational resources in terms of CPU and memory. A set of traffic classifiers based on Kullback-Leibler and Jensen-Shannon divergences, and on Euclidean, Hellinger, Bhattacharyya, and Wootters distances, was proposed. The following are the four main contributions to the advancement of scientific knowledge reported in this thesis. First, an extensive literature review on the classification of encrypted and obfuscated Internet traffic was conducted. The results suggest that port-based and payload-based methods are becoming obsolete due to the increasing use of traffic encryption and multilayer data encapsulation. ML-based methods are also becoming limited due to their computational complexity. As an alternative, Support Vector Machine (SVM), which is also an ML method, and the Kolmogorov-Smirnov and Chi-squared tests can be used as references for statistical classification. In parallel, the possibility of using statistical methods for Internet traffic classification has emerged in the literature, with the potential for good classification results without the need for large computational resources. 
The potential statistical methods are the Euclidean, Hellinger, Bhattacharyya, and Wootters distances, as well as the Kullback-Leibler (KL) and Jensen-Shannon divergences. Second, we present a proposal and implementation of a classifier based on SVM for P2P multimedia traffic, comparing the results with the Kolmogorov-Smirnov (KS) and Chi-squared tests. The results suggest that SVM classification with a Linear kernel leads to better classification performance than the KS and Chi-squared tests, depending on the value assigned to the Self C parameter. The SVM method with a Linear kernel and suitable values for the Self C parameter may be a good choice to identify encrypted P2P multimedia traffic on the Internet. Third, we present a proposal and implementation of two classifiers based on KL Divergence and Euclidean Distance, which are compared to SVM with a Linear kernel configured with the standard Self C parameter; the SVM showed a reduced ability to classify flows based solely on packet sizes compared to the KL and Euclidean Distance methods. The KL and Euclidean methods were able to classify all tested applications, particularly streaming and P2P, which in almost all cases they efficiently identified with high accuracy and reduced consumption of computational resources. Based on the obtained results, it can be concluded that the KL and Euclidean Distance methods are an alternative to SVM, as these statistical approaches can operate in real time and do not require retraining every time a new type of traffic emerges. Fourth, we present a proposal and implementation of a set of classifiers for encrypted Internet traffic based on the Jensen-Shannon Divergence and the Hellinger, Bhattacharyya, and Wootters distances, with their respective results compared to those obtained with methods based on Euclidean Distance, KL, KS, and Chi-squared. Additionally, we present a comparative qualitative analysis of the tested methods based on Kappa values and Receiver Operating Characteristic (ROC) curves. The results suggest average accuracy values above 90% for all statistical methods, classified as "almost perfect reliability" in terms of Kappa values, with the exception of KS. This result indicates that these methods are viable options to classify encrypted Internet traffic, especially the Hellinger Distance, which showed the best Kappa values compared to the other classifiers. We conclude that the considered statistical methods can be accurate and cost-effective in terms of computational resource consumption to classify network traffic. Our approach was based on the classification of Internet network traffic, focusing on statistical distances and divergences. We have shown that it is possible to classify traffic and obtain good results with statistical methods, balancing classification performance and the use of computational resources in terms of CPU and memory. The validation of the proposal supports the argument of this thesis, which proposes the implementation of statistical methods as a viable alternative to Internet traffic classification methods based on port numbers, payload inspection, and ML.
Thesis prepared at Instituto de Telecomunicações Delegação da Covilhã and at the Department of Computer Science of the University of Beira Interior, and submitted to the University of Beira Interior for discussion in public session to obtain the Ph.D. Degree in Computer Science and Engineering. 
This work has been funded by Portuguese FCT/MCTES through national funds and, when applicable, co-funded by EU funds under the project UIDB/50008/2020, and by operation Centro-01-0145-FEDER-000019 (C4 - Centro de Competências em Cloud Computing), co-funded by the European Regional Development Fund (ERDF/FEDER) through the Programa Operacional Regional do Centro (Centro 2020). This work has also been funded by CAPES (Brazilian Federal Agency for Support and Evaluation of Graduate Education) within the Ministry of Education of Brazil under a scholarship supported by the International Cooperation Program CAPES/COFECUB Project 9090134/2013 at the University of Beira Interior.
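A minimal sketch of the thesis's central idea, under assumed inputs: summarize a flow as an empirical packet-size histogram and label it with the closest per-application reference histogram according to one of the proposed distances or divergences. Function and variable names here are illustrative, not the thesis's code.

```python
# Sketch: flow classification by statistical distance between size distributions.
import numpy as np

EPS = 1e-12  # avoids log(0) and division by zero

def kl(p, q):          # Kullback-Leibler divergence
    p, q = p + EPS, q + EPS
    return np.sum(p * np.log(p / q))

def js(p, q):          # Jensen-Shannon divergence
    m = 0.5 * (p + q)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def hellinger(p, q):
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

def bhattacharyya(p, q):
    return -np.log(np.sum(np.sqrt(p * q)) + EPS)

def wootters(p, q):    # Wootters statistical distance: arccos of fidelity
    return np.arccos(np.clip(np.sum(np.sqrt(p * q)), 0.0, 1.0))

def euclidean(p, q):
    return np.linalg.norm(p - q)

def classify(flow_sizes, references, metric=hellinger, bins=64, max_size=1500):
    """references: dict mapping application name -> reference histogram (sums to 1)."""
    h, _ = np.histogram(flow_sizes, bins=bins, range=(0, max_size))
    p = h / max(h.sum(), 1)                       # empirical size distribution
    return min(references, key=lambda app: metric(p, references[app]))
```

Because each classification is a handful of vector operations on a fixed-size histogram, this kind of classifier can run per-flow in real time, which is the computational advantage the thesis reports over retrained ML models.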

    Self-administered transcranial direct current stimulation treatment of knee osteoarthritis alters pain-related fNIRS connectivity networks

    Significance: Knee osteoarthritis (OA) is a disease that causes chronic pain in the elderly population. Currently, OA is mainly treated pharmacologically with analgesics, although research has shown that neuromodulation via transcranial direct current stimulation (tDCS) may be beneficial in reducing pain in clinical settings. However, no studies have reported the effects of home-based self-administered tDCS on functional brain networks in older adults with knee OA. Aim: We used functional near-infrared spectroscopy (fNIRS) to investigate the functional connectivity effects of tDCS on underlying pain processing mechanisms at the central nervous level in older adults with knee OA. Approach: Pain-related brain connectivity networks were extracted using fNIRS at baseline and for three consecutive weeks of treatment from 120 subjects randomly assigned to two groups undergoing active tDCS and sham tDCS. Results: Our results showed that the tDCS intervention significantly modulated pain-related connectivity correlation only in the group receiving active treatment. We also found that only the active treatment group showed a significantly reduced number and strength of functional connections evoked during nociception in the prefrontal cortex, primary motor (M1), and primary somatosensory (S1) cortices. To our knowledge, this is the first study in which the effect of tDCS on pain-related connectivity networks is investigated using fNIRS. Conclusions: fNIRS-based functional connectivity can be effectively used to investigate neural circuits of pain at the cortical level in association with nonpharmacological, self-administered tDCS treatment. S.M.H. and L.P. would like to acknowledge the support of the National Science Foundation (Grant Nos. CNS 1650536 and 2137255) and the I/UCRC for Building Reliable Advances and Innovation in Neurotechnology. L.P. also acknowledges the U.S. Fulbright Scholar Program and the Fulbright Spain Commission for sponsoring his stay at the Basque Center on Cognition, Brain and Language. The research reported in this publication was supported by the National Institute of Nursing Research of the National Institutes of Health (Award No. R15NR018050).
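The kind of connectivity analysis described above can be sketched as follows; this is an illustrative reconstruction under an assumed data layout, not the study's pipeline: pairwise correlations between fNIRS channel time series are thresholded to count functional connections and measure their strength.

```python
# Sketch: functional connectivity from fNIRS channel time series.
import numpy as np

def connectivity(signals, threshold=0.7):
    """signals: (channels, samples) array of fNIRS time series (e.g., HbO)."""
    r = np.corrcoef(signals)            # channel-by-channel correlation matrix
    np.fill_diagonal(r, 0.0)            # ignore self-connections
    edges = np.abs(r) >= threshold      # binary adjacency of functional edges
    n_connections = edges.sum() // 2    # undirected: each edge counted twice
    mean_strength = np.abs(r[edges]).mean() if edges.any() else 0.0
    return r, n_connections, mean_strength

# Comparing (n_connections, mean_strength) before vs. after treatment mirrors the
# study's reported reduction in number and strength of pain-evoked connections.
```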

    AMANDA: density-based adaptive model for nonstationary data under extreme verification latency scenarios

    Gradual concept-drift refers to a smooth and gradual change over time in the relations between input and output data in the underlying distribution. This problem renders a model obsolete and consequently degrades the quality of its predictions. In addition, there is a challenging task during the stream: extreme verification latency (EVL) in verifying the labels. For batch scenarios, state-of-the-art methods adapt a supervised model using an unconstrained least-squares importance fitting (uLSIF) algorithm or a semi-supervised approach along with a core support extraction (CSE) method. However, these methods do not properly tackle the mentioned problems, due to their high computational cost for large data volumes, their failure to select the samples that actually represent the drift, or the several parameters they require tuning. Therefore, we propose a density-based adaptive model for nonstationary data (AMANDA), which uses a semi-supervised classifier along with a CSE method. AMANDA has two variations: AMANDA with a fixed cutting percentage (AMANDA-FCP) and AMANDA with a dynamic cutting percentage (AMANDA-DCP). Our results indicate that the two variations of AMANDA outperform the state-of-the-art methods for almost all synthetic and real datasets, with an improvement of up to 27.98% in average error. We have found that AMANDA-FCP improves the results for a gradual concept-drift even with a small initial set of labeled data. Moreover, our results indicate that SSL classifiers are improved when they work along with our static or dynamic CSE methods. Therefore, we emphasize the importance of research directions based on this approach.
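A minimal sketch of AMANDA's core loop under the fixed cutting percentage (AMANDA-FCP), not the authors' code: a classifier labels each unlabeled batch, then a density-based core support extraction keeps only the highest-density fraction of samples per predicted class as training data for the next batch. GaussianNB and scikit-learn's KernelDensity stand in for the thesis's semi-supervised classifier and density estimator; AMANDA-DCP would adapt `keep` per batch instead of fixing it.

```python
# Sketch: AMANDA-FCP-style loop with density-based core support extraction.
import numpy as np
from sklearn.neighbors import KernelDensity
from sklearn.naive_bayes import GaussianNB

def amanda_fcp(batches, X0, y0, keep=0.8):
    """batches: iterable of unlabeled (n, d) arrays arriving under EVL."""
    X_tr, y_tr = X0, y0                        # initial labeled data
    preds = []
    for X in batches:
        clf = GaussianNB().fit(X_tr, y_tr)     # stand-in for the SSL classifier
        y_hat = clf.predict(X)
        preds.append(y_hat)
        X_next, y_next = [], []
        for c in np.unique(y_hat):             # per-class core support extraction
            Xc = X[y_hat == c]
            dens = KernelDensity().fit(Xc).score_samples(Xc)   # log-densities
            core = Xc[dens >= np.quantile(dens, 1 - keep)]     # densest `keep` fraction
            X_next.append(core)
            y_next.append(np.full(len(core), c))
        X_tr, y_tr = np.vstack(X_next), np.concatenate(y_next)  # slide the model
    return preds
```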

    Digital Modulation Identification Model Using Wavelet Transform and Statistical Parameters

    A generalized modulation identification scheme is developed and presented. With the help of this scheme, automatic modulation classification and recognition of wireless communication signals with a priori unknown parameters become possible and effective. The special features of the procedure are the possibility to adapt it dynamically to nearly all modulation types and the capability to identify signals whose parameters are unknown in advance. The developed scheme, based on the wavelet transform and statistical parameters, has been used to identify M-ary PSK, M-ary QAM, GMSK, and M-ary FSK modulations. The simulation results show that correct modulation identification is possible down to an SNR of 5 dB. The identification percentage has been analyzed based on the confusion matrix. When the SNR is above 5 dB, the probability of detection of the proposed system is more than 0.968. The performance of the proposed scheme has been compared with existing methods and found to identify all digital modulation schemes at low SNR.
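A hedged sketch of the kind of feature extraction the abstract describes (the exact feature set is an assumption, not the paper's): decompose the magnitude and phase of the complex baseband signal with a discrete wavelet transform and summarize each subband with statistical parameters, producing a fixed-length vector for a downstream classifier.

```python
# Sketch: wavelet-plus-statistics features for modulation identification.
import numpy as np
import pywt
from scipy.stats import skew, kurtosis

def wavelet_features(iq, wavelet="haar", level=3):
    """iq: 1-D complex baseband samples; returns a fixed-length feature vector."""
    feats = []
    for series in (np.abs(iq), np.unwrap(np.angle(iq))):   # magnitude and phase
        coeffs = pywt.wavedec(series, wavelet, level=level)
        for c in coeffs:                                   # approximation + details
            feats += [np.var(c), skew(c), kurtosis(c)]
    return np.asarray(feats)

# Feature vectors like these feed a classifier whose per-modulation accuracy is
# then read off a confusion matrix across SNR levels, as in the abstract.
```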

    Machine learning methods for the characterization and classification of complex data

    This thesis presents novel methods for the analysis and classification of medical images and, more generally, complex data. First, an unsupervised machine learning method is proposed to order anterior chamber OCT (Optical Coherence Tomography) images according to a patient's risk of developing angle-closure glaucoma. In a second study, two outlier-finding techniques are proposed to improve the results of the above-mentioned machine learning algorithm; we also show that they are applicable to a wide variety of data, including fraud detection in credit card transactions. In a third study, the topology of the vascular network of the retina, considered as a complex tree-like network, is analyzed, and we show that structural differences reveal the presence of glaucoma and diabetic retinopathy. In a fourth study, we use a model of a laser with optical injection that exhibits extreme events in its intensity time series to evaluate machine learning methods for forecasting such extreme events.
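As one illustration of the outlier-finding step mentioned above, here is a minimal sketch using scikit-learn's IsolationForest as a stand-in for the thesis's proposed techniques; it applies equally to image descriptors and transaction records.

```python
# Sketch: flag atypical samples before the unsupervised ordering/ranking step.
import numpy as np
from sklearn.ensemble import IsolationForest

def flag_outliers(X, contamination=0.01, seed=0):
    """X: (n_samples, n_features) descriptors; returns a boolean outlier mask."""
    iso = IsolationForest(contamination=contamination, random_state=seed).fit(X)
    return iso.predict(X) == -1      # -1 marks predicted outliers

# Removing flagged samples before ordering is one way such techniques can improve
# the risk ranking produced by the unsupervised method described in the abstract.
```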