Efficient and robust audio fingerprinting
Master's thesis
Detection and handling of overlapping speech for speaker diarization
For the last several years, speaker diarization has been attracting substantial research attention as one of the spoken
language technologies applied for the improvement, or enrichment, of recording transcriptions. Compared with other domains, recordings of meetings exhibit increased complexity due to the spontaneity of speech, reverberation effects, and the presence of overlapping speech.
Overlapping speech refers to situations when two or more speakers are speaking simultaneously. In meeting data, a
substantial portion of errors of the conventional speaker diarization systems can be ascribed to speaker overlaps, since usually
only one speaker label is assigned per segment. Furthermore, simultaneous speech included in training data can corrupt single-speaker models and thus degrade segmentation.
This thesis concerns the detection of overlapping speech segments and its application to improving speaker diarization performance. We propose the use of three spatial, cross-correlation-based parameters for overlap detection on distant microphone channel data. Spatial features from different microphone pairs are fused by means of principal component analysis, linear discriminant analysis, or a multi-layer perceptron.
In addition, we investigate the possibility of employing long-term prosodic information. The most suitable subset of a set of candidate prosodic features is determined in two steps: first, a ranking according to the mRMR criterion is obtained; then, a standard hill-climbing wrapper approach is applied to determine the optimal number of features.
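To make the two-step selection concrete, here is a minimal sketch assuming a generic classifier, synthetic data, and a relevance-only ranking as a stand-in for full mRMR; the thesis's actual features, classifier, and scoring are not reproduced here.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def rank_features(X, y):
    # Step 1: relevance-only ranking by mutual information; a full mRMR
    # ranking would additionally penalize redundancy between features.
    return np.argsort(mutual_info_classif(X, y))[::-1]

def hill_climb(X, y, order):
    # Step 2: greedily add features in ranked order while cross-validated
    # accuracy keeps improving (a simple hill-climbing wrapper).
    selected, best = [], -np.inf
    for f in order:
        trial = selected + [int(f)]
        score = cross_val_score(LogisticRegression(max_iter=1000),
                                X[:, trial], y, cv=5).mean()
        if score > best:
            selected, best = trial, score
        else:
            break
    return selected, best

# Synthetic stand-in for candidate prosodic features and overlap labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 3] > 0).astype(int)
features, score = hill_climb(X, y, rank_features(X, y))
```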
The novel spatial as well as prosodic parameters are used in combination with spectral-based features suggested previously in
the literature. In experiments conducted on AMI meeting data, we show that the newly proposed features do contribute to the
detection of overlapping speech, especially on data originating from a single recording site.
In speaker diarization, a second speaker label is assigned to segments containing detected speaker overlap, and such segments are also excluded from model training. The proposed overlap labeling technique is integrated into the Viterbi decoding that forms part of the diarization algorithm. During system development we found it favorable to optimize overlap exclusion and overlap labeling independently with respect to the overlap detection system.
We report improvements over the baseline diarization system on both single- and multi-site AMI data. Preliminary experiments
with NIST RT data show DER improvement on the RT '09 meeting recordings as well.
The addition of beamforming and a TDOA feature stream to the baseline diarization system, aimed at improving the clustering process, results in slightly higher effectiveness of the overlap labeling algorithm. A more detailed analysis of the overlap exclusion behavior reveals large differences in improvement between individual meeting recordings, as well as between various settings of the overlap detection operating point. However, high performance variability across recordings is also typical of the baseline diarization system without any overlap handling.
Timely Classification of Encrypted or Protocol-Obfuscated Internet Traffic Using Statistical Methods
Internet traffic classification aims to identify the type of application or protocol that generated
a particular packet or stream of packets on the network. Through traffic classification,
Internet Service Providers (ISPs), governments, and network administrators can perform basic functions and deploy several solutions, including network management, advanced network monitoring, network auditing, and anomaly detection. Traffic classification is essential as it ensures the Quality of Service (QoS) of the network and enables efficient resource planning.
With the increase of encrypted or protocol-obfuscated traffic on the Internet and of multi-layer data encapsulation, some classical classification methods have lost the interest of the scientific community. The limitations of traditional classification methods based on port numbers and payload inspection for classifying encrypted or obfuscated Internet traffic have led to significant research efforts focused on Machine Learning (ML) based classification approaches using statistical features from the transport layer. In an attempt to increase classification performance, Machine Learning strategies have gained interest from the scientific community and have shown promise for the future of traffic classification, especially for recognizing encrypted traffic.
However, the ML approach also has its own limitations, as some of these methods have high computational resource consumption, which limits their application to classifying large traffic volumes or real-time flows. The limitations of ML applications have led to the investigation of alternative approaches, including feature-based procedures and statistical methods. In this sense, statistical analysis methods, such as distances and divergences, have been used to classify traffic in large flows and in real time.
The main objective of a statistical distance is to differentiate flows and to find patterns in traffic characteristics through statistical properties, which enables classification. Divergences are functional expressions, often related to information theory, that measure the degree of discrepancy between any two distributions.
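For reference, the standard definitions of these quantities for discrete distributions P and Q over a common support are given below (the thesis may use equivalent variants; the Wootters distance is the arccosine of the Bhattacharyya coefficient):

```latex
\begin{align*}
D_{\mathrm{KL}}(P \,\|\, Q) &= \sum_{x} P(x) \log \frac{P(x)}{Q(x)} \\
D_{\mathrm{JS}}(P \,\|\, Q) &= \tfrac{1}{2} D_{\mathrm{KL}}(P \,\|\, M)
  + \tfrac{1}{2} D_{\mathrm{KL}}(Q \,\|\, M), \qquad M = \tfrac{1}{2}(P + Q) \\
d_{\mathrm{E}}(P, Q)  &= \sqrt{\textstyle\sum_{x} \bigl(P(x) - Q(x)\bigr)^{2}} \\
d_{\mathrm{H}}(P, Q)  &= \frac{1}{\sqrt{2}}
  \sqrt{\textstyle\sum_{x} \bigl(\sqrt{P(x)} - \sqrt{Q(x)}\bigr)^{2}} \\
d_{\mathrm{B}}(P, Q)  &= -\ln \textstyle\sum_{x} \sqrt{P(x)\,Q(x)} \\
d_{\mathrm{W}}(P, Q)  &= \arccos \textstyle\sum_{x} \sqrt{P(x)\,Q(x)}
\end{align*}
```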
This thesis focuses on proposing a new methodological approach to classifying encrypted or obfuscated Internet traffic based on statistical methods that enable the evaluation of network traffic classification performance, including the use of computational resources in terms of CPU and memory. A set of traffic classifiers based on the Kullback-Leibler and Jensen-Shannon divergences and on the Euclidean, Hellinger, Bhattacharyya, and Wootters distances is proposed. The following are the four main contributions to the advancement of scientific knowledge reported in this thesis.
First, an extensive literature review on the classification of encrypted and obfuscated Internet traffic was conducted. The results suggest that port-based and payload-based methods are becoming obsolete due to the increasing use of traffic encryption and multi-layer data encapsulation. ML-based methods are also becoming limited due to their computational complexity. As an alternative, the Support Vector Machine (SVM), which is also an ML method, and the Kolmogorov-Smirnov and Chi-squared tests can be used as references for statistical classification. In parallel, the possibility of using statistical methods for Internet traffic classification has emerged in the literature, with the potential for good classification results without the need for large computational resources. The potential statistical methods are the Euclidean Distance, Hellinger Distance, Bhattacharyya Distance, and Wootters Distance, as well as the Kullback-Leibler (KL) and Jensen-Shannon divergences.
Second, we present a proposal and implementation of a classifier based on SVM for P2P multimedia traffic, comparing the results with the Kolmogorov-Smirnov (KS) and Chi-square tests. The results suggest that SVM classification with a linear kernel leads to better classification performance than the KS and Chi-square tests, depending on the value assigned to the Self C parameter. The SVM method with a linear kernel and suitable values for the Self C parameter may be a good choice for identifying encrypted P2P multimedia traffic on the Internet.
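As an illustration of the kind of classifier discussed in this contribution, the sketch below trains a linear-kernel SVM with scikit-learn on synthetic per-flow features; the feature set, labels, and C value are assumptions for illustration ("Self C" in the text corresponds to the SVM regularization parameter C).

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for per-flow statistics (e.g. mean/std of packet size
# and inter-arrival time); label 1 = P2P multimedia, 0 = other traffic.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
clf.fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```

In practice the value of C would be tuned, since, as noted above, the classification performance depends on it.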
Third, we present a proposal and implementation of two classifiers based on KL Divergence and Euclidean Distance, which are compared to an SVM with a linear kernel configured with the standard Self C parameter; the SVM shows a reduced ability to classify flows based solely on packet sizes compared with the KL and Euclidean Distance methods. The KL and Euclidean methods were able to classify all tested applications, particularly streaming and P2P, which in almost all cases they identified efficiently and with high accuracy, with reduced consumption of computational resources. Based on the obtained results, it can be concluded that the KL and Euclidean Distance methods are an alternative to SVM, as these statistical approaches can operate in real time and do not require retraining every time a new type of traffic emerges.
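A minimal sketch of this idea is shown below: each flow is summarized as a normalized packet-size histogram and assigned to the class whose reference histogram is closest under the chosen measure (KL divergence or Euclidean distance). The bin edges, class names, and synthetic data are illustrative assumptions, not the thesis's setup.

```python
import numpy as np

BINS = np.arange(0, 1501, 50)   # packet-size bins in bytes (assumed)
EPS = 1e-12                     # avoids log(0) and division by zero

def histogram(packet_sizes):
    h, _ = np.histogram(packet_sizes, bins=BINS)
    h = h.astype(float) + EPS
    return h / h.sum()

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

def euclidean(p, q):
    return float(np.sqrt(np.sum((p - q) ** 2)))

def classify(flow_sizes, references, measure=kl):
    # references: dict mapping class name -> reference histogram.
    p = histogram(flow_sizes)
    return min(references, key=lambda name: measure(p, references[name]))

# Synthetic reference flows for two hypothetical traffic classes.
rng = np.random.default_rng(0)
refs = {
    "streaming": histogram(rng.normal(1200, 100, 5000)),
    "p2p":       histogram(rng.normal(600, 250, 5000)),
}
print(classify(rng.normal(1150, 120, 300), refs))   # -> "streaming"
```

Unlike an SVM, the set of reference histograms can be extended with a new class without retraining anything, which is the property highlighted above.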
Fourth, we present a proposal and implementation of a set of classifiers for encrypted Internet traffic based on the Jensen-Shannon Divergence and the Hellinger, Bhattacharyya, and Wootters Distances, with their respective results compared to those obtained with methods based on Euclidean Distance, KL, KS, and Chi-square. Additionally, we present a comparative qualitative analysis of the tested methods based on Kappa values and Receiver Operating Characteristic (ROC) curves. The results suggest average accuracy values above 90% for all statistical methods, classified as “almost perfect reliability” in terms of Kappa values, with the exception of KS. This result indicates that these methods are viable options for classifying encrypted Internet traffic, especially the Hellinger Distance, which showed the best Kappa values compared with the other classifiers. We conclude that the considered statistical methods can be accurate and cost-effective in terms of computational resource consumption for classifying network traffic.
Our approach was based on the classification of Internet network traffic, focusing on statistical distances and divergences. We have shown that it is possible to classify traffic and obtain good results with statistical methods, balancing classification performance against the use of computational resources in terms of CPU and memory. The validation of the proposal supports the argument of this thesis, namely that statistical methods are a viable alternative for Internet traffic classification compared with methods based on port numbers, payload inspection, and ML.
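As a hedged illustration of the comparative evaluation described above, Cohen's Kappa and ROC figures can be computed from each classifier's hard decisions and scores as in the sketch below; the labels and scores are placeholders, not results from the thesis.

```python
from sklearn.metrics import cohen_kappa_score, roc_auc_score, roc_curve

# Placeholder ground truth, one classifier's decisions, and its scores.
y_true  = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred  = [1, 0, 1, 0, 0, 1, 0, 1]
y_score = [0.9, 0.2, 0.8, 0.4, 0.1, 0.7, 0.3, 0.6]

kappa = cohen_kappa_score(y_true, y_pred)  # > 0.8 is "almost perfect" on the Landis-Koch scale
auc = roc_auc_score(y_true, y_score)
fpr, tpr, thresholds = roc_curve(y_true, y_score)
print(f"kappa={kappa:.2f}  auc={auc:.2f}")
```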
Thesis prepared at Instituto de Telecomunicações Delegação
da Covilhã and at the Department
of Computer Science of the University of Beira Interior, and submitted to the
University of Beira Interior for discussion in public session to obtain the Ph.D. Degree in
Computer Science and Engineering.
This work has been funded by Portuguese FCT/MCTES through national funds and, when applicable, co-funded by EU funds under the project UIDB/50008/2020, and by operation Centro-01-0145-FEDER-000019 - C4 - Centro de Competências em Cloud Computing, co-funded by the European Regional Development Fund (ERDF/FEDER) through the Programa Operacional Regional do Centro (Centro 2020). This work has also been funded by CAPES (Brazilian Federal Agency for Support and Evaluation of Graduate Education) within the Ministry of Education of Brazil under a scholarship supported by the International Cooperation Program CAPES/COFECUB Project 9090134/2013 at the University of Beira Interior.
Data Summarizations for Scalable, Robust and Privacy-Aware Learning in High Dimensions
The advent of large-scale datasets has offered unprecedented amounts of information for building statistically powerful machines but, at the same time, has also introduced a remarkable computational challenge: how can we efficiently process massive data? This thesis presents a suite of data reduction methods that make learning algorithms scale on large datasets by extracting a succinct, model-specific representation that summarizes the full data collection—a coreset. Our frameworks support, by design, datasets of arbitrary dimensionality and can be used for general-purpose Bayesian inference under real-world constraints, including privacy preservation and robustness to outliers, encompassing diverse uncertainty-aware data analysis tasks such as density estimation, classification, and regression.
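A minimal sketch of the coreset idea is given below: the full-data log-likelihood is approximated by a weighted sum over a small subset of points. The uniform subsampling and Gaussian likelihood are illustrative stand-ins, not the (pseudo)coreset constructions developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 100_000, 500                       # full data size, coreset size
data = rng.normal(loc=2.0, scale=1.0, size=n)

# Simplest possible construction: uniform subsample, each point weighted n/m.
idx = rng.choice(n, size=m, replace=False)
coreset, weights = data[idx], np.full(m, n / m)

def gaussian_loglik(x, mu, sigma=1.0):
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)

mu = 1.8
full_ll = gaussian_loglik(data, mu).sum()
coreset_ll = (weights * gaussian_loglik(coreset, mu)).sum()
print(full_ll, coreset_ll)   # the weighted coreset sum tracks the full sum
```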
We motivate the necessity for novel data reduction techniques in the first place by developing a re-identification attack on coarsened representations of private behavioural data. Analysing longitudinal records of human mobility, we detect privacy-revealing structural patterns that remain preserved in reduced graph representations of individuals’ information with manageable size. These unique patterns enable mounting linkage attacks via structural similarity computations on longitudinal mobility traces, revealing an overlooked, yet existing, privacy threat.
We then propose a scalable variational inference scheme for approximating posteriors on large datasets via learnable weighted pseudodata, termed pseudocoresets. We show that the use of pseudodata enables overcoming the constraints on minimum summary size for given approximation quality, that are imposed on all existing Bayesian coreset constructions due to data dimensionality. Moreover, it allows us to develop a scheme for pseudocoresets-based summarization that satisfies the standard framework of differential privacy by construction; in this way, we can release reduced size privacy-preserving representations for sensitive datasets that are amenable to arbitrary post-processing.
Subsequently, we consider summarizations for large-scale Bayesian inference in scenarios when observed datapoints depart from the statistical assumptions of our model. Using robust divergences, we develop a method for constructing coresets resilient to model misspecification. Crucially, this method is able to automatically discard outliers from the generated data summaries. Thus we deliver robustified scalable representations
for inference, that are suitable for applications involving contaminated and unreliable data sources.
We demonstrate the performance of the proposed summarization techniques on multiple parametric statistical models, and diverse simulated and real-world datasets, from music genre features to hospital readmission records, considering a wide range of data dimensionalities.
Nokia Bell Labs,
Lundgren Fund,
Darwin College, University of Cambridge
Department of Computer Science & Technology, University of Cambridge
Transmission Modeling with Smartphone-based Sensing
Infectious disease spread is difficult to accurately measure and model. Even for well-studied pathogens, uncertainties remain regarding the dynamics of mixing behavior and how to balance simulation-generated estimates with empirical data. Smartphone-based sensing data promises the availability of inferred proximate contacts, with which we can improve transmission models. This dissertation addresses the problem of informing transmission models with proximity contact data by breaking it down into three sub-questions.
Firstly, can proximity contact data inform transmission models? To this question, an extended-Kalman-filter enhanced System Dynamics Susceptible-Infectious-Removed (EKF-SD-SIR) model demonstrated the filtering approach, as a framework, for informing Systems Dynamics models with proximity contact data. This combination results in recurrently-regrounded system status as empirical data arrive throughout disease transmission simulations---simultaneously considering empirical data accuracy, growing simulation error between measurements, and supporting estimation of changing model parameters. However, as revealed by this investigation, this filtering approach is limited by the quality and reliability of sensing-informed proximate contacts, which leads to the dissertation's second and third questions---investigating the impact of temporal and spatial resolution on sensing inferred proximity contact data for transmission models.
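A minimal sketch of the filtering idea is given below: a discrete-time SIR model is stepped forward each day, and an extended Kalman filter re-grounds the state whenever an empirical prevalence measurement arrives. The rates, noise levels, and synthetic observations are assumptions for illustration, not values from the dissertation.

```python
import numpy as np

N = 10_000.0            # population size (assumed)
beta, gamma = 0.3, 0.1  # transmission and recovery rates (assumed)
dt = 1.0                # one-day time step

def f(x):
    # One SIR step for state x = [S, I]; R is implied by N - S - I.
    S, I = x
    new_inf = beta * S * I / N * dt
    new_rec = gamma * I * dt
    return np.array([S - new_inf, I + new_inf - new_rec])

def F_jac(x):
    # Jacobian of f, used to propagate the state covariance.
    S, I = x
    return np.array([
        [1 - beta * I / N * dt, -beta * S / N * dt],
        [beta * I / N * dt,      1 + beta * S / N * dt - gamma * dt],
    ])

H = np.array([[0.0, 1.0]])   # we observe only (noisy) prevalence I
Q = np.diag([1.0, 1.0])      # process noise (assumed)
R = np.array([[25.0]])       # measurement noise (assumed)

def ekf_step(x, P, z=None):
    # Predict one day ahead; if a measurement z of I is available, update.
    x_pred, F = f(x), F_jac(x)
    P_pred = F @ P @ F.T + Q
    if z is None:
        return x_pred, P_pred
    y = np.array([z]) - H @ x_pred             # innovation
    S_inn = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S_inn)    # Kalman gain
    return x_pred + K @ y, (np.eye(2) - K @ H) @ P_pred

x, P = np.array([N - 10.0, 10.0]), np.diag([10.0, 10.0])
observations = {7: 55.0, 14: 230.0, 21: 700.0, 28: 1500.0}  # synthetic
for day in range(1, 31):
    x, P = ekf_step(x, P, observations.get(day))
```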
GPS co-location and Bluetooth beaconing are two of those common measurement modalities to sense proximity contacts with different underlying technologies and tradeoffs. However, both measurement modalities have shortcomings and are prone to false positives or negatives when used to detect proximate contacts because unmeasured environmental influences bias the data. Will differences in sensing modalities impact transmission models informed by proximity contact data? The second part of this dissertation compares GPS- and Bluetooth-inferred proximate contacts by accessing their impact on simulated attack rates in corresponding proximate-contact-informed agent-based Susceptible-Exposed-Infectious-Recovered (ABM-SEIR) models of four distinct contagious diseases. Results show that the inferred proximate contacts resulting from these two measurement modalities are different and give rise to significantly different attack rates across multiple data collections and pathogens.
While the advent of commodity mobile devices has eased the collection of proximity contact data, battery capacity and associated costs impose tradeoffs between the frequency and scanning duration used for proximate-contact detection. The choice of a balanced sensing regime involves specifying temporal resolutions and interpreting sensing data---depending on circumstances such as the characteristics of a particular pathogen, accompanying disease, and underlying population. How will the temporal resolution of sensing impact transmission models informed by proximity contact data? Furthermore, how will circumstances alter the impact of temporal resolution? The third part of this dissertation investigates the impacts of sensing regimes on findings from two sampling methods of sensing at widely varying inter-observation intervals by synthetically downsampling proximity contact data from five contact network studies---with each of these five studies measuring participant-participant contact every 5 minutes for durations of four or more weeks. The impact of downsampling is evaluated through ABM-SEIR simulations at both the population and individual levels for 12 distinct contagious diseases and associated variants of concern. Studies in this part find that for epidemiological models employing proximity contact data, both the observation paradigms and the inter-observation interval configured to collect proximity contact data exert impacts on the simulation results. Moreover, the impact is subject to the population characteristics and to measures reflective of pathogen infectiousness (such as the basic reproduction number, R0). By comparing the performance of two sampling methods of sensing, we found that in most cases, periodically observing for a certain duration can collect proximity contact data that allows agent-based models to produce a reasonable estimation of the attack rate. However, higher-resolution data are preferred for modeling individual infection risk. Findings from this part of the dissertation represent a step towards providing the empirical basis for guidelines to inform data collection that is at once efficient and effective.
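A minimal sketch of the periodic-observation regime described above is given below: contact records are kept only if they fall inside an observation window of `duty` minutes repeated every `interval` minutes. The record format and parameter names are assumptions for illustration.

```python
def downsample(contacts, interval=60, duty=5):
    # contacts: list of (timestamp_minutes, id_a, id_b) records.
    # Keep a record only if it falls in the first `duty` minutes of each
    # `interval`-minute cycle (periodic observation for a fixed duration).
    return [c for c in contacts if (c[0] % interval) < duty]

contacts = [(t, "a", "b") for t in range(0, 240, 5)]   # a contact every 5 min
kept = downsample(contacts, interval=60, duty=10)
print(len(contacts), len(kept))
```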
This dissertation addresses the problem of informing transmission models with proximity contact data in three steps. Firstly, the demonstration of an EKF-SD-SIR model suggests that the filtering approach could improve System Dynamics transmission models by leveraging proximity contact data. In addition, experiments with the EKF-SD-SIR model also revealed that the filtering approach is constrained by the limited quality and reliability of sensing-data-inferred proximate contacts. The following two parts of this dissertation investigate spatial-temporal factors that could impact the quality and reliability of sensor-collected proximity contact data. In the second step, the impact of spatial resolution is illustrated by differences between two typical sensing modalities---Bluetooth beaconing versus GPS co-location. Experiments show that, in general, proximity contact data collected with Bluetooth beaconing lead to transmission models with results different from those driven by proximity contact data collected with GPS co-location. Awareness of the differences between sensing modalities can aid researchers in incorporating proximity contact data into transmission models. Finally, in the third step, the impact of temporal resolution is elucidated by investigating the differences between results of transmission models led by proximity contact data collected with varying observation frequencies. These differences led by varying observation frequencies are evaluated under circumstances with alternative assumptions regarding sampling method, disease/pathogen type, and the underlying population. Experiments show that the impact of sensing regimes is influenced by the type of diseases/pathogens and underlying population, while sampling once in a while can be a decent choice across all situations. This dissertation demonstrated the value of a filtering approach to enhance transmission models with sensor-collected proximity contact data, as well as explored spatial-temporal factors that will impact the accuracy and reliability of sensor-collected proximity contact data. Furthermore, this dissertation suggested guidance for future sensor-based proximity contact data collection and highlighted needs and opportunities for further research on sensing-inferred proximity contact data for transmission models
Multimedia Protection using Content and Embedded Fingerprints
Improved digital connectivity has made the Internet an important medium for multimedia distribution and consumption in recent years. At the same time, this increased proliferation of multimedia has raised significant challenges in secure multimedia distribution and intellectual property protection. This dissertation examines two complementary aspects of the multimedia protection problem that utilize content fingerprints and embedded collusion-resistant fingerprints.
The first aspect considered is the automated identification of multimedia using content fingerprints, which is emerging as an important tool for detecting copyright violations on user generated content websites. A content fingerprint is a compact identifier that captures robust and distinctive properties of multimedia content, which can be used for uniquely identifying the multimedia object. In this dissertation, we describe a modular framework for theoretical modeling and analysis of content fingerprinting techniques. Based on this framework, we analyze the impact of distortions in the features on the corresponding fingerprints and also consider the problem of designing a suitable quantizer for encoding the features in order to improve the identification accuracy. The interaction between the fingerprint designer and a malicious adversary seeking to evade detection is studied under a game-theoretic framework and optimal strategies for both parties are derived. We then focus on analyzing and understanding the matching process at the fingerprint level. Models for fingerprints with different types of correlations are developed and the identification accuracy under each model is examined. Through this analysis we obtain useful guidelines for designing practical systems and also uncover connections to other areas of research.
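As a hedged illustration of fingerprint-level matching (a generic scheme, not the dissertation's specific design), binary content fingerprints can be compared with a normalized Hamming distance and a decision threshold:

```python
import numpy as np

rng = np.random.default_rng(1)

def hamming(a, b):
    # Fraction of differing bits between two binary fingerprints.
    return float(np.mean(a != b))

reference = rng.integers(0, 2, size=256)             # stored fingerprint
query = reference.copy()
flipped = rng.choice(256, size=20, replace=False)    # simulate distortion
query[flipped] ^= 1

tau = 0.2   # decision threshold (assumed)
print("match" if hamming(query, reference) < tau else "no match")
```

The distortion model and threshold control the trade-off between false alarms and missed detections, which is the kind of design question analyzed at the fingerprint level.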
A complementary problem considered in this dissertation concerns tracing the users responsible for unauthorized redistribution of multimedia. Collusion-resistant fingerprints, which are signals that uniquely identify the recipient, are proactively embedded in the multimedia before redistribution and can be used for identifying the malicious users. We study the problem of designing collusion-resistant fingerprints for embedding in compressed multimedia. Our study indicates that directly adapting traditional fingerprinting techniques to this new setting of compressed multimedia results in low collusion resistance. To withstand attacks, we propose an anti-collusion dithering technique for embedding fingerprints that significantly improves the collusion resistance compared to traditional fingerprints.