25 research outputs found
Real-time sentiment analysis of video calls
In recent years, with ever-increasing internet connection speeds and bandwidth, video-focused software has become increasingly popular for both work and leisure. Examples of such applications include Skype, BlueJeans, and iOS FaceTime. These applications, and the interactions they facilitate, contain a wealth of interesting data that we feel would be fruitful to gather and analyze. Within the context of this thesis, we focused on evaluating the potential of collecting sentiment analytics from video teleconferencing, at both the individual and group level, for the purpose of helping people reflect on their own behavior and regulate their emotions.
To achieve this, we developed a composable, scalable microservice-based analytics pipeline for video and speech, and a browser-based web application to demonstrate it. We evaluated existing solutions for gathering sentiment analytics and integrated two of them into our analytics pipeline. The whole system was deployed in a virtualized container environment using Docker. Besides the pipeline and web application, we also designed and implemented several visualizations for the data we gathered.
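The composable pipeline described above could be sketched, in heavily simplified form, as follows. The `Sample` type and the stage names (`face_sentiment`, `speech_sentiment`) are illustrative assumptions, not the thesis's actual interfaces; in the deployed system each stage would be a separate containerized service rather than an in-process function.

```python
# Minimal sketch of a composable analytics pipeline: each stage consumes a
# media sample and attaches its own annotations.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Sample:
    payload: bytes
    annotations: dict = field(default_factory=dict)

# A "microservice" here is just a callable; in a real deployment each one
# would run in its own Docker container behind a message queue.
Stage = Callable[[Sample], Sample]

def face_sentiment(sample: Sample) -> Sample:
    sample.annotations["face_sentiment"] = "neutral"  # stand-in for a model call
    return sample

def speech_sentiment(sample: Sample) -> Sample:
    sample.annotations["speech_sentiment"] = "positive"  # stand-in
    return sample

def run_pipeline(stages: list[Stage], sample: Sample) -> Sample:
    # Composability: any ordering or subset of stages forms a valid pipeline.
    for stage in stages:
        sample = stage(sample)
    return sample

result = run_pipeline([face_sentiment, speech_sentiment], Sample(b"frame-0"))
print(result.annotations)
```

The design choice this illustrates is that stages share only the `Sample` contract, so an analysis service can be added, removed, or swapped without touching the others.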
In the end we developed a working prototype, although deeper analysis and evaluation of the actual accuracy of its results remains to be performed. Human emotions are rather difficult to quantify. We found that the APIs and libraries currently publicly available for performing sentiment analysis are already quite accurate and feature-rich, and we expect them to improve further.
VCare: A Personal Emergency Response System to Promote Safe and Independent Living Among Elders Staying by Themselves in Community or Residential Settings
‘Population aging’ is a growing concern for most of us living in the twenty-first century, primarily because many of us in the next few years will have a senior person to care for: spending money towards their healthcare expenditures and/or balancing a full-time job with the responsibility of care-giving, travelling from another city to be with this elderly person, who might be our parent, grandparent, or even a community elder. As informal care-givers, if we were somehow able to monitor the day-to-day activities of our elderly dependents, and be alerted when something goes wrong, that would be of great help and would lower the care-giving burden considerably. Information and Communication Technology (ICT) can certainly help in such a scenario, with tools and techniques that ensure safe living for the individual we are caring for, and that save us from a lot of worry by providing anytime access into their lives and activities, and as a result let us check their functional state. However, we should be mindful of the tactics that could be adopted by harm-causers to steal data stored in these products, and we should try to curb the associated service costs. In short, we need robust, cost-effective, useful, and secure solutions to help elders in our society ‘age gracefully’. This work is a small step in that direction.
Advisor: Tadeusz Wysock
Computational Resource Abuse in Web Applications
Internet browsers include Application Programming Interfaces (APIs) to support Web applications that require complex functionality, e.g., to let end users watch videos, make phone calls, and play video games. Meanwhile, many Web applications employ the browser APIs to rely on the user's hardware to execute intensive computation, access the Graphics Processing Unit (GPU), use persistent storage, and establish network connections.
However, providing access to the system's computational resources, i.e., processing, storage, and networking, through the browser creates an opportunity for attackers to abuse resources. Principally, the problem occurs when an attacker compromises a Web site and includes malicious code to abuse its visitor's computational resources. For example, an attacker can abuse the user's system networking capabilities to perform a Denial of Service (DoS) attack against third parties. What is more, computational resource abuse has not received widespread attention from the Web security community because most of the current specifications are focused on content and session properties such as isolation, confidentiality, and integrity.
Our primary goal is to study computational resource abuse and to advance the state of the art by providing a general attacker model, multiple case studies, a thorough analysis of available security mechanisms, and a new detection mechanism. To this end, we implemented and evaluated three scenarios where attackers use multiple browser APIs to abuse networking, local storage, and computation. Depending on the scenario, an attacker can use browsers to perform Denial of Service against third-party Web sites, create a network of browsers to store and distribute arbitrary data, or use browsers to establish anonymous connections similar to The Onion Router (Tor). Our analysis also includes a real-life resource abuse case found in the wild, i.e., CryptoJacking, where thousands of Web sites forced their visitors to perform crypto-currency mining without their consent. In the general case, the attacks presented in this thesis share the attacker model and two key characteristics: 1) the browser's end user remains oblivious to the attack, and 2) the attacker has to invest few resources in comparison to the resources they obtain.
In addition to the analysis of the attacks, we present how existing and upcoming security enforcement mechanisms from Web security can hinder an attacker, along with their drawbacks. Moreover, we propose a novel detection approach based on browser API usage patterns. Finally, we evaluate the accuracy of our detection model, after training it with the real-life crypto-mining scenario, through a large-scale analysis of the most popular Web sites.
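The idea of detection from API usage patterns can be illustrated with a toy sketch. The API names, traces, and the `worker_threshold` heuristic below are fabricated for illustration and are not the thesis's actual features or model; the real approach would learn patterns rather than hard-code a rule.

```python
# Illustrative sketch: represent a page visit as counts of browser API calls
# and flag resource-abuse-like usage patterns.
from collections import Counter

# Hypothetical traces: sequences of API names observed during a page load.
benign_trace = ["fetch", "addEventListener", "querySelector", "fetch"]
mining_trace = ["WebAssembly.instantiate"] + ["Worker.postMessage"] * 500

def api_profile(trace):
    """Collapse a call trace into a usage-pattern feature vector (counts)."""
    return Counter(trace)

def looks_like_abuse(profile, worker_threshold=100):
    # Heavy Worker messaging combined with WebAssembly instantiation is a
    # plausible crypto-mining signature; thresholds here are invented.
    heavy_compute = profile["Worker.postMessage"] >= worker_threshold
    uses_wasm = profile["WebAssembly.instantiate"] > 0
    return heavy_compute and uses_wasm

print(looks_like_abuse(api_profile(benign_trace)))  # False
print(looks_like_abuse(api_profile(mining_trace)))  # True
```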
TorKameleon: Improving Tor's Censorship Resistance with K-Anonymization Media-Morphing Covert Input Channels
Anonymity networks such as Tor and other related tools are powerful means of increasing the anonymity and privacy of Internet users' communications. Tor is currently the most widely used solution by whistleblowers to disclose confidential information and denounce censorship measures, including violations of civil rights, freedom of expression, or guarantees of free access to information. However, recent research has shown that Tor is vulnerable to powerful correlation attacks carried out by global adversaries or collaborating Internet censorship parties. In the Tor "arms race" scenario, we can see that as new censorship, surveillance, and deep correlation tools have been researched, new, improved solutions for preserving anonymity have also emerged. In recent research proposals, unobservable encapsulation of IP packets in covert media channels is one of the most promising defenses against such threat models. These proposals leverage WebRTC-based covert channels as a robust and practical approach against powerful traffic correlation analysis. At the same time, these solutions are difficult to combat through the traffic-blocking measures commonly used by censorship authorities.

In this dissertation, we propose TorKameleon, a censorship evasion solution designed to protect Tor users with increased censorship resistance against powerful traffic correlation attacks executed by global adversaries. The system is based on flexible K-anonymization input circuits that can support TLS tunneling and WebRTC-based covert channels before forwarding users' original input traffic to the Tor network. Our goal is to protect users from machine and deep learning correlation attacks between incoming user traffic and observed traffic at different Tor network relays, such as middle and egress relays. TorKameleon is the first system to implement a Tor pluggable transport based on parameterizable TLS tunneling and WebRTC-based covert channels. We have implemented the TorKameleon prototype and performed extensive validations to observe the correctness and experimental performance of the proposed solution in the Tor environment. With these evaluations, we analyze the necessary tradeoffs between the performance of the standard Tor network and the achieved effectiveness and performance
of TorKameleon, capable of preserving the required unobservability properties.
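The covert-encapsulation idea behind such systems can be illustrated with a toy round-trip sketch. The fake `MEDIA` framing and the `FRAME_PAYLOAD` size below are invented for illustration and bear no relation to TorKameleon's real wire format, which embeds data in valid WebRTC media streams.

```python
# Conceptual sketch only: user payload bytes are chunked and carried inside
# what stands in for ordinary media frames, then reassembled before the
# traffic is forwarded into Tor.
FRAME_PAYLOAD = 16  # bytes of covert data per "media frame" (illustrative)

def encapsulate(payload: bytes) -> list:
    frames = []
    for i in range(0, len(payload), FRAME_PAYLOAD):
        chunk = payload[i:i + FRAME_PAYLOAD]
        # A real covert channel would embed the chunk in a valid SRTP/WebRTC
        # frame; a fake 5-byte header plus a length byte stands in for that.
        frames.append(b"MEDIA" + len(chunk).to_bytes(1, "big") + chunk)
    return frames

def decapsulate(frames: list) -> bytes:
    out = bytearray()
    for frame in frames:
        size = frame[5]             # length byte after the fake header
        out += frame[6:6 + size]    # recover the covert chunk
    return bytes(out)

msg = b"GET /hidden-service HTTP/1.1"
assert decapsulate(encapsulate(msg)) == msg  # lossless round trip
```

The point of the sketch is the unobservability goal: to an on-path censor, only the outer media-like frames are visible, while the inner payload is recovered intact at the other end.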
Towards Reliable Circumvention of Internet Censorship
The Internet plays a crucial role in today's social and political movements by facilitating the free circulation of speech, information, and ideas; democracy and human rights throughout the world critically depend on preserving and bolstering the Internet's openness. Consequently, repressive regimes, totalitarian governments, and corrupt corporations regulate, monitor, and restrict access to the Internet, which is broadly known as Internet censorship. Most countries are improving their internet infrastructure and, as a result, can implement more advanced censoring techniques. Advances in the application of machine learning to network traffic analysis have also enabled more sophisticated Internet censorship. In this thesis, we take a close look at the main pillars of internet censorship, and we introduce new defenses and attacks in the internet censorship literature.
Internet censorship techniques inspect users' communications and can interrupt a connection to prevent a user from communicating with a specific entity. Traffic analysis is one of the main techniques used to infer information from internet communications. One of the major challenges for traffic analysis mechanisms is scaling to today's exploding volumes of network traffic: they impose high storage, communication, and computation overheads. We address this scalability issue by introducing a new direction for traffic analysis, which we call compressive traffic analysis. Moreover, we show that, unfortunately, traffic analysis attacks can be conducted on anonymity systems with drastically higher accuracy than before by leveraging emerging learning mechanisms. In particular, we design a system, called DeepCorr, that outperforms the state of the art by significant margins in correlating network connections. DeepCorr leverages an advanced deep learning architecture to learn a flow correlation function tailored to complex networks. To analyze the weaknesses of such approaches, we also show that an adversary can defeat deep neural network based traffic analysis techniques by applying statistically undetectable adversarial perturbations to the patterns of live network traffic.
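A classical statistical flow-correlation baseline (the kind of hand-crafted metric that a learned system such as DeepCorr improves upon) can be sketched as follows. The delay sequences are invented for illustration; a real attack would correlate timings and sizes of live flows observed at Tor ingress and egress points.

```python
# Baseline flow correlation: Pearson correlation of inter-packet delay
# sequences. A learned correlator replaces this fixed function with one
# trained on real network data.
import math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    vy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (vx * vy)

# Hypothetical inter-packet delays (seconds) for an ingress flow, the same
# flow seen at egress (shifted by network latency), and an unrelated flow:
ingress = [0.10, 0.32, 0.05, 0.21, 0.40, 0.08]
egress_same = [d + 0.02 for d in ingress]
egress_other = [0.25, 0.07, 0.33, 0.12, 0.02, 0.29]

# The matching pair correlates far more strongly than the unrelated pair.
print(pearson(ingress, egress_same) > pearson(ingress, egress_other))  # True
```

Because a constant latency shift leaves Pearson correlation unchanged, the matching pair scores near 1.0, which is exactly the signal a correlation adversary exploits, and the signal that adversarial perturbations aim to destroy.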
We also design techniques to circumvent internet censorship. Decoy routing is an emerging approach to censorship circumvention in which circumvention is implemented with help from a number of volunteer Internet autonomous systems, called decoy ASes. We propose a new architecture for decoy routing that, by design, is significantly more resistant to rerouting attacks than all previous designs. Unlike previous designs, our new architecture operates decoy routers only on the downstream traffic of the censored users; we therefore call it downstream-only decoy routing. As we demonstrate through Internet-scale BGP simulations, downstream-only decoy routing offers significantly stronger resistance to rerouting attacks, intuitively because a (censoring) ISP has much less control over the downstream BGP routes of its traffic. We then propose game-theoretic approaches to model the arms race between censors and censorship circumvention tools. This allows us to analyze the effect of different parameters and censoring behaviors on the performance of censorship circumvention tools. We apply our methods to two fundamental problems in internet censorship.
Finally, to bring our ideas to practice, we designed a new censorship circumvention tool called \name. \name aims at increasing the collateral damage of censorship by employing a "mass" of normal Internet users, from both censored and uncensored areas, to serve as circumvention proxies.
MediaSync: Handbook on Multimedia Synchronization
This book provides an approachable overview of the most recent advances in the fascinating field of media synchronization (mediasync), gathering contributions from the most representative and influential experts. Understanding the challenges of this field in the current multi-sensory, multi-device, and multi-protocol world is not an easy task. The book revisits the foundations of mediasync, including theoretical frameworks and models; highlights ongoing research efforts, like hybrid broadband broadcast (HBB) delivery and users' perception modeling (i.e., Quality of Experience, or QoE); and paves the way for the future (e.g., towards the deployment of multi-sensory and ultra-realistic experiences). Although many advances around mediasync have been devised and deployed, this area of research is receiving renewed attention to overcome the remaining challenges in the next-generation (heterogeneous and ubiquitous) media ecosystem. Given the significant advances in this research area, its current relevance, and the multiple disciplines it involves, a reference book on mediasync has become necessary; this book fills that gap. In particular, it addresses key aspects of, and reviews the most relevant contributions within, the mediasync research space, from different perspectives. MediaSync: Handbook on Multimedia Synchronization is the perfect companion for scholars and practitioners who want to acquire strong knowledge about this research area, and also to approach the challenges involved in ensuring the best mediated experiences, by providing adequate synchronization between the media elements that constitute those experiences.
Collision Avoidance on Unmanned Aerial Vehicles using Deep Neural Networks
Unmanned Aerial Vehicles (UAVs), although hardly a new technology, have recently gained a prominent role in many industries, being widely used not only among enthusiastic consumers but also in highly demanding professional situations, and will have a massive societal impact over the coming years. However, the operation of UAVs carries serious safety risks, such as collisions with dynamic obstacles (birds, other UAVs, or randomly thrown objects). These collision scenarios are complex to analyze in real time, sometimes being computationally impossible to solve with existing State of the Art (SoA) algorithms, making the use of UAVs an operational hazard and therefore significantly reducing their commercial applicability in urban environments. In this work, a conceptual framework for both stand-alone and swarm (networked) UAVs is introduced, focusing on the architectural requirements of the collision avoidance subsystem needed to achieve acceptable levels of safety and reliability. First, the SoA principles for collision avoidance against stationary objects are reviewed. Afterward, a novel image processing approach that uses deep learning and optical flow is presented. This approach is capable of detecting potential collisions with dynamic objects and generating escape trajectories. Finally, novel combinations of models and algorithms were tested, providing a new approach to the collision avoidance of UAVs using Deep Neural Networks. The feasibility of the proposed approach was demonstrated through experimental tests using a UAV, created from
scratch using the framework developed.
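The escape-trajectory idea in the UAV abstract above can be illustrated with a minimal sketch. The flow vectors and the steering rule (evade opposite to the obstacle's apparent image motion) are illustrative assumptions, not the thesis's actual network or control law.

```python
# Toy sketch: given optical-flow vectors belonging to an approaching object,
# derive a unit escape direction opposite to the object's apparent motion.

def mean_flow(vectors):
    """Average a list of (dx, dy) optical-flow vectors."""
    n = len(vectors)
    return (sum(v[0] for v in vectors) / n, sum(v[1] for v in vectors) / n)

def escape_direction(vectors, threshold=0.5):
    fx, fy = mean_flow(vectors)
    magnitude = (fx * fx + fy * fy) ** 0.5
    if magnitude < threshold:
        return None  # no significant motion: keep course
    # Steer away from the dominant image motion (unit vector).
    return (-fx / magnitude, -fy / magnitude)

# Object drifting right and slightly down across the frame:
flow = [(2.0, 0.5), (1.8, 0.4), (2.2, 0.6)]
direction = escape_direction(flow)
print(direction)  # roughly (-0.97, -0.24): evade left and up
```

In the real system, a deep network would first segment which flow vectors belong to a dynamic obstacle; this sketch only shows the final geometric step.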
Smart Monitoring and Control in the Future Internet of Things
The Internet of Things (IoT) and related technologies promise to realize pervasive and smart applications which, in turn, have the potential to improve the quality of life of people living in a connected world. According to the IoT vision, all things can cooperate amongst themselves and be managed from anywhere via the Internet, allowing tight integration between the physical and cyber worlds and thus improving efficiency, promoting usability, and opening up new application opportunities. Nowadays, IoT technologies have successfully been exploited in several domains, providing both social and economic benefits. Realizing the full potential of the next generation of the Internet of Things still requires further research efforts concerning, for instance, the identification of new architectures, methodologies, and infrastructures dealing with distributed and decentralized IoT systems; the integration of IoT with cognitive and social capabilities; the enhancement of the sensing–analysis–control cycle; the integration of consciousness and awareness in IoT environments; and the design of new algorithms and techniques for managing IoT big data. This Special Issue is devoted to advancements in technologies, methodologies, and applications for IoT, together with emerging standards and research topics that would lead to the realization of the future Internet of Things.
Measuring for privacy: From tracking to cloaking
We rely on various types of online services to access information for different uses, and we often provide sensitive information during our interactions with these services. These online services are of different types, e.g., commercial websites (banking, education, news, shopping, dating, social media) and essential websites (e.g., government). Online services are available through websites as well as mobile apps. The growth of websites, mobile devices, and the apps that run on those devices has resulted in the proliferation of online services. This whole ecosystem of online services has created an environment where everyone using it is being tracked. Several past studies have performed privacy measurements to assess the prevalence of tracking in online services. Most of these studies used institutional (i.e., non-residential) resources for their measurements and lacked a global perspective. Tracking on online services, and its impact on privacy, may differ across locations. Therefore, to fill this gap, we perform a privacy measurement study of popular commercial websites using residential networks at various locations.
Unlike commercial online services, there are various categories (e.g., government, hospital, religion) of essential online services where users do not expect to be tracked. The users of these essential online services often provide extremely personal and sensitive information (e.g., social insurance numbers, health information, prayer requests or confessions made to a religious minister) when interacting with those services. However, contrary to users' expectations, these essential services include user tracking capabilities. We built frameworks to perform privacy measurements of these online services (including both websites and Android apps) across different types (i.e., government, hospital, and religious services in jurisdictions around the world). The instrumented tracking metrics (i.e., stateless, stateful, session replaying) from the privacy measurements of these online services are then analyzed.
Malicious sites (e.g., phishing sites) mimic online services to deceive users, causing them harm. We found that 80% of the analyzed malicious sites are cloaked and not blocked by search engine crawlers; therefore, sensitive information collected from users through these sites remains exposed. In addition, the underlying Internet-connected infrastructure (e.g., networked devices such as routers and modems) used by online users can suffer from security issues due to the absence of TLS or the use of weak SSL/TLS certificates. Such security issues (e.g., spying on a CCTV camera) can compromise data integrity, confidentiality, and user privacy.
Overall, we found that tracking on commercial websites differs based on the location of the corresponding residential users. We also observed widespread use of tracking by commercial trackers, as well as session replay services that expose sensitive information from essential online services. Sensitive information is also exposed through vulnerabilities in online services (e.g., Cross-Site Scripting). Furthermore, a significant proportion of malicious sites evade detection by security/search engine crawlers, which may make such sites readily available to users. We also detect weaknesses in the TLS ecosystem of the Internet-connected infrastructure that supports these online services. These observations call for more research on the privacy of online services, as well as on information exposure from malicious online services, to understand the significance of these privacy issues and to adopt appropriate mitigation strategies.
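The kind of TLS-weakness check described above can be sketched against already-collected certificate metadata. The field names (`protocol`, `key_bits`, `sig_alg`, `self_signed`) and thresholds are illustrative assumptions, not the study's actual schema; a real scanner would extract these fields from live handshakes.

```python
# Hedged sketch: classify a device's TLS configuration from certificate
# metadata gathered during a measurement crawl.
WEAK_SIG_ALGS = {"md5WithRSAEncryption", "sha1WithRSAEncryption"}

def tls_issues(cert):
    """Return a list of weaknesses found in one certificate record."""
    issues = []
    if cert.get("protocol") in {"SSLv3", "TLSv1", "TLSv1.1"}:
        issues.append("deprecated protocol")
    if cert.get("key_bits", 0) < 2048:
        issues.append("weak key")
    if cert.get("sig_alg") in WEAK_SIG_ALGS:
        issues.append("weak signature algorithm")
    if cert.get("self_signed"):
        issues.append("self-signed certificate")
    return issues

# Hypothetical record for a consumer CCTV camera's web interface:
camera = {"protocol": "TLSv1", "key_bits": 1024,
          "sig_alg": "sha1WithRSAEncryption", "self_signed": True}
print(tls_issues(camera))
```

Aggregating such per-device issue lists over a large crawl is what yields ecosystem-level observations like those summarized above.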