XML Schema-based Minification for Communication of Security Information and Event Management (SIEM) Systems in Cloud Environments
XML-based communication governs most of today's inter-system communication
because of its ability to represent complex, hierarchical data. However, the
XML document structure is bulky, and reducing it can minimize bandwidth usage
and transmission time while maximizing performance, contributing to more
efficient resource usage. In cloud environments, this directly affects the
amount of money the consumer pays.
Several techniques have been used to achieve this goal. This paper discusses
these techniques and proposes a new XML Schema-based Minification technique.
The technique reduces the XML structure through minification while providing a
separation between the meaningful names and the underlying minified names,
which enhances software/code readability. This
technique is applied to Intrusion Detection Message Exchange Format (IDMEF)
messages, as part of Security Information and Event Management (SIEM) system
communication hosted on Microsoft Azure Cloud. Test results show message size
reduction ranging from 8.15% to 50.34% in the raw message, without using
time-consuming compression techniques. Adding GZip compression on top of the
proposed technique yields messages 66.1% smaller than the original XML
messages.
Comment: XML, JSON, Minification, XML Schema, Cloud, Log, Communication,
Compression, XMill, GZip, Code Generation, Code Readability. 9 pages, 12
figures, 5 tables, Journal Article
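The core idea, rewriting verbose schema-defined tag names to short aliases while keeping the name mapping separate from the documents, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the tag names and mapping here are hypothetical, whereas the paper derives the mapping from the XML Schema and applies it to IDMEF messages.

```python
import xml.etree.ElementTree as ET

# Hypothetical mapping from meaningful IDMEF-style tag names to minified
# aliases; in the paper this mapping is derived from the XML Schema and
# kept separate from the minified documents, preserving readability.
TAG_MAP = {"Alert": "a", "CreateTime": "c", "Source": "s", "Address": "d"}

def minify(xml_text: str) -> str:
    """Rewrite element tags using the short names from TAG_MAP."""
    root = ET.fromstring(xml_text)
    for elem in root.iter():
        elem.tag = TAG_MAP.get(elem.tag, elem.tag)
    return ET.tostring(root, encoding="unicode")

original = ("<Alert><CreateTime>2020-01-01</CreateTime>"
            "<Source><Address>10.0.0.1</Address></Source></Alert>")
minified = minify(original)
print(minified)  # <a><c>2020-01-01</c><s><d>10.0.0.1</d></s></a>
```

Because only element names are rewritten, the transformation is lossless given the shared mapping, and it composes with a general-purpose compressor such as GZip, as the paper's measurements show.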
KeyForge: Mitigating Email Breaches with Forward-Forgeable Signatures
Email breaches are commonplace, and they expose a wealth of personal,
business, and political data that may have devastating consequences. The
current email system allows any attacker who gains access to your email to
prove the authenticity of the stolen messages to third parties -- a property
arising from a necessary anti-spam / anti-spoofing protocol called DKIM. This
exacerbates the problem of email breaches by greatly increasing the potential
for attackers to damage the users' reputation, blackmail them, or sell the
stolen information to third parties.
In this paper, we introduce "non-attributable email", which guarantees that a
wide class of adversaries are unable to convince any third party of the
authenticity of stolen emails. We formally define non-attributability, and
present two practical system proposals -- KeyForge and TimeForge -- that
provably achieve non-attributability while maintaining the important protection
against spam and spoofing that is currently provided by DKIM. Moreover, we
implement KeyForge and demonstrate that the scheme is practical, achieving
competitive verification and signing speed while also requiring 42% less
bandwidth per email than RSA2048.
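The property the paper targets can be illustrated with a deliberately simplified sketch. The code below is a toy stand-in (a hash-based tag whose secret is published after a delay), not KeyForge's actual forward-forgeable signature construction; it only shows why disclosing signing material after a delay destroys attributability without harming timely verification.

```python
import hashlib

def sign(secret: bytes, message: bytes) -> bytes:
    # Toy MAC standing in for a real signature; KeyForge itself uses a
    # purpose-built forward-forgeable signature scheme, not a bare hash.
    return hashlib.sha256(secret + message).digest()

def verify(secret: bytes, message: bytes, tag: bytes) -> bool:
    return sign(secret, message) == tag

secret = b"per-epoch signing secret"     # held privately during the epoch
msg = b"From: alice\nTo: bob\n\nhello"
tag = sign(secret, msg)

# Within the delivery window the receiving server verifies the tag,
# preserving DKIM-style anti-spam / anti-spoofing protection.
assert verify(secret, msg, tag)

# After a fixed delay the secret is published. From then on *anyone* can
# produce valid tags for arbitrary messages, so a stolen email no longer
# proves its own authenticity to a third party.
published_secret = secret
forgery = b"any message an attacker likes"
assert verify(published_secret, forgery, sign(published_secret, forgery))
```

The delay is chosen so that legitimate mail is always verified before the forging material becomes public, which is how the real schemes retain spam and spoofing protection.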
SDN as Active Measurement Infrastructure
Active measurements are integral to the operation and management of networks,
and invaluable to supporting empirical network research. Unfortunately, it is
often cost-prohibitive and logistically difficult to widely deploy measurement
nodes, especially in the core. In this work, we consider the feasibility of
tightly integrating measurement within the infrastructure by using Software
Defined Networks (SDNs). We introduce "SDN as Active Measurement
Infrastructure" (SAAMI) to enable measurements to originate from any location
where SDN is deployed, removing the need for dedicated measurement nodes and
increasing vantage point diversity. We implement ping and traceroute using
SAAMI, as well as a proof-of-concept custom measurement protocol to demonstrate
the power and ease of SAAMI's open framework. Via a large-scale measurement
campaign using SDN switches as vantage points, we show that SAAMI is accurate,
scalable, and extensible.
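For context, the ping measurement that SAAMI re-implements boils down to crafting ICMP echo requests and injecting them into the network. A minimal sketch of building such a probe packet follows; the function names are illustrative, and the controller-injection step is only indicated in a comment since it depends on the SDN platform.

```python
import struct

def icmp_checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum: one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    total = (total & 0xFFFF) + (total >> 16)
    total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_echo_request(ident: int, seq: int, payload: bytes = b"probe") -> bytes:
    # ICMP type 8 (echo request), code 0; checksum covers the whole message.
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)
    csum = icmp_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload

pkt = build_echo_request(ident=1, seq=1)
# A SAAMI-style controller would inject `pkt` into the data plane (e.g. via
# an OpenFlow packet-out) rather than opening a raw socket on a dedicated
# probe host, which is what removes the need for measurement nodes.
assert icmp_checksum(pkt) == 0  # a valid ICMP message checksums to zero
```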
Design and implementation of an endpoint reputation module
Since the internet was invented, and due to its rapid growth, the number of
threats over this big network has constantly increased: from massive data theft to forming
enormous illegal and malicious networks sold to the highest bidder. The malware
industry has become professionalized, and the moment a device is connected to the
internet it is exposed to all these threats, which will materialize if no measures are taken.
Lots of solutions have been proposed and implemented to prevent and mitigate these
threats, such as intrusion detection systems (IDS), firewalls, or as we'll explain below,
whitelists and blacklists.
When talking about these lists we refer to the concept of an endpoint: an
abstraction covering IPs (like 8.8.8.8), domains (like www.wikipedia.org ) and URLs
(like http://www.ebay.com/rpp/gift-cards ). A blacklist is a set of endpoints known
to be (or to have been) malicious. Its function inside a system is to react the
moment a connection to one of these endpoints is attempted, for example by
blocking the connection. A whitelist, on the other hand, is a set of endpoints
known to be non-malicious. One of the most popular uses of whitelists is to accept
e-mail from a set of trusted mail providers (such as google mail, yahoo,
hotmail...). Typically, a benign domain remains active longer than a malicious one,
because the latter will often employ evasion mechanisms (for example, changing
its name frequently) precisely to bypass blacklists. Because of this, whitelists are
more reliable and contain fewer false positives (that is, fewer entries that are
actually malicious). We now focus on blacklists.
Blacklists are very useful for preventing and detecting threats: the moment a
new malicious endpoint is discovered, it is blacklisted, and all the systems
protected by that blacklist (such as antivirus software or IDSs) are alerted
when a device tries to connect to it. These lists can be accessible through the
web (online) or as downloads, from either a public or a private source. There
are some issues to consider when using a blacklist. The first is that some
entries will be out of date, i.e., a domain might have been malicious in the
past but benign now. Measuring this requires at least a last-scan date, which
blacklist services do not often provide. A second issue is that in practice
there will be false positives, that is, endpoints flagged as malicious when
they are actually benign. This depends on the criteria each blacklist uses to
block these entities.
Mostly because of the issues cited above, this project proposes, implements and
evaluates a method for aggregating several blacklist and whitelist services,
both online and downloadable, for internet endpoints. The information is
extracted from 10 different services available on the web. We compute a
reputation score for each endpoint that approaches 0 when the endpoint is
considered benign and 1 when it is considered malicious.
To sum up, this project of a reputation module for internet endpoints
contributes:
- More flexibility when deciding what measures to take against a potentially
malicious endpoint, thanks to a continuous reputation score instead of a
boolean value.
- A reduction in the number of false negatives, i.e., a better detection rate,
thanks to the aggregation of various services such as blacklists, whitelists
and reports from antivirus software.
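The aggregation idea can be sketched as follows. The weights, service names and averaging rule here are hypothetical illustrations, not the formula actually used in the project:

```python
# Hypothetical per-service weights: a hit on a heavily weighted blacklist
# pushes the score toward 1 (malicious); a whitelist hit pulls it toward 0.
SERVICES = {
    "blacklist_a": 0.9,
    "blacklist_b": 0.7,
    "antivirus_reports": 0.8,
    "whitelist": 0.05,
}

def reputation(verdicts: dict) -> float:
    """Average the weights of the services that listed the endpoint.

    Returns a score in [0, 1]: near 0 means benign, near 1 malicious.
    Services that did not list the endpoint simply do not vote, which
    tolerates stale or missing entries in individual lists.
    """
    hits = [SERVICES[name] for name, listed in verdicts.items() if listed]
    return sum(hits) / len(hits) if hits else 0.0

# An endpoint on two blacklists scores high...
high = reputation({"blacklist_a": True, "blacklist_b": True})
# ...while a whitelisted endpoint flagged by one stale blacklist scores lower,
# which is exactly the flexibility a continuous score gives over a boolean.
low = reputation({"blacklist_b": True, "whitelist": True})
assert low < high
```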
Discovery and Push Notification Mechanisms for Mobile Cloud Services
In the last five years, mobile devices such as laptops, PDAs, smart phones, tablets, etc. have pervaded almost all the environments where people perform their day-to-day activities. Further, extensive research and development in mobile technologies has led to significant improvements in hardware, software and transmission. Similarly, there have been significant developments and standardization efforts in the web services domain, and basic web services are widely accessed from smart phones. This has led to the logical next step of providing web services from the smart phones themselves. The concept of web service provisioning from smart phones is not new and has been extensively explored by Srirama, who proposed the concept of the Mobile Host. However, the original implementation relied on aged technologies such as JMEE, PersonalJava and the SOAP architecture, among others.
This work updates the Mobile Host to the latest technologies, such as the Android OS and the REST architecture, and proposes a service engine based on Apache Felix, an OSGi implementation for resource-constrained devices.
Moreover, the astonishing speed of developments in mobile computing enables a new generation of applications in domains such as context awareness, social networking, collaborative tools and location-based services, which benefit from the Mobile Host's service provisioning capabilities. As a result, clients have access to a huge number of services, so an efficient and effective service discovery mechanism is required. The thesis proposes a directory-based discovery mechanism with network-overlay support for large networks with high mobility. The proposed mechanism relies on OWL-S, an ontology for the discovery, invocation, composition and monitoring of web resources. The work also considers the original service discovery mechanism proposed by Srirama, which relies on peer-to-peer networks and Apache Lucene, a keyword search engine. The study updates the service search to Apache Solr, the latest development based on Apache Lucene. The service discovery was extensively tested and the results are summarized in this work.
Mobile technologies are looking to the cloud to extend their storage and processing capabilities by offloading data- and process-intensive tasks. This fosters the development of more complex and richer mobile applications. However, due to the time-consuming nature of the tasks delegated to the cloud, an asynchronous mechanism is necessary for notifying the user when the intensive tasks are completed. Mobile cloud service providers and middleware solutions might benefit from the Mobile Host and its asynchronous notification capabilities. The study presents four push notification mechanisms: AC2DM, APNS, IBM MQTT and Mobile Host-based push notification. The work summarizes the results of a quantitative analysis and highlights the strengths and weaknesses of the four notification approaches. In addition, it describes the realization of CroudSTag, a mobile application that aims at social group formation by means of facial recognition and relies on mobile cloud services and the Mobile Host to provide its functionality to the user.
An Interoperable Access Control System based on Self-Sovereign Identities
The extreme growth of the World Wide Web in the last decade, together with recent scandals related to the theft or abusive use of personal information, has left users unsatisfied with their digital identity providers and concerned about their online privacy. Self-Sovereign Identity (SSI) is a new identity management paradigm which gives control over personal information back to its rightful owner - the individual. However, adoption of SSI on the Web is complicated by the high overhead costs for service providers due to the lacking interoperability of the various emerging SSI solutions. In this work, we propose an Access Control System based on Self-Sovereign Identities with a semantically modelled Access Control Logic. Our system relies on the Web Access Control authorization rules used in the Solid project and extends them to additionally express requirements on Verifiable Credentials, i.e., digital credentials adhering to a standardized data model. Moreover, the system achieves interoperability across multiple DID Methods and types of Verifiable Credentials, allowing for incremental extensibility of the supported SSI technologies by design. A Proof-of-Concept prototype is implemented and its performance as well as multiple system design choices are evaluated: the end-to-end latency of the authorization process takes between 2-5 seconds depending on the DID Methods used and can theoretically be further optimized to 1.5-3 seconds. Evaluating the potential interoperability achieved by the system shows that multiple DID Methods and different types of Verifiable Credentials can be supported. Lastly, multiple approaches for modelling required Verifiable Credentials are compared, and the suitability of the SHACL language for describing the RDF graphs represented by the required Linked Data credentials is shown.
A context- and template-based data compression approach to improve resource-constrained IoT systems interoperability
The objective of the Internet of Things (IoT) is to interconnect all kinds of things, from simple devices, such as a light bulb or a thermostat, to more complex and abstract elements such as a machine or a house. These devices and elements vary enormously among themselves, especially in the capabilities they possess and the kinds of technology they use. This heterogeneity makes integration processes highly complex as far as interoperability is concerned. A common approach to addressing interoperability at the data representation level in IoT systems is to structure the data following a standard data model, together with text-based data formats (e.g., XML). However, the kind of devices normally used in IoT systems have limited capabilities and scarce processing and communication resources. Due to these limitations, it is not possible to integrate text-based data formats simply and efficiently on resource-constrained devices and networks. In this thesis, we present a novel data compression solution for text-based data formats that is specifically designed with the limitations of resource-constrained devices and networks in mind. We call this solution Context- and Template-based Compression (CTC). CTC improves the data-level interoperability of IoT systems while requiring very few resources in terms of communication bandwidth, memory size and processing power.
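The template idea behind this style of compression can be sketched as follows. This is a generic illustration under an assumed message layout, not CTC's actual wire format: sender and receiver share a message template out of band, so only the variable field values need to travel.

```python
import json
import struct

# Shared out of band by sender and receiver; the fixed structure never
# travels on the wire, only the variable field values do.
TEMPLATE = '{{"sensor": "{0}", "temperature": {1}, "battery": {2}}}'

def compress(sensor: str, temperature: float, battery: int) -> bytes:
    # Pack just the values: length-prefixed name, float32, one byte.
    name = sensor.encode()
    return (struct.pack("!B", len(name)) + name
            + struct.pack("!fB", temperature, battery))

def decompress(blob: bytes) -> dict:
    n = blob[0]
    name = blob[1:1 + n].decode()
    temperature, battery = struct.unpack("!fB", blob[1 + n:])
    return json.loads(TEMPLATE.format(name, round(temperature, 2), battery))

packed = compress("kitchen", 21.5, 87)
full_json = TEMPLATE.format("kitchen", 21.5, 87)
assert decompress(packed) == json.loads(full_json)  # lossless round trip
assert len(packed) < len(full_json)  # 13 bytes vs. ~60 characters of JSON
```

A constrained device thus sends a few bytes per message while any party holding the template can reconstruct the full standard text-based representation, which is the interoperability benefit the thesis targets.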