8 research outputs found

    A survey of denial-of-service and distributed denial of service attacks and defenses in cloud computing

    Get PDF
    Cloud Computing is a computing model that allows ubiquitous, convenient and on-demand access to a shared pool of highly configurable resources (e.g., networks, servers, storage, applications and services). Denial-of-Service (DoS) and Distributed Denial-of-Service (DDoS) attacks are serious threats to the availability of Cloud services because of the many new vulnerabilities introduced by the nature of the Cloud, such as multi-tenancy and resource sharing. In this paper, new types of DoS and DDoS attacks in Cloud Computing are explored, especially XML-DoS and HTTP-DoS attacks, and possible detection and mitigation techniques are examined. This survey also provides an overview of the existing defense solutions and investigates the experiments and metrics that are usually designed and used to evaluate their performance, which is helpful for future research in the domain.
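To make the XML-DoS threat concrete, the sketch below shows one simple pre-parsing check of the kind such surveys discuss as mitigation. It is illustrative only and not taken from the paper; the thresholds, the function name and the decision to reject inline DTDs are assumptions.

```python
# Minimal sketch (illustrative only): a pre-parsing filter that rejects SOAP/XML
# requests showing two common XML-DoS symptoms -- inline DTD entity declarations
# (entity-expansion bombs) and excessive element nesting depth (coercive parsing).

MAX_DEPTH = 100          # assumed limit on element nesting
ALLOW_DTD = False        # most SOAP services never need an inline DTD

def looks_like_xml_dos(body: str) -> bool:
    """Return True if the raw XML body should be rejected before full parsing."""
    if not ALLOW_DTD and "<!DOCTYPE" in body:
        return True                      # inline DTD: possible entity expansion
    depth = max_depth = 0
    for token in body.split("<")[1:]:    # cheap single-pass tag scan
        if token.startswith("/"):
            depth -= 1                   # closing tag
        elif not token.startswith(("?", "!")) and not token.split(">")[0].rstrip().endswith("/"):
            depth += 1                   # opening (non-self-closing) tag
            max_depth = max(max_depth, depth)
    return max_depth > MAX_DEPTH

# Example: a deeply nested payload typical of coercive-parsing attacks
bomb = "<a>" * 500 + "x" + "</a>" * 500
print(looks_like_xml_dos(bomb))                                             # True
print(looks_like_xml_dos("<s:Envelope><s:Body>ok</s:Body></s:Envelope>"))   # False
```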

    Déni-de-service: Implémentation d'attaques XML-DOS et évaluation des défenses dans l'infonuagique

    Get PDF
    Cloud Computing is a computing paradigm that has emerged in the past few years as a very promising way of using highly scalable and adaptable computing resources, and of accessing them from anywhere in the world and from any terminal (mobile phone, tablet, laptop...). It allows companies or individuals to use computing infrastructures without having to physically own them, and without the burden of maintenance, installation or updates. To achieve that, Cloud Computing relies on already known and tested technologies such as virtualization and web services. Several Cloud Computing models exist, distinguished by how much of the infrastructure the user is in charge of, ranging from nothing at all (the provider is in charge of everything, from the operating system to the installed applications) to managing a whole virtual machine without even a preinstalled operating system. For instance, sharing and accessing a document on Dropbox, or running a resource-intensive application on a rented machine, are both examples of what can be done with Cloud Computing. In the case of Dropbox, the user does not care what resources are allocated for his requests, any more than he needs to know which operating system served the request or how the database was accessed; all those aspects, however, are part of what the user has to know and adjust in the second case. A Cloud Computing network can be public, as is the case for Amazon, which lets you access its resources for a fee, or private, when a company decides to build a cluster for its own needs. The strong appeal of Cloud Computing, for both businesses and individuals, dramatically increases the security risks, because it becomes a key target for attackers. This increased risk, added to the confidence users must place in their service provider when it comes to managing and protecting their data, may explain why many are still reluctant to take the leap to Cloud Computing: a company may hesitate for confidentiality reasons, while individuals may hesitate over privacy concerns. The broad range of technologies used in Cloud Computing exposes it to a wide variety of attacks, since it comes with all the vulnerabilities of any conventional network as well as the security breaches that affect virtual machines. However, those threats are usually well documented and easily prevented. This is not the case for the vulnerabilities that come from the use of web services, which are heavily used in Cloud Computing. Cloud Computing networks aim at being accessible from all over the world and on almost any device, and that implies using web services. Yet web services are extremely vulnerable to XML-DoS attacks. Those attacks take advantage of Simple Object Access Protocol (SOAP) requests carrying malicious XML content. Such requests can easily deplete a web server of its resources, be it CPU or memory, making it unavailable for legitimate users; this is exactly the goal of a denial-of-service attack. XML-DoS attacks are interesting in two ways. First, they are very hard to detect: since the attack takes place at the application layer, the user appears to be legitimate, and the attack cannot be detected at the TCP/IP layer.
Second, the resources needed to mount the attack are very low compared to what the web server needs to process even a basic but malformed request. This type of attack has received surprisingly little attention, despite its efficiency and the omnipresence of web services in Cloud Computing networks. This is why we decided to demonstrate and quantify the impact such attacks can have on Cloud Computing networks, and then to propose possible solutions and defenses. We considered a simulated environment the best option for various reasons, such as the possibility to monitor the resources of all the servers in the network and the greater freedom to build our own topology. Our first contribution is to identify the vulnerable equipment in a Cloud Computing network and the various ways to attack it, as well as the various forms an XML-DoS attack can take. Our second contribution is to use, modify and improve a Cloud Computing simulator (the GreenCloud simulator, based on NS2) in order to make the study of XML-DoS attacks possible. Once these changes are made, we show the efficiency of XML-DoS attacks and the impact they have on legitimate users. In addition, we compare the main existing defenses against XML-DoS attacks and web service attacks in general, and pick the one that seems best suited to protect Cloud Computing networks. We then put this defense to the test in our simulator to evaluate its efficiency. This evaluation must take into consideration not only the ability to mitigate the attack mounted in the previous step, but also the number of false positives and false negatives. One of the major challenges is to design a defense that protects all the machines in the network while still adapting to the great heterogeneity of the web services hosted at the same time in a Cloud Computing network. These experiments then lead to discussions and conclusions on how to deal with XML-DoS attacks, in particular which defenses should be adopted and which practices should be followed, because the evaluation of the defense in the previous step shows that it may not be the optimal solution; this is our final contribution. We made the assumption that all the Cloud Computing models could be the target of an XML-DoS attack in some way. Whatever the model, it may rely on web services and is therefore vulnerable to those attacks, whether through a web server handling incoming requests for all the users or a web server a user installed on the virtual machine he rents. We also thought it essential to take into consideration the specificities of virtual machines, such as the contention for resources among virtual machines located on the same physical machine.
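As a hedged illustration of the evaluation criteria mentioned above (ability to stop the attack plus false positives and false negatives), the following sketch computes those rates from per-request verdicts such as a simulation could produce. The data layout and numbers are assumptions, not the thesis' actual instrumentation.

```python
# Minimal sketch (not from the thesis): computing detection rate, false-positive
# rate and false-negative rate of a defense from simulated per-request verdicts.

def defense_metrics(verdicts):
    """verdicts: list of (is_attack, was_blocked) booleans, one per simulated request."""
    tp = sum(1 for a, b in verdicts if a and b)        # attack blocked
    fn = sum(1 for a, b in verdicts if a and not b)    # attack let through
    fp = sum(1 for a, b in verdicts if not a and b)    # legitimate user blocked
    tn = sum(1 for a, b in verdicts if not a and not b)
    return {
        "detection_rate": tp / (tp + fn) if tp + fn else 0.0,
        "false_positive_rate": fp / (fp + tn) if fp + tn else 0.0,
        "false_negative_rate": fn / (tp + fn) if tp + fn else 0.0,
    }

# Example: 3 attack requests (2 blocked) and 4 legitimate requests (1 blocked)
sample = [(True, True), (True, True), (True, False),
          (False, False), (False, True), (False, False), (False, False)]
print(defense_metrics(sample))
```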

    Resource allocation for cost minimization of a slice broker in a 5G-MEC scenario

    Get PDF
    The fifth generation (5G) of mobile networks may offer a custom logical and virtualized network called network slicing. This virtualization opens a new opportunity to share infrastructure resources and encourages cooperation between several Infrastructure Providers (InPs) to offer tailored network slices to the Slice Tenants (STs). The Slice Broker (SB) is emerging as an intermediate entity that purchases resources from the InPs and offers network slices to the STs. The main challenge of the SB is to jointly decide the purchase of heterogeneous (data and network) resources from multiple InPs and create the slices to meet the various requests from the STs. Being an economic entity, the target of the SB is to maximize its profit by minimizing costs while satisfying all the ST requests. This paper formulates the SB cost-minimization problem and uses CPLEX to obtain the optimal solution. The problem formulation considers the realistic scenario in which the InPs offer the computing, storage and network resources using predetermined configurations; therefore, for each computing platform and logical connection, the SB may select one of the configurations. The proposed cost-minimization problem is compared with three alternative problems that have three different objectives: computing platform consolidation, network connection consolidation, and joint computing-network consolidation. Computing platform and network connection consolidation are currently the most common approaches for decreasing resource costs. However, the results show that consolidating computing and network resources fails to reach the actual minimal cost. The proposed problem finds the cheapest solution, which saves at least 30% of the total cost of the other approaches in every evaluated scenario. Moreover, consolidating the number of computing platforms can lead to the most expensive solution, up to 40% higher than the optimal solution of our proposed problem.
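A minimal sketch of the broker's decision described above: pick one priced configuration per computing platform and per logical connection so that aggregated tenant demand is covered at minimum total cost. This toy brute-force version is only illustrative; the paper formulates the problem as an optimization model solved with CPLEX, and all names and numbers below are assumptions.

```python
# Minimal sketch (not the paper's CPLEX model): exhaustive search over the
# predetermined configurations offered by the InPs to find the cheapest
# combination that satisfies the slice tenants' aggregated demand.

from itertools import product

# (capacity, cost) configurations offered by the InPs -- illustrative numbers only
platform_configs = {"edge-1": [(4, 10), (8, 18)], "edge-2": [(4, 9), (16, 30)]}
link_configs = {"l1": [(100, 5), (300, 12)]}   # (bandwidth in Mb/s, cost)
cpu_demand = 10                                 # total compute requested by tenants
bw_demand = {"l1": 250}                         # bandwidth requested per link

best = None
for p_choice in product(*platform_configs.values()):
    for l_choice in product(*link_configs.values()):
        cpu_ok = sum(cap for cap, _ in p_choice) >= cpu_demand
        bw_ok = all(cap >= bw_demand[l] for (cap, _), l in zip(l_choice, link_configs))
        cost = sum(c for _, c in p_choice) + sum(c for _, c in l_choice)
        if cpu_ok and bw_ok and (best is None or cost < best[0]):
            best = (cost, p_choice, l_choice)

print(best)  # cheapest feasible combination: (cost, platform configs, link configs)
```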

    Modelo de Calidad para Servicios Cloud

    Full text link
    Context: Cloud computing is a model of provision and consumption of services that offers many advantages to companies (high availability, flexibility, maximum utilization of resources, etc.), which translate into quality requirements that must be met by the service. In recent years, numerous quality attributes and metrics have been proposed for cloud services, but there is no study that collects this information and classifies it with respect to internal and external characteristics of the service (Quality of Service - QoS) and characteristics in use of the service (Quality of Experience - QoE). Objective: The objective of this final master's work is to define a quality model specific to cloud services, aligned with ISO/IEC 25010, which integrates the quality characteristics, attributes and metrics proposed in the literature and allows the quality of cloud artifacts to be assessed at various stages of the life cycle. Method: We performed a systematic review of the literature in order to identify and analyze the attributes and quality metrics proposed to assess the quality of cloud services. This method has been widely used in the field of Software Engineering and has proven useful for collecting and analyzing the existing information on a particular research topic. Results: The result is a quality model for cloud services built from the 178 attributes and 364 metrics obtained from the systematic review. In particular, the results of the review indicate that 48% of the proposed metrics measure performance efficiency, followed by reliability metrics with 23%. With respect to the life-cycle phase, 55% of these metrics are used in the operation phase and 32% in the acquisition phase. Regarding the stakeholders' point of view, 39% of the metrics are oriented to the service provider, 33% to the consumer, 7% to the facilitator (broker) and only 5% to the service developer. With respect to the cloud artifacts evaluated, most metrics (97%) are applied to the cloud service being tested or deployed in the cloud; only 2% are applied to the service architecture and 1% to the service specification. With regard to validation, the results show that 99% of the proposed metrics lack any type of validation, although 44% present a proof of concept illustrating how the metric can be used. Additionally, we identified 27 attributes specific to cloud services, elasticity being the most frequently named, at 14%. Conclusions: The results of this work provide relevant information on the current status of, and the gaps that exist in, the quality assessment of cloud services. They have also allowed us to define a quality model that addresses some of the identified shortcomings. As future work, we intend to refine the proposed model, propose new metrics, adapt some existing ones for evaluating cloud architectures, and conduct empirical studies to provide evidence about the usefulness of a set of metrics. Navas Rosales, RM. (2016). Modelo de Calidad para Servicios Cloud. http://hdl.handle.net/10251/77847
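As a rough illustration of how the review's classification dimensions (ISO/IEC 25010 characteristic, life-cycle phase, stakeholder, evaluated artifact, validation status) can be recorded and counted, here is a small sketch. It is an assumption of ours, not the thesis' actual catalogue format, and the example metrics are invented for illustration.

```python
# Minimal sketch (assumed structure, not the thesis' catalogue): one record per
# metric found in the review, plus the frequency counts behind the reported
# percentages (e.g. 48% performance efficiency, 55% operation phase).

from dataclasses import dataclass
from collections import Counter

@dataclass
class CloudMetric:
    name: str
    characteristic: str   # e.g. "performance efficiency", "reliability"
    phase: str            # e.g. "operation", "acquisition"
    stakeholder: str      # e.g. "provider", "consumer", "broker", "developer"
    artifact: str         # e.g. "deployed service", "architecture", "specification"
    validated: bool

catalogue = [
    CloudMetric("mean response time", "performance efficiency", "operation",
                "consumer", "deployed service", False),
    CloudMetric("elasticity speedup", "performance efficiency", "operation",
                "provider", "deployed service", True),
]

by_characteristic = Counter(m.characteristic for m in catalogue)
print(by_characteristic.most_common())   # frequencies like those the abstract reports
```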

    Detection and Mitigation of Steganographic Malware

    Get PDF
    A new attack trend concerns the use of some form of steganography and information hiding to make malware stealthier and able to elude many standard security mechanisms. Therefore, this Thesis addresses the detection and mitigation of this class of threats. In particular, it considers malware implementing covert communications within network traffic or cloaking malicious payloads within digital images. The first research contribution of this Thesis is in the detection of network covert channels. Unfortunately, the literature on the topic lacks real traffic traces or attack samples with which to perform precise tests or security assessments. Thus, a preliminary research activity has been devoted to developing two ad hoc tools. The first allows covert channels targeting the IPv6 protocol to be created by eavesdropping flows, whereas the second allows secret data to be embedded within arbitrary traffic traces that can be replayed to perform investigations in realistic conditions. This Thesis then starts with a security assessment concerning the impact of hidden network communications in production-quality scenarios. Results were obtained by considering channels cloaking data in the most popular protocols (e.g., TLS, IPv4/v6, and ICMPv4/v6) and showed that de facto standard intrusion detection systems and firewalls (i.e., Snort, Suricata, and Zeek) are unable to spot this class of hazards. Since malware can conceal information (e.g., commands and configuration files) in almost every protocol, traffic feature or network element, configuring or adapting pre-existing security solutions may not be straightforward. Moreover, inspecting multiple protocols, fields or conversations at the same time could lead to performance issues. Thus, a major effort has been devoted to developing a suite based on the extended Berkeley Packet Filter (eBPF) to gain visibility over different network protocols and components and to efficiently collect various performance indicators or statistics by using a single technology. This part of the research made it possible to spot the presence of network covert channels targeting the header of the IPv6 protocol or the inter-packet time of generic network conversations. In addition, the eBPF-based approach turned out to be very flexible and also allowed hidden data transfers between two processes co-located on the same host to be revealed. Another important contribution of this part of the Thesis concerns the deployment of the suite in realistic scenarios and its comparison with other similar tools. Specifically, a thorough performance evaluation demonstrated that eBPF can be used to inspect traffic and reveal the presence of covert communications even under high loads, e.g., it can sustain rates up to 3 Gbit/s with commodity hardware. To further address the problem of revealing network covert channels in realistic environments, this Thesis also investigates malware targeting traffic generated by Internet of Things devices. In this case, an incremental ensemble of autoencoders has been considered to face the "unknown" location of the hidden data generated by a threat covertly exchanging commands with a remote attacker. The second research contribution of this Thesis is in the detection of malicious payloads hidden within digital images. In fact, the majority of real-world malware exploits hiding methods based on Least Significant Bit steganography and some of its variants, such as the Invoke-PSImage mechanism. Therefore, a significant amount of research has been done to detect the presence of hidden data and classify the payload (e.g., malicious PowerShell scripts or PHP fragments). To this aim, mechanisms leveraging Deep Neural Networks (DNNs) proved to be flexible and effective, since they can learn by combining raw low-level data and can be updated or retrained to consider unseen payloads or images with different features. To take realistic threat models into account, this Thesis studies malware targeting different types of images (i.e., favicons and icons) and various payloads (e.g., URLs and Ethereum addresses, as well as webshells). The obtained results showed that DNNs can be considered a valid tool for spotting the presence of hidden contents, since their detection accuracy remains above 90% even when facing "elusion" mechanisms such as basic obfuscation techniques or alternative encoding schemes. Lastly, when detection or classification is not possible (e.g., due to resource constraints), approaches enforcing "sanitization" can be applied. Thus, this Thesis also considers autoencoders able to disrupt hidden malicious contents without degrading the quality of the image.
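For context on the hiding scheme the detectors above target, the following is a minimal sketch of Least Significant Bit embedding and extraction over a raw pixel buffer. It is a generic textbook illustration, not code from the Thesis; the payload, cover size and function names are assumptions, and a sanitizer only needs to disturb these low-order bits to disrupt such a payload.

```python
# Minimal LSB sketch (illustrative, not the Thesis' detector): the payload is
# spread over the lowest bit of consecutive pixel bytes. Pixel data is a plain
# bytearray here to keep the example self-contained.

def lsb_embed(pixels: bytearray, payload: bytes) -> bytearray:
    out = bytearray(pixels)
    bits = [(byte >> (7 - i)) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(out):
        raise ValueError("cover too small for payload")
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & 0xFE) | bit      # overwrite the lowest bit only
    return out

def lsb_extract(pixels: bytes, n_bytes: int) -> bytes:
    bits = [b & 1 for b in pixels[: n_bytes * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[k * 8:(k + 1) * 8]))
        for k in range(n_bytes)
    )

cover = bytearray(range(256)) * 4                # stand-in for raw image pixels
stego = lsb_embed(cover, b"cmd:run")
print(lsb_extract(stego, 7))                     # b'cmd:run'
```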

    Modelo Comparativo de Plataformas Cloud y EvaluaciĂłn de Microsoft Azure, Google App Engine y AmazonEC2

    Full text link
    There is a large number of cloud service providers, the most important being Microsoft, Google and Amazon; other providers include Rackspace, IBM, Oracle, Salesforce, etc. A relevant aspect for developers and customers is to determine the characteristics of these providers in order to have objective information on how to choose between one platform or another depending on their objectives and needs. In this project, we have carried out a study to determine the relevant quality characteristics of cloud platforms, and a quality model based on ISO/IEC 25010 has been proposed to guide users in the comparison and selection of these platforms. The model is supported by a recommendation system that allows users to specify their objectives and compare cloud platforms through a set of quality attributes and metrics. This model has been applied to a case study comparing the Microsoft Azure, Google App Engine and Amazon Elastic Compute Cloud (EC2) platforms, allowing the evaluation of their most relevant quality characteristics. Álvarez Vañó, JM. (2018). Modelo Comparativo de Plataformas Cloud y Evaluación de Microsoft Azure, Google App Engine y AmazonEC2. http://hdl.handle.net/10251/101221
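As a hedged sketch of what a recommendation step like the one described above can look like, the snippet below ranks the three platforms by a weighted sum of normalised characteristic scores. The weighting scheme and every numeric value are assumptions for illustration only, not results from the project.

```python
# Minimal sketch (assumed data, not the project's recommender): weighted scoring
# of platforms once metric values are normalised to [0, 1] per characteristic.

weights = {"performance efficiency": 0.4, "reliability": 0.35, "usability": 0.25}

scores = {   # normalised characteristic scores per platform (illustrative)
    "Microsoft Azure":   {"performance efficiency": 0.8, "reliability": 0.9, "usability": 0.7},
    "Google App Engine": {"performance efficiency": 0.7, "reliability": 0.8, "usability": 0.9},
    "Amazon EC2":        {"performance efficiency": 0.9, "reliability": 0.85, "usability": 0.6},
}

ranking = sorted(
    ((sum(weights[c] * v for c, v in s.items()), name) for name, s in scores.items()),
    reverse=True,
)
for total, name in ranking:
    print(f"{name}: {total:.2f}")   # higher weighted score ranks first
```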

    Design and Evaluation of Compression, Classification and Localization Schemes for Various IoT Applications

    Get PDF
    Nowadays we are surrounded by a huge number of objects able to communicate, read information such as temperature, light or humidity, and infer new information through exchanging data. These kinds of objects are not limited to high-tech devices such as desktop PCs, laptops, new-generation mobile phones (i.e., smartphones) and other devices with high capabilities, but also include commonly used objects, such as ID cards, driver licenses, clocks, etc., that can be made smart by allowing them to communicate. Thus, the analog world of just a few years ago is becoming the digital world of the Internet of Things (IoT), where the information from a single object can be retrieved from the Internet. The IoT paradigm opens several architectural challenges, including self-organization, self-management and self-deployment of the smart objects, as well as the problem of how to minimize the usage of the limited resources of each device. The concept of IoT covers many communication paradigms such as WiFi, Radio Frequency Identification (RFID), and Wireless Sensor Networks (WSNs). Each paradigm can be thought of as an IoT island where each device can communicate directly with other devices. The thesis is divided into sections in order to cover each problem mentioned above. The first step is to understand the possibility of inferring new knowledge from the devices deployed in a scenario. For this reason, the research focuses on the semantic web (Web 3.0) to assign a semantic meaning to each thing inside the architecture. The semantic concept alone is not enough to infer new information from the gathered data; in fact, it is necessary to organize the data in a hierarchical form defined by an Ontology. Through the exploitation of the Ontology, it is possible to apply semantic reasoning engines to infer new knowledge about the network. The second step of the dissertation deals with minimizing the usage of every node in a WSN. The main purpose of each node is to collect environmental data and to exchange them with other nodes. To minimize battery consumption, it is necessary to limit radio usage. Therefore, we implemented Razor, a new lightweight algorithm which is expected to improve data compression and classification by leveraging the advantages offered by data mining methods for optimizing communications and by enhancing information transmission to simplify data classification. Data compression is performed by studying the well-known Vector Quantization (VQ) theory in order to create the codebooks necessary for signal compression. At the same time, it is required to give a semantic meaning to unknown signals. In this way, the codebook is able not only to compress signals, but also to classify unknown signals. Razor is compared with both state-of-the-art compression and signal classification techniques for WSNs. The third part of the thesis covers the concept of smart objects applied to robotics research. A critical issue is how a robot can localize and retrieve smart objects in a real scenario without any prior knowledge. To achieve this, it is possible to exploit the smart object concept and localize the objects through RSSI measurements. After the localization phase, the robot can exploit its own camera to retrieve the objects. Several filtering algorithms are developed in order to mitigate the multi-path issue due to the wireless communication channel and to achieve a better distance estimation through the RSSI measurement.
The last part of the dissertation deals with the design and development of a Cognitive Network (CN) testbed using off-the-shelf devices. The device type was chosen considering cost, usability, configurability, mobility and the possibility to modify the Operating System (OS) source code; thus, the best choice was to select devices based on the Linux kernel, such as Android OS. The ability to modify the operating system is required to extract the TCP/IP protocol stack parameters for the CN paradigm. It is necessary to monitor the network status in real time and to modify the critical parameters in order to improve performance metrics such as bandwidth consumption, number of hops needed to exchange the data, and throughput.
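To illustrate the Vector Quantization idea Razor builds on (the codebook is used both to compress a signal window into a single index and to classify it), here is a minimal nearest-codeword sketch. The codebook values and labels are assumptions, not the thesis' trained codebooks.

```python
# Minimal VQ sketch (illustrative, not Razor itself): each sensor reading window
# is replaced by the index of its nearest codeword, which compresses the signal
# (one index instead of a vector) and classifies it if codewords carry labels.

def nearest_codeword(window, codebook):
    """Return (index, distortion) of the closest codeword (squared Euclidean)."""
    best_idx, best_dist = 0, float("inf")
    for idx, cw in enumerate(codebook):
        dist = sum((a - b) ** 2 for a, b in zip(window, cw))
        if dist < best_dist:
            best_idx, best_dist = idx, dist
    return best_idx, best_dist

codebook = [[20.0, 20.5, 21.0], [30.0, 30.5, 31.0]]   # e.g. "mild" vs "hot" patterns
labels = ["mild", "hot"]

reading = [29.4, 30.1, 30.8]                          # raw 3-sample window
idx, err = nearest_codeword(reading, codebook)
print(idx, labels[idx], err)   # transmit only `idx`; the label classifies the signal
```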

    A clusterized firewall framework for cloud computing

    Full text link
    Cloud computing is becoming popular as the next generation of computing infrastructure. However, with data and business applications outsourced to a third party, how to protect cloud data centers from numerous attacks has become a critical concern. In this paper, we propose a clusterized framework for a cloud firewall, which features performance and cost evaluation. To provide a quantitative performance analysis of the cloud firewall, a novel M/Geo/1 analytical model is established. The model allows cloud defenders to extract key system measures, such as the request response time, and to determine how many resources are needed to guarantee quality of service (QoS). Moreover, we give an insight into the financial cost of the proposed cloud firewall. Finally, our analytical results are verified by simulation experiments.
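As a hedged companion to the analytical model, the toy simulation below estimates the mean request response time of a single firewall node with Poisson arrivals and geometrically distributed service times (an M/Geo/1-style queue). It is not the paper's model or its simulation code; the arrival rate, service parameter and sample size are assumptions.

```python
# Minimal sketch (assumed parameters, not the paper's model): single-server FIFO
# queue with Poisson arrivals and geometric service times, used to estimate the
# mean response time a cloud firewall node would impose on requests.

import random

def geometric(rng, p):
    """Number of service slots until completion, Geometric(p) on {1, 2, ...}."""
    k = 1
    while rng.random() >= p:    # keep drawing slots until the request completes
        k += 1
    return k

def simulate(lam=0.6, p=0.8, slot=1.0, n=200_000, seed=1):
    """lam: arrival rate; p: per-slot completion probability; slot: slot length."""
    rng = random.Random(seed)
    clock = server_free_at = total_response = 0.0
    for _ in range(n):
        clock += rng.expovariate(lam)            # next Poisson arrival
        service = slot * geometric(rng, p)       # geometric service time
        start = max(clock, server_free_at)       # wait if the server is busy
        server_free_at = start + service
        total_response += server_free_at - clock
    return total_response / n

print(f"estimated mean response time: {simulate():.2f} time units")
```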