10 research outputs found
Machine Learning Defence Mechanism for Securing the Cloud Environment
A computing paradigm known as "cloud computing" offers end users on-demand, scalable, and measurable services. Today's businesses rely heavily on it for a variety of reasons, including cost savings, infrastructure, development platforms, data processing, and data analytics. End users can access the cloud service providers' (CSPs') services from any location, at any time, through a web application. Protecting the cloud infrastructure is therefore of the highest importance, and several studies using a variety of technologies have been conducted to develop more effective defenses against cloud threats. In recent years, machine learning technology has proven particularly effective at securing the cloud environment: machine learning algorithms are trained on a variety of real-world datasets to create models that can automate the identification of cloud threats with better accuracy than any other technology. In this study, recent research publications that used machine learning as a defense mechanism against cloud threats are reviewed.
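As a concrete, heavily simplified illustration of the approach these publications describe, the sketch below trains a toy classifier on synthetic traffic-flow features. The feature vectors, labels, and the nearest-centroid method are illustrative assumptions, not taken from any reviewed paper.

```python
# Minimal sketch of training a classifier to flag anomalous cloud traffic.
# The feature vectors (packets/sec, bytes/sec, distinct ports) and labels
# are synthetic placeholders, not a real intrusion dataset.

def centroid(rows):
    """Column-wise mean of a list of equal-length feature vectors."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def train(samples):
    """samples: list of (features, label); returns one centroid per label."""
    by_label = {}
    for feats, label in samples:
        by_label.setdefault(label, []).append(feats)
    return {label: centroid(rows) for label, rows in by_label.items()}

def predict(model, feats):
    """Assign the label of the nearest centroid (squared Euclidean distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(feats, c))
    return min(model, key=lambda label: dist(model[label]))

training = [
    ([120, 9_000, 3], "benign"),
    ([150, 11_000, 4], "benign"),
    ([9_500, 700_000, 60], "ddos"),
    ([11_000, 820_000, 75], "ddos"),
]
model = train(training)
print(predict(model, [10_000, 750_000, 70]))  # → ddos
```

In a real study the centroid model would be replaced by whatever algorithm the paper evaluates, and the synthetic rows by a labelled intrusion dataset.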
Collaborative Intrusion Detection in Federated Cloud Environments
Moving services to the Cloud is a trend that has steadily gained popularity over recent years, with a constant increase in the sophistication and complexity of such services. Today, critical infrastructure operators are considering moving their services and data to the Cloud, and infrastructure vendors will inevitably take advantage of the benefits Cloud Computing has to offer. As Cloud Computing grows in popularity, new models are deployed to exploit its full capacity even further, one of which is the deployment of Cloud federations. A Cloud federation is an association among different Cloud Service Providers (CSPs) with the goal of sharing resources and data. In providing a larger-scale, higher-performance infrastructure, federation enables on-demand provisioning of complex services. In this paper we contribute to this area by outlining our proposed methodology for robust collaborative intrusion detection in a federated Cloud environment. For collaborative intrusion detection we use the Dempster-Shafer theory of evidence to fuse the beliefs provided by the monitoring entities and take the final decision regarding a possible attack. Protecting the federated Cloud against cyber attacks is a vital concern, due to the potential for significant economic consequences.
DIaaS: Data Integrity as a Service in the Cloud
Un acercamiento al estado del arte en cloud computing
Cloud computing uses the Internet to deliver core software and hardware resources to a broad gamut of end users, from international organizations to ordinary individuals. Because its technological management model changes the traditional way network infrastructure and hardware and software resources are provisioned and deployed, turning them into services, cloud computing has positioned itself in the world of Information and Communication Technologies (ICT) as a new paradigm that seems set for a long stay. The main objective of this paper is an approach to the state of the art of this technology in constant beta, serving as a preamble to the alchemy needed to put science at the service of mankind. The first section offers an appetizer of the different views of what is defined as cloud computing. Section two addresses the definition of cloud computing, its types, and its characteristics. Section three focuses on the existing consensus around the definition of three basic layers: Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). The fourth section, on cloud providers, mentions the most representative platforms and the maturity levels considered in these organizations. The fifth part covers the prevailing regulatory aspects of ICT in Colombia. Finally, we present the conclusions drawn from the research work, along with future work.
Déni-de-service: Implémentation d'attaques XML-DOS et évaluation des défenses dans l'infonuagique
ABSTRACT
Cloud Computing is a computing paradigm that has emerged in the past few years as a very promising way of using highly scalable and adaptable computing resources, and of accessing them from anywhere in the world and from any terminal (mobile phone, tablet, laptop...). It allows companies or individuals to use computing infrastructures without having to physically own them, and without the burden of maintenance, installation, or updates. To achieve that, Cloud Computing uses already known and tested technologies such as virtualization and web services. Several Cloud Computing models exist, distinguished by how much of the infrastructure the user is in charge of, ranging from nothing at all (the provider is in charge of everything, from the operating system to the installed applications) to managing a whole virtual machine without even an operating system preinstalled. For instance, sharing and accessing a document on Dropbox, or running a resource-intensive application on a rented machine, are both examples of what can be done with Cloud Computing. In the case of Dropbox, the user does not care what resources are allocated for his requests, any more than he needs to know what operating system served the request or how the database was accessed; but all those aspects are part of what the user has to know and adjust in the second case. A Cloud Computing network can be public, as is the case for Amazon, which lets you access its resources for a fee, or private, as when a company builds a cluster for its own needs. The strong appeal of Cloud Computing, for both businesses and individuals, dramatically increases the security risks, because it becomes a key target for attackers. This increased risk, added to the confidence users must place in their service provider when it comes to managing and protecting their data, may explain why many are still reluctant to take the leap to Cloud Computing.
For instance, a company may be reluctant for confidentiality reasons, while individuals may hesitate over privacy concerns. The broad range of technologies used in Cloud Computing exposes it to a wide variety of attacks, since it comes with all the vulnerabilities of any conventional network, plus the security breaches that affect virtual machines. Those threats, however, are usually well documented and easily prevented. This is not the case for the vulnerabilities that come from web services, which are heavily used in Cloud Computing.
Cloud Computing networks aim at being accessible from all over the world and on almost any device, and that implies using web services. Yet web services are extremely vulnerable to XML-DoS attacks, which take advantage of Simple Object Access Protocol (SOAP) requests carrying malicious XML content. Such requests can easily deplete a web server's resources, be it CPU or memory, making it unavailable for legitimate users; this is exactly the goal of a denial-of-service attack. XML-DoS attacks are extremely interesting in two ways. First, they are very hard to detect, since the attack takes place at the application layer, so the user appears to be legitimate (it is impossible to detect at the TCP/IP layer). Second, the resources needed to mount the attack are very low compared to what the web server needs to process even a basic but malformed request. This type of attack has received surprisingly little attention, despite its efficiency and the omnipresence of web services in Cloud Computing networks. This is why we decided to prove and quantify the impact such attacks can have on Cloud Computing networks, and later propose possible solutions and defenses. We judged that a simulated environment was the best option for various reasons, such as the possibility of monitoring the resources of all the servers in the network, and the greater freedom to build our own topology.
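The asymmetry described above suggests rejecting suspicious payloads before they are fully processed. Below is a minimal sketch of such a pre-parse guard in Python; the specific limits, and the idea of capping size, nesting depth, and element count, are illustrative assumptions rather than the defense evaluated in this thesis.

```python
# Sketch of a pre-parse guard against XML-DoS payloads (oversized bodies,
# deeply nested or element-flooded documents). The limits are illustrative,
# not tuned values from the thesis.
import xml.etree.ElementTree as ET

MAX_BYTES = 64 * 1024   # reject bodies larger than 64 KiB
MAX_DEPTH = 32          # reject pathological nesting
MAX_ELEMENTS = 10_000   # reject element floods

def check_payload(body: str) -> bool:
    """Return True if the XML body stays within the resource limits."""
    if len(body.encode()) > MAX_BYTES:
        return False
    parser = ET.XMLPullParser(events=("start", "end"))
    depth = elements = 0
    try:
        parser.feed(body)
        for event, _elem in parser.read_events():
            if event == "start":
                depth += 1
                elements += 1
                if depth > MAX_DEPTH or elements > MAX_ELEMENTS:
                    return False
            else:
                depth -= 1
        parser.close()
    except ET.ParseError:
        return False  # malformed XML is rejected outright
    return True

benign = "<Envelope><Body><op>1</op></Body></Envelope>"
nested = "<a>" * 100 + "</a>" * 100
print(check_payload(benign), check_payload(nested))  # → True False
```

A production guard would also need to disable external entity resolution in whatever parser actually handles the SOAP request, which this sketch does not address.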
Our first contribution is to identify the vulnerable equipment in a Cloud Computing network and the various ways to attack it, as well as the various forms an XML-DoS attack can take. Our second contribution is then to use, modify, and improve a Cloud Computing simulator (the GreenCloud simulator, based on NS2) in order to make the study of XML-DoS attacks possible. Once the changes are made, we show the efficiency of XML-DoS attacks and the impact they have on legitimate users. In addition, we compare the main existing defenses against XML-DoS attacks and web service attacks in general, and pick the one that seems best suited to protect Cloud Computing networks. We then put this defense to the test in our simulator to evaluate its efficiency. This evaluation must consider not only the ability to mitigate the attack mounted in the previous step, but also the number of false positives and false negatives. One of the major challenges is to design a defense that can protect all the machines in the network while still adapting to the great heterogeneity of the web services hosted at the same time in a Cloud Computing network. These experiments then feed discussions and conclusions on how to deal with XML-DoS attacks, in particular what defenses should be adopted and what practices should be followed, because the evaluation shows that the chosen defense may not be the optimal solution; this is our final contribution. We made the assumption that all Cloud Computing models could be the target of an XML-DoS attack in some way. Whatever the model, it can use web services and is then vulnerable to those attacks in one way or another, whether through a web server handling incoming requests for all users or a web server a user installed on the virtual machine he rents. We also thought it essential to take into account the specificities of virtual machines, such as the contention for resources when they are located on the same physical machine.
Collaborative Intrusion Detection in Federated Cloud Environments using Dempster-Shafer Theory of Evidence
Moving services to the Cloud environment is a trend that has been increasing in recent years, with a constant increase in the sophistication and complexity of such services. Today, even critical infrastructure operators are considering moving their services and data to the Cloud. As Cloud computing grows in popularity, new models are deployed to further the associated benefits. Federated Clouds are one such model, offering an alternative for companies reluctant to move their data out of house to a Cloud Service Provider (CSP) due to security and confidentiality concerns. The lack of collaboration among different components within a Cloud federation, or among CSPs, for the detection or prevention of attacks is an issue. Because Cloud environments and Cloud federations are large scale, any potential solution for protecting these services and data must scale alongside the environment and adapt to the underlying infrastructure without issues or performance implications. This thesis presents a novel architecture for collaborative intrusion detection specifically for CSPs within a Cloud federation. Our approach offers a proactive model for Cloud intrusion detection based on the distribution of responsibilities, whereby the responsibility for managing the elements of the Cloud is distributed among several monitoring nodes and a broker, utilising our service-based collaborative intrusion detection, "Security as a Service", methodology. For collaborative intrusion detection, the Dempster-Shafer (D-S) theory of evidence is applied: a fusion node collects and fuses the information provided by the monitoring entities and takes the final decision regarding a possible attack. This type of detection and prevention helps increase resilience to attacks in the Cloud.
The main novel contribution of this project is that it provides the means by which DDoS attacks are detected within a Cloud federation, so as to enable an early, propagated response to block the attack. This inter-domain cooperation offers holistic security and adds to defence in depth. However, while the utilisation of D-S seems promising, there is an issue regarding conflicting evidence, which is addressed with an extended two-stage D-S fusion process. The evidence from the research strongly suggests that fusion algorithms can play a key role in autonomous decision-making schemes; however, our experimentation highlights areas in which improvements are needed before the approach can be fully applied to federated environments.
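To make the fusion step concrete, here is a minimal sketch of Dempster's rule of combination over a two-hypothesis frame {attack, normal}. The node masses are invented for illustration, and the thesis's extended two-stage conflict handling is not reproduced; only the classical single-stage rule is shown.

```python
# Sketch of Dempster's rule of combination for two monitoring nodes
# reporting belief masses over the frame {attack, normal}. The mass
# values are illustrative, not figures from the thesis.
from itertools import product

def combine(m1, m2):
    """Fuse two mass functions (dicts of frozenset -> mass) with
    Dempster's rule, normalising out the conflict K."""
    fused = {}
    conflict = 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            fused[inter] = fused.get(inter, 0.0) + x * y
        else:
            conflict += x * y
    if conflict >= 1.0:
        raise ValueError("total conflict: evidence cannot be combined")
    return {s: v / (1.0 - conflict) for s, v in fused.items()}

ATTACK, NORMAL = frozenset({"attack"}), frozenset({"normal"})
THETA = ATTACK | NORMAL  # mass on the whole frame means "unsure"

node1 = {ATTACK: 0.7, THETA: 0.3}               # sensor fairly sure of an attack
node2 = {ATTACK: 0.6, NORMAL: 0.1, THETA: 0.3}  # second sensor, mild disagreement
fused = combine(node1, node2)
print(round(fused[ATTACK], 3))  # → 0.871
```

The fused belief in "attack" exceeds either node's individual belief, which is the behaviour that makes D-S attractive for corroborating alarms across a federation; the conflict term K is also exactly where the problem of strongly disagreeing sensors, noted above, shows up.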
Cloud Broker Based Trust Assessment of Cloud Service Providers
Cloud computing is emerging as the future Internet technology due to advantages such as shared IT resources, unlimited scalability and flexibility, and a high level of automation. Alongside this rapid growth, cloud computing also raises concerns about the security, trust, and privacy of the applications and data hosted in the cloud environment. With a large number of cloud service providers available, determining which providers can be trusted for the efficient operation of a service deployed in the provider's environment is a key requirement for service consumers.
In this thesis, we provide an approach to assess the trustworthiness of cloud service providers. We propose a trust model that considers real-time cloud transactions and represents opinions with a model that explicitly captures uncertainty. The trustworthiness of a cloud service provider is modelled using opinions obtained from three different computations: (i) compliance with SLA (Service Level Agreement) parameters, (ii) service provider satisfaction ratings, and (iii) service provider behaviour. In addition, the trust model is extended to encompass essential Cloud characteristics, a credibility function for weighting feedback, and filtering mechanisms to screen out dubious feedback providers. The credibility function and the early filtering mechanisms in the extended trust model are shown to reduce the impact of malicious feedback providers.
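As a rough illustration of how such a model might combine its three opinion sources, the sketch below uses a median-based filter as a crude stand-in for credibility filtering, plus fixed weights. All numbers, weights, and the filtering rule are assumptions for illustration, not the thesis's actual model.

```python
# Sketch of the trust-assessment idea: fuse SLA compliance, consumer
# satisfaction and provider behaviour into one score, filtering out
# dubious feedback first.
from statistics import median

def filter_feedback(ratings, tolerance=0.3):
    """Drop ratings further than `tolerance` from the median
    (a crude stand-in for credibility-based filtering)."""
    m = median(ratings)
    return [r for r in ratings if abs(r - m) <= tolerance]

def trust_score(sla_compliance, ratings, behaviour, weights=(0.4, 0.3, 0.3)):
    """Weighted combination of the three opinion sources, each in [0, 1]."""
    kept = filter_feedback(ratings)
    satisfaction = sum(kept) / len(kept)
    w_sla, w_sat, w_beh = weights
    return w_sla * sla_compliance + w_sat * satisfaction + w_beh * behaviour

# Two honest raters (~0.8) and one outlier trying to drag the score down.
score = trust_score(sla_compliance=0.9, ratings=[0.8, 0.85, 0.1], behaviour=0.7)
print(score)
```

The point of the example is the structure: the outlier rating of 0.1 is filtered out before aggregation, so a single malicious feedback provider cannot dominate the trust score.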
A reactive architecture for cloud-based system engineering
PhD Thesis
Software system engineering is increasingly practised over globally distributed locations, a practice termed Global Software Development (GSD). GSD has become a business necessity mainly because of the scarcity of resources, cost, and the need to locate development closer to the customers. GSD is highly dependent on requirements management, but system requirements continuously change. Poorly managed change in requirements affects the overall cost, schedule, and quality of GSD projects. It is particularly challenging to manage and trace such changes, and hence we require a rigorous requirement change management (RCM) process. RCM is not trivial even in collocated software development, and the presence of geographical, cultural, social, and temporal factors makes it profoundly difficult in GSD. Existing RCM methods do not take these issues into consideration. Considering the state of the art in RCM, design and analysis of architecture, and cloud accountability, this work contributes:
1. an alternative and novel mechanism for effective information and knowledge-sharing towards RCM and traceability;
2. a novel methodology for the design and analysis of small-to-medium-size cloud-based systems, with a particular focus on the trade-off of quality attributes;
3. a dependable framework that facilitates the RCM and traceability method for cloud-based system engineering;
4. a novel methodology for assuring cloud accountability in terms of dependability;
5. a cloud-based framework to facilitate the cloud accountability methodology.
The results show a traceable RCM linkage between system engineering processes and stakeholder requirements for cloud-based GSD projects, which improves on existing approaches. The results also show improved dependability assurance for systems interfacing with the unpredictable cloud environment. We conclude that RCM with a clear focus on traceability, facilitated by a dependable framework, improves the chance of developing a cloud-based GSD project successfully.
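To illustrate the kind of traceable RCM linkage this abstract argues for, here is a minimal sketch of tracing a requirement change to its downstream artifacts. The record fields, helper names, and sample data are hypothetical, not drawn from the thesis.

```python
# Sketch of a traceable RCM linkage: each change request is linked to
# requirement IDs, and each requirement to the design/code artifacts it
# drives, so the impact of a change can be traced end to end.
from dataclasses import dataclass, field

@dataclass
class Requirement:
    req_id: str
    text: str
    artifacts: list = field(default_factory=list)  # design docs, modules, tests

@dataclass
class ChangeRequest:
    cr_id: str
    rationale: str
    affected: list = field(default_factory=list)   # requirement IDs

def impacted_artifacts(change, requirements):
    """Trace a change request to every downstream artifact it touches."""
    index = {r.req_id: r for r in requirements}
    hits = []
    for req_id in change.affected:
        hits.extend(index[req_id].artifacts)
    return sorted(set(hits))

reqs = [
    Requirement("R1", "Encrypt data at rest", ["crypto.py", "threat_model.md"]),
    Requirement("R2", "Audit all API calls", ["audit.py", "threat_model.md"]),
]
cr = ChangeRequest("CR-7", "Customer moved to EU region", affected=["R1", "R2"])
print(impacted_artifacts(cr, reqs))  # → ['audit.py', 'crypto.py', 'threat_model.md']
```

In a distributed GSD setting this index would live in a shared service rather than in memory, which is where the cloud-based framework of contributions 3 and 5 comes in.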