
    A Literature Survey on Resource Management Techniques, Issues and Challenges in Cloud Computing

    Cloud computing is a large-scale distributed computing paradigm that provides on-demand services to clients. Cloud clients use web browsers, mobile apps, thin clients, or terminal emulators to request and control their cloud resources at any time and from anywhere through the network. As more companies shift their data to the cloud and more people become aware of the advantages of storing data in the cloud, the growing number of cloud computing infrastructures and the large amounts of data involved make management increasingly complex for cloud providers. We surveyed the state-of-the-art resource management techniques for IaaS (Infrastructure as a Service) in cloud computing. We then put forward the major issues in the deployment of cloud infrastructure that must be addressed to avoid poor service delivery in cloud computing.
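    The survey itself contains no code; as a hedged illustration of one classic resource-management technique in this space, the sketch below implements first-fit placement of VM requests onto physical hosts. All names (Host, VMRequest, first_fit) are invented for illustration, not drawn from the surveyed systems.

```python
# Illustrative first-fit placement of VM requests onto physical hosts.
# All names here are hypothetical examples, not an API from the survey.
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    free_cpus: int
    free_ram_gb: int

@dataclass
class VMRequest:
    name: str
    cpus: int
    ram_gb: int

def first_fit(hosts: list[Host], request: VMRequest) -> Host | None:
    """Return the first host with enough spare capacity, or None."""
    for host in hosts:
        if host.free_cpus >= request.cpus and host.free_ram_gb >= request.ram_gb:
            host.free_cpus -= request.cpus    # reserve the resources
            host.free_ram_gb -= request.ram_gb
            return host
    return None  # request must wait or be rejected

hosts = [Host("h1", 4, 16), Host("h2", 8, 32)]
placed = first_fit(hosts, VMRequest("web-vm", 6, 8))
print(placed.name if placed else "no capacity")  # -> h2
```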

    Strategically Addressing the Latest Challenges of Workplace Mobility to Meet the Increasing Mobile Usage Demands

    During this post-PC era, many organizations are embracing IT consumerization, or Bring-Your-Own-Device (BYOD), in the workplace. BYOD is a strategy that enables employees to use their personally owned mobile devices, such as smartphones, tablets, laptops, and netbooks, to connect to the corporate network and access enterprise data. It is estimated that employees will bring two to four Internet-capable devices to work for personal and professional activities. From increased employee satisfaction and productivity to lower IT equipment and operational expenditures, companies have recognized that mobile devices are essential to their success. However, many organizations face significant challenges from the explosion of mobile devices in use today and from provisioning the supporting infrastructure, given the unprecedented demands on wireless and network infrastructures. For example, not only is the number of wirelessly connected devices growing, but so is the bandwidth consumed on enterprise networks, driven further by increased use of video and enterprise applications. Managing mobility and storage while securing corporate assets has become difficult for IT professionals, as many organizations underestimate the security and privacy risks of using wireless devices to access organizational resources and data. Therefore, to address the needs and requirements of a new mobile workforce, organizations must involve key members from Information Technology (IT), Human Resources (HR), and the various business units to evaluate the existing and emerging issues and risks posed by BYOD. A mobile strategy should then be developed with the enterprise objectives in mind, to ensure it aligns with the overall organizational strategy. Various solutions are available to address an organization's needs and demands, such as Distributed Intelligence Architecture, network optimization, monitoring tools, unified management and security platforms, and other security measures. By implementing a suitable mobile strategy, organizations can ensure that their enterprise network and wireless architecture is designed for high scalability, performance, and reliability. They must also evaluate their existing policies and procedures to ensure appropriate security and privacy measures are in place to address the increasing mobile usage demands and potential liability risks. Taking these factors into consideration, our team analyzed the current BYOD issues for Educational Testing Service (ETS), a non-profit organization based in Princeton, New Jersey. Our findings revealed a few major technical concerns relating to inadequate network and wireless infrastructure and the lack of a unified management and security platform. The team therefore recommended that ETS implement Distributed Intelligence Architecture, network optimization, and Enterprise Mobility Management (EMM) to address and resolve its current issues and risks. In conclusion, companies are beginning to seize this transition in order to become more competitive and productive in the workplace; however, the unprecedented demands on the corporate network and the risks to data security are critical aspects that must be evaluated on an ongoing basis.
With this analysis, organizations can review, evaluate, and implement the proposed solutions and best practices to address the most common BYOD-related issues companies face today. However, organizations should continually research the latest technologies available and implement solutions that specifically address their own issues.

    Practical Implementation of the Virtual Organization Cluster Model

    Virtualization has great potential in the realm of scientific computing because of its inherent advantages with regard to environment customization and isolation. Virtualization technology is not without its downsides, most notably increased computational overhead. This thesis introduces the operating mechanisms of grid technologies in general, and the Open Science Grid in particular, including a discussion of general organization and specific software implementation. A model for utilizing virtualization resources with separate administrative domains for the virtual machines (VMs) and the physical resources is then presented. Two well-known virtual machine monitors, Xen and the Kernel-based Virtual Machine (KVM), are introduced and a performance analysis conducted. The High-Performance Computing Challenge (HPCC) benchmark suite is used in conjunction with independent High-Performance Linpack (HPL) trials to analyze specific performance issues. Xen was found to introduce much lower performance overhead than KVM; however, KVM retains advantages with regard to ease of deployment, both of the VMM itself and of the VM images. KVM's snapshot mode is of special interest, as it allows multiple VMs to be instantiated from a single image located on a network store. With virtualization overhead shown to be acceptable for high-throughput computing tasks, the Virtual Organization Cluster (VOC) Model was implemented as a prototype. Dynamic scaling and multi-site scheduling extensions were also successfully implemented using this prototype. It is also shown that traditional overlay networks have scaling issues and that a new approach to wide-area scheduling is needed. The use of XMPP messaging and the Google App Engine service to implement a virtual machine monitoring system is presented, with detailed discussion of the relevant sections of the XMPP protocol and libraries. XMPP is found to be a good choice for sending status information due to its inherent advantages in a bandwidth-limited NAT environment. Thus, it is concluded that the VOC Model is a practical way to implement virtualization of high-throughput computing tasks. Smaller VOCs may take advantage of traditional overlay networks, whereas larger VOCs need an alternative approach to scheduling.
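    As a hedged sketch of the snapshot-mode deployment pattern the thesis highlights (not the VOC prototype's actual code), the fragment below boots several transient KVM guests from one shared image using QEMU's -snapshot flag, which redirects guest writes to temporary files so the shared image is never modified. The image path and resource sizes are illustrative assumptions.

```python
# Sketch: boot several transient KVM guests from a single image on a
# network store, using QEMU's -snapshot flag so guest writes go to
# temporary files and the shared image stays read-only.
import subprocess

SHARED_IMAGE = "/mnt/netstore/voc-worker.qcow2"  # hypothetical NFS path

def launch_snapshot_vm(index: int) -> subprocess.Popen:
    """Start one KVM guest in snapshot mode; returns the QEMU process."""
    cmd = [
        "qemu-system-x86_64",
        "-enable-kvm",          # use the KVM hypervisor
        "-m", "2048",           # 2 GiB of RAM per guest
        "-smp", "2",            # 2 virtual CPUs
        "-snapshot",            # discard writes; never touch the image
        "-drive", f"file={SHARED_IMAGE},format=qcow2",
        "-display", "none",
        "-name", f"voc-node-{index}",
    ]
    return subprocess.Popen(cmd)

# Instantiate three compute nodes from the same shared image.
vms = [launch_snapshot_vm(i) for i in range(3)]
```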

    Analysis of security in cloud platforms using OpenStack as a case study

    In the last few years, cloud computing has grown from a promising business concept into one of the fastest growing segments of the IT industry. Big companies like Amazon, Google, and Microsoft expand their markets by adopting cloud computing systems to enhance the services they provide to large numbers of users. However, security and privacy issues present a strong barrier for users to adopt the Cloud. This research investigates the security features and issues of cloud platforms, using OpenStack as a case study. The goal was to identify security weaknesses in terms of Authentication and Identity Management (IAM) and Data Management. Based on the findings, specific recommendations on security standards and management models are proffered to address these problems. These recommendations, if implemented, will build trust in cloud computing systems, which in turn would encourage more companies to adopt cloud computing as a means of providing better IT services.
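    As a hedged illustration of the Keystone-based authentication layer such an analysis examines, the sketch below obtains a scoped token with the keystoneauth1 library (OpenStack's real client-side auth library); the endpoint URL and credentials are placeholder assumptions.

```python
# Sketch: authenticate against OpenStack's identity service (Keystone)
# and obtain a scoped token using the keystoneauth1 library.
# The auth URL and credentials below are placeholder assumptions.
from keystoneauth1.identity import v3
from keystoneauth1 import session

auth = v3.Password(
    auth_url="https://keystone.example.org:5000/v3",  # hypothetical endpoint
    username="demo",
    password="secret",
    project_name="demo-project",
    user_domain_id="default",
    project_domain_id="default",
)

sess = session.Session(auth=auth)   # handles token fetch and refresh
token = sess.get_token()            # scoped token for subsequent API calls
print("Got Keystone token:", token[:8], "...")
```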

    Application-based authentication of inter-VM traffic in a Cloud environment

    Cloud Computing (CC) is an innovative computing model in which resources are provided as a service over the Internet, on an as-needed basis. It is a large-scale distributed computing paradigm driven by economies of scale, in which a pool of abstracted, virtualized, dynamically scalable, managed computing power, storage, platforms, and services is delivered on demand to external customers over the Internet. Since clouds are typically enabled by virtualization and share a common attribute, namely the pooled allocation of resources, applications, and even OSs, adequate safeguards and security measures are essential. In fact, virtualization creates new targets for intrusion due to the complexity of access and the difficulty of monitoring all interconnection points between systems, applications, and data sets. This raises many questions about the appropriate infrastructure, processes, and strategy for detecting and responding to intrusion in a Cloud environment. Without strict controls in place within the Cloud, guests could violate and bypass security policies, intercept unauthorized client data, and initiate or become the target of security attacks. This article shines a light on the security issues within Cloud Computing, especially the visibility of inter-VM traffic. In addition, the paper proposes an Application-Based Security (ABS) approach to enforce application-based authentication between VMs through various security mechanisms, filtering, structures, and policies.
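    The paper's ABS mechanisms are not reproduced here; as a hedged, generic sketch of application-level authentication between two VMs, the fragment below tags each message with an HMAC over a pre-shared key so the receiving VM can verify which application sent it. Key provisioning and the message format are illustrative assumptions.

```python
# Generic sketch of application-level message authentication between VMs:
# each message carries an HMAC-SHA256 tag over a pre-shared key, so the
# receiver can verify that it came from an authorized application.
# This illustrates the general idea, not the paper's ABS design.
import hmac
import hashlib

SHARED_KEY = b"pre-shared-app-key"  # hypothetical; provisioned out of band

def sign(message: bytes) -> bytes:
    """Tag a message so the peer VM can authenticate its origin."""
    return hmac.new(SHARED_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    """Constant-time check of the received tag."""
    expected = hmac.new(SHARED_KEY, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

# Sender VM side:
msg = b"GET /records?id=42"
packet = (msg, sign(msg))

# Receiver VM side: drop inter-VM traffic whose tag does not verify.
assert verify(*packet)
```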

    Understanding and Leveraging Virtualization Technology in Commodity Computing Systems

    Commodity computing platforms are imperfect, requiring various enhancements for performance and security purposes. In the past decade, virtualization technology has emerged as a promising trend for commodity computing platforms, ushering in many opportunities to optimize the allocation of hardware resources. However, many abstractions offered by virtualization not only make enhancements more challenging but also complicate the proper understanding of virtualized systems. The current understanding and analysis of these abstractions are far from satisfactory. This dissertation aims to tackle this problem from a holistic view by systematically studying system behaviors, focusing on the performance implications and security vulnerabilities of a virtualized system. We start with the first abstraction, intensive memory multiplexing for I/O of Virtual Machines (VMs), and present a new technique, called Batmem, that effectively reduces the memory multiplexing overhead of VMs and emulated devices by optimizing the operations of conventional emulated Memory-Mapped I/O in hypervisors. We then analyze another abstraction, the nested file system, and attempt to both quantify and understand the crucial aspects of its performance in a variety of settings. Our investigation demonstrates that the choice of file system at both the guest and hypervisor levels has a significant impact on I/O performance. Finally, leveraging utilities for managing VM disk images, we present a new patch management framework, called Shadow Patching, to achieve effective software updates. This framework allows system administrators to take the offline patching approach while retaining most of the benefits of live patching, using commonly available virtualization techniques. To demonstrate the effectiveness of the approach, we conduct a series of experiments applying a wide variety of software patches. Our results show that the framework incurs only small overhead in running systems but can significantly reduce the maintenance window.
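    As a hedged sketch of the offline-patching idea behind Shadow Patching (not the framework's actual code), the fragment below clones a VM disk image as a copy-on-write overlay with qemu-img, so patches can be applied and validated offline before the patched image replaces the running one; all paths and the choice of virt-customize are illustrative assumptions.

```python
# Sketch of offline patching in the spirit of Shadow Patching: create a
# copy-on-write overlay of the production disk image, patch the clone
# offline, then swap it in during a short maintenance window.
import subprocess

BASE = "/var/lib/vms/prod.qcow2"           # running VM's image (hypothetical)
SHADOW = "/var/lib/vms/prod-shadow.qcow2"  # offline clone to be patched

# 1. Clone: an overlay that records only the changes made while patching.
subprocess.run(
    ["qemu-img", "create", "-f", "qcow2", "-b", BASE, "-F", "qcow2", SHADOW],
    check=True,
)

# 2. Patch the clone offline, e.g. with libguestfs' virt-customize,
#    which updates installed packages inside the image.
subprocess.run(["virt-customize", "-a", SHADOW, "--update"], check=True)

# 3. After validation, the patched overlay (or a committed copy)
#    replaces the original image at the VM's next restart.
```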

    VISOR: virtual machine images management service for cloud infrastructures

    Cloud Computing is a relatively novel paradigm that aims to fulfill the computing-as-a-utility dream. It emerged to make computing resources (such as servers, storage, and networks) available as a service, on demand, accessible through common Internet protocols. Through cloud offerings, users pay only for the amount of resources they need and for the time they use them. Virtualization is the cloud's key technology, acting upon virtual machine images to deliver fully functional virtual machine instances. Virtual machine images therefore play an important role in Cloud Computing, and their efficient management becomes a key concern that should be carefully addressed. To meet this requirement, most cloud offerings provide their own image repository, where images are stored and from which they are retrieved in order to instantiate new virtual machines. However, the rise of Cloud Computing has brought new problems in managing large collections of images. Existing image repositories are not able to efficiently manage, store, and catalogue virtual machine images from other clouds through the same centralized service and repository. This becomes especially important when considering the management of multiple heterogeneous cloud offerings. In fact, despite the hype around Cloud Computing, barriers to its widespread adoption remain; among them, cloud interoperability is one of the most notable issues. Interoperability limitations arise from the fact that current cloud offerings provide proprietary interfaces, and their services are tied to their own requirements. Therefore, when dealing with multiple heterogeneous clouds, users face hard-to-manage integration and compatibility issues. The management and delivery of virtual machine images across different clouds is an example of such interoperability constraints. This dissertation presents VISOR, a cloud-agnostic virtual machine image management service and repository. Our work on VISOR aims to provide a service designed not to fit a specific cloud offering but rather to overcome sharing and interoperability limitations among different clouds. With VISOR, the management of cloud interoperability can be seamlessly abstracted from the underlying procedural details, giving users the ability to manage and expose virtual machine images across heterogeneous clouds through the same generic and centralized repository and management service. VISOR is open source software with a community-driven development process, so it can be freely customized and further improved by anyone. Tests conducted to evaluate its performance and resource usage have shown VISOR to be a stable, high-performance service, even when compared with services already in production. Lastly, targeting clouds as the main audience is not a limitation for other use cases: virtualization and virtual machine images are not exclusively linked to cloud environments, and given the service's agnostic design, it can be adapted to other usage scenarios as well.
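    VISOR's actual interface and schema are not reproduced here; as a hedged sketch of what a cloud-agnostic image registry's core operations might look like, the fragment below catalogues image metadata in one index regardless of which cloud an image targets. The class and field names are invented for illustration.

```python
# Minimal sketch of a cloud-agnostic VM image registry: metadata is kept
# in one catalogue regardless of the target cloud. The class and fields
# are illustrative, not VISOR's real schema.
import uuid

class ImageRegistry:
    def __init__(self) -> None:
        self._catalogue: dict[str, dict] = {}

    def register(self, name: str, disk_format: str,
                 store_url: str, target_cloud: str) -> str:
        """Catalogue an image; returns its repository-wide id."""
        image_id = str(uuid.uuid4())
        self._catalogue[image_id] = {
            "name": name,
            "disk_format": disk_format,    # e.g. qcow2, vmdk, raw
            "store_url": store_url,        # where the image bytes live
            "target_cloud": target_cloud,  # heterogeneous clouds, one index
        }
        return image_id

    def lookup(self, image_id: str) -> dict:
        return self._catalogue[image_id]

registry = ImageRegistry()
iid = registry.register("ubuntu-22.04", "qcow2",
                        "http://store.example/img/ubuntu.qcow2", "openstack")
print(registry.lookup(iid)["name"])  # -> ubuntu-22.04
```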

    In the clouds or out of them, that is the question (Nas nuvens ou fora delas, eis a questão)

    Master's dissertation in Information Systems. The purpose of this dissertation is to contribute towards a better understanding of the decision to go or not to go for cloud solutions when an organization is confronted with the need to create or enlarge an information system. This is done by identifying the technical and economic factors that must be taken into account when planning a new solution and by developing a framework to help decision makers. The following aspects are considered:
    • Definition of a generic reference model for information system functionalities.
    • Identification of some basic metrics characterizing information system performance and costs.
    • Analysis and characterization of on-premises information systems: architectures, cost elements, and performance issues.
    • Analysis and characterization of cloud information systems: topologies, cost structures, and performance issues.
    • Establishment of a comparison framework for cloud versus on-premises solutions as possible instances of information systems.
    • Use cases comparing cloud and on-premises solutions.
    • Production of guidelines (focused on the public cloud case).
    To illustrate the procedure, two business cases are used, each with two approaches: one dedicated to IT Professionals (technical approach), the other to Managers/Decision Makers (economic approach).
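    As a hedged illustration of the kind of economic comparison such a framework supports, the sketch below contrasts a simple amortized on-premises cost with pay-per-use cloud cost over a planning horizon; every figure is an invented placeholder, not data from the dissertation.

```python
# Toy cost comparison between on-premises and cloud hosting over a
# planning horizon. All numbers are invented placeholders; a real
# framework would add energy, staffing, egress, discounts, etc.

def on_premises_cost(capex: float, lifetime_months: int,
                     opex_per_month: float, months: int) -> float:
    """Amortized hardware cost plus fixed operating cost."""
    return (capex / lifetime_months) * months + opex_per_month * months

def cloud_cost(hourly_rate: float, hours_per_month: float,
               months: int) -> float:
    """Pure pay-per-use: rate times actual usage."""
    return hourly_rate * hours_per_month * months

horizon = 36  # months
onprem = on_premises_cost(capex=60_000, lifetime_months=60,
                          opex_per_month=800, months=horizon)
cloud = cloud_cost(hourly_rate=0.45, hours_per_month=730, months=horizon)

print(f"on-premises over {horizon} months: ${onprem:,.0f}")
print(f"cloud over {horizon} months:       ${cloud:,.0f}")
# Which option is cheaper depends heavily on utilization: mostly-idle
# systems favour the cloud's pay-per-use model.
```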

    Context-aware information delivery for mobile construction workers

    The potential of mobile Information Technology (IT) applications to support the information needs of mobile construction workers has long been understood. However, existing mobile IT applications in the construction industry have well-documented limitations, including their inability to respond to changing user context, lack of semantic awareness, and poor integration with the desktop-based infrastructure. This research argues that awareness of the user context (such as user role, preferences, task-at-hand, and location) can enhance mobile IT applications in the construction industry by providing a mechanism to deliver highly specific information to mobile workers through intelligent interpretation of their context. Against this background, the aim of this research was to investigate the applicability of context-aware information delivery (CAID) technologies in the construction industry. The research methodology consisted of several methods. A literature review on context-aware and enabling technologies was undertaken and a conceptual framework developed, addressing the key issues of context capture, context inference, and context integration. To illustrate the application of CAID in realistic construction situations, five futuristic deployment scenarios were developed and analysed with several industry and technology experts. From the analysis, a common set of user needs was drawn up. These needs were subsequently translated into system design goals, which acted as a key input to the design and evaluation of a prototype system implemented on a Pocket-PC platform. The main achievements of this research include the development of a CAID framework for mobile construction workers, the demonstration of CAID concepts in realistic construction scenarios, the analysis of the construction industry's needs for CAID, and the implementation and validation of a prototype demonstrating the CAID concepts. The research concludes that CAID has the potential to significantly improve support for mobile construction workers, and identifies the requirements for its effective deployment in the construction project delivery process. However, the industry needs to address various identified barriers to realise the full potential of CAID.
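    As a hedged sketch of the context-inference step in such a CAID framework (not the thesis's Pocket-PC implementation), the fragment below maps a worker's role, task, and location to the documents that should be delivered; the rules, roles, and file names are invented for illustration.

```python
# Toy context-inference rule set: given a mobile worker's context
# (role, current task, location), decide which documents to deliver.
# Rules and names are invented; a real CAID system would infer context
# from sensors, schedules, and project models.
from dataclasses import dataclass

@dataclass
class Context:
    role: str
    task: str
    location: str

RULES = [
    # (predicate over the context, documents to deliver)
    (lambda c: c.role == "electrician" and c.task == "wiring",
     ["wiring-diagram.pdf", "circuit-spec.pdf"]),
    (lambda c: c.location == "zone-3",
     ["zone-3-safety-notice.pdf"]),
]

def deliver(context: Context) -> list[str]:
    """Union of the documents whose rule matches this context."""
    docs: list[str] = []
    for predicate, payload in RULES:
        if predicate(context):
            docs.extend(payload)
    return docs

print(deliver(Context("electrician", "wiring", "zone-3")))
# -> ['wiring-diagram.pdf', 'circuit-spec.pdf', 'zone-3-safety-notice.pdf']
```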

    Big data reference architecture for Industry 4.0: including economic and ethical implications

    The rapid progress in Industry 4.0 is achieved through innovations in several fields, e.g., manufacturing, big data, and artificial intelligence. The thesis motivates the need for a Big Data architecture to apply artificial intelligence in Industry 4.0 and presents a cognitive architecture for artificial intelligence, CAAI, as a possible solution, which is especially suited to the challenges of small and medium-sized enterprises. The work examines the economic and ethical implications of those technologies and highlights the benefits, but also the challenges, for countries, companies, and individual workers. The "Industry 4.0 Questionnaire for SMEs" was conducted to gain insights into small and medium-sized companies' requirements and needs. The new CAAI architecture presents a software design blueprint and provides a set of open-source building blocks to support companies during implementation. Different use cases demonstrate the applicability of the architecture, and the subsequent evaluation verifies its functionality.
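    As a hedged sketch of the building-block idea behind such a modular architecture (not CAAI's actual open-source modules), the fragment below composes interchangeable processing blocks into a small sensor-analytics pipeline; the block names and data are invented.

```python
# Sketch of composable pipeline building blocks in the spirit of a
# modular Big Data architecture. Block names and data are invented
# illustrations, not CAAI's actual modules.
from typing import Callable, Iterable

Block = Callable[[Iterable[float]], Iterable[float]]

def clean(readings: Iterable[float]) -> list[float]:
    """Drop obviously faulty sensor readings."""
    return [r for r in readings if 0.0 <= r <= 200.0]

def smooth(readings: Iterable[float]) -> list[float]:
    """Simple two-point moving average."""
    rs = list(readings)
    return [(a + b) / 2 for a, b in zip(rs, rs[1:])]

def pipeline(blocks: list[Block], data: Iterable[float]) -> Iterable[float]:
    """Chain interchangeable blocks, output of one feeding the next."""
    for block in blocks:
        data = block(data)
    return data

sensor = [98.5, 99.1, -1.0, 101.3, 350.0, 100.2]
print(list(pipeline([clean, smooth], sensor)))
# -> [98.8, 100.2, 100.75]
```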