
    Multi-tenant hybrid cloud architecture

    This paper examines the challenges associated with the multi-tenant hybrid cloud architecture and describes how this architectural approach was applied in two software development projects. The motivation for using this architectural approach is to allow new features to be developed on top of monolithic legacy systems that are still in production use, but without using legacy technologies. The architectural approach considers these legacy systems as master systems that can be extended with multi-tenant cloud-based add-on applications. In general, legacy systems are run in customer-operated environments, whereas add-on applications can be deployed to cloud platforms. It is thus imperative to have a means of connectivity between these environments over the internet. The technology stack used within the scope of this thesis is limited to the offering of the .NET Core ecosystem and Microsoft Azure.

In the first part of the thesis work, a literature review was carried out. The review focused on the challenges associated with the architectural approach, and as a result, a list of challenges was formed. This list was utilized in the software development projects of the second part of the thesis. Very few high-quality papers were available that focus specifically on the multi-tenant hybrid cloud architecture, so source material for the review was searched separately for multi-tenant and for hybrid cloud design challenges. This limitation is noted in the evaluation of the review.

In the second part of the thesis work, the architectural approach was applied in two software development projects. Goals were set for the architectural approach: the add-on applications should be developed with modern technology stacks; their delivery should be automated; their subscription should be straightforward for customer organizations; and they should leverage multi-tenant resource sharing. In the first project, a data quality management tool was developed on top of a legacy dealership management system. Due to database connectivity challenges, the confidentiality of customer data, and authentication requirements, the implemented solution does not fully utilize the architectural approach, as hosting the add-on application in the customer environment was the most reasonable solution. Despite this, the add-on application was developed with a modern technology stack and its delivery is automated. The subscription process does involve certain manual steps, and if the customer infrastructure changes over time, these steps must be repeated by the developers. This decreases the scalability of the overall delivery model.

In the second project, a PDA application was developed on top of a legacy vehicle maintenance tire hotel system. The final implementation fully utilizes the architectural approach. Support for multi-tenancy was implemented using ASP.NET Core dependency injection and the Finbuckle.MultiTenant library. Azure Relay Hybrid Connections were used for hybrid cloud connectivity between the add-on application and the master system. The delivery model shares the same challenges regarding subscription and customer infrastructure changes as the delivery model of the data quality management tool. However, the manual steps associated with these challenges must be performed only once per customer, not once per customer per application.

In addition, the delivery model could be improved to support customer self-service governance, enabling the delegation of customer environment installations to the customers themselves. Going further, a customer environment installation could potentially cover an entire product family: for example, instead of providing access only for the PDA application, the installation could provide access for all add-on applications in the vehicle maintenance family. This would make customer environment management easier and the development of new add-on applications faster.
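
As a concrete illustration of the multi-tenancy mechanism summarized above, the following Python sketch shows per-request tenant resolution with per-tenant configuration, loosely analogous to what the thesis implemented with ASP.NET Core dependency injection and the Finbuckle.MultiTenant library. All names here (Tenant, TenantStore, the example hosts and connection strings) are hypothetical and not taken from the thesis; in the thesis setting, the per-tenant configuration would include the Azure Relay Hybrid Connection used to reach that customer's on-premises master system.

    # Minimal sketch of per-request tenant resolution for a multi-tenant add-on
    # application. Illustrative names only; the thesis used ASP.NET Core
    # dependency injection and Finbuckle.MultiTenant for this purpose.
    from dataclasses import dataclass

    @dataclass
    class Tenant:
        identifier: str               # e.g. subdomain used by the customer
        relay_connection: str         # per-tenant hybrid connection to the master system
        database_schema: str          # per-tenant data isolation

    class TenantStore:
        """In-memory tenant registry; a real system would back this with a database."""
        def __init__(self, tenants):
            self._by_host = {t.identifier: t for t in tenants}

        def resolve(self, host_header: str) -> Tenant:
            # Strategy: the first label of the host name identifies the tenant,
            # e.g. 'dealer-a.addons.example.com' -> 'dealer-a'.
            subdomain = host_header.split(".")[0]
            tenant = self._by_host.get(subdomain)
            if tenant is None:
                raise LookupError(f"Unknown tenant for host {host_header!r}")
            return tenant

    def handle_request(store: TenantStore, host_header: str, payload: dict) -> dict:
        # Resolve the tenant once per request and use its configuration for the
        # rest of the pipeline (data access, connectivity to the master system).
        tenant = store.resolve(host_header)
        return {
            "tenant": tenant.identifier,
            "schema": tenant.database_schema,
            "relay": tenant.relay_connection,
            "echo": payload,
        }

    if __name__ == "__main__":
        store = TenantStore([
            Tenant("dealer-a", "sb://relay-a.example/", "dealer_a"),
            Tenant("dealer-b", "sb://relay-b.example/", "dealer_b"),
        ])
        print(handle_request(store, "dealer-a.addons.example.com", {"ping": True}))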

    Big Data and Large-scale Data Analytics: Efficiency of Sustainable Scalability and Security of Centralized Clouds and Edge Deployment Architectures

    One of the significant shifts in next-generation computing technologies will certainly be in the development of Big Data (BD) deployment architectures. Apache Hadoop, the BD landmark, has evolved into a widely deployed BD operating system. Its new features include a federation structure and many associated frameworks, which provide Hadoop 3.x with the maturity to serve different markets. This dissertation addresses two leading issues involved in exploiting the BD and large-scale data analytics realm using the Hadoop platform: (i) scalability, which directly affects system performance and overall throughput, addressed using portable Docker containers; and (ii) security, which spreads the adoption of data protection practices among practitioners, addressed using access controls. The main contributions of this thesis are an Enhanced MapReduce Environment (EME), an OPportunistic and Elastic Resource Allocation (OPERA) scheduler, a BD Federation Access Broker (BDFAB), and a Secure Intelligent Transportation System (SITS) with a multi-tier architecture for data streaming to the cloud.
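
To give a rough feel for the opportunistic, elastic resource allocation idea behind a scheduler such as OPERA, the Python sketch below admits guaranteed tasks against reserved capacity and lets opportunistic tasks use leftover headroom until they are preempted. This is an assumption-laden toy model, not the dissertation's actual scheduler; all names (Node, Task identifiers, schedule) and the slot abstraction are invented for illustration.

    # Toy sketch of opportunistic container placement: guaranteed tasks reserve
    # capacity, opportunistic tasks use leftover headroom and can be preempted.
    from dataclasses import dataclass, field

    @dataclass
    class Node:
        name: str
        capacity: int                 # abstract "slots" (e.g. container vcores)
        guaranteed: int = 0           # slots held by guaranteed tasks
        opportunistic: list = field(default_factory=list)  # preemptible task ids

        def free(self) -> int:
            return self.capacity - self.guaranteed - len(self.opportunistic)

    def schedule(nodes, task_id: str, guaranteed: bool) -> str:
        # Prefer the node with the most free slots (simple best-fit heuristic).
        node = max(nodes, key=lambda n: n.free())
        if node.free() > 0:
            pass
        elif guaranteed and node.opportunistic:
            # Elasticity: reclaim a slot by preempting an opportunistic task.
            evicted = node.opportunistic.pop()
            print(f"preempting {evicted} on {node.name}")
        else:
            raise RuntimeError("cluster saturated")
        if guaranteed:
            node.guaranteed += 1
        else:
            node.opportunistic.append(task_id)
        return node.name

    if __name__ == "__main__":
        cluster = [Node("n1", capacity=2), Node("n2", capacity=2)]
        for t in ["g1", "g2", "o1", "o2"]:
            print(t, "->", schedule(cluster, t, guaranteed=t.startswith("g")))
        print("g3", "->", schedule(cluster, "g3", guaranteed=True))  # triggers preemption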

    Security in Distributed, Grid, Mobile, and Pervasive Computing

    This book addresses the increasing demand to guarantee privacy, integrity, and availability of resources in networks and distributed systems. It first reviews security issues and challenges in content distribution networks, describes key agreement protocols based on the Diffie-Hellman key exchange and key management protocols for complex distributed systems like the Internet, and discusses security design patterns for distributed systems. The next section focuses on security in mobile computing and wireless networks. After a section on grid computing security, the book presents an overview of security solutions for pervasive healthcare systems and surveys wireless sensor network security.
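
To make the key agreement material mentioned above concrete, here is a minimal Python sketch of a textbook, unauthenticated Diffie-Hellman exchange. The modulus below is only a demonstration prime; real protocols use standardized large groups or elliptic curves (e.g. the RFC 3526 MODP groups or X25519) and add authentication to resist man-in-the-middle attacks.

    # Textbook Diffie-Hellman key agreement over a toy group. For real use,
    # employ a vetted library and standardized parameters; this sketch only
    # illustrates the mathematics.
    import hashlib
    import secrets

    # Demonstration parameters: a Mersenne prime modulus, far too small and
    # not a safe prime, chosen only so the example runs instantly.
    p = 2**127 - 1
    g = 3

    def keypair():
        # Private exponent a, public value A = g^a mod p.
        a = secrets.randbelow(p - 2) + 2
        return a, pow(g, a, p)

    def shared_secret(own_private, peer_public):
        # Both sides compute the same value g^(ab) mod p, then hash it into a key.
        s = pow(peer_public, own_private, p)
        return hashlib.sha256(s.to_bytes((p.bit_length() + 7) // 8, "big")).hexdigest()

    if __name__ == "__main__":
        alice_priv, alice_pub = keypair()
        bob_priv, bob_pub = keypair()
        # Each party combines its own private key with the other's public value.
        assert shared_secret(alice_priv, bob_pub) == shared_secret(bob_priv, alice_pub)
        print("shared key:", shared_secret(alice_priv, bob_pub))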

    A Design Theory for Digital Platforms Supporting Online Communities: A Multiple Case Study

    This research proposes and validates a design theory for digital platforms that support online communities (DPsOC). It addresses ways in which digital platforms can effectively support social interactions in online communities. Drawing upon prior literature on IS design theory, online communities, and platforms, we derive an initial set of propositions for designing effective DPsOC. Our overarching proposition is that three components of digital platform architecture (core, interface, and complements) should collectively support the mix of the three distinct types of social interaction structures of online communities (information sharing, collaboration, and collective action). We validate the initial propositions and generate additional insights by conducting an in-depth analysis of a European digital platform for elderly care assistance. We further validate the propositions by analyzing three widely used digital platforms, Twitter, Wikipedia, and LiquidFeedback, and we derive additional propositions and insights that can guide DPsOC design. We discuss the implications of this research for both research and practice.

    System z and z/OS unique Characteristics

    Many people still associate mainframes with obsolete technology. Surprisingly, the opposite is true. Mainframes feature many hardware, software, and system integration technologies that are either unavailable, or available only in an elementary form, on other server platforms. On the other hand, we know of no advanced server features that are not available on mainframes. This paper lists some 40 advanced mainframe technologies. There is a short description of each item, together with a literature reference for more information.

    Future benefits and applications of intelligent on-board processing to VSAT services

    The trends and roles of VSAT services in the year 2010 time frame are examined based on an overall network and service model for that period. An estimate of the VSAT traffic is then made, and the service and general network requirements are identified. In order to accommodate these traffic needs, four satellite VSAT architectures are suggested, based on the use of fixed or scanning multibeam antennas in conjunction with IF switching or onboard regeneration and baseband processing. The performance of each of these architectures is assessed, and the key enabling technologies are identified.

    Applications of Context-Aware Systems in Enterprise Environments

    In bring-your-own-device (BYOD) and corporate-owned, personally enabled (COPE) scenarios, employees’ devices store both enterprise and personal data and have the ability to remotely access a secure enterprise network. While mobile devices enable users to access such resources in a pervasive manner, they also increase the risk of breaches of sensitive enterprise data, as users may access the resources under insecure circumstances. That is, access authorizations may depend on the context in which the resources are accessed. In both scenarios, it is vital that the security of accessible enterprise content is preserved. In this work, we explore the use of contextual information to influence access control decisions within context-aware systems and thereby ensure the security of sensitive enterprise data. We propose several context-aware systems that rely on a system of sensors to automatically adapt access to resources based on the security of users’ contexts. We investigate various types of mobile devices with varying embedded sensors and leverage these technologies to extract contextual information from the environment. As a direct consequence, the technologies utilized determine the types of contextual access control policies that the context-aware systems are able to support and enforce. Specifically, the work proposes the use of devices pervasive in enterprise environments, such as smartphones or WiFi access points, to authenticate user positional information within indoor environments as well as user identities.
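
As a schematic example of the kind of contextual policy such systems enforce, the Python sketch below grants access to an enterprise resource only when the request's sensed context, here the Wi-Fi access point the device is associated with and its estimated indoor position, satisfies the policy bound to that resource. All attribute names, identifiers, and thresholds are hypothetical and not the specific systems proposed in the work.

    # Sketch of a context-aware access decision: a request carries sensed context
    # (associated Wi-Fi BSSID, estimated indoor coordinates) and is checked
    # against a per-resource policy. All identifiers and thresholds are made up.
    from dataclasses import dataclass
    from math import dist

    @dataclass
    class Context:
        user: str
        wifi_bssid: str          # access point the device is associated with
        position: tuple          # (x, y) metres within the building's floor plan

    @dataclass
    class Policy:
        trusted_bssids: set      # enterprise-managed access points
        allowed_zone: tuple      # ((x, y), radius_m) region where access is permitted
        allowed_users: set

    def authorize(policy: Policy, ctx: Context) -> bool:
        centre, radius = policy.allowed_zone
        return (
            ctx.user in policy.allowed_users
            and ctx.wifi_bssid in policy.trusted_bssids   # on-premises network
            and dist(ctx.position, centre) <= radius      # inside the secure zone
        )

    if __name__ == "__main__":
        hr_records = Policy(
            trusted_bssids={"aa:bb:cc:dd:ee:01", "aa:bb:cc:dd:ee:02"},
            allowed_zone=((12.0, 30.0), 8.0),             # HR wing, 8 m radius
            allowed_users={"alice"},
        )
        office = Context("alice", "aa:bb:cc:dd:ee:01", (14.5, 28.0))
        cafe = Context("alice", "ff:ff:ff:ff:ff:ff", (200.0, 5.0))
        print(authorize(hr_records, office))   # True: trusted AP, inside the zone
        print(authorize(hr_records, cafe))     # False: untrusted AP, outside the zone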

    Proceedings of the NSSDC Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications

    The proceedings of the National Space Science Data Center Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications, held July 23 through 25, 1991, at the NASA/Goddard Space Flight Center, are presented. The program includes a keynote address, invited technical papers, and selected technical presentations to provide a broad forum for the discussion of a number of important issues in the field of mass storage systems. Topics include magnetic disk and tape technologies, optical disk and tape, software storage and file management systems, and experiences with the use of a large, distributed storage system. The technical presentations describe integrated mass storage systems that are expected to be available commercially. Also included is a series of presentations from Federal Government organizations and research institutions covering their mass storage requirements for the 1990s.

    Live-Migration in Cloud Computing Environment

    Global IP traffic has increased fivefold over the past five years and will continue increasing threefold over the next five years. Overall IP traffic will grow at a compound annual growth rate (CAGR) of nearly 3.9-fold from 2013 to 2018. Service providers are experiencing this exponential growth of IP traffic, which comes from the enormous increase in the number of devices and users connected to the internet, along with their demands for various resources and network services such as multimedia content distribution, security, and mobility. Service providers are therefore finding it difficult to introduce new revenue-generating services and to optimize and adapt their expensive infrastructures, data centers, wide-area networks, and enterprise networks (COMpuTIN, 2015). These networks continue to have serious known problems, such as agility, manageability, mobility, and time-to-application, that have not been successfully addressed so far. Thus, novel Network Function Virtualization (NFV) models and Software-Defined Networking (SDN) technologies have been proposed to address non-optimal capital and operational expenditures and the networks’ limitations (Lopez, 2014, Hakiri and Berthou, 2015).
In order to solve these issues, the European Telecommunications Standards Institute (ETSI) and other standards organizations are proposing new network architecture approaches. According to ETSI, Network Functions Virtualization is a powerful emerging technique with widespread applicability, aiming to transform the way network operators design networks by evolving standard IT virtualization technology to consolidate many network equipment types: high-volume servers, routers, switches, and storage (Xilouris et al., 2014). In this thesis, current Software-Defined Networking (SDN) and Network Function Virtualization (NFV) solutions were used to build a use case that can address increasing network traffic and networks exceeding their maximum capacity. To develop and evaluate the solution, the OpenStack cloud computing platform was installed in order to deploy, manage, and test a live-migration use case.
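
To illustrate the mechanism evaluated in the use case, the Python sketch below is a simplified, self-contained simulation of pre-copy live migration, not OpenStack's or the hypervisor's actual implementation: memory pages are copied iteratively while the virtual machine keeps running and dirtying pages, until the remaining dirty set is small enough for a brief stop-and-copy phase. Page counts, the dirty rate, and the thresholds are illustrative assumptions.

    # Simplified simulation of pre-copy live migration: iteratively transfer
    # dirty memory pages while the VM runs, then stop the VM and copy the
    # remainder. All parameters are illustrative.
    import random

    def live_migrate(total_pages=10_000, pages_per_round=4_000,
                     dirty_rate=0.05, stop_copy_threshold=500, max_rounds=30):
        dirty = set(range(total_pages))      # initially every page must be sent
        transferred = 0
        for round_no in range(1, max_rounds + 1):
            # Pre-copy phase: send a batch of currently dirty pages.
            batch = set(list(dirty)[:pages_per_round])
            dirty -= batch
            transferred += len(batch)
            # While copying, the still-running VM dirties a fraction of its pages.
            newly_dirty = {p for p in range(total_pages) if random.random() < dirty_rate}
            dirty |= newly_dirty
            print(f"round {round_no}: sent {len(batch)}, dirty now {len(dirty)}")
            if len(dirty) <= stop_copy_threshold:
                break
        # Stop-and-copy phase: pause the VM and transfer the final dirty pages.
        downtime_pages = len(dirty)
        transferred += downtime_pages
        print(f"stop-and-copy: {downtime_pages} pages during downtime, "
              f"{transferred} pages transferred in total")

    if __name__ == "__main__":
        random.seed(42)
        live_migrate()

The trade-off the simulation exposes is the same one a live-migration deployment must balance: a higher dirty rate or smaller per-round bandwidth lengthens the pre-copy phase and increases total data transferred, while a larger stop-copy threshold shortens migration time at the cost of longer VM downtime.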