MANAGING AND SECURING ENDPOINTS: A SOLUTION FOR A TELEWORK ENVIRONMENT
This project introduces a business problem in which a water utility company, known as H2O District, was forced to find and implement a solution that would enable its IT Department to effectively manage and secure endpoints in a telework environment. Typically, an endpoint is defined as any device that is physically connected to a network. For the purposes of this project, the endpoints the IT Department was concerned with consisted of Windows 10 PCs, laptops, and Apple iOS devices that employees use to access company resources while working outside the corporate network. To properly manage endpoints, the IT Department focused on carrying out its responsibilities for software deployments, software updates, operating system support, and remote support and troubleshooting. Regarding the security of its endpoints, the IT Department was concerned with ensuring endpoint compliance and providing adequate threat protection.
Ultimately, a decision was made to utilize various cloud services from Microsoft to assist the IT Department with carrying out its responsibilities in the new telework environment. The project analyzed the cloud technologies used, including Microsoft Azure Active Directory, Endpoint Manager, a Cloud Management Gateway, Intune, and Microsoft Defender for Endpoint, and examined some of the current on-premises infrastructure technology, such as Microsoft Endpoint Configuration Manager, Active Directory, VPN, and Group Policy. The project also documented the implementation steps for configuring the cloud services and onboarding the endpoints so that they are properly managed and secured.
The contributions of this project are: (i) showing how H2O District examined its current infrastructure, (ii) identifying shortcomings in its current technological solutions, (iii) developing an understanding of the IT Department's service level agreements, and (iv) ultimately creating a solution that allowed H2O District to carry out its core responsibilities in the new telework environment. The project proved successful upon implementation, and the IT Department gained significant benefits by migrating some of its workloads to the cloud. The project also reports on potential challenges the organization may face, including keeping up with the growing trend toward hybrid remote work, managing the flow of information, and establishing zero trust. The solution implemented in this project can serve as an example for IT Departments facing similar challenges; namely, effectively managing and securing their endpoints in a telework environment.
A highly-available and scalable microservice architecture for access management
Access management is a key aspect of providing secure services and applications in information technology. Ensuring secure access is particularly challenging in a cloud environment wherein resources are scaled dynamically. In fact, keeping track of dynamic cloud instances and administering access to them requires careful coordination and mechanisms to ensure reliable operation. PrivX is a commercial offering from SSH Communications Security Oyj that automatically scans and keeps track of cloud instances and manages access to them. PrivX is currently built on the microservices approach, wherein the application is structured as a collection of loosely coupled services. However, PrivX requires external modules with specific capabilities to ensure high availability. Moreover, complex scripts are required to monitor the whole system.
The goal of this thesis is to make PrivX highly available and scalable by using a container orchestration framework. To this end, we first conduct a detailed study of the most widely used container orchestration frameworks: Kubernetes, Docker Swarm, and Nomad. We then select Kubernetes based on an evaluation of the features relevant to the considered scenario. We package the individual components of PrivX, including its database, into Docker containers and deploy them on a Kubernetes cluster. We also build a prototype system to demonstrate how microservices can be managed on a Kubernetes cluster. Additionally, an autoscaling tool is created to scale specific services based on predefined rules. Finally, we evaluate the service recovery time for each of the services in PrivX, both in the RPM deployment model and in the prototype Kubernetes deployment model. We find no significant difference in service recovery time between the two models; however, Kubernetes ensured high availability of the services. We conclude that Kubernetes is the preferred mode for deploying PrivX, making it highly available and scalable.
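The abstract mentions an autoscaling tool driven by predefined rules but does not give the rule itself. As a point of reference, the standard Kubernetes Horizontal Pod Autoscaler computes desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric); the minimal sketch below implements that formula, with the function name and the lower bound of one replica as illustrative assumptions:

```python
import math

def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float) -> int:
    """Standard Kubernetes HPA scaling rule:
    desired = ceil(current * (current_metric / target_metric)),
    clamped here to a minimum of one replica (an assumption)."""
    if target_metric <= 0:
        raise ValueError("target metric must be positive")
    return max(1, math.ceil(current_replicas * current_metric / target_metric))

# E.g., 3 replicas at 90% average CPU with a 60% target scale up to 5.
print(desired_replicas(3, 90.0, 60.0))
```

A rule-based scaler like the one the thesis describes would evaluate such a function periodically per service and apply the result, e.g., by patching the Deployment's replica count.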
funcX: A Federated Function Serving Fabric for Science
Exploding data volumes and velocities, new computational methods and platforms, and ubiquitous connectivity demand new approaches to computation in the sciences. These new approaches must enable computation to be mobile, so that, for example, it can occur near data, be triggered by events (e.g., the arrival of new data), be offloaded to specialized accelerators, or run remotely where resources are available. They also require new design approaches in which monolithic applications can be decomposed into smaller components that may in turn be executed separately and on the most suitable resources. To address these needs we present funcX, a distributed function-as-a-service (FaaS) platform that enables flexible, scalable, and high-performance remote function execution. funcX's endpoint software can transform existing clouds, clusters, and supercomputers into function-serving systems, while funcX's cloud-hosted service provides transparent, secure, and reliable function execution across a federated ecosystem of endpoints. We motivate the need for funcX with several scientific case studies, present our prototype design and implementation, show optimizations that deliver throughput in excess of 1 million functions per second, and demonstrate, via experiments on two supercomputers, that funcX can scale to more than 130,000 concurrent workers.
Comment: Accepted to ACM Symposium on High-Performance Parallel and Distributed Computing (HPDC 2020). arXiv admin note: substantial text overlap with arXiv:1908.0490
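The register-then-invoke pattern that FaaS platforms such as funcX expose can be sketched locally. The class and method names below are illustrative assumptions, not the actual funcX SDK, and a local thread pool stands in for a remote endpoint:

```python
import uuid
from concurrent.futures import ThreadPoolExecutor

class MiniFunctionService:
    """Toy sketch of the FaaS pattern: register a function once,
    then invoke it by id; execution happens on a local pool here,
    whereas a real platform would route it to a remote endpoint."""

    def __init__(self, max_workers: int = 4):
        self._functions = {}          # function id -> callable
        self._pool = ThreadPoolExecutor(max_workers=max_workers)

    def register_function(self, func) -> str:
        func_id = str(uuid.uuid4())
        self._functions[func_id] = func
        return func_id

    def run(self, func_id: str, *args, **kwargs):
        # Returns a future, mirroring the asynchronous task handle
        # a real FaaS service would hand back.
        return self._pool.submit(self._functions[func_id], *args, **kwargs)

service = MiniFunctionService()
double_id = service.register_function(lambda x: 2 * x)
result = service.run(double_id, 21).result()  # -> 42
```

Decomposing a monolithic analysis into such independently registered functions is what lets each piece run on the most suitable resource, the design goal the abstract describes.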
Creation of a Cloud-Native Application: Building and operating applications that utilize the benefits of the cloud computing distribution approach
Dissertation presented as a partial requirement for obtaining a Master's degree in Information Management, specialization in Information Systems and Technologies Management.
VMware is a world-renowned company in the field of cloud infrastructure and digital workspace technology which supports organizations in digital transformations. VMware accelerates digital transformation for evolving IT environments by empowering clients to adopt a software-defined strategy for their business and information technology. Previously present in the private cloud segment, the company has recently focused on developing offerings related to the public cloud.
Comprehending how to devise cloud-compatible systems has become increasingly crucial in the present times. Cloud computing is rapidly evolving from a specialized technology favored by tech-savvy companies and startups to the cornerstone on which enterprise systems are constructed for future growth. To stay competitive in the current market, both big and small organizations are adopting cloud architectures and methodologies.
As a member of the technical pre-sales team, the main goal of my internship was the design, development, and deployment of a cloud-native application, and this is therefore the subject of my internship report. The application is intended to interface with an existing one and to demonstrate the possible uses of VMware's virtualization infrastructure and automation offerings. Since its official release, the application has been presented to various existing and prospective customers and at conferences. The purpose of this work is to provide a permanent record of my internship experience at VMware. Through this undertaking, I am able to reflect on the professional facets of my internship and the competencies I gained along the way. This work is a descriptive and theoretical reflection, methodologically oriented towards the development of a cloud-native application in the context of my internship in the system engineering team at VMware. The scientific content of the report focuses on the benefits, not limited to scalability and maintainability, of moving from a monolithic architecture to microservices.
Hybrid clouds for data-Intensive, 5G-Enabled IoT applications: an overview, key issues and relevant architecture
Hybrid cloud multi-access edge computing (MEC) deployments have been proposed as an efficient means to support Internet of Things (IoT) applications, relying on a plethora of nodes and data. In this paper, an overview of the area of hybrid clouds considering relevant research areas is given, presenting technologies and mechanisms for the formation of such MEC deployments and emphasizing several key issues that should be tackled by novel approaches, especially under the 5G paradigm. Furthermore, a decentralized hybrid cloud MEC architecture, resulting in a Platform-as-a-Service (PaaS), is proposed, and its main building blocks and layers are thoroughly described. Aiming to offer a broad perspective on the business potential of such a platform, the stakeholder ecosystem is also analyzed. Finally, two use cases in the context of smart cities and mobile health are presented, aimed at showing how the proposed PaaS enables the development of the respective IoT applications.
Big Data and Large-scale Data Analytics: Efficiency of Sustainable Scalability and Security of Centralized Clouds and Edge Deployment Architectures
One of the significant shifts in next-generation computing technologies will certainly be in the development of Big Data (BD) deployment architectures. Apache Hadoop, the BD landmark, has evolved into a widely deployed BD operating system. Its new features include a federation structure and many associated frameworks, which give Hadoop 3.x the maturity to serve different markets. This dissertation addresses two leading issues involved in exploiting BD and the large-scale data analytics realm using the Hadoop platform: (i) scalability, which directly affects system performance and overall throughput, addressed using portable Docker containers; and (ii) security, which spreads the adoption of data protection practices among practitioners, addressed using access controls. An Enhanced MapReduce Environment (EME), an OPportunistic and Elastic Resource Allocation (OPERA) scheduler, a BD Federation Access Broker (BDFAB), and a Secure Intelligent Transportation System (SITS) with a multi-tier architecture for data streaming to the cloud are the main contributions of this thesis.
Data Spaces
This open access book aims to educate data space designers to understand what is required to create a successful data space. It explores cutting-edge theory, technologies, methodologies, and best practices for data spaces for both industrial and personal data and provides the reader with a basis for understanding the design, deployment, and future directions of data spaces. The book captures the early lessons and experience in creating data spaces. It arranges these contributions into three parts covering design, deployment, and future directions respectively. The first part explores the design space of data spaces. The individual chapters detail the organisational design for data spaces, data platforms, data governance, federated learning, personal data sharing, data marketplaces, and hybrid artificial intelligence for data spaces. The second part describes the use of data spaces within real-world deployments. Its chapters are co-authored with industry experts and include case studies of data spaces in sectors including Industry 4.0, food safety, FinTech, health care, and energy. The third and final part details future directions for data spaces, including challenges and opportunities for common European data spaces and privacy-preserving techniques for trustworthy data sharing. The book is of interest to two primary audiences: first, researchers interested in data management and data sharing, and second, practitioners and industry experts engaged in data-driven systems where the sharing and exchange of data within an ecosystem are critical.
Nano Server and Containers in Windows Server 2016
The purpose of this project was to review the new Windows Server 2016 features and to test Nano Server and Containers in the Windows Server 2016 environment. Only a small number of people know what Containers and Nano Server are; this thesis therefore introduces these new features in more detail.
Windows Server 2016 introduces many changes across various fields, including compute, administration, identity and access, networking, storage, security and assurance, and failover clustering. These changes were analyzed, and based on that analysis I decided to test Nano Server and Containers in practice.
The operating system and working environment were provided by the supervisor. The server was created in a cluster. In keeping with the project theme, all configurations were made on Windows Server 2016. Containers were managed using Docker, which provides the containerization.
The result of the thesis was a fully working Nano Server in Hyper-V on which all the configurations can be performed. A web service can be implemented on it if needed. Later, web services in a container were set up using nginx and docker-compose. Finally, a fully running and working Nano Server in a container was implemented.
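An nginx web service managed with docker-compose, as described above, can be sketched with a minimal compose file; the service name, host port, and volume path below are illustrative assumptions, not the thesis's actual configuration:

```yaml
# Minimal sketch, assuming a compose v3 file serving static content
# with nginx; names and paths are illustrative.
version: "3"
services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"          # host port 8080 -> container port 80
    volumes:
      - ./html:/usr/share/nginx/html:ro
```

Running `docker-compose up -d` in the directory containing this file starts the service in the background.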