Models, methods, and tools for developing MMOG backends on commodity clouds
Online multiplayer games have grown to unprecedented scales, attracting millions of players
worldwide. The revenue from this industry has already eclipsed well-established entertainment
industries like music and films and is expected to continue its rapid growth in the future.
Massively Multiplayer Online Games (MMOGs) have also been extensively used in research
studies and education, further motivating the need to improve their development process.
The development of resource-intensive, distributed, real-time applications like MMOG backends
involves a variety of challenges. Past research has primarily focused on the development and
deployment of MMOG backends on dedicated infrastructures such as on-premise data centers
and private clouds, which provide more flexibility but are expensive and hard to set up and
maintain. A limited set of works has also focused on utilizing the Infrastructure-as-a-Service
(IaaS) layer of public clouds to deploy MMOG backends. These clouds offer advantages such as a lower barrier to entry and a larger pool of resources, but lack the resource elasticity, standardization, and focus on development effort from which MMOG backends could greatly benefit.
Meanwhile, other research has also focused on solving various problems related to consistency,
performance, and scalability. Despite major advancements in these areas, there is no standardized development methodology that facilitates these features and streamlines the development of MMOG backends on commodity clouds. This thesis is motivated by the results of a systematic
mapping study that identifies a gap in research, evident from the fact that only a handful
of studies have explored the possibility of utilizing serverless environments within commodity
clouds to host these types of backends. These studies are mostly vision papers and do
not provide any novel contributions in terms of methods of development or detailed analyses
of how such systems could be developed. Using the knowledge gathered from this mapping
study, several hypotheses are proposed and a set of technical challenges is identified, guiding
the development of a new methodology.
The peculiarities of MMOG backends have so far constrained their development and deployment
on commodity clouds despite rapid advancements in technology. To explore whether such
environments are viable options, a feasibility study is conducted with a minimalistic MMOG
prototype to evaluate a limited set of public clouds in terms of hosting MMOG backends. Following encouraging results from this study, this thesis first motivates and then presents
a set of models, methods, and tools with which scalable MMOG backends can be developed
for and deployed on commodity clouds. These are encapsulated into a software development
framework called Athlos which allows software engineers to leverage the proposed development
methodology to rapidly create MMOG backend prototypes that utilize the resources of
these clouds to attain scalable states and runtimes. The proposed approach is based on a dynamic
model which aims to abstract the data requirements and relationships of many types of
MMOGs. Based on this model, several methods are outlined that aim to solve various problems
and challenges related to the development of MMOG backends, mainly in terms of performance
and scalability. Using a modular software architecture and standardization in common development areas, the proposed framework aims to improve and expedite the development process, leading to higher-quality MMOG backends and a shorter time to market. The models and methods
proposed in this approach can be utilized through various tools during the development
lifecycle.
The proposed development framework is evaluated qualitatively and quantitatively. The thesis
presents three case study MMOG backend prototypes that validate the suitability of the proposed
approach. These case studies also provide a proof of concept and are subsequently used
to further evaluate the framework. The propositions in this thesis are assessed with respect to
the performance, scalability, development effort, and code maintainability of MMOG backends
developed using the Athlos framework, through a variety of methods such as small- and large-scale
simulations and more targeted experimental setups. The results of these experiments uncover
useful information about the behavior of MMOG backends. In addition, they provide evidence
that MMOG backends developed using the proposed methodology and hosted on serverless
environments can: (a) support a very high number of simultaneous players under a given latency
threshold, (b) elastically scale both in terms of processing power and memory capacity, and (c) significantly reduce the amount of development effort. The results also show that this
methodology can accelerate the development of high-performance, distributed, real-time applications
like MMOG backends, while also exposing the limitations of Athlos in terms of code
maintainability.
Finally, the thesis reflects on the research objectives, revisits the hypotheses and technical challenges, and outlines plans for future work in this domain.
Edge and Big Data technologies for Industry 4.0 to create an integrated pre-sale and after-sale environment
The fourth industrial revolution, also known as Industry 4.0, has rapidly gained traction in businesses across Europe and the world, becoming a central theme in small, medium, and large enterprises alike. This new paradigm shifts the focus from locally based and barely automated firms to a globally interconnected industrial sector, stimulating economic growth and productivity, and supporting the upskilling and reskilling of employees. However, despite the maturity and scalability of information and cloud technologies, the support systems already deployed in the field are often outdated and lack the necessary security, access control, and advanced communication capabilities.
This dissertation proposes architectures and technologies designed to bridge the gap between Operational and Information Technology, in a manner that is non-disruptive, efficient, and scalable. The proposal presents cloud-enabled data-gathering architectures that make use of the newest IT and networking technologies to achieve the desired quality of service and non-functional properties. By harnessing industrial and business data, processes can be optimized even before product sale, while the integrated environment enhances data exchange for post-sale support.
The architectures have been tested and have shown encouraging performance results, providing a promising solution for companies looking to embrace Industry 4.0, enhance their operational capabilities, and prepare themselves for the upcoming fifth, human-centric revolution.
Serverless middlewares to integrate heterogeneous and distributed services in cloud continuum environments
The application of modern ICT technologies is radically changing many fields, pushing toward more open and dynamic value chains and fostering the cooperation and integration of many connected partners, sensors, and devices. A valuable example is the emerging Smart Tourism field, which derives from the application of ICT to tourism to create richer and more integrated experiences, making them more accessible and sustainable. From a technological viewpoint, a recurring challenge in these decentralized environments is the integration of heterogeneous services and data spanning multiple administrative domains, each possibly applying different security/privacy policies, device and process control mechanisms, service access and provisioning schemes, etc. The distribution and heterogeneity of these sources exacerbate the complexity of developing integrating solutions, with consequent high effort and costs for the partners seeking them.
Taking a step towards addressing these issues, we propose APERTO, a decentralized and distributed architecture that aims at facilitating the blending of data and services. At its core, APERTO relies on APERTO FaaS, a serverless platform allowing fast prototyping of the business logic, lowering the barrier to entry and the development costs for newcomers, fine-grained (down-to-zero) scaling of the resources servicing end-users, and reduced management overhead. The APERTO FaaS infrastructure is based on asynchronous and transparent communication between the components of the architecture, allowing the development of optimized solutions that exploit the peculiarities of distributed and heterogeneous environments.
In particular, APERTO addresses the provisioning of scalable and cost-efficient mechanisms targeting: i) function composition, allowing the definition of complex workloads from simple, ready-to-use functions, enabling smarter management of complex tasks and improved multiplexing capabilities; ii) the creation of end-to-end differentiated QoS slices minimizing interference among applications/services running on a shared infrastructure; iii) an abstraction providing uniform and optimized access to heterogeneous data sources; and iv) a decentralized approach for the verification of access rights to resources.
Data Communications and Network Technologies
This open access book is written according to the examination outline for the Huawei HCIA-Routing & Switching V2.5 certification, aiming to help readers master the basics of network communications and use Huawei network devices to set up enterprise LANs and WANs, wired and wireless networks, ensure network security for enterprises, and grasp cutting-edge computer network technologies. The content of this book includes: network communication fundamentals, the TCP/IP protocol suite, the Huawei VRP operating system, IP addresses and subnetting, static and dynamic routing, Ethernet networking technology, ACL and AAA, network address translation, DHCP servers, WLAN, IPv6, the WAN protocols PPP and PPPoE, typical networking architectures and design cases for campus networks, the SNMP protocol used in network management, operation and maintenance, the Network Time Protocol (NTP), SDN and NFV, programming, and automation. As the world’s leading provider of ICT (information and communication technology) infrastructure and smart terminals, Huawei offers products ranging from digital data communication, cyber security, wireless technology, data storage, cloud computing, and smart computing to artificial intelligence.
Enabling Artificial Intelligence Analytics on The Edge
This thesis introduces a novel distributed model for handling edge-based video analytics in real time. The novelty of the model lies in decoupling and distributing the services into several decomposed functions, creating virtual function chains (the VFC model). The model considers both computational and communication constraints. Theoretical, simulation, and experimental results have shown that the VFC model can enable the support of heavy-load services in an edge environment while improving the footprint of the service compared to state-of-the-art frameworks. In detail, results on the VFC model have shown that it can reduce the total edge cost compared with a monolithic model and a simple frame-distribution model. For experimenting on a real-case scenario, a testbed edge environment has been developed, where the aforementioned models, as well as a general distribution framework (Apache Spark), have been deployed. A cloud service has also been considered. Experiments have shown that VFC can outperform all alternative approaches, reducing operational cost and improving the QoS. Finally, a migration model, a caching model, and a QoS monitoring service based on Long Short-Term Memory models are introduced.
Computing Without Borders: The Way Towards Liquid Computing
Despite the de facto technological uniformity fostered by the cloud and edge computing paradigms, resource fragmentation across isolated clusters hinders dynamism in application placement, leading to suboptimal performance and operational complexity. Building upon and extending these paradigms, we propose a novel approach, called liquid computing, envisioning a transparent continuum of resources and services on top of the underlying fragmented infrastructure. Fully decentralized, multi-ownership-oriented, and intent-driven, it enables an overarching abstraction for improved application execution, while at the same time opening up new scenarios, including resource sharing and brokering. Following the above vision, we present liqo, an open-source project that materializes this approach through the creation of dynamic and seamless Kubernetes multi-cluster topologies. Extensive experimental evaluations have shown its effectiveness in different contexts, both in terms of Kubernetes overhead and in comparison with other open-source alternatives.
Enabling Technology in Optical Fiber Communications: From Device, System to Networking
This book explores the enabling technology in optical fiber communications. It focuses on state-of-the-art advances from fundamental theories, devices, and subsystems to networking applications, as well as future perspectives of optical fiber communications. The topics covered include integrated photonics, fiber optics, fiber and free-space optical communications, and optical networking.
AS Domain Tunnelling for User-Selectable Loose Source Routing
The use of the Internet as a ubiquitous means of e-commerce, social interaction, and entertainment is well established. However, despite service diversity, all traffic is treated the same. Although this clearly “works” and is considered “fair” in terms of net neutrality, there are times when it would be particularly beneficial if the end-user could have some control over the path his or her traffic takes, either avoiding geographic regions or exploiting lower-latency options, should they exist. In this research work, we propose to design and evaluate a scheme that allows end-users to selectively exploit a sequence of tunnels along a path from the source to a chosen destination. The availability of such tunnels is advertised centrally through a broker, with the cooperation of the Autonomous System (AS) domains, allowing end-users to use them if so desired. The closest analogy to this scheme is that of a driver choosing to use one or more toll roads along a route to avoid potential congestion or less desirable geographic locations. It thus takes the form of a type of loose source routing. Furthermore, the approach avoids the need for inter-operator cooperation, although such cooperation provides a means of extending tunnels across AS peers. In particular, we aim to ascertain the benefit in terms of delay and reliability for a given degree of tunnel presence within a portion of the Internet. The expectation is that a relatively small number of tunnels may be sufficient to provide worthwhile improvements in performance, at least for some users. Based on this premise, we first design and implement a simulation tool that uses Dijkstra’s Algorithm to calculate the least-cost path(s) for differing percentages of randomly placed intra-AS tunnels. We consider end-to-end delay as the cost metric associated with each route, and a number of experiments have been performed to confirm the improvement in delays when using the tunnels.
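The least-cost computation described above can be sketched as follows; this is a minimal illustration, not the thesis's simulation tool, and the topology, delay values, and tunnel placement are invented for the example. A tunnel is modeled simply as a lower-delay edge replacing an intra-AS segment:

```python
import heapq

def dijkstra(graph, src, dst):
    """Least-delay path over a weighted digraph: graph[u] = {v: delay_ms}."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # Reconstruct the path by walking predecessors back from the destination.
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    path.append(src)
    return dist[dst], path[::-1]

# Hypothetical AS-level topology, per-hop delays in milliseconds.
base = {
    "A": {"B": 30, "C": 10},
    "B": {"D": 30},
    "C": {"D": 40},
    "D": {},
}
# An advertised tunnel: a lower-delay shortcut for the C->D intra-AS segment.
with_tunnel = {u: dict(vs) for u, vs in base.items()}
with_tunnel["C"]["D"] = 15

print(dijkstra(base, "A", "D"))         # baseline least-delay route
print(dijkstra(with_tunnel, "A", "D"))  # same route, lower delay via the tunnel
```

Rerunning the search over graphs with differing fractions of randomly placed tunnel edges gives the kind of delay comparison the experiments describe.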
We then consider the inclusion of a small financial cost that the user would be expected to pay in order to use selected tunnels. Details of the payment mechanism are outside the scope of this thesis; however, the financial burden is taken into account when choosing a route. There is thus a trade-off between delay reduction and a financial penalty. First, we explore a heuristic approach using a Genetic Algorithm (GA) we create, whereby these conflicting goals are combined into a weighted fitness score associated with the alternative routes, allowing a near-optimal compromise to be found, based on the weighting. The downside of this approach is that there is typically a single solution for a given selected weighting. It may be that the user wishes to see the spectrum of alternatives and decide on a suitable “sweet spot” based on their current preferences. As such, we then design, implement, and evaluate an end-user path selection tool using a Multi-Objective Evolutionary Algorithm (MOEA). Unlike the GA, this approach presents a set of optimal solutions for different compromises between the performance objectives, which form a Pareto front. This scheme currently takes into account cost and delay but provides an extensible mechanism for other fitness factors to be considered.
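The contrast between the two selection schemes can be sketched in a few lines; the candidate routes and weights below are invented for illustration, and this toy enumeration stands in for the evolutionary search itself:

```python
def weighted_fitness(route, w_delay):
    """GA-style scalarization: one combined score per route for a chosen weighting."""
    delay, cost = route
    return w_delay * delay + (1.0 - w_delay) * cost

def pareto_front(routes):
    """MOEA-style output: every route not dominated in both delay and cost."""
    front = []
    for r in routes:
        dominated = any(o[0] <= r[0] and o[1] <= r[1] and o != r for o in routes)
        if not dominated:
            front.append(r)
    return sorted(front)

# Hypothetical candidate routes as (end-to-end delay in ms, tunnel fee).
routes = [(80, 0), (60, 2), (40, 5), (45, 6), (30, 9)]

# The GA-style approach collapses the two objectives into one score,
# yielding a single compromise for the chosen weighting.
best = min(routes, key=lambda r: weighted_fitness(r, w_delay=0.5))
print(best)

# The MOEA-style approach exposes the whole spectrum of trade-offs instead,
# letting the user pick their own "sweet spot".
print(pareto_front(routes))
```

Note that the dominated route (45 ms, fee 6) never appears on the front, since (40 ms, fee 5) beats it on both objectives.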
Optimization and Communication in UAV Networks
UAVs are becoming a reality and are attracting increasing attention. They can be remotely controlled or completely autonomous, and used alone or as a fleet in a large set of applications. They are constrained by hardware, since they cannot be too heavy and rely on batteries. Their use still raises a large set of exciting new challenges in terms of trajectory optimization and positioning when they are used alone or in cooperation, and communication when they evolve in swarms, to name but a few examples. This book presents some new original contributions regarding UAV and UAV-swarm optimization and communication aspects.
- …