20 research outputs found

    Results and achievements of the ALLIANCE Project: New network solutions for 5G and beyond

    Leaving the current 4th generation of mobile communications behind, 5G will represent a disruptive paradigm shift integrating 5G Radio Access Networks (RANs), ultra-high-capacity access/metro/core optical networks, and intra-datacentre (DC) network and computational resources into a single converged 5G network infrastructure. This paper overviews the main achievements of the ALLIANCE project, which aims at architecting a converged 5G-enabled network infrastructure that satisfies the needs of the envisioned upcoming Digital Society. In particular, we present two networking solutions for 5G and beyond 5G (B5G): Software Defined Networking/Network Function Virtualisation (SDN/NFV) on top of an ultra-high-capacity, spatially and spectrally flexible all-optical network infrastructure, and the clean-slate Recursive Inter-Network Architecture (RINA) over packet networks, including access, metro, core and DC segments. The common umbrella of these solutions is the Knowledge-Defined Networking (KDN)-based orchestration layer which, by implementing Artificial Intelligence (AI) techniques, enables optimal end-to-end service provisioning. Finally, the cross-layer manager of the ALLIANCE architecture includes two novel elements: a monitoring element providing network and user data in real time to the KDN, and a blockchain-based trust element in charge of exchanging reliable and trustworthy information with external domains. This work has been partially funded by the Spanish Ministry of Economy and Competitiveness under contract FEDER TEC2017-90034-C2 (ALLIANCE project) and by the Generalitat de Catalunya under contracts 2017SGR-1037 and 2017SGR-605. Peer reviewed. Postprint (published version).

    Building a Digital Twin for network optimization using graph neural networks

    Network modeling is a critical component of Quality of Service (QoS) optimization. Current networks implement Service Level Agreements (SLAs) by careful configuration of both routing and queue scheduling policies. However, existing modeling techniques are not able to produce accurate estimates of relevant SLA metrics, such as delay or jitter, in networks with complex QoS-aware queueing policies (e.g., strict priority, Weighted Fair Queueing, Deficit Round Robin). Recently, Graph Neural Networks (GNNs) have become a powerful tool for modeling networks, since they are specifically designed to work with graph-structured data. In this paper, we propose a GNN-based network model able to capture the complex relationship between the queueing policy (scheduling algorithm and queue sizes), the network topology, the routing configuration, and the input traffic matrix. We call our model TwinNet, a Digital Twin that can accurately estimate relevant SLA metrics for network optimization. TwinNet generalizes over its input parameters, operating successfully in topologies, routing, and queueing configurations never seen during training. We evaluate TwinNet over a wide variety of scenarios with synthetic traffic and validate it with real traffic traces. Our results show that TwinNet can provide accurate estimates of end-to-end path delays in 106 unseen real-world topologies, under different queueing configurations, with a Mean Absolute Percentage Error (MAPE) of 3.8%, and a MAPE of 6.3% when evaluated on a real testbed. We also showcase the potential of the proposed model for SLA-driven network optimization and what-if analysis. This publication is part of the Spanish I+D+i project TRAINER-A (ref. PID2020-118011GB-C21), funded by MCIN/AEI/10.13039/501100011033, Spain. This work is also partially funded by the Catalan Institution for Research and Advanced Studies (ICREA), Spain, the Secretariat for Universities and Research of the Ministry of Business and Knowledge of the Government of Catalonia, Spain, and the European Social Fund. Peer reviewed. Postprint (published version).
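The abstract describes a model that couples path states with the states of the links each path traverses. TwinNet's actual architecture is not reproduced here; the following is a minimal, untrained sketch of the path-link message-passing scheme such GNN models typically use. The toy topology, dimensions, and all variable names are illustrative assumptions, and the readout is random rather than trained:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scenario: paths are sequences of link indices (assumed example).
paths = {"A->C": [0, 1], "B->C": [2, 1]}
n_links = 3
traffic = {"A->C": 0.6, "B->C": 0.3}  # offered load per path

# Hidden states and weights (random here; learned in a real model).
dim = 4
link_h = rng.normal(size=(n_links, dim))
path_h = {p: rng.normal(size=dim) for p in paths}
W_pl = rng.normal(size=(dim, dim)) * 0.1
W_lp = rng.normal(size=(dim, dim)) * 0.1
w_out = rng.normal(size=dim) * 0.1

for _ in range(3):  # T message-passing iterations
    # Each path gathers the states of the links it traverses.
    for p, links in paths.items():
        msg = sum(link_h[l] for l in links)
        path_h[p] = np.tanh(W_pl @ msg + traffic[p])
    # Each link aggregates the states of the paths crossing it.
    agg = np.zeros((n_links, dim))
    for p, links in paths.items():
        for l in links:
            agg[l] += path_h[p]
    link_h = np.tanh(agg @ W_lp)

# Readout: per-path delay estimate (untrained, illustrative only).
delays = {p: float(np.dot(w_out, h)) for p, h in path_h.items()}
print(delays)
```

The alternation between path and link updates is what lets such a model relate the traffic matrix, routing, and per-link queueing state without fixing the topology at design time.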

    BECA: A Blockchain-Based Edge Computing Architecture for Internet of Things Systems

    The scale of Internet of Things (IoT) systems has expanded in recent times and, in tandem, IoT solutions have developed symbiotic relationships with technologies such as edge computing. IoT solutions have leveraged edge computing to improve their capabilities, such as quick data retrieval, low-latency response, and advanced computation, among others. However, in contrast with the benefits offered by edge computing, there are several drawbacks, such as centralized data storage, data ownership, privacy, data auditability, and security, which concern the IoT community. This study leveraged blockchain's inherent capabilities, including distributed storage, non-repudiation, privacy, security, and immutability, to provide a novel, advanced edge computing architecture for IoT systems. Specifically, this blockchain-based edge computing architecture addresses centralized data storage, data auditability, privacy, data ownership, and security. Following implementation, the solution was evaluated to quantify performance in terms of response time and resource utilization. The results show the viability of the proposed and implemented architecture, characterized by improved privacy, device data ownership, security, and data auditability, while implementing decentralized storage.
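The BECA implementation itself is not shown in the abstract. As a rough illustration of the immutability and auditability properties it refers to, here is a toy hash-chained ledger of device readings; this is a minimal sketch under assumed names and data, not the BECA architecture:

```python
import hashlib
import json

def block_hash(block):
    # Hash the block contents (excluding the stored hash itself).
    body = {k: block[k] for k in ("prev", "data")}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def make_block(prev_hash, payload):
    block = {"prev": prev_hash, "data": payload}
    block["hash"] = block_hash(block)
    return block

# An edge node appends IoT readings as hash-chained records (toy ledger).
chain = [make_block("0" * 64, {"device": "sensor-1", "temp": 21.5})]
chain.append(make_block(chain[-1]["hash"], {"device": "sensor-1", "temp": 21.7}))

def audit(chain):
    # Auditability: every block must match its stored hash and link
    # to the hash of its predecessor.
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

print(audit(chain))            # True: untampered chain
chain[0]["data"]["temp"] = 99  # tamper with a stored reading
print(audit(chain))            # False: tampering is detected
```

Because each record embeds the hash of its predecessor, retroactively editing any stored reading invalidates every later link, which is the property the architecture relies on for auditability.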

    CloudOps: Towards the Operationalization of the Cloud Continuum: Concepts, Challenges and a Reference Framework

    The current trend of developing highly distributed, context-aware, compute-intensive and data-sensitive applications is changing the boundaries of cloud computing. Encouraged by the growing IoT paradigm and the availability of flexible edge devices, an ecosystem of resources, ranging from high-density compute and storage to very lightweight embedded computers running on batteries or solar power, is available to DevOps teams from what is known as the Cloud Continuum. In this dynamic context, manageability is key, as are controlled operations and resource monitoring for handling anomalies. Unfortunately, the operation and management of such heterogeneous computing environments (including edge, cloud and network services) is complex, and operators face challenges such as the continuous optimization and autonomous (re-)deployment of context-aware stateless and stateful applications, where they must ensure service continuity while anticipating potential failures in the underlying infrastructure. In this paper, we propose a novel CloudOps workflow (extending the traditional DevOps pipeline), proposing techniques and methods for application operators to fully embrace the possibilities of the Cloud Continuum. Our approach supports DevOps teams in the operationalization of the Cloud Continuum. In addition, we provide an extensive explanation of the scope, possibilities and future of CloudOps. This research was funded by the European project PIACERE (Horizon 2020 Research and Innovation Programme, under grant agreement No. 101000162).

    User identification and community exploration via mining big personal data in online platforms

    User-generated big data mining is vitally important for large online platforms in terms of security, profit improvement, product recommendation and system management. Personal attribute recognition, user behavior prediction, user identification, and community detection are the most critical and interesting issues that remain challenging in many real applications in terms of accuracy, efficiency and data security. An online platform with tens of thousands of users is always vulnerable to malicious users, who pose a threat to other innocent users and consume unnecessary resources, so accurate user identification is urgently required to prevent such malicious attempts. Meanwhile, accurate prediction of user behavior helps large platforms provide satisfactory recommendations to users and efficiently allocate different amounts of resources to different users. In addition to individual identification, community exploration of the large social networks formed by online databases can also help managers gain knowledge of how a community evolves. Such large-scale and diverse social networks can also be used to validate network theories previously developed from synthetic networks or small real networks. In this thesis, we study several specific cases to address some key challenges that remain in different types of large online platforms, such as user behavior prediction for cold-start users, privacy protection for user-generated data, and large-scale and diverse social community analysis. In the first case, as an emerging business, online education has attracted tens of thousands of users, as it can provide diverse courses that exactly satisfy the demands of students. Due to the limitations of public school systems, many students pursue private supplementary tutoring to improve their academic performance.
    Similar to online shopping platforms, an online education system is a user-product based service, where users usually have to select and purchase the courses that meet their demands. It is important to construct a course recommendation and user behavior prediction system based on user attributes or user-generated data. Item recommendation in current online shopping systems is usually based on the interactions between users and products, since most personal attributes are unnecessary for online shopping services, and users often provide false information during registration. Therefore, it is not possible to recommend items by exploiting the similarity of personal attributes among users, such as education level, age, school, and gender. Unlike most online shopping platforms, online education platforms have access to a large number of credible personal attributes, since accurate personal information is important in education services, and user behaviors can be predicted from user attributes alone. Moreover, previous works on learning individual attributes are based primarily on panel survey data, which ensures credibility but lacks efficiency; therefore, most such works include only hundreds or thousands of users. Using more than 200,000 anonymous K-12 students' 3-year learning data from one of the world's largest online extra-curricular education platforms, we uncover students' online learning behaviors and infer the impact of students' home location, family socioeconomic situation and attended school's reputation/rank on the students' private tutoring course participation and learning outcomes. Further analysis suggests that such impact may be largely attributed to the inequality of access to educational resources in different cities and the inequality in family socioeconomic status.
    Finally, we study the predictability of students' performance and behaviors using machine learning algorithms with different groups of features, showing that students' online learning performance can be predicted from personal attributes and user-generated data with a mean absolute error (MAE) below 10%. As mentioned above, user attributes are often faked on most online platforms, and online platforms are usually vulnerable to malicious users, so it is very important to identify users or verify their attributes. Many studies have used user-generated mobile phone data (which includes sensitive information) to identify diverse user attributes, such as socioeconomic status, age, education level, and profession. Most of these approaches leverage original sensitive user data to build feature-rich models that take private information as input, such as exact locations, app usage and call detail records. However, accessing users' raw mobile phone data may violate increasingly strict private data protection policies and regulations (e.g., GDPR). We observe that appropriate statistical methods can offer an effective means to eliminate private information while preserving personal characteristics, thus enabling the identification of user attributes without privacy concerns. Typically, identifying an unfamiliar caller's profession is important to protect citizens' personal safety and property. Due to the limited data protection of various popular online services in some countries, such as taxi hailing or takeout ordering, many users nowadays encounter an increasing number of phone calls from strangers. The situation may be aggravated when criminals pretend to be such service delivery staff, bringing threats to individual users as well as society. Additionally, more and more people suffer from excessive digital marketing and fraudulent phone calls because of personal information leakage. Therefore, real-time identification of unfamiliar callers is urgently needed.
    We explore the feasibility of user identification with privacy-preserved user-generated mobile data, and we develop CPFinder, a system which implements automatic identification of callers on end devices. The system mainly identifies four categories of users: taxi drivers, delivery and takeout staff, telemarketers and fraudsters, and normal users (other professions). Our evaluation over an anonymized dataset of 1,282 users covering a period of 3 months in Shanghai shows that CPFinder can achieve an accuracy of over 75% for multi-class classification and over 92.35% for binary classification. In addition to the mining of personal attributes and behaviors, the community mining of large groups of people based on online big data also attracts much attention due to the accessibility of large-scale social networks on online platforms. As one of the most important branches of social networks, the scientific collaboration network has been studied for decades, as online big publication databases are easy to access and many user attributes are available. Academic collaborations have become regular and the connections among researchers have become closer due to the prosperity of globalized academic communication. It has been found that many computer science conferences are closed communities in terms of the acceptance of newcomers' papers, especially the well-regarded conferences [cabot2018cs]. However, an in-depth study of the differences in the closeness and structural features of different conferences, and of what causes these differences, is still missing. More analysis is needed to determine whether a network is closed or has other properties. We envision that social connections play an increasing role in the academic society and influence the paper selection process.
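CPFinder's classifier is not specified in the abstract. To illustrate the general idea of classifying a caller's profession from aggregated, non-sensitive call statistics, here is a toy nearest-centroid sketch; the features (calls per day, mean call duration in seconds, fraction of unique callees), the numbers, and the class set are illustrative assumptions, not CPFinder's design:

```python
# Example aggregate profiles per profession (assumed, not real data).
profiles = {
    "taxi driver":  [(42, 35, 0.95), (55, 28, 0.97)],
    "telemarketer": [(120, 70, 0.99), (150, 55, 0.98)],
    "normal user":  [(8, 180, 0.40), (5, 240, 0.35)],
}

def centroid(rows):
    # Mean of each feature column.
    return [sum(col) / len(rows) for col in zip(*rows)]

centroids = {label: centroid(rows) for label, rows in profiles.items()}

def classify(x):
    # Assign the label of the nearest centroid (squared Euclidean distance).
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(centroids, key=lambda lbl: dist(x, centroids[lbl]))

print(classify((50, 30, 0.96)))   # nearest to the taxi-driver centroid
print(classify((6, 200, 0.38)))   # nearest to the normal-user centroid
```

The key privacy point from the text is that only such aggregate statistics, rather than raw locations or call records, need to leave the device.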
    These influences are not only restricted to visible links, but also extend to weak ties that connect two distant nodes. Previous studies of coauthor networks did not adequately consider the central role of some authors in publication venues, such as the Program Committee (PC) chairs of conferences. Such people can influence the evolutionary patterns of coauthor networks due to their authority, the trust placed in them by members selecting accepted papers, and their core positions in the community. Thus, in addition to the ratio of newcomers' papers, it would be interesting to quantify PC-chair-relevant metrics to measure the closure of a conference from the perspective of established authors' papers. Additionally, analyzing the differences among conferences in terms of the evolution of their coauthor networks and degree of closeness may disclose how closed communities form. Therefore, we introduce several different outcomes due to the various structural characteristics of several typical conferences. Using the DBLP dataset of computer science publications and a PC chair dataset, we show evidence of the existence of strong and weak ties in coauthor networks, and the PC chairs' influence is confirmed to be related to tie strength and network structural properties. Several PC-chair-relevant metrics based on coauthor networks are introduced to measure the closure and efficiency of a conference.
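The "ratio of newcomers' papers" mentioned above can be made concrete with a small sketch: the share of a year's papers written entirely by authors who have never published at that venue before. The data, names, and exact metric here are illustrative assumptions, not the thesis' definitions:

```python
# Toy venue history: year -> list of papers, each paper an author set.
papers = {
    2019: [{"alice", "bob"}, {"carol"}, {"dave", "erin"}],
    2020: [{"alice", "frank"}, {"gina"}, {"bob", "carol"}],
}

def newcomer_ratio(papers, year):
    # Authors seen at the venue in any earlier year.
    previous = set()
    for y, year_papers in papers.items():
        if y < year:
            for authors in year_papers:
                previous |= authors
    # A paper counts as a "newcomer paper" if none of its authors
    # have published at the venue before.
    newcomer_papers = [p for p in papers[year] if p.isdisjoint(previous)]
    return len(newcomer_papers) / len(papers[year])

print(round(newcomer_ratio(papers, 2020), 3))  # only {"gina"} is all-new
```

A closed community would show a persistently low ratio; the PC-chair metrics described in the text could extend this by, for example, weighting papers by their authors' coauthor-network distance to the chairs.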

    Languages of games and play: A systematic mapping study

    Digital games are a powerful means for creating enticing, beautiful, educational, and often highly addictive interactive experiences that impact the lives of billions of players worldwide. We explore what informs the design and construction of good games in order to learn how to speed up game development. In particular, we study to what extent languages, notations, patterns, and tools can offer experts the theoretical foundations, systematic techniques, and practical solutions they need to raise their productivity and improve the quality of games and play. Despite the growing number of publications on this topic, there is currently no overview describing the state of the art that relates research areas, goals, and applications. As a result, efforts and successes are often one-off, lessons learned go overlooked, language reuse remains minimal, and opportunities for collaboration and synergy are lost. We present a systematic map that identifies relevant publications and gives an overview of research areas and publication venues. In addition, we categorize research perspectives along common objectives, techniques, and approaches, illustrated by summaries of selected languages. Finally, we distill challenges and opportunities for future research and development.