Deep generative models for network data synthesis and monitoring
Measurement and monitoring are fundamental tasks in all networks, enabling downstream management and optimization of the network. Although networks inherently generate abundant monitoring data, accessing and effectively measuring that data is another matter. The challenges are manifold. First, network monitoring data is inaccessible to external users, and it is hard to release a high-fidelity dataset without leaking commercially sensitive information. Second, effective data collection covering a large-scale network system can be very expensive, given the growing size of networks, e.g., the number of cells in a radio network or the number of flows in an Internet Service Provider (ISP) network. Third, it is difficult to ensure fidelity and efficiency simultaneously in network monitoring, as the resources available in network elements to support measurement functions are too limited to implement sophisticated mechanisms. Finally, understanding and explaining the behavior of the network is challenging due to its size and complex structure. Various emerging optimization-based solutions (e.g., compressive sensing) and data-driven solutions (e.g., deep learning) have been proposed for these challenges, but the fidelity and efficiency of existing methods cannot yet meet current network requirements.
The contributions made in this thesis significantly advance the state of the art in network measurement and monitoring techniques. Throughout, we leverage a cutting-edge machine learning technology: deep generative modeling. First, we design and realize APPSHOT, an efficient city-scale network traffic sharing system built on a conditional generative model, which requires only open-source contextual data at inference time (e.g., land-use information and population distribution). Second, we develop GENDT, an efficient generative-model-based drive testing system that combines graph neural networks, conditional generation, and quantified model uncertainty to improve the efficiency of mobile drive testing. Third, we design and implement DISTILGAN, a high-fidelity, efficient, versatile, and real-time network telemetry system built on latent GANs and spectral-temporal networks. Finally, we propose SPOTLIGHT, an accurate, explainable, and efficient anomaly detection system for the Open RAN (Radio Access Network). The lessons learned through this research are summarized, and interesting topics for future work in this domain are discussed. All proposed solutions have been evaluated on real-world datasets and applied to support different applications in real systems.
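The conditional-generation idea behind APPSHOT, synthesizing traffic from open contextual features, can be sketched as follows. The function names, layer sizes, and toy feature set are illustrative assumptions, not the thesis's actual implementation; in the real system the generator weights would be learned adversarially rather than drawn at random.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_generator(cond_dim, latent_dim, out_dim, hidden=32):
    """Build weights for a tiny conditional generator MLP.
    (Random here; learned adversarially in a real conditional GAN.)"""
    W1 = rng.normal(size=(cond_dim + latent_dim, hidden)) * 0.1
    W2 = rng.normal(size=(hidden, out_dim)) * 0.1
    return W1, W2

def generate_traffic(cond, weights, latent_dim=8):
    """Sample one synthetic traffic vector conditioned on contextual data."""
    W1, W2 = weights
    z = rng.normal(size=latent_dim)              # latent noise
    h = np.tanh(np.concatenate([cond, z]) @ W1)  # condition + noise -> hidden
    return np.maximum(h @ W2, 0.0)               # non-negative traffic volumes

# Hypothetical context: [land-use fraction, population density] for one cell
context = np.array([0.6, 0.3])
weights = make_generator(cond_dim=2, latent_dim=8, out_dim=24)  # 24 hourly values
sample = generate_traffic(context, weights)
print(sample.shape)
```

Repeated calls with the same context yield different plausible samples, which is the point: the contextual data conditions the distribution, while the latent noise supplies diversity.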
Modern computing: Vision and challenges
Over the past six decades, the field of computing systems has experienced significant transformations, profoundly impacting society with developments such as the Internet and the commodification of computing. Underpinned by technological advancements, computer systems, far from being static, have continuously evolved and adapted to cover multifaceted societal niches. This has led to new paradigms such as cloud, fog, and edge computing and the Internet of Things (IoT), which offer fresh economic and creative opportunities. Nevertheless, this rapid change poses complex research challenges, especially in maximizing potential and enhancing functionality. As such, to maintain an economical level of performance that meets ever-tighter requirements, one must understand the drivers behind the emergence and expansion of new models, and how contemporary challenges differ from past ones. To that end, this article investigates and assesses the factors influencing the evolution of computing systems, covering established systems and architectures as well as newer developments such as serverless computing, quantum computing, and on-device AI at the edge. Trends emerge when one traces the technological trajectory: the rapid obsolescence of frameworks due to business and technical constraints, a move towards specialized systems and models, and varying approaches to centralized and decentralized control. This comprehensive review of modern computing systems looks ahead to the future of research in the field, highlighting key challenges and emerging trends, and underscoring their importance in cost-effectively driving technological progress.
Cloud Forensic: Issues, Challenges and Solution Models
Cloud computing is a web-based utility model that is growing in popularity with the emergence of the 4th Industrial Revolution; consequently, cybercrimes that affect web-based systems are also relevant to cloud computing. To conduct a forensic investigation into a cyber-attack, it is necessary to identify and locate the source of the attack as soon as possible. Although significant research has been done in this domain on obstacles and their solutions, work on approaches and strategies is still at an early stage. There are barriers at every stage of cloud forensics; therefore, before we can devise a comprehensive way to deal with these problems, we must first understand cloud technology and its forensic environment. Although there are articles related to cloud forensics, no paper has yet consolidated the contemporary concerns and solutions in this area. Throughout this chapter, we look at the cloud environment and the threats and attacks to which it may be subjected. We also examine the approaches cloud forensics may take, the various frameworks, and the practical challenges and limitations they may face in cloud forensic investigations.
Comment: 23 pages; 6 figures; 4 tables. Book chapter of the book titled "A Practical Guide on Security and Privacy in Cyber Physical Systems: Foundations, Applications and Limitations", World Scientific Series in Digital Forensics and Cybersecurity
Adaptive Data-driven Optimization using Transfer Learning for Resilient, Energy-efficient, Resource-aware, and Secure Network Slicing in 5G-Advanced and 6G Wireless Systems
Title from PDF of title page, viewed January 31, 2023. Dissertation advisor: Cory Beard. Vita. Includes bibliographical references (pages 134-141). Dissertation (Ph.D.)--Department of Computer Science and Electrical Engineering, University of Missouri--Kansas City, 2022.
5G–Advanced is the next step in the evolution of fifth-generation (5G) technology. It will introduce a new level of expanded capabilities beyond connectivity and enable a broader range of advanced applications and use cases. 5G–Advanced will support modern applications with greater mobility and high dependability. Artificial Intelligence and Machine Learning will enhance network performance through spectral efficiency and energy savings.
This research established a framework to optimally control and manage the selection of appropriate network slices for incoming requests from diverse applications and services in Beyond 5G networks. The developed DeepSlice model is used to optimize network and per-slice load efficiency across isolated slices and to manage the slice lifecycle in case of failure. The DeepSlice framework can predict unknown connection types by drawing on a trained deep-learning neural network model.
The research also addresses threats to the performance, availability, and robustness of B5G networks by proactively preventing and resolving them. The study proposed a Secure5G framework for authentication, authorization, trust, and control in a network slicing architecture for 5G systems. The developed model protects the 5G infrastructure from Distributed Denial of Service attacks by analyzing incoming connections and learning from them. The research demonstrates preventive measures against volume attacks, flooding attacks, and masking (spoofing) attacks. This research builds the framework towards the zero-trust objective (never trust, always verify, and verify continuously), which improves resilience.
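The kind of volumetric-attack screening Secure5G performs can be illustrated with a much simpler rate-based filter. The threshold, the log format, and the source identifiers below are placeholders standing in for the learned model described above, not the Secure5G design itself.

```python
from collections import Counter

def flag_flooding(connections, window_s, max_rate=50.0):
    """Flag sources whose connection rate over the observation window
    exceeds a threshold -- a stand-in for a learned DDoS classifier."""
    counts = Counter(src for src, _ in connections)
    return {src for src, n in counts.items() if n / window_s > max_rate}

# Hypothetical log: (source_id, timestamp) pairs seen in a 1-second window
log = [("ue-1", t) for t in range(3)] + [("bot-7", t) for t in range(120)]
print(flag_flooding(log, window_s=1.0))  # → {'bot-7'}
```

A learned model replaces the fixed threshold with a decision boundary fitted to labeled traffic, but the screening loop (observe connections, score sources, block offenders) has the same shape.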
Another fundamental difficulty for wireless network systems is providing a desirable user experience under varying network conditions, such as fluctuating network loads and bandwidth. Mobile Network Operators have long battled unforeseen network traffic events. This research proposed ADAPTIVE6G to tackle the network load estimation problem using knowledge-inspired Transfer Learning on radio network Key Performance Indicators collected from network slices. These algorithms enable Mobile Network Operators to optimally coordinate their computational tasks in stochastic and time-varying network states.
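The transfer idea behind ADAPTIVE6G, reusing knowledge learned on one slice's KPIs to warm-start load estimation on a data-poor slice, can be sketched with a linear model and a ridge penalty that shrinks the target weights toward the source weights. The data, shapes, and penalty strength are illustrative assumptions, not the thesis's method.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit(X, y, lam=0.0, prior=None):
    """Ridge regression; with a prior, shrink toward the source-task
    weights instead of toward zero (simple parameter transfer)."""
    d = X.shape[1]
    prior = np.zeros(d) if prior is None else prior
    A = X.T @ X + lam * np.eye(d)
    return np.linalg.solve(A, X.T @ y + lam * prior)

w_true = np.array([2.0, -1.0, 0.5])  # hypothetical KPI -> load mapping

# Source slice: plenty of KPI samples
Xs = rng.normal(size=(500, 3))
ys = Xs @ w_true + rng.normal(scale=0.1, size=500)
w_src = fit(Xs, ys)

# Target slice: only a handful of samples; transfer from w_src
Xt = rng.normal(size=(8, 3))
yt = Xt @ w_true + rng.normal(scale=0.1, size=8)
w_scratch = fit(Xt, yt)                     # learned from scratch
w_transfer = fit(Xt, yt, lam=5.0, prior=w_src)  # warm-started estimate
print(np.round(w_transfer, 2))
```

The design choice is the penalty: with few target samples, pulling the estimate toward a well-fitted source model regularizes it far better than pulling it toward zero, which is the essence of knowledge-inspired transfer for load estimation.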
Energy efficiency is another significant KPI in tracking the sustainability of network slicing. Increasing traffic demands in 5G dramatically increase the energy consumption of mobile networks, and this increase is unsustainable in terms of both dollar cost and environmental impact. This research proposed an innovative ECO6G model to attain sustainability and energy efficiency. The findings suggest that the developed model can reduce network energy costs without negatively impacting performance or the end-customer experience, compared with classical Machine Learning and statistically driven models. The proposed model is validated against the industry-standardized energy efficiency definition, and operational expenditure savings are derived, showing significant cost savings for MNOs.
Introduction -- A deep neural network framework towards a resilient, efficient, and secure network slicing in Beyond 5G Networks -- Adaptive resource management techniques for network slicing in Beyond 5G networks using transfer learning -- Energy and cost analysis for network slicing deployment in Beyond 5G networks -- Conclusion and future scope
Architectural Vision for Quantum Computing in the Edge-Cloud Continuum
Quantum processing units (QPUs) are currently available exclusively from
cloud vendors. However, with recent advancements, hosting QPUs may soon be
possible everywhere. Existing work has yet to draw on research in edge computing to
explore systems exploiting mobile QPUs, or how hybrid applications can benefit
from distributed heterogeneous resources. Hence, this work presents an
architecture for Quantum Computing in the edge-cloud continuum. We discuss the
necessity, challenges, and solution approaches for extending existing work on
classical edge computing to integrate QPUs. We describe how warm-starting
allows defining workflows that exploit the hierarchical resources spread across
the continuum. Then, we introduce a distributed inference engine with hybrid
classical-quantum neural networks (QNNs) to aid system designers in
accommodating applications with complex requirements that incur the highest
degree of heterogeneity. We propose solutions focusing on classical layer
partitioning and quantum circuit cutting to demonstrate the potential of
utilizing classical and quantum computation across the continuum. To evaluate
the importance and feasibility of our vision, we provide a proof of concept
that exemplifies how extending a classical partition method to integrate
quantum circuits can improve the solution quality. Specifically, we implement a
split neural network with optional hybrid QNN predictors. Our results show that
extending classical methods with QNNs is viable and promising for future work.
Comment: 16 pages, 5 figures, Vision Paper
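The classical layer-partitioning step in this vision, running the first layers of a network on an edge device and the remainder in the cloud, reduces to cutting the forward pass at a layer boundary; the hybrid QNN predictors would sit in the cloud-side stage. The network below uses random placeholder weights purely to show that a boundary cut preserves the output.

```python
import numpy as np

rng = np.random.default_rng(2)
relu = lambda x: np.maximum(x, 0.0)

# A tiny 3-layer MLP represented as a list of weight matrices.
layers = [rng.normal(size=s) * 0.1 for s in [(16, 32), (32, 32), (32, 4)]]

def forward(x, layers):
    for W in layers:
        x = relu(x @ W)
    return x

def split_forward(x, layers, cut):
    """Partition the forward pass: layers[:cut] run on the edge device,
    layers[cut:] in the cloud (where a hybrid QNN head could sit)."""
    edge_out = forward(x, layers[:cut])     # activation sent over the network
    return forward(edge_out, layers[cut:])  # cloud-side completion

x = rng.normal(size=16)
print(np.allclose(forward(x, layers), split_forward(x, layers, cut=1)))  # → True
```

The choice of `cut` trades edge compute against the size of the intermediate activation that must cross the network, which is exactly the partitioning decision the article's continuum architecture has to make.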
AI: Limits and Prospects of Artificial Intelligence
The emergence of artificial intelligence has triggered enthusiasm and the promise of boundless opportunities as much as uncertainty about its limits. The contributions to this volume explore the limits of AI, describe the necessary conditions for its functionality, reveal its attendant technical and social problems, and present some existing and potential solutions. At the same time, the contributors highlight the societal and economic hopes and fears, utopias and dystopias, associated with the current and future development of artificial intelligence.
Cloud Computing Solutions for Speeding-up the Small and Medium Sized Enterprise (SME's) Businesses in China
China has experienced rapid digitalization in the last decade. Cloud computing makes it possible for different users to access data resources from any geographical location through the Internet. This new paradigm can benefit businesses by offering low-cost, flexible, and customizable solutions that give companies significant competitive advantages in a strongly competitive business environment over the long term. It can be essential for all businesses, but it is especially indispensable for small and medium-sized enterprises (SMEs) seeking to prosper amid today's accelerating social, economic, and technological changes. In recent years, SMEs have allocated more budget to digitization and data-based decision-making as they have become more aware of the importance of technological development and information management in boosting their competitiveness in value-creation activities. However, they are constrained by the size of their business. The transformation of information infrastructure and the digitization process are still in their infancy as a consequence of a shortage of experts and available resources. This situation has gradually changed with the advent of cloud computing technology. By leveraging cloud computing, SMEs can fully support the digital transformation process in an efficient and effective manner. Nevertheless, business organisations seeking to apply any cloud computing solution in their business processes face some serious emerging questions: whether a business deployed on a provider's cloud has sufficient system robustness, and whether the data stored in the cloud is sufficiently secure.
This review paper aims to provide a comprehensive, relevant landscape of the different cloud computing solutions (Infrastructure as a Service - IaaS; Platform as a Service - PaaS; Software as a Service - SaaS) and service models (public, private, and hybrid cloud). This study also focuses on which cloud service model SMEs should choose, and on how Chinese SME businesses should take their own informational structure into account in the age of digital transformation, improving business performance while minimizing operational expenses and risks.
A Survey of FPGA Optimization Methods for Data Center Energy Efficiency
This article provides a survey of the academic literature on field-programmable gate arrays (FPGAs) and their use for energy-efficient acceleration in data centers. The goal is to critically present existing FPGA energy optimization techniques and discuss how they can be applied to such systems. To do so, the article explores current energy trends and their projection into the future, with particular attention to the requirements set out by the European Code of Conduct for Data Center Energy Efficiency. The article then presents a complete analysis of over ten years of research in energy optimization techniques, classifying them by purpose, method of application, and impact on the sources of consumption. Finally, we conclude with the challenges and possible innovations we expect for this sector.
Comment: Accepted for publication in IEEE Transactions on Sustainable Computing