Semantic models and knowledge graphs as manufacturing system reconfiguration enablers
Reconfigurable Manufacturing Systems (RMS) provide a cost-effective way for manufacturers to adapt to fluctuating market demands by reconfiguring assets through automated analysis of asset utilization and resource allocation. Achieving this automation requires a clear understanding, formalization, and documentation of asset capabilities and capacity utilization. This paper introduces a unified model that employs semantic modeling to describe manufacturing capabilities, capacity, and reconfiguration potential, and illustrates how these three components integrate to facilitate efficient system reconfiguration. Additionally, semantic modeling allows historical experience to be captured in a knowledge graph, enhancing long-term system reconfiguration. Two use cases based on the proposed model are presented: capability matching and reconfiguration solution recommendation. A thorough explanation of the methodology and outcomes is provided, underscoring the advantages of this approach in terms of greater efficiency, lower costs, and higher productivity.
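The capability-matching use case lends itself to a small illustration. The sketch below is a deliberately flat stand-in for the paper's semantic model: assets advertise capability sets and a reconfiguration request is matched by set inclusion. All asset and capability names are invented.

```python
# Minimal sketch of capability matching. A real implementation would query a
# semantic model / knowledge graph; here a plain dictionary stands in for it.

def match_capabilities(assets, required):
    """Return assets whose capability set covers all required capabilities."""
    return [name for name, caps in assets.items() if required <= caps]

# Hypothetical assets and capabilities, purely for illustration.
assets = {
    "milling_cell_1": {"milling", "drilling"},
    "robot_arm_2": {"pick_place", "screwing"},
    "multi_cell_3": {"milling", "drilling", "pick_place"},
}

# A new product variant needs both milling and pick-and-place:
candidates = match_capabilities(assets, {"milling", "pick_place"})
print(candidates)  # ['multi_cell_3']
```

A knowledge graph refines this by also recording which past reconfigurations used each candidate, enabling the recommendation use case.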
Modern computing: Vision and challenges
Over the past six decades, the computing systems field has experienced significant transformations, profoundly impacting society with transformational developments, such as the Internet and the commodification of computing. Underpinned by technological advancements, computer systems, far from being static, have been continuously evolving and adapting to cover multifaceted societal niches. This has led to new paradigms such as cloud, fog, edge computing, and the Internet of Things (IoT), which offer fresh economic and creative opportunities. Nevertheless, this rapid change poses complex research challenges, especially in maximizing potential and enhancing functionality. As such, to maintain an economical level of performance that meets ever-tighter requirements, one must understand the drivers of new model emergence and expansion, and how contemporary challenges differ from past ones. To that end, this article investigates and assesses the factors influencing the evolution of computing systems, covering established systems and architectures as well as newer developments, such as serverless computing, quantum computing, and on-device AI at the edge. Trends emerge when one traces the technological trajectory: the rapid obsolescence of frameworks due to business and technical constraints, a move towards specialized systems and models, and varying approaches to centralized and decentralized control. This comprehensive review of modern computing systems looks ahead to the future of research in the field, highlighting key challenges and emerging trends, and underscoring their importance in cost-effectively driving technological progress.
Adaptive Data-driven Optimization using Transfer Learning for Resilient, Energy-efficient, Resource-aware, and Secure Network Slicing in 5G-Advanced and 6G Wireless Systems
Title from PDF of title page, viewed January 31, 2023
Dissertation advisor: Cory Beard
Vita
Includes bibliographical references (pages 134-141)
Dissertation (Ph.D.)--Department of Computer Science and Electrical Engineering. University of Missouri--Kansas City, 2022
5G–Advanced is the next step in the evolution of the fifth–generation (5G) technology. It will introduce a new level of expanded capabilities beyond connections and enable a broader range of advanced applications and use cases. 5G–Advanced will support modern applications with greater mobility and high dependability. Artificial Intelligence and Machine Learning will enhance network performance through spectral efficiency and energy savings enhancements.
This research established a framework to optimally control and manage the selection of network slices for incoming requests from diverse applications and services in Beyond 5G (B5G) networks. The developed DeepSlice model optimizes network and per-slice load efficiency across isolated slices and manages the slice lifecycle in case of failure. The DeepSlice framework can predict unknown connection types using a trained deep neural network model.
The research also addresses threats to the performance, availability, and robustness of B5G networks by proactively preventing and resolving them. The study proposed a Secure5G framework for authentication, authorization, trust, and control in a network slicing architecture for 5G systems. The developed model protects the 5G infrastructure from Distributed Denial of Service attacks by analyzing and learning from incoming connections. The research demonstrates preventive measures against volumetric, flooding, and masking (spoofing) attacks, and builds a framework towards the zero trust objective (never trust, always verify, and verify continuously), improving resilience.
Another fundamental difficulty for wireless network systems is providing a desirable user experience under varying network conditions, such as fluctuating network loads and bandwidth. Mobile Network Operators have long battled unforeseen network traffic events. This research proposed ADAPTIVE6G, which tackles the network load estimation problem using knowledge-inspired Transfer Learning on radio network Key Performance Indicators from network slices. These algorithms enable Mobile Network Operators to optimally coordinate their computational tasks in stochastic and time-varying network states.
Energy efficiency is another significant KPI for tracking the sustainability of network slicing. Increasing traffic demands in 5G dramatically increase the energy consumption of mobile networks, which is unsustainable in terms of both dollar cost and environmental impact. This research proposed an innovative ECO6G model to attain sustainability and energy efficiency. Findings suggest that the developed model can reduce network energy costs without negatively impacting performance or end-customer experience, compared to classical Machine Learning and statistics-driven models. The proposed model is validated against the industry-standardized energy efficiency definition, and the derived operational expenditure savings show significant cost savings for MNOs.
Introduction -- A deep neural network framework towards a resilient, efficient, and secure network slicing in Beyond 5G Networks -- Adaptive resource management techniques for network slicing in Beyond 5G networks using transfer learning -- Energy and cost analysis for network slicing deployment in Beyond 5G networks -- Conclusion and future scope
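The transfer-learning idea behind ADAPTIVE6G can be illustrated in miniature: pretrain a load model on a data-rich slice, then fine-tune it on scarce samples from a new slice. The linear model, synthetic KPI data, and hyperparameters below are illustrative assumptions, not the dissertation's actual deep-learning setup.

```python
# Toy transfer learning for slice load estimation: the source-slice fit gives
# the fine-tuning a warm start, so a few gradient steps on scarce target data
# beat training from scratch. All data and hyperparameters are illustrative.

def fit_closed_form(xs, ys):
    """Ordinary least squares for y = w*x + b (the 'pretraining' step)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    w = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return w, my - w * mx

def fine_tune(w, b, xs, ys, lr=0.01, steps=20):
    """A few gradient-descent steps on mean-squared error."""
    n = len(xs)
    for _ in range(steps):
        gw = -2 / n * sum(x * (y - w * x - b) for x, y in zip(xs, ys))
        gb = -2 / n * sum((y - w * x - b) for x, y in zip(xs, ys))
        w, b = w - lr * gw, b - lr * gb
    return w, b

def mse(w, b, xs, ys):
    return sum((y - w * x - b) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Source slice: plentiful KPI -> load samples (load = 2.0*kpi + 1.0).
src_x = [0, 1, 2, 3, 4, 5, 6]
src_y = [2.0 * x + 1.0 for x in src_x]
# Target slice: only a few samples, with a slightly different relationship.
tgt_x = [0, 1, 2, 3, 4]
tgt_y = [2.2 * x + 1.1 for x in tgt_x]

w0, b0 = fit_closed_form(src_x, src_y)        # pretrain on the source slice
w_t, b_t = fine_tune(w0, b0, tgt_x, tgt_y)    # transfer + fine-tune
w_s, b_s = fine_tune(0.0, 0.0, tgt_x, tgt_y)  # same budget, from scratch

print(mse(w_t, b_t, tgt_x, tgt_y) < mse(w_s, b_s, tgt_x, tgt_y))  # True
```

The warm start matters precisely because the slices' load behaviors are related, which is the premise of knowledge-inspired transfer.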
A Survey of FPGA Optimization Methods for Data Center Energy Efficiency
This article surveys the academic literature on field-programmable gate
arrays (FPGAs) and their use for energy-efficient acceleration in data
centers. The goal is to critically present the existing
FPGA energy optimization techniques and discuss how they can be applied to such
systems. To do so, the article explores current energy trends and their
projection to the future with particular attention to the requirements set out
by the European Code of Conduct for Data Center Energy Efficiency. The article
then proposes a complete analysis of over ten years of research in energy
optimization techniques, classifying them by purpose, method of application,
and impacts on the sources of consumption. Finally, we conclude with the
challenges and possible innovations we expect for this sector.
Comment: Accepted for publication in IEEE Transactions on Sustainable Computing
Multi-objective resource optimization in space-aerial-ground-sea integrated networks
Space-air-ground-sea integrated (SAGSI) networks are envisioned to connect satellite, aerial, ground,
and sea networks to provide connectivity everywhere and all the time in sixth-generation (6G) networks. However, the success of SAGSI networks is constrained by several challenges including
resource optimization when the users have diverse requirements and applications. We present a
comprehensive review of SAGSI networks from a resource optimization perspective. We discuss
use case scenarios and possible applications of SAGSI networks. The resource optimization discussion considers the challenges associated with SAGSI networks. In our review, we categorize
resource optimization techniques based on throughput and capacity maximization, delay minimization, energy consumption, task offloading, task scheduling, resource allocation or utilization,
network operation cost, outage probability, and the average age of information, joint optimization (data rate difference, storage or caching, CPU cycle frequency), the overall performance of
network and performance degradation, software-defined networking, and intelligent surveillance
and relay communication. We then formulate a mathematical framework for maximizing energy
efficiency, resource utilization, and user association. We optimize user association while satisfying
the constraints of transmit power, data rate, and user association with priority. The binary decision
variable is used to associate users with system resources. Since the decision variable is binary and
constraints are linear, the formulated problem is a binary linear programming problem. Based on
our formulated framework, we simulate and analyze the performance of three different algorithms
(branch and bound algorithm, interior point method, and barrier simplex algorithm) and compare
the results. Simulation results show that the branch and bound algorithm
performs best, so we use it as our benchmark. However, the complexity of branch
and bound grows exponentially with the number of users and stations in the
SAGSI network. The interior point method and barrier simplex algorithm achieve
results comparable to the benchmark at much lower complexity. Finally, we
discuss future research directions and challenges of resource optimization in
SAGSI networks.
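The formulated user-association problem can be sketched in toy form: a binary decision variable assigns each user to one station, maximizing total data rate under station capacity limits, solved here by a small branch and bound. The rates, capacities, and bound are invented for illustration; the paper's full framework also enforces transmit-power and priority constraints.

```python
# Toy binary linear program: x[u][s] = 1 iff user u is served by station s.
# Maximize total rate subject to one station per user and station capacity.
import math

rates = [          # rates[u][s]: achievable data rate of user u at station s
    [5.0, 3.0],
    [4.0, 4.5],
    [2.0, 6.0],
]
capacity = [2, 2]  # max users per station

def branch_and_bound(u=0, load=(0, 0), value=0.0,
                     best=(-math.inf, None), assign=()):
    if u == len(rates):                       # leaf: all users assigned
        return (value, assign) if value > best[0] else best
    # Optimistic bound: every remaining user gets its best possible rate.
    bound = value + sum(max(r) for r in rates[u:])
    if bound <= best[0]:
        return best                           # prune: cannot beat incumbent
    for s in range(len(capacity)):            # branch on user u's station
        if load[s] < capacity[s]:
            new_load = tuple(n + (i == s) for i, n in enumerate(load))
            best = branch_and_bound(u + 1, new_load, value + rates[u][s],
                                    best, assign + (s,))
    return best

value, assignment = branch_and_bound()
print(value, assignment)  # 15.5 (0, 1, 1)
```

The exponential worst case of this enumeration is exactly the scalability issue that motivates the interior point and barrier simplex alternatives.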
Computational Capabilities and Compiler Development for Neutral Atom Quantum Processors: Connecting Tool Developers and Hardware Experts
Neutral Atom Quantum Computing (NAQC) emerges as a promising hardware
platform primarily due to its long coherence times and scalability.
Additionally, NAQC offers computational advantages encompassing potential
long-range connectivity, native multi-qubit gate support, and the ability to
physically rearrange qubits with high fidelity. However, for the successful
operation of a NAQC processor, one additionally requires new software tools to
translate high-level algorithmic descriptions into a hardware executable
representation, taking maximal advantage of the hardware capabilities.
Realizing such software tools requires a close connection between tool
developers and hardware experts, to ensure that the resulting tools
obey the relevant physical constraints. This work aims to provide a basis
to establish this connection by investigating the broad spectrum of
capabilities intrinsic to the NAQC platform and its implications on the
compilation process. To this end, we first review the physical background of
NAQC and derive how it affects the overall compilation process by formulating
suitable constraints and figures of merit. We then provide a summary of the
compilation process and discuss currently available software tools in this
overview. Finally, we present selected case studies and employ the discussed
figures of merit to evaluate the different capabilities of NAQC and compare
them between two hardware setups.
Comment: 32 pages, 13 figures, 2 tables
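One figure of merit commonly used in such compilation studies is an estimated circuit fidelity, approximated as the product of per-operation fidelities over all executed operations (gates, atom movements, etc.). The operation types, counts, and fidelity values below are illustrative assumptions, not measurements from the compared hardware setups.

```python
# Approximate total circuit fidelity as a product over all operations.
# Operation names and fidelity values are hypothetical examples.

def circuit_fidelity(op_counts, op_fidelity):
    """Product of per-operation fidelities, one factor per operation."""
    f = 1.0
    for op, count in op_counts.items():
        f *= op_fidelity[op] ** count
    return f

op_fidelity = {"1q_gate": 0.999, "2q_gate": 0.995, "move": 0.998}
compiled = {"1q_gate": 40, "2q_gate": 12, "move": 8}  # one compiled circuit

f = circuit_fidelity(compiled, op_fidelity)
print(f)  # roughly 0.89 for these illustrative numbers
```

Under this metric a compiler trades off, e.g., extra qubit movements against saved two-qubit gates, which is why movement fidelity enters the figures of merit.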
Approximate Computing Survey, Part I: Terminology and Software & Hardware Approximation Techniques
The rapid growth of demanding applications in domains applying multimedia
processing and machine learning has marked a new era for edge and cloud
computing. These applications involve massive data and compute-intensive tasks,
and thus, typical computing paradigms in embedded systems and data centers are
stressed to meet the worldwide demand for high performance. Concurrently, the
landscape of the semiconductor field in the last 15 years has constituted power
as a first-class design concern. As a result, the community of computing
systems is forced to find alternative design approaches to facilitate
high-performance and/or power-efficient computing. Among the examined
solutions, Approximate Computing has attracted an ever-increasing interest,
with research works applying approximations across the entire traditional
computing stack, i.e., at software, hardware, and architectural levels. Over
the last decade, a plethora of approximation techniques has emerged in software
(programs, frameworks, compilers, runtimes, languages), hardware (circuits,
accelerators), and architectures (processors, memories). The current article is
Part I of our comprehensive survey on Approximate Computing; it reviews the
field's motivation, terminology, and principles, and it classifies and presents the
technical details of the state-of-the-art software and hardware approximation
techniques.
Comment: Under review at ACM Computing Surveys
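As a concrete taste of a software-level technique, the sketch below applies loop perforation, a classic program-level approximation from this literature: skipping a fraction of loop iterations trades a small accuracy loss for proportionally less work. The workload (a mean over synthetic data) is chosen purely for illustration.

```python
# Loop perforation: process only every `stride`-th element, doing ~1/stride of
# the work and accepting a bounded error on the result.

def mean_exact(xs):
    return sum(xs) / len(xs)

def mean_perforated(xs, stride=2):
    """Perforated loop: visit every `stride`-th element only."""
    sampled = xs[::stride]
    return sum(sampled) / len(sampled)

data = [float(i % 100) for i in range(10_000)]
exact = mean_exact(data)
approx = mean_perforated(data, stride=4)   # ~4x less work
print(exact, approx, abs(exact - approx))  # 49.5 48.0 1.5
```

The same skip-and-accept-error pattern underlies many of the surveyed software techniques; hardware analogues instead approximate at the circuit level.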
Compiling Quantum Circuits for Dynamically Field-Programmable Neutral Atoms Array Processors
Dynamically field-programmable qubit arrays (DPQA) have recently emerged as a
promising platform for quantum information processing. In DPQA, atomic qubits
are selectively loaded into arrays of optical traps that can be reconfigured
during the computation itself. Leveraging qubit transport and parallel,
entangling quantum operations, different pairs of qubits, even those initially
far away, can be entangled at different stages of the quantum program
execution. Such reconfigurability and non-local connectivity present new
challenges for compilation, especially in the layout synthesis step which
places and routes the qubits and schedules the gates. In this paper, we
consider a DPQA architecture that contains multiple arrays and supports 2D
array movements, representing cutting-edge experimental platforms. Within this
architecture, we discretize the state space and formulate layout synthesis as a
satisfiability modulo theories (SMT) problem, which can be solved by existing solvers
optimally in terms of circuit depth. For a set of benchmark circuits generated
by random graphs with complex connectivities, our compiler OLSQ-DPQA reduces
the number of two-qubit entangling gates on small problem instances by 1.7x
compared to optimal compilation results on a fixed planar architecture. To
further improve scalability and practicality of the method, we introduce a
greedy heuristic inspired by the iterative peeling approach in classical
integrated circuit routing. Using a hybrid approach that combines the greedy
and optimal methods, we demonstrate that our DPQA-based compiled circuits
feature reduced scaling overhead compared to a fixed grid architecture,
resulting in 5.1x fewer two-qubit gates for 90-qubit quantum circuits. These
methods enable programmable, complex quantum circuits with neutral atom quantum
computers, as well as informing both future compilers and future hardware
choices.
Comment: An extended abstract of this work was presented at the 41st International Conference on Computer-Aided Design (ICCAD '22).
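A greatly simplified reading of the iterative-peeling idea can be sketched as follows: repeatedly peel off a maximal set of two-qubit gates acting on disjoint qubits, forming one parallel execution stage per round. This toy version ignores gate ordering, qubit placement, and atom movement, all of which the actual OLSQ-DPQA compiler must handle.

```python
# Iteratively peel a gate list into stages of qubit-disjoint two-qubit gates.
# Gates are modeled as bare qubit pairs; dependencies are ignored on purpose.

def peel_stages(gates):
    """Group two-qubit gates (qubit pairs) into stages of disjoint pairs."""
    remaining = list(gates)
    stages = []
    while remaining:
        busy, stage, rest = set(), [], []
        for a, b in remaining:
            if a in busy or b in busy:
                rest.append((a, b))      # conflicts: defer to a later stage
            else:
                stage.append((a, b))     # peel into the current stage
                busy.update((a, b))
        stages.append(stage)
        remaining = rest
    return stages

gates = [(0, 1), (1, 2), (2, 3), (0, 3), (4, 5)]
print(peel_stages(gates))  # [[(0, 1), (2, 3), (4, 5)], [(1, 2), (0, 3)]]
```

Each stage corresponds to one round of parallel entangling operations, the resource that DPQA's reconfigurable traps make cheap.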