Modern computing: Vision and challenges
Over the past six decades, the computing systems field has undergone significant transformations, profoundly impacting society through developments such as the Internet and the commodification of computing. Underpinned by technological advancements, computer systems, far from being static, have continuously evolved and adapted to fill multifaceted societal niches. This has led to new paradigms such as cloud, fog, and edge computing and the Internet of Things (IoT), which offer fresh economic and creative opportunities. Nevertheless, this rapid change poses complex research challenges, especially in maximizing potential and enhancing functionality. To maintain an economical level of performance that meets ever-tighter requirements, one must understand what drives the emergence and expansion of new models, and how contemporary challenges differ from past ones. To that end, this article investigates and assesses the factors influencing the evolution of computing systems, covering established systems and architectures as well as newer developments, such as serverless computing, quantum computing, and on-device AI on edge devices. Tracing this technological trajectory reveals trends that include the rapid obsolescence of frameworks due to business and technical constraints, a move towards specialized systems and models, and varying approaches to centralized and decentralized control. This comprehensive review of modern computing systems looks ahead to the future of research in the field, highlighting key challenges and emerging trends, and underscoring their importance in cost-effectively driving technological progress.
SmartChoices: Augmenting Software with Learned Implementations
We are living in a golden age of machine learning. Powerful models are being trained to perform many tasks far better than is possible using traditional software engineering approaches alone. However, developing and deploying those models in existing software systems remains difficult. In this paper we present SmartChoices, a novel approach to incorporating machine learning into mature software stacks easily, safely, and effectively. We explain the overall design philosophy and present case studies using SmartChoices within large-scale industrial systems.
Auditable and performant Byzantine consensus for permissioned ledgers
Permissioned ledgers allow users to execute transactions against a data store, and retain proof of their execution in a replicated ledger. Each replica verifies the transactions’ execution and ensures that, in perpetuity, a committed transaction cannot be removed from the ledger. Unfortunately, this is not guaranteed by today’s permissioned ledgers, which can be re-written if an arbitrary number of replicas collude. In addition, the transaction throughput of permissioned ledgers is low because they do not take advantage of multi-core CPUs and hardware accelerators, which hampers real-world deployments.
This thesis explores how permissioned ledgers and their consensus protocols can be made auditable in perpetuity, even when all replicas collude and re-write the ledger. It also addresses how Byzantine consensus protocols can be changed to increase the execution throughput of complex transactions. This thesis makes the following contributions:
1. Always auditable Byzantine consensus protocols. We present a permissioned ledger system that can assign blame to individual replicas regardless of how many of them misbehave. This is achieved by signing and storing consensus protocol messages in the ledger and providing clients with signed, universally verifiable receipts.
2. Performant transaction execution with hardware accelerators. Next, we describe a cloud-based machine learning (ML) inference service that provides strong integrity guarantees while staying compatible with current inference APIs. We change the Byzantine consensus protocol to execute ML inference on GPUs, optimizing the throughput and latency of inference computation.
3. Parallel transaction execution on multi-core CPUs. Finally, we introduce a permissioned ledger that executes transactions in parallel on multi-core CPUs. We separate the execution of transactions between the primary and backup replicas: the primary replica executes transactions on multiple CPU cores and creates a dependency graph of the transactions, which the backup replicas use to execute independent transactions in parallel, as the sketch below illustrates.
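A minimal sketch of how a backup replica might replay transactions from such a dependency graph; the graph encoding and function names here are illustrative assumptions, not the thesis's actual interfaces:

```python
import concurrent.futures as cf

def replay(graph, execute):
    """Replay transactions while respecting a dependency graph.

    graph: dict mapping each transaction id to the set of
    transaction ids it depends on (illustrative format).
    execute: callable that applies a single transaction.
    """
    done = set()
    pending = dict(graph)
    with cf.ThreadPoolExecutor() as pool:
        while pending:
            # Transactions whose dependencies have all committed are
            # mutually independent and can be applied concurrently.
            ready = [t for t, deps in pending.items() if deps <= done]
            if not ready:
                raise ValueError("dependency cycle in graph")
            for t in ready:
                del pending[t]
            list(pool.map(execute, ready))
            done.update(ready)

# Example: t3 depends on t1 and t2, which are independent of each other.
replay({"t1": set(), "t2": set(), "t3": {"t1", "t2"}},
       lambda t: print("applied", t))
```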
Potential of Machine Learning/Artificial Intelligence (ML/AI) for verifying configurations of 5G multi-Radio Access Technology (RAT) base stations
The enhancements in mobile networks from 1G to 5G have greatly increased data transmission reliability and speed; however, concerns with 5G must still be addressed. As system performance and reliability improve, ML and AI integration in products and services becomes more common. Integration teams in cellular network equipment development test devices end to end to ensure that hardware and software components function correctly. Radio unit integration is typically the first integration phase, where the radio is tested independently, without additional network components such as the BBU and UE.

The 5G architecture and the technology it uses are explained further. The architecture defined by 3GPP for 5G differs from previous generations, using Network Functions (NFs) instead of network entities. This service-based architecture offers NF reusability to reduce costs and modularity, allowing the best vendor options for customer radio products. 5G introduced the O-RAN concept to decompose the RAN architecture, allowing for increased speed, flexibility, and innovation; NG-RAN provided this solution to speed up the development and implementation of 5G. The O-RAN concept aims to improve the efficiency of the RAN by breaking it down into components, allowing for more agility and customization. The four protocols, the eCPRI interface, and the fronthaul functionalities that NG-RAN follows are described further. The significance of NR is also explained, along with its benefits, which include high data rates, lower latency, improved spectral efficiency, increased network flexibility, and improved energy efficiency. The timeline for 5G development is provided along with the different 3GPP releases. Stand-alone and non-stand-alone architectures are integral to developing 5G; hence, they are also defined with illustrations. The two frequency bands that NR utilizes, FR1 and FR2, are described further: FR1, the sub-6 GHz band, contains frequencies of low and high values within that range, while FR2 contains high frequencies above 6 GHz and is commonly known as the mmWave band.

Data collection for implementing the ML approaches covers the test setup, data collection, data description, and data visualization parts of the thesis work. The test PC runs tests, executes test cases using test libraries, and collects data from various logs to analyze the system’s performance. The logs contain information about the test results, which can be used to identify issues and evaluate the system’s performance. The data was initially present in JSON files, from which it was extracted with a Python script and fed into an Excel sheet for further analysis. The data description explains the parameters used when training the models. A Jupyter notebook has been used for visualizing the data with graphs.

The ML techniques used for analyzing the data are then described. Three methods are used in total, all of which fall under supervised learning: random forest, XGBoost, and LSTM. These three models form the basis of the ML techniques applied in the thesis. The results and discussion section explains the outcomes of the ML models and discusses how the thesis will be used in the future.

The results cover the parameters to which the ML models are applied. SINR, noise power, rxPower, and RSSI are the metrics being monitored. These parameters exhibit variance, which is essential in evaluating the quality of the product test setup, the quality of the software being tested, and the state of the test environment. The discussion section explains why these parameters are chosen, which ML model is most appropriate for the data being analyzed, and what the next steps are in implementation.
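A minimal sketch of the pipeline described above, extracting metrics from JSON logs into a table, mirroring the Excel step, and fitting one of the three supervised models (a random forest); the file name, column names, and the choice of RSSI as the prediction target are illustrative assumptions, not the thesis's actual schema:

```python
import json
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Load test-log records; "logs.json" and the field names below are
# hypothetical placeholders for the actual log format.
with open("logs.json") as f:
    records = json.load(f)

df = pd.DataFrame(records)  # assumed columns: sinr, noise_power, rx_power, rssi
df.to_excel("metrics.xlsx", index=False)  # the Excel-sheet step

X = df[["sinr", "noise_power", "rx_power"]]
y = df["rssi"]  # e.g. predict RSSI from the other monitored radio metrics
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out tests:", model.score(X_test, y_test))
```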
Advances and Applications of DSmT for Information Fusion. Collected Works, Volume 5
This fifth volume on Advances and Applications of DSmT for Information Fusion collects theoretical and applied contributions of researchers working in different fields of application and in mathematics, and is available in open access. The contributions collected in this volume have either been published or presented in international conferences, seminars, workshops, and journals since the fourth volume was disseminated in 2015, or they are new. The contributions in each part of this volume are ordered chronologically.
The first part of this book presents some theoretical advances on DSmT, dealing mainly with modified Proportional Conflict Redistribution (PCR) rules of combination with degree of intersection, coarsening techniques, interval calculus for PCR thanks to set inversion via interval analysis (SIVIA), rough set classifiers, canonical decomposition of dichotomous belief functions, fast PCR fusion, fast inter-criteria analysis with PCR, and improved PCR5 and PCR6 rules preserving the (quasi-)neutrality of (quasi-)vacuous belief assignment in the fusion of sources of evidence, with their Matlab codes.
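For context, a minimal sketch of the classical two-source PCR5 rule that these improved variants build on: the conjunctive consensus is computed first, and each partial conflicting mass product is redistributed back to the two conflicting focal elements in proportion to their masses. The dictionary-of-frozensets encoding is an illustrative assumption:

```python
from itertools import product

def pcr5(m1, m2):
    """Two-source PCR5: conjunctive rule plus proportional
    redistribution of each partial conflict (illustrative sketch)."""
    fused = {}
    for (x, a), (y, b) in product(m1.items(), m2.items()):
        inter = x & y
        if inter:  # non-empty intersection: conjunctive consensus
            fused[inter] = fused.get(inter, 0.0) + a * b
        elif a + b > 0:  # conflicting pair: redistribute a*b to x and y
            fused[x] = fused.get(x, 0.0) + a * a * b / (a + b)
            fused[y] = fused.get(y, 0.0) + a * b * b / (a + b)
    return fused

# Example over the frame {A, B}: masses remain normalized after fusion.
A, B = frozenset("A"), frozenset("B")
print(pcr5({A: 0.6, B: 0.4}, {A: 0.2, B: 0.8}))
```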
Because more applications of DSmT have emerged in the years since the appearance of the fourth DSmT book in 2015, the second part of this volume covers selected applications of DSmT, mainly in building change detection, object recognition, quality of data association in tracking, perception in robotics, risk assessment for torrent protection and multi-criteria decision-making, multi-modal image fusion, coarsening techniques, recommender systems, levee characterization and assessment, human heading perception, trust assessment, robotics, biometrics, failure detection, GPS systems, inter-criteria analysis, group decision, human activity recognition, storm prediction, data association for autonomous vehicles, identification of maritime vessels, fusion of support vector machines (SVM), the Silx-Furtif RUST code library for information fusion including PCR rules, and networks for ship classification.
Finally, the third part presents interesting contributions related to belief functions in general, published or presented over the years since 2015. These contributions are related to decision-making under uncertainty, belief approximations, probability transformations, new distances between belief functions, non-classical multi-criteria decision-making problems with belief functions, generalization of Bayes theorem, image processing, data association, entropy and cross-entropy measures, fuzzy evidence numbers, negators of belief mass, human activity recognition, information fusion for breast cancer therapy, and imbalanced data classification, as well as hybrid techniques mixing deep learning with belief functions.
The 2023 wearable photoplethysmography roadmap
Photoplethysmography is a key sensing technology used in wearable devices such as smartwatches and fitness trackers. Currently, photoplethysmography sensors are used to monitor physiological parameters including heart rate and heart rhythm, and to track activities like sleep and exercise. Yet wearable photoplethysmography has the potential to provide much more information on health and wellbeing, which could inform clinical decision making. This Roadmap outlines directions for research and development to realise the full potential of wearable photoplethysmography. Experts discuss key topics within the areas of sensor design, signal processing, clinical applications, and research directions. Their perspectives provide valuable guidance to researchers developing wearable photoplethysmography technology.
Approximate Computing Survey, Part I: Terminology and Software & Hardware Approximation Techniques
The rapid growth of demanding applications in domains applying multimedia processing and machine learning has marked a new era for edge and cloud computing. These applications involve massive data and compute-intensive tasks, and thus, typical computing paradigms in embedded systems and data centers are stressed to meet the worldwide demand for high performance. Concurrently, the landscape of the semiconductor field over the last 15 years has established power as a first-class design concern. As a result, the computing systems community is forced to find alternative design approaches that facilitate high-performance and/or power-efficient computing. Among the examined solutions, Approximate Computing has attracted ever-increasing interest, with research works applying approximations across the entire traditional computing stack, i.e., at the software, hardware, and architectural levels. Over the last decade, a plethora of approximation techniques has emerged in software (programs, frameworks, compilers, runtimes, languages), hardware (circuits, accelerators), and architectures (processors, memories). The current article is Part I of our comprehensive survey on Approximate Computing: it reviews its motivation, terminology, and principles, and classifies and presents the technical details of state-of-the-art software and hardware approximation techniques.
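For illustration, one classic software-level approximation technique of the kind such surveys classify is loop perforation; the sketch below is a hypothetical example assuming the reduction being computed tolerates sampling, not an excerpt from the survey:

```python
def mean_exact(xs):
    return sum(xs) / len(xs)

def mean_perforated(xs, skip=4):
    # Loop perforation: visit only every `skip`-th element,
    # trading a bounded accuracy loss for roughly skip-fold
    # less work on large inputs.
    sampled = xs[::skip]
    return sum(sampled) / len(sampled)

data = [float(i % 97) for i in range(1_000_000)]
print(mean_exact(data), mean_perforated(data))
```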