
    Enhancement of MARSALA Random Access with Coding Schemes, Power Distributions and Maximum Ratio Combining

    Several random access (RA) techniques have been proposed recently for the satellite return link. Their main objective is to resolve packet collisions in order to enhance the limited throughput of traditional RA schemes. In this context, Multi-Replica Decoding using Correlation-based Localisation (MARSALA) has been introduced and has shown good performance with the DVB-RCS2 coding scheme and equi-powered transmissions. However, it has been shown in the literature that alternative coding schemes and packet power distributions can have a positive impact on RA performance. Therefore, in this paper, we investigate the behaviour of MARSALA with various coding schemes and packet power distributions, and then propose a configuration for optimal performance. This paper also enhances the MARSALA RA scheme by adding Maximum Ratio Combining (MRC) to optimize the combination of replicas, and studies the impact on throughput. We compare two different MRC techniques and evaluate, via simulations, the gain achieved using MRC with different coding schemes and unbalanced packet powers. The simulation results demonstrate that the proposed enhancements to MARSALA yield a substantial performance gain, i.e. in the throughput achieved for a target Packet Loss Ratio (PLR).
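
    As a hedged illustration of the combining step, the sketch below applies textbook maximum ratio combining to located packet replicas, weighting each replica by its channel gain and noise variance. The flat-fading model, the variable names, and the QPSK example are assumptions made for illustration, not the receiver implementation evaluated in the paper.

        # A minimal MRC sketch, assuming each replica k is received as
        # y_k = h_k * x + n_k with known channel gain h_k and noise
        # variance sigma2_k (illustrative assumptions, not the paper's model).
        import numpy as np

        def mrc_combine(replicas, gains, noise_vars):
            """Combine replicas with weights conj(h_k) / sigma2_k."""
            z = np.zeros_like(replicas[0], dtype=complex)
            for y, h, s2 in zip(replicas, gains, noise_vars):
                z += (np.conj(h) / s2) * y
            # Normalize so the combined symbol estimate is unbiased.
            return z / sum(abs(h) ** 2 / s2 for h, s2 in zip(gains, noise_vars))

        # Example: three replicas of a 64-symbol QPSK packet.
        rng = np.random.default_rng(0)
        x = (rng.choice([1, -1], 64) + 1j * rng.choice([1, -1], 64)) / np.sqrt(2)
        gains, noise_vars = [0.9, 1.2, 0.7], [0.1, 0.05, 0.2]
        replicas = [h * x + np.sqrt(s2 / 2) * (rng.standard_normal(64)
                    + 1j * rng.standard_normal(64))
                    for h, s2 in zip(gains, noise_vars)]
        print(np.mean(np.abs(mrc_combine(replicas, gains, noise_vars) - x) ** 2))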

    Design of Coded Slotted ALOHA with Interference Cancellation Errors

    Coded Slotted ALOHA (CSA) is a random access scheme based on the application of packet erasure correcting codes to transmitted packets and the use of successive interference cancellation at the receiver. CSA has been widely studied, and a common assumption is that interference cancellation can always be applied perfectly. In this paper, we study the design of the CSA protocol while accounting for a non-zero probability of error due to imperfect interference cancellation (IC). A classical method to evaluate the performance of such protocols is density evolution, which originates from coding theory and which we adapt to our assumptions. Analyzing the convergence of density evolution in asymptotic conditions, we derive the optimal parameters of CSA, i.e., the set of code selection probabilities of users that maximizes the channel load. A new parameter is introduced to model the packet loss rate of the system, which is non-zero due to potential IC errors. Multi-packet reception (MPR) and the performance of 2-MPR are also studied. We investigate the trade-off between optimal load and packet loss rate, which sheds light on new optimal distributions that outperform known ones. Finally, we show that our asymptotic analytical results are consistent with simulations obtained on a finite number of slots.
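
    To make the density-evolution idea concrete, the sketch below iterates the standard asymptotic recursion for the repetition-based (IRSA-style) special case of CSA, with a simple assumed model in which each IC operation fails independently with probability delta. The degree distribution, load, and failure model are illustrative assumptions, not the optimized distributions derived in the paper.

        # Density-evolution sketch for repetition-based random access with
        # imperfect IC, assuming Poisson slot degrees and an independent IC
        # failure probability delta (an illustrative model, not the paper's).
        import numpy as np

        def density_evolution(Lambda, G, delta=0.0, iters=1000):
            """Residual edge erasure probability after `iters` rounds.

            Lambda: dict mapping repetition degree d -> probability Lambda_d.
            G:      channel load in users per slot.
            delta:  probability that a single IC operation fails.
            """
            avg_deg = sum(d * p for d, p in Lambda.items())  # Lambda'(1)
            q = 1.0  # user-side edge erasure probability
            for _ in range(iters):
                # Slot side: an edge is resolved only if all other edges in
                # its slot are already resolved and IC does not fail.
                p = 1.0 - (1.0 - delta) * np.exp(-G * avg_deg * q)
                # User side: edge-perspective degree distribution lambda(x).
                q = sum(d * w / avg_deg * p ** (d - 1) for d, w in Lambda.items())
            return q

        # Example: every user sends 3 replicas, load G = 0.8 users/slot.
        print(density_evolution({3: 1.0}, G=0.8, delta=0.0))   # converges near 0
        print(density_evolution({3: 1.0}, G=0.8, delta=0.02))  # error floor from IC failures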

    Recent Application in Biometrics

    In recent years, a number of recognition and authentication systems based on biometric measurements have been proposed. Algorithms and sensors have been developed to acquire and process many different biometric traits. Moreover, biometric technology is being used in novel ways, with potential commercial and practical implications for our daily activities. The key objective of the book is to provide a collection of comprehensive references on recent theoretical developments as well as novel applications in biometrics. The topics covered in this book reflect both aspects of development well. They include biometric sample quality, privacy-preserving and cancelable biometrics, contactless biometrics, novel and unconventional biometrics, and the technical challenges of implementing the technology in portable devices. The book consists of 15 chapters. It is divided into four sections, namely biometric applications on mobile platforms, cancelable biometrics, biometric encryption, and other applications. The book was reviewed by editors Dr. Jucheng Yang and Dr. Norman Poh. We deeply appreciate the efforts of our guest editors: Dr. Girija Chetty, Dr. Loris Nanni, Dr. Jianjiang Feng, Dr. Dongsun Park and Dr. Sook Yoon, as well as a number of anonymous reviewers.

    The Third NASA Goddard Conference on Mass Storage Systems and Technologies

    This report contains copies of nearly all of the technical papers and viewgraphs presented at the Goddard Conference on Mass Storage Systems and Technologies held in October 1993. The conference served as an informational exchange forum for topics primarily relating to the ingestion and management of massive amounts of data and the attendant problems involved. Discussion topics include the necessary use of computers in the solution of today's highly complex problems; the need for greatly increased storage densities in both optical and magnetic recording media; currently popular storage media and magnetic media storage risk factors; and data archiving standards, including a talk on the current status of the IEEE Storage Systems Reference Model (RM). Additional topics addressed include system performance, data storage system concepts, communications technologies, data distribution systems, data compression, and error detection and correction.

    Physics-guided machine learning approaches to predict stability properties of fusion plasmas

    Disruption prediction and avoidance is a critical need for next-step tokamaks such as the International Thermonuclear Experimental Reactor (ITER). The Disruption Event Characterization and Forecasting Code (DECAF) is a framework used to fully determine chains of events, such as magnetohydrodynamic (MHD) instabilities, that can lead to disruptions. In this thesis, several interpretable and physics-guided machine learning (ML) techniques to forecast the onset of resistive wall modes (RWMs) in spherical tokamaks have been developed and incorporated into DECAF. The new DECAF model operates in a multi-step fashion, first analysing the ideal stability properties and then including kinetic effects on RWM stability. First, a random forest regressor (RFR) and a neural network (NN) ensemble are employed to reproduce the change in plasma potential energy without wall effects, δW_no-wall, computed by the DCON ideal stability code for a large database of equilibria from the National Spherical Torus Experiment (NSTX). Moreover, outputs from the ML models are reduced and manipulated to obtain an estimate of the no-wall β limit, β_no-wall (where β is the ratio of plasma pressure to magnetic confinement field pressure). This exercise shows that the ML models are able to improve on the previous DECAF characterisation of stable and unstable equilibria, achieving accuracies of 85-88%, depending on the chosen level of interpretability. The physics guidance imposed on the NN objective function allowed for transferability outside the training domain, as demonstrated by testing the algorithm on discharges from the Mega Ampere Spherical Tokamak (MAST). The estimated β_no-wall and other important plasma characteristics, such as rotation, collisionality and low-frequency MHD activity, are used as input to a customised random forest (RF) classifier to predict RWM stability for a set of human-labeled NSTX discharges. The proposed approach is real-time compatible and outperforms classical cost-sensitive methods by achieving a true positive rate (TPR) of up to 90%, while also resulting in a threefold reduction in training time. Finally, a model-agnostic method based on counterfactual explanations is developed in order to further understand the model's predictions. Good agreement is found between the model's decisions and the rules imposed by physics expectations. These results also motivate the use of counterfactuals to simulate real-time control by generating the β_N levels that would keep the RWM stable.
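
    As a hedged sketch of the regress-then-threshold step described above, the snippet below trains a random forest regressor on synthetic equilibrium features and uses the sign of the predicted energy to label each point stable or unstable. The features, the surrogate formula for δW_no-wall, and all numerical values are invented stand-ins for the NSTX/DCON database, which is not reproduced here.

        # Sketch: random forest regression of a stability energy, then a
        # sign threshold for stable/unstable labels. The features and the
        # surrogate target below are hypothetical, not the DCON physics.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(42)
        n = 2000
        # Illustrative features: normalized beta, internal inductance, pressure peaking.
        X = rng.uniform([1.0, 0.4, 1.5], [6.0, 1.2, 3.5], size=(n, 3))
        # Hypothetical smooth surrogate for delta_W_no-wall plus noise.
        dW = (2.0 - 0.5 * X[:, 0] + 1.5 * X[:, 1] - 0.3 * X[:, 2]
              + 0.1 * rng.standard_normal(n))

        rfr = RandomForestRegressor(n_estimators=200, random_state=0)
        rfr.fit(X[:1500], dW[:1500])
        pred = rfr.predict(X[1500:])

        # delta_W < 0 marks ideally unstable equilibria.
        accuracy = np.mean((pred < 0) == (dW[1500:] < 0))
        print(f"stable/unstable agreement on held-out points: {accuracy:.2f}")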

    Smart Energy Management for Smart Grids

    This book is a contribution from the authors to share solutions for a better and more sustainable power grid. Renewable energy, smart grid security, and smart energy management are the main topics discussed in this book.

    Radio frequency communication and fault detection for railway signalling

    The continuous and swift progression of both wireless and wired communication technologies in today's world owes its success to the foundational systems established earlier. These systems serve as the building blocks that enable services to be enhanced to meet evolving requirements. Studying the vulnerabilities of previously designed systems and their current usage leads to the development of new communication technologies that replace the old ones, such as GSM-R in the railway field. Current industrial research has a specific focus on finding an appropriate telecommunication solution for railway communications to replace the GSM-R standard, which will be switched off in the coming years. Various standardization organizations are currently exploring and designing a radio-frequency-based standard solution, FRMCS (Future Railway Mobile Communication System), to substitute the current GSM-R. Against this background, the primary strategic objective of the research is to assess the feasibility of leveraging current public network technologies, such as LTE, to cater to mission- and safety-critical communication on low-density lines. The research aims to identify the constraints, define a service level agreement with telecom operators, and establish the implementations necessary to make the system as reliable as possible over an open, public network, while considering safety and cybersecurity aspects. The LTE infrastructure would be used both to transmit the vital data of a railway communication system and to gather and forward all the field measurements to the control room for maintenance purposes. Given the significance of maintenance activities in the railway sector, the ongoing research includes the implementation of a machine learning algorithm to detect railway equipment faults, reducing analysis time and the human errors associated with the large volume of measurements from the field.
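
    As an illustration of the kind of fault-detection step mentioned above, the sketch below flags anomalous trackside measurements with an unsupervised isolation forest. The choice of model, the two features, and all numerical values are assumptions made for illustration; the abstract does not specify which algorithm the thesis implements.

        # Sketch: unsupervised anomaly detection on field measurements,
        # assuming two hypothetical features (supply voltage, coil current).
        import numpy as np
        from sklearn.ensemble import IsolationForest

        rng = np.random.default_rng(1)
        normal = rng.normal([230.0, 1.2], [2.0, 0.05], size=(500, 2))
        faulty = rng.normal([210.0, 0.6], [5.0, 0.20], size=(10, 2))

        # Fit on routine measurements; flag out-of-family readings for review.
        detector = IsolationForest(contamination=0.02, random_state=0).fit(normal)
        flags = detector.predict(np.vstack([normal[:5], faulty]))  # +1 normal, -1 fault
        print(flags)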

    Advances in Forensic Genetics

    The book has 25 articles about the status and new directions in forensic genetics. Approximately half of the articles are invited reviews, and the remaining articles deal with new forensic genetic methods. The articles cover aspects such as sampling DNA evidence at the scene of a crime; DNA transfer when handling evidence material and how to avoid DNA contamination of items, laboratory, etc.; identification of body fluids and tissues with RNA; forensic microbiome analysis with molecular biology methods as a supplement to the examination of human DNA; forensic DNA phenotyping for predicting visible traits such as eye, hair, and skin colour; new ancestry informative DNA markers for estimating ethnic origin; new genetic genealogy methods for identifying distant relatives that cannot be identified with conventional forensic DNA typing; sensitive DNA methods, including single-cell DNA analysis and other highly specialised methods to examine ancient DNA from unidentified victims of war; forensic animal genetics; genetics of visible traits in dogs; statistical tools for interpreting forensic DNA analyses, including the most used IT tools for forensic STR-typing and DNA sequencing; haploid markers (Y-chromosome and mitochondrial DNA); inference of ethnic origin; a comprehensive logical framework for the interpretation of forensic genetic DNA data; and an overview of the ethical aspects of modern forensic genetics.

    xxAI - Beyond Explainable AI

    This is an open access book. Statistical machine learning (ML) has triggered a renaissance of artificial intelligence (AI). While the most successful ML models, including Deep Neural Networks (DNNs), have achieved ever better predictive performance, they have become increasingly complex, at the expense of human interpretability (correlation vs. causality). The field of explainable AI (xAI) has emerged with the goal of creating tools and models that are both predictive and interpretable and understandable for humans. Explainable AI is receiving huge interest in the machine learning and AI research communities, across academia, industry, and government, and there is now an excellent opportunity to push towards successful explainable AI applications. This volume will help the research community to accelerate this process, to promote a more systematic use of explainable AI to improve models in diverse applications, and ultimately to better understand how current explainable AI methods need to be improved and what kind of theory of explainable AI is needed. After overviews of current methods and challenges, the editors include chapters that describe new developments in explainable AI. The contributions are from leading researchers in the field, drawn from both academia and industry, and many of the chapters take a clear interdisciplinary approach to problem-solving. The concepts discussed include explainability, causability, and AI interfaces with humans, and the applications include image processing, natural language, law, fairness, and climate science.