LIPIcs, Volume 251, ITCS 2023, Complete Volume
Analog Photonics Computing for Information Processing, Inference and Optimisation
This review presents an overview of the current state-of-the-art in photonics
computing, which leverages photons, photons coupled with matter, and
optics-related technologies for effective and efficient computational purposes.
It covers the history and development of photonics computing and modern
analogue computing platforms and architectures, focusing on optimization tasks
and neural network implementations. The authors examine special-purpose
optimizers, mathematical descriptions of photonics optimizers, and their
various interconnections. Disparate applications are discussed, including
direct encoding, logistics, finance, phase retrieval, machine learning, neural
networks, probabilistic graphical models, and image processing, among many
others. The main directions of technological advancement and associated
challenges in photonics computing are explored, along with an assessment of its
efficiency. Finally, the paper discusses future prospects, including the field of optical quantum computing, providing insights into the potential applications of this technology.
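Mathematically, many of the special-purpose photonic optimizers covered in such reviews are described as minimizers of an Ising Hamiltonian. The following is a minimal soft-spin relaxation sketch on a random Ising instance, loosely mimicking the amplitude dynamics of a coherent Ising machine; the instance, dynamics, and all parameters are illustrative assumptions, not a model of any specific device.

```python
import numpy as np

rng = np.random.default_rng(1)

# Random symmetric couplings for a hypothetical 20-spin Ising instance.
n = 20
J = rng.normal(size=(n, n))
J = (J + J.T) / 2.0
np.fill_diagonal(J, 0.0)

def ising_energy(s):
    """H(s) = -(1/2) * sum_ij J_ij s_i s_j, with spins s_i in {-1, +1}."""
    return -0.5 * s @ J @ s

# Soft-spin relaxation: continuous amplitudes evolve under coupling feedback
# with a cubic saturation term, loosely analogous to the amplitude dynamics
# of a coherent Ising machine; final spins are the signs of the amplitudes.
a = rng.normal(scale=0.01, size=n)      # small random initial amplitudes
for _ in range(2000):
    a += 0.01 * (J @ a - a**3)          # feedback + saturation (illustrative)
    a = np.clip(a, -1.5, 1.5)

print("final Ising energy:", ising_energy(np.sign(a)))
```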
Causal Sampling, Compressing, and Channel Coding of Streaming Data
With the emergence of the Internet of Things, communication systems, such as those employed in distributed control and tracking scenarios, are becoming increasingly dynamic, interactive, and delay-sensitive. The data in such real-time systems arrive at the encoder progressively in a streaming fashion. An intriguing question is: what codes can transmit streaming data with both high reliability and low latency? Classical non-causal (block) encoding schemes can transmit data reliably but under the assumption that the encoder knows the entire data block before the transmission. While this is a realistic assumption in delay-tolerant systems, it is ill-suited to real-time systems due to the delay introduced by collecting data into a block. This thesis studies causal encoding: the encoder transmits information based on the causally received data while the data is still streaming in and immediately incorporates the newly received data into a continuing transmission on the fly.
This thesis investigates causal encoding of streaming data in three scenarios: causal sampling, causal lossy compressing, and causal joint source-channel coding (JSCC). In the causal sampling scenario, a sampler observes a continuous-time source process and causally decides when to transmit real-valued samples of it under a constraint on the average number of samples per second; an estimator uses the causally received samples to approximate the source process in real time. We propose a causal sampling policy that achieves the best tradeoff between the sampling frequency and the end-to-end real-time estimation distortion for a class of continuous Markov processes. In the causal lossy compressing scenario, the sampling frequency constraint in the causal sampling scenario is replaced by a rate constraint on the average number of bits per second. We propose a causal code that achieves the best causal distortion-rate tradeoff for the same class of processes. In the causal JSCC scenario, the noiseless channel and the continuous-time process in the previous scenarios are replaced by a discrete memoryless channel with feedback and a sequence of streaming symbols, respectively. We propose a causal joint source-channel code that achieves the maximum exponentially decaying rate of the error probability compatible with a given rate. Remarkably, the fundamental limits achieved by our causal codes in the causal lossy compressing and causal JSCC scenarios are no worse than those achieved by the best non-causal codes. In addition to deriving the fundamental limits and presenting the causal codes that achieve the limits, we also show that our codes apply to control systems, are resilient to system deficiencies such as channel delay and noise, and have low complexity.
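To make the causal sampling scenario concrete, here is a minimal simulation sketch of a threshold-type causal sampling policy for a Wiener process, in which a fresh sample is transmitted whenever the real-time estimation error exceeds a threshold; between samples the estimator simply holds the last received value. The source process and the threshold value are illustrative assumptions, not the thesis's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a Wiener process on a fine time grid (illustrative source).
dt, T = 1e-3, 10.0
n = int(T / dt)
x = np.cumsum(rng.normal(0.0, np.sqrt(dt), n))

# Threshold-type causal sampling: transmit whenever |X_t - last sample|
# exceeds a threshold. Holding the last sample is the real-time MMSE
# estimate for the Wiener process, since its increments are zero-mean.
threshold = 0.5         # hypothetical; in theory it is tuned so that the
                        # average sampling frequency meets the constraint
last_sample = 0.0
n_samples = 0
sq_err = 0.0
for t in range(n):
    if abs(x[t] - last_sample) >= threshold:
        last_sample = x[t]          # causal: uses only data seen so far
        n_samples += 1
    sq_err += (x[t] - last_sample) ** 2 * dt

print(f"samples/sec: {n_samples / T:.2f}, time-averaged MSE: {sq_err / T:.4f}")
```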
Discovering Causal Relations and Equations from Data
Physics is a field of science that has traditionally used the scientific
method to answer questions about why natural phenomena occur and to make
testable models that explain the phenomena. Discovering equations, laws and
principles that are invariant, robust and causal explanations of the world has
been fundamental in physical sciences throughout the centuries. Discoveries
emerge from observing the world and, when possible, performing interventional
studies in the system under study. With the advent of big data and the use of
data-driven methods, causal and equation discovery fields have grown and made
progress in computer science, physics, statistics, philosophy, and many applied
fields. All these domains are intertwined and can be used to discover causal
relations, physical laws, and equations from observational data. This paper
reviews the concepts, methods, and relevant works on causal and equation
discovery in the broad field of Physics and outlines the most important
challenges and promising future lines of research. We also provide a taxonomy
for observational causal and equation discovery, point out connections, and
showcase a complete set of case studies in Earth and climate sciences, fluid
dynamics and mechanics, and the neurosciences. This review demonstrates that
discovering fundamental laws and causal relations by observing natural
phenomena is being revolutionised with the efficient exploitation of
observational data, modern machine learning algorithms and the interaction with
domain knowledge. Exciting times are ahead with many challenges and
opportunities to improve our understanding of complex systems.
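One representative data-driven equation-discovery technique in this literature is sparse regression over a library of candidate terms, as in SINDy. The sketch below recovers a damped harmonic oscillator, x' = v, v' = -x - 0.1v, from simulated data via sequentially thresholded least squares; the system, library, and threshold are illustrative choices, not a specific case study from the paper.

```python
import numpy as np

# Simulate a damped harmonic oscillator: x' = v, v' = -x - 0.1*v
dt, T = 1e-3, 20.0
n = int(T / dt)
X = np.zeros((n, 2))
X[0] = [1.0, 0.0]
for t in range(n - 1):
    x, v = X[t]
    X[t + 1] = [x + dt * v, v + dt * (-x - 0.1 * v)]

# Estimate time derivatives by finite differences.
dX = np.gradient(X, dt, axis=0)

# Candidate library of terms: [1, x, v, x^2, x*v, v^2]
x, v = X[:, 0], X[:, 1]
library = np.column_stack([np.ones(n), x, v, x**2, x * v, v**2])
names = ["1", "x", "v", "x^2", "x*v", "v^2"]

# Sequentially thresholded least squares: fit, zero small coefficients, refit.
coef = np.linalg.lstsq(library, dX, rcond=None)[0]
for _ in range(10):
    small = np.abs(coef) < 0.05          # illustrative sparsity threshold
    coef[small] = 0.0
    for j in range(2):                   # refit each equation on active terms
        active = ~small[:, j]
        if active.any():
            coef[active, j] = np.linalg.lstsq(
                library[:, active], dX[:, j], rcond=None)[0]

for j, lhs in enumerate(["x'", "v'"]):
    terms = [f"{coef[i, j]:+.2f}*{names[i]}" for i in range(6) if coef[i, j]]
    print(lhs, "=", " ".join(terms))
```

The key design choice here is the candidate library: discovery can only succeed if the governing terms are expressible within it.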
Undetectable GPS-Spoofing Attack on Time Series Phasor Measurement Unit Data
The Phasor Measurement Unit (PMU) is an important metering device for the smart grid. Like any other Intelligent Electronic Device (IED), PMUs are prone to various types of cyberattacks. However, one form of attack is unique to the PMU: the GPS-spoofing attack, in which the time and/or the one pulse-per-second (1 PPS) signal that enables time synchronization is modified and the measurements are computed using the modified time reference. This article exploits the vulnerability of PMUs in their GPS time synchronization signal. First, the paper proposes an undetectable gradual GPS-spoofing attack with small incremental angle deviations over time. The angle deviation changes the power flow calculated through the branches of the grid without alerting the System Operator (SO) during off-peak hours. The attacker relentlessly instigates slow incremental variations in the calculated power flow over a long period of time, with the goal of making the calculated flow breach the MVA limit of the branch at peak hours. The attack is applied by solving a convex optimization problem at regular time intervals, so that after a specific time period the attack vector incurs a significant change in the angle measurements transmitted by the PMU. Second, while the attack modifies the angle measurements, it keeps the attack vector below the attack detection threshold, ensuring that the phase angle variation remains undetectable. The proposed attack model is tested against Weighted Least Squares Error (WLSE), Kalman filtering, and Hankel-matrix-based GPS-spoofing attack detection models. Finally, we propose a detection method based on the gradient of a low-rank approximation of the Hankel matrix to detect such relentless, small, incremental GPS-spoofing attacks.
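To illustrate the attack mechanism, the sketch below shows how a relentless sequence of tiny spoofed angle increments eventually drives the computed flow past a branch MVA limit, using the standard lossless-line power-flow relation P = (V1*V2/X)*sin(theta1 - theta2). All numeric values are hypothetical and not taken from the paper.

```python
import numpy as np

# Illustrative (hypothetical) per-unit values, not from the paper.
V1, V2 = 1.0, 1.0     # bus voltage magnitudes (p.u.)
X = 0.1               # line reactance (p.u.)
theta_true = 0.15     # true angle difference across the line (rad)
mva_limit = 2.0       # branch MVA limit (p.u.)

# Gradual spoofing: a tiny angle increment is injected at each interval,
# so the flow computed from PMU angles drifts slowly toward the limit.
delta = 1e-4          # per-interval spoofed angle increment (rad)
theta_spoofed = theta_true
steps = 0
while (V1 * V2 / X) * np.sin(theta_spoofed) < mva_limit:
    theta_spoofed += delta
    steps += 1

print(f"computed flow breaches the limit after {steps} small increments")
print(f"total spoofed angle offset: {theta_spoofed - theta_true:.4f} rad")
```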
On the Road to 6G: Visions, Requirements, Key Technologies and Testbeds
Fifth generation (5G) mobile communication systems have entered the stage of commercial development, providing users with new services and improved user experiences as well as offering a host of novel opportunities to various industries. However, 5G still faces many challenges. To address these challenges, international industrial, academic, and standards organizations have commenced research on sixth generation (6G) wireless communication systems. A series of white papers and survey papers have been published, which aim to define 6G in terms of requirements, application scenarios, key technologies, etc. Although ITU-R has been working on the 6G vision and a consensus on what 6G will be is expected by mid-2023, the related global discussions are still wide open and the existing literature has identified numerous open issues. This paper first provides a comprehensive portrayal of the 6G vision, technical requirements, and application scenarios, covering the current common understanding of 6G. Then, a critical appraisal of the 6G network architecture and key technologies is presented. Furthermore, existing testbeds and advanced 6G verification platforms are detailed for the first time. In addition, future research directions and open challenges are identified to stimulate the ongoing global debate. Finally, lessons learned to date concerning 6G networks are discussed.
Machine learning approach for dynamic event identification in power systems with wide area measurement systems
Advisor: Prof. Dr. Alexandre Rasi Aoki. Co-advisor: Prof. Dr. Ricardo Schumacher. Master's dissertation - Universidade Federal do Paraná, Setor de Tecnologia, Graduate Program in Electrical Engineering. Defense: Curitiba, 13/02/2023. Includes references. Abstract: Over the last ten years, the availability of WAMS (Wide Area Measurement Systems) has steadily increased and, with it, the need to optimize its usage concerning a large array of capabilities required at the operation centers.
Concurrently, the Brazilian system has witnessed various events at multiple levels of criticality, and thus ways to quickly identify irregularities in the grid have been increasingly requested by power transmission and distribution companies. The introduction of machine learning models and algorithms in such a context has been explored by the scientific community. However, even with such a diversity of events and their PMU (Phasor Measurement Unit) measurements, consolidating an event database is difficult, and systems differ from each other - that is, the data volume required for machine learning and the specificity of each power system create challenges in constructing applications for the detection and identification of events in a given grid. As such, the present work proposes a way to address those constraints and enable the real-life application of machine learning models in a power system with WAMS by modeling a real-life system, simulating a large database of events as if they were registered by a PMU in said system, and training machine learning models on this simulated data. Given any dataset containing event measurements from the same (simulated) PMU, a validation of model performance and applicability can be performed. A reproducible and scalable process was defined to achieve this through a case study of the Salto Caxias subsystem of the Paraná state grid, operated by COPEL, which provided the author with three event datasets captured from a PMU in the aforementioned system. Components of the system were modeled in the MATLAB-based Power System Toolbox for dynamic simulation, including generators, PSSs, exciters, and turbine governors, in addition to buses, transmission lines, and transformers. The algorithm selected for this proof of concept was an artificial neural network with four possible output classes - "Short-circuit", "Load Loss", "Line Loss", and "Normal". With the machine learning model defined and trained, it was applied to the real event datasets. The results show that the neural network's metrics during the learning process were generally sufficient for real-life solutions, but its performance on the real event datasets was below its performance on simulated data. However, considering that the provided datasets came from events that happened far from the selected PMU and the modeled system itself, signal distortions and attenuations are inherent. Thus, the proposed method is applicable, with further data preprocessing, to any given system - as long as it is thoroughly modeled and datasets of events that occurred within it are available.
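As a rough illustration of the dissertation's pipeline (an artificial neural network classifying PMU event windows into four classes), here is a minimal sketch using scikit-learn on synthetic stand-in features; in the actual work the training data come from Power System Toolbox simulations, not random draws, and the feature extraction is system-specific.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
classes = ["Short-circuit", "Load Loss", "Line Loss", "Normal"]

# Synthetic stand-in for simulated PMU event windows: each sample is a
# feature vector (e.g., voltage magnitude / angle / frequency statistics).
n_per_class, n_features = 500, 12
X = np.vstack([rng.normal(loc=k, scale=1.0, size=(n_per_class, n_features))
               for k in range(len(classes))])
y = np.repeat(np.arange(len(classes)), n_per_class)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# A small feed-forward ANN, mirroring the dissertation's choice of an
# artificial neural network with four output classes.
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500,
                    random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.3f}")
```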
Modelling, Dimensioning and Optimization of 5G Communication Networks, Resources and Services
This reprint collects state-of-the-art research contributions that address challenges in emerging 5G network design, dimensioning, and optimization. Designing, dimensioning, and optimizing communication network resources and services have been an inseparable part of telecom network development. Such networks must convey a large volume of traffic, serving traffic streams with highly differentiated requirements in terms of bit rate, service time, and required quality-of-service and quality-of-experience parameters. Such a communication infrastructure presents many important challenges, such as the study of necessary multi-layer cooperation, new protocols, performance evaluation of different network parts, lower-layer network design, network management and security issues, and new technologies in general, all of which are discussed in this book.
Low Power Memory/Memristor Devices and Systems
This reprint focusses on achieving low-power computation using memristive devices. It is designed as a convenient reference point: it contains a mix of techniques, starting from the fundamental manufacturing of memristive devices all the way to applications such as physically unclonable functions, and also covers perspectives on, e.g., in-memory computing, which is inextricably linked with emerging memory devices such as memristors. Finally, the reprint contains a few articles representing how other communities (from typical CMOS design to photonics) are fighting on their own fronts in the quest towards low-power computation, as a comparison with the memristor literature. We hope that readers will enjoy discovering the articles within.
D4.2 Intelligent D-Band wireless systems and networks initial designs
This deliverable gives the results of the ARIADNE project's Task 4.2: Machine Learning based network intelligence. It presents the work conducted on various aspects of network management to deliver system-level, qualitative solutions that leverage diverse machine learning techniques. The chapters present system-level, simulation, and algorithmic models based on multi-agent reinforcement learning, deep reinforcement learning, learning automata for complex event forecasting, a system-level model for proactive handovers and resource allocation, and model-driven deep-learning-based channel estimation and feedback, as well as strategies for the deployment of machine-learning-based solutions. In short, D4.2 provides results on promising AI- and ML-based methods investigated in the ARIADNE project, along with their limitations and potential.