UMSL Bulletin 2023-2024
The 2023-2024 Bulletin and Course Catalog for the University of Missouri-St. Louis.
Machine learning in solar physics
The application of machine learning in solar physics has the potential to
greatly enhance our understanding of the complex processes that take place in
the atmosphere of the Sun. By using techniques such as deep learning, we are
now in a position to analyze large amounts of data from solar observations
and identify patterns and trends that may not have been apparent using
traditional methods. This can help us improve our understanding of explosive
events like solar flares, which can strongly affect the near-Earth
environment; predicting such hazardous events is crucial for our
technological society. Machine learning can also improve our understanding of
the inner workings of the Sun itself by allowing us to go deeper into the data
and to propose more complex models to explain them. Additionally, the use of
machine learning can help to automate the analysis of solar data, reducing the
need for manual labor and increasing the efficiency of research in this field.
Comment: 100 pages, 13 figures, 286 references; accepted for publication as a Living Review in Solar Physics (LRSP).
Networked Time Series Prediction with Incomplete Data
A networked time series (NETS) is a family of time series on a given graph,
one for each node. It has a wide range of applications from intelligent
transportation, environment monitoring to smart grid management. An important
task in such applications is to predict the future values of a NETS based on
its historical values and the underlying graph. Most existing methods require
complete data for training. However, in real-world scenarios, it is not
uncommon to have missing data due to sensor malfunction, incomplete sensing
coverage, etc. In this paper, we study the problem of NETS prediction with
incomplete data. We propose NETS-ImpGAN, a novel deep learning framework that
can be trained on incomplete data with missing values in both history and
future. Furthermore, we propose Graph Temporal Attention Networks, which
incorporate the attention mechanism to capture both inter-time series and
temporal correlations. We conduct extensive experiments on four real-world
datasets under different missing patterns and missing rates. The experimental
results show that NETS-ImpGAN outperforms existing methods, reducing the MAE by
up to 25%.
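To make the evaluation setting concrete, the sketch below builds a tiny hypothetical NETS (one short series per node of a 3-node graph, with None marking missing observations) and computes the MAE over observed entries only, the metric behind the reported reduction. The graph, data, and mean-value baseline are invented for illustration and are not from the paper.

```python
# A networked time series (NETS): one series per node of a graph.
# Hypothetical 3-node line graph, T = 4 steps; None marks missing values.
adjacency = {0: [1], 1: [0, 2], 2: [1]}
series = {
    0: [1.0, 2.0, None, 4.0],
    1: [0.5, None, 1.5, 2.0],
    2: [3.0, 3.0, 3.0, None],
}

def masked_mae(pred, truth):
    """MAE over observed (non-missing) entries only."""
    errs = [abs(pred[v][t] - y)
            for v, obs in truth.items()
            for t, y in enumerate(obs) if y is not None]
    return sum(errs) / len(errs)

# Trivial baseline: predict each node's observed mean at every step.
pred = {}
for v, obs in series.items():
    vals = [y for y in obs if y is not None]
    pred[v] = [sum(vals) / len(vals)] * len(obs)

print(round(masked_mae(pred, series), 4))  # 0.5556
```

A learned model such as NETS-ImpGAN would replace the mean baseline, but it would be scored against held-out observed entries in exactly this masked fashion.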
Twenty-five years of sensor array and multichannel signal processing: a review of progress to date and potential research directions
In this article, a general introduction to the area of sensor array and multichannel signal processing is provided, including associated activities of the IEEE Signal Processing Society (SPS) Sensor Array and Multichannel (SAM) Technical Committee (TC). The main technological advances in five SAM subareas made in the past 25 years are then presented in detail, including beamforming, direction-of-arrival (DOA) estimation, sensor location optimization, target/source localization based on sensor arrays, and multiple-input multiple-output (MIMO) arrays. Six recent developments are also provided at the end to indicate possible promising directions for future SAM research: graph signal processing (GSP) for sensor networks; tensor-based array signal processing; quaternion-valued array signal processing; 1-bit and noncoherent sensor array signal processing; machine learning and artificial intelligence (AI) for sensor arrays; and array signal processing for next-generation communication systems.
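To make the beamforming and DOA-estimation subareas concrete, here is a minimal sketch of the classical delay-and-sum (Bartlett) spatial spectrum for a half-wavelength uniform linear array. The array size, source direction, and noise-free snapshot are illustrative assumptions, not drawn from the article.

```python
import cmath
import math

def steering(theta_deg, n_elems):
    """Steering vector of a half-wavelength-spaced uniform linear array."""
    s = math.sin(math.radians(theta_deg))
    return [cmath.exp(-1j * math.pi * n * s) for n in range(n_elems)]

def bartlett_spectrum(snapshot, grid_deg):
    """Conventional (delay-and-sum) spatial spectrum |a(theta)^H x|^2 / N."""
    n = len(snapshot)
    spec = []
    for theta in grid_deg:
        a = steering(theta, n)
        y = sum(ai.conjugate() * xi for ai, xi in zip(a, snapshot))
        spec.append(abs(y) ** 2 / n)
    return spec

# Noise-free snapshot from a single far-field source at 20 degrees, 8 elements.
x = steering(20.0, 8)
grid = [g * 0.5 for g in range(-180, 181)]  # scan -90..90 deg in 0.5-deg steps
spec = bartlett_spectrum(x, grid)
est = grid[spec.index(max(spec))]
print(est)  # the spectrum peaks at the source direction
```

Subspace methods (e.g., MUSIC) sharpen this spectrum, but the steering-vector scan above is the common starting point for all the array-processing subareas listed.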
Antenna Selection With Beam Squint Compensation for Integrated Sensing and Communications
Next-generation wireless networks strive for higher communication rates,
ultra-low latency, seamless connectivity, and high-resolution sensing
capabilities. To meet these demands, terahertz (THz)-band signal processing is
envisioned as a key technology offering wide bandwidth and sub-millimeter
wavelength. Furthermore, the THz integrated sensing and communications (ISAC)
paradigm has emerged to jointly access the spectrum and reduce hardware costs
through a unified platform. To address the challenges in THz propagation, THz-ISAC
systems employ extremely large antenna arrays to improve the beamforming gain
for communications with high data rates and sensing with high resolution.
However, the cost and power consumption of implementing fully digital
beamformers are prohibitive. While hybrid analog/digital beamforming can be a
potential solution, the use of subcarrier-independent analog beamformers leads
to the beam-squint phenomenon where different subcarriers observe distinct
directions because of adopting the same analog beamformer across all
subcarriers. In this paper, we develop a sparse array architecture for THz-ISAC
with hybrid beamforming to provide a cost-effective solution. We analyze the
antenna selection problem under beam-squint influence and introduce a manifold
optimization approach for hybrid beamforming design. To reduce computational
and memory costs, we propose novel algorithms leveraging grouped subarrays,
quantized performance metrics, and sequential optimization. These approaches
yield a significant reduction in the number of possible subarray
configurations, which enables us to devise a neural-network classification
model to accurately perform antenna selection.
Comment: 14 pages, 10 figures, submitted to IEEE
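The beam-squint effect described above can be sketched numerically: with subcarrier-independent analog phase shifters set for the center frequency, a subcarrier at frequency f sees a beam steered to sin(theta_f) = (fc/f) sin(theta_c) for a ULA spaced at half the center-frequency wavelength. The carrier, bandwidth, and beam direction below are hypothetical THz-ISAC values, not the paper's configuration.

```python
import math

def squinted_angle_deg(f_hz, fc_hz, theta_c_deg):
    """Beam direction seen at subcarrier f when the subcarrier-independent
    analog phase shifters are set for theta_c at the center frequency fc
    (ULA spaced at half the center-frequency wavelength):
        sin(theta_f) = (fc / f) * sin(theta_c)."""
    s = (fc_hz / f_hz) * math.sin(math.radians(theta_c_deg))
    if abs(s) > 1.0:
        raise ValueError("beam squinted out of the visible region")
    return math.degrees(math.asin(s))

# Hypothetical THz-ISAC numbers: 300 GHz carrier, 30 GHz band, beam at 60 deg.
fc, theta_c = 300e9, 60.0
for f in (285e9, 300e9, 315e9):  # lower edge, center, upper edge
    print(f / 1e9, round(squinted_angle_deg(f, fc, theta_c), 2))
```

Even with a 10% fractional bandwidth, the band-edge subcarriers point several degrees away from the intended direction, which is why squint-aware beamformer and subarray designs are needed.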
Understanding the Latent Space of Diffusion Models through the Lens of Riemannian Geometry
Despite the success of diffusion models (DMs), we still lack a thorough
understanding of their latent space. To understand the latent space, we
analyze it from a geometrical perspective. Specifically, we utilize the
pullback metric to find the local latent basis in the latent space X and the
corresponding local tangent basis in H, the
intermediate feature maps of DMs. The discovered latent basis enables
unsupervised image editing capability through latent space traversal. We
investigate the discovered structure from two perspectives. First, we examine
how geometric structure evolves over diffusion timesteps. Through analysis, we
show that 1) the model focuses on low-frequency components early in the
generative process and attunes to high-frequency details later; 2) at early
timesteps, different samples share similar tangent spaces; and 3) the simpler
the dataset a DM is trained on, the more consistent the tangent space at each
timestep. Second, we investigate how the geometric structure changes based on
text conditioning in Stable Diffusion. The results show that 1) similar prompts
yield comparable tangent spaces; and 2) the model depends less on text
conditions in later timesteps. To the best of our knowledge, this paper is the
first to present image editing through x-space traversal and to provide
thorough analyses of the latent structure of DMs.
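The pullback-metric construction can be illustrated on a toy map: take the Jacobian of a network mapping latent space to feature space and compute its SVD; the right singular vectors give a local latent basis, and the left singular vectors give the corresponding tangent basis in feature space. The small random two-layer network below merely stands in for the paper's U-Net feature map; it is an assumption for illustration, not the actual model.

```python
import numpy as np

def jacobian(f, x, eps=1e-6):
    """Finite-difference Jacobian of f: R^d -> R^m at x."""
    y0 = f(x)
    J = np.zeros((y0.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (f(x + dx) - y0) / eps
    return J

# Toy stand-in for a DM's latent-to-feature map x -> h: a fixed random
# two-layer network (the real map would be the U-Net up to an inner layer).
rng = np.random.default_rng(0)
W1 = rng.standard_normal((16, 4))
W2 = rng.standard_normal((8, 16))
f = lambda x: W2 @ np.tanh(W1 @ x)

x = rng.standard_normal(4)
J = jacobian(f, x)
U, S, Vt = np.linalg.svd(J, full_matrices=False)

# Rows of Vt: local latent basis (directions in x-space, ranked by how
# strongly they move the features); columns of U: the matching local
# tangent basis in h-space. Traversing x along Vt[0] changes h the most.
print(J.shape, round(float(S[0] / S[-1]), 2))
```

Latent traversal along the dominant rows of Vt is, in this geometric picture, what produces the unsupervised editing directions described in the abstract.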
Beam scanning by liquid-crystal biasing in a modified SIW structure
A fixed-frequency beam-scanning 1D antenna based on Liquid Crystals (LCs) is designed for application in 2D scanning with lateral alignment. The 2D array environment imposes full decoupling of adjacent 1D antennas, which often conflicts with the LC requirement of DC biasing; the proposed design accommodates both. The LC medium is placed inside a Substrate Integrated Waveguide (SIW), modified to work as a Groove Gap Waveguide with radiating slots etched in the upper broad wall, which radiates as a Leaky-Wave Antenna (LWA). This allows effective application of the DC bias voltage needed for tuning the LCs. At the same time, the RF field remains laterally confined, making it possible to lay several antennas in parallel and achieve 2D beam scanning. The design is validated by simulation employing the actual properties of a commercial LC medium.
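A rough sketch of the underlying scanning mechanism: in a leaky-wave antenna the main-beam angle follows theta ≈ asin(beta/k0), and biasing the LC changes its permittivity, hence the phase constant beta, at a fixed frequency. The phase-constant model below is the textbook dielectric-filled-guide relation, and the frequency, guide width, and permittivity range are invented illustrative values, not the paper's design.

```python
import math

C = 299792458.0  # speed of light, m/s

def beam_angle_deg(f_hz, eps_r, a_m):
    """Main-beam angle from broadside of a leaky-wave antenna whose phase
    constant is modeled as that of a dielectric-filled guide of width a:
        beta = sqrt(eps_r * k0^2 - (pi/a)^2),  theta = asin(beta / k0).
    Illustrative textbook model, not the paper's modified-SIW structure."""
    k0 = 2 * math.pi * f_hz / C
    beta_sq = eps_r * k0**2 - (math.pi / a_m) ** 2
    if not 0.0 <= beta_sq <= k0**2:
        raise ValueError("outside the radiating (fast-wave) region")
    return math.degrees(math.asin(math.sqrt(beta_sq) / k0))

# Fixed frequency; the DC bias on the LC tunes eps_r, steering the beam.
f = 28e9      # hypothetical operating frequency
a = 3.45e-3   # hypothetical guide width
for eps in (2.5, 2.9, 3.3):  # illustrative LC permittivity tuning range
    print(eps, round(beam_angle_deg(f, eps, a), 1))
```

The monotonic angle-versus-permittivity trend is the essence of fixed-frequency beam scanning by LC biasing; the actual scan range depends on the real structure's dispersion.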
Emergence of Adaptive Circadian Rhythms in Deep Reinforcement Learning
Adapting to regularities of the environment is critical for biological
organisms to anticipate events and plan. A prominent example is the circadian
rhythm, corresponding to the internalization by organisms of the 24-hour
period of the Earth's rotation. In this work, we study the emergence of
circadian-like rhythms in deep reinforcement learning agents. In particular, we
deployed agents in an environment with a reliable periodic variation while
solving a foraging task. We systematically characterize the agent's behavior
during learning and demonstrate the emergence of a rhythm that is endogenous
and entrainable. Interestingly, the internal rhythm adapts to shifts in the
phase of the environmental signal without any re-training. Furthermore, we show
via bifurcation and phase response curve analyses how artificial neurons
develop dynamics to support the internalization of the environmental rhythm.
From a dynamical systems view, we demonstrate that the adaptation proceeds by
the emergence of a stable periodic orbit in the neuron dynamics with a phase
response that allows an optimal phase synchronisation between the agent's
dynamics and the environmental rhythm.
Comment: ICML 202
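The entrainment behavior described above can be caricatured by a classical forced phase oscillator (not the paper's RL agent): an intrinsic period near, but not equal to, 24 h locks to a 24 h environmental signal and re-locks after an environmental phase shift without any parameter change. All constants below are illustrative assumptions.

```python
import math

def simulate(days=60, dt=0.1, intrinsic_period=23.0, coupling=0.05,
             shift_day=30, shift_hours=6.0):
    """Toy entrainment model (times in hours):
        d(phi)/dt = 2*pi/intrinsic_period + coupling * sin(phi_env - phi),
    where the 24-h environmental phase jumps by shift_hours at shift_day.
    Returns the wrapped phase lag phi_env - phi at every step."""
    phi, t, offset = 0.0, 0.0, 0.0
    lags = []
    for _ in range(int(days * 24 / dt)):
        if t >= shift_day * 24 and offset == 0.0:
            offset = 2 * math.pi * shift_hours / 24.0  # abrupt phase shift
        phi_env = 2 * math.pi * t / 24.0 + offset
        lags.append(math.atan2(math.sin(phi_env - phi), math.cos(phi_env - phi)))
        phi += dt * (2 * math.pi / intrinsic_period
                     + coupling * math.sin(phi_env - phi))
        t += dt
    return lags

lags = simulate()
before = lags[int(29 * 24 / 0.1)]  # steady lag just before the shift
after = lags[-1]                   # lag long after the shift
print(round(abs(before - after), 3))
```

The lag settles to the same stable value before and after the shift, mirroring the abstract's point that the entrained rhythm corresponds to a stable periodic orbit with a phase response that restores synchrony.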