Reliable Packet Detection for Random Access Networks: Analysis, Benchmark, and Optimization
This paper reexamines and fundamentally improves the Schmidl-and-Cox (S&C)
algorithm, which is extensively used for packet detection in wireless networks,
and enhances its adaptability for multi-antenna receivers. First, we introduce
a new "compensated autocorrelation" metric, providing a more analytically
tractable solution with precise expressions for false-alarm and
missed-detection probabilities. Second, this paper proposes the Pareto
comparison principle for fairly benchmarking packet-detection algorithms,
considering both false alarms and missed detections simultaneously. Third, with
the Pareto benchmarking scheme, we experimentally confirm that the performance
of S&C can be greatly improved by taking only the real part and discarding the
imaginary part of the autocorrelation, leading to the novel real-part S&C
(RP-S&C) scheme. Fourth, and perhaps most importantly, we leverage the
newly proposed compensated autocorrelation metric to extend the
single-antenna algorithm to multi-antenna scenarios through a weighted-sum
approach. Two optimization problems, minimizing false-alarm and
missed-detection probabilities respectively, are formulated and solutions are
provided. Our experimental results reveal that the optimal weights for false
alarms (WFA) scheme is more desirable than the optimal weights for missed
detections (WMD) due to its simplicity, reliability, and superior performance.
This study holds considerable implications for the design and deployment of
packet-detection schemes in random-access networks.
UV Curing and Micromolding of Polymer Coatings
University of Minnesota Ph.D. dissertation. August 2018. Major: Materials Science and Engineering. Advisors: Lorraine Francis, Alon McCormick. 1 computer file (PDF); xii, 143 pages.
UV-curable films and coatings have a wide range of applications in everyday life and in various industrial sectors. With surface microstructures, patterned coatings offer a general way to provide surface textures or serve critical design purposes, such as delivering engineered optical performance and altering surface hydrophobicity. This thesis addresses key challenges in the UV curing process and its patterning applications, aiming to make UV curing faster, better, and cheaper.
First, the UV curing speed is the throughput bottleneck for many applications. Traditionally, high-intensity light sources are used to achieve a fast cure but bring potential problems, e.g. significant heat accumulation. In this thesis, intense pulsed light was investigated as an alternative curing method, in which the UV energy is delivered in discrete pulses with a dark period between pulses. A systematic study was performed on a model acrylate system to understand curing conversion as a function of various processing parameters, including the illumination conditions, the photoinitiator concentration, and the curing temperature. It was revealed that sufficient curing of acrylates was achieved within seconds without significant heat build-up in the substrate.
Second, this thesis investigates the fabrication of surface microstructures with UV-curable materials. The UV micromolding process is used for pattern replication: a liquid coating is brought into contact with a patterned mold and then UV cured to obtain surface microstructures. However, the wide application of this process is limited by its stringent material requirements: low viscosity, fast cure, low surface energy, and tunable mechanical properties.
This thesis describes the design of thiol-ene-based coating formulations for the UV micromolding process. The coating system allows microstructured coatings to be prepared within seconds and significantly expands the achievable mechanical and surface properties of the cured materials. Finally, continuous fabrication of microstructured coatings was explored to move the process a step further towards mass production. A roll-to-roll imprinting process with thiol-ene-based formulations was discussed, in which a roller mold was used for pattern replication on large-area substrates. The coating formulations were optimized for a fast cure and high curing extent, and the influences of processing variables on the curing extent were studied systematically.
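The trade-off between pulsed and continuous illumination comes down to simple dose arithmetic: the cumulative UV dose is the peak intensity times the total "on" time, while the heat load scales with the average intensity over the whole exposure. The numbers below are purely illustrative assumptions, not values from the thesis.

```python
# Cumulative UV dose (energy per area) under pulsed illumination.
# All parameter values are illustrative assumptions, not the thesis's data.

peak_intensity_W_cm2 = 10.0   # intensity during a pulse
pulse_on_s = 0.002            # pulse duration
pulse_off_s = 0.008           # dark period between pulses
n_pulses = 500

dose_J_cm2 = peak_intensity_W_cm2 * pulse_on_s * n_pulses
total_time_s = n_pulses * (pulse_on_s + pulse_off_s)
avg_intensity_W_cm2 = dose_J_cm2 / total_time_s

print(dose_J_cm2, total_time_s, avg_intensity_W_cm2)
# 10 J/cm^2 delivered in 5 s at an average intensity of only 2 W/cm^2
```

This is why pulsing can deliver a high instantaneous intensity (fast radical generation) while keeping the time-averaged power, and hence the heat build-up in the substrate, low.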
The Power of Large Language Models for Wireless Communication System Development: A Case Study on FPGA Platforms
Large language models (LLMs) have garnered significant attention across
various research disciplines, including the wireless communication community.
There have been several heated discussions on the intersection of LLMs and
wireless technologies. While recent studies have demonstrated the ability of
LLMs to generate hardware description language (HDL) code for simple
computation tasks, developing wireless prototypes and products via HDL poses
far greater challenges because of the more complex computation tasks involved.
In this paper, we aim to address this challenge by investigating the role of
LLMs in FPGA-based hardware development for advanced wireless signal
processing. We begin by exploring LLM-assisted code refactoring, reuse, and
validation, using an open-source software-defined radio (SDR) project as a case
study. Through the case study, we find that an LLM assistant can potentially
yield substantial productivity gains for researchers and developers. We then
examine the feasibility of using LLMs to generate HDL code for advanced
wireless signal processing, using the Fast Fourier Transform (FFT) algorithm as
an example. This task presents two unique challenges: the scheduling of
subtasks within the overall task and the multi-step thinking required to solve
certain arithmetic problems within the task. To address these challenges, we
employ in-context learning (ICL) and Chain-of-Thought (CoT) prompting
techniques, culminating in the successful generation of a 64-point Verilog FFT
module. Our results demonstrate the potential of LLMs for generalization and
imitation, affirming their usefulness in writing HDL code for wireless
communication systems. Overall, this work contributes to understanding the role
of LLMs in wireless communication and motivates further exploration of their
capabilities.
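The divide-and-conquer scheduling that makes the FFT hard to express in HDL can be sketched in software. Below is a standard radix-2 decimation-in-time recursion checked against a direct DFT; this is an illustration of the algorithm a 64-point FFT module implements, not the paper's generated Verilog.

```python
import cmath

def fft(x):
    """Radix-2 decimation-in-time FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])          # subtask: FFT of even-indexed samples
    odd = fft(x[1::2])           # subtask: FFT of odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):      # butterfly stage: combine the two halves
        tw = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + tw
        out[k + n // 2] = even[k] - tw
    return out

def dft(x):
    """Direct O(n^2) DFT used as a reference."""
    n = len(x)
    return [sum(x[m] * cmath.exp(-2j * cmath.pi * k * m / n)
                for m in range(n)) for k in range(n)]

x = [complex(i % 7, -(i % 3)) for i in range(64)]  # arbitrary 64-point input
assert all(abs(a - b) < 1e-6 for a, b in zip(fft(x), dft(x)))
```

The recursion makes the scheduling problem concrete: each butterfly stage depends on two half-size sub-FFTs, and a hardware pipeline must serialise or parallelise those dependencies explicitly, which is exactly the kind of multi-step structure the ICL and CoT prompting targets.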
Unveiling Fairness Biases in Deep Learning-Based Brain MRI Reconstruction
Deep learning (DL) reconstruction, particularly of MRI, has led to improvements
in image fidelity and reductions in acquisition time. In neuroimaging, DL
methods can reconstruct high-quality images from undersampled data. However, it
is essential to consider fairness in DL algorithms, particularly in terms of
demographic characteristics. This study presents the first fairness analysis in
a DL-based brain MRI reconstruction model. The model utilises the U-Net
architecture for image reconstruction and explores the presence and sources of
unfairness by implementing baseline Empirical Risk Minimisation (ERM) and
rebalancing strategies. Model performance is evaluated using image
reconstruction metrics. Our findings reveal statistically significant
performance biases between the gender and age subgroups. Surprisingly, data
imbalance and training discrimination are not the main sources of bias. This
analysis provides insights into fairness in DL-based image reconstruction and
aims to improve equity in medical AI applications.
Comment: Accepted for publication at FAIMI 2023 (Fairness of AI in Medical Imaging) at MICCA
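The kind of subgroup comparison this abstract describes can be sketched as computing a per-scan reconstruction metric for each demographic group and testing whether the gap is statistically significant. The sketch below uses a simple permutation test on synthetic scores; the metric, group sizes, and effect size are all illustrative assumptions, not the study's data.

```python
import numpy as np

def subgroup_gap_pvalue(scores, groups, n_perm=2000, seed=0):
    """Permutation test for a performance gap between two subgroups.

    scores : per-scan reconstruction metric (e.g. SSIM); groups : 0/1 labels.
    Returns (observed gap, two-sided permutation p-value).
    """
    scores = np.asarray(scores, float)
    groups = np.asarray(groups)
    gap = scores[groups == 0].mean() - scores[groups == 1].mean()
    rng = np.random.default_rng(seed)
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(groups)       # break the group/score link
        g = scores[perm == 0].mean() - scores[perm == 1].mean()
        if abs(g) >= abs(gap):
            count += 1
    return gap, (count + 1) / (n_perm + 1)

# Synthetic example: group 1 scores slightly lower on average.
rng = np.random.default_rng(1)
scores = np.concatenate([rng.normal(0.90, 0.02, 100),
                         rng.normal(0.87, 0.02, 100)])
groups = np.array([0] * 100 + [1] * 100)
gap, p = subgroup_gap_pvalue(scores, groups)
print(round(gap, 3), p < 0.05)   # sizeable gap, statistically significant
```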
AMD-DBSCAN: An Adaptive Multi-density DBSCAN for datasets of extremely variable density
DBSCAN has been widely used in density-based clustering algorithms. However,
with the increasing demand for Multi-density clustering, the traditional
DBSCAN cannot achieve good clustering results on Multi-density datasets. In order
to address this problem, an adaptive Multi-density DBSCAN algorithm
(AMD-DBSCAN) is proposed in this paper. An improved parameter adaptation method
is proposed in AMD-DBSCAN to search for multiple parameter pairs (i.e., Eps and
MinPts), which are the key parameters to determine the clustering results and
performance, therefore allowing the model to be applied to Multi-density
datasets. Moreover, only one hyperparameter is required for AMD-DBSCAN to avoid
the complicated repetitive initialization operations. Furthermore, the variance
of the number of neighbors (VNN) is proposed to measure the difference in
density between each cluster. The experimental results show that our AMD-DBSCAN
reduces execution time by an average of 75% due to lower algorithm complexity
compared with the traditional adaptive algorithm. In addition, AMD-DBSCAN
improves accuracy by 24.7% on average over the state-of-the-art design on
Multi-density datasets of extremely variable density, while having no
performance loss in Single-density scenarios. Our code and datasets are
available at https://github.com/AlexandreWANG915/AMD-DBSCAN.
Comment: Accepted at DSAA202
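The intuition behind a VNN-style measure can be sketched directly: count each point's Eps-neighbors and take the variance of those counts. A uniform-density point set gives a low variance, while a set spanning regions of very different density gives a high one. This follows the paper's idea but is not its exact definition, and all parameters below are illustrative.

```python
import numpy as np

def variance_of_neighbor_counts(points, eps):
    """Variance of each point's Eps-neighborhood size (a VNN-style measure).

    Low variance suggests a single density; high variance suggests the
    points span regions of very different density.
    """
    pts = np.asarray(points, float)
    # Pairwise Euclidean distances (fine for small n; O(n^2) memory).
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    counts = (d <= eps).sum(axis=1) - 1     # exclude the point itself
    return counts.var()

rng = np.random.default_rng(0)
uniform = rng.uniform(0, 1, (200, 2))                    # single density
mixed = np.vstack([rng.uniform(0, 1, (100, 2)),          # sparse region
                   rng.uniform(0, 0.2, (100, 2))])       # dense region
v_uni = variance_of_neighbor_counts(uniform, 0.1)
v_mix = variance_of_neighbor_counts(mixed, 0.1)
print(v_uni < v_mix)   # True: mixed densities inflate the variance
```

A measure like this is what lets an adaptive algorithm decide whether one (Eps, MinPts) pair suffices or whether multiple parameter pairs are needed.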
Cine cardiac MRI reconstruction using a convolutional recurrent network with refinement
Cine Magnetic Resonance Imaging (MRI) allows for understanding of the heart's
function and condition in a non-invasive manner. Undersampling of k-space
is employed to reduce the scan duration, thus increasing patient comfort and
reducing the risk of motion artefacts, at the cost of reduced image quality. In
this challenge paper, we investigate the use of a convolutional recurrent
neural network (CRNN) architecture to exploit temporal correlations in
supervised cine cardiac MRI reconstruction. This is combined with a
single-image super-resolution refinement module to improve single coil
reconstruction by 4.4\% in structural similarity and 3.9\% in normalised mean
square error compared to a plain CRNN implementation. We deploy a high-pass
filter to our loss to allow greater emphasis on high-frequency details
which are missing in the original data. The proposed model demonstrates
considerable enhancements compared to the baseline case and holds promising
potential for further improving cardiac MRI reconstruction.
Comment: MICCAI STACOM workshop 202
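The high-pass-filtered loss described above can be sketched with a discrete Laplacian: filter both the prediction and the target, and add the mean-squared difference of the filtered images to the ordinary MSE. This is a minimal NumPy sketch of the general idea, not the paper's exact filter or loss weighting.

```python
import numpy as np

def highpass(img):
    """Discrete Laplacian as a simple high-pass filter (edge-padded)."""
    p = np.pad(img, 1, mode="edge")
    return (4 * p[1:-1, 1:-1] - p[:-2, 1:-1] - p[2:, 1:-1]
            - p[1:-1, :-2] - p[1:-1, 2:])

def highpass_mse(pred, target, alpha=1.0):
    """MSE plus a high-pass term emphasising fine (high-frequency) detail."""
    base = np.mean((pred - target) ** 2)
    hf = np.mean((highpass(pred) - highpass(target)) ** 2)
    return base + alpha * hf

# Two errors of identical pixel-wise magnitude: a smooth (constant) offset
# and a textured (checkerboard) error. Only the textured one is penalised
# by the high-pass term.
target = np.zeros((32, 32))
smooth_err = target + 0.1
checker = 0.1 * ((np.indices((32, 32)).sum(0) % 2) * 2 - 1)
textured_err = target + checker
print(highpass_mse(smooth_err, target) < highpass_mse(textured_err, target))
```

Because both errors have the same plain MSE, the comparison isolates the effect of the high-pass term: it steers optimisation toward the high-frequency details that undersampling tends to destroy.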
An Autonomous Large Language Model Agent for Chemical Literature Data Mining
Chemical synthesis, which is crucial for advancing materials synthesis and
drug discovery, impacts various sectors including environmental science and
healthcare. The rise of technology in chemistry has generated extensive
chemical data, challenging researchers to discern patterns and refine synthesis
processes. Artificial intelligence (AI) helps by analyzing data to optimize
synthesis and increase yields. However, AI faces challenges in processing
literature data due to the unstructured format and diverse writing style of
chemical literature. To overcome these difficulties, we introduce an end-to-end
AI agent framework capable of high-fidelity extraction from extensive chemical
literature. This AI agent employs large language models (LLMs) for prompt
generation and iterative optimization. It functions as a chemistry assistant,
automating data collection and analysis, thereby saving manpower and enhancing
performance. Our framework's efficacy is evaluated using accuracy, recall, and
F1 score of reaction condition data, and we compare our method with human
experts in terms of content correctness and time efficiency. The proposed
approach marks a significant advancement in automating chemical literature
extraction and demonstrates the potential for AI to revolutionize data
management and utilization in chemistry.
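The evaluation metrics mentioned above reduce to standard set comparisons between extracted and gold reaction-condition fields. The sketch below computes precision, recall, and F1 for one record; the field names and values are hypothetical examples, not data from the paper.

```python
def extraction_scores(predicted, gold):
    """Precision, recall, and F1 for extracted reaction-condition fields.

    predicted / gold : sets of (field, value) pairs for one reaction record.
    """
    tp = len(predicted & gold)                       # exact-match true positives
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical record: the agent extracted three fields, two of them correct.
gold = {("solvent", "THF"), ("temperature", "60 C"),
        ("time", "2 h"), ("catalyst", "Pd/C")}
pred = {("solvent", "THF"), ("temperature", "60 C"), ("time", "4 h")}
print(extraction_scores(pred, gold))   # precision 2/3, recall 1/2, F1 4/7
```

Exact-match scoring like this is strict; in practice, extraction benchmarks often add normalisation (units, synonyms) before comparison, which this sketch omits.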