458 research outputs found
A Software-Defined-Radio Platform for Multiple-Input-Multiple-Output Over-The-Air Measurement
This paper presents a 2 × 2 multiple-input multiple-output over-the-air (MIMO OTA) measurement system with user-programmable, reconfigurable, real-time signal-processing capability provided by field-programmable gate array (FPGA)-based software-defined radios (SDRs). Signal generation and analysis as well as channel emulation are all implemented using vector signal transceivers (VSTs). As a demonstration, we performed the Third Generation Partnership Project (3GPP) two-stage MIMO OTA conducted test using a downlink long-term evolution time-division duplex (TD-LTE) scheme. The channel emulation was operated in a stochastic mode. Some preliminary results of the system verification are shown.
InGVIO: A Consistent Invariant Filter for Fast and High-Accuracy GNSS-Visual-Inertial Odometry
Combining Global Navigation Satellite System (GNSS) measurements with visual and inertial sensors can give smooth pose estimation without drift in geographical coordinates. The fusion system gradually degrades to Visual-Inertial Odometry (VIO) as the number of visible satellites decreases, which guarantees robust global navigation in GNSS-unfriendly environments. In this letter, we propose an open-source invariant-filter-based platform, InGVIO, to tightly fuse monocular/stereo visual-inertial measurements with raw GNSS data, i.e., pseudo-ranges and Doppler shifts. InGVIO gives highly competitive results in terms of accuracy and computational load compared with current graph-based and 'naive' EKF-based algorithms. Thanks to our proposed key-frame marginalization strategies, the baseline for triangulation remains large even though only a few cloned poses are kept. In addition, landmarks are anchored to a single cloned pose to fit the nonlinear log-error form of the invariant filter while achieving propagation decoupled from the IMU states. Moreover, we exploit the infinitesimal symmetries of the system, which yield the same pattern of degenerate motions and the same structure of unobservable subspaces as our previous work based on observability analysis. We show that the properly chosen invariant error captures these symmetries and has intrinsic consistency properties. InGVIO is tested on both open datasets and our proposed fixed-wing datasets with variable levels of difficulty. The latter, to the best of our knowledge, are the first datasets with raw GNSS open-sourced to the community for a fixed-wing aircraft. Comment: 8 pages, 8 figures; manuscript will be submitted to IEEE RA-L for possible publication
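As context for the tight GNSS fusion described above, here is a minimal sketch of a single EKF update with one raw pseudo-range measurement, assuming a toy state of receiver position plus clock bias; it illustrates the measurement model only and does not reproduce InGVIO's invariant-filter formulation or its log-error parametrization.

```python
# Hedged sketch: tightly coupled pseudo-range update for a toy state
# x = [px, py, pz, c*dt_receiver]; not InGVIO's actual filter.
import numpy as np

def pseudorange_update(x, P, sat_pos, rho_meas, sigma=2.0):
    p, cdt = x[:3], x[3]
    diff = p - sat_pos
    rho_pred = np.linalg.norm(diff) + cdt            # predicted pseudo-range
    H = np.zeros((1, 4))
    H[0, :3] = diff / np.linalg.norm(diff)           # d(rho)/d(position)
    H[0, 3] = 1.0                                    # d(rho)/d(clock bias)
    S = H @ P @ H.T + sigma**2                       # innovation covariance
    K = P @ H.T / S                                  # Kalman gain (4x1)
    x_new = x + (K * (rho_meas - rho_pred)).ravel()
    P_new = (np.eye(4) - K @ H) @ P
    return x_new, P_new

x0 = np.array([0.0, 0.0, 0.0, 0.0])
P0 = np.eye(4) * 10.0
sat = np.array([15600e3, 7540e3, 20140e3])           # hypothetical satellite position (m)
rho = np.linalg.norm(sat) + 3.5                      # synthetic measurement
x1, P1 = pseudorange_update(x0, P0, sat, rho)
print(x1)
```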
A LTE MIMO OTA Test System Using Vector Signal Transceivers
A 2 × 2 multiple-input-multiple-output over-the-air (MIMO OTA) test system based on four field-programmable Vector Signal Transceiver (VST) modules is presented. The system enables 2 × 2 MIMO OTA testing by assembling a two-channel Evolved Node B (eNodeB) LTE base-station emulator, a 2 × 2 channel emulator, and a two-channel user equipment (UE) simulator. A two-stage MIMO OTA test method has been demonstrated with downlink Long-Term Evolution Time-Division Duplex (LTE-TDD) mode using different modulation and coding schemes (MCSs). Test results and analysis are shown. This system will allow a systematic study of MIMO OTA metrology needs.
Measurement-Based Characterization of 39 GHz Millimeter-Wave Dual-Polarized Channel Under Foliage Loss Impact
This paper presents a measurement-based analysis of the wideband 39 GHz millimeter-wave (mm-wave) dual-polarized propagation channel under the impact of foliage present between a transmitter (Tx) and a receiver (Rx). The measurements were conducted in a richly vegetated area, and the so-called direction-scan-sounding (DSS) method, which rotates a horn antenna in the angular domains, was applied to investigate the direction-of-arrival (DoA)-dependent characteristics of the polarimetric channels. Four Tx-to-Rx polarization configurations were considered, including the co-polarization scenarios vertical Tx-polarization to vertical Rx-polarization (VV) and horizontal to horizontal (HH), as well as the cross-polarization scenarios vertical to horizontal (VH) and horizontal to vertical (HV), which allow scrutinizing the differences in delay-direction dispersion for commonly encountered scenarios. A foliage loss model for various vegetation depths in the VV polarization configuration is also presented. The results show that the foliage-loss DoA spectra for VH and HV are similar, while the spectra exhibit less penetration loss in most directions for VV than for HH. Furthermore, the presence of vegetation between the Tx and the Rx leads to larger dispersion in delay compared with the clear line-of-sight (LoS) scenario, particularly for vertical polarization at the Tx side; the foliage also results in evident DoA dispersion, especially in the HV scenario. Selectivity in direction caused by foliage is more significant in vertically-polarized Tx scenarios than in horizontally-polarized Tx scenarios. A statistical model summarizing these comparisons is established.
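For context on how foliage attenuation typically scales, a commonly cited empirical form (not the model fitted in this paper) is Weissberger's modified exponential decay model, L(dB) = 0.45 f^0.284 d for foliage depths d below 14 m and L(dB) = 1.33 f^0.284 d^0.588 for 14 m ≤ d ≤ 400 m, with f the carrier frequency in GHz and d the foliage depth in metres; the model presented in the paper is fitted to the measured VV data and may differ in both form and coefficients.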
Finding and Editing Multi-Modal Neurons in Pre-Trained Transformers
Multi-modal large language models (LLMs) have achieved powerful capabilities for visual semantic understanding in recent years. However, little is known about how LLMs comprehend visual information and interpret different modalities of features. In this paper, we propose a new method for identifying multi-modal neurons in transformer-based multi-modal LLMs. Through a series of experiments, we highlight three critical properties of multi-modal neurons using four well-designed quantitative evaluation metrics. Furthermore, we introduce a knowledge-editing method based on the identified multi-modal neurons for modifying a specific token into another designated token. We hope our findings can inspire further explanatory research on the mechanisms of multi-modal LLMs.
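To make the idea of "multi-modal neurons" concrete, the following is a minimal sketch of one common attribution pattern: scoring FFN neurons by how much their activations push the logit of a target text token. It uses random stand-in tensors instead of a real multi-modal LLM, and it is not the paper's identification method or its four evaluation metrics.

```python
# Hedged sketch: neuron-to-token attribution with stand-in weights.
import torch

d_model, d_ffn, vocab = 64, 256, 1000
torch.manual_seed(0)

W_out = torch.randn(d_ffn, d_model)      # FFN down-projection (hypothetical)
W_unembed = torch.randn(d_model, vocab)  # unembedding matrix (hypothetical)
act = torch.relu(torch.randn(d_ffn))     # FFN activations at an image-token position

target_token = 42
# Contribution of neuron i to the target logit: act_i * (W_out[i] @ W_unembed[:, target])
scores = act * (W_out @ W_unembed[:, target_token])
top = torch.topk(scores, k=10)
print(top.indices.tolist())              # candidate "multi-modal neurons" for this token
```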
Make-Your-3D: Fast and Consistent Subject-Driven 3D Content Generation
Recent years have witnessed the strong power of 3D generation models, which offer a new level of creative flexibility by allowing users to guide the 3D content generation process through a single image or natural language. However, it remains challenging for existing 3D generation methods to create subject-driven 3D content across diverse prompts. In this paper, we introduce a novel 3D customization method, dubbed Make-Your-3D, that can personalize high-fidelity and consistent 3D content from only a single image of a subject with a text description within 5 minutes. Our key insight is to harmonize the distributions of a multi-view diffusion model and an identity-specific 2D generative model, aligning them with the distribution of the desired 3D subject. Specifically, we design a co-evolution framework to reduce the variance of the distributions, where each model undergoes a process of learning from the other through identity-aware optimization and subject-prior optimization, respectively. Extensive experiments demonstrate that our method can produce high-quality, consistent, and subject-specific 3D content with text-driven modifications that are unseen in the subject image. Comment: Project page: https://liuff19.github.io/Make-Your-3
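As a toy abstraction of the co-evolution idea described above, the sketch below alternately pulls two parameterized "models" (here just Gaussian means standing in for the multi-view diffusion model and the identity-specific 2D generator) toward each other and toward the subject. All names and losses are placeholders; this is not Make-Your-3D's implementation.

```python
# Hedged toy sketch: two models learning from each other and from the subject.
import torch

subject = torch.tensor([2.0, -1.0])                   # stand-in for the desired 3D subject
mu_multiview = torch.zeros(2, requires_grad=True)     # "multi-view diffusion model"
mu_identity = torch.ones(2, requires_grad=True)       # "identity-specific 2D model"
opt = torch.optim.Adam([mu_multiview, mu_identity], lr=0.05)

for step in range(200):
    # identity-aware term: multi-view model learns from the identity model + subject
    loss_id = (mu_multiview - mu_identity.detach()).pow(2).sum() \
            + (mu_multiview - subject).pow(2).sum()
    # subject-prior term: identity model learns from the multi-view model + subject
    loss_prior = (mu_identity - mu_multiview.detach()).pow(2).sum() \
               + (mu_identity - subject).pow(2).sum()
    opt.zero_grad()
    (loss_id + loss_prior).backward()
    opt.step()

print(mu_multiview.data, mu_identity.data)            # both drift toward the subject
```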
OrchMoE: Efficient Multi-Adapter Learning with Task-Skill Synergy
We advance the field of Parameter-Efficient Fine-Tuning (PEFT) with our novel multi-adapter method, OrchMoE, which capitalizes on a modular skill architecture for enhanced forward transfer in neural networks. Unlike prior models that depend on explicit task-identification inputs, OrchMoE automatically discerns task categories, streamlining the learning process. This is achieved through an integrated mechanism comprising an Automatic Task Classification module and a Task-Skill Allocation module, which collectively deduce task-specific classifications and tailor skill-allocation matrices. Our extensive evaluations on the 'Super Natural Instructions' dataset, featuring 1,600 diverse instructional tasks, indicate that OrchMoE substantially outperforms comparable multi-adapter baselines in terms of both performance and sample-utilization efficiency, all while operating within the same parameter constraints. These findings suggest that OrchMoE offers a significant leap forward in multi-task learning efficiency. Comment: 9 pages, 3 figures
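The general pattern the abstract describes (an input-conditioned task classifier plus a task-to-skill allocation matrix mixing a pool of low-rank adapters) can be sketched as follows. This is a minimal illustration under my own assumptions, not OrchMoE's released code; all module names, dimensions, and the LoRA-style adapters are hypothetical.

```python
# Hedged sketch: task classification -> task-skill allocation -> mixed adapters.
import torch
import torch.nn as nn

class MultiAdapterLayer(nn.Module):
    def __init__(self, d_in=64, d_out=64, n_tasks=4, n_skills=8, rank=4):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)              # frozen backbone projection
        self.base.weight.requires_grad_(False)
        self.task_clf = nn.Linear(d_in, n_tasks)        # automatic task classification
        self.alloc = nn.Parameter(torch.zeros(n_tasks, n_skills))  # task-skill allocation
        self.A = nn.Parameter(torch.randn(n_skills, d_in, rank) * 0.01)  # LoRA down
        self.B = nn.Parameter(torch.zeros(n_skills, rank, d_out))        # LoRA up

    def forward(self, x):                               # x: (batch, d_in)
        task_probs = self.task_clf(x).softmax(-1)       # (batch, n_tasks)
        skill_w = task_probs @ self.alloc.softmax(-1)   # (batch, n_skills)
        delta = torch.einsum('bi,sir,sro->bso', x, self.A, self.B)  # per-skill outputs
        return self.base(x) + torch.einsum('bs,bso->bo', skill_w, delta)

layer = MultiAdapterLayer()
print(layer(torch.randn(2, 64)).shape)                  # torch.Size([2, 64])
```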
Discovering Galaxy Features via Dataset Distillation
In many applications, Neural Nets (NNs) have classification performance on par with or even exceeding human capacity. Moreover, it is likely that NNs leverage underlying features that differ from those humans perceive when classifying. Can we "reverse-engineer" pertinent features to enhance our scientific understanding? Here, we apply this idea to the notoriously difficult task of galaxy classification: NNs have reached high performance on this task, but what does a neural net (NN) "see" when it classifies galaxies? Are there morphological features that the human eye might overlook but that could help with the task and provide new insights? Can we visualize tracers of early evolution, or additionally incorporated spectral data? We present a novel way to summarize and visualize galaxy morphology through the lens of neural networks, leveraging Dataset Distillation, a recent deep-learning methodology whose primary objective is to distill knowledge from a large dataset and condense it into a compact synthetic dataset, such that a model trained on this synthetic dataset achieves performance comparable to a model trained on the full dataset. We curate a class-balanced, medium-sized, high-confidence version of the Galaxy Zoo 2 dataset and proceed with dataset distillation from our accurate NN classifier to create synthesized prototypical images of galaxy morphological features, demonstrating its effectiveness. Of independent interest, we introduce a self-adaptive version of the state-of-the-art Matching Trajectory algorithm to automate the distillation process and show enhanced performance on computer vision benchmarks. Comment: Accepted to NeurIPS Workshop on Machine Learning and the Physical Sciences, 202
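For readers unfamiliar with trajectory-matching-based dataset distillation, the following toy sketch shows the core loop: train a student for a few steps on learnable synthetic data starting from an "expert" checkpoint, then update the synthetic data so the student's endpoint matches a later expert checkpoint. It uses a linear classifier and random stand-in checkpoints, and it is not the paper's self-adaptive variant.

```python
# Hedged toy sketch of trajectory matching for dataset distillation.
import torch
import torch.nn.functional as F

d, n_classes, n_syn = 20, 5, 10
torch.manual_seed(0)
syn_x = torch.randn(n_syn, d, requires_grad=True)       # learnable synthetic examples
syn_y = torch.arange(n_syn) % n_classes                 # fixed synthetic labels
opt_syn = torch.optim.SGD([syn_x], lr=0.1)

# stand-in expert checkpoints theta_t and theta_{t+M} (would come from real training)
theta_start = torch.randn(n_classes, d)
theta_target = theta_start + 0.1 * torch.randn(n_classes, d)

for it in range(100):
    theta = theta_start.clone().requires_grad_(True)
    for _ in range(5):                                   # N student steps on synthetic data
        loss = F.cross_entropy(syn_x @ theta.t(), syn_y)
        grad, = torch.autograd.grad(loss, theta, create_graph=True)
        theta = theta - 0.01 * grad
    # match the student's endpoint to the expert's later checkpoint (normalized)
    match = (theta - theta_target).pow(2).sum() / (theta_start - theta_target).pow(2).sum()
    opt_syn.zero_grad()
    match.backward()
    opt_syn.step()
```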
Prognostic and Predictive Value of Three DNA Methylation Signatures in Lung Adenocarcinoma
Background: Lung adenocarcinoma (LUAD) is the leading cause of cancer-related mortality worldwide. Molecular characterization-based methods hold great promise for improving diagnostic accuracy and for predicting treatment response. The DNA methylation patterns of LUAD display great potential as a specific biomarker to complement invasive biopsy and thereby improve early detection.
Method: In this study, based on whole-genome methylation datasets from The Cancer Genome Atlas (TCGA) and several machine learning methods, we evaluated the possibility of using DNA methylation signatures for identifying lymph node metastasis of LUAD, differentiating between tumor tissue and normal tissue, and predicting the overall survival (OS) of LUAD patients. Using regularized logistic regression, we built a classifier based on 3616 CpG sites to identify lymph node metastasis of LUAD. Furthermore, a classifier based on 14 CpG sites was established to differentiate between tumor and normal tissues. Using Least Absolute Shrinkage and Selection Operator (LASSO) Cox regression, we built a 16-CpG-based model to predict the OS of LUAD patients.
Results: With the aid of the 3616-CpG-based classifier, we were able to identify the lymph node metastatic status of patients directly from the methylation signature of the primary tumor tissues. The 14-CpG-based classifier could differentiate between tumor and normal tissues. The area under the receiver operating characteristic (ROC) curve (AUC) for both classifiers reached values close to 1, demonstrating robust classification performance. The 16-CpG-based model showed independent prognostic value in LUAD patients.
Interpretation: These findings will not only facilitate future treatment decisions based on DNA methylation signatures but also enable additional investigations into the utilization of LUAD DNA methylation patterns by different machine learning methods.
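A minimal sketch of the analysis pattern described above, run on synthetic stand-in data rather than TCGA methylation profiles: an L1-regularized logistic classifier for a binary tissue label and an L1-penalized Cox model for overall survival. The feature counts, hyperparameters, and the scikit-learn/lifelines choices here are illustrative assumptions, not the study's pipeline.

```python
# Hedged sketch: sparse methylation classifier + LASSO-style Cox survival model.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n, p = 200, 50                                        # samples x CpG sites (stand-in)
beta = rng.normal(size=p)
X = rng.normal(size=(n, p))                           # beta-value-like methylation features
y = (X @ beta + rng.normal(size=n) > 0).astype(int)   # tumor (1) vs normal (0)

clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
selected = np.flatnonzero(clf.coef_[0])               # sparse CpG signature
print(f"{selected.size} CpG sites retained by the classifier")

# L1-penalized Cox model (l1_ratio=1.0) for overall survival on the selected sites
df = pd.DataFrame(X[:, selected], columns=[f"cg{i}" for i in selected])
df["time"] = rng.exponential(scale=365, size=n)       # synthetic follow-up times (days)
df["event"] = rng.integers(0, 2, size=n)              # synthetic event indicators
cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)
cph.fit(df, duration_col="time", event_col="event")
risk_score = cph.predict_partial_hazard(df)           # per-patient prognostic score
```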
Cooperative Spin Amplification
Quantum amplification is recognized as a key resource for precision measurements. However, most conventional paradigms employ an ensemble of independent particles, which usually limits the performance of quantum amplification in gain, spectral linewidth, etc. Here we demonstrate a new signal-amplification scheme using cooperative 129Xe nuclear spins embedded within a feedback circuit, where the noble-gas spin coherence time is enhanced by at least one order of magnitude. Using this technique, a magnetic field can be substantially pre-enhanced by more than three orders of magnitude and is read out in situ with an embedded 87Rb magnetometer. We realize an ultrahigh magnetic sensitivity of 4.0 fT/√Hz that surpasses the photon-shot noise and is even below the spin-projection noise of the embedded atomic magnetometer, allowing for exciting applications including searches for dark matter with sensitivity well beyond supernova constraints. Our findings extend the physics of quantum amplification to cooperative spin systems and can be generalized to a wide variety of existing sensors, enabling a new class of cooperative quantum sensors. Comment: 7 pages, 4 figures