1,115 research outputs found
RNA-Seq analysis uncovers non-coding small RNA system of Mycobacterium neoaurum in the metabolism of sterols to accumulate steroid intermediates
Additional file 6: Figure S2. Negative networks of differentially expressed sRNA candidates and their target genes. (a) Mn-CC/C; (b) Mn-9OHAD/CC; (c) Mn-ADD/CC; (d) Mn-BNA/CC. Squares represent upregulated (amaranth) or downregulated (blue) sRNA candidates; circles represent putative upregulated (red) or downregulated (green) target genes; links represent the regulation of sRNAs on their target genes
Self-supervised remote sensing feature learning: Learning Paradigms, Challenges, and Future Works
Deep learning has achieved great success in learning features from massive
remote sensing images (RSIs). To better understand the connection between
feature learning paradigms (e.g., unsupervised feature learning (USFL),
supervised feature learning (SFL), and self-supervised feature learning
(SSFL)), this paper analyzes and compares them from the perspective of feature
learning signals, and gives a unified feature learning framework. Under this
unified framework, we analyze the advantages of SSFL over the other two
learning paradigms in RSI understanding tasks and give a comprehensive review
of the existing SSFL work in RS, including the pre-training dataset,
self-supervised feature learning signals, and the evaluation methods. We
further analyze the effect of SSFL signals and pre-training data on the learned
features to provide insights for improving the RSI feature learning. Finally,
we briefly discuss some open problems and possible research directions.
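A widely used family of self-supervised feature learning signals is contrastive: embeddings of two augmented views of the same image are pulled together while other images in the batch are pushed away. A minimal NumPy sketch of the InfoNCE loss, one illustrative example of such a signal (not code from the survey):

```python
import numpy as np

def info_nce(z1, z2, tau=0.1):
    """InfoNCE loss between two augmented views.

    z1, z2 : (N, D) L2-normalized embeddings; z1[i] and z2[i] are the
    two views of sample i (the positive pair), all other rows are negatives.
    tau is an illustrative temperature choice.
    """
    logits = z1 @ z2.T / tau                           # (N, N) similarities
    logits -= logits.max(axis=1, keepdims=True)        # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))                 # positives on diagonal
```

When the matched views are more similar than the mismatched ones, the loss is low; shuffling the pairing drives it up, which is exactly the learning signal the encoder is trained against.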
Visual landmark sequence-based indoor localization
This paper presents a method that uses common objects as landmarks for smartphone-based indoor localization and navigation. First, a topological map marking the relative positions of common objects such as doors, stairs and toilets is generated from the floor plan. Second, a computer vision technique employing the latest deep learning technology is developed to detect common indoor objects in videos captured by a smartphone. Third, a second-order hidden Markov model is applied to match the detected indoor landmark sequence to the topological map. We evaluate our method on videos captured by users holding smartphones while walking through the corridors of an office building. The experiments show that the computer vision technique accurately and reliably detects 10 classes of common indoor objects, and that the second-order hidden Markov model reliably matches the detected landmark sequence to the topological map. This work demonstrates that computer vision and machine learning techniques can play a very useful role in developing smartphone-based indoor positioning applications.
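The matching step above — aligning a detected landmark sequence to the topological map with a second-order hidden Markov model — amounts to second-order Viterbi decoding, where the transition probability of a state depends on the previous two states. A minimal NumPy sketch under assumed log-probability inputs (illustrative; not the authors' implementation):

```python
import numpy as np

def viterbi2(obs, log_pi, log_A1, log_A2, log_B):
    """Second-order Viterbi decoding.

    obs    : length-T sequence of observed landmark classes
    log_pi : (N,)      initial state log-probabilities
    log_A1 : (N, N)    first-step transition log-probabilities
    log_A2 : (N, N, N) log P(s_t = k | s_{t-2} = i, s_{t-1} = j)
    log_B  : (N, M)    emission log-probs (observed class given map node)
    """
    T, N = len(obs), log_pi.shape[0]
    # delta[j, k] = best log-prob of a prefix whose last two states are (j, k)
    delta = (log_pi[:, None] + log_B[:, obs[0]][:, None]
             + log_A1 + log_B[None, :, obs[1]])
    back = np.zeros((T, N, N), dtype=int)
    for t in range(2, T):
        # scores[i, j, k]: extend a path ending in pair (i, j) with state k
        scores = delta[:, :, None] + log_A2 + log_B[None, None, :, obs[t]]
        back[t] = np.argmax(scores, axis=0)    # best i for each new pair (j, k)
        delta = np.max(scores, axis=0)
    # backtrack from the best final pair (s_{T-2}, s_{T-1})
    j, k = np.unravel_index(np.argmax(delta), delta.shape)
    path = [int(k), int(j)]                    # stored in reverse order
    for t in range(T - 1, 1, -1):
        # path[-1] = s_{t-1}, path[-2] = s_t
        path.append(int(back[t, path[-1], path[-2]]))
    return path[::-1]
```

In the paper's setting the transition tables would encode adjacency in the topological map, so that decoded state sequences correspond to physically walkable routes past the detected landmarks.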
Bilinear effect in complex systems
The distribution of the lifetime of Chinese dynasties (as well as that of the
British Isles and Japan) in a linear Zipf plot is found to consist of two
straight lines intersecting at a transition point. This two-section
piecewise-linear distribution is different from the power law or the stretched
exponential distribution, and is called the bilinear effect for short. With
assumptions mimicking the organization of ancient Chinese regimes, a 3-layer
network model is constructed. Numerical results of this model show the bilinear
effect, providing a plausible explanation of the historical data. Bilinear
effect in two other social systems is presented, indicating that such a
piecewise-linear effect is widespread in social systems.
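The two-section piecewise-linear shape in a linear Zipf plot can be identified by scanning candidate breakpoints and minimizing the total squared error of two least-squares lines. A small NumPy sketch on synthetic data (illustrative; not the authors' code):

```python
import numpy as np

def bilinear_fit(y):
    """Fit two straight lines to a ranked (Zipf) sequence y[0] >= y[1] >= ...

    Returns the index at which the second line begins, chosen to minimize
    the combined sum of squared residuals of the two least-squares fits.
    """
    x = np.arange(1, len(y) + 1, dtype=float)   # rank on the x-axis
    best_sse, best_b = np.inf, None
    for b in range(2, len(y) - 1):              # each segment needs >= 2 points
        sse = 0.0
        for xs, ys in ((x[:b], y[:b]), (x[b:], y[b:])):
            coef = np.polyfit(xs, ys, 1)        # slope and intercept
            sse += float(np.sum((np.polyval(coef, xs) - ys) ** 2))
        if sse < best_sse:
            best_sse, best_b = sse, b
    return best_b
```

On data generated from two exact line segments, the scan recovers the transition point; on real lifetime data the same scan locates the kink between the two Zipf regimes.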
SMURF: Spatial Multi-Representation Fusion for 3D Object Detection with 4D Imaging Radar
The 4D millimeter-wave (mmWave) radar is a promising technology for vehicle
sensing due to its cost-effectiveness and operability in adverse weather
conditions. However, the adoption of this technology has been hindered by
sparsity and noise issues in radar point cloud data. This paper introduces
spatial multi-representation fusion (SMURF), a novel approach to 3D object
detection using a single 4D imaging radar. SMURF leverages multiple
representations of radar detection points, including pillarization and density
features of a multi-dimensional Gaussian mixture distribution through kernel
density estimation (KDE). KDE effectively mitigates measurement inaccuracy
caused by limited angular resolution and multi-path propagation of radar
signals. Additionally, KDE helps alleviate point cloud sparsity by capturing
density features. Experimental evaluations on View-of-Delft (VoD) and
TJ4DRadSet datasets demonstrate the effectiveness and generalization ability of
SMURF, outperforming recently proposed 4D imaging radar-based
single-representation models. Moreover, while using 4D imaging radar only,
SMURF still achieves comparable performance to the state-of-the-art 4D imaging
radar and camera fusion-based method, with an increase of 1.22% in the mean
average precision on the bird's-eye view of the TJ4DRadSet dataset and 1.32% in
the 3D mean average precision on the entire annotated area of the VoD dataset. Our
proposed method demonstrates impressive inference speed and addresses the
challenges of real-time detection, with an inference time of no more than 0.05
seconds for most scans on both datasets. This research highlights the benefits
of 4D mmWave radar and is a strong benchmark for subsequent works regarding 3D
object detection with 4D imaging radar
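The density-feature idea — attaching, to each radar detection, its density under a Gaussian mixture estimated from the point cloud itself — can be sketched with a plain Gaussian KDE. A minimal NumPy version with an illustrative isotropic bandwidth (not the paper's implementation, which fuses these features with a pillarized representation):

```python
import numpy as np

def kde_density_features(points, bandwidth=0.5):
    """Per-point density under an isotropic Gaussian KDE over (x, y, z)
    radar detections. The bandwidth value is an illustrative choice.
    """
    pts = np.asarray(points, dtype=float)            # (N, 3)
    # squared pairwise distances between all detections
    d2 = np.sum((pts[:, None, :] - pts[None, :, :]) ** 2, axis=-1)
    k = np.exp(-d2 / (2.0 * bandwidth ** 2))         # Gaussian kernel matrix
    norm = (2.0 * np.pi * bandwidth ** 2) ** (pts.shape[1] / 2)
    return k.sum(axis=1) / (len(pts) * norm)         # (N,) density per point
```

Points inside a cluster of returns receive high density values while isolated (likely noisy or multi-path) returns receive low ones, which is how such a feature can compensate for sparsity and measurement noise.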
LXL: LiDAR Excluded Lean 3D Object Detection with 4D Imaging Radar and Camera Fusion
As an emerging technology and a relatively affordable device, the 4D imaging
radar has already been confirmed effective in performing 3D object detection in
autonomous driving. Nevertheless, the sparsity and noisiness of 4D radar point
clouds hinder further performance improvement, and in-depth studies about its
fusion with other modalities are lacking. On the other hand, most of the
camera-based perception methods transform the extracted image perspective view
features into the bird's-eye view geometrically via "depth-based splatting"
proposed in Lift-Splat-Shoot (LSS), and some researchers exploit other modalities
such as LiDAR or ordinary automotive radar for enhancement. Recently, a few
works have applied the "sampling" strategy for image view transformation,
showing that it outperforms "splatting" even without image depth prediction.
However, the potential of "sampling" is not fully unleashed. In this paper, we
investigate the "sampling" view transformation strategy on the camera and 4D
imaging radar fusion-based 3D object detection. In the proposed model, LXL,
predicted image depth distribution maps and radar 3D occupancy grids are
utilized to aid the image view transformation, a strategy called "radar
occupancy-assisted depth-based sampling". Experiments on the VoD and TJ4DRadSet datasets show that the
proposed method outperforms existing 3D object detection methods by a
significant margin without bells and whistles. Ablation studies demonstrate
that our method performs the best among different enhancement settings
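A "sampling" view transformation pulls image features at the projected locations of 3D query points, rather than splatting image features outward along predicted depth. Its core operation is bilinear sampling of a feature map at continuous pixel coordinates, sketched here in NumPy (the projection from 3D to pixel coordinates is assumed already done; illustrative, not LXL's code):

```python
import numpy as np

def sample_image_features(feat, uv):
    """Bilinearly sample an image feature map at continuous pixel coords.

    feat : (H, W, C) image feature map
    uv   : (N, 2)    pixel coordinates (u = column, v = row) of the
                     projected 3D query points
    """
    H, W, _ = feat.shape
    # clamp so that the four neighbouring texels stay inside the map
    u = np.clip(uv[:, 0], 0.0, W - 1.001)
    v = np.clip(uv[:, 1], 0.0, H - 1.001)
    u0, v0 = np.floor(u).astype(int), np.floor(v).astype(int)
    du, dv = (u - u0)[:, None], (v - v0)[:, None]
    return (feat[v0, u0] * (1 - du) * (1 - dv) +
            feat[v0, u0 + 1] * du * (1 - dv) +
            feat[v0 + 1, u0] * (1 - du) * dv +
            feat[v0 + 1, u0 + 1] * du * dv)          # (N, C)
```

Because every 3D query pulls exactly one interpolated feature, no depth-based scattering is needed; depth distributions and radar occupancy then only reweight which queries are trusted.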
Strain Enhanced Visible–Ultraviolet Absorption of Blue Phosphorene/MoX2 (X = S,Se) Heterolayers
Peer reviewed; full text and supplementary material available from the University of Michigan Deep Blue repository (deepblue.lib.umich.edu)
Highly Efficient Polarized GeS/MoSe2 van der Waals Heterostructure for Water Splitting from Ultraviolet to Near‐Infrared Light
Peer reviewed; full text available from the University of Michigan Deep Blue repository (deepblue.lib.umich.edu)
FRAMU: Attention-based Machine Unlearning using Federated Reinforcement Learning
Machine Unlearning is an emerging field that addresses data privacy issues by
enabling the removal of private or irrelevant data from the Machine Learning
process. Challenges related to privacy and model efficiency arise from the use
of outdated, private, and irrelevant data. These issues compromise both the
accuracy and the computational efficiency of models in both Machine Learning
and Unlearning. To mitigate these challenges, we introduce a novel framework,
Attention-based Machine Unlearning using Federated Reinforcement Learning
(FRAMU). This framework incorporates adaptive learning mechanisms, privacy
preservation techniques, and optimization strategies, making it a well-rounded
solution for handling various data sources, either single-modality or
multi-modality, while maintaining accuracy and privacy. FRAMU's strength lies
in its adaptability to fluctuating data landscapes, its ability to unlearn
outdated, private, or irrelevant data, and its support for continual model
evolution without compromising privacy. Our experiments, conducted on both
single-modality and multi-modality datasets, revealed that FRAMU significantly
outperformed baseline models. Additional assessments of convergence behavior
and optimization strategies further validate the framework's utility in
federated learning applications. Overall, FRAMU advances Machine Unlearning by
offering a robust, privacy-preserving solution that optimizes model performance
while also addressing key challenges in dynamic data environments.