Leakage discharge separation in multi-leaks pipe networks based on improved Independent Component Analysis with Reference (ICA-R) algorithm
Existing leakage assessment methods lack accuracy and timeliness, making it difficult to meet the needs of water companies. In this paper, a methodology based on the Independent Component Analysis with Reference (ICA-R) algorithm is proposed to give a more accurate estimation of leakage discharge in multi-leak water distribution networks (WDNs) without considering the specific individuality of any single leak. The algorithm is improved to prevent erroneous convergence in multi-leak pipe networks. An example EPANET model and a physical experimental platform were then built to simulate and evaluate the flow in multi-leak WDNs, and the leakage flow rate was calculated with both the improved ICA-R algorithm and the FastICA algorithm. The simulation results show that the improved ICA-R algorithm has better performance.
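The abstract compares the improved ICA-R against FastICA as a baseline. As background, the blind-source-separation step can be illustrated with a plain one-unit FastICA sketch in NumPy; the signals, mixing matrix, and parameters below are illustrative, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical leak/demand flow components (illustrative signals):
# a square-ish burst and a periodic component.
t = np.linspace(0, 8, 2000)
S = np.c_[np.sign(np.sin(3 * t)), np.sin(7 * t)]
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])          # unknown mixing at two flow meters
X = S @ A.T                         # observed mixed flow signals

# Centre and whiten the observations.
X = X - X.mean(axis=0)
d, E = np.linalg.eigh(np.cov(X, rowvar=False))
Z = X @ E @ np.diag(d ** -0.5) @ E.T

# One-unit FastICA (tanh nonlinearity) with deflation for 2 components.
W = np.zeros((2, 2))
for i in range(2):
    w = rng.standard_normal(2)
    w /= np.linalg.norm(w)
    for _ in range(200):
        wx = np.tanh(Z @ w)
        w_new = (Z * wx[:, None]).mean(axis=0) - (1 - wx ** 2).mean() * w
        w_new -= W[:i].T @ (W[:i] @ w_new)      # Gram-Schmidt deflation
        w_new /= np.linalg.norm(w_new)
        done = abs(abs(w_new @ w) - 1) < 1e-10
        w = w_new
        if done:
            break
    W[i] = w

S_hat = Z @ W.T
# Up to sign/permutation, each recovered component matches one true source.
corr = np.abs(np.corrcoef(S_hat.T, S.T))[:2, 2:]
print(corr.max(axis=1))             # should be close to 1 for both components
```

The ICA-R variant additionally steers extraction toward a reference signal; the sketch above shows only the unconstrained separation that FastICA performs.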
wmh_seg: Transformer based U-Net for Robust and Automatic White Matter Hyperintensity Segmentation across 1.5T, 3T and 7T
White matter hyperintensity (WMH) remains the top imaging biomarker for
neurodegenerative diseases. Robust and accurate segmentation of WMH holds
paramount significance for neuroimaging studies. The growing shift from 3T to
7T MRI necessitates robust tools for harmonized segmentation across field
strengths and artifacts. Recent deep learning models exhibit promise in WMH
segmentation but still face challenges, including diverse training data
representation and limited analysis of MRI artifacts' impact. To address these challenges,
we introduce wmh_seg, a novel deep learning model leveraging a
transformer-based encoder from SegFormer. wmh_seg is trained on an unmatched
dataset, including 1.5T, 3T, and 7T FLAIR images from various sources,
alongside artificially added MR artifacts. Our approach bridges gaps in
training diversity and artifact analysis. Our model demonstrated stable
performance across magnetic field strengths, scanner manufacturers, and common
MR imaging artifacts. Despite the unique inhomogeneity artifacts on ultra-high
field MR images, our model still offers robust and stable segmentation on 7T
FLAIR images. Our model, to date, is the first that offers quality white matter
lesion segmentation on 7T FLAIR images.
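The abstract mentions training on FLAIR images with artificially added MR artifacts. A hedged sketch of one common style of such augmentation (not the wmh_seg recipe): a smooth multiplicative bias field mimicking field inhomogeneity, plus Rician-like noise.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_mr_artifacts(img, bias_strength=0.3, noise_sigma=0.05):
    """Toy artifact augmentation: smooth multiplicative bias field
    (mimicking B1 inhomogeneity) plus Rician-like noise.
    Illustrative only -- not the wmh_seg training recipe."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    bias = 1.0 + bias_strength * np.sin(np.pi * xx / w) * np.sin(np.pi * yy / h)
    biased = img * bias
    # Rician noise: magnitude of a complex Gaussian perturbation.
    real = biased + noise_sigma * rng.standard_normal(img.shape)
    imag = noise_sigma * rng.standard_normal(img.shape)
    return np.hypot(real, imag)

slice_ = rng.random((64, 64))       # stand-in for a FLAIR slice
aug = add_mr_artifacts(slice_)
print(aug.shape)                    # (64, 64)
```

Training on such corrupted copies is what lets a segmenter stay stable under the inhomogeneity artifacts typical of ultra-high-field images.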
Leveraging The Finite States of Emotion Processing to Study Late-Life Mental Health
Traditional approaches in mental health research apply General Linear Models
(GLM) to describe the longitudinal dynamics of observed psycho-behavioral
measurements (questionnaire summary scores). Similarly, GLMs are also applied
to characterize relationships between neurobiological measurements (regional
fMRI signals) and perceptual stimuli or other regional signals. While these
methods are useful for exploring linear correlations among the isolated signals
of those constructs (i.e., summary scores or fMRI signals), these classical
frameworks fall short in providing insights into the comprehensive system-level
dynamics underlying observable changes. Hidden Markov Models (HMMs) are
statistical models that enable us to describe the sequential relations among
multiple observable constructs, and when applied through the lens of Finite
State Automata (FSA), can provide a more integrated and intuitive framework for
modeling and understanding the underlying controller (the prescription for how
to respond to inputs) that fundamentally defines any system, as opposed to
linearly correlating output signals produced by the controller. We present a
simple and intuitive HMM processing pipeline, vcHMM (see Preliminary Data), that
highlights FSA theory and is applicable to both behavioral analysis of
questionnaire data and fMRI data. HMMs offer theoretical promise, as they are
computationally equivalent to the FSA, the control processor of a Turing
Machine (TM). The dynamic programming Viterbi algorithm is used to leverage the
HMM; it efficiently identifies the most likely sequence of hidden states. The
vcHMM pipeline leverages this grammar to understand how behavior and neural
activity relate to depression.
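As background for the Viterbi step described in the abstract, a minimal log-domain implementation on a toy two-state HMM; the states, observation bins, and probabilities are illustrative, not the paper's vcHMM parameters:

```python
import numpy as np

# Toy 2-state HMM over 3 discrete questionnaire-response bins
# (all names and numbers illustrative).
states = ["low_mood", "high_mood"]
start = np.array([0.6, 0.4])
trans = np.array([[0.7, 0.3],
                  [0.4, 0.6]])
emit = np.array([[0.5, 0.4, 0.1],
                 [0.1, 0.3, 0.6]])

def viterbi(obs):
    """Most likely hidden-state path via dynamic programming (log domain)."""
    n, T = len(start), len(obs)
    logd = np.log(start) + np.log(emit[:, obs[0]])
    back = np.zeros((T, n), dtype=int)
    for t in range(1, T):
        scores = logd[:, None] + np.log(trans)   # scores[i, j]: state i -> j
        back[t] = scores.argmax(axis=0)
        logd = scores.max(axis=0) + np.log(emit[:, obs[t]])
    path = [int(logd.argmax())]
    for t in range(T - 1, 0, -1):                # backtrack
        path.append(int(back[t, path[-1]]))
    return [states[i] for i in reversed(path)]

print(viterbi([0, 0, 2, 2]))
# -> ['low_mood', 'low_mood', 'high_mood', 'high_mood']
```

The same decoding applies unchanged whether the observations are binned questionnaire scores or discretized fMRI features.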
NTU4DRadLM: 4D Radar-centric Multi-Modal Dataset for Localization and Mapping
Simultaneous Localization and Mapping (SLAM) is moving towards a robust
perception age. However, LiDAR- and visual-SLAM may easily fail in adverse
conditions (rain, snow, smoke, fog, etc.). In comparison, SLAM based on 4D
radar, thermal camera, and IMU can work robustly, but only a few related
studies can be found. A major reason is the lack of related datasets, which
seriously hinders the research. Even though some datasets based on 4D radar
have been proposed in the past four years, they are mainly designed for object
detection rather than SLAM. Furthermore, they normally do not include a thermal
camera. Therefore, in
this paper, NTU4DRadLM is presented to meet this requirement. The main
characteristics are: 1) It is the only dataset that simultaneously includes all
6 sensors: 4D radar, thermal camera, IMU, 3D LiDAR, visual camera and RTK GPS.
2) Specifically designed for SLAM tasks, which provides fine-tuned ground truth
odometry and intentionally formulated loop closures. 3) Considered both
a low-speed robot platform and a high-speed unmanned vehicle platform. 4) Covered
structured, unstructured, and semi-structured environments. 5) Considered both
middle- and large-scale outdoor environments, i.e., the 6 trajectories range
from 246m to 6.95km. 6) Comprehensively evaluated three types of SLAM
algorithms. In total, the dataset covers around 17.6 km, 85 min, and 50 GB, and
it will be accessible from this link: https://github.com/junzhang2016/NTU4DRadLM
Comment: 2023 IEEE International Intelligent Transportation Systems Conference
(ITSC 2023)
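Point 6 mentions evaluating SLAM algorithms on the trajectories; a standard accuracy metric for that is absolute trajectory error (ATE). A minimal sketch using translation-only alignment (full ATE uses an Umeyama rigid alignment; the toy trajectory below is illustrative):

```python
import numpy as np

def ate_rmse(gt, est):
    """RMSE of absolute trajectory error after translation alignment.
    Simplified: full ATE also aligns rotation (Umeyama)."""
    gt = gt - gt.mean(axis=0)
    est = est - est.mean(axis=0)
    return float(np.sqrt(((gt - est) ** 2).sum(axis=1).mean()))

# Toy 246 m straight track (matching the shortest trajectory length above)
# and an estimate with a constant per-axis offset.
t = np.linspace(0, 1, 100)[:, None]
gt = np.hstack([t * 246.0, np.zeros_like(t), np.zeros_like(t)])
est = gt + 0.1
print(ate_rmse(gt, est))            # ~0: a constant offset vanishes after alignment
```

On real runs the estimate drifts rather than shifting uniformly, so the aligned RMSE stays nonzero and ranks the SLAM algorithms.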
A comprehensive overview of diffuse correlation spectroscopy: theoretical framework, recent advances in hardware, analysis, and applications
Diffuse correlation spectroscopy (DCS) is a powerful tool for assessing microvascular hemodynamics in deep tissues. Recent advances in sensors, lasers, and deep learning have further boosted the development of new DCS methods. However, newcomers might feel overwhelmed, not only by the already complex DCS theoretical framework but also by the broad range of component options and system architectures. To facilitate entry into this exciting field, we present a comprehensive review of DCS hardware architectures (continuous-wave, frequency-domain, and time-domain) and summarize the corresponding theoretical models. Further, we discuss new applications of highly integrated silicon single-photon avalanche diode (SPAD) sensors in DCS, compare SPADs with existing sensors, and review other components (lasers, fibers, and correlators), as well as new data analysis tools, including deep learning. Potential applications in medical diagnosis are discussed, and an outlook on future directions is provided to offer effective guidance for embarking on DCS research.
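The core measured quantity in continuous-wave DCS is the normalized intensity autocorrelation g2(tau), related to the field autocorrelation g1 through the Siegert relation g2 = 1 + beta * |g1|^2; blood flow is inferred from how fast g2 decays. A hedged sketch computing g2 from a synthetic intensity trace (an AR(1) stand-in for speckle fluctuations; parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic detector signal with an exponential correlation time.
n, tau_c = 100_000, 50              # samples and correlation time (in samples)
a = np.exp(-1.0 / tau_c)
field = np.empty(n)
field[0] = rng.standard_normal()
for i in range(1, n):               # AR(1): exponentially decaying correlation
    field[i] = a * field[i - 1] + np.sqrt(1 - a * a) * rng.standard_normal()
intensity = 1.0 + 0.5 * field       # mean-offset "intensity" trace

def g2(x, max_lag):
    """Normalized intensity autocorrelation g2(tau) = <I(t)I(t+tau)> / <I>^2."""
    mean_sq = x.mean() ** 2
    out = np.empty(max_lag)
    out[0] = (x * x).mean() / mean_sq
    for lag in range(1, max_lag):
        out[lag] = (x[:-lag] * x[lag:]).mean() / mean_sq
    return out

curve = g2(intensity, 200)
# g2 decays from its zero-lag value toward 1 once the lag exceeds tau_c.
print(curve[0], curve[-1])
```

Hardware correlators and the SPAD-array pipelines surveyed in the review compute essentially this curve, at nanosecond-to-microsecond lags, before a model fit extracts the flow index.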
The Genomes of Oryza sativa: A History of Duplications
We report improved whole-genome shotgun sequences for the genomes of indica and japonica rice, both with multimegabase contiguity, or almost 1,000-fold improvement over the drafts of 2002. Tested against a nonredundant collection of 19,079 full-length cDNAs, 97.7% of the genes are aligned, without fragmentation, to the mapped super-scaffolds of one or the other genome. We introduce a gene identification procedure for plants that does not rely on similarity to known genes to remove erroneous predictions resulting from transposable elements. Using the available EST data to adjust for residual errors in the predictions, the estimated gene count is at least 38,000–40,000. Only 2%–3% of the genes are unique to any one subspecies, comparable to the amount of sequence that might still be missing. Despite this lack of variation in gene content, there is enormous variation in the intergenic regions. At least a quarter of the two sequences could not be aligned, and where they could be aligned, single nucleotide polymorphism (SNP) rates varied from as little as 3.0 SNP/kb in the coding regions to 27.6 SNP/kb in the transposable elements. A more inclusive new approach for analyzing duplication history is introduced here. It reveals an ancient whole-genome duplication, a recent segmental duplication on Chromosomes 11 and 12, and massive ongoing individual gene duplications. We find 18 distinct pairs of duplicated segments that cover 65.7% of the genome; 17 of these pairs date back to a common time before the divergence of the grasses. More important, ongoing individual gene duplications provide a never-ending source of raw material for gene genesis and are major contributors to the differences between members of the grass family.
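The SNP rates quoted above (3.0 SNP/kb in coding regions vs. 27.6 SNP/kb in transposable elements) are simply mismatch counts per kilobase of aligned sequence. A minimal sketch of the calculation on toy sequences (the sequences and positions below are illustrative, not rice data):

```python
def snp_per_kb(a, b):
    """SNPs per kilobase between two equal-length aligned sequences.
    Toy helper: counts substitutions, ignoring gap columns."""
    assert len(a) == len(b)
    snps = sum(x != y and x != "-" and y != "-" for x, y in zip(a, b))
    return 1000.0 * snps / len(a)

indica = "ATGC" * 250                    # 1 kb toy "coding" stretch
jap = list(indica)
for pos in (100, 400, 900):              # three substitutions -> 3 SNP/kb
    jap[pos] = "T"
japonica = "".join(jap)
print(snp_per_kb(indica, japonica))      # 3.0, the coding-region rate quoted above
```

The roughly ninefold higher rate in transposable elements then falls out of running the same count over TE-annotated alignment blocks instead of coding blocks.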