RobustSTL: A Robust Seasonal-Trend Decomposition Algorithm for Long Time Series
Decomposing complex time series into trend, seasonality, and remainder
components is an important task to facilitate time series anomaly detection and
forecasting. Although numerous methods have been proposed, there are still many
time series characteristics exhibited by real-world data that are not properly
addressed, including 1) the ability to handle seasonality fluctuation and
shift, and abrupt changes in trend and remainder; 2) robustness to data with
anomalies; 3) applicability to time series with long seasonality periods. In
this paper, we propose a novel and generic time series decomposition algorithm
to address these challenges. Specifically, we extract the trend component
robustly by solving a regression problem using the least absolute deviations
loss with sparse regularization. Based on the extracted trend, we apply
non-local seasonal filtering to extract the seasonality component. This process
is repeated until an accurate decomposition is obtained. Experiments on
different synthetic and real-world time series datasets demonstrate that our
method outperforms existing solutions.
Comment: Accepted to the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI 2019), 9 pages, 5 figures
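To make the trend-extraction step described above concrete, here is a minimal sketch (not the authors' implementation) of robust trend extraction using a least absolute deviations fit with an l1 penalty on the trend's second differences, written with the cvxpy modeling library; the regularization weight lam is an assumed illustrative value.

import numpy as np
import cvxpy as cp

def extract_trend_lad(y, lam=10.0):
    """Robust trend via LAD loss plus sparse (l1) regularization on second
    differences; a sketch inspired by the abstract, not RobustSTL itself."""
    n = len(y)
    tau = cp.Variable(n)                       # trend component
    residual = cp.norm(y - tau, 1)             # least absolute deviations loss
    # second-difference operator encourages a piecewise-linear (sparse-change) trend
    D = np.diff(np.eye(n), n=2, axis=0)
    regularizer = cp.norm(D @ tau, 1)
    problem = cp.Problem(cp.Minimize(residual + lam * regularizer))
    problem.solve()
    return tau.value

# toy usage: a noisy series with an abrupt trend change and injected anomalies
t = np.arange(200)
y = np.where(t < 100, 0.05 * t, 5 + 0.2 * (t - 100)) + np.random.randn(200)
y[::37] += 10
trend = extract_trend_lad(y)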
A Fast Single Image Haze Removal Algorithm Using Color Attenuation Prior
Single image haze removal has been a challenging problem due to its ill-posed nature. In this paper, we propose a simple but powerful color attenuation prior for haze removal from a single input hazy image. By creating a linear model of the scene depth of the hazy image under this novel prior and learning the parameters of the model with a supervised learning method, the depth information can be well recovered. With the depth map of the hazy image, we can easily estimate the transmission and restore the scene radiance via the atmospheric scattering model, and thus effectively remove the haze from a single image. Experimental results show that the proposed approach outperforms state-of-the-art haze removal algorithms in terms of both efficiency and the dehazing effect.
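As a rough illustration of the recovery step described above, the following sketch assumes a depth map d has already been estimated and treats the scattering coefficient beta and atmospheric light A as fixed illustrative constants; it estimates the transmission and inverts the standard atmospheric scattering model I = J*t + A*(1 - t).

import numpy as np

def dehaze_from_depth(I, d, beta=1.0, A=0.95, t_min=0.1):
    """Recover scene radiance J from a hazy image I (H x W x 3 float array
    in [0, 1]) given an estimated depth map d (H x W). beta and A are
    illustrative constants, not values from the paper."""
    t = np.exp(-beta * d)                      # transmission from depth
    t = np.clip(t, t_min, 1.0)                 # avoid division by tiny values
    J = (I - A) / t[..., None] + A             # invert I = J*t + A*(1 - t)
    return np.clip(J, 0.0, 1.0)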
Deformable Image Registration with Inclusion of Autodetected Homologous Tissue Features
A novel deformable registration algorithm is proposed for application in radiation therapy. The algorithm starts with autodetection of a number of points with distinct tissue features. The feature points are then matched using the scale-invariant feature transform (SIFT) method. The associated feature point pairs serve as landmarks for the subsequent thin plate spline (TPS) interpolation. Several registration experiments using both digital phantom and clinical data demonstrate the accuracy and efficiency of the method. For the 3D phantom case, over 85% of the test markers have an error of less than 2 mm, and 3D feature point association takes only 2-3 minutes. The proposed method provides a clinically practical solution and should be valuable for various image-guided radiation therapy (IGRT) applications
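Below is a minimal 2D sketch of the pipeline this abstract outlines (feature matching followed by thin plate spline interpolation), assuming OpenCV's SIFT implementation and SciPy's RBF interpolator as stand-ins for the authors' 3D components.

import cv2
import numpy as np
from scipy.interpolate import RBFInterpolator

def match_and_warp(fixed, moving):
    """Detect and match SIFT features between two grayscale images, then build
    a thin-plate-spline displacement model from the matched landmark pairs."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(fixed, None)
    kp2, des2 = sift.detectAndCompute(moving, None)
    # ratio-test matching keeps only distinctive feature pairs
    matcher = cv2.BFMatcher()
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.7 * n.distance]
    src = np.float32([kp1[m.queryIdx].pt for m in good])
    dst = np.float32([kp2[m.trainIdx].pt for m in good])
    # thin plate spline interpolation of the landmark displacements
    tps = RBFInterpolator(src, dst - src, kernel='thin_plate_spline')
    return tps   # maps any (x, y) in the fixed image to its displacement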
The Influences of Key Factors on the Consequences Following the Natural Gas Leakage from Pipeline
The effects of environmental dispersion factors (i.e. atmospheric stability, wind speed, temperature, humidity, and ground roughness) and source release factors (i.e. pipeline diameter, length, pressure, and release opening area) on the suffocation distance, flammable vapor cloud distance, overpressure distance, and thermal radiation distance after natural gas is released from a pipeline were evaluated and analyzed. The results show that all the environmental dispersion factors except humidity affect the flammable vapor cloud distance. A more stable atmospheric condition, lower wind speed, and smaller ground roughness lead to a longer flammable vapor cloud distance, while the atmospheric temperature has a very limited influence on it. Higher ambient temperature and larger humidity result in a longer downwind thermal radiation distance, whereas atmospheric stability, wind speed, and ground roughness have almost no effect on it. All four source release factors significantly influence the flammable vapor cloud distance and thermal radiation distance, owing to differences in release amount, release rate, and initial momentum
Partition of a Binary Matrix into k
A biclustering problem consists of objects and an attribute vector for each object. Biclustering aims at finding a bicluster—a subset of objects that exhibit similar behavior across a subset of attributes, or vice versa. Biclustering in matrices with binary entries (“0”/“1”) can be simplified into the problem of finding submatrices with entries of “1.” In this paper, we consider a variant of the biclustering problem: the k-submatrix partition of binary matrices problem. The input of the problem contains an n×m matrix with entries (“0”/“1”) and a constant positive integer k. The k-submatrix partition of binary matrices problem is to find exactly k submatrices with entries of “1” such that these k submatrices are pairwise row and column exclusive and each row (column) in the matrix occurs in exactly one of the k submatrices. We discuss the complexity of the k-submatrix partition of binary matrices problem and show that the problem is NP-hard for any k≥3 by reduction from a biclustering problem in bipartite graphs
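To make the problem statement concrete, here is a small hypothetical checker (the names and index conventions are my own, not from the paper) that verifies whether a proposed family of k submatrices, each given by a row set and a column set, is a valid k-submatrix partition of a binary matrix.

import numpy as np

def is_valid_k_partition(M, blocks):
    """M: n x m numpy array with 0/1 entries.
    blocks: list of (row_indices, col_indices) pairs, one per submatrix.
    Checks that each block is all ones, the blocks are pairwise row- and
    column-exclusive, and every row and column occurs in exactly one block."""
    n, m = M.shape
    used_rows, used_cols = [], []
    for rows, cols in blocks:
        if not np.all(M[np.ix_(rows, cols)] == 1):   # block must contain only 1s
            return False
        used_rows.extend(rows)
        used_cols.extend(cols)
    # exclusivity plus coverage: each row/column appears exactly once overall
    return sorted(used_rows) == list(range(n)) and sorted(used_cols) == list(range(m))

# toy usage: a 4x4 matrix that splits into k = 2 all-ones blocks
M = np.array([[1, 1, 0, 0],
              [1, 1, 0, 0],
              [0, 0, 1, 1],
              [0, 0, 1, 1]])
print(is_valid_k_partition(M, [([0, 1], [0, 1]), ([2, 3], [2, 3])]))  # True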
UOD: Universal One-shot Detection of Anatomical Landmarks
One-shot medical landmark detection has attracted much attention and achieved
great success thanks to its label-efficient training process. However, existing
one-shot learning methods are highly specialized in a single domain and suffer
heavily from domain preference when applied to multi-domain unlabeled data.
Moreover, one-shot learning is not robust: its performance drops when a
sub-optimal image is annotated. To tackle these issues, we develop a
domain-adaptive one-shot landmark detection framework for multi-domain medical
images, named Universal One-shot Detection (UOD). UOD consists of two stages
and two corresponding universal models that are designed as combinations of
domain-specific modules and domain-shared modules. In the first stage, a
domain-adaptive convolution model is trained in a self-supervised manner to
generate pseudo landmark labels. In the second stage, we design a
domain-adaptive transformer to eliminate domain preference and build the global
context for multi-domain data. Even though only one annotated sample from each
domain is available for training, the domain-shared modules help UOD aggregate
all one-shot samples to detect more robust and accurate landmarks. We evaluated
the proposed UOD both qualitatively and quantitatively on three widely used
public X-ray datasets from different anatomical domains (i.e., head, hand,
chest) and obtained state-of-the-art performance in each domain.
Comment: Early accepted by MICCAI 2023. 11 pages, 4 figures, 2 tables
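The following is a minimal sketch (my own illustration, not the authors' architecture) of the core idea of combining domain-shared and domain-specific modules: a shared convolutional backbone paired with per-domain heads selected by a domain index at the forward pass; all layer sizes are assumed.

import torch
import torch.nn as nn

class DomainAdaptiveConv(nn.Module):
    """Shared feature extractor plus one lightweight head per domain, so
    multi-domain images reuse common features while keeping domain-specific
    parameters (illustrative sizes only)."""
    def __init__(self, num_domains, num_landmarks):
        super().__init__()
        self.shared = nn.Sequential(               # domain-shared module
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.heads = nn.ModuleList([               # domain-specific modules
            nn.Conv2d(64, num_landmarks, 1) for _ in range(num_domains)
        ])

    def forward(self, x, domain_idx):
        features = self.shared(x)
        return self.heads[domain_idx](features)    # per-landmark heatmaps

# toy usage: one grayscale X-ray from domain 0 (e.g., head)
model = DomainAdaptiveConv(num_domains=3, num_landmarks=19)
heatmaps = model(torch.randn(1, 1, 256, 256), domain_idx=0)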
Unsupervised augmentation optimization for few-shot medical image segmentation
Augmentation parameters matter for few-shot semantic segmentation since they
directly affect the training outcome by feeding the networks with differently
perturbed samples. However, searching for optimal augmentation parameters for
few-shot segmentation models without annotations is a challenge that current
methods fail to address. In this paper, we propose a framework to determine the
``optimal'' parameters without human annotations by solving a
distribution-matching problem between the intra-instance and intra-class
similarity distributions. Here, the intra-instance similarity describes the
similarity between the original sample of a particular anatomy and its
augmented versions, while the intra-class similarity describes the similarity
between the selected sample and the other samples in the same class. Extensive
experiments demonstrate the superiority of our optimized augmentation in
boosting few-shot segmentation models. We improve on the top competing method
by 1.27\% and 1.11\% on the Abd-MRI and Abd-CT datasets, respectively, and even
achieve a significant improvement of 3.39\% for SSL-ALP on the left kidney on
the Abd-CT dataset
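A rough sketch of how such a distribution match could be scored, assuming cosine similarities over embedding vectors and the 1-D Wasserstein distance as the matching criterion (both are my assumptions; the paper's exact distributions and metric may differ):

import numpy as np
from scipy.stats import wasserstein_distance

def cosine_sim(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def augmentation_score(original, augmented, class_samples):
    """Lower is better: distance between the intra-instance similarity
    distribution (original vs. its augmented versions) and the intra-class
    similarity distribution (original vs. other samples of the same class)."""
    intra_instance = [cosine_sim(original, a) for a in augmented]
    intra_class = [cosine_sim(original, s) for s in class_samples]
    return wasserstein_distance(intra_instance, intra_class)

# toy usage with random 128-d embeddings
rng = np.random.default_rng(0)
orig = rng.normal(size=128)
augs = [orig + 0.1 * rng.normal(size=128) for _ in range(20)]
peers = [rng.normal(size=128) for _ in range(20)]
print(augmentation_score(orig, augs, peers))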
UCDFormer: Unsupervised Change Detection Using a Transformer-driven Image Translation
Change detection (CD) by comparing two bi-temporal images is a crucial task
in remote sensing. Since it requires no cumbersome labeled change information,
unsupervised CD has attracted extensive attention in the community. However,
existing unsupervised CD approaches rarely consider the seasonal and style
differences caused by the illumination and atmospheric conditions in
multi-temporal images. To this end, we introduce a change detection setting
with domain shift for remote sensing images. Furthermore, we present a novel
unsupervised CD method using a lightweight transformer, called UCDFormer.
Specifically, a transformer-driven image translation module, composed of a
lightweight transformer and a domain-specific affinity weight, is first
proposed to mitigate the domain shift between the two images with real-time
efficiency. After image translation, we can generate the difference map between
the translated before-event image and the original after-event image. Then, a
novel reliable pixel extraction module is proposed to select significantly
changed/unchanged pixel positions by fusing the pseudo change maps produced by
fuzzy c-means clustering and adaptive thresholding. Finally, a binary change
map is obtained based on these selected pixel pairs and a binary classifier.
Experimental results on different unsupervised CD tasks with seasonal and style
changes demonstrate the effectiveness of the proposed UCDFormer. For example,
compared with several other related methods, UCDFormer improves the Kappa
coefficient by more than 12\%. In addition, UCDFormer achieves excellent
performance for earthquake-induced landslide detection in large-scale
applications. The code is available at
\url{https://github.com/zhu-xlab/UCDFormer}
Comment: 16 pages, 7 figures, IEEE Transactions on Geoscience and Remote Sensing
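As a rough sketch of the reliable-pixel selection idea (fusing a clustering-based pseudo change map with a threshold-based one), the following code clusters difference-map values with a tiny two-cluster fuzzy c-means and combines the result with an Otsu threshold; it is an assumed simplification, not the UCDFormer module itself.

import numpy as np
from skimage.filters import threshold_otsu

def fuzzy_cmeans_1d(x, m=2.0, iters=50):
    """Two-cluster fuzzy c-means on a 1-D array; returns each value's
    membership in the higher-centered ('changed') cluster."""
    centers = np.array([x.min(), x.max()], dtype=float)
    for _ in range(iters):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
        centers = (u ** m).T @ x / (u ** m).sum(axis=0)
    return u[:, np.argmax(centers)]

def reliable_pixels(diff_map, hi=0.9, lo=0.1):
    """Select confidently changed/unchanged pixels by fusing fuzzy c-means
    memberships with an adaptive (Otsu) threshold on the difference map."""
    flat = diff_map.ravel().astype(float)
    membership = fuzzy_cmeans_1d(flat).reshape(diff_map.shape)
    otsu_changed = diff_map > threshold_otsu(diff_map)
    changed = (membership > hi) & otsu_changed       # both maps agree: changed
    unchanged = (membership < lo) & ~otsu_changed    # both maps agree: unchanged
    return changed, unchanged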