145 research outputs found
Chandra Detection of Intra-cluster X-ray sources in Virgo
We present a survey of X-ray point sources in the nearest and dynamically
young galaxy cluster, Virgo, using archival Chandra observations that sample
the vicinity of 80 early-type member galaxies. The X-ray source populations at the outskirts of these galaxies are of particular interest. We detect a total of
1046 point sources (excluding galactic nuclei) out to a projected
galactocentric radius of 40 kpc and down to a limiting 0.5-8 keV
luminosity of . Based on the cumulative
spatial and flux distributions of these sources, we statistically identify
120 excess sources that are not associated with the main stellar content
of the individual galaxies, nor with the cosmic X-ray background. This excess
is significant at the 3.5σ level when Poisson errors and cosmic variance are taken into account. On the other hand, no significant excess sources are found at the outskirts of a control sample of field galaxies, suggesting that at
least some fraction of the excess sources around the Virgo galaxies are truly
intra-cluster X-ray sources. Assisted with ground-based and HST optical imaging
of Virgo, we discuss the origins of these intra-cluster X-ray sources, in terms
of supernova-kicked low-mass X-ray binaries (LMXBs), globular clusters, LMXBs
associated with the diffuse intra-cluster light, stripped nucleated dwarf
galaxies and free-floating massive black holes.
Comment: 29 pages, 8 figures. Accepted for publication in ApJ. Comments welcome.
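The excess-identification step above is, at heart, a counting argument: compare the observed number of outskirt sources with the sum of the expected galactic and cosmic X-ray background contributions, with a Poisson error and a cosmic-variance term on the background. The minimal sketch below illustrates this; the example counts and the cosmic-variance fraction are hypothetical placeholders, not values from the paper.

```python
import numpy as np

def excess_significance(n_obs, n_galactic, n_cxb, cv_frac=0.1):
    """Estimate the significance of an excess source count.

    n_obs      -- total detected sources in the outskirt regions
    n_galactic -- expected contribution from the galaxies' own stellar content
    n_cxb      -- expected cosmic X-ray background sources over the same area
    cv_frac    -- assumed fractional cosmic variance on the CXB count
                  (hypothetical value; the paper's treatment may differ)
    """
    expected = n_galactic + n_cxb
    excess = n_obs - expected
    # Poisson error on the observed count and cosmic variance on the CXB,
    # added in quadrature.
    sigma = np.sqrt(n_obs + (cv_frac * n_cxb) ** 2)
    return excess, excess / sigma

# Hypothetical illustration (not the paper's decomposition):
excess, signif = excess_significance(n_obs=1046, n_galactic=700, n_cxb=226)
print(f"excess = {excess}, significance = {signif:.1f} sigma")
```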
EMMA: Adding Sequences into a Constraint Alignment with High Accuracy and Scalability (Abstract)
Multiple sequence alignment (MSA) is a crucial precursor to many downstream biological analyses, such as phylogeny estimation [Morrison, 2006], RNA structure prediction [Shapiro et al., 2007], protein structure prediction [Jumper et al., 2021], etc. Obtaining an accurate MSA can be challenging, especially when the dataset is large (i.e., more than 1000 sequences). A key technique for large-scale MSA estimation is to add sequences into an existing alignment. For example, biological knowledge can be used to form a reference alignment on a subset of the sequences, and then the remaining sequences can be added to the reference alignment. Another case where adding sequences into an existing alignment occurs is when new sequences or genomes are added to databases, leading to the opportunity to add the new sequences for each gene in the genome into a growing alignment. A third case is for de novo multiple sequence alignment, where a subset of the sequences is selected and aligned, and then the remaining sequences are added into this "backbone alignment" [Nguyen et al., 2015; Park et al., 2023; Shen et al., 2022; Liu and Warnow, 2023; Park and Warnow, 2023; Yamada et al., 2016]. Thus, adding sequences into existing alignments is a natural problem with multiple applications to biological sequence analysis.
A few methods have been developed to add sequences into an existing alignment, with MAFFT--add [Katoh and Frith, 2012] perhaps being the most well-known. However, several multiple sequence alignment methods that operate in two steps (first extract and align the backbone sequences and then add the remaining sequences into this backbone alignment) also provide utilities for adding sequences into a user-provided alignment. We present EMMA, a new approach for adding "query" sequences into an existing "constraint" alignment. By construction, EMMA never changes the constraint alignment, except through the introduction of additional sites to represent homologies between the query sequences. EMMA uses a divide-and-conquer technique combined with MAFFT--add (using the most accurate setting, MAFFT-linsi--add) to add sequences into a user-provided alignment. We evaluate EMMA by comparing it to MAFFT-linsi--add, MAFFT--add (the default setting), and WITCH-ng-add. We include a range of biological and simulated datasets (nucleotides and proteins) ranging in size from 1000 to almost 200,000 sequences and evaluate alignment accuracy and scalability. MAFFT-linsi--add was the slowest and least scalable method, only able to run on datasets with at most 1000 sequences in this study, but had excellent accuracy (often the best) on those datasets. We also see that EMMA has better recall than WITCH-ng-add and MAFFT--add on large datasets, especially when the backbone alignment is small or clade-based.
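The divide-and-conquer step that EMMA builds around MAFFT-linsi--add can be sketched at a high level, assuming MAFFT's command-line --add mode. The subset decomposition and the final merge, which are where EMMA's actual contribution lies, are only stubbed here; this is a sketch of the data flow, not EMMA's implementation.

```python
import subprocess
from pathlib import Path

def add_queries_divide_and_conquer(constraint_aln, query_subset_files, outdir):
    """Sketch of a divide-and-conquer 'add sequences' step.

    For each file of query sequences (assumed to be pre-assigned to a
    subproblem by some decomposition of the constraint alignment), run
    MAFFT's most accurate mode (L-INS-i) with --add against the constraint
    alignment. Merging the per-subset results back together is omitted.
    """
    outdir = Path(outdir)
    outdir.mkdir(exist_ok=True)
    results = []
    for i, queries in enumerate(query_subset_files):
        out = outdir / f"subset_{i}.aln"
        # --add inserts the query sequences without rearranging the existing
        # sequences; new columns may still appear for query-only homologies.
        with open(out, "w") as fh:
            subprocess.run(
                ["mafft-linsi", "--add", str(queries), str(constraint_aln)],
                stdout=fh, check=True,
            )
        results.append(out)
    return results  # the transitivity merge across subsets is not shown
```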
Sketch2Manga: Shaded Manga Screening from Sketch with Diffusion Models
While manga is a popular entertainment form, creating manga is tedious,
especially adding screentones to the created sketch, namely manga screening.
Unfortunately, no existing method is tailored to automatic manga screening, probably due to the difficulty of generating high-quality shaded high-frequency screentones. Classic manga screening approaches generally require user input to provide screentone exemplars or a reference manga image. Recent deep learning models enable automatic generation by learning from large-scale datasets. However, state-of-the-art models still fail to
generate high-quality shaded screentones due to the lack of a tailored model
and high-quality manga training data. In this paper, we propose a novel
sketch-to-manga framework that first generates a color illustration from the
sketch and then generates a screentoned manga based on the intensity guidance.
Our method significantly outperforms existing methods in generating
high-quality manga with shaded high-frequency screentones.
Comment: 7 pages, 6 figures.
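As a hedged illustration of the two-stage data flow (a color illustration first, then screentones driven by intensity guidance), the stand-ins below use a plain luminance map and a toy ordered-dither screen. The paper's diffusion-based screentone generator is not reproduced here; only the intensity-to-tone conditioning is mimicked.

```python
import numpy as np
from PIL import Image

def intensity_guidance(color_img: Image.Image) -> Image.Image:
    """Extract a grayscale intensity map from the stage-1 color illustration.

    The paper conditions screentone generation on intensity; a plain
    luminance conversion stands in for that signal here.
    """
    return color_img.convert("L")

def screentone_from_intensity(intensity: Image.Image, cell: int = 4) -> Image.Image:
    """Toy halftone-style screening driven by the intensity map.

    A real system would synthesize shaded high-frequency screentones with a
    trained model; this ordered dither only illustrates how local tone
    density can track the intensity value.
    """
    arr = np.asarray(intensity, dtype=np.float32) / 255.0
    yy, xx = np.mgrid[0:arr.shape[0], 0:arr.shape[1]]
    # Per-cell threshold pattern in (0, 1); brighter pixels turn on more dots.
    threshold = ((xx % cell) + (yy % cell) * cell + 0.5) / (cell * cell)
    return Image.fromarray(np.where(arr > threshold, 255, 0).astype(np.uint8))
```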
Improved Diffusion-based Image Colorization via Piggybacked Models
Image colorization has been attracting the research interest of the community for decades. However, existing methods still struggle to provide
satisfactory colorized results given grayscale images due to a lack of
human-like global understanding of colors. Recently, large-scale Text-to-Image
(T2I) models have been exploited to transfer the semantic information from the
text prompts to the image domain, where text provides a global control for
semantic objects in the image. In this work, we introduce a colorization model
piggybacking on the existing powerful T2I diffusion model. Our key idea is to
exploit the color prior knowledge in the pre-trained T2I diffusion model for
realistic and diverse colorization. A diffusion guider is designed to
incorporate the pre-trained weights of the latent diffusion model to output a
latent color prior that conforms to the visual semantics of the grayscale
input. A lightness-aware VQVAE will then generate the colorized result with
pixel-perfect alignment to the given grayscale image. Our model can also
achieve conditional colorization with additional inputs (e.g. user hints and
texts). Extensive experiments show that our method achieves state-of-the-art performance in terms of perceptual quality.
Comment: project page: https://piggyback-color.github.io
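The pixel-perfect alignment constraint can be approximated post hoc: move a proposed colorization into Lab space and force its lightness channel back to the grayscale input, so that structure is preserved while chroma comes from the generative proposal. The paper's lightness-aware VQVAE learns this jointly; the scikit-image snippet below is only a stand-in for the idea.

```python
import numpy as np
from skimage import color

def enforce_lightness(gray: np.ndarray, colorized: np.ndarray) -> np.ndarray:
    """Force the output's lightness to match the grayscale input.

    gray      -- (H, W) float array in [0, 1], the input grayscale image
    colorized -- (H, W, 3) float RGB array in [0, 1], a color proposal
                 (e.g. sampled from a diffusion model)

    Returns an RGB image whose Lab lightness equals the input's.
    """
    lab = color.rgb2lab(colorized)
    lab[..., 0] = gray * 100.0          # the Lab L channel spans [0, 100]
    return np.clip(color.lab2rgb(lab), 0.0, 1.0)
```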
Exploring the Cosmic Reionization Epoch in Frequency Space: An Improved Approach to Remove the Foreground in 21 cm Tomography
Aiming to correctly restore the redshifted 21 cm signals emitted by the
neutral hydrogen during the cosmic reionization processes, we re-examine the
separation approaches based on the quadratic polynomial fitting technique in
frequency space to investigate whether they work satisfactorily with complex foregrounds, by quantitatively evaluating the quality of the restored 21 cm signals in
terms of sample statistics. We construct the foreground model to characterize
both spatial and spectral substructures of the real sky, and use it to simulate
the observed radio spectra. By comparing different separation approaches through statistical analysis of restored 21 cm spectra and
corresponding power spectra, as well as their constraints on the mean halo bias
and average ionization fraction of the reionization processes, at
and the noise level of 60 mK we find that, although the complex
foreground can be well approximated with quadratic polynomial expansion, a
significant part of Mpc-scale components of the 21 cm signals (75% for Mpc scales and 34% for Mpc scales) is lost because
it tends to be mis-identified as part of the foreground when the single-narrow-segment separation approach is applied. The best restoration of the 21 cm signals and the tightest determination of the mean halo bias and average ionization fraction can be obtained with the three-narrow-segment fitting technique proposed in this
paper. Similar results can be obtained at other redshifts.
Comment: 33 pages, 14 figures. Accepted for publication in Ap
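The fitting schemes being compared lend themselves to a direct sketch: fit a quadratic polynomial to each line-of-sight spectrum over one or three narrow frequency segments and keep the residual as the recovered 21 cm signal. The log-log parameterization and the equal-width segment boundaries below are assumptions for illustration; the paper's exact setup may differ.

```python
import numpy as np

def remove_foreground(freq_mhz, spectrum, n_segments=3):
    """Quadratic-polynomial foreground removal in frequency space.

    Fits log10(T) vs log10(freq) with a 2nd-order polynomial independently
    in each narrow segment and returns the residual, which approximates the
    21 cm signal plus noise. n_segments=1 reproduces the single-segment
    approach; n_segments=3 mimics the three-narrow-segment variant.
    Assumes foreground-dominated, strictly positive spectra.
    """
    freq_mhz = np.asarray(freq_mhz, dtype=float)
    spectrum = np.asarray(spectrum, dtype=float)
    residual = np.empty_like(spectrum)
    for seg in np.array_split(np.arange(len(freq_mhz)), n_segments):
        x = np.log10(freq_mhz[seg])
        y = np.log10(spectrum[seg])
        coeffs = np.polyfit(x, y, deg=2)      # quadratic foreground model
        residual[seg] = spectrum[seg] - 10 ** np.polyval(coeffs, x)
    return residual
```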
ELUCID - Exploring the Local Universe with reConstructed Initial Density field III: Constrained Simulation in the SDSS Volume
A method we developed recently for the reconstruction of the initial density
field in the nearby Universe is applied to the Sloan Digital Sky Survey Data
Release 7. A high-resolution N-body constrained simulation (CS) of the
reconstructed initial condition, with particles evolved in a 500 Mpc/h
box, is carried out and analyzed in terms of the statistical properties of the
final density field and its relation with the distribution of SDSS galaxies. We
find that the statistical properties of the cosmic web and the halo populations
are accurately reproduced in the CS. The galaxy density field is strongly correlated with the CS density field, with a bias that depends on both galaxy
luminosity and color. Our further investigations show that the CS provides
robust quantities describing the environments within which the observed
galaxies and galaxy systems reside. Cosmic variance is greatly reduced in the
CS so that the statistical uncertainties can be controlled effectively even for
samples of small volumes.
Comment: submitted to ApJ, 19 pages, 22 figures. Please download the high-resolution version at http://staff.ustc.edu.cn/~whywang/paper
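The luminosity- and color-dependent bias mentioned above can be estimated from gridded overdensity fields with a one-line estimator. The configuration-space ratio used below is a common choice and an assumption here, not necessarily the paper's exact procedure.

```python
import numpy as np

def linear_bias(delta_g: np.ndarray, delta_m: np.ndarray) -> float:
    """Estimate a linear bias b between a galaxy overdensity field delta_g
    and the simulation's matter overdensity field delta_m, both sampled on
    the same grid, via b = <delta_g * delta_m> / <delta_m^2>."""
    return float(np.mean(delta_g * delta_m) / np.mean(delta_m ** 2))

# Usage: split the galaxy sample by luminosity or color, grid each subsample
# into an overdensity field, and compare the resulting b values.
```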