Globular Clusters in the Outer Halo of M31
In this paper, we present photometry of 53 globular clusters (GCs) in the M31
outer halo, including the GALEX FUV and NUV, SDSS, 15
intermediate-band filters of BATC, and 2MASS bands. By comparing
the multicolour photometry with stellar population synthesis models, we
determine the metallicities, ages, and masses for these GCs, aiming to probe
the merging/accretion history of M31. We find no clear trend of metallicity and
mass with de-projected radius. The halo GCs with ages younger than
8 Gyr are mostly located at de-projected radii of around 100 kpc, but this may
be due to a selection effect. We also find that the halo GCs have metallicities
consistent with those of their spatially associated substructures, which provides
further evidence of a physical association between them. Both the disk and
halo GCs in M31 show a bimodal luminosity distribution; however, there are
more faint halo GCs than are seen in the disk. The bimodal luminosity function
of the halo GCs may reflect different origins or evolutionary environments in
their original hosts. The M31 halo GCs comprise an intermediate-metallicity
group and a metal-poor group in [Fe/H], while the disk GCs have an additional
metal-rich group. There are considerable differences between the halo GCs in M31 and the
Milky Way (MW). M31 hosts approximately three times as many GCs as the MW
overall, but about six times as many halo GCs. Compared to the M31 halo GCs,
the Galactic halo GCs are mostly
metal-poor. Both the numerous halo GCs and the higher-metallicity component are
suggestive of an active merger history of M31.

Comment: 14 pages, 16 figures, 6 tables. Accepted for publication in A&
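The comparison of multicolour photometry with stellar population synthesis models described above can be sketched as a grid-based chi-square fit. The grid values, band count, and helper names below are purely illustrative assumptions, not the authors' actual models or pipeline:

```python
import numpy as np

# Hypothetical SSP model grid: fluxes for combinations of age and [Fe/H],
# in the same photometric bands as the observations (5 bands here).
ages = np.array([2.0, 5.0, 8.0, 12.0])   # Gyr (illustrative values)
fehs = np.array([-2.0, -1.0, 0.0])       # dex (illustrative values)
rng = np.random.default_rng(0)
model_fluxes = rng.random((len(ages), len(fehs), 5))  # stand-in model fluxes

def fit_ssp(obs_flux, obs_err, models):
    """Return the (age, [Fe/H]) grid point minimising chi-square."""
    chi2 = np.sum(((models - obs_flux) / obs_err) ** 2, axis=-1)
    i, j = np.unravel_index(np.argmin(chi2), chi2.shape)
    return ages[i], fehs[j]

# Fake "observed" cluster drawn from the grid plus small photometric noise.
obs = model_fluxes[2, 1] + rng.normal(0.0, 0.01, 5)
err = np.full(5, 0.01)
best_age, best_feh = fit_ssp(obs, err, model_fluxes)
```

In practice, the fit would marginalise over mass (a flux scaling) and use real model spectra convolved with each filter's response, but the minimum-chi-square grid search captures the basic idea.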
Flow-Guided Feature Aggregation for Video Object Detection
Extending state-of-the-art object detectors from image to video is
challenging. The accuracy of detection suffers from degenerated object
appearances in videos, e.g., motion blur, video defocus, rare poses, etc.
Existing work attempts to exploit temporal information at the box level, but such
methods are not trained end-to-end. We present flow-guided feature aggregation,
an accurate and end-to-end learning framework for video object detection. It
leverages temporal coherence at the feature level instead. It improves the
per-frame features by aggregation of nearby features along the motion paths,
and thus improves the video recognition accuracy. Our method significantly
improves upon strong single-frame baselines in ImageNet VID, especially for
more challenging fast moving objects. Our framework is principled, and on par
with the best engineered systems winning the ImageNet VID challenges 2016,
without additional bells-and-whistles. The proposed method, together with Deep
Feature Flow, powered the winning entry of ImageNet VID challenges 2017. The
code is available at
https://github.com/msracver/Flow-Guided-Feature-Aggregation
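The feature-level aggregation the abstract describes (warping nearby frames' feature maps to the reference frame along estimated flow, then combining them with per-pixel adaptive weights) can be sketched roughly as follows. The nearest-neighbour warp, cosine-similarity weighting on raw features, and all array shapes are simplifying assumptions for illustration; the actual method uses a flow network, bilinear warping, and learned embedding features:

```python
import numpy as np

def warp(feat, flow):
    """Nearest-neighbour warp of an (H, W, C) feature map by an (H, W, 2) flow."""
    H, W, _ = feat.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    src_y = np.clip(np.round(ys + flow[..., 0]).astype(int), 0, H - 1)
    src_x = np.clip(np.round(xs + flow[..., 1]).astype(int), 0, W - 1)
    return feat[src_y, src_x]

def aggregate(ref_feat, nearby_feats, flows, eps=1e-8):
    """Combine warped nearby features with per-pixel cosine-similarity weights."""
    warped = [warp(f, fl) for f, fl in zip(nearby_feats, flows)]
    sims = []
    for w in warped:
        num = np.sum(w * ref_feat, axis=-1)
        den = np.linalg.norm(w, axis=-1) * np.linalg.norm(ref_feat, axis=-1) + eps
        sims.append(num / den)
    weights = np.exp(np.stack(sims))              # softmax over frames, per pixel
    weights /= weights.sum(axis=0, keepdims=True)
    return sum(w[..., None] * f for w, f in zip(weights, warped))
```

Frames whose warped features agree with the reference frame receive higher weight, so blurred or defocused frames contribute less, which is the intuition behind the accuracy gain on fast-moving objects.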