Satellite Luminosities in Galaxy Groups
Halo model interpretations of the luminosity dependence of galaxy clustering
assume that there is a central galaxy in every sufficiently massive halo, and
that this central galaxy is very different from all the others in the halo. The
halo model decomposition makes the remarkable prediction that the mean
luminosity of the non-central galaxies in a halo should be almost independent
of halo mass: the predicted increase is about 20% while the halo mass increases
by a factor of more than 20. In contrast, the luminosity of the central object
is predicted to increase approximately linearly with halo mass at low to
intermediate masses, and logarithmically at high masses. We show that this
weak, almost non-existent mass-dependence of the satellites is in excellent
agreement with the satellite population in group catalogs constructed by two
different collaborations. This is remarkable, because the halo model prediction
was made without ever identifying groups and clusters. The halo model also
predicts that the number of satellites in a halo is drawn from a Poisson
distribution with a mean that depends on halo mass. This, combined with the weak
dependence of satellite luminosity on halo mass, suggests that the Scott
effect, whereby the luminosities of very bright galaxies are merely the
statistically extreme values of a general luminosity distribution, may apply
better to the most luminous satellite galaxy in a halo than to BCGs. If galaxies
are identified with halo substructure at the present time, then central
galaxies should be about 4 times more massive than satellite galaxies of the
same luminosity, whereas the differences between the stellar M/L ratios should
be smaller. Therefore, a comparison of the weak lensing signal from central and
satellite galaxies should provide useful constraints. [abridged]
Comment: 8 pages, 3 figures. Matches version accepted by MNRAS
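The two statistical claims above can be combined in a small simulation: draw the satellite count of a halo from a Poisson distribution whose mean grows with halo mass, then take the maximum of i.i.d. satellite luminosities. This is a sketch of the Scott-effect picture only; the power-law occupation function, the mass cut-off `m_cut`, and the log-normal luminosity distribution are illustrative assumptions not stated in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_satellites(halo_mass, m_cut=1e12, alpha=1.0):
    """Hypothetical halo-occupation mean: <N_sat> scales as a power law in
    halo mass above a cut-off (the abstract only states that the mean
    depends on halo mass, not this specific form)."""
    return np.maximum(halo_mass / m_cut, 0.0) ** alpha

def brightest_satellite_luminosity(halo_mass, n_draws=10_000):
    """Draw satellite counts from a Poisson distribution and take the
    maximum of i.i.d. luminosities: the brightest satellite is then a
    statistical extreme of a common luminosity distribution."""
    n_sat = rng.poisson(mean_satellites(halo_mass), size=n_draws)
    # Illustrative log-normal satellite luminosity distribution
    # (assumed; the abstract does not specify the distribution).
    maxima = np.full(n_draws, np.nan)
    for i, n in enumerate(n_sat):
        if n > 0:
            maxima[i] = rng.lognormal(mean=0.0, sigma=0.5, size=n).max()
    return maxima
```

Because richer halos host more satellites, the brightest-satellite luminosity rises with halo mass purely through sample size, even though the underlying luminosity distribution is mass-independent.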
Visualizing and Interacting with Concept Hierarchies
Concept Hierarchies and Formal Concept Analysis are theoretically
well-grounded and extensively tested methods. They rely on line diagrams
called Galois lattices for visualizing and analysing object-attribute sets.
Galois lattices are visually seductive and conceptually rich for experts.
However, they have important drawbacks due to their concept-oriented overall
structure: analysing what they show is difficult for non-experts, navigation
is cumbersome, interaction is poor, and scalability is a severe bottleneck for
visual interpretation even for experts. In this paper we introduce semantic
probes as a means to overcome many of these problems and to extend the
usability and applicability of traditional FCA visualization methods. Semantic
probes are visual, user-centred objects that extract and organize reduced
Galois sub-hierarchies. They are simpler and clearer, and they provide better
navigation support through a rich set of interaction possibilities. Since
probe-driven sub-hierarchies are limited to the user's focus, scalability is
under control and interpretation is facilitated. After some successful
experiments, several applications are being developed, with the remaining
problem of finding a compromise between simplicity and conceptual expressivity.
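The Galois connection underlying these lattices is compact enough to sketch directly: the two derivation operators map a set of objects to its shared attributes and back, and their composition closes any object set into a formal concept. The toy animal context below is an invented example, not data from the paper.

```python
# Toy object-attribute context (assumed example; not from the paper).
context = {
    "duck":  {"flies", "swims", "lays_eggs"},
    "eagle": {"flies", "lays_eggs"},
    "trout": {"swims", "lays_eggs"},
}
all_attrs = set().union(*context.values())

def intent(objects):
    """Attributes shared by every object in the set (the ' operator of FCA)."""
    if not objects:
        return set(all_attrs)
    return set.intersection(*(context[o] for o in objects))

def extent(attrs):
    """Objects possessing every attribute in the set (the dual ' operator)."""
    return {o for o, a in context.items() if attrs <= a}

def concept(objects):
    """Close an object set into a formal concept (extent, intent)."""
    i = intent(objects)
    return frozenset(extent(i)), frozenset(i)
```

For instance, closing {"duck", "eagle"} yields the concept with intent {flies, lays_eggs} and extent {duck, eagle}; the lattice of all such concepts, ordered by extent inclusion, is the Galois lattice the paper's probes navigate.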
Attend and Interact: Higher-Order Object Interactions for Video Understanding
Human actions often involve complex interactions across several inter-related
objects in the scene. However, existing approaches to fine-grained video
understanding or visual relationship detection often rely on single object
representation or pairwise object relationships. Furthermore, learning
interactions across multiple objects over hundreds of video frames is
computationally infeasible, and performance may suffer because a large
combinatorial space has to be modeled. In this paper, we propose to efficiently
learn higher-order interactions between arbitrary subgroups of objects for
fine-grained video understanding. We demonstrate that modeling object
interactions significantly improves accuracy for both action recognition and
video captioning, while requiring less than a third of the computation of
traditional pairwise relationship modeling. The proposed method is validated on two
large-scale datasets: Kinetics and ActivityNet Captions. Our SINet and
SINet-Caption achieve state-of-the-art performances on both datasets even
though the videos are sampled at a maximum of 1 FPS. To the best of our
knowledge, this is the first work modeling object interactions on open domain
large-scale video datasets, and we additionally model higher-order object
interactions, which improves performance at low computational cost.
Comment: CVPR 201
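The cost argument can be made concrete with a generic attentional-grouping sketch: instead of modeling all N*(N-1)/2 object pairs, aggregate the N object features into K attention-weighted group summaries and combine those. This illustrates the general idea of higher-order grouping only; the learned group queries and the tanh combination are assumptions, not the authors' SINet architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def grouped_interactions(objects, queries):
    """Aggregate N object features into K attention-weighted groups, then
    combine the K group summaries jointly, at O(N*K) cost instead of the
    O(N^2) cost of enumerating all object pairs."""
    # objects: (N, D) object features; queries: (K, D) group queries
    # (assumed to be learned parameters in a real model).
    scores = queries @ objects.T             # (K, N) affinities
    weights = softmax(scores, axis=-1)       # each group attends over objects
    groups = weights @ objects               # (K, D) group summaries
    # Higher-order interaction: a joint nonlinear combination of groups.
    return np.tanh(groups).mean(axis=0)      # (D,)
```

With K fixed and small, cost grows linearly in the number of detected objects, which is what makes per-frame interaction modeling tractable at scale.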
Aerial Vehicle Tracking by Adaptive Fusion of Hyperspectral Likelihood Maps
Hyperspectral cameras can provide unique spectral signatures for consistently
distinguishing materials that can be used to solve surveillance tasks. In this
paper, we propose a novel real-time hyperspectral likelihood maps-aided
tracking method (HLT) inspired by an adaptive hyperspectral sensor. A moving
object tracking system generally consists of registration, object detection,
and tracking modules. We focus on the target detection part and remove the
need to build offline classifiers or to tune a large number of
hyperparameters; instead, we learn a generative target model online over
hyperspectral channels ranging from visible to infrared wavelengths. The key
idea is that our adaptive fusion method combines likelihood maps from
multiple bands of hyperspectral imagery into a single, more distinctive
representation, increasing the margin between the mean values of foreground
and background pixels in the fused map. Experimental results show that HLT
not only outperforms all established fusion methods but is also on par with
current state-of-the-art hyperspectral target tracking frameworks.
Comment: Accepted at the International Conference on Computer Vision and
Pattern Recognition Workshops, 201
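The margin criterion described above suggests a simple weighting scheme: score each band by how far apart its mean foreground and background likelihoods are, and fuse bands in proportion to that separation. The following is a simplified sketch of that idea under assumed inputs, not the exact HLT update rule.

```python
import numpy as np

def fuse_likelihood_maps(maps, fg_mask):
    """Fuse per-band likelihood maps into one map, weighting each band by
    the gap between its mean likelihood inside and outside the target
    mask (a proxy for the foreground/background margin)."""
    # maps: (B, H, W) per-band likelihoods in [0, 1]; fg_mask: (H, W) bool.
    margins = np.array([m[fg_mask].mean() - m[~fg_mask].mean() for m in maps])
    weights = np.clip(margins, 0.0, None)   # discard bands with no margin
    if weights.sum() == 0:
        weights = np.ones_like(weights)     # fall back to uniform fusion
    weights = weights / weights.sum()
    return np.tensordot(weights, maps, axes=1)  # (H, W) fused map
```

Bands that separate target from clutter dominate the fused map, while uninformative bands are suppressed, which is the mechanism by which the fused representation gains a larger foreground/background margin than any single band.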
ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices
We introduce an extremely computation-efficient CNN architecture named
ShuffleNet, which is designed specifically for mobile devices with very limited
computing power (e.g., 10-150 MFLOPs). The new architecture utilizes two new
operations, pointwise group convolution and channel shuffle, to greatly reduce
computation cost while maintaining accuracy. Experiments on ImageNet
classification and MS COCO object detection demonstrate the superior
performance of ShuffleNet over other architectures, e.g., lower top-1 error
(absolute 7.8%) than the recent MobileNet on the ImageNet classification task, under
the computation budget of 40 MFLOPs. On an ARM-based mobile device, ShuffleNet
achieves ~13x actual speedup over AlexNet while maintaining comparable
accuracy.
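Of the two operations named above, channel shuffle is the simpler: reshape the channel axis into (groups, channels_per_group), transpose those two axes, and flatten back, so that the next grouped convolution receives channels drawn from every previous group. A numpy sketch of the operation on NCHW tensors:

```python
import numpy as np

def channel_shuffle(x, groups):
    """ShuffleNet channel shuffle: interleave channels across groups so
    that information flows between the groups of consecutive grouped
    convolutions."""
    n, c, h, w = x.shape
    assert c % groups == 0, "channel count must be divisible by groups"
    x = x.reshape(n, groups, c // groups, h, w)
    x = x.transpose(0, 2, 1, 3, 4)   # swap the group and per-group axes
    return x.reshape(n, c, h, w)
```

For example, with 6 channels and 2 groups, channel order [0, 1, 2, 3, 4, 5] becomes [0, 3, 1, 4, 2, 5]: each output group now mixes channels from both input groups, which is what lets pointwise group convolutions stay cheap without isolating information inside groups.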