Perceptual Generative Adversarial Networks for Small Object Detection
Detecting small objects is notoriously challenging due to their low
resolution and noisy representation. Existing object detection pipelines
usually detect small objects through learning representations of all the
objects at multiple scales. However, the performance gain of such ad hoc
architectures is usually too limited to pay off the computational cost. In this
work, we address the small object detection problem by developing a single
architecture that internally lifts representations of small objects to
"super-resolved" ones, achieving similar characteristics as large objects and
thus more discriminative for detection. For this purpose, we propose a new
Perceptual Generative Adversarial Network (Perceptual GAN) model that improves
small object detection by narrowing the representation difference between small
objects and large ones. Specifically, its generator learns to transform the
poor representations of small objects into super-resolved ones that
are similar enough to real large objects to fool a competing discriminator.
Meanwhile, its discriminator competes with the generator to identify the
generated representations, and imposes an additional perceptual requirement on
the generator: the generated representations of small objects must benefit the
detection task. Extensive evaluations on the challenging
Tsinghua-Tencent 100K and the Caltech benchmark well demonstrate the
superiority of Perceptual GAN in detecting small objects, including traffic
signs and pedestrians, over well-established state-of-the-art detectors.
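The generator objective described above combines two terms: an adversarial term that rewards fooling the discriminator, and a perceptual term that ties the super-resolved representations to detection quality. A minimal numeric sketch of that combination (the function name, the weighting factor `lam`, and the exact loss forms are illustrative assumptions, not the paper's implementation):

```python
import math

def generator_loss(d_score_on_generated, detection_loss, lam=1.0):
    """Sketch of a Perceptual-GAN-style generator objective.

    - Adversarial term: the generator wants the discriminator to score its
      super-resolved small-object features as large-object-like (score near 1),
      here modeled as -log(D(G(x))).
    - Perceptual term: the detection loss measured on the generated
      representations, so the generator is also pushed to help detection.
    - `lam` balances the two terms (an assumed hyperparameter).
    """
    adversarial = -math.log(max(d_score_on_generated, 1e-12))
    return adversarial + lam * detection_loss

# A perfectly fooled discriminator and zero detection loss give zero total loss.
loss = generator_loss(1.0, 0.0)
```

In a real training loop both terms would be differentiable tensors and the discriminator would be updated in alternation with the generator; this sketch only shows how the two requirements enter one objective.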
Integrated Face Analytics Networks through Cross-Dataset Hybrid Training
Face analytics benefits many multimedia applications. It consists of a number
of tasks, such as facial emotion recognition and face parsing, and most
existing approaches generally treat these tasks independently, which limits
their deployment in real scenarios. In this paper, we propose an integrated Face
Analytics Network (iFAN), which performs multiple face analytics tasks jointly
through a carefully designed network architecture that facilitates informative
interaction among the different tasks. The proposed
integrated network explicitly models the interactions between tasks so that the
correlations between tasks can be fully exploited for performance boost. In
addition, to overcome the bottleneck that no single dataset provides
comprehensive annotations for all tasks, we propose a novel cross-dataset hybrid
training strategy. It allows "plug-and-play" use of multiple datasets annotated
for different tasks without the requirement of a fully labeled common dataset
for all the tasks. We experimentally show that the proposed iFAN achieves
state-of-the-art performance on multiple face analytics tasks using a single
integrated model. Specifically, iFAN achieves an overall F-score of 91.15% on
the Helen dataset for face parsing, a normalized mean error of 5.81% on the
MTFL dataset for facial landmark localization and an accuracy of 45.73% on the
BNU dataset for emotion recognition, all with a single model.
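The cross-dataset hybrid training idea can be sketched as a loop where each batch, drawn from a dataset annotated for only one task, updates the shared backbone plus just the matching task head. All names below (`hybrid_training_step`, the batch keys, the task labels) are hypothetical illustrations, not the paper's API:

```python
def hybrid_training_step(batch, backbone, task_heads):
    """One step of a cross-dataset hybrid training loop (a sketch).

    Each dataset is labeled for a single task, so a batch touches the shared
    backbone and only the head whose labels it carries -- no dataset fully
    labeled for every task is required.
    """
    features = backbone(batch["images"])        # shared representation
    task = batch["task"]                        # e.g. "parsing", "landmarks", "emotion"
    task_heads[task](features, batch["labels"]) # only this head sees this batch

# Toy run: record which head each batch reaches.
touched = []
heads = {
    "parsing":   lambda feats, labels: touched.append("parsing"),
    "landmarks": lambda feats, labels: touched.append("landmarks"),
}
batches = [
    {"images": [0], "task": "parsing",   "labels": [0]},
    {"images": [1], "task": "landmarks", "labels": [1]},
]
for b in batches:
    hybrid_training_step(b, backbone=lambda x: x, task_heads=heads)
```

The key design point is that the backbone gradient flows from every dataset while each head only ever sees batches annotated for its own task, which is what lets heterogeneously labeled datasets be mixed freely.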
Two-photon excited hemoglobin fluorescence
We discovered that hemoglobin emits high-energy Soret fluorescence when two-photon excited by visible femtosecond light sources. The unique spectral and temporal characteristics of hemoglobin fluorescence were measured using a time-resolved spectroscopic detection system. The high-energy Soret fluorescence of hemoglobin shows a spectral peak at 438 nm with an extremely short lifetime. This discovery enables two-photon excitation fluorescence microscopy to become a potentially powerful tool for in vivo label-free imaging of blood cells and vessels.