Webly Supervised Learning of Convolutional Networks
We present an approach to utilize large amounts of web data for learning
CNNs. Specifically inspired by curriculum learning, we present a two-step
approach for CNN training. First, we use easy images to train an initial visual
representation. We then use this initial CNN and adapt it to harder, more
realistic images by leveraging the structure of data and categories. We
demonstrate that our two-stage CNN outperforms a fine-tuned CNN trained on
ImageNet on Pascal VOC 2012. We also demonstrate the strength of webly
supervised learning by localizing objects in web images and training an R-CNN-style
detector. It achieves the best performance on VOC 2007 among methods that use no VOC
training data. Finally, we show our approach is quite robust to noise
and performs comparably even when we use image search results from March 2013
(the pre-CNN image search era).
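The two-stage, easy-to-hard curriculum described above can be sketched in miniature. The paper trains CNNs on web images; here a toy logistic-regression model stands in for the network, and all function names, data, and hyperparameters are illustrative assumptions, not the paper's method:

```python
import math

def sgd_train(weights, data, lr=0.1, epochs=50):
    """Plain SGD on logistic loss over (features, label) pairs."""
    for _ in range(epochs):
        for x, y in data:
            z = sum(w * xi for w, xi in zip(weights, x))
            p = 1.0 / (1.0 + math.exp(-z))
            weights = [w - lr * (p - y) * xi for w, xi in zip(weights, x)]
    return weights

def curriculum_train(easy_data, hard_data):
    w = sgd_train([0.0, 0.0], easy_data)   # stage 1: learn from "easy" examples
    return sgd_train(w, hard_data)         # stage 2: adapt to harder examples

# "Easy" examples sit far from the decision boundary, "hard" ones close to it
# (features are [bias, x]; the label is 1 when x > 0).
easy = [([1.0, 2.0], 1), ([1.0, -2.0], 0)]
hard = [([1.0, 0.5], 1), ([1.0, -0.5], 0)]
w = curriculum_train(easy, hard)
```

The key design point mirrored here is that stage 2 starts from the stage-1 weights rather than from scratch, so the easy data fixes the initial representation.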
Learning a Recurrent Visual Representation for Image Caption Generation
In this paper we explore the bi-directional mapping between images and their
sentence-based descriptions. We propose learning this mapping using a recurrent
neural network. Unlike previous approaches that map both sentences and images
to a common embedding, we enable the generation of novel sentences given an
image. Using the same model, we can also reconstruct the visual features
associated with an image given its visual description. We use a novel recurrent
visual memory that automatically learns to remember long-term visual concepts
to aid in both sentence generation and visual feature reconstruction. We
evaluate our approach on several tasks. These include sentence generation,
sentence retrieval and image retrieval. State-of-the-art results are shown for
the task of generating novel image descriptions. When compared to human-generated
captions, our automatically generated captions are preferred by humans over [...] of
the time. Results are better than or comparable to state-of-the-art results on the
image and sentence retrieval tasks for methods using similar visual features.
Event-triggered Consensus for Multi-agent Systems with Asymmetric and Reducible Topologies
This paper studies the consensus problem of multi-agent systems with
asymmetric and reducible topologies. Centralized event-triggered rules are
provided to reduce the frequency of the system's updates. The diffusion
coupling feedback of each agent is based on the latest observations from its
in-neighbors, and the system's next observation time is triggered by a criterion
based on all agents' information. The scenario of continuous monitoring is
considered first, namely that all agents' instantaneous states can be observed. It
is proved that if the network topology has a spanning tree, then the
centralized event-triggered coupling strategy can realize consensus for the
multi-agent system. Then the results are extended to discontinuous monitoring,
where the system computes its next triggering time in advance without
continuously observing all agents' states. Examples with numerical simulation are
provided to show the effectiveness of the theoretical results.
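A minimal sketch of the centralized event-triggered idea, under illustrative assumptions (this is not the paper's exact trigger rule): agents integrate diffusion couplings computed from states sampled at the last event, and a new event fires when the measurement error exceeds a decaying threshold c*exp(-alpha*t). The graph, gains, and threshold schedule are all hypothetical choices for the demo:

```python
import math

def simulate(adj, x0, dt=0.01, T=20.0, c=0.5, alpha=0.3):
    n = len(x0)
    x, xhat, events = list(x0), list(x0), 0
    for k in range(int(T / dt)):
        t = k * dt
        # centralized trigger: largest gap between true and broadcast states
        if max(abs(x[i] - xhat[i]) for i in range(n)) > c * math.exp(-alpha * t):
            xhat, events = list(x), events + 1   # event: re-broadcast all states
        # Euler step of the diffusion coupling using the held (event-time) states
        x = [x[i] + dt * sum(adj[i][j] * (xhat[j] - xhat[i]) for j in range(n))
             for i in range(n)]
    return x, events

# Directed chain 0 -> 1 -> 2 (adj[i][j] = 1 iff j is an in-neighbor of i):
# this topology has a spanning tree, so the strategy should reach consensus.
adj = [[0, 0, 0], [1, 0, 0], [0, 1, 0]]
x, events = simulate(adj, [1.0, 0.0, -1.0])
```

The point of the trigger is visible in `events`: the couplings are re-sampled only occasionally, far fewer times than the integrator steps, yet the states still converge toward one another.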
Achieving synchronization in arrays of coupled differential systems with time-varying couplings
In this paper, we study complete synchronization of the complex dynamical
networks described by linearly coupled ordinary differential equation systems
(LCODEs). The coupling considered here is time-varying in both the network
structure and the reaction dynamics. Inspired by our previous paper [6], the
extended Hajnal diameter is introduced and used to measure the synchronization
in a general differential system. Then we find that the Hajnal diameter of the
linear system induced by the time-varying coupling matrix and the largest
Lyapunov exponent of the synchronized system play the key roles in
synchronization analysis of LCODEs with the identity inner coupling matrix. As
an application, we obtain a general sufficient condition guaranteeing that a
directed time-varying graph reaches consensus. An example with numerical simulation
is provided to show the effectiveness of the theoretical results.
Comment: 22 pages, 4 figures
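The flavor of the time-varying-coupling result can be illustrated with a toy switching system: neither graph below has a spanning tree on its own, but their union does, and the switched system still reaches consensus. Gains, the switching period, and the horizon are assumptions for the demo, not the paper's construction:

```python
# Two directed graphs, alternated over time (adj[i][j] = 1 iff j feeds agent i).
adjA = [[0, 0, 0], [1, 0, 0], [0, 0, 0]]  # agent 1 listens to agent 0
adjB = [[0, 0, 0], [0, 0, 0], [0, 1, 0]]  # agent 2 listens to agent 1

def euler_step(x, adj, dt):
    """One Euler step of the linear diffusion coupling dx_i = sum_j a_ij (x_j - x_i)."""
    n = len(x)
    return [x[i] + dt * sum(adj[i][j] * (x[j] - x[i]) for j in range(n))
            for i in range(n)]

x, dt = [1.0, 0.0, -1.0], 0.01
for k in range(20000):                            # horizon T = 200
    adj = adjA if (k // 100) % 2 == 0 else adjB   # switch every 1 time unit
    x = euler_step(x, adj, dt)
```

After the run all three states sit at the root agent's value, even though at no single instant is the whole network connected.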
Generalized generalized gradient approximation: An improved density-functional theory for accurate orbital eigenvalues
The generalized gradient approximation (GGA) for the exchange functional, in conjunction with accurate expressions for the correlation functional, has led to numerous applications in which density-functional theory (DFT) provides structures, bond energies, and reaction activation energies in excellent agreement with the most accurate ab initio calculations and with experiment. However, the orbital energies that arise from the Kohn-Sham auxiliary equations of DFT may differ by a factor of 2 from the ionization potentials, indicating that excitation energies and properties involving sums over excited states (nonlinear-optical properties, van der Waals attraction) may be in serious error. We propose herein a generalization of the GGA in which the changes in the functionals due to virtual changes in the orbitals are allowed to differ from the functional used to map the exact density onto the exact energy. Using the simplest version of this generalized GGA, we show that orbital energies are within ∼5% of the correct values and that the long-range behavior has the correct form.
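For context on why the orbital-eigenvalue errors matter: in exact Kohn-Sham DFT the highest occupied orbital energy satisfies a Koopmans-like identity with the first ionization potential (a standard result from the DFT literature, not stated in the abstract), and standard GGAs violate it substantially:

```latex
\varepsilon_{\mathrm{HOMO}} = -I \qquad \text{(exact Kohn--Sham DFT)}
```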