Describing Common Human Visual Actions in Images
Which common human actions and interactions are recognizable in monocular
still images? Which involve objects and/or other people? How many actions is a
person performing at a time? We address these questions by exploring the actions and
interactions that are detectable in the images of the MS COCO dataset. We make
two main contributions. First, a list of 140 common `visual actions', obtained
by analyzing the largest on-line verb lexicon currently available for English
(VerbNet) and human sentences used to describe images in MS COCO. Second, a
complete set of annotations for those `visual actions', composed of
subject-object and associated verb, which we call COCO-a (a for `actions').
COCO-a is larger than existing action datasets in terms of number of actions
and instances of these actions, and is unique because it is data-driven, rather
than experimenter-biased. Other unique features are that it is exhaustive, and
that all subjects and objects are localized. A statistical analysis of the
accuracy of our annotations and of each action, interaction and subject-object
combination is provided.
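The subject-object-verb triplets described above lend themselves to a simple tabular representation. Below is a minimal, hypothetical sketch of such annotations; the field names (`subject_id`, `object_id`, `visual_action`) are illustrative assumptions, not the dataset's actual schema.

```python
# A minimal sketch of COCO-a style subject-object-verb annotations.
# The field names below are hypothetical, not the dataset's real schema.
annotations = [
    {"subject_id": 1, "object_id": 2, "visual_action": "ride"},
    {"subject_id": 1, "object_id": None, "visual_action": "smile"},  # no object involved
]

def actions_by_subject(annotations):
    """Group the visual actions each annotated subject is performing,
    the kind of query behind "how many actions at a time?"."""
    grouped = {}
    for ann in annotations:
        grouped.setdefault(ann["subject_id"], []).append(ann["visual_action"])
    return grouped

print(actions_by_subject(annotations))  # {1: ['ride', 'smile']}
```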
COCO: Performance Assessment
We present an any-time performance assessment for benchmarking numerical
optimization algorithms in a black-box scenario, applied within the COCO
benchmarking platform. The performance assessment is based on runtimes measured
in number of objective function evaluations to reach one or several quality
indicator target values. We argue that runtime is the only available measure
with a generic, meaningful, and quantitative interpretation. We discuss the
choice of the target values, runlength-based targets, and the aggregation of
results by using simulated restarts, averages, and empirical distribution
functions.
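The bootstrapped-restart idea can be sketched in a few lines: simulated restarts keep drawing recorded trials (with replacement) and summing their evaluation counts until a successful one is drawn, and the resulting runtimes are summarized by an empirical distribution function. The code below is an illustrative sketch of that methodology with toy data, not the COCO platform's implementation.

```python
import random

def simulated_restart_runtime(evals, successes, rng):
    """Draw one runtime via simulated restarts: sample recorded trials
    with replacement, accumulating their evaluation counts, until a
    successful trial is drawn. Assumes at least one trial succeeded."""
    total = 0
    while True:
        i = rng.randrange(len(evals))
        total += evals[i]
        if successes[i]:
            return total

def ecdf(samples, thresholds):
    """Empirical distribution: fraction of runtimes at or below each budget."""
    return [sum(s <= t for s in samples) / len(samples) for t in thresholds]

# Toy data: evaluations used per trial, and whether the target was reached.
evals = [120, 300, 80]
successes = [True, False, True]
rng = random.Random(0)
runtimes = [simulated_restart_runtime(evals, successes, rng) for _ in range(1000)]
print(ecdf(runtimes, [100, 500, 5000]))  # nondecreasing values in [0, 1]
```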
Towards Accurate Multi-person Pose Estimation in the Wild
We propose a method for multi-person detection and 2-D pose estimation that
achieves state-of-the-art results on the challenging COCO keypoints task. It is a
simple, yet powerful, top-down approach consisting of two stages.
In the first stage, we predict the location and scale of boxes which are
likely to contain people; for this we use the Faster R-CNN detector. In the
second stage, we estimate the keypoints of the person potentially contained in
each proposed bounding box. For each keypoint type we predict dense heatmaps
and offsets using a fully convolutional ResNet. To combine these outputs we
introduce a novel aggregation procedure to obtain highly localized keypoint
predictions. We also use a novel form of keypoint-based Non-Maximum-Suppression
(NMS), instead of the cruder box-level NMS, and a novel form of keypoint-based
confidence score estimation, instead of box-level scoring.
Trained on COCO data alone, our final system achieves average precision of
0.649 on the COCO test-dev set and 0.643 on the test-standard set, outperforming
the winner of the 2016 COCO keypoints challenge and other recent state-of-the-art
methods.
Further, by using additional in-house labeled data we obtain an even higher
average precision of 0.685 on the test-dev set and 0.673 on the test-standard
set, more than 5% absolute improvement compared to the previous best performing
method on the same dataset.
Comment: Paper describing an improved version of the G-RMI entry to the 2016
COCO keypoints challenge (http://image-net.org/challenges/ilsvrc+coco2016).
Camera ready version to appear in the Proceedings of CVPR 201
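The heatmap-plus-offset decoding in the second stage can be illustrated with a deliberately simplified sketch: pick the highest-scoring heatmap cell and refine it with the offset vector stored there. The paper's actual aggregation is a more elaborate Hough-style voting procedure; the function below only conveys the basic idea.

```python
import numpy as np

def localize_keypoint(heatmap, offsets):
    """Simplified keypoint decoding: take the highest-scoring heatmap
    cell and refine it with the (dy, dx) offset predicted at that cell.

    heatmap: (H, W) array of per-pixel keypoint scores
    offsets: (H, W, 2) array of sub-pixel refinements
    """
    y, x = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    dy, dx = offsets[y, x]
    return float(y + dy), float(x + dx)

heatmap = np.zeros((5, 5))
heatmap[2, 3] = 1.0              # strongest response at row 2, col 3
offsets = np.zeros((5, 5, 2))
offsets[2, 3] = (0.5, -0.25)     # sub-pixel refinement at that cell
print(localize_keypoint(heatmap, offsets))  # (2.5, 2.75)
```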
Deep Residual Learning for Image Recognition
Deeper neural networks are more difficult to train. We present a residual
learning framework to ease the training of networks that are substantially
deeper than those used previously. We explicitly reformulate the layers as
learning residual functions with reference to the layer inputs, instead of
learning unreferenced functions. We provide comprehensive empirical evidence
showing that these residual networks are easier to optimize, and can gain
accuracy from considerably increased depth. On the ImageNet dataset we evaluate
residual nets with a depth of up to 152 layers---8x deeper than VGG nets but
still having lower complexity. An ensemble of these residual nets achieves
3.57% error on the ImageNet test set. This result won the 1st place on the
ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100
and 1000 layers.
The depth of representations is of central importance for many visual
recognition tasks. Solely due to our extremely deep representations, we obtain
a 28% relative improvement on the COCO object detection dataset. Deep residual
nets are foundations of our submissions to ILSVRC & COCO 2015 competitions,
where we also won the 1st places on the tasks of ImageNet detection, ImageNet
localization, COCO detection, and COCO segmentation.
Comment: Tech report
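The residual reformulation is easy to state in code: the stacked layers compute a residual F(x) and the block outputs F(x) + x, so the identity mapping costs nothing to represent. The toy fully-connected block below (plain NumPy, not the paper's convolutional architecture) is a sketch of that idea.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """Toy fully-connected residual block: the weights learn only the
    residual F(x), and the skip connection adds the input back."""
    f = relu(x @ w1) @ w2  # the residual function F(x)
    return relu(f + x)     # output is F(x) + x, then a nonlinearity

rng = np.random.default_rng(0)
x = rng.standard_normal(4)
# With zero weights the residual vanishes and the block acts as relu(x),
# i.e. extra depth need not degrade an already-good solution.
w_zero = np.zeros((4, 4))
print(np.allclose(residual_block(x, w_zero, w_zero), relu(x)))  # True
```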
Testing and optimizing MST coaxial collinear arrays, part 6.4A
Many clear-air VHF wind profilers use coaxial collinear (COCO) arrays for their antenna. A COCO array is composed of long lines of half-wave dipoles spaced one-half wavelength apart. An inexpensive method of checking a COCO array is described; its performance is optimized by measuring and then correcting the relative RF phase among its lines at their feed point. This method also gives an estimate of the RF current amplitude among the lines. The strength and location of the sidelobes in the H-plane of the array can be estimated.
INFORMATION AND COMMUNICATION IN BANKS - KEY ELEMENTS OF THE INTERNAL CONTROL SYSTEM – AN EMPIRICAL ANALYSIS BETWEEN ROMANIAN, AMERICAN AND CANADIAN MODELS OF CONTROL
The purpose of this paper is to focus on one of the most important aspects of internal control in the banking system - information and communication - trying to identify on which of the two well-known international models of control (COSO or CoCo)
Keywords: Information, Communication, COSO model, CoCo model, Romanian framework
COCO: A Platform for Comparing Continuous Optimizers in a Black-Box Setting
We introduce COCO, an open source platform for Comparing Continuous
Optimizers in a black-box setting. COCO aims at automating the tedious and
repetitive task of benchmarking numerical optimization algorithms to the
greatest possible extent. The platform and the underlying methodology allow
deterministic and stochastic solvers for both single- and multi-objective
optimization to be benchmarked within the same framework. We present the rationales behind the
(decade-long) development of the platform as a general proposition for
guidelines towards better benchmarking. We detail underlying fundamental
concepts of COCO such as the definition of a problem as a function instance,
the underlying idea of instances, the use of target values, and runtime defined
by the number of function calls as the central performance measure. Finally, we
give a quick overview of the basic code structure and the currently available
test suites.
Comment: Optimization Methods and Software, Taylor & Francis, In press, pp.1-3
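COCO's central abstraction, a problem as a function instance whose runtime is the number of function calls until a target is hit, can be mimicked with a small wrapper. The class and helper below are an illustrative sketch of that methodology, not COCO's real API.

```python
class CountingProblem:
    """Wrap an objective function and count evaluations, mirroring the
    idea of a problem as a function instance. Illustrative sketch only,
    not the COCO platform's actual interface."""

    def __init__(self, fn):
        self.fn = fn
        self.evaluations = 0
        self.best = float("inf")

    def __call__(self, x):
        self.evaluations += 1
        value = self.fn(x)
        self.best = min(self.best, value)
        return value

def runtime_to_target(problem, candidate_points, target):
    """Runtime = number of function calls until the target is first hit."""
    for x in candidate_points:
        problem(x)
        if problem.best <= target:
            return problem.evaluations
    return None  # target missed; COCO handles this case via simulated restarts

sphere = CountingProblem(lambda x: sum(v * v for v in x))
points = [[2.0, 2.0], [1.0, 0.5], [0.1, 0.1]]
print(runtime_to_target(sphere, points, target=0.1))  # 3
```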