258 research outputs found
Rhythm and Vowel Quality in Accents of English
In a sample of 27 speakers of Scottish Standard English, two notoriously variable consonantal features are investigated: the contrast of /ʍ/ and /w/, and non-prevocalic /r/, the latter both in terms of its presence or absence and the phonetic form it takes, if present. The pattern of realisation of non-prevocalic /r/ largely confirms previously reported findings, but there are a number of surprising results regarding the merger of /ʍ/ and /w/ and the loss of non-prevocalic /r/: while the former is more likely to happen in younger speakers and females, the latter seems more likely in older speakers and males. This is suggestive of a change in progress leading to a loss of the contrast between /ʍ/ and /w/, while the variation found in non-prevocalic /r/ follows an almost inverse sociolinguistic pattern that does not suggest any such change and is additionally largely explicable in language-internal terms. One phenomenon requiring further investigation is the curious effect that direct contact with Southern English accents seems to have on non-prevocalic /r/: innovation on the structural level (i.e. loss) and conservatism on the realisational level (i.e. increased incidence of [r] and [ɾ]) appear to be conditioned by the same sociolinguistic factors.
Meta-Tracker: Fast and Robust Online Adaptation for Visual Object Trackers
This paper improves state-of-the-art visual object trackers that use online adaptation. Our core contribution is an offline meta-learning-based method to adjust the initial deep networks used in online adaptation-based tracking. The meta-learning is driven by the goal of obtaining deep networks that can quickly be adapted to robustly model a particular target in future frames. Ideally, the resulting models focus on features that are useful for future frames and avoid overfitting to background clutter, small parts of the target, or noise. By enforcing a small number of update iterations during meta-learning, the resulting networks train significantly faster. We demonstrate this approach on top of two high-performance tracking approaches: the tracking-by-detection-based MDNet and the correlation-based CREST. Experimental results on standard benchmarks, OTB2015 and VOT2016, show that our meta-learned versions of both trackers improve speed, accuracy, and robustness. Comment: Code: https://github.com/silverbottlep/meta_tracker
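The core idea of the abstract above, meta-learning an initialisation so that a small, fixed number of gradient steps suffices to specialise the model to a new target, can be sketched with a toy first-order MAML-style example on scalar regression. The scalar model, learning rates, and function names below are all illustrative assumptions for the sketch, not the paper's actual networks or algorithm.

```python
def inner_update(theta, x, y, lr=0.1, steps=1):
    """Adapt the parameter with a few gradient steps on the task's squared loss."""
    for _ in range(steps):
        grad = 2 * x * (theta * x - y)   # d/dtheta of (theta*x - y)^2
        theta = theta - lr * grad
    return theta

def meta_train(tasks, theta0=0.0, inner_lr=0.1, outer_lr=0.01, steps=1, epochs=300):
    """Find an initialisation theta0 whose few-step adaptation does well on each task.

    Uses the first-order approximation: the outer gradient is the gradient of
    the post-adaptation loss evaluated at the adapted parameter.
    """
    for _ in range(epochs):
        meta_grad = 0.0
        for (x, y) in tasks:
            adapted = inner_update(theta0, x, y, inner_lr, steps)
            meta_grad += 2 * x * (adapted * x - y)
        theta0 -= outer_lr * meta_grad / len(tasks)
    return theta0
```

Enforcing a small `steps` during meta-training is what makes the adapted model fast at deployment time: the initialisation itself absorbs most of the work.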
Shot-based object retrieval from video with compressed Fisher vectors
This paper addresses the problem of retrieving, from a database of video sequences, those shots that match a query image. Existing architectures are mainly based on the Bag-of-Words model, which consists of matching the query image against a high-level representation of local features extracted from the video database. Such architectures, however, lack the capability to scale up to very large databases. Recently, Fisher Vectors have shown promising results in large-scale image retrieval problems, but it is still not clear how they can best be exploited in video-related applications. In our work, we use compressed Fisher Vectors to represent the video shots, and we show that the inherent correlation between video frames can be profitably exploited. Experiments show that our proposal achieves better performance at lower computational requirements than similar architectures.
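Fisher vector extraction itself requires a trained Gaussian mixture model, so the sketch below only illustrates the shot-level part of the idea: per-frame descriptors of a shot are aggregated into one vector (exploiting inter-frame redundancy), then compressed by sign binarisation, one of the standard Fisher vector compression schemes. All function names are illustrative, not the paper's implementation.

```python
import numpy as np

def shot_descriptor(frame_vectors):
    """Aggregate per-frame descriptors into a single shot-level descriptor.

    Averaging exploits the correlation between frames of a shot: redundant
    frames reinforce the same components instead of being stored separately.
    """
    return np.mean(frame_vectors, axis=0)

def compress_sign(v):
    """Keep 1 bit per dimension (the sign), a standard FV compression."""
    return np.sign(v).astype(np.int8)

def match_score(query_bits, shot_bits):
    """Similarity between binarised descriptors: fraction of agreeing signs."""
    return float(np.mean(query_bits == shot_bits))
```

Matching a query then reduces to comparing one compact binary code per shot, rather than one descriptor per frame.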
Fast online object tracking and segmentation: A unifying approach
In this paper we illustrate how to perform both visual object tracking and semi-supervised video object segmentation, in real time, with a single simple approach. Our method, dubbed SiamMask, improves the offline training procedure of popular fully-convolutional Siamese approaches for object tracking by augmenting their loss with a binary segmentation task. Once trained, SiamMask relies solely on a single bounding-box initialisation, operates online, and produces class-agnostic object segmentation masks and rotated bounding boxes at 55 frames per second. Despite its simplicity, versatility, and speed, our strategy allows us to establish a new state of the art among real-time trackers on VOT-2018, while at the same time demonstrating competitive performance and the best speed for the semi-supervised video object segmentation task on DAVIS-2016 and DAVIS-2017.
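The training change described above, augmenting a Siamese tracker's loss with a binary segmentation term, can be sketched as a weighted sum of two binary cross-entropy losses. The numpy stand-ins for network outputs and the weight value are illustrative assumptions; SiamMask's actual three-branch variant additionally has a box-regression term, omitted here.

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    """Element-wise binary cross-entropy, averaged over all entries."""
    pred = np.clip(pred, eps, 1 - eps)
    return float(np.mean(-(target * np.log(pred) + (1 - target) * np.log(1 - pred))))

def tracking_plus_mask_loss(score_pred, score_gt, mask_pred, mask_gt, lam=32.0):
    """Siamese similarity-score loss augmented with a binary segmentation loss.

    `lam` (illustrative value) trades off the per-pixel mask branch against
    the much smaller score map; both terms are standard BCE.
    """
    return bce(score_pred, score_gt) + lam * bce(mask_pred, mask_gt)
```

Because the segmentation branch is only an extra loss term during offline training, inference can still run from a single bounding-box initialisation.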
An outline of an asymmetric two-component theory of aspect
The paper presents the bases of an asymmetric two-component model of aspect. The main theoretical conclusion of the study is that (grammatical) viewpoint aspect and situation aspect are not independent aspectual levels, since the former often modifies the input situation aspect of the phrase or sentence. As is shown, besides the arguments and adjuncts of the predicate, viewpoint aspect is also an important factor in compositionally marking situation aspect. The aspectual framework put forward in the paper is verified and illustrated on the basis of the aspectual system of Hungarian and of some examples taken from English.
AI-powered transmitted light microscopy for functional analysis of live cells
Transmitted light microscopy can readily visualize the morphology of living cells. Here, we introduce artificial-intelligence-powered transmitted light microscopy (AIM) for subcellular structure identification and label-free functional analysis of live cells. AIM provides accurate images of subcellular organelles; allows identification of cellular and functional characteristics (cell type, viability, and maturation stage); and facilitates live-cell tracking and multimodality analysis of immune cells in their native form, without labeling.
Self-supervised Keypoint Correspondences for Multi-Person Pose Estimation and Tracking in Videos
Video annotation is expensive and time-consuming. Consequently, datasets for multi-person pose estimation and tracking are less diverse and have sparser annotations than large-scale image datasets for human pose estimation. This makes it challenging to learn deep-learning-based models for associating keypoints across frames that are robust to nuisance factors such as motion blur and occlusions for the task of multi-person pose tracking. To address this issue, we propose an approach that relies on keypoint correspondences for associating persons in videos. Instead of training the network for estimating keypoint correspondences on video data, it is trained on large-scale image datasets for human pose estimation using self-supervision. Combined with a top-down framework for human pose estimation, we use keypoint correspondences to (i) recover missed pose detections and (ii) associate pose detections across video frames. Our approach achieves state-of-the-art results for multi-frame pose estimation and multi-person pose tracking on the PoseTrack 2017 and PoseTrack 2018 datasets. Comment: Submitted to ECCV 2020
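Use (ii) above, linking pose detections across frames, can be sketched as a greedy assignment of current detections to previous tracks by mean keypoint distance, once keypoints have been put in correspondence. This is a generic baseline matcher under illustrative names and a made-up pixel threshold, not the paper's learned correspondence network.

```python
import numpy as np

def pose_distance(p, q):
    """Mean Euclidean distance between corresponding keypoints (N x 2 arrays)."""
    return float(np.mean(np.linalg.norm(p - q, axis=1)))

def associate(prev_poses, cur_poses, thresh=20.0):
    """Greedily link current detections to previous tracks.

    `cur_poses` stands for current-frame detections whose keypoints are in
    correspondence with the previous frame; each inherits the ID of the
    closest unassigned previous pose within `thresh` pixels.
    """
    assignments = {}
    used = set()
    for j, q in enumerate(cur_poses):
        best, best_d = None, thresh
        for i, p in enumerate(prev_poses):
            if i in used:
                continue
            d = pose_distance(p, q)
            if d < best_d:
                best, best_d = i, d
        if best is not None:
            assignments[j] = best
            used.add(best)
    return assignments
```

Detections left unassigned would start new tracks, and previous poses with no match can be kept alive briefly to recover missed detections (use (i)).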
DELTAS: Depth Estimation by Learning Triangulation And densification of Sparse points
Multi-view stereo (MVS) is the golden mean between the accuracy of active depth sensing and the practicality of monocular depth estimation. Cost-volume-based approaches employing 3D convolutional neural networks (CNNs) have considerably improved the accuracy of MVS systems. However, this accuracy comes at a high computational cost which impedes practical adoption. Distinct from cost-volume approaches, we propose an efficient depth estimation approach by first (a) detecting and evaluating descriptors for interest points, then (b) learning to match and triangulate a small set of interest points, and finally (c) densifying this sparse set of 3D points using CNNs. An end-to-end network efficiently performs all three steps within a deep learning framework and is trained with intermediate 2D image and 3D geometric supervision, along with depth supervision. Crucially, our first step complements pose estimation using interest point detection and descriptor learning. We demonstrate state-of-the-art results on depth estimation with lower compute for different scene lengths. Furthermore, our method generalizes to newer environments, and the descriptors output by our network compare favorably to strong baselines. Code is available at https://github.com/magicleap/DELTAS. Comment: ECCV 2020
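Step (b), triangulating matched interest points, can be illustrated with classical two-view linear (DLT) triangulation: each pixel observation contributes two linear constraints on the homogeneous 3D point, solved via SVD. This is the standard geometric building block, not the paper's learned matcher, and the names are illustrative.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: 3x4 camera projection matrices; x1, x2: matched (u, v) pixels.
    Returns the 3D point minimising the algebraic error, i.e. the null
    vector of the stacked constraint matrix (smallest singular vector).
    """
    A = np.stack([
        x1[0] * P1[2] - P1[0],   # u1 * p3^T - p1^T  (view 1)
        x1[1] * P1[2] - P1[1],   # v1 * p3^T - p2^T  (view 1)
        x2[0] * P2[2] - P2[0],   # u2 * p3^T - p1^T  (view 2)
        x2[1] * P2[2] - P2[1],   # v2 * p3^T - p2^T  (view 2)
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                   # homogeneous solution
    return X[:3] / X[3]          # dehomogenise
```

A learned pipeline would replace the hand-picked matches with network-predicted correspondences and then densify the resulting sparse point set, as in step (c).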
The visual object tracking VOT2015 challenge results
The Visual Object Tracking challenge 2015, VOT2015, aims at comparing short-term single-object visual trackers that do not apply pre-learned models of object appearance. Results of 62 trackers are presented. The number of tested trackers makes VOT2015 the largest benchmark on short-term tracking to date. For each participating tracker, a short description is provided in the appendix. Features of the VOT2015 challenge that go beyond its VOT2014 predecessor are: (i) a new VOT2015 dataset twice as large as that of VOT2014, with full annotation of targets by rotated bounding boxes and per-frame attributes, and (ii) extensions of the VOT2014 evaluation methodology by the introduction of a new performance measure. The dataset, the evaluation kit, and the results are publicly available at the challenge website.
Class-agnostic counting
Nearly all existing counting methods are designed for a specific object class. Our work, however, aims to create a counting model able to count any class of object. To achieve this goal, we formulate counting as a matching problem, enabling us to exploit the image self-similarity that naturally exists in object counting problems. We make the following three contributions: first, a Generic Matching Network (GMN) architecture that can potentially count any object in a class-agnostic manner; second, by reformulating the counting problem as one of matching objects, we can take advantage of the abundance of video data labeled for tracking, which contains natural repetitions suitable for training a counting model, and such data enables us to train the GMN; third, to customize the GMN to different user requirements, an adapter module is used to specialize the model with minimal effort, i.e. using a few labeled examples and adapting only a small fraction of the trained parameters. This is a form of few-shot learning, which is practical for domains where labels are limited because they require expert knowledge (e.g. microbiology). We demonstrate the flexibility of our method on a diverse set of existing counting benchmarks: specifically cells, cars, and human crowds. The model achieves competitive performance on the cell and crowd counting datasets, and surpasses the state of the art on the car dataset using only three training images. When trained on the entire dataset, the proposed method outperforms all previous methods by a large margin.
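The counting-as-matching formulation can be illustrated with a crude classical stand-in: slide an exemplar over the image, score each location by the inner product of mean-centred patches, and count the strict local maxima above a threshold. A GMN-style model would replace this hand-crafted score with a learned matching network; all names and the threshold here are illustrative.

```python
import numpy as np

def similarity_map(image, exemplar):
    """Score every exemplar-sized window by correlation with the exemplar.

    Uses inner products of mean-centred patches as a crude stand-in for a
    learned matching score.
    """
    eh, ew = exemplar.shape
    ex = exemplar - exemplar.mean()
    H = image.shape[0] - eh + 1
    W = image.shape[1] - ew + 1
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            patch = image[i:i + eh, j:j + ew]
            out[i, j] = np.sum((patch - patch.mean()) * ex)
    return out

def count_instances(image, exemplar, thresh):
    """Count matches as strict local maxima of the similarity map above `thresh`."""
    s = similarity_map(image, exemplar)
    count = 0
    for i in range(s.shape[0]):
        for j in range(s.shape[1]):
            if s[i, j] <= thresh:
                continue
            window = s[max(0, i - 1):i + 2, max(0, j - 1):j + 2]
            if s[i, j] >= window.max() and (window == s[i, j]).sum() == 1:
                count += 1
    return count
```

The class-agnostic property follows directly from the formulation: nothing in the matcher is specific to the object category, only to the exemplar supplied at query time.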
