DancingLines: An Analytical Scheme to Depict Cross-Platform Event Popularity
Nowadays, events often burst and propagate online through multiple
modern media such as social networks and search engines. Various studies
discuss event dissemination trends on individual media, while few focus on
event popularity analysis from a cross-platform perspective. The challenges
lie in the vast diversity of events and media, limited access to aligned
datasets across different media, and a great deal of noise in the datasets.
In this paper, we design DancingLines, an innovative scheme that captures
and quantitatively analyzes event popularity between pairwise text media. It
contains two models: TF-SW, a semantic-aware popularity quantification model
based on an integrated weight coefficient leveraging Word2Vec and TextRank;
and wDTW-CD, a pairwise event popularity time-series alignment model, adapted
from Dynamic Time Warping, that matches different event phases. We also
propose three metrics to interpret event popularity trends between pairwise
social platforms. Experimental results on eighteen real-world event datasets
from an influential social network and a popular search engine validate the
effectiveness and applicability of our scheme. DancingLines is demonstrated
to possess broad application potential for discovering knowledge about
various aspects of events and media.
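The wDTW-CD alignment model above is adapted from Dynamic Time Warping. As a point of reference, here is a minimal sketch of classic DTW between two one-dimensional popularity series; the paper's weighted, compound-distance variant is not reproduced, and the function name and inputs are illustrative:

```python
import numpy as np

def dtw_align(a, b):
    """Classic dynamic time warping between two 1-D series.

    Returns the cumulative alignment cost and the optimal warping path
    as a list of (i, j) index pairs.
    """
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    # Backtrack to recover the warping path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    path.reverse()
    return cost[n, m], path

dist, path = dtw_align([0, 1, 3, 1, 0], [0, 0, 1, 3, 1, 0])
```

Because DTW lets one sample match several samples of the other series, the second series (the first with a duplicated leading value) aligns to the first at zero cost, which is exactly the property that lets such models match event phases of different durations.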
Human detection in surveillance videos and its applications - a review
Detecting human beings accurately in a visual surveillance system is crucial for diverse application areas including abnormal event detection, human gait characterization, congestion analysis, person identification, gender classification and fall detection for elderly people. The first step of the detection process is to detect an object which is in motion. Object detection can be performed using background subtraction, optical flow and spatio-temporal filtering techniques. Once detected, a moving object can be classified as a human being using shape-based, texture-based or motion-based features. A comprehensive review, with comparisons, of available techniques for detecting human beings in surveillance videos is presented in this paper. The characteristics of a few benchmark datasets, as well as future research directions on human detection, are also discussed.
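As an illustration of the motion-detection step mentioned above, here is a minimal background-subtraction sketch using a running-average background model; the names and thresholds are illustrative, and the review surveys far more robust techniques:

```python
import numpy as np

def moving_foreground(frames, alpha=0.05, thresh=30):
    """Running-average background subtraction.

    frames: iterable of 2-D uint8 grayscale arrays.
    Yields a boolean foreground mask per frame: True where the pixel
    deviates from the slowly updated background model by more than thresh.
    """
    background = None
    for frame in frames:
        f = frame.astype(np.float32)
        if background is None:
            background = f.copy()
        mask = np.abs(f - background) > thresh
        # Update the background only where the scene looks static,
        # so moving objects are not absorbed into the model.
        background[~mask] = ((1 - alpha) * background[~mask]
                             + alpha * f[~mask])
        yield mask
```

On a synthetic sequence where a bright 2x2 square appears in the last frame, only that square is flagged as foreground; shape-, texture- or motion-based classification would then run on such masks.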
Multi-Sensor Event Detection using Shape Histograms
Vehicular sensor data consists of multiple time-series arising from a number
of sensors. Using such multi-sensor data we would like to detect occurrences of
specific events that vehicles encounter, e.g., corresponding to particular
maneuvers that a vehicle makes or conditions that it encounters. Events are
characterized by similar waveform patterns re-appearing within one or more
sensors. Further such patterns can be of variable duration. In this work, we
propose a method for detecting such events in time-series data using a novel
feature descriptor motivated by similar ideas in image processing. We define
the shape histogram: a constant dimension descriptor that nevertheless captures
patterns of variable duration. We demonstrate the efficacy of using shape
histograms as features to detect events in an SVM-based, multi-sensor,
supervised learning scenario, i.e., multiple time-series are used to detect an
event. We present results on real-life vehicular sensor data and show that our
technique performs better than available pattern detection implementations on
our data, and that it can also be used to combine features from multiple
sensors resulting in better accuracy than using any single sensor. Since
previous work on pattern detection in time-series has been in the single series
context, we also present results using our technique on multiple standard
time-series datasets and show that it is the most versatile in terms of how it
ranks compared to other published results.
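The paper's exact shape-histogram construction is not spelled out in the abstract; the following is only a rough sketch of the general idea, a fixed-dimension descriptor computed from a variable-length window, with all names illustrative:

```python
import numpy as np

def shape_histogram(window, n_bins=8):
    """Fixed-dimension descriptor for a variable-length 1-D window.

    The window is z-normalized and its values are binned into n_bins
    amplitude levels, so windows of different durations map to vectors
    of the same length (a crude analogue of shape histograms).
    """
    w = np.asarray(window, dtype=float)
    w = (w - w.mean()) / (w.std() + 1e-9)
    edges = np.linspace(-3, 3, n_bins + 1)
    hist, _ = np.histogram(w, bins=edges)
    return hist / max(len(w), 1)  # normalize out the window length
```

The key property is the one the abstract emphasizes: the same waveform sampled at different durations yields nearly identical descriptors, so an SVM can consume them as fixed-length features.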
Real Time Animation of Virtual Humans: A Trade-off Between Naturalness and Control
Virtual humans are employed in many interactive applications using 3D virtual environments, including (serious) games. The motion of such virtual humans should look realistic (or "natural") and allow interaction with the surroundings and other (virtual) humans. Current animation techniques differ in the trade-off they offer between motion naturalness and the control that can be exerted over the motion. We show mechanisms to parametrize, combine (on different body parts) and concatenate motions generated by different animation techniques. We discuss several aspects of motion naturalness and show how it can be evaluated. We conclude by showing the promise of combinations of different animation paradigms to enhance both naturalness and control.
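As a minimal illustration of combining motions on different body parts, here is a per-joint blend of two clips, assuming clips stored as per-joint angle arrays; the names and data layout are hypothetical, not the thesis's representation:

```python
import numpy as np

def blend_motions(clip_a, clip_b, weights):
    """Per-joint linear blend of two motion clips.

    clip_a, clip_b: arrays of shape (frames, joints) holding joint angles.
    weights: per-joint values in [0, 1]; 0 keeps clip_a, 1 keeps clip_b,
    so different body parts can follow different animation sources.
    """
    w = np.asarray(weights, dtype=float)
    return (1 - w) * clip_a + w * clip_b
```

Real joint rotations are usually blended with quaternion interpolation rather than linear angle mixing; this sketch only conveys the per-part weighting idea behind combining animation techniques on different body parts.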
Spatiotemporal visual analysis of human actions
In this dissertation we propose four methods for the recognition of human activities. In all four of
them, the representation of the activities is based on spatiotemporal features that are automatically
detected at areas where there is a significant amount of independent motion, that is, motion that is
due to ongoing activities in the scene. We propose the use of spatiotemporal salient points as features
throughout this dissertation. The algorithms presented, however, can be used with any kind of features,
as long as the latter are well localized and have a well-defined area of support in space and time. We
introduce the utilized spatiotemporal salient points in the first method presented in this dissertation.
By extending previous work on spatial saliency, we measure the variations in the information content of
pixel neighborhoods both in space and time, and detect the points at the locations and scales for which
this information content is locally maximized. In this way, an activity is represented as a collection of
spatiotemporal salient points. We propose an iterative linear space-time warping technique in order
to align the representations in space and time and propose to use Relevance Vector Machines (RVM)
in order to classify each example into an action category. In the second method proposed in this
dissertation we propose to enhance the acquired representations of the first method. More specifically,
we propose to track each detected point in time, and create representations based on sets of trajectories,
where each trajectory expresses how the information engulfed by each salient point evolves over time.
In order to deal with imperfect localization of the detected points, we augment the observation model
of the tracker with background information, acquired using a fully automatic background estimation
algorithm. In this way, the tracker favors solutions that contain a large number of foreground pixels.
In addition, we perform experiments where the tracked templates are localized on specific parts of the
body, like the hands and the head, and we further augment the tracker's observation model using a
human skin color model. Finally, we use a variant of the Longest Common Subsequence algorithm
(LCSS) in order to acquire a similarity measure between the resulting trajectory representations, and
RVMs for classification. In the third method that we propose, we assume that neighboring salient
points follow a similar motion. This is in contrast to the previous method, where each salient point was
tracked independently of its neighbors. More specifically, we propose to extract a novel set of visual
descriptors that are based on geometrical properties of three-dimensional piece-wise polynomials. The
latter are fitted on the spatiotemporal locations of salient points that fall within local spatiotemporal
neighborhoods, and are assumed to follow a similar motion. The extracted descriptors are invariant in
translation and scaling in space-time. Coupling the neighborhood dimensions to the scale at which the
corresponding spatiotemporal salient points are detected ensures the latter. The descriptors that are
extracted across the whole dataset are subsequently clustered in order to create a codebook, which is
used in order to represent the overall motion of the subjects within small temporal windows. Finally, we use boosting in order to select the most discriminative of these windows for each class, and RVMs for
classification. The fourth and last method addresses the joint problem of localization and recognition
of human activities depicted in unsegmented image sequences. Its main contribution is the use of an
implicit representation of the spatiotemporal shape of the activity, which relies on the spatiotemporal
localization of characteristic ensembles of spatiotemporal features. The latter are localized around
automatically detected salient points. Evidence for the spatiotemporal localization of the activity
is accumulated in a probabilistic spatiotemporal voting scheme. During training, we use boosting in
order to create codebooks of characteristic feature ensembles for each class. Subsequently, we construct
class-specific spatiotemporal models, which encode where in space and time each codeword ensemble
appears in the training set. During testing, each activated codeword ensemble casts probabilistic
votes concerning the spatiotemporal localization of the activity, according to the information stored
during training. We use a Mean Shift Mode estimation algorithm in order to extract the most probable
hypotheses from each resulting voting space. Each hypothesis corresponds to a spatiotemporal volume
which potentially engulfs the activity, and is verified by performing action category classification with
an RVM classifier.
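The LCSS-based trajectory similarity used in the second method can be sketched as follows; this is the standard epsilon-threshold LCSS, not necessarily the dissertation's exact variant, and the names are illustrative:

```python
def lcss_similarity(t1, t2, eps=0.5):
    """Longest-common-subsequence similarity between two trajectories.

    t1, t2: sequences of points (tuples of floats). Two points match when
    every coordinate differs by less than eps. Returns the LCSS length
    normalized by the shorter trajectory, so 1.0 means a full match.
    """
    n, m = len(t1), len(t2)
    L = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if all(abs(a - b) < eps for a, b in zip(t1[i - 1], t2[j - 1])):
                L[i][j] = L[i - 1][j - 1] + 1
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])
    return L[n][m] / min(n, m)
```

Because LCSS skips unmatched points rather than forcing every point to pair up, it tolerates the imperfect localization and outliers that the tracking stage can produce.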
Geophysical data registration using modified plane-wave destruction filters
I propose a method to efficiently measure local shifts, slopes, and scaling functions between seismic traces using modified plane-wave destruction filters.
Plane-wave destruction can efficiently measure shifts of less than a few samples, making this algorithm particularly effective for detecting small shifts.
When shifts are large, amplitude-adjusted plane-wave destruction can also be used to refine shift estimates obtained by other methods.
Amplitude-adjusted plane-wave destruction separates estimation of local shifts and amplitude weights, allowing the time-shift to be measured more accurately.
This algorithm has clear applications to geophysical data registration problems, including time-lapse image registration, multicomponent image registration, automatic gather flattening, automatic seismic-well ties, and image merging.
The effectiveness of this algorithm in predicting shifts associated with fluid migration, wave mode conversions, and anisotropy and amplitude gradients associated with amplitude variations with offset or angle is demonstrated by applying the algorithm to a synthetic trace, a time-lapse field data example from the Cranfield CO2 sequestration project, a multicomponent field data example from West Texas, and the Mobil AVO prestack seismic data.
Finding correspondence between different parts of the same dataset falls into the same category of problems as local shift estimation.
Computation of structure-oriented amplitude gradients for attribute-assisted interpretation requires the estimation of local slopes by correlating reflections between neighboring seismic traces in an image.
One of the major challenges of interpreting seismic images is the delineation of reflection discontinuities that are related to geologic features, such as faults, channels, salt boundaries, and unconformities.
Visually prominent reflection features often overshadow these subtle discontinuous features which are critical to understanding the structural and depositional environment of the subsurface.
For this reason, precise manual interpretation of these reflection discontinuities in seismic images can be tedious and time-consuming, especially when data quality is poor.
Discontinuity enhancement attributes are commonly used to facilitate the interpretation process by enhancing edges in seismic images and providing a quantitative measure of the significance of discontinuous features.
These attributes require careful pre-processing to maintain geologic features and suppress acquisition and processing artifacts which may be artificially detected as a geologic edge.
The plane-wave Sobel filter cascades plane-wave destruction filters with plane-wave shaping in the transverse direction to compute an enhanced discontinuity attribute.
The plane-wave Sobel attribute can be applied directly to a seismic image to efficiently and effectively enhance discontinuous features, or to a coherence image to create a sharper and more detailed image.
I demonstrate the effectiveness of this method by applying it to two field data sets from offshore New Zealand and offshore Nova Scotia with several faults and channel features and compare the results to other coherence attributes.
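Plane-wave destruction itself is beyond a short sketch, but the underlying registration task, estimating the local shift between two traces, can be illustrated with a much simpler cross-correlation scan; the names are illustrative, and unlike plane-wave destruction this resolves only integer-sample shifts rather than the sub-sample regime the thesis targets:

```python
import numpy as np

def local_shift(trace_a, trace_b, max_lag=10):
    """Estimate the integer lag that best aligns trace_b to trace_a.

    Scans lags in [-max_lag, max_lag] and returns the one maximizing
    the cross-correlation with the circularly shifted trace.
    """
    a = np.asarray(trace_a, dtype=float)
    b = np.asarray(trace_b, dtype=float)
    best_lag, best_score = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        score = float(np.dot(a, np.roll(b, lag)))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag
```

Applied trace by trace, such lag estimates are the raw material for time-lapse and multicomponent registration; plane-wave destruction refines them to fractions of a sample.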
Functional Data Analysis of Amplitude and Phase Variation
The abundance of functional observations in scientific endeavors has led to a
significant development in tools for functional data analysis (FDA). This kind
of data comes with several challenges: infinite-dimensionality of function
spaces, observation noise, and so on. However, there is another interesting
phenomenon that creates problems in FDA. Functional data often come with
lateral displacements/deformations in curves, a phenomenon that is different
from height or amplitude variability and is termed phase variation. The
presence of phase variability often artificially inflates data variance, blurs
underlying data structures, and distorts principal components. While the
separation and/or removal of phase from amplitude data is desirable, this is a
difficult problem. In particular, a commonly used alignment procedure, based on
minimizing the L2 norm between functions, does not provide
satisfactory results. In this paper we motivate the importance of dealing with
the phase variability and summarize several current ideas for separating phase
and amplitude components. These approaches differ in the following: (1) the
definition and mathematical representation of phase variability, (2) the
objective functions that are used in functional data alignment, and (3) the
algorithmic tools for solving estimation/optimization problems. We use simple
examples to illustrate various approaches and to provide useful contrast
between them.

Comment: Published at http://dx.doi.org/10.1214/15-STS524 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).
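The variance-inflating effect of phase variability described in the abstract is easy to reproduce: averaging identical bumps that differ only in lateral displacement flattens the cross-sectional mean, while aligning them first (here crudely, by peak location) restores it. A small illustrative sketch:

```python
import numpy as np

t = np.linspace(0, 1, 200)
shifts = [-0.1, -0.05, 0.0, 0.05, 0.1]
# Identical bumps that differ only in phase (lateral displacement).
curves = np.array([np.exp(-0.5 * ((t - 0.5 - s) / 0.05) ** 2)
                   for s in shifts])

unaligned_mean = curves.mean(axis=0)

# Align each curve to the central one by shifting its peak into place;
# this crude step removes (most of) the phase component.
ref_peak = np.argmax(curves[2])
aligned = np.array([np.roll(c, ref_peak - np.argmax(c)) for c in curves])
aligned_mean = aligned.mean(axis=0)
```

The unaligned cross-sectional mean is flattened and widened relative to any single bump, and the pointwise variance is inflated; after alignment the mean recovers its height and the variance collapses, which is exactly why separating phase from amplitude matters before computing summaries like principal components.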