And/or trees: a local limit point of view
We present a new and universal approach to the study of random and/or trees, unifying in one framework many different models, including some novel models not yet treated in the literature. An and/or tree is a Boolean expression represented in (one of) its tree shapes. Fix a number of variables, take a sequence of random (rooted) trees of increasing sizes, and label each of these random trees uniformly at random to obtain a random Boolean expression on those variables. We prove that, under rather weak local conditions on the sequence of random trees, the distribution this procedure induces on Boolean functions converges as the tree size tends to infinity. In particular, we characterise two different behaviours of the limit distribution depending on the shape of the local limit of the tree sequence: a degenerate case, when the local limit has no leaves, and a non-degenerate case, which we describe in more detail under stronger but reasonable conditions. In the latter case, we provide a relationship between the probability of a given Boolean function and its complexity. The examples we cover include, in a unified way, trees that interpolate between models with logarithmic typical distances (such as random binary search trees) and models with square-root typical distances (such as conditioned Galton--Watson trees).
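The labelling procedure in the abstract can be sketched concretely. The following is a minimal illustrative implementation, not the paper's construction: the tree encoding, function names, and the uniform gate/literal labelling are assumptions made for the sake of the example. Internal nodes of a given binary tree are labelled with and/or gates and leaves with (possibly negated) variables, all uniformly at random, yielding a random Boolean function.

```python
import random

def random_label(tree, k, rng):
    """Label a plane binary tree uniformly at random.

    tree: nested tuple (left, right) for an internal node, None for a leaf.
    Returns ('and'|'or', left, right) for gates, ('lit', var, negated) for leaves.
    """
    if tree is None:  # leaf: pick one of the 2k literals uniformly
        return ('lit', rng.randrange(k), rng.random() < 0.5)
    gate = rng.choice(['and', 'or'])
    return (gate, random_label(tree[0], k, rng), random_label(tree[1], k, rng))

def evaluate(expr, assignment):
    """Evaluate a labelled tree on a Boolean assignment (list of bools)."""
    if expr[0] == 'lit':
        _, var, neg = expr
        return assignment[var] ^ neg
    a, b = evaluate(expr[1], assignment), evaluate(expr[2], assignment)
    return (a and b) if expr[0] == 'and' else (a or b)

def truth_table(expr, k):
    """The Boolean function computed by the expression, as a tuple of 2**k values."""
    return tuple(evaluate(expr, [bool(b >> i & 1) for i in range(k)])
                 for b in range(2 ** k))
```

Sampling many such labellings of a large random tree gives an empirical version of the induced distribution on Boolean functions that the paper studies.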
Connectivity Properties of the Flip Graph After Forbidding Triangulation Edges
The flip graph of a set of points in the plane has a vertex for every triangulation of the set, and an edge between two triangulations that differ by one flip, which replaces one triangulation edge by another. The flip graph is known to have the following connectivity properties:
(1) the flip graph is connected;
(2) connectivity still holds
when restricted to triangulations containing some constrained edges between the points;
(3) for point sets in general position, the flip graph has vertex connectivity that grows linearly with the number of points, a recent result of Wagner and Welzl (SODA 2020).
We introduce the study of connectivity properties of the flip graph when some edges between points are forbidden. An edge between two points is a flip cut edge if eliminating all triangulations containing that edge results in a disconnected flip graph. More generally, a set of edges between points of the point set is a flip cut set if eliminating all triangulations that contain an edge of the set results in a disconnected flip graph. The flip cut number of the point set is the minimum size of a flip cut set.
We give a characterization of flip cut edges that leads to an efficient algorithm to test whether an edge is a flip cut edge and, with that as preprocessing, an efficient algorithm to test whether two triangulations lie in the same connected component of the flip graph. For a set of points in convex position (whose flip graph is the 1-skeleton of the associahedron) we determine the flip cut number exactly.
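For intuition, the flip graph itself is easy to build by brute force in the convex-position case mentioned above, where every interior edge is flippable. This toy sketch (representations and names are my own, not the paper's) encodes a triangulation of a convex polygon as a set of triangles and enumerates the flip graph by breadth-first search.

```python
from itertools import combinations
from collections import deque

def fan(n):
    """Fan triangulation of the convex n-gon with vertices 0..n-1."""
    return frozenset(frozenset({0, i, i + 1}) for i in range(1, n - 1))

def flips(tri):
    """Yield all triangulations reachable from `tri` by a single flip."""
    for t1, t2 in combinations(tri, 2):
        shared = t1 & t2
        if len(shared) == 2:            # t1, t2 share a diagonal of a convex quad
            a, b = (t1 | t2) - shared   # the two opposite vertices of the quad
            c, d = shared
            yield (tri - {t1, t2}) | {frozenset({a, b, c}), frozenset({a, b, d})}

def flip_graph(n):
    """BFS over all triangulations of the convex n-gon; returns an adjacency dict."""
    start, adj = fan(n), {}
    queue = deque([start])
    while queue:
        tri = queue.popleft()
        if tri in adj:
            continue
        adj[tri] = set(flips(tri))
        queue.extend(adj[tri])
    return adj
```

For the convex pentagon this produces the 5 triangulations of the associahedron's 1-skeleton, each with exactly two flip neighbours (a 5-cycle); forbidding an edge then corresponds to deleting every vertex whose triangle set uses it.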
Subspace discovery for video anomaly detection
PhD thesis. In automated video surveillance, anomaly detection is a challenging task. We address
this task as a novelty detection problem where pattern description is limited
and labelling information is available only for a small sample of normal instances.
Classification under these conditions is prone to over-fitting. The contribution of this
work is to propose a novel video abnormality detection method that does not need
object detection and tracking. The method is based on subspace learning to discover
a subspace where abnormality detection is easier to perform, without the need for
detailed annotation and description of these patterns. The problem is formulated as
one-class classification utilising a low dimensional subspace, where a novelty classifier
is used to learn normal actions automatically and then to detect abnormal actions
from low-level features extracted from a region of interest. The subspace is discovered
(using both labelled and unlabelled data) by a locality preserving graph-based algorithm
that utilises the Graph Laplacian of a specially designed parameter-less nearest
neighbour graph.
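The graph-based subspace discovery step can be illustrated with a generic Laplacian eigenmaps embedding on a k-nearest-neighbour graph. This is a hedged sketch of the general technique named above, not the thesis's parameter-less graph construction or its exact classifier; the graph rule and all names here are assumptions.

```python
import numpy as np

def knn_graph(X, k):
    """Symmetric 0/1 adjacency matrix of a k-nearest-neighbour graph."""
    d = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)           # no self-edges
    W = np.zeros_like(d)
    idx = np.argsort(d, axis=1)[:, :k]    # k nearest neighbours per point
    rows = np.repeat(np.arange(len(X)), k)
    W[rows, idx.ravel()] = 1.0
    return np.maximum(W, W.T)             # symmetrise

def laplacian_embedding(X, k=5, dim=2):
    """Embed X into a low-dimensional subspace that preserves graph locality."""
    W = knn_graph(X, k)
    L = np.diag(W.sum(1)) - W             # unnormalised graph Laplacian
    vals, vecs = np.linalg.eigh(L)
    return vecs[:, 1:dim + 1]             # skip the constant eigenvector
```

A one-class classifier would then be trained on the embedded normal samples; the point of the construction is that both labelled and unlabelled data shape the graph, and hence the subspace.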
The methodology compares favourably with alternative subspace learning algorithms
(both linear and non-linear) and direct one-class classification schemes commonly
used for off-line abnormality detection in synthetic and real data. Based on
these findings, the framework is extended to on-line abnormality detection in video
sequences, utilising multiple independent detectors deployed over the image frame to
learn the local normal patterns and infer abnormality for the complete scene. The
method is compared with an alternative linear method to establish advantages and
limitations in on-line abnormality detection scenarios. Analysis shows that the alternative
approach is better suited for cases where the subspace learning is restricted to
the labelled samples, while in the presence of additional unlabelled data the proposed
approach using graph-based subspace learning is more appropriate.
Defining Interaction within Immersive Virtual Environments
PhD thesis. This thesis is concerned with the design of Virtual Environments (VEs) -
in particular with the tools and techniques used to describe interesting and
useful environments. This concern is not only with respect to the appearance
of objects in the VE but also with their behaviours and their reactions to
actions of the participants. The main research hypothesis is that there are
several advantages to constructing these interactions and behaviours whilst
remaining immersed within the VE which they describe. These advantages
include the fact that editing is done interactively with immediate effect and
without having to resort to the usual edit-compile-test cycle. This means
that the participant doesn't have to leave the VE and lose their sense of
presence within it, and editing tasks can take advantage of the enhanced
spatial cognition and naturalistic interaction metaphors a VE provides.
To this end a data flow dialogue architecture with an immersive virtual
environment presentation system was designed and built. The data flow
consists of streams of data that originate at sensors that register the body
state of the participant, flowing through filters that modify the streams and
affect the VE.
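The sensor-to-filter data flow described above can be sketched with plain generator pipelines. This is illustrative only; the stream names, the particular filters, and their wiring are invented for the example and are not the VEDA system's API.

```python
def sensor(samples):
    """A sensor originates a stream of body-state readings."""
    yield from samples

def smooth(stream, alpha=0.5):
    """A filter: exponential smoothing of a scalar stream."""
    prev = None
    for x in stream:
        prev = x if prev is None else alpha * x + (1 - alpha) * prev
        yield prev

def threshold(stream, limit):
    """A filter that turns the stream into events affecting the environment."""
    for x in stream:
        yield ('move', x) if x > limit else ('idle', x)

# Wire a sensor through two filters into the environment:
events = list(threshold(smooth(sensor([0.0, 1.0, 2.0, 3.0])), limit=1.0))
```

The appeal of this style, as in the thesis's architecture, is that filters can be inserted, removed, or rewired while the streams keep flowing.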
The requirements for such a system and the filters it should contain are
derived from two pieces of work on interaction metaphors, one based on
a desktop system using a novel input device and the second a navigation
technique for an immersive system. The analysis of these metaphors highlighted
particular tasks that such a virtual environment dialogue architecture
(VEDA) system might be used to solve, and illustrated the scope of interactions
that should be accommodated.
Initial evaluation of the VEDA system is provided by moderately sized
demonstration environments and tools constructed by the author. Further
evaluation is provided by an in-depth study where three novice VE designers
were invited to construct VEs with the VEDA system. This highlighted the
flexibility that the VEDA approach provides and the utility of the immersive
presentation over traditional techniques in that it allows the participant to
use more natural and expressive techniques in the construction process. In
other words the evaluation shows how the immersive facilities of VEs can be
exploited in the process of constructing further VEs.
Building models from multiple point sets with kernel density estimation
One of the fundamental problems in computer vision is point set registration. Point
set registration finds use in many important applications and in particular can be considered
one of the crucial stages involved in the reconstruction of models of physical
objects and environments from depth sensor data. The problem of globally aligning
multiple point sets, representing spatial shape measurements from varying sensor viewpoints,
into a common frame of reference is a complex but essential task: a large number of
critical applications depend on accurate and reliable model reconstructions.
In this thesis we focus on improving the quality and feasibility of model and environment
reconstruction through the enhancement of multi-view point set registration
techniques. The thesis makes the following contributions: First, we demonstrate that
employing kernel density estimation to reason about the unknown generating surfaces
that range sensors measure allows us to express measurement variability, uncertainty
and also to separate the problems of model design and viewpoint alignment optimisation.
Our surface estimates define novel view alignment objective functions that inform
the registration process. Our surfaces can be estimated from point clouds in a data-driven
fashion. Through experiments on a variety of datasets we demonstrate that we
have developed a novel and effective solution to the simultaneous multi-view registration
problem.
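The core idea of scoring an alignment against a kernel density estimate can be sketched as follows. This is a hedged, generic 2-D sketch of the technique named above (an isotropic Gaussian KDE over one point set, used as an unnormalised alignment objective for a rigid transform of another), not the thesis's exact objective, kernel, or parametrisation.

```python
import numpy as np

def kde_logpdf(points, query, h):
    """Log of an (unnormalised) isotropic Gaussian KDE built on `points`,
    evaluated at each row of `query`; h is the kernel bandwidth."""
    d2 = ((query[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    k = np.exp(-0.5 * d2 / h ** 2)
    return np.log(k.mean(1) + 1e-12)     # epsilon guards log(0)

def alignment_score(reference, moving, angle, t, h=0.1):
    """Mean log-density of `moving` under the reference KDE after applying
    a 2-D rigid transform (rotation by `angle`, then translation `t`)."""
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s], [s, c]])
    return kde_logpdf(reference, moving @ R.T + t, h).mean()
```

Maximising such a score over the transform parameters of every view, jointly, is the shape of the multi-view optimisation the thesis develops; the density model also decouples surface estimation from the alignment search, as claimed above.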
We then focus on constructing a distributed computation framework capable of solving
generic high-throughput computational problems. We present a novel task-farming
model that we call Semi-Synchronised Task Farming (SSTF), capable of modelling and
subsequently solving computationally distributable problems that benefit from both
independent and dependent distributed components and a level of communication between
process elements. We demonstrate that this framework is a novel schema for
parallel computer vision algorithms and evaluate the performance to establish computational
gains over serial implementations. We couple this framework with an accurate
computation-time prediction model to contribute a novel structure appropriate for
addressing expensive real-world algorithms with substantial parallel performance and
predictable time savings.
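The semi-synchronised pattern described above, independent tasks farmed out in parallel with a synchronisation point before each dependent stage, can be sketched generically. The function name and staging interface here are illustrative assumptions, not the thesis's SSTF API.

```python
from concurrent.futures import ThreadPoolExecutor

def task_farm(stages, data, workers=4):
    """Apply each stage to all items in parallel; the list() acts as a
    barrier, so stage i+1 only starts once every task of stage i is done."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for stage in stages:
            data = list(pool.map(stage, data))
    return data

# Two stages: an independent map, then a dependent stage over its results.
result = task_farm([lambda x: x * x, lambda x: x + 1], range(5))
```

Coupling such a farm with a per-task cost model, as the thesis does, is what makes the completion time predictable: the barrier time of each stage is governed by its slowest task.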
Finally, we focus on a timely instance of the multi-view registration problem: modern
range sensors provide large numbers of viewpoint samples that result in an abundance
of depth data information. The ability to utilise this abundance of depth data in a
feasible and principled fashion is of importance to many emerging application areas
making use of spatial information. We develop novel methodology for the registration
of depth measurements acquired from many viewpoints capturing physical object
surfaces. By defining registration and alignment quality metrics based on our density
estimation framework we construct an optimisation methodology that implicitly considers
all viewpoints simultaneously. We use a non-parametric data-driven approach
to consider varying object complexity and guide large view-set spatial transform optimisations.
By aligning large numbers of partial, arbitrary-pose views we evaluate this
strategy quantitatively on large view-set range sensor data where we find that we can
improve registration accuracy over existing methods and contribute increased registration
robustness to the magnitude of coarse seed alignment. This allows large-scale
registration on problem instances exhibiting varying object complexity with the added
advantage of massive parallel efficiency.
Nanoinformatics
Machine learning; Big data; Atomic resolution characterization; First-principles calculations; Nanomaterials synthesis