Speeding up Simplification of Polygonal Curves using Nested Approximations
We develop a multiresolution approach to the problem of polygonal curve
approximation. We show theoretically and experimentally that, if the
simplification algorithm A used between any two successive levels of resolution
satisfies some conditions, the multiresolution algorithm MR will have a
complexity lower than that of A. In particular, we show that if A has
O(N²/K) complexity (the complexity of a reduced-search dynamic programming
approach), where N and K are respectively the initial and the final number of
segments, the complexity of MR is in O(N). We experimentally compare the
outcomes of MR with those of the optimal "full search" dynamic programming
solution and of classical merge and split approaches. The experimental
evaluations confirm the theoretical derivations and show that the proposed
approach, evaluated on 2D coastal maps, either has a lower complexity or
provides polygonal approximations closer to the initial curves.
Comment: 12 pages + figure
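As a rough illustration of the nested-approximation idea, the sketch below recursively simplifies a half-resolution copy of the curve and maps the breakpoints back to the finer level. All names are ours, and a generic top-down split heuristic stands in for the between-level simplifier; this is not the paper's algorithm A.

```python
import math

def point_seg_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (x, y), (x1, y1), (x2, y2) = p, a, b
    dx, dy = x2 - x1, y2 - y1
    norm = math.hypot(dx, dy) or 1.0
    return abs(dy * (x - x1) - dx * (y - y1)) / norm

def split_simplify(pts, k):
    """Classic top-down 'split' heuristic: repeatedly split the segment whose
    farthest interior point deviates most, until the polyline has k segments."""
    breaks = [0, len(pts) - 1]
    while len(breaks) - 1 < k:
        best = None
        for i, j in zip(breaks, breaks[1:]):
            for m in range(i + 1, j):
                d = point_seg_dist(pts[m], pts[i], pts[j])
                if best is None or d > best[0]:
                    best = (d, m)
        if best is None or best[0] == 0.0:
            break  # nothing left worth splitting
        breaks.append(best[1])
        breaks.sort()
    return breaks

def multires_simplify(pts, k):
    """Multiresolution wrapper (a sketch of the nested-approximation idea,
    not the paper's exact algorithm): recursively simplify a half-resolution
    copy of the curve, then map the breakpoints back to the finer level."""
    if len(pts) <= 2 * k + 2:
        return split_simplify(pts, k)
    coarse_breaks = multires_simplify(pts[::2], k)
    # map coarse indices back; the last breakpoint must stay the endpoint
    return sorted({2 * b for b in coarse_breaks[:-1]} | {len(pts) - 1})
```

Because each level only refines the previous one, most of the work happens on the small coarse copies, which is the intuition behind the drop from O(N²/K) toward O(N).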
Down-Sampling coupled to Elastic Kernel Machines for Efficient Recognition of Isolated Gestures
In the field of gestural action recognition, many studies have focused on
dimensionality reduction along the spatial axis, to reduce both the variability
of gestural sequences expressed in the reduced space, and the computational
complexity of their processing. It is noticeable that very few of these methods
have explicitly addressed the dimensionality reduction along the time axis.
This is however a major issue with regard to the use of elastic distances
characterized by a quadratic complexity. To partially fill this apparent gap,
we present in this paper an approach based on temporal down-sampling combined
with elastic kernel machine learning. We experimentally show, on two data sets
that are widely referenced in the domain of human gesture recognition, and very
different in terms of quality of motion capture, that it is possible to
significantly reduce the number of skeleton frames while maintaining a good
recognition rate. The method proves to give satisfactory results at a level
currently reached by state-of-the-art methods on these data sets. The
computational complexity reduction makes this approach eligible for real-time
applications.
Comment: ICPR 2014, International Conference on Pattern Recognition, Stockholm, Sweden (2014)
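The cost argument can be made concrete with a toy sketch: a textbook O(nm) dynamic time warping (standing in for the paper's elastic distances) plus uniform temporal down-sampling. Function names are ours, and 1-D scalars replace full skeleton frames for brevity.

```python
def dtw(a, b):
    """Textbook dynamic time warping between two 1-D sequences in
    O(len(a) * len(b)) time; skeleton feature vectors would replace
    the scalars in a real gesture recognizer."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # deletion
                                 D[i][j - 1],      # insertion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

def downsample(frames, factor):
    """Uniform temporal down-sampling: keep every `factor`-th frame, so the
    quadratic elastic-distance cost drops by roughly factor**2."""
    return frames[::factor]
```

Down-sampling by 2 shrinks the DTW table from n·m to (n/2)·(m/2) cells, the factor-of-four saving that makes elastic kernels eligible for real-time use.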
Investigations into the sampling frequency of mobility
Recent studies have leveraged tracking techniques based on positioning technologies to discover new knowledge about human mobility. These investigations have revealed, among others, a high spatiotemporal regularity of individual movement patterns. Building on these findings, we aim at answering the question "at what frequency should one sample individual human movements so that they can be reconstructed from the collected samples with minimum loss of information?". Our quest for a response leads to the discovery of (i) seemingly universal spectral properties of human mobility, and (ii) a linear scaling law of the localization error with respect to the sampling interval. Our findings are based on the analysis of fine-grained GPS trajectories of 119 users worldwide. The applications of our findings are related to a number of fields relevant to ubiquitous computing, such as energy-efficient mobile computing, location-based service operations, active probing of subscribers' positions in mobile networks and trajectory data compression.
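The sampling-interval/error trade-off described above can be illustrated with a toy reconstruction experiment (ours, not the paper's estimator): down-sample a trajectory, rebuild the skipped fixes by linear interpolation, and measure the mean localization error.

```python
import math

def reconstruction_error(traj, interval):
    """Down-sample an (x, y) trajectory to every `interval`-th fix, rebuild
    the skipped positions by linear interpolation between the kept fixes,
    and return the mean Euclidean localization error."""
    kept = list(range(0, len(traj), interval))
    if kept[-1] != len(traj) - 1:
        kept.append(len(traj) - 1)  # always keep the final fix
    total, count = 0.0, 0
    for a, b in zip(kept, kept[1:]):
        for t in range(a + 1, b):
            f = (t - a) / (b - a)
            ix = traj[a][0] + f * (traj[b][0] - traj[a][0])
            iy = traj[a][1] + f * (traj[b][1] - traj[a][1])
            total += math.hypot(traj[t][0] - ix, traj[t][1] - iy)
            count += 1
    return total / count if count else 0.0
```

On smooth synthetic trajectories the mean error grows with the sampling interval, the trend the abstract quantifies as a linear scaling law on real GPS traces.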
Multiresolutional Fault-Tolerant Sensor Integration and Object Recognition in Images.
This dissertation applies multiresolution methods to two important problems in signal analysis. The problem of fault-tolerant sensor integration in distributed sensor networks is addressed, and an efficient multiresolutional algorithm for estimating the sensors' effective output is proposed. The problem of object/shape recognition in images is addressed in a multiresolutional setting using pyramidal decomposition of images with respect to an orthonormal wavelet basis. A new approach to efficient template matching to detect objects using computational geometric methods is put forward. An efficient paradigm for object recognition is described.
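The pyramidal decomposition underlying both applications can be sketched with a one-dimensional Haar transform, a minimal stand-in for the dissertation's orthonormal wavelet basis; function names are ours.

```python
def haar_level(signal):
    """One level of the (unnormalised) Haar wavelet transform:
    pairwise averages (coarse approximation) and differences (detail).
    Assumes an even-length signal."""
    avg = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    det = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return avg, det

def haar_pyramid(signal, levels):
    """Pyramidal decomposition: repeatedly coarsen the approximation band.
    Coarse-to-fine template matching can then reject most candidate
    positions cheaply before ever touching the detail coefficients."""
    details = []
    for _ in range(levels):
        signal, det = haar_level(signal)
        details.append(det)
    return signal, details
```

Matching a template first against the coarse averages, and descending into finer levels only where the coarse match is promising, is the usual source of the pyramidal speed-up.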
Management of spatial data for visualization on mobile devices
Vector-based mapping is emerging as a preferred format in Location-based
Services (LBS), because it can deliver an up-to-date and interactive map
visualization. The Progressive Transmission (PT) technique has been developed
to enable the efficient transmission of vector data over the internet by
delivering various incremental levels of detail (LoD). However, it is still
challenging to apply this technique in a mobile context due to many inherent
limitations of mobile devices, such as small screen size, slow processors and
limited memory. Taking account of these limitations, PT has been extended by
developing a framework of efficient data management for the visualization of
spatial data on mobile devices. A data generalization framework is proposed
and implemented in a software application. This application can significantly
reduce the volume of data for transmission and enable quick access to a
simplified version of the data while preserving appropriate visualization
quality. Using volunteered geographic information as a case study, the
framework shows flexibility in delivering up-to-date spatial information from
dynamic data sources.
Three models of PT are designed and implemented to transmit the additional
LoD refinements: a full-scale PT as an inverse of generalisation, a
view-dependent PT, and a heuristic optimised view-dependent PT. These models
are evaluated with user trials and application examples. The heuristic
optimised view-dependent PT has shown a significant enhancement over the
traditional PT in terms of bandwidth saving and smoothness of transitions.
A parallel data management strategy associated with three corresponding
algorithms has been developed to handle LoD spatial data on mobile clients.
This strategy enables map rendering to be performed in parallel with a process
which retrieves the data for the next map location the user will require. A
view-dependent approach has been integrated to monitor the volume of each LoD
for the visible area. The demonstration of a flexible rendering style shows
its potential use in visualizing dynamic geoprocessed data. Future work may
extend this to integrate topological constraints and semantic constraints for
enhancing the vector map visualization.
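The progressive-transmission idea (a coarse base layer followed by packets that each carry only the vertices they add back) can be sketched over vertex indices alone. This is a toy illustration of incremental LoD delivery, not the thesis's generalisation framework; all names are ours.

```python
def build_lod_stream(n_vertices, levels):
    """Server side: decimate a polyline's vertex ids into nested levels and
    emit a coarse base plus one refinement packet per level. Each packet
    lists only the vertex ids it restores, so no id travels twice."""
    layers = [list(range(n_vertices))]
    for _ in range(levels):
        layers.append(layers[-1][::2])  # keep every other vertex
    base = layers[-1]
    packets = [sorted(set(fine) - set(coarse))
               for fine, coarse in zip(reversed(layers[:-1]),
                                       reversed(layers[1:]))]
    return base, packets

def client_refine(base, packets, upto):
    """Client side: merge the first `upto` refinement packets into the
    current view, preserving vertex order along the polyline."""
    view = list(base)
    for pkt in packets[:upto]:
        view = sorted(view + pkt)
    return view
```

A mobile client can render the base immediately and apply packets as bandwidth allows, which is the bandwidth-saving behaviour the three PT models refine further.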
Human perception-oriented segmentation for triangle meshes
Mesh segmentation is an important topic in computer graphics, in particular in geometric computing. This is so because mesh segmentation techniques find many applications in movies, computer animation, virtual reality, mesh compression, and games. In fact, triangle meshes are widely used in interactive applications, so that their segmentation into meaningful parts (i.e., human perception-oriented, perceptive, or meaningful segmentation) is often seen as a way of speeding up user interaction, detecting collisions between mesh-covered objects in a 3D scene, and animating one or more meaningful parts (e.g., the head of a humanoid) independently of the other parts of a given object.
It happens that there is no known technique capable of correctly segmenting any mesh into meaningful parts. Some techniques are more adequate for non-freeform objects (e.g., mechanical parts geometrically defined by quadrics), while others perform better in the domain of freeform objects. Only recently have some techniques been developed for the entire universe of objects and shapes. Even worse is the fact that most segmentation techniques are not entirely automated, in the sense that almost all of them require some sort of prerequisites and user assistance. Summing up, these three challenges, related to perceptual proximity, generality and automation, are at the core of the work described in this thesis.
In order to face these challenges, we have developed the first contour-based mesh segmentation algorithm in the literature, which is inspired by the edge-based segmentation techniques used in image analysis, as opposed to region-based segmentation techniques. Its leading idea is to first find the contour of each region, and then to identify and collect all of its inner triangles. The mesh regions found correspond to ups and downs, which need not be strictly convex nor strictly concave, respectively. These regions, called relaxedly convex regions (or saliences) and relaxedly concave regions (or recesses), produce segmentations that are less sensitive to noise and, at the same time, more intuitive from the human point of view; hence the name human perception-oriented (HPO) segmentation. Besides, and unlike the current state of the art in mesh segmentation, the existence of these relaxed regions makes the algorithm suited to both non-freeform and freeform objects.
In this thesis, we have also tackled a fourth challenge, which is related to the fusion of mesh segmentation and multiresolution. Truly speaking, a plethora of segmentation techniques, as well as a number of multiresolution techniques, for triangle meshes already exist in the literature. However, it is not so common to find algorithms and data structures that fuse these two concepts, multiresolution and segmentation, into a symbiotic multiresolution scheme for both plain and segmented meshes, where a plain mesh is understood as a mesh with a single segment. So, we introduce such a novel multiresolution segmentation scheme, called the extended Ghost Cell (xGC) scheme. This scheme preserves the shape of the meshes in both global and local terms, i.e., mesh segments and their boundaries, as well as creases and apices, are preserved no matter the level of resolution we use for simplification/refinement of the mesh. Moreover, unlike other segmentation schemes, it makes it possible to have adjacent segments with two or more resolution levels of difference. This is particularly useful in computer animation, mesh compression and transmission, geometric computing, scientific visualization, and computer graphics.
In short, this thesis presents a fully automatic, general, and human perception-oriented scheme that symbiotically integrates the concepts of mesh segmentation and multiresolution.
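The contour-first strategy can be sketched as a flood fill over the triangle adjacency graph that never crosses a contour edge. This is a heavily simplified illustration of the leading idea (finding the contours themselves is the hard part, omitted here), and the input encoding is ours.

```python
from collections import deque

def segment_by_contours(tri_adjacency, contour_edges):
    """Given each region's boundary, collect its inner triangles: BFS over
    the triangle adjacency graph, never crossing a contour edge; each
    connected component becomes one segment. `tri_adjacency` maps a
    triangle id to its neighbours; `contour_edges` holds
    frozenset({t1, t2}) pairs of triangles separated by a contour."""
    seg_of, next_seg = {}, 0
    for start in tri_adjacency:
        if start in seg_of:
            continue
        seg_of[start] = next_seg
        queue = deque([start])
        while queue:
            t = queue.popleft()
            for nb in tri_adjacency[t]:
                if nb not in seg_of and frozenset((t, nb)) not in contour_edges:
                    seg_of[nb] = next_seg
                    queue.append(nb)
        next_seg += 1
    return seg_of
```

Once the contours are reliable, this grouping step is linear in the number of triangles, which is why the contour-based formulation is attractive.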
Algorithms for Geometric Covering and Piercing Problems
This thesis involves the study of a range of geometric covering and piercing problems, where the unifying thread is approximation using disks. While some of the problems addressed in this work are solved exactly with polynomial time algorithms, many problems are shown to be at least NP-hard. For the latter, approximation algorithms are the best that we can do in polynomial time assuming that P is not equal to NP.
One of the best known problems involving unit disks is the Discrete Unit Disk Cover (DUDC) problem, in which the input consists of a set of points P and a set of unit disks in the plane D, and the objective is to compute a subset of the disks of minimum cardinality which covers all of the points. Another perspective on the problem is to consider the centre points (denoted Q) of the disks D as an approximating set of points for P. An optimal solution to DUDC provides a minimal cardinality subset Q*, a subset of Q, so that each point in P is within unit distance of a point in Q*. In order to approximate the general DUDC problem, we also examine several restricted variants.
In the Line-Separable Discrete Unit Disk Cover (LSDUDC) problem, P and Q are separated by a line in the plane. We write that l^- is the half-plane defined by l containing P, and l^+ is the half-plane containing Q. LSDUDC may be solved exactly in O(m^2n) time using a greedy algorithm. We augment this result by describing a 2-approximate solution for the Assisted LSDUDC problem, where the union of all disks centred in l^+ covers all points in P, but we consider using disks centred in l^- as well to try to improve the solution.
Next, we describe the Within-Strip Discrete Unit Disk Cover (WSDUDC) problem, where P and Q are confined to a strip of the plane of height h. We show that this problem is NP-complete, and we provide a range of approximation algorithms for the problem with trade-offs between the approximation factor and running time.
We outline approximation algorithms for the general DUDC problem which make use of the algorithms for LSDUDC and WSDUDC. These results provide the fastest known approximation algorithms for DUDC. As with the WSDUDC results, we present a set of algorithms in which better approximation factors may be had at the expense of greater running time, ranging from a 15-approximate algorithm which runs in O(mn + m log m + n log n) time to an 18-approximate algorithm which runs in O(m^6 n + n log n) time.
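For intuition on the covering setting, the sketch below runs plain greedy set cover specialised to unit disks. It carries only the generic logarithmic set-cover guarantee, not the constant factors of the thesis's DUDC algorithms, and the encoding is ours.

```python
import math

def greedy_disk_cover(points, centres, radius=1.0):
    """Greedy set cover for the discrete disk-cover setting: repeatedly pick
    the disk covering the most still-uncovered points. Returns the indices
    of the chosen centres; raises if some point is not coverable at all."""
    uncovered = set(range(len(points)))
    covers = [{i for i in uncovered
               if math.hypot(points[i][0] - cx, points[i][1] - cy) <= radius}
              for cx, cy in centres]
    chosen = []
    while uncovered:
        best = max(range(len(centres)),
                   key=lambda d: len(covers[d] & uncovered))
        gain = covers[best] & uncovered
        if not gain:
            raise ValueError("some point is not covered by any disk")
        chosen.append(best)
        uncovered -= gain
    return chosen
```

The thesis's line-separable and within-strip decompositions exist precisely to beat this generic greedy bound with small constant approximation factors.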
The next problems that we study are Hausdorff Core problems. These problems accept an input polygon P, and we seek a convex polygon Q which is fully contained in P and minimizes the Hausdorff distance between P and Q. Interestingly, we show that this problem may be reduced to that of computing the minimum radius of a disk, call it k_opt, such that a convex polygon Q contained in P intersects all disks of radius k_opt centred on the vertices of P. We begin by describing a polynomial-time algorithm for the simple case where P has only a single reflex vertex. On general polygons, we provide a parameterized algorithm which performs a parametric search on the possible values of k_opt. The solution to the decision version of the problem, i.e., determining whether there exists a Hausdorff Core for P given k_opt, requires some novel insights. We also describe an FPTAS for the decision version of the Hausdorff Core problem.
Finally, we study Generalized Minimum Spanning Tree (GMST) problems, where the input consists of imprecise vertices, and the objective is to select a single point from each imprecise vertex in order to optimize the weight of the MST over the points. In keeping with one of the themes of the thesis, we begin by using disks as the imprecise vertices. We show that the minimization and maximization versions of this problem are NP-hard, and we describe some parameterized and approximation algorithms. Finally, we look at the case where the imprecise vertices consist of just two vertices each, and we show that the minimization version of the problem (which we call 2-GMST) remains NP-hard, even in the plane. We also provide an algorithm to solve the 2-GMST problem exactly if the combinatorial structure of the optimal solution is known.
We identify a number of open problems in this thesis that are worthy of further study.
Among them:
Is the Assisted LSDUDC problem NP-complete?
Can the WSDUDC results be used to obtain an improved PTAS for DUDC?
Are there classes of polygons for which the determination of the Hausdorff Core is easy?
Is there a PTAS for the maximum weight GMST problem on (unit) disks?
Is there a combinatorial approximation algorithm for the 2-GMST problem (particularly with an approximation factor under 4)?