Speeding up Simplification of Polygonal Curves using Nested Approximations
We develop a multiresolution approach to the problem of polygonal curve
approximation. We show theoretically and experimentally that, if the
simplification algorithm A used between any two successive levels of resolution
satisfies some conditions, the multiresolution algorithm MR will have a
complexity lower than that of A. In particular, we show that if A has
an O(N^2/K) complexity (the complexity of a reduced-search dynamic programming
solution approach), where N and K are respectively the initial and final numbers of
segments, the complexity of MR is in O(N). We experimentally compare the
outcomes of MR with those of the optimal "full search" dynamic programming
solution and of classical merge and split approaches. The experimental
evaluations confirm the theoretical derivations and show that the proposed
approach evaluated on 2D coastal maps either shows a lower complexity or
provides polygonal approximations closer to the initial curves.
Comment: 12 pages + figure
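The nested-approximation idea can be sketched as follows: run a base simplifier only between successive resolution levels, halving the vertex count each time, so no single call ever works on the full-resolution curve. This is an illustrative reconstruction, not the paper's exact algorithm; the merge-style base simplifier (a Visvalingam-Whyatt area criterion) and the halving ratio are assumptions.

```python
def triangle_area(a, b, c):
    # Absolute area of the triangle (a, b, c), used as a vertex-importance score.
    return abs((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])) / 2.0

def simplify_once(points, k):
    """One level of a merge-style simplifier: repeatedly drop the interior
    vertex whose removal changes the curve least, until k vertices remain."""
    pts = list(points)
    while len(pts) > k:
        areas = [triangle_area(pts[i - 1], pts[i], pts[i + 1])
                 for i in range(1, len(pts) - 1)]
        pts.pop(areas.index(min(areas)) + 1)  # endpoints are never removed
    return pts

def multiresolution_simplify(points, k, ratio=2):
    """Nested approximations: halve the vertex count per level, running the
    base simplifier only between successive levels, then finish at k."""
    pts = list(points)
    while len(pts) > k * ratio:
        pts = simplify_once(pts, len(pts) // ratio)
    return simplify_once(pts, k)
```

Each intermediate level bounds the work of the next call, which is what lets the overall cost stay linear in the input size when the base simplifier meets the paper's conditions.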
Down-Sampling coupled to Elastic Kernel Machines for Efficient Recognition of Isolated Gestures
In the field of gestural action recognition, many studies have focused on
dimensionality reduction along the spatial axis, to reduce both the variability
of gestural sequences expressed in the reduced space, and the computational
complexity of their processing. It is noticeable that very few of these methods
have explicitly addressed the dimensionality reduction along the time axis.
This is however a major issue with regard to the use of elastic distances
characterized by a quadratic complexity. To partially fill this apparent gap,
we present in this paper an approach based on temporal down-sampling associated
to elastic kernel machine learning. We experimentally show, on two data sets
that are widely referenced in the domain of human gesture recognition, and very
different in terms of quality of motion capture, that it is possible to
significantly reduce the number of skeleton frames while maintaining a good
recognition rate. The method proves to give satisfactory results at a level
currently reached by state-of-the-art methods on these data sets. The
computational complexity reduction makes this approach eligible for real-time
applications.
Comment: ICPR 2014, International Conference on Pattern Recognition, Stockholm, Sweden (2014)
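As a minimal sketch of why temporal down-sampling pays off with elastic distances: classic DTW between sequences of lengths n and m costs O(nm), so keeping every k-th skeleton frame divides the work by roughly k^2. The uniform subsampling and plain DTW below are simplifying assumptions; the paper's elastic kernel machines are not reproduced here.

```python
import math

def downsample(frames, factor):
    """Uniform temporal down-sampling: keep every `factor`-th skeleton frame."""
    return frames[::factor]

def dtw(a, b):
    """Classic O(len(a) * len(b)) dynamic time warping with Euclidean ground
    cost; this quadratic complexity is what down-sampling attacks."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(a[i - 1], b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

For example, comparing two 100-frame sequences costs 10,000 cell updates; down-sampling by a factor of 4 leaves only about 625.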
Collectively Simplifying Trajectories in a Database: A Query Accuracy Driven Approach
Increasing and massive volumes of trajectory data are being accumulated that
may serve a variety of applications, such as mining popular routes or
identifying ridesharing candidates. As storing and querying massive trajectory
data is costly, trajectory simplification techniques have been introduced that
intuitively aim to reduce the sizes of trajectories, thus reducing storage and
speeding up querying, while preserving as much information as possible.
Existing techniques rely mainly on hand-crafted error measures when deciding
which point to drop when simplifying a trajectory. While the hope may be that
such simplification affects the subsequent usability of the data only
minimally, the usability of the simplified data remains largely unexplored.
Instead of using error measures that may only indirectly yield
simplified trajectories with high usability, we adopt a direct approach to
simplification and present the first study of query accuracy driven trajectory
simplification, where the direct objective is to achieve a simplified
trajectory database that preserves the query accuracy of the original database
as much as possible. Specifically, we propose a multi-agent reinforcement
learning based solution with two agents working cooperatively to collectively
simplify trajectories in a database while optimizing query usability. Extensive
experiments on four real-world trajectory datasets show that the solution is
capable of consistently outperforming baseline solutions over various query
types and dynamics.
Comment: This paper has been accepted by ICDE 202
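For contrast with the query-accuracy-driven approach, this is roughly what a hand-crafted error-measure baseline looks like: a greedy one-pass simplifier that drops a point whenever its synchronized Euclidean distance (SED) from the line between its neighbours stays under a threshold. The function names and the greedy one-pass strategy are illustrative assumptions, not taken from the paper.

```python
def sed_error(p, a, b):
    """Synchronized Euclidean distance of point p = (t, x, y) from segment
    a-b: compare p against the position interpolated at p's timestamp."""
    ta, tb = a[0], b[0]
    r = 0.0 if tb == ta else (p[0] - ta) / (tb - ta)
    sx = a[1] + r * (b[1] - a[1])
    sy = a[2] + r * (b[2] - a[2])
    return ((p[1] - sx) ** 2 + (p[2] - sy) ** 2) ** 0.5

def simplify_sed(traj, eps):
    """Greedy one-pass simplification: keep a point only when the SED error
    of dropping it (w.r.t. the last kept point and the next point) exceeds eps."""
    if len(traj) <= 2:
        return list(traj)
    kept = [traj[0]]
    for i in range(1, len(traj) - 1):
        if sed_error(traj[i], kept[-1], traj[i + 1]) > eps:
            kept.append(traj[i])
    kept.append(traj[-1])
    return kept
```

The abstract's point is that a fixed geometric threshold like `eps` says nothing about how downstream queries degrade, which is the gap the query-accuracy-driven formulation closes.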
Investigations into the sampling frequency of mobility
Recent studies have leveraged tracking techniques based on positioning technologies to discover new knowledge about human mobility. These investigations have revealed, among others, a high spatiotemporal regularity of individual movement patterns. Building on these findings, we aim at answering the question "at what frequency should one sample individual human movements so that they can be reconstructed from the collected samples with minimum loss of information?". Our quest for a response leads to the discovery of (i) seemingly universal spectral properties of human mobility, and (ii) a linear scaling law of the localization error with respect to the sampling interval. Our findings are based on the analysis of fine-grained GPS trajectories of 119 users worldwide. The applications of our findings relate to a number of fields relevant to ubiquitous computing, such as energy-efficient mobile computing, location-based service operations, active probing of subscribers' positions in mobile networks, and trajectory data compression.
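The measurement behind such a scaling law can be sketched as: thin a trajectory to one fix per sampling interval, rebuild the positions in between by linear interpolation, and average the resulting localization error over all original fixes. This is a hypothetical illustration of the procedure on synthetic data, not the paper's actual pipeline; the helper names are ours.

```python
def interpolate(samples, t):
    """Linearly interpolate the position at time t from kept (t, x, y) fixes."""
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        if t0 <= t <= t1:
            r = 0.0 if t1 == t0 else (t - t0) / (t1 - t0)
            return (x0 + r * (x1 - x0), y0 + r * (y1 - y0))
    return samples[-1][1:]

def mean_localization_error(track, interval):
    """Down-sample `track` to one fix every `interval` steps, rebuild the
    trajectory by interpolation, and average the position error."""
    samples = track[::interval]
    if samples[-1] != track[-1]:
        samples.append(track[-1])  # always keep the final fix
    errs = []
    for t, x, y in track:
        ix, iy = interpolate(samples, t)
        errs.append(((x - ix) ** 2 + (y - iy) ** 2) ** 0.5)
    return sum(errs) / len(errs)
```

Sweeping `interval` over a real GPS track and plotting the mean error against it is how a linear relationship of the kind reported in the abstract would show up.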
Multiresolutional Fault-Tolerant Sensor Integration and Object Recognition in Images.
This dissertation applies multiresolution methods to two important problems in signal analysis. The problem of fault-tolerant sensor integration in distributed sensor networks is addressed, and an efficient multiresolutional algorithm for estimating the sensors' effective output is proposed. The problem of object/shape recognition in images is addressed in a multiresolutional setting using pyramidal decomposition of images with respect to an orthonormal wavelet basis. A new approach to efficient template matching to detect objects using computational geometric methods is put forward. An efficient paradigm for object recognition is described.
Management of spatial data for visualization on mobile devices
Vector-based mapping is emerging as a preferred format in Location-based Services (LBS), because it can deliver an up-to-date and interactive map visualization. The Progressive Transmission (PT) technique has been developed to enable the efficient transmission of vector data over the internet by delivering various incremental levels of detail (LoD). However, it is still challenging to apply this technique in a mobile context due to many inherent limitations of mobile devices, such as small screen size, slow processors and limited memory. Taking account of these limitations, PT has been extended by developing a framework of efficient data management for the visualization of spatial data on mobile devices.
A data generalization framework is proposed and implemented in a software application. This application can significantly reduce the volume of data for transmission and enable quick access to a simplified version of the data while preserving appropriate visualization quality. Using volunteered geographic information as a case study, the framework shows flexibility in delivering up-to-date spatial information from dynamic data sources.
Three models of PT are designed and implemented to transmit the additional LoD refinements: a full-scale PT as an inverse of generalisation, a view-dependent PT, and a heuristic optimised view-dependent PT. These models are evaluated with user trials and application examples. The heuristic optimised view-dependent PT has shown a significant enhancement over the traditional PT in terms of bandwidth saving and smoothness of transitions.
A parallel data management strategy associated with three corresponding algorithms has been developed to handle LoD spatial data on mobile clients. This strategy enables map rendering to be performed in parallel with a process that retrieves the data for the next map location the user will require. A view-dependent approach has been integrated to monitor the volume of each LoD for the visible area. The demonstration of a flexible rendering style shows its potential use in visualizing dynamic geoprocessed data. Future work may extend this to integrate topological constraints and semantic constraints for enhancing the vector map visualization.
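One way to picture progressive transmission of LoD refinements, sketched under our own simplifying assumptions (a fixed dyadic level assignment standing in for a real generalisation operator): the server splits the vertices of a polyline into a coarse base layer plus refinement layers, and the client rebuilds the geometry from however many layers it has received so far.

```python
def split_into_levels(points, n_levels=3):
    """Server side: assign each vertex a detail level. Level 0 is the coarse
    base (every 2**(n_levels-1)-th vertex); higher levels add the vertices
    in between. Each entry keeps its original index for later merging."""
    levels = [[] for _ in range(n_levels)]
    step = 2 ** (n_levels - 1)
    for i, p in enumerate(points):
        lvl = 0
        while lvl < n_levels - 1 and i % (step >> lvl) != 0:
            lvl += 1
        levels[lvl].append((i, p))
    return levels

def progressive_client(levels, upto):
    """Client side: merge the base layer with the first `upto` refinement
    layers, ordering by original index to rebuild the polyline."""
    received = [item for layer in levels[: upto + 1] for item in layer]
    return [p for _, p in sorted(received)]
```

A view-dependent variant would additionally filter each layer to the vertices inside the current viewport before transmission, which is where the bandwidth savings reported above come from.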
Human perception-oriented segmentation for triangle meshes
Mesh segmentation is an important research topic in computer graphics, in particular in geometric computing. This is so because mesh segmentation techniques find many applications in movies, computer animation, virtual reality, mesh compression, and games. In fact, triangle meshes are widely used in interactive applications, so that their segmentation into meaningful parts (i.e., human-perceptual segmentation, perceptive segmentation or meaningful segmentation) is often seen as a way of speeding up user interaction, detecting collisions between these mesh-covered objects in a 3D scene, as well as animating one or more meaningful parts (e.g., the head of a humanoid) independently of the other parts of a given object.
It happens that there is no known technique capable of correctly segmenting any mesh into meaningful parts. Some techniques are more adequate for non-freeform objects (e.g., quadric mechanical parts), while others perform better in the domain of freeform objects. Only recently have some techniques been developed for the entire universe of objects and shapes. Even worse is the fact that most segmentation techniques are not entirely automated, in the sense that almost all techniques require some sort of pre-requisites and user assistance. Summing up, these three challenges, related to perceptual proximity, generality and automation, are at the core of the work described in this thesis.
In order to face these challenges, we have developed the first contour-based mesh segmentation algorithm found in the literature, which is inspired by the edge-based segmentation techniques used in image analysis, as opposed to region-based segmentation techniques. Its leading idea is to first find the contour of each region, and then to identify and collect all of its inner triangles. The mesh regions found correspond to ups and downs, which need be neither strictly convex nor strictly concave, respectively. These regions, called relaxedly convex regions (or saliences) and relaxedly concave regions (or recesses), produce segmentations that are less sensitive to noise and, at the same time, are more intuitive from the human point of view; hence the name human perception-oriented (HPO) segmentation. Besides, and unlike the current state of the art in mesh segmentation, the existence of these relaxed regions makes the algorithm suited to both non-freeform and freeform objects.
In this thesis, we have also tackled a fourth challenge, which is related to the fusion of mesh segmentation and multiresolution. Truly speaking, a plethora of segmentation techniques, as well as a number of multiresolution techniques for triangle meshes, already exist in the literature. However, it is not so common to find algorithms and data structures that fuse these two concepts, multiresolution and segmentation, into a symbiotic multiresolution scheme for both plain and segmented meshes, in which a plain mesh is understood as a mesh with a single segment. So, we introduce such a novel multiresolution segmentation scheme, called the extended Ghost Cell (xGC) scheme. This scheme preserves the shape of the meshes in both global and local terms, i.e., mesh segments and their boundaries, as well as creases and apices, are preserved no matter the level of resolution we use for simplification/refinement of the mesh. Moreover, unlike other segmentation schemes, it makes it possible to have adjacent segments with two or more resolution levels of difference. This is particularly useful in computer animation, mesh compression and transmission, geometric computing, scientific visualization, and computer graphics.
In short, this thesis presents a fully automatic, general, and human perception-oriented scheme that symbiotically integrates the concepts of mesh segmentation and multiresolution.