
    Local Measurement and Reconstruction for Noisy Graph Signals

    The emerging field of signal processing on graphs plays an increasingly important role in processing signals and information related to networks. Existing works have shown that, under certain conditions, a smooth graph signal can be uniquely reconstructed from its decimation, i.e., data associated with a subset of vertices. However, in some potential applications (e.g., sensor networks with a clustering structure), the obtained data may be a combination of signals associated with several vertices rather than a decimation. In this paper, we propose a new concept of local measurement, which is a generalization of decimation. Using the local measurements, a local-set-based method named iterative local measurement reconstruction (ILMR) is proposed to reconstruct bandlimited graph signals. It is proved that ILMR can reconstruct the original signal perfectly under certain conditions, and its performance against noise is analyzed theoretically. The optimal choice of local weights and a greedy algorithm for local set partition are given in the sense of minimizing the expected reconstruction error. Compared with decimation, the proposed local measurement sampling and reconstruction scheme is more robust in noisy scenarios.
    Comment: 24 pages, 6 figures, 2 tables, journal manuscript
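The iterative scheme described above can be sketched in a few lines. The following is a minimal, hypothetical illustration (not the paper's exact ILMR): a bandlimited signal on a small path graph is sampled by weighted averages over disjoint local sets, and reconstruction alternates a least-squares back-projection of the measurement residual with a projection onto the bandlimited subspace. The graph, local sets, and weights are invented for illustration.

```python
import numpy as np

# Toy graph: a path with 6 vertices.
N = 6
A = np.zeros((N, N))
for i in range(N - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A            # combinatorial Laplacian
lam, U = np.linalg.eigh(L)                # eigenvalues ascending

K = 3                                     # bandwidth: first K eigenvectors
P = U[:, :K] @ U[:, :K].T                 # projector onto bandlimited subspace
x_true = U[:, :K] @ np.array([1.0, -0.5, 0.3])

# Local sets partition the vertices; each measurement is a weighted
# average of the signal over one local set (generalizing decimation).
local_sets = [[0, 1], [2, 3], [4, 5]]
weights = [np.ones(len(S)) / len(S) for S in local_sets]

def measure(x):
    return np.array([w @ x[S] for S, w in zip(local_sets, weights)])

m = measure(x_true)

# Iterative reconstruction: spread each measurement residual back over its
# local set (least-squares back-projection), then re-project onto the
# bandlimited subspace. Converges when the measurement operator is
# injective on the bandlimited subspace.
x = np.zeros(N)
for _ in range(200):
    r = m - measure(x)
    corr = np.zeros(N)
    for (S, w), ri in zip(zip(local_sets, weights), r):
        corr[S] += ri * w / (w @ w)
    x = P @ (x + corr)
```

With noise-free measurements this recovers `x_true` exactly up to floating-point precision; the paper's analysis concerns how the local weights and the local-set partition affect the error when the measurements are noisy.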

    A predictive approach for a real-time remote visualization of large meshes

    Remote access to large meshes has been the subject of study for several years. In this paper we contribute to the problem of remote mesh viewing, working on triangular meshes. After a review of existing remote-viewing methods, we propose a visualization approach based on a client-server architecture in which almost all operations are performed on the server. Our approach includes three main steps. First, the original mesh is partitioned, generating several fragments that can be accommodated by the assumed smaller Transmission Control Protocol (TCP) window size of the network. Second, a pre-simplification step generates simplified models of the fragments at different levels of detail, which accelerates the visualization process when a client (which we also call a remote user) requests a visualization of a specific area of interest. The final step is the actual visualization of the area that interests the client, who can view the area of interest more accurately and the out-of-context areas less accurately. In this step, reconstructing the object requires taking the connectivity of fragments into account before simplifying a fragment.
    Pestiv-3D project
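The server-side policy described above can be sketched simply. This is a hypothetical illustration, not the paper's implementation: fragment identifiers, the level-of-detail encoding, and the window budget are all invented names.

```python
# Sketch: pick a level of detail per fragment based on the client's area of
# interest, and split each serialized fragment into pieces sized for the
# assumed TCP window.

def split_into_chunks(payload: bytes, window_size: int) -> list:
    """Split a serialized fragment so each piece fits the assumed TCP window."""
    return [payload[i:i + window_size] for i in range(0, len(payload), window_size)]

def select_lod(fragment_center, area_of_interest, radius):
    """Full detail inside the client's area of interest, coarse outside."""
    dx = fragment_center[0] - area_of_interest[0]
    dy = fragment_center[1] - area_of_interest[1]
    inside = dx * dx + dy * dy <= radius * radius
    return 0 if inside else 2          # LOD 0 = full detail, LOD 2 = coarsest

# Example: two fragments, client focused near the origin.
fragments = {"frag_a": (0.0, 0.0), "frag_b": (10.0, 10.0)}
plan = {name: select_lod(c, (0.0, 0.0), 2.0) for name, c in fragments.items()}
print(plan)   # {'frag_a': 0, 'frag_b': 2}
```

The pre-simplified LODs are what makes this cheap at request time: the server only selects and streams, rather than simplifying on demand.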

    Error-driven adaptive resolutions for large scientific data sets

    The process of making observations and drawing conclusions from large data sets is an essential part of modern scientific research. However, the size of these data sets can easily exceed the available resources of a typical workstation, making visualization and analysis a formidable challenge. Many solutions, including multiresolution and adaptive resolution representations, have been proposed and implemented to address these problems. This thesis describes an error model for calculating and representing localized error from data reduction, and a process for constructing error-driven adaptive resolutions from this data, allowing fully renderable error-driven adaptive resolutions to be constructed from a single high-resolution data set. We evaluated the performance of adaptive resolutions generated with various parameters against the original data set. We found that adaptive resolutions generated with reasonable subdomain sizes and error tolerances show improved performance during visualization.
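The core idea can be illustrated in one dimension. The sketch below is an assumption-laden simplification of the approach described above: each subdomain is replaced by its mean only when the localized maximum error of that reduction stays within a tolerance; the block size, tolerance, and test signal are illustrative choices.

```python
import numpy as np

def adaptive_resolution(data: np.ndarray, block: int, tol: float):
    """Return per-block (value, is_coarse) pairs for a 1-D data set."""
    out = []
    for i in range(0, len(data), block):
        sub = data[i:i + block]
        coarse = sub.mean()
        err = np.abs(sub - coarse).max()      # localized error of the reduction
        if err <= tol:
            out.append((coarse, True))        # coarse block: one stored value
        else:
            out.append((sub.copy(), False))   # keep original resolution
    return out

data = np.concatenate([np.full(8, 1.0),               # smooth region
                       np.sin(np.linspace(0, 6, 8))]) # oscillating region
rep = adaptive_resolution(data, block=4, tol=0.05)
print([flag for _, flag in rep])   # smooth blocks coarsen, oscillating ones don't
```

Because the error is stored per subdomain, a renderer can display coarse blocks directly while retaining full resolution only where the tolerance would otherwise be violated.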

    3D Mesh Simplification. A survey of algorithms and CAD model simplification tests

    SimpliïŹcation of highly detailed CAD models is an important step when CAD models are visualized or by other means utilized in augmented reality applications. Without simpliïŹcation, CAD models may cause severe processing and storage is- sues especially in mobile devices. In addition, simpliïŹed models may have other advantages like better visual clarity or improved reliability when used for visual pose tracking. The geometry of CAD models is invariably presented in form of a 3D mesh. In this paper, we survey mesh simpliïŹcation algorithms in general and focus especially to algorithms that can be used to simplify CAD models. We test some commonly known algorithms with real world CAD data and characterize some new CAD related simpliïŹcation algorithms that have not been surveyed in previous mesh simpliïŹcation reviews.Siirretty Doriast

    A multi-resolution approach for adapting close character interaction

    Synthesizing close interactions such as dancing and fighting between characters is a challenging problem in computer animation. While encouraging results are presented in [Ho et al. 2010], the high computation cost makes the method unsuitable for interactive motion editing and synthesis. In this paper, we propose an efficient multiresolution approach in the temporal domain for editing and adapting close character interactions based on the Interaction Mesh framework. In particular, we divide the original large spacetime optimization problem into multiple smaller problems such that the user can observe the adapted motion while playing back the movements at run-time. Our approach is highly parallelizable and achieves high performance by making use of multi-core architectures. The method can be applied to a wide range of applications, including motion editing systems for animators and motion retargeting systems for humanoid robots.

    Using polyhedral models to automatically sketch idealized geometry for structural analysis

    Simplification of polyhedral models, which may incorporate large numbers of faces and nodes, is often required to reduce the amount of data, to allow efficient manipulation, and to speed up computation. Such a simplification process must be adapted to the use of the resulting polyhedral model. Several applications require simplified shapes that have the same topology as the original model (e.g. reverse engineering, medical applications, etc.). Nevertheless, in the fields of structural analysis and computer visualization, for example, several adaptations and idealizations of the initial geometry are often necessary. To this end, this paper proposes a new approach to simplify an initial manifold or non-manifold polyhedral model with respect to bounded errors specified by the user, or set up, for example, from a preliminary F.E. analysis. The topological changes which may occur during a simplification because of the specified bounded error (or tolerance) values are performed using specific curvature and topological criteria and operators. Moreover, topological changes, whether or not they preserve the manifoldness of the object, are managed simultaneously with the geometric operations of the simplification process.

    Adapted generalized lifting schemes for scalable lossless image coding

    Still image coding occasionally uses linear predictive coding together with multi-resolution decompositions, as may be found in several papers. Those approaches do not take into account all the information available at the decoder in the prediction stage. In this paper, we introduce an adapted generalized lifting scheme in which the predictor is built upon two filters, making it possible to take advantage of all this available information. With this structure included in a multi-resolution decomposition framework, we study two kinds of adaptation based on least-squares estimation, under different assumptions: either a global or a local second-order stationarity of the image. The efficiency of these decompositions in lossless coding is shown on synthetic images, and their performance is compared with that of well-known codecs (S+P, JPEG-LS, JPEG2000, CALIC) on actual images. Four families of images are distinguished: natural, MRI medical, satellite, and fingerprint textures. On natural and medical images, the performance of our codecs does not exceed that of the classical codecs. For satellite images and textures, however, they achieve a slight (about 0.05 to 0.08 bpp) coding gain over the other codecs that permit progressive coding in resolution, at the cost of a greater coding time.

    Ordered Statistics Vertex Extraction and Tracing Algorithm (OSVETA)

    We propose an algorithm for identifying the vertices of three-dimensional (3D) meshes that are most important for geometric shape creation. Extracting such a set of vertices from a 3D mesh is important in applications such as digital watermarking, but also as a component of optimization and triangulation. In the first step, the Ordered Statistics Vertex Extraction and Tracing Algorithm (OSVETA) precisely estimates the local curvature and the most important topological features of the mesh geometry. Using this ranking of geometric vertex importance, the algorithm traces and extracts a vector of vertices ordered by decreasing index of importance.
    Comment: Accepted for publication, with copyright transferred to Advances in Electrical and Computer Engineering, November 23rd, 201
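A common discrete-curvature estimate that fits the spirit of the first step is the angle deficit (2π minus the sum of incident triangle angles at a vertex). The sketch below is a hypothetical illustration, not OSVETA's exact criteria: it ranks vertices of a toy pyramid by the magnitude of their angle deficit, so the sharp apex comes first.

```python
import math

def angle_at(p, q, r):
    """Interior angle at vertex p in triangle (p, q, r)."""
    v1 = [q[i] - p[i] for i in range(3)]
    v2 = [r[i] - p[i] for i in range(3)]
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(a * a for a in v2))
    return math.acos(max(-1.0, min(1.0, dot / (n1 * n2))))

def rank_vertices(vertices, triangles):
    deficit = [2 * math.pi] * len(vertices)
    for a, b, c in triangles:
        deficit[a] -= angle_at(vertices[a], vertices[b], vertices[c])
        deficit[b] -= angle_at(vertices[b], vertices[c], vertices[a])
        deficit[c] -= angle_at(vertices[c], vertices[a], vertices[b])
    # Larger |angle deficit| -> sharper feature -> more important vertex.
    return sorted(range(len(vertices)), key=lambda i: -abs(deficit[i]))

# A closed pyramid: the apex (vertex 4) is the sharpest feature.
verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0), (0.5, 0.5, 1.0)]
tris = [(0, 1, 4), (1, 2, 4), (2, 3, 4), (3, 0, 4), (0, 2, 1), (0, 3, 2)]
print(rank_vertices(verts, tris)[0])   # 4
```

An ordered-importance vector like this is what downstream applications (e.g. watermark embedding) consume: they keep or protect the top-ranked vertices because removing them changes the shape most.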