Smooth Interpolation of Curve Networks with Surface Normals
Recent surface acquisition technologies based on microsensors produce three-space tangential curve data which can be transformed into a network of space curves with surface normals. This paper addresses the problem of surfacing an arbitrary closed 3D curve network with given surface normals. Thanks to the normal vector input, the patch-finding problem can be solved unambiguously and an initial piecewise-smooth triangle mesh is computed. The input normals are propagated throughout the mesh and used to compute mean curvature vectors. We then introduce a new variational optimization method in which the standard bi-Laplacian is penalized by a term based on the mean curvature vectors. The intuition behind this approach is to guide standard Laplacian-based variational methods with the curvature information extracted from the input normals. The normal input increases shape fidelity and makes it possible to achieve globally smooth and visually pleasing shapes.
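The penalized bi-Laplacian energy described in this abstract could plausibly take the following discrete form over mesh vertex positions X; this is a sketch, since the exact functional, weights, and discretization are not given in the abstract (lambda and the per-vertex indexing are assumptions):

```latex
% Hypothetical discrete energy: a bi-Laplacian fairness term plus a
% penalty tying the mesh Laplacian to target mean curvature vectors
% H_i derived from the input normals; lambda is a guidance weight.
E(X) = \sum_i \left\| (\Delta^2 X)_i \right\|^2
     + \lambda \sum_i \left\| (\Delta X)_i - H_i \right\|^2
```

Minimizing the first term alone recovers the standard bi-Laplacian fairing; the second term is what pulls the solution toward the curvature implied by the input normals.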
Comparative analysis of text classification algorithms for automated labelling of quranic verses
The ultimate goal of labelling a Quranic verse is to determine its corresponding theme. However, the existing labelling approach depends primarily on the availability of Quranic scholars with expertise in the Arabic language and Tafseer. In this paper, we propose to automate the labelling of Quranic verses using text classification algorithms. We applied three text classification algorithms, namely k-Nearest Neighbour, Support Vector Machine, and Naïve Bayes. In our experiments, the English translations of the verses are presented as features and classified as either “Shahadah” (the first pillar of Islam) or “Pray” (the second pillar of Islam). All three text classification algorithms achieve more than 70% accuracy in labelling the Quranic verses.
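A minimal sketch of the Naïve Bayes variant of such a labelling pipeline, on hypothetical toy data; the paper's actual verse translations, preprocessing, and feature representation are not given in the abstract, so the documents and helper names below are illustrative only:

```python
import math
from collections import Counter

def train_nb(docs, labels):
    """Train a multinomial Naive Bayes text classifier with Laplace smoothing."""
    classes = set(labels)
    priors = {c: math.log(labels.count(c) / len(labels)) for c in classes}
    word_counts = {c: Counter() for c in classes}
    for doc, c in zip(docs, labels):
        word_counts[c].update(doc.lower().split())
    vocab = set(w for c in classes for w in word_counts[c])
    likelihoods = {}
    for c in classes:
        total = sum(word_counts[c].values())
        likelihoods[c] = {w: math.log((word_counts[c][w] + 1) / (total + len(vocab)))
                          for w in vocab}
    return priors, likelihoods, vocab

def classify(text, priors, likelihoods, vocab):
    """Pick the class with the highest posterior log-probability."""
    scores = {}
    for c in priors:
        scores[c] = priors[c] + sum(likelihoods[c].get(w, 0.0)
                                    for w in text.lower().split() if w in vocab)
    return max(scores, key=scores.get)

# Hypothetical toy stand-ins for English verse translations.
docs = ["there is no god but god bear witness",
        "bear witness to the oneness of god",
        "establish the prayer at dawn",
        "bow down in prayer and prostrate"]
labels = ["Shahadah", "Shahadah", "Pray", "Pray"]
model = train_nb(docs, labels)
print(classify("bear witness", *model))   # → Shahadah
print(classify("the dawn prayer", *model))  # → Pray
```

The same train/classify split applies to the k-NN and SVM variants; only the scoring function changes.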
3D Point Capsule Networks
In this paper, we propose 3D point-capsule networks, an auto-encoder designed
to process sparse 3D point clouds while preserving spatial arrangements of the
input data. 3D capsule networks arise as a direct consequence of our novel
unified 3D auto-encoder formulation. Their dynamic routing scheme and the
peculiar 2D latent space deployed by our approach bring in improvements for
several common point cloud-related tasks, such as object classification, object
reconstruction and part segmentation as substantiated by our extensive
evaluations. Moreover, it enables new applications such as part interpolation
and replacement.
Comment: As published in CVPR 2019 (camera-ready version), with supplementary material.
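The dynamic routing scheme the abstract refers to can be sketched as the generic routing-by-agreement used in capsule networks; this is a NumPy toy, not the paper's exact unified 3D formulation, and the capsule counts and dimensions below are arbitrary:

```python
import numpy as np

def squash(v, axis=-1, eps=1e-8):
    """Capsule non-linearity: shrink vectors to length < 1, preserve direction."""
    n2 = np.sum(v ** 2, axis=axis, keepdims=True)
    return (n2 / (1.0 + n2)) * v / np.sqrt(n2 + eps)

def dynamic_routing(u_hat, iterations=3):
    """Routing-by-agreement over prediction vectors u_hat[i, j, :]
    (input capsule i -> output capsule j)."""
    n_in, n_out, _ = u_hat.shape
    b = np.zeros((n_in, n_out))                               # routing logits
    for _ in range(iterations):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # softmax over outputs
        s = np.einsum('ij,ijk->jk', c, u_hat)                 # coupling-weighted sum
        v = squash(s)                                         # output capsule vectors
        b += np.einsum('ijk,jk->ij', u_hat, v)                # agreement update
    return v

rng = np.random.default_rng(0)
u_hat = rng.normal(size=(32, 8, 16))  # 32 input capsules, 8 outputs, dim 16
v = dynamic_routing(u_hat)
print(v.shape)                        # (8, 16)
```

Inputs that agree with an output capsule's current vector get their coupling coefficients reinforced on each iteration, which is what lets the latent capsules preserve spatial arrangement information.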
Techniques for augmenting the visualisation of dynamic raster surfaces
Despite their aesthetic appeal and condensed nature, dynamic raster surface representations, such as a temporal series of a landform or an attribute series of a socio-economic attribute of an area, are often criticised for a lack of effective information delivery and interactivity. In this work, we readdress some of the previously raised reasons for these limitations: the information-laden quality of surface datasets, the lack of spatial and temporal continuity in the original data, and the limited scope for real-time interactivity. We demonstrate with examples that the use of four techniques, namely the re-expression of the surfaces as a framework of morphometric features, spatial generalisation, morphing with graphic lag, and brushing, can augment the visualisation of dynamic raster surfaces in temporal and attribute series.
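The morphing technique mentioned above restores temporal continuity between sparse time slices; a minimal sketch is a linear cross-dissolve between two raster snapshots (the paper's actual morphing method is not specified in the abstract, so this is illustrative only):

```python
import numpy as np

def morph_frames(a, b, n_steps):
    """Generate intermediate rasters between two surface snapshots by
    linear cross-dissolve: frame(t) = a + t * (b - a), t in [0, 1]."""
    return [a + t * (b - a) for t in np.linspace(0.0, 1.0, n_steps)]

# Toy example: morph a flat surface into a uniformly raised one.
a = np.zeros((4, 4))
b = np.ones((4, 4)) * 10.0
frames = morph_frames(a, b, 5)
print(frames[2][0, 0])   # 5.0 (the halfway frame)
```

Feature-aware morphing would additionally match morphometric features (peaks, ridges) between frames before interpolating, rather than blending cell values directly.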
Spectral filtering as a method of visualising and removing striped artefacts in digital elevation data
Spectral filtering was compared with traditional mean spatial filters to assess their ability to identify and remove striped artefacts in digital elevation data. The techniques were applied to two datasets: a 100 m contour-derived digital elevation model (DEM) of southern Norway and a 2 m LiDAR DSM of the Lake District, UK. Both datasets contained diagonal data artefacts that were found to propagate into subsequent terrain analysis. Spectral filtering used fast Fourier transform (FFT) frequency data to identify these artefacts in both datasets; they were removed by applying a cut filter prior to the inverse transform. Spectral filtering showed considerable advantages over mean spatial filters when both the magnitude and the spatial distribution of the elevation changes made were examined. Elevation changes from spectral filtering were restricted to the frequencies removed by the cut filter, were small in magnitude, and consequently avoided any global smoothing. Spectral filtering was thus found to avoid the smoothing inherent in kernel-based data editing, and provided a more informative measure of the data artefacts present in the FFT frequency domain. Artefacts were found to be heterogeneous across the surfaces, a result of their strong correlations with spatially autocorrelated variables: land cover and land-surface geometry. Spectral filtering performed better on the 100 m DEM, where signal and artefact were clearly distinguishable in the frequency data. Spectrally filtered digital elevation datasets were found to provide a superior and more precise representation of the land surface, and to be a more appropriate dataset for subsequent geomorphological applications.
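The cut-filter workflow above can be sketched in NumPy on a synthetic DEM with diagonal striping; the band geometry, widths, and the low-frequency guard radius below are assumptions for illustration, since the paper's actual filter design is interactive and dataset-specific:

```python
import numpy as np

def remove_stripes_fft(dem, angle_deg, half_width=2):
    """Suppress periodic striping whose spectral energy lies along a known
    orientation: zero a band of FFT coefficients through the spectrum
    centre, keep low frequencies (overall relief), inverse-transform."""
    F = np.fft.fftshift(np.fft.fft2(dem))
    rows, cols = dem.shape
    cy, cx = rows // 2, cols // 2
    y, x = np.mgrid[0:rows, 0:cols]
    theta = np.deg2rad(angle_deg)
    # Perpendicular distance from the line through the centre at angle theta.
    d = np.abs(-(x - cx) * np.sin(theta) + (y - cy) * np.cos(theta))
    r = np.hypot(x - cx, y - cy)
    cut = (d <= half_width) & (r > 5)   # spare the lowest frequencies
    F[cut] = 0.0
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))

# Synthetic DEM: smooth surface plus diagonal stripes of period 8 px.
yy, xx = np.mgrid[0:128, 0:128]
surface = np.sin(xx / 40.0) + np.cos(yy / 30.0)
stripes = 0.3 * np.sin(2 * np.pi * (xx + yy) / 8.0)
cleaned = remove_stripes_fft(surface + stripes, angle_deg=45)
print(np.abs(cleaned - surface).mean() < np.abs(stripes).mean())  # True
```

Because only the cut band is altered, elevation changes stay confined to the stripe frequencies, which is the advantage over kernel smoothing noted above.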
The What-And-Where Filter: A Spatial Mapping Neural Network for Object Recognition and Image Understanding
The What-and-Where filter forms part of a neural network architecture for spatial mapping, object recognition, and image understanding. The Where filter responds to an image figure that has been separated from its background. It generates a spatial map whose cell activations simultaneously represent the position, orientation, and size of all the figures in a scene (where they are). This spatial map may be used to direct spatially localized attention to these image features. A multiscale array of oriented detectors, followed by competitive and interpolative interactions between position, orientation, and size scales, is used to define the Where filter. This analysis discloses several issues that need to be dealt with by a spatial mapping system that is based upon oriented filters, such as the role of cliff filters with and without normalization, the double-peak problem of maximum orientation across size scale, and the different self-similar interpolation properties across orientation than across size scale. Several computationally efficient Where filters are proposed. The Where filter may be used for parallel transformation of multiple image figures into invariant representations that are insensitive to the figures' original position, orientation, and size. These invariant figural representations form part of a system devoted to attentive object learning and recognition (what it is). Unlike some alternative models, where serial search for a target occurs, a What-and-Where representation can be used to rapidly search in parallel for a desired target in a scene. Such a representation can also be used to learn multidimensional representations of objects and their spatial relationships for purposes of image understanding.
The What-and-Where filter is inspired by neurobiological data showing that a Where processing stream in the cerebral cortex is used for attentive spatial localization and orientation, whereas a What processing stream is used for attentive object learning and recognition.
Advanced Research Projects Agency (ONR-N00014-92-J-4015, AFOSR 90-0083); British Petroleum (89-A-1204); National Science Foundation (IRI-90-00530, Graduate Fellowship); Office of Naval Research (N00014-91-J-4100, N00014-95-1-0409, N00014-95-1-0657); Air Force Office of Scientific Research (F49620-92-J-0499, F49620-92-J-0334)
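The oriented-detector stage of a Where-style spatial map can be sketched as a tiny bank of elongated Gaussian bar detectors with a per-pixel winner-take-all over orientation; this is a single-scale toy (the actual filter uses multiple size scales and the competitive/interpolative interactions described above), and all kernel shapes and sizes here are assumptions:

```python
import numpy as np

def oriented_kernels(size=9, angles=(0, 45, 90, 135)):
    """Bank of oriented bar detectors: an elongated Gaussian rotated to
    each angle (long axis along the bar, narrow axis across it)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    bank = {}
    for a in angles:
        t = np.deg2rad(a)
        u = x * np.cos(t) + y * np.sin(t)       # along-bar coordinate
        v = -x * np.sin(t) + y * np.cos(t)      # across-bar coordinate
        k = np.exp(-(u / 4.0) ** 2 - (v / 1.0) ** 2)
        bank[a] = k / k.sum()
    return bank

def orientation_map(img, bank):
    """Per-pixel winning orientation via cross-correlation with each kernel."""
    angles = sorted(bank)
    responses = []
    for a in angles:
        k = bank[a]
        pad = k.shape[0] // 2
        p = np.pad(img, pad)
        resp = np.zeros_like(img, dtype=float)
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                resp[i, j] = np.sum(p[i:i + k.shape[0], j:j + k.shape[1]] * k)
        responses.append(resp)
    return np.array(angles)[np.argmax(responses, axis=0)]

img = np.zeros((16, 16))
img[8, :] = 1.0                       # a horizontal bar
omap = orientation_map(img, oriented_kernels())
print(omap[8, 8])                     # 0 degrees: the horizontal detector wins
```

In the full architecture, such responses would then compete and interpolate across position, orientation, and size scales to yield the invariant spatial map.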
A symbolic sensor for an Antilock brake system of a commercial aircraft
The design of a symbolic sensor that identifies the condition of the runway surface (dry, wet, icy, etc.) during the braking of a commercial aircraft is discussed. The purpose of such a sensor is to generate qualitative, real-time information about the runway surface to be integrated into a future aircraft Antilock Braking System (ABS). This information can be expected to significantly improve the performance of the ABS. For the design of the symbolic sensor, different classification techniques based upon fuzzy set theory and neural networks are proposed. Data recorded from recent braking tests have been used to develop and verify these classification algorithms. The results show that the symbolic sensor is able to correctly identify the surface condition. Overall, the application example considered in this paper demonstrates that symbolic information processing using fuzzy logic and neural networks has the potential to provide new functions in control system design. This paper is part of a joint research project between E.N.S.I.C.A. and Aerospatiale in France to study the role of fuzzy set theory in potential applications for future aircraft control systems.
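A fuzzy-set classifier of the kind proposed for the symbolic sensor can be sketched with triangular membership functions; the input variable (a measured friction coefficient mu) and every threshold below are hypothetical, since the abstract does not specify the sensor's inputs:

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function: 0 outside [a, c], peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Hypothetical membership functions over a measured friction coefficient mu.
CONDITIONS = {
    "icy": lambda mu: tri(mu, -0.1, 0.05, 0.2),
    "wet": lambda mu: tri(mu, 0.1, 0.3, 0.5),
    "dry": lambda mu: tri(mu, 0.4, 0.7, 1.1),
}

def classify_surface(mu):
    """Return the condition with the highest membership grade, plus all grades."""
    grades = {cond: f(mu) for cond, f in CONDITIONS.items()}
    return max(grades, key=grades.get), grades

label, grades = classify_surface(0.65)
print(label)   # dry
```

The overlapping memberships are the point of the symbolic approach: near a boundary (e.g. mu around 0.45) the sensor reports graded evidence for both "wet" and "dry" rather than a hard switch.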
Deep Reflectance Maps
Undoing the image formation process and therefore decomposing appearance into
its intrinsic properties is a challenging task due to the under-constrained
nature of this inverse problem. While significant progress has been made on
inferring shape, materials and illumination from images only, progress in an
unconstrained setting is still limited. We propose a convolutional neural
architecture to estimate reflectance maps of specular materials in natural
lighting conditions. We achieve this in an end-to-end learning formulation that
directly predicts a reflectance map from the image itself. We show how to
improve estimates by facilitating additional supervision in an indirect scheme
that first predicts surface orientation and afterwards predicts the reflectance
map by a learning-based sparse data interpolation.
In order to analyze performance on this difficult task, we propose a new
challenge of Specular MAterials on SHapes with complex IllumiNation (SMASHINg)
using both synthetic and real images. Furthermore, we show the application of
our method to a range of image-based editing tasks on real images.
Comment: project page: http://homes.esat.kuleuven.be/~krematas/DRM
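The sparse-to-dense interpolation step in the indirect scheme (predict surface orientation, then fill a reflectance map from sparse samples) can be sketched with a non-learned stand-in: map normals to map coordinates and splat observed colours with inverse-distance weighting. The projection, resolution, and weighting here are assumptions, and the paper's actual step is learning-based:

```python
import numpy as np

def normals_to_rm_coords(normals):
    """Map unit surface normals to 2D reflectance-map coordinates
    (here: orthographic projection onto the normal's x, y components)."""
    return normals[:, :2]

def splat_reflectance_map(normals, colors, res=32, p=2.0, eps=1e-6):
    """Fill a dense reflectance map from sparse (normal, colour) samples
    by inverse-distance weighting over the map grid."""
    uv = (normals_to_rm_coords(normals) + 1.0) / 2.0 * (res - 1)
    gy, gx = np.mgrid[0:res, 0:res]
    grid = np.stack([gx.ravel(), gy.ravel()], axis=1).astype(float)
    d = np.linalg.norm(grid[:, None, :] - uv[None, :, :], axis=2)
    w = 1.0 / (d ** p + eps)                      # closer samples weigh more
    rm = (w @ colors) / w.sum(axis=1, keepdims=True)
    return rm.reshape(res, res, colors.shape[1])

rng = np.random.default_rng(1)
n = rng.normal(size=(50, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)
n = n[n[:, 2] > 0]                       # keep front-facing normals
colors = rng.uniform(size=(len(n), 3))   # observed RGB per sample
rm = splat_reflectance_map(n, colors)
print(rm.shape)                          # (32, 32, 3)
```

A learned interpolator replaces the fixed weighting with one trained to respect material and lighting structure, which is what the end-to-end formulation supervises.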