
    3D Spectral Domain Registration-Based Visual Servoing

    This paper presents a spectral domain registration-based visual servoing scheme that operates on 3D point clouds. Specifically, we propose a 3D model/point-cloud alignment method that finds a global transformation between reference and target point clouds using spectral analysis. A 3D Fast Fourier Transform (FFT) in R^3 is used for translation estimation, and real spherical harmonics on SO(3) are used for rotation estimation. This approach allows us to derive a decoupled six-degrees-of-freedom (DoF) controller, in which gradient-based optimisation minimises the translational and rotational costs. We then show how this methodology can be used to regulate a robot arm to perform a positioning task. In contrast to existing state-of-the-art depth-based visual servoing methods, which require either dense depth maps or dense point clouds, our method works well with partial point clouds and can effectively handle larger transformations between the reference and target positions. Furthermore, the use of spectral (rather than spatial) data for transformation estimation makes our method robust to sensor-induced noise and partial occlusions. We validate our approach in experiments using point clouds acquired by a robot-mounted depth camera; the obtained results demonstrate the effectiveness of our visual servoing approach. Comment: Accepted to the 2023 IEEE International Conference on Robotics and Automation (ICRA'23).
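
The FFT-based translation step described above can be illustrated with standard phase correlation. The sketch below is a minimal, illustrative version only: it assumes the point clouds have already been voxelised into occupancy grids of equal size, and the function name and grid setup are hypothetical, not the authors' implementation.

```python
import numpy as np

def fft_translation_estimate(ref_grid, tgt_grid):
    """Estimate the integer voxel translation between two 3D grids via
    phase correlation: the normalised cross-power spectrum of the two
    FFTs yields a correlation volume whose peak marks the shift."""
    cross = np.fft.fftn(ref_grid) * np.conj(np.fft.fftn(tgt_grid))
    cross /= np.abs(cross) + 1e-12          # keep phase only
    corr = np.fft.ifftn(cross).real         # correlation volume
    shift = np.unravel_index(np.argmax(corr), corr.shape)
    # map indices above Nyquist to negative shifts
    return tuple(int(s - n) if s > n // 2 else int(s)
                 for s, n in zip(shift, ref_grid.shape))

# toy check: a grid cyclically shifted by (-2, 1, -3) voxels
rng = np.random.default_rng(0)
ref = rng.random((16, 16, 16))
tgt = np.roll(ref, shift=(-2, 1, -3), axis=(0, 1, 2))
print(fft_translation_estimate(ref, tgt))   # -> (2, -1, 3)
```

The returned tuple is the shift that maps the target grid back onto the reference, which is why it has the opposite sign to the applied roll.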

    Approximation of dynamical systems using S-systems theory: application to biological systems

    In this paper we propose a new symbolic-numeric algorithm to find positive equilibria of an n-dimensional dynamical system. The algorithm involves symbolic manipulation of the ODEs in order to give a local approximation of the differential equations by power-law dynamics (S-systems). A numerical computation is then needed to converge towards an equilibrium, yielding at the same time an S-system that approximates the initial system around this equilibrium. The algorithm is applied to a real biological example in 14 dimensions, a subsystem of a metabolic pathway in Arabidopsis thaliana.
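
The core of the S-system approximation is replacing each rate term by a power law whose kinetic orders are the logarithmic sensitivities of the rate at the operating point. A minimal numerical sketch of that step is given below; the function name and the finite-difference estimation are illustrative assumptions, not the paper's symbolic procedure.

```python
import numpy as np

def power_law_fit(rate, x0, eps=1e-6):
    """Locally approximate a positive rate function v(x) by a power law
    alpha * prod(x_j ** g_j) around the operating point x0.
    Each kinetic order g_j is the logarithmic sensitivity
    d ln v / d ln x_j, estimated by a central finite difference."""
    x0 = np.asarray(x0, dtype=float)
    v0 = rate(x0)
    g = np.empty_like(x0)
    for j in range(len(x0)):
        up, dn = x0.copy(), x0.copy()
        up[j] *= 1.0 + eps
        dn[j] *= 1.0 - eps
        g[j] = (np.log(rate(up)) - np.log(rate(dn))) / \
               (np.log(up[j]) - np.log(dn[j]))
    alpha = v0 / np.prod(x0 ** g)           # match the value at x0
    return alpha, g

# Michaelis-Menten rate v = Vmax * x / (Km + x), Vmax = 3, Km = 2, at x0 = 1;
# the exact kinetic order there is Km / (Km + x0) = 2/3
alpha, g = power_law_fit(lambda x: 3.0 * x[0] / (2.0 + x[0]), [1.0])
```

Applying this fit to the production and degradation terms of each equation yields the two power-law products of the canonical S-system form.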

    Robust signatures for 3D face registration and recognition

    Biometric authentication through face recognition has been an active area of research for the last few decades, motivated by its application-driven demand. The popularity of face recognition, compared to other biometric methods, is largely due to its minimal requirement of subject co-operation, the relative ease of data capture and its similarity to the natural way humans distinguish each other. 3D face recognition has recently received particular interest, since three-dimensional face scans eliminate or reduce important limitations of 2D face images, such as illumination changes and pose variations. In fact, three-dimensional face scans are usually captured by scanners through the use of a constant structured-light source, making them invariant to environmental changes in illumination. Moreover, a single 3D scan also captures the entire face structure and allows for accurate pose normalisation. However, one of the biggest challenges that remain with three-dimensional face scans is their sensitivity to large local deformations due to, for example, facial expressions. Owing to the nature of the data, such deformations bring about large changes in the 3D geometry of the scan. In addition, 3D scans are also characterised by noise and artefacts, such as spikes and holes, which are uncommon in 2D images and require a pre-processing stage that is specific to the scanner used to capture the data. The aim of this thesis is to devise a face signature that is compact in size and overcomes the above-mentioned limitations. We investigate the use of facial regions and landmarks towards a robust and compact face signature, and we study, implement and validate a region-based and a landmark-based face signature. Combinations of regions and landmarks are evaluated for their robustness to pose and expressions, while the matching scheme is evaluated for its robustness to noise and data artefacts.

    From 3D Point Clouds to Pose-Normalised Depth Maps

    We consider the problem of generating either pairwise-aligned or pose-normalised depth maps from noisy 3D point clouds in relatively unrestricted poses. Our system is deployed in a 3D face alignment application and consists of the following four stages: (i) data filtering; (ii) nose tip identification and sub-vertex localisation; (iii) computation of the (relative) face orientation; (iv) generation of either a pose-aligned or a pose-normalised depth map. We generate an implicit radial basis function (RBF) model of the facial surface, which is employed within all four stages of the process. For example, in stage (ii), construction of novel invariant features is based on sampling this RBF over a set of concentric spheres to give a spherically-sampled RBF (SSR) shape histogram. In stage (iii), a second novel descriptor, called an isoradius contour curvature signal, is defined, which allows rotational alignment to be determined using a simple process of 1D correlation. We test our system on both the University of York (UoY) 3D face dataset and the Face Recognition Grand Challenge (FRGC) 3D data. For the more challenging UoY data, our SSR descriptors significantly outperform three variants of spin images, successfully identifying nose vertices at a rate of 99.6%. Nose localisation performance on the higher-quality FRGC data, which has only small pose variations, is 99.9%. Our best system successfully normalises the pose of 3D faces at rates of 99.1% (UoY data) and 99.6% (FRGC data).
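
The "simple process of 1D correlation" used in stage (iii) can be sketched as circular cross-correlation of two periodic signals sampled around the contour. The code below is an illustrative toy, not the thesis implementation: the signal is synthetic and the function name is an assumption.

```python
import numpy as np

def circular_alignment(ref_signal, probe_signal):
    """Estimate the cyclic shift (in samples) between two signals sampled
    at equal angular steps around a closed contour, via FFT-based
    circular cross-correlation; the peak of the correlation gives the
    rotational offset."""
    r = np.asarray(ref_signal, dtype=float)
    p = np.asarray(probe_signal, dtype=float)
    corr = np.fft.ifft(np.conj(np.fft.fft(r)) * np.fft.fft(p)).real
    return int(np.argmax(corr))  # multiply by the angular step for degrees

# toy check: a contour signal rotated by 40 one-degree steps
theta = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
ref = np.cos(3.0 * theta) + 0.5 * np.sin(theta)
probe = np.roll(ref, 40)
print(circular_alignment(ref, probe))   # -> 40
```

Because the contour is closed, the correlation is exactly circular, which is what makes the FFT formulation valid here.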

    Part-based recognition of 3-D objects with application to shape modeling in hearing aid manufacturing

    In order to meet the needs of people with hearing loss, today's hearing aids are custom designed. Increasingly accurate 3-D scanning technology has contributed to the transition from conventional production scenarios to software-based processes. Nonetheless, a tremendous amount of manual work is involved in transforming an input 3-D surface mesh of the outer ear into a final hearing aid shape. This manual work is often cumbersome and requires a lot of experience, which is why automatic solutions are of high practical relevance. This work is concerned with the recognition of 3-D surface meshes of ear implants. In particular, we present a semantic part-labeling framework which significantly outperforms existing approaches for this task. We make at least three contributions which may also prove useful for other classes of 3-D meshes. Firstly, we validate the discriminative performance of several local descriptors and show that the majority of them perform poorly on our data, with the exception of 3-D shape contexts. The reason is that many local descriptor schemes are not rich enough to capture subtle variations in the form of bends, which are typical of organic shapes. Secondly, based on the observation that the left and right outer ears of an individual look very similar, we raised the question of how similar the ear shapes of arbitrary individuals are. In this work, we define a notion of distance between ear shapes as a building block of a non-parametric shape model of the ear, to better handle the anatomical variability in ear implant labeling. Thirdly, we introduce a conditional random field model with a variety of label priors to facilitate the semantic part-labeling of 3-D meshes of ear implants. In particular, we introduce the concept of a global parametric transition prior to enforce transition boundaries between adjacent object parts with an a priori known parametric form.
In this way we are able to overcome the issue of inadequate geometric cues (e.g., ridges, bumps, concavities) as natural indicators for the presence of part boundaries. The last part of this work offers an outlook on possible extensions of our methods, in particular the development of 3-D descriptors that are fast to compute whilst at the same time rich enough to capture the characteristic differences between objects residing in the same class.
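
The interplay of unary label costs and a transition prior can be illustrated on a much simpler model than the mesh CRF above: a chain CRF solved exactly by the Viterbi recursion, where the pairwise cost plays the role of a prior that discourages spurious part boundaries. This is a generic sketch under that simplification, not the authors' model; all names and numbers are illustrative.

```python
import numpy as np

def chain_labeling(unary, pair_cost):
    """Exact MAP labeling of a chain CRF by the Viterbi recursion.
    unary[i, l]: cost of assigning label l to node i.
    pair_cost[l1, l2]: cost of a transition l1 -> l2 between neighbours."""
    n, L = unary.shape
    best = unary[0].copy()                 # best cost ending in each label
    back = np.zeros((n, L), dtype=int)     # backpointers
    for i in range(1, n):
        total = best[:, None] + pair_cost  # all (previous, current) pairs
        back[i] = np.argmin(total, axis=0)
        best = total[back[i], np.arange(L)] + unary[i]
    labels = [int(np.argmin(best))]
    for i in range(n - 1, 0, -1):
        labels.append(int(back[i, labels[-1]]))
    return labels[::-1]

# two parts with one ambiguous node; the transition cost acts as a prior
# that allows a single clean part boundary but penalises oscillation
unary = np.array([[0., 2.], [0., 2.], [1., 1.], [2., 0.], [2., 0.]])
pair = np.array([[0., 1.5], [1.5, 0.]])
print(chain_labeling(unary, pair))   # -> [0, 0, 0, 1, 1]
```

The paper's global parametric prior goes further by constraining where on the mesh such a boundary may lie, but the cost decomposition is of the same unary-plus-transition form.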

    Pose Invariant 3D Face Authentication based on Gaussian Fields Approach

    This thesis presents a novel illumination-invariant approach to recognising the identity of an individual from a 3D facial scan in any pose, by matching it with a set of frontal models stored in the gallery. In view of today's security concerns, 3D face reconstruction and recognition has gained a significant position in computer vision research. The non-intrusive nature of facial data acquisition makes face recognition one of the most popular approaches to biometrics-based identity recognition. Depth information of a 3D face can be used to solve the problems of illumination and pose variation associated with face recognition. The proposed method uses 3D geometric (point set) face representations to recognise faces. The use of 3D point sets, in lieu of 2D texture, to represent human faces makes this method robust to changes in illumination and pose. The method first automatically registers the facial point sets of the probe with the gallery models through a criterion based on Gaussian force fields. The registration method defines a simple energy function which is always differentiable and is convex in a large neighbourhood of the alignment parameters, allowing the use of powerful standard optimisation techniques. The new method removes the need for close initialisation and converges in far fewer iterations than the Iterative Closest Point algorithm. The use of the Fast Gauss Transform allows a considerable reduction in the computational complexity of the registration algorithm. Recognition is then performed using the robust similarity score generated by registering the 3D point sets of faces. Our approach has been tested on a large database of 85 individuals with 521 scans at different poses, where the gallery and probe images were acquired at significantly different times. The results show the potential of our approach towards a fully pose- and illumination-invariant system.
Our method can be successfully used as a biometric system in various applications such as mug-shot matching, user verification and access control, and enhanced human-computer interaction.
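
A Gaussian-fields alignment criterion of the kind described above can be written as the sum of Gaussian affinities between all probe-gallery point pairs; it is smooth everywhere, so standard gradient methods apply. The sketch below is a simplified assumption about the criterion's form (translation-only, illustrative sigma), not the thesis implementation; the Fast Gauss Transform mentioned in the text would replace the brute-force O(N*M) double sum.

```python
import numpy as np

def gaussian_field_energy(probe, gallery, t, sigma=1.0):
    """Sum of Gaussian affinities between every translated probe point
    and every gallery point. Larger values mean better alignment; the
    function is differentiable in t everywhere."""
    d2 = ((probe + t)[:, None, :] - gallery[None, :, :]) ** 2
    return np.exp(-d2.sum(axis=2) / sigma ** 2).sum()

# toy check: the energy is higher when the probe is shifted back onto
# the gallery than when it is left misaligned
rng = np.random.default_rng(1)
gallery = rng.normal(size=(50, 3))
probe = gallery + np.array([0.6, -0.3, 0.2])    # translated copy
e_aligned = gaussian_field_energy(probe, gallery, np.array([-0.6, 0.3, -0.2]))
e_off = gaussian_field_energy(probe, gallery, np.zeros(3))
print(e_aligned > e_off)   # -> True
```

Maximising this smooth score over the rigid-body parameters is what lets the approach dispense with the close initialisation that ICP requires.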

    Report on shape analysis and matching and on semantic matching

    In GRAVITATE, two disparate specialities come together in one working platform for the archaeologist: the fields of shape analysis and of metadata search. These fields are relatively disjoint at the moment, and the research and development challenge of GRAVITATE is precisely to merge them for our chosen tasks. As shown in chapter 7, the small amount of literature that already attempts to join 3D geometry and semantics is not related to the cultural heritage domain. Therefore, after the project is done, there should be a clear ‘before-GRAVITATE’ and ‘after-GRAVITATE’ split in how these two aspects of a cultural heritage artefact are treated.
This state of the art report (SOTA) is ‘before-GRAVITATE’. Shape analysis and metadata description are described separately, as currently in the literature, and we end the report with common recommendations in chapter 8 on possible or plausible cross-connections that suggest themselves. These considerations will be refined for the Roadmap for Research deliverable.
Within the project, a jargon is developing in which ‘geometry’ stands for the physical properties of an artefact (not only its shape, but also its colour and material) and ‘metadata’ is used as a general shorthand for the semantic description of the provenance, location, ownership, classification, use, etc. of the artefact. As we proceed in the project, we will find a need to refine those broad divisions and find intermediate classes (such as a semantic description of certain colour patterns), but for now the terminology is convenient, not least because it highlights the interesting area where both aspects meet.
On the ‘geometry’ side, the GRAVITATE partners are UVA, Technion and CNR/IMATI; on the metadata side, IT Innovation, the British Museum and the Cyprus Institute, the latter two of course also playing the role of internal users and representatives of the Cultural Heritage (CH) data and target user group. CNR/IMATI's experience in shape analysis and similarity will be an important bridge between the two worlds of geometry and metadata. The authorship and styles of this SOTA reflect these specialisms: the first part (chapters 3 and 4) is purely by the geometry partners (mostly IMATI and UVA), and the second part (chapters 5 and 6) by the metadata partners, especially IT Innovation, while the joint overview on 3D geometry and semantics is mainly by IT Innovation and IMATI. The common section on Perspectives was written with the contribution of all.

    Vertex-Level Three-Dimensional Shape Deformability Measurement Based on Line Segment Advection


    Brain areas associated with visual spatial attention display topographic organization during auditory spatial attention

    Spatially selective modulation of alpha power (8–14 Hz) is a robust finding in electrophysiological studies of visual attention, and has recently been generalized to auditory spatial attention. This modulation pattern is interpreted as reflecting a top-down mechanism for suppressing distracting input from unattended directions of sound origin. The present study on auditory spatial attention extends this interpretation by demonstrating that alpha power modulation is closely linked to oculomotor action. We designed an auditory paradigm in which participants were required to attend to upcoming sounds from one of 24 loudspeakers arranged in a circular array around the head. Maintaining the location of an auditory cue was associated with a topographically modulated distribution of posterior alpha power resembling the findings known from visual attention. Multivariate analyses allowed the prediction of the sound location in the horizontal plane. Importantly, this prediction was also possible when derived from signals capturing saccadic activity. A control experiment on auditory spatial attention confirmed that, in the absence of any visual/auditory input, lateralization of alpha power is linked to the lateralized direction of gaze. Attending to an auditory target thus engages oculomotor and visual cortical areas in a topographic manner akin to the retinotopic organization associated with visual attention.