61 research outputs found

    Effect of graphene-oxide enhancement on large-deflection bending performance of thermoplastic polyurethane elastomer

    This paper was accepted for publication in the journal Composites Part B; the definitive published version is available at http://dx.doi.org/10.1016/j.compositesb.2015.11.033. Thermoplastic polyurethane (PU) elastomers are used as shoe-sole materials thanks to many excellent properties, but their inelastic deformation is a serious deficiency for such applications. Hence, graphene oxide (GO) was introduced into the synthesized thermoplastic PU to produce a GO/PU composite material with enhanced properties. The plastic behaviour of this composite was assessed in cyclic tensile tests, demonstrating a reduction of irreversible deformations with the addition of GO. Additionally, in order to evaluate the mechanical performance of PU and the GO/PU composite under conditions of large-deflection bending typical for shoe soles, finite-element simulations with Abaqus/Standard were conducted. An elastic-plastic finite-element model was developed to obtain detailed mechanical information for PU and the GO/PU composite. The numerical study demonstrated that the plastic area, final specific plastic dissipation energy and residual height for PU specimens were significantly larger than those for the GO/PU composite. In addition, the introduction of GO into the PU matrix greatly delayed the onset of plastic deformation in a large-deflection bending process. The average residual height and final specific plastic dissipation energy for PU were approximately 5.6 and 17.7 times as large as those for the studied GO/PU composite. The finite-element analysis quantified the effect of GO enhancement on the large-deflection bending performance of PU for regimes typical for shoe soles and can serve as a basis for optimization of real composite products.

    3D hand tracking for human computer interaction

    Abstract not available. Victor Adrian Prisacariu, Ian Reid

    PWP3D: Real-time segmentation and tracking of 3D objects

    We formulate a probabilistic framework for simultaneous region-based 2D segmentation and 2D to 3D pose tracking, using a known 3D model. Given such a model, we aim to maximise the discrimination between statistical foreground and background appearance models, via direct optimisation of the 3D pose parameters. The foreground region is delineated by the zero-level-set of a signed distance embedding function, and we define an energy over this region and its immediate background surroundings based on pixel-wise posterior membership probabilities (as opposed to likelihoods). We derive the differentials of this energy with respect to the pose parameters of the 3D object, meaning we can conduct a search for the correct pose using standard gradient-based non-linear minimisation techniques. We propose novel enhancements at the pixel level based on temporal consistency and improved online appearance model adaptation. Furthermore, straightforward extensions of our method lead to multi-camera and multi-object tracking as part of the same framework. The parallel nature of much of the processing in our algorithm means it is amenable to GPU acceleration, and we give details of our real-time implementation, which we use to generate experimental results on both real and artificial video sequences, with a number of 3D models. These experiments demonstrate the benefit of using pixel-wise posteriors rather than likelihoods, and showcase the qualities, such as robustness to occlusions and motion blur (and also some failure modes), of our tracker. © Springer Science+Business Media, LLC 2011. Victor A. Prisacariu, Ian D. Reid
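    The core of this formulation, the energy over pixel-wise posterior membership probabilities, can be sketched in a few lines. This is a minimal illustration, assuming a smoothed Heaviside of the signed-distance embedding and precomputed per-pixel foreground/background appearance likelihoods; all names are illustrative and not taken from the PWP3D implementation.

```python
import math

def heaviside(phi, s=1.0):
    # Smoothed Heaviside of a signed-distance value phi
    # (phi > 0 taken as inside the projected contour).
    return 0.5 * (1.0 + (2.0 / math.pi) * math.atan(phi / s))

def pwp_energy(phi, pf, pb, s=1.0):
    """Pixel-wise-posterior energy over a flat list of pixels.

    phi: signed-distance values at each pixel
    pf, pb: per-pixel foreground/background appearance likelihoods
    Lower energy means the pose-induced silhouette better separates
    foreground from background statistics.
    """
    e = 0.0
    for d, f, b in zip(phi, pf, pb):
        h = heaviside(d, s)
        e -= math.log(h * f + (1.0 - h) * b + 1e-12)
    return e
```

    In the paper this energy is differentiated with respect to the 3D pose parameters; here it only shows why a pose whose silhouette agrees with the appearance models scores lower than a misaligned one.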

    Nonlinear shape manifolds as shape priors in level set segmentation and tracking

    We propose a novel nonlinear, probabilistic and variational method for adding shape information to level set-based segmentation and tracking. Unlike previous work, we represent shapes with elliptic Fourier descriptors and learn their lower dimensional latent space using Gaussian Process Latent Variable Models. Segmentation is done by a nonlinear minimisation of an image-driven energy function in the learned latent space. We combine it with a 2D pose recovery stage, yielding a single, one-shot, optimisation of both shape and pose. We demonstrate the performance of our method, both qualitatively and quantitatively, with multiple images, video sequences and latent spaces, capturing both shape kinematics and object class variance. Victor Adrian Prisacariu, Ian Reid. http://cvpr2011.org/index.htm
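    Fourier descriptors of the kind used here compactly encode a closed contour, with low harmonics carrying coarse shape. The paper uses the elliptic (Kuhl-Giardina) formulation; the sketch below uses the closely related complex-contour variant for brevity, and all names are illustrative.

```python
import cmath

def fourier_descriptors(contour, n_harmonics):
    """Complex Fourier descriptors of a closed 2D contour.

    contour: list of (x, y) points sampled along the boundary
    Returns {k: c_k} for k in -n_harmonics..n_harmonics; c_0 is the
    centroid and |c_1| the dominant (elliptic) radius.
    """
    z = [complex(x, y) for x, y in contour]
    n = len(z)
    coeffs = {}
    for k in range(-n_harmonics, n_harmonics + 1):
        c = sum(zp * cmath.exp(-2j * cmath.pi * k * m / n)
                for m, zp in enumerate(z)) / n
        coeffs[k] = c
    return coeffs
```

    For a circle sampled uniformly, only c_0 (the centre) and c_1 (the radius) are non-zero, which is why truncating to a few harmonics yields a low-dimensional shape representation suitable for latent-space learning.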

    Dense decoder shortcut connections for single-pass semantic segmentation

    We propose a novel end-to-end trainable, deep, encoder-decoder architecture for single-pass semantic segmentation. Our approach is based on a cascaded architecture with feature-level long-range skip connections. The encoder incorporates the structure of ResNeXt's residual building blocks and adopts the strategy of repeating a building block that aggregates a set of transformations with the same topology. The decoder features a novel architecture, consisting of blocks that (i) capture context information, (ii) generate semantic features, and (iii) enable fusion between different output resolutions. Crucially, we introduce dense decoder shortcut connections to allow decoder blocks to use semantic feature maps from all previous decoder levels, i.e. from all higher-level feature maps. The dense decoder connections allow for effective information propagation from one decoder block to another, as well as for multi-level feature fusion that significantly improves the accuracy. Importantly, these connections allow our method to obtain state-of-the-art performance on several challenging datasets, without the need for the time-consuming multi-scale averaging of previous works.
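    The dense-shortcut idea (upsample every previous decoder level to the current resolution, then fuse along the channel axis) can be sketched without a deep-learning framework. This is a pure-Python illustration of the wiring only; the actual architecture applies learned convolutions to tensors, and all names here are illustrative.

```python
def upsample_nn(fmap, factor):
    """Nearest-neighbour upsample of a 2D feature map (list of rows)."""
    out = []
    for row in fmap:
        wide = [v for v in row for _ in range(factor)]
        out.extend([list(wide) for _ in range(factor)])
    return out

def dense_fuse(decoder_maps, target_size):
    """Fuse feature maps from all previous (coarser) decoder levels:
    upsample each to the target resolution, then concatenate per pixel
    (a stand-in for channel-wise concatenation)."""
    h, w = target_size
    fused = [[[] for _ in range(w)] for _ in range(h)]
    for fmap in decoder_maps:
        factor = h // len(fmap)
        up = upsample_nn(fmap, factor)
        for i in range(h):
            for j in range(w):
                fused[i][j].append(up[i][j])
    return fused
```

    Every decoder block thus sees the semantic features of all higher-level blocks, not just its immediate predecessor, which is what enables the multi-level fusion described above.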

    Shared shape spaces

    We propose a method for simultaneous shape-constrained segmentation and parameter recovery. The parameters can describe anything from 3D shape to 3D pose and we place no restriction on the topology of the shapes, i.e. they can have holes or be made of multiple parts. We use Shared Gaussian Process Latent Variable Models to learn multimodal shape-parameter spaces. These allow non-linear embeddings of the high-dimensional shape and parameter spaces in low dimensional spaces in a fully probabilistic manner. We propose a method for exploring the multimodality in the joint space in an efficient manner, by learning a mapping from the latent space to a space that encodes the similarity between shapes. We further extend the SGP-LVM to a model that makes use of a hierarchy of embeddings and show that this yields faster convergence and greater accuracy over the standard non-hierarchical embedding. Shapes are represented implicitly using level sets, and inference is made tractable by compressing the level set embedding functions with discrete cosine transforms. We show state-of-the-art results in various fields, ranging from pose recovery to gaze tracking and to monocular 3D reconstruction. Victor Adrian Prisacariu, Ian Reid
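    The DCT compression step mentioned above works because level-set embedding functions are smooth, so most of their energy sits in the low-frequency coefficients. A minimal 1D sketch (the paper compresses 2D embeddings; the orthonormal scaling and function names below are illustrative):

```python
import math

def dct2(x):
    """DCT-II of a 1D signal, orthonormal scaling."""
    n = len(x)
    out = []
    for k in range(n):
        s = sum(v * math.cos(math.pi * k * (2 * m + 1) / (2 * n))
                for m, v in enumerate(x))
        scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        out.append(scale * s)
    return out

def idct2(c):
    """Inverse (DCT-III) of the orthonormal DCT-II above."""
    n = len(c)
    out = []
    for m in range(n):
        s = c[0] / math.sqrt(n)
        s += sum(math.sqrt(2.0 / n) * c[k]
                 * math.cos(math.pi * k * (2 * m + 1) / (2 * n))
                 for k in range(1, n))
        out.append(s)
    return out

def compress(x, keep):
    """Zero out all but the first `keep` low-frequency coefficients."""
    c = dct2(x)
    return [v if k < keep else 0.0 for k, v in enumerate(c)]
```

    Inference then operates on the handful of retained coefficients rather than the full embedding grid, which is what makes the latent-space optimisation tractable.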

    Robust 3D hand tracking for human computer interaction

    We propose a system for human computer interaction via 3D hand movements, based on a combination of visual tracking and a cheap, off-the-shelf, accelerometer. We use a 3D model and region based tracker, resulting in robustness to variations in illumination, motion blur and occlusions. At the same time the accelerometer allows us to deal with the multimodality in the silhouette to pose function. We synchronise the accelerometer and tracker online, by casting the calibration problem as a maximum covariance problem, which we then solve probabilistically. We show the effectiveness of our solution with multiple real-world tests and demonstration scenarios. Victor Adrian Prisacariu, Ian Reid
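    The maximum-covariance synchronisation can be illustrated with a simple deterministic version: slide one signal against the other over a range of integer lags and keep the lag with the largest covariance (the paper solves this probabilistically; the names here are illustrative).

```python
def covariance(a, b):
    # Sample covariance of two equal-length sequences.
    n = len(a)
    ma = sum(a) / n
    mb = sum(b) / n
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n

def best_lag(tracker, accel, max_lag):
    """Lag (in samples) that maximises covariance between the
    tracker-derived signal and the accelerometer signal."""
    best, best_c = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = tracker[lag:], accel[:len(accel) - lag]
        else:
            a, b = tracker[:len(tracker) + lag], accel[-lag:]
        n = min(len(a), len(b))
        c = covariance(a[:n], b[:n])
        if c > best_c:
            best, best_c = lag, c
    return best
```

    Once the lag is known, the two sensors can be fused on a common timeline; the probabilistic treatment in the paper additionally quantifies the uncertainty of this estimate.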


    Regressing local to global shape properties for online segmentation and tracking

    We propose a novel regression based framework that uses online learned shape information to reconstruct occluded object contours. Our key insight is to regress the global, coarse, properties of shape from its local properties, i.e. its details. We do this by representing shapes using their 2D discrete cosine transforms and by regressing low frequency from high frequency harmonics. We learn this regression model using Locally Weighted Projection Regression which expedites online, incremental learning. After sufficient observation of a set of unoccluded shapes, the learned model can detect occlusion and recover the full shapes from the occluded ones. We demonstrate the ideas using a level-set based tracking system that provides shape and pose, however, the framework could be embedded in any segmentation-based tracking system. Our experiments demonstrate the efficacy of the method on a variety of objects using both real and artificial data. Carl Yuheng Ren, Victor Prisacariu, Ian Reid
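    The regression step itself is standard supervised learning: given pairs of (high-frequency, low-frequency) DCT coefficients observed while the object is unoccluded, fit a predictor and use it to fill in the coarse shape when occlusion corrupts it. As a toy stand-in for the Locally Weighted Projection Regression used in the paper, a single-feature ordinary-least-squares fit (all names illustrative):

```python
def fit_ols(xs, ys):
    """Ordinary least squares y = w*x + b; a toy stand-in for LWPR,
    mapping one high-frequency harmonic to one low-frequency one."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    var = sum((x - mx) ** 2 for x in xs)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    w = cov / var
    return w, my - w * mx

def predict(model, x):
    # Predict a low-frequency coefficient from a high-frequency one.
    w, b = model
    return w * x + b
```

    LWPR replaces this global linear fit with many local linear models updated incrementally, which is what makes the online, per-object learning in the paper practical.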

    Recovering Stable Scale in Monocular SLAM Using Object-Supplemented Bundle Adjustment

    Without knowledge of the absolute baseline between images, the scale of a map from a single-camera simultaneous localization and mapping system is subject to calamitous drift over time. We describe a monocular approach that in addition to point measurements also considers object detections to resolve this scale ambiguity and drift. By placing an expectation on the size of the objects, the scale estimation can be seamlessly integrated into a bundle adjustment. When object observations are available, the local scale of the map is then determined jointly with the camera pose in local adjustments. Unlike many previous visual odometry methods, our approach does not impose restrictions such as constant camera height or planar roadways, and is therefore more widely applicable. We evaluate our approach on the KITTI data set and show that it reduces scale drift over long-range outdoor sequences with a total length of 40 km. As the scale of objects is known absolutely, metric accuracy is obtained for all sequences. Qualitative evaluation is also performed on video footage from a hand-held camera.
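    The size prior resolves scale because each detection yields a ratio between the expected metric size and the size measured in map units. The paper estimates scale jointly with pose inside the bundle adjustment; a standalone sketch of the underlying idea, using a median over detections for robustness (names and the car-height prior are illustrative):

```python
def estimate_scale(map_heights, expected_height):
    """Metric scale of a monocular map from object detections.

    map_heights: object heights measured in (scale-ambiguous) map units
    expected_height: prior metric height of the object class, in metres
    Returns metres per map unit, via the median ratio for robustness
    to mis-detections.
    """
    ratios = sorted(expected_height / h for h in map_heights)
    n = len(ratios)
    mid = n // 2
    return ratios[mid] if n % 2 else 0.5 * (ratios[mid - 1] + ratios[mid])
```

    For example, cars with an assumed prior height of 1.5 m that appear about 0.5 map units tall imply a scale near 3 m per map unit; folding such constraints into local adjustments is what keeps the scale from drifting over long sequences.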