
    Feature extraction of the wear label of carpets by using a novel 3D scanner

    In the textile industry, the quality of carpets is still determined through visual assessment by human experts. Human assessment is somewhat subjective, so there is a need for a more objective assessment, which leads naturally to automated systems; however, existing computer models are not yet capable of matching human expertise. Most attempts at automated assessment have focused on image analysis of two-dimensional images of worn carpet. These do not adequately capture the three-dimensional structure of the carpet, which the experts also evaluate, and the image processing is highly dependent on the lighting conditions. One previous attempt, however, used a laser scanner to obtain three-dimensional images of the carpet and processed them for carpet assessment. This paper describes the development of a new scanner, based on a structured light pattern, to acquire wear label characteristics in three dimensions, together with a feature extraction technique based on local binary patterns (LBP) and the Kullback-Leibler divergence. We show that the new laser scanning system is less dependent on the lighting conditions and the color of the carpet, and obtains data points on a structured grid instead of sparse points. The new system is also more than five times cheaper, scans more than seven times faster, and is specifically designed for scanning carpets rather than general 3D objects. Previous attempts to classify carpet wear were based on several extracted features, of which only one (the height difference between worn and unworn parts) showed a good correlation of 0.70 with the carpet wear label. Experiments demonstrate that our approach using the LBP technique gives promising results, with correlation factors from 0.89 to 0.99 between the Kullback-Leibler divergence and the quality labels. This new laser scanner system is a significant step forward in the automated assessment of carpet wear using 3D images.
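
    To make the pipeline concrete, the following is a minimal sketch of the general LBP-plus-Kullback-Leibler approach the abstract describes, not the authors' implementation: the neighbourhood size, radius, and smoothing constant are assumptions, and the input arrays stand in for depth maps from the scanner.

```python
# Minimal sketch of the LBP + Kullback-Leibler comparison described above.
# Generic illustration, not the authors' implementation; the neighbourhood
# size, radius and smoothing constant are assumptions.
import numpy as np
from skimage.feature import local_binary_pattern
from scipy.stats import entropy

def lbp_histogram(image, points=8, radius=1):
    """Normalized histogram of uniform LBP codes for a 2D depth/intensity map."""
    codes = local_binary_pattern(image, points, radius, method="uniform")
    hist, _ = np.histogram(codes, bins=points + 2, range=(0, points + 2))
    hist = hist.astype(float) + 1e-9  # avoid zero bins in the divergence
    return hist / hist.sum()

def kl_divergence(reference_hist, worn_hist):
    """Kullback-Leibler divergence between reference and worn histograms."""
    return entropy(reference_hist, worn_hist)

# Usage: a larger divergence from the unworn reference indicates heavier wear.
# reference_img and worn_img would be depth maps acquired by the scanner.
#   d = kl_divergence(lbp_histogram(reference_img), lbp_histogram(worn_img))
```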

    Texture wear analysis in textile floor coverings by using depth information

    Considerable industrial and academic interest has been directed at automating the quality inspection of textile floor coverings, mostly using intensity images. Recently, the use of depth information has been explored to better capture the 3D structure of the surface. In this paper, we present a comparison of features extracted with three texture analysis techniques. The evaluation is based on how well the algorithms allow a good linear ranking and a good discrimination of consecutive wear labels. The results show that the use of Local Binary Pattern techniques results in a better ranking of the wear labels as well as in greater discrimination between features related to consecutive degrees of wear.
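
    As an illustration of the two evaluation criteria mentioned above, the sketch below computes a linear-ranking score (Pearson correlation between feature values and wear labels) and a simple consecutive-label discrimination measure. The numbers are made up for demonstration, and the discrimination criterion is one plausible choice rather than necessarily the paper's exact metric.

```python
# Illustrative evaluation of a texture feature against ordered wear labels.
import numpy as np
from scipy.stats import pearsonr

# Made-up values; real ones would be computed from the depth images.
wear_labels = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5])
feature_values = np.array([0.92, 0.81, 0.69, 0.55, 0.43, 0.30, 0.18, 0.07])

# Linear ranking quality: correlation between feature and label.
r, _ = pearsonr(wear_labels, feature_values)
print(f"linear ranking (Pearson r): {r:.2f}")

# Discrimination of consecutive labels: smallest gap between feature
# values of neighbouring labels (larger is better) -- one possible criterion.
gaps = np.abs(np.diff(feature_values))
print(f"smallest consecutive gap: {gaps.min():.2f}")
```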

    Analysing wear in carpets by detecting varying local binary patterns

    Currently, carpet companies assess the quality of their products based on their appearance retention capabilities. For this, carpet samples with different degrees of wear after a traffic exposure simulation process are rated with wear labels by human experts, who compare changes in appearance in the worn samples against samples with the original appearance. This process is subjective, and human raters make mistakes in up to 10% of the ratings. In search of an objective assessment, research using texture analysis has been conducted to automate the process. In particular, the Local Binary Pattern (LBP) technique combined with a symmetric adaptation of the Kullback-Leibler divergence (SKL) has been successful in extracting texture features related to the wear labels from both intensity and range images. In this paper, we present a novel extension of the LBP technique that improves the representation of the distinct wear labels. The technique consists of detecting those patterns that change monotonically with the wear labels while grouping the others; a sketch of this selection step is given below. Computing the SKL from these patterns considerably increases the discrimination between consecutive groups, even for carpet types where other LBP variations fail. We present results for carpet types representing 72% of the existing references for the EN1471:1996 European standard.
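
    The following sketch shows one way the pattern-selection idea could work: keep the LBP bins whose frequency changes monotonically across the ordered wear labels, pool the remaining bins into a single extra bin, and compare the regrouped histograms with the symmetric Kullback-Leibler divergence. The monotonicity test and grouping rule here are assumptions, not the paper's exact procedure.

```python
# Sketch of selecting monotonically varying LBP patterns, then comparing
# regrouped histograms with the symmetric Kullback-Leibler divergence (SKL).
import numpy as np
from scipy.stats import entropy

def select_monotone_bins(histograms):
    """histograms: (n_labels, n_bins) LBP histograms ordered by wear label.
    Returns indices of bins that never decrease or never increase with wear."""
    diffs = np.diff(histograms, axis=0)
    keep = (diffs >= 0).all(axis=0) | (diffs <= 0).all(axis=0)
    return np.where(keep)[0]

def regroup(hist, keep):
    """Keep the selected bins; pool all remaining bins into one extra bin."""
    rest = hist.sum() - hist[keep].sum()
    out = np.append(hist[keep], rest) + 1e-9  # smooth to avoid zero bins
    return out / out.sum()

def symmetric_kl(p, q):
    """Symmetric Kullback-Leibler divergence between two histograms."""
    return entropy(p, q) + entropy(q, p)

# Usage: with `hists` of shape (n_labels, n_bins), compare consecutive labels:
#   keep = select_monotone_bins(hists)
#   d = symmetric_kl(regroup(hists[0], keep), regroup(hists[1], keep))
```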

    A framework for realistic 3D tele-immersion

    Meeting, socializing, and conversing online with a group of people using teleconferencing systems is still quite different from the experience of meeting face to face. We are abruptly aware that we are online and that the people we are engaging with are not in close proximity, analogous to how talking on the telephone does not replicate the experience of talking in person. Several causes for these differences have been identified, and we propose inspiring and innovative solutions to these hurdles in an attempt to provide a more realistic, believable, and engaging online conversational experience. We present the distributed and scalable framework REVERIE, which provides a balanced mix of these solutions. Applications built on top of the REVERIE framework will be able to provide interactive, immersive, photo-realistic experiences to a multitude of users, experiences that will feel much closer to a face-to-face meeting than those offered by conventional teleconferencing systems.

    ICface: Interpretable and Controllable Face Reenactment Using GANs

    This paper presents a generic face animator that is able to control the pose and expressions of a given face image. The animation is driven by human-interpretable control signals consisting of head pose angles and Action Unit (AU) values. The control information can be obtained from multiple sources, including external driving videos and manual controls. Due to the interpretable nature of the driving signal, one can easily mix information from multiple sources (e.g. pose from one image and expression from another) and apply selective post-production editing. The proposed face animator is implemented as a two-stage neural network model that is learned in a self-supervised manner from a large video collection. The proposed Interpretable and Controllable face reenactment network (ICface) is compared to state-of-the-art neural network-based face animation techniques on multiple tasks. The results indicate that ICface produces better visual quality while being more versatile than most of the comparison methods. The introduced model could provide a lightweight and easy-to-use tool for a multitude of advanced image and video editing tasks. (Comment: accepted at WACV 2020.)
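
    The source-mixing idea in the abstract reduces to concatenating pose and AU components taken from different origins into one driving vector. Below is a hypothetical sketch of that interface; the dimensions (3 pose angles, 17 AUs) are assumptions for illustration, not ICface's actual input format.

```python
# Illustrative only: mixing interpretable control signals from two sources.
# The 3-angle pose and 17 AU values are assumed dimensions, not ICface's exact format.
import numpy as np

def mix_control(pose, action_units):
    """Concatenate head pose angles from one source with AU values from another."""
    return np.concatenate([pose, action_units])

pose_from_image_a = np.array([0.10, -0.25, 0.05])     # yaw, pitch, roll (assumed units)
aus_from_image_b = np.clip(np.random.rand(17), 0, 1)  # AU activations in [0, 1]
driving_signal = mix_control(pose_from_image_a, aus_from_image_b)  # fed to the generator
```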

    Proof of concept of a workflow methodology for the creation of basic canine head anatomy veterinary education tool using augmented reality

    Neuroanatomy can be challenging both to teach and to learn within the undergraduate veterinary medicine and surgery curriculum. Traditional techniques have been used for many years, but there has now been a progression towards alternative digital models and interactive 3D models to engage the learner. However, digital innovations in the curriculum have typically involved the medical rather than the veterinary curriculum. We therefore aimed to create a simple workflow methodology showing how straightforward it is to create a mobile augmented reality application of basic canine head anatomy. Using canine CT and MRI scans and widely available software programs, we demonstrate how to create an interactive model of head anatomy. This was deployed as an augmented reality application on a popular Android mobile device to demonstrate the user-friendly interface. Here we present the processes, challenges, and resolutions involved in creating a highly accurate, data-based anatomical model that could potentially be used in the veterinary curriculum. This proof-of-concept study provides an excellent framework for the creation of augmented reality training products for veterinary education. The lack of similar resources within this field provides the ideal platform to extend this into other areas of veterinary education and beyond.
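
    As a hedged illustration of one step in such a workflow, the sketch below extracts a bone surface mesh from a CT volume using VTK's marching cubes and writes an STL file that an AR engine could import. The DICOM directory, iso-value, and output path are placeholders; the abstract does not name the specific software chain the authors used.

```python
# One possible CT-to-mesh step for an AR anatomy workflow (placeholder paths/values).
import vtk

# Read the CT volume from a directory of DICOM slices (placeholder path).
reader = vtk.vtkDICOMImageReader()
reader.SetDirectoryName("canine_head_ct/")
reader.Update()

# Extract an iso-surface; 300 HU is an assumed threshold for bone.
surface = vtk.vtkMarchingCubes()
surface.SetInputConnection(reader.GetOutputPort())
surface.SetValue(0, 300)

# Write the mesh as STL for import into a 3D/AR authoring tool.
writer = vtk.vtkSTLWriter()
writer.SetFileName("canine_head.stl")
writer.SetInputConnection(surface.GetOutputPort())
writer.Write()
```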