121 research outputs found

    REAL-TIME CAPTURE AND RENDERING OF PHYSICAL SCENE WITH AN EFFICIENTLY CALIBRATED RGB-D CAMERA NETWORK

    Get PDF
    From object tracking to 3D reconstruction, RGB-Depth (RGB-D) camera networks play an increasingly important role in many vision and graphics applications. With the recent explosive growth of Augmented Reality (AR) and Virtual Reality (VR) platforms, utilizing RGB-D camera networks to capture and render dynamic physical spaces can enhance immersive experiences for users. To maximize coverage and minimize cost, practical applications often use a small number of RGB-D cameras placed sparsely around the environment for data capture. While sparse color camera networks have been studied for decades, the problems of extrinsic calibration of, and rendering with, sparse RGB-D camera networks are less well understood. Extrinsic calibration is difficult because of inappropriate RGB-D camera models and a lack of shared scene features. Due to significant camera noise and sparse coverage of the scene, the quality of rendered 3D point clouds is much lower than that of synthetic models. Adding virtual objects whose rendering depends on the physical environment, such as those with reflective surfaces, further complicates the rendering pipeline. In this dissertation, I propose novel solutions to these challenges faced by RGB-D camera systems. First, I propose a novel extrinsic calibration algorithm that can accurately and rapidly calibrate the geometric relationships across an arbitrary number of RGB-D cameras in a network. Second, I propose a novel rendering pipeline that can capture and render, in real time, dynamic scenes in the presence of arbitrarily shaped reflective virtual objects. Third, I demonstrate a teleportation application that uses the proposed system to merge two geographically separated 3D captured scenes into the same reconstructed environment. To provide fast and robust calibration for a sparse RGB-D camera network, first, the correspondences between different camera views are established using a spherical calibration object; we show that this approach outperforms techniques based on planar calibration objects. Second, instead of modeling camera extrinsics with a rigid transformation, which is optimal only for pinhole cameras, different view transformation functions, including rigid transformation, polynomial transformation, and manifold regression, are systematically tested to determine the most robust mapping that generalizes well to unseen data. Third, the celebrated bundle adjustment procedure is reformulated to minimize the global 3D projection error and fine-tune the initial estimates. To achieve realistic mirror rendering, a robust eye detector is used to identify the viewer's 3D location and render the reflective scene accordingly. The limited field of view of a single camera is overcome by our calibrated RGB-D camera network, which is scalable to capture arbitrarily large environments. The rendering is accomplished by ray tracing light rays from the viewpoint to the scene as reflected by the virtual curved surface. To the best of our knowledge, the proposed system is the first to render reflective dynamic scenes from real 3D data in large environments. Our scalable client-server architecture is computationally efficient: the calibration of a camera network, including data capture, can be done in minutes using only commodity PCs.
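    The abstract mentions a rigid-transformation baseline for mapping between camera views from sphere-center correspondences; the dissertation's actual implementation is not given here, so the following is only a minimal sketch of that general idea: estimating a pairwise extrinsic transform from corresponding 3D sphere-center detections with the closed-form Kabsch/Umeyama solution. All names and the toy data are illustrative assumptions.

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Estimate R, t such that dst ~= R @ src + t.

    src, dst: (N, 3) arrays of corresponding 3D points, e.g. sphere-center
    estimates of the same calibration ball seen by two RGB-D cameras.
    Closed-form Kabsch/Umeyama solution via SVD.
    """
    src_mean = src.mean(axis=0)
    dst_mean = dst.mean(axis=0)
    H = (src - src_mean).T @ (dst - dst_mean)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_mean - R @ src_mean
    return R, t

# Hypothetical usage: sphere centers observed by camera A and camera B
# while the calibration ball is moved through the shared field of view.
centers_a = np.random.rand(20, 3)                  # placeholder data
R_true = np.array([[0.0, -1.0, 0.0],
                   [1.0,  0.0, 0.0],
                   [0.0,  0.0, 1.0]])
centers_b = centers_a @ R_true.T + np.array([0.5, 0.1, -0.2])
R, t = estimate_rigid_transform(centers_a, centers_b)
print(np.allclose(R, R_true, atol=1e-6))           # True for noise-free data
```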

    Augmented and Virtual Reality techniques for footwear

    Get PDF
    3D imaging techniques were adopted early in the footwear industry. In particular, 3D imaging can be used to aid commerce and improve the quality and sales of shoes. Footwear customisation is an added value aimed not only at improving product quality but also at improving consumer comfort. Moreover, customisation implies a new business model that avoids competition from the mass production of new manufacturers based mainly in Asian countries. However, footwear customisation implies a significant effort at different levels. In manufacturing, rapid and virtual prototyping is required; indeed, the prototype is intended to become the final product. The whole design procedure must be validated using exclusively virtual techniques to ensure the feasibility of this process, since physical prototypes should be avoided. With regard to commerce, it would be desirable for the consumer to choose any shoe model from a large 3D database and try it on by looking at a magic mirror. This would probably reduce costs and increase sales, since shops would not need to stock every shoe model and trying on several models would be easier and faster for the consumer. In this paper, new advances in 3D techniques coming from experience in cinema, TV and games are successfully applied to footwear. Firstly, the characteristics of a high-quality stereoscopic vision system for footwear are presented. Secondly, a system for interaction with virtual footwear models based on 3D gloves is detailed. Finally, an augmented reality system (magic mirror) is presented, implemented with low-cost computational elements, that allows a hypothetical customer to check in real time the suitability of a given virtual footwear model from an aesthetic point of view.

    Manufacturing Technology Today

    Get PDF
    Manufacturing Technology Today, Manufacturing Technology Abstracts, Vol. 14, No. 4, September 2015, Bangalore, India

    Mobile Augmented Reality: User Interfaces, Frameworks, and Intelligence

    Get PDF
    Mobile Augmented Reality (MAR) integrates computer-generated virtual objects with physical environments for mobile devices. MAR systems enable users to interact with MAR devices, such as smartphones and head-worn wearables, and perform seamless transitions from the physical world to a mixed world with digital entities. These MAR systems support user experiences using MAR devices to provide universal access to digital content. Over the past 20 years, several MAR systems have been developed; however, the studies and design of MAR frameworks have not yet been systematically reviewed from the perspective of user-centric design. This article presents the first effort to survey existing MAR frameworks (count: 37) and further discusses the latest studies on MAR through a top-down approach: (1) MAR applications; (2) MAR visualisation techniques adaptive to user mobility and contexts; (3) systematic evaluation of MAR frameworks, including supported platforms and corresponding features such as tracking, feature extraction, and sensing capabilities; and (4) underlying machine learning approaches supporting intelligent operations within MAR systems. Finally, we summarise the development of emerging research fields and the current state of the art, and discuss important open challenges and possible theoretical and technical directions. This survey aims to benefit researchers and MAR system developers alike. Peer reviewed

    FOUND: Foot Optimization with Uncertain Normals for Surface Deformation Using Synthetic Data

    Full text link
    Surface reconstruction from multi-view images is a challenging task, with solutions often requiring a large number of sampled images with high overlap. We seek to develop a method for few-view reconstruction, for the case of the human foot. To solve this task, we must extract rich geometric cues from RGB images before carefully fusing them into a final 3D object. Our FOUND approach tackles this with four main contributions: (i) SynFoot, a synthetic dataset of 50,000 photorealistic foot images, paired with ground truth surface normals and keypoints; (ii) an uncertainty-aware surface normal predictor trained on our synthetic dataset; (iii) an optimization scheme for fitting a generative foot model to a series of images; and (iv) a benchmark dataset of calibrated images and high-resolution ground truth geometry. We show that our normal predictor significantly outperforms all off-the-shelf equivalents on real images, and our optimization scheme outperforms state-of-the-art photogrammetry pipelines, especially in the few-view setting. We release our synthetic dataset and baseline 3D scans to the research community. Comment: 14 pages, 15 figures
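    The abstract does not detail how the uncertainty-aware normal predictor is trained; a common formulation (not necessarily FOUND's) has the network output a normal estimate plus a per-pixel log-variance and minimises a heteroscedastic likelihood-style loss, so uncertain pixels contribute less to the angular error. Below is a minimal PyTorch sketch of that idea, with hypothetical names and dummy tensors.

```python
import torch
import torch.nn.functional as F

def uncertainty_normal_loss(pred_normals, pred_log_var, gt_normals, mask):
    """Aleatoric-uncertainty loss for per-pixel surface normals.

    pred_normals: (B, 3, H, W) raw network output (unnormalised)
    pred_log_var: (B, 1, H, W) predicted log-variance (uncertainty)
    gt_normals:   (B, 3, H, W) unit ground-truth normals
    mask:         (B, 1, H, W) 1 where ground truth is valid
    """
    n_pred = F.normalize(pred_normals, dim=1)                        # unit-length prediction
    cos_err = 1.0 - (n_pred * gt_normals).sum(dim=1, keepdim=True)   # angular error in [0, 2]
    # Confident pixels pay the full error; uncertain pixels are attenuated
    # but penalised through the log-variance term.
    loss = cos_err * torch.exp(-pred_log_var) + pred_log_var
    return (loss * mask).sum() / mask.sum().clamp(min=1.0)

# Hypothetical usage with dummy tensors
pred_n = torch.randn(2, 3, 64, 64, requires_grad=True)
pred_lv = torch.zeros(2, 1, 64, 64, requires_grad=True)
gt_n = F.normalize(torch.randn(2, 3, 64, 64), dim=1)
valid = torch.ones(2, 1, 64, 64)
print(uncertainty_normal_loss(pred_n, pred_lv, gt_n, valid).item())
```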

    Deep Learning 3D Scans for Footwear Fit Estimation from a Single Depth Map

    Get PDF
    In clothing, and particularly in footwear, the variance in the size and shape of both people and garments poses the problem of how to match items of clothing to a person. This is especially important in footwear, as fit is highly dependent on foot shape, which is not fully captured by shoe size. 3D scanning can be used to determine detailed, personalized shape information, which can then be matched against product shape for a more personalized footwear matching experience. In current implementations, however, this process is typically expensive and cumbersome. Typical scanning techniques require that a camera capture an object from many views in order to reconstruct its shape. This usually requires either many cameras or a moving camera system, both of which are complex engineering tasks to construct. Ideally, to reduce the cost and complexity of scanning systems as much as possible, only a single image from a single camera would be needed. With recent techniques, semantics such as knowing the kind of object in view can be leveraged to determine the full 3D shape from incomplete information. Deep learning methods have been shown to reconstruct 3D shape from limited inputs for highly symmetrical objects such as furniture and vehicles. We apply a deep learning approach to the domain of foot scanning and present methods to reconstruct a 3D point cloud from a single input depth map. Anthropomorphic body parts can be challenging due to their irregular shapes, difficulty of parameterization, and limited symmetries. We present two methods leveraging deep learning models to produce complete foot scans from a single input depth map. We utilize 3D data from MPII Human Shape, based on the CAESAR database, and train deep neural networks to learn anthropomorphic shape representations. Our first method attempts to complete the point cloud supplied by the input depth map by synthesizing the remaining information. We show that this method is capable of synthesizing the remainder of a point cloud with accuracies of 2.92±0.72 mm, improved to 2.55±0.75 mm with an updated network architecture. Our second method fully synthesizes a complete point cloud foot scan from multiple virtual viewpoints. We show that this method can produce foot scans with accuracies of 1.55±0.41 mm from a single input depth map. We performed additional experiments on real-world foot scans captured using Kinect Fusion. We find that, despite being trained only on a low-resolution representation of foot shape, our models are able to recognize and synthesize reasonable complete point cloud scans. Our results suggest that our methods can be extended to work in the real world with additional domain-specific data.
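    The thesis's network architectures are not described in the abstract, so the following is only a minimal sketch of the general recipe it outlines: a convolutional encoder over a single depth map, an MLP decoder emitting N 3D points, and a symmetric Chamfer distance as the training loss. All layer sizes, names, and data are illustrative assumptions, not the thesis's models.

```python
import torch
import torch.nn as nn

class DepthToPointCloud(nn.Module):
    """Toy encoder-decoder: one depth map in, N 3D points out."""
    def __init__(self, n_points=1024):
        super().__init__()
        self.encoder = nn.Sequential(                  # (B, 1, 64, 64) -> 128-d feature
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.decoder = nn.Sequential(                  # feature -> N x 3 points
            nn.Linear(128, 512), nn.ReLU(),
            nn.Linear(512, n_points * 3),
        )
        self.n_points = n_points

    def forward(self, depth):
        return self.decoder(self.encoder(depth)).view(-1, self.n_points, 3)

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point sets a, b of shape (B, N, 3)."""
    d = torch.cdist(a, b)                              # (B, N, M) pairwise distances
    return d.min(dim=2).values.mean() + d.min(dim=1).values.mean()

# Hypothetical usage with dummy data
model = DepthToPointCloud()
depth = torch.rand(2, 1, 64, 64)                       # single input depth maps
gt_cloud = torch.rand(2, 1024, 3)                      # "complete" foot scans
loss = chamfer_distance(model(depth), gt_cloud)
loss.backward()
print(loss.item())
```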

    La realidad aumentada y el marketing digital en el sector calzado: una revisión de la literatura científica

    Get PDF
    This systematic review, entitled "Augmented reality and digital marketing in the footwear sector: a review of the scientific literature", addressed the research question: to what extent has the scientific literature studied augmented reality and digital marketing in the footwear sector over the last 10 years? From this question, objectives were derived, such as studying augmented reality and digital marketing in the footwear sector over the last 10 years and classifying the findings according to their contributions. The search was carried out in databases such as Google Scholar, Microsoft Academic and Redalyc, and inclusion criteria such as time period, type of publication, language, population and scope were considered, while articles that did not meet these criteria were discarded. It was observed that quantitative methodology predominates among the studies, that they come mostly from Latin American countries such as Colombia and Peru, and that the topic is currently relevant, since most of the studies are no more than 5 years old. After the literature review, it was concluded that both augmented reality and digital marketing have been studied regularly in the context of the footwear sector over the last 10 years.

    Robotic manipulation for the shoe-packaging process

    Full text link
    [EN] This paper presents the integration of a robotic system in a human-centered environment, as can be found in the shoe manufacturing industry. Fashion footwear is nowadays mainly handcrafted due to the large number of small production tasks. Therefore, the introduction of intelligent robotic systems in this industry may help automate and improve manual production steps such as polishing, cleaning, packaging, and visual inspection. Due to the high complexity of the manual tasks in shoe production, cooperative robotic systems (which can work in collaboration with humans) are required. Thus, the focus of the robot lies on grasping, collision detection, and avoidance, as well as on considering human intervention to supervise the work being performed. For this research, the robot has been equipped with a Kinect camera and a wrist force/torque sensor so that it is able to detect human interaction and the dynamic environment in order to modify the robot's behavior. To illustrate the applicability of the proposed approach, this work presents experimental results obtained for two actual platforms, located at different research laboratories, that share similarities in their morphology, sensor equipment, and actuation system. This work has been partly supported by the Ministerio de Economia y Competitividad of the Spanish Government (Key No.: 0201603139 of the Invest in Spain program and Grant No. RTC-2016-5408-6) and by the Deutscher Akademischer Austauschdienst (DAAD) of the German Government (Projekt-ID 54368155). Gracia Calandin, LI.; Perez-Vidal, C.; Mronga, D.; Paco, JD.; Azorin, J.; Gea, JD. (2017). Robotic manipulation for the shoe-packaging process. The International Journal of Advanced Manufacturing Technology 92(1-4):1053-1067. https://doi.org/10.1007/s00170-017-0212-6
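    The abstract describes using the wrist force/torque sensor to detect human interaction and adapt the robot's behavior; the paper's ROS-based implementation is not reproduced in this listing, so the sketch below (Python/rospy) only illustrates the general monitoring pattern. The topic names, force threshold, and stop flag are assumptions, not the authors' setup.

```python
#!/usr/bin/env python
# Minimal sketch of force-based contact detection; topic names,
# threshold, and stop command are assumptions, not the paper's setup.
import rospy
from geometry_msgs.msg import WrenchStamped
from std_msgs.msg import Bool

FORCE_LIMIT_N = 15.0  # hypothetical contact threshold in newtons

class ContactMonitor(object):
    def __init__(self):
        # Publish a stop flag that a motion controller could subscribe to.
        self.stop_pub = rospy.Publisher('/robot/soft_stop', Bool, queue_size=1)
        rospy.Subscriber('/wrist_ft/wrench', WrenchStamped, self.on_wrench)

    def on_wrench(self, msg):
        f = msg.wrench.force
        magnitude = (f.x ** 2 + f.y ** 2 + f.z ** 2) ** 0.5
        if magnitude > FORCE_LIMIT_N:
            # Unexpected contact (e.g. a human touching the arm): request a stop.
            rospy.logwarn('Contact detected: |F| = %.1f N', magnitude)
            self.stop_pub.publish(Bool(data=True))

if __name__ == '__main__':
    rospy.init_node('contact_monitor')
    ContactMonitor()
    rospy.spin()
```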