231 research outputs found

    3D Camouflaging Object using RGB-D Sensors

    This paper proposes a new optical camouflage system that uses RGB-D cameras to acquire a point cloud of the background scene and to track the observer's eyes. The system enables a user to conceal an object located behind a display that is surrounded by 3D objects. Treating the tracked position of the observer's eyes as a light source, the system estimates the shadow of the display device that falls on the objects in the background. It uses the 3D position of the observer's eyes and the locations of the display corners to predict the corresponding shadow points, which are matched to their nearest neighbors in the constructed point cloud of the background scene. Comment: 6 pages, 12 figures, 2017 IEEE International Conference on SM
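
    The shadow-point prediction described in this abstract can be illustrated with a nearest-neighbor query against the background point cloud. The Python sketch below is not the paper's implementation; the function name and ray-sampling parameters are assumptions, and it only shows the general idea of casting rays from the tracked eye position through the display corners and looking up the closest background points (NumPy and SciPy assumed).

    import numpy as np
    from scipy.spatial import cKDTree

    def predict_shadow_points(eye, display_corners, background_cloud,
                              max_range=5.0, steps=200):
        """Treat the tracked eye position as a light source and, for each display
        corner, return the background point closest to the ray cast from the eye
        through that corner (a stand-in for the paper's shadow-point search)."""
        eye = np.asarray(eye, dtype=float)                        # (3,) tracked eye position
        background_cloud = np.asarray(background_cloud, float)    # (N, 3) background points
        tree = cKDTree(background_cloud)
        shadow_points = []
        for corner in np.asarray(display_corners, dtype=float):   # (4, 3) display corners
            direction = corner - eye
            direction = direction / np.linalg.norm(direction)
            # sample the ray beyond the corner and keep the nearest cloud point
            samples = corner + np.outer(np.linspace(0.0, max_range, steps), direction)
            dists, idx = tree.query(samples)
            shadow_points.append(background_cloud[idx[np.argmin(dists)]])
        return np.asarray(shadow_points)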

    Spatially Aware Computing for Natural Interaction

    Spatial information refers to the location of an object in a physical or digital world; it also includes the position of an object relative to the other objects around it. In this dissertation, three systems are designed and developed, all of which apply spatial information in different fields. The ultimate goal is to increase user-friendliness and efficiency in these applications by utilizing spatial information. The first system is a novel Web page data extraction application, which takes advantage of 2D spatial information to discover structured records in a Web page. The extracted information is useful for re-organizing the layout of a Web page to fit mobile browsing. The second application utilizes the 3D spatial information of a mobile device within a large paper-based workspace to implement interactive paper that combines the merits of paper documents and mobile devices. This application can overlay digital information on top of a paper document based on the location of the mobile device within the workspace. The third application further integrates 3D spatial information with sound detection to realize an automatic camera management system. This application automatically controls multiple cameras in a conference room and creates an engaging video by intelligently switching camera shots among meeting participants based on their activities. All three applications have been evaluated, and the results are promising. In summary, this dissertation comprehensively explores the use of spatial information in various applications to improve their usability.
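
    The shot-switching behavior of the third application lends itself to a small illustration. The Python sketch below is not from the dissertation; the speaker-to-camera mapping, the dwell time, and the audio-level inputs are all assumptions used only to show the idea of activity-driven camera selection with a guard against rapid cutting.

    from collections import deque

    class CameraDirector:
        """Toy shot selector: switch to the camera covering the most active
        speaker, but only after that speaker has been dominant for a short
        dwell period, to avoid rapid cutting between shots."""

        def __init__(self, speaker_to_camera, dwell_frames=30):
            self.speaker_to_camera = speaker_to_camera   # e.g. {"alice": "cam1", "bob": "cam2"}
            self.dwell_frames = dwell_frames
            self.history = deque(maxlen=dwell_frames)
            self.current_shot = None

        def update(self, audio_levels):
            """audio_levels: dict mapping speaker id -> detected sound level for this frame."""
            dominant = max(audio_levels, key=audio_levels.get)
            self.history.append(dominant)
            if self.history.count(dominant) == self.dwell_frames:
                self.current_shot = self.speaker_to_camera[dominant]
            return self.current_shot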

    Evaluating body tracking interaction in floor projection displays with an elderly population

    The recent development of affordable full-body tracking sensors has made this technology accessible to millions of users and offers the opportunity to develop new natural user interfaces. In this paper we focus on developing two natural user interfaces that can easily be used by an elderly population to interact with a floor projection display. One interface uses feet positions to control a cursor and feet distance to activate interaction. In the second interface, the cursor is controlled by ray casting the forearm onto the projection, and interaction is activated by hand pose. The interfaces were tested by 19 elderly participants in a point-and-click and a drag-and-drop task using a between-subjects experimental design. The usability and perceived workload of each interface were assessed, as well as performance indicators. Results show a clear preference by the participants for the feet-controlled interface, and also marginally better performance with this method.
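
    A minimal sketch of what the feet-controlled interface could look like in code. The abstract does not give the mapping or the activation rule, so the coordinate mapping, the "feet brought close together" trigger, and the threshold value below are all assumptions used only to illustrate the idea.

    import numpy as np

    ACTIVATION_DISTANCE = 0.15   # metres between the feet that triggers a "click" (assumed)

    def feet_to_cursor(left_foot, right_foot, floor_bounds, screen_size):
        """Map the midpoint of the two tracked feet (floor coordinates in metres)
        to a cursor position on the floor projection, and report activation when
        the feet are brought close together."""
        left = np.asarray(left_foot, dtype=float)
        right = np.asarray(right_foot, dtype=float)
        mid = (left + right) / 2.0
        (x_min, y_min), (x_max, y_max) = floor_bounds      # physical extent of the projection
        u = (mid[0] - x_min) / (x_max - x_min) * screen_size[0]
        v = (mid[1] - y_min) / (y_max - y_min) * screen_size[1]
        activated = np.linalg.norm(left - right) < ACTIVATION_DISTANCE
        return (u, v), activated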

    The ASPECTA toolkit: affordable Full Coverage Displays

    Full Coverage Displays (FCDs) cover the interior surface of an entire room with pixels. FCDs make possible many new kinds of immersive display experiences - but current technology for building FCDs is expensive and complex, and software support for developing full-coverage applications is limited. To address these problems, we introduce ASPECTA, a hardware configuration and software toolkit that provide a low-cost and easy-to-use solution for creating full-coverage systems. We outline ASPECTA’s (minimal) hardware requirements and describe the toolkit’s architecture, development API, server implementation, and configuration tool; we also provide a full example of how the toolkit can be used. We performed two evaluations of the toolkit: a case study of a research system built with ASPECTA, and a laboratory study that tested the effectiveness of the API. Our evaluations, as well as multiple examples of ASPECTA in use, show how ASPECTA can simplify configuration and development while still dramatically reducing the cost for creators of applications that take advantage of full-coverage displays.
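
    The abstract does not describe ASPECTA's actual API, so the Python sketch below does not use it; every class and method name here is hypothetical. It only illustrates the underlying idea of a full-coverage display: several room surfaces stitched into one logical pixel canvas that applications can address with a single coordinate system.

    from dataclasses import dataclass

    @dataclass
    class Surface:
        """One display/projection surface in the room: physical size in metres
        and resolution in pixels."""
        name: str
        width_m: float
        height_m: float
        width_px: int
        height_px: int

    class FullCoverageCanvas:
        """Toy model of a full-coverage display: the room's surfaces are laid out
        side by side on one logical pixel canvas, so an application can address
        any point on the room's interior in a single coordinate system."""

        def __init__(self, surfaces):
            self.surfaces = {s.name: s for s in surfaces}
            self.offsets = {}
            x = 0
            for s in surfaces:
                self.offsets[s.name] = x
                x += s.width_px
            self.total_width = x

        def to_global(self, surface_name, x_m, y_m):
            """Convert a physical point on one surface to a global pixel coordinate."""
            s = self.surfaces[surface_name]
            px = self.offsets[surface_name] + x_m / s.width_m * s.width_px
            py = y_m / s.height_m * s.height_px
            return int(px), int(py)

    A real toolkit additionally has to handle projector calibration, blending across corners, and communication between display machines, which this toy model ignores.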

    Application of augmented reality and robotic technology in broadcasting: A survey

    As an innovative technique, Augmented Reality (AR) has gradually been deployed in the broadcast, videography and cinematography industries. Virtual graphics generated by AR are dynamic and are overlaid on the surfaces of the environment, so that the original appearance can be greatly enhanced in comparison with traditional broadcasting. In addition, AR enables broadcasters to interact with augmented virtual 3D models in a broadcasting scene in order to enhance the performance of broadcasting. Recently, advanced robotic technologies have been deployed in camera shooting systems to create a robotic cameraman so that the performance of AR broadcasting can be further improved, which is highlighted in this paper.

    Computational Multimedia for Video Self Modeling

    Video self modeling (VSM) is a behavioral intervention technique in which a learner models a target behavior by watching a video of oneself. This is the idea behind the psychological theory of self-efficacy: you can learn or model to perform certain tasks because you see yourself doing it, which provides the most ideal form of behavior modeling. The effectiveness of VSM has been demonstrated for many different types of disabilities and behavioral problems, ranging from stuttering, inappropriate social behaviors, autism, and selective mutism to sports training. However, there is an inherent difficulty associated with the production of VSM material: prolonged and persistent video recording is required to capture the rare, if not nonexistent, snippets that can be strung together to form novel video sequences of the target skill. To solve this problem, in this dissertation we use computational multimedia techniques to facilitate the creation of synthetic visual content for self-modeling that can be used by a learner and his/her therapist with a minimum amount of training data. There are three major technical contributions in my research. First, I developed an Adaptive Video Re-sampling algorithm to synthesize realistic lip-synchronized video with minimal motion jitter. Second, to denoise and complete the depth maps captured by structured-light sensing systems, I introduced a layer-based probabilistic model to account for various types of uncertainty in the depth measurements. Third, I developed a simple and robust bundle-adjustment-based framework for calibrating a network of multiple wide-baseline RGB and depth cameras.
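
    The third contribution rests on estimating rigid transforms between camera pairs from corresponding 3D observations. The sketch below is not the dissertation's framework; the function name and inputs are assumptions, and it only shows that basic building block using SciPy's least-squares solver.

    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation

    def calibrate_camera_pair(points_a, points_b):
        """Estimate the rigid transform (rotation + translation) that maps 3D points
        observed by camera B into camera A's frame by minimizing point-to-point
        residuals - the basic step a bundle-adjustment calibration of an RGB/depth
        camera network repeats over many camera pairs and observations."""
        points_a = np.asarray(points_a, dtype=float)      # (N, 3) points seen by camera A
        points_b = np.asarray(points_b, dtype=float)      # (N, 3) same points seen by camera B

        def residuals(params):
            rotation = Rotation.from_rotvec(params[:3])   # axis-angle rotation parameters
            translation = params[3:]
            return (rotation.apply(points_b) + translation - points_a).ravel()

        result = least_squares(residuals, x0=np.zeros(6))
        return Rotation.from_rotvec(result.x[:3]).as_matrix(), result.x[3:]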

    Mobile and adaptive user interface for human-robot collaboration in assembly tasks

    The manufacturing sector is constantly looking for more efficient ways of production. Industry 4.0 related technologies such as augmented and mixed reality, connectivity and digitalisation, as well as the current trend of robotisation, have resulted in a number of technical solutions to support production in factories. The combination of human-robot collaboration and augmented reality shows good promise. The challenges in this case come from the need to reconfigure the physical production layout and from how to deliver the digital instructions to the operator. This paper introduces a model for collaborative assembly tasks that uses a mobile user interface based on depth sensors and a projector. The novelty of this research comes from the adaptivity of the user interface, as it can be freely moved around the workstation between tasks based on the operator's needs and the requirements of the tasks. The ability to move the projection surface is achieved by detecting the surface position using ArUco markers and computing the required transformation of the projector image.
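
    A minimal Python sketch of how ArUco-based surface detection and projector-image warping could fit together, using OpenCV's ArUco module (OpenCV 4.7 or newer assumed). The marker IDs, their placement at the surface corners, and the assumption that camera and projector coordinates are pre-aligned are simplifications that are not taken from the paper.

    import cv2
    import numpy as np

    ARUCO_DICT = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    DETECTOR = cv2.aruco.ArucoDetector(ARUCO_DICT, cv2.aruco.DetectorParameters())

    def warp_ui_to_surface(camera_frame, ui_image, projector_size):
        """Detect the four ArUco markers delimiting the movable projection surface
        and warp the UI image so that, when sent to the projector, it lands on
        that surface. Simplification: the camera and projector are assumed to be
        pre-aligned, so a detected camera pixel maps directly to a projector pixel."""
        gray = cv2.cvtColor(camera_frame, cv2.COLOR_BGR2GRAY)
        corners, ids, _ = DETECTOR.detectMarkers(gray)
        if ids is None or len(ids) < 4:
            return None                                   # surface not fully visible
        # assume markers 0-3 are placed clockwise from the surface's top-left corner
        order = np.argsort(ids.ravel())[:4]
        dst = np.array([corners[i][0].mean(axis=0) for i in order], dtype=np.float32)
        h, w = ui_image.shape[:2]
        src = np.array([[0, 0], [w, 0], [w, h], [0, h]], dtype=np.float32)
        homography, _ = cv2.findHomography(src, dst)
        return cv2.warpPerspective(ui_image, homography, projector_size)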

    RoboMirror: A Simulated Mirror Display with a Robotic Camera

    Simulated mirror displays have promising application prospects due to their capability for virtual visualization. In most existing mirror displays, cameras are placed on top of the display and are unable to capture the person in front of it at the highest possible resolution. The lack of a direct frontal capture of the subject's face and the geometric error introduced by image-warping techniques make realistic mirror-image rendering a challenging problem. The objective of this thesis is to explore the use of a robotic camera that tracks the face of the subject in front of the display to obtain a high-quality capture. Our system uses a Bislide system to control a camera for face capture, while using a separate color-depth camera for accurate face tracking. We construct an optical device in which a one-way mirror is used, so that the robotic camera behind it can capture the subject while the rendered images are displayed by reflecting an overhead projector off the mirror. A key challenge of the proposed system is the reduction of light caused by the one-way mirror; an optimal 2D Wiener filter is selected to enhance the low-contrast images captured by the camera.
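
    The abstract names the 2D Wiener filter but not its parameters, so the Python sketch below is only a generic illustration of Wiener filtering a low-contrast capture and stretching its contrast, using SciPy's wiener filter; the kernel size is an assumption.

    import numpy as np
    from scipy.signal import wiener

    def enhance_capture(gray_image, kernel_size=5):
        """Wiener-filter a low-contrast grayscale capture taken through the one-way
        mirror, then stretch the result to the full 8-bit range."""
        filtered = wiener(np.asarray(gray_image, dtype=np.float64), mysize=kernel_size)
        low, high = filtered.min(), filtered.max()
        stretched = (filtered - low) / max(high - low, 1e-9) * 255.0
        return stretched.astype(np.uint8)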