    Kinect-based Universal Range Sensor and its Application in Educational Laboratories

    Traditional data acquisition (DAQ) systems for obtaining 3-D range information consist of sensors, DAQ measurement hardware and a processor with software. Such a DAQ system must be adapted and calibrated for each application, imposing significant up-front costs that hamper its broad usage in educational laboratories. The low-cost Microsoft Kinect has become a promising alternative that has already been widely adopted by consumers and offers a wide variety of opportunities for use in many other areas. The Kinect’s cameras produce high-quality synchronized video consisting of both color and depth data. This enables the Kinect to compete with other sophisticated 3-D sensor DAQ systems in terms of performance criteria such as accuracy, stability, reliability and error rates. One of the most noticeable Kinect-based applications is the skeleton-based tracking of humans, which is made possible by its built-in human skeleton recognition functions. Other common uses of the Kinect are 3-D surface/scene reconstruction and object classification/recognition. However, Kinect-based developments that focus on the tracking of arbitrary objects have rarely been reported, mainly due to a lack of mature algorithms. The first part of this paper introduces a three-stage approach for capturing general motions of objects. The approach consists of point cloud pre-processing with a focus on computational efficiency, object tracking based on recognition, and post-processing including motion analysis. It can be tailored to special cases: the algorithms favor computational efficiency when the objects of interest have simple shapes or colors, and favor reliability when the objects have complex geometries or textures. The second part of this paper describes the integration of the proposed DAQ system into a multi-player game-based laboratory environment. In this implementation, a physical experiment is triggered by the game avatars, and the experimental data acquired by the Kinect and analyzed by the proposed algorithms are then fed back into the game environment and used to animate the experimental device.
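
    The abstract does not spell out the algorithms behind the three stages, but the first stage can be illustrated with a minimal sketch in Python/NumPy, assuming raw Kinect depth frames are available as arrays. The intrinsics, thresholds and function names below are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch of a pre-processing stage like the one described above:
# a Kinect depth frame is back-projected to a 3-D point cloud, cropped to a
# region of interest, and voxel-downsampled to keep later tracking cheap.
# Intrinsics, box limits and function names are assumed, not from the paper.
import numpy as np

# Nominal Kinect v1 depth-camera intrinsics (assumed values).
FX, FY = 594.2, 591.0   # focal lengths in pixels
CX, CY = 339.5, 242.7   # principal point in pixels

def depth_to_cloud(depth_mm: np.ndarray) -> np.ndarray:
    """Back-project a HxW depth image (millimetres) to an Nx3 cloud in metres."""
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_mm.astype(np.float32) / 1000.0      # mm -> m
    valid = z > 0                                 # Kinect reports 0 for "no reading"
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.stack([x[valid], y[valid], z[valid]], axis=1)

def crop_box(cloud: np.ndarray, lo, hi) -> np.ndarray:
    """Keep only points inside an axis-aligned box (the experiment's workspace)."""
    mask = np.all((cloud >= lo) & (cloud <= hi), axis=1)
    return cloud[mask]

def voxel_downsample(cloud: np.ndarray, voxel: float = 0.01) -> np.ndarray:
    """Replace all points in each voxel-sized cell with their centroid."""
    keys = np.floor(cloud / voxel).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    sums = np.zeros((inverse.max() + 1, 3))
    counts = np.zeros(inverse.max() + 1)
    np.add.at(sums, inverse, cloud)
    np.add.at(counts, inverse, 1)
    return sums / counts[:, None]

# Example: a synthetic 480x640 depth frame standing in for a real Kinect capture.
depth = np.random.randint(500, 4000, size=(480, 640)).astype(np.uint16)
cloud = depth_to_cloud(depth)
cloud = crop_box(cloud, lo=[-0.5, -0.5, 0.5], hi=[0.5, 0.5, 2.0])
cloud = voxel_downsample(cloud, voxel=0.02)
print(cloud.shape)
```

    In the paper's terms, the reduced cloud produced by a stage like this would then feed into the recognition-based tracking and motion-analysis stages.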

    Object tracking in augmented reality remote access laboratories without fiducial markers

    Remote Access Laboratories provide students with access to learning resources without the need to be in situ with the assets. The technology gives users access to physical experiments anywhere and anytime, while also minimising or distributing the cost of operating expensive laboratory equipment. Augmented Reality is a technology that provides interactive sensory feedback to users: the user experiences reality through a computer-based interface enriched with computer-generated information in a form applicable to the targeted senses. Recent advances in high-definition video capture devices, video screens and mobile computers have driven a resurgence in mainstream Augmented Reality technologies. The lower cost and greater processing power of microprocessors and memory place these resources in the hands of developers and users alike, allowing educational institutes to invest in technologies that enhance the delivery of course content. This increase in pedagogical resources has already allowed education at a distance to reach students from a wide range of demographics, improving access and outcomes in multiple disciplines. Incorporating Augmented Reality into Remote Access Laboratory resources improves overall user immersion in the remote experiment, thus increasing student engagement and understanding of the delivered material. Visual implementations of Augmented Reality rely on seamlessly integrating the current environment (viewed through a mobile device, desktop PC or heads-up display) with computer-generated visual artefacts. Virtual objects must appear in context with the current environment and respond within a realistic time, or else the user suffers a disjointed and confusing blend of real and virtual information. Understanding and interacting with the visual scene are handled by Computer Vision algorithms, which are crucial in ensuring that the AR system remains consistent with the data discovered in the scene. While Augmented Reality has begun to expand in the educational environment, there is currently still very little overlap between Augmented Reality technologies and Remote Access Laboratories. This research has investigated Computer Vision models that support Augmented Reality technologies such that live video streams from Remote Laboratories are enhanced by synthetic overlays pertinent to the experiments. Orienting synthetic visual overlays requires knowledge of key reference points, a task often performed with fiducial markers. Removing the equipment’s need for fiducial markers and a priori knowledge simplifies and accelerates the uptake and expansion of the technology. This work presents hybrid Computer Vision models that require no prior knowledge of the laboratory environment and no fiducial markers or tags to track important objects and references. The developed models derive all relevant data from the live video stream and require no previous knowledge of the configuration of the physical scene. The new image analysis paradigms (Two-Dimensional Colour Histograms and the Neighbourhood Gradient Signature) improve the current state of markerless tracking through unique attributes discovered within sequential video frames. Novel methods are also established with which to assess and measure the performance of Computer Vision models: objective ground-truth images minimise the level of subjective interference in measuring the efficacy of CV edge and corner detectors, and an effective method for comparing the detected attributes associated with an image or object provides a means to measure the likelihood of a match between video frames. In combination with existing material and new contributions, this research demonstrates effective object detection and tracking for Augmented Reality systems within a Remote Access Laboratory environment, with no requirement for fiducial markers or prior knowledge of the environment. The models proposed in this work can be generalised to any cyber-physical environment that accommodates peripherals such as cameras and other sensors.
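
    The precise definitions of the Two-Dimensional Colour Histogram and the Neighbourhood Gradient Signature are not given in the abstract, but the general idea of histogram-based frame matching can be sketched in Python/NumPy, assuming OpenCV-style hue/saturation ranges. The bin counts, normalisation and intersection score below are illustrative assumptions rather than the thesis's actual formulation.

```python
# Hedged sketch of histogram-based frame matching: hue and saturation of a
# candidate region are binned into a 2-D histogram, and histograms from
# successive video frames are compared to score how likely they show the
# same object. All parameters here are assumptions, not the thesis's method.
import numpy as np

def hs_histogram(hsv_region: np.ndarray, bins: int = 32) -> np.ndarray:
    """Build a normalised 2-D hue/saturation histogram from an HxWx3 HSV patch."""
    h = hsv_region[..., 0].ravel()   # OpenCV-style hue in [0, 180)
    s = hsv_region[..., 1].ravel()   # saturation in [0, 256)
    hist, _, _ = np.histogram2d(h, s, bins=bins, range=[[0, 180], [0, 256]])
    total = hist.sum()
    return hist / total if total > 0 else hist

def histogram_similarity(h1: np.ndarray, h2: np.ndarray) -> float:
    """Histogram intersection: 1.0 for identical distributions, 0.0 for disjoint."""
    return float(np.minimum(h1, h2).sum())

# Example: two synthetic HSV patches standing in for regions cut from
# successive frames of the remote laboratory video stream.
rng = np.random.default_rng(0)
patch_a = rng.integers(0, [180, 256, 256], size=(64, 64, 3)).astype(np.float32)
patch_b = patch_a + rng.normal(0, 5, size=patch_a.shape)   # slightly perturbed copy
score = histogram_similarity(hs_histogram(patch_a), hs_histogram(patch_b))
print(f"match likelihood (intersection): {score:.3f}")
```

    A higher intersection score indicates that the two regions are more likely to depict the same object across frames, which is the kind of likelihood measure the abstract describes for markerless matching.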