8 research outputs found

    Cloud-to-end rendering and storage management for virtual reality in experimental education

    Background: Real-time 3D rendering and interaction are important for virtual reality (VR) experimental education. Unfortunately, standard end-computing methods prohibitively escalate computational costs, so reducing or distributing these requirements needs urgent attention, especially in light of the COVID-19 pandemic. Methods: In this study, we design a cloud-to-end rendering and storage system for VR experimental education comprising two model types: background and interactive. The cloud server renders background items and streams the results to an end terminal as video. Interactive models are then lightweight-rendered and blended at the end terminal. An improved 3D warping and hole-filling algorithm is also proposed to improve image quality when the user's viewpoint changes. Results: We build three scenes to test image quality and network latency. The results show that our system renders 3D experimental education scenes with higher image quality and lower latency than other cloud rendering systems. Conclusions: Our study is the first to use cloud and lightweight rendering for VR experimental education. The results demonstrate that our system provides a good rendering experience without excessive computational cost.
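
    The abstract's improved 3D warping and hole-filling algorithm is not published in this listing; as context, the sketch below shows the classic depth-image-based warping step such systems build on: back-project each pixel with the source view's depth and intrinsics, re-project it into the new viewpoint, and fill the resulting holes. The function name, the naive left-neighbour hole filling, and the assumption that intrinsics K and relative pose (R, t) are supplied by the streaming pipeline are all illustrative, not the paper's method.

```python
# Minimal sketch of classic depth-image-based 3D warping with naive hole
# filling. This is NOT the paper's improved algorithm; it only illustrates
# the baseline the abstract refers to. K, depth, rgb, and the relative
# pose (R, t) between the rendered and the current viewpoint are assumed
# to be provided by the streaming pipeline.
import numpy as np

def warp_to_new_view(rgb, depth, K, R, t):
    """Re-project an RGB-D frame rendered at one viewpoint into another."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T  # 3xN

    # Back-project to 3D camera coordinates of the source view.
    pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)

    # Rigid transform into the target view and project with the intrinsics.
    pts_new = R @ pts + t.reshape(3, 1)
    proj = K @ pts_new
    z = np.maximum(proj[2], 1e-6)                  # guard against zero depth
    u = np.round(proj[0] / z).astype(int)
    v = np.round(proj[1] / z).astype(int)

    warped = np.zeros_like(rgb)
    hole = np.ones((h, w), dtype=bool)
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h) & (pts_new[2] > 0)
    warped[v[valid], u[valid]] = rgb.reshape(-1, 3)[valid]
    hole[v[valid], u[valid]] = False

    # Naive hole filling: copy the nearest valid pixel from the left.
    for row in range(h):
        for col in range(1, w):
            if hole[row, col] and not hole[row, col - 1]:
                warped[row, col] = warped[row, col - 1]
                hole[row, col] = False
    return warped
```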

    Object Detection and Tracking Using Uncalibrated Cameras

    This thesis considers the problem of tracking an object in world coordinates using measurements obtained from multiple uncalibrated cameras. A general approach to tracking the location of a target involves several phases, including calibrating the cameras, detecting the object's feature points over frames, tracking the object over frames, and analyzing the object's motion and behavior. The approach contains two stages. First, the problem of camera calibration using a calibration object is studied. This approach retrieves the camera parameters from the known 3D locations of ground data and their corresponding image coordinates. The next important part of this work is to develop an automated system to estimate the trajectory of the object in 3D from image sequences. This is achieved by combining, adapting, and integrating several state-of-the-art algorithms. Synthetic data based on a nearly constant velocity object motion model is used to evaluate the performance of the camera calibration and state estimation algorithms.
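
    The nearly constant velocity (NCV) model mentioned above is a standard linear motion model, so the 3D trajectory can be tracked with a plain Kalman filter. The sketch below is a hypothetical minimal example rather than the thesis's implementation: it builds the filter matrices for an NCV state [x, y, z, vx, vy, vz] and runs the predict/update cycle on triangulated position measurements; the noise levels q and r are illustrative values.

```python
# Minimal sketch of a nearly-constant-velocity (NCV) motion model with a
# plain Kalman filter tracking the 3D position. State = [x, y, z, vx, vy, vz];
# the process-noise level q and measurement-noise level r are illustrative
# values, not the thesis's settings.
import numpy as np

def ncv_matrices(dt, q, r):
    F = np.eye(6)
    F[:3, 3:] = dt * np.eye(3)                      # position += velocity * dt
    H = np.hstack([np.eye(3), np.zeros((3, 3))])    # we only measure position
    Q = q * np.eye(6)
    R = r * np.eye(3)
    return F, H, Q, R

def kalman_track(measurements, dt=1.0, q=1e-3, r=1e-2):
    F, H, Q, R = ncv_matrices(dt, q, r)
    x = np.zeros(6)
    P = np.eye(6)
    track = []
    for z in measurements:                          # z is a 3-vector
        # Predict with the constant-velocity model.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the (triangulated) position measurement.
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(6) - K @ H) @ P
        track.append(x[:3].copy())
    return np.array(track)
```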

    Fruit Grading based on Deep Learning and Active Vision System

    This paper presents a low-cost computer vision-based solution to obtain the size of fruits without contact. It consists of a low-cost webcam and a cross-shaped laser beam rigidly assembled. The proposed approach acquires and processes the images in real time. Due to the low computational cost of the proposed algorithm, a robust solution is obtained using a frame-redundancy approach, which consists of processing several frames of the same scene and computing a robust estimate of the fruit size. The proposed solution is evaluated with different tropical fruits (e.g., banana, avocado, dragon fruit, mamey, papaya, and taxo). The results show a mean absolute percentage error (MAPE) below 1.50% in the computed sizes.
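
    The frame-redundancy idea amounts to aggregating many per-frame size estimates of the same static scene into one robust value. The sketch below illustrates one common way to do this, with a median and MAD-based outlier rejection; the paper does not publish its exact aggregation rule, so the function and thresholds are assumptions.

```python
# Minimal sketch of the frame-redundancy idea: estimate the fruit size
# independently on several frames of the same static scene, then keep a
# robust aggregate. The median and the MAD-based outlier rejection are
# illustrative choices, not the paper's published procedure.
import numpy as np

def robust_size(per_frame_sizes_mm, mad_threshold=3.0):
    sizes = np.asarray(per_frame_sizes_mm, dtype=float)
    med = np.median(sizes)
    mad = np.median(np.abs(sizes - med)) + 1e-9     # avoid division by zero
    inliers = sizes[np.abs(sizes - med) / mad < mad_threshold]
    return float(np.median(inliers))

# Example: 12 noisy per-frame measurements of the same fruit, one outlier.
print(robust_size([182.1, 181.7, 182.4, 250.0, 181.9, 182.2,
                   182.0, 181.8, 182.3, 182.1, 181.9, 182.2]))
```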

    Integration of multiple vision systems and toolbox development

    Depending on the required coverage, multiple cameras with different fields of view, positions, and orientations can be employed to form a motion tracking system. Correctly and efficiently designing and setting up a multi-camera vision system presents a technical challenge. This thesis describes the development and application of a toolbox that helps the user design a multi-camera vision system. Using the parameters of the cameras, including their positions and orientations, the toolbox can calculate the volume covered by the system and generate a visualization of it for a given tracking area. The cameras can be repositioned and reoriented using the toolbox to regenerate the visualization of the covered volume. Finally, this thesis describes how to practically implement and achieve a proper multi-camera setup. The integration of multiple cameras for vision system development is based on Svoboda's and Horn's algorithms, and Dijkstra's algorithm is implemented to estimate the tracking error between the master vision system and any of the slave vision systems. The toolbox is evaluated by comparing the calculated and actual covered volumes of a multi-camera system, as well as for its error estimation. The multi-camera vision system design is implemented using the developed toolbox for a virtual fastening operation of an aircraft fuselage in a computer-automated virtual environment (CAVE).
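
    The covered-volume calculation described above can be approximated by voxelising the tracking area and testing each voxel against every camera's viewing frustum. The sketch below is a simplified illustration under assumed inputs (camera position, viewing direction, field of view, and range), not the toolbox's actual implementation; it uses a simple FOV-cone test in place of a full rectangular frustum.

```python
# Minimal sketch of the coverage computation: voxelise the tracking area and
# count the voxels seen by at least `min_cams` cameras. The cone-shaped
# frustum test is an illustrative simplification.
import numpy as np

def in_view(point, cam_pos, cam_dir, fov_deg, max_range):
    v = point - cam_pos
    dist = np.linalg.norm(v)
    if dist == 0 or dist > max_range:
        return False
    cos_angle = np.dot(v / dist, cam_dir / np.linalg.norm(cam_dir))
    return cos_angle >= np.cos(np.radians(fov_deg / 2.0))

def covered_volume(cameras, bounds, step=0.1, min_cams=2):
    """cameras: list of (pos, dir, fov_deg, max_range); bounds: (lo, hi) corners."""
    lo, hi = np.asarray(bounds[0]), np.asarray(bounds[1])
    axes = [np.arange(a, b, step) for a, b in zip(lo, hi)]
    grid = np.array(np.meshgrid(*axes, indexing="ij")).reshape(3, -1).T
    count = sum(
        sum(in_view(p, *cam) for cam in cameras) >= min_cams for p in grid
    )
    return count * step ** 3                        # covered volume in m^3

# Two cameras looking down into a 2 m x 2 m x 2 m tracking area.
cams = [(np.array([0.0, 0.0, 2.0]), np.array([0.0, 0.0, -1.0]), 60, 5.0),
        (np.array([2.0, 0.0, 2.0]), np.array([-0.5, 0.0, -1.0]), 60, 5.0)]
print(covered_volume(cams, ([0, -1, 0], [2, 1, 2]), step=0.2))
```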

    Investigating deep-learning-based solutions for flexible and robust hand-eye calibration in robotics

    Cameras are the main sensors robots use to perceive their environments because they provide high-quality information at low cost. However, transforming the information obtained from cameras into robotic actions can be challenging. To manipulate objects in camera scenes, robots need to establish a transformation between the camera and the robot base, which is known as hand-eye calibration. Achieving accurate hand-eye calibration is critical for precise robotic manipulation, yet traditional approaches can be time-consuming, error-prone, and fail to account for changes in the camera or robot base over time. This thesis proposes a novel approach that leverages the power of deep learning to automatically learn the mapping between the robot's joint angles and the camera's images, enabling real-time calibration updates. The approach samples the robot and camera spaces discretely and represents them continuously, enabling efficient and accurate computation of calibration parameters. By automating the calibration process and using deep learning algorithms, a more robust and efficient solution for hand-eye calibration in robotics is offered. To develop a robust and flexible hand-eye calibration approach, three main studies were conducted. In the first study, a deep learning-based regression architecture was developed that processes RGB and depth images, as well as the poses of a single reference point selected on the robot end-effector with respect to the robot base, acquired through the robot kinematic chain. The success of this architecture was tested in a simulated environment and two real robotic environments, evaluating metric error and precision. In the second study, the developed approach was evaluated by transferring from metric error to task error through a real robotic manipulation task, specifically pick-and-place. Additionally, its performance was compared with a classic hand-eye calibration approach using three evaluation criteria: the real robotic manipulation task, computational complexity, and repeatability. Finally, the learned calibration space of the developed deep learning-based hand-eye calibration approach was extended with new observations over time using continual learning, making the approach more robust and flexible in handling environmental changes. Two buffer-based approaches were developed to eliminate the catastrophic forgetting problem, i.e., forgetting previously learned information as new observations are incorporated. These approaches were compared against retraining the network of the first study from scratch on all datasets, in both a simulated and a real-world environment.
Experimental results of this thesis reveal that: 1) the deep learning-based hand-eye calibration approach achieves results competitive with classical approaches in terms of metric error (positional and rotational deviation from the ground truth) while eliminating data re-collection and re-training when the camera pose changes over time, offers 96 times better repeatability (precision) than the classic approach, and achieves state-of-the-art repeatability compared with other deep learning-based hand-eye calibration approaches; 2) it is also competitive with the classic approaches when performing a real robotic manipulation task, and it reduces computational complexity; 3) by combining the deep learning-based hand-eye calibration approach with continual learning, the learned calibration space can be extended with new observations without training the network from scratch, at a small accuracy gap (less than 1.5 mm and 2.5 degrees for the translation and orientation components in simulated and real-world environments). Overall, the proposed approach offers a more efficient and robust solution for hand-eye calibration in robotics, providing greater accuracy and flexibility to adapt to environments where the relative pose of the robot and camera changes over time; these changes may come from either robot or camera movement. The results of the studies demonstrate the effectiveness of the approach in achieving precise and reliable robotic manipulation, making it a promising solution for robotics applications.
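
    For reference, the classic baseline the thesis compares against typically solves the AX = XB hand-eye equation from paired robot and calibration-target poses. The sketch below shows a minimal eye-in-hand version using OpenCV's solver; the pose lists, the Tsai method choice, and the helper function are assumptions for illustration, not the thesis's deep learning approach or its exact baseline configuration.

```python
# Minimal sketch of classic AX = XB hand-eye calibration with OpenCV.
# The pose lists are assumed to come from the robot's forward kinematics
# and from detecting a calibration target (e.g. a checkerboard) in the
# camera images.
import cv2
import numpy as np

def classic_hand_eye(R_gripper2base, t_gripper2base, R_target2cam, t_target2cam):
    """Each argument is a list of 3x3 rotations / 3x1 translations, one per robot pose."""
    R_cam2gripper, t_cam2gripper = cv2.calibrateHandEye(
        R_gripper2base, t_gripper2base,
        R_target2cam, t_target2cam,
        method=cv2.CALIB_HAND_EYE_TSAI,
    )
    T = np.eye(4)
    T[:3, :3] = R_cam2gripper
    T[:3, 3] = t_cam2gripper.ravel()
    return T  # 4x4 camera-to-gripper transform
```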

    Characterisation of concentrating solar optics by Light Field Method

    This dissertation develops ideas and techniques for the measurement of the light field produced by the concentrating optics used in solar thermal power systems. The research focussed on developing a framework and principles for the implementation of a scalable technology that is suitable, in principle, for cost-effective industrial deployment in the field. Investigation from first principles and technology surveys resulted in the formulation of a number of candidate techniques, one of which was developed further. A key component of the proposed model was evaluated using a novel reformulation and application of electrical impedance tomography (EIT), used to implement an information transform that acts as a highly non-linear compressive sensing mechanism, offsetting manufacturing and material complexity in the measurement of high solar flux levels. The technique allows sensing of a wide range of phenomena over arbitrary manifolds in three-dimensional space by utilizing passive transducers. An inverse reconstruction method particular to the structure of the device was proposed, implemented, and tested in a full simulation of the intended operation. The parameter space of internal configurations of the method was the subject of a uniform statistical search, with results also indicating geometrical properties of the transform used. A variety of design guides were developed to better optimize the implementation of the techniques in a range of applications.
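
    The inverse reconstruction described above is specific to the dissertation's device, but linearised tomography-style problems of this kind are commonly bootstrapped from a regularised least-squares inversion. The sketch below is a generic Tikhonov-regularised reconstruction under assumed placeholders (sensitivity matrix A, measurement vector y, regularisation weight lam); it is not the dissertation's method.

```python
# Minimal sketch of a generic Tikhonov-regularised linear inversion, the
# kind of baseline a linearised EIT-style reconstruction often starts from.
# A, y, and lam are placeholders, not values from the dissertation.
import numpy as np

def tikhonov_reconstruct(A, y, lam=1e-3):
    """Solve min_x ||A x - y||^2 + lam ||x||^2 in closed form."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

# Toy example: recover a sparse flux map from fewer measurements than unknowns.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))      # 40 boundary measurements, 100 unknowns
x_true = np.zeros(100)
x_true[[10, 55, 70]] = [3.0, 1.5, 2.0]
y = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = tikhonov_reconstruct(A, y, lam=0.5)
```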

    Simple pinhole camera calibration
