
    Image Depth Estimation System from Streaming Video

    The thesis is published in accordance with the rector's order No. 311/од of 27.05.2021 "On placing higher-education qualification works in the university repository". Thesis supervisor: Mykola Pavlovych Vasylenko, Cand. Sc. (Eng.), senior lecturer, Department of Aviation Computer-Integrated Complexes. Today, computer vision tasks are becoming highly relevant: more and more production work is being automated through software processes and machine devices that make jobs easier or more accurate. It was therefore decided to examine in detail the problem of stereo vision without neural networks or other more complex methods, since their use requires costly training, tuning, and parameter control. The main task was to create a mechanism that balances price and quality, since the market offers no cheap analogue suitable for simple recognition of 3D scenes that would make it possible to analyze the surrounding environment, namely to find out at what distance objects are located, what their size is, and so on. In the course of the work, two web cameras were configured and calibrated for the stereo vision task. The constraints of projective geometry and the relationship between the two cameras are also considered, since without them the main algorithm could not operate successfully at all. An algorithm and a program were created so that the device operates in streaming mode, which makes it possible to obtain the exact characteristics directly from live video.
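The core of a calibrated two-camera setup like the one described is triangulation: for a rectified pair, depth follows from the disparity between matched pixels as Z = f·B/d. The sketch below is a minimal pure-NumPy illustration (not the thesis's program); in practice a library matcher such as OpenCV's StereoBM would replace the brute-force loop, and `FOCAL_PX` and `BASELINE_M` are hypothetical values that would come from the calibration step.

```python
import numpy as np

# Hypothetical calibration values -- real numbers come from camera calibration
FOCAL_PX = 700.0    # focal length in pixels
BASELINE_M = 0.06   # distance between the two camera centres, metres

def disparity_sad(left, right, max_disp=16, win=3):
    """Brute-force SAD block matching along rectified scanlines."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.float32)
    pad = win // 2
    for y in range(pad, h - pad):
        for x in range(pad + max_disp, w - pad):
            patch = left[y - pad:y + pad + 1, x - pad:x + pad + 1]
            # cost of shifting the right-image window by each candidate disparity
            costs = [np.abs(patch - right[y - pad:y + pad + 1,
                                          x - d - pad:x - d + pad + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp

def depth_from_disparity(disp):
    """Similar triangles: Z = f * B / d (metres); zero disparity -> infinity."""
    with np.errstate(divide="ignore"):
        return np.where(disp > 0, FOCAL_PX * BASELINE_M / disp, np.inf)
```

A streaming version would simply run these two steps on each grabbed frame pair after rectification.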

    Generation of 360 Degree Point Cloud for Characterization of Morphological and Chemical Properties of Maize and Sorghum

    Recently, image-based high-throughput phenotyping methods have gained popularity in plant phenotyping. Imaging projects the 3D space onto a 2D grid, causing the loss of depth information and thus making the retrieval of plant morphological traits challenging. In this study, LiDAR was used along with a turntable to generate a 360-degree point cloud of single plants. A LabVIEW program was developed to control and synchronize both devices. A data processing pipeline was built to recover the digital surface models of the plants. The system was tested with maize and sorghum plants to derive morphological properties including leaf area, leaf angle, and leaf angular distribution. The results showed a high correlation between the manual measurements and the LiDAR measurements of leaf area (R² > 0.91). Also, Structure from Motion (SFM) was used to generate 3D spectral point clouds of single plants at different narrow spectral bands using 2D images acquired by moving the camera completely around the plants. Seven narrow-band (bandwidth of 10 nm) optical filters, with center wavelengths at 530 nm, 570 nm, 660 nm, 680 nm, 720 nm, 770 nm and 970 nm, were used to obtain the images for generating a spectral point cloud. The possibility of deriving the biochemical properties of the plants (nitrogen, phosphorus, potassium, and moisture content) using the multispectral information from the 3D point cloud was tested through statistical modeling techniques. The results were optimistic and thus indicated the possibility of generating a 3D spectral point cloud for deriving both the morphological and biochemical properties of plants in the future. Advisor: Yufeng G
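The registration step behind a turntable scan is straightforward: each single-view scan is rotated back by the turntable angle about the rotation axis and the views are stacked into one 360-degree cloud. The sketch below illustrates that idea only; the function names are hypothetical, the rotation axis is assumed to be z, and the actual study's pipeline (implemented around a LabVIEW controller) is not reproduced here.

```python
import numpy as np

def rotation_z(theta):
    """Rotation matrix about the turntable's vertical axis (taken as z here)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def merge_scans(scans, angles_deg):
    """Rotate each single-view scan (an N_i x 3 array of points) back by its
    turntable angle and stack all views into one 360-degree point cloud."""
    parts = [pts @ rotation_z(np.radians(a)).T
             for pts, a in zip(scans, angles_deg)]
    return np.vstack(parts)
```

With the cloud assembled, traits such as leaf area can then be estimated from a surface reconstruction of the merged points.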

    SLM Microscopy: Scanless Two-Photon Imaging and Photostimulation with Spatial Light Modulators

    Laser microscopy generally has poor temporal resolution, caused by the serial scanning of each pixel. This is a significant problem for imaging or optically manipulating neural circuits, since neuronal activity is fast. To help surmount this limitation, we have developed a “scanless” microscope that does not contain mechanically moving parts. This microscope uses a diffractive spatial light modulator (SLM) to shape an incoming two-photon laser beam into any arbitrary light pattern. This allows the simultaneous imaging or photostimulation of different regions of a sample with three-dimensional precision. To demonstrate the usefulness of this microscope, we perform two-photon uncaging of glutamate to activate dendritic spines and cortical neurons in brain slices. We also use it to carry out fast (60 Hz) two-photon calcium imaging of action potentials in neuronal populations. Thus, SLM microscopy appears to be a powerful tool for imaging and optically manipulating neurons and neuronal circuits. Moreover, the use of SLMs expands the flexibility of laser microscopy, as it can replace traditional fixed lenses with any calculated lens function.
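Shaping a beam into an arbitrary pattern with a phase-only diffractive SLM requires computing a phase mask whose far field matches the desired intensity. The abstract does not name the algorithm used; a common choice for this kind of Fourier-plane hologram is Gerchberg-Saxton iteration, sketched below as an illustration with a uniform-illumination assumption.

```python
import numpy as np

def gerchberg_saxton(target_amp, n_iter=50, seed=0):
    """Iteratively estimate a phase-only SLM mask whose far-field intensity
    approximates target_amp**2 (Gerchberg-Saxton, Fourier-plane hologram)."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 2.0 * np.pi, target_amp.shape)
    pupil_amp = np.ones_like(target_amp)   # uniform illumination assumed
    for _ in range(n_iter):
        far = np.fft.fft2(pupil_amp * np.exp(1j * phase))
        # keep the propagated phase, impose the desired image-plane amplitude
        far = target_amp * np.exp(1j * np.angle(far))
        near = np.fft.ifft2(far)
        phase = np.angle(near)             # keep only the phase at the SLM
    return phase
```

For multi-spot photostimulation, `target_amp` would hold one bright pixel per targeted neuron or spine.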

    Water and Wastewater Pipe Nondestructive Evaluation and Health Monitoring: A Review

    Civil infrastructures such as bridges, buildings, and pipelines ensure society's economic and industrial prosperity. Specifically, pipe networks assure the transportation of primary commodities such as water, oil, and natural gas. The quantitative and early detection of defects in pipes is critical in order to avoid severe consequences. As a result of high-profile accidents and economic downturn, research and development in the area of pipeline inspection has focused mainly on gas and oil pipelines. Due to the low cost of water, the development of nondestructive inspection (NDI) and structural health monitoring (SHM) technologies for fresh water mains and sewers has received far less attention. Moreover, the technical challenges associated with the practical deployment of monitoring systems demand synergistic interaction across several disciplines, which may limit the transition from laboratory to real structures. This paper presents an overview of the most widely used NDI/SHM technologies for freshwater pipes and sewers. The challenges that these infrastructures pose compared with oil and natural gas pipeline networks are discussed. Finally, the methodologies that can be translated into SHM approaches are highlighted.

    Uncertainty Minimization in Robotic 3D Mapping Systems Operating in Dynamic Large-Scale Environments

    This dissertation research is motivated by the potential and promise of 3D sensing technologies in safety and security applications. With specific focus on unmanned robotic mapping to aid clean-up of hazardous environments, under-vehicle inspection, automatic runway/pavement inspection and modeling of urban environments, we develop modular, multi-sensor, multi-modality robotic 3D imaging prototypes using localization/navigation hardware, laser range scanners and video cameras. While deploying our multi-modality complementary approach to pose and structure recovery in dynamic real-world operating conditions, we observe several data fusion issues that state-of-the-art methodologies are not able to handle. Different bounds on the noise model of heterogeneous sensors, the dynamism of the operating conditions and the interaction of the sensing mechanisms with the environment introduce situations where sensors can intermittently degenerate to accuracy levels lower than their design specification. This observation necessitates the derivation of methods to integrate multi-sensor data considering sensor conflict, performance degradation and potential failure during operation. Our work in this dissertation contributes the derivation of a fault-diagnosis framework inspired by information complexity theory to the data fusion literature. We implement the framework as opportunistic sensing intelligence that is able to evolve a belief policy on the sensors within the multi-agent 3D mapping systems to survive and counter concerns of failure in challenging operating conditions. The implementation of the information-theoretic framework, in addition to eliminating failed/non-functional sensors and avoiding catastrophic fusion, is able to minimize uncertainty during autonomous operation by adaptively deciding to fuse or choose believable sensors. 
We demonstrate our framework through experiments in multi-sensor robot state localization in large-scale dynamic environments and in vision-based 3D inference. Our modular hardware and software design of the robotic imaging prototypes, together with the opportunistic sensing intelligence, provides significant improvements towards autonomous, accurate, photo-realistic 3D mapping and remote visualization of scenes for the motivating applications.
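The "fuse or choose believable sensors" decision described above can be illustrated with a toy rule that is much simpler than the dissertation's information-theoretic framework: flag readings that conflict with a robust consensus, then combine only the survivors by inverse-variance weighting. All names and the 3-sigma threshold below are illustrative assumptions, not the author's method.

```python
import numpy as np

def fuse_believable(readings, variances, conflict_sigma=3.0):
    """Toy 'fuse or choose' rule: drop readings deviating from the robust
    median by more than conflict_sigma standard deviations, then fuse the
    believable survivors with inverse-variance weights."""
    r = np.asarray(readings, dtype=float)
    v = np.asarray(variances, dtype=float)
    med = np.median(r)
    believable = np.abs(r - med) <= conflict_sigma * np.sqrt(v)
    if believable.sum() <= 1:
        return float(med)            # fall back to choosing one estimate
    w = 1.0 / v[believable]          # inverse-variance weighting
    return float(np.sum(w * r[believable]) / np.sum(w))
```

A failed sensor reporting a wildly inconsistent value is thereby excluded instead of corrupting the fused estimate, which is the catastrophic-fusion failure mode the abstract describes.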