
    Computer Vision Based Structural Identification Framework for Bridge Health Monitoring

    The objective of this dissertation is to develop a comprehensive Structural Identification (St-Id) framework with damage detection for bridge-type structures using cameras and computer vision technologies. Traditional St-Id frameworks rely on conventional sensors; in this study, the input and output data employed in the St-Id system are acquired through a series of vision-based measurements. The following novelties are proposed, developed and demonstrated in this project: a) vehicle load (input) modeling using computer vision, b) bridge response (output) measurement using a fully non-contact approach based on video/image processing, and c) image-based structural identification using input-output measurements and new damage indicators. The input (loading) data due to vehicles, such as vehicle weights and vehicle locations on the bridge, are estimated by employing computer vision algorithms (detection, classification, and localization of objects) on video images of the vehicles. Meanwhile, the output data, i.e. structural displacements, are obtained by defining and tracking image key-points at the measurement locations. Subsequently, the input and output data sets are analyzed to construct a novel type of damage indicator, named the Unit Influence Surface (UIS). Finally, a new damage detection and localization framework is introduced that does not require a dense network of sensors, but rather a much smaller number of sensors. The main research significance is the first-time development of algorithms that transform measured video images into a form that is highly damage- and change-sensitive for bridge assessment within the context of Structural Identification with input and output characterization. The study exploits a unique attribute of computer vision systems: the signal is continuous in space. This requires new adaptations and transformations that can handle computer vision data/signals for structural engineering applications.
This research will significantly advance current sensor-based structural health monitoring with computer vision techniques, leading to practical applications for damage detection of complex structures with a novel approach. By using computer vision algorithms and cameras as special sensors for structural health monitoring, this study proposes an advanced approach to bridge monitoring through which certain types of data that cannot be collected by conventional sensors, such as vehicle loads and locations, can be obtained practically and accurately.
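The output-measurement step described above, tracking image key-points at measurement locations to recover displacements, can be sketched with a simple template-matching tracker. This is a minimal stand-in, not the dissertation's actual algorithm; the sum-of-squared-differences criterion and the search-window parameters are illustrative assumptions:

```python
import numpy as np

def track_template(frame, template, search_top_left, search_size):
    """Locate `template` inside a square search window of `frame` by
    sum-of-squared-differences matching; returns the (row, col) of the
    best match in frame coordinates."""
    r0, c0 = search_top_left
    h, w = template.shape
    best, best_rc = np.inf, (r0, c0)
    for r in range(r0, r0 + search_size - h + 1):
        for c in range(c0, c0 + search_size - w + 1):
            patch = frame[r:r + h, c:c + w]
            ssd = np.sum((patch - template) ** 2)
            if ssd < best:
                best, best_rc = ssd, (r, c)
    return best_rc
```

Subtracting the reference location from the per-frame match location would then give a displacement time history at that measurement point.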

    A Computational Paradigm on Network-Based Models of Computation

    The maturation of computer science has strengthened the need to consolidate isolated algorithms and techniques into general computational paradigms. The main goal of this dissertation is to provide a unifying framework which captures the essence of a number of problems in seemingly unrelated contexts in database design, pattern recognition, image processing, VLSI design, computer vision, and robot navigation. The main contribution of this work is a computational paradigm comprising the unifying framework, referred to as the Multiple Query problem, along with a generic solution to the Multiple Query problem. To demonstrate the applicability of the paradigm, a number of problems from different areas of computer science are solved by formulating them in this framework. Also, to show practical relevance, two fundamental problems were implemented in the C language using MPI. The code can be ported onto many commercially available parallel computers; in particular, it was tested on an IBM-SP2 and on a network of workstations.

    Open tools for electromagnetic simulation programs

    Purpose: The aim of the paper is to propose three computer tools to create electromagnetic simulation programs: GiD, Kratos and EMANT. Design/methodology/approach: The paper presents a review of numerical methods for solving electromagnetic problems and a presentation of the main features of GiD, Kratos and EMANT. Findings: The paper provides information about three computer tools to create electromagnetic simulation packages: GiD (geometrical modeling, data input, visualisation of results), Kratos (C++ library) and EMANT (finite element software for solving Maxwell's equations). Research limitations/implications: The proposed platforms are in development, and future work should be done to validate the codes for specific problems and to provide extensive manual and tutorial information. Practical implications: The tools can be easily learnt by different user profiles, from end-users interested in simulation programs to developers of simulation packages. Originality/value: This paper offers an integrated vision of open and easily customisable tools for the demands of different user profiles.
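As a toy illustration of the finite-element machinery that packages like Kratos and EMANT build on, the sketch below assembles and solves a 1-D Poisson problem with linear elements. The reduction to 1-D Poisson (rather than the full Maxwell equations) is my simplification, not anything these tools provide:

```python
import numpy as np

# 1-D Poisson problem: -u'' = 1 on (0, 1), u(0) = u(1) = 0,
# discretised with n interior nodes and linear (hat) basis functions
n = 9
h = 1.0 / (n + 1)

# assembled stiffness matrix: (1/h) * tridiag(-1, 2, -1)
K = (np.diag(np.full(n, 2.0))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h

# load vector for the constant source f = 1: each entry is h
F = np.full(n, h)

# nodal values of the FEM solution; the exact solution is u(x) = x(1-x)/2
u = np.linalg.solve(K, F)
```

For this 1-D problem the linear-element Galerkin solution is exact at the nodes, which makes the sketch easy to check against the closed-form solution.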

    Improving skeleton algorithm for helping Caenorhabditis elegans trackers

    [EN] One of the main problems when monitoring Caenorhabditis elegans nematodes (C. elegans) is tracking their poses with automatic computer vision systems. This is challenging given the marked flexibility of their bodies and the variety of poses they adopt individually during their behaviour, which becomes even more complicated when worms aggregate with others while moving. This work proposes a simple solution that combines several computer vision techniques to help determine certain worm poses and to identify each worm during aggregation or in coiled shapes. The new method is based on the distance transformation function to obtain better worm skeletons. Experiments were performed with 205 plates, each with 10, 15, 30, 60 or 100 worms, totalling approximately 100,000 worm poses. A comparison of the proposed method with a classic skeletonisation method found that 2196 problematic poses improved the pose predictions of each worm by between 1% and 22% on average. This study was supported by the Plan Nacional de I+D with Project RTI2018-094312-B-I00 and by European FEDER funds. ADM Nutrition, Biopolis S.L. and Archer Daniels Midland supplied the C. elegans plates. Some strains were provided by the CGC, which is funded by the NIH Office of Research Infrastructure Programs (P40 OD010440). Mrs. Maria-Gabriela Salazar-Secada developed the skeleton annotation application. Mr. Jordi Tortosa-Grau annotated worm skeletons. Layana-Castro, PE.; Puchalt-Rodríguez, JC.; Sánchez Salmerón, AJ. (2020). Improving skeleton algorithm for helping Caenorhabditis elegans trackers. Scientific Reports. 10(1):1-12. https://doi.org/10.1038/s41598-020-79430-8
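The core idea, using the distance transform to extract a better skeleton, can be sketched as follows. This is a generic medial-axis approximation via the ridge of the distance map, not the authors' exact pipeline, and the rectangular "worm" mask is a synthetic stand-in for a segmented worm body:

```python
import numpy as np
from scipy import ndimage

# binary mask standing in for a segmented worm body (a thick bar)
mask = np.zeros((11, 31), dtype=bool)
mask[3:8, 2:29] = True

# Euclidean distance transform: each foreground pixel is assigned its
# distance to the nearest background pixel
dist = ndimage.distance_transform_edt(mask)

# the ridge (local maxima) of the distance map approximates the medial
# axis, i.e. the worm's skeleton running along the middle of the body
ridge = mask & (dist == ndimage.maximum_filter(dist, size=3)) & (dist > 1)
```

The ridge pixels concentrate along the centreline of the shape, which is why distance-based skeletons are more stable than naive thinning when the body is thick or partially overlapping.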

    Smart-Insect Monitoring System Integration and Interaction via AI Cloud Deployment and GPT

    The Insect Detection Server was developed to explore the deployment and integration of an artificial intelligence model for computer vision in the context of insect detection. The model was developed to accurately identify insects in images taken by camera systems installed on farms. The goal is to integrate the model into an easily accessible, cloud-based application that allows farmers to analyze automatically uploaded images containing groups of insects found on their farms. The application returns the bounding boxes and the detected classes of insects whenever an image is captured on-site, enabling farmers to take appropriate actions to address the insects' presence. To extend the capabilities of the application, the server is linked to the GPT-3.5 API. This allows users to ask questions about the bugs detected on their farms, creating an online expert-like feature. Python, C++, and computer vision libraries were used for the detection model, while the OpenAI API was used for GPT-3.5's integration. By combining these technologies, farmers can manage pests on their farms more effectively and efficiently than with current alternatives. The Generative Pre-trained Transformer (GPT) aspect of the project can be leveraged to emulate agricultural experts for users/farmers. The large language model (LLM) can be steered with prompt engineering to generate natural language responses to user queries. This enables farmers to get expert advice and guidance on pest management without having to consult a human expert. The integration of the GPT-3.5 API also allows the application to provide personalized recommendations based on each farm's specific needs and circumstances. This added feature gives farmers a more comprehensive and tailored approach to pest management, further increasing the efficiency and effectiveness of their pest control strategies.
The significance of this research lies in the development of a practical and accessible tool for farmers to manage pests on their farms. Using computer vision and artificial intelligence, farmers can quickly and accurately identify insects, leading to more efficient and effective pest management. This could help farmers reduce the use of pesticides and other forms of pest management, leading to improved crop yields and reduced environmental impact. The potential benefits of this technology extend beyond the agricultural industry, as the techniques used in this research can be applied to a wide range of computer vision and user-facing data analytics applications, such as surveillance, security, and medical imaging.
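The hand-off from detector output to the GPT-3.5 "online expert" can be sketched as below. The detection dictionary format and the prompt wording are hypothetical, not the project's actual schema; the resulting string is what would then be sent through the OpenAI chat-completions API:

```python
from collections import Counter

def detections_to_prompt(detections):
    """Summarise detector output into a natural-language question for an
    LLM.  Each detection is assumed (hypothetically) to be a dict with a
    'label' (insect class) and a 'box' (bounding-box coordinates)."""
    counts = Counter(d["label"] for d in detections)
    summary = ", ".join(f"{n} {label}" for label, n in sorted(counts.items()))
    return (f"The camera detected the following insects: {summary}. "
            "What pest-management actions would you recommend?")
```

Keeping the prompt construction separate from the API call makes the expert-advice feature easy to test, and lets the same summary feed either a chat request or a personalized-recommendation template.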

    Stereoscopic vision system for autonomous navigation of unmanned vehicles

    Artificial stereoscopic vision is a broad field within what is known as computer vision. Technically, it consists of processing two images, acquired by a pair of cameras, of a given three-dimensional (3D) scene. This processing is aimed at reconstructing the 3D scene from the two images, left and right. It is worth noting that the cameras are shifted a certain distance apart, just as our eyes are. The computer's task is to identify, using specialised algorithms, those pixels in the two images that correspond to the same physical entity in the 3D scene. The distance that separates these pixels is known as disparity, and the disparity measurement is used to obtain the physical distance between the object in the scene and the two cameras. Stereo vision (also known as stereopsis) is a field with numerous applications and one in which considerable research resources are currently being invested. One of these applications is obstacle detection by robots. Our project is oriented towards this practical application, although it focuses exclusively on the correspondence between homologous elements in the images, i.e. the computation of disparities. To this end, we have implemented several stereo vision techniques and algorithms using the C# programming language.
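The disparity-to-distance relation described above is the standard rectified pinhole-stereo formula Z = f·B/d. A minimal sketch follows (the focal length in pixels and the baseline in metres are assumed calibration inputs, not values from this project):

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Depth from a rectified stereo pair via Z = f * B / d.
    Pixels with zero disparity (no match found) are mapped to infinity."""
    d = np.asarray(disparity_px, dtype=float)
    depth = np.full(d.shape, np.inf)
    valid = d > 0
    depth[valid] = focal_px * baseline_m / d[valid]
    return depth
```

Note the inverse relationship: nearby objects produce large disparities, and depth resolution degrades quadratically with distance, which is why the baseline choice matters for obstacle detection.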

    Integrating mobile robotics and vision with undergraduate computer science

    This paper describes the integration of robotics education into an undergraduate Computer Science curriculum. The proposed approach delivers mobile robotics as well as covering the closely related field of computer vision, and is directly linked to the research conducted at the authors' institution. The paper describes the most relevant details of the module content and assessment strategy, paying particular attention to the practical sessions using Rovio mobile robots. The specific choices made with regard to the mobile platform, software libraries and lab environment are discussed. The paper also presents a detailed qualitative and quantitative analysis of student results, including the correlation between student engagement and performance, and discusses the outcomes of this experience.

    Objective-Based Hierarchical Clustering of Deep Embedding Vectors

    We initiate a comprehensive experimental study of objective-based hierarchical clustering methods on massive datasets consisting of deep embedding vectors from computer vision and NLP applications. This includes a large variety of image embedding (ImageNet, ImageNetV2, NaBirds), word embedding (Twitter, Wikipedia), and sentence embedding (SST-2) vectors from several popular recent models (e.g. ResNet, ResNext, Inception V3, SBERT). Our study includes datasets with up to 4.5 million entries and embedding dimensions up to 2048. In order to address the challenge of scaling hierarchical clustering to such large datasets, we propose a new practical hierarchical clustering algorithm, B++&C. It gives a 5%/20% improvement on average for the popular Moseley-Wang (MW) / Cohen-Addad et al. (CKMM) objectives (normalized) compared to a wide range of classic methods and recent heuristics. We also introduce a theoretical algorithm, B2SAT&C, which achieves a 0.74-approximation for the CKMM objective in polynomial time. This is the first substantial improvement over the trivial 2/3-approximation achieved by a random binary tree. Prior to this work, the best poly-time approximation, of approximately 2/3 + 0.0004, was due to Charikar et al. (SODA'19).
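For a sense of what the classic baselines in such a study look like, off-the-shelf agglomerative clustering of embedding vectors can be run as below (average linkage here; B++&C itself is the paper's contribution and is not reproduced, and the toy "embeddings" are synthetic):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# two well-separated Gaussian blobs as stand-in embedding vectors
X = np.vstack([rng.normal(0.0, 0.1, size=(10, 8)),
               rng.normal(5.0, 0.1, size=(10, 8))])

# average-linkage agglomerative clustering builds the full binary tree;
# cutting it at two clusters should recover the two blobs
Z = linkage(X, method="average")
labels = fcluster(Z, t=2, criterion="maxclust")
```

Classic linkage methods like this are quadratic (or worse) in the number of points, which is exactly the scaling bottleneck that motivates purpose-built algorithms on multi-million-entry embedding datasets.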