
    Automatic Renal Segmentation in DCE-MRI using Convolutional Neural Networks

    Kidney function evaluation using dynamic contrast-enhanced MRI (DCE-MRI) could help in the diagnosis and treatment of kidney disease in children. Automatic segmentation of the renal parenchyma is an important step in this process. In this paper, we propose a time- and memory-efficient, fully automated segmentation method that achieves high segmentation accuracy with a running time on the order of seconds, for both normal kidneys and kidneys with hydronephrosis. The method is based on a cascaded application of two 3D convolutional neural networks that exploit spatial and temporal information simultaneously to learn the tasks of kidney localization and segmentation, respectively. Segmentation performance is evaluated on both normal and abnormal kidneys with varying levels of hydronephrosis; we achieved mean Dice coefficients of 91.4 and 83.6 for normal and abnormal kidneys of pediatric patients, respectively.
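    A minimal sketch of the cascaded two-stage inference described above, assuming PyTorch and treating the dynamic time points as input channels. All names here (Small3DCNN, cascaded_inference, the crop size) are hypothetical stand-ins, not the authors' implementation.

```python
# Hypothetical two-stage cascade: stage 1 localizes the kidney in the full
# volume, stage 2 segments the cropped region (illustration only, not the
# authors' networks). Dynamic time points are treated as input channels.
import torch
import torch.nn as nn

class Small3DCNN(nn.Module):
    """Tiny 3D CNN standing in for both stages."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, out_ch, 1),
        )

    def forward(self, x):
        return self.body(x)

def cascaded_inference(volume, localizer, segmenter, crop=(32, 64, 64)):
    """volume: (1, T, D, H, W) DCE-MRI series with T time points."""
    # Stage 1: coarse kidney probability over the whole volume.
    heat = torch.sigmoid(localizer(volume))           # (1, 1, D, H, W)
    D, H, W = heat.shape[2:]
    flat = int(torch.argmax(heat))                    # batch/channel dims are 1
    d, h, w = flat // (H * W), (flat % (H * W)) // W, flat % W
    # Crop a fixed-size box around the most confident voxel.
    cd, ch, cw = crop
    d0 = max(0, min(d - cd // 2, D - cd))
    h0 = max(0, min(h - ch // 2, H - ch))
    w0 = max(0, min(w - cw // 2, W - cw))
    roi = volume[:, :, d0:d0 + cd, h0:h0 + ch, w0:w0 + cw]
    # Stage 2: fine segmentation inside the cropped region only.
    mask = torch.sigmoid(segmenter(roi)) > 0.5
    return mask, (d0, h0, w0)

T = 8                                    # dynamic time points
localizer, segmenter = Small3DCNN(T, 1), Small3DCNN(T, 1)
volume = torch.randn(1, T, 64, 128, 128)
mask, origin = cascaded_inference(volume, localizer, segmenter)
print(mask.shape, origin)                # (1, 1, 32, 64, 64) and crop origin
```

    Restricting stage 2 to the cropped region is what keeps runtime and memory low: the expensive fine-scale network never sees the full volume.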

    A Training Framework of Robotic Operation and Image Analysis for Decision-Making in Bridge Inspection and Preservation

    This project aims to create a framework for training engineers and policy makers in robotic operation and image analysis for the inspection and preservation of transportation infrastructure. Specifically, it develops a method for collecting camera-based bridge inspection data and algorithms for processing those data and recognizing patterns, and it creates tools that help users visually analyze the processed image data and recognized patterns for inspection and preservation decision-making. The project first developed a Siamese Neural Network to support bridge engineers in analyzing big video data. The network was initially trained by one-shot learning and is fine-tuned iteratively with a human in the loop: a bridge engineer defines a region of interest, and the algorithm then retrieves all related regions in the video, letting the engineer inspect the bridge without exhaustively checking every frame. The network was evaluated on three bridge inspection videos with promising performance. The project then developed an assistive intelligence system that helps inspectors efficiently and accurately detect and segment multiclass bridge elements from inspection videos. A Mask Region-based Convolutional Neural Network was transferred to the studied problem using a small initial training dataset labeled by the inspector; temporal coherence analysis was then used to recover false-negative detections of the transferred network; and finally, self-training guided by experienced inspectors was used to iteratively refine the network. Results from a case study demonstrate that the proposed method needs only a small amount of time and guidance from experienced inspectors to build an assistive intelligence system with excellent performance.
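    The one-shot retrieval idea behind the Siamese network can be sketched as follows, assuming PyTorch. The shared embedding branch, the cosine-similarity threshold, and all names are illustrative assumptions, not the project's actual architecture.

```python
# Illustrative Siamese-style region retrieval (not the project's network).
# A shared CNN embeds the inspector's region of interest and candidate
# patches from video frames; high cosine similarity flags patches worth
# the inspector's attention.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Embedder(nn.Module):
    """Shared branch of the Siamese network (illustration only)."""
    def __init__(self, dim=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, dim)

    def forward(self, x):
        z = self.fc(self.features(x).flatten(1))
        return F.normalize(z, dim=1)           # unit-length embeddings

def retrieve(query_patch, candidate_patches, embedder, threshold=0.8):
    """Return indices of candidates similar to the query region."""
    with torch.no_grad():
        q = embedder(query_patch)              # (1, dim)
        c = embedder(candidate_patches)        # (N, dim)
        sims = (c @ q.T).squeeze(1)            # cosine similarities
    return (sims > threshold).nonzero(as_tuple=True)[0], sims

embedder = Embedder()
query = torch.randn(1, 3, 64, 64)        # inspector-marked region
cands = torch.randn(32, 3, 64, 64)       # patches sampled from video frames
hits, sims = retrieve(query, cands, embedder)
print(hits, sims.max().item())
```

    Because both inputs pass through the same weights, one annotated example is enough to rank every candidate patch, which is what makes the one-shot, human-in-the-loop workflow practical.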

    Intelligent Debris Mass Estimation Model for Autonomous Underwater Vehicle

    Marine debris poses a significant threat to marine wildlife, often leading to entanglement and starvation and ultimately to death. Removing debris from the ocean is therefore crucial to restoring the natural balance and allowing marine life to thrive. Instance segmentation is an advanced form of object detection that identifies objects and precisely locates and separates them, making it an essential tool for autonomous underwater vehicles (AUVs); AUVs use image segmentation to analyze images captured by their cameras and navigate underwater environments. In this paper, we use instance segmentation to calculate the area of individual objects within an image. We use YOLOv7 in Roboflow to generate a set of bounding boxes for each object in the image, with a class label and a confidence score for every detection. A segmentation mask is then created for each object by applying a binary mask to the object's bounding box; the masks are generated by applying a binary threshold to the output of a convolutional neural network trained to segment objects from the background. The mask for each object is then refined with post-processing techniques such as morphological operations and contour detection to improve its accuracy and quality. Finally, the total area is estimated by computing the area of each segmented instance separately and summing over all instances. The calculation uses standard formulas based on the shape of the object, such as rectangles and circles; for complex objects, the Monte Carlo method is used instead, which provides higher accuracy than the shape-based formulas, especially when a large number of samples is used.
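    The two area estimates described above can be sketched in NumPy: exact pixel counting on a binary instance mask, and a Monte Carlo estimate that samples uniformly inside the instance's bounding box. This is an illustrative sketch, not the paper's code; the sample count and the toy ring-shaped mask are assumptions.

```python
# Two ways to estimate the area of one segmented instance (illustration).
# Exact: count mask pixels. Monte Carlo: sample points inside the bounding
# box and scale the hit rate by the box area.
import numpy as np

def exact_area(mask):
    """Pixel-count area of a binary mask (1 = object)."""
    return int(mask.sum())

def monte_carlo_area(mask, n_samples=100_000, rng=None):
    """Estimate mask area by uniform sampling inside its bounding box."""
    rng = rng or np.random.default_rng(0)
    ys, xs = np.nonzero(mask)
    y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
    box_area = (y1 - y0) * (x1 - x0)
    sy = rng.integers(y0, y1, n_samples)
    sx = rng.integers(x0, x1, n_samples)
    hit_rate = mask[sy, sx].mean()          # fraction of samples on the object
    return hit_rate * box_area

# A complex (ring-shaped) instance mask as a toy example.
yy, xx = np.mgrid[:200, :200]
r = np.hypot(yy - 100, xx - 100)
mask = ((r > 40) & (r < 70)).astype(np.uint8)

print("exact:", exact_area(mask))               # ~pi * (70^2 - 40^2)
print("monte carlo:", monte_carlo_area(mask))   # converges with more samples
```

    Summing these per-instance estimates over all detections in an image gives the total debris area described in the abstract.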

    Predicting and identifying antimicrobial resistance in the marine environment using AI and machine learning algorithms.

    Antimicrobial resistance (AMR) is an increasingly critical public health issue that demands precise and efficient methodologies capable of delivering prompt results. Accurate and early detection of AMR is crucial, as its absence can pose life-threatening risks to diverse ecosystems, including the marine environment. The spread of AMR among microorganisms in the marine environment can have significant consequences, potentially impacting human life directly. This study focuses on evaluating the diameters of disc diffusion zones and employs artificial intelligence and machine learning techniques, such as image segmentation, data augmentation, and deep learning methods, to enhance accuracy and predict microbial resistance.
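    One plausible way the disc diffusion zone diameters could be measured from a segmented plate image is sketched below, assuming OpenCV. The binary zone mask, the mm-per-pixel scale, and the toy circular zones are assumptions for illustration; the study's actual pipeline is not specified at this level of detail.

```python
# Measure inhibition-zone diameters from a binary segmentation of an agar
# plate (illustrative sketch). Each connected zone gets a minimum enclosing
# circle, whose diameter is converted to millimetres with a known scale.
import cv2
import numpy as np

def zone_diameters_mm(zone_mask, mm_per_px):
    """Fit a minimum enclosing circle to each zone; return diameters in mm."""
    contours, _ = cv2.findContours(
        zone_mask.astype(np.uint8), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE
    )
    diameters = []
    for c in contours:
        (_, _), radius_px = cv2.minEnclosingCircle(c)
        diameters.append(2.0 * radius_px * mm_per_px)
    return sorted(diameters, reverse=True)

# Toy mask with two circular zones (stand-ins for a real segmentation).
mask = np.zeros((400, 400), np.uint8)
cv2.circle(mask, (100, 100), 45, 1, -1)   # ~90 px diameter zone
cv2.circle(mask, (280, 260), 30, 1, -1)   # ~60 px diameter zone

print(zone_diameters_mm(mask, mm_per_px=0.25))  # e.g. [~22.5, ~15.0] mm
```

    Measured diameters would then be compared against standard breakpoint tables to classify isolates as susceptible or resistant, which is the prediction target such a pipeline feeds.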

    Human-Robot Collaboration for Effective Bridge Inspection in the Artificial Intelligence Era

    Advancements in sensor, Artificial Intelligence (AI), and robotic technologies have laid a foundation for the transformation of traditional engineering systems into complex adaptive systems. This paradigm shift will bring exciting changes to civil infrastructure systems and to their builders, operators, and managers. Funded by the INSPIRE University Transportation Center (UTC), Dr. Qin’s group investigated the holism of an AI-robot-inspector system for bridge inspection. Dr. Qin will discuss the need for close collaboration among the constituent components of the AI-robot-inspector system. In drone-based bridge inspection, the mobile robotic inspection platform rapidly collects big inspection video data that must be processed prior to element-level inspection. She will illustrate how human intelligence and artificial intelligence can collaborate to create an AI model both efficiently and effectively. Obtaining a large amount of expert-annotated data for model training is undesirable, if not unrealistic, in bridge inspection. This INSPIRE project addressed the annotation challenge by developing a semi-supervised self-training (S3T) algorithm that uses a small amount of time and guidance from inspectors to help the model achieve excellent performance. The project also evaluated the improvement in job efficacy produced by the developed AI model. The presentation will conclude by introducing ongoing work toward the desired adaptability of AI models to new or revised bridge inspection tasks, as the National Bridge Inventory includes over 600,000 bridges of various material types, shapes, and ages.
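    For illustration only, a generic self-training loop with an expert-in-the-loop fallback is sketched below in Python with scikit-learn. It conveys the flavor of combining confident pseudo-labels with small amounts of inspector guidance, but it is not the S3T algorithm itself; the confidence threshold and the review step are assumptions.

```python
# Generic self-training with expert fallback (NOT the project's S3T).
# Confident predictions become pseudo-labels; when nothing is confident,
# a small batch of hard samples is sent to the expert instead.
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_lab, y_lab, X_pool, ask_expert, conf=0.95, rounds=5):
    model = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        if len(X_pool) == 0:
            break
        model.fit(X_lab, y_lab)
        proba = model.predict_proba(X_pool)
        top = proba.max(axis=1)
        sure = top >= conf
        if sure.any():
            # Confident predictions become pseudo-labels.
            X_lab = np.vstack([X_lab, X_pool[sure]])
            y_lab = np.concatenate([y_lab, proba[sure].argmax(axis=1)])
        else:
            # Spend a little expert time on the hardest samples.
            hard = np.argsort(top)[:10]
            X_lab = np.vstack([X_lab, X_pool[hard]])
            y_lab = np.concatenate([y_lab, ask_expert(X_pool[hard])])
            sure = np.zeros(len(X_pool), bool)
            sure[hard] = True
        X_pool = X_pool[~sure]
    model.fit(X_lab, y_lab)        # final fit on all gathered labels
    return model

# Toy usage: two Gaussian classes, 10 labelled points per class to start.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2)) + np.repeat([[0, 0], [3, 3]], 250, axis=0)
y = np.repeat([0, 1], 250)
seed = np.r_[0:10, 250:260]
pool = np.delete(np.arange(500), seed)
oracle = lambda xs: (xs.sum(axis=1) > 3).astype(int)   # stand-in "expert"
model = self_train(X[seed], y[seed], X[pool], oracle)
print("accuracy:", model.score(X, y))
```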

    Interactive Segmentation for Diverse Gesture Types Without Context

    Interactive segmentation entails a human marking an image to guide how a model creates or edits a segmentation. Our work addresses limitations of existing methods: they either support only one gesture type for marking an image (e.g., clicks or scribbles) or require knowing which gesture type is being employed, and they require specifying whether marked regions should be included in or excluded from the final segmentation. We instead propose a simplified interactive segmentation task in which a user need only mark the image, with input of any gesture type and no need to specify which. We support this new task by introducing the first interactive segmentation dataset with multiple gesture types, as well as a new evaluation metric capable of holistically evaluating interactive segmentation algorithms. We then analyze numerous interactive segmentation algorithms, including ones adapted for our novel task. While we observe promising performance overall, we also highlight areas for future improvement. To facilitate further extensions of this work, we publicly share our new dataset at https://github.com/joshmyersdean/dig.
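    One plausible way to realize gesture-agnostic input, sketched here in NumPy as an assumption rather than the paper's actual interface: every gesture, whatever its type, is rasterized into a single binary guidance map that is stacked onto the image, so the model never receives the gesture type.

```python
# Rasterize any gesture into one binary guidance map and stack it onto the
# RGB image, so clicks, scribbles, and boxes all produce identically shaped
# model inputs (illustrative assumption, not the paper's interface).
import numpy as np

def rasterize_gesture(shape, gesture):
    """gesture: ('click', (y, x)) | ('scribble', [(y, x), ...]) |
    ('box', (y0, x0, y1, x1)) -> binary map of the given shape."""
    g = np.zeros(shape, np.float32)
    kind, data = gesture
    if kind == "click":
        y, x = data
        g[y, x] = 1.0
    elif kind == "scribble":
        for y, x in data:
            g[y, x] = 1.0
    elif kind == "box":
        y0, x0, y1, x1 = data
        g[y0:y1, x0:x1] = 1.0
    return g

def model_input(image, gesture):
    """Stack the guidance map as a 4th channel: (H, W, 3) -> (H, W, 4)."""
    g = rasterize_gesture(image.shape[:2], gesture)
    return np.dstack([image, g])

img = np.random.rand(64, 64, 3).astype(np.float32)
for gesture in [("click", (10, 12)),
                ("scribble", [(5, 5), (5, 6), (6, 7)]),
                ("box", (20, 20, 40, 40))]:
    x = model_input(img, gesture)
    print(gesture[0], x.shape)   # the same (64, 64, 4) input either way
```

    With a shared input encoding like this, a single model can be trained and evaluated across all gesture types, which is the setting the dataset and metric above are designed to measure.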