Damage volumetric assessment and digital twin synchronization based on LiDAR point clouds
Point clouds are widely used for structure inspection and can provide spatial information about damage. However, how to update a digital twin (DT) with local damage based on point clouds has not been sufficiently studied. This research presents an efficient framework for assessing local damage on a planar surface and synchronizing it with a DT using point clouds. The pipeline starts with damage detection via DeepLabV3+ on pseudo-grayscale images derived from point depth, which avoids the drawbacks of image and point cloud fusion. The target point cloud is separated according to the detected damage. It is then converted into a 3D binary matrix through voxelization and binarization, which is highly lightweight and can be losslessly compressed for DT synchronization. The framework is validated via two case studies, demonstrating that the proposed voxel-based method can be applied directly to real-world damage with non-convex geometry, unlike convex-hull fitting, and that finite-element (FE) and BIM models can be updated automatically through the framework.
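The central representation here, a damage point cloud voxelized into a 3D binary matrix and then losslessly compressed for DT synchronization, can be sketched in a few lines. This is a minimal illustration with synthetic points; the `voxel_size` and the random patch are assumptions, not the paper's settings.

```python
import numpy as np
import zlib

def voxelize_binary(points, voxel_size):
    """Convert an N x 3 point cloud into a 3D binary occupancy matrix."""
    mins = points.min(axis=0)
    idx = np.floor((points - mins) / voxel_size).astype(int)
    shape = idx.max(axis=0) + 1
    grid = np.zeros(shape, dtype=np.uint8)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1   # mark occupied voxels
    return grid

# Hypothetical damaged region on a planar surface (20 cm patch, 1 cm voxels)
pts = np.random.rand(1000, 3) * 0.2
grid = voxelize_binary(pts, voxel_size=0.01)
packed = np.packbits(grid.ravel())        # 8 voxels per byte
payload = zlib.compress(packed.tobytes())  # lossless payload for DT sync
```

Because the matrix is binary, bit-packing plus a general-purpose lossless codec keeps the payload small while remaining exactly recoverable on the DT side.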
Deep ensemble model-based moving object detection and classification using SAR images
In recent decades, image processing and computer vision models have played a vital role in moving object detection in synthetic aperture radar (SAR) images. Capturing moving objects in SAR images is a difficult task. In this study, a new automated model for detecting moving objects in SAR images is proposed. The proposed model has four main steps: preprocessing, segmentation, feature extraction, and classification. First, the input SAR image is pre-processed using histogram equalization. Then, a weighted Otsu-based segmentation algorithm is applied to segment the object regions from the pre-processed images. With the weighted Otsu, the segmented grayscale images are not only clear but also retain the detailed features of grayscale images. Next, feature extraction is carried out using the gray-level co-occurrence matrix (GLCM), median binary patterns (MBPs), and the additive harmonic mean estimated local Gabor binary pattern (AHME-LGBP). The final step is classification using deep ensemble models: the objects are classified with an ensemble deep learning technique that combines bidirectional long short-term memory (Bi-LSTM), recurrent neural network (RNN), and improved deep belief network (IDBN) models, trained on the features extracted previously. Combining the models increases accuracy significantly. Furthermore, ensemble modeling reduces variance and method bias, which decreases the chance of overfitting. Compared to a single contributing model, ensemble models perform better and make better predictions; an ensemble also lessens the spread or dispersion of model performance and prediction accuracy. Finally, the performance of the proposed model is compared with conventional models across different measures. In the mean-case scenario, the proposed ensemble model has a minimum error value of 0.032, better than the other models; in the median- and best-case scenarios, it has lower error values of 0.029 and 0.015, respectively.
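The segmentation step is built on Otsu thresholding. A minimal sketch of plain (unweighted) Otsu on a synthetic bimodal grayscale image illustrates the idea of maximizing between-class variance; the paper's weighted variant is not reproduced here.

```python
import numpy as np

def otsu_threshold(img):
    """Plain Otsu: pick the gray level maximizing between-class variance."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                    # class-0 probability
    mu = np.cumsum(prob * np.arange(256))      # cumulative mean
    mu_t = mu[-1]                              # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.nanargmax(sigma_b))

# Synthetic bimodal image: dark background (40) and bright objects (200)
img = np.repeat(np.array([40, 200], dtype=np.uint8), 500)
t = otsu_threshold(img)
mask = img > t   # segmented object pixels
```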
Assessing generalisability of deep learning-based polyp detection and segmentation methods through a computer vision challenge
Polyps are well-known cancer precursors identified by colonoscopy. However, variability in their size, appearance, and location makes polyp detection challenging. Moreover, colonoscopy surveillance and polyp removal are highly operator-dependent procedures performed in a highly complex organ topology, and there is a high miss rate and incomplete removal of colonic polyps. To assist clinical procedures and reduce miss rates, automated machine learning methods for detecting and segmenting polyps have been developed in recent years. However, the major drawback of most of these methods is their limited ability to generalise to out-of-sample, unseen datasets from different centres, populations, modalities, and acquisition systems. To test this hypothesis rigorously, we, together with expert gastroenterologists, curated a multi-centre, multi-population dataset acquired from six different colonoscopy systems and challenged computational expert teams to develop robust automated detection and segmentation methods in a crowd-sourced endoscopic computer vision challenge. This work puts forward rigorous generalisability tests and assesses the usability of the devised deep learning methods in dynamic, real clinical colonoscopy procedures. We analyse the results of the four top-performing teams for the detection task and the five top-performing teams for the segmentation task. Our analyses demonstrate that the top-ranking teams concentrated mainly on accuracy over the real-time performance required for clinical applicability. We further dissect the devised methods and provide an experiment-based hypothesis that reveals the need for improved generalisability to tackle the diversity present in multi-centre datasets and routine clinical procedures.
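Segmentation generalisability in such challenges is typically scored with overlap metrics. A minimal sketch of the standard Dice and IoU computations on binary masks (synthetic masks, not challenge data):

```python
import numpy as np

def dice_iou(pred, gt):
    """Dice coefficient and IoU for binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    total = pred.sum() + gt.sum()
    dice = 2.0 * inter / total if total else 1.0
    iou = inter / union if union else 1.0
    return dice, iou

gt = np.zeros((8, 8)); gt[2:6, 2:6] = 1      # 16-pixel ground-truth "polyp"
pred = np.zeros((8, 8)); pred[3:7, 3:7] = 1  # prediction shifted by one pixel
d, i = dice_iou(pred, gt)                    # overlap = 9 pixels
```

Dice weights the intersection against the two mask sizes, while IoU divides by the union, so IoU penalizes the same misalignment more heavily.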
Vision-enhanced Peg-in-Hole for automotive body parts using semantic image segmentation and object detection
Artificial Intelligence (AI) is an enabling technology in the context of Industry 4.0. In particular, the automotive sector is among those that can benefit most from the use of AI in conjunction with advanced vision techniques. The scope of this work is to integrate deep learning algorithms into an industrial scenario involving a robotic Peg-in-Hole task. More specifically, we focus on a scenario where a human operator manually positions a carbon fiber automotive part in the workspace of a 7-Degrees-of-Freedom (DOF) manipulator. To cope with the uncertainty in the relative position between the robot and the workpiece, we adopt a three-stage strategy. The first stage concerns the three-dimensional (3D) reconstruction of the workpiece using a registration algorithm based on the Iterative Closest Point (ICP) paradigm. This procedure is integrated with a semantic image segmentation neural network, which is in charge of removing the background of the scene to improve the registration. Adopting this network reduces the registration time by about 28.8%. In the second stage, the reconstructed surface is compared with a Computer Aided Design (CAD) model of the workpiece to locate the holes and their axes. In this stage, the adoption of a Convolutional Neural Network (CNN) improves the holes' position estimation by about 57.3%. The third stage concerns the insertion of the peg, implementing a search phase to handle the remaining estimation errors. Here too, the use of the CNN reduces the search phase duration, by about 71.3%. Quantitative experiments, including a comparison with a previous approach lacking both the segmentation network and the CNN, have been conducted in a realistic scenario. The results show the effectiveness of the proposed approach and how the integration of AI techniques improves the success rate from 84.5% to 99.0%.
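The ICP paradigm used in the first stage alternates correspondence search with a closed-form rigid alignment. The alignment step can be sketched as the classic SVD-based (Kabsch) solution; this is a generic illustration on synthetic correspondences, not the authors' implementation.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst
    (the closed-form step inside each ICP iteration)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

# Synthetic check: a 30-degree rotation about Z plus a small translation
src = np.random.rand(50, 3)
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0,              0,             1]])
dst = src @ R_true.T + np.array([0.1, -0.2, 0.05])
R, t = best_rigid_transform(src, dst)
```

Full ICP wraps this in a loop that re-estimates correspondences (nearest neighbours) after each alignment, which is why removing background clutter speeds up convergence.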
A novel segmentation approach for crop modeling using a plenoptic light-field camera: going from 2D to 3D
Crop phenotyping is a desirable task in crop characterization since it allows the farmer to make early decisions and therefore be more productive. This research is motivated by the generation of tools for rice crop phenotyping within the OMICAS research ecosystem framework. It proposes implementing image processing technologies and artificial intelligence techniques through a multisensory approach with multispectral information. Three main stages are covered: (i) a segmentation approach that identifies the biological material associated with plants, whose main contribution is the GFKuts segmentation approach; (ii) a strategy for sensor fusion between three different cameras (a 3D camera, an infrared multispectral camera, and a thermal multispectral camera), developed through a complex object detection approach; and (iii) the characterization of a 4D model that generates topological relationships from the information in the point cloud, whose main contribution is the improvement of the point cloud captured by the 3D sensor; in this sense, this stage improves the acquisition of any 3D sensor.
This research presents a development that receives information from multiple sensors, especially infrared 2D sensors, and generates a single 4D model in geometric space [X, Y, Z]. This model integrates the color information of 5 channels and topological information relating the points in space. Overall, the research allows the integration of 3D information from any sensor or technology and the multispectral channels from any multispectral camera, to generate direct non-invasive measurements on the plant.
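The fused "4D model" described above, points carrying both geometry and multispectral channels plus a topology relating the points in space, can be sketched with a brute-force k-nearest-neighbour graph. All names and the 5 synthetic channels here are illustrative assumptions.

```python
import numpy as np

def knn_topology(xyz, k=4):
    """Indices of each point's k nearest neighbours (brute force)."""
    d2 = ((xyz[:, None, :] - xyz[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)              # exclude self-matches
    return np.argsort(d2, axis=1)[:, :k]

n = 100
xyz = np.random.rand(n, 3)                    # [X, Y, Z] from the 3D sensor
channels = np.random.rand(n, 5)               # e.g. RGB + NIR + thermal
model = np.hstack([xyz, channels])            # one 8-value record per point
edges = knn_topology(xyz, k=4)                # topological relationships
```

A real pipeline would use a spatial index (k-d tree) instead of the O(n²) distance matrix, but the data layout is the same: geometry, per-point spectra, and a neighbour graph.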
Assessing the Alterations In Berea Sandstone Mechanical Properties Induced by the Permeation of Bentonite Gels of Varying NaCl Concentrations: A Wellbore Strengthening Perspective
Lost circulation, which is the loss of drilling fluid into a formation during drilling, is commonly induced by wellbore pressure exceeding the fracture initiation pressure (FIP) and fracture propagation pressure (FPP) of the formation. This expensive occurrence can be mitigated by increasing the FIP and FPP through altering the drilling fluid composition, which has a large effect on the mechanical behaviour of rock and subsequently on the FIP and FPP. Bentonite is used both as a base fluid and as a lost circulation material. It is therefore imperative to understand the effect that bentonite, at different swelling capacities, has on the mechanical properties of rock, to optimise its use in drilling operations and better attenuate lost circulation. In this study, Berea sandstone cores of four permeability ranges were permeated with bentonite gels of four NaCl concentrations, using a novel apparatus. Two batches of gel-permeated cores were prepared: one set was allowed to dry, to observe an ageing effect, and the other was kept at 4°C to preserve moisture. The cores were then indented to obtain stress-strain curves. The indentation testing showed a statistically significant increase in peak strength and Young's modulus in the wet gel-permeated cores relative to the non-permeated samples, whereas the cores with the dried gel displayed a decrease in these properties compared to the control. The dry gel-permeated cores, however, exhibited a significantly longer displacement distance compared to the control, implying these cores take longer to fully fracture apart. In addition to indentation, viscometry experiments were carried out to assess the rheological properties of the gels; this, alongside SEM and CT imaging of the samples, was done to understand potential mechanisms behind the alterations in mechanical properties caused by gel permeation.
The research carried out shows an initial set of promising results for the use of bentonite gels in wellbore-strengthening applications during drilling. The work also highlights the need for future research into interactions between non-Newtonian fluids and solids, and the potential these interactions offer for altering the physicochemical properties of materials.
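Reducing an indentation stress-strain curve to the two reported quantities, peak strength and Young's modulus, can be sketched as below. The curve is a synthetic bilinear stand-in and the linear-region fraction is an assumed choice, not the study's procedure.

```python
import numpy as np

def curve_metrics(strain, stress, linear_frac=0.3):
    """Peak strength = maximum stress; Young's modulus = slope of a
    linear fit over the initial (assumed elastic) part of the curve."""
    peak = stress.max()
    cut = strain <= linear_frac * strain.max()
    modulus = np.polyfit(strain[cut], stress[cut], 1)[0]
    return peak, modulus

# Synthetic curve: elastic loading (E = 5 GPa) then post-peak softening
strain = np.linspace(0.0, 0.02, 200)
stress = np.where(strain < 0.01,
                  5e9 * strain,
                  5e7 - 1e9 * (strain - 0.01))
peak, E = curve_metrics(strain, stress)
```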
DECISION-BASED FUSION OF PANSHARPENED VHR SATELLITE IMAGES USING TWO-LEVEL ROLLING SELF-GUIDANCE FILTERING AND EDGE INFORMATION
Pan-sharpening (PS) fuses low-resolution multispectral (LR MS) images with high-resolution panchromatic (HR PAN) bands to produce HR MS data. Current PS methods either better maintain the spectral information of the MS images or better transfer the PAN spatial details to the MS bands. In this study, we propose a decision-based fusion method that integrates two basic pan-sharpened very-high-resolution (VHR) satellite images, taking advantage of both images simultaneously. It uses two-level rolling self-guidance filtering (RSGF) and Canny edge detection. The method is tested on WorldView (WV)-2 and WV-4 VHR satellite images of the San Francisco and New York areas, using four PS algorithms. Results indicate that the proposed method increased the overall spectral-spatial quality of the base pan-sharpened images by 7.2% and 9.8% for the San Francisco and New York areas, respectively. Our method therefore effectively addresses decision-level fusion of different base pan-sharpened images.
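Decision-level fusion of two base pan-sharpened images can be illustrated with a much-simplified per-pixel rule, keep whichever image has the stronger local gradient, standing in for the paper's RSGF-and-Canny scheme. This is a sketch of the decision-fusion idea, not the proposed method.

```python
import numpy as np

def grad_mag(img):
    """Local gradient magnitude as a simple edge-strength proxy."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def decision_fuse(a, b):
    """Per-pixel decision: take the base image with the stronger edges,
    assuming stronger edges indicate better-preserved spatial detail."""
    mask = grad_mag(a) >= grad_mag(b)
    return np.where(mask, a, b)

a = np.tile(np.linspace(0, 255, 64), (64, 1))   # strong horizontal ramp
b = np.full((64, 64), 128.0)                    # flat, detail-free image
fused = decision_fuse(a, b)                     # the ramp wins everywhere
```

The paper's two-level RSGF would first separate structure from texture so the decision operates on smoothed edge evidence rather than raw gradients.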
A bright spot detection and analysis method for infrared photovoltaic panels based on image processing
The energy crisis and environmental problems have attracted global attention, bringing photovoltaic (PV) power generation technology to the fore. Applying unmanned aerial vehicle (UAV) infrared inspection technology to PV power generation overcomes the large scale and high risk of manual inspection, improving work efficiency and delivering high economic benefits. Based on the U-Net network and the HSV color space, this paper proposes a method for PV infrared image segmentation and hot-spot location detection, used to detect and analyze shading of PV panels. First, the main PV modules are automatically segmented from the varying infrared image backgrounds using U-Net. To quickly locate defects, the mask image is multiplied by the original image and then converted to HSV. A discriminant for bright-spot features is introduced, the discrimination mechanism is summarized from experiments, and the formation causes are analyzed. The experimental results show that the method is unaffected by the different backgrounds of infrared images, provides data for power station maintenance, and improves detection accuracy. The accuracy rate in analyzing the causes of defects is 92.5%.
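The HSV-based bright-spot step can be sketched as thresholding value and saturation after an RGB-to-HSV conversion. The thresholds and the synthetic frame below are assumptions for illustration, not the paper's discriminant.

```python
import numpy as np

def rgb_to_v_s(rgb):
    """Value and saturation channels of the HSV representation."""
    rgb = rgb.astype(float) / 255.0
    v = rgb.max(axis=-1)
    mn = rgb.min(axis=-1)
    s = np.where(v > 0, (v - mn) / np.where(v > 0, v, 1.0), 0.0)
    return v, s

def bright_spots(rgb, v_min=0.85, s_max=0.25):
    """Flag pixels that are bright (high V) and weakly saturated (low S),
    typical of hot spots in pseudo-colored infrared frames."""
    v, s = rgb_to_v_s(rgb)
    return (v >= v_min) & (s <= s_max)

img = np.full((32, 32, 3), 90, dtype=np.uint8)   # dull panel background
img[10:14, 10:14] = [250, 245, 240]              # near-white 4x4 hot spot
mask = bright_spots(img)
```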