9 research outputs found

    A Sensor for Urban Driving Assistance Systems Based on Dense Stereovision

    Advanced driving assistance systems (ADAS) form a complex multidisciplinary research field aimed at improving traffic efficiency and safety. A realistic analysis of the requirements and possibilities of the traffic environment leads to the establishment of several goals for traffic assistance, to be implemented in the near future (ADASE, INVENT

    Integrated system SPR array sensors based on side glow MMA fibers

    An integrated system consisting of a customizable array of plasmonic sensors (N = 2) is presented. Side-emitting fibers with a 1 mm diameter MMA polymer core are used to implement the plasmonic sensors. Time-domain monitoring of the fiber-based plasmonic sensors with a smartphone spectroscopy application is a portable and low-energy-consuming solution for environmental applications

    High accuracy stereo vision system for far distance obstacle detection

    This paper presents a high-accuracy stereo vision system for obstacle detection and vehicle environment perception in a large variety of traffic scenarios, from highway to urban. The system detects obstacles of all types, even at large distances, and outputs them as a list of cuboids, each with a 3D position, size, and speed
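
    As an illustration of the obstacle list described above, the sketch below shows one possible Python representation of a detected cuboid; the field names, units, and coordinate conventions are assumptions chosen for illustration, not the authors' actual interface.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Cuboid:
    """One detected obstacle: 3D position, size and speed (illustrative fields)."""
    x: float       # lateral position in metres (assumed ego-vehicle frame)
    y: float       # vertical position in metres
    z: float       # longitudinal distance in metres
    width: float   # size along x, in metres
    height: float  # size along y, in metres
    length: float  # size along z, in metres
    speed: float   # absolute speed in m/s

# A frame of detections is then simply a list of cuboids:
obstacles: List[Cuboid] = [
    Cuboid(x=-1.2, y=0.0, z=85.0, width=1.8, height=1.5, length=4.3, speed=13.9),
]
```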

    Automatic Segmentation of Periodontal Tissue Ultrasound Images with Artificial Intelligence: A Novel Method for Improving Dataset Quality

    This research aimed to evaluate Mask R-CNN and U-Net convolutional neural network models for pixel-level classification, in order to perform automatic segmentation of two-dimensional ultrasound (US) images of dental arches and identify the anatomical elements required for periodontal diagnosis. A secondary aim was to evaluate the efficiency of a correction method for the ground-truth masks segmented by an operator, intended to improve the quality of the datasets used for training the neural network models by means of 3D ultrasound reconstructions of the examined periodontal tissue. Methods: Ultrasound periodontal investigations were performed on 52 teeth of 11 patients using a 3D ultrasound scanner prototype. The original ultrasound images were segmented by an operator with limited experience using region-growing-based segmentation algorithms. Three-dimensional ultrasound reconstructions were used for the quality check and correction of the segmentation. Mask R-CNN and U-Net were trained and used to predict and identify the elements of the periodontal tissue. Results: The average Intersection over Union ranged between 10% for the periodontal pocket and 75.6% for the gingiva. Even though the original dataset contained 3417 images from 11 patients, and the corrected dataset only 2135 images from 5 patients, prediction accuracy was significantly better for the models trained with the corrected dataset. Conclusions: The proposed quality check and correction method, which evaluates the operator's ground-truth segmentation in 3D space, had a positive impact on the quality of the datasets, as demonstrated by higher IoU values after retraining the models using the corrected dataset
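
    The Intersection over Union values reported above can be reproduced, for any single tissue class, from a pair of binary masks. The snippet below is a minimal sketch of that metric; the mask encoding and the handling of empty masks are assumptions, not details taken from the paper.

```python
import numpy as np

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection over Union between two binary masks of identical shape."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    # Convention assumed here: two empty masks count as perfect agreement.
    return float(intersection) / float(union) if union > 0 else 1.0

# Per-class IoU averaged over a test set (class ids and test_pairs are illustrative):
# gingiva_ious = [iou(pred == GINGIVA_ID, truth == GINGIVA_ID) for pred, truth in test_pairs]
# mean_gingiva_iou = float(np.mean(gingiva_ious))
```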

    Accuracy Report on a Handheld 3D Ultrasound Scanner Prototype Based on a Standard Ultrasound Machine and a Spatial Pose Reading Sensor

    The aim of this study was to develop and evaluate a 3D ultrasound scanning method. The main requirements were the freehand architecture of the scanner and high accuracy of the reconstructions. A quantitative evaluation of a freehand 3D ultrasound scanner prototype was performed by comparing the ultrasonographic reconstructions with the CAD (computer-aided design) model of the scanned object, in order to determine the accuracy of the result. For six consecutive scans, the 3D ultrasonographic reconstructions were scaled and aligned with the model. The mean distance between the 3D objects ranged between 0.019 mm and 0.05 mm, and the standard deviation between 0.287 mm and 0.565 mm. Despite some inherent limitations of our study, the quantitative evaluation of the 3D ultrasonographic reconstructions showed results comparable to other studies performed on smaller areas of the scanned objects, demonstrating the future potential of the developed prototype
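
    Once a reconstruction has been scaled and aligned with the CAD model, a mean distance and standard deviation of the kind reported above can be obtained from point-to-point distances. The sketch below, using SciPy's k-d tree, is one possible way to compute such statistics; the alignment step itself (e.g., ICP) and the exact distance definition used in the study are not reproduced here.

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_distance_stats(reconstruction: np.ndarray, cad_model: np.ndarray):
    """Mean and standard deviation of nearest-neighbour distances from
    reconstruction points (N, 3) to CAD model points (M, 3), assuming both
    are already scaled and aligned in the same coordinate frame and unit (mm)."""
    tree = cKDTree(cad_model)
    distances, _ = tree.query(reconstruction)  # closest CAD point for each reconstruction point
    return float(distances.mean()), float(distances.std())

# mean_mm, std_mm = surface_distance_stats(recon_points, cad_points)
```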

    Comparison of Deep-Learning and Conventional Machine-Learning Methods for the Automatic Recognition of the Hepatocellular Carcinoma Areas from Ultrasound Images

    The emergence of deep-learning methods in different computer vision tasks has proved to offer increased detection, recognition, or segmentation accuracy when large annotated image datasets are available. In the case of medical image processing and computer-aided diagnosis within ultrasound images, where the amount of available annotated data is smaller, a natural question arises: are deep-learning methods better than conventional machine-learning methods? How do conventional machine-learning methods behave in comparison with deep-learning methods on the same dataset? Based on a study of various deep-learning architectures, a lightweight multi-resolution Convolutional Neural Network (CNN) architecture is proposed. It is suitable for differentiating, within ultrasound images, between hepatocellular carcinoma (HCC) and the cirrhotic parenchyma (PAR) on which the HCC has evolved. The proposed deep-learning model is compared with other CNN architectures adapted by transfer learning for the ultrasound binary classification task, as well as with conventional machine-learning (ML) solutions trained on textural features. The achieved results show that the deep-learning approach outperforms the classical machine-learning solutions by providing a higher classification performance
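
    To make the multi-resolution idea concrete, the sketch below builds a small two-branch CNN in Keras that fuses image patches taken at two scales for a binary HCC vs. PAR decision. The patch sizes, filter counts, and fusion scheme are assumptions chosen for illustration; the paper's actual architecture is not reproduced here.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_branch(inputs, filters: int):
    """A small convolutional branch: two conv blocks followed by global pooling."""
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(inputs)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(filters * 2, 3, padding="same", activation="relu")(x)
    return layers.GlobalAveragePooling2D()(x)

# Two patch resolutions processed in parallel, then fused for HCC vs. PAR classification.
in_fine = tf.keras.Input(shape=(56, 56, 1), name="fine_patch")      # assumed patch size
in_coarse = tf.keras.Input(shape=(28, 28, 1), name="coarse_patch")  # assumed patch size
features = layers.Concatenate()([conv_branch(in_fine, 16), conv_branch(in_coarse, 16)])
output = layers.Dense(1, activation="sigmoid")(features)

model = Model(inputs=[in_fine, in_coarse], outputs=output)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```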