    SwimmerNET: Underwater 2D Swimmer Pose Estimation Exploiting Fully Convolutional Neural Networks

    Professional swimming coaches use video to evaluate their athletes' performances. Specifically, the videos are analyzed manually to observe the movements of every part of the swimmer's body during the exercise and to give indications for improving swimming technique. This operation is time-consuming, laborious, and error-prone. In recent years, alternative technologies have been introduced in the literature, but they still have severe limitations that prevent their correct and effective use. In fact, the currently available techniques based on image analysis apply only to certain swimming styles; moreover, they are strongly affected by disturbing elements (i.e., bubbles, splashes, and reflections), resulting in poor measurement accuracy. Wearable sensors (accelerometers or photoplethysmographic sensors) and optical markers, although they can guarantee high reliability and accuracy, disturb the athletes' performance, and athletes tend to dislike these solutions. In this work we introduce swimmerNET, a new marker-less 2D swimmer pose estimation approach based on the combined use of computer vision algorithms and fully convolutional neural networks. Using a single 8 Mpixel wide-angle camera, the proposed system estimates the pose of a swimmer during exercise while guaranteeing adequate measurement accuracy. The method has been successfully tested on several athletes (i.e., with different physical characteristics and different swimming techniques), obtaining an average error and a standard deviation (worst-case scenario for the dataset analyzed) of approximately 1 mm and 10 mm, respectively.
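    As an illustration of the kind of output a fully convolutional pose-estimation network produces, the sketch below recovers 2D joint coordinates from per-joint confidence heatmaps. The heatmap shape, the number of joints, and the confidence threshold are illustrative assumptions, not details of the swimmerNET architecture.

```python
import numpy as np

def keypoints_from_heatmaps(heatmaps, confidence_threshold=0.1):
    """Recover 2D joint coordinates from per-joint heatmaps.

    heatmaps: array of shape (num_joints, H, W), one confidence map per joint,
    as typically produced by a fully convolutional pose-estimation network.
    Returns an array of (x, y, confidence) triplets, one per joint.
    """
    num_joints, height, width = heatmaps.shape
    keypoints = np.zeros((num_joints, 3))
    for j in range(num_joints):
        flat_idx = np.argmax(heatmaps[j])           # peak of the confidence map
        y, x = np.unravel_index(flat_idx, (height, width))
        conf = heatmaps[j, y, x]
        if conf < confidence_threshold:             # joint occluded / not detected
            keypoints[j] = (np.nan, np.nan, conf)
        else:
            keypoints[j] = (x, y, conf)
    return keypoints

# Toy usage: random maps standing in for network output on one video frame.
demo_maps = np.random.rand(13, 64, 96)              # 13 hypothetical joints
print(keypoints_from_heatmaps(demo_maps))
```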

    Sequential or Concomitant Inhibition of Cyclin-Dependent Kinase 4/6 Before mTOR Pathway in Hormone-Positive HER2 Negative Breast Cancer: Biological Insights and Clinical Implications

    About 75% of all breast cancers are hormone receptor-positive (HR+). However, the efficacy of endocrine therapy is limited by the high rate of either pre-existing or acquired resistance. In this work we reconstructed the pathways around the estrogen receptor (ER), mTOR, and cyclin D in order to compare the effects of CDK4/6 and PI3K/AKT/mTOR inhibitors. A positive feedback loop links mTOR and ER, which support each other. We subsequently considered whether a combined or sequential inhibition of CDK4/6 and PI3K/AKT/mTOR could ensure better results. Studies indicate that inhibition of CDK4/6 activates mTOR as an escape mechanism to ensure cell proliferation. The limited evidence in the literature on this topic suggests that pre-treatment with mTOR pathway inhibitors could prevent or delay the onset of CDK4/6 inhibitor resistance. Additional studies are needed to find biomarkers that can identify patients who will develop this resistance and in whom sensitivity to CDK4/6 inhibitors can be restored.

    Differential diagnosis of neurodegenerative dementias with the explainable MRI based machine learning algorithm MUQUBIA

    Biomarker-based differential diagnosis of the most common forms of dementia is becoming increasingly important. Machine learning (ML) may be able to address this challenge. The aim of this study was to develop and interpret an ML algorithm capable of differentiating Alzheimer's dementia, frontotemporal dementia, dementia with Lewy bodies, and cognitively normal control subjects on the basis of sociodemographic, clinical, and magnetic resonance imaging (MRI) variables. 506 subjects from 5 databases were included. MRI images were processed with FreeSurfer, LPA, and TRACULA to obtain brain volumes and thicknesses, white matter lesions, and diffusion metrics. MRI metrics were used in conjunction with clinical and demographic data to perform differential diagnosis with a Support Vector Machine model called MUQUBIA (Multimodal Quantification of Brain whIte matter biomArkers). Age, gender, the Clinical Dementia Rating (CDR) Dementia Staging Instrument, and 19 imaging features formed the best set of discriminative features. The predictive model achieved an overall Area Under the Curve of 98%, high overall precision (88%), recall (88%), and F1 score (88%) in the test group, and a good Label Ranking Average Precision score (0.95) in a subset of neuropathologically assessed patients. The results of MUQUBIA were explained by the SHapley Additive exPlanations (SHAP) method. The MUQUBIA algorithm successfully classified the various dementias with good performance using cost-effective clinical and MRI information and, with independent validation, has the potential to assist physicians in their clinical diagnosis.
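    The general pattern the abstract describes, a Support Vector Machine classifier over clinical and imaging features explained with SHAP, can be sketched as below. The synthetic feature matrix, class labels, and sample sizes are placeholders and do not reproduce the MUQUBIA feature set or data.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
import shap

# Synthetic stand-in for the clinical + MRI feature matrix (not the real data).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 22))          # e.g. age, CDR, and imaging features
y = rng.integers(0, 4, size=200)        # 4 classes: AD, FTD, DLB, controls

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Probability-calibrated SVM; SHAP's model-agnostic explainer needs class scores.
clf = SVC(kernel="rbf", probability=True).fit(X_train, y_train)

# Model-agnostic SHAP explanation on a small background sample.
explainer = shap.KernelExplainer(clf.predict_proba, X_train[:25])
shap_values = explainer.shap_values(X_test[:3], nsamples=100)
print("per-class SHAP attributions for 3 test subjects:", np.shape(shap_values))
```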

    Understanding Factors Associated With Psychomotor Subtypes of Delirium in Older Inpatients With Dementia

    Automated Measurement of Geometric Features in Curvilinear Structures Exploiting Steger’s Algorithm

    Accurately assessing the geometric features of curvilinear structures in images is of paramount importance in many vision-based measurement systems targeting technological fields such as quality control, defect analysis, biomedical, aerial, and satellite imaging. This paper aims at laying the basis for the development of fully automated vision-based measurement systems for measuring elements that can be treated as curvilinear structures in the resulting image, such as cracks in concrete elements. In particular, the goal is to overcome the main limitation of the well-known Steger's ridge detection algorithm in these applications, namely the manual identification of the algorithm's input parameters, which has prevented its extensive use in the measurement field. This paper proposes an approach that makes the selection of these input parameters fully automated. The metrological performance of the proposed approach is discussed, and the method is demonstrated on both synthesized and experimental data.
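    A minimal sketch of the kind of automation targeted here: deriving the Gaussian scale of a Steger-style ridge detector from an estimated line width (Steger's analysis suggests sigma >= w / sqrt(3) for a line of half-width w) and computing the ridge response from the scale-space Hessian. The synthetic crack image and the width estimate are illustrative; the paper's actual selection procedure may differ.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def steger_scale_from_width(full_width_px):
    """Choose the Gaussian scale for a Steger-style detector from the expected
    line width: for a bar line of half-width w, sigma >= w / sqrt(3)."""
    half_width = full_width_px / 2.0
    return half_width / np.sqrt(3.0)

def ridge_response(image, sigma):
    """Ridge strength as the largest-magnitude eigenvalue of the Hessian
    of the Gaussian-smoothed image (the core of Steger's detector)."""
    Ixx = gaussian_filter(image, sigma, order=(0, 2))
    Iyy = gaussian_filter(image, sigma, order=(2, 0))
    Ixy = gaussian_filter(image, sigma, order=(1, 1))
    # Eigenvalues of [[Ixx, Ixy], [Ixy, Iyy]] at every pixel.
    trace_half = 0.5 * (Ixx + Iyy)
    delta = np.sqrt(0.25 * (Ixx - Iyy) ** 2 + Ixy ** 2)
    lam1, lam2 = trace_half + delta, trace_half - delta
    return np.where(np.abs(lam1) > np.abs(lam2), lam1, lam2)

# Toy usage on a synthetic dark crack about 4 px wide on a bright background.
img = np.ones((128, 128))
img[:, 62:66] = 0.2
sigma = steger_scale_from_width(4)
response = ridge_response(img, sigma)
print("chosen sigma:", round(sigma, 2), "max ridge response:", round(response.max(), 3))
```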

    A neural network based microphone array approach to grid-less noise source localization

    Deep learning and neural network strategies have become very popular in recent years as tools for image and data processing. In acoustics, neural network-based approaches have typically been used to recognize audio patterns or features, or to spatially localize a single emitting source such as a speaker. More recently, some authors have used deep learning to localize multiple sources by exploiting the grid-based approach typical of sound source localization methods, or to filter and improve acoustic maps obtained by more traditional techniques such as conventional beamforming. This paper proposes the use of artificial neural networks (ANNs) for localizing and quantifying multiple sound sources in a grid-less way. The approach uses the microphones' Cross-Spectral Matrix (CSM) as input to the network and provides as output both the location and the strength of the sources contributing to the acoustic field. The grid-less strategy targets improved spatial resolution and computational efficiency. The proposed solution is evaluated on simulated data to assess its accuracy and sensitivity, and preliminary investigations on real data are also reported.
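    A minimal sketch of the grid-less idea described above: a small fully connected network that takes the (flattened) cross-spectral matrix as input and regresses continuous source positions and strengths. The array size, the fixed number of output source slots, and the network architecture are assumptions for illustration, not the configuration used in the paper.

```python
import numpy as np
import torch
import torch.nn as nn

NUM_MICS = 16        # hypothetical array size
MAX_SOURCES = 2      # fixed output slots; the paper's setup may differ

def csm_to_features(csm):
    """Flatten the upper triangle of the cross-spectral matrix into a real vector."""
    iu = np.triu_indices(NUM_MICS)
    upper = csm[iu]
    return np.concatenate([upper.real, upper.imag]).astype(np.float32)

# Small fully connected regressor: CSM features -> (x, y, strength) per source.
feat_dim = NUM_MICS * (NUM_MICS + 1)          # real + imag of the upper triangle
model = nn.Sequential(
    nn.Linear(feat_dim, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, MAX_SOURCES * 3),          # grid-less: continuous outputs
)

# Toy forward pass with a random Hermitian CSM standing in for measured data.
rng = np.random.default_rng(0)
signals = rng.normal(size=(NUM_MICS, 1024)) + 1j * rng.normal(size=(NUM_MICS, 1024))
csm = signals @ signals.conj().T / signals.shape[1]
features = torch.from_numpy(csm_to_features(csm)).unsqueeze(0)
prediction = model(features).reshape(MAX_SOURCES, 3)
print("predicted (x, y, strength) per source:\n", prediction.detach().numpy())
```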

    Correction of Substrate Spectral Distortion in Hyper-Spectral Imaging by Neural Network for Blood Stain Characterization

    In the recent past, hyper-spectral imaging has found widespread application in forensic science, performing both geometric characterization of biological traces and trace classification by exploiting their spectral emission. Methods proposed in the literature for blood stain analysis have proved to be effectively limited to collaborative surfaces, which is restrictive in real-case scenarios. The influence of the substrate material and color is therefore still an open issue in blood stain analysis. This paper presents a novel method for correcting blood spectra contaminated by the influence of the substrate, exploiting a neural network-based approach. Hyper-spectral images of blood stains deposited on 12 different substrates were acquired with a hyper-spectral camera at regular intervals over 12 days. The data collected were used to train and test the developed neural network model. Starting from the spectrum of a blood stain deposited on a generic substrate, the algorithm first recognizes whether it is blood or not, and then reconstructs the spectrum that the same blood stain, at the same time, would have on a reference white substrate, with a mean absolute percentage error of 1.11%. An uncertainty analysis has also been performed by comparing the ground-truth reflectance spectra with those predicted by the neural model.
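    A minimal sketch, on synthetic data, of the regression step described above: a small neural network mapping a substrate-contaminated blood spectrum to the spectrum the same stain would show on a reference white substrate, evaluated with the mean absolute percentage error. The number of bands, the simulated substrate tint, and the network size are placeholders, not the paper's configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

NUM_BANDS = 128                     # hypothetical number of spectral bands

def mape(y_true, y_pred):
    """Mean absolute percentage error between reference and predicted spectra."""
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

# Synthetic stand-in data: spectra on tinted substrates vs. the same stains on a
# reference white substrate (the real data come from the hyper-spectral camera).
rng = np.random.default_rng(1)
white_spectra = 0.2 + 0.6 * rng.random((500, NUM_BANDS))
substrate_tint = 0.8 + 0.2 * rng.random((500, 1))
contaminated = white_spectra * substrate_tint + 0.02 * rng.normal(size=(500, NUM_BANDS))

# MLP mapping contaminated spectra back to the white-substrate reference.
net = MLPRegressor(hidden_layer_sizes=(256, 256), max_iter=500, random_state=0)
net.fit(contaminated[:400], white_spectra[:400])

predicted = net.predict(contaminated[400:])
print("MAPE on held-out spectra: %.2f%%" % mape(white_spectra[400:], predicted))
```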

    Rivers' Water Level Assessment Using UAV Photogrammetry and RANSAC Method and the Analysis of Sensitivity to Uncertainty Sources

    Water-level monitoring systems are fundamental for flood warnings, disaster risk assessment, and the periodic analysis of the state of reservoirs. Many advantages can be obtained by performing such investigations without the need for field measurements. In this paper, a specific method for evaluating the water level was developed using photogrammetry derived from images recorded by unmanned aerial vehicles (UAVs). A dense point cloud was retrieved and the plane that best fits the river water surface was found using the random sample consensus (RANSAC) method. A reference point of known altitude within the image was then exploited to compute the distance between it and the fitted plane, in order to monitor the altitude of the free surface of the river. This paper further performs a critical analysis of the sensitivity of these photogrammetric techniques for river water level determination, starting from the effects highlighted by the state of the art, such as random noise related to image data quality, reflections, and process parameters. In this work, the influence of the plane depth and of the number of iterations has been investigated, showing that at the optimal plane depth (0.5 m) the error is not affected by the number of iterations.
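    A minimal sketch of the pipeline described above: RANSAC plane fitting on a (here synthetic) photogrammetric point cloud, followed by the water level computed as the distance from a reference point of known altitude to the fitted plane. The plane-depth band of 0.5 m follows the value discussed in the abstract; the point cloud, reference point, and iteration count are illustrative assumptions.

```python
import numpy as np

def ransac_plane(points, depth=0.5, iterations=200, rng=np.random.default_rng(0)):
    """Fit a plane to a 3D point cloud with RANSAC.

    A candidate plane is kept if it has the most points within +/- depth/2 of it
    (the 'plane depth' band). Returns (normal, d) with normal . p + d = 0.
    """
    best_inliers, best_model = 0, None
    for _ in range(iterations):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                      # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]
        distances = np.abs(points @ normal + d)
        inliers = np.count_nonzero(distances < depth / 2)
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (normal, d)
    return best_model

# Toy cloud: a noisy horizontal water surface at z = 102.3 m plus bank points.
rng = np.random.default_rng(1)
water = np.column_stack([rng.uniform(0, 50, 2000), rng.uniform(0, 20, 2000),
                         102.3 + 0.05 * rng.normal(size=2000)])
bank = np.column_stack([rng.uniform(0, 50, 300), rng.uniform(20, 25, 300),
                        rng.uniform(102.3, 105, 300)])
normal, d = ransac_plane(np.vstack([water, bank]))

# Water level = distance from a reference point of known altitude to the plane.
reference_point = np.array([10.0, 22.0, 104.0])      # e.g. a surveyed marker
level_below_reference = abs(reference_point @ normal + d)
print("water surface is %.2f m below the reference point" % level_below_reference)
```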