
    Salivary Gland Adaptation to Dietary Inclusion of Hydrolysable Tannins in Boars

    The ingestion of hydrolysable tannins, a potential dietary tool for reducing boar odor in entire males, results in significant enlargement of the parotid glands (parotidomegaly). The objective of this study was to characterize the effects of different levels of hydrolysable tannins in the diet of fattening boars (n = 24) on salivary gland morphology and proline-rich protein (PRP) expression at the histological level. Four treatment groups of pigs (n = 6 per group) were fed either a control (T0) or an experimental diet, where the T0 diet was supplemented with 1% (T1), 2% (T2), or 3% (T3) of the hydrolysable tannin-rich extract Farmatan®. After slaughter, the parotid and mandibular glands of the experimental pigs were harvested and dissected for staining with Goldner’s Trichrome method and for immunohistochemical studies with antibodies against PRPs. Morphometric analysis was performed on microtome sections of both salivary glands to measure the acinar area, the lobular area, the area of the secretory ductal cells, and the sizes of glandular cells and their nuclei. Histological assessment revealed that significant parotidomegaly was present only in the T3 group, based on larger glandular lobules and acinar areas and a higher nucleus-to-cytoplasm ratio. The immunohistochemical method, supported by color intensity measurements, indicated significant increases in basic PRPs (PRB2) in the T3 group and in acidic PRPs (PRH1/2) in the T1 group. Tannin supplementation did not affect the histo-morphological properties of the mandibular gland. This study confirms that pigs can adapt to a tannin-rich diet through structural changes in the parotid salivary gland, indicating its higher functional activity.
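    As an illustration of how such colour-intensity measurements might be carried out, the following is a minimal sketch that assumes DAB-based immunostaining and the scikit-image colour-deconvolution routine; the chromogen, file name, and threshold are assumptions, not details taken from the study.

```python
# Quantifying immunostaining intensity on a section image, assuming DAB-based IHC and
# scikit-image colour deconvolution; the file name and threshold are hypothetical.
import numpy as np
from skimage import io, color

def dab_intensity(image_path: str) -> float:
    """Mean DAB optical density over a crude tissue mask of one section image."""
    rgb = io.imread(image_path)        # RGB photomicrograph of the stained section
    hed = color.rgb2hed(rgb)           # colour deconvolution: haematoxylin, eosin, DAB
    dab = hed[:, :, 2]                 # DAB (immunostaining) channel
    tissue = dab > 0.02                # illustrative threshold separating tissue from background
    return float(dab[tissue].mean()) if tissue.any() else 0.0

# e.g. compare group means: dab_intensity("parotid_T3_section01.tif")
```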

    Deeply-Supervised 3D Convolutional Neural Networks for Automated Ovary and Follicle Detection from Ultrasound Volumes

    Automated detection of ovarian follicles in ultrasound images is of practical value only when its effectiveness is comparable with the experts’ annotations. Today’s best methods still detect follicles notably less accurately than the experts. This paper describes the development of two-stage deeply-supervised 3D Convolutional Neural Networks (CNN) based on the established U-Net. Either the entire U-Net or specific parts of the U-Net decoder were replicated in order to integrate prior knowledge into the detection. The methods were trained end-to-end for follicle detection, while transfer learning was employed for ovary detection. The USOVA3D database of annotated ultrasound volumes, with its verification protocol, was used to verify the effectiveness. In follicle detection, the proposed methods estimate follicles up to 2.9% more accurately than the compared methods. With our two-stage CNNs trained by transfer learning, the effectiveness of ovary detection surpasses current automated detection methods by about 7.6%. The results demonstrate that our methods estimate follicles only slightly worse than the experts, while the ovaries are detected almost as accurately as by the experts. Statistical analysis of 50 repetitions of CNN model training confirmed that the training is stable and that the effectiveness improvements are not merely due to random initialisation. Our deeply-supervised 3D CNNs can be adapted easily to other problem domains.
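    The following is a minimal sketch, not the authors' network, of the deep-supervision idea with a 3D U-Net in PyTorch: auxiliary segmentation heads are attached to intermediate decoder levels, and their upsampled outputs contribute to a weighted loss. Channel widths, depth, class count, and loss weights are assumptions.

```python
# A generic 3D U-Net with deep supervision (auxiliary heads on decoder levels).
# This is NOT the paper's architecture; widths, depth and loss weights are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, out_ch):
    """Two 3x3x3 convolutions with batch norm and ReLU."""
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, 3, padding=1), nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, 3, padding=1), nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True),
    )

class DeeplySupervised3DUNet(nn.Module):
    def __init__(self, in_ch=1, n_classes=2, base=16):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.enc3 = conv_block(base * 2, base * 4)
        self.pool = nn.MaxPool3d(2)
        self.bottleneck = conv_block(base * 4, base * 8)
        self.up3, self.dec3 = nn.ConvTranspose3d(base * 8, base * 4, 2, stride=2), conv_block(base * 8, base * 4)
        self.up2, self.dec2 = nn.ConvTranspose3d(base * 4, base * 2, 2, stride=2), conv_block(base * 4, base * 2)
        self.up1, self.dec1 = nn.ConvTranspose3d(base * 2, base, 2, stride=2), conv_block(base * 2, base)
        # Deep-supervision heads: 1x1x1 convolutions on every decoder level.
        self.head3 = nn.Conv3d(base * 4, n_classes, 1)
        self.head2 = nn.Conv3d(base * 2, n_classes, 1)
        self.head1 = nn.Conv3d(base, n_classes, 1)

    def forward(self, x):  # x: (N, C, D, H, W), spatial dims divisible by 8
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        b = self.bottleneck(self.pool(e3))
        d3 = self.dec3(torch.cat([self.up3(b), e3], dim=1))
        d2 = self.dec2(torch.cat([self.up2(d3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        # Auxiliary outputs are upsampled to full resolution so they can share the loss.
        out3 = F.interpolate(self.head3(d3), size=x.shape[2:], mode="trilinear", align_corners=False)
        out2 = F.interpolate(self.head2(d2), size=x.shape[2:], mode="trilinear", align_corners=False)
        return self.head1(d1), out2, out3

def deep_supervision_loss(outputs, target, weights=(1.0, 0.5, 0.25)):
    """Weighted cross-entropy over the main output and the auxiliary decoder outputs."""
    return sum(w * F.cross_entropy(o, target) for o, w in zip(outputs, weights))
```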

    Automated landmark points detection by using a mixture of approaches

    This paper deals with the automated detection of a closed curve’s dominant points. We treat a curve as a 1-D function of the arc length, so the problem of detecting dominant points translates into seeking the extrema of the corresponding 1-D function. Three approaches for automated dominant point detection are presented: (1) an approach based on polynomial fitting, (2) an approach using 1-D registration, and (3) an innovative approach based on a multi-resolution scheme, zero-crossings, and hierarchical clustering. Two further methods are then introduced that mix the results of the three approaches linearly and non-linearly, in a mean-square-error sense, by using linear and non-linear fitting, respectively. On the problem of detecting 21 landmarks on 38 vole teeth, we demonstrate experimentally that mixing improves the detection accuracy by up to 41.47% with respect to the individual approaches applied within the mixture.
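    A minimal sketch of the underlying idea, under our own assumptions rather than the paper's exact formulation: the closed contour is parameterised by arc length, its radial distance from the centroid is used as the 1-D function whose extrema are taken as dominant points, and estimates from several approaches are mixed with least-squares weights. All function names and parameters are illustrative.

```python
# Sketch: contour -> 1-D function of arc length -> extrema as dominant points,
# plus least-squares mixing of per-approach estimates. Illustrative parameters only.
import numpy as np
from scipy.signal import find_peaks

def contour_to_1d(points: np.ndarray):
    """Cumulative arc length and radial distance from the centroid of a closed contour."""
    step = np.linalg.norm(np.diff(points, axis=0, append=points[:1]), axis=1)
    arc_length = np.concatenate(([0.0], np.cumsum(step[:-1])))
    radius = np.linalg.norm(points - points.mean(axis=0), axis=1)  # the 1-D function
    return arc_length, radius

def dominant_points(points: np.ndarray, prominence: float = 1.0) -> np.ndarray:
    """Indices of the extrema (maxima and minima) of the 1-D radial function."""
    _, radius = contour_to_1d(points)
    maxima, _ = find_peaks(radius, prominence=prominence)
    minima, _ = find_peaks(-radius, prominence=prominence)
    return np.sort(np.concatenate([maxima, minima]))

def linear_mixing_weights(estimates: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Least-squares weights mixing per-approach landmark estimates.

    estimates: (n_approaches, n_landmarks) positions from the individual approaches,
    reference: (n_landmarks,) reference positions used to fit the mixture.
    """
    weights, *_ = np.linalg.lstsq(estimates.T, reference, rcond=None)
    return weights
```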

    Objective Evaluation of Image Segmentation Algorithms (Objektivno ocenjevanje slikovnih segmentacijskih algoritmov)


    UAV Thermal Imaging for Unexploded Ordnance Detection by Using Deep Learning

    A few promising solutions for Unexploded Ordnance (UXO) detection from thermal imaging were proposed after the start of the military conflict in Ukraine in 2014. At the same time, most landmine clearance protocols and practices are still based on old, 20th-century technologies. More than 60 countries worldwide are affected by explosive remnants of war, and new areas are contaminated almost every day. To date, no automated solutions exist for surface UXO detection using thermal imaging; one of the reasons is the lack of publicly available data. This research bridges both gaps by introducing an automated UXO detection method and by publishing thermal imaging data. During a project in Bosnia and Herzegovina in 2019, the organisation Norwegian People’s Aid collected data on unexploded ordnance and made them available for this research. Thermal images with a size of 720 × 480 pixels were collected using an Unmanned Aerial Vehicle at a height of 3 m, thus achieving a very small Ground Sampling Distance (GSD). One of the goals of our research was to verify whether the detection accuracy for explosive remnants of war could be improved further by using Convolutional Neural Networks (CNN). We experimented with various modern CNN architectures for object detection, and the YOLOv5 model was selected as the most promising for retraining. The study primarily addresses an eleven-class object detection problem. Our data were annotated semi-manually. Five versions of the YOLOv5 model, fine-tuned with a grid search, were trained end-to-end on 640 randomly selected training images and 80 validation images from our dataset. The trained models were verified on the remaining 88 images. Objects from each of the eleven classes were identified with more than 90% probability; the Mean Average Precision (mAP) at a 0.5 threshold was 99.5%, and the mAP over thresholds from 0.5 to 0.95 ranged from 87.0% to 90.5%, depending on the model’s complexity. Our results are comparable to the state of the art, where similar object detection methods have been tested on other small thermal-image datasets. Our study is one of the few in the field of automated UXO detection from thermal images, and the first to address the identification of more than one class of objects. The publicly available thermal images with a relatively small GSD will enable and stimulate the development of new detection algorithms, for which our method and results can serve as a baseline. Only truly accurate automatic UXO detection solutions will help to address one of the least explored life-threatening problems worldwide.
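    A hedged sketch of how a YOLOv5 model could be fine-tuned and then applied to such thermal frames, using the public ultralytics/yolov5 repository; the dataset YAML name, file paths, and thresholds below are illustrative and not the study's actual configuration.

```python
# Fine-tuning and applying YOLOv5 to thermal frames, roughly as in the public
# ultralytics/yolov5 repository. Paths, the dataset YAML and thresholds are illustrative.
#
# Training (run from a clone of https://github.com/ultralytics/yolov5):
#   python train.py --img 640 --batch 16 --epochs 100 \
#       --data uxo_thermal.yaml --weights yolov5s.pt
# where uxo_thermal.yaml points to the train/val image folders and lists the 11 classes.

import torch

# Load the fine-tuned weights for inference via torch.hub (documented YOLOv5 usage).
model = torch.hub.load("ultralytics/yolov5", "custom", path="runs/train/exp/weights/best.pt")
model.conf = 0.5                               # confidence threshold

results = model("thermal_frame_0001.jpg")      # one 720 x 480 thermal image
detections = results.pandas().xyxy[0]          # boxes, confidences and class names
print(detections[["name", "confidence"]])
```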

    Influence of Temperature Variations on Calibrated Cameras

    Camera parameters change with temperature variations, which directly influences the accuracy of calibrated cameras. The robustness of calibration methods was measured and their accuracy tested. The ratio of the error caused by the change in camera parameters to the total error originating in the calibration process was determined. The results indicate that the influence of temperature variations decreases as the distance of the observed objects from the cameras increases. Keywords: camera calibration, perspective projection matrix, epipolar geometry, temperature variation.
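    A small numerical sketch of the kind of effect described, using assumed values rather than the paper's measurements: the same 3D point is projected with intrinsics "calibrated" at two temperatures, and the resulting pixel shift is evaluated at increasing object distances, showing the shift shrinking with distance.

```python
# Illustrative numbers only: project the same 3D point with intrinsics "calibrated"
# at two temperatures and measure the pixel shift at increasing object distances.
import numpy as np

def project(K: np.ndarray, X: np.ndarray) -> np.ndarray:
    """Pinhole projection of a point X = (x, y, z) given in camera coordinates."""
    p = K @ X
    return p[:2] / p[2]

K_cold = np.array([[1200.0, 0.0, 640.0],
                   [0.0, 1200.0, 360.0],
                   [0.0, 0.0, 1.0]])
K_warm = K_cold.copy()
K_warm[0, 0] += 2.5      # assumed focal-length drift (px) after warming
K_warm[1, 1] += 2.5
K_warm[0, 2] += 0.8      # assumed principal-point drift (px)

# Pixel displacement of a point 0.5 m off the optical axis, observed at growing distances:
for z in (2.0, 5.0, 10.0, 20.0):
    X = np.array([0.5, 0.0, z])
    shift = np.linalg.norm(project(K_warm, X) - project(K_cold, X))
    print(f"distance {z:5.1f} m -> projection shift {shift:.2f} px")
```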

    Analytical Camera Model Supplemented with Influence of Temperature Variations

    A camera on a building site is exposed to varying weather conditions. Differences between images of the same scene captured with the same camera also arise due to temperature variations. The influence of temperature changes on camera parameters was modelled and integrated into an existing analytical camera model. The modified camera model enables a quantitative assessment of the influence of temperature variations. Keywords: camera calibration, analytical model, intrinsic parameters, extrinsic parameters, temperature variations.
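    A minimal sketch of the idea of supplementing a pinhole camera model with a temperature term, assuming a simple linearisation of the intrinsic parameters around a reference temperature; the sensitivity coefficients are purely illustrative and not taken from the paper.

```python
# Linearising the intrinsic parameters around a reference temperature; the sensitivity
# coefficients (px per degree C) below are purely illustrative.
import numpy as np

def intrinsics_at(T: float, K_ref: np.ndarray, dK_dT: np.ndarray, T_ref: float = 20.0) -> np.ndarray:
    """Intrinsic matrix predicted at temperature T from the reference calibration."""
    return K_ref + (T - T_ref) * dK_dT

K_ref = np.array([[1200.0, 0.0, 640.0],     # calibration at the reference temperature
                  [0.0, 1200.0, 360.0],
                  [0.0, 0.0, 1.0]])
dK_dT = np.array([[0.12, 0.0, 0.04],        # assumed sensitivities of fx, fy and the principal point
                  [0.0, 0.12, 0.03],
                  [0.0, 0.0, 0.0]])

print(intrinsics_at(35.0, K_ref, dK_dT))    # predicted intrinsics for a 35 degC session
```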