4,098 research outputs found

    Functional Imaging of Malignant Gliomas with CT Perfusion

    Get PDF
    The overall survival of patients with malignant gliomas remains dismal despite multimodality treatments. Computed tomography (CT) perfusion is a functional imaging tool for assessing tumour hemodynamics. The goals of this thesis are to 1) improve measurements of various CT perfusion parameters and 2) assess treatment outcomes in a rat glioma model and in patients with malignant gliomas. Chapter 2 addressed the effect of scan duration on the measurements of blood flow (BF), blood volume (BV), and permeability-surface area product (PS). Measurement errors of these parameters increased with shorter scan duration; a minimum scan duration of 90 s is recommended. Chapter 3 evaluated the improvement in the measurements of these parameters by filtering the CT perfusion images with principal component analysis (PCA). Computer simulations showed that measurement errors of BF, BV, and PS were reduced, and experiments showed that the contrast-to-noise ratio of the CT perfusion images was improved. Chapter 4 investigated the efficacy of CT perfusion as an early imaging biomarker of response to stereotactic radiosurgery (SRS). Using the C6 glioma model, we showed that responders to SRS (surviving > 15 days) had lower relative BV and PS on day 7 post-SRS than controls and non-responders (P < 0.05). Relative BV and PS on day 7 post-SRS were predictive of survival with 92% accuracy. Chapter 5 examined the use of multiparametric imaging with CT perfusion and 18F-fluorodeoxyglucose positron emission tomography (FDG-PET) to identify tumour sites that are likely to correlate with the eventual location of tumour progression. We developed a method to generate probability maps of tumour progression based on these imaging data. Chapter 6 investigated serial changes in tumour volumetric and CT perfusion parameters and their ability to stratify patients by overall survival. Pre-surgery BF in the non-enhancing lesion and BV in the contrast-enhancing lesion three months after radiotherapy had the highest combination of sensitivities and specificities (≥ 80%) in predicting 24-month overall survival. Optimization and standardization of CT perfusion scans were proposed. This thesis also provided corroborating evidence to support the use of CT perfusion as a biomarker of outcomes in patients with malignant gliomas.
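    As a rough illustration of the PCA filtering described in Chapter 3, the sketch below denoises a perfusion series by reprojecting it onto its leading principal components; the synthetic data, component count, and library choice are assumptions made for illustration, not the thesis' actual pipeline.

```python
# Minimal sketch of PCA filtering for a CT perfusion series (illustrative only;
# the thesis' pipeline, component selection, and data are not reproduced here).
import numpy as np
from sklearn.decomposition import PCA

# Synthetic stand-in: 45 time frames of a 64x64 perfusion image with noise.
rng = np.random.default_rng(0)
n_frames, h, w = 45, 64, 64
series = rng.normal(0.0, 10.0, (n_frames, h, w))         # additive noise
series += np.linspace(0, 100, n_frames)[:, None, None]   # shared enhancement curve

# Reshape to (time, pixels), keep the first few principal components, and
# reconstruct; low-order components capture the contrast-enhancement signal
# while high-order components are mostly noise.
X = series.reshape(n_frames, -1)
pca = PCA(n_components=3)
X_filtered = pca.inverse_transform(pca.fit_transform(X))
filtered_series = X_filtered.reshape(n_frames, h, w)

print("retained variance:", pca.explained_variance_ratio_.sum())
```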

    Measuring Perceptual Color Differences of Smartphone Photographs

    Full text link
    Measuring perceptual color differences (CDs) is of great importance in modern smartphone photography. Despite the long history of this problem, most CD measures have been constrained by psychophysical data from homogeneous color patches or a limited number of simplistic natural photographic images. It is thus questionable whether existing CD measures generalize in the age of smartphone photography, characterized by greater content complexity and learning-based image signal processors. In this paper, we put together the largest image dataset to date for perceptual CD assessment, in which the photographic images are 1) captured by six flagship smartphones, 2) altered by Photoshop, 3) post-processed by built-in filters of the smartphones, and 4) reproduced with incorrect color profiles. We then conduct a large-scale psychophysical experiment to gather perceptual CDs of 30,000 image pairs in a carefully controlled laboratory environment. Based on the newly established dataset, we make one of the first attempts to construct an end-to-end learnable CD formula based on a lightweight neural network, as a generalization of several previous metrics. Extensive experiments demonstrate that the optimized formula outperforms 33 existing CD measures by a large margin, offers reasonable local CD maps without the use of dense supervision, generalizes well to homogeneous color patch data, and empirically behaves as a proper metric in the mathematical sense. Our dataset and code are publicly available at https://github.com/hellooks/CDNet.
    Comment: 10 figures, 8 tables, 14 pages
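    For intuition, the following sketch shows the general shape of such a learnable CD measure: a small shared network embeds each image and the CD is the distance between the embeddings, which is a (pseudo)metric by construction. The architecture and layer sizes here are illustrative assumptions, not the CDNet model from the paper.

```python
# Minimal sketch of a learnable color-difference (CD) measure: embed each image
# with a small shared CNN and report the Euclidean distance between embeddings.
# Architecture and sizes are illustrative assumptions, not the CDNet model.
import torch
import torch.nn as nn

class TinyCDNet(nn.Module):
    def __init__(self, dim=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, dim),
        )

    def forward(self, img_a, img_b):
        # Shared weights make the measure symmetric in its two inputs, and the
        # norm of an embedding difference is a pseudometric by construction.
        fa, fb = self.features(img_a), self.features(img_b)
        return torch.linalg.vector_norm(fa - fb, dim=1)

model = TinyCDNet()
a, b = torch.rand(4, 3, 128, 128), torch.rand(4, 3, 128, 128)
print(model(a, b))  # one predicted CD per image pair
# Training (not shown) would regress these outputs onto human CD judgments.
```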

    UTILIZING PRECISION TECHNOLOGIES TO VALIDATE A REAL-TIME LOCATION SYSTEM FOR DAIRY CATTLE AND MONITOR CALF BEHAVIORS DURING HEAT STRESS

    Get PDF
    With the increase in on-farm precision dairy technology (PDT) utilization, large quantities of information are readily available to producers. A more recently available technology for use in livestock species is the real-time location system. These systems offer dairy producers the opportunity to monitor and track the real-time locations of cows, track locomotion patterns, and summarize usage of specific areas. However, the usefulness of these insights depends heavily on the performance of the technology. Therefore, the first objective of this dissertation was to assess the positioning performance, and the usefulness of the data recorded, of a real-time location system (Smartbow GmbH; Zoetis Services LLC., Parsippany, NJ, USA) for freestall-housed dairy cattle on a commercial farm. The system's positioning ability was evaluated under static and dynamic conditions, and it was able to determine locations accurately under both. Furthermore, PDT are also utilized to monitor the behaviors and activity of dairy calves. The second objective of this dissertation was to investigate the effects of heat stress on the behaviors of dairy calves using information gathered by PDT. Information recorded from automated milk feeders and pedometers was used to investigate the effects of an elevated temperature-humidity index on dairy calf behaviors. The recorded changes in behavior suggest that PDT can detect behavioral pattern changes in calves during heat stress.
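    As an aside on the heat-stress criterion, the sketch below computes a temperature-humidity index using one widely cited formulation (NRC, 1971); the dissertation's exact formula and thresholds are not given in the abstract, so both are assumptions here.

```python
# Temperature-humidity index (THI) from ambient temperature and relative
# humidity, using one widely cited formulation (NRC, 1971). The dissertation's
# exact formula and heat-stress thresholds are assumptions in this sketch.
def thi(temp_c: float, rel_humidity_pct: float) -> float:
    """THI = (1.8*T + 32) - (0.55 - 0.0055*RH) * (1.8*T - 26), T in deg C, RH in %."""
    return (1.8 * temp_c + 32) - (0.55 - 0.0055 * rel_humidity_pct) * (1.8 * temp_c - 26)

# Example: a hot, humid afternoon in the calf barn.
print(round(thi(30.0, 70.0), 1))  # 81.4; values above ~72 are commonly read as heat stress
```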

    Deep Learning Models to Predict Finishing Pig Weight Using Point Clouds

    Get PDF
    The selection of animals to be marketed is largely completed by visual assessment, relying solely on the skill of the animal caretaker. Real-time monitoring of the weight of farm animals would provide important information not only for marketing but also for assessing health and well-being issues. The objective of this study was to develop and evaluate a method based on a 3D convolutional neural network to predict weight from point clouds. An Intel RealSense D435 stereo depth camera placed at a height of 2.7 m was used to capture 3D videos of single finishing pigs, ranging in weight from 20 to 120 kg, freely walking in a holding pen. The animal weights and 3D videos were collected from 249 Landrace × Large White pigs in farm facilities of the FZEA-USP (Faculty of Animal Science and Food Engineering, University of Sao Paulo) between 5 August and 9 November 2021. Point clouds were manually extracted from the recorded 3D videos and used for modeling. A total of 1186 point clouds were used for model training and validation with the PointNet framework in Python, using a 9:1 split, and 112 randomly selected point clouds were reserved for testing. For comparison with the PointNet method, the volume between the body-surface points and a constant plane resembling the ground was calculated and correlated with weight. The PointNet regression model achieved a coefficient of determination of R2 = 0.94 on the test point clouds, compared with R2 = 0.76 for the volume-based estimate on the same animals. The validation RMSE of the model was 6.79 kg and the test RMSE was 6.88 kg. Further, to analyze model performance by weight range, the pigs were divided into three groups: below 55 kg, between 55 and 90 kg, and above 90 kg; the model predicted pigs below 55 kg best. The results clearly showed that 3D deep learning on point sets has good potential for accurate weight prediction even with a limited training dataset. This study therefore confirms the usability of 3D deep learning on point sets for predicting farm animals' weights, although a larger dataset is needed to ensure the most accurate predictions.
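    For readers unfamiliar with PointNet, a minimal sketch of this style of point-cloud regressor follows: a shared per-point MLP, an order-invariant max pooling, and a small head predicting one weight per cloud. The layer sizes (and the omission of the T-Net alignment module) are simplifications, not the study's configuration.

```python
# Minimal PointNet-style regressor: a shared per-point MLP, order-invariant
# max pooling, and a small head that predicts one weight (kg) per point cloud.
# Sizes are illustrative; the study's PointNet configuration is not shown here.
import torch
import torch.nn as nn

class PointNetRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        # Conv1d with kernel size 1 applies the same MLP to every point.
        self.point_mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, 256, 1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, points):                  # points: (batch, 3, n_points)
        per_point = self.point_mlp(points)
        pooled = per_point.max(dim=2).values    # symmetric: point order is irrelevant
        return self.head(pooled).squeeze(1)     # predicted weight in kg

model = PointNetRegressor()
cloud = torch.rand(2, 3, 1024)  # two clouds of 1024 (x, y, z) points
print(model(cloud).shape)       # torch.Size([2])
```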

    Response to Uncertain Threat in Acute Trauma Survivors

    Get PDF
    Uncertainty is often associated with subjective distress and a potentiated anxiety response. A heightened response to uncertainty may be a central mechanism via which anxiety-, trauma-, and stressor-related disorders develop and are maintained. The current study compared the neural response to predictable and unpredictable threat in acute trauma survivors to clarify the role of the response to uncertain threat in fear circuitry and to further inform how PTSD develops in the context of uncertain threat. This novel study showed that anticipating unpredictable relative to predictable images (primarily negative) increased activation in a frontoparietal network and was associated with decreased acute trauma symptoms, suggesting this network may support an adaptive mechanism for responding to unpredictable threat. Results also showed that increased PTSD symptoms were associated with more sustained insula activation during unpredictable vs. predictable blocks. Additionally, those with more severe PTSD symptoms had a greater response to transient relative to sustained unpredictable (vs. predictable) conditions in the superior frontal gyrus. These findings extend previous work highlighting the insula's role in sustained responsivity to unpredictability in anxiety disorders and PTSD to symptomatology in acute trauma survivors. Finally, widespread sustained activation of predominantly frontocentral and frontoparietal regions in unpredictable relative to predictable blocks was associated with increased intolerance of uncertainty.

    Recording behaviour of indoor-housed farm animals automatically using machine vision technology: a systematic review

    Get PDF
    Large-scale phenotyping of animal behaviour traits is time-consuming and has led to increased demand for technologies that can automate these procedures. Automated tracking of animals has been successful in controlled laboratory settings, but recording from animals in large groups in highly variable farm settings presents challenges. The aim of this review is to provide a systematic overview of the advances in automated, high-throughput image detection of farm animal behavioural traits with welfare and production implications. Peer-reviewed publications written in English were reviewed systematically following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. After identification, screening, and assessment for eligibility, 108 publications met these specifications and were included for qualitative synthesis. Data collected from the papers included camera specifications, housing conditions, group size, algorithm details, procedures, and results. Most studies utilized standard digital colour video cameras for data collection, with increasing use of 3D cameras in papers published after 2013. Papers including pigs (across production stages) were the most common (n = 63). The most commonly recorded behaviours included activity level, area occupancy, aggression, gait scores, resource use, and posture. Our review revealed many overlaps in the methods applied to analysing behaviour, and most studies started from scratch instead of building upon previous work. Training and validation sample sizes were generally small (mean ± s.d. = 3.8 ± 5.8 groups), and data collection and testing took place in relatively controlled environments. To advance our ability to automatically phenotype behaviour, future research should build upon existing knowledge and validate technologies under commercial settings, and publications should explicitly describe recording conditions in detail so that studies can be reproduced.

    Probabilistic Evaluation of 3D Surfaces Using Statistical Shape Models (SSM)

    Full text link
    [EN] Inspecting a 3D object whose shape has elastic manufacturing tolerances in order to find defects is a challenging and time-consuming task. This task usually involves humans, either in the specification stage followed by some automatic measurements, or at other points along the process. Even when a detailed inspection is performed, the measurements are limited to a few dimensions rather than a complete examination of the object. In this work, a probabilistic method to evaluate 3D surfaces is presented. The algorithm relies on a training stage that learns the shape of the object by building a statistical shape model. Using this model, any inspected object can be evaluated to obtain the probability that the whole object, or any of its dimensions, is compatible with the model, thus allowing defective objects to be found easily. Results in simulated and real environments are presented and compared with two alternative approaches. This work was partially funded by Generalitat Valenciana through IVACE (Valencian Institute of Business Competitiveness), distributed nominatively to Valencian technological innovation centres under project expedient IMAMCN/2020/1.
    Pérez, J.; Guardiola Garcia, JL.; Pérez Jiménez, AJ.; Perez-Cortes, J. (2020). Probabilistic Evaluation of 3D Surfaces Using Statistical Shape Models (SSM). Sensors, 20(22), 1-16. https://doi.org/10.3390/s20226554
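    A minimal sketch of the general SSM evaluation idea follows: learn a mean shape and PCA modes of variation from training shapes (assumed here to be in point-to-point correspondence), then score a new shape by the Mahalanobis distance of its model coefficients, converted to a probability with a chi-square survival function. This is a generic SSM recipe, not the paper's exact algorithm.

```python
# Sketch of probabilistic shape evaluation with a statistical shape model:
# PCA over training shapes gives a mean shape and variation modes; a new
# shape's Mahalanobis distance in coefficient space yields a chi-square
# p-value. Generic SSM recipe under the assumption of corresponded points,
# not the paper's exact method.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)

# Training set: 50 shapes, each 100 corresponded 3D points, flattened to 300-D.
shapes = rng.normal(0, 1, (50, 300))

mean_shape = shapes.mean(axis=0)
centered = shapes - mean_shape

# PCA via SVD; keep the first k modes of variation.
k = 5
_, s, vt = np.linalg.svd(centered, full_matrices=False)
modes = vt[:k]                                  # (k, 300) orthonormal rows
variances = (s[:k] ** 2) / (shapes.shape[0] - 1)

def shape_probability(new_shape: np.ndarray) -> float:
    """P-value that the shape is compatible with the model (high = typical)."""
    coeffs = modes @ (new_shape - mean_shape)   # project onto the modes
    d2 = np.sum(coeffs**2 / variances)          # squared Mahalanobis distance
    return chi2.sf(d2, df=k)                    # chi-square survival function

print(shape_probability(shapes[0]))             # a training shape: typically large
print(shape_probability(mean_shape + 50.0))     # a gross deviation: near zero
```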