
    CAD-Based Porous Scaffold Design of Intervertebral Discs in Tissue Engineering

    With the development and maturity of three-dimensional (3D) printing technology over the past decade, 3D printing has been widely investigated and applied in tissue engineering to repair damaged tissues or organs such as muscles, skin, and bones. Although a number of automated fabrication methods have been developed to create superior bio-scaffolds with specific surface properties and porosity, major challenges remain in fabricating 3D natural biodegradable scaffolds with tailored properties such as intricate architecture, porosity, and interconnectivity, which are needed to provide structural integrity, strength, transport, and an ideal microenvironment for cell and tissue growth. In this dissertation, a robust pipeline for fabricating bio-functional porous scaffolds of intervertebral discs, based on several innovative porous design methodologies, is illustrated. First, a triply periodic minimal surface (TPMS) based parameterization method, which overcomes the integrity problem of the traditional TPMS method, is presented in Chapter 3. Then, an implicit surface modeling (ISM) approach using tetrahedral implicit surfaces (TIS) is demonstrated and compared with the TPMS method in Chapter 4. In Chapter 5, we present an advanced porous design method with higher flexibility using anisotropic radial basis functions (ARBF) and volumetric meshes. With these advanced porous design methods, the 3D model of a bio-functional porous intervertebral disc scaffold can be easily designed, and its physical model can be manufactured through 3D printing. However, due to the unique shape of each intervertebral disc and the intricate topological relationship between the intervertebral discs and the spine, accurate localization and segmentation of dysfunctional discs remain another obstacle to fabricating porous 3D disc models.
To that end, Chapter 6 discusses a technique for segmenting intervertebral discs from CT-scanned medical images using deep convolutional neural networks, and demonstrates several examples of applying the different porous designs to the segmented intervertebral disc models.
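The TPMS idea underlying this design pipeline can be illustrated with the gyroid, one of the classic triply periodic minimal surfaces used for porous scaffolds. The sketch below (a minimal numpy illustration, not the dissertation's parameterization method) samples the gyroid's implicit field on a grid and estimates the solid volume fraction of the resulting scaffold:

```python
import numpy as np

def gyroid(x, y, z, t=0.0):
    """Implicit gyroid TPMS: points where the value is near t lie on the surface.
    The offset t controls the porosity of the resulting scaffold."""
    return (np.sin(x) * np.cos(y)
            + np.sin(y) * np.cos(z)
            + np.sin(z) * np.cos(x) - t)

# Sample the implicit field on a coarse voxel grid over one periodic cell.
n = 32
axis = np.linspace(0, 2 * np.pi, n)
X, Y, Z = np.meshgrid(axis, axis, axis, indexing="ij")
field = gyroid(X, Y, Z)

# Treat the region where the field is negative as the solid phase;
# the fraction of such voxels approximates the scaffold's volume fraction.
volume_fraction = np.mean(field < 0)
print(f"approximate solid volume fraction: {volume_fraction:.2f}")
```

Varying the offset `t` trades off porosity against strength; a mesher (e.g. marching cubes) would then extract the printable surface from this field.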

    Computer-aided Detection of Breast Cancer in Digital Tomosynthesis Imaging Using Deep and Multiple Instance Learning

    Breast cancer is the most common cancer among women worldwide. Early detection, however, greatly improves the chance of successful treatment. Digital breast tomosynthesis (DBT) is a new tomographic technique developed to minimize the limitations of conventional digital mammography screening. A DBT is a quasi-three-dimensional image reconstructed from a small number of two-dimensional (2D) low-dose X-ray images acquired over a limited angular range around the breast. Our research aims to introduce computer-aided detection (CAD) frameworks that detect early signs of breast cancer in DBTs. In this thesis, we propose three CAD frameworks for the detection of breast cancer in DBTs. The first CAD framework is based on hand-crafted feature extraction: targeting three early signs of breast cancer (masses, micro-calcifications, and bilateral asymmetry between the left and right breast), it includes a separate detection channel for each sign. The next two CAD frameworks automatically learn complex patterns from 2D slices using a deep convolutional neural network and deep cardinality-restricted Boltzmann machines. Finally, the CAD frameworks employ a multiple-instance learning approach with a randomized trees algorithm to classify DBT images based on the information extracted from 2D slices. The frameworks operate on 2D slices generated from DBT volumes, and are developed and evaluated using 5,040 2D image slices obtained from 87 DBT volumes. We demonstrate the validity and usefulness of the proposed CAD frameworks in empirical experiments for detecting breast cancer in DBTs.
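The final aggregation step described above, multiple-instance learning over 2D slices with randomized trees, can be sketched as follows. The features and labels here are synthetic stand-ins, and scikit-learn's `ExtraTreesClassifier` (extremely randomized trees) serves as a generic randomized-trees implementation, not the thesis's exact pipeline:

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for features extracted from 2D DBT slices:
# each volume ("bag") contributes several slices ("instances").
n_volumes, slices_per_volume, n_features = 20, 6, 8
bag_labels = rng.integers(0, 2, size=n_volumes)   # volume-level ground truth
X = rng.normal(size=(n_volumes, slices_per_volume, n_features))
X[bag_labels == 1] += 1.5                         # positive bags carry a signal

# Train at the instance level by propagating each bag's label to its slices.
X_inst = X.reshape(-1, n_features)
y_inst = np.repeat(bag_labels, slices_per_volume)
clf = ExtraTreesClassifier(n_estimators=50, random_state=0).fit(X_inst, y_inst)

# Aggregate slice predictions into a volume-level decision with the max rule:
# a volume is flagged if any of its slices looks suspicious.
slice_prob = clf.predict_proba(X_inst)[:, 1].reshape(n_volumes, slices_per_volume)
volume_pred = (slice_prob.max(axis=1) > 0.5).astype(int)
accuracy = np.mean(volume_pred == bag_labels)
print(f"bag-level training accuracy: {accuracy:.2f}")
```

The max rule suits detection tasks, where a single suspicious slice should be enough to flag the whole volume; mean pooling is a common gentler alternative.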

    Implementasi Perbaikan Kualitas Citra Tanaman terhadap Perbedaan Kamera untuk Prediksi Pigmen Fotosintesis berbasis Machine Learning

    Implementation of Plant Image Quality Improvement Based on Machine Learning across Camera Variations to Predict Photosynthetic Pigments. Pigments are natural dyes found in plants and animals. Photosynthesis involves three essential pigments: chlorophyll, carotenoid, and anthocyanin. Pigment analysis can be performed with high-performance liquid chromatography (HPLC) or a spectrophotometer; however, both require considerable resources and time. The Fuzzy Piction Android application, built using the FP3Net model, is therefore an attractive choice for pigment prediction, as it is low-cost and accessible. However, Fuzzy Piction's performance varies with lighting conditions and camera specifications. The experiment used ten sample images each of Jasminum sp., P. betle, Syzygium oleina (green and red variations), and Graptophyllum pictum leaves, captured with three smartphone cameras under three lighting levels. Correction using the 3D-TPS algorithm produced the best SSIM values, in the range 0.9191–0.9797 for the Syzygium oleina green- and red-variation leaf images, and a pigment-prediction MAE of 0.0296–0.0492.
Keywords: 3D-TPS, plant leaves, pigment, image quality improvement
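The SSIM and MAE scores used to evaluate the corrected leaf images can be illustrated with a simplified, single-window SSIM (the study presumably uses the standard locally windowed variant); the images below are synthetic stand-ins, not data from the paper:

```python
import numpy as np

def global_ssim(a, b, data_range=1.0):
    """Global (single-window) SSIM between two images, following Wang et al.'s
    formula. A simplification of the usual locally windowed SSIM."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2)
    )

def mae(a, b):
    """Mean absolute error, as reported for the pigment predictions."""
    return np.abs(a - b).mean()

rng = np.random.default_rng(0)
leaf = rng.random((64, 64))            # stand-in for a corrected leaf image
degraded = np.clip(leaf + rng.normal(0, 0.05, leaf.shape), 0, 1)

print(f"SSIM: {global_ssim(leaf, degraded):.4f}")
print(f"MAE:  {mae(leaf, degraded):.4f}")
```

SSIM is 1.0 only for identical images, so values near the paper's 0.92-0.98 range indicate the corrected images closely match the reference camera's output.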

    CHAMMI: A benchmark for channel-adaptive models in microscopy imaging

    Most neural networks assume that input images have a fixed number of channels (three for RGB images). However, there are many settings where the number of channels may vary, such as microscopy, where the number of channels changes depending on instruments and experimental goals. Yet there has not been a systematic attempt to create and evaluate neural networks that are invariant to the number and type of channels. As a result, trained models remain specific to individual studies and are hardly reusable in other microscopy settings. In this paper, we present a benchmark for investigating channel-adaptive models in microscopy imaging, which consists of 1) a dataset of varied-channel single-cell images, and 2) a biologically relevant evaluation framework. In addition, we adapted several existing techniques to create channel-adaptive models and compared their performance on this benchmark to fixed-channel baseline models. We find that channel-adaptive models can generalize better to out-of-domain tasks and can be computationally efficient. We contribute a curated dataset (https://doi.org/10.5281/zenodo.7988357) and an evaluation API (https://github.com/broadinstitute/MorphEm.git) to facilitate objective comparisons in future research and applications.
Comment: Accepted at the NeurIPS Track on Datasets and Benchmarks, 202
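One simple way to make a model channel-adaptive, in the spirit of the techniques compared above, is to apply a shared per-channel transform and then pool across channels, yielding a fixed-size representation for any channel count. This is an illustrative numpy sketch under that assumption, not one of the paper's specific adapted methods:

```python
import numpy as np

rng = np.random.default_rng(0)

# A shared projection applied to every channel independently; pooling across
# channels then yields a fixed-size embedding however many channels arrive.
h, w, d_embed = 16, 16, 32
W_shared = rng.normal(size=(h * w, d_embed)) / np.sqrt(h * w)

def embed(image):
    """image: (channels, h, w) with an arbitrary channel count."""
    flat = image.reshape(image.shape[0], -1)   # (channels, h*w)
    per_channel = flat @ W_shared              # (channels, d_embed)
    return per_channel.mean(axis=0)            # pool over channels: (d_embed,)

rgb = rng.normal(size=(3, h, w))      # e.g., a brightfield RGB image
stain5 = rng.normal(size=(5, h, w))   # e.g., five fluorescence channels

print("3-channel embedding:", embed(rgb).shape)
print("5-channel embedding:", embed(stain5).shape)
```

Because the projection is shared and the pooling is permutation-invariant, the same weights handle 3-channel and 5-channel inputs without retraining; in a real model the projection would be a learned convolutional stem.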

    Coping with Data Scarcity in Deep Learning and Applications for Social Good

    Recent years have seen an extremely fast evolution of the Computer Vision and Machine Learning fields: several application domains benefit from the newly developed technologies, and industries are investing a growing amount of money in Artificial Intelligence. Convolutional Neural Networks and Deep Learning have substantially contributed to the rise and diffusion of AI-based solutions, creating the potential for many disruptive new businesses. The effectiveness of Deep Learning models rests on the availability of a huge amount of training data. Unfortunately, data collection and labeling are extremely expensive in terms of both time and cost; moreover, they frequently require the collaboration of domain experts. In the first part of the thesis, I investigate methods for reducing the cost of data acquisition for Deep Learning applications in the relatively constrained industrial scenario of visual inspection. I first assess the effectiveness of Deep Neural Networks in comparison with several classical Machine Learning algorithms that require a smaller amount of training data. I then introduce a hardware-based data augmentation approach, which leads to a considerable performance boost by taking advantage of a novel illumination setup designed for this purpose. Finally, I investigate the situation in which acquiring a sufficient number of training samples is not possible, in its most extreme form: zero-shot learning (ZSL), the problem of multi-class classification when no training data is available for some of the classes. Visual features designed for image classification and trained offline have been shown to be useful for ZSL in generalizing towards classes not seen during training.
Nevertheless, I show that recognition performance on unseen classes can be sharply improved by jointly learning an ad hoc semantic embedding (the pre-defined list of present and absent attributes that represents a class) and visual features, to increase the correlation between the two geometrical spaces and ease the metric learning process for ZSL. In the second part of the thesis, I present some successful applications of state-of-the-art Computer Vision, Data Analysis and Artificial Intelligence methods. I illustrate some solutions developed during the 2020 Coronavirus Pandemic for controlling the disease evolution and reducing virus spreading. I describe the first publicly available dataset for the analysis of face-touching behavior, which we annotated and distributed, and present an extensive evaluation of several computer vision methods applied to it. Moreover, I describe the privacy-preserving solution we developed for estimating the "Social Distance" and its violations from a single uncalibrated image in unconstrained scenarios. I conclude the thesis with a Computer Vision solution developed in collaboration with the Egyptian Museum of Turin for digitally unwrapping mummies by analyzing their CT scans, supporting archaeologists during mummy analysis and avoiding the devastating and irreversible process of physically unwrapping the bandages to remove amulets and jewels from the body.
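Zero-shot classification via attribute embeddings, as discussed above, amounts to projecting visual features into the attribute space and picking the nearest class embedding, which also works for classes with no training images. In this minimal sketch the projection `W` is a random stand-in for a mapping that would really be learned on seen classes only, and the attribute vectors are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Class "semantic embeddings": binary vectors of present/absent attributes.
attributes = np.array([
    [1, 0, 1, 0, 1],   # seen class 0
    [0, 1, 1, 0, 0],   # seen class 1
    [1, 1, 0, 1, 0],   # unseen class 2 (no training images available)
], dtype=float)

# Linear map from visual features to attribute space; in a real system this
# is learned on seen classes, here it is a random stand-in.
d_visual = 8
W = rng.normal(size=(d_visual, attributes.shape[1]))

def zsl_predict(visual_feature):
    """Project a visual feature into attribute space, then return the index
    of the nearest class embedding -- including classes unseen in training."""
    projected = visual_feature @ W
    dists = np.linalg.norm(attributes - projected, axis=1)
    return int(np.argmin(dists))

x = rng.normal(size=d_visual)
pred = zsl_predict(x)
print("predicted class:", pred)
```

The thesis's point is that learning the embedding and the visual features jointly, rather than fixing either side, tightens the correlation between the two spaces this nearest-neighbor step relies on.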