29 research outputs found
Audio- and Video-Based Pest Detection System for Fish Farming Ponds
The detection system built in this work aims to detect pests in fish farming ponds. Fish ponds are a source of income, managed as places for cultivating both food fish and ornamental fish. However, pond-based fish farming faces various problems, one of which is pests, especially predators of the cultivated fish; this is shown by the many complaints from fish farmers, particularly in freshwater aquaculture. Based on this problem, the researchers set out to design an audio- and video-based pest detection system for fish farming ponds. In the pest tests, the measured similarity to known pests was above 55%.
Learnable Mixed-precision and Dimension Reduction Co-design for Low-storage Activation
Recently, deep convolutional neural networks (CNNs) have achieved many
eye-catching results. However, deploying CNNs on resource-constrained edge
devices is constrained by limited memory bandwidth for transmitting large
intermediate data during inference, i.e., activations. Existing research
utilizes mixed-precision and dimension reduction to reduce computational
complexity but pays less attention to its application for activation
compression. To further exploit the redundancy in activation, we propose a
learnable mixed-precision and dimension reduction co-design system, which
separates channels into groups and allocates specific compression policies
according to their importance. In addition, the proposed dynamic searching
technique enlarges search space and finds out the optimal bit-width allocation
automatically. Our experimental results show that the proposed methods improve
accuracy by 3.54%/1.27% and save 0.18/2.02 bits per value over existing
mixed-precision methods on ResNet18 and MobileNetV2, respectively.
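The grouped compression idea above can be sketched in a few lines. This is a minimal numpy illustration, not the paper's learned policy: channel importance is approximated by mean absolute value, and the top half of the channels gets a higher bit-width; all shapes and bit-width choices are assumptions for the example.

```python
import numpy as np

def quantize_uniform(x, bits):
    """Uniform affine quantization of x to the given bit-width."""
    lo, hi = float(x.min()), float(x.max())
    levels = 2 ** bits - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    q = np.round((x - lo) / scale)
    return q * scale + lo

def grouped_mixed_precision(activation, bits_important=8, bits_rest=4):
    """Toy grouped mixed-precision compression of an activation tensor.

    Channels are ranked by mean absolute value (a stand-in for the
    learned importance in the paper) and the top half is quantized at
    a higher bit-width than the rest.
    """
    c = activation.shape[0]
    importance = np.abs(activation).reshape(c, -1).mean(axis=1)
    order = np.argsort(importance)[::-1]           # most important first
    out = np.empty_like(activation)
    for ch in order[: c // 2]:
        out[ch] = quantize_uniform(activation[ch], bits_important)
    for ch in order[c // 2:]:
        out[ch] = quantize_uniform(activation[ch], bits_rest)
    return out

rng = np.random.default_rng(0)
act = rng.standard_normal((8, 4, 4)).astype(np.float32)
compressed = grouped_mixed_precision(act)
```

A learnable version would replace the fixed importance ranking and bit-widths with parameters optimized during training, which is the co-design the abstract describes.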
Deep Learning Models for Classification of Lung Diseases
This thesis focuses on the importance of early detection in lung cancer through the use of medical imaging techniques and deep learning models. The current practice of examining nodules larger than 7 mm can delay detection and allow cancerous nodules to grow undetected. The project aims to detect nodules as small as 3 mm to improve the chances of early cancer identification. The use of constrained volume datasets and transfer learning techniques addresses the scarcity of medical data, and deep neural networks are employed for classification and segmentation tasks. Despite the limited dataset, the results demonstrate the effectiveness of the proposed models. Class activation maps and segmentation techniques enhance accuracy and provide insights into the most critical areas for diagnosis. This research contributes to the understanding of lung disease diagnosis and highlights the potential of deep learning in medical imaging. 
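The class activation maps mentioned above localize the image regions driving a classification. A minimal numpy sketch of the standard CAM computation follows; the feature-map and weight shapes are toy values (real networks use e.g. 512 channels of 7x7 maps), not those of the thesis models.

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """Class activation map: weight each final-conv feature map by the
    classifier weight for the target class, sum over channels, and
    normalize to [0, 1] for visualization."""
    cam = np.tensordot(class_weights, feature_maps, axes=1)  # (H, W)
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()
    return cam

feats = np.random.rand(4, 7, 7)   # (channels, H, W), toy values
w = np.random.rand(4)             # classifier weights for one class
heatmap = class_activation_map(feats, w)
```

Upsampled to the input resolution, such a heatmap highlights the areas most critical for the diagnosis, as the abstract notes.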
Segmentation of Benign and Malign lesions on skin images using U-Net
One of the types of cancer that requires early diagnosis is skin cancer, and melanoma is a deadly form of it. Computer-aided systems can detect findings in medical examinations that human perception cannot recognize, and these findings can help clinicians make an early diagnosis; the need for such systems has therefore increased. In this study, a deep learning-based method is proposed that segments melanoma in color images taken from dermoscopy devices. The method uses the ISIC 2017 (International Skin Imaging Collaboration) database, which contains 1403 training and 597 test images, and is based on preprocessing followed by the U-Net architecture. Gaussian and Difference of Gaussians (DoG) filters are used in the preprocessing stage, with the aim of making the skin images more suitable before they are fed to U-Net. In the segmentation performed with these data, the training success rate reached 95-96% and a high similarity coefficient was obtained. After training on the preprocessed data, the accuracy rate reached 85-86%.
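The Difference of Gaussians preprocessing referenced above subtracts a wider blur from a narrower one to enhance edges and lesion boundaries. A self-contained numpy sketch (with an assumed separable blur and illustrative sigma values, not the study's exact parameters):

```python
import numpy as np

def gaussian_kernel(sigma):
    """Normalized 1-D Gaussian kernel with radius ~3 sigma."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    return k / k.sum()

def blur(img, sigma):
    """Separable Gaussian blur with edge padding (same-size output)."""
    k = gaussian_kernel(sigma)
    r = len(k) // 2
    padded = np.pad(img, r, mode="edge")
    tmp = np.apply_along_axis(lambda row: np.convolve(row, k, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda col: np.convolve(col, k, mode="valid"), 0, tmp)

def difference_of_gaussians(img, sigma1=1.0, sigma2=2.0):
    """DoG: narrow blur minus wide blur, a band-pass edge enhancer."""
    return blur(img, sigma1) - blur(img, sigma2)

# impulse response: DoG of a point source gives the classic
# center-surround shape used to highlight lesion borders
img = np.zeros((16, 16))
img[8, 8] = 1.0
dog = difference_of_gaussians(img)
```

The DoG output is positive at the impulse center and negative in a surrounding ring, which is what makes it useful for emphasizing boundaries before segmentation.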
NiftyNet: a deep-learning platform for medical imaging
Medical image analysis and computer-assisted intervention problems are
increasingly being addressed with deep-learning-based solutions. Established
deep-learning platforms are flexible but do not provide specific functionality
for medical image analysis and adapting them for this application requires
substantial implementation effort. Thus, there has been substantial duplication
of effort and incompatible infrastructure developed across many research
groups. This work presents the open-source NiftyNet platform for deep learning
in medical imaging. The ambition of NiftyNet is to accelerate and simplify the
development of these solutions, and to provide a common mechanism for
disseminating research outputs for the community to use, adapt and build upon.
NiftyNet provides a modular deep-learning pipeline for a range of medical
imaging applications including segmentation, regression, image generation and
representation learning applications. Components of the NiftyNet pipeline
including data loading, data augmentation, network architectures, loss
functions and evaluation metrics are tailored to, and take advantage of, the
idiosyncrasies of medical image analysis and computer-assisted intervention.
NiftyNet is built on TensorFlow and supports TensorBoard visualization of 2D
and 3D images and computational graphs by default.
We present 3 illustrative medical image analysis applications built using
NiftyNet: (1) segmentation of multiple abdominal organs from computed
tomography; (2) image regression to predict computed tomography attenuation
maps from brain magnetic resonance images; and (3) generation of simulated
ultrasound images for specified anatomical poses.
NiftyNet enables researchers to rapidly develop and distribute deep learning
solutions for segmentation, regression, image generation and representation
learning applications, or extend the platform to new applications.
Comment: Wenqi Li and Eli Gibson contributed equally to this work. M. Jorge Cardoso and Tom Vercauteren contributed equally to this work. 26 pages, 6 figures; update includes additional applications, updated author list and formatting for journal submission.
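NiftyNet applications are driven by declarative configuration files rather than ad hoc training scripts. The fragment below is an illustrative sketch of that config-driven style for a CT segmentation task; the section and key names are assumptions based on NiftyNet's documented design and should be checked against the project's documentation, not treated as the exact schema.

```ini
; Illustrative NiftyNet-style configuration sketch (key names assumed)
[ct]
path_to_search = ./data/abdominal_ct
filename_contains = CT

[NETWORK]
name = dense_vnet
spatial_window_size = (144, 144, 144)

[TRAINING]
loss_type = Dice
max_iter = 3000

[SEGMENTATION]
image = ct
num_classes = 9
```

A single config like this selects the data, network, loss, and task, which is how the platform keeps segmentation, regression, and generation applications on one shared pipeline.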
Computer-assisted diagnosis for an early identification of lung cancer in chest X rays
Keywords: lung cancer; X-rays; computer-assisted diagnosis. Computer-assisted diagnosis (CAD) algorithms have shown their usefulness for the identification of pulmonary nodules in chest x-rays, but their capability to diagnose lung cancer (LC) is unknown. A CAD algorithm for the identification of pulmonary nodules was created and applied to a retrospective cohort of patients with x-rays performed in 2008 that had not been examined by a radiologist when obtained. The x-rays were sorted according to the probability of a pulmonary nodule and read by a radiologist, and the patients' evolution over the following three years was assessed. The CAD algorithm sorted 20,303 x-rays and defined four subgroups with 250 images each (percentiles ≥ 98, 66, 33 and 0). Fifty-eight pulmonary nodules were identified in the ≥ 98 percentile (23.2%), while only 64 were found in the lower percentiles (8.5%) (p < 0.001). A pulmonary nodule was confirmed by the radiologist in 39 of the 173 patients in the high-probability group who had follow-up information (22.5%), and in 5 of these an LC was diagnosed, with a delay of 11 months (12.8%). In one quarter of the chest x-rays considered high-probability for a pulmonary nodule by the CAD algorithm, the finding was confirmed, and it corresponded to an undiagnosed LC in one tenth of the cases.
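The study's sampling step, sorting all x-rays by CAD probability and drawing a fixed-size subgroup at each percentile cut, can be sketched as follows. This is one plausible reading of the design (the images immediately at or above each percentile threshold); the scores and subgroup size here are synthetic, not the study's data.

```python
import numpy as np

def percentile_subgroups(scores, cuts=(98, 66, 33, 0), size=250):
    """Sort images by CAD nodule probability (descending) and take a
    fixed-size subgroup ending at each percentile cut, mirroring the
    study's four 250-image subsets."""
    order = np.argsort(scores)[::-1]          # highest probability first
    n = len(scores)
    groups = {}
    for p in cuts:
        start = int(np.floor(n * (100 - p) / 100.0)) - size
        start = max(start, 0)
        groups[p] = order[start:start + size]
    return groups

rng = np.random.default_rng(1)
scores = rng.random(1000)                     # synthetic CAD probabilities
groups = percentile_subgroups(scores, size=50)
```

Reading only the top percentile group first is what lets a small radiologist effort cover the images most likely to contain a nodule.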
Suppression of the contrast of ribs in chest radiographs by means of massive training artificial neural network
We developed a method for suppressing the contrast of ribs in chest radiographs by means of a massive training artificial neural network (MTANN). The MTANN is a trainable, highly nonlinear filter that can be trained with input chest radiographs and the corresponding teacher images. We used either the soft-tissue image or the bone image obtained with a dual-energy subtraction technique as the teacher image for suppressing ribs in chest radiographs. When the soft-tissue images were used as the teacher images, the MTANN directly produced a "soft-tissue-image-like" image in which the contrast of ribs was suppressed. When the bone images were used as the teacher images, the MTANN produced a "bone-image-like" image, which was then subtracted from the corresponding chest radiograph to yield a bone-subtracted image in which the ribs are suppressed. Thus, two kinds of rib-suppressed images, i.e., the soft-tissue-image-like image and the bone-subtracted image, could be produced with MTANNs trained on the two different teacher images. We applied each of the two trained MTANNs to non-training chest radiographs to investigate the differences between the processed images. The results showed that the contrast of ribs in the chest radiographs almost disappeared, being reduced to less than 10% in both processed images. The contrast of ribs was reduced slightly more in the soft-tissue-image-like images than in the bone-subtracted images, whereas soft-tissue opacities such as lung vessels and nodules were better maintained in the bone-subtracted images. Therefore, using the bone images as the teacher images for training the MTANN produced better rib-suppressed images in which soft-tissue opacities were substantially maintained. A rib-suppression method based on the MTANN would be useful for radiologists as well as for CAD schemes in the detection of lung diseases such as nodules in chest radiographs.
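The bone-subtraction step described above is a simple pixel-wise operation once the "bone-image-like" prediction exists. A minimal numpy sketch, with the MTANN itself replaced by a given predicted-bone array (the network and the alpha scaling parameter are illustrative assumptions, not the paper's exact procedure):

```python
import numpy as np

def bone_subtracted(radiograph, predicted_bone, alpha=1.0):
    """Rib suppression by subtraction: remove the predicted
    'bone-image-like' component from the chest radiograph.
    alpha scales the suppression strength (illustrative parameter)."""
    out = radiograph - alpha * predicted_bone
    return np.clip(out, 0.0, None)   # keep intensities non-negative

# idealized dual-energy decomposition: radiograph = soft tissue + bone
rng = np.random.default_rng(0)
soft = rng.random((8, 8))
bone = rng.random((8, 8))
chest = soft + bone
recovered = bone_subtracted(chest, bone)
```

With a perfect bone prediction the subtraction recovers the soft-tissue component exactly; in practice the quality of the rib suppression depends on how well the MTANN's output matches the true bone image.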