    Using Machine Learning for Detection of Covid-19

    Currently, the most widely used diagnostic tool for COVID-19 is the RT-PCR nasal swab test recommended by the CDC. However, some studies have shown that chest CT scans have the potential to be more accurate and can detect the virus at earlier stages. Unfortunately, CT results are not instantaneously available, as it may be days before a radiologist can review a scan. This delay is one of the factors preventing the widespread use of CT scans for COVID-19 detection. To address it, this project investigated convolutional neural networks (CNNs), an advanced form of machine learning used for image classification. CNNs have proven very effective at extracting patterns from images and have been used to detect clinical signs of COVID-19. The goal of this project was to develop an improved CNN that could accurately predict whether a patient is COVID-19 positive based on their CT scan, potentially providing a valuable prescreening tool for overwhelmed radiologists.
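To make the image-classification idea concrete, here is a minimal numpy sketch of the CNN building blocks the abstract describes: one convolution, a ReLU, global average pooling, and a sigmoid "COVID-positive" score. All values and the tiny network shape are illustrative assumptions; the actual project would use a trained deep network, not these toy weights.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation of a single-channel image with a kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def tiny_cnn_predict(scan, kernel, weight, bias):
    """One conv layer -> ReLU -> global average pool -> sigmoid score in [0, 1]."""
    feature_map = relu(conv2d(scan, kernel))
    pooled = feature_map.mean()              # global average pooling to a scalar
    logit = weight * pooled + bias
    return 1.0 / (1.0 + np.exp(-logit))      # probability of "COVID-positive"

# Toy 8x8 "scan" and a 3x3 vertical-edge filter (illustrative values only)
rng = np.random.default_rng(0)
scan = rng.random((8, 8))
kernel = np.array([[1., 0., -1.], [1., 0., -1.], [1., 0., -1.]])
p = tiny_cnn_predict(scan, kernel, weight=2.0, bias=-0.5)
print(0.0 <= p <= 1.0)  # True: the output is a valid probability
```

A production pipeline would stack many such conv layers and learn the kernels from labeled CT scans rather than hand-coding them.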

    Research and Development of the Pupil Identification and Warning System using AI-IoT

    Pupils are often left behind in classrooms, houses, or cars, sometimes causing unintended incidents. The reason is that busy, tired parents and caregivers may accidentally leave a pupil in a car or indoors, or forget to pick them up from school. In this paper, we develop an algorithm that uses neural networks to recognize pupils and warn their managers, tested on a prototype built around a Raspberry Pi 4 kit programmed in Python and combined with cameras, a SIM module, and actuators to detect abandoned pupils and alert the responsible manager, so that timely remedial measures can be taken and unfortunate circumstances avoided. To manage students, the system collects and processes images and student information so that the artificial intelligence (AI) component can recognize pupils during operation. Its actuator subsystem issues warnings whenever a pupil is left in a car, classroom, or house, helping to avoid unintended incidents and safety risks.
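The detect-and-alert loop described above can be sketched as pure decision logic, with the neural-network detector and the SIM/actuator hardware stubbed out. The `Frame` type, the grace period, and all timings are hypothetical names introduced here for illustration, not the paper's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    timestamp: float       # seconds since the room/car should be empty
    person_detected: bool  # output of the neural-network detector (stubbed here)

def alert_decision(frames, grace_period=30.0):
    """Return the timestamps at which an 'abandoned pupil' alert should fire.

    An alert fires when a person is still detected after the grace period
    has elapsed (e.g. after class dismissal or after the car is locked).
    On the real device this would trigger the SIM module and actuators.
    """
    alerts = []
    for f in frames:
        if f.person_detected and f.timestamp > grace_period:
            alerts.append(f.timestamp)
    return alerts

# Simulated camera frames: a pupil remains visible 45 s after lockup
frames = [Frame(10.0, True), Frame(25.0, True), Frame(45.0, True), Frame(60.0, False)]
print(alert_decision(frames))  # [45.0]
```

On the Raspberry Pi, `person_detected` would come from running each camera frame through the recognition network, and a non-empty result would drive the SMS warning and actuators.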

    Automated Facial Recognition for Noonan Syndrome Using Novel Deep Convolutional Neural Network With Additive Angular Margin Loss

    Background: Noonan syndrome (NS), a genetically heterogeneous disorder, presents with hypertelorism, ptosis, dysplastic pulmonary valve stenosis, hypertrophic cardiomyopathy, and short stature. Early detection and assessment of NS are crucial to formulating an individualized treatment protocol. However, the diagnostic rate of pediatricians and pediatric cardiologists is limited. To overcome this challenge, we propose an automated facial recognition model that identifies NS using a novel deep convolutional neural network (DCNN) with a loss function called additive angular margin loss (ArcFace). Methods: The proposed facial recognition models were trained on a dataset that included 127 NS patients, 163 healthy children, and 130 children with several other dysmorphic syndromes. The photo dataset contained only one frontal face image per participant. A novel DCNN framework with the ArcFace loss function (DCNN-Arcface model) was constructed. Two traditional machine learning models and a DCNN model with a cross-entropy loss function (DCNN-CE model) were also constructed. Transfer learning and data augmentation were applied during training. Identification performance was assessed by five-fold cross-validation, and the DCNN-Arcface model was compared with the two traditional machine learning models, the DCNN-CE model, and six physicians. Results: In distinguishing NS patients from healthy children, the DCNN-Arcface model achieved an accuracy of 0.9201 ± 0.0138 and an area under the receiver operating characteristic curve (AUC) of 0.9797 ± 0.0055. In distinguishing NS patients from children with several other genetic syndromes, it achieved an accuracy of 0.8171 ± 0.0074 and an AUC of 0.9274 ± 0.0062. In both cases, the DCNN-Arcface model outperformed the two traditional machine learning models, the DCNN-CE model, and the six physicians. Conclusion: This study shows that the proposed DCNN-Arcface model is a promising way to screen for NS and can improve the NS diagnosis rate.
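The key difference between the DCNN-Arcface and DCNN-CE models is where the margin is applied. A minimal numpy sketch of the ArcFace idea follows: the logit of the ground-truth class is computed as s·cos(θ + m) instead of s·cos(θ), which penalizes the target class and forces larger angular separation. The embedding size, class count, and all values here are illustrative, not the paper's trained model.

```python
import numpy as np

def arcface_logits(embedding, class_weights, target, s=30.0, m=0.5):
    """Additive angular margin (ArcFace) logits for one sample.

    embedding:     (d,) feature vector from the backbone DCNN
    class_weights: (num_classes, d) last-layer weight matrix
    target:        index of the ground-truth class
    """
    # Cosine similarity between the L2-normalized embedding and class weights
    e = embedding / np.linalg.norm(embedding)
    w = class_weights / np.linalg.norm(class_weights, axis=1, keepdims=True)
    cos_theta = np.clip(w @ e, -1.0, 1.0)
    theta = np.arccos(cos_theta)
    theta[target] += m          # margin is added only to the ground-truth angle
    return s * np.cos(theta)    # rescale by s before the softmax

def cross_entropy(logits, target):
    logits = logits - logits.max()  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[target])

rng = np.random.default_rng(1)
emb = rng.normal(size=8)
W = rng.normal(size=(3, 8))  # 3 illustrative classes: NS / healthy / other syndromes
loss_arc = cross_entropy(arcface_logits(emb, W, target=0), target=0)
loss_ce = cross_entropy(arcface_logits(emb, W, target=0, m=0.0), target=0)
print(loss_arc >= loss_ce)  # the margin makes the target class harder to fit
```

Setting m=0 recovers a plain normalized-softmax loss, which is why ArcFace can be dropped into the same training pipeline as the cross-entropy baseline.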

    ShortcutFusion: From Tensorflow to FPGA-based accelerator with reuse-aware memory allocation for shortcut data

    Residual blocks are a very common component in recent state-of-the-art CNNs such as EfficientNet and EfficientDet. Shortcut data accounts for nearly 40% of feature-map accesses in ResNet152 [8], yet most previous DNN compilers and accelerators ignore shortcut-data optimization. This paper presents ShortcutFusion, an optimization tool for FPGA-based accelerators with reuse-aware static memory allocation for shortcut data, which maximizes on-chip data reuse under resource constraints. From a TensorFlow DNN model, the proposed design generates instruction sets for groups of nodes, using an optimized data-reuse scheme for each residual block. The accelerator, implemented on a Xilinx KCU1500 FPGA card, significantly outperforms the NVIDIA RTX 2080 Ti, Titan Xp, and GTX 1080 Ti for EfficientNet inference. Compared to the RTX 2080 Ti, the proposed design is 1.35-2.33x faster and 6.7-7.9x more power efficient. Compared to a baseline in which weights, inputs, and outputs are accessed from off-chip memory exactly once per layer, ShortcutFusion reduces DRAM accesses by 47.8-84.8% for RetinaNet, YOLOv3, ResNet152, and EfficientNet. Given a buffer size similar to ShortcutMining [8], which also mines shortcut data in hardware, the proposed work reduces off-chip feature-map accesses by 5.27x while accessing weights from off-chip memory exactly once.
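To see why keeping shortcut data on-chip matters, here is a deliberately simplified traffic model for one residual block. This is my own back-of-the-envelope sketch, not the paper's allocator: it assumes a two-conv block whose intermediate map is spilled once, and counts only feature-map bytes.

```python
def residual_block_dram_traffic(fmap_bytes, keep_shortcut_on_chip):
    """Estimate feature-map DRAM traffic (bytes) for one residual block.

    Simplified model: the block reads its input, spills and re-reads one
    intermediate feature map between its two conv layers, and writes the
    output after adding the shortcut. If the shortcut copy stays on-chip,
    its extra write/read round trip to DRAM is saved.
    """
    traffic = fmap_bytes           # read the block input
    traffic += 2 * fmap_bytes      # write + read the intermediate feature map
    traffic += fmap_bytes          # write the block output
    if not keep_shortcut_on_chip:
        traffic += 2 * fmap_bytes  # spill the shortcut copy and read it back
    return traffic

fmap = 1 << 20  # 1 MiB feature map (illustrative size)
base = residual_block_dram_traffic(fmap, keep_shortcut_on_chip=False)
fused = residual_block_dram_traffic(fmap, keep_shortcut_on_chip=True)
print(f"saved {100 * (base - fused) / base:.0f}% of feature-map DRAM traffic")
```

Even in this crude model, the shortcut round trip is a third of the block's feature-map traffic, which is consistent with the abstract's point that shortcut data is too large a fraction of accesses to ignore.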

    Scalable accelerator for nonuniform multi-word log-quantized neural network

    Logarithmic quantization has many hardware-friendly features, but its lower accuracy under certain conditions has prevented wider adoption. Recently, modified schemes have been proposed that solve the accuracy problem without compromising hardware efficiency by selectively employing multiple words. This, however, causes variable-latency multiplication, demanding a new hardware architecture that supports efficient mapping of large neural network layers as well as various types of convolution layers, such as depthwise separable convolution. In this paper we present a novel hardware architecture for nonuniform multi-word log-quantized neural networks that scales with the number of processing elements (PEs) while maximizing data reuse. Our architecture supports depthwise and pointwise convolution as well as 3D convolution, which are important for recent mobile-friendly networks. We also propose a hardware-software cooperative optimization to reduce the impact of variable-latency multiplication on performance. Experimental results on various convolution layers from MobileNetV2 demonstrate the speed advantage of our architecture and its high scalability with the number of PEs, compared with previous architectures for depthwise separable convolution or log quantization. Our results also show that the optimization is very effective in improving the architecture's performance.
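The core trade-off in the abstract can be shown in a few lines: a single log word turns multiplication into a shift, and a second word restores accuracy at the cost of a second (variable-latency) shift-add. This is a generic sketch of log quantization under my own simplifying assumptions (greedy residual fitting, 4-bit exponents), not the paper's nonuniform scheme.

```python
import numpy as np

def log_quantize(w, bits=4):
    """Quantize a weight to the nearest signed power of two (one log word)."""
    sign = 1.0 if w >= 0 else -1.0
    exp = int(np.clip(np.round(np.log2(abs(w) + 1e-12)),
                      -(2 ** (bits - 1)), 2 ** (bits - 1) - 1))
    return sign, exp

def two_word_log_quantize(w, bits=4):
    """Greedy two-word scheme: w ~= s1*2**e1 + s2*2**e2. The optional second
    word improves accuracy but makes multiplication latency variable."""
    s1, e1 = log_quantize(w, bits)
    s2, e2 = log_quantize(w - s1 * 2.0 ** e1, bits)
    return (s1, e1), (s2, e2)

def log_multiply(x_int, sign, exp):
    """Multiply an integer activation by one log word using only a shift."""
    shifted = x_int << exp if exp >= 0 else x_int >> -exp
    return int(sign) * shifted

sign, exp = log_quantize(0.26)        # 0.26 ~= 2**-2
print(log_multiply(64, sign, exp))    # 64 >> 2 = 16, i.e. ~64 * 0.26

(s1, e1), (s2, e2) = two_word_log_quantize(0.26)
approx = s1 * 2.0 ** e1 + s2 * 2.0 ** e2
print(abs(approx - 0.26) < abs(2.0 ** exp - 0.26))  # two words fit closer
```

Because some weights need one word and others two, per-weight multiply latency varies, which is exactly the irregularity the proposed architecture and hardware-software optimization are designed to absorb.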