110 research outputs found

    Investigation of ConViT on COVID-19 Lung Image Classification and the Effects of Image Resolution and Number of Attention Heads

    COVID-19 has been a major focus of the research community since its first outbreak in China in 2019. Radiological patterns such as ground glass opacity (GGO) and consolidations are often found in CT scan images of moderate to severe COVID-19 patients. Therefore, a deep learning model can be trained to distinguish COVID-19 patients using their CT scan images. Convolutional Neural Networks (CNNs) have been a popular choice for this type of classification task. Another potential method is a vision transformer combined with convolution, the Convolutional Vision Transformer (ConViT), which may produce on-par performance using fewer computational resources. In this study, ConViT is applied to diagnose COVID-19 cases from lung CT scan images. In particular, we investigated the input image resolution and the number of attention heads used in ConViT and their effects on the model's performance. Specifically, we trained the model at 512x512, 224x224, and 128x128 pixel resolutions with 4 (tiny), 9 (small), and 16 (base) attention heads. An open-access dataset consisting of 2282 COVID-19 CT images and 9776 Normal CT images from Iran is used in this study. Using 128x128 resolution and 16 attention heads, the ConViT model achieved an accuracy of 98.01%, sensitivity of 90.83%, specificity of 99.69%, positive predictive value (PPV) of 95.58%, negative predictive value (NPV) of 97.89%, and F1-score of 94.55%. The model also outperformed other recent studies that used the same dataset. In conclusion, this study has shown that the ConViT model can play a meaningful role in complementing the RT-PCR test for COVID-19 close contacts and patients.
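    All six reported metrics derive from the four confusion-matrix counts. A minimal sketch of those definitions (the counts below are illustrative placeholders, not the paper's test-set results):

```python
def classification_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    accuracy    = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # recall on the positive (COVID-19) class
    specificity = tn / (tn + fp)   # recall on the negative (Normal) class
    ppv         = tp / (tp + fp)   # positive predictive value (precision)
    npv         = tn / (tn + fn)   # negative predictive value
    f1          = 2 * ppv * sensitivity / (ppv + sensitivity)
    return accuracy, sensitivity, specificity, ppv, npv, f1

# Hypothetical counts, for illustration only:
acc, sens, spec, ppv, npv, f1 = classification_metrics(tp=109, fp=5, tn=970, fn=11)
print(f"accuracy={acc:.4f} sensitivity={sens:.4f} f1={f1:.4f}")
```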

    The photometric observation of the quasi-simultaneous mutual eclipse and occultation between Europa and Ganymede on 22 August 2021

    Mutual events (MEs) are eclipses and occultations among planetary natural satellites. Most of the time, eclipses and occultations occur separately. However, the same satellite pair will exhibit an eclipse and an occultation quasi-simultaneously under particular orbital configurations. This kind of rare event is termed a quasi-simultaneous mutual event (QSME). During the 2021 campaign of mutual events of the Jovian satellites, we observed a QSME between Europa and Ganymede. The present study aims to describe and study the event in detail. We observed the QSME with a CCD camera attached to a 300-mm telescope at the Hong Kong Space Museum Sai Kung iObservatory. We obtained the combined flux of Europa and Ganymede from aperture photometry. A geometric model was developed to explain the observed light curve. Our results are compared with theoretical predictions (O-C). We found that our simple geometric model can explain the QSME fairly accurately, and that the QSME light curve is a superposition of the light curves of an eclipse and an occultation. Notably, the observed flux drops are within 2.6% of the theoretical predictions. The event central-time O-Cs range from -14.4 to 43.2 s. Both the flux-drop and timing O-Cs are comparable to those of other studies adopting more complicated models. Given the rarity of such events and the simplicity and accuracy of the model, we encourage more observations and analyses of QSMEs to improve Solar System ephemerides. (Comment: 23 pages, 5 appendices, 16 figures, 7 tables)
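    The superposition idea can be illustrated with a toy model: the normalized combined flux of the pair is the sum of each moon's flux, with the occultation dimming one moon and the eclipse dimming the other. The dip shapes, depths, widths, and mid-times below are arbitrary placeholders, not the paper's fitted geometric model.

```python
import math

F_EUROPA, F_GANYMEDE = 1.0, 1.4          # relative unobscured fluxes (assumed)

def dip(t, t_mid, depth, width):
    """Smooth Gaussian-shaped fractional flux loss, a crude stand-in for
    the true geometric overlap of two disks."""
    return depth * math.exp(-0.5 * ((t - t_mid) / width) ** 2)

def combined_flux(t):
    """Normalized total flux: the occultation dims Europa while the
    eclipse dims Ganymede, so the curve is a superposition of two dips."""
    f_e = F_EUROPA   * (1.0 - dip(t, t_mid=0.0,  depth=0.20, width=60.0))
    f_g = F_GANYMEDE * (1.0 - dip(t, t_mid=90.0, depth=0.15, width=80.0))
    return (f_e + f_g) / (F_EUROPA + F_GANYMEDE)

curve = [combined_flux(t) for t in range(-300, 301, 10)]
print(min(curve))                        # deepest point of the blended dips
```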

    Automatic classification of flying bird species using computer vision techniques [forthcoming]

    Bird populations are identified as important biodiversity indicators, so collecting reliable population data is important to ecologists and scientists. However, existing manual monitoring methods are labour-intensive, time-consuming, and potentially error-prone. The aim of our work is to develop a reliable automated system, capable of classifying the species of individual birds, during flight, using video data. This is challenging, but appropriate for use in the field, since there is often a requirement to identify birds in flight, rather than while stationary. We present our work, which uses a new and rich set of appearance features for classification from video. We also introduce motion features including curvature and wing beat frequency. Combined with a Normal Bayes classifier and a Support Vector Machine (SVM) classifier, we present experimental evaluations of our appearance and motion features across a data set comprising 7 species. Using our appearance feature set alone we achieved classification rates of 92% and 89% (using the Normal Bayes and SVM classifiers respectively), which significantly outperforms a recent comparable state-of-the-art system. Using motion features alone we achieved a lower classification rate, which motivates our ongoing work seeking to combine these appearance and motion features to achieve even more robust classification.
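    A Normal Bayes classifier of the kind mentioned above fits a Gaussian per class and picks the class with the highest likelihood. A minimal sketch on a single made-up motion feature (wing-beat frequency in Hz); the species names and numbers are illustrative, not the paper's dataset or feature set:

```python
import math

TRAIN = {
    "gull":   [2.8, 3.1, 3.0, 2.9, 3.2],   # hypothetical wing-beat samples
    "pigeon": [5.6, 6.0, 5.8, 6.2, 5.9],
}

def fit(samples):
    """Per-class mean and variance of the feature."""
    model = {}
    for species, xs in samples.items():
        mu = sum(xs) / len(xs)
        var = sum((x - mu) ** 2 for x in xs) / len(xs)
        model[species] = (mu, var)
    return model

def log_likelihood(x, mu, var):
    return -0.5 * math.log(2 * math.pi * var) - (x - mu) ** 2 / (2 * var)

def classify(model, x):
    """Maximum-likelihood class under the per-class Gaussian (equal priors)."""
    return max(model, key=lambda s: log_likelihood(x, *model[s]))

model = fit(TRAIN)
print(classify(model, 3.0), classify(model, 5.9))   # → gull pigeon
```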

    Classification of bird species from video using appearance and motion features

    The monitoring of bird populations can provide important information on the state of sensitive ecosystems; however, the manual collection of reliable population data is labour-intensive, time-consuming, and potentially error-prone. Automated monitoring using computer vision is therefore an attractive proposition, which could facilitate the collection of detailed data on a much larger scale than is currently possible. A number of existing algorithms are able to classify bird species from individual high-quality detailed images, often using manual inputs (such as a priori parts labelling). However, deployment in the field necessitates fully automated in-flight classification, which remains an open challenge due to poor image quality, high and rapid variation in pose, and the similar appearance of some species. We address this as a fine-grained classification problem, and have collected a video dataset of thirteen bird classes (ten species, plus three colour variants of another species) for training and evaluation. We present our proposed algorithm, which selects effective features from a large pool of appearance and motion features. We compare our method to others which use appearance features only, including image classification using state-of-the-art Deep Convolutional Neural Networks (CNNs). Using our algorithm we achieved a 90% correct classification rate, and we also show that using effectively selected motion and appearance features together can produce results which outperform state-of-the-art single-image classifiers. We also show that the most significant motion features improve correct classification rates by 7% compared to using appearance features alone.
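    One common way to select effective features from a large pool is greedy forward selection. The sketch below shows the search skeleton only; the scoring function is a labelled stand-in (it rewards a hand-picked "useful" subset), whereas a real system would score cross-validated classification rate, and the feature names are invented:

```python
USEFUL = {"wing_beat_freq", "curvature", "colour_hist"}   # hypothetical

def score(subset):
    """Placeholder for a cross-validation score of a classifier trained on
    `subset`: useful features earn a point, subset size is lightly penalized."""
    return len(subset & USEFUL) - 0.01 * len(subset)

def forward_select(pool, k):
    """Greedily add the feature that most improves the score, up to k times."""
    chosen = set()
    for _ in range(k):
        best = max(pool - chosen, key=lambda f: score(chosen | {f}))
        if score(chosen | {best}) <= score(chosen):
            break                       # no remaining feature helps: stop early
        chosen.add(best)
    return chosen

pool = {"wing_beat_freq", "curvature", "colour_hist", "noise_a", "noise_b"}
print(sorted(forward_select(pool, k=5)))    # the noise features are rejected
```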

    Chronic kidney disease and arrhythmias: conclusions from a Kidney Disease: Improving Global Outcomes (KDIGO) Controversies Conference.

    Patients with chronic kidney disease (CKD) are predisposed to heart rhythm disorders, including atrial fibrillation (AF)/atrial flutter, supraventricular tachycardias, ventricular arrhythmias, and sudden cardiac death (SCD). While treatment options, including drug, device, and procedural therapies, are available, their use in the setting of CKD is complex and limited. Patients with CKD and end-stage kidney disease (ESKD) have historically been under-represented in or excluded from randomized trials of arrhythmia treatment strategies [1], although this situation is changing [2]. Cardiovascular society consensus documents have recently identified evidence gaps for treating patients with CKD and heart rhythm disorders [...]

    GDNF Secreting Human Neural Progenitor Cells Protect Dying Motor Neurons, but Not Their Projection to Muscle, in a Rat Model of Familial ALS

    Amyotrophic lateral sclerosis (ALS) is a fatal, progressive neurodegenerative disease characterized by rapid loss of muscle control and eventual paralysis due to the death of large motor neurons in the brain and spinal cord. Growth factors such as glial cell line derived neurotrophic factor (GDNF) are known to protect motor neurons from damage in a range of models. However, penetrance through the blood-brain barrier and delivery to the spinal cord remain serious challenges. Although there may be a primary dysfunction in the motor neuron itself, there is also increasing evidence that excitotoxicity due to glial dysfunction plays a crucial role in disease progression. Clearly it would be of great interest if wild-type glial cells could ameliorate motor neuron loss in these models, perhaps in combination with the release of growth factors such as GDNF. Human neural progenitor cells can be expanded in culture for long periods and survive transplantation into the adult rodent central nervous system, in some cases making large numbers of GFAP-positive astrocytes. They can also be genetically modified to release GDNF (hNPC-GDNF) and thus act as long-term 'mini pumps' in specific regions of the rodent and primate brain. In the current study we genetically modified human neural stem cells to release GDNF and transplanted them into the spinal cord of rats over-expressing mutant SOD1 (SOD1-G93A). Following unilateral transplantation into the spinal cord of SOD1-G93A rats there was robust cellular migration into degenerating areas, efficient delivery of GDNF, and remarkable preservation of motor neurons at early and end stages of the disease within chimeric regions. The progenitors retained immature markers, and those not secreting GDNF had no effect on motor neuron survival. Interestingly, this robust motor neuron survival was not accompanied by continued innervation of muscle end plates and thus resulted in no improvement in ipsilateral limb use. The potential to maintain dying motor neurons by delivering GDNF using neural progenitor cells represents a novel and powerful treatment strategy for ALS. While this approach represents a unique way to prevent motor neuron loss, our data also suggest that additional strategies may be required for maintenance of neuromuscular connections and full functional recovery. However, simply maintaining motor neurons in patients would be the first step of a therapeutic advance for this devastating and incurable disease, while future strategies focus on the maintenance of the neuromuscular junction.

    Fano multiple-symbol differential detectors

    Multiple-symbol differential detection (MSDD) is a robust maximum-likelihood receiver for frequency-nonselective fast Rayleigh fading channels. However, its complexity grows exponentially with the block size. Recently, the multiple-symbol differential sphere decoder (MSDSD) was developed to alleviate this problem, but its complexity still grows exponentially at low signal-to-noise ratio (SNR). This work investigates the possibility of using a Fano decoder as an efficient MSDD. The resulting detector, Fano-MSDD, is an "intelligent" decision-feedback detector (DFD) that uses a running threshold and an accumulated path metric as navigation tools when it roams the decoding tree. Our results indicate that Fano-MSDD is more attractive than DFD from the perspectives of error performance and complexity. When compared to MSDSD, our best Fano-MSDD suffers a small degradation in power efficiency. However, its complexity is a stable function of SNR. Furthermore, with the extension of Fano-MSDD to differential space-time modulation, our receivers compare even more favourably with other reduced-complexity techniques.
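    The exponential-complexity baseline being avoided can be sketched directly: for M-DPSK over a block of N samples, exhaustive MSDD tries all M**(N-1) differential data sequences and keeps the one maximizing the noncoherent metric |sum_k r_k conj(s_k)|; Fano-MSDD and MSDSD search this same tree cleverly instead of enumerating it. The block size, modulation order, and channel gain below are illustrative choices, not the paper's settings.

```python
import cmath
import itertools

M, N = 4, 5                      # 4-DPSK, block of 5 received samples (assumed)
ALPHABET = [cmath.exp(2j * cmath.pi * m / M) for m in range(M)]

def tx_block(data):
    """Differentially encode N-1 data symbols into N transmitted symbols,
    with the reference symbol fixed to 1."""
    s = [1 + 0j]
    for d in data:
        s.append(s[-1] * ALPHABET[d])
    return s

def msdd(r):
    """Exhaustive multiple-symbol differential detection: brute-force the
    noncoherent ML metric over all M**(N-1) candidate data sequences."""
    best, best_metric = None, -1.0
    for data in itertools.product(range(M), repeat=N - 1):
        s = tx_block(data)
        metric = abs(sum(rk * sk.conjugate() for rk, sk in zip(r, s)))
        if metric > best_metric:
            best, best_metric = data, metric
    return best

data = (1, 3, 0, 2)
# Noiseless channel with an unknown complex gain (phase offset + attenuation):
r = [0.9 * cmath.exp(1j * 0.7) * s for s in tx_block(data)]
print(msdd(r))                   # → (1, 3, 0, 2): recovered despite the offset
```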