
    Discovery of a new supernova remnant G150.3+4.5

    Large-scale radio continuum surveys have good potential for discovering new Galactic supernova remnants (SNRs). Surveys of the Galactic plane are often limited to Galactic latitudes of |b| ~ 5 degrees. SNRs at high latitudes, such as the Cygnus Loop or CTA 1, cannot be detected by surveys with such limited latitude coverage. Using the available Urumqi 6 cm Galactic plane survey data, together with maps from the ongoing extended 6 cm medium-latitude survey, we wish to discover new SNRs over a large sky area. We searched for shell-like structures and calculated radio spectra using the Urumqi 6 cm, Effelsberg 11 cm, and 21 cm survey data. Radio polarized emission and evidence at other wavelengths were also examined for the characteristics of SNRs. We discovered an enclosed oval-shaped object, G150.3+4.5, in the 6 cm survey map. It is about 2.5 degrees wide and 3 degrees high. Parts of the shell structures can be identified well in the 11 cm, 21 cm, and 73.5 cm observations. The Effelsberg 21 cm total intensity image resembles most of the structures of G150.3+4.5 seen at 6 cm, but the loop is not closed in the northwest. High-resolution images at 21 cm and 73.5 cm from the Canadian Galactic Plane Survey confirm the extended emission from the eastern and western shells of G150.3+4.5. We calculated the radio continuum spectral indices of the eastern and western shells, which are β ∼ −2.4 and β ∼ −2.7 between 6 cm and 21 cm, respectively. The shell-like structures and their non-thermal nature strongly suggest that G150.3+4.5 is a shell-type SNR. For other objects in the field of view, G151.4+3.0 and G151.2+2.6, we confirm that the shell-like structure G151.4+3.0 very likely has an SNR origin, while the circular-shaped G151.2+2.6 is an HII region with a flat radio spectrum, associated with optical filamentary structure, Hα, and infrared emission. Comment: 5 pages, 3 figures, accepted for publication in Astronomy and Astrophysics.
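
    A minimal sketch of the spectral-index calculation the abstract refers to (the actual fitting procedure is not given there): with the brightness temperature scaling as a power law in frequency, the brightness-temperature spectral index β between two survey bands follows directly from the two measured temperatures,

    \[
    T_b \propto \nu^{\beta}
    \quad\Longrightarrow\quad
    \beta = \frac{\ln\!\left(T_b(\nu_1)/T_b(\nu_2)\right)}{\ln\!\left(\nu_1/\nu_2\right)},
    \qquad \nu_1 \approx 4.8\,\mathrm{GHz}\ (6\ \mathrm{cm}),\quad \nu_2 \approx 1.4\,\mathrm{GHz}\ (21\ \mathrm{cm}).
    \]

    Under the common convention S_ν ∝ ν^α with β = α − 2, the quoted β ∼ −2.4 and −2.7 correspond to flux-density spectral indices α ∼ −0.4 and −0.7, values typical of non-thermal, shell-type SNRs.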

    Application of deep learning neural network for classification of TB lung CT images based on patches

    In this work, a convolutional neural network (CNN) is applied to classify five types of tuberculosis (TB) in lung CT images. In doing so, each image is segmented into rectangular patches with width and height varying between 20 and 55 pixels, which are later normalised to 30x30 pixels. While classifying TB types, six instead of five categories are distinguished: group 6 houses those patches/segments that are common to most of the other types, or background. In this way, although less than 10% of each 3D dataset consists of distinguishable volumes used for training, the rest remains part of the learning cycle by taking part in the classification, leading to an automated process for differentiating the five types of TB. When tested against 300 datasets, the Kappa value is 0.2187, ranking 5th among 23 submissions. However, the accuracy (ACC) is 0.4067, the highest in this competition on classification of TB types.
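
    A minimal sketch of the patch-preparation step described above, assuming rectangular regions of side 20 to 55 pixels are cropped from each CT slice and resized to 30x30 before training. The function name, the random side lengths and the use of OpenCV are illustrative assumptions, not the authors' code.

# Sketch of patch extraction and normalisation (illustrative, not the authors' code).
# Assumes 2D CT slices as NumPy arrays; patch sides vary between 20 and 55 pixels
# and every patch is resized to 30x30 before CNN training, as the abstract describes.
import numpy as np
import cv2

def extract_patches(slice_2d, step=30, min_side=20, max_side=55, out_side=30):
    """Crop rectangular patches of varying side length and resize them to out_side x out_side."""
    rng = np.random.default_rng(0)
    h, w = slice_2d.shape
    patches = []
    for y in range(0, h - max_side, step):
        for x in range(0, w - max_side, step):
            ph = rng.integers(min_side, max_side + 1)   # patch height in [20, 55]
            pw = rng.integers(min_side, max_side + 1)   # patch width in [20, 55]
            patch = slice_2d[y:y + ph, x:x + pw]
            # cv2.resize expects (width, height); normalise every patch to 30x30
            patches.append(cv2.resize(patch.astype(np.float32), (out_side, out_side)))
    return np.stack(patches)   # shape: (n_patches, 30, 30)

# Usage: patches = extract_patches(ct_slice); labels then cover six classes
# (five TB types plus a sixth "common/background" group, as in the abstract).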

    Prediction of multidrug-resistant TB from CT pulmonary images based on deep learning techniques

    While tuberculosis (TB) was discovered more than a century ago, it has not yet been eradicated. Quite the contrary: at present, TB constitutes one of the top 10 causes of death and has shown signs of increasing. To complement the conventional diagnostic procedure of microbiological culture, which takes several weeks and remains expensive, high-resolution computed tomography (CT) of pulmonary images has been resorted to, not only to aid clinicians in expediting diagnosis but also to monitor prognosis when administering antibiotic drugs. This research investigates distinguishing multidrug-resistant (MDR) patients from drug-sensitive (DS) ones based on CT lung images, in order to monitor the effectiveness of treatment. To contend with smaller datasets (i.e. in the hundreds) and the characteristics of CT TB images, in which only limited regions capture abnormalities, a patch-based deep convolutional neural network (CNN) allied to a support vector machine (SVM) classifier is implemented on a collection of datasets from 230 patients obtained from the ImageCLEF 2017 competition. As a result, the proposed CNN+SVM+patch architecture performs best, with a classification accuracy of 91.11% (79.80% in terms of patches). In addition, a hand-crafted SIFT-based approach achieves 88.88% in terms of subjects and 83.56% with reference to patches, the highest patch-level result in this study, which can be explained by the fact that the datasets are small. Significantly, during the Tuberculosis competition at ImageCLEF 2017, the authors took part in the task of classifying five types of TB and achieved first place with regard to averaged classification accuracy (ACC = 0.4067), also based on the CNN+SVM+patch approach. On the other hand, when whole slices of the 3D TB datasets are used to train a CNN, the best result, a 64.71% accuracy rate, is achieved by a CNN coupled with orderless pooling and an SVM.
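
    A hedged sketch of the CNN+SVM+patch pipeline outlined above: patch-level CNN features are classified by an SVM, and the patch decisions are aggregated into a subject-level MDR/DS prediction by majority vote. The feature extractor is assumed to run beforehand, and the voting rule is an illustrative assumption; the abstract does not specify either.

# Illustrative CNN-feature + SVM pipeline with patch-to-subject majority voting.
# X_patches: (n_patches, d) CNN features, y_patches: 0 = DS, 1 = MDR,
# subject_ids: NumPy array giving the patient each patch belongs to (all assumed inputs).
import numpy as np
from sklearn.svm import SVC

def train_patch_svm(X_patches, y_patches):
    clf = SVC(kernel="rbf", C=1.0)   # SVM classifier on top of CNN features
    clf.fit(X_patches, y_patches)
    return clf

def predict_subjects(clf, X_patches, subject_ids):
    """Aggregate patch predictions into one MDR/DS label per subject (majority vote)."""
    patch_pred = clf.predict(X_patches)
    subject_pred = {}
    for sid in np.unique(subject_ids):
        votes = patch_pred[subject_ids == sid]
        subject_pred[sid] = int(votes.mean() >= 0.5)   # 1 = MDR if most patches vote MDR
    return subject_pred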

    Segmentation of brain lesions from CT images based on deep learning techniques

    While computerised tomography (CT) may have been the first clinical tool for studying the human brain when an abnormality related to the brain is suspected, the volumes of CT lesions are usually disregarded due to variations among inter-subject measurements. This research responds to this challenge by applying state-of-the-art deep learning techniques to automatically delineate the boundaries of abnormal features, including tumour, associated edema and head injury, benefiting both patients and clinicians in making timely, accurate clinical decisions. The challenge with applying deep-learning-based techniques in the medical domain remains that they require datasets in great abundance, whilst medical data tend to be in small numbers. This work, built on the large field of view of the DeepLab convolutional neural network for semantic segmentation, highlights both semantics-based and patch-based segmentation approaches to differentiate tumour, lesion and background in the brain. In addition, fusions with a number of other methods to fine-tune regional borders are also explored, including conditional random fields (CRF) and multiple scales (MS). With regard to pixel-level accuracy, the accuracy rates averaged over tumour, lesion and background amount to 82.9%, 85.7%, 85.3% and 81.3% when applying DeepLab, DeepLab with MS, DeepLab with MS and CRF, and patch-based pixel-wise classification, respectively. In terms of intersection over union between the segmented and ground-truth regions, the rates are 70.3%, 75.1%, 77.2% and 63.6%, respectively, implying that overall DeepLab fused with MS and CRF performs best.
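
    The two figures of merit quoted above, pixel-level accuracy and intersection over union, can be computed per class from the predicted and ground-truth label maps. The short sketch below uses NumPy and hypothetical class ids purely for illustration.

# Per-class pixel accuracy and intersection-over-union (IoU) for a segmentation map.
# pred and gt are integer label maps of the same shape, e.g. 0 = background,
# 1 = lesion, 2 = tumour (class ids are illustrative assumptions).
import numpy as np

def pixel_accuracy(pred, gt, cls):
    mask = (gt == cls)
    # fraction of ground-truth pixels of this class that are labelled correctly
    return float((pred[mask] == cls).mean()) if mask.any() else 1.0

def iou(pred, gt, cls):
    p, g = (pred == cls), (gt == cls)
    inter = np.logical_and(p, g).sum()
    union = np.logical_or(p, g).sum()
    return float(inter) / float(union) if union else 1.0

# Averaging these scores over tumour, lesion and background gives numbers comparable
# to the 81.3-85.7% accuracy and 63.6-77.2% IoU figures reported in the abstract.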

    Patch-based deep learning approaches for artefact detection of endoscopic images

    This paper constitutes our work in the EAD2019 competition. In this competition, for the segmentation (task 2) of five types of artefact, a patch-based fully convolutional neural network (FCN) allied to a support vector machine (SVM) classifier is implemented, aiming to contend with smaller datasets (i.e. hundreds of images) and the characteristics of endoscopic images, in which only limited regions capture artefacts (e.g. bubbles, specularity). In comparison with a conventional CNN and other state-of-the-art approaches (e.g. DeepLab) processing whole images, this patch-based FCN appears to achieve the best results.
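
    A minimal sketch of how patch-based segmentation of a whole endoscopic frame could proceed: the image is tiled into patches, each patch is scored by a patch-level model (the FCN-plus-SVM in the paper, represented here by a generic predict_patch callable, which is an assumption), and the patch decisions are stitched back into a coarse artefact map.

# Tile an endoscopic frame into patches, classify each patch, and stitch the
# decisions back into a coarse artefact map (illustrative sketch only).
import numpy as np

def patchwise_segment(image, predict_patch, patch=64, stride=64):
    """predict_patch(patch_pixels) -> artefact class id (0 = clean, 1..5 = artefact types)."""
    h, w = image.shape[:2]
    out = np.zeros((h, w), dtype=np.int32)
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            cls = predict_patch(image[y:y + patch, x:x + patch])
            out[y:y + patch, x:x + patch] = cls   # paint the whole tile with its class
    return out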

    Acoustic detection of air shower cores

    At an altitude of 1890 m, a pre-test with an air shower (AS) core selector and a small acoustic array, set up in an anechoic pool with a volume of 20x7x7 cubic metres, was performed beginning in August 1984. In analysing the waveforms recorded during the effective working time of 186 hours, three acoustic signals that cannot be explained by any source other than AS cores were obtained, and an estimate of the related parameters was made.

    Modelling of chromatic contrast for retrieval of wallpaper images

    Colour remains one of the key factors in presenting an object and has consequently been widely applied in retrieving images based on their visual content. However, colour appearance changes with the viewing surroundings, a phenomenon that has not yet received attention in colour-based image retrieval. To account for this effect, a chromatic contrast model, CAMcc, is developed in this paper for the retrieval of colour-intensive images, filling the gap left by most existing colour models by taking simultaneous colour contrast into account. Subsequently, the model is applied to a retrieval task on a collection of colour-rich museum wallpaper images. In comparison with currently popular colour models, including CIECAM02, HSI and RGB, and with respect to both foreground and background colours, CAMcc appears to outperform the others, with retrieved results being closer to the query images. In addition, CAMcc focuses more on foreground colours, in particular by maintaining the balance between foreground and background colours, whereas the other models take on the dominant colours that are perceived the most, usually background tones. Significantly, the contribution of this investigation lies not only in improving the accuracy of colour-based image retrieval, but also in the development of a colour contrast model that warrants an important place in colour and computer vision theory, helping to decipher the age-old topic of chromatic contrast in colour science.
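
    As an illustration of the retrieval step only (not of the CAMcc model itself, whose equations are not given in the abstract), images can be ranked by the distance between their colour descriptors. The describe function below is a placeholder that could be backed by CAMcc, CIECAM02, HSI or RGB features; only the distance-and-ranking logic is shown.

# Rank a wallpaper collection against a query image by colour-descriptor distance.
# describe(image) -> 1D feature vector is a placeholder for any colour model
# (CAMcc, CIECAM02, HSI, RGB); the distance and ranking logic is the generic part.
import numpy as np

def rank_by_colour(query, collection, describe):
    q = describe(query)
    dists = [np.linalg.norm(describe(img) - q) for img in collection]
    return np.argsort(dists)   # indices of collection images, closest match first

# A CAMcc-style descriptor would additionally weight foreground colours and model
# simultaneous contrast against the background, which the abstract credits for the
# improved ranking over CIECAM02, HSI and RGB.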