16 research outputs found

    Extended Constraint Mask Based One-Bit Transform for Low-Complexity Fast Motion Estimation

    In this paper, an improved motion estimation (ME) approach based on the weighted constrained one-bit transform is proposed for block-based ME in video encoders. Binary ME approaches use a low bit-depth representation of the original image frames together with a Boolean exclusive-OR based, hardware-efficient matching criterion to decrease the computational burden of the ME stage. The weighted constrained one-bit transform (WC-1BT) approach improves on conventional C-1BT based ME by employing a 2-bit depth constraint mask instead of a 1-bit depth mask. In this work, the range of the constraint mask is extended further to increase the ME performance of the WC-1BT approach. Experiments reveal that the proposed method provides better ME accuracy than similar existing ME methods in the literature.
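The constrained binary matching at the heart of these methods can be sketched as follows; this is a minimal NumPy sketch assuming pre-binarized frames and an already-computed constraint (weight) mask. The mask construction and the extended weight range of the proposed method are not reproduced here.

```python
import numpy as np

def weighted_xor_cost(bin_a, bin_b, mask):
    # Mismatching binary pixels (XOR) weighted by the constraint mask;
    # in WC-1BT the mask holds 2-bit weights, and the proposed method
    # extends that range further.
    return int(np.sum(mask * (bin_a ^ bin_b)))

def full_search(cur_block, ref_frame, mask, origin, search=2):
    # Exhaustive full search over a +/-search window around `origin`
    # (top-left corner of the co-located block in the reference frame).
    bh, bw = cur_block.shape
    oy, ox = origin
    best_mv, best_cost = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = oy + dy, ox + dx
            if y < 0 or x < 0 or y + bh > ref_frame.shape[0] or x + bw > ref_frame.shape[1]:
                continue
            cost = weighted_xor_cost(cur_block, ref_frame[y:y + bh, x:x + bw], mask)
            if cost < best_cost:
                best_mv, best_cost = (dy, dx), cost
    return best_mv, best_cost
```

With a mask of all ones this reduces to the plain number of non-matching points used by unconstrained 1BT matching.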

    Efficient hardware implementations of low bit depth motion estimation algorithms

    In this paper, we present efficient hardware implementations of multiplication-free one-bit transform (MF-1BT) based and constrained one-bit transform (C-1BT) based motion estimation (ME) algorithms, providing low bit-depth full-search block ME hardware for real-time video encoding. We use a source pixel based linear array (SPBLA) hardware architecture for low bit-depth ME for the first time in the literature. The proposed SPBLA based implementation yields a genuine data-flow scheme that significantly reduces the number of data reads from the current block memory, which in turn reduces power consumption by at least 50% compared to the conventional 1BT based ME hardware architecture in the literature. Because of the binary nature of low bit-depth ME algorithms, their hardware architectures are more efficient than existing 8 bits/pixel representation based ME architectures.
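The hardware efficiency comes from the matching criterion: on binary frames a per-pixel comparison is a single XOR, and a block row fits in one machine word, so one XOR plus a popcount scores a whole row. A hypothetical software model of that word-level data path (not the SPBLA architecture itself):

```python
def pack_row(bits):
    # Pack a row of 0/1 pixels into one integer, modeling a hardware word.
    word = 0
    for b in bits:
        word = (word << 1) | b
    return word

def row_xor_cost(word_a, word_b):
    # One XOR plus a popcount evaluates an entire row of binary pixels.
    return bin(word_a ^ word_b).count("1")
```

Summing `row_xor_cost` over all rows of a block gives the same number-of-non-matching-points cost as a pixel-by-pixel loop, with far fewer operations.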

    COVID-19 detection with severity level analysis using the deep features, and wrapper-based selection of ranked features

    The SARS-CoV-2 virus, which causes COVID-19 disease, continues to threaten the whole world with its mutations. Many methods developed for COVID-19 detection are validated on data sets that generally include only severe forms of the disease. Since the severe forms have prominent signatures on X-ray images, high performance is easy to achieve on such data. To slow the spread of the disease, effective computer-assisted screening tools are needed that can detect the mild and moderate forms, which lack prominent signatures. In this work, various pre-trained networks, namely GoogLeNet, ResNet18, SqueezeNet, ShuffleNet, EfficientNetB0, and Xception, are used as feature extractors for COVID-19 detection with severity-level analysis. The best feature extraction layer of each pre-trained network is determined to optimize performance. Features obtained from the best layer are then selected with a wrapper-based strategy applied to features ranked by their Laplacian scores. Experimental results on two publicly available data sets covering all forms of COVID-19 disease reveal that the method generalizes well to unseen data. Moreover, sensitivities of 66.67%, 90.32%, and 100% are obtained in the detection of mild, moderate, and severe cases, respectively.
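The selection pipeline, rank features by Laplacian score and then greedily keep ranked features that improve a classifier, can be sketched as below. The graph construction (a dense heat-kernel similarity), the toy nearest-centroid evaluator, and all parameters are placeholders, not the paper's exact choices.

```python
import numpy as np

def laplacian_scores(X, t=1.0):
    # Laplacian score per feature (lower = better locality preservation),
    # using a dense heat-kernel similarity graph over the samples.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    S = np.exp(-d2 / t)
    D = S.sum(1)                      # vertex degrees
    L = np.diag(D) - S                # graph Laplacian
    scores = []
    for f in X.T:
        f = f - (f @ D) / D.sum()     # degree-weighted centering
        denom = f @ (D * f)
        scores.append((f @ L @ f) / denom if denom > 1e-12 else np.inf)
    return np.array(scores)

def centroid_accuracy(X, y):
    # Toy evaluator (nearest-centroid training accuracy), standing in for
    # the cross-validated classifier a real wrapper would use.
    y = np.asarray(y)
    cents = {c: X[y == c].mean(0) for c in np.unique(y)}
    pred = np.array([min(cents, key=lambda c: np.linalg.norm(x - cents[c]))
                     for x in X])
    return float(np.mean(pred == y))

def wrapper_select(X, y, ranked, eval_fn):
    # Greedy wrapper: walk features in rank order, keep each one only if
    # it improves the evaluator's score.
    chosen, best = [], -np.inf
    for idx in ranked:
        acc = eval_fn(X[:, chosen + [idx]], y)
        if acc > best:
            chosen, best = chosen + [idx], acc
    return chosen, best
```

In use, `ranked` would be `np.argsort(laplacian_scores(X))`, so the wrapper visits the most locality-preserving features first.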

    Integral image based binarization for low-complexity motion estimation

    Today, with the widespread use of high-resolution televisions, cameras, and smartphones, there is strong demand for high-resolution video applications. Because of constraints such as power consumption and limited memory in these devices, the need for low-complexity video coding methods is growing. In video coding standards, motion estimation still carries the largest computational load. In this work, a low-complexity motion estimation method based on low bit-depth representation is proposed. In this approach, video frames are efficiently binarized using the integral image, yielding a two-bit representation of the frames. Matching is then performed on the resulting binary frames using the hardware-friendly exclusive-OR (EX-OR) operation instead of the conventional sum of absolute differences (SAD). Using the integral image for binarization in motion estimation is proposed for the first time in this work. The proposed method improves motion estimation accuracy compared to existing 1-bit transform (1BT) based approaches while performing at nearly the same level as two-bit transform based approaches, and it reduces the computational load of these methods, especially in the binarization stage.
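A sketch of integral-image-driven binarization: the summed-area table makes every local mean a four-lookup operation, and each pixel gets a 2-bit code from its relation to that mean. The specific thresholding rule (second bit from distance to the mean, with illustrative `k` and `delta`) is an assumption, not the paper's exact formulation.

```python
import numpy as np

def integral_image(frame):
    # Summed-area table with a zero guard row/column, so the sum of any
    # window is four lookups.
    return np.pad(frame.astype(np.int64), ((1, 0), (1, 0))).cumsum(0).cumsum(1)

def window_sum(ii, y, x, h, w):
    # Sum of frame[y:y+h, x:x+w] from four table lookups.
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def two_bit_transform(frame, k=4, delta=16):
    # Per-pixel 2-bit code from the local mean of a (roughly) k x k window:
    # bit 0 = pixel above local mean, bit 1 = pixel far from local mean.
    # (k and delta are illustrative values.)
    h, w = frame.shape
    ii = integral_image(frame)
    b0 = np.zeros((h, w), dtype=np.uint8)
    b1 = np.zeros((h, w), dtype=np.uint8)
    r = k // 2
    for y in range(h):
        for x in range(w):
            y0, x0 = max(0, y - r), max(0, x - r)
            y1, x1 = min(h, y + r + 1), min(w, x + r + 1)
            mean = window_sum(ii, y0, x0, y1 - y0, x1 - x0) / ((y1 - y0) * (x1 - x0))
            b0[y, x] = frame[y, x] > mean
            b1[y, x] = abs(frame[y, x] - mean) > delta
    return b0, b1
```

The two bit planes can then be matched with the XOR-based cost described in the abstract, instead of a full 8-bit SAD.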

    Fast inter mode decision exploiting intra-block similarity in HEVC

    Duvar, Ramazan (Dogus Author)
    Video coding standards mainly aim to decrease the bit rate to be transmitted while maintaining video quality at a certain level. The mode decision used in the inter- and intra-prediction stages is a vital component of video coding and is closely related to coding efficiency and complexity. Therefore, fast and efficient mode decision algorithms are required to reduce encoding time while keeping video quality at a certain level. In this paper, a fast inter mode decision method for HEVC is proposed. First, an early selection at the prediction unit level is introduced that considers intra-block similarity by making use of the integral image. Second, a flexible early termination mode at the coding unit level is presented. The performance of the proposed method in these units is evaluated separately and together. Experimental results demonstrate the low complexity of the proposed method without any significant loss in coding efficiency. The proposed method provides a good balance between coding efficiency and time savings compared to state-of-the-art approaches.
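One way to read "intra-block similarity via the integral image": compare the quadrant means of a block, each computed in O(1) from a summed-area table, and skip further mode evaluation when the block is homogeneous. The following is a hypothetical sketch; the actual similarity measure and any threshold in the paper are not reproduced here.

```python
import numpy as np

def integral(img):
    # Summed-area table with a zero guard row/column.
    return np.pad(img, ((1, 0), (1, 0))).cumsum(0).cumsum(1).astype(np.int64)

def region_sum(ii, y, x, h, w):
    # Sum of img[y:y+h, x:x+w] from four table lookups.
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def quadrant_deviation(img, y, x, size):
    # Max absolute deviation of the four quadrant means from the block
    # mean, each mean costing O(1) via the integral image. A small value
    # suggests a homogeneous block that could terminate mode search early.
    ii = integral(img)
    half = size // 2
    m = region_sum(ii, y, x, size, size) / (size * size)
    return max(abs(region_sum(ii, y + dy, x + dx, half, half) / (half * half) - m)
               for dy in (0, half) for dx in (0, half))
```

Since the table is built once per frame, the per-block similarity check adds only a handful of additions, which is what makes it attractive for a fast mode decision.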

    Brain tumor classification using the fused features extracted from expanded tumor region

    In this study, a brain tumor classification method using the fusion of deep and shallow features is proposed to distinguish between meningioma, glioma, and pituitary tumor types and to predict the 1p/19q co-deletion status of LGG tumors. Brain tumors can be located in different regions of the brain, and the texture of the surrounding tissues may also vary. Therefore, including the surrounding tissues in the tumor region (ROI expansion) can make the features more distinctive. In this work, pre-trained AlexNet, ResNet-18, GoogLeNet, and ShuffleNet networks are used to extract deep features from the tumor regions including their surrounding tissues. Even though deep features are extremely important for classification, some low-level information about the tumor may be lost as the network deepens. Accordingly, a shallow network is designed to learn low-level information, and the deep and shallow features are fused to compensate for this loss. SVM and k-NN classifiers are trained on the fused feature sets. Experimental results on two publicly available data sets demonstrate that using feature fusion and ROI expansion together improves the average sensitivity by about 11.72% (ROI expansion: 8.97%, feature fusion: 2.75%). These results confirm the assumption that the tissues surrounding the tumor region carry distinctive information, and that the missing low-level information can be compensated for through feature fusion. Moreover, competitive results are achieved against state-of-the-art studies when ResNet-18 is used as the deep feature extractor of our classification framework.
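The fusion step itself amounts to concatenation; a minimal sketch, assuming each feature block is L2-normalized first so the deep and shallow parts contribute on a comparable scale (the normalization choice is an assumption, not stated in the abstract):

```python
import numpy as np

def fuse_features(deep, shallow):
    # Concatenate deep and shallow feature matrices (samples x dims),
    # L2-normalizing each block so neither dominates the fused vector.
    def l2(v):
        n = np.linalg.norm(v, axis=-1, keepdims=True)
        return v / np.where(n == 0, 1, n)
    return np.concatenate([l2(deep), l2(shallow)], axis=-1)
```

The fused rows would then be fed to the SVM or k-NN classifier exactly as any other feature matrix.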

    Ensemble-LungMaskNet: Automated lung segmentation using ensembled deep encoders

    2021 International Conference on Innovations in Intelligent Systems and Applications (INISTA 2021), 25-27 August 2021. Sponsors: Kocaeli University; Kocaeli University Technopark.
    Automated lung segmentation is important because it gives experts clues about several diseases and is the step that precedes further detailed analyses of the lungs. However, segmenting the lungs is challenging: opacities and consolidations caused by various lung diseases can blur the borders of the lungs, and the presence of medical equipment such as cables in the image makes segmentation harder still. Therefore, methods that can handle such situations are needed. Deep learning methods can learn the most useful patterns related to various diseases; unlike conventional methods, learning these patterns improves the generalization ability of the models on unseen data. For this purpose, a deep segmentation framework based on ensembles of pre-trained lightweight networks is proposed for lung region segmentation in this work. Experimental results on two publicly available data sets demonstrate the effectiveness of the proposed framework. © 2021 IEEE
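The ensembling itself can be as simple as soft voting over the member networks' per-pixel probability maps; a sketch under that assumption (the paper's actual combination rule may differ):

```python
import numpy as np

def ensemble_masks(prob_maps, thr=0.5):
    # Soft voting: average the per-model lung-probability maps and
    # threshold the mean to obtain the final binary lung mask.
    return (np.mean(prob_maps, axis=0) >= thr).astype(np.uint8)
```

Averaging before thresholding lets a confident member outvote an uncertain one at each pixel, which is why ensembles tend to smooth out single-model boundary errors.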