12 research outputs found
Interpretation and localization of Thorax diseases using DCNN in Chest X-Ray
In recent years, the use of diagnostic imaging has increased dramatically. Reading a chest X-ray is an entry-level task for radiologists, yet it demands sound knowledge of anatomical principles, pathology, and physiology, along with careful observation, to support such complex reasoning. In many modern hospitals, tremendous numbers of X-ray images are stored in a PACS (Picture Archiving and Communication System), and a plethora of conditions can be diagnosed from this substantial collection of chest X-rays. Our aim is to predict thorax disease categories from chest X-rays through deep learning, approaching first-pass specialist accuracy. This paper presents a unified weakly supervised multi-label image classification and pathology localization framework that can detect the presence of multiple pathologies and subsequently generate bounding boxes around the corresponding regions. Given the large image sizes involved, we adapt a Deep Convolutional Neural Network (DCNN) architecture for weakly supervised object localization and evaluate different pooling strategies and various multi-label CNN losses against a softmax regression baseline.
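The pooling strategies and multi-label losses compared in such frameworks can be made concrete with a small sketch. Below, `pool_scores` and `multilabel_bce` are hypothetical helpers, not the paper's architecture: log-sum-exp (LSE) pooling interpolates between max and average pooling of a per-class spatial activation map, and an independent per-class sigmoid cross-entropy replaces the softmax baseline.

```python
import math

def pool_scores(score_map, strategy="lse", r=5.0):
    """Pool a per-class spatial score map (list of rows) into one
    image-level score, as in weakly supervised multi-label CNNs.
    'lse' (log-sum-exp) interpolates between 'max' (r -> inf)
    and 'mean' (r -> 0)."""
    flat = [s for row in score_map for s in row]
    if strategy == "max":
        return max(flat)
    if strategy == "mean":
        return sum(flat) / len(flat)
    if strategy == "lse":
        m = max(flat)  # subtract the max for numerical stability
        return m + (1.0 / r) * math.log(
            sum(math.exp(r * (s - m)) for s in flat) / len(flat))
    raise ValueError(strategy)

def multilabel_bce(scores, labels):
    """Per-class sigmoid cross-entropy: unlike a softmax baseline,
    several pathologies may be present in one image at once."""
    loss = 0.0
    for s, y in zip(scores, labels):
        p = 1.0 / (1.0 + math.exp(-s))
        loss -= y * math.log(p) + (1 - y) * math.log(1 - p)
    return loss / len(scores)
```

Because LSE pooling is smooth, gradients flow to all spatial positions rather than only the argmax, which is one reason such pooling choices matter for weakly supervised localization.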
Mining user's navigation structure by filtering impurity nodes for generating relevant predictions
Web Navigation Prediction (WNP) is popularly used for finding probable future web pages. Obtaining relevant information from the web is challenging, as its size grows every second. Web data may contain irrelevant noise, so cleaning such data is essential. In this paper, we emphasize the identification and elimination of noisy information from user-navigated web pages. Pruning user-browsed sessions by removing noisy web pages and their relations can help in building high-performing prediction models with fewer prediction errors. To minimize prediction errors, we propose four pruning models, namely PM3ER, PM3EN, PM3GI, and PM3EP, which remove noisy web pages and their relations from a model consisting of varied simple and complex navigations. The study reveals that PM3ER is effective for websites with complicated structures resulting in complex navigations, while PM3EN is effective for websites with a simple tree-like structure resulting in simple navigations. PM3ER improves predictions by up to 3%, whereas PM3EN attains an improvement of up to 5.45%.
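As a rough illustration of session pruning, the sketch below drops noisy pages from browsed sessions and rebuilds a first-order transition model. `prune_sessions` is a hypothetical stand-in; the actual PM3* models' noise criteria and relation handling are the paper's own.

```python
def prune_sessions(sessions, noisy):
    """Remove noisy pages from each browsed session. Note that
    splicing creates a direct transition where a noisy page sat
    between two kept pages -- an illustrative simplification."""
    pruned = []
    for s in sessions:
        cleaned = [p for p in s if p not in noisy]
        if len(cleaned) >= 2:  # a session needs at least one transition
            pruned.append(cleaned)
    return pruned

def transition_counts(sessions):
    """First-order navigation model: count page-to-page transitions,
    the raw material for predicting the next probable page."""
    counts = {}
    for s in sessions:
        for a, b in zip(s, s[1:]):
            counts[(a, b)] = counts.get((a, b), 0) + 1
    return counts
```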
Energy-Aware Online Non-Clairvoyant Scheduling Using Speed Scaling with Arbitrary Power Function
Efficient job scheduling reduces energy consumption and enhances the performance of machines in data centers and battery-powered computing devices. Although practically important, online non-clairvoyant job scheduling is studied less extensively than its clairvoyant counterpart. In this paper, an online non-clairvoyant scheduling algorithm, Highest Scaled Importance First (HSIF), is proposed, in which HSIF selects the active job with the highest scaled importance. The objective is to minimize scaled-importance-based flow time plus energy. The processor's speed is set proportional to the total scaled importance of all active jobs. The performance of HSIF is evaluated by potential analysis against an optimal offline adversary and by simulating the execution of a set of jobs using a traditional power function. HSIF is 2-competitive under an arbitrary power function and dynamic speed scaling. This competitive ratio is the lowest to date among non-clairvoyant scheduling algorithms. The simulation analysis shows that HSIF performs best among online non-clairvoyant job scheduling algorithms.
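A toy discrete-time simulation can make the HSIF rule concrete: run the active job of highest scaled importance, at a speed equal to the total scaled importance of all active jobs. Here scaled importance is assumed to be importance divided by work; that is only an illustrative choice, not the paper's definition.

```python
def hsif_schedule(jobs, dt=0.001):
    """Toy simulation of Highest Scaled Importance First.
    Each job is (release_time, work, importance); scaled importance
    is taken as importance / work (an assumption for illustration).
    Speed equals the total scaled importance of active jobs."""
    t = 0.0
    remaining = {i: j[1] for i, j in enumerate(jobs)}
    finish = {}
    while len(finish) < len(jobs):
        active = [i for i, j in enumerate(jobs)
                  if j[0] <= t and i not in finish]
        if active:
            si = {i: jobs[i][2] / jobs[i][1] for i in active}
            speed = sum(si.values())                # speed-scaling rule
            run = max(active, key=lambda i: si[i])  # HSIF choice
            remaining[run] -= speed * dt
            if remaining[run] <= 0:
                finish[run] = t + dt
        t += dt
    return finish
```

With two unit-work jobs of importance 1 and 2 released together, the more important job finishes first at roughly t = 1/3 (speed 3 while both are active), and the other at roughly t = 4/3.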
Neural network inspired differential evolution based task scheduling for cloud infrastructure
In recent years, cloud computing has become an essential technology for businesses and individuals alike. Task scheduling is a critical aspect of cloud computing that affects the performance and efficiency of cloud infrastructure. During the COVID-19 pandemic, most healthcare services, such as COVID-19 sampling, vaccination, and patient management, depended on cloud infrastructure. These services bring huge client and server loads within small instants of time, and such task loads can only be managed on cloud infrastructure where an efficient resource-management algorithm plays an important role. Optimal utilization of cloud resources depends on the policy that allocates tasks to those resources. Simple static, dynamic, and meta-heuristic techniques provide a solution, but not the optimal one; in such scenarios, machine learning and evolutionary algorithms are the way forward. In this work, a hybrid model based on a meta-heuristic technique and a neural network is proposed. The presented neural-network-inspired differential evolution hybrid technique provides an optimal assignment of tasks on cloud infrastructure. The performance of the DE-ANN hybrid approach is measured using average start time (ms), average finish time (ms), average execution time (ms), total completion time (ms), simulation time (ms), and average resource utilization. The proposed DE-ANN approach is validated against the Big-Bang Big-Crunch (BB-BC) and Genetic approaches and outperforms both existing meta-heuristic techniques. The performance is evaluated in two configuration scenarios, with 5 and 10 virtual machines and task counts varying from 1000 to 4500. Experimental results show that the DE-ANN technique significantly improves task scheduling performance compared to other traditional techniques.
The technique achieves an average improvement of 19.15% in total completion time (ms), 32.23% in average finish time (ms), 51.95% in average execution time (ms), and 33.24% in average resource utilization. The DE-ANN technique is also effective in handling dynamic and uncertain environments, making it suitable for real-world cloud infrastructures.
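The meta-heuristic half of such a hybrid can be sketched as plain differential evolution over task-to-VM assignments, minimizing makespan. This is an assumption-laden simplification: the neural-network guidance step, the paper's cost model, and its parameter settings are all omitted, and the function and parameter names are illustrative.

```python
import random

def de_assign(task_len, vm_speed, pop=20, gens=60, F=0.8, CR=0.9, seed=1):
    """Minimal differential evolution for task-to-VM assignment:
    continuous vectors are rounded to VM indices, and fitness is
    the makespan (maximum per-VM load in time units)."""
    rng = random.Random(seed)
    n, m = len(task_len), len(vm_speed)

    def makespan(vec):
        load = [0.0] * m
        for t, x in zip(task_len, vec):
            load[int(x) % m] += t / vm_speed[int(x) % m]
        return max(load)

    # Random initial population of continuous assignment vectors.
    P = [[rng.uniform(0, m) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        for i in range(pop):
            a, b, c = rng.sample([x for x in range(pop) if x != i], 3)
            # DE/rand/1 mutation with binomial crossover.
            trial = [P[a][j] + F * (P[b][j] - P[c][j])
                     if rng.random() < CR else P[i][j] for j in range(n)]
            trial = [x % m for x in trial]
            if makespan(trial) <= makespan(P[i]):  # greedy selection
                P[i] = trial
    best = min(P, key=makespan)
    return [int(x) % m for x in best], makespan(best)
```

For four tasks of lengths 4, 4, 2, 2 on two unit-speed VMs, the balanced split (4+2 on each) gives the optimal makespan of 6.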
An Algorithmic Approach towards Remote Sensing Imagery Data Restoration Using Guided Filters in Real-Time Applications
The images captured by SAR sensors are inherently degraded by speckle noise. The SAR image processing community has targeted this problem with many feature-based filters. Since SAR images are low-contrast images, edge retention is the most crucial aspect to consider, as it enables efficient retrieval of information. This paper provides a two-step edge-preserving homomorphic SAR image despeckling technique that applies a guided filter as the first step and, as the second step, a modified noise-thresholding method using the bivariate shrinkage rule and the Canny edge operator in the Discrete Orthonormal Stockwell Transform (DOST) domain. The Canny edge operator improves overall edge preservation after despeckling, while noise thresholding delivers the highest level of speckle reduction in the DOST domain. The detected edges are added back to the residual part obtained after removing the noise to produce more informative content. According to several qualitative and quantitative criteria, the suggested approach is compared with some of the newest despeckling methods. The execution time of the proposed method is around 7.2679 seconds. Qualitative and quantitative analysis determined that the proposed method surpasses all the compared despeckling methods.
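The homomorphic idea, multiplicative speckle turned additive by a log transform and then smoothed by a guided filter, can be sketched in one dimension. This toy omits the DOST-domain thresholding, bivariate shrinkage, and Canny steps entirely; it only shows the first, guided-filter step under those simplifying assumptions.

```python
import math

def box_mean(x, r):
    """Mean of x over a sliding window of radius r (1-D, edge-clamped)."""
    n = len(x)
    return [sum(x[max(0, i - r):min(n, i + r + 1)]) /
            len(x[max(0, i - r):min(n, i + r + 1)]) for i in range(n)]

def guided_filter_1d(guide, src, r=2, eps=0.01):
    """Minimal 1-D guided filter: the output is a locally linear
    transform of the guide, q_i = a_i * I_i + b_i, which is what
    lets the filter smooth noise while retaining edges."""
    mI, mp = box_mean(guide, r), box_mean(src, r)
    mII = box_mean([i * i for i in guide], r)
    mIp = box_mean([i * p for i, p in zip(guide, src)], r)
    a = [(ip - i * p) / (ii - i * i + eps)
         for i, p, ii, ip in zip(mI, mp, mII, mIp)]
    b = [p - ai * i for ai, i, p in zip(a, mI, mp)]
    ma, mb = box_mean(a, r), box_mean(b, r)
    return [ai * i + bi for ai, i, bi in zip(ma, guide, mb)]

def homomorphic_despeckle(img, r=2, eps=0.01):
    """Homomorphic step: multiplicative speckle becomes additive in
    the log domain, where the guided filter smooths it; exp maps back."""
    logim = [math.log(v) for v in img]
    return [math.exp(v) for v in guided_filter_1d(logim, logim, r, eps)]
```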
Machine Learning Assisted Methodology for Multiclass Classification of Malignant Brain Tumors
Analysis of malignant and non-malignant brain tumors is performed with computer-aided diagnosis systems by practitioners worldwide. Radiologists use computer-assisted techniques to draw conclusions from imaging modalities and inferences. Various machine learning approaches have been used, which usually focus on classifying an imaging modality into two categories, either normal versus abnormal images or benign versus malignant tumors. What is still required is to classify multi-class malignant tumors into their specific classes with better precision. The proposed work focuses on distinguishing between types of high-grade malignant brain tumors. This study is performed on real-life malignant brain tumor datasets with five classes. The proposed methodology uses a vast feature set from six domains to capture most of the hidden information in the extracted region of interest. Relevant features are then extracted from the feature pool using a newly proposed feature selection algorithm named the Cumulative Variance Method (CVM). Next, the selected features are used for model training and testing with K-Nearest Neighbour (KNN), multi-class Support Vector Machine (mSVM), and Neural Network (NN) classifiers to predict multi-class classification accuracy. The experiments are performed using the proposed feature selection algorithm with the three classifiers. The mean average classification accuracy achieved using the proposed approach is 88.43% (KNN), 92.5% (mSVM), and 95.86% (NN). Comparative analysis with existing algorithms such as ICA and GA suggests that the proposed approach gains around 2% (KNN), 3% (SVM), and 4% (NN) in accuracy. The experimental results conclude that the proposed approach performs best with the NN classifier, reaching 95.86% accuracy using diversified features.
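The Cumulative Variance Method is the paper's own proposal and its details are not given in the abstract; the following is only a plausible sketch of what a cumulative-variance selection criterion could look like, ranking features by variance and keeping the smallest set reaching a variance-coverage threshold.

```python
def cumulative_variance_select(X, threshold=0.95):
    """Hypothetical variance-based feature selection: keep the
    highest-variance features whose variances cumulatively cover
    `threshold` of the total. The paper's actual CVM may differ."""
    n, d = len(X), len(X[0])
    var = []
    for j in range(d):
        col = [row[j] for row in X]
        mu = sum(col) / n
        var.append(sum((v - mu) ** 2 for v in col) / n)
    order = sorted(range(d), key=lambda j: var[j], reverse=True)
    total = sum(var)
    kept, acc = [], 0.0
    for j in order:
        kept.append(j)
        acc += var[j]
        if acc >= threshold * total:
            break
    return sorted(kept)  # indices of selected features
```

A constant (zero-variance) feature carries no discriminative information and is the first to be dropped under such a criterion.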
A New Approach to Detect Power Quality Disturbances in Smart Cities Using Scaling-Based Chirplet Transform with Strategically Placed Smart Meters
The growth of Internet of Things (IoT)-enabled devices has increased the amount of data created by the distribution network's periphery nodes, requiring more data-transfer capacity. The real-time requirements of recent applications have strained standard computing paradigms, and data processing has struggled to keep up. Edge computing is employed in this research to detect distribution network faults, allowing instant sensing and real-time reporting to the control room for faster investigation of distribution problems and power outages, making the system more reliable. Moreover, to overcome the challenges of fault detection, advanced signal processing methods are integrated with an AdaBoost classifier. An AdaBoost-based edge device, suitable for installation on top of a power pole, is proposed in this research as a means of real-time fault detection. To increase throughput, decrease latency, and offload network traffic, data collection, feature extraction, and AdaBoost-based fault identification are all performed in an integrated edge node. Enhanced detection accuracy (98.67%) and reduced latency (115.2 ms) verify the effectiveness of the suggested approach. In this research, we also extend the classical chirplet transform into the scaling-based chirplet transform (SBCT) for time-frequency (TF) analysis. This approach modulates the TF basis around the relevant time function so that the chirp rate varies with frequency and time. By carefully selecting the sampling frequency, it is possible to discriminate between a short-circuit fault and a high-impedance fault (HIF) by calculating spectral entropy. The TF representation obtained with the SBCT provides considerably higher energy concentrations, even for signals with numerous components, closely spaced frequencies, and heavy background noise.
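The spectral-entropy discriminant can be sketched directly: a narrowband signal (a short-circuit-like tone) concentrates its power spectrum in few bins and scores low, while a broadband signal spreads its power and scores high. This toy uses a plain DFT rather than the SBCT, so it is only an illustration of the entropy step.

```python
import cmath
import math

def spectral_entropy(signal):
    """Shannon entropy of the normalized power spectrum (first half
    of a plain O(n^2) DFT). Broadband, HIF-like signals score higher
    than narrowband tones."""
    n = len(signal)
    spectrum = [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                        for t in range(n))) ** 2 for k in range(n // 2)]
    total = sum(spectrum)
    p = [s / total for s in spectrum if s > 0]  # normalized power distribution
    return -sum(pi * math.log(pi) for pi in p)
```

A pure sinusoid at an integer bin frequency yields entropy near zero, while an impulse (perfectly flat spectrum) yields the maximum, log of the number of bins.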