143 research outputs found

    A Security Scheme for Textual & Graphical Passwords

    Authentication is the process of identifying an individual, usually based on a username and password. Authentication merely ensures that the individual is who he or she claims to be, which forestalls attacks against confidentiality and integrity. Shoulder surfing is the main problem with graphical passwords. To overcome the problem of shoulder surfing, we introduce a novel scheme. This scheme presents a login image to the user at every login; the image consists of a set of characters. Using his or her password, the user clicks a set of pass characters that differ from session to session, as explained in the proposed scheme. To improve the results, a neural network is used for the authentication.
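
    A minimal sketch of the session-varying idea described above, assuming a hypothetical character-grid login screen (the grid size, alphabet, and helper names are illustrative, and the neural-network stage of the proposed scheme is omitted):

```python
# Sketch of a session-dependent pass-character challenge (illustrative only; not the
# authors' exact scheme, and the neural-network authentication stage is omitted).
import random
import string

def generate_login_grid(size=8, seed=None):
    """Return a size x size grid of random characters shown at each login."""
    rng = random.Random(seed)
    alphabet = string.ascii_uppercase + string.digits
    return [[rng.choice(alphabet) for _ in range(size)] for _ in range(size)]

def expected_clicks(grid, password):
    """For each password character, pick one grid cell the user is expected to click.
    The click positions differ between sessions because the grid is regenerated."""
    positions = []
    for ch in password.upper():
        cells = [(r, c) for r, row in enumerate(grid)
                 for c, cell in enumerate(row) if cell == ch]
        positions.append(random.choice(cells) if cells else None)
    return positions

grid = generate_login_grid()             # a fresh grid for this session
print(expected_clicks(grid, "AB3"))      # hypothetical password
```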

    Performance and Comparative Analysis of the Two Contrary Approaches for Detecting Near Duplicate Web Documents in Web Crawling

    Recent years have witnessed the drastic growth of the World Wide Web (WWW). Information is accessible at one's fingertips anytime, anywhere through the massive web repository. The performance and reliability of web search engines therefore face huge challenges due to the enormous amount of web data. The voluminous number of web documents has created problems for search engines, with the result that search results are of less relevance to the user. In addition, the presence of duplicate and near-duplicate web documents creates additional overhead for search engines, critically affecting their performance. The demand for integrating data from heterogeneous sources also leads to the problem of near-duplicate web pages. The detection of near-duplicate documents within a collection has recently become an area of great interest. In this research, we present an efficient approach for the detection of near-duplicate web pages in web crawling which uses keywords and a distance measure. In addition, G. S. Manku et al.'s fingerprint-based approach, proposed in 2007, is considered one of the "state-of-the-art" algorithms for finding near-duplicate web pages. We implemented both approaches and conducted an extensive comparative study between our similarity-score-based approach and G. S. Manku et al.'s fingerprint-based approach. We analyzed our results in terms of time complexity, space complexity, memory usage, and the confusion-matrix parameters. Taking these performance factors into account, the comparative study clearly shows our approach to be the better (less complex) of the two. DOI: http://dx.doi.org/10.11591/ijece.v2i6.1746
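
    A minimal sketch of a keyword-plus-distance near-duplicate check in the spirit of the approach above (the stopword list, top-k cutoff, and Jaccard threshold are assumptions; the fingerprint-based baseline is not shown):

```python
# Sketch of a keyword/similarity-score near-duplicate check for two web documents
# (a simplified stand-in, not the exact pipeline evaluated in the paper).
from collections import Counter
import re

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "for"}

def top_keywords(text, k=20):
    """Return the k most frequent non-stopword terms of a document."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return {w for w, _ in counts.most_common(k)}

def similarity_score(doc_a, doc_b, k=20):
    """Jaccard similarity of the two documents' top-k keyword sets."""
    ka, kb = top_keywords(doc_a, k), top_keywords(doc_b, k)
    return len(ka & kb) / len(ka | kb) if ka | kb else 0.0

def is_near_duplicate(doc_a, doc_b, threshold=0.8):
    """Flag a pair as near-duplicate when the similarity score exceeds a threshold."""
    return similarity_score(doc_a, doc_b) >= threshold
```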

    Predicting Software Reliability Using Ant Colony Optimization Technique with Travelling Salesman Problem for Software Process – A Literature Survey

    Computer software has become an essential foundation in several versatile domains, including medicine and engineering. Consequently, with such widespread application of software, there is a need to ensure software reliability and quality. To measure software reliability and quality directly, one must wait until the software is implemented, tested, and put to use for a certain period of time. Several software metrics have been proposed in the literature to avoid this lengthy and costly process, and they have proved to be a good means of estimating software reliability. For this purpose, software reliability prediction models are built. Software reliability is one of the important software quality attributes; it is defined as the probability that the software will operate without failure for a specific period of time in a specified environment. When estimated in the early phases of the software development life cycle, software reliability saves a great deal of money and time, as it prevents spending large amounts on fixing defects after the software has been deployed to the client. Software reliability prediction is, however, very challenging in the early phases of the life cycle. Software reliability estimation has thus become an important research area, as every organization aims to produce reliable, high-quality, defect-free software. There are many software reliability growth models used to assess or predict the reliability of software, and these models help in developing robust and fault-tolerant systems. In the past few years many software reliability models have been proposed, but developing accurate prediction models is difficult due to the frequent changes in data in the software engineering domain. As a result, software reliability prediction models built on one dataset show a significant decrease in accuracy when used with new data. The main aim of this paper is to introduce a new approach that optimizes the accuracy of software reliability predictive models when used with raw data. An Ant Colony Optimization Technique (ACOT) is proposed to predict software reliability based on data collected from the literature. An ant colony system combined with the Travelling Salesman Problem (TSP) algorithm has been used and modified with different algorithms and extra functionality in an attempt to achieve better software reliability results with new data for the software process. The intelligent behavior of the ant colony framework, realized by a colony of cooperating artificial ants, yields very promising results. Keywords: Software Reliability, Reliability Predictive Models, Bio-inspired Computing, Ant Colony Optimization Technique, Ant Colony
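
    A minimal sketch of generic ant-colony-optimization mechanics on a small TSP instance, illustrating the pheromone-guided tour construction and evaporation the abstract refers to (parameter values and function names are assumptions, not the paper's ACOT reliability model):

```python
# Sketch of ant colony optimization on a TSP instance (illustrative mechanics only).
# `dist` is assumed to be a symmetric matrix of positive pairwise distances.
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def ant_colony_tsp(dist, n_ants=10, n_iter=100, alpha=1.0, beta=2.0, rho=0.5, q=1.0):
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]                  # pheromone levels
    best_tour, best_len = None, float("inf")
    for _ in range(n_iter):
        tours = []
        for _ in range(n_ants):                          # each ant builds a tour
            tour = [random.randrange(n)]
            unvisited = set(range(n)) - {tour[0]}
            while unvisited:
                i = tour[-1]
                cand = list(unvisited)
                weights = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta
                           for j in cand]                # pheromone x heuristic
                j = random.choices(cand, weights=weights)[0]
                tour.append(j)
                unvisited.remove(j)
            tours.append(tour)
        for i in range(n):                               # pheromone evaporation
            for j in range(n):
                tau[i][j] *= (1.0 - rho)
        for tour in tours:                               # pheromone deposit
            length = tour_length(tour, dist)
            if length < best_len:
                best_tour, best_len = list(tour), length
            for k in range(len(tour)):
                a, b = tour[k], tour[(k + 1) % len(tour)]
                tau[a][b] += q / length
                tau[b][a] += q / length
    return best_tour, best_len
```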

    Utilization of iron values of red mud for metallurgical applications

    A brief overview on the utilization of iron values of red mud is presented, along with the results of some recent investigations conducted at the National Metallurgical Laboratory. Red mud from Nalco, characterized by high iron content, is used in the studies. Two different strategies are explored: (a) extraction of iron and other metal values from red mud using a patented process, named the Elgai process, available for the removal of alumina from iron ores; and (b) use of red mud as an additive in iron ore sintering. The second approach has yielded particularly interesting results. Sinter with acceptable physical properties and reducibility could be produced with red mud addition from 50 to 125 kg/tonne of sinter. Red mud addition leads to the dilution of the iron content of the sinter. It is suggested that this problem can be circumvented by adding blue dust, a waste material, along with the red mud.

    Classification of Epileptic and Non-Epileptic Electroencephalogram (EEG) Signals Using Fractal Analysis and Support Vector Regression

    Seizures are a common symptom of epilepsy, a neurological condition caused by the discharge of brain nerve cells at an excessively fast rate. Chaos, nonlinearity, and other complex dynamics are common features of scalp and intracranial Electroencephalogram (EEG) data recorded in clinics. EEG signals whose patterns are not immediately evident are challenging to categorize because of their complexity. The Gradient Boost Decision Tree (GBDT) classifier was used to classify the majority of the EEG signal segments automatically. According to this study, the Hurst exponent, in combination with AFA, is an efficient way to identify epileptic signals. As with any fractal analysis approach, there are problems and factors to keep in mind, such as identifying whether or not linear scaling regions are present. These signals were classified as either epileptic or non-epileptic using a combination of GBDT and Support Vector Regression (SVR). The combined method's identification accuracy was 98.23%. This study sheds light on the effectiveness of AFA feature extraction and GBDT classifiers in EEG classification. The findings can be utilized to develop theoretical guidance for the clinical identification and prediction of epileptic EEG signals. DOI: 10.28991/ESJ-2022-06-01-011
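
    A minimal sketch pairing a crude Hurst-exponent feature with a scikit-learn gradient-boosting classifier, assuming pre-segmented 1-D EEG arrays (the AFA feature extraction and the SVR stage of the combined method are not reproduced here; feature choices and hyperparameters are assumptions):

```python
# Sketch: Hurst-exponent-style feature extraction plus a gradient-boosting classifier.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def hurst_exponent(signal, max_lag=50):
    """Estimate H from the scaling of lagged-difference standard deviations.
    Assumes len(signal) is well above max_lag."""
    lags = np.arange(2, max_lag)
    stds = np.array([np.std(signal[lag:] - signal[:-lag]) for lag in lags])
    slope, _ = np.polyfit(np.log(lags), np.log(stds), 1)
    return slope

def features(segments):
    """One feature vector (Hurst estimate, variance, line length) per EEG segment."""
    return np.array([[hurst_exponent(s), np.var(s), np.sum(np.abs(np.diff(s)))]
                     for s in segments])

def train_classifier(segments, labels):
    """segments: list of 1-D numpy arrays; labels: 1 = epileptic, 0 = non-epileptic."""
    clf = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05)
    clf.fit(features(segments), labels)
    return clf
```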

    Effective Brain Tumor Classification Using Deep Residual Network-Based Transfer Learning

    Brain tumor classification is an essential task in medical image processing that assists doctors in making accurate diagnoses and treatment plans. A Deep Residual Network-based transfer learning approach applied to a fully convolutional Convolutional Neural Network (CNN) is proposed to perform brain tumor classification of Magnetic Resonance Images (MRI) from the BRATS 2020 dataset. The dataset consists of a variety of pre-operative MRI scans for segmenting brain tumors, namely gliomas, that vary widely in appearance, shape, and histology. The 50-layer Deep Residual Network (ResNet-50) deeply convolves the multiple categories of tumor images in the classification task using convolution blocks and identity blocks. Limitations such as the limited accuracy and complexity of algorithms in CNN-based ME-Net, and classification issues in YOLOv2 inceptions, are resolved by the proposed model. The trained CNN learns boundary and region tasks and extracts useful contextual information from MRI scans with minimal computational cost. Tumor segmentation and classification are performed in one step using a U-Net architecture, which helps retain the spatial features of the image. Multimodality fusion is implemented to perform classification and regression tasks by integrating dataset information. The Dice scores of the proposed model for Enhanced Tumor (ET), Whole Tumor (WT), and Tumor Core (TC) are 0.88, 0.97, and 0.90 on the BRATS 2020 dataset; the model also achieved 99.94% accuracy, 98.92% sensitivity, 98.63% specificity, and 99.94% precision.
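
    A minimal sketch of ResNet-50 transfer learning for multi-class MRI classification in PyTorch (the class count, hyperparameters, and training step are assumptions; the U-Net segmentation branch and multimodality fusion described above are not shown):

```python
# Sketch of a ResNet-50 transfer-learning head in PyTorch (illustrative only).
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights

def build_tumor_classifier(num_classes=4, freeze_backbone=True):
    model = resnet50(weights=ResNet50_Weights.DEFAULT)        # ImageNet-pretrained backbone
    if freeze_backbone:
        for p in model.parameters():
            p.requires_grad = False                           # keep pretrained features fixed
    model.fc = nn.Linear(model.fc.in_features, num_classes)   # new classification head
    return model

model = build_tumor_classifier()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """Hypothetical training step over one batch of MRI slices shaped (N, 3, 224, 224)."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```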

    A Novel Algorithm for Discovering Frequent Closures and Generators

    The construction of many important association rules requires the computation of Frequent Closed Item Sets and Frequent Generator Item Sets (FCIS/FGIS). However, these two tasks are very rarely combined. Most of the existing methods apply a level-wise breadth-first search. Although depth-first search depends on different characteristics of the data, it is often better than the alternatives. Hence, the algorithm proposed in this paper is named FCFG, as it combines frequent closed itemsets and frequent generators. The proposed FCFG algorithm extracts frequent itemsets (FIs) using a depth-first search, then extracts FCIS and FGIS from the FIs with a level-wise approach, and finally associates the generators with their closures. In the FCFG algorithm, a generic technique is extended from an arbitrary FI-miner algorithm in order to support the generation of minimal non-redundant association rules. Experimental results indicate that the FCFG algorithm performs better than other level-wise methods in most cases.
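
    A minimal sketch of the post-processing step that separates closures from generators, assuming the frequent itemsets and their supports have already been mined (this brute-force pass is illustrative and is not the FCFG depth-first miner itself):

```python
# Sketch: identify closed itemsets and generators among already-mined frequent itemsets.
from itertools import combinations

def closed_and_generators(freq):
    """freq: dict mapping frozenset(items) -> support count."""
    closed, generators = set(), set()
    for itemset, sup in freq.items():
        # closed: no proper superset has the same support
        if not any(itemset < other and freq[other] == sup for other in freq):
            closed.add(itemset)
        # generator: no proper subset has the same support
        subsets = (frozenset(c) for r in range(len(itemset))
                   for c in combinations(itemset, r))
        if not any(s in freq and freq[s] == sup for s in subsets):
            generators.add(itemset)
    return closed, generators

# Hypothetical toy example: {a, b} closes {a} and {b}; {a, c} is its own generator.
freq = {frozenset("a"): 3, frozenset("b"): 3, frozenset("ab"): 3,
        frozenset("c"): 2, frozenset("ac"): 1}
print(closed_and_generators(freq))
```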

    Projecting Active Contours with Diminutive Sequence Optimality

    Active contours are widely used in image segmentation. To cope with missing or misleading features in image frames taken in spatial and surveillance applications, researchers have introduced various ways to model the prior of shapes and use this prior to constrain active contours. However, the shape prior is frequently learnt from a large set of annotated data, which is not always accessible in practice. In addition, it is often doubtful that the existing shapes in the training set will be sufficient to model a new instance in the testing image. In this paper, we propose to use a diminutive sequence of image frames to learn the missing contour of the input images. Central median minimization is a simple and effective way to impose the proposed constraint on existing active contour models. Moreover, we derive a fast algorithm to solve the proposed model using the accelerated proximal method. Experiments on image frames acquired from surveillance demonstrate that the proposed method can consistently improve the performance of active contour models and increase robustness against image defects such as missing boundaries.
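
    A minimal sketch using scikit-image's classical snake as a stand-in, where a per-point median over recent contours loosely plays the role of the median-based prior (parameter values are assumptions; the accelerated proximal solver of the proposed model is not reproduced):

```python
# Sketch: seed each frame's snake with the median of recently fitted contours.
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import active_contour

def track_contour(frames, init_snake, history=5):
    """frames: iterable of 2-D grayscale images; init_snake: (N, 2) array of points."""
    recent, results = [np.asarray(init_snake, dtype=float)], []
    for frame in frames:
        prior = np.median(np.stack(recent), axis=0)       # median contour as a prior seed
        snake = active_contour(gaussian(frame, sigma=3, preserve_range=True),
                               prior, alpha=0.015, beta=10.0, gamma=0.001)
        recent = (recent + [snake])[-history:]             # keep a short contour history
        results.append(snake)
    return results
```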

    Pelletisation of reactant dosed laterite

    The present paper deals with process parameters for the pelletisation of laterite mixed with gypsum and coal in various proportions, with a view to producing pellets for reduction smelting studies. Results are presented for the pelletisation of laterite alone with various binders, viz. bentonite, starch, and sodium silicate. Bentonite and starch are found to enhance the strength of the pellets. It was observed that higher quantities of gypsum (as stoichiometrically required for converting the various metal oxides available in laterite to sulphides) detrimentally affect the pelletising characteristics of the mix. Various mixing methods were investigated. Pellets could be prepared by alternate charging of the laterite-coal mix and gypsum, giving rise to a concentric layered arrangement of constituents. Pre-moistening followed by attrition mixing was found suitable for producing pellets with good physical properties. Addition of 6% moisture with attrition mixing in a rod mill for 10 minutes produced pellets of smooth spherical morphology with drop numbers of 6-17 per pellet and an air-dried strength of more than 4 kg per pellet.