10 research outputs found

    Predictive Data Mining: Promising Future and Applications

    Get PDF
    Predictive analytics is the branch of data mining concerned with the prediction of future probabilities and trends. The central element of predictive analytics is the predictor, a variable that can be measured for an individual or other entity to predict future behavior. For example, an insurance company is likely to take into account potential driving safety predictors such as age, gender, and driving record when issuing car insurance policies. Multiple predictors are combined into a predictive model, which, when subjected to analysis, can be used to forecast future probabilities with an acceptable level of reliability. In predictive modeling, data is collected, a statistical model is formulated, predictions are made, and the model is validated (or revised) as additional data becomes available. Predictive analytics is applied to many research areas, including meteorology, security, genetics, economics, and marketing. In this paper, we present an extensive study of various predictive techniques, along with their future directions and applications in various areas.
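    The collect/formulate/predict/validate loop described above can be sketched in a few lines. This is a minimal illustration, not the paper's method: the two "driving safety" predictors (age, annual mileage) and the risk scores are synthetic values chosen only to make the loop runnable.

```python
# Minimal sketch of the predictive-modeling loop: collect data, fit a
# statistical model, predict for a new case, validate on held-out data.
# All predictor and risk values are synthetic, for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# 1. Collect data: two predictors per driver (age, annual mileage)
#    plus an observed risk score.
X = rng.uniform([18, 5_000], [70, 30_000], size=(200, 2))
true_w = np.array([-0.01, 0.0001])
y = X @ true_w + 1.0 + rng.normal(0, 0.05, 200)

# 2. Formulate a statistical model: ordinary least squares with intercept.
A = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# 3. Predict for a new driver (age 30, 12,000 miles/year).
pred = np.array([30, 12_000, 1.0]) @ coef

# 4. Validate: mean squared error on fresh data; a large error would
#    prompt revising the model as the text describes.
X_new = rng.uniform([18, 5_000], [70, 30_000], size=(50, 2))
y_new = X_new @ true_w + 1.0 + rng.normal(0, 0.05, 50)
mse = np.mean((np.column_stack([X_new, np.ones(50)]) @ coef - y_new) ** 2)
```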

    Vibration Analysis for Engine fault Detection

    Get PDF
    In vibration analysis for engine fault detection, we use different visualization graphs. Today's machinery is evolving fast and machine parts are getting more complex, so it is difficult to find faults in a machine. In this paper, we explain how to find machine faults with the help of visualization, which makes faults easier to locate. We use Angular.js and D3.js for visualization, and the MQTT protocol to publish and subscribe to sensor data. In the automobile industry, machines are the main component; we detect their faults with the help of sensors and analyze the machine from the collected sensor data.
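    As a hedged sketch of the signal-processing side of this idea (the abstract itself focuses on visualization and MQTT transport): a fault often shows up as an abnormal peak in the frequency spectrum of the engine's vibration signal. The sampling rate, frequencies, and threshold below are hypothetical, chosen only for illustration.

```python
# Illustrative fault detection on a synthetic vibration signal: a
# healthy 50 Hz signature plus an injected 120 Hz "fault" component,
# located via an FFT and a simple amplitude threshold.
import numpy as np

fs = 1000                      # sampling rate in Hz (assumed)
t = np.arange(0, 1, 1 / fs)    # one second of samples

signal = np.sin(2 * np.pi * 50 * t) + 0.8 * np.sin(2 * np.pi * 120 * t)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), 1 / fs)

# Report frequencies whose amplitude exceeds half the strongest peak;
# anything beyond the known healthy signature would flag a fault.
peaks = freqs[spectrum > 0.5 * spectrum.max()]
```

    The same spectrum array could then be published over MQTT and rendered with D3.js, matching the pipeline the abstract describes.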

    A Novel PSO-FLANN Framework of Feature Selection and Classification for Microarray Data

    Get PDF
    Abstract: Feature selection is a method of finding appropriate features from a given dataset. In the last few years, a number of feature selection methods have been proposed for handling the curse of dimensionality in microarray data sets. The proposed framework uses two feature selection methods: principal component analysis (PCA) and factor analysis (FA). Typically, microarray data contains a large number of genes under a huge number of conditions, so a good classifier is needed to classify the data. In this paper, particle swarm optimization (PSO) is used because its parameters can be optimized for a given problem; in recent years, PSO has been used increasingly as a novel technique for solving complex problems. To classify the microarray datasets, a functional link artificial neural network (FLANN) is used, with PSO tuning the parameters of the FLANN. This PSO-FLANN classifier has been applied to classify three different microarray data sets, and the proposed model has also been compared with Discriminant Analysis (DA). Experiments on the three microarray datasets show that PSO-FLANN achieves more than 80% accuracy.
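    The parameter-tuning step can be illustrated with a minimal PSO loop. This is a generic sketch, not the paper's PSO-FLANN: a simple sphere function stands in for the classifier's loss surface, and the inertia and acceleration coefficients (0.7, 1.5, 1.5) are common textbook defaults, assumed here for illustration.

```python
# Minimal particle swarm optimization (PSO): particles move toward
# their own best-known position and the swarm's best-known position,
# gradually converging on a minimum of the loss function.
import numpy as np

def pso(loss, dim=4, n_particles=30, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))   # candidate parameter vectors
    v = np.zeros_like(x)                          # particle velocities
    pbest = x.copy()                              # each particle's best position
    pbest_val = np.array([loss(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()      # swarm's best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = x + v
        vals = np.array([loss(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Sphere function as a stand-in for a classifier's training loss;
# in the paper, the "position" would be the FLANN's weight vector.
best, best_val = pso(lambda p: np.sum(p ** 2))
```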

    A Coding Theoretic Model for Error-detecting in DNA Sequences

    Get PDF
    Abstract: A major problem in communication engineering is the transmission of information from source to receiver over a noisy channel. Many error-detecting and error-correcting codes have been developed to check for errors in the information digits. The main aim of these error-correcting codes is to encode the information digits, and to decode them in order to detect and correct common transmission errors. This information-theoretic concept helps in studying information transmission in biological systems and extends the field of coding theory into the biological domain. At the cellular level, the information in DNA is transformed into proteins. The sequence of bases Adenine (A), Thymine (T), Guanine (G), and Cytosine (C) in DNA may be considered a digital code that transmits genetic information. This paper investigates the existence of an error-detecting code in the DNA structure by encoding DNA sequences using a Hamming code.
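    A small worked instance of this idea: map each base to two bits and protect the bits with a Hamming(7,4) code, which detects and corrects any single-bit error. The base-to-bit mapping below is an assumption for illustration; the paper's actual encoding may differ.

```python
# Hamming(7,4) over DNA bases: two bases -> 4 data bits -> 7-bit
# codeword with parity bits at positions 1, 2, and 4. A nonzero
# syndrome gives the 1-based position of a single flipped bit.
BASE_BITS = {"A": (0, 0), "T": (0, 1), "G": (1, 0), "C": (1, 1)}  # assumed mapping

def hamming74_encode(d):
    """Encode 4 data bits as 7 bits in the order p1 p2 d1 p3 d2 d3 d4."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_syndrome(c):
    """Return 0 if no single-bit error, else the 1-based error position."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    return s1 + 2 * s2 + 4 * s3

bits = [*BASE_BITS["A"], *BASE_BITS["G"]]   # bases A, G -> data bits 0 0 1 0
code = hamming74_encode(bits)

corrupted = code.copy()
corrupted[4] ^= 1                            # flip one bit "in transmission"
pos = hamming74_syndrome(corrupted)          # syndrome locates the error
corrupted[pos - 1] ^= 1                      # correct it
```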

    Phylogenetic Tree Construction for DNA Sequences using Clustering Methods

    No full text
    Abstract: A phylogenetic tree, or evolutionary tree, is a graph that shows the evolutionary relationships among various biological species based on their genetic closeness. In the proposed model, individual samples are first selected and a matrix is generated that shows the genetic distances between individuals. Using this distance matrix, the samples are divided into clusters, and a phylogenetic tree for each cluster is then constructed independently. To find the clustering algorithm that gives the most effective clusters for biological data, the k-means, k-medoids, and density-based algorithms are compared. The comparative study concludes that density-based clustering (DBSCAN) is well suited to biological datasets because the algorithm performs efficiently on low-dimensional data and is robust to outliers and noise points. The phylogenetic trees for the individual clusters are formed and finally joined to create the final phylogenetic tree. The experimental evaluation found that DBSCAN gives better results, conveying appropriate information, and is faster than the other two methods.
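    The clustering step, from distance matrix to cluster labels, can be sketched with a minimal DBSCAN. This is a simplified generic implementation on toy one-dimensional distances, not the paper's code, and the eps/min_pts values are arbitrary illustrative choices.

```python
# Minimal DBSCAN working directly from a precomputed distance matrix,
# mirroring the abstract's pipeline: distance matrix -> clusters
# (each of which would then get its own phylogenetic tree).
import numpy as np

def dbscan(dist, eps, min_pts):
    """Label points from a distance matrix: cluster ids >= 0, noise = -1."""
    n = len(dist)
    labels = np.full(n, -1)
    cluster = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        neighbors = np.flatnonzero(dist[i] <= eps)
        if len(neighbors) < min_pts:
            continue                      # not a core point: stays noise
        labels[i] = cluster
        queue = list(neighbors)
        while queue:                      # grow the cluster from core points
            j = queue.pop()
            if labels[j] != -1:
                continue
            labels[j] = cluster
            nb = np.flatnonzero(dist[j] <= eps)
            if len(nb) >= min_pts:        # j is also a core point: expand
                queue.extend(nb)
        cluster += 1
    return labels

# Toy data: two tight groups plus one distant outlier, which DBSCAN
# leaves as noise rather than forcing into a cluster.
points = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2, 20.0])
dist = np.abs(points[:, None] - points[None, :])
labels = dbscan(dist, eps=0.5, min_pts=2)
```

    The robustness to outliers that the abstract highlights is visible here: the point at 20.0 is labeled noise (-1) instead of distorting either cluster.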

    Machine Learning Styles for Diabetic Retinopathy Detection: A Review and Bibliometric Analysis

    No full text
    Diabetic retinopathy (DR) is a medical condition caused by diabetes. The development of retinopathy significantly depends on how long a person has had diabetes. Initially, there may be no symptoms or just a slight vision problem due to impairment of the retinal blood vessels. Later, it may lead to blindness. Recognizing the early clinical signs of DR is very important for intervening in and effectively treating DR. Thus, regular eye check-ups are necessary to direct the person to a doctor for a comprehensive ocular examination and treatment as soon as possible to avoid permanent vision loss. Nevertheless, due to limited resources, universal manual screening is not feasible. As a result, emerging technologies such as artificial intelligence, used for the automatic detection and classification of DR, are alternative screening methodologies that make the system cost-effective. People have been working on artificial-intelligence-based technologies to detect and analyze DR in recent years. This study aimed to investigate the different machine learning styles chosen for diagnosing retinopathy. Thus, a bibliometric analysis was systematically done to discover different machine learning styles for detecting diabetic retinopathy. The data were exported from popular databases, namely, Web of Science (WoS) and Scopus, and analyzed using Biblioshiny and VOSviewer in terms of publications, top countries, sources, subject area, top authors, trend topics, co-occurrences, thematic evolution, factorial map, citation analysis, etc., which form the base for researchers to identify the research gaps in diabetic retinopathy detection and classification.

    Question Answer System: A State-of-Art Representation of Quantitative and Qualitative Analysis

    No full text
    Question Answer System (QAS) automatically answers the question asked in natural language. Due to the varying dimensions and approaches that are available, QAS has a very diverse solution space, and a proper bibliometric study is required to paint the entire domain space. This work presents a bibliometric and literature analysis of QAS. Scopus and Web of Science are two well-known research databases used for the study. A systematic analytical study comprising performance analysis and science mapping is performed. Recent research trends, seminal work, and influential authors are identified in performance analysis using statistical tools on research constituents. On the other hand, science mapping is performed using network analysis on a citation and co-citation network graph. Through this analysis, the domain’s conceptual evolution and intellectual structure are shown. We have divided the literature into four important architecture types and have provided the literature analysis of Knowledge Base (KB)-based and GNN-based approaches for QAS

    Deep Learning Approaches for Video Compression: A Bibliometric Analysis

    No full text
    Every kind of data needs physical storage. There has been an explosion in the volume of images, video, and other similar data types circulated over the internet. Internet users expect intelligible data, even under the pressure of multiple resource constraints such as bandwidth bottlenecks and noisy channels. Therefore, data compression is becoming a fundamental problem in the wider engineering community. There has been some related work on data compression using neural networks, and various machine learning approaches are currently applied in data compression techniques and tested to obtain better lossy and lossless compression results. A wide variety of efficient research is already available for image compression; however, this is not the case for video compression. Because of the explosion of big data and the heavy use of cameras in various places globally, around 82% of the data generated involve videos. Proposed approaches use Deep Neural Networks (DNNs), Recurrent Neural Networks (RNNs), Generative Adversarial Networks (GANs), and various variants of Autoencoders (AEs). All newly proposed methods aim to increase performance (reducing bitrate by up to 50% at the same data quality and complexity). This paper presents a bibliometric analysis and literature survey of the Deep Learning (DL) methods used in video compression in recent years. Scopus and Web of Science are well-known research databases, and the results retrieved from them are used for this analytical study. Two types of analysis are performed on the extracted documents: quantitative and qualitative. In the quantitative analysis, records are analyzed based on their citations, keywords, source of publication, and country of publication. The qualitative analysis provides information on DL-based approaches for video compression, as well as the advantages, disadvantages, and challenges of using them.
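    The autoencoder idea behind learned compression can be sketched at toy scale: an encoder maps a frame to a smaller latent code (the "compressed" representation) and a decoder reconstructs it, with both trained to minimize reconstruction error. This is a hedged illustration on synthetic "frames" using a tiny linear autoencoder; real learned video codecs use deep networks, motion modeling, and entropy coding on top of this principle.

```python
# Tiny linear autoencoder trained by gradient descent: 16-value frames
# are squeezed through a 4-value latent code and reconstructed. The
# frame data, sizes, and learning rate are all illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
frames = rng.normal(size=(64, 16))          # 64 toy "frames", 16 values each

latent = 4                                   # compressed size: 16 -> 4 (4x ratio)
W_enc = rng.normal(scale=0.1, size=(16, latent))
W_dec = rng.normal(scale=0.1, size=(latent, 16))

def mse(W_enc, W_dec):
    recon = frames @ W_enc @ W_dec
    return np.mean((recon - frames) ** 2)

loss_before = mse(W_enc, W_dec)
lr = 0.01
for _ in range(500):
    code = frames @ W_enc                    # encode (compress)
    recon = code @ W_dec                     # decode (reconstruct)
    err = recon - frames                     # residual driving the updates
    grad_dec = code.T @ err * 2 / frames.size
    grad_enc = frames.T @ (err @ W_dec.T) * 2 / frames.size
    W_enc -= lr * grad_enc
    W_dec -= lr * grad_dec
loss_after = mse(W_enc, W_dec)               # reconstruction error has dropped
```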
