8 research outputs found

    Content Modelling for unbiased Information Analysis

    Content is the form through which information is conveyed according to the user's requirements. The volume of content is huge and expected to grow exponentially, so separating useful data from useless data is a tedious task. The search engine is the interface between content and user, so content is designed with the search engine's perspective in mind. Content designed by organizations exploits user data to promote their products and services, mostly through inorganic means that inflate a content's quality measures and can mislead the reader. No reliable mechanism is available to analyse and disseminate such data. The gap between the results actually displayed to the user and the results the user expects can be minimized by introducing quality checks on the parameters used to assess content, helping to ensure that popularity does not take precedence over quality. Social networking sites can support user modelling so that the qualitative dissemination of content can be validated.


    Credibility Evaluation of User-generated Content using Novel Multinomial Classification Technique

    Awareness of the internet's features, easy access to data via mobile devices, and affordable data plans have generated a great deal of traffic on the internet. Digitization has brought opportunities as well as challenges: paperless transactions and payment transparency on one hand; data privacy, fake news, and cyber-attacks on the other. The extensive use of social media networks and e-commerce websites has produced a large volume of user-generated information, misinformation, and disinformation. The quality of information depends on several stages of its life cycle: generation, propagation, and consumption. Because the content is user-generated, it needs a quality assessment before consumption. The loss of information must also be examined with a machine learning approach, since the volume of content is extremely large. This research work focuses on a novel multinomial classification technique (based on the multinoulli distribution) to determine the quality of the information in given content. A single algorithm with some preprocessing is not sufficient to evaluate information content; several approaches are necessary. We propose a novel approach to calculate bias, for which a machine learning model is fitted to classify the content correctly. As an empirical study, the classification techniques are applied to the Rotten Tomatoes movie review dataset. The accuracy of the system is evaluated using the ROC curve, the confusion matrix, and MAP.
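    The abstract does not give the classifier's internals, but a multinomial classifier over review text can be sketched with off-the-shelf tools. The sketch below is a minimal, hypothetical illustration (the six reviews and their labels are invented, not from the paper's dataset): a multinomial naive Bayes model over word counts, evaluated with a confusion matrix as the abstract describes.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import confusion_matrix
from sklearn.naive_bayes import MultinomialNB

# toy stand-in for movie-review text; labels 1 = positive, 0 = negative
reviews = ["great plot and acting", "terrible pacing dull story",
           "wonderful cinematography", "boring and predictable",
           "loved the soundtrack", "awful dialogue"]
labels = [1, 0, 1, 0, 1, 0]

vec = CountVectorizer()                 # word-count features (multinomial model)
X = vec.fit_transform(reviews)
clf = MultinomialNB().fit(X, labels)    # multinoulli/multinomial event model

pred = clf.predict(X)
cm = confusion_matrix(labels, pred)     # one of the evaluation tools the paper uses
print(cm)
```

    On a real corpus, the same pipeline would be fitted on a train split and the ROC curve and MAP computed on a held-out test split.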

    Intrusion Detection System using the Hybrid Model of Classification Algorithm and Rule-Based Algorithm

    An intrusion detection system (IDS) is necessary to secure a system against various intrusions. Analysing the communication to categorize data as benign or malicious is crucial, and the cyber security provided by an IDS should not add significant processing time to that categorization. Nowadays machine learning techniques, in particular classification algorithms, are used to identify malicious data or an intrusion. The KDD Cup 99 dataset is used for experimentation. The performance of individual classification algorithms can be improved with a hybrid classification model that combines classification algorithms with rule-based algorithms; this blend of machine and human intelligence adds an extra layer of security. The algorithm is validated using precision, recall, F-measure, and mean average precision, and achieves an accuracy of 92.35 percent. The accuracy remains satisfactory even after human-written rules are combined with conventional machine learning classifiers, though there is still scope for classifying attacks more precisely.
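    The hybrid idea of the abstract, hand-written rules layered over a learned classifier, can be sketched in a few lines. Everything below is hypothetical: the three features, the threshold in the rule, and the toy records are illustrative stand-ins for KDD Cup 99 fields, not the paper's actual rules or data.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# toy connection records: [duration, src_bytes, failed_logins]
X = np.array([[1, 200, 0], [2, 300, 0], [1, 250, 0],
              [50, 9000, 4], [60, 8000, 5], [55, 9500, 3]])
y = np.array([0, 0, 0, 1, 1, 1])  # 0 = normal, 1 = intrusion

clf = DecisionTreeClassifier(random_state=0).fit(X, y)

def rule_layer(record):
    # hand-written rule: repeated failed logins always flag an intrusion
    return 1 if record[2] >= 3 else None

def hybrid_predict(record):
    # rules take precedence; the learned model handles everything else
    verdict = rule_layer(record)
    return verdict if verdict is not None else int(clf.predict([record])[0])

print(hybrid_predict([3, 400, 0]))  # falls through to the tree
print(hybrid_predict([2, 100, 4]))  # rule fires despite benign-looking bytes
```

    The design choice is that the rule layer catches patterns an analyst already trusts, while the classifier generalizes to traffic no rule anticipates.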

    Automated Video and Audio based Stress Detection using Deep Learning Techniques

    In today's world, stress has become an undoubtedly severe problem that affects people's health. Besides its impact on mental health, stress can alter a person's behaviour, thoughts, and feelings, and unchecked stress can contribute to chronic illnesses including high blood pressure, diabetes, and obesity. Early stress detection promotes a healthy lifestyle in society. This work demonstrates a deep learning-based method for identifying stress from facial expressions and speech signals. An image dataset collected from the web is used to construct and train a Convolutional Neural Network (CNN), which classifies the images into two categories: stressed and normal. A Recurrent Neural Network (RNN) classifies speech signals into the same two categories based on features extracted with Mel Frequency Cepstral Coefficients (MFCC); RNNs are considered well suited to sequential data because they carry past results forward to determine the final output.
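    The two core operations behind the pipeline, a convolution over image pixels for the CNN branch and a recurrent update over MFCC frames for the RNN branch, can be shown in plain NumPy. This is a didactic sketch only: the array sizes, random weights, and the 13-coefficient MFCC frames are assumed for illustration, not taken from the paper's trained models.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(img, kernel):
    # valid 2-D convolution: the core operation of the CNN branch
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def rnn_last_state(frames, Wx, Wh):
    # simple tanh RNN over MFCC-like frames; h carries past results forward
    h = np.zeros(Wh.shape[0])
    for x in frames:
        h = np.tanh(Wx @ x + Wh @ h)
    return h

face = rng.random((8, 8))       # stand-in for a grayscale face image
feat = conv2d(face, rng.random((3, 3)))
mfcc = rng.random((20, 13))     # 20 frames x 13 MFCC coefficients
state = rnn_last_state(mfcc, rng.random((4, 13)), rng.random((4, 4)))
print(feat.shape, state.shape)
```

    In the full system each branch would end in a softmax over the two classes (stressed, normal), with the convolution and recurrence implemented by a deep learning framework rather than by hand.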

    A Parameter Based Comparative Study of Deep Learning Algorithms for Stock Price Prediction

    Stock exchanges are places where buyers and sellers meet to trade shares in public companies, and they encourage investment: by raising capital, companies can grow, expand, and create jobs in the economy. These investments play a crucial role in promoting trade, economic expansion, and prosperity. In this work we compare three well-known deep learning algorithms: LSTM, GRU, and CNN. Our goal is to provide a thorough study of each algorithm and identify the best strategy when taking into account factors such as accuracy, memory utilization, and interpretability. To do this, we evaluate the performance of each approach separately and also recommend hybrid models, which combine the advantages of the individual methods. The aim of the research is to identify the model with the highest accuracy and the best outcomes for stock price prediction.
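    One concrete axis of a parameter-based comparison is each architecture's trainable parameter count, which drives the memory utilization the abstract mentions. The formulas below are the standard counts for a single LSTM/GRU layer (gates over input and recurrent weights plus biases) and a 1-D convolution; the feature and unit sizes are assumed for illustration.

```python
def lstm_params(n_in, n_hidden):
    # 4 gates, each with input weights, recurrent weights, and a bias
    return 4 * (n_hidden * n_in + n_hidden * n_hidden + n_hidden)

def gru_params(n_in, n_hidden):
    # 3 gates with the same structure
    return 3 * (n_hidden * n_in + n_hidden * n_hidden + n_hidden)

def conv1d_params(in_channels, out_channels, kernel_size):
    # one weight per (input channel x kernel tap) per filter, plus a bias
    return out_channels * (in_channels * kernel_size + 1)

n_features, n_units = 5, 64  # e.g. OHLCV inputs, 64 hidden units (assumed)
print(lstm_params(n_features, n_units),
      gru_params(n_features, n_units),
      conv1d_params(n_features, n_units, 3))
```

    At equal hidden size the GRU uses three quarters of the LSTM's parameters and the CNN far fewer still, which is one reason accuracy alone cannot decide the comparison.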

    Optimizing Hyperparameters for Enhanced LSTM-Based Prediction System Performance

    This research paper explores the application of deep learning and supervised machine learning algorithms, specifically Long Short-Term Memory (LSTM), to stock market prediction. The study focuses on the closing prices of three companies - Tata Steel, Apple, and Powergrid - using a dataset sourced from Yahoo Finance. The performance of the LSTM model is evaluated with RMSE, MAPE, and accuracy metrics, and the hyperparameters are calibrated to determine the optimal model configuration. The findings indicate that a single-layer LSTM model outperformed a multilayer LSTM model across all companies and evaluation metrics, and a comparison with existing research demonstrated the superiority of the proposed model. The study emphasizes the effectiveness of LSTM models for stock price prediction, underscores the significance of proper hyperparameter tuning, and concludes that a single-layer LSTM model can yield superior results compared to a multilayer model.
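    The selection loop the abstract describes, score each hyperparameter configuration by RMSE/MAPE and keep the best, can be sketched without the model itself. The two candidate configurations and their predictions below are invented placeholders standing in for trained LSTM variants; only the metric definitions and the selection rule are the point.

```python
import numpy as np

def rmse(y, p):
    return float(np.sqrt(np.mean((y - p) ** 2)))

def mape(y, p):
    return float(np.mean(np.abs((y - p) / y)) * 100)

y_true = np.array([100.0, 102.0, 101.0, 105.0])  # toy closing prices

# hypothetical out-of-sample predictions from two LSTM configurations
candidates = {
    ("layers=1", "units=50"): np.array([99.0, 101.5, 101.2, 104.0]),
    ("layers=2", "units=50"): np.array([97.0, 104.0, 99.0, 108.0]),
}

# calibration: keep the configuration with the lowest RMSE
best = min(candidates, key=lambda c: rmse(y_true, candidates[c]))
print(best, round(rmse(y_true, candidates[best]), 3),
      round(mape(y_true, candidates[best]), 2))
```

    In a full study the grid would also sweep units, epochs, and learning rate, with metrics computed on a held-out window rather than the training span.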

    Credibility Analysis of User-Designed Content Using Machine Learning Techniques

    Content is a user-designed form of information, for example an observation, perception, or review. This type of information is more relevant to users, as they can relate it to their own experience. The research problem is to identify whether content is credible and to what degree. Assessing such content is important to convey the right understanding of the information. Different techniques are used for content analysis, such as voting on the content, machine learning techniques, and manual assessment of the content and the quality of its information. In this research article, content analysis is performed on a movie review dataset collected from Kaggle. Features are extracted and the most relevant ones are shortlisted for experimentation. The effect of these features is analysed using baseline regression algorithms: Linear Regression, Lasso Regression, Ridge Regression, and Decision Tree. The contribution of the research is a heterogeneous ensemble regression algorithm for content credibility score assessment, which combines the above baseline methods. Moreover, these factors are toned down to obtain values closer to the gradient descent minimum. Different loss functions are examined, such as Mean Absolute Error, Mean Squared Error, LogCosh, Huber, and Jacobian, and the performance is optimized by introducing a balancing bias. The accuracy of the algorithm, 96.29%, is compared with the individual regression algorithms and with ensemble regression separately.
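    A heterogeneous ensemble over exactly these four baselines can be sketched with scikit-learn's `VotingRegressor`, which averages the base predictions. The features and target below are synthetic stand-ins for the paper's shortlisted review features and credibility score; the alpha values and tree depth are assumptions, not the paper's tuned settings.

```python
import numpy as np
from sklearn.ensemble import VotingRegressor
from sklearn.linear_model import Lasso, LinearRegression, Ridge
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.random((40, 3))  # stand-in for shortlisted review features
y = X @ np.array([0.5, 0.3, 0.2]) + 0.05 * rng.standard_normal(40)

# heterogeneous ensemble: average of the four baseline regressors
ens = VotingRegressor([
    ("lin", LinearRegression()),
    ("lasso", Lasso(alpha=0.01)),
    ("ridge", Ridge(alpha=1.0)),
    ("tree", DecisionTreeRegressor(max_depth=3, random_state=0)),
]).fit(X, y)

score = ens.score(X, y)  # R^2 of the averaged credibility-score predictions
print(round(score, 3))
```

    Averaging heterogeneous learners trades a little of the best single model's fit for lower variance, which is the usual motivation for this kind of ensemble.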