
    Analysis Role of ML and Big Data Play in Driving Digital Marketing's Paradigm Shift

    Marketing strategies are being revolutionized by the growth of user data and the expanding usability of Machine Learning (ML) and Big Data approaches. The wide variety of options that ML and Big Data applications provide for building and sustaining a competitive corporate edge is not fully understood by researchers and marketers. Based on a thorough analysis of academic and commercial literature, this article offers a classification of ML and Big Data use cases in marketing. We have identified 11 recurrent use cases for effectively employing ML and Big Data in marketing, grouped into 4 homogeneous families: fundamentals of the consumer, the consumer experience, decision-making, and financial impact. We discuss the taxonomy's recurring patterns and offer a conceptual framework for understanding and extending it, emphasizing the practical ramifications for marketers and academics.
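    The taxonomy is essentially a nested data structure: four families, each holding several use cases. The sketch below only illustrates that shape; the family names come from the abstract, while the use-case entries are hypothetical placeholders, since the paper's 11 specific use cases are not reproduced here.

    ```python
    # Illustrative sketch of the taxonomy's structure. Family names are from the
    # abstract; the use-case entries are hypothetical placeholders.
    taxonomy = {
        "fundamentals of the consumer": ["<use case>", "<use case>"],
        "consumer experience": ["<use case>", "<use case>", "<use case>"],
        "decision-making": ["<use case>", "<use case>", "<use case>"],
        "financial impact": ["<use case>", "<use case>", "<use case>"],
    }

    # Quick structural check: 4 families holding the recurrent use cases.
    print(len(taxonomy), "families;",
          sum(len(v) for v in taxonomy.values()), "use-case slots")
    ```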

    LQMPCS: Design of a Low-Complexity Q-Learning Model based on Proof-of-Context Consensus for Scalable Side Chains

    Single-chained blockchains are being rapidly replaced by sidechains (or sharded chains) due to their high Quality of Service (QoS) and low complexity. Existing sidechaining models use context-specific machine-learning optimization techniques, which limits their scalability when applied to real-time use cases. Moreover, these models are highly complex and require constant reconfiguration when applied to dynamic deployment scenarios. To overcome these issues, this paper proposes the design of a novel low-complexity Q-Learning model based on Proof-of-Context (PoC) consensus for scalable sidechains. The proposed model first describes a Q-Learning method for sidechain formation, which helps maintain high scalability even under large-scale traffic scenarios. This model is cascaded with a novel Proof-of-Context consensus that represents input data in context-independent formats. These formats enable high-speed consensus that uses the intent of the data instead of the raw data samples. To estimate this intent, a set of context-based classification models maps input data samples to distinctive categories; these models combine feature representation via Long Short-Term Memory (LSTM) networks with classification via 1D Convolutional Neural Networks (CNNs), and can be used in heterogeneous application scenarios. Owing to this representation of input data into context-based categories, the proposed model reduces mining delay by 8.3% and mining energy by 2.9%, while maintaining higher throughput and lower mining jitter than standard sidechaining techniques under similar use cases.
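    To make the Q-Learning side of this concrete, here is a minimal sketch of tabular Q-Learning routing transactions to sidechains. The state, action, and reward definitions (context category in, sidechain index out, reward encoding low delay/energy) are assumptions for illustration, not the paper's exact formulation, which additionally layers the Proof-of-Context consensus with LSTM feature extraction and 1D-CNN context classification on top.

    ```python
    # Hypothetical sketch: tabular Q-Learning that learns which sidechain to
    # route a transaction to, given its context category.
    import random
    import numpy as np

    N_CONTEXTS = 4        # assumed number of context categories from the classifier
    N_SIDECHAINS = 3      # assumed number of candidate sidechains
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

    Q = np.zeros((N_CONTEXTS, N_SIDECHAINS))

    def choose_sidechain(context: int) -> int:
        """Epsilon-greedy choice of sidechain for a transaction in this context."""
        if random.random() < EPSILON:
            return random.randrange(N_SIDECHAINS)
        return int(np.argmax(Q[context]))

    def update(context: int, sidechain: int, reward: float, next_context: int) -> None:
        """Standard Q-learning update; the reward could reward low delay/energy."""
        best_next = np.max(Q[next_context])
        Q[context, sidechain] += ALPHA * (reward + GAMMA * best_next - Q[context, sidechain])
    ```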

    A Systematic Survey of Classification Algorithms for Cancer Detection

    Cancer is a fatal disease induced by a number of inherited defects as well as a number of pathological changes. Malignant cells are dangerous abnormal growths that can develop in any part of the human body and pose a threat to life. To establish what treatment options are available, cancer, also referred to as a tumor, should be detected early and precisely. The classification of images for cancer diagnosis is a complex process influenced by a diverse set of parameters. In recent years, artificial vision frameworks have focused attention on image classification as a key problem. Most approaches currently rely on hand-crafted features to represent an image in a specific manner, and learning classifiers such as random forest and decision tree are used to reach a final judgment. The difficulty arises when there is a vast number of images to consider. Hence, in this paper we analyze, review, categorize, and discuss current breakthroughs in cancer detection that utilize machine learning techniques for image recognition and classification. We review machine learning approaches such as logistic regression (LR), Naïve Bayes (NB), K-nearest neighbors (KNN), decision tree (DT), and Support Vector Machines (SVM).
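    As a concrete illustration of how the reviewed classifiers (LR, NB, KNN, DT, SVM) are typically compared, the sketch below cross-validates each one on scikit-learn's built-in breast-cancer dataset. This is a generic example under assumed defaults, not the survey's own experiments or any of the reviewed papers' setups.

    ```python
    # Minimal, illustrative comparison of the reviewed classifier families on a
    # public cancer dataset; hyperparameters are assumed defaults.
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.linear_model import LogisticRegression
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.svm import SVC

    X, y = load_breast_cancer(return_X_y=True)

    models = {
        "LR": LogisticRegression(max_iter=5000),
        "NB": GaussianNB(),
        "KNN": KNeighborsClassifier(n_neighbors=5),
        "DT": DecisionTreeClassifier(max_depth=5),
        "SVM": SVC(kernel="rbf"),
    }

    for name, model in models.items():
        # Standardize features, then report 5-fold cross-validated accuracy.
        pipeline = make_pipeline(StandardScaler(), model)
        scores = cross_val_score(pipeline, X, y, cv=5)
        print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
    ```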

    A Review on Cloud Data Security Challenges and existing Countermeasures in Cloud Computing

    Cloud computing (CC) is among the most rapidly evolving computer technologies. It provides on-demand access to network resources, mainly information storage and processing capacity, without the need for direct user administration. CC is a collection of public and private data centers that offer clients a single platform throughout the Internet. The growing volume of personal and sensitive information acquired by supervisory authorities demands the use of the cloud not just for information storage but also for data processing on cloud resources. Nevertheless, due to safety issues raised by recent data leaks, it is recommended that unprotected sensitive data not be sent to public clouds. This document provides a detailed appraisal of the research on data protection and privacy problems, data encryption, and data obfuscation, including remedies for cloud data storage. The most up-to-date technologies and approaches for cloud data security are examined. This research also examines several current strategies for addressing cloud security concerns, and the performance of each approach is compared based on its characteristics, benefits, and shortcomings. Finally, we look at a few active research areas in cloud storage data security.
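    One countermeasure the review discusses is encrypting sensitive data on the client before it ever reaches a public cloud store. The sketch below is a generic illustration of that idea, not a method from the paper; the library choice (the cryptography package's Fernet primitive) and the upload stub are assumptions, and any real cloud SDK would replace the placeholder function.

    ```python
    # Generic client-side encryption sketch: encrypt before upload so the cloud
    # provider only ever sees ciphertext. Library choice and upload stub are
    # illustrative assumptions.
    from cryptography.fernet import Fernet

    def upload_to_cloud(blob: bytes, object_name: str) -> None:
        """Placeholder for a real cloud SDK call (e.g. S3, GCS, Azure Blob)."""
        print(f"uploading {len(blob)} encrypted bytes as {object_name}")

    key = Fernet.generate_key()          # keep this key outside the cloud provider's reach
    cipher = Fernet(key)

    record = b"patient_id=123;diagnosis=..."
    ciphertext = cipher.encrypt(record)  # authenticated symmetric encryption
    upload_to_cloud(ciphertext, "records/123.bin")

    # Only a holder of the key can recover the plaintext after download.
    assert cipher.decrypt(ciphertext) == record
    ```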

    Forecasting formation of a Tropical Cyclone Using Reanalysis Data

    The tropical cyclone (TC) formation process is one of the most complex natural phenomena; it is governed by various atmospheric, oceanographic, and geographic factors that vary with time and space. Despite several years of research, accurately predicting tropical cyclone formation remains a challenging task. While existing numerical models have inherent limitations, machine learning models fail to capture the spatial and temporal dimensions of the causal factors behind TC formation. In this study, a deep learning model is proposed that can forecast the formation of a tropical cyclone with a lead time of up to 60 hours with high accuracy. The model uses the high-resolution reanalysis data ERA5 (ECMWF Reanalysis 5th Generation) and the best track data IBTrACS (International Best Track Archive for Climate Stewardship) to forecast tropical cyclone formation in six ocean basins of the world. For a 60-hour lead time, the model achieves an accuracy in the range of 86.9%-92.9% across the six ocean basins. The model takes about 5-15 minutes to train, depending on the ocean basin and the amount of data used, and can produce predictions within seconds, making it suitable for real-life use.
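    For intuition, a model of this kind maps gridded reanalysis fields (several ERA5 variables over a latitude-longitude patch) to a probability that a cyclone forms within the lead time. The sketch below is a minimal, hypothetical convolutional classifier of that shape; the layer sizes, number of input variables, and grid dimensions are illustrative assumptions, as the abstract does not specify the paper's actual architecture.

    ```python
    # Hypothetical sketch: gridded ERA5-style fields in, probability of TC
    # formation within the lead time out. Sizes are illustrative assumptions.
    import torch
    import torch.nn as nn

    class TCFormationNet(nn.Module):
        def __init__(self, n_channels: int = 8):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(n_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(64, 1)  # logit for "TC forms within lead time"

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            z = self.features(x).flatten(1)
            return torch.sigmoid(self.head(z))

    # Example: a batch of 4 samples, 8 reanalysis variables on a 64x64 grid.
    model = TCFormationNet()
    fields = torch.randn(4, 8, 64, 64)
    print(model(fields).shape)  # torch.Size([4, 1]) formation probabilities
    ```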