
    QoE Modelling, Measurement and Prediction: A Review

    In mobile computing systems, users can access network services anywhere and anytime using mobile devices such as tablets and smartphones. These devices connect to the Internet via network or telecommunications operators. Users usually have expectations about the services provided to them by different operators. Users' expectations, along with additional factors such as cognitive and behavioural states, cost, and network quality of service (QoS), may determine their quality of experience (QoE). If users are not satisfied with their QoE, they may switch to different providers or may stop using a particular application or service. Thus, QoE measurement and prediction techniques may help users obtain personalized services from service providers and, in turn, help service providers reduce user-operator switchover. This paper presents a review of the state-of-the-art research in the area of QoE modelling, measurement and prediction. In particular, we investigate and discuss the strengths and shortcomings of existing techniques. Finally, we present future research directions for developing novel QoE measurement and prediction techniques.

    Survey of data mining approaches to user modeling for adaptive hypermedia

    The ability of an adaptive hypermedia system to create tailored environments depends mainly on the amount and accuracy of information stored in each user model. Some of the difficulties that user modeling faces are the amount of data available to create user models, the adequacy of that data, the noise within it, and the necessity of capturing the imprecise nature of human behavior. Data mining and machine learning techniques can handle large amounts of data and process uncertainty. These characteristics make them suitable for the automatic generation of user models that simulate human decision making. This paper surveys different data mining techniques that can be used to efficiently and accurately capture user behavior. The paper also presents guidelines indicating which techniques may be used more efficiently according to the task implemented by the application.
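    A minimal sketch of the kind of technique this survey covers: inducing a simple user model with a decision tree over logged interaction features. The feature names, labels, and data below are synthetic placeholders chosen for illustration, not taken from the paper; scikit-learn is assumed to be available.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical interaction log: [pages visited, avg. seconds per page, quiz score]
X = rng.random((500, 3)) * [30.0, 120.0, 100.0]
# Hypothetical label: preferred presentation style (0 = text-heavy, 1 = visual)
y = (0.4 * X[:, 1] + 0.6 * X[:, 2] + rng.normal(0, 10, 500) > 80).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# A shallow tree keeps the induced user model small and interpretable.
model = DecisionTreeClassifier(max_depth=4, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```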

    Classification hardness for supervised learners on 20 years of intrusion detection data

    This article consolidates analysis of an established intrusion detection dataset (NSL-KDD) and newer ones (ISCXIDS2012, CICIDS2017, CICIDS2018) through the use of supervised machine learning (ML) algorithms. The uniform analysis procedure makes the obtained results comparable and provides a stronger foundation for conclusions about the efficacy of supervised learners on the main classification task in network security. This research is motivated in part by the lack of adoption of these modern datasets. Starting with a broad scope, classification by algorithms from different families on both established and new datasets, expands the existing foundation and reveals the most opportune avenues for further inquiry. After obtaining baseline results, the classification task was made more difficult by reducing the available training data, both horizontally and vertically. This data reduction serves as a stress test to verify whether the very high baseline results hold up under increasingly harsh constraints. Ultimately, this work contains the most comprehensive set of results on the topic of intrusion detection through supervised machine learning. Researchers working on algorithmic improvements can compare their results to this collection, knowing that all results reported here were gathered through a uniform framework. This work's main contributions are the outstanding classification results on the current state-of-the-art datasets for intrusion detection and the conclusion that these methods show remarkable resilience in classification performance even when the amount of data to learn from is aggressively reduced.
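    A hedged sketch of the kind of uniform evaluation described above: a few supervised learners trained on an intrusion-detection-style dataset and then re-trained on progressively smaller fractions of the training rows (horizontal data reduction). The synthetic data stands in for a CICIDS-style flow export, and the model set and metric are illustrative choices, not the authors' exact pipeline.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import f1_score

# Synthetic stand-in for a CICIDS-style flow dataset: 30 numeric flow features,
# imbalanced binary label (benign vs. attack).
X, y = make_classification(n_samples=20000, n_features=30, n_informative=15,
                           weights=[0.8, 0.2], random_state=42)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42
)

models = {
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=42),
    "decision_tree": DecisionTreeClassifier(random_state=42),
}

# Horizontal data reduction: re-train on shrinking fractions of the training rows
# and check whether classification performance holds up.
for fraction in (1.0, 0.1, 0.01):
    n = int(len(X_train) * fraction)
    for name, model in models.items():
        model.fit(X_train[:n], y_train[:n])
        score = f1_score(y_test, model.predict(X_test))
        print(f"{name} @ {fraction:.0%} of training data: F1 = {score:.3f}")
```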

    Classification of web services using data mining algorithms and improved learning model

    As per the global digital report, 52.9% of the world population uses the internet, and 42% actively uses e-commerce, banking, and other online applications. Web services are software components accessed over networked communications that provide services to end users. To meet user requirements, developers must ensure quality architecture and quality of service, and users assess service quality through the ranking of web services. In this paper, we analyzed the QWS dataset and found that the most important parameters are best practices, successability, availability, response time, reliability, throughput, and compliance. We used various data mining techniques and conducted experiments to classify the QWS dataset into four categorical values (classes 1, 2, 3, and 4). The results were compared across random forest, artificial neural network, J48 decision tree, extreme gradient boosting, K-nearest neighbor, and support vector machine. Among the classifiers analyzed, eXtreme gradient boosting achieved the maximum accuracy of 98.44%, and random forest achieved 98.13%. In future work, the classification of web service quality can be extended to mixed attributes.
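    A hedged sketch of the comparison described above: several classifiers predicting a four-level quality class from QWS-style numeric attributes. The data is a synthetic stand-in for the QWS dataset, and scikit-learn's GradientBoostingClassifier stands in for the paper's extreme gradient boosting, so accuracy figures will not match those reported.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-in for the QWS dataset: 9 numeric quality attributes
# (response time, availability, throughput, ...), 4 service-quality classes.
X, y = make_classification(n_samples=2500, n_features=9, n_informative=6,
                           n_classes=4, random_state=7)

classifiers = {
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=7),
    "gradient_boosting": GradientBoostingClassifier(random_state=7),
    "knn": KNeighborsClassifier(n_neighbors=5),
    "svm": SVC(kernel="rbf"),
}

# 5-fold cross-validated accuracy for each classifier on the same data.
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
    print(f"{name:>17}: mean accuracy = {scores.mean():.3f}")
```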