37 research outputs found
Modelling Confidence for Quality of Service Assessment in Cloud Computing
The ability to assess the quality of a service (QoS) is important to the emerging cloud computing paradigm. When many cloud service providers exist offering many functionally identical services, the prospective users of these services will wish to use the one that offers the best quality. Many techniques and tools have been proposed to assess QoS, and the ability to deal with uncertainty surrounding the QoS verdicts given by any such technique or tool is essential. In this paper, we present a probabilistic model to quantify confidence in QoS assessment. More specifically, our measure of assessment reliability takes into account both the number of QoS data items used in the assessment and the variation of data in the dataset. Our experiments show that our confidence model can help consumers select services effectively based on their requirements.
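The two reliability inputs named in the abstract can be illustrated with a minimal sketch: confidence rises with the number of observations and falls with their variance. The combining formula and the saturation constant below are illustrative assumptions, not the paper's actual probabilistic model.

```python
import math

def assessment_confidence(qos_values, max_variance=1.0):
    """Toy confidence score for a QoS assessment.

    Combines the two factors from the abstract: sample size (more data,
    more confidence) and variance (more spread, less confidence).
    The exact formula here is illustrative, not the paper's model.
    """
    n = len(qos_values)
    if n == 0:
        return 0.0
    mean = sum(qos_values) / n
    var = sum((x - mean) ** 2 for x in qos_values) / n
    size_factor = 1 - math.exp(-n / 10)              # saturates as n grows
    spread_factor = max(0.0, 1 - var / max_variance) # shrinks with variance
    return size_factor * spread_factor

# Many consistent observations -> high confidence; two conflicting ones -> low
print(assessment_confidence([0.9] * 50))   # close to 1
print(assessment_confidence([0.9, 0.1]))  # small: few, highly varied samples
```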
Quality of service assessment over multiple attributes
The development of the Internet and World Wide Web has led to many services being offered electronically. When there is sufficient demand from consumers for a certain service, multiple providers may exist, each offering identical service functionality but with varying qualities. It is desirable therefore that we are able to assess the quality of a service (QoS), so that service consumers can be given additional guidance in selecting their preferred services. Various methods have been proposed to assess QoS using the data collected by monitoring tools, but they do not deal with multiple QoS attributes adequately. Typically these methods assume that the quality of a service may be assessed by first assessing the quality level delivered by each of its attributes individually, and then aggregating these in some way to give an overall verdict for the service. These methods, however, do not consider interaction among the multiple attributes of a service when some packaging of qualities exists (i.e. multiple levels of quality over multiple attributes for the same service). In this thesis, we propose a method that can give a better prediction in assessing QoS over multiple attributes, especially when the qualities of these attributes are monitored asynchronously. We do so by assessing QoS attributes collectively rather than individually, and we employ a k-nearest-neighbour-based technique to deal with asynchronous data. To quantify the confidence of a QoS assessment, we present a probabilistic model that integrates two reliability measures: the number of QoS data items used in the assessment and the variation of data in this dataset. Our empirical evaluation shows that the new method is able to give a better prediction over multiple attributes, and thus provides better guidance for consumers in selecting their preferred services than the existing methods do.
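The asynchronous-monitoring problem the thesis addresses can be sketched as follows: when an attribute's samples do not line up in time with a query point, the k temporally nearest observations can be averaged to estimate its value. This is a simplified illustration of the k-nearest-neighbour idea, not the thesis's exact algorithm.

```python
def knn_estimate(observations, query_time, k=3):
    """Estimate an attribute's value at `query_time` from asynchronous
    (timestamp, value) samples by averaging the k temporally nearest
    observations. Illustrative sketch of the kNN idea only.
    """
    ranked = sorted(observations, key=lambda tv: abs(tv[0] - query_time))
    nearest = ranked[:k]
    return sum(value for _, value in nearest) / len(nearest)

# Response-time samples (seconds, ms) taken at irregular moments
rt = [(0, 120), (5, 130), (9, 110), (20, 300)]
# Estimate the response time at t=7 from the three closest samples
print(knn_estimate(rt, query_time=7, k=3))  # (130 + 110 + 120) / 3 = 120.0
```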
Ontology for Task and Quality Management in Crowdsourcing
This paper suggests an ontology for representing tasks and quality control mechanisms in crowdsourcing systems. The ontology is built to support reasoning about tasks and quality control mechanisms, with the aim of improving task and quality management in crowdsourcing. The ontology is formalized in OWL (Web Ontology Language) and implemented using Protégé. The developed ontology consists of 19 classes, 7 object properties, and 32 data properties. The development methodology of the ontology involves three phases: Specification (identifying scope, purpose and competency questions), Conceptualization (data dictionary, UML, and instance creation), and finally Implementation and Evaluation.
A Lexicon-Based Approach to Build Reputation from Social Media
Nowadays, many social media platforms are widely used to express people’s opinions about their daily experiences and interests. These platforms encourage people to exchange and share information about a particular brand, company or even a political point of view. Consequently, huge amounts of data are available that can be extracted and analyzed to obtain useful knowledge. In this paper, we propose to build a reputation of a given service provider (i.e. brand, product or service) from the collected social media data. To do so, we have developed a lexicon as a basic component for sentiment polarity in Arabic idioms. That is, the lexicon is used to classify words extracted from tweets as either positive or negative. We use beta probability density functions to combine feedback from the lexicon to derive reputation scores. The experimental results show that our proposed approach is consistent with the results of sentiment analysis approaches.
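The beta-function combination step can be sketched in a few lines: counting positive and negative lexicon hits gives the parameters of a Beta distribution, whose expected value serves as the reputation score. The toy lexicon and word-level matching below are illustrative assumptions; the paper's Arabic lexicon and processing pipeline are more involved.

```python
def reputation_score(positives, negatives):
    """Expected value of Beta(p + 1, n + 1): maps positive/negative
    mention counts to a reputation score in (0, 1)."""
    return (positives + 1) / (positives + negatives + 2)

def classify(words, pos_lexicon, neg_lexicon):
    """Count lexicon hits in a token list (toy lexicon lookup)."""
    p = sum(w in pos_lexicon for w in words)
    n = sum(w in neg_lexicon for w in words)
    return p, n

# Hypothetical English mini-lexicon standing in for the Arabic one
pos = {"excellent", "reliable"}
neg = {"slow", "broken"}
p, n = classify("the service was excellent but slow".split(), pos, neg)
print(reputation_score(p, n))  # 0.5 -> one positive and one negative mention
```

With no feedback at all the score is the neutral prior 0.5, and it moves toward 0 or 1 as evidence accumulates, which is the usual appeal of the beta formulation.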
Recognizing Physical Activities for Spinal Cord Injury Rehabilitation Using Wearable Sensors
The research area of activity recognition is fast growing, with diverse applications. However, advances in this field have not yet been used to monitor the rehabilitation of individuals with spinal cord injury. Notably, relying on patient surveys to assess adherence can undermine the outcomes of rehabilitation. Therefore, this paper presents and implements a systematic activity recognition method to recognize physical activities performed by subjects during rehabilitation for spinal cord injury. In the method, raw sensor data are divided into fragments using a dynamic segmentation technique, providing higher recognition performance compared to the sliding window, which is a commonly used approach. To develop the method and build a predictive model, a machine learning approach was adopted. The proposed method was evaluated on a dataset obtained from a single wrist-worn accelerometer. The results demonstrated the effectiveness of the proposed method in recognizing all of the activities that were examined, and it achieved an overall accuracy of 96.86%.
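For context, the commonly used sliding-window baseline that the dynamic segmentation technique is compared against can be sketched in a few lines; the window size and overlap here are illustrative placeholders, not the study's parameters.

```python
def sliding_windows(signal, window_size, overlap=0.5):
    """Split a 1-D signal into fixed-size, overlapping windows:
    the conventional segmentation baseline in activity recognition."""
    step = max(1, int(window_size * (1 - overlap)))
    return [signal[i:i + window_size]
            for i in range(0, len(signal) - window_size + 1, step)]

data = list(range(10))
# window of 4 samples with 50% overlap -> step of 2
print(sliding_windows(data, window_size=4))
```

The fixed `window_size` is exactly the limitation the paper targets: a single size cannot fit activities of different durations.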
Dynamic Segmentation for Physical Activity Recognition Using a Single Wearable Sensor
Data segmentation is an essential process in activity recognition when using machine learning techniques. Previous studies on physical activity recognition have mostly relied on the sliding window approach for segmentation. However, choosing a fixed window size for multiple activities with different durations may affect recognition accuracy, especially when the activities belong to the same category (i.e., dynamic or static). This paper presents and verifies a new method for dynamic segmentation of physical activities performed during the rehabilitation of individuals with spinal cord injuries. To adaptively segment the raw data, signal characteristics are analyzed to determine the suitable type of boundaries. Then, the algorithm identifies the time boundaries that represent the start- and endpoints of each activity. To verify the method and build a predictive model, an experiment was conducted in which data were collected using a single wrist-worn accelerometer sensor. Compared with the sliding window approach, the proposed method achieved higher overall accuracy, with an improvement exceeding 5%, as well as greater model robustness. The results also demonstrated efficient physical activity segmentation using the proposed method, resulting in high classification performance for all activities considered.
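The adaptive boundary idea can be illustrated with a toy version: open a segment when the signal magnitude rises above a threshold and close it when the magnitude falls back, so segment lengths follow each activity's actual duration. The thresholding rule below is a deliberately simplified stand-in for the paper's signal-characteristic analysis.

```python
def dynamic_segments(signal, threshold):
    """Toy dynamic segmentation: a segment starts when |x| exceeds
    `threshold` and ends when it drops back below, so each segment's
    length adapts to the activity's duration (illustrative only).
    """
    segments, start = [], None
    for i, x in enumerate(signal):
        active = abs(x) > threshold
        if active and start is None:
            start = i                      # time boundary: activity begins
        elif not active and start is not None:
            segments.append((start, i))    # time boundary: activity ends
            start = None
    if start is not None:                  # signal ends mid-activity
        segments.append((start, len(signal)))
    return segments

# Two bursts of movement of different lengths in an accelerometer trace
sig = [0, 0, 2, 3, 2, 0, 0, 5, 4, 0]
print(dynamic_segments(sig, threshold=1))  # [(2, 5), (7, 9)]
```

Unlike a fixed window, the two segments here have different lengths (3 and 2 samples), matching the bursts they cover.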
A semiautomatic annotation approach for sentiment analysis
Sentiment analysis (SA) aims to extract users’ opinions automatically from their posts and comments. Almost all prior works have used machine learning algorithms. Recently, SA research has shown promising performance using the deep learning approach. However, deep learning is data-hungry and requires large datasets to learn, so data annotation takes more time. In this research, we proposed a semiautomatic approach using Naïve Bayes (NB) to annotate a new dataset in order to reduce the human effort and time spent on the annotation process. We created a dataset for the purpose of training and testing the classifier by collecting Saudi dialect tweets. The dataset produced from the semiautomatic model was then used to train and test deep learning classifiers to perform Saudi dialect SA. The accuracy achieved by the NB classifier was 83%. The trained semiautomatic model was used to annotate the new dataset before it was fed into the deep learning classifiers. The three deep learning classifiers tested in this research were convolutional neural network (CNN), long short-term memory (LSTM) and bidirectional long short-term memory (Bi-LSTM). Support vector machine (SVM) was used as the baseline for comparison. Overall, the performance of the deep learning classifiers exceeded that of SVM. The results showed that CNN reported the highest performance, Bi-LSTM outperformed LSTM and SVM, and LSTM in turn outperformed SVM. The proposed semiautomatic annotation approach is promising for increasing speed and saving time and effort in the annotation process.
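The semiautomatic step — train NB on a small hand-labeled seed set, then let it pre-label the rest of the corpus — can be sketched with a minimal multinomial Naïve Bayes classifier. The class below is a from-scratch toy with English placeholder text, not the paper's implementation or its Saudi-dialect pipeline.

```python
import math
from collections import Counter

class TinyNB:
    """Minimal multinomial Naive Bayes with Laplace smoothing,
    sketching the pre-labeling step of semiautomatic annotation."""

    def fit(self, texts, labels):
        self.classes = set(labels)
        self.word_counts = {c: Counter() for c in self.classes}
        self.class_counts = Counter(labels)
        for text, label in zip(texts, labels):
            self.word_counts[label].update(text.split())
        self.vocab = {w for c in self.classes for w in self.word_counts[c]}
        return self

    def predict(self, text):
        total_docs = sum(self.class_counts.values())
        def log_prob(c):
            total_words = sum(self.word_counts[c].values())
            lp = math.log(self.class_counts[c] / total_docs)  # class prior
            for w in text.split():
                # Laplace-smoothed word likelihood over the shared vocabulary
                lp += math.log((self.word_counts[c][w] + 1)
                               / (total_words + len(self.vocab)))
            return lp
        return max(self.classes, key=log_prob)

# Tiny hand-labeled seed set (placeholder text); NB then pre-labels new tweets
nb = TinyNB().fit(["good great service", "bad awful service"], ["pos", "neg"])
print(nb.predict("great service"))  # "pos" -> becomes a training label
```

In the paper's workflow these NB-assigned labels (only lightly reviewed by humans) form the larger dataset on which the CNN, LSTM and Bi-LSTM classifiers are then trained.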
