
    Critical review of the e-loyalty literature: a purchase-centred framework

    Over the last few years, the concept of online loyalty has been examined extensively in the literature, and it remains a topic of constant inquiry for both academics and marketing managers. The tremendous development of the Internet for both marketing and e-commerce settings, in conjunction with the growing desire of consumers to purchase online, has produced two main outcomes: (a) increasing numbers of business-to-consumer companies running businesses online and (b) the development of a variety of different e-loyalty research models. However, current research lacks a systematic review of the literature that provides a general conceptual framework of e-loyalty, which would help managers to understand their customers better, to take advantage of industry-related factors, and to improve their service quality. The present study critically synthesizes results from multiple empirical studies on e-loyalty. Our findings show that 62 instruments for measuring e-loyalty are currently in use, influenced predominantly by Zeithaml et al. (J Marketing. 1996;60(2):31-46) and Oliver (1997; Satisfaction: a behavioral perspective on the consumer. New York: McGraw Hill). Additionally, we propose a new general conceptual framework that divides the antecedents of e-loyalty, on the basis of the act of purchase, into pre-purchase, during-purchase and after-purchase factors. To conclude, a number of managerial implications are suggested to help marketing managers increase their customers' e-loyalty by making crucial changes in each purchase stage.

    Guarded Attribute Grammars and Publish/Subscribe for implementing distributed collaborative business processes with high data availability

    With the ever-increasing development of the Internet and the diversification of communication media, business processes of companies are increasingly collaborative and distributed. This contrasts with the traditional solutions deployed for their management, which are usually centralized and based on the activity flow or on the exchanged documents. Moreover, the users, who are usually the main actors in collaborations, are often relegated to second place. Recently, a distributed, data-driven and user-centric approach called Guarded Attribute Grammars (GAG) has been proposed for the modeling of such processes; it thus addresses most of these limitations. In this paper, we present an approach for implementing business processes modeled using GAG in which communications are carried out by publish/subscribe with redirection of subscriptions (pub/sub-RS). The pub/sub-RS scheme, which we propose, guarantees high data availability during process execution by ensuring that an actor, perceived as a subscriber, always receives the data it needs to perform a task as soon as the data is produced. Moreover, if the data is semi-structured and produced collaboratively and incrementally by several actors, its subscribers are notified as soon as one of its components (a prefix) is produced, while simultaneously and transparently being subscribed to the remaining components (the suffix).
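The redirection idea can be sketched as follows: a toy broker notifies a subscriber as soon as a prefix of a collaboratively produced datum is published, and transparently re-subscribes it to the topic carrying the remaining suffix. This is only an illustrative sketch, not the paper's pub/sub-RS implementation; the `Broker` class, the topic names and the `remainder` parameter are assumptions.

```python
from collections import defaultdict

class Broker:
    """Minimal pub/sub broker with redirection of subscriptions (a sketch)."""
    def __init__(self):
        self.subs = defaultdict(list)      # topic -> waiting subscriber callbacks
        self.published = {}                # topic -> value already produced

    def subscribe(self, topic, callback):
        # If the data was already produced, deliver it immediately.
        if topic in self.published:
            callback(topic, self.published[topic])
        else:
            self.subs[topic].append(callback)

    def publish(self, topic, value, remainder=None):
        """Publish `value` on `topic`. If `remainder` names a topic that will
        carry the yet-unproduced suffix, redirect current subscribers to it."""
        self.published[topic] = value
        for cb in self.subs.pop(topic, []):
            cb(topic, value)                   # notify as soon as the prefix exists
            if remainder is not None:
                self.subscribe(remainder, cb)  # transparent re-subscription

broker = Broker()
received = []
broker.subscribe("doc", lambda t, v: received.append((t, v)))
broker.publish("doc", "<intro>", remainder="doc/rest")   # prefix produced
broker.publish("doc/rest", "<body>")                     # suffix produced later
# received now holds both components, in production order
```

The subscriber never re-subscribes manually: the broker forwards its interest to the suffix topic, which is the high-availability property the abstract describes.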

    Mining a Small Medical Data Set by Integrating the Decision Tree and t-test

    Although several researchers have used statistical methods to show that aspiration followed by the injection of 95% ethanol left in situ (retention) is an effective treatment for ovarian endometriomas, very few discuss the different conditions that could lead to different recovery rates among patients. Therefore, this study combines a statistical method and decision tree techniques to analyze the postoperative status of ovarian endometriosis patients under different conditions. Since our collected data set is small, containing only 212 records, we use all of these data as training data. Therefore, instead of generating rules directly from the resulting tree, we first use the value at each node as a cut point to generate all possible rules from the tree. Then, using the t-test, we verify these rules to discover useful descriptive rules. Experimental results show that our approach can find new, interesting knowledge about recurrent ovarian endometriomas under different conditions.
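The rule-generation step can be sketched as: fit a decision tree, treat every internal-node threshold as a candidate cut point, and keep only the cuts whose two patient groups differ significantly under a t-test. The data below are synthetic, and the Welch variant and 0.05 level are assumptions; this illustrates the technique, not the study's actual analysis.

```python
import numpy as np
from scipy import stats
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(212, 3))                        # 212 records, as in the study
y = (X[:, 0] + rng.normal(scale=0.5, size=212) > 0).astype(int)
outcome = X[:, 0] + rng.normal(scale=0.3, size=212)  # e.g. a recovery score

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

rules = []
t = tree.tree_
for node in range(t.node_count):
    if t.children_left[node] == -1:                  # skip leaf nodes
        continue
    feat, cut = int(t.feature[node]), float(t.threshold[node])
    left, right = outcome[X[:, feat] <= cut], outcome[X[:, feat] > cut]
    tstat, p = stats.ttest_ind(left, right, equal_var=False)
    if p < 0.05:                                     # keep statistically supported cuts
        rules.append((feat, round(cut, 3), float(p)))

print(rules)  # (feature index, cut point, p-value) triples
```

Because every tree node yields a candidate rule, a small data set still produces many testable cuts, and the t-test filters out those the 212 records cannot support.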

    Incident Prioritisation for Intrusion Response Systems

    The landscape of security threats continues to evolve, with attacks becoming more serious and the number of vulnerabilities rising. To manage these threats, many security studies have been undertaken in recent years, mainly focusing on improving the efficiency of detection, prevention and response. Although security tools such as antivirus software and firewalls are available to counter attacks, Intrusion Detection Systems and related tools such as Intrusion Prevention Systems remain among the most popular approaches. There are hundreds of published works on intrusion detection that aim to increase the efficiency and reliability of detection, prevention and response systems. Whilst intrusion detection system technologies have advanced, there are still areas to explore, particularly the process of selecting appropriate responses. Supporting a variety of response options, such as proactive, reactive and passive responses, enables security analysts to select the most appropriate response in different contexts. In view of that, a methodical approach that separates important incidents from trivial ones is needed first. However, with thousands of incidents identified every day, relying upon manual processes to assess their importance and urgency is complicated, error-prone and time-consuming, so prioritising them automatically would help security analysts focus only on the most critical ones. Existing approaches provide various ways to prioritise incidents, but less attention has been given to adopting them in an automated response system. Although some studies have recognised the advantages of prioritisation, no follow-up studies investigating the effectiveness of the process have been published.
    This study concerns enhancing the incident prioritisation scheme to identify critical incidents based upon their criticality and urgency, in order to facilitate an autonomous mode for the response selection process in Intrusion Response Systems. To achieve this aim, the study proposes a novel framework which combines models and strategies identified from a comprehensive literature review. A model to estimate the risk level of incidents is established, named the Risk Index Model (RIM). Based on these risk levels, the Response Strategy Model (RSM) dynamically maps incidents to different types of response: serious incidents are mapped to active responses in order to minimise their impact, while incidents with less impact receive passive responses. The combination of these models provides a seamless way to map incidents automatically; however, it needs to be evaluated in terms of its effectiveness and performance. To demonstrate the results, an evaluation study with four stages was undertaken: a feasibility study of the RIM; comparison studies with industry standards such as the Common Vulnerability Scoring System (CVSS) and Snort; an examination of the effect of different strategies on the rating and ranking process; and a test of the effectiveness and performance of the RSM. With promising results gathered, a proof-of-concept study was conducted to demonstrate the framework using a live traffic network simulation with an online assessment mode via the Security Incident Prioritisation Module (SIPM), in order to investigate its effectiveness and practicality. Through the results gathered, this study demonstrates that the prioritisation process can feasibly be used to facilitate the response selection process in Intrusion Response Systems.
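A highly simplified sketch of how the two models interact: an incident's risk index determines whether it receives an active, reactive or passive response. The scoring formula, weights and thresholds below are illustrative assumptions, not the thesis's actual RIM/RSM parameters.

```python
from dataclasses import dataclass

@dataclass
class Incident:
    name: str
    severity: float      # 0..1, e.g. derived from a CVSS base score
    asset_value: float   # 0..1, importance of the targeted asset
    confidence: float    # 0..1, detector's confidence the alert is real

def risk_index(inc: Incident) -> float:
    """Toy risk index: product of severity, asset value and confidence."""
    return inc.severity * inc.asset_value * inc.confidence

def select_response(inc: Incident) -> str:
    """Toy response strategy: higher risk maps to more active responses."""
    r = risk_index(inc)
    if r >= 0.5:
        return "active"       # e.g. block the source IP, terminate the session
    elif r >= 0.2:
        return "reactive"     # e.g. raise alert priority for an analyst
    return "passive"          # e.g. log only

worm = Incident("worm-propagation", severity=0.9, asset_value=0.9, confidence=0.9)
scan = Incident("port-scan", severity=0.3, asset_value=0.4, confidence=0.8)
print(select_response(worm))  # active
print(select_response(scan))  # passive
```

The point of the mapping is that the response cost stays proportional to the estimated risk, so trivial incidents never trigger disruptive countermeasures.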
    The main contribution of this study is to have proposed, designed, evaluated and simulated a framework to support the incident prioritisation process for Intrusion Response Systems.
    Ministry of Higher Education in Malaysia and University of Malaya

    An Ensemble Self-Structuring Neural Network Approach to Solving Classification Problems with Virtual Concept Drift and its Application to Phishing Websites

    Classification in data mining is one of the well-known tasks that aim to construct a classification model from a labelled input data set. Most classification models are devoted to a static environment, where the complete training data set is presented to the classification algorithm. This data set is assumed to cover all the information needed to learn the pertinent concepts (rules and patterns) for classifying unseen examples into predefined classes. However, in dynamic (non-stationary) domains, the set of features (input data attributes) may change over time. For instance, some features that are considered significant at time Ti might become useless or irrelevant at time Ti+j. This situation results in a phenomenon called Virtual Concept Drift. Moreover, the set of features dropped at time Ti+j might become significant again in the future. Such a situation results in the so-called Cyclical Concept Drift, which is a direct consequence of the catastrophic forgetting dilemma. Catastrophic forgetting happens when the learning of new knowledge completely removes previously learned knowledge. Phishing is a dynamic classification problem in which virtual concept drift may occur. Yet, the virtual concept drift that occurs in phishing might be guided by some malevolent intelligent agent rather than occurring naturally. One reason why phishers keep changing the feature combinations used to create phishing websites might be that they are able to interpret the anti-phishing tool and thus pick a new set of features that can circumvent it. However, besides its generalisation capability, fault tolerance, and strong ability to learn, a Neural Network (NN) classification model is considered a black box.
    Hence, even someone with the skills to hack into an NN-based classification model would face difficulties in interpreting and understanding how the NN processes the input data to produce the final decision (assign a class value). In this thesis, we investigate the problem of virtual concept drift by proposing a framework that can keep pace with continuous changes in the input features. The proposed framework has been applied to the phishing website classification problem, and it shows competitive results with respect to various evaluation measures (harmonic mean (F1-score), precision, accuracy, etc.) when compared with several other data mining techniques. The framework creates an ensemble of classifiers (a group of classifiers) and offers a balance between stability (maintaining previously learned knowledge) and plasticity (learning knowledge from the newly offered training data set); hence, it can also handle cyclical concept drift. The classifiers that constitute the ensemble are created using an improved Self-Structuring Neural Network algorithm (SSNN). Traditionally, NN modelling relies on trial and error, which is a tedious and time-consuming process. The SSNN simplifies structuring NN classifiers with minimum intervention from the user. The framework evaluates the ensemble whenever a new data set chunk is collected. If the overall accuracy of the combined results from the ensemble drops significantly, a new classifier is created using the SSNN and added to the ensemble. Overall, the experimental results show that the proposed framework affords a balance between stability and plasticity and can effectively handle virtual concept drift when applied to the phishing website classification problem. Most of the chapters of this thesis have been the subject of publication.
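The framework's outer loop can be sketched as: evaluate the ensemble on each newly collected chunk and train a new member only when the combined accuracy degrades. Here a trivial one-feature threshold stump stands in for the SSNN, and the 0.7 accuracy trigger, the majority vote and the synthetic drift are assumptions for illustration.

```python
import numpy as np

class Stump:
    """Toy base learner: a single threshold on feature 0 (stand-in for the SSNN)."""
    def fit(self, X, y):
        cuts = np.unique(X[:, 0])
        accs = [((X[:, 0] > c).astype(int) == y).mean() for c in cuts]
        self.cut = cuts[int(np.argmax(accs))]
        return self
    def predict(self, X):
        return (X[:, 0] > self.cut).astype(int)

def ensemble_predict(models, X):
    votes = np.mean([m.predict(X) for m in models], axis=0)
    return (votes > 0.5).astype(int)               # strict majority vote

rng = np.random.default_rng(1)
ensemble = []
for chunk in range(5):
    shift = 0.0 if chunk < 3 else 3.0              # virtual drift at chunk 3
    X = rng.normal(loc=shift, size=(200, 1))
    y = (X[:, 0] > shift).astype(int)
    if not ensemble or (ensemble_predict(ensemble, X) == y).mean() < 0.7:
        ensemble.append(Stump().fit(X, y))         # plasticity: learn the new concept
    # stability: old members are kept, so a recurring concept is still covered
print(len(ensemble))  # one member per concept seen so far
```

Because old members are never discarded, a cyclically returning concept is answered by the member that learned it originally, which is exactly the stability/plasticity balance the abstract describes.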

    An Energy-Efficient and Reliable Data Transmission Scheme for Transmitter-based Energy Harvesting Networks

    Energy harvesting technology has been studied to overcome the limited power resources of sensor networks. This paper proposes a new data transmission period control and reliable data transmission algorithm for energy-harvesting-based sensor networks. Although previous studies have proposed communication protocols for such networks, further discussion is still needed. The proposed algorithm dynamically controls the data transmission period and the number of data transmissions based on environmental information. In this way, energy consumption is reduced and transmission reliability is improved. Simulation results show that the proposed algorithm is more efficient than EnOcean, a previous energy-harvesting-based communication standard, in terms of transmission success rate and residual energy.
    This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2012R1A1A3012227).
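The period-control idea can be sketched as: lengthen the transmission period when harvested energy is scarce and shorten it when the battery is full and harvesting is fast. All constants and the scaling formula below are illustrative assumptions, not the paper's actual algorithm.

```python
E_TX = 2.0            # energy cost of one transmission (arbitrary units, assumed)
E_MAX = 100.0         # battery capacity (assumed)
BASE_PERIOD = 10.0    # nominal seconds between transmissions (assumed)

def next_period(residual_energy: float, harvest_rate: float) -> float:
    """Pick the next transmission period from the node's energy state.

    The period never drops below the time needed to harvest the energy of
    one transmission, so the node cannot outspend its energy income.
    """
    if harvest_rate <= 0:
        return 4 * BASE_PERIOD              # no income: transmit rarely
    refill_time = E_TX / harvest_rate       # seconds to harvest one TX worth
    fullness = residual_energy / E_MAX      # 0..1
    # a fuller battery shortens the period; an empty one stretches it
    return max(refill_time, BASE_PERIOD * (1.5 - fullness))

print(next_period(90.0, 1.0))   # nearly full, good harvest -> short period
print(next_period(10.0, 0.05))  # nearly empty, poor harvest -> long period
```

Tying the period to the refill time is what keeps the node energy-neutral; the reliability part of the algorithm (retransmission count) would modulate E_TX per reporting round in the same way.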