
    Combining Website Search Engine Optimization with Advanced Web Log Analysis

    This paper provides a clear guideline for the development of an online decision-making tool. The importance of ranking for an organization's virtual presence through search engines is also discussed. The system described illustrates the complexity of the competition between organizations to be highly ranked by leading search engines. The system not only reports the owner's rankings but also compares an organization with its competitors, enabling it to decisively formulate an online development strategy for improving its ranking and thereby increasing its audience or critical mass. The system (Googalyser) utilizes Web logs and content analysis to provide decisive information to Web developers in order to improve the case's ranking through, for example, www.Google.com.
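    The abstract does not disclose Googalyser's implementation, but the kind of Web-log analysis it describes can be illustrated with a minimal sketch. Everything here is hypothetical: the log layout is assumed to be Apache Combined Log Format, and `referral_counts` is an invented helper that tallies which pages visitors reached via a google.com referrer.

```python
import re
from collections import Counter

# Assumes Apache Combined Log Format, e.g.:
#   1.2.3.4 - - [date] "GET /page HTTP/1.0" 200 2326 "http://referrer/" "UA"
# Googalyser's actual parsing is not described in the abstract.
LOG_RE = re.compile(
    r'"(?:GET|POST) (?P<path>\S+)[^"]*" \d{3} \S+ "(?P<referrer>[^"]*)"'
)

def referral_counts(log_lines):
    """Tally requested paths whose referrer was a google.com URL."""
    counts = Counter()
    for line in log_lines:
        m = LOG_RE.search(line)
        if m and "google.com" in m.group("referrer"):
            counts[m.group("path")] += 1
    return counts
```

    A developer could run such counts over daily logs to see which pages actually receive search-engine traffic, one input a ranking-improvement strategy might draw on.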

    Customer information systems for deregulated ASEAN countries

    In a similar fashion to Western countries, ASEAN countries are also gearing up for deregulation. Despite potentially different motivating drivers, the ultimate objectives are free-market competition leading to efficient pricing signals, as well as providing customers with the freedom to choose their electricity provider and benefit from competitive prices. This paper provides an ASEAN electricity market analysis and describes the development of electricity deregulation in ASEAN countries. By way of background, it also highlights the objectives of deregulation, the potential challenges, and the impact areas, focusing on existing Customer Information Systems (CIS) that have been developed by other utilities. In addition, this paper proposes a new framework for improving CIS for ASEAN utilities facing deregulation. The framework outlines a CIS with intelligent features enabling the utility to estimate and predict customer behaviour with respect to consumption patterns. It describes how these features can assist utility companies in retaining their existing customers as well as attracting new ones.
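    The abstract does not specify the prediction model behind the framework's "intelligent features". As a purely illustrative sketch, a naive moving-average forecast of a customer's consumption, paired with an invented at-risk retention heuristic, might look like:

```python
def forecast_consumption(history, window=3):
    """Naive moving-average forecast of next-period consumption (kWh).
    A placeholder only; the abstract does not name the actual model."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def at_risk_of_churn(history, drop_ratio=0.8, window=3):
    """Flag a customer whose forecast falls below drop_ratio of their
    long-run average consumption -- an invented retention heuristic."""
    long_run = sum(history) / len(history)
    return forecast_consumption(history, window) < drop_ratio * long_run
```

    A CIS built along these lines could surface at-risk customers to the utility's retention team before they switch provider.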

    Quality Dimensions for B2C E-Commerce

    Organizations have still not realized the full potential of e-commerce. One factor likely to influence the further adoption of e-commerce is the quality of the e-commerce system, as system quality impacts user satisfaction and hence use of the system. However, in order to improve the quality of any system, one first needs to identify measures to assess quality. Although other researchers have recognized the need for such measures, they have primarily focused on a single specific aspect of e-commerce systems, typically the user interface. In this paper, we identify the key components of e-commerce systems and synthesize existing research related to the quality of these components to arrive at a comprehensive list of quality dimensions, which in turn provide measures to assess the quality of e-commerce systems.

    Data mining as a tool for environmental scientists

    Over recent years, a huge library of data mining algorithms has been developed to tackle a variety of problems in fields such as medical imaging and network traffic analysis. Many of these techniques are far more flexible than classical modelling approaches and could be usefully applied to data-rich environmental problems. Certain techniques, such as Artificial Neural Networks, Clustering, Case-Based Reasoning and, more recently, Bayesian Decision Networks, have found application in environmental modelling, while other methods, for example classification and association rule extraction, have not yet been taken up on any wide scale. We propose that these and other data mining techniques could be usefully applied to difficult problems in the field. This paper introduces several data mining concepts and briefly discusses their application to environmental modelling, where data may be sparse, incomplete, or heterogeneous.
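    As one example of the as-yet-underused methods the paper mentions, association rule extraction can be sketched in a few lines. This is a brute-force, single-antecedent toy over sets of boolean observations (the function name and thresholds are illustrative, not from the paper):

```python
from itertools import combinations

def association_rules(records, min_support=0.5, min_confidence=0.7):
    """Extract rules A -> B from a list of attribute sets, keeping those
    meeting the support and confidence thresholds. Illustrative only;
    real miners (e.g. Apriori) prune the search space far more cleverly."""
    n = len(records)
    items = sorted(set().union(*records))
    rules = []
    for a, b in combinations(items, 2):
        for ante, cons in ((a, b), (b, a)):
            both = sum(1 for r in records if ante in r and cons in r)
            ante_count = sum(1 for r in records if ante in r)
            if ante_count and both / n >= min_support \
                    and both / ante_count >= min_confidence:
                rules.append((ante, cons, both / n, both / ante_count))
    return rules
```

    On environmental observations (e.g. daily records tagged "rain", "high_flow"), such rules expose co-occurrence patterns a modeller might then investigate.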

    Graph based Anomaly Detection and Description: A Survey

    Detecting anomalies in data is a vital task, with numerous high-impact applications in areas such as security, finance, health care, and law enforcement. While numerous techniques have been developed in past years for spotting outliers and anomalies in unstructured collections of multi-dimensional points, graph data are becoming ubiquitous, and techniques for structured graph data have recently come into focus. As objects in graphs have long-range correlations, a suite of novel techniques has been developed for anomaly detection in graph data. This survey aims to provide a general, comprehensive, and structured overview of the state-of-the-art methods for anomaly detection in data represented as graphs. As a key contribution, we give a general framework for the algorithms, categorized under various settings: unsupervised vs. (semi-)supervised approaches, for static vs. dynamic graphs, for attributed vs. plain graphs. We highlight the effectiveness, scalability, generality, and robustness aspects of the methods. Moreover, we stress the importance of anomaly attribution and highlight the major techniques that facilitate digging out the root cause, or the 'why', of the detected anomalies for further analysis and sense-making. Finally, we present several real-world applications of graph-based anomaly detection in diverse domains, including financial, auction, computer traffic, and social networks. We conclude our survey with a discussion of open theoretical and practical challenges in the field.
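    As a toy illustration of the unsupervised, plain-static-graph setting the survey categorizes, one can flag nodes whose degree is a statistical outlier. The function below is an invented teaching example, far simpler than the egonet- and community-based methods the survey actually covers:

```python
from statistics import mean, stdev

def degree_outliers(edges, z_thresh=2.0):
    """Flag nodes of an undirected graph whose degree z-score exceeds
    z_thresh. A deliberately naive unsupervised detector; real graph
    anomaly methods use much richer structural features."""
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    mu, sigma = mean(degree.values()), stdev(degree.values())
    if sigma == 0:
        return []
    return [n for n, d in degree.items() if (d - mu) / sigma > z_thresh]
```

    In a star-shaped graph, for instance, the hub's degree dwarfs every leaf's and is flagged; attribution here is trivial (the degree itself), whereas the survey's methods must explain far subtler structural anomalies.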

    Comprehensive survey on big data privacy protection

    In recent years, the ever-mounting problem of Internet phishing has been threatening the secure propagation of sensitive data over the web, resulting in either the outright decline of data distribution or inaccurate data distribution from several data providers. User privacy has therefore evolved into a critical issue in various data mining operations and has become a foremost criterion for allowing the transfer of confidential information. The intense surge in storing the personal data of customers (i.e., big data) has given rise to a new research area referred to as privacy-preserving data mining (PPDM). A key issue in PPDM is how to manipulate data using a specific approach so that a good data mining model can still be developed on the modified data, thereby meeting a specified privacy requirement with minimum loss of information for the intended data analysis task. The current review study examines how data mining tasks can be carried out without risking the security of individuals' sensitive information, particularly at the record level. To this end, PPDM techniques are reviewed and classified according to their approaches to data modification. Furthermore, a critical comparative analysis is performed of the advantages and drawbacks of PPDM techniques. This review study also elaborates on the existing challenges and unresolved issues in PPDM.
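    As one concrete instance of the data-modification approaches such reviews classify, record-level generalization replaces exact values with ranges, making individuals harder to re-identify while leaving aggregate patterns minable. The sketch below is illustrative only (the bucket size and field names are invented, and real PPDM schemes such as k-anonymity add further guarantees):

```python
def generalize_ages(records, bucket=10):
    """Replace each record's exact age with a bucket range, one simple
    record-level data modification used in privacy-preserving mining."""
    out = []
    for r in records:
        lo = (r["age"] // bucket) * bucket
        out.append({**r, "age": f"{lo}-{lo + bucket - 1}"})
    return out
```

    A miner can still learn, say, that the 30-39 bracket dominates a purchase pattern, without any record disclosing an exact age.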

    Data mining in biomedicine : current applications and further directions for research

    Author names used in this manuscript: S. K. Kwok; A. H. C. Tsang. 2009-2010 > Academic research: refereed > Publication in refereed journal. Accepted manuscript.