17 research outputs found

    Using web mining in e-commerce applications

    Nowadays, the web is an important part of our daily life and has become a key medium for doing business. Large companies are rethinking their business strategies and using the web to improve their operations. Conducting business on the web gives potential customers and partners a place where a company's products and services can be found. A business presence through a company web site has several advantages: unlike a physical office, it breaks the barriers of time and space. To stand out in the Internet economy, winning companies have realized that e-commerce is more than just buying and selling; appropriate strategies are key to improving competitive power. One effective technique used for this purpose is data mining, the process of extracting interesting knowledge from data. Web mining is the use of data mining techniques to extract information from web data. This article presents the three components of web mining: web usage mining, web structure mining and web content mining.
    Keywords: e-commerce, web mining, web content mining, web structure mining, web usage mining
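
    As a concrete illustration of the web usage mining component mentioned above, the sketch below groups requests from a web server access log into per-visitor page counts. It is a minimal example only: the Common Log Format and the file name access.log are assumptions, not details from the article.

        # Sketch: count page requests per visitor IP from a Common Log Format file.
        import re
        from collections import defaultdict

        LOG_PATTERN = re.compile(r'(?P<ip>\S+) \S+ \S+ \[[^\]]+\] "(?:GET|POST) (?P<url>\S+)')

        def page_counts(log_path):
            counts = defaultdict(lambda: defaultdict(int))  # ip -> url -> hits
            with open(log_path) as f:
                for line in f:
                    m = LOG_PATTERN.match(line)
                    if m:
                        counts[m.group("ip")][m.group("url")] += 1
            return counts

        if __name__ == "__main__":
            # "access.log" is a placeholder path.
            for ip, pages in page_counts("access.log").items():
                print(ip, dict(pages))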

    An indiscernibility approach for preprocessing of web log files

    The World Wide Web has seen spectacular growth, not only in the number of websites and the volume of information, but also in the number of visitors. Web log files contain a tremendous amount of information about user traffic and behavior. Eliminating noise from these logs requires a large amount of preprocessing, which is one of the challenging tasks in web usage mining. This paper proposes an indiscernibility approach based on rough set theory for preprocessing web log files.
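
    For readers unfamiliar with the rough-set notion used here, the indiscernibility relation IND(B) partitions records into equivalence classes: two records are indiscernible when they agree on every attribute in B. The sketch below applies this idea to toy log records; the attribute names are invented for illustration and the paper's actual preprocessing steps may differ.

        # Sketch: indiscernibility classes of log records with respect to attribute set B.
        from collections import defaultdict

        def indiscernibility_classes(records, attributes):
            """Group records that agree on every attribute in `attributes`."""
            classes = defaultdict(list)
            for rec in records:
                key = tuple(rec[a] for a in attributes)
                classes[key].append(rec)
            return list(classes.values())

        logs = [
            {"ip": "10.0.0.1", "agent": "Mozilla", "status": 200},
            {"ip": "10.0.0.1", "agent": "Mozilla", "status": 404},
            {"ip": "10.0.0.2", "agent": "Bot",     "status": 200},
        ]
        # Records indiscernible w.r.t. B = {ip, agent} collapse together, a basis
        # for detecting redundant or noisy entries during preprocessing.
        print(indiscernibility_classes(logs, ["ip", "agent"]))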

    Identifying Interesting Knowledge Factors from Big Data for Effective E-Market Prediction

    Knowledge management plays an important role in disseminating valuable information and in improving organizational decision-making. Knowledge creation involves analyzing data and transforming information into knowledge, and it is evident that data mining and predictive analytics contribute a major part in creating knowledge and forecasting future outcomes. The ability to predict the performance of advertising campaigns can become an asset to advertisers. Tools like Google Analytics capture user logs, and a large amount of information resides in those logs, ranging from visitor location and visitor flow throughout the website to the various actions a visitor performs after clicking an ad. This research is an effort to identify key knowledge factors in the marketing sector that can further be optimized for effective e-market prediction.
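
    One way such logs are turned into candidate knowledge factors is by aggregating raw rows into campaign-level rates that a predictive model can consume. The sketch below is illustrative only: the field names are hypothetical and are not taken from the paper or from the Google Analytics API.

        # Sketch: derive click-through and conversion rates per campaign from log rows.
        from collections import defaultdict

        def campaign_factors(rows):
            agg = defaultdict(lambda: {"impressions": 0, "clicks": 0, "conversions": 0})
            for r in rows:
                a = agg[r["campaign"]]
                a["impressions"] += r["impressions"]
                a["clicks"] += r["clicks"]
                a["conversions"] += r["conversions"]
            return {
                c: {
                    "ctr": a["clicks"] / a["impressions"] if a["impressions"] else 0.0,
                    "cvr": a["conversions"] / a["clicks"] if a["clicks"] else 0.0,
                }
                for c, a in agg.items()
            }

        rows = [
            {"campaign": "spring_sale", "impressions": 1000, "clicks": 40, "conversions": 5},
            {"campaign": "spring_sale", "impressions": 800,  "clicks": 25, "conversions": 2},
        ]
        print(campaign_factors(rows))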

    Web-log mining for predictive web caching


    Combining Coauthorship Network and Content for Literature Recommendation

    This paper studies literature recommendation approaches that use both content features and coauthorship relations of articles in literature databases. Most literature databases allow data access (via site subscription) without identifying individual users, so task-focused recommendation is more appropriate in this context. Previous work mostly utilizes content and usage logs for making task-focused recommendations. More recent work has started to incorporate the coauthorship network and found it beneficial when the specified articles preferred by authors are similar in content; however, it was also found that recommendation based on content features achieves better performance under other circumstances. Therefore, in this work we propose to incorporate both content and the coauthorship network in making task-focused recommendations. Three hybrid methods, namely switching, proportional, and fusion, are developed and compared. Our experimental results show that, in general, the proposed hybrid approach performs better than approaches that utilize only one source of knowledge. In particular, the fusion method tends to have higher recommendation accuracy for articles of higher ranks. Moreover, the content-based approach is more likely to recommend articles of low fidelity, whereas the coauthorship network-based approach is the least likely to do so.
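
    To make the fusion idea concrete, a hybrid score can blend a content-similarity signal with a coauthorship-overlap signal using a mixing weight. The sketch below is a minimal interpretation under assumed choices (bag-of-words cosine similarity, Jaccard overlap of author sets, weight alpha); the paper's actual fusion method may differ.

        # Sketch: fuse content similarity and coauthorship overlap into one score.
        from collections import Counter
        from math import sqrt

        def cosine(text_a, text_b):
            ca, cb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
            dot = sum(ca[t] * cb[t] for t in ca)
            na = sqrt(sum(v * v for v in ca.values()))
            nb = sqrt(sum(v * v for v in cb.values()))
            return dot / (na * nb) if na and nb else 0.0

        def author_jaccard(authors_a, authors_b):
            a, b = set(authors_a), set(authors_b)
            return len(a & b) / len(a | b) if a | b else 0.0

        def fusion_score(query, candidate, alpha=0.6):
            return (alpha * cosine(query["abstract"], candidate["abstract"])
                    + (1 - alpha) * author_jaccard(query["authors"], candidate["authors"]))

        query = {"abstract": "web usage mining of server logs", "authors": ["Lee", "Chen"]}
        candidate = {"abstract": "mining web logs for usage patterns", "authors": ["Chen", "Wu"]}
        print(round(fusion_score(query, candidate), 3))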

    A comparative study of the AHP and TOPSIS methods for implementing load shedding scheme in a pulp mill system

    The advancement of technology has encouraged mankind to design and create useful equipment and devices that users can fully utilize in various applications. A pulp mill is a heavy industry that consumes a large amount of electricity in its production, so any equipment malfunction may cause major losses to the company. In particular, the breakdown of one generator would cause the other generators to become overloaded; loads are then shed until the remaining generators can supply the remaining demand, and once the fault has been fixed the load shedding scheme can be deactivated. Load shedding is therefore the best way to handle such a condition: selected loads are shed under this scheme in order to protect the generators from damage. Multi Criteria Decision Making (MCDM) can be applied to determine the load shedding scheme in an electric power system. In this thesis two methods, the Analytic Hierarchy Process (AHP) and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), are introduced and applied, and a series of analyses is conducted. Of the two methods, the results show that TOPSIS is the better MCDM technique for the load shedding scheme in the pulp mill system, as it gives the higher percentage effectiveness of load shedding. The results of applying the AHP and TOPSIS analyses to the pulp mill system are very promising.
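
    For context, TOPSIS ranks alternatives by their closeness to an ideal solution: normalize the decision matrix, apply criterion weights, locate the ideal and anti-ideal points, and score each alternative by its relative distance to them. The sketch below shows this generic procedure; the decision matrix, weights, and criteria directions are invented placeholders, not the thesis's data.

        # Sketch: generic TOPSIS ranking of hypothetical load-shedding schemes.
        import numpy as np

        def topsis(matrix, weights, benefit):
            X = np.asarray(matrix, dtype=float)
            # Vector-normalize each criterion column, then apply the weights.
            V = X / np.sqrt((X ** 2).sum(axis=0)) * np.asarray(weights, dtype=float)
            # Ideal best/worst depend on whether a criterion is benefit or cost.
            ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
            anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
            d_best = np.sqrt(((V - ideal) ** 2).sum(axis=1))
            d_worst = np.sqrt(((V - anti) ** 2).sum(axis=1))
            return d_worst / (d_best + d_worst)  # closeness: higher is better

        # Three hypothetical schemes scored on three placeholder criteria.
        scores = topsis(
            matrix=[[0.8, 120, 3], [0.6, 90, 5], [0.9, 150, 2]],
            weights=[0.5, 0.3, 0.2],
            benefit=[True, False, True],
        )
        print(scores.argsort()[::-1])  # ranking of alternatives, best first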

    Software Defect Association Mining and Defect Correction Effort Prediction

    Much current software defect prediction work concentrates on the number of defects remaining in a software system. In this paper, we present association rule mining based methods to predict defect associations and defect-correction effort, to help developers detect software defects and assist project managers in allocating testing resources more effectively. We applied the proposed methods to the SEL defect data, consisting of more than 200 projects over more than 15 years. The results show that for defect association prediction the accuracy is very high and the false negative rate is very low. Likewise, the accuracy of both defect isolation effort prediction and defect correction effort prediction is high. We compared the defect-correction effort prediction method with other types of methods (PART, C4.5, and Naïve Bayes) and show that accuracy is improved by at least 23%. We also evaluated the impact of support and confidence levels on prediction accuracy, false negative rate, false positive rate, and the number of rules. We found that higher support and confidence levels may not result in higher prediction accuracy, and that a sufficient number of rules is a precondition for high prediction accuracy.
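
    The support and confidence thresholds discussed above are the standard filters in association rule mining: support is the fraction of records containing both sides of a rule, and confidence is the fraction of records containing the antecedent that also contain the consequent. The sketch below mines simple pairwise rules under those filters; the defect "transactions" are synthetic and the paper's full rule miner on the SEL data is not reproduced here.

        # Sketch: pairwise association rules filtered by minimum support and confidence.
        from itertools import combinations, permutations
        from collections import Counter

        def pairwise_rules(transactions, min_support=0.3, min_confidence=0.6):
            n = len(transactions)
            item_count = Counter()
            pair_count = Counter()
            for t in transactions:
                items = set(t)
                item_count.update(items)
                pair_count.update(frozenset(p) for p in combinations(items, 2))
            rules = []
            for a, b in permutations(item_count, 2):
                pair_c = pair_count[frozenset((a, b))]
                support = pair_c / n
                confidence = pair_c / item_count[a]
                if support >= min_support and confidence >= min_confidence:
                    rules.append((a, b, support, confidence))
            return rules

        defects = [{"interface", "logic"}, {"interface", "logic"}, {"data"}, {"interface"}]
        for a, b, sup, conf in pairwise_rules(defects):
            print(f"{a} -> {b}  support={sup:.2f} confidence={conf:.2f}")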