
    Data mining technology for the evaluation of learning content interaction

    Interactivity is central to the success of learning. In e-learning and other educational multimedia environments, the evaluation of interaction and behaviour is particularly crucial. Data mining, a non-intrusive and objective analysis technology, is proposed as the central evaluation technology for analysing how computer-based educational environments are used and, in particular, how learners interact with educational content. Basic mining techniques are reviewed and their application in a Web-based third-level course environment is illustrated. Analytic models capturing interaction aspects of both the application domain (learning) and the software infrastructure (interactive multimedia) are required for the meaningful interpretation of mining results.
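    A minimal sketch of the kind of usage analysis the abstract describes, assuming a hypothetical clickstream log with learner_id, content_id and timestamp fields; the CSV format and field names are illustrative, not the paper's actual data model.

        # Hypothetical sketch: summarising learner interaction with content items
        # from a clickstream log (assumed fields: learner_id, content_id, timestamp).
        import csv
        from collections import defaultdict

        def interaction_counts(log_path):
            counts = defaultdict(int)   # (learner_id, content_id) -> number of events
            with open(log_path, newline="") as f:
                for row in csv.DictReader(f):
                    counts[(row["learner_id"], row["content_id"])] += 1
            return counts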

    A gentle transition from Java programming to Web Services using XML-RPC

    Exposing students to leading-edge vocational areas of relevance, such as Web Services, can be difficult. We show a lightweight approach that embeds a key component of Web Services within a Level 3 BSc module in Distributed Computing. We present a ready-to-use collection of lecture slides and student activities based on XML-RPC. In addition, we show that this material addresses the central topics in the context of Web Services as identified by Draganova (2003).
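    The module materials described above are Java-based; as a language-neutral sketch of the XML-RPC exchange itself, the following uses Python's standard xmlrpc modules. The port, endpoint and method name ("sum") are invented for illustration.

        # Minimal XML-RPC server and client using the standard library; the
        # registered method name and port are illustrative only.
        from xmlrpc.server import SimpleXMLRPCServer
        import xmlrpc.client

        def serve():
            server = SimpleXMLRPCServer(("localhost", 8080), allow_none=True)
            server.register_function(lambda a, b: a + b, "sum")
            server.serve_forever()

        def call():
            proxy = xmlrpc.client.ServerProxy("http://localhost:8080/")
            print(proxy.sum(3, 4))  # marshalled as an XML-RPC <methodCall> over HTTP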

    Collaborative Categorization on the Web

    Collaborative categorization is an emerging direction for research and innovative applications. Arguably, collaborative categorization on the Web is an especially promising emerging form of collaborative Web systems because of both, the widespread use of the conventional Web and the emergence of the Semantic Web providing with more semantic information on Web data. This paper discusses this issue and proposes two approaches: collaborative categorization via category merging and collaborative categorization proper. The main advantage of the first approach is that it can be rather easily realized and implemented using existing systems such as Web browsers and mail clients. A prototype system for collaborative Web usage that uses category merging for collaborative categorization is described and the results of field experiments using it are reported. The second approach, called collaborative categorization proper, however, is more general and scales better. The data structure and user interface aspects of an approach to collaborative categorization proper are discussed

    A Survey on Web Usage Mining

    The World Wide Web has become a very popular and interactive medium for transferring information. The Web is huge, diverse, and dynamic, and thus raises issues of scalability, multimedia data, and temporal data. Its growth has resulted in a vast amount of information that is now freely available for user access. Many kinds of data have to be handled and organized so that they can be accessed by many users effectively and efficiently. The use of data mining methods and knowledge discovery on the Web is therefore the focus of a growing number of researchers. Web usage mining is a data mining method that can be useful for recommending Web usage patterns with the help of users' sessions and behavior. Web usage mining comprises three processes: preprocessing, pattern discovery, and pattern analysis. Several techniques already exist for Web usage mining, each with its own advantages and disadvantages. This paper presents a survey of some of the existing Web usage mining techniques.
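    A hedged sketch of the three processes on a simplified clickstream: sessionization by a 30-minute inactivity gap (preprocessing), counting consecutive page pairs (pattern discovery), and ranking the most frequent pairs (pattern analysis). The event format and timeout are assumptions for illustration.

        # Sketch of the three web-usage-mining steps on a simplified log.
        from collections import Counter

        TIMEOUT = 30 * 60  # session gap in seconds (assumed)

        def sessionize(events):
            # events: (user, timestamp, page) tuples, sorted by timestamp
            sessions, last_seen = {}, {}
            for user, ts, page in events:
                if user not in sessions or ts - last_seen[user] > TIMEOUT:
                    sessions.setdefault(user, []).append([])   # start a new session
                sessions[user][-1].append(page)
                last_seen[user] = ts
            return [s for per_user in sessions.values() for s in per_user]

        def frequent_pairs(sessions, top=5):
            # count consecutive page pairs and return the most frequent ones
            pairs = Counter(p for s in sessions for p in zip(s, s[1:]))
            return pairs.most_common(top)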

    A Survey on Web Mining Techniques and Applications

    InterCloud: Utility-Oriented Federation of Cloud Computing Environments for Scaling of Application Services

    Cloud computing providers have set up several data centers at different geographical locations over the Internet in order to optimally serve the needs of their customers around the world. However, existing systems do not support mechanisms and policies for dynamically coordinating load distribution among different Cloud-based data centers in order to determine the optimal location for hosting application services so as to achieve reasonable QoS levels. Further, Cloud computing providers are unable to predict the geographic distribution of users consuming their services, so load coordination must happen automatically, and the distribution of services must change in response to changes in the load. To counter this problem, we advocate the creation of a federated Cloud computing environment (InterCloud) that facilitates just-in-time, opportunistic, and scalable provisioning of application services, consistently achieving QoS targets under variable workload, resource, and network conditions. The overall goal is to create a computing environment that supports dynamic expansion or contraction of capabilities (VMs, services, storage, and databases) for handling sudden variations in service demands. This paper presents the vision, challenges, and architectural elements of InterCloud for utility-oriented federation of Cloud computing environments. The proposed InterCloud environment supports scaling of applications across multiple vendor clouds. We have validated our approach by conducting a set of rigorous performance evaluation studies using the CloudSim toolkit. The results demonstrate that the federated Cloud computing model has immense potential, as it offers significant performance gains in response time and cost savings under dynamic workload scenarios.
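    The following is not CloudSim code; it is a generic, illustrative sketch of the federation-level idea of coordinating load across data centers: place a new request at the data center with the most spare capacity within a latency bound. All names and thresholds are hypothetical.

        # Generic illustration (not the paper's implementation) of a federation-level
        # placement policy: pick the data center with the most spare VM capacity
        # among those within a latency bound for the requesting user.
        from dataclasses import dataclass

        @dataclass
        class DataCenter:
            name: str
            capacity: int      # total VM slots
            load: int          # VMs currently hosted
            latency_ms: float  # latency to the requesting user

        def place(request_vms, centers, max_latency_ms=100.0):
            eligible = [c for c in centers
                        if c.latency_ms <= max_latency_ms
                        and c.capacity - c.load >= request_vms]
            if not eligible:
                return None    # no in-bound capacity: defer or scale out elsewhere
            return max(eligible, key=lambda c: c.capacity - c.load)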

    Refining the use of the web (and web search) as a language teaching and learning resource

    The web is a potentially useful corpus for language study because it provides examples of language that are contextualized and authentic, and is large and easily searchable. However, web contents are heterogeneous in the extreme, uncontrolled, and hence 'dirty,' and exhibit features different from the written and spoken texts in other linguistic corpora. This article explores the use of the web and web search as a resource for language teaching and learning. We describe how a particular derived corpus containing a trillion word tokens in the form of n-grams has been filtered by word lists and syntactic constraints and used to create three digital library collections, linked with other corpora and the live web, that exploit the affordances of web text and mitigate some of its constraints.
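    A small sketch of the word-list filtering step described above, assuming each n-gram record is a tab-separated line of the form "token ... token<TAB>count" and that filtering keeps n-grams whose tokens all appear in a given word list and whose count meets a threshold; the record format and threshold are assumptions.

        # Filter a web-derived n-gram table against a word list (illustrative).
        def filter_ngrams(lines, wordlist, min_count=40):
            allowed = {w.lower() for w in wordlist}
            for line in lines:
                ngram, count = line.rstrip("\n").split("\t")
                tokens = ngram.lower().split()
                if int(count) >= min_count and all(t in allowed for t in tokens):
                    yield tokens, int(count)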

    From Frequency to Meaning: Vector Space Models of Semantics

    Computers understand very little of the meaning of human language. This profoundly limits our ability to give instructions to computers, the ability of computers to explain their actions to us, and the ability of computers to analyse and process text. Vector space models (VSMs) of semantics are beginning to address these limits. This paper surveys the use of VSMs for semantic processing of text. We organize the literature on VSMs according to the structure of the matrix in a VSM. There are currently three broad classes of VSMs, based on term-document, word-context, and pair-pattern matrices, yielding three classes of applications. We survey a broad range of applications in these three categories and take a detailed look at a specific open source project in each category. Our goal in this survey is to show the breadth of applications of VSMs for semantics, to provide a new perspective on VSMs for those who are already familiar with the area, and to provide pointers into the literature for those who are less familiar with the field.
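    A minimal sketch of the first of the three matrix types, a term-document matrix built from raw counts, with cosine similarity between document vectors; real systems add term weighting (e.g. tf-idf) and dimensionality reduction, which are omitted here.

        # Minimal term-document VSM: raw count vectors and cosine similarity.
        import math
        from collections import Counter

        def term_document_matrix(docs):
            vocab = sorted({t for d in docs for t in d.lower().split()})
            matrix = []
            for d in docs:
                counts = Counter(d.lower().split())
                matrix.append([counts[t] for t in vocab])   # one row per document
            return vocab, matrix

        def cosine(u, v):
            dot = sum(a * b for a, b in zip(u, v))
            norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
            return dot / norm if norm else 0.0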