    De-Fragmenting Knowledge: Using Metadata for Interconnecting Courses

    E-learning systems are often based on the notion of a "course": an interconnected set of resources aimed at presenting material related to a particular topic. Course authors do provide external links to related material, but such links are "frozen" at the time the course is published. Metadata are useful for classifying and finding e-learning artifacts, and in many cases they are used by Learning Management Systems to import, export, sequence and present learning objects. The use of metadata by humans, however, is generally limited to a search functionality, e.g. by authors who search for material that can be reused. We argue that metadata can also be used to enrich the interconnections among courses and to present the student with a richer variety of interconnected resources. We have implemented a system that realises an instance of this idea.
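
    The abstract does not specify the matching mechanism, so the following is only a minimal sketch of the idea: resources carry subject metadata, and cross-course links are derived from shared subjects. All names, fields and the overlap rule here are illustrative assumptions, not the authors' system.

        # Hypothetical sketch: cross-course linking by shared subject metadata.
        from dataclasses import dataclass, field

        @dataclass
        class Resource:
            title: str
            course: str
            subjects: set = field(default_factory=set)

        def related(resource, catalogue, min_shared=1):
            """Resources from *other* courses that share subject metadata."""
            links = []
            for other in catalogue:
                if other.course == resource.course:
                    continue  # we want cross-course links, not internal ones
                shared = resource.subjects & other.subjects
                if len(shared) >= min_shared:
                    links.append((other, shared))
            # Strongest connections (most shared subjects) first
            return sorted(links, key=lambda pair: len(pair[1]), reverse=True)

        catalogue = [
            Resource("Intro to RDF", "Semantic Web", {"rdf", "metadata"}),
            Resource("Metadata Basics", "Digital Libraries", {"metadata", "dublin-core"}),
            Resource("SQL Joins", "Databases", {"sql"}),
        ]
        for res, shared in related(catalogue[0], catalogue):
            print(f"{res.title} ({res.course}) via {sorted(shared)}")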

    A knowledge hub to enhance the learning processes of an industrial cluster

    Industrial clusters have been defined as "networks of production of strongly interdependent firms (including specialised suppliers), knowledge-producing agents (universities, research institutes, engineering companies) and institutions (brokers, consultants), linked to each other in a value-adding production chain" (OECD Focus Group, 1999). The industrial cluster's distinctive mode of production is specialisation, based on a sophisticated division of labour, which leads to interlinked activities and a need for cooperation, with the consequent emergence of communities of practice (CoPs). CoPs are here conceived as groups of people and/or organisations bound together by shared expertise and a propensity towards joint work (Wenger and Snyder, 1999). Cooperation needs closeness: for just-in-time delivery, for communication, and for the exchange of knowledge, especially in its tacit form. Indeed, the knowledge exchanges between the CoP's specialised actors, in geographical proximity, lead to spillovers and synergies. In the digital economy landscape, collaborative technologies such as shared repositories, chat rooms and videoconferencing can, when appropriately used, have a positive impact on the development of a CoP's exchange of codified knowledge. On the other hand, systems for individual profile management, e-learning platforms and intelligent agents can also trigger some socialisation mechanisms for tacit knowledge. In this perspective, we have set up a model of a Knowledge Hub (KH), driven by Information and Communication Technologies (ICT-driven), that enables the knowledge exchanges of a CoP. In order to present the model, the paper is organised in the following logical steps:
    - an overview of the most seminal and consolidated approaches to CoPs;
    - a description of the ICT-driven KH model, conceived as a booster of the knowledge exchanges of a CoP, which adds to the economic benefits of geographical proximity the advantages of ICT-based organisational proximity;
    - a discussion of some preliminary results obtained during the implementation of the model.

    Semantic annotation of Web APIs with SWEET

    Recent technology developments in the area of services on the Web are marked by the proliferation of Web applications and APIs. The development and evolution of applications based on Web APIs are, however, hampered by the lack of automation that can be achieved with current technologies. In this paper we present SWEET - Semantic Web sErvices Editing Tool - a lightweight Web application for creating semantic descriptions of Web APIs. SWEET directly supports the creation of mashups by enabling the semantic annotation of Web APIs, thus contributing to the automation of the service discovery, composition and invocation tasks. Furthermore, it enables the development of composite SWS-based applications on top of Linked Data.
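
    As a rough illustration of what semantically annotating a Web API amounts to (not SWEET's actual interface or output format, which the abstract does not detail), the sketch below attaches ontology concepts to API elements via SAWSDL-style modelReference links; the API and ontology URIs are invented.

        # Illustrative only: the example.org URIs below are invented.
        # SAWSDL's modelReference property serves here as the generic
        # "this element denotes that concept" link.
        API = "http://example.org/api/weather#"   # hypothetical Web API
        ONTO = "http://example.org/onto#"         # hypothetical domain ontology
        MODEL_REF = "http://www.w3.org/ns/sawsdl#modelReference"

        annotations = {
            API + "getForecast": ONTO + "WeatherForecast",
            API + "location": ONTO + "GeographicLocation",
        }

        def to_triples(annotations):
            """Turn element-to-concept annotations into (s, p, o) triples."""
            return [(el, MODEL_REF, concept) for el, concept in annotations.items()]

        for s, p, o in to_triples(annotations):
            print(f"<{s}> <{p}> <{o}> .")

    Descriptions of this shape are what make the discovery, composition and invocation tasks machine-processable: a client can match an operation by the concept it is linked to rather than by its name.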

    Topic Map Generation Using Text Mining

    Starting from text corpus analysis with linguistic and statistical analysis algorithms, an infrastructure for text mining is described which uses collocation analysis as a central tool. This text mining method may be applied to different domains as well as different languages. Some examples taken from large reference databases motivate its applicability to knowledge management using declarative standards for structuring and describing information. The ISO/IEC Topic Map standard is introduced as a candidate for rich metadata description of information resources, and it is shown how text mining can be used for automatic topic map generation.
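
    A minimal sketch of the collocation step, assuming a simple co-occurrence-window notion of collocation (the window size, threshold and toy corpus are our choices, not the paper's): terms become topics, and term pairs that co-occur often enough become associations between them.

        # Minimal sketch, not the paper's infrastructure: collocations as
        # co-occurrence counts within a word window, turned into topic-map
        # style associations.
        from collections import Counter

        corpus = [
            "text mining uses collocation analysis",
            "collocation analysis finds related terms",
            "topic maps describe information resources",
        ]
        WINDOW = 4     # consider the next three words as co-occurrents
        MIN_COUNT = 2  # keep only recurring pairs

        pairs = Counter()
        for sentence in corpus:
            words = sentence.split()
            for i, w in enumerate(words):
                for other in words[i + 1 : i + WINDOW]:
                    pairs[tuple(sorted((w, other)))] += 1

        # Terms become topics; frequent pairs become associations between them
        topics = {w for sentence in corpus for w in sentence.split()}
        associations = [(a, b, n) for (a, b), n in pairs.items() if n >= MIN_COUNT]
        for a, b, n in associations:
            print(f"association: {a} -- {b} (weight {n})")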

    CINDI : the virtual library graphical user interface

    Searching for information on the Internet is often not an easy job for many Internet users: because of the lack of a standard indexing scheme and of an informative query interface, a Net search can return thousands of hits while the number of search misses remains high. This thesis is part of the work to develop an Indexing and Searching System for the Internet called the CINDI (Concordia INdexing and DIscovery) system, which aims to provide a standard index scheme, called the Semantic-Header, and an informative query interface for users and providers of resources published on the Internet. The thesis presents the architectural design of the CINDI system, the design and implementation of the client part of the expert system, and the design and implementation of the graphical user interface (GUI). Since the interface is an important factor in software quality, it has been carefully designed and implemented to be easy to use, user friendly, and consistent. The interface has been implemented under UNIX using the Motif toolkit and the C programming language. The heart of the indexing system is the record, called the Semantic-Header, that is kept for each item being indexed. The grammars of the Semantic-Header and of the search query are also discussed in this thesis. An interface between the CINDI system and Netscape Navigator has also been implemented. Finally, some directions for future work related to CINDI are described.
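
    The thesis defines the exact Semantic-Header grammar; the sketch below only illustrates the general shape of one structured record per indexed item and a naive query match. Every field name and the matching rule are hypothetical placeholders, not the actual Semantic-Header definition.

        # Hypothetical shape of a Semantic-Header-style record; the real
        # field set and grammar are specified in the thesis itself.
        from dataclasses import dataclass

        @dataclass
        class SemanticHeader:
            title: str
            authors: list
            subjects: list
            abstract: str = ""
            url: str = ""

            def matches(self, query_terms):
                """Naive query check against title and subject fields."""
                haystack = " ".join([self.title] + self.subjects).lower()
                return any(term.lower() in haystack for term in query_terms)

        record = SemanticHeader(
            title="CINDI: the virtual library graphical user interface",
            authors=["A. Author"],          # placeholder author
            subjects=["indexing", "virtual library", "GUI"],
        )
        print(record.matches(["indexing"]))  # True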

    Automating the construction of scene classifiers for content-based video retrieval

    This paper introduces a real-time automatic scene classifier for content-based video retrieval. In our envisioned approach, end users such as documentalists, not image processing experts, build classifiers interactively by simply indicating positive examples of a scene. Classification is a two-stage procedure. First, small image fragments called patches are classified. Second, frequency vectors of these patch classifications are fed into a second classifier for global scene classification (e.g., city, portraits, or countryside). The first-stage classifiers can be seen as a set of highly specialised, learned feature detectors, an alternative to letting an image processing expert determine features a priori. We present results from experiments on a variety of patch and image classes. The scene classifier has been used successfully within television archives and for Internet porn filtering.
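
    The two-stage procedure can be sketched schematically. The sketch below assumes a nearest-centroid patch classifier and a nearest-prototype scene classifier; the feature dimensionality, patch classes and prototype histograms are invented stand-ins for the learned models described in the paper.

        # Schematic two-stage scene classification (stand-in models only).
        import numpy as np

        rng = np.random.default_rng(0)
        PATCH_CLASSES = 4  # e.g. sky, building, grass, skin (illustrative)

        def classify_patches(patches, centroids):
            """Stage 1: assign each patch feature vector to its nearest centroid."""
            d = np.linalg.norm(patches[:, None, :] - centroids[None, :, :], axis=2)
            return d.argmin(axis=1)

        def patch_histogram(labels, n_classes=PATCH_CLASSES):
            """Frequency vector of patch labels: the input to stage 2."""
            hist = np.bincount(labels, minlength=n_classes).astype(float)
            return hist / hist.sum()

        centroids = rng.normal(size=(PATCH_CLASSES, 8))  # stand-in stage-1 model
        scene_protos = {                                 # stand-in stage-2 model
            "city": np.array([0.1, 0.6, 0.1, 0.2]),
            "countryside": np.array([0.4, 0.1, 0.4, 0.1]),
        }

        patches = rng.normal(size=(50, 8))               # 50 patch feature vectors
        hist = patch_histogram(classify_patches(patches, centroids))
        scene = min(scene_protos, key=lambda s: np.linalg.norm(scene_protos[s] - hist))
        print("predicted scene:", scene)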