360 research outputs found

    Advanced Knowledge Technologies at the Midterm: Tools and Methods for the Semantic Web

    The University of Edinburgh and research sponsors are authorised to reproduce and distribute reprints and on-line copies for their purposes notwithstanding any copyright annotation hereon. The views and conclusions contained herein are the author's and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of other parties.

    In a celebrated essay on the new electronic media, Marshall McLuhan wrote in 1962: "Our private senses are not closed systems but are endlessly translated into each other in that experience which we call consciousness. Our extended senses, tools, technologies, through the ages, have been closed systems incapable of interplay or collective awareness. Now, in the electric age, the very instantaneous nature of co-existence among our technological instruments has created a crisis quite new in human history. Our extended faculties and senses now constitute a single field of experience which demands that they become collectively conscious. Our technologies, like our private senses, now demand an interplay and ratio that makes rational co-existence possible. As long as our technologies were as slow as the wheel or the alphabet or money, the fact that they were separate, closed systems was socially and psychically supportable. This is not true now when sight and sound and movement are simultaneous and global in extent." (McLuhan 1962, p. 5, emphasis in original)

    Over forty years later, the seamless interplay that McLuhan demanded between our technologies is still barely visible. McLuhan's predictions of the spread, and increased importance, of electronic media have of course been borne out, and the worlds of business, science and knowledge storage and transfer have been revolutionised.
    Yet the integration of electronic systems as open systems remains in its infancy. Advanced Knowledge Technologies (AKT) aims to address this problem: to create a view of knowledge and its management across its lifecycle, and to research and create the services and technologies that such unification will require. Half way through its six-year span, the results are beginning to come through, and this paper will explore some of the services, technologies and methodologies that have been developed. We hope to give a sense of the potential for the next three years, to discuss the insights and lessons learnt in the first phase of the project, and to articulate the challenges and issues that remain.

    The WWW provided the original context that made the AKT approach to knowledge management (KM) possible. AKT was initially proposed in 1999; it brought together an interdisciplinary consortium with the technological breadth and complementarity to create the conditions for a unified approach to knowledge across its lifecycle. The combination of this expertise, and the time and space afforded the consortium by the IRC structure, suggested the opportunity for a concerted effort to develop an approach to advanced knowledge technologies, based on the WWW as a basic infrastructure.

    The technological context of AKT altered for the better in the short period between the development of the proposal and the beginning of the project itself, with the development of the semantic web (SW), which foresaw much more intelligent manipulation and querying of knowledge. The opportunities that the SW provided for, for example, more intelligent retrieval put AKT at the centre of information technology innovation and knowledge management services; the AKT skill set would clearly be central to the exploitation of those opportunities. The SW, as an extension of the WWW, provides an interesting set of constraints on the knowledge management services AKT tries to provide.
    As a medium for the semantically-informed coordination of information, it has suggested a number of ways in which the objectives of AKT can be achieved, most obviously through the provision of knowledge management services delivered over the web, as opposed to the creation and provision of technologies to manage knowledge.

    AKT is working on the assumption that many web services will be developed and provided for users. The KM problem in the near future will be one of deciding which services are needed and of coordinating them. Many of these services will be largely or entirely legacies of the WWW, and so the capabilities of the services will vary. As well as providing useful KM services in their own right, AKT will be aiming to exploit this opportunity by reasoning over services, brokering between them, and providing essential meta-services for SW knowledge service management.

    Ontologies will be a crucial tool for the SW. The AKT consortium brings together considerable expertise on ontologies, and ontologies were always going to be a key part of the strategy. All kinds of knowledge sharing and transfer activities will be mediated by ontologies, and ontology management will be an important enabling task. Different applications will need to cope with inconsistent ontologies, or with the problems that will follow the automatic creation of ontologies (e.g. the merging of pre-existing ontologies to create a third). Ontology mapping, and the elimination of conflicts of reference, will be important tasks. All of these issues are discussed along with our proposed technologies.

    Similarly, specifications of tasks will be used for the deployment of knowledge services over the SW, but in general it cannot be expected that in the medium term there will be standards for task (or service) specifications.
    The brokering meta-services that are envisaged will have to deal with this heterogeneity.

    The emerging picture of the SW is one of great opportunity, but it will not be a well-ordered, certain or consistent environment. It will comprise many repositories of legacy data, outdated and inconsistent stores, and requirements for common understandings across divergent formalisms. There is clearly a role for standards to play in bringing much of this context together, and AKT is playing a significant role in these efforts. But standards take time to emerge, they take political power to enforce, and they have been known to stifle innovation (in the short term). AKT is keen to understand the balance between principled inference and statistical processing of web content. Logical inference on the Web is tough: complex queries using traditional AI inference methods bring most distributed computer systems to their knees. Do we set up semantically well-behaved areas of the Web? Is any part of the Web in which semantic hygiene prevails interesting enough to reason in? These and many other questions need to be addressed if we are to provide effective knowledge technologies for our content on the web.
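    The ontology tasks described above, merging pre-existing ontologies and eliminating conflicts of reference through mapping, can be sketched minimally. The sketch below uses plain Python sets of (subject, predicate, object) triples rather than a real RDF store, and all ontology IRIs and class names are invented for illustration:

```python
# A minimal sketch of ontology merging and mapping over plain Python
# sets of triples. The ontology IRIs and class names are invented;
# a real system would operate on RDF/OWL graphs.
RDF_TYPE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"
OWL_CLASS = "http://www.w3.org/2002/07/owl#Class"
OWL_EQUIV = "http://www.w3.org/2002/07/owl#equivalentClass"

A = "http://example.org/ontoA#"   # hypothetical ontology A
B = "http://example.org/ontoB#"   # hypothetical ontology B

onto_a = {(A + "Researcher", RDF_TYPE, OWL_CLASS)}
onto_b = {(B + "Academic", RDF_TYPE, OWL_CLASS)}

# Merge the two pre-existing ontologies into a third, and record a
# mapping axiom resolving the conflict of reference between classes.
merged = onto_a | onto_b
merged.add((A + "Researcher", OWL_EQUIV, B + "Academic"))

def equivalents(term, triples):
    """All terms reachable from `term` via one equivalence link."""
    out = {term}
    for s, p, o in triples:
        if p == OWL_EQUIV:
            if s == term:
                out.add(o)
            elif o == term:
                out.add(s)
    return out
```

    With the mapping axiom in place, a query about `ontoA#Researcher` can also reach knowledge recorded under `ontoB#Academic`, which is the essence of mapping-based reference resolution.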

    Constraint-based validation of e-learning courseware

    Knowledge engineering with semantic web technologies for decision support systems based on psychological models of expertise

    Machines that provide decision support have traditionally used either a representation of human expertise or mathematical algorithms. Each approach has its own limitations. This study helps to combine both types of decision support in a single system. However, the focus is on how the machine can formalise and manipulate the human representation of expertise rather than on data processing or machine learning algorithms. It is based on a system that represents human expertise in a psychological format. The particular decision support system for testing the approach is based on a psychological model of classification called the Galatean model of classification. Simple classification problems require only one XML structure to represent each class and the objects to be assigned to it. However, when the classification system is implemented as a decision support system within more complex real-world domains, there may be many variations of the class specification for different types of object to be assigned to the class in different circumstances and by different types of user making the classification decision. All these XML structures are related to each other in formal ways, based on the original class specification, but managing their relationships and evolution becomes very difficult when the specifications for the XML variants are text-based documents. To deal with these complexities, the knowledge representation needs to be in a format that can be easily understood by human users as well as supporting ongoing knowledge engineering, including the evolution and consistency of knowledge. The aim is to explore how semantic web technologies can be employed to help the knowledge engineering process for decision support systems based on human expertise, but deployed in complex domains with variable circumstances. The research evaluated OWL as a suitable vehicle for representing psychological expertise.
The task was to see how well it can provide a machine formalism for the knowledge without losing its psychological validity or transparency: that is, the ability of end users to understand the knowledge representation intuitively despite its OWL format. The OWL Galatea model designed in this study helps with automatic knowledge maintenance, reduces the replication of knowledge across variant uncertainties, and supports knowledge engineering processes. The OWL-based approaches used in this model also aid adaptive knowledge management; an example is an adaptive assessment questionnaire, dynamically derived using the user's age as the seed for creating the alternative questionnaires. The credibility of the OWL Galatea model is tested by applying it to two very different assessment domains (GRiST and ADVANCE). The conclusions are that OWL-based specifications provide the complementary structures for managing complex knowledge based on human expertise without impeding the end users' understanding of the knowledge base. The generic classification model is applicable to many domains and the accompanying OWL specification facilitates its implementation.
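    The age-seeded derivation of questionnaire variants described above can be illustrated with a short sketch: a single base class specification plus per-age-band overrides, from which a variant is derived on demand. The age bands, property names and question texts are invented for illustration and are not taken from GRiST or ADVANCE:

```python
# Hypothetical sketch of age-seeded questionnaire variants: a base
# specification with per-age-band overrides (all content invented).
BASE_SPEC = {
    "risk-screening": ["question 1 (all ages)", "question 2 (all ages)"],
}

AGE_OVERRIDES = [
    ((0, 17), {"risk-screening": ["question 1 (adolescent wording)",
                                  "question 2 (all ages)"]}),
    ((18, 64), {}),                    # adults use the base specification
    ((65, 130), {"risk-screening": ["question 1 (older-adult wording)",
                                    "question 2 (all ages)"]}),
]

def questionnaire(age):
    """Derive the questionnaire variant for a user's age band."""
    for (low, high), override in AGE_OVERRIDES:
        if low <= age <= high:
            spec = dict(BASE_SPEC)
            spec.update(override)   # a variant replaces only what differs
            return spec
    raise ValueError("no age band defined for age %d" % age)
```

    Keeping variants as formal overrides of one base specification, rather than as independent text documents, is what makes their relationships and evolution manageable.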

    AH 2003 : workshop on adaptive hypermedia and adaptive web-based systems

    CHORUS Deliverable 2.1: State of the Art on Multimedia Search Engines

    Based on the information provided by European projects and national initiatives related to multimedia search, as well as domain experts who participated in the CHORUS think-tanks and workshops, this document reports on the state of the art in multimedia content search from a technical and a socio-economic perspective. The technical perspective includes an up-to-date view of content-based indexing and retrieval technologies, multimedia search in the context of mobile devices and peer-to-peer networks, and an overview of current evaluation and benchmark initiatives to measure the performance of multimedia search engines. From a socio-economic perspective we inventory the impact and legal consequences of these technical advances and point out future directions of research.

    Information-seeking on the Web with Trusted Social Networks - from Theory to Systems

    This research investigates how synergies between the Web and social networks can enhance the process of obtaining relevant and trustworthy information. A review of literature on personalised search, social search, recommender systems, social networks and trust propagation reveals limitations of existing technology in areas such as relevance, collaboration, task-adaptivity and trust. In response to these limitations I present a Web-based approach to information-seeking using social networks. This approach takes a source-centric perspective on the information-seeking process, aiming to identify trustworthy sources of relevant information from within the user's social network. An empirical study of source-selection decisions in information- and recommendation-seeking identified five factors that influence the choice of source, and its perceived trustworthiness. The priority given to each of these factors was found to vary according to the criticality and subjectivity of the task. A series of algorithms have been developed that operationalise three of these factors (expertise, experience, affinity) and generate from various data sources a number of trust metrics for use in social network-based information seeking. The most significant of these data sources is Revyu.com, a reviewing and rating Web site implemented as part of this research, which takes input from regular users and makes it available on the Semantic Web for easy re-use by the implemented algorithms. Output of the algorithms is used in Hoonoh.com, a Semantic Web-based system that has been developed to support users in identifying relevant and trustworthy information sources within their social networks. Evaluation of this system's ability to predict source selections showed more promising results for the experience factor than for expertise or affinity. This may be attributed to the greater demands these two factors place in terms of input data.
Limitations of the work and opportunities for future research are discussed.
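    In the simplest case, per-factor trust metrics of the kind described above could be combined into a single score by a weighted sum and used to rank candidate sources. The sketch below is an illustration of that idea only; the actual Hoonoh.com algorithms and weightings are not reproduced here, and the weights, names and values are invented:

```python
# Illustrative sketch: combine expertise, experience and affinity
# metrics (each assumed normalised into [0, 1]) into one trust score.
# The weights are invented, not Hoonoh.com's actual parameters.
def trust_score(expertise, experience, affinity,
                weights=(0.4, 0.4, 0.2)):
    """Weighted combination of the three factor metrics."""
    factors = (expertise, experience, affinity)
    if not all(0.0 <= f <= 1.0 for f in factors):
        raise ValueError("factor metrics must lie in [0, 1]")
    return sum(w * f for w, f in zip(weights, factors))

def rank_sources(sources):
    """Order candidate sources by descending combined trust score.

    `sources` maps a source name to an (expertise, experience,
    affinity) triple of factor metrics.
    """
    return sorted(sources,
                  key=lambda name: trust_score(*sources[name]),
                  reverse=True)
```

    A linear combination makes the relative priority of the factors explicit, which matters because, as the study found, that priority varies with the criticality and subjectivity of the task.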

    Extensión de la especificación IMS Learning Design desde la adaptación e integración de unidades de aprendizaje

    IMS Learning Design (IMS-LD) represents a current strand in online and blended learning, characterised by the following: a) it is a specification that aims to standardise learning processes and to re-use them in diverse contexts; b) it offers a more elaborate pedagogical expressiveness than earlier or in-progress developments; c) it maintains a cordial and promising relationship with Learning Management Systems (LMSs) and with authoring and execution tools; and d) a wide variety of research groups and European projects are working on it, which promises sustainability, at least academically. Even so, IMS-LD is an early product (it is still in its first version, from 2003) and can be improved in several respects, such as pedagogical expressiveness and interoperability.

    This thesis focuses on adaptive (personalised) learning and on the integration of Units of Learning (UoLs), two of the pillars on which the specification is built and which can strengthen it considerably. Adaptation supports individually personalised study itineraries, in learning flow as well as in content and interface. Integration breaks the isolation of information packages, or courses (UoLs), and establishes a dialogue with other systems (LMSs), models and standards, as well as enabling the re-use of those UoLs in diverse contexts.

    The thesis studies the specification from the ground up, analysing its information model and how Units of Learning are built. From Level A to Level C, it analyses and critiques the structure of the specification, based on a theoretical study and on practical research arising from the modelling of real, executable Units of Learning; these provide very useful background information and are mostly included in the annexes so as not to interrupt the flow of the main text. Building on this study, the thesis analyses the integration of Units of Learning with other systems and specifications, covering three types: a) minimal integration, through a direct link between the parts; b) embedded integration, combining both parts into a single information package; and c) full integration, sharing variables and states so that both parts communicate in real time. Conclusions from several adaptation-based case studies, annexed at the end of the thesis, become an essential instrument for reaching a real, applicable solution. As the second pillar, complementary to integration, the thesis studies adaptive learning: its types, the state of the art, and the modelling approaches and restrictions within IMS-LD, examining through practical cases how, and how far, IMS-LD can model the personalisation of learning.

    This first block of analysis (general, integration and adaptive learning) supports a structural critique of IMS-LD in two broad areas: modelling and architecture. Modelling points to elements that need improvement, modification, extension or incorporation within IMS-LD, such as processes, components and programming resources. Architecture covers questions centred on the two-way communication between IMS-LD and the outside world, which concern structural layers of the specification beyond modelling. Although outside the core of the thesis, aspects related to authoring tools have also been reviewed, since they condition the reach of the modelling and the penetration of the specification among its target audiences; no improvement proposal is made for tools, however.

    The solution developed addresses the modelling and architecture issues found in the analysis. It consists of a structured set of proposed elements, new or modified, that strengthen the expressive capacity of the specification and its ability to interact with an external working environment. This three-year research (2004-2007) has been carried out mainly with colleagues from The Open University of The Netherlands, The University of Bolton, Universitat Pompeu Fabra and the Research & Innovation department of ATOS Origin, and has been partially developed within European projects such as UNFOLD, EU4ALL and ProLearn.

    The main conclusion of this research is that IMS-LD needs a restructuring and modification of certain elements, as well as the incorporation of new ones, to improve its pedagogical expressiveness and its capacity for integration with other learning systems and eLearning standards, if two of the main objectives established in the definition of the specification are to be achieved: personalisation of the learning process, and real interoperability. Uptake of the specification would also clearly improve if higher-level authoring tools (preferably visual ones) allowed simple modelling by its real end users: teachers, content creators and the pedagogues who design the learning experience. This point, however, lies outside the specification itself and concerns how the research groups and companies that develop authoring solutions interpret it.
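    The adaptation mechanism these case studies exercise, IMS-LD Level B properties set per learner with if/then conditions switching parts of the flow on or off, can be sketched in a few lines. This is an illustration of the mechanism only, with invented property and activity names; a real Unit of Learning expresses this in the imsld XML binding, not in Python:

```python
# Minimal sketch of Level B style adaptation: conditions over learner
# properties decide which activities are hidden from the learning flow.
# Property names, values and activities are invented for illustration.
def visible_activities(properties, conditions, activities):
    """Apply if/then rules; each condition is (property, value, activity-to-hide)."""
    hidden = {activity for prop, value, activity in conditions
              if properties.get(prop) == value}
    return [a for a in activities if a not in hidden]

learner = {"pretest-score": "low"}          # a per-learner property
rules = [("pretest-score", "low", "advanced-exercise"),
         ("pretest-score", "high", "remedial-exercise")]
flow = ["introduction", "remedial-exercise", "advanced-exercise"]
```

    The same property-and-condition pattern also underpins full integration: when an external system can read and write the learner properties, both parts share state and the flow adapts across system boundaries.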