
    Advanced Knowledge Technologies at the Midterm: Tools and Methods for the Semantic Web

    The University of Edinburgh and research sponsors are authorised to reproduce and distribute reprints and on-line copies for their purposes notwithstanding any copyright annotation hereon. The views and conclusions contained herein are the author's and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of other parties.
    In a celebrated essay on the new electronic media, Marshall McLuhan wrote in 1962: "Our private senses are not closed systems but are endlessly translated into each other in that experience which we call consciousness. Our extended senses, tools, technologies, through the ages, have been closed systems incapable of interplay or collective awareness. Now, in the electric age, the very instantaneous nature of co-existence among our technological instruments has created a crisis quite new in human history. Our extended faculties and senses now constitute a single field of experience which demands that they become collectively conscious. Our technologies, like our private senses, now demand an interplay and ratio that makes rational co-existence possible. As long as our technologies were as slow as the wheel or the alphabet or money, the fact that they were separate, closed systems was socially and psychically supportable. This is not true now when sight and sound and movement are simultaneous and global in extent." (McLuhan 1962, p. 5, emphasis in original)
    Over forty years later, the seamless interplay that McLuhan demanded between our technologies is still barely visible. McLuhan's predictions of the spread, and increased importance, of electronic media have of course been borne out, and the worlds of business, science and knowledge storage and transfer have been revolutionised. Yet the integration of electronic systems as open systems remains in its infancy.
    Advanced Knowledge Technologies (AKT) aims to address this problem: to create a view of knowledge and its management across its lifecycle, and to research and create the services and technologies that such unification will require. Halfway through its six-year span, the results are beginning to come through, and this paper explores some of the services, technologies and methodologies that have been developed. We hope to give a sense of the potential for the next three years, to discuss the insights and lessons learnt in the first phase of the project, and to articulate the challenges and issues that remain.
    The WWW provided the original context that made the AKT approach to knowledge management (KM) possible. When AKT was initially proposed in 1999, it brought together an interdisciplinary consortium with the technological breadth and complementarity to create the conditions for a unified approach to knowledge across its lifecycle. The combination of this expertise, and the time and space afforded the consortium by the IRC structure, suggested the opportunity for a concerted effort to develop an approach to advanced knowledge technologies based on the WWW as a basic infrastructure. The technological context of AKT altered for the better in the short period between the development of the proposal and the beginning of the project itself, with the development of the semantic web (SW), which foresaw much more intelligent manipulation and querying of knowledge.
    The opportunities that the SW provided for, e.g., more intelligent retrieval put AKT at the centre of information technology innovation and knowledge management services; the AKT skill set would clearly be central to the exploitation of those opportunities. The SW, as an extension of the WWW, provides an interesting set of constraints on the knowledge management services AKT tries to provide. As a medium for the semantically informed coordination of information, it has suggested a number of ways in which the objectives of AKT can be achieved, most obviously through the provision of knowledge management services delivered over the web, as opposed to the creation and provision of technologies to manage knowledge.
    AKT is working on the assumption that many web services will be developed and provided for users. The KM problem in the near future will be one of deciding which services are needed and of coordinating them. Many of these services will be largely or entirely legacies of the WWW, and so the capabilities of the services will vary. As well as providing useful KM services in their own right, AKT aims to exploit this opportunity by reasoning over services, brokering between them, and providing essential meta-services for SW knowledge service management.
    Ontologies will be a crucial tool for the SW. The AKT consortium brings together a great deal of expertise on ontologies, and ontologies were always going to be a key part of the strategy. All kinds of knowledge sharing and transfer activities will be mediated by ontologies, and ontology management will be an important enabling task. Different applications will need to cope with inconsistent ontologies, or with the problems that will follow the automatic creation of ontologies (e.g. the merging of pre-existing ontologies to create a third). Ontology mapping, and the elimination of conflicts of reference, will be important tasks. All of these issues are discussed along with our proposed technologies.
    Similarly, specifications of tasks will be used for the deployment of knowledge services over the SW, but in general it cannot be expected that in the medium term there will be standards for task (or service) specifications. The brokering meta-services that are envisaged will have to deal with this heterogeneity.
    The emerging picture of the SW is one of great opportunity, but it will not be a well-ordered, certain or consistent environment. It will comprise many repositories of legacy data, outdated and inconsistent stores, and requirements for common understandings across divergent formalisms. There is clearly a role for standards to play in bringing much of this context together, and AKT is playing a significant role in these efforts. But standards take time to emerge, they take political power to enforce, and they have been known to stifle innovation (in the short term). AKT is keen to understand the balance between principled inference and statistical processing of web content. Logical inference on the Web is tough: complex queries using traditional AI inference methods bring most distributed computer systems to their knees. Do we set up semantically well-behaved areas of the Web? Is any part of the Web in which semantic hygiene prevails interesting enough to reason in? These and many other questions need to be addressed if we are to provide effective knowledge technologies for our content on the web.
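    As a minimal illustration of the kind of ontology-level operation the abstract refers to (merging pre-existing ontologies and querying over the result), the following sketch uses the Python rdflib library; the namespaces and instance data are invented for the example and are not part of the AKT infrastructure.

```python
# Minimal sketch: merging two small RDF vocabularies and querying the result.
# The namespaces and triples below are invented for illustration only.
from rdflib import Graph, Namespace, Literal, RDF, RDFS

EX1 = Namespace("http://example.org/projects#")
EX2 = Namespace("http://example.org/people#")

g1 = Graph()
g1.add((EX1.AKT, RDF.type, EX1.ResearchProject))
g1.add((EX1.AKT, RDFS.label, Literal("Advanced Knowledge Technologies")))

g2 = Graph()
g2.add((EX2.alice, RDF.type, EX2.Researcher))
g2.add((EX2.alice, EX2.worksOn, EX1.AKT))

# Graph addition returns the union of the two triple sets -- a crude stand-in
# for the richer ontology merging and mapping tasks discussed above.
merged = g1 + g2

# SPARQL query over the merged graph: who works on which labelled project?
query = """
SELECT ?person ?label WHERE {
    ?person <http://example.org/people#worksOn> ?project .
    ?project <http://www.w3.org/2000/01/rdf-schema#label> ?label .
}
"""
for person, label in merged.query(query):
    print(person, label)
```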

    Enforcing Customization in e-Learning Systems: an ontology and product line-based approach

    In the era of e-Learning, educational materials are considered a crucial point for all the stakeholders. On the one hand, instructors aim at creating learning materials that meet the needs and expectations of learners easily and effectively; on the other hand, learners want to acquire knowledge in a way that suits their characteristics and preferences. Consequently, the provision and customization of educational materials to meet the needs of learners is a constant challenge and is currently synonymous with technological development. Promoting the personalization of learning materials, especially during their development, will help to produce customized learning materials for specific learners' needs.
    The main objective of this thesis is to reinforce and strengthen reuse, customization and ease of production in e-Learning materials during the development process. The thesis deals with the design of a framework based on ontologies and product lines to develop customized Learning Objects (LOs). With this framework, the development of learning materials has the following advantages: (i) large-scale production, (ii) faster development time, and (iii) greater (re)use of resources.
    The proposed framework is the main contribution of this thesis, and is characterized by the combination of three models: the Content Model, which addresses important points related to the structure of learning materials, their granularity and levels of aggregation; the Customization Model, which considers specific learner characteristics and preferences to customize the learning materials; and the LO Product Line (LOPL) model, which handles the subject of variability and creates learning materials in an easy and flexible way. With these models, instructors can not only develop learning materials, but also reuse and customize them during development.
    An additional contribution is the Customization Model, which is based on the Learning Style Model (LSM) concept. Based on the study of seven such models, a Global Learning Style Model Ontology (GLSMO) has been constructed to help instructors with information on the learner's characteristics and to recommend appropriate LOs for customization.
    The results of our work are reflected in the design of an authoring tool for learning materials called LOAT. Its requirements, the elements of its architecture, and some details of its user interface are described. As an example of its use, a case study is included that shows how the tool was used in the development of some learning components.
    Ezzat Labib Awad, A. (2017). Enforcing Customization in e-Learning Systems: an ontology and product line-based approach [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/90515
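    To make the combination of product-line variability and learner-driven customization more concrete, here is a minimal hypothetical sketch in Python; the feature names, learning styles and selection rule are invented and do not reproduce the thesis' actual LOPL or GLSMO models.

```python
# Hypothetical sketch of product-line style customization of a learning object.
# Feature names, learning styles and the selection rule are invented for
# illustration; the thesis' actual models are considerably richer.
from dataclasses import dataclass, field

@dataclass
class LearnerProfile:
    learning_style: str          # e.g. "visual" or "verbal"

@dataclass
class LearningObjectVariant:
    name: str
    features: set                # variable features offered by this variant

@dataclass
class LearningObjectProductLine:
    topic: str
    variants: list = field(default_factory=list)

    def derive(self, profile: LearnerProfile) -> LearningObjectVariant:
        """Pick the variant whose features best match the learner profile."""
        wanted = {"diagrams", "video"} if profile.learning_style == "visual" else {"text", "audio"}
        return max(self.variants, key=lambda v: len(v.features & wanted))

line = LearningObjectProductLine(
    topic="Introduction to Ontologies",
    variants=[
        LearningObjectVariant("slides+video", {"diagrams", "video"}),
        LearningObjectVariant("reading+podcast", {"text", "audio"}),
    ],
)
print(line.derive(LearnerProfile(learning_style="visual")).name)  # -> slides+video
```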

    Software Support for Discourse-Based Textual Information Analysis: A Systematic Literature Review and Software Guidelines in Practice

    The intrinsic characteristics of humanities research require technological support and software assistance that necessarily also involves the analysis of textual narratives. When these narratives become increasingly complex, pragmatic analysis (i.e., at the discourse or argumentation level) assisted by software is a great ally in the digital humanities. In recent years, solutions have been developed from the information visualization domain to support discourse analysis or argumentation analysis of textual sources via software, with applications in political speeches, debates and online forums, but also in written narratives, literature and historical sources. This paper presents a wide and interdisciplinary systematic literature review (SLR), covering both software-related areas and humanities areas, on the information visualization and software solutions adopted to support pragmatic textual analysis. As a result of this review, the paper detects weaknesses in existing works in the field, especially related to the availability of the solutions, their dependence on a particular pragmatic framework, and the lack of software mechanisms for sharing and reusing information. The paper also provides software guidelines for addressing the detected weaknesses, exemplifying some of them in practice through their implementation in a new web tool, Viscourse. Viscourse is conceived as a complementary tool to assist textual analysis and to facilitate the reuse of informational pieces from discourse and argumentation text analysis tasks. Funding: Ministerio de Economía, Industria y Competitividad (FJCI-2016-6 28032); Ministerio de Ciencia, Innovación y Universidades (RTI2018-093336-B-C2).
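    As one possible illustration of the information-sharing and reuse mechanisms the review calls for, annotations produced during discourse or argumentation analysis could be serialized in a tool-independent format such as the W3C Web Annotation data model. The Python sketch below shows the general shape of such a record; the source URL, selected text and "claim" tag are invented, and this is not Viscourse's actual export format.

```python
import json

# Hypothetical export of one discourse-analysis annotation following the general
# shape of the W3C Web Annotation data model (JSON-LD). The source URL, selector
# text and tag value are invented; a real tool's export may differ.
annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "type": "Annotation",
    "body": [
        {"type": "TextualBody", "purpose": "tagging", "value": "claim"},
        {"type": "TextualBody", "purpose": "commenting",
         "value": "Speaker asserts the main thesis of the argument here."},
    ],
    "target": {
        "source": "https://example.org/corpus/speech-42.txt",
        "selector": {
            "type": "TextQuoteSelector",
            "exact": "we must act now",
        },
    },
}

print(json.dumps(annotation, indent=2))
```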

    Living Knowledge

    Diversity, especially as manifested in language and knowledge, is a function of local goals, needs, competences, beliefs, culture, opinions and personal experience. The Living Knowledge project considers diversity as an asset rather than a problem. In the project, foundational ideas emerged from the synergistic contribution of different disciplines, methodologies (with which many partners were previously unfamiliar) and technologies, and flowed into concrete diversity-aware applications such as the Future Predictor and the Media Content Analyser, providing users with better-structured information while coping with Web-scale complexities. The key notions of diversity, fact, opinion and bias have been defined in relation to three methodologies: Media Content Analysis (MCA), which operates from a social sciences perspective; Multimodal Genre Analysis (MGA), which operates from a semiotic perspective; and Facet Analysis (FA), which operates from a knowledge representation and organization perspective. A conceptual architecture that pulls all of them together has become the core of the tools for automatic extraction and of the way they interact. In particular, the conceptual architecture has been implemented in the Media Content Analyser application. The scientific and technological results obtained are described in what follows.
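    Purely as a hypothetical illustration of how three analysis perspectives might be pulled together behind one extraction pipeline, the sketch below defines a common analyzer interface and a coordinator that merges their outputs; the class names, fields and merge strategy are invented and do not describe the project's actual architecture.

```python
# Hypothetical sketch of a diversity-aware analysis pipeline that combines three
# perspectives (media content, genre, facets) behind a common interface.
# All names, placeholder values and the merge strategy are invented.
from abc import ABC, abstractmethod

class Analyzer(ABC):
    @abstractmethod
    def analyze(self, document: str) -> dict:
        """Return perspective-specific annotations for the document."""

class MediaContentAnalyzer(Analyzer):
    def analyze(self, document: str) -> dict:
        return {"opinion_ratio": 0.4, "bias": "low"}      # placeholder values

class GenreAnalyzer(Analyzer):
    def analyze(self, document: str) -> dict:
        return {"genre": "editorial"}                      # placeholder value

class FacetAnalyzer(Analyzer):
    def analyze(self, document: str) -> dict:
        return {"facets": ["energy policy", "economy"]}    # placeholder values

def analyze_document(document: str, analyzers: list) -> dict:
    """Merge the annotations from every analyzer into one record."""
    merged = {"text": document}
    for analyzer in analyzers:
        merged.update(analyzer.analyze(document))
    return merged

print(analyze_document("Example news item ...",
                       [MediaContentAnalyzer(), GenreAnalyzer(), FacetAnalyzer()]))
```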

    Software solutions for web information systems in digital humanities: review, analysis and comparative study

    Research in the humanities increasingly depends on how information is structured and managed and on how, on the basis of that information, new knowledge is produced. Additionally, participatory approaches, which often rely on web information systems as their supportive infrastructure, have made an impact on the most recent historiographical trends, in particular within the methodological framework of the digital humanities. The aim of this paper is to produce, from an operational and implementation perspective, a review of software solutions frequently used to develop web information systems for research projects in the humanities and cultural heritage, in order to provide an understanding of the various options available and their strengths and limitations, also in the light of different users' requirements. An individual and comparative analysis of sixteen application frameworks commonly used in these fields, either generic or developed for a specific research domain, has been carried out, considering their main functionalities, strengths, and weaknesses. The results facilitate critical and reasoned decision-making among the available options, guiding the makers of such systems, both researchers and developers, and providing them with a common ground of terms and use cases to facilitate their necessary dialogue.

    A Reference Architecture for Data-Driven and Adaptive Internet-Delivered Psychological Treatment Systems: Software Architecture Development and Validation Study

    Background: Internet-delivered psychological treatment (IDPT) systems are software applications that offer psychological treatments via the internet. Such IDPT systems have become one of the most commonly practiced and widely researched forms of psychotherapy. Evidence shows that psychological treatments delivered by IDPT systems can be an effective way of treating mental health morbidities. However, current IDPT systems have high dropout rates and low user adherence. The primary reason is that current IDPT systems are not flexible, adaptable, or personalized, as they follow a fixed tunnel-based treatment architecture. A fixed tunnel-based architecture delivers predefined, sequential treatment content to every patient, irrespective of their context, preferences, and needs. Moreover, current IDPT systems have poor interoperability, making it difficult to reuse and share treatment materials. There is a lack of development and documentation standards, conceptual frameworks, and established (clinical) guidelines for such IDPT systems. As a result, several ad hoc forms of IDPT models exist, and developers and researchers have tended to reinvent new versions of IDPT systems, making them more complex and less interoperable.
    Objective: This study aimed to design, develop, and evaluate a reference architecture (RA) that can facilitate the design and development of adaptive, interoperable, and reusable IDPT systems.
    Methods: This study was conducted in collaboration with a large interdisciplinary project entitled INTROMAT (Introducing Mental Health through Adaptive Technology), which brings together information and communications technology (ICT) researchers, ICT industries, health researchers, patients, clinicians, and patients' next of kin. First, we investigated previous studies and state-of-the-art work based on the project's problem domain and goals. On the basis of these investigations, we identified two primary gaps in current IDPT systems: lack of adaptiveness and limited interoperability. Second, we used model-driven engineering and Domain-Driven Design techniques to design, develop, and validate the RA for building adaptive, interoperable, and reusable IDPT systems that address these gaps. Third, based on the proposed RA, we implemented a prototype as open-source software. Finally, we evaluated the RA and the open-source implementation using empirical (case study) and nonempirical approaches (software architecture analysis method, expert evaluation, and software quality attributes).
    Results: This paper outlines an RA that supports flexible user modeling and the adaptive delivery of treatments. To evaluate the proposed RA, we developed open-source software based on it. The open-source framework aims to improve development productivity, facilitate interoperability, increase reusability, and expedite communication with domain experts.
    Conclusions: Our results showed that the proposed RA is flexible and capable of adapting interventions based on patients' needs, preferences, and context. Furthermore, developers and researchers can extend the proposed RA to various health care interventions.
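    To illustrate the difference between the fixed tunnel-based design criticized above and the adaptive delivery the reference architecture argues for, here is a minimal hypothetical sketch in Python; the module names, user-model fields and selection rule are invented and are not part of the INTROMAT RA.

```python
# Hypothetical contrast between fixed "tunnel" delivery and adaptive delivery of
# treatment modules. Module names, user-model fields and the selection rule are
# invented for illustration; the actual reference architecture is richer.
from dataclasses import dataclass

@dataclass
class UserModel:
    symptom_severity: str   # e.g. "mild", "moderate", "severe"
    preferred_format: str   # e.g. "text", "video"

TUNNEL = ["psychoeducation", "cognitive_restructuring", "exposure", "relapse_prevention"]

def next_module_tunnel(completed: list) -> str:
    """Fixed tunnel: everyone gets the same sequence, regardless of context."""
    return TUNNEL[len(completed)]

def next_module_adaptive(completed: list, user: UserModel) -> str:
    """Adaptive: pick the next module from the user model, not a fixed order."""
    if user.symptom_severity == "severe" and "crisis_support" not in completed:
        return "crisis_support"
    remaining = [m for m in TUNNEL if m not in completed]
    # Tailor the presentation format as well as the content order.
    return f"{remaining[0]} ({user.preferred_format})"

user = UserModel(symptom_severity="severe", preferred_format="video")
print(next_module_tunnel([]))          # -> psychoeducation
print(next_module_adaptive([], user))  # -> crisis_support
```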

    Implementation of a knowledge discovery and enhancement module from structured information gained from unstructured sources of information

    Integrated Master's thesis in Informatics and Computing Engineering. Faculdade de Engenharia, Universidade do Porto. 201

    2017 DWH Long-Term Data Management Coordination Workshop Report

    On June 7 and 8, 2017, the Coastal Response Research Center (CRRC)[1], the NOAA Office of Response and Restoration (ORR) and the NOAA National Marine Fisheries Service (NMFS) Restoration Center (RC) co-sponsored the Deepwater Horizon Oil Spill (DWH) Long Term Data Management (LTDM) workshop at the ORR Gulf of Mexico (GOM) Disaster Response Center (DRC) in Mobile, AL. In the wake of the DWH Natural Resource Damage Assessment (NRDA) settlement, the focus has been on restoration planning, implementation and monitoring of the on-going DWH-related research. This means that data management, accessibility, and distribution must be coordinated among various federal, state, local, non-governmental organization (NGO), academic, and private sector partners. The scope of DWH far exceeded that of any other spill in the U.S., with an immense amount of data (e.g., 100,000 environmental samples, 15 million publicly available records) gathered during the response and damage assessment phases of the incident, as well as data that continues to be produced from research and restoration efforts. The challenge with this influx of data is checking its quality, documenting data collection, storing data, integrating it into useful products, managing it and archiving it for long-term use. In addition, data must be available to the public in an easily queried and accessible format. Answering questions regarding the success of the restoration efforts will be based on data generated for years to come. The data sets must be readily comparable, representative and complete; be collected using cross-cutting field protocols; be as interoperable as possible; meet standards for quality assurance/quality control (QA/QC); and be unhindered by conflicting or ambiguous terminology. During the data management process for the NOAA NRDA for the DWH disaster, NOAA developed a data management warehouse and visualization system that will be used as a long-term repository for accessing and archiving NRDA injury assessment data. This serves as a foundation for restoration project planning and monitoring data for the next 15 or more years. The main impetus for this workshop was to facilitate public access to the DWH data collected and managed by all entities by developing linkages to, or data exchanges among, applicable GOM data management systems. The 66 workshop participants (Appendix A), representing a variety of organizations, met at NOAA's GOM DRC in order to determine the characteristics of a successful common operating picture for DWH data, to understand the systems that are currently in place to manage DWH data, and to make the DWH data interoperable among data generators, users and managers. The external partners for these efforts include, but are not limited to, the RESTORE Council, the Gulf of Mexico Research Initiative (GoMRI), the Gulf of Mexico Research Initiative Information and Data Cooperative (GRIIDC), the National Academy of Sciences (NAS) Gulf Research Program, the Gulf of Mexico Alliance (GOMA), and the National Fish and Wildlife Foundation (NFWF).
    The workshop objectives were to: foster collaboration among the GOM partners with respect to data management and integration for restoration planning, implementation and monitoring; identify standards, protocols and guidance for LTDM being used by these partners for DWH NRDA, restoration, and public health efforts; obtain feedback and identify next steps for the work completed by the Environmental Disasters Data Management (EDDM) Working Groups; and work towards best practices for the public distribution of, and access to, these data. The workshop consisted of plenary presentations and breakout sessions, with an agenda (Appendix B) developed by the organizing committee. The presentation topics included: results of a pre-workshop survey, an overview of data generation, the uses of DWH long-term data, an overview of LTDM, an overview of existing LTDM systems, an overview of data management standards/protocols, results from the EDDM working groups, flow diagrams of existing data management systems, and a vision for managing big data. The breakout sessions included discussions of: issues and concerns for data stakeholders (e.g., data users, generators, managers), interoperability, ease of discovery/searchability, data access, data synthesis, data usability, and metadata/data documentation. [1] A list of acronyms is provided on page 1 of this report.
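    As a small, hypothetical illustration of the kind of interoperability and metadata-documentation checks discussed at the workshop, the sketch below validates a data-set record against a list of required metadata fields; the field names and the example record are invented and do not reflect an actual DWH, NOAA or GRIIDC metadata standard.

```python
# Hypothetical metadata completeness check for a long-term data repository.
# The required fields and the sample record are invented for illustration only.
REQUIRED_FIELDS = [
    "title", "abstract", "collection_start", "collection_end",
    "spatial_extent", "collection_protocol", "qa_qc_status",
    "point_of_contact", "access_url",
]

def missing_metadata(record: dict) -> list:
    """Return the required fields that are absent or empty in the record."""
    return [f for f in REQUIRED_FIELDS if not record.get(f)]

sample_record = {
    "title": "Nearshore water chemistry samples (example)",
    "abstract": "Illustrative record only.",
    "collection_start": "2010-05-01",
    "collection_end": "2010-09-30",
    "qa_qc_status": "reviewed",
    "access_url": "https://example.org/data/nearshore-chemistry",
}

gaps = missing_metadata(sample_record)
print("Missing fields:", gaps)  # -> spatial_extent, collection_protocol, point_of_contact
```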