
    Peirce, meaning and the semantic web

    The so-called ‘Semantic Web’ is phase II of Tim Berners-Lee’s original vision for the WWW, whereby resources would no longer be indexed merely ‘syntactically’, via opaque character-strings, but via their meanings. We argue that one roadblock to Semantic Web development has been researchers’ adherence to a Cartesian, ‘private’ account of meaning, which has been dominant for the last 400 years, and which understands the meanings of signs as what their producers intend them to mean. On this account, the field strives to build ‘silos of meaning’ which explicitly and antecedently determine what signs on the Web will mean in all possible situations. By contrast, the field is moving forward insofar as it embraces Peirce’s ‘public’, evolutionary account of meaning, according to which the meaning of signs just is the way they are interpreted and used to produce further signs. Given the extreme interconnectivity of the Web, it is argued that silos of meaning are unnecessary: plentiful machine-understandable data about the meaning of Web resources already exists in the form of those resources themselves, for applications able to leverage it. It is Peirce’s account of meaning which can best make sense of the recent explosion in ‘user-defined content’ on the Web, and of its relevance to achieving Semantic Web goals.

    Application of semantic web technologies for automatic multimedia annotation


    Advanced Knowledge Technologies at the Midterm: Tools and Methods for the Semantic Web

    In a celebrated essay on the new electronic media, Marshall McLuhan wrote in 1962:

    “Our private senses are not closed systems but are endlessly translated into each other in that experience which we call consciousness. Our extended senses, tools, technologies, through the ages, have been closed systems incapable of interplay or collective awareness. Now, in the electric age, the very instantaneous nature of co-existence among our technological instruments has created a crisis quite new in human history. Our extended faculties and senses now constitute a single field of experience which demands that they become collectively conscious. Our technologies, like our private senses, now demand an interplay and ratio that makes rational co-existence possible. As long as our technologies were as slow as the wheel or the alphabet or money, the fact that they were separate, closed systems was socially and psychically supportable. This is not true now when sight and sound and movement are simultaneous and global in extent.” (McLuhan 1962, p. 5, emphasis in original)

    Over forty years later, the seamless interplay that McLuhan demanded between our technologies is still barely visible. McLuhan’s predictions of the spread, and increased importance, of electronic media have of course been borne out, and the worlds of business, science and knowledge storage and transfer have been revolutionised. Yet the integration of electronic systems as open systems remains in its infancy.

    Advanced Knowledge Technologies (AKT) aims to address this problem: to create a view of knowledge and its management across its lifecycle, and to research and create the services and technologies that such unification will require. Halfway through its six-year span, the results are beginning to come through, and this paper will explore some of the services, technologies and methodologies that have been developed. We hope to give a sense of the potential for the next three years, to discuss the insights and lessons learnt in the first phase of the project, and to articulate the challenges and issues that remain.

    The WWW provided the original context that made the AKT approach to knowledge management (KM) possible. AKT was initially proposed in 1999; it brought together an interdisciplinary consortium with the technological breadth and complementarity to create the conditions for a unified approach to knowledge across its lifecycle. The combination of this expertise, and the time and space afforded the consortium by the IRC structure, suggested the opportunity for a concerted effort to develop an approach to advanced knowledge technologies based on the WWW as a basic infrastructure.

    The technological context of AKT altered for the better in the short period between the development of the proposal and the beginning of the project itself, with the development of the semantic web (SW), which foresaw much more intelligent manipulation and querying of knowledge. The opportunities that the SW provided for, e.g., more intelligent retrieval put AKT in the centre of information technology innovation and knowledge management services; the AKT skill set would clearly be central for the exploitation of those opportunities.

    The SW, as an extension of the WWW, provides an interesting set of constraints on the knowledge management services AKT tries to provide. As a medium for the semantically informed coordination of information, it has suggested a number of ways in which the objectives of AKT can be achieved, most obviously through the provision of knowledge management services delivered over the web, as opposed to the creation and provision of technologies to manage knowledge.

    AKT is working on the assumption that many web services will be developed and provided for users. The KM problem in the near future will be one of deciding which services are needed and of coordinating them. Many of these services will be largely or entirely legacies of the WWW, and so the capabilities of the services will vary. As well as providing useful KM services in their own right, AKT will aim to exploit this opportunity by reasoning over services, brokering between them, and providing essential meta-services for SW knowledge service management.

    Ontologies will be a crucial tool for the SW. The AKT consortium brings together a great deal of expertise on ontologies, and ontologies were always going to be a key part of the strategy. All kinds of knowledge sharing and transfer activities will be mediated by ontologies, and ontology management will be an important enabling task. Different applications will need to cope with inconsistent ontologies, or with the problems that will follow the automatic creation of ontologies (e.g. merging of pre-existing ontologies to create a third). Ontology mapping, and the elimination of conflicts of reference, will be important tasks; see the sketch following this abstract. All of these issues are discussed along with our proposed technologies.

    Similarly, specifications of tasks will be used for the deployment of knowledge services over the SW, but in general it cannot be expected that there will be standards for task (or service) specifications in the medium term. The brokering meta-services that are envisaged will have to deal with this heterogeneity.

    The emerging picture of the SW is one of great opportunity, but it will not be a well-ordered, certain or consistent environment. It will comprise many repositories of legacy data, outdated and inconsistent stores, and requirements for common understandings across divergent formalisms. There is clearly a role for standards to play in bringing much of this context together, and AKT is playing a significant role in these efforts. But standards take time to emerge, they take political power to enforce, and they have been known to stifle innovation (in the short term). AKT is keen to understand the balance between principled inference and statistical processing of web content. Logical inference on the Web is tough: complex queries using traditional AI inference methods bring most distributed computer systems to their knees. Do we set up semantically well-behaved areas of the Web? Is any part of the Web in which semantic hygiene prevails interesting enough to reason in? These and many other questions need to be addressed if we are to provide effective knowledge technologies for our content on the web.
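    The ontology-merging and co-reference problems raised in the abstract above can be made concrete with a few lines of rdflib. This is a minimal sketch assuming two hypothetical Turtle files (ontology_a.ttl, ontology_b.ttl); it treats a shared rdfs:label as a deliberately crude signal of possible co-reference, standing in for real ontology-mapping machinery.

```python
from collections import defaultdict

from rdflib import Graph
from rdflib.namespace import RDFS

# Load two (hypothetical) ontology files and take the naive union of
# their triples -- rdflib merges graphs with the '+' operator.
g = Graph().parse("ontology_a.ttl", format="turtle") + \
    Graph().parse("ontology_b.ttl", format="turtle")

# Flag terms that share an rdfs:label: a crude stand-in for the
# 'conflicts of reference' that real ontology mapping must resolve.
by_label = defaultdict(set)
for term, _, label in g.triples((None, RDFS.label, None)):
    by_label[str(label)].add(term)

for label, terms in sorted(by_label.items()):
    if len(terms) > 1:
        print(f"possible co-reference for {label!r}: {sorted(terms)}")
```

    A real mapper would also weigh structural context and declared synonyms, but even this naive check surfaces the conflicts a merge must resolve.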

    The Application of Semantic Web Technologies to Content Analysis in Sociology

    In sociology, texts are understood as social phenomena that can serve as a means of analysing social reality. Over the years, a broad range of techniques has developed in sociological text analysis, including quantitative and qualitative methods as well as fully manual and computer-assisted approaches. The development of the World Wide Web and social media, together with technical advances such as machine recognition of writing and speech, has enormously increased the amount of text available for analysis. In recent years this has led sociologists, too, to rely more on computer-assisted approaches to text analysis, such as statistical Natural Language Processing (NLP) techniques. Yet although versatile methods and technologies have been developed for sociological text analysis, uniform standards for analysing and publishing textual data are lacking. As a consequence, the transparency of analysis processes and the re-usability of research data suffer. The Semantic Web, and with it Linked Data, offer a set of standards for representing and organising information and knowledge. These standards are used by numerous applications, among them methods for publishing data and Named Entity Linking, a specialised form of NLP. This thesis discusses to what extent these standards and tools from the Semantic Web and Linked Data communities can support computer-assisted text analysis in sociology. The required technologies are briefly introduced and then applied to an example dataset consisting of constitutional texts of the Netherlands from 1883 to 2016. It is demonstrated how RDF data can be generated from the documents, published, and accessed. Queries are constructed that initially refer exclusively to the local data, after which it is demonstrated how this local knowledge can be enriched with information from external knowledge bases. The approaches presented are discussed in detail, and points of contact are identified where sociologists might engage with the Semantic Web field to extend the analyses and query capabilities presented here in the future.
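    As a rough illustration of the pipeline the thesis describes (generating RDF from documents, then querying it locally), here is a minimal sketch using rdflib. The IRI scheme, the Article class, and the Dublin Core properties are assumptions chosen for illustration, not the vocabulary actually used in the thesis.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import DCTERMS, RDF

# Invented IRI scheme and class for illustration only.
EX = Namespace("http://example.org/constitution/")

g = Graph()
article = EX["1883/article-1"]
g.add((article, RDF.type, EX.Article))
g.add((article, DCTERMS.date, Literal("1883")))
g.add((article, DCTERMS.description,
       Literal("Tekst van het artikel ...", lang="nl")))

# A SPARQL query that, as in the thesis, refers only to the local data.
query = """
PREFIX dcterms: <http://purl.org/dc/terms/>
SELECT ?article ?date WHERE {
  ?article a <http://example.org/constitution/Article> ;
           dcterms:date ?date .
}
ORDER BY ?date
"""
for row in g.query(query):
    print(row.article, row.date)
```

    Enrichment from external knowledge bases would then follow, e.g. by resolving entities mentioned in the text against a public endpoint such as Wikidata or DBpedia.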

    Ontologies on the semantic web

    As an informational technology, the World Wide Web has enjoyed spectacular success. In just ten years it has transformed the way information is produced, stored, and shared in arenas as diverse as shopping, family photo albums, and high-level academic research. The “Semantic Web” was touted by its developers as equally revolutionary, but has not yet achieved anything like the Web’s exponential uptake. This 17,000-word survey article explores why this might be so, from a perspective that bridges both philosophy and IT.

    Infectious Disease Ontology

    Technological developments have resulted in tremendous increases in the volume and diversity of the data and information that must be processed in the course of biomedical and clinical research and practice. Researchers are at the same time under ever greater pressure to share data and to take steps to ensure that data resources are interoperable. The use of ontologies to annotate data has proven successful in supporting these goals and in providing new possibilities for the automated processing of data and information. In this chapter, we describe different types of vocabulary resources and emphasize those features of formal ontologies that make them most useful for computational applications. We describe current uses of ontologies and discuss future goals for ontology-based computing, focusing on its use in the field of infectious diseases. We review the largest and most widely used vocabulary resources relevant to the study of infectious diseases and conclude with a description of the Infectious Disease Ontology (IDO) suite of interoperable ontology modules that together cover the entire infectious disease domain.
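    A minimal sketch of what ontology-based annotation buys in practice: the record below points at an ontology class IRI rather than a free-text string, which is what lets independent datasets be joined on the same term. The OBO PURL pattern is real, but the IDO term ID, the example namespace, and the hasFinding property are placeholders, not identifiers from the released ontology.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

# OBO Foundry IRIs follow this pattern; the IDO term ID below is a
# placeholder, not a verified identifier from the released ontology.
OBO = Namespace("http://purl.obolibrary.org/obo/")
EX = Namespace("http://example.org/cases/")   # invented application namespace

g = Graph()
case = EX["case-042"]
g.add((case, RDF.type, EX.ClinicalCase))

# The point of ontology annotation: the finding is an ontology class IRI,
# not a bare string, so other datasets annotated with the same class
# become automatically interoperable with this one.
finding = OBO["IDO_0000000"]                  # placeholder term ID
g.add((case, EX.hasFinding, finding))
g.add((finding, RDFS.label, Literal("infectious disease finding")))

print(g.serialize(format="turtle"))
```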

    Applying semantic web technologies to knowledge sharing in aerospace engineering

    This paper details an integrated methodology to optimise knowledge reuse and sharing, illustrated with a use case in the aeronautics domain. It uses ontologies as a central modelling strategy for the capture of knowledge from legacy documents via automated means, or directly in systems interfacing with knowledge workers via user-defined, web-based forms. The domain ontologies used for knowledge capture also guide the retrieval of the knowledge extracted from the data, using a semantic search system that supports multiple modalities during search. This approach has been applied and evaluated successfully within the aerospace domain, and is currently being extended for use in other domains on an increasingly large scale.
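    One way an ontology can guide user-facing capture forms, as described above, is to derive a form’s fields from the properties declared for a class. A minimal sketch with rdflib, assuming a hypothetical ontology file, namespace, and DesignIssue class (all invented for illustration):

```python
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF, RDFS

# Hypothetical domain ontology; the file name, namespace, and class
# are invented for illustration.
AERO = Namespace("http://example.org/aero#")
g = Graph().parse("aero_ontology.ttl", format="turtle")

# Derive form fields for a class from the datatype properties whose
# rdfs:domain is that class -- one simple way an ontology can 'guide'
# user-facing knowledge capture.
fields = []
for prop in g.subjects(RDF.type, OWL.DatatypeProperty):
    if (prop, RDFS.domain, AERO.DesignIssue) in g:
        label = g.value(prop, RDFS.label) or prop.split("#")[-1]
        fields.append(str(label))

print("Form fields for DesignIssue:", sorted(fields))
```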

    Linked Data Supported Content Analysis for Sociology

    Philology and hermeneutics, the analysis and interpretation of natural language text in written historical sources, are the predecessors of modern content analysis and date back to antiquity. In the empirical social sciences, especially in sociology, content analysis provides valuable insights into social structures and cultural norms of the present and past. With the ever-growing amount of text on the web to analyze, numerous computer-assisted text analysis techniques and tools have been developed in sociological research. However, existing methods often go without sufficient standardization. As a consequence, sociological text analysis lacks transparency, reproducibility and data re-usability. The goal of this paper is to show how Linked Data principles and Entity Linking techniques can be used to structure, publish and analyze natural language text for sociological research, tackling these shortcomings. This is demonstrated on the use case of constitutional text documents of the Netherlands from 1884 to 2016, which represent an important contribution to the European cultural heritage. Finally, the generated data is made available and re-usable as Linked Data, not only for sociologists but also for all other researchers in the digital humanities domain interested in the development of constitutions in the Netherlands.
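    The Entity Linking step can be approximated with the public DBpedia Spotlight service. This is an illustrative sketch, not the paper’s actual pipeline: the endpoint, the confidence threshold, and the example sentence are assumptions, and the public service’s availability and rate limits vary.

```python
import requests

# Public DBpedia Spotlight endpoint; treat this as illustrative
# rather than production-ready.
SPOTLIGHT = "https://api.dbpedia-spotlight.org/en/annotate"

text = ("The Constitution of the Netherlands was revised in 1884 "
        "under King William III.")

resp = requests.get(
    SPOTLIGHT,
    params={"text": text, "confidence": 0.5},
    headers={"Accept": "application/json"},
    timeout=30,
)
resp.raise_for_status()

# Each resource ties a surface form in the text to a DBpedia IRI,
# turning plain prose into Linked Data references.
for resource in resp.json().get("Resources", []):
    print(resource["@surfaceForm"], "->", resource["@URI"])
```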