
    The Semantic Grid: A future e-Science infrastructure

    e-Science offers a promising vision of how computer and communication technology can support and enhance the scientific process. It does this by enabling scientists to generate, analyse, share and discuss their insights, experiments and results in an effective manner. The underlying computer infrastructure that provides these facilities is commonly referred to as the Grid. At this time, there are a number of grid applications being developed, and there is a whole raft of computer technologies that provide fragments of the necessary functionality. However, there is currently a major gap between these endeavours and the vision of e-Science, in which there is a high degree of easy-to-use and seamless automation and in which there are flexible collaborations and computations on a global scale. To bridge this practice–aspiration divide, this paper presents a research agenda whose aim is to move from the current state of the art in e-Science infrastructure to the future infrastructure that is needed to support the full richness of the e-Science vision. Here the future e-Science research infrastructure is termed the Semantic Grid (the relationship of the Semantic Grid to the Grid is meant to be analogous to that between the Semantic Web and the Web). In particular, we present a conceptual architecture for the Semantic Grid. This architecture adopts a service-oriented perspective in which distinct stakeholders in the scientific process, represented as software agents, provide services to one another, under various service level agreements, in various forms of marketplace. We then focus predominantly on the issues concerned with the way that knowledge is acquired and used in such environments, since we believe this is the key differentiator between current grid endeavours and those envisioned for the Semantic Grid.
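    As a rough illustration of the service-oriented perspective described above, the following Python sketch models stakeholders as agents that offer services under service level agreements within a marketplace. All class and field names are hypothetical; the abstract does not specify an implementation.

```python
from dataclasses import dataclass, field

@dataclass
class ServiceLevelAgreement:
    max_latency_s: float      # how quickly results must be delivered
    cost_per_call: float      # notional price agreed between the parties

@dataclass
class Service:
    name: str
    provider: "Agent"
    sla: ServiceLevelAgreement

@dataclass
class Agent:
    """A stakeholder in the scientific process (scientist, instrument, archive)."""
    name: str
    offered: list = field(default_factory=list)

    def offer(self, service_name, sla):
        svc = Service(service_name, self, sla)
        self.offered.append(svc)
        return svc

class Marketplace:
    """A meeting point where agents advertise and discover services."""
    def __init__(self):
        self.catalogue = []

    def advertise(self, service):
        self.catalogue.append(service)

    def find(self, name, max_cost):
        return [s for s in self.catalogue
                if s.name == name and s.sla.cost_per_call <= max_cost]

# An archive agent advertises an analysis service; a scientist discovers it.
market = Marketplace()
archive = Agent("data-archive")
market.advertise(archive.offer("spectral-analysis", ServiceLevelAgreement(60.0, 0.05)))
print([s.provider.name for s in market.find("spectral-analysis", max_cost=0.10)])
```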

    The Strategic Role of Semantic Web in the Big Data Context

    Paper presented at the I Workshop de Informação, Dados e Tecnologia, held 4–6 September 2017 in Florianópolis (SC), in the auditorium of the Espaço Físico Integrado (EFI) at the Universidade Federal de Santa Catarina (UFSC). The Semantic Web presents a theoretical corpus and a range of technologies and applications that demonstrate its consistency, including the use of its concepts and technologies in scopes beyond the Web itself. In this sense, Big Data projects can take advantage of the principles and developments of the Semantic Web to improve data analysis processes, especially by adding semantic characteristics that contextualize the data. This research therefore aims to analyse and discuss the potential of Semantic Web technologies as a means of integrating and developing Big Data applications. An exploratory qualitative methodology was used, in which the literature and documentary sources were examined for points of convergence between the Semantic Web and Big Data. Three main points were identified and discussed: the application of Linked Data as a data source for Big Data; the use of ontologies in data analysis; and the use of Semantic Web technologies to promote interoperability in Big Data scenarios. It was thus possible to identify that the Semantic Web, especially its technologies and applications, can significantly aid the development of Big Data, since it provides a paradigm complementary to those mainly applied in data analysis.
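    One of the three convergence points above, Linked Data as a data source for Big Data, can be illustrated with a short Python sketch using the SPARQLWrapper library to pull public Linked Data into an analysis pipeline. The endpoint and query are examples only, not taken from the paper.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Example only: fetch a small slice of Linked Data (DBpedia) that could be
# joined with local records to add semantic context to a Big Data analysis.
endpoint = SPARQLWrapper("https://dbpedia.org/sparql")
endpoint.setQuery("""
    PREFIX dbo: <http://dbpedia.org/ontology/>
    PREFIX dbr: <http://dbpedia.org/resource/>
    SELECT ?city ?population WHERE {
        ?city a dbo:City ;
              dbo:country dbr:Brazil ;
              dbo:populationTotal ?population .
    } LIMIT 10
""")
endpoint.setReturnFormat(JSON)

results = endpoint.query().convert()
for row in results["results"]["bindings"]:
    # Each binding can now enrich local data sets with geographic context.
    print(row["city"]["value"], row["population"]["value"])
```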

    Semantic learning webs

    By 2020, microprocessors will likely be as cheap and plentiful as scrap paper, scattered by the millions into the environment, allowing us to place intelligent systems everywhere. This will change everything around us, including the nature of commerce, the wealth of nations, and the way we communicate, work, play, and live. This will give us smart homes, cars, TVs, jewellery, and money. We will speak to our appliances, and they will speak back. Scientists also expect the Internet will wire up the entire planet and evolve into a membrane consisting of millions of computer networks, creating an “intelligent planet.” The Internet will eventually become a “Magic Mirror” that appears in fairy tales, able to speak with the wisdom of the human race. (Michio Kaku, Visions: How Science Will Revolutionize the Twenty-First Century, 1998)

    If the semantic web needed a symbol, a good one to use would be a Navaho dream-catcher: a small web, lovingly hand-crafted, [easy] to look at, and rumored to catch dreams; but really more of a symbol than a reality. (Pat Hayes, Catching the Dreams, 2002)

    Though it is almost impossible to envisage what the Web will be like by the end of the next decade, we can say with some certainty that it will have continued its seemingly unstoppable growth. Given the investment of time and money in the Semantic Web (Berners-Lee et al., 2001), we can also be sure that some form of semanticization will have taken place. This might be superficial, accomplished simply through the addition of loose forms of meta-data mark-up, or more principled, grounded in ontologies and formalised by means of emerging semantic web standards such as RDF (Lassila and Swick, 1999) or OWL (McGuinness and van Harmelen, 2003). Whatever the case, the addition of semantic mark-up will make at least part of the Web more readily accessible to humans and their software agents and will facilitate agent interoperability. If current research is successful there will also be a plethora of e-learning platforms making use of a varied menu of reusable educational material or learning objects. For the learner, the semanticized Web will, in addition, offer rich seams of diverse learning resources over and above the course materials (or learning objects) specified by course designers. For instance, the annotation registries, which provide access to marked-up resources, will enable more focussed, ontologically-guided (or semantic) search. This much is already in development. But we can go much further. Semantic technologies make it possible not only to reason about the Web as if it were one extended knowledge base but also to provide a range of additional educational semantic web services such as summarization, interpretation or sense-making, structure visualization, and support for argumentation.
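    The "ontologically-guided (or semantic) search" mentioned above can be sketched, under assumed vocabularies, with rdflib: learning resources carry RDF mark-up, and a SPARQL query retrieves only the resources whose annotations match the learner's needs. The ex: namespace and resources are invented for the example.

```python
from rdflib import Graph, Namespace, Literal, RDF

EX = Namespace("http://example.org/learning#")  # hypothetical vocabulary

g = Graph()
g.bind("ex", EX)

# Two annotated learning resources: one on RDF for beginners, one on OWL.
g.add((EX.intro_rdf, RDF.type, EX.LearningObject))
g.add((EX.intro_rdf, EX.topic, EX.RDF))
g.add((EX.intro_rdf, EX.level, Literal("beginner")))

g.add((EX.owl_course, RDF.type, EX.LearningObject))
g.add((EX.owl_course, EX.topic, EX.OWL))
g.add((EX.owl_course, EX.level, Literal("advanced")))

# Ontologically-guided search: beginner material on a chosen topic.
query = """
    PREFIX ex: <http://example.org/learning#>
    SELECT ?lo WHERE {
        ?lo a ex:LearningObject ;
            ex:topic ex:RDF ;
            ex:level "beginner" .
    }
"""
for (lo,) in g.query(query):
    print(lo)
```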

    A metadata service for service oriented architectures

    Service oriented architectures provide a modern paradigm for web services, allowing seamless interoperation among network applications and supporting a flexible approach to building large complex information systems. A number of industrial standards have emerged to exploit this paradigm, with the development of the J2EE and .NET infrastructure platforms, the communication protocol SOAP, the description language WSDL and the orchestration languages BPEL, XLANG and WSCI. At the same time the Semantic Web enables automated use of ontologies to describe web services in a machine interpretable language. To enable process composition and large scale resource integration over heterogeneous sources, a new research initiative is needed. Current initiatives have identified the role of Peer-to-Peer networks and Service Oriented Architectures in enabling large scale resource communication and integration. However, this approach neglects to identify or utilise the role of Semantic Web technologies in promoting greater automation and reliability using service semantics; thus a new framework is required adopting Peer-to-Peer networks, Service Oriented Architectures and Semantic Web technologies. In this context, this thesis presents a management and storage framework for a distributed service repository over a super-peer network to facilitate process composition.
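    The super-peer repository is not specified in detail in the abstract; the following Python fragment is only a hypothetical sketch of its storage and lookup role: peers publish service metadata (a WSDL location plus semantic tags), and the super peer indexes it so other peers can discover candidate services for composition.

```python
class SuperPeer:
    """Indexes service metadata published by ordinary peers."""
    def __init__(self):
        self._index = {}   # semantic tag -> list of service records

    def publish(self, peer_id, service_name, wsdl_url, tags):
        record = {"peer": peer_id, "service": service_name, "wsdl": wsdl_url}
        for tag in tags:
            self._index.setdefault(tag, []).append(record)

    def lookup(self, tag):
        return self._index.get(tag, [])

# A peer publishes a service description; another peer finds it by concept.
sp = SuperPeer()
sp.publish("peer-42", "ProteinAlignment",
           "http://example.org/align?wsdl", tags=["bioinformatics", "alignment"])
print(sp.lookup("alignment"))
```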

    Open research for diffusion of open digital memories at Web 2.0/3.0

    This paper suggests an experimental perspective of Open Research, understood as a process of deconstruction of knowledge about society that leads to its reconstruction, archiving and dissemination in the form of Open Digital Memories. This posture was developed within the project Public Communication of Art: the Case of Global/Local Art Museums, at the University of Lisbon. The project was funded by the Fundação para a Ciência e a Tecnologia (FCT) and produced 6 books and 8 sites, among other final results. Researching and memorizing may be pursued in an open style that includes the production and reception of investigation by both the researcher and the common citizen. This may involve multiple shared tasks: questioning the social, organization and critique of sources and data, co-participation in the use of methods, and public discussions on work in progress and on research results. For this aim, Open Research must articulate the Social Sciences and Humanities with New Media, especially across digital social networks, both at Web 2.0 (the Reading/Writing Internet) and at Web 3.0 (the so-called Semantic Web). Two strategies contributing to this posture, framed within the optics of Semantic-Logic Sociology, are exemplified: Experimental Books and Social Semantic-Logic Sites. They use the following instruments for producing/writing and receiving/reading social and semantic knowledge, some of which are shown in the present paper: Visual Ontologies built from Social Hybridologies, GeoNeoLogic Methods (Multitouch Questionnaire, Trichotomies Game, etc.), Conceptual Abstracts, Present Books, Author-Actor Maps, GeoNeoLogic Novels, Visual Social Ontologies, Knowledge Interactive Windows, Visual Socio-Semantic Indexes, and Visual Meta-Semantic Indexes. In short, Open Research and Open Digital Memories may constitute some of the fundamental pillars of the emergent Research Society. This means a social paradigm in which common citizens may become a sort of 'lay researchers' and, in the process, reformulate contemporary experts' knowledge and power.

    VOSD: a general-purpose virtual observatory over semantic databases

    E-Science relies heavily on manipulating massive amounts of data for research purposes. Researchers should be able to contribute their own data and methods, thus making their results accessible and reproducible by others worldwide. They need an environment which they can use anytime and anywhere to perform data-intensive computations. Virtual observatories serve this purpose. With the advance of the Semantic Web, more and more data is available in Resource Description Framework based databases. It is often desirable to have the ability to link data from local sources to these public data sets. We present a prototype system which satisfies the requirements of a virtual observatory over semantic databases, such as user roles, data import, query execution, visualization, and exporting of results. The system has special features which facilitate working with semantic data: a visual query editor, use of ontologies, knowledge inference, querying remote endpoints, linking remote data with local data, and extracting data from web pages.
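    One of the listed features, linking remote data with local data, can be sketched with rdflib: locally contributed triples and a dereferenced Linked Data resource are loaded into one graph and queried together. The local triples and the chosen remote resource are placeholders, not data from the prototype.

```python
from rdflib import Graph

g = Graph()

# Locally contributed data (inline Turtle standing in for an imported file).
g.parse(data="""
    @prefix ex: <http://example.org/obs#> .
    ex:run42 ex:target <http://dbpedia.org/resource/Crab_Nebula> ;
             ex:exposureSeconds 300 .
""", format="turtle")

# Remote Linked Data: dereferencing the target URI pulls its public RDF
# description into the same graph, so both sources can be queried together.
g.parse("http://dbpedia.org/resource/Crab_Nebula")

results = g.query("""
    PREFIX ex: <http://example.org/obs#>
    SELECT ?run ?label WHERE {
        ?run ex:target ?obj .
        ?obj <http://www.w3.org/2000/01/rdf-schema#label> ?label .
        FILTER (lang(?label) = "en")
    }
""")
for run, label in results:
    print(run, label)
```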

    Knowledge Extraction from Textual Resources through Semantic Web Tools and Advanced Machine Learning Algorithms for Applications in Various Domains

    Nowadays there is a tremendous amount of unstructured data, often represented by texts, which is created and stored in a variety of forms in many domains such as patients' health records, social network comments, scientific publications, and so on. This volume of data represents an invaluable source of knowledge, but unfortunately it is challenging for machines to mine. At the same time, novel tools as well as advanced methodologies have been introduced in several domains, improving the efficacy and efficiency of data-based services. Following this trend, this thesis shows how to parse data from text with Semantic Web based tools, feed the data into Machine Learning methodologies, and produce services or resources that facilitate the execution of some tasks. More precisely, the use of Semantic Web technologies powered by Machine Learning algorithms has been investigated in the Healthcare and E-Learning domains through methodologies not previously experimented with. Furthermore, this thesis investigates the use of some state-of-the-art tools to move data from texts to graphs for representing the knowledge contained in scientific literature. Finally, the use of a Semantic Web ontology and novel heuristics to detect insights from biological data in the form of graphs is presented. The thesis contributes to the scientific literature in terms of results and resources. Most of the material presented in this thesis derives from research papers published in international journals or conference proceedings.
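    As a toy illustration of the text-to-graph step mentioned above, the following Python fragment turns entity mentions found in sentences into a co-occurrence graph whose edges could feed a later machine-learning stage. The keyword matching stands in for real NLP and Semantic Web tooling and is not the thesis's method.

```python
from itertools import combinations
from collections import defaultdict

# Toy entity list and corpus; a real pipeline would use NLP-based extraction.
ENTITIES = {"aspirin", "ibuprofen", "headache", "inflammation"}
sentences = [
    "Aspirin is commonly used to treat headache.",
    "Ibuprofen reduces inflammation and headache.",
]

graph = defaultdict(int)   # (entity, entity) -> co-occurrence count
for sentence in sentences:
    found = {e for e in ENTITIES if e in sentence.lower()}
    for a, b in combinations(sorted(found), 2):
        graph[(a, b)] += 1

# Edge weights (or richer graph features) would then be passed to a learner.
for (a, b), weight in graph.items():
    print(f"{a} -- {b}: {weight}")
```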

    A Web Service Composition Method Based on OpenAPI Semantic Annotations

    Automatic Web service composition is a research direction aimed at improving the process of aggregating multiple Web services to create new, specific functionality. The use of semantics is required, as a proper semantic model with annotation standards enables the automated reasoning needed to solve non-trivial cases. Most previous models are limited to describing service parameters as concepts in a simple hierarchy. Our proposed method increases the expressiveness at the parameter level by using concept properties that define attributes expressed by name and type; concept properties are inherited. The paper also describes how parameters are matched to create valid compositions in an automatic manner. Additionally, the composition algorithm is applied in practice to descriptions of Web services implemented as REST APIs and expressed by OpenAPI specifications. Our proposal uses knowledge models (ontologies) to enhance these OpenAPI constructs with JSON-LD semantic annotations in order to obtain better compositions for the services involved. We also propose an adjusted composition algorithm that extends the semantic knowledge defined by our model.
    Comment: International Conference on e-Business Engineering (ICEBE), 9 pages
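    The parameter-matching idea can be illustrated with a short Python sketch (not the paper's implementation): parameters are concepts whose named, typed properties are inherited from parent concepts, and a provided output matches a required input when it supplies every property the input needs.

```python
class Concept:
    def __init__(self, name, parent=None, properties=None):
        self.name = name
        self.parent = parent
        self.own = properties or {}          # property name -> type

    def properties(self):
        inherited = self.parent.properties() if self.parent else {}
        return {**inherited, **self.own}     # child properties override parent

def matches(provided: "Concept", required: "Concept") -> bool:
    """True if every property required by the input is supplied by the output."""
    have = provided.properties()
    return all(have.get(n) == t for n, t in required.properties().items())

# Example hierarchy: GeoAddress inherits Address's properties and adds two more.
address = Concept("Address", properties={"street": "string", "city": "string"})
geo_address = Concept("GeoAddress", parent=address,
                      properties={"lat": "number", "lon": "number"})

print(matches(geo_address, address))   # True: GeoAddress covers all of Address
print(matches(address, geo_address))   # False: plain Address lacks lat/lon
```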

    A generic framework for the development of standardised learning objects within the discipline of construction management

    E-learning has occurred in the academic world in different forms since the early 1990s. Its use varies from interactive multimedia tools and simulation environments to static resources within learning management systems. E-learning tools and environments are no longer criticised for their lack of use in higher education in general and within the construction domain in particular. The main criticism, however, is that of reinventing the wheel in order to create new learning environments that cater for different educational needs. Therefore, sharing educational content has become the focus of current research, taking e-learning into a whole new era of developments. This era is enabled by the emergence of new technologies (online and wireless) and the development of educational standards, such as SCORM (Sharable Content Object Reference Model) and LOM (Learning Object Metadata). Accordingly, the broad definition of the construction domain and the interlocking nature of subjects taught within it make the concept of sharing content most appealing. This paper proposes a framework that describes the steps required to apply e-learning metadata standards and ontologies to sharable learning objects serving the construction discipline. The paper further describes the application of the proposed framework to a case study for developing an online environment of learning objects that are standardised, sharable, transparent and that cater for the needs of learners, educators and curricula developers in Construction Management. Based on the framework, a learning objects repository is developed incorporating educational and web standards. The repository manages objects as well as metadata using an ontology and offers a set of services, such as storing, retrieving and searching of learning objects, using Semantic Web technologies. Thus, it increases the reusability, sharability and interoperability of learning objects.
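    A possible reading of the repository's storage and semantic search services is sketched below in Python: learning objects carry LOM-style metadata, and a query term is expanded through a small subject ontology so related construction-management topics are also retrieved. The field names and the ontology are illustrative, not those of the case study.

```python
# Narrower-term relationships within the discipline (illustrative only).
ONTOLOGY = {
    "construction management": ["project planning", "cost estimation"],
    "project planning": ["scheduling"],
}

def expand(term):
    """Return the term plus all narrower terms reachable in the ontology."""
    terms = {term}
    for child in ONTOLOGY.get(term, []):
        terms |= expand(child)
    return terms

class Repository:
    """Stores learning objects with LOM-style metadata and supports semantic search."""
    def __init__(self):
        self.objects = []

    def store(self, title, subject, fmt):
        self.objects.append({"title": title, "subject": subject, "format": fmt})

    def search(self, subject):
        wanted = expand(subject)
        return [o for o in self.objects if o["subject"] in wanted]

repo = Repository()
repo.store("Critical Path Method basics", "scheduling", "text/html")
repo.store("Estimating site overheads", "cost estimation", "application/pdf")
print([o["title"] for o in repo.search("construction management")])
```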