
    Stigmergic hyperlinks' contributions to web search

    Stigmergic hyperlinks are hyperlinks with a "heartbeat": if used, they stay healthy and online; if neglected, they fade and are eventually replaced. Their life attribute is a relative usage measure that regular hyperlinks do not provide; PageRank-like measures have historically been well informed about the structure of webs of documents, but unaware of what users actually do with the links. This paper elaborates on how to feed the users' perspective into Google's original, structure-centric PageRank metric. The discussion then bridges to the Deep Web, some search challenges, and how stigmergic hyperlinks could help decentralize the search experience, facilitating user-generated search solutions and supporting new related business models.
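    The usage-aware ranking idea can be pictured with a small sketch: a PageRank variant in which each outgoing link's transition probability is weighted by how often the link is actually followed. This is only an illustration under assumed inputs (a links adjacency map and a usage click-count map); it is not the formulation proposed in the paper.

    ```python
    import numpy as np

    def usage_weighted_pagerank(links, usage, damping=0.85, iters=100):
        """PageRank variant where out-links are weighted by observed usage.

        links: dict mapping each page to the list of pages it links to
               (every link target is assumed to also appear as a key).
        usage: dict mapping (source, target) pairs to click counts.
        """
        nodes = sorted(links)
        index = {node: i for i, node in enumerate(nodes)}
        n = len(nodes)

        # Column-stochastic transition matrix; the +1 smoothing keeps neglected
        # links alive with a small probability instead of dropping them to zero.
        M = np.zeros((n, n))
        for src, targets in links.items():
            if not targets:
                continue  # dangling page: its rank mass is ignored in this sketch
            weights = np.array([usage.get((src, t), 0) + 1 for t in targets], dtype=float)
            weights /= weights.sum()
            for t, w in zip(targets, weights):
                M[index[t], index[src]] = w

        rank = np.full(n, 1.0 / n)
        for _ in range(iters):
            rank = (1 - damping) / n + damping * M @ rank
        return dict(zip(nodes, rank))

    # Tiny example: page "a" links to "b" and "c", but users mostly follow a -> b,
    # so "b" ends up ranked above "c" despite an identical link structure.
    ranks = usage_weighted_pagerank(
        links={"a": ["b", "c"], "b": ["a"], "c": ["a"]},
        usage={("a", "b"): 90, ("a", "c"): 10},
    )
    ```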

    Towards Modeling of DataWeb Applications - A Requirement's Perspective

    The Web is increasingly used as a platform for full-fledged and progressively more complex information systems, in which a huge amount of change-intensive data is managed by underlying database systems. From a software engineering point of view, the development of such so-called DataWeb applications requires proper modeling methods in order to ensure architectural soundness and maintainability. The goal of this paper is twofold. First, a framework of requirements covering the design space of DataWeb modeling methods in terms of three orthogonal dimensions is suggested. Second, on the basis of this framework, eight representative modeling methods for DataWeb applications are surveyed and general shortcomings are identified, pointing the way to next-generation modeling methods.

    Linked Data - the story so far

    The term “Linked Data” refers to a set of best practices for publishing and connecting structured data on the Web. These best practices have been adopted by an increasing number of data providers over the last three years, leading to the creation of a global data space containing billions of assertions: the Web of Data. In this article, the authors present the concept and technical principles of Linked Data and situate these within the broader context of related technological developments. They describe progress to date in publishing Linked Data on the Web, review applications that have been developed to exploit the Web of Data, and map out a research agenda for the Linked Data community as it moves forward.
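    As a concrete illustration of those practices (HTTP URIs as names, RDF for structure, and links into other datasets), here is a minimal sketch in Python using the rdflib library. The namespace, resource names, and DBpedia links are chosen for illustration and are not taken from the article.

    ```python
    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import DCTERMS, RDFS

    # Hypothetical namespace for a local dataset; all example URIs are illustrative.
    EX = Namespace("http://example.org/books/")

    g = Graph()
    g.bind("ex", EX)
    g.bind("dcterms", DCTERMS)

    book = EX["weaving-the-web"]  # an HTTP URI that names the resource
    g.add((book, DCTERMS.title, Literal("Weaving the Web")))

    # Links into external datasets (here DBpedia) are what turn an isolated RDF
    # description into Linked Data.
    g.add((book, DCTERMS.creator, URIRef("http://dbpedia.org/resource/Tim_Berners-Lee")))
    g.add((book, RDFS.seeAlso, URIRef("http://dbpedia.org/resource/Weaving_the_Web")))

    print(g.serialize(format="turtle"))
    ```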

    Exploring The Value Of Folksonomies For Creating Semantic Metadata

    Finding good keywords to describe resources is an ongoing problem: typically we select such words manually from a thesaurus of terms, or they are created using automatic keyword extraction techniques. Folksonomies are an increasingly well-populated source of unstructured tags describing web resources. This paper explores the value of folksonomy tags as a potential source of keyword metadata by examining the relationship between folksonomies, community-produced annotations, and keywords extracted by machines. The experiment was carried out in two ways: subjectively, by asking two human indexers to evaluate the quality of the keywords generated by both systems; and automatically, by measuring the percentage of overlap between the folksonomy tag set and the machine-generated keyword set. The results show that the folksonomy tags agree more closely with the human-generated keywords than the automatically generated ones do. They also show that the trained indexers preferred the semantics of folksonomy tags over keywords extracted automatically. These results can be taken as evidence of the strong relationship of folksonomies to the human indexer's mindset, demonstrating that the folksonomies used in the del.icio.us bookmarking service are a potential source for generating semantic metadata to annotate web resources.
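    The automatic side of the comparison reduces to a set-overlap measure between the two keyword sets. The abstract does not give the exact formula, so the sketch below assumes the simplest version: the percentage of folksonomy tags that also appear among the machine-extracted keywords, compared case-insensitively.

    ```python
    def overlap_percentage(folksonomy_tags, extracted_keywords):
        """Share of folksonomy tags that also occur in the machine-extracted set.

        Both inputs are iterables of strings; comparison is case-insensitive.
        This particular metric is an assumption, not necessarily the paper's.
        """
        tags = {t.lower() for t in folksonomy_tags}
        keywords = {k.lower() for k in extracted_keywords}
        if not tags:
            return 0.0
        return 100.0 * len(tags & keywords) / len(tags)

    # Example: 2 of 4 tags also show up in the extractor's output -> 50.0
    print(overlap_percentage(
        ["semantic web", "tagging", "metadata", "delicious"],
        ["Metadata", "ontology", "semantic web", "annotation"],
    ))
    ```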

    Métodos de ingeniería web dirigidos por modelos: una revisión de literatura

    This paper presents some of the model-driven Web engineering methods that have been proposed, and discusses and analyzes the advantages and disadvantages of these methods with respect to current trends and best practices in model-driven engineering. The idea is to present each approach and analyze the models it proposes for representing Web applications, the architectural aspects of the transformations, and the use of current Web user interface technologies in the generated code. This is done in order to outline possible research lines for future work in the area of model-driven Web engineering.

    Business Process Modeling and Quick Prototyping with WebRatio BPM

    We describe a software tool called WebRatio BPM that helps close the gap between the modeling of business processes and the design and implementation of the software applications that support their enactment. The main idea is to raise the degree of automation in the conversion of business process models into application models, defined as abstract, platform-independent representations of the application's structure and behavior. Application models are themselves amenable to semi-automatic transformation into application code, resulting in extremely rapid prototyping and shorter time to market. Thanks to the proposed chain of model transformations, it is also possible to fine-tune the final application in several ways, e.g., by integrating the visual identity of the organization or by connecting the business process to legacy applications via Web Services.
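    To make the idea of such a model-to-model transformation concrete, the toy sketch below maps business process tasks onto abstract application-model elements (a page plus an action for each human task, background actions for automatic ones). The data structures and mapping rules are invented for illustration and do not reflect WebRatio's actual metamodels or transformation rules.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Task:
        """One step in a toy business process model."""
        name: str
        kind: str  # "human" or "automatic" (assumed classification)

    @dataclass
    class Page:
        """One element of a toy, platform-independent application model."""
        title: str
        actions: list = field(default_factory=list)

    def process_to_application_model(tasks):
        """Map human tasks to pages with a submit action; attach automatic
        tasks as background actions on the most recent page. Illustrative only."""
        pages = []
        for task in tasks:
            if task.kind == "human":
                pages.append(Page(title=task.name, actions=[f"submit_{task.name}"]))
            elif pages:
                pages[-1].actions.append(f"invoke_{task.name}")
        return pages

    app_model = process_to_application_model([
        Task("EnterOrder", "human"),
        Task("CheckCredit", "automatic"),
        Task("ConfirmOrder", "human"),
    ])
    ```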

    Generating collaborative systems for digital libraries: A model-driven approach

    The design and development of a digital library involves different stakeholders, such as information architects, librarians, and domain experts, who need to agree on a common language to describe, discuss, and negotiate the services the library has to offer. To this end, high-level, language-neutral models have to be devised. Metamodeling techniques favor the definition of domain-specific visual languages through which stakeholders can share their views and directly manipulate representations of the domain entities. This paper describes CRADLE (Cooperative-Relational Approach to Digital Library Environments), a metamodel-based framework and visual language for the definition of notions and services related to the development of digital libraries. A collection of tools allows the automatic generation of several services, defined with the CRADLE visual language, and of the graphical user interfaces providing access to them for the final user. The effectiveness of the approach is illustrated by presenting digital libraries generated with CRADLE, while the CRADLE environment has been evaluated using the cognitive dimensions framework.
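    The generative step can be pictured roughly as follows: from a declarative model of the library's entities, a tool emits service stubs (and, in the real system, the corresponding user interfaces). The sketch below uses an invented two-entity model and generates plain-text service signatures; it bears no relation to CRADLE's actual metamodel, visual language, or generators.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Entity:
        """One entity in a toy digital-library model (e.g. Document, Actor)."""
        name: str
        attributes: list

    def generate_service_stubs(entities):
        """Emit CRUD-style service signatures for each modeled entity."""
        stubs = []
        for e in entities:
            args = ", ".join(f"{a}: str" for a in e.attributes)
            stubs.append(f"def create_{e.name.lower()}({args}) -> str: ...")
            stubs.append(f"def search_{e.name.lower()}(query: str) -> list: ...")
        return "\n".join(stubs)

    model = [
        Entity("Document", ["title", "author", "year"]),
        Entity("Actor", ["name", "role"]),
    ]
    print(generate_service_stubs(model))
    ```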

    Challenging Computer Software Frontiers and the Human Resistance to Change

    This paper examines the driving and opposing forces that are governing the current paradigm shift from a data-processing information technology environment without software intelligence to an information-centric environment in which data changes are automatically interpreted within the context of the application domain. The driving forces are related to the large quantity of data and the complexity of networked systems that both call for software intelligence. The opposing forces are non-technical and due to the natural human resistance to change. Based on this background, the paper describes current information-centric technology, proposes a vision of intelligent software system capabilities, and identifies four areas of necessary research. Most urgent among these are the ability to dynamically extend and merge ontologies and semantic search capabilities that can be initiated either by human users or software agents. Longer-term research interests that pose a more severe challenge are related to the translation of emerging theoretical hierarchical temporal memory (HTM) concepts into usable software capabilities and the automated interpretation of graphical images such as those recorded by surveillance video cameras.