    Dublin City University at the TREC 2005 terabyte track

    For the 2005 Terabyte track in TREC, Dublin City University participated in all three tasks: Adhoc, Efficiency and Named Page Finding. Our runs for all tasks were primarily focussed on the application of "Top Subset Retrieval" to the Terabyte Track. This retrieval approach uses different types of sorted inverted indices so that fewer documents are processed, reducing query times in a way that minimises the loss of effectiveness in terms of query precision. We also compare a distributed version of our Físréal search system [1][2] against the same system deployed on a single machine.
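    The following Python sketch illustrates the general idea behind top-subset retrieval over an impact-sorted inverted index: because each term's postings are pre-sorted by descending weight, truncating the lists means fewer documents are scored while the strongest ones are kept. The index layout, weights and function names are illustrative assumptions, not the actual Físréal implementation.

        # Illustrative "top subset" retrieval over impact-sorted postings lists;
        # the data and scoring are assumptions for the sketch only.
        from collections import defaultdict

        # Postings are pre-sorted by descending impact (e.g. a quantised term weight),
        # so truncating each list keeps the highest-scoring documents for that term.
        index = {
            "terabyte": [("d3", 9.1), ("d7", 6.4), ("d1", 2.0)],
            "track":    [("d7", 5.5), ("d2", 4.8), ("d3", 1.1)],
        }

        def top_subset_search(query_terms, index, subset_size=2, k=10):
            """Score only the first `subset_size` postings of each query term."""
            scores = defaultdict(float)
            for term in query_terms:
                for doc_id, impact in index.get(term, [])[:subset_size]:
                    scores[doc_id] += impact
            return sorted(scores.items(), key=lambda item: item[1], reverse=True)[:k]

        print(top_subset_search(["terabyte", "track"], index))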

    Exploring The Value Of Folksonomies For Creating Semantic Metadata

    Finding good keywords to describe resources is an on-going problem: typically we select such words manually from a thesaurus of terms, or they are created using automatic keyword extraction techniques. Folksonomies are an increasingly well populated source of unstructured tags describing web resources. This paper explores the value of folksonomy tags as a potential source of keyword metadata by examining the relationship between folksonomies, community-produced annotations, and keywords extracted by machines. The experiment was carried out in two ways: subjectively, by asking two human indexers to evaluate the quality of the keywords generated by both systems; and automatically, by measuring the percentage of overlap between the folksonomy tag set and the machine-generated keyword set. The results of this experiment show that the folksonomy tags agree more closely with the human-generated keywords than those generated automatically. The results also show that the trained indexers preferred the semantics of folksonomy tags over keywords extracted automatically. These results can be considered evidence for the strong relationship of folksonomies to the human indexer's mindset, demonstrating that folksonomies used in the del.icio.us bookmarking service are a potential source for generating semantic metadata to annotate web resources.
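    As a concrete illustration of the automatic part of such an evaluation, the sketch below computes the percentage of overlap between a folksonomy tag set and a machine-extracted keyword set. The sample data and the simple lower-casing normalisation are assumptions, not the paper's actual procedure.

        # Overlap between folksonomy tags and machine-extracted keywords,
        # expressed as a percentage of the machine-generated set (illustrative data).
        folksonomy_tags = {"semantic web", "metadata", "tagging", "bookmarking"}
        extracted_keywords = {"metadata", "keyword extraction", "semantic web"}

        def overlap_percentage(tags, keywords):
            """Share of machine keywords that also appear among the folksonomy tags."""
            tags = {t.lower() for t in tags}
            keywords = {k.lower() for k in keywords}
            if not keywords:
                return 0.0
            return 100.0 * len(tags & keywords) / len(keywords)

        print(f"{overlap_percentage(folksonomy_tags, extracted_keywords):.1f}% overlap")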

    Achieving User Interface Heterogeneity in a Distributed Environment

    The introduction of distribution into the field of computing has enhanced the possibilities of information processing and interchange on scales which could not previously be achieved with stand-alone machines. However, the successful distribution of a process across a distributed system requires three problems to be considered: how the functionality of a process is distributed, how the data set on which the process works is distributed, and how the interface that allows the process to communicate with the outside world is distributed. The focus of the work in this paper lies in describing a model that attempts to provide a solution to the last of these problems. The model that has been developed allows the functionality of a process to be separated from its interface and to exist independently of it, and employs user-interface-independent display languages to provide distributed and heterogeneous user interfaces to processes. This separation also facilitates access to a service from diverse platforms and can support user interface mobility and third-party application integration. The goals and advantages of this model are partially realised in a prototype that has been designed around the WWW and its associated protocols, and it is predicted how the model could be fully realised by adopting a modular and object-oriented approach, as advocated by the Java programming environment.
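    One way to picture the separation the model advocates is the Python sketch below: the process exposes its functionality and a user-interface-independent display description separately, and any platform-specific renderer can consume that description. The description format, the renderer and the example process are purely illustrative assumptions and are not taken from the paper's model.

        # Illustrative separation of a process's functionality from its interface:
        # the process publishes a UI-independent description that different
        # renderers (text, web, GUI) could turn into concrete widgets.

        def currency_converter(amount, rate):
            """The process's functionality, with no knowledge of any interface."""
            return amount * rate

        def describe_interface():
            """A UI-independent description of the inputs and outputs the process needs."""
            return {
                "title": "Currency converter",
                "inputs": [{"name": "amount", "type": "number"},
                           {"name": "rate", "type": "number"}],
                "outputs": [{"name": "result", "type": "number"}],
            }

        def render_as_text(description):
            """One possible renderer; other platforms could consume the same description."""
            lines = [description["title"]]
            lines += [f"  input: {i['name']} ({i['type']})" for i in description["inputs"]]
            lines += [f"  output: {o['name']} ({o['type']})" for o in description["outputs"]]
            return "\n".join(lines)

        print(render_as_text(describe_interface()))
        print(currency_converter(100, 0.92))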

    The Grid[Way] Job Template Manager, a tool for parameter sweeping

    Parameter sweeping is a widely used algorithmic technique in computational science. It is especially suited for high-throughput computing since the jobs evaluating the parameter space are loosely coupled or independent. A tool that integrates the modeling of a parameter study with the control of jobs in a distributed architecture is presented. The main task is to facilitate the creation and deletion of job templates, which are the elements describing the jobs to be run. Extra functionality relies upon the GridWay Metascheduler, acting as the middleware layer for job submission and control. The tool supports features such as a multi-dimensional sweeping space, wildcarding of parameters, functional evaluation of ranges, value-skipping and automatic indexation of job templates. The use of this tool increases the reliability of the parameter sweep study thanks to the systematic bookkeeping of job templates and their respective job statuses. Furthermore, it simplifies the porting of the target application to the grid, reducing the required amount of time and effort.
    Comment: 26 pages, 1 figure
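    The sketch below illustrates how a multi-dimensional parameter space can be expanded into individual job templates, including value-skipping; the template fields, naming scheme and skip rule are assumptions for illustration and do not reproduce the actual GridWay Job Template Manager format.

        # Illustrative generation of job templates for a multi-dimensional parameter
        # sweep; field names and the skip rule are assumptions for this sketch.
        from itertools import product

        parameter_space = {
            "temperature": [280, 290, 300],
            "pressure":    [1.0, 2.5],
        }

        def make_job_templates(space, skip=None):
            """One template per point of the Cartesian product, optionally skipping values."""
            skip = skip or (lambda point: False)
            names, ranges = zip(*space.items())
            templates = []
            for idx, values in enumerate(product(*ranges)):
                point = dict(zip(names, values))
                if skip(point):
                    continue  # value-skipping: leave out points the study does not need
                args = " ".join(f"--{k}={v}" for k, v in point.items())
                templates.append({"name": f"job_{idx:04d}", "arguments": args})
            return templates

        # Example: skip high-pressure runs at the lowest temperature.
        for template in make_job_templates(
                parameter_space,
                skip=lambda p: p["temperature"] == 280 and p["pressure"] > 2.0):
            print(template)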

    Towards a Flexible Intra-Trustcenter Management Protocol

    This paper proposes the Intra Trustcenter Protocol (ITP), a flexible and secure management protocol for communication between arbitrary trustcenter components. Unlike other existing protocols (such as PKCS#7, CMP or XKMS), ITP focuses on communication within a trustcenter. It is powerful enough for transferring complex messages which are machine and human readable and easy to understand. In addition, it includes an extension mechanism to be prepared for future developments.
    Comment: 12 pages, 0 figures; in The Third International Workshop for Applied PKI (IWAP2004)
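    As a purely hypothetical illustration of a machine- and human-readable trustcenter message with an extension mechanism, the sketch below builds a small XML message in Python; every element name is invented for this example and is not taken from the ITP specification.

        # Hypothetical intra-trustcenter message with an open extension container;
        # element names are invented for illustration, not the ITP schema.
        import xml.etree.ElementTree as ET

        message = ET.Element("TrustcenterMessage", version="1.0")
        ET.SubElement(message, "Sender").text = "registration-authority"
        ET.SubElement(message, "Receiver").text = "certification-authority"
        ET.SubElement(message, "Operation").text = "certificate-request"
        # An extension container keeps the format open for future developments.
        extensions = ET.SubElement(message, "Extensions")
        ET.SubElement(extensions, "Extension", name="priority").text = "high"

        print(ET.tostring(message, encoding="unicode"))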

    Interactive tag maps and tag clouds for the multiscale exploration of large spatio-temporal datasets

    'Tag clouds' and 'tag maps' are introduced to represent geographically referenced text. In combination, these aspatial and spatial views are used to explore a large structured spatio-temporal data set by providing overviews and filtering by text and geography. Prototypes are implemented using freely available technologies including Google Earth and Yahoo!'s Tag Map applet. The interactive tag map and tag cloud techniques and the rapid prototyping method used are informally evaluated through the successes and limitations encountered. Preliminary evaluation suggests that the techniques may be useful for generating insights when visualizing large data sets containing geo-referenced text strings. The rapid prototyping approach enabled the techniques to be developed and evaluated, leading to geovisualization through which a number of ideas were generated. Limitations of this approach are reflected upon. Tag placement, generalisation and prominence at different scales are issues that came to light in this study and warrant further work.
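    The sketch below illustrates one way tag-cloud prominence can be derived from the frequency of tags in a set of geo-referenced records; the linear font-size scaling and the sample records are assumptions, not the method used in the paper's prototypes.

        # Illustrative tag-cloud sizing from geo-referenced tag frequencies;
        # sample data and linear scaling are assumptions for this sketch.
        from collections import Counter

        records = [
            {"tag": "flooding", "lat": 51.5, "lon": -0.1},
            {"tag": "flooding", "lat": 53.4, "lon": -2.2},
            {"tag": "drought",  "lat": 52.2, "lon":  0.1},
        ]

        def tag_cloud_sizes(records, min_pt=10, max_pt=36):
            """Map each tag's frequency onto a font size between min_pt and max_pt."""
            counts = Counter(r["tag"] for r in records)
            lo, hi = min(counts.values()), max(counts.values())
            span = (hi - lo) or 1
            return {tag: min_pt + (n - lo) * (max_pt - min_pt) / span
                    for tag, n in counts.items()}

        print(tag_cloud_sizes(records))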