4 research outputs found

    Feasibility Study for the Service Brokerage Platform "PiArch": a Service Platform for Audit-Proof Long-Term Archiving

    This feasibility study, funded by bfa ltd. and KTI, investigated how a service brokerage platform (PiArch) could be technically realized. With PiArch, bfa intends to offer a service platform that acts as a broker: it connects customers with different storage and business-intelligence providers and transparently integrates their services. The PiArch service is therefore a metaservice, i.e., a service that provides other services. In the study, the relevant processes were defined and analyzed using business use cases. Based on this analysis, an architecture design was drafted, and several critical points were examined for feasibility and experimentally implemented as a proof of concept. In addition to the implementation recommendation, a security concept and a metadata concept were developed.
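    The broker pattern behind PiArch can be illustrated with a short sketch. The following is a minimal, hypothetical Python illustration of a metaservice delegating to interchangeable providers; the names ArchiveBroker, StorageProvider, store, and retrieve are assumptions made for illustration and are not taken from the study.

        # Hypothetical sketch of the broker ("metaservice") pattern: the customer
        # talks only to the broker, which delegates to a registered provider.
        from abc import ABC, abstractmethod

        class StorageProvider(ABC):
            """Interface each mediated storage provider would implement (assumed)."""

            @abstractmethod
            def store(self, document_id: str, payload: bytes) -> None: ...

            @abstractmethod
            def retrieve(self, document_id: str) -> bytes: ...

        class ArchiveBroker:
            """Mediates between customers and registered providers, exposing one
            transparent service interface (illustrative, not PiArch's actual API)."""

            def __init__(self) -> None:
                self._providers: dict[str, StorageProvider] = {}

            def register(self, name: str, provider: StorageProvider) -> None:
                self._providers[name] = provider

            def store(self, provider: str, document_id: str, payload: bytes) -> None:
                # Delegation happens behind the scenes, invisible to the customer.
                self._providers[provider].store(document_id, payload)

            def retrieve(self, provider: str, document_id: str) -> bytes:
                return self._providers[provider].retrieve(document_id)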

    First Report on Alternative Evaluation Methodology (PROMISE Deliverable 4.1)

    The first report on alternative evaluation methodology summarizes work done within the PROMISE environment, in particular within Work Package 4 (Evaluation Metrics and Methodologies). The report outlines efforts to develop and support alternative, automated evaluation methodologies, with a special focus on generating ground truth from existing data sources such as log files or annotations. Events such as LogCLEF 2011, PatOlympics 2011, and the CHiC 2011 workshop are presented and reviewed with regard to their impact on the three main use case domains.
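    The log-based ground truth generation mentioned above can be sketched briefly. This is a hedged Python illustration assuming a simple tab-separated log format (query, clicked document id); real LogCLEF log formats differ, and the function name is hypothetical.

        # Hedged sketch: derive query -> clicked-documents pairs from a log file
        # as a rough ground-truth source. The line format "query<TAB>doc_id" is
        # an assumption made purely for illustration.
        from collections import defaultdict

        def ground_truth_from_log(path: str) -> dict[str, set[str]]:
            """Map each logged query to the set of documents users clicked."""
            relevant: dict[str, set[str]] = defaultdict(set)
            with open(path, encoding="utf-8") as log:
                for line in log:
                    query, doc_id = line.rstrip("\n").split("\t")
                    relevant[query].add(doc_id)
            return relevant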

    Black box evaluation for operational information retrieval applications

    The black box application evaluation methodology described in this tutorial is applicable to a broad range of operational information retrieval (IR) applications. In contrast to traditional IR evaluation approaches, which are limited to measuring system performance on a test collection, the black box evaluation methodology considers an IR application in its entirety: the underlying system, the corresponding document collection, and its configuration/application layer. A comprehensive set of quality criteria is used to estimate the user’s perception of the application. Scores are assigned as a weighted average of results from tests that each evaluate an individual aspect. The methodology was validated in a small evaluation campaign; an analysis of this campaign shows a correlation between the testers’ perception of the applications and the evaluation scores. Moreover, functional weaknesses of the tested IR applications can be identified and then systematically targeted.
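    The weighted-average scoring can be made concrete with a short sketch. The criteria names and weights below are illustrative assumptions; only the aggregation rule (a weighted mean of per-test scores) reflects the abstract.

        # Hedged sketch of the scoring: each test yields a normalized score for one
        # quality criterion; the overall score is the weighted mean of those scores.
        def overall_score(scores: dict[str, float], weights: dict[str, float]) -> float:
            """Combine per-criterion test scores into one application score."""
            total_weight = sum(weights[c] for c in scores)
            return sum(scores[c] * weights[c] for c in scores) / total_weight

        # Example with made-up criteria, each scored on a 0..1 scale:
        print(overall_score(
            {"search_quality": 0.8, "response_time": 0.9, "usability": 0.6},
            {"search_quality": 0.5, "response_time": 0.2, "usability": 0.3},
        ))  # -> 0.76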

    Evaluation for operational IR applications – generalizability and automation

    Black box information retrieval (IR) application evaluation allows practitioners to measure the quality of their IR application. Instead of evaluating individual components, e.g. the search engine alone, the complete IR application, including the user’s perspective, is evaluated. The evaluation methodology is designed to be applicable to operational IR applications and could be packaged into an evaluation and monitoring tool, making it usable for industry stakeholders. Such a tool should lead practitioners through the evaluation process and maintain the results of both manual and automatic tests. This paper shows that the methodology is generalizable, even though IR applications are highly diverse. The main challenges in automating tests are simulating tasks that require intellectual effort and handling different visualizations of the same concept.
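    How such an evaluation and monitoring tool might record both kinds of results can be sketched as follows. All names here are hypothetical assumptions; the paper does not specify the tool’s design.

        # Hedged sketch: run automatic tests where possible and merge in manually
        # assigned scores, keeping both in one uniform result record.
        from dataclasses import dataclass
        from typing import Callable

        @dataclass
        class TestResult:
            name: str
            score: float       # normalized to 0..1
            automated: bool

        def run_suite(automatic_tests: dict[str, Callable[[], float]],
                      manual_scores: dict[str, float]) -> list[TestResult]:
            """Execute automatic tests and record manual scores alongside them."""
            results = [TestResult(name, test(), True)
                       for name, test in automatic_tests.items()]
            results += [TestResult(name, score, False)
                        for name, score in manual_scores.items()]
            return results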