
    The problem of evaluating automated large-scale evidence aggregators

    In the biomedical context, policy makers face a large amount of potentially discordant evidence from different sources. This prompts the question of how such evidence should be aggregated in the interests of best-informed policy recommendations. The starting point of our discussion is Hunter and Williams’ recent work on an automated aggregation method for medical evidence. Our negative claim is that it is far from clear what the relevant criteria for evaluating an evidence aggregator of this sort are. What is the appropriate balance between explicitly coded algorithms and the implicit reasoning involved, for instance, in the packaging of input evidence? In short: what is the optimal degree of ‘automation’? On the positive side, we propose the ability to perform an adequate robustness analysis as the focal criterion, primarily because it directs efforts to what is most important, namely the structure of the algorithm and the appropriate extent of automation. Moreover, where there are resource constraints on the aggregation process, one must also consider what balance between volume of evidence and accuracy in the treatment of individual evidence best facilitates inference. There is no prerogative to aggregate the total evidence available if doing so would in fact reduce overall accuracy.
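    As a toy illustration of the robustness criterion discussed above, consider a sketch in which discordant effect estimates are aggregated by a confidence-weighted mean and robustness is probed by leave-one-out re-aggregation. The aggregator, weights, and figures here are hypothetical assumptions, not Hunter and Williams’ method.

```python
# Hypothetical sketch of a robustness probe for an evidence aggregator.
# The aggregator (a confidence-weighted mean) and all numbers are
# illustrative assumptions, not Hunter and Williams' algorithm.

def aggregate(evidence):
    """Confidence-weighted mean of (effect estimate, weight) pairs."""
    total_weight = sum(w for _, w in evidence)
    return sum(e * w for e, w in evidence) / total_weight

def robustness(evidence):
    """Largest shift in the aggregate when any single item is left out.

    A small value suggests the conclusion is robust to the packaging
    of individual pieces of input evidence.
    """
    baseline = aggregate(evidence)
    leave_one_out = [aggregate(evidence[:i] + evidence[i + 1:])
                     for i in range(len(evidence))]
    return max(abs(x - baseline) for x in leave_one_out)

# Three discordant sources: (effect estimate, confidence weight)
evidence = [(0.30, 0.9), (0.25, 0.8), (-0.10, 0.2)]
print(f"aggregate = {aggregate(evidence):.3f}, "
      f"max leave-one-out shift = {robustness(evidence):.3f}")
```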

    Modelling of content-aware indicators for effective determination of shot boundaries in compressed MPEG videos

    In this paper, a content-aware approach is proposed to design multiple test conditions for shot cut detection, organized into a multi-phase decision tree for abrupt cut detection and a finite state machine for dissolve detection. In comparison with existing approaches, our algorithm is characterized by two categories of content difference indicators and testing. While the first category indicates the content changes that are directly used for shot cut detection, the second indicates the context in which the content change occurs. As a result, indications of frame differences are tested with context awareness, making the detection of shot cuts adaptive to both content and context changes. Evaluations announced by TRECVID 2007 indicate that our proposed algorithm achieved performance comparable to approaches using machine learning, yet with a simpler feature set and straightforward design strategies. This validates the effectiveness of modelling content-aware indicators for decision making, which also provides a good alternative to conventional approaches in this area.
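    To make the two-category idea concrete, here is a minimal sketch: a content indicator (histogram difference between consecutive frames) is tested against a threshold scaled by a context indicator (recent average activity). The histogram feature, window size, and scaling factor are assumptions for illustration, not the paper’s actual test conditions.

```python
# Illustrative sketch of context-aware shot cut testing (not the paper's
# decision tree / finite state machine). frames: list of 2-D grey arrays.
import numpy as np

def hist_diff(frame_a, frame_b, bins=64):
    """Content indicator: L1 distance between grey-level histograms."""
    ha, _ = np.histogram(frame_a, bins=bins, range=(0, 256), density=True)
    hb, _ = np.histogram(frame_b, bins=bins, range=(0, 256), density=True)
    return np.abs(ha - hb).sum()

def detect_abrupt_cuts(frames, base_thresh=0.05, context_win=10):
    """Flag frames whose content change far exceeds the recent context."""
    diffs = [hist_diff(frames[i], frames[i + 1])
             for i in range(len(frames) - 1)]
    cuts = []
    for i, d in enumerate(diffs):
        # Context indicator: average activity over the preceding window.
        context = np.mean(diffs[max(0, i - context_win):i]) if i else 0.0
        if d > base_thresh + 3.0 * context:  # context-scaled test condition
            cuts.append(i + 1)  # cut occurs before frame i + 1
    return cuts
```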

    Disability policy evaluation : combining logic models and systems thinking

    Perceptual Copyright Protection Using Multiresolution Wavelet-Based Watermarking And Fuzzy Logic

    In this paper, an efficient DWT-based watermarking technique is proposed to embed signatures in images to attest owner identification and discourage unauthorized copying. It uses a fuzzy inference filter to choose the larger-entropy coefficients in which to embed watermarks. Unlike most previous watermarking frameworks, which embedded watermarks in the larger coefficients of inner coarser subbands, the proposed technique uses a context model and a fuzzy inference filter to embed watermarks in the larger-entropy coefficients of coarser DWT subbands. The proposed approach embeds watermarks with an adaptive casting degree for transparency and for robustness to general image-processing attacks such as smoothing, sharpening, and JPEG compression, and it does not need the original host image to extract watermarks. Our schemes have been shown to provide very good results in both image transparency and robustness.
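    The embedding idea can be sketched as follows, assuming the PyWavelets library (pywt). The Haar wavelet, 8x8 blocks, a plain entropy ranking standing in for the paper’s fuzzy inference filter, and the casting strength alpha are all illustrative assumptions; the extraction side is omitted.

```python
# Illustrative sketch of entropy-guided DWT watermark embedding. A plain
# entropy ranking stands in for the paper's fuzzy inference filter; the
# wavelet, block size, and casting strength are assumptions.
import numpy as np
import pywt

def block_entropy(block, bins=16):
    """Shannon entropy of a coefficient block's magnitude histogram."""
    hist, _ = np.histogram(np.abs(block), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def embed(image, watermark_bits, alpha=2.0, block=8):
    """Cast one watermark bit per highest-entropy block of a coarse subband."""
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), 'haar')
    h, w = cH.shape
    # Rank blocks of the horizontal-detail subband by entropy.
    blocks = sorted(
        ((block_entropy(cH[r:r + block, c:c + block]), r, c)
         for r in range(0, h - block + 1, block)
         for c in range(0, w - block + 1, block)),
        reverse=True)
    # Additive casting: +/- alpha on the leading coefficient of each block.
    for bit, (_, r, c) in zip(watermark_bits, blocks):
        cH[r, c] += alpha if bit else -alpha
    return pywt.idwt2((cA, (cH, cV, cD)), 'haar')
```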

    A unified framework for building ontological theories with application and testing in the field of clinical trials

    Get PDF
    The objective of this research programme is to contribute to the establishment of the emerging science of Formal Ontology in Information Systems via a collaborative project involving researchers from a range of disciplines, including philosophy, logic, computer science, linguistics, and the medical sciences. The researchers will work together on the construction of a unified formal ontology, which means: a general framework for the construction of ontological theories in specific domains. The framework will be constructed using the axiomatic-deductive method of modern formal ontology. It will be tested via a series of applications relating to ongoing work in Leipzig on medical taxonomies and data dictionaries in the context of clinical trials. This will lead to the production of a domain-specific ontology designed to serve as a basis for applications in the medical field.