Medical Image Retrieval: Past and Present
With the widespread dissemination of picture archiving and communication systems (PACSs) in hospitals, the amount of imaging data is rapidly increasing. Effective image retrieval systems are required to manage these complex and large image databases. The authors reviewed the past development and the present state of medical image retrieval systems, including text-based and content-based systems. To provide a more effective image retrieval service, intelligent content-based retrieval systems combined with semantic technologies are required.
On Archiving and Retrieval of Sequential Images From Tomographic Databases in PACS
In the picture archiving and communication systems (PACS) used in modern hospitals, the current practice is to retrieve images based on keyword search, which returns a complete set of images from the same scan. Both diagnostically useful and negligible images in the image databases are retrieved and browsed by the physicians. In addition to the text-based search query method, queries based on image contents and image examples have been developed and integrated into existing PACS systems. Most of the content-based image retrieval (CBIR) systems for medical image databases are designed to retrieve images individually. However, in a database of tomographic images, it is often diagnostically more useful to simultaneously retrieve multiple images that are closely related for various reasons, such as physiological contiguousness. For example, high-resolution computed tomography (HRCT) images are taken as a series of cross-sectional slices of the human body. Typically, several slices are relevant for making a diagnosis, requiring a PACS system that can retrieve a contiguous sequence of slices. In this paper, we present an extension to our physician-in-the-loop CBIR system, which allows our algorithms to automatically determine the number of adjoining images to retain after certain key images are identified by the physician. Only the key images, so identified by the physician, and the other adjoining images that cohere with the key images are kept on-line for fast retrieval; the rest of the images can be discarded if so desired. This results in a large reduction in the amount of storage needed for fast retrieval.
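The adjoining-slice selection described above can be sketched as a simple outward expansion from a physician-selected key slice. The coherence measure (normalized cross-correlation between neighbouring slices) and the fixed threshold below are our assumptions for illustration, not the paper's actual algorithm:

```python
import numpy as np

def slice_coherence(a, b):
    """Normalized cross-correlation between two slices (assumed coherence measure)."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float(np.mean(a * b))

def adjoining_slices(volume, key_idx, threshold=0.8):
    """Expand outward from a physician-selected key slice, keeping
    neighbours while consecutive slices remain coherent."""
    lo = hi = key_idx
    while lo > 0 and slice_coherence(volume[lo - 1], volume[lo]) >= threshold:
        lo -= 1
    while hi < len(volume) - 1 and slice_coherence(volume[hi], volume[hi + 1]) >= threshold:
        hi += 1
    return list(range(lo, hi + 1))
```

In this sketch, only the slices in the returned index range would be kept on-line; slices outside it could be moved to slower archival storage.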
Digital archiving of manuscripts and other heritage items for conservation and information retrieval
Expression of cultural heritage, seen from the informatics angle, falls into text, image, video and sound categories. ICT can be used to conserve all these heritage items: text information consisting of palm leaf manuscripts, stone tablets, handwritten paper documents, old printed records, books, microfilms, fiche, etc.; images including paintings, drawings, photographs and the like; sound items, which include musical concerts, poetry recitations, chanting of mantras, talks of important persons, etc.; and video items such as archival films of historical importance. There are several limitations to retrieving required information from such a large mass of materials in different formats and transmitting it across space and time. Digital technology allows hitherto unavailable facilities for durable storage and speedy, efficient transmission and retrieval of information contained in all the above formats. Hypertext and hypermedia features of digital media enable integrating text with graphics, sound, video and animation. This paper discusses the international and national efforts for digitizing heritage items, the digital archiving solutions available, the possibilities of the media, and the need to follow standards prescribed by organizations like UNESCO to enable easy exchange and pooling of information and documents generated in digital archiving systems at national and international levels. The need to develop language technology for local scripts for organizing and preserving our cultural heritage is also stressed.
CHORUS Deliverable 4.3: Report from CHORUS workshops on national initiatives and metadata
Minutes of the following Workshops:
• National Initiatives on Multimedia Content Description and Retrieval, Geneva, October 10th, 2007.
• Metadata in Audio-Visual/Multimedia production and archiving, Munich, IRT, 21st–22nd November 2007
Workshop in Geneva 10/10/2007
This highly successful workshop was organised in cooperation with the European Commission. The event brought together
the technical, administrative and financial representatives of the various national initiatives, which have been established
recently in some European countries to support research and technical development in the area of audio-visual content
processing, indexing and searching for the next generation Internet using semantic technologies, and which may lead to an
internet-based knowledge infrastructure. The objective of this workshop was to provide a platform for mutual information
and exchange between these initiatives, the European Commission and the participants. Top speakers were present from
each of the national initiatives. There was time for discussions with the audience and amongst the European National
Initiatives. The challenges, commonalities, difficulties, targeted/expected impact, success criteria, etc. were tackled. This
workshop addressed how these national initiatives could work together and benefit from each other.
Workshop in Munich 11/21-22/2007
Numerous EU and national research projects are working on the automatic or semi-automatic generation of descriptive and
functional metadata derived from analysing audio-visual content. The owners of AV archives and production facilities are
eagerly awaiting such methods, which would help them to better exploit their assets. Hand in hand with the digitization of
analogue archives and the archiving of digital AV material, metadata should be generated at as high a semantic level as
possible, preferably fully automatically. All users of metadata rely on a certain metadata model. All AV/multimedia search
engines, developed or under current development, would have to respect some compatibility or compliance with the
metadata models in use. The purpose of this workshop is to draw attention to the specific problem of metadata models in the
context of (semi)-automatic multimedia search
Multimedia information technology and the annotation of video
The state of the art in multimedia information technology has not progressed to the point where a single solution is available to meet all reasonable needs of documentalists and users of video archives. In general, we do not have an optimistic view of the usability of new technology in this domain, but digitization and digital power can be expected to cause a small revolution in the area of video archiving. The volume of data leads to two views of the future: on the pessimistic side, overload of data will cause lack of annotation capacity, and on the optimistic side, there will be enough data from which to learn selected concepts that can be deployed to support automatic annotation. At the threshold of this interesting era, we make an attempt to describe the state of the art in technology. We sample the progress in text, sound, and image processing, as well as in machine learning
Analysis and Synthesis of Metadata Goals for Scientific Data
The proliferation of discipline-specific metadata schemes contributes to artificial barriers that can impede interdisciplinary and transdisciplinary research. The authors considered this problem by examining the domains, objectives, and architectures of nine metadata schemes used to document scientific data in the physical, life, and social sciences. They used a mixed-methods content analysis and Greenberg's (2005) metadata objectives, principles, domains, and architectural layout (MODAL) framework, and derived 22 metadata-related goals from textual content describing each metadata scheme. Relationships are identified between the domains (e.g., scientific discipline and type of data) and the categories of scheme objectives. For each strong correlation (>0.6), a Fisher's exact test for nonparametric data was used to determine significance (p < .05).
Significant relationships were found between the domains and objectives of the schemes. Schemes describing observational data are more likely to have "scheme harmonization" (compatibility and interoperability with related schemes) as an objective; schemes with the objective "abstraction" (a conceptual model exists separate from the technical implementation) also have the objective "sufficiency" (the scheme defines a minimal amount of information to meet the needs of the community); and schemes with the objective "data publication" do not have the objective "element refinement." The analysis indicates that many metadata-driven goals expressed by communities are independent of scientific discipline or the type of data, although they are constrained by historical community practices and workflows as well as the technological environment at the time of scheme creation. The analysis reveals 11 fundamental metadata goals for metadata documenting scientific data in support of sharing research data across disciplines and domains. The authors report these results and highlight the need for more metadata-related research, particularly in the context of recent funding agency policy changes
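A Fisher's exact test on a 2x2 contingency table, as used above to check whether a scheme domain and an objective co-occur more often than chance, can be computed from the hypergeometric distribution with only the standard library. The counts in the usage note are hypothetical, not the paper's data:

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test p-value for the 2x2 table
    [[a, b], [c, d]] (e.g. schemes with/without a domain property,
    cross-tabulated against with/without an objective)."""
    n = a + b + c + d
    row1, col1 = a + b, a + c

    def p_table(x):
        # Hypergeometric probability of x in the top-left cell, margins fixed.
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = p_table(a)
    lo = max(0, row1 + col1 - n)
    hi = min(row1, col1)
    # Two-sided p-value: sum over all tables at least as extreme as the observed one.
    return sum(p_table(x) for x in range(lo, hi + 1) if p_table(x) <= p_obs + 1e-12)
```

For instance, if 10 of 12 observational-data schemes listed "scheme harmonization" as an objective but only 3 of 12 experimental-data schemes did (hypothetical counts), `fisher_exact_2x2(10, 2, 3, 9)` gives a p-value of about 0.012, below the .05 threshold the authors used.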
Gabor Barcodes for Medical Image Retrieval
In recent years, advances in medical imaging have led to the emergence of
massive databases, containing images from a diverse range of modalities. This
has significantly heightened the need for automated annotation of the images on
one side, and fast and memory-efficient content-based image retrieval systems
on the other side. Binary descriptors have recently gained more attention as a
potential vehicle to achieve these goals. One recently introduced binary
descriptor for tagging medical images is the Radon barcode (RBC), which is
derived from the Radon transform via local thresholding. The Gabor transform is also a
powerful transform to extract texture-based information. Gabor features have
exhibited robustness against rotation, scale, and also photometric
disturbances, such as illumination changes and image noise in many
applications. This paper introduces Gabor Barcodes (GBCs), as a novel framework
for the image annotation. To find the most discriminative GBC for a given query
image, the effects of employing Gabor filters with different parameters, i.e.,
different sets of scales and orientations, are investigated, resulting in
different barcode lengths and retrieval performances. The proposed method has
been evaluated on the IRMA dataset with 193 classes, comprising 12,677 X-ray
images for indexing and 1,733 X-ray images for testing. A total error score
as low as ( accuracy for the first hit) was achieved.
Comment: To appear in proceedings of The 2016 IEEE International Conference on
Image Processing (ICIP 2016), Sep 25-28, 2016, Phoenix, Arizona, US
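The barcode construction described above, applying a bank of Gabor filters at several scales and orientations and binarizing the pooled responses, can be sketched as follows. The kernel parameterization, the block pooling, and the per-filter median threshold are our assumptions, made by analogy with the local thresholding of Radon barcodes, and are not necessarily the paper's exact procedure:

```python
import numpy as np

def gabor_kernel(ksize, sigma, theta, lam):
    """Real-valued Gabor kernel: Gaussian envelope times a cosine carrier."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def gabor_barcode(img, scales=(4.0, 8.0), n_orient=4, grid=4):
    """Build a binary barcode from block-averaged Gabor filter responses.
    Each (scale, orientation) pair contributes grid*grid bits, thresholded
    at that filter's median block response (assumed binarization rule)."""
    bits = []
    for lam in scales:
        for k in range(n_orient):
            kern = gabor_kernel(15, lam / 2, k * np.pi / n_orient, lam)
            # FFT-based circular convolution of the image with the kernel.
            resp = np.abs(np.fft.ifft2(np.fft.fft2(img) *
                                       np.fft.fft2(kern, img.shape)))
            h, w = resp.shape
            blocks = [resp[i*h//grid:(i+1)*h//grid,
                           j*w//grid:(j+1)*w//grid].mean()
                      for i in range(grid) for j in range(grid)]
            med = np.median(blocks)
            bits.extend(1 if b > med else 0 for b in blocks)
    return np.array(bits, dtype=np.uint8)
```

With 2 scales, 4 orientations, and a 4x4 pooling grid this yields a 128-bit barcode; retrieval then reduces to ranking database images by Hamming distance between barcodes, which is what makes such binary descriptors fast and memory-efficient.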
Information scraps: how and why information eludes our personal information management tools
In this paper we describe information scraps -- a class of personal information whose content is scribbled on Post-it notes, scrawled on corners of random sheets of paper, buried inside the bodies of e-mail messages sent to ourselves, or typed haphazardly into text files. Information scraps hold our great ideas, sketches, notes, reminders, driving directions, and even our poetry. We define information scraps to be the body of personal information that is held outside of its natural or expected place in our PIM tools. We have much still to learn about these loose forms of information capture. Why are they so often held outside of our traditional PIM locations and instead on Post-its or in text files? Why must we sometimes go around our traditional PIM applications to hold on to our scraps, such as by e-mailing ourselves? What is information scraps' role in the larger space of personal information management, and what do they uniquely offer that we find so appealing? If these unorganized bits truly indicate the failure of our PIM tools, how might we begin to build better tools? We have pursued these questions by undertaking a study of 27 knowledge workers. In our findings we describe information scraps from several angles: their content, their location, and the factors that lead to their use, which we identify as ease of capture, flexibility of content and organization, and availability at the time of need. We also consider the personal emotive responses around scrap management. We present a set of design considerations that we have derived from the analysis of our study results. We present our work on an application platform, jourknow, to test some of these design and usability findings
Broadband for culture, a culture for broadband?
The augmentation of cultural participation in Flanders is one of the major cornerstones of the current cultural policy. Digital technologies offer a wide range of opportunities to achieve this goal, as the internet is often seen as a way to increase the number of visitors for arts centres. However, the availability of digital information technologies, and the willingness to adopt these new ways of processing cultural material, is a prerequisite for this (r)evolution. This article is based on data collected in three surveys, one for each of the cultural actors: cultural organisations such as museums and arts centres, individual artists, and art lovers in Flanders. Although most artists and cultural organisations are sufficiently equipped with up-to-date technological infrastructure, most websites lack true interactivity with a strong one-to-one relationship between audience, artists and cultural institutions. We therefore conclude that, although there are plenty of broadband connections and other digital tools available to the Flemish art scene, artists and cultural organisations lack a mind-set (or culture) to truly embrace and benefit from the potential of current digital technologies
Exploiting multimedia in creating and analysing multimedia Web archives
The data contained on the web and the social web are inherently multimedia and consist of a mixture of textual, visual and audio modalities. Community memories embodied on the web and social web contain a rich mixture of data from these modalities. In many ways, the web is the greatest resource ever created by human-kind. However, due to the dynamic and distributed nature of the web, its content changes, appears and disappears on a daily basis. Web archiving provides a way of capturing snapshots of (parts of) the web for preservation and future analysis. This paper provides an overview of techniques we have developed within the context of the EU funded ARCOMEM (ARchiving COmmunity MEMories) project to allow multimedia web content to be leveraged during the archival process and for post-archival analysis. Through a set of use cases, we explore several practical applications of multimedia analytics within the realm of web archiving, web archive analysis and multimedia data on the web in general