aDORe djatoka: An Open-Source Jpeg 2000 Image Server and Dissemination Service Framework
4th International Conference on Open Repositories. Session: Conference Presentations. Date: 2009-05-19, 03:00 PM – 04:30 PM.
The JPEG 2000 image format has attracted considerable attention due to its rich feature set, defined in a multi-part open ISO standard, and its potential use as a holy-grail preservation format providing both lossless compression and rich service-format features. Until recently there was a lack of an implementation-agnostic (e.g., Kakadu, Aware, etc.) API for JPEG 2000 compression and extraction, and of an open-source service framework upon which rich Web 2.0-style applications can be developed. Recently we engaged in the development of aDORe djatoka, an open-source JPEG 2000 image server and dissemination framework, to help address some of these issues. The djatoka image server is geared towards Web 2.0-style reuse through URI-addressability of all image disseminations, including regions, rotations, and format transformations. Djatoka also provides a JPEG 2000 compression/extraction API that serves as an abstraction layer over the underlying JPEG 2000 library (e.g., Kakadu, Aware, etc.). The initial release has attracted considerable interest and is already being used in production environments, such as at the Biodiversity Heritage Library, which uses djatoka to serve more than eleven million images. This presentation introduces the aDORe djatoka image server and describes various interoperability approaches with existing repository systems. Djatoka grew out of a concrete need for a solution to disseminate high-resolution images stored in an aDORe repository system. Djatoka is able to disseminate images that reside either in a repository environment or that are Web-accessible at arbitrary URIs. Since dynamic service requests pertaining to an identified resource (the entire JPEG 2000 image) are being made, the OpenURL Framework was selected to provide an extensible dissemination service framework. The OpenURL service layer simplifies development and provides exciting interoperability opportunities. The presentation will showcase the flexibility of this interface by introducing a mobile image collection viewer developed for the iPhone / iTouch platform.
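As a concrete illustration of this URI-addressability, the sketch below assembles an OpenURL-style dissemination request for an image region. The resolver URL is hypothetical, and the service and parameter names (svc_id, svc.region, svc.scale, etc.) follow conventions seen in public djatoka deployments; treat them as illustrative assumptions rather than a definitive API reference.

```python
from urllib.parse import urlencode

# Illustrative sketch: building an OpenURL-style djatoka dissemination request.
# Host and parameter names are assumptions based on public djatoka examples and
# may differ between deployments.
def region_request(resolver, image_uri, region, scale=1.0, rotate=0, fmt="image/jpeg"):
    """Return a URI that asks the image server for a region of a JPEG 2000 image."""
    params = {
        "url_ver": "Z39.88-2004",                   # OpenURL framework version
        "rft_id": image_uri,                        # identifier of the JPEG 2000 resource
        "svc_id": "info:lanl-repo/svc/getRegion",   # dissemination service (assumed)
        "svc_val_fmt": "info:ofi/fmt:kev:mtx:jpeg2000",
        "svc.format": fmt,                          # output format of the extracted region
        "svc.region": region,                       # "y,x,height,width" in pixels
        "svc.scale": scale,
        "svc.rotate": rotate,
    }
    return f"{resolver}?{urlencode(params)}"

print(region_request(
    "http://example.org/adore-djatoka/resolver",    # hypothetical deployment
    "http://example.org/images/page-0001.jp2",
    region="0,0,512,512",
))
```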
The INCF Digital Atlasing Program: Report on Digital Atlasing Standards in the Rodent Brain
The goal of the INCF Digital Atlasing Program is to provide the vision and direction necessary to make the rapidly growing collection of multidimensional data of the rodent brain (images, gene expression, etc.) widely accessible to, and usable by, the international research community. The Digital Brain Atlasing Standards Task Force was formed in May 2008 to investigate the state of rodent brain digital atlasing and to formulate standards, guidelines, and policy recommendations.

Our first objective has been the preparation of a detailed document that includes the vision and a specific description of an infrastructure, systems, and methods capable of serving the scientific goals of the community, as well as the practical issues in achieving those goals. This report builds on the 1st INCF Workshop on Mouse and Rat Brain Digital Atlasing Systems (Boline et al., 2007, _Nature Precedings_, doi:10.1038/npre.2007.1046.1) and includes a more detailed analysis of both the current and the desired state of digital atlasing, along with specific recommendations for achieving these goals.
Grids and the Virtual Observatory
We consider several projects from astronomy that benefit from the Grid paradigm and associated technology, many of which involve either massive datasets or the federation of multiple datasets. We cover image computation (mosaicking, multi-wavelength images, and synoptic surveys); database computation (representation through XML, data mining, and visualization); and semantic interoperability (publishing, ontologies, directories, and service descriptions).
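As a toy illustration of the image-computation theme, the following numpy sketch co-adds two tiles that are already registered to a common pixel grid, averaging where they overlap. It shows only the co-addition step of mosaicking; real survey pipelines also reproject each exposure onto a common sky projection and match background levels.

```python
import numpy as np

# Toy mosaicking sketch: co-add tiles already registered to a common pixel grid,
# averaging wherever their footprints overlap. Not a full mosaicking pipeline.
def coadd(tiles, offsets, mosaic_shape):
    sum_img = np.zeros(mosaic_shape)
    weight = np.zeros(mosaic_shape)
    for tile, (y, x) in zip(tiles, offsets):
        h, w = tile.shape
        sum_img[y:y + h, x:x + w] += tile   # accumulate pixel values
        weight[y:y + h, x:x + w] += 1.0     # count contributing tiles per pixel
    # Average where covered, NaN where no tile contributes.
    return np.where(weight > 0, sum_img / np.maximum(weight, 1.0), np.nan)

tile_a = np.ones((100, 100))                # synthetic, overlapping tiles
tile_b = 2 * np.ones((100, 100))
mosaic = coadd([tile_a, tile_b], offsets=[(0, 0), (0, 60)], mosaic_shape=(100, 160))
print(mosaic[50, 30], mosaic[50, 80], mosaic[50, 130])   # 1.0, 1.5 (overlap), 2.0
```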
Pathways: Augmenting interoperability across scholarly repositories
In the emerging eScience environment, repositories of papers, datasets, software, etc., should be the foundation of a global and natively digital scholarly communications system. The current infrastructure falls far short of this goal. Cross-repository interoperability must be augmented to support the many workflows and value-chains involved in scholarly communication. This will not be achieved through the promotion of a single repository architecture or content representation, but instead requires an interoperability framework to connect the many heterogeneous systems that will exist. We present a simple data model and service architecture that augments repository interoperability to enable scholarly value-chains to be implemented. We describe an experiment that demonstrates how the proposed infrastructure can be deployed to implement the workflow involved in the creation of an overlay journal over several different repository systems (Fedora, aDORe, DSpace and arXiv).
Comment: 18 pages. Accepted for the International Journal on Digital Libraries special issue on Digital Libraries and eScience.
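A hypothetical sketch of what a repository-neutral object surrogate might look like is given below. The class and field names are illustrative assumptions, not the data model defined in the paper; they are meant only to convey how an overlay journal could aggregate objects drawn from heterogeneous repositories.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of a repository-neutral object surrogate, illustrating the
# general idea of a shared data model over heterogeneous repositories
# (Fedora, aDORe, DSpace, arXiv). Field names are assumptions, not the paper's model.
@dataclass
class Datastream:
    media_type: str          # e.g. "application/pdf"
    location: str            # dereferenceable URI of the bitstream

@dataclass
class ObjectSurrogate:
    identifier: str                                      # globally unique object identifier
    source_repository: str                               # repository holding the authoritative copy
    datastreams: List[Datastream] = field(default_factory=list)
    lineage: List[str] = field(default_factory=list)     # identifiers of predecessor objects

# An overlay journal could aggregate such surrogates without depending on each
# repository's native object model.
article = ObjectSurrogate(
    identifier="info:example/overlay-journal/0001",      # hypothetical identifier
    source_repository="arXiv",
    datastreams=[Datastream("application/pdf", "http://example.org/pdf/0001")],
)
print(article.identifier, article.source_repository)
```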
Multimedia delivery in the future internet
The term "Networked Media" implies that all kinds of media, including text, images, 3D graphics, audio and video, are produced, distributed, shared, managed and consumed on-line through various networks, like the Internet, fiber, WiFi, WiMAX, GPRS, 3G and so on, in a convergent manner [1]. This white paper is the contribution of the Media Delivery Platform (MDP) cluster and aims to cover the challenges of Networked Media in the transition to the Future Internet.
The Internet has evolved and changed the way we work and live. End users of the Internet have been confronted with a bewildering range of media, services and applications, and of technological innovations concerning media formats, wireless networks, terminal types and capabilities, and there is little evidence that the pace of this innovation is slowing. Today, over one billion users access the Internet on a regular basis, more than 100 million users have downloaded at least one (multi)media file, and over 47 million of them do so regularly, searching in more than 160 exabytes of content. In the near future these numbers are expected to rise exponentially: Internet content is expected to grow by at least a factor of 6, to more than 990 exabytes, before 2012, fuelled mainly by the users themselves. Moreover, it is envisaged that in the near- to mid-term future, the Internet will provide the means to share and distribute (new) multimedia content and services with superior quality and striking flexibility, in a trusted and personalised way, improving citizens' quality of life, working conditions, edutainment and safety.
In this evolving environment, new transport protocols, new multimedia encoding schemes, cross-layer in-network adaptation, machine-to-machine communication (including RFIDs), rich 3D content, as well as community networks and the use of peer-to-peer (P2P) overlays, are expected to generate new models of interaction and cooperation and to support enhanced perceived quality of experience (PQoE) and innovative applications "on the move", such as virtual collaboration environments, personalised services/media, virtual sport groups, on-line gaming and edutainment. In this context, the interaction with content, combined with interactive/multimedia search capabilities across distributed repositories, opportunistic P2P networks, and dynamic adaptation to the characteristics of diverse mobile terminals, is expected to contribute towards such a vision.
Based on work that has taken place in a number of EC co-funded projects in Framework Programme 6 (FP6) and Framework Programme 7 (FP7), a group of experts and technology visionaries have voluntarily contributed to this white paper, aiming to describe the status, the state of the art, the challenges and the way ahead in the area of content-aware media delivery platforms.
Towards building information modelling for existing structures
The transformation of cities from the industrial age (unsustainable) to the knowledge age (sustainable) is essentially a "whole life cycle" process consisting of planning, development, operation, reuse and renewal. During this transformation, a multi-disciplinary knowledge base, created from studies and research about aspects of the built environment (historical, architectural, archaeological, environmental, social, economic, etc.), is critical. Although there is a growing number of 3D VR modelling applications, some built environment applications, such as disaster management, environmental simulation, and computer-aided architectural design and planning, require more sophisticated models that go beyond 3D graphical visualisation: models that are multifunctional, interoperable, intelligent and multi-representational.
Advanced digital mapping technologies such as 3D laser scanning are enablers for effective e-planning, consultation and communication of users' views during the planning, design, construction and lifecycle process of the built environment. For example, the 3D laser scanner enables digital documentation of buildings, sites and physical objects for reconstruction and restoration. It also facilitates the creation of educational resources within the built environment, as well as the reconstruction of the built environment. These technologies can drive productivity gains by promoting a free flow of information between departments, divisions, offices and sites, and between organisations, their contractors and partners, when the data captured via those technologies are processed and modelled into BIM (Building Information Modelling). The use of these technologies is a key enabler of new approaches to the "whole life cycle" process within the built and human environment for the 21st century. The paper describes research towards Building Information Modelling for existing structures via point cloud data captured by 3D laser scanner technology. A case study building is used to demonstrate how to produce 3D CAD models and BIM models of existing structures based on the designated techniques.
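As a minimal illustration of the point-cloud-to-model step, the sketch below fits a plane to scan points with a small RANSAC loop, the kind of primitive extraction (wall, floor or ceiling detection) that commonly precedes deriving BIM geometry from laser-scan data. It is a toy under stated assumptions, not the workflow of the paper's case study.

```python
import numpy as np

# Minimal RANSAC plane fit over a point cloud: an illustrative primitive-extraction
# step (e.g. detecting a wall) on the way from laser-scan data to CAD/BIM geometry.
def ransac_plane(points, n_iters=500, threshold=0.02, seed=None):
    rng = np.random.default_rng(seed)
    best_inliers, best_model = None, None
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                       # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]
        dist = np.abs(points @ normal + d)    # point-to-plane distances
        inliers = dist < threshold
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model, best_inliers

# Synthetic "wall": points near the plane x = 1 with a little measurement noise.
rng = np.random.default_rng(0)
wall = np.column_stack([np.full(1000, 1.0), rng.uniform(0, 5, 1000), rng.uniform(0, 3, 1000)])
wall[:, 0] += rng.normal(0, 0.005, 1000)
model, inliers = ransac_plane(wall, seed=1)
print("plane normal:", np.round(model[0], 3), "inliers:", int(inliers.sum()))
```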
Interoperability between Multimedia Collections for Content and Metadata-Based Searching
Artiste is a European project developing a cross-collection search system for art galleries and museums. It combines image content retrieval with text-based retrieval and uses RDF mappings to integrate diverse databases. The test sites, the Louvre, the Victoria and Albert Museum, the Uffizi Gallery and the National Gallery London, provide their own database schemas for existing metadata, avoiding the need for migration to a common schema. The system will accept a query based on one museum's fields and convert it, through an RDF mapping, into a form suitable for querying the other collections. The nature of some of the image processing algorithms means that the system can be slow for some computations, so the system is session-based to allow the user to return to the results later. The system has been built within a J2EE/EJB framework, using the JBoss Enterprise Application Server.
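The query translation described above can be illustrated with a toy mapping in which each collection's local field names are aligned to a shared reference vocabulary. The field names and the dictionary-based vocabulary below are hypothetical; Artiste's actual RDF mappings are considerably richer than this sketch.

```python
# Toy sketch of schema mapping for cross-collection search: each collection maps
# its local field names onto a shared reference vocabulary, and a query expressed
# in one museum's fields is rewritten into another's. Field names and the shared
# vocabulary are hypothetical, illustrating the idea rather than Artiste's RDF model.
SHARED_VOCAB = {
    "louvre": {"auteur": "creator", "titre": "title", "date_creation": "date"},
    "vam":    {"maker": "creator", "object_title": "title", "year": "date"},
}

def translate_query(query, source, target):
    """Rewrite {local_field: value} from one collection's schema into another's."""
    to_shared = SHARED_VOCAB[source]
    from_shared = {shared: local for local, shared in SHARED_VOCAB[target].items()}
    translated = {}
    for local_field, value in query.items():
        shared = to_shared[local_field]           # local field -> shared concept
        translated[from_shared[shared]] = value   # shared concept -> target field
    return translated

# A query phrased in (hypothetical) Louvre field names, rewritten for the V&A schema.
print(translate_query({"auteur": "Delacroix", "date_creation": "1830"}, "louvre", "vam"))
```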