18,660 research outputs found

    aDORe djatoka: An Open-Source JPEG 2000 Image Server and Dissemination Service Framework

    4th International Conference on Open Repositories. Session: Conference Presentations. Date: 2009-05-19, 03:00 PM – 04:30 PM.

    The JPEG 2000 image format has attracted considerable attention due to the rich feature set defined in its multi-part open ISO standard and its potential as a preservation format offering both lossless compression and rich service features. Until recently there was a lack of an implementation-agnostic API (i.e., one independent of libraries such as Kakadu or Aware) for JPEG 2000 compression and extraction, and of an open-source service framework upon which rich Web 2.0-style applications could be developed. We recently engaged in the development of aDORe djatoka, an open-source JPEG 2000 image server and dissemination framework, to help address these issues. The djatoka image server is geared towards Web 2.0-style reuse through URI-addressability of all image disseminations, including regions, rotations, and format transformations. Djatoka also provides a JPEG 2000 compression/extraction API that serves as an abstraction layer over the underlying JPEG 2000 library (e.g., Kakadu, Aware). The initial release has attracted considerable interest and is already used in production environments such as the Biodiversity Heritage Library, which uses djatoka to serve more than eleven million images. This presentation introduces the aDORe djatoka image server and describes various approaches to interoperability with existing repository systems. Djatoka arose from a concrete need to disseminate high-resolution images stored in an aDORe repository system, but it can disseminate images that reside either in a repository environment or at arbitrary Web-accessible URIs. Since dynamic service requests pertain to an identified resource (the entire JPEG 2000 image), the OpenURL Framework was selected to provide an extensible dissemination service framework. The OpenURL service layer simplifies development and provides exciting interoperability opportunities. The presentation will showcase the flexibility of this interface by introducing a mobile image-collection viewer developed for the iPhone / iPod Touch platform.
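The URI-addressability described above means every dissemination (a region, a rotated view, a format conversion) is reachable at its own URL. The sketch below builds such a request in the OpenURL key/encoded-value style; the base URL and image identifier are hypothetical, and the parameter names follow djatoka's commonly documented getRegion service but should be treated as illustrative rather than authoritative.

```python
from urllib.parse import urlencode

def djatoka_region_url(base, image_uri, region, scale, rotate=0):
    """Build an OpenURL-style request for a region of a JPEG 2000 image.

    Parameter names mirror djatoka's getRegion service as commonly
    documented; treat them as illustrative, not authoritative.
    """
    params = {
        "url_ver": "Z39.88-2004",                 # OpenURL 1.0 version token
        "rft_id": image_uri,                      # identifier of the JP2 image
        "svc_id": "info:lanl-repo/svc/getRegion", # requested service
        "svc.format": "image/jpeg",               # dissemination format
        "svc.region": region,                     # "y,x,height,width"
        "svc.scale": scale,
        "svc.rotate": rotate,
    }
    return base + "?" + urlencode(params)

# Hypothetical resolver and image URI, for illustration only:
url = djatoka_region_url(
    "http://example.org/adore-djatoka/resolver",
    "http://example.org/images/map.jp2",
    region="0,0,512,512",
    scale="0.5",
)
```

Because the whole dissemination is captured in the query string, such URLs can be bookmarked, cached, or embedded directly in Web 2.0-style viewers.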

    The INCF Digital Atlasing Program: Report on Digital Atlasing Standards in the Rodent Brain

    The goal of the INCF Digital Atlasing Program is to provide the vision and direction necessary to make the rapidly growing collection of multidimensional data of the rodent brain (images, gene expression, etc.) widely accessible and usable to the international research community. The Digital Brain Atlasing Standards Task Force was formed in May 2008 to investigate the state of rodent brain digital atlasing and to formulate standards, guidelines, and policy recommendations.

    Our first objective has been the preparation of a detailed document that sets out the vision and a specific description of the infrastructure, systems, and methods capable of serving the scientific goals of the community, as well as the practical issues involved in achieving those goals. This report builds on the 1st INCF Workshop on Mouse and Rat Brain Digital Atlasing Systems (Boline et al., 2007, _Nature Precedings_, doi:10.1038/npre.2007.1046.1) and includes a more detailed analysis of both the current and desired states of digital atlasing, along with specific recommendations for achieving these goals.

    Grids and the Virtual Observatory

    We consider several projects from astronomy that benefit from the Grid paradigm and associated technology, many of which involve either massive datasets or the federation of multiple datasets. We cover image computation (mosaicking, multi-wavelength images, and synoptic surveys); database computation (representation through XML, data mining, and visualization); and semantic interoperability (publishing, ontologies, directories, and service descriptions).

    Pathways: Augmenting interoperability across scholarly repositories

    In the emerging eScience environment, repositories of papers, datasets, software, etc., should form the foundation of a global and natively-digital scholarly communication system. The current infrastructure falls far short of this goal. Cross-repository interoperability must be augmented to support the many workflows and value-chains involved in scholarly communication. This will not be achieved by promoting a single repository architecture or content representation; instead, it requires an interoperability framework connecting the many heterogeneous systems that will exist. We present a simple data model and service architecture that augments repository interoperability to enable scholarly value-chains to be implemented. We describe an experiment that demonstrates how the proposed infrastructure can be deployed to implement the workflow involved in creating an overlay journal over several different repository systems (Fedora, aDORe, DSpace, and arXiv). Comment: 18 pages. Accepted for the International Journal on Digital Libraries special issue on Digital Libraries and eScience.
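The abstract argues for a repository-neutral data model rather than a single architecture. The sketch below illustrates the general shape such a model might take (a compound object with typed constituents, usable across Fedora, aDORe, DSpace, or arXiv); all field and class names here are invented for illustration and are not taken from the Pathways paper itself.

```python
from dataclasses import dataclass, field

@dataclass
class Datastream:
    """One constituent of a compound scholarly object (PDF, dataset, ...)."""
    mime_type: str
    location: str  # dereferenceable URI in the hosting repository

@dataclass
class DigitalObject:
    """Repository-neutral surrogate for a compound scholarly object.

    A minimal sketch of the kind of data model the paper argues for;
    names are illustrative, not the paper's actual model.
    """
    identifier: str                               # globally unique identifier
    repository: str                               # e.g. "arXiv", "DSpace"
    datastreams: list = field(default_factory=list)
    lineage: list = field(default_factory=list)   # identifiers of prior versions

# An overlay journal can then reference objects across repositories
# without caring which system hosts each one (example values hypothetical):
obj = DigitalObject(
    identifier="info:example/paper-42",
    repository="arXiv",
    datastreams=[Datastream("application/pdf",
                            "http://example.org/paper-42.pdf")],
)
```

The point of such a surrogate is that value-chain services (review, citation, overlay publication) operate on the common model, with per-repository adapters handling the mapping to native formats.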

    The aDORe federation architecture: digital repositories at scale


    Towards building information modelling for existing structures

    The transformation of cities from the industrial age (unsustainable) to the knowledge age (sustainable) is essentially a ‘whole life cycle’ process consisting of planning, development, operation, reuse, and renewal. During this transformation, a multi-disciplinary knowledge base, created from studies and research about aspects of the built environment (historical, architectural, archaeological, environmental, social, economic, etc.), is critical. Although there is a growing number of 3D VR modelling applications, some built-environment applications, such as disaster management, environmental simulation, and computer-aided architectural design and planning, require models that go beyond 3D graphical visualization: multifunctional, interoperable, intelligent, and multi-representational. Advanced digital mapping technologies such as 3D laser scanning are enablers of effective e-planning, consultation, and communication of users’ views during the planning, design, construction, and lifecycle of the built environment. For example, the 3D laser scanner enables digital documentation of buildings, sites, and physical objects for reconstruction and restoration. It also facilitates the creation of educational resources within the built environment, as well as the reconstruction of the built environment. When the data captured by these technologies are processed and modelled into BIM (Building Information Modelling), they can drive productivity gains by promoting a free flow of information between departments, divisions, offices, and sites, and between organisations, their contractors, and their partners. These technologies are key enablers of new approaches to the ‘whole life cycle’ process within the built and human environment for the 21st century.

The paper describes research towards Building Information Modelling for existing structures via point cloud data captured by 3D laser scanner technology. A case-study building is used to demonstrate how to produce 3D CAD models and BIM models of existing structures based on the designated techniques.

    Interoperability between Multimedia Collections for Content and Metadata-Based Searching

    Artiste is a European project developing a cross-collection search system for art galleries and museums. It combines image-content retrieval with text-based retrieval and uses RDF mappings to integrate diverse databases. The test sites (the Louvre, the Victoria and Albert Museum, the Uffizi Gallery, and the National Gallery London) each provide their own database schema for existing metadata, avoiding the need for migration to a common schema. The system accepts a query based on one museum’s fields and converts it, through an RDF mapping, into a form suitable for querying the other collections. Because some of the image-processing algorithms can be slow, the system is session-based, allowing the user to return to the results later. The system has been built within a J2EE/EJB framework, using the JBoss Enterprise Application Server.
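The query-translation step described above, from one museum's fields into another's, can be sketched as a round trip through a shared vocabulary. Artiste itself expresses such mappings in RDF; the dictionaries, museum field names, and common terms below are invented for illustration only.

```python
# Hypothetical per-museum mappings from local field names to a shared
# vocabulary (Artiste expresses the real mappings in RDF, not as dicts).
LOUVRE_TO_COMMON = {"titre": "title", "auteur": "creator", "date_oeuvre": "date"}
VANDA_TO_COMMON = {"object_title": "title", "artist": "creator", "made": "date"}

def translate_query(query, source_map, target_map):
    """Rewrite a field->value query from one museum's schema to another's.

    Fields with no equivalent in the shared vocabulary, or no counterpart
    in the target schema, are dropped rather than guessed at.
    """
    # Lift the query into the shared vocabulary...
    common = {source_map[f]: v for f, v in query.items() if f in source_map}
    # ...then lower it into the target museum's local field names.
    inverse = {c: f for f, c in target_map.items()}
    return {inverse[c]: v for c, v in common.items() if c in inverse}

q = translate_query({"titre": "La Joconde"}, LOUVRE_TO_COMMON, VANDA_TO_COMMON)
# q == {"object_title": "La Joconde"}
```

Keeping the mappings declarative, as Artiste does with RDF, means a new collection joins the federation by publishing one mapping to the shared vocabulary rather than pairwise mappings to every other museum.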