3,271 research outputs found

    Spatial Standard Observer

    Get PDF
    The present invention relates to devices and methods for the measurement and/or specification of the perceptual intensity of a visual image, or the perceptual distance between a pair of images. Grayscale test and reference images are processed to produce test and reference luminance images. A luminance filter function is convolved with the reference luminance image to produce a local mean luminance reference image. Test and reference contrast images are produced from the local mean luminance reference image and the test and reference luminance images respectively, followed by application of a contrast sensitivity filter. The resulting images are combined according to mathematical prescriptions to produce a Just Noticeable Difference (JND) value, indicative of a Spatial Standard Observer (SSO). Some embodiments include masking functions, window functions, special treatment for images lying on or near borders, and pre-processing of test images.
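As a rough illustration of the pipeline this abstract describes, the sketch below chains the stages (luminance, local mean, contrast, contrast sensitivity filtering, pooling) into a single JND score. All kernels, constants, and the pooling exponent are placeholder assumptions, not the patented SSO parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def jnd_sketch(test_gray, ref_gray, local_sigma=8.0, csf_sigma=1.5, beta=2.0):
    """Illustrative SSO-style pipeline; all constants are placeholders."""
    # 1. Grayscale -> luminance (a simple linear mapping is assumed here).
    test_lum = test_gray.astype(float)
    ref_lum = ref_gray.astype(float)

    # 2. Convolve a luminance filter with the reference image to obtain
    #    the local mean luminance reference image.
    local_mean = gaussian_filter(ref_lum, local_sigma) + 1e-6

    # 3. Contrast images: deviation from the local mean, normalized by it.
    test_con = (test_lum - local_mean) / local_mean
    ref_con = (ref_lum - local_mean) / local_mean

    # 4. Contrast sensitivity filter (stand-in: a band-pass difference of
    #    Gaussians rather than a calibrated human CSF).
    def csf(img):
        return gaussian_filter(img, csf_sigma) - gaussian_filter(img, 4 * csf_sigma)

    # 5. Combine the filtered difference into a single JND value
    #    via Minkowski pooling.
    diff = csf(test_con) - csf(ref_con)
    return float((np.abs(diff) ** beta).mean() ** (1.0 / beta))
```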

    3D photogrammetric data modeling and optimization for multipurpose analysis and representation of Cultural Heritage assets

    Get PDF
    This research deals with the issues concerning the processing, management, and representation, for further dissemination, of the large amount of 3D data that can today be acquired and stored with modern geomatic techniques of 3D metric survey. In particular, this thesis focuses on the optimization process applied to 3D photogrammetric data of Cultural Heritage assets. Modern geomatic techniques enable the acquisition and storage of large amounts of data, with high metric and radiometric accuracy and precision, also in the very close range field, and the processing of very detailed 3D textured models. Nowadays, the photogrammetric pipeline has well-established potentialities and is considered one of the principal techniques for producing detailed 3D textured models at low cost. The potentialities offered by high-resolution textured 3D models are today well known, and such representations are a powerful tool for many multidisciplinary purposes, at different scales and resolutions, from documentation, conservation, and restoration to visualization and education. For example, their sub-millimetric precision makes them suitable for scientific studies of geometry and materials (e.g. for structural and static tests, for planning restoration activities, or for historical sources); their high fidelity to the real object and their navigability make them optimal for web-based visualization and dissemination applications. Thanks to improvements in new visualization standards, they can easily be used as a visualization interface linking different kinds of information in a highly intuitive way. Furthermore, many museums today look for more interactive exhibitions that may heighten visitors' emotions, and many recent applications make use of 3D contents (e.g. in virtual or augmented reality applications and through virtual museums). What all of these applications have to deal with is the difficulty of managing the large amount of data that has to be represented and navigated. Indeed, reality-based models have very heavy file sizes (even tens of GB), which makes them difficult to handle on common and portable devices, publish on the internet, or manage in real-time applications. Even though recent advances produce ever more sophisticated and capable hardware and internet standards, empowering the ability to easily handle, visualize, and share such contents, other research aims at defining a common pipeline for the generation and optimization of 3D models with a reduced number of polygons that are nevertheless able to satisfy detailed radiometric and geometric requirements.

    This thesis is inserted in this scenario and focuses on the 3D modeling process of photogrammetric data aimed at easy sharing and visualization. In particular, this research tested a 3D model optimization, a process that aims at the generation of Low Poly models with very small file sizes, processed starting from the data of High Poly ones, that nevertheless offer a level of detail comparable to the original models. To do this, several tools borrowed from the game industry and game engines have been used. For this test, three case studies were chosen: a modern sculpture by a contemporary Italian artist, a Roman marble statue preserved in the Civic Archaeological Museum of Torino, and the frieze of the Augustus arch preserved in the city of Susa (Piedmont, Italy).
All the test cases have been surveyed by means of a close-range photogrammetric acquisition, and three highly detailed 3D models have been generated by means of a Structure from Motion and image-matching pipeline. On the final High Poly models, different optimization and decimation tools have been tested, with the final aim of evaluating the quality of the information that can be extracted from the optimized models in comparison to that of the original High Poly ones. This study showed how tools borrowed from Computer Graphics offer great potentialities in the Cultural Heritage field as well. This application may meet the needs of multipurpose and multiscale studies, using different levels of optimization, and the procedure could be applied to different kinds of objects, with a variety of sizes and shapes, also on multiscale and multisensor data, such as buildings, architectural complexes, data from UAV surveys, and so on.
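The thesis relies on tools borrowed from the game industry for the High Poly to Low Poly step; purely as an illustrative alternative, the sketch below performs the same kind of decimation with the open-source Open3D library's quadric simplification. File names and target triangle counts are placeholders.

```python
import open3d as o3d

# Placeholder path: a High Poly photogrammetric model would go here.
high_poly = o3d.io.read_triangle_mesh("high_poly_statue.ply")
print(f"High Poly: {len(high_poly.triangles)} triangles")

# Generate progressively lighter Low Poly versions for multiscale use.
for target in (500_000, 100_000, 20_000):
    low_poly = high_poly.simplify_quadric_decimation(
        target_number_of_triangles=target)
    low_poly.compute_vertex_normals()  # recompute shading after decimation
    o3d.io.write_triangle_mesh(f"low_poly_{target}.ply", low_poly)
```

In a full game-industry pipeline the fine surface detail lost by decimation would then be baked from the High Poly model into normal maps applied to the Low Poly one, which is what keeps the visual level of detail comparable to the original.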

    Recent Progress in the Development of INCITS W1.1, Appearance-Based Image Quality Standards for Printers

    Get PDF
    In September 2000, INCITS W1 (the U.S. representative of ISO/IEC JTC1/SC28, the standardization committee for office equipment) was chartered to develop an appearance-based image quality standard. The resulting W1.1 project is based on a proposal that perceived image quality can be described by a small set of broad-based attributes. There are currently five ad hoc teams, each working toward the development of standards for the evaluation of perceptual image quality of color printers for one or more of these image quality attributes. This paper summarizes the work in progress.

    Print engine color management using customer image content

    Get PDF
    The production of quality color prints requires that color accuracy and reproducibility be maintained within very tight tolerances when transferred to different media. Variations in the printing process commonly produce color shifts that result in poor color reproduction. The primary function of a color management system is to maintain color quality and consistency. Currently these systems are tuned in the factory by printing a large set of test color patches, measuring them, and making the necessary adjustments; this time-consuming procedure must be repeated as needed once the printer leaves the factory. In this work, a color management system is proposed that compensates for print color shifts in real time using feedback from an in-line full-width sensor. Instead of printing test patches, this novel approach to color management utilizes the output pixels already rendered in production pages for continuous printer characterization. The printed pages are scanned in-line and the results are used to update the process by which colorimetric image content is translated into engine-specific color separations (e.g. CIELAB→CMYK). The proposed system provides a means of performing automatic printer characterization by simply printing a set of images that cover the gamut of the printer. Moreover, all of the color conversion features currently used in production systems (such as Gray Component Replacement, Gamut Mapping, and Color Smoothing) can be achieved with the proposed system.
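A minimal sketch of the feedback idea, not the paper's actual system: intended versus scanned CIELAB values harvested from production pages drive a coarse correction table that pre-compensates colors ahead of the nominal CIELAB→CMYK conversion. The class name, grid resolution, and learning rate are all illustrative assumptions.

```python
import numpy as np

class DriftCompensator:
    """Toy feedback loop: learn a coarse CIELAB correction from
    (intended, measured) pairs harvested in-line from customer pages."""

    def __init__(self, grid=8, lr=0.3):
        self.grid, self.lr = grid, lr
        self.delta = np.zeros((grid, grid, grid, 3))  # per-cell Lab correction

    def _cell(self, lab):
        # Map L* in [0, 100] and a*, b* in [-128, 127] onto the grid.
        lo = np.array([0.0, -128.0, -128.0])
        span = np.array([100.0, 255.0, 255.0])
        idx = (np.asarray(lab, float) - lo) / span * (self.grid - 1)
        return tuple(np.clip(idx, 0, self.grid - 1).astype(int))

    def update(self, intended_lab, measured_lab):
        # Fold one (intended, scanned) pair into the table by
        # exponential smoothing toward the observed print error.
        c = self._cell(intended_lab)
        err = np.asarray(intended_lab, float) - np.asarray(measured_lab, float)
        self.delta[c] = (1 - self.lr) * self.delta[c] + self.lr * err

    def precompensate(self, lab):
        # Shift the requested color before the nominal CIELAB->CMYK LUT
        # so the printed result lands closer to the original intent.
        return np.asarray(lab, float) + self.delta[self._cell(lab)]
```

In this picture every production page acts as a free, sparse test chart: the regions of the gamut that customer images actually exercise are refreshed continuously, which is what removes the need for dedicated patch printing.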

    Searching for Optical Counterparts to Gravitational Waves

    Get PDF
    The era of multi-messenger astronomy has begun. The coordinated activities of multiple, distinct observatories play a critical role both in responding to astrophysical transients and in building a more comprehensive interpretation otherwise inaccessible to individual observations. The Transient Robotic Observatory of the South (TOROS) Collaboration has a global network of instruments capable of responding to several transient targets of opportunity. The purpose of this thesis is to demonstrate how optical observatories with small (degree-scale) fields of view can follow up and observe astrophysical transients. TOROS facilities responded to three unique gravitational wave events during the second and third observational campaigns of the Laser Interferometer Gravitational-Wave Observatory. We found no optical transients associated with the binary black hole merger GW170104 or the neutron star-black hole merger S190814bv. We detected the optical counterpart AT2017gfo during the follow-up response to the binary neutron star merger GW170817. Elliptical isophote modeling and subtraction of the host galaxy NGC 4993 reveal an isolated optical transient not seen in previous archival data. We performed relative time-series photometry in the SDSS gri bands on the detected transient. AT2017gfo exhibits rapid dimming in all bands and color change over the ~1.5 hours of time-series observations. We observe the colors of AT2017gfo to be g-r = 0.79(0.08), r-i = 0.23(0.08), and g-i = 1.02(0.08) at ~35 hours post-merger. We calculate the corresponding absolute magnitudes Mg = -14.40(0.06), Mr = -15.06(0.05), and Mi = -15.22(0.06). We observe AT2017gfo to have an angular offset of ~10.4'' from the galactic core, corresponding to a projected distance of ~2 kpc at redshift z = 0.00973. Although AT2017gfo is generally recognized to be consistent with an r-process-powered thermal transient, or kilonova, our observations are in partial disagreement with the accepted picture. We address the plausible reasons for the discrepancies in our measurements. We developed a reduction and photometry pipeline during the processing and analysis of data from these events. The CTMO Analysis Pipeline (CAL) is currently at an early phase of operation, with plans for automation and further inclusion of additional analysis methods to optimize the TOROS search and characterization of astrophysical transients.
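For readers unfamiliar with the host-subtraction step, the sketch below shows how elliptical isophote modeling and subtraction can be carried out with the photutils package; the file name and the initial galaxy geometry are placeholders, not the thesis' fitted values.

```python
import numpy as np
from astropy.io import fits
from photutils.isophote import Ellipse, EllipseGeometry, build_ellipse_model

# Placeholder file: a reduced frame containing the host galaxy NGC 4993.
data = fits.getdata("ngc4993_r.fits").astype(float)

# Initial guess for the galaxy's center, semi-major axis, ellipticity,
# and position angle (illustrative values only).
geometry = EllipseGeometry(x0=512, y0=512, sma=40.0, eps=0.2,
                           pa=30.0 * np.pi / 180.0)

# Fit elliptical isophotes, build a smooth 2-D model of the host galaxy,
# and subtract it; a point-like transient survives in the residual image.
isolist = Ellipse(data, geometry).fit_image()
model = build_ellipse_model(data.shape, isolist)
residual = data - model
```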

    Manned observations technology development, FY 1992 report

    Get PDF
    This project evaluated the suitability of the NASA/JSC-developed electronic still camera (ESC) digital image data for Earth observations from the Space Shuttle, as a first step to aid planning for Space Station Freedom. Specifically, the image resolution achieved from the Space Shuttle using the current ESC system, which is configured with a Loral 15 mm x 15 mm (1024 x 1024 pixel array) CCD chip on the focal plane of a Nikon F4 camera, was compared to that of the current handheld 70 mm Hasselblad 500 EL/M film cameras.

    Adaptive Methods for Robust Document Image Understanding

    Get PDF
    A vast amount of digital document material is continuously being produced as part of major digitization efforts around the world. In this context, generic and efficient automatic solutions for document image understanding represent a stringent necessity. We propose a generic framework for document image understanding systems, usable for practically any document type available in digital form. Following the introduced workflow, we shift our attention to each of the following processing stages in turn: quality assurance, image enhancement, color reduction and binarization, skew and orientation detection, page segmentation, and logical layout analysis. We review the state of the art in each area, identify current deficiencies, point out promising directions, and give specific guidelines for future investigation. We address some of the identified issues by means of novel algorithmic solutions, putting special focus on generality, computational efficiency, and the exploitation of all available sources of information. More specifically, we introduce the following original methods: fully automatic detection of color reference targets in digitized material, accurate foreground extraction from color historical documents, font enhancement for hot-metal typeset prints, a theoretically optimal solution to the document binarization problem from both the computational-complexity and the threshold-selection points of view, layout-independent skew and orientation detection, a robust and versatile page segmentation method, a semi-automatic front page detection algorithm, and a complete framework for article segmentation in periodical publications. The proposed methods are experimentally evaluated on large datasets consisting of real-life heterogeneous document scans. The obtained results show that a document understanding system combining these modules is able to robustly process a wide variety of documents with good overall accuracy.
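The thesis presents its own, theoretically grounded solution to binarization; purely as a generic illustration of threshold-selection binarization, the sketch below implements Otsu's classic between-class-variance criterion in NumPy (not the thesis' algorithm).

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: pick the gray level that maximizes the
    between-class variance of the 256-bin histogram."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                  # probability of the dark class
    mu = np.cumsum(p * np.arange(256))    # cumulative mean of the dark class
    mu_t = mu[-1]                         # global mean gray level
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.nanargmax(sigma_b))

def binarize(gray):
    """Map a uint8 grayscale page image to a black-and-white one."""
    return np.where(gray > otsu_threshold(gray), 255, 0).astype(np.uint8)
```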

    Preservation in the age of large-scale digitization

    Get PDF
    The digitization of millions of books under programs such as Google Book Search and Microsoft Live Search Books is dramatically expanding our ability to search and find information. The aim of these large-scale projects, to make content accessible, is interwoven with the question of how one keeps that content, whether digital or print, fit for use over time. This report by Oya Y. Rieger examines large-scale digital initiatives (LSDIs) to identify issues that will influence the availability and usability, over time, of the digital books these projects create. Ms. Rieger is interim assistant university librarian for digital library and information technologies at the Cornell University Library. The paper describes four large-scale projects (Google Book Search, Microsoft Live Search Books, Open Content Alliance, and the Million Book Project) and their digitization strategies. It then discusses a range of issues affecting the stewardship of the digital collections they create: selection, quality in content creation, technical infrastructure, and organizational infrastructure. The paper also attempts to foresee the likely impacts of large-scale digitization on book collections.

    SymED: Adaptive and Online Symbolic Representation of Data on the Edge

    Full text link
    The edge computing paradigm helps handle Internet of Things (IoT) generated data in proximity to its source. Challenges arise in transferring, storing, and processing this rapidly growing amount of data on resource-constrained edge devices. Symbolic Representation (SR) algorithms are promising solutions for reducing data size by converting raw data into symbols. They also allow data analytics (e.g., anomaly detection and trend prediction) directly on symbols, benefiting large classes of edge applications. However, existing SR algorithms are centralized in design and work offline with batch data, which is infeasible for real-time cases. We propose SymED (Symbolic Edge Data representation), an online, adaptive, and distributed approach for the symbolic representation of data on the edge. SymED is based on Adaptive Brownian Bridge-based Aggregation (ABBA), where we assume low-powered IoT devices do the initial data compression (senders) and the more robust edge devices do the symbolic conversion (receivers). We evaluate SymED by measuring compression performance, reconstruction accuracy through Dynamic Time Warping (DTW) distance, and computational latency. The results show that SymED is able to (i) reduce the raw data with an average compression rate of 9.5%; (ii) keep a low reconstruction error of 13.25 in DTW space; and (iii) provide real-time adaptability for online streaming IoT data at typical latencies of 42 ms per symbol, reducing overall network traffic.
    Comment: 14 pages, 5 figures
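To make the sender/receiver split concrete, here is a toy sketch of symbolic representation in the ABBA spirit: the sender compresses the series into piecewise-linear (length, increment) pieces, and the receiver quantizes the increments into a small alphabet. It is an illustrative stand-in, not the SymED algorithm; the tolerance and alphabet are arbitrary choices.

```python
import numpy as np

def compress(ts, tol=0.5):
    """Sender side: greedy piecewise-linear compression. Each piece is a
    (length, increment) pair; 'tol' caps the deviation from the chord."""
    pieces, start, end = [], 0, 1
    while end < len(ts):
        seg = ts[start:end + 1]
        chord = np.linspace(seg[0], seg[-1], len(seg))
        if np.max(np.abs(seg - chord)) > tol:
            pieces.append((end - 1 - start, ts[end - 1] - ts[start]))
            start = end - 1  # cut at the last point that still fit
        else:
            end += 1
    pieces.append((len(ts) - 1 - start, ts[-1] - ts[start]))
    return pieces

def symbolize(pieces, alphabet="abcdefgh"):
    """Receiver side: map each piece's increment onto a small alphabet by
    uniform quantization (ABBA instead clusters pieces adaptively)."""
    incs = np.array([inc for _, inc in pieces])
    edges = np.linspace(incs.min(), incs.max(), len(alphabet) + 1)[1:-1]
    return "".join(alphabet[i] for i in np.digitize(incs, edges))

# Toy usage on a random-walk signal.
ts = np.cumsum(np.random.default_rng(0).normal(size=500))
pieces = compress(ts, tol=0.8)
print(f"{len(ts)} samples -> {len(pieces)} symbols:", symbolize(pieces))
```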