5,294 research outputs found

    Image-Processing Techniques for the Creation of Presentation-Quality Astronomical Images

    Full text link
    The quality of modern astronomical data, the power of modern computers and the agility of current image-processing software enable the creation of high-quality images in a purely digital form. Together, these technological advances have created a new ability to make color astronomical images and, in many ways, a new philosophy for how to create them. A practical guide is presented on how to generate astronomical images from research data with powerful image-processing programs. These programs use a layering metaphor that allows an unlimited number of astronomical datasets to be combined in any desired color scheme, creating an immense parameter space to be explored with an iterative approach. Several examples of image creation are presented. A philosophy is also presented on how to use color and composition to create images that simultaneously highlight scientific detail and are aesthetically appealing. Such a philosophy is necessary because most datasets do not correspond to the wavelength range to which the human eye is sensitive. The use of visual grammar, defined as the elements that affect the interpretation of an image, can maximize the richness and detail in an image while maintaining scientific accuracy. By properly using visual grammar, one can imply qualities that a two-dimensional image intrinsically cannot show, such as depth, motion and energy. In addition, composition can be used to engage viewers and keep them interested for a longer period of time. These techniques can produce a striking image that effectively conveys the science within it, to scientists and to the public.
    Comment: 104 pages, 38 figures, submitted to A
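
    A rough illustration of the layering metaphor described in this abstract: the Python sketch below tints each dataset with an arbitrary color and combines the layers additively after an arcsinh stretch. Synthetic arrays stand in for real calibrated exposures, and the filter names and output filename are placeholders, not taken from the paper.

    import numpy as np
    import matplotlib.pyplot as plt

    def stretch(data, soften=10.0):
        # Arcsinh stretch: compresses bright cores while preserving faint detail.
        d = (data - data.min()) / (data.max() - data.min() + 1e-12)
        return np.arcsinh(soften * d) / np.arcsinh(soften)

    # Synthetic stand-ins for three calibrated narrowband exposures.
    rng = np.random.default_rng(0)
    y, x = np.mgrid[-1:1:512j, -1:1:512j]
    halpha = np.exp(-(x**2 + y**2) * 8) + 0.05 * rng.random((512, 512))
    oiii = np.exp(-((x - 0.2)**2 + y**2) * 12) + 0.05 * rng.random((512, 512))
    sii = np.exp(-(x**2 + (y + 0.3)**2) * 10) + 0.05 * rng.random((512, 512))

    # Layering metaphor: each dataset is a layer tinted with an arbitrary color,
    # and layers combine additively, so any number of datasets and any color
    # scheme can be explored iteratively.
    layers = [(halpha, (1.0, 0.2, 0.1)),   # reddish
              (oiii, (0.1, 0.9, 0.6)),     # teal
              (sii, (0.9, 0.8, 0.2))]      # gold

    rgb = np.zeros((512, 512, 3))
    for data, color in layers:
        rgb += stretch(data)[..., None] * np.asarray(color)
    plt.imsave("composite.png", np.clip(rgb, 0.0, 1.0))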

    Designing a fruit identification algorithm in orchard conditions to develop robots using video processing and majority voting based on hybrid artificial neural network

    Get PDF
    Identifying fruits on trees is the first step in developing orchard robots for purposes such as fruit harvesting and site-specific spraying. Because of the natural conditions of fruit orchards and the unevenness of the various objects throughout them, imaging under controlled conditions is very difficult. As a result, identification must be performed under natural conditions of both lighting and background. Because every subsequent robot operation depends on the fruit identification stage, this step must be performed precisely. The purpose of this paper was therefore to design an identification algorithm for orchard conditions using a combination of video processing and majority voting based on different hybrid artificial neural networks. The steps in designing this algorithm were: (1) recording video of different plum orchards at different light intensities; (2) converting the recorded videos into individual frames; (3) extracting different color properties from pixels; (4) selecting effective properties from the extracted color properties using a hybrid artificial neural network-harmony search (ANN-HS); and (5) classifying using majority voting based on three classifiers: artificial neural network-bees algorithm (ANN-BA), artificial neural network-biogeography-based optimization (ANN-BBO), and artificial neural network-firefly algorithm (ANN-FA). The most effective features selected by the hybrid ANN-HS were the third channel of the hue saturation lightness (HSL) color space, the second channel of the lightness chroma hue (LCH) color space, the first channel of the L*a*b* color space, and the first channel of the hue saturation intensity (HSI) color space. The results showed that the accuracy of the majority voting method was 98.01% in the best execution and 97.20% over 500 executions. Across the different performance evaluation criteria for the classifiers, the majority voting method had the highest performance.
    Funding: European Union (EU), Erasmus+ project "Fostering Internationalization in Agricultural Engineering in Iran and Russia" [FARmER], grant number 585596-EPP-1-2017-1-DE-EPPKA2-CBHE-JP
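
    A minimal sketch of the majority-voting step, assuming the three tuned networks have already produced per-pixel class labels; the arrays below are made-up stand-ins for the ANN-BA, ANN-BBO and ANN-FA outputs.

    import numpy as np

    def majority_vote(*predictions):
        # Combine class labels from several classifiers by majority vote;
        # ties are broken in favor of the lowest label index.
        preds = np.stack(predictions)      # shape: (n_classifiers, n_samples)
        n_classes = preds.max() + 1
        votes = np.apply_along_axis(np.bincount, 0, preds, minlength=n_classes)
        return votes.argmax(axis=0)

    # Hypothetical outputs of the three networks on six pixels
    # (1 = fruit, 0 = background).
    ann_ba = np.array([1, 0, 1, 1, 0, 1])
    ann_bbo = np.array([1, 0, 0, 1, 0, 1])
    ann_fa = np.array([0, 0, 1, 1, 1, 1])

    print(majority_vote(ann_ba, ann_bbo, ann_fa))   # -> [1 0 1 1 0 1]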

    Combination of up-converting materials with semiconductor light sources

    Get PDF
    Methods, apparatus and systems for an up-converter resonant cavity light emitting diode device include a semiconductor light source, an up-converter with up-converting materials to form the light emitter, and an electrical source coupled with the semiconductor light source to provide electrical energy to it, producing emitted light of a desired wavelength. The semiconductor light source is a resonant cavity light emitting diode or laser that emits at approximately 975 nm and provides electrical and optical confinement, forming an up-converting resonant cavity light emitting diode (UC-RCLED). Rows and columns of electrodes provide active-matrix addressing of plural sets of UC-RCLEDs for display devices. The up-converter resonant cavity light emitting diode device has applications in head-mounted projection display optical systems that use spectrally selective beam splitters to eliminate spectral overlap between colors
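
    For illustration only, a toy sketch of the row/column addressing idea, assuming a small hypothetical pixel array; it is not derived from the patent's circuitry.

    import numpy as np

    # Desired on/off state for a tiny hypothetical UC-RCLED array.
    frame = np.array([[1, 0, 1],
                      [0, 1, 0],
                      [1, 1, 0]])

    display = np.zeros_like(frame)
    for row in range(frame.shape[0]):
        # Assert one row-select electrode at a time and drive all column
        # electrodes with that row's data in parallel. In an active matrix,
        # each pixel has its own storage element, so the written state
        # persists after the row is deselected.
        display[row, :] = frame[row, :]

    assert (display == frame).all()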

    Composite cavity for enhanced efficiency of up conversion.

    Get PDF
    Methods, apparatus and systems for an up-converter resonant cavity light emitting diode device include a semiconductor light source, an up-converter with up-converting materials to form the light emitter, and an electrical source coupled with the semiconductor light source to provide electrical energy to it, producing emitted light of a desired wavelength. The semiconductor light source is a resonant cavity light emitting diode or laser that emits at approximately 975 nm and provides electrical and optical confinement, forming a resonant cavity up-converting light emitting diode (UC-RCLED). Rows and columns of electrodes provide active-matrix addressing of plural sets of UC-RCLEDs for display devices. The up-converter resonant cavity light emitting diode device has applications in head-mounted projection display optical systems that use spectrally selective beam splitters to eliminate spectral overlap between colors

    Algorithms for compression of high dynamic range images and video

    Get PDF
    Recent advances in sensor and display technologies have brought about High Dynamic Range (HDR) imaging capability. Modern multiple-exposure HDR sensors can achieve a dynamic range of 100-120 dB, and LED and OLED display devices have contrast ratios of 10^5:1 to 10^6:1. Despite these advances, image/video compression algorithms and the associated hardware are still based on Standard Dynamic Range (SDR) technology, i.e. they operate within an effective dynamic range of up to 70 dB for 8-bit gamma-corrected images. Further, the existing infrastructure for content distribution is also designed for SDR, which creates interoperability problems with true HDR capture and display equipment. Current solutions to this problem include tone mapping the HDR content to fit SDR; however, this approach leads to image quality problems when strong dynamic range compression is applied. Although some HDR-only solutions have been proposed in the literature, they are not interoperable with the current SDR infrastructure and are thus typically used in closed systems. Given these observations, a research gap was identified: the need for efficient algorithms for the compression of still images and video that can store the full dynamic range and colour gamut of HDR images while remaining backward compatible with the existing SDR infrastructure. To improve the usability of the SDR content, any such algorithms should accommodate different tone mapping operators, including those that are spatially non-uniform. In the course of the research presented in this thesis, a novel two-layer CODEC architecture is introduced for both HDR image and video coding. Further, a universal and computationally efficient approximation of the tone mapping operator is developed and presented. It is shown that using perceptually uniform colourspaces for the internal representation of pixel data improves the compression efficiency of the algorithms. Novel approaches to the compression of metadata for the tone mapping operator are shown to improve compression performance for low-bitrate video content. Multiple compression algorithms are designed, implemented and compared, and quality-complexity trade-offs are identified. Finally, practical aspects of implementing the developed algorithms are explored by automating the design space exploration flow and integrating the high-level systems design framework with domain-specific tools for the synthesis and simulation of multiprocessor systems. Directions for further work are also presented.
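
    A minimal sketch of the backward-compatible two-layer idea, not the thesis's actual CODEC: a hypothetical global tone mapping operator produces the 8-bit SDR base layer that legacy decoders consume, a per-frame lookup table approximates the inverse operator as compact metadata, and the remaining residual forms the HDR enhancement layer.

    import numpy as np

    def tonemap(hdr):
        # Hypothetical global TMO: logarithmic compression into [0, 1].
        return np.log1p(hdr) / np.log1p(hdr.max())

    def build_inverse_lut(hdr, sdr8, bins=256):
        # Approximate the TMO's inverse with a per-frame LUT: for each 8-bit
        # code value, store the mean HDR luminance that produced it.
        lut = np.zeros(bins)
        for code in range(bins):
            mask = sdr8 == code
            if mask.any():
                lut[code] = hdr[mask].mean()
        return lut

    rng = np.random.default_rng(1)
    hdr = rng.gamma(2.0, 2.0, (64, 64)) * 100.0          # synthetic HDR frame
    sdr8 = np.clip(np.round(tonemap(hdr) * 255), 0, 255).astype(np.uint8)

    lut = build_inverse_lut(hdr, sdr8)                   # tiny metadata stream
    prediction = lut[sdr8]                               # decoder-side estimate
    residual = hdr - prediction                          # enhancement layer
    print("relative residual energy:",
          np.sum(residual**2) / np.sum(hdr**2))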

    A Study of the color management implementation on the RGB-based digital imaging workflow: digital camera to RGB printers

    Get PDF
    An RGB (red, green, and blue color information) workflow is used in digital photography today because many of the devices involved, such as digital cameras, scanners, monitors, image recorders (LVT, or Light Valve Technology), and some types of printers, are based on RGB color information. In addition, rapidly growing new media such as the Internet and CD-ROM (Compact Disc Read-Only Memory) publishing use an RGB-based monitor as the output device. Because color is device-dependent, each device has a different method of representing color information and a different range of colors it can reproduce. Most of the time, the range of colors a device can produce, its color gamut, is smaller than that of the original capturing device. As a result, a color image reproduction does not accurately match its original. In typical color image reproduction, therefore, matching a reproduction with its original is a significant problem that operators must overcome to achieve good quality. Generally, there are two approaches to this problem. The first is trial-and-error in a legacy-based system; this method is effective in a pair-wise working environment and depends heavily on a skilled operator. The second is an ICC-based (International Color Consortium) color management system (CMS), which is more practical in a multiple-device working environment. Using the right method leads to higher efficiency in digital photography production. The purpose of this thesis project was therefore to verify that an ICC-based CMS with an RGB workflow has higher efficiency (better utilization of resources and capacity) than a legacy-based traditional color reproduction workflow. In this study, RGB workflows from digital cameras to RGB digital printers were used because of the increasing number of digital camera users and the advantages of using an RGB workflow in digital photography. There were two experimental image reproduction workflows: the legacy-based system and the ICC-based color management system. Both used the same raw RGB images, captured with digital cameras, as their input files. The color images were modified with two different color matching methods according to each workflow and then printed on two RGB digital printers. Twenty observers were asked to evaluate the picture quality as well as the reproduction quality. The results demonstrated that both workflows could produce reproductions of acceptable picture quality. In terms of reproduction quality, the reproductions from the ICC-based CMS workflow were of higher quality than those from the legacy-based workflow. In addition, when the time usage of the workflow was taken into account, the ICC-based CMS had higher efficiency than the legacy-based system. However, production jobs often do not start with optimum-quality raw images as in this study; for example, images may be under- or over-exposed or have defects. Such images need retouching or fine adjustment to improve their quality. In these cases, an ICC-based CMS with skilled operators can be applied to achieve a highly efficient workflow
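
    A minimal sketch of a single conversion in the ICC-based workflow, assuming Pillow's ImageCms bindings to LittleCMS; the ICC profile and image file names are placeholders, not files from the study.

    from PIL import Image, ImageCms

    camera_profile = "camera_rgb.icc"     # describes the camera's RGB space
    printer_profile = "rgb_printer.icc"   # describes the RGB digital printer

    im = Image.open("raw_capture.tif").convert("RGB")

    # One deterministic conversion replaces the legacy trial-and-error loop:
    # the CMS maps colors through a device-independent connection space and
    # gamut-maps out-of-range colors per the chosen rendering intent.
    converted = ImageCms.profileToProfile(
        im, camera_profile, printer_profile,
        renderingIntent=ImageCms.INTENT_PERCEPTUAL)

    converted.save("print_ready.tif")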

    Redefining A in RGBA: Towards a Standard for Graphical 3D Printing

    Full text link
    Advances in multimaterial 3D printing have the potential to reproduce various visual appearance attributes of an object in addition to its shape. Since many existing 3D file formats encode color and translucency by RGBA textures mapped to 3D shapes, RGBA information is particularly important for practical applications. In contrast to color (encoded by RGB), which is specified by the object's reflectance, selected viewing conditions and a standard observer, translucency (encoded by A) is linked to no measurable physical or perceptual quantity. Thus, reproducing translucency encoded by A is open to interpretation. In this paper, we propose a rigorous definition for A suitable for use in graphical 3D printing, which is independent of the 3D printing hardware and software, and which links both optical material properties and perceptual uniformity for human observers. By deriving our definition from the absorption and scattering coefficients of virtual homogeneous reference materials with an isotropic phase function, we achieve two important properties. First, a simple adjustment of A is possible that preserves the translucency appearance if an object is rescaled for printing. Second, the value of A for a real (potentially non-homogeneous) material can be determined by minimizing a distance function between light transport measurements of this material and simulated measurements of the reference materials. Such measurements can be conducted with commercial spectrophotometers used in the graphic arts. Finally, we conduct visual experiments employing the method of constant stimuli and derive from them an embedding of A into a nearly perceptually uniform scale of translucency for the reference materials.
    Comment: 20 pages (incl. appendices), 20 figures. Version with higher quality images: https://cloud-ext.igd.fraunhofer.de/s/pAMH67XjstaNcrF (main article) and https://cloud-ext.igd.fraunhofer.de/s/4rR5bH3FMfNsS5q (appendix). Supplemental material including code: https://cloud-ext.igd.fraunhofer.de/s/9BrZaj5Uh5d0cOU/downloa
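
    A minimal sketch of the fitting idea behind the second property above, with a made-up one-parameter light transport model standing in for the paper's simulated reference-material measurements; the Beer-Lambert stand-in and all names are assumptions, not the paper's method.

    import numpy as np
    from scipy.optimize import minimize_scalar

    def reference_transmittance(alpha, thickness_mm=2.0):
        # Hypothetical model: A = 0 is fully clear, A = 1 fully opaque; map A
        # to an extinction coefficient and return direct transmittance.
        extinction = -np.log(max(1.0 - alpha, 1e-9))
        return np.exp(-extinction * thickness_mm)

    def fit_alpha(measured_transmittance):
        # Choose the A whose reference material best matches the measurement.
        loss = lambda a: (reference_transmittance(a) - measured_transmittance) ** 2
        return minimize_scalar(loss, bounds=(0.0, 1.0), method="bounded").x

    measured = 0.35   # stand-in for a real spectrophotometer reading
    print("fitted A:", fit_alpha(measured))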