
    Two Decades of Colorization and Decolorization for Images and Videos

    Colorization is a computer-aided process that aims to add color to a grayscale image or video. It can be used to enhance black-and-white material, including black-and-white photos, old films, and scientific imaging results. Conversely, decolorization converts a color image or video into a grayscale one. A grayscale image or video carries only brightness information, with no color information. Decolorization is the basis of downstream image processing applications such as pattern recognition, image segmentation, and image enhancement. Unlike image decolorization, video decolorization must not only preserve the image contrast within each video frame but also respect the temporal and spatial consistency between frames. Researchers have devoted much effort to developing decolorization methods that balance spatial-temporal consistency and algorithmic efficiency. With the prevalence of digital cameras and mobile phones, image and video colorization and decolorization have attracted increasing attention. This paper gives an overview of the progress of image and video colorization and decolorization methods over the last two decades. Comment: 12 pages, 19 figures
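    The basic decolorization step the survey covers — mapping color to brightness — can be sketched with the standard ITU-R BT.601 luminance weights. This is one common choice, not the survey's method; the contrast-preserving approaches it reviews are more sophisticated:

```python
def to_grayscale(pixel):
    """Convert one (R, G, B) pixel to a luminance value using the
    ITU-R BT.601 weights, a standard choice for decolorization."""
    r, g, b = pixel
    return 0.299 * r + 0.587 * g + 0.114 * b

def decolorize(image):
    """Map an RGB image (nested lists of (R, G, B) tuples) to grayscale."""
    return [[to_grayscale(px) for px in row] for row in image]
```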

    Large-Scale Gravitational Instability and Star Formation in the Large Magellanic Cloud

    Large-scale star formation in disk galaxies is hypothesized to be driven by global gravitational instability. The observed gas surface density is commonly used to compute the strength of gravitational instability, but by this criterion star formation often appears to occur in gravitationally stable regions. One possible reason is that the stellar contribution to the instability has been neglected. We have examined the gravitational instability of the Large Magellanic Cloud (LMC) considering the gas alone, and considering the combination of collisional gas and collisionless stars. We compare the gravitationally unstable regions with the on-going star formation revealed by Spitzer observations of young stellar objects. Although only 62% of the massive young stellar object candidates are in regions where the gas alone is unstable, some 85% lie in regions unstable due to the combination of gas and stars. The combined stability analysis better describes where star formation occurs. In agreement with other observations and numerical models, a small fraction of the star formation occurs in regions with gravitational stability parameter Q > 1. We further measure the dependence of the star formation timescale on the strength of gravitational instability, and quantitatively compare it to the exponential dependence expected from numerical simulations. Comment: Accepted for publication in ApJ, 10 pages, 5 figures
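    The stability criterion described above rests on the Toomre Q parameter. A minimal numerical sketch follows, assuming CGS units and the Wang & Silk (1994) approximation for combining gas and stars; the paper's exact combined criterion may differ:

```python
import math

G = 6.674e-8  # gravitational constant [cm^3 g^-1 s^-2]

def q_gas(kappa, sigma_v, surface_density):
    """Toomre Q for a collisional gas disk: Q = kappa * sigma / (pi * G * Sigma).
    kappa: epicyclic frequency [1/s], sigma_v: velocity dispersion [cm/s],
    surface_density: Sigma [g/cm^2]."""
    return kappa * sigma_v / (math.pi * G * surface_density)

def q_star(kappa, sigma_v, surface_density):
    """Toomre Q for a collisionless stellar disk (pi replaced by 3.36)."""
    return kappa * sigma_v / (3.36 * G * surface_density)

def q_total(qg, qs):
    """Wang & Silk (1994) approximation for the combined gas+star stability;
    the combined disk is less stable than either component alone."""
    return 1.0 / (1.0 / qg + 1.0 / qs)
```

    Note that a region with Q_gas > 1 (stable in gas alone) can still have Q_total < 1 once the stellar contribution is included, which is the paper's central point.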

    Adaptive smoothness constraint image multilevel fuzzy enhancement algorithm

    To address the poor enhancement quality and long run time of traditional algorithms, an adaptive smoothness constraint image multilevel fuzzy enhancement algorithm based on secondary color-to-grayscale conversion is proposed. Using fuzzy set theory and generalized fuzzy set theory, a new linear generalized fuzzy operator is derived. Applying the linear generalized membership transformation and its inverse, a secondary color-to-grayscale conversion of the adaptive smoothness constraint image is performed. Combined with the generalized fuzzy operator, regional contrast fuzzy enhancement of the image is realized, yielding multilevel fuzzy enhancement. Experimental results show that the improved algorithm reduces the fuzziness of the image and effectively improves its clarity. The algorithm is also fast, giving it practical advantages.
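    The paper's linear generalized fuzzy operator is specific to its method, but the general shape of fuzzy-set contrast enhancement can be illustrated with the classic Pal-King membership function and intensification (INT) operator; the fuzzifier values fd and fe below are illustrative assumptions, not the paper's parameters:

```python
def membership(g, g_max=255, fd=128.0, fe=2.0):
    """Pal-King style membership function mapping a gray level g to [0, 1].
    fd (denominational fuzzifier) and fe (exponential fuzzifier) control
    the shape; the values here are illustrative."""
    return (1.0 + (g_max - g) / fd) ** (-fe)

def intensify(mu):
    """Contrast intensification (INT) operator: pushes membership values
    away from the crossover point 0.5, sharpening the fuzzy image."""
    if mu <= 0.5:
        return 2.0 * mu * mu
    return 1.0 - 2.0 * (1.0 - mu) ** 2
```

    Applying `intensify` repeatedly gives the "multilevel" flavor of enhancement: each pass drives memberships further toward 0 or 1, increasing contrast.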

    Local Contrast Enhancement Utilizing Bidirectional Switching Equalization Of Separated And Clipped Sub-Histograms

    Digital image contrast enhancement methods based on the histogram equalization (HE) technique are well suited to consumer electronics products due to their simple implementation. However, almost all of the proposed enhancement methods use global processing, which does not emphasize local content.
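    The global HE baseline that such methods build on can be sketched as follows; the paper's actual contribution — bidirectional switching equalization of separated, clipped sub-histograms — is not shown here:

```python
def equalize(gray, levels=256):
    """Global histogram equalization of a flat list of integer gray levels.
    Each level is remapped through the normalized cumulative histogram,
    spreading the intensities over the full dynamic range."""
    n = len(gray)
    hist = [0] * levels
    for g in gray:
        hist[g] += 1
    # cumulative distribution function of the histogram
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    # remap each pixel through the scaled CDF
    return [round((levels - 1) * cdf[g] / n) for g in gray]
```

    Because the CDF is computed over the whole image, a bright region can wash out detail in a dark one — exactly the global-versus-local weakness the abstract points out.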

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location, and sign of those changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz), resulting in reduced motion blur. Hence, event cameras have large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as those demanding low latency, high speed, and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, through the actual sensors that are available, to the tasks they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world
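    The per-pixel brightness-change principle can be illustrated with the idealized event-generation model commonly used in this literature: a pixel emits an event whenever its log intensity has moved by a contrast threshold C since the last event. The threshold and sample values below are illustrative:

```python
import math

def events_from_intensity(samples, threshold=0.2):
    """Idealized single-pixel event generation. `samples` is a list of
    (timestamp, intensity) pairs; an event (timestamp, polarity) is
    emitted each time the log intensity crosses the next multiple of
    `threshold` relative to the reference level of the last event."""
    events = []
    _, ref = samples[0]
    log_ref = math.log(ref)
    for t, intensity in samples[1:]:
        delta = math.log(intensity) - log_ref
        while abs(delta) >= threshold:
            polarity = 1 if delta > 0 else -1
            events.append((t, polarity))
            log_ref += polarity * threshold  # move reference toward new level
            delta = math.log(intensity) - log_ref
    return events
```

    A constant-intensity pixel produces no events at all, which is the source of the sensor's low power consumption and sparse output.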

    Improved texture image classification through the use of a corrosion-inspired cellular automaton

    In this paper, the problem of classifying synthetic and natural texture images is addressed. To tackle this problem, an innovative method is proposed that combines concepts from corrosion modeling and cellular automata to generate a texture descriptor. The core processes of metal (pitting) corrosion are identified and applied to texture images by incorporating the basic mechanisms of corrosion into the transition function of the cellular automaton. The surface morphology of the image is analyzed before and during the application of the transition function. In each iteration, the cumulative mass of corroded product is obtained to construct one attribute of the texture descriptor. In a final step, this texture descriptor is used for image classification by applying Linear Discriminant Analysis. The method was tested on the well-known Brodatz and Vistex databases. In addition, to verify the robustness of the method, its invariance to noise and rotation was tested. To that end, different variants of the original two databases were obtained by adding noise to, and rotating, the images. The results showed that the method is effective for texture classification, given the high success rates obtained in all cases. This indicates the potential of employing methods inspired by natural phenomena in other fields. Comment: 13 pages, 14 figures
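    A hypothetical sketch of the corrosion-style cellular automaton idea follows. The actual transition function is defined in the paper; here each cell simply loses mass in proportion to its contrast with the 4-neighborhood mean, and the per-iteration corroded mass forms the descriptor attributes:

```python
def corrosion_descriptor(image, iterations=3, rate=0.1):
    """Hypothetical corrosion-inspired cellular automaton on a 2D grid of
    gray levels: high-contrast cells corrode ("pit") fastest, and the total
    mass corroded in each iteration becomes one descriptor attribute."""
    h, w = len(image), len(image[0])
    grid = [row[:] for row in image]
    descriptor = []
    for _ in range(iterations):
        corroded = 0.0
        nxt = [row[:] for row in grid]
        for i in range(h):
            for j in range(w):
                nbrs = [grid[x][y]
                        for x, y in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                        if 0 <= x < h and 0 <= y < w]
                contrast = abs(grid[i][j] - sum(nbrs) / len(nbrs))
                loss = rate * contrast  # transition rule: corrode where contrast is high
                nxt[i][j] = grid[i][j] - loss
                corroded += loss
        grid = nxt
        descriptor.append(corroded)
    return descriptor
```

    The resulting fixed-length vector (one value per iteration) is what a classifier such as Linear Discriminant Analysis would consume.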