
    Illumination system including a virtual light source Patent

    Get PDF
    Illumination system design for use as a sunlight simulator in space environment simulators, with multiple light sources reflected to a single virtual source

    An approach for Shadow Detection and Removal based on Multiple Light Sources

    Get PDF
    Shadows in images are essential but sometimes unwanted, as they can degrade the results of computer vision algorithms. A shadow arises from the interaction of light with objects in the scene. Shadows can hinder image analysis and reduce the quality of the extracted information, which in turn causes problems for downstream algorithms. In this paper, a method is proposed to detect and remove shadows in scenes where multiple light sources must be estimated; in a stadium lit by several fixed floodlights, for example, multiple shadows can be observed originating from each target. To track individual targets successfully, it is essential to obtain an accurate image of the foreground. The paper also surveys several key techniques for shadow detection and removal. The shadow of background content is often merged with the foreground object, which makes the process more complex. DOI: 10.17762/ijritcc2321-8169.150517
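The abstract does not spell out the detection rule, but a common starting point is a luminance/chromaticity cue: shadow pixels are much darker than their surroundings while keeping roughly the same chromaticity. A minimal sketch along those lines (the `ratio_thresh` and chromaticity tolerance are illustrative assumptions, not the paper's values):

```python
import numpy as np

def shadow_mask(image, ratio_thresh=0.6):
    """Flag pixels whose luminance drops well below the global mean
    while chromaticity stays stable -- a classic shadow cue.
    `image` is an HxWx3 float array in [0, 1]."""
    luminance = image.mean(axis=2)
    # Chromaticity: per-channel share of total intensity (roughly shadow-invariant).
    total = image.sum(axis=2, keepdims=True) + 1e-8
    chroma = image / total
    dark = luminance < ratio_thresh * luminance.mean()
    # Keep only dark pixels whose chromaticity is close to the image average,
    # so dark *objects* with a distinct colour are not flagged as shadow.
    chroma_dist = np.abs(chroma - chroma.reshape(-1, 3).mean(axis=0)).sum(axis=2)
    return dark & (chroma_dist < 0.2)
```

With multiple floodlights, a mask like this would be computed per estimated source direction and the partial masks combined.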

    Improving Lens Flare Removal with General Purpose Pipeline and Multiple Light Sources Recovery

    Full text link
    Images taken against strong light sources often contain heterogeneous flare artifacts. These artifacts can significantly degrade image visual quality and downstream computer vision tasks. Because collecting real pairs of flare-corrupted/flare-free images for training flare removal models is challenging, current methods synthesize data with a direct-add approach. However, these methods do not account for automatic exposure and tone mapping in the image signal processing (ISP) pipeline, which limits the generalization capability of deep models trained on such data. In addition, existing methods struggle to handle multiple light sources, owing to the differing sizes, shapes and illuminance of the various sources. In this paper, we propose a solution that improves lens flare removal by revisiting the ISP, remodeling the principle of automatic exposure in the synthesis pipeline, and designing a more reliable light-source recovery strategy. The new pipeline approaches realistic imaging by discriminating local and global illumination through a convex combination, avoiding global illumination shift and local over-saturation. Our strategy for recovering multiple light sources convexly averages the input and output of the neural network based on illuminance levels, thereby avoiding the need for a hard threshold when identifying light sources. We also contribute a new flare removal testing dataset containing flare-corrupted images captured by ten types of consumer electronics; the dataset facilitates verification of the generalization capability of flare removal methods. Extensive experiments show that our solution effectively improves lens flare removal and pushes the frontier toward more general situations. Comment: ICCV 202
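The convex averaging of network input and output by illuminance can be sketched as a per-pixel blend: very bright light-source pixels are kept from the input rather than cut out by a hard threshold. The ramp bounds `lo`/`hi` below are hypothetical placeholders, not values from the paper:

```python
import numpy as np

def blend_by_illuminance(flare_input, net_output, lo=0.85, hi=0.99):
    """Convexly combine the flare-corrupted input and the network's
    flare-free prediction according to per-pixel illuminance, so bright
    light-source pixels survive without a hard detection threshold."""
    luminance = flare_input.mean(axis=2, keepdims=True)
    # Weight ramps from 0 (trust the network) to 1 (keep the input)
    # as luminance rises from `lo` to `hi`.
    w = np.clip((luminance - lo) / (hi - lo), 0.0, 1.0)
    return w * flare_input + (1.0 - w) * net_output
```

Pixels below `lo` take the network output entirely, pixels above `hi` keep the input entirely, and everything in between is a smooth convex mix.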

    Multiplexed Illumination for Scene Recovery in the Presence of Global Illumination

    Get PDF
    Global illumination effects such as inter-reflections and subsurface scattering cause systematic, and often significant, errors in scene recovery using active illumination. Recently, it was shown that the direct and global components can be separated efficiently for a scene illuminated by a single light source. In this paper, we study the problem of direct-global separation for multiple light sources. We derive a theoretical lower bound on the number of required images and propose a multiplexed illumination scheme that achieves this bound. We analyze the signal-to-noise ratio (SNR) characteristics of the proposed illumination multiplexing method in the context of direct-global separation. We apply our method to several scene recovery techniques requiring multiple light sources, including shape from shading, structured-light 3D scanning, photometric stereo, and reflectance estimation. Both simulation and experimental results show that the proposed method can accurately recover scene information with fewer images than sequentially separating the direct-global components for each light source.
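For context, the single-source fast separation that this work generalizes can be sketched as follows: illuminate with shifted high-frequency patterns in which a fraction `alpha` of pixels is lit, then recover the components from per-pixel extrema over the captures (a standard formulation; the multi-source multiplexing scheme of the paper is not reproduced here):

```python
import numpy as np

def separate_direct_global(images, alpha=0.5):
    """Single-source direct-global separation from shifted high-frequency
    illumination patterns. A lit pixel sees direct + alpha*global, an
    unlit pixel sees alpha*global, so per-pixel max/min over the stack
    give: direct = max - min, global = min / alpha."""
    stack = np.stack(images, axis=0)
    i_max = stack.max(axis=0)
    i_min = stack.min(axis=0)
    return i_max - i_min, i_min / alpha
```

The paper's contribution is doing this for several sources at once, with a multiplexing scheme that meets the derived lower bound on image count.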

    Mocarts: a lightweight radiation transport simulator for easy handling of complex sensing geometries

    Get PDF
    Functional near-infrared spectroscopy (fNIRS) neuroimaging requires elaborate sensing geometries pairing multiple light sources and detectors arranged over the tissue surface. A variety of software tools for probing forward models of radiation transport in tissue exist, but their handling of sensing geometries and specification of complex tissue architectures is often cumbersome. In this work, we introduce a lightweight simulator, the Monte Carlo Radiation Transport Simulator (MOCARTS), that addresses these demands by simplifying the specification of tissue architectures and complex sensing geometries. An object-oriented architecture facilitates this goal. The simulator core is evolved from the Monte Carlo Multi-Layer (mcml) tool but extended to support multi-channel simulations. Verification against mcml yields negligible error (RMSE ~4-10e-9) over a photon trajectory. Full simulations show concurrent validity of the proposed tool. Finally, the ability of the new software to simulate multi-channel sensing geometries and to define biological tissue models through an intuitive nested hierarchy is exemplified.
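The mcml-style core that such simulators share reduces to a weighted photon random walk. A heavily simplified sketch, assuming a homogeneous infinite medium, isotropic scattering, and illustrative optical coefficients (real mcml adds layers, boundaries, and Henyey-Greenstein anisotropy):

```python
import math
import random

def simulate_photon(mu_a=0.1, mu_s=10.0, rng=None):
    """One photon random walk, mcml-style: exponentially distributed
    step lengths with mean 1/(mu_a + mu_s), absorption modeled as a
    per-step weight decay by the single-scattering albedo. Returns the
    total path length travelled before the weight becomes negligible."""
    rng = rng or random.Random(0)
    mu_t = mu_a + mu_s
    weight, path = 1.0, 0.0
    while weight > 1e-4:
        step = -math.log(rng.random() + 1e-12) / mu_t
        path += step
        weight *= mu_s / mu_t  # albedo: fraction surviving absorption
    return path
```

Multi-channel simulation then amounts to scoring each photon's exit position against every source-detector pair of the sensing geometry.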

    Fusion for Multiple Light Sources in Texture Mapping Object

    Get PDF
    Abstract—In this paper, a fusion method based on the wavelet transform is used to combine the effects of multiple light sources, making the object look more realistic and more informative than renderings that account for the effect of each light source alone. The colour value of any pixel in the object depends essentially on the colour of the mapped texture as well as on the effect of the light sources, which produce brighter or darker pixel values depending on the distance between the pixel and each light source and on the direction of the light rays. The proposed method merges the effects of multiple lights. Instead of summing these effects, the method assigns each pixel one of three states: the maximum effect among all light sources, the minimum effect among all sources (to reflect fine shadow), or a ratio effect over all sources, where the ratio depends on the relative parameters among the sources.
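The three per-pixel states can be sketched directly; the `ratio` weighting below is one plausible reading of "relative parameters among all sources", not the paper's exact formula, and the wavelet-domain fusion step is omitted:

```python
import numpy as np

def fuse_light_effects(effects, mode="max"):
    """Combine per-pixel brightness contributions from several light
    sources without simply summing them. `effects` is a list of HxW
    arrays, one per source; `mode` selects one of the three states."""
    stack = np.stack(effects, axis=0)
    if mode == "max":    # strongest source dominates
        return stack.max(axis=0)
    if mode == "min":    # weakest source -> preserves fine shadow
        return stack.min(axis=0)
    if mode == "ratio":  # mix weighted by each source's share
        weights = stack / (stack.sum(axis=0, keepdims=True) + 1e-8)
        return (weights * stack).sum(axis=0)
    raise ValueError(mode)
```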

    Colour Constancy For Non‐Uniform Illuminant using Image Textures

    Get PDF
    Colour constancy (CC) is the ability to perceive the true colour of a scene in its image regardless of changes in the scene's illuminant. Colour constancy is a significant part of the digital image processing pipeline, particularly wherever the true colour of an object is needed. Most existing CC algorithms assume a uniform illuminant across the whole scene, which is not always the case; hence, their performance suffers in the presence of multiple light sources. This paper presents a colour constancy algorithm that uses image texture for both uniformly and non-uniformly lit scene images. The proposed algorithm applies K-means clustering to segment the input image based on its colour features. Each segment's texture is then extracted using entropy analysis. The colour information of the texture pixels is used to calculate an initial colour constancy adjustment factor for each segment. Finally, the colour constancy adjustment factors for each pixel are determined by fusing the adjustment factors of all segments, weighted by the Euclidean distance of the pixel from the centre of each segment. Experimental results on both single- and multiple-illuminant image datasets show that the proposed algorithm outperforms existing state-of-the-art colour constancy algorithms, particularly when images are lit by multiple light sources.
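The final fusion step, blending per-segment adjustment factors into per-pixel ones, might look like the following sketch. The inverse-distance weighting is an assumption: the abstract only says the factors are regulated by the Euclidean distance from each segment centre:

```python
import numpy as np

def per_pixel_gains(h, w, centres, seg_gains, eps=1e-6):
    """Blend per-segment colour-constancy gains into per-pixel gains,
    weighting each segment by inverse Euclidean distance from the pixel
    to the segment centre. `centres` is a list of (y, x) pairs and
    `seg_gains` a matching list of per-segment RGB gain triples."""
    ys, xs = np.mgrid[0:h, 0:w]
    weights = []
    for cy, cx in centres:
        d = np.sqrt((ys - cy) ** 2 + (xs - cx) ** 2)
        weights.append(1.0 / (d + eps))  # nearer segments dominate
    weights = np.stack(weights, axis=0)
    weights /= weights.sum(axis=0, keepdims=True)  # convex weights
    # (S,H,W) weights against (S,3) gains -> (H,W,3) per-pixel gains.
    return np.tensordot(weights, np.asarray(seg_gains), axes=(0, 0))
```

Each pixel thus smoothly interpolates between the white-balance corrections of nearby segments instead of switching abruptly at segment borders.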