    Automatic exposure control in network video cameras

    The overall objective of this study is to describe, analyse, and suggest improvements to the existing automatic exposure control systems in selected network video cameras. Since an image sensor has a limited dynamic range compared to a real scene, the exposure level must be controlled automatically to adapt to the amount of light in the scene. This can be done by adjusting parameters such as exposure time, gain, and variable aperture in an automatic control loop. The two cameras in this study run different implementations of such a control loop; this study tests their performance, reviews their implementations of automatic exposure control, comments on them from a theoretical standpoint, and suggests improvements. The main focus has been on correcting the integrator function, or adding integrator functionality to the controllers, in order to remove steady-state errors. Integrator windup was solved for two cases, and some other minor bugs causing unwanted behaviour, such as finite word length effects in the integrators, were fixed. Improvements to gain scheduling and corrections to the clamping of signals are also suggested. A suggested improvement to smear control is to feed forward exposure changes when they are needed; this enables faster control while still limiting the impact on picture quality.
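    The abstract mentions adding integrator functionality to remove steady-state errors and solving integrator windup. A minimal sketch of one common remedy, conditional integration (freeze the integrator while the actuator is saturated), is shown below; the controller, its gains, and the single "exposure" actuator standing in for exposure time/gain/aperture are all hypothetical and not the cameras' actual implementation:

    ```python
    class ExposurePI:
        """PI exposure controller with conditional-integration anti-windup.

        Hypothetical sketch: drives a measured scene brightness toward a
        target by adjusting one 'exposure' actuator, clamped to
        [u_min, u_max]. The integrator only accumulates while the output
        is unsaturated, which prevents windup.
        """

        def __init__(self, kp, ki, u_min, u_max):
            self.kp, self.ki = kp, ki
            self.u_min, self.u_max = u_min, u_max
            self.integral = 0.0

        def step(self, target, measured, dt):
            error = target - measured
            # Tentative output including the would-be integral update.
            u = self.kp * error + self.ki * (self.integral + error * dt)
            if u > self.u_max:
                return self.u_max          # saturated high: integrator frozen
            if u < self.u_min:
                return self.u_min          # saturated low: integrator frozen
            self.integral += error * dt    # unsaturated: integrate normally
            return u


    # Toy plant: brightness responds instantly as twice the exposure setting.
    ctrl = ExposurePI(kp=0.1, ki=1.0, u_min=0.0, u_max=1.0)
    brightness = 0.0
    for _ in range(200):
        u = ctrl.step(target=0.5, measured=brightness, dt=0.1)
        brightness = 2.0 * u
    # The integral term removes the steady-state error: brightness -> 0.5.
    ```

    With a purely proportional controller the same loop would settle with a persistent offset from the target; the integrator accumulates exactly the bias needed to cancel it, which is why the abstract emphasises adding or correcting integrator functionality.
    
    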

    Extending the dynamic range of robotic vision

    Conventional cameras have limited dynamic range, and as a result vision-based robots cannot effectively view an environment made up of both sunny outdoor areas and darker indoor areas. This paper presents an approach to extend the effective dynamic range of a camera, achieved by changing the exposure level of the camera in real time to form a sequence of images which collectively cover a wide range of radiance. Individual control algorithms for each image have been developed to maximize the viewable area across the sequence. Spatial discrepancies between images, caused by the moving robot, are reduced by a real-time image registration process. The sequence is then combined by merging color and contour information. By integrating these techniques it becomes possible to operate a vision-based robot in wide-radiance-range scenes.
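    The core idea of merging a bracketed exposure sequence into one wide-radiance result can be sketched as a weighted average of per-frame radiance estimates, down-weighting under- and over-exposed pixels. This is a generic multi-exposure merge under stated simplifying assumptions (linear sensor response, already-registered frames, pixel values in [0, 1]), not the specific merging method of the paper, which also uses contour information:

    ```python
    def hat_weight(z, z_min=0.05, z_max=0.95):
        """Triangular ('hat') weight favouring well-exposed pixels;
        near-black and near-saturated values contribute nothing."""
        if z <= z_min or z >= z_max:
            return 0.0
        mid = 0.5 * (z_min + z_max)
        return 1.0 - abs(z - mid) / (mid - z_min)


    def merge_exposures(frames, exposure_times):
        """Merge a bracketed sequence into a relative radiance map.

        frames: list of images, each a flat list of pixel values in [0, 1],
                assumed linear and already registered (pixel i corresponds
                across frames). exposure_times: matching exposure times.
        """
        radiance = []
        for i in range(len(frames[0])):
            num = den = 0.0
            for frame, t in zip(frames, exposure_times):
                z = frame[i]
                w = hat_weight(z)
                num += w * (z / t)   # per-frame radiance estimate
                den += w
            # Fall back to the longest exposure if every frame was
            # badly exposed at this pixel.
            radiance.append(num / den if den > 0
                            else frames[-1][i] / exposure_times[-1])
        return radiance
    ```

    A short exposure recovers the bright pixels that saturate in the long exposure, while the long exposure recovers the dark pixels that fall below the noise floor in the short one; the weights select whichever frame saw each pixel well.
    
    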