12 research outputs found

    Development of a Non-contact Skin Temperature Measurement System for Newborns Based on Intelligent Analysis of Thermometric Data

    Received: 16.10.2022. Accepted for publication: 07.12.2022. Body temperature is a vital parameter when monitoring newborns. Thermometry in newborns is complicated by certain anatomical features that make infants vulnerable to changes in ambient temperature: a reduced amount of subcutaneous fat, a thin epidermis, and blood vessels located close to the skin surface. This paper selects a method for converting raw pyrometric data into a thermal map, and analyses the resulting experimental thermal-imaging data to assess whether a pyrometric method can be used to scan the bed of an incubator for newborns. The study was carried out by the Civil Instrumentation Research and Design Bureau of the Joint Stock Company Production Association "Ural Optical and Mechanical Plant named after E. S. Yalamov" within the comprehensive project "Modernization and introduction into industrial production of the BONO line of neonatal medical devices in order to increase import independence".
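    The conversion step mentioned above can be made concrete with a toy sketch. The abstract does not specify which conversion method the paper selects, so this example simply upsamples a coarse grid of pyrometer spot readings (a hypothetical 3×3 scan of the incubator bed) into a dense thermal map by bilinear interpolation; all names and values are invented for illustration.

    ```python
    import numpy as np

    def pyrometric_to_thermal_map(samples, out_shape):
        """Upsample a coarse grid of pyrometer temperature readings (deg C)
        into a dense thermal map by bilinear interpolation."""
        h, w = samples.shape
        H, W = out_shape
        # Fractional coordinates of each output pixel in the sample grid.
        ys = np.linspace(0, h - 1, H)
        xs = np.linspace(0, w - 1, W)
        y0 = np.clip(np.floor(ys).astype(int), 0, h - 2)
        x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)
        fy = (ys - y0)[:, None]
        fx = (xs - x0)[None, :]
        tl = samples[np.ix_(y0, x0)]       # top-left neighbours
        tr = samples[np.ix_(y0, x0 + 1)]   # top-right
        bl = samples[np.ix_(y0 + 1, x0)]   # bottom-left
        br = samples[np.ix_(y0 + 1, x0 + 1)]
        return (tl * (1 - fy) * (1 - fx) + tr * (1 - fy) * fx
                + bl * fy * (1 - fx) + br * fy * fx)

    # Hypothetical 3x3 scan with a warm spot in the centre, upsampled to 60x60.
    scan = np.array([[32.0, 33.0, 32.5],
                     [33.0, 36.5, 33.5],
                     [32.0, 33.0, 32.0]])
    heatmap = pyrometric_to_thermal_map(scan, (60, 60))
    ```

    Bilinear interpolation keeps every interpolated temperature within the range of the measured samples, which is a reasonable minimum requirement for any such conversion.
    
    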

    Focusing on out-of-focus: assessing defocus estimation algorithms for the benefit of automated image masking

    Acquiring photographs as input for an image-based modelling pipeline is less trivial than often assumed. Photographs should be correctly exposed, cover the subject sufficiently from all possible angles, have the required spatial resolution, be devoid of any motion blur, exhibit accurate focus and feature an adequate depth of field. The last four characteristics all determine the "sharpness" of an image, and the photogrammetric, computer vision and hybrid photogrammetric computer vision communities all assume that the object to be modelled is depicted "acceptably" sharp throughout the whole image collection. Although none of these three fields has ever properly quantified "acceptably sharp", it is more or less standard practice to mask those image portions that appear to be unsharp due to the limited depth of field around the plane of focus (whether this means blurry object parts or completely out-of-focus backgrounds). This paper assesses how well- or ill-suited defocus estimating algorithms are for automatically masking a series of photographs, since this could speed up modelling pipelines with many hundreds or thousands of photographs. To that end, the paper uses five different real-world datasets and compares the output of three state-of-the-art edge-based defocus estimators. Afterwards, critical comments and plans for the future finalise this paper.
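    The masking idea described above can be sketched minimally: tile the image, measure high-frequency (Laplacian) energy per tile, and mask tiles whose energy falls below a threshold. This is a crude stand-in for the three edge-based defocus estimators the paper actually compares; the tile size and threshold are arbitrary assumptions.

    ```python
    import numpy as np

    def defocus_mask(gray, tile=16, thresh=0.05):
        """Boolean mask, True where a tile looks 'in focus'.
        Sharp tiles carry strong Laplacian energy; smooth (defocused)
        tiles carry almost none. Borders wrap via np.roll."""
        h, w = gray.shape
        h -= h % tile
        w -= w % tile
        g = gray[:h, :w]
        lap = (-4 * g
               + np.roll(g, 1, 0) + np.roll(g, -1, 0)
               + np.roll(g, 1, 1) + np.roll(g, -1, 1))
        # Mean squared Laplacian per non-overlapping tile.
        energy = (lap ** 2).reshape(h // tile, tile, w // tile, tile).mean(axis=(1, 3))
        keep = energy > thresh
        return np.repeat(np.repeat(keep, tile, 0), tile, 1)

    # Synthetic test image: textured ("sharp") left half, flat ("defocused") right half.
    rng = np.random.default_rng(0)
    img = np.zeros((64, 64))
    img[:, :32] = rng.random((64, 32))
    img[:, 32:] = 0.5
    mask = defocus_mask(img, tile=16, thresh=0.05)
    ```

    A real pipeline would of course use a proper defocus estimator rather than raw Laplacian energy, but the thresholding-and-masking structure is the same.
    
    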

    Single image defocus estimation by modified gaussian function

    © 2019 John Wiley & Sons, Ltd. This article presents an algorithm to estimate the defocus blur from a single image. Most existing methods estimate the defocus blur at edge locations, which involves a reblurring process. For this purpose, existing methods use the traditional Gaussian function in the reblurring phase, but the traditional Gaussian kernel is sensitive to edges and can cause loss of edge information, so there is a greater chance of missing spatially varying blur at edge locations. We offer repeated averaging filters as an alternative to the traditional Gaussian function, which is more effective at estimating the spatially varying defocus blur at edge locations. Using repeated averaging filters, a sparse blur map is computed. The obtained sparse map is propagated by integrating superpixel segmentation and transductive inference to estimate the full defocus blur map. Our method of repeated averaging filters requires less computation time for defocus blur map estimation and gives better visual estimates of the final recovered defocus map. Moreover, it surpasses many previous state-of-the-art systems in terms of quantitative analysis.
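    The substitution at the heart of this paper rests on a standard fact: convolving a box (averaging) filter with itself repeatedly converges to a Gaussian shape, by the central limit theorem. A minimal sketch of that equivalence (filter width and pass count are arbitrary choices, not the paper's parameters):

    ```python
    import numpy as np

    def repeated_average(signal, width=3, times=3):
        """Apply a length-`width` moving-average filter `times` times.
        The composed kernel tends to a Gaussian, which is why repeated
        averaging can stand in for Gaussian reblurring."""
        kernel = np.ones(width) / width
        out = signal.astype(float)
        for _ in range(times):
            out = np.convolve(out, kernel, mode='same')
        return out

    # The effective kernel after three passes is already bell-shaped:
    impulse = np.zeros(31)
    impulse[15] = 1.0
    kernel = repeated_average(impulse, width=3, times=3)
    ```

    Applying the filter to an impulse exposes the effective kernel: it is symmetric, unimodal, and sums to one, i.e. Gaussian-like, while each pass costs only additions and one division.
    
    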

    Real-Time Embedded Eye Image Defocus Estimation for Iris Biometrics

    One of the main challenges faced by iris recognition systems is to be able to work with people in motion, with the sensor at an increasing distance (more than 1 m) from the person. The ultimate goal is to make the system less and less intrusive and to require less cooperation from the person. When this scenario is implemented using a single static sensor, the sensor must have a wide field of view and the system must process a large number of frames per second (fps). In such a scenario, many of the captured eye images will not have adequate quality (contrast or resolution). This paper describes the implementation in an MPSoC (multiprocessor system-on-chip) of an eye image detection system that integrates, in the programmable logic (PL) part, a functional block to evaluate the level of defocus blur of the captured images. In this way, the system can discard images that do not have the required focus quality before the subsequent processing steps. The proposals were designed using Vitis High Level Synthesis (VHLS) and integrated into an eye detection framework capable of processing over 57 fps with a 16 Mpixel sensor. Using an extended version of the CASIA-Iris-distance V4 database for validation, the experimental evaluation shows that the proposed framework successfully discards unfocused eye images. More relevantly, in a real implementation this proposal allows discarding up to 97% of out-of-focus eye images, which then do not have to be processed by the segmentation and normalised iris pattern extraction blocks. Funding for open access charge: Universidad de Málaga / CBU
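    The gating idea above can be sketched in a few lines: compute one scalar focus measure per captured frame and discard frames below a threshold. This uses a simple Laplacian-energy measure in floating point, not the actual fixed-point block the paper synthesises with Vitis HLS; the threshold value is an arbitrary assumption.

    ```python
    import numpy as np

    def focus_score(gray):
        """Scalar focus measure: mean energy of a high-pass (Laplacian)
        filtered image. Flat, defocused frames score near zero."""
        hp = (-4 * gray
              + np.roll(gray, 1, 0) + np.roll(gray, -1, 0)
              + np.roll(gray, 1, 1) + np.roll(gray, -1, 1))
        return float((hp ** 2).mean())

    def accept(gray, thresh=0.01):
        # Frames below the threshold are discarded before segmentation.
        return focus_score(gray) >= thresh

    rng = np.random.default_rng(1)
    sharp = rng.random((64, 64))            # textured proxy for an in-focus frame
    blurred = np.full((64, 64), 0.5)        # flat proxy for an out-of-focus frame
    ```

    In a hardware implementation the appeal of this structure is that the rejected 97% of frames never reach the expensive segmentation and normalisation stages.
    
    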

    Noise-Resilient Depth Estimation for Light Field Images Using Focal Stack and FFT Analysis

    Depth estimation for light field images is essential for applications such as light field image compression, reconstructing perspective views and 3D reconstruction. Previous depth map estimation approaches do not capture sharp transitions around object boundaries due to occlusions, making many of the current approaches unreliable at depth discontinuities. This is especially the case for light field images because the pixels do not exhibit photo-consistency in the presence of occlusions. In this paper, we propose an algorithm to estimate the depth map for light field images using depth from defocus. Our approach uses a small patch size of pixels in each focal stack image for comparing defocus cues, allowing the algorithm to generate sharper depth boundaries. Then, in contrast to existing approaches that use defocus cues for depth estimation, we use frequency-domain image similarity analysis to generate the depth map. Processing in the frequency domain reduces the individual pixel errors that occur while directly comparing RGB images, making the algorithm more resilient to noise. The algorithm has been evaluated on both a synthetic image dataset and real-world images in the JPEG dataset. Experimental results demonstrate that our proposed algorithm outperforms state-of-the-art depth estimation techniques for light field images, particularly in the case of noisy images.
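    A minimal sketch of the patch-wise, frequency-domain idea: for each small patch, compare high-frequency FFT energy across the focal stack and take the slice index with the most energy as the depth label. This simplifies the paper's similarity check considerably; the patch size, frequency cutoff, and the box-blur defocus proxy are all assumptions made for illustration.

    ```python
    import numpy as np

    def box_blur(img):
        """3x3 box blur with wrap-around borders (crude defocus proxy)."""
        return sum(np.roll(np.roll(img, dy, 0), dx, 1)
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0

    def depth_from_focal_stack(stack, patch=8):
        """Label each patch with the focal-slice index holding the most
        high-frequency FFT energy, i.e. the slice where it is sharpest."""
        n, h, w = stack.shape
        h -= h % patch
        w -= w % patch
        ph, pw = h // patch, w // patch
        depth = np.zeros((ph, pw), dtype=int)
        # High-frequency mask over the patch spectrum (drop the DC area).
        fy = np.fft.fftfreq(patch)[:, None]
        fx = np.fft.fftfreq(patch)[None, :]
        high = np.sqrt(fy ** 2 + fx ** 2) > 0.15
        for i in range(ph):
            for j in range(pw):
                tiles = stack[:, i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
                spec = np.abs(np.fft.fft2(tiles, axes=(1, 2)))
                energy = (spec * high).sum(axis=(1, 2))
                depth[i, j] = int(np.argmax(energy))
        return depth

    # Synthetic focal stack: slice 1 is sharp everywhere, slices 0 and 2 are blurred.
    rng = np.random.default_rng(2)
    sharp = rng.random((32, 32))
    stack = np.stack([box_blur(box_blur(sharp)), sharp, box_blur(sharp)])
    depth = depth_from_focal_stack(stack, patch=8)
    ```

    Comparing spectra rather than raw pixel values is what gives the approach its noise resilience: a few corrupted pixels perturb every frequency bin only slightly instead of dominating a direct RGB difference.
    
    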

    A Mathematical Approach to Monochromatic Aberration Correction

    Doctoral dissertation, Department of Mathematical Sciences, College of Natural Sciences, Seoul National University, February 2020. Advisor: Myungjoo Kang. This thesis introduces efficient and effective methods for solving monochromatic aberration correction problems. The proposed methods are based on the Forward-Backward proximal splitting method, which solves the optimization problem by iteratively alternating two sub-steps: (1) gradient descent and (2) noise removal. Since the gradient-descent step has a high computational cost, we develop a low-cost implementation of the aberration operator and its transpose. We then propose six different methods, based on six types of regularization in the noise-removal step. The thesis reports experiments on the proposed image restoration methods, using synthetic images generated by point spread functions (PSFs) that emulate the effects of monochromatic aberration in modern digital cameras.
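    The two alternating sub-steps can be made concrete with the simplest instance of Forward-Backward splitting: ISTA for an l1-regularized least-squares problem, where the "noise removal" prox is soft-thresholding. In the thesis the forward operator is the (much more expensive) aberration operator and six different regularizers are used; here A is just a random matrix, so this is only a structural sketch.

    ```python
    import numpy as np

    def forward_backward(A, b, lam=0.1, iters=500):
        """Forward-Backward (proximal gradient / ISTA) iteration for
        min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
        t = 1.0 / np.linalg.norm(A, 2) ** 2          # step size 1/L, L = ||A||_2^2
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            g = A.T @ (A @ x - b)                    # (1) gradient (forward) step
            z = x - t * g
            x = np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)  # (2) prox (backward) step
        return x

    # Recover a sparse signal from noiseless linear measurements.
    rng = np.random.default_rng(3)
    A = rng.standard_normal((40, 20))
    x_true = np.zeros(20)
    x_true[[3, 7, 12]] = [1.0, -2.0, 1.5]
    b = A @ x_true
    x_hat = forward_backward(A, b, lam=0.05, iters=500)
    ```

    The split is what makes the method practical: the forward step touches only the (costly) data term, while all regularizer-specific work is isolated in the prox step, so swapping in a different denoiser changes one line.
    
    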

    Non-parametric Blur Map Regression for Depth of Field Extension

    Real camera systems have a limited depth of field (DOF), which may cause an image to be degraded by visible misfocus or a too-shallow DOF. In this paper, we present a blind deblurring pipeline able to restore such images by slightly extending their DOF and recovering sharpness in regions slightly out of focus. To address this severely ill-posed problem, our algorithm relies first on the estimation of the spatially-varying defocus blur. Drawing on local frequency image features, a machine learning approach based on the recently introduced Regression Tree Fields is used to train a model able to regress a coherent defocus blur map of the image, labeling each pixel with the scale of a defocus point spread function. A non-blind spatially-varying deblurring algorithm is then used to properly extend the DOF of the image. The good performance of our algorithm is assessed both quantitatively, using realistic ground truth data obtained with a novel approach based on a plenoptic camera, and qualitatively with real images.
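    For intuition on what "labeling each pixel with the scale of a defocus PSF" means, a single per-edge blur scale can be recovered with the classic gradient-ratio trick (reblur with a known Gaussian and compare gradient magnitudes, as in Zhuo and Sim's defocus estimation). The paper instead regresses a dense, coherent blur map with Regression Tree Fields; this 1-D sketch only illustrates the quantity being regressed.

    ```python
    import numpy as np

    def gauss_kernel(sigma, radius=None):
        radius = radius or int(4 * sigma + 1)
        x = np.arange(-radius, radius + 1)
        k = np.exp(-x ** 2 / (2 * sigma ** 2))
        return k / k.sum()

    def estimate_defocus_at_edge(signal, sigma0=1.0):
        """Gradient-ratio defocus estimate at the strongest edge of a
        1-D signal: reblur with a Gaussian of known sigma0, then
        sigma = sigma0 / sqrt(R^2 - 1) where R is the gradient ratio."""
        reblur = np.convolve(signal, gauss_kernel(sigma0), mode='same')
        g = np.gradient(signal)
        gr = np.gradient(reblur)
        i = int(np.argmax(np.abs(g)))        # strongest edge location
        R = abs(g[i]) / abs(gr[i])
        return sigma0 / np.sqrt(R ** 2 - 1.0)

    # A step edge blurred with sigma = 2 should yield an estimate near 2.
    x = np.arange(200)
    step = (x >= 100).astype(float)
    blurred = np.convolve(step, gauss_kernel(2.0), mode='same')
    sigma_hat = estimate_defocus_at_edge(blurred, sigma0=1.0)
    ```

    The closed form follows from Gaussians composing as sigma_blur = sqrt(sigma^2 + sigma0^2): the gradient magnitude at an edge scales as 1/sigma, so the ratio before and after reblurring isolates the unknown sigma.
    
    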

    Visual Quality Assessment and Blur Detection Based on the Transform of Gradient Magnitudes

    Digital imaging and image processing technologies have revolutionized the way in which we capture, store, receive, view, utilize, and share images. In image-based applications, through different processing stages (e.g., acquisition, compression, and transmission), images are subjected to different types of distortions which degrade their visual quality. Image Quality Assessment (IQA) attempts to use computational models to automatically evaluate and estimate image quality in accordance with subjective evaluations. Moreover, with the fast development of computer vision techniques, it is important in practice to extract and understand the information contained in blurred images or regions. The work in this dissertation focuses on reduced-reference visual quality assessment of images and textures, as well as perceptual-based spatially-varying blur detection. A training-free, low-cost Reduced-Reference IQA (RRIQA) method is proposed that requires a very small number of reduced-reference (RR) features. Extensive experiments performed on different benchmark databases demonstrate that the proposed RRIQA method delivers highly competitive performance compared with state-of-the-art RRIQA models for both natural and texture images. In the context of texture, the effect of texture granularity on the quality of synthesized textures is studied, and two RR objective visual quality assessment methods that quantify the perceived quality of synthesized textures are proposed. Performance evaluations on two synthesized-texture databases demonstrate that the proposed RR metrics outperform full-reference (FR), no-reference (NR), and RR state-of-the-art quality metrics in predicting the perceived visual quality of the synthesized textures.
    Last but not least, an effective approach is proposed to address the spatially-varying blur detection problem from a single image without requiring any knowledge about the blur type, level, or camera settings. Evaluations of the proposed approach on diverse sets of blurry images with different blur types, levels, and content demonstrate that the proposed algorithm performs favorably against state-of-the-art methods, both qualitatively and quantitatively. Doctoral Dissertation, Electrical Engineering, 201
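    The reduced-reference idea, sending only a handful of features of the reference image instead of the image itself, can be sketched with invented features: moments of the gradient-magnitude distribution. The dissertation's actual RR features come from a transform of the gradient magnitudes; these raw moments are purely illustrative.

    ```python
    import numpy as np

    def grad_mag(img):
        gy, gx = np.gradient(img)
        return np.hypot(gx, gy)

    def rr_features(img):
        """Tiny reduced-reference feature vector: three moments of the
        gradient-magnitude distribution (illustrative, not the
        dissertation's transform-domain features)."""
        m = grad_mag(img).ravel()
        return np.array([m.mean(), m.std(), np.percentile(m, 90)])

    def rr_quality(ref_feats, dist_img):
        # Larger feature distance -> lower predicted quality, in (0, 1].
        d = np.linalg.norm(ref_feats - rr_features(dist_img))
        return 1.0 / (1.0 + d)

    rng = np.random.default_rng(4)
    ref = rng.random((64, 64))
    slight = ref + 0.01 * rng.standard_normal(ref.shape)  # mild distortion
    heavy = np.full_like(ref, ref.mean())                 # structure destroyed
    f = rr_features(ref)                                  # only f is transmitted
    ```

    Only the three numbers in `f` need to accompany the image through the processing chain, which is the point of the RR setting: quality is estimated without access to the full reference.
    
    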