
    Critical Computational Aspects of Near Infrared Circular Tomographic Imaging: Analysis of Measurement Number, Mesh Resolution and Reconstruction Basis

    The image resolution and contrast in Near-Infrared (NIR) tomographic image reconstruction are affected by parameters such as the number of boundary measurements, the mesh resolution in the forward calculation, and the reconstruction basis. Increasing the number of measurements tends to make the sensitivity of the domain more uniform, reducing the hypersensitivity at the boundary. Using singular-value decomposition (SVD) and reconstructed images, it is shown that 16 or 24 fibers are sufficient for imaging the 2D circular domain when the data contain 1% noise. The number of useful singular values increases as the logarithm of the number of measurements. For this 2D reconstruction problem, a computational limit of 10 s per iteration leads to the choice of a forward mesh with 1785 nodes and a reconstruction basis of 30×30 elements. In a three-dimensional (3D) NIR imaging problem, a single plane of data can provide useful images if the anomaly to be reconstructed lies within the measurement plane. However, if the location of the anomaly is not known, 3D data-collection strategies become very important. Further, the quantitative accuracy of the reconstructed anomaly increases from approximately 15% to 89% as the anomaly is moved from the centre to the boundary. The data support the conclusion that excluding out-of-plane measurements may be valid for 3D NIR imaging.
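    As a rough illustration of the SVD analysis above, the sketch below counts the singular values of a sensitivity (Jacobian) matrix that sit above a 1% relative noise floor. The random Jacobian, the fiber counts, and the 30×30 basis size are stand-ins for illustration; in practice the Jacobian comes from the diffusion-equation forward model on the finite-element mesh.

```python
import numpy as np

def useful_singular_values(J: np.ndarray, noise_level: float = 0.01) -> int:
    """Count singular values above the relative noise threshold."""
    s = np.linalg.svd(J, compute_uv=False)   # singular values, descending
    return int(np.sum(s / s[0] > noise_level))

# Stand-in Jacobians: all source-detector pairs for each fiber count.
rng = np.random.default_rng(0)
for n_fibers in (8, 16, 24, 32):
    n_meas = n_fibers * (n_fibers - 1)           # one measurement per pair
    J = rng.standard_normal((n_meas, 30 * 30))   # 30x30 reconstruction basis
    print(f"{n_fibers} fibers -> {useful_singular_values(J)} useful singular values")
```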

    Design and Implementation of Deep Learning Based Contactless Authentication System Using Hand Gestures

    Hand-gesture-based sign language digits have several contactless applications, including communication for impaired people such as the elderly and disabled, health-care applications, automotive user interfaces, and security and surveillance. This work presents the design and implementation of a complete end-to-end deep-learning-based edge computing system that can verify a user contactlessly using an ‘authentication code’. The ‘authentication code’ is an n-digit numeric code whose digits are entered as sign language hand gestures. We propose a memory-efficient deep learning model to classify the hand gestures of the sign language digits. The proposed model is based on a bottleneck module inspired by deep residual networks, and it achieves a classification accuracy of 99.1% on the publicly available sign language digits dataset. The model is deployed on a Raspberry Pi 4 Model B to serve as an edge device for user verification. The edge computing system operates in two steps: it first takes input from the attached camera in real time and stores it in a buffer; in the second step, the model classifies the digit, taking the first image in the buffer as input, with an inference time of 280 ms.
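    The sketch below shows what a bottleneck residual block of the kind referenced above can look like: a generic 1×1/3×3/1×1 design with a skip connection, written in PyTorch. The channel counts, the single-block toy classifier, and the grayscale input are assumptions for illustration, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class Bottleneck(nn.Module):
    """1x1 -> 3x3 -> 1x1 convolutions with an identity skip connection."""
    def __init__(self, channels: int, reduced: int):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(channels, reduced, kernel_size=1, bias=False),   # squeeze
            nn.BatchNorm2d(reduced),
            nn.ReLU(inplace=True),
            nn.Conv2d(reduced, reduced, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(reduced),
            nn.ReLU(inplace=True),
            nn.Conv2d(reduced, channels, kernel_size=1, bias=False),   # expand
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.relu(self.block(x) + x)   # residual addition

# Toy classifier for the ten sign language digits (0-9); illustrative only.
model = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, padding=1),
    Bottleneck(32, reduced=8),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 10),
)
logits = model(torch.randn(1, 1, 64, 64))   # one 64x64 grayscale frame
```

    The squeeze-expand structure is what makes such a block memory-efficient: most of the 3×3 computation happens at the reduced channel width.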

    Special Issue on Applied Computational Science and Engineering

    The special issue on applied computational science and engineering from the Journal of the Indian Institute of Science is very timely, as it reflects the paradigm shift towards accepting computational science as an important academic discipline. Even in traditional institutes like the Indian Institute of Science (IISc), the creation of a full-fledged academic department in computational science can be considered a welcome and important step.

    Data-resolution based optimization of the data-collection strategy for near infrared diffuse optical tomography

    Purpose: To optimize the data-collection strategy for diffuse optical tomography and to obtain a set of independent measurements among the total measurements using the characteristics of the model-based data-resolution matrix. Methods: The data-resolution matrix is computed from the sensitivity matrix and the regularization scheme used in the reconstruction procedure, by matching the predicted data with the actual data. The diagonal values of the data-resolution matrix indicate the importance of a particular measurement, and the magnitude of the off-diagonal entries shows the dependence among measurements. Independent measurements are chosen based on how close the diagonal magnitudes are to the off-diagonal entries. The reconstruction results obtained using all measurements were compared with those obtained using only the independent measurements, in both numerical and experimental phantom cases. The traditional singular-value analysis was also performed for comparison with the proposed method. Results: The results indicate that choosing only independent measurements, based on data-resolution matrix characteristics, for the image reconstruction does not compromise the reconstructed image quality significantly and in turn reduces the data-collection time associated with the procedure. When the same number of measurements (equivalent to the independent ones) were chosen at random, the reconstructions had poor quality, with major boundary artifacts. The number of independent measurements obtained using the data-resolution matrix analysis is much higher than that obtained using singular-value analysis. Conclusions: The data-resolution matrix analysis is able to provide the high level of optimization needed for effective data collection in diffuse optical imaging. The analysis itself is independent of the noise characteristics of the data, resulting in a universal framework for characterizing and optimizing a given data-collection strategy. (C) 2012 American Association of Physicists in Medicine. http://dx.doi.org/10.1118/1.4736820
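    A minimal sketch of the data-resolution idea described above, assuming Tikhonov regularization: the predicted data are d_pred = J(JᵀJ + λI)⁻¹Jᵀd = Nd, so N is the data-resolution matrix. The random sensitivity matrix and the diagonal-dominance selection rule below are illustrative stand-ins, not the paper's exact implementation.

```python
import numpy as np

def data_resolution_matrix(J: np.ndarray, lam: float) -> np.ndarray:
    """N = J (J^T J + lam*I)^(-1) J^T, mapping observed to predicted data."""
    n = J.shape[1]
    return J @ np.linalg.solve(J.T @ J + lam * np.eye(n), J.T)

def independent_measurements(N: np.ndarray) -> np.ndarray:
    """Keep measurements whose diagonal entry exceeds every off-diagonal one."""
    diag = np.diag(N)
    off = np.abs(N - np.diag(diag)).max(axis=1)   # strongest cross-dependence
    return np.where(diag > off)[0]

rng = np.random.default_rng(1)
J = rng.standard_normal((240, 500))   # stand-in: measurements x parameters
N = data_resolution_matrix(J, lam=1e-2)
keep = independent_measurements(N)
print(f"{keep.size} of {J.shape[0]} measurements treated as independent")
```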

    Data-resolution based optimal choice of minimum required measurements for image-guided diffuse optical tomography

    Image-guided diffuse optical tomography has the advantage of reducing the total number of optical parameters being reconstructed to the number of distinct tissue types identified by the traditional imaging modality, converting the optical image-reconstruction problem from underdetermined to overdetermined. In such cases, the minimum required measurements may be far fewer than in traditional diffuse optical imaging. An approach to choosing these optimally based on a data-resolution matrix is proposed, and it is shown that such a choice does not compromise the reconstruction performance. (C) 2013 Optical Society of America.
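    The dimension reduction described above can be sketched as follows: given a prior anatomical segmentation, columns of the node-basis sensitivity matrix are summed per tissue type, shrinking the unknowns from thousands of mesh nodes to a handful of regions so the system becomes overdetermined. All sizes and the random matrices here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_meas, n_nodes, n_regions = 48, 2000, 3
J = rng.standard_normal((n_meas, n_nodes))      # node-basis sensitivity matrix
labels = rng.integers(0, n_regions, n_nodes)    # tissue type of each mesh node

# Collapse node columns into one column per tissue type (one unknown each).
J_region = np.stack(
    [J[:, labels == r].sum(axis=1) for r in range(n_regions)], axis=1
)
print(J_region.shape)   # (48, 3): 48 equations, 3 unknowns -> overdetermined
```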