Investigation of iterative image reconstruction in three-dimensional optoacoustic tomography
Iterative image reconstruction algorithms for optoacoustic tomography (OAT),
also known as photoacoustic tomography, have the ability to improve image
quality over analytic algorithms due to their ability to incorporate accurate
models of the imaging physics, instrument response, and measurement noise.
However, to date, there have been few reported attempts to employ advanced
iterative image reconstruction algorithms for improving image quality in
three-dimensional (3D) OAT. In this work, we implement and investigate two
iterative image reconstruction methods for use with a 3D OAT small animal
imager: namely, a penalized least-squares (PLS) method employing a quadratic
smoothness penalty and a PLS method employing a total variation norm penalty.
The reconstruction algorithms employ accurate models of the ultrasonic
transducer impulse responses. Experimental data sets are employed to compare
the performance of the iterative reconstruction algorithms with that of a 3D
filtered backprojection (FBP) algorithm. By use of quantitative measures of
image quality, we demonstrate that the iterative reconstruction algorithms can
mitigate image artifacts and preserve spatial resolution more effectively than
FBP algorithms. These features suggest that the use of advanced image
reconstruction algorithms can improve the effectiveness of 3D OAT while
reducing the amount of data required for biomedical applications.
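The PLS idea summarized above can be sketched in one dimension: minimize a data-fidelity term plus a smoothed total-variation penalty by gradient descent. The blur operator H, the piecewise-constant object, and every parameter value below are illustrative assumptions, not the paper's 3D imaging model or transducer response.

```python
import numpy as np

# Toy penalized least-squares (PLS) reconstruction with a smoothed
# total-variation (TV) penalty, in the spirit of the iterative methods
# described above. H is a stand-in 1D blur, not an OAT system matrix.

rng = np.random.default_rng(0)
n = 64
idx = np.arange(n)
H = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 2.0) ** 2)
H /= H.sum(axis=1, keepdims=True)           # row-normalized blurring operator

f_true = np.zeros(n)
f_true[20:30] = 1.0                          # piecewise-constant object
g = H @ f_true + 0.01 * rng.standard_normal(n)

lam, eps, step = 0.05, 1e-3, 0.1
f = np.zeros(n)
for _ in range(2000):                        # gradient descent on the PLS cost
    r = H @ f - g                            # data residual
    d = np.diff(f)
    w = d / np.sqrt(d ** 2 + eps)            # gradient of smoothed TV term
    tv_grad = np.zeros(n)
    tv_grad[:-1] -= w
    tv_grad[1:] += w
    f -= step * (H.T @ r + lam * tv_grad)

print(np.linalg.norm(H @ f - g))             # data misfit after reconstruction
```

The TV penalty favors piecewise-constant solutions, which is why such penalties can suppress streak-like artifacts while keeping edges sharp, matching the abstract's claim about artifact mitigation and resolution preservation.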
Evaluating the Differences of Gridding Techniques for Digital Elevation Models Generation and Their Influence on the Modeling of Stony Debris Flows Routing: A Case Study From Rovina di Cancia Basin (North-Eastern Italian Alps)
Debris flows are among the most hazardous phenomena in mountain areas. To cope
with debris flow hazard, it is common to delineate the risk-prone areas through
routing models. The most important input to debris flow routing models is the
topographic data, usually in the form of Digital Elevation Models (DEMs). The quality
of DEMs depends on the accuracy, density, and spatial distribution of the sampled
points; on the characteristics of the surface; and on the applied gridding methodology.
Therefore, the choice of the interpolation method affects the realistic representation
of the channel and fan morphology, and thus potentially the debris flow routing
modeling outcomes. In this paper, we initially investigate the performance of common
interpolation methods (i.e., linear triangulation, natural neighbor, nearest neighbor,
Inverse Distance to a Power, ANUDEM, Radial Basis Functions, and ordinary kriging)
in building DEMs of the complex topography of a debris flow channel located
in the Venetian Dolomites (North-eastern Italian Alps), using small-footprint
full-waveform Light Detection And Ranging (LiDAR) data. The investigation combines
statistical analysis of vertical accuracy, algorithm robustness, spatial clustering
of vertical errors, and a multi-criteria shape reliability assessment. We then
examine the influence of the tested interpolation algorithms on the performance of
a Geographic Information System (GIS)-based cell model for simulating stony debris
flow routing. In detail, we investigate both the correlation between the DEM height
uncertainty resulting from the gridding procedure and the corresponding simulated
erosion/deposition depths, and the effect of interpolation algorithms on simulated
areas, erosion and deposition volumes, solid-liquid discharges, and channel
morphology after the event. The comparison among the tested interpolation methods
highlights that the ANUDEM and ordinary kriging algorithms are not suitable for
building DEMs of complex topography. Conversely, linear triangulation, the natural
neighbor algorithm, and the thin-plate spline with tension and completely regularized
spline functions ensure the best trade-off between accuracy and shape reliability.
Nevertheless, the evaluation of the effects of gridding techniques on debris flow
routing modeling reveals that the choice of the interpolation algorithm does not
significantly affect the model outcomes.
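As one concrete instance of the gridding methods compared above, Inverse Distance to a Power can be sketched in a few lines: each grid node takes a weighted mean of the sample elevations, with weights that fall off as an inverse power of distance. The sample points, power parameter, and query location below are illustrative, not the study's LiDAR data.

```python
import numpy as np

# Minimal Inverse Distance Weighting (IDW) gridding sketch. A production
# DEM workflow would add a spatial index, search radius, and tiling.

def idw(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
    """Interpolate z at query points as an inverse-distance-weighted mean."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)             # closer samples weigh more
    return (w * z_known).sum(axis=1) / w.sum(axis=1)

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
z = np.array([0.0, 1.0, 1.0, 2.0])           # samples of the plane z = x + y
grid = np.array([[0.5, 0.5]])                # grid node equidistant from all four
print(idw(pts, z, grid))                      # -> [1.0] by symmetry
```

Because the weights depend only on distance, IDW tends to flatten sharp breaklines such as channel banks, which is one reason the choice of interpolator matters for the channel and fan morphology discussed above.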
Polarised light stress analysis and laser scatter imaging for non-contact inspection of heat seals in food trays
This paper introduces novel non-contact methods for detecting faults in heat seals of food packages. Two alternative imaging technologies are investigated: laser scatter imaging and polarised light stress imaging. After segmenting the seal area from the rest of the respective image, a classifier is trained to detect faults in different regions of the seal area using features extracted from the pixels in each region. A very large set of candidate features, based on statistical information relating to the colour and texture of each region, is first extracted. An adaptive boosting algorithm (AdaBoost) is then used to automatically select the best features for discriminating faults from non-faults. With this approach, different features can be selected and optimised for each imaging method. In experiments we compare the performance of classifiers trained using features extracted from laser scatter images only, polarised light stress images only, and a combination of both image types. The results show that the polarised light and laser scatter classifiers achieved accuracies of 96% and 90%, respectively, while the combination of both sensors achieved an accuracy of 95%. These figures suggest that both systems have potential for commercial development.
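The feature-selection role of AdaBoost described above can be illustrated with one-feature decision stumps: each boosting round picks the single feature and threshold with the lowest weighted error, so the sequence of chosen features doubles as a ranking. The synthetic "features" below are stand-ins, not the paper's colour and texture descriptors.

```python
import numpy as np

# Tiny AdaBoost with axis-aligned decision stumps. Each round selects the
# most discriminative feature on the reweighted data, illustrating how
# boosting performs implicit feature selection.

rng = np.random.default_rng(1)
n = 200
X = rng.standard_normal((n, 5))
y = np.where(X[:, 2] + 0.3 * X[:, 0] > 0, 1, -1)   # only features 0 and 2 matter

w = np.full(n, 1.0 / n)
chosen = []
for _ in range(5):
    best = None
    for j in range(X.shape[1]):
        for t in np.quantile(X[:, j], [0.25, 0.5, 0.75]):
            for s in (1, -1):
                pred = np.where(X[:, j] > t, s, -s)
                err = w[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, j, t, s)
    err, j, t, s = best
    alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
    pred = np.where(X[:, j] > t, s, -s)
    w *= np.exp(-alpha * y * pred)               # upweight mistakes
    w /= w.sum()
    chosen.append(j)

print(chosen)   # boosting concentrates on the informative features
```

In the paper's setting the stumps would act on region-level colour and texture statistics, and the per-sensor feature lists would differ, which is exactly what allows the method to tailor features to each imaging modality.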
Modeling update for the Thirty Meter Telescope laser guide star dual-conjugate adaptive optics system
This paper describes the modeling efforts undertaken in the past couple of years to derive wavefront error (WFE) performance estimates for the Narrow Field Infrared Adaptive Optics System (NFIRAOS), which is the facility laser guide star (LGS) dual-conjugate adaptive optics (AO) system for the Thirty Meter Telescope (TMT). The estimates describe the expected performance of NFIRAOS as a function of seeing on Mauna Kea, zenith angle, and galactic latitude (GL). They have been developed through a combination of integrated AO simulations, side analyses, allocations, and lab and lidar experiments.
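A brief aside on how such simulation results and allocations are conventionally combined: statistically independent WFE terms are summed in quadrature (root-sum-square). The term names and values below are made-up placeholders, not NFIRAOS allocations.

```python
import math

# Quadrature (RSS) combination of independent wavefront-error terms,
# a standard convention in AO error budgeting. Values are illustrative.
terms_nm = {
    "fitting": 100.0,       # assumed placeholder, nm RMS
    "servo_lag": 60.0,      # assumed placeholder
    "tomography": 120.0,    # assumed placeholder
    "calibration": 40.0,    # assumed placeholder
}
total_nm = math.sqrt(sum(v ** 2 for v in terms_nm.values()))
print(round(total_nm, 1))   # -> 172.0
```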
Modeling of solidification microstructure evolution in laser powder bed fusion fabricated 316L stainless steel using combined computational fluid dynamics and cellular automata
This work presents a novel modeling framework combining computational fluid dynamics (CFD) and cellular automata (CA) to predict the solidification microstructure evolution of laser powder bed fusion (PBF) fabricated 316L stainless steel. A CA model is developed based on the modified decentered square method to improve computational efficiency. Using this framework, the fluid dynamics of the melt pool flow in the laser melting process is found to be mainly driven by the competing Marangoni force and the recoil pressure on the liquid metal surface. Evaporation occurs at the front end of the laser spot. The initial high temperature occurs in the center of the laser spot. However, due to the Marangoni force, which drives high-temperature liquid toward low-temperature regions, the highest temperature region shifts to the front side of the laser spot where evaporation occurs. Additionally, the recoil pressure pushes the liquid metal downward to form a depression zone. The simulated melt pool depths compare well with the experimental data. Additionally, the simulated solidification microstructure using the CA model is in good agreement with the experimental observation. The simulations show that higher scan speeds result in smaller melt pool depth, and lack-of-fusion pores can be formed. Higher laser scan speed also leads to finer grain size, larger laser-grain angle, and higher columnar grain content, which are consistent with experimental observations. This model can potentially be used as a tool to optimize the metal powder bed fusion process by generating desired microstructure and resultant material properties.
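The grain-growth half of such a framework can be caricatured in a few lines: solid cells capture liquid neighbours each step, and captured cells inherit their parent's grain identity. This is a bare-bones cousin of the decentered-square CA named above, not the paper's model: there is no CFD coupling, undercooling kinetics, or crystallographic orientation, and the seed positions are arbitrary.

```python
import numpy as np

# Toy cellular-automata (CA) sketch of competitive grain growth during
# solidification on a periodic 32x32 grid. 0 = liquid; >0 = grain ID.

rng = np.random.default_rng(2)
n = 32
grain = np.zeros((n, n), dtype=int)
seeds = [(4, 4), (16, 24), (28, 10)]          # three arbitrary nuclei
for gid, (i, j) in enumerate(seeds, start=1):
    grain[i, j] = gid

for _ in range(n):                            # grow until the domain fills
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        shifted = np.roll(grain, (di, dj), axis=(0, 1))
        capture = (grain == 0) & (shifted > 0)  # liquid next to solid
        grain[capture] = shifted[capture]       # inherit parent grain ID

print((grain > 0).all(), np.unique(grain))      # fully solid, three grains
```

Even this caricature shows the key CA behaviour the paper exploits: grain boundaries emerge where growth fronts from different nuclei meet, so nucleus spacing (set by the thermal field in the real model) controls the final grain size.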
Structured illumination microscopy with unknown patterns and a statistical prior
Structured illumination microscopy (SIM) improves resolution by
down-modulating high-frequency information of an object to fit within the
passband of the optical system. Generally, the reconstruction process requires
prior knowledge of the illumination patterns, which implies a well-calibrated
and aberration-free system. Here, we propose a new algorithmic
self-calibration strategy for SIM that does not need to know the exact
patterns a priori, but only their covariance. The algorithm, termed
PE-SIMS, includes a Pattern-Estimation (PE) step requiring the uniformity of
the sum of the illumination patterns and a SIM reconstruction procedure using a
Statistical prior (SIMS). Additionally, we perform a pixel reassignment process
(SIMS-PR) to enhance the reconstruction quality. We achieve 2× better
resolution than a conventional widefield microscope, while remaining
insensitive to aberration-induced pattern distortion and robust against
parameter tuning.
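The down-modulation mechanism at the heart of SIM can be demonstrated in one dimension: multiplying an object by a sinusoidal illumination pattern produces moiré terms at sum and difference frequencies, shifting fine detail toward low frequencies. The specific frequencies below are arbitrary illustrative choices.

```python
import numpy as np

# 1D illustration of SIM down-modulation: object detail at 60 cycles,
# illumination at 50 cycles, moire term appears at the 10-cycle difference
# frequency (plus an up-modulated term at 110 cycles).

n = 256
x = np.arange(n)
obj = np.cos(2 * np.pi * 60 * x / n)          # object detail at 60 cycles
illum = 1 + np.cos(2 * np.pi * 50 * x / n)    # sinusoidal illumination
moire = obj * illum                           # product mixes the frequencies

spec = np.abs(np.fft.rfft(moire))
peaks = np.flatnonzero(spec > 0.1 * spec.max())
print(peaks)                                  # -> [ 10  60 110]
```

If the system passband cut off below 60 cycles, only the 10-cycle moiré term would be detected; SIM reconstruction then computationally restores the detail to its true frequency, which is why knowing (or, as above, estimating) the pattern matters.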
Fast non-iterative algorithm for 3D point-cloud holography
Recently developed iterative and deep learning-based approaches to
computer-generated holography (CGH) have been shown to achieve high-quality
photorealistic 3D images with spatial light modulators. However, such
approaches remain overly cumbersome for patterning sparse collections of target
points across a photoresponsive volume in applications including biological
microscopy and material processing. Specifically, in addition to requiring
heavy computation that cannot accommodate real-time operation in mobile or
hardware-light settings, existing sampling-dependent 3D CGH methods preclude
the ability to place target points with arbitrary precision, limiting
accessible depths to a handful of planes. Accordingly, we present a
non-iterative point cloud holography algorithm that employs fast deterministic
calculations in order to efficiently allocate patches of SLM pixels to
different target points in the 3D volume and spread the patterning of all
points across multiple time frames. Compared to a matched-performance
implementation of the iterative Gerchberg-Saxton algorithm, our algorithm's
relative computation speed advantage was found to increase with SLM pixel
count, exceeding 100,000× at 512×512 array format.
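The deterministic per-point calculation described above can be sketched as follows: a patch of SLM pixels is given a quadratic (Fresnel lens) phase centred on one target point's lateral position, which simultaneously focuses to the target depth and steers to its lateral offset. The wavelength, pixel pitch, patch size, and target coordinates are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

# Deterministic phase for one SLM patch targeting a single 3D point,
# in the spirit of the non-iterative point-cloud approach above.

wavelength = 532e-9                      # metres (assumed)
pitch = 8e-6                             # SLM pixel pitch, metres (assumed)
patch = 64                               # patch is 64x64 pixels (assumed)

def patch_phase(x_t, y_t, z_t):
    """Wrapped phase (radians) steering/focusing one patch to (x_t, y_t, z_t)."""
    c = (np.arange(patch) - patch / 2) * pitch
    X, Y = np.meshgrid(c, c)
    r2 = (X - x_t) ** 2 + (Y - y_t) ** 2
    phi = -np.pi * r2 / (wavelength * z_t)   # Fresnel lens toward the target
    return np.mod(phi, 2 * np.pi)

phi = patch_phase(50e-6, -30e-6, 0.1)    # target 100 mm downstream
print(phi.shape)
```

Because each patch's phase is a closed-form expression, the per-point cost is a fixed number of array operations, which is consistent with the claimed speed advantage over iterative methods like Gerchberg-Saxton.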
Fourier domain preconditioned conjugate gradient algorithm for atmospheric tomography
By 'atmospheric tomography' we mean the estimation of a layered atmospheric turbulence profile from measurements of the pupil-plane phase (or phase gradients) corresponding to several different guide star directions. We introduce what we believe to be a new Fourier domain preconditioned conjugate gradient (FD-PCG) algorithm for atmospheric tomography, and we compare its performance against an existing multigrid preconditioned conjugate gradient (MG-PCG) approach. Numerical results indicate that on conventional serial computers, FD-PCG is as accurate and robust as MG-PCG, but it is from one to two orders of magnitude faster for atmospheric tomography on 30 m class telescopes. Simulations are carried out for both natural guide stars and for a combination of finite-altitude laser guide stars and natural guide stars to resolve tip-tilt uncertainty.
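The appeal of a Fourier-domain preconditioner can be seen on a toy problem: a circulant operator is diagonalised by the FFT, so applying the preconditioner costs two FFTs. Real tomography operators are only approximately circulant; this sketch assumes an exactly circulant, symmetric positive-definite A, in which case the preconditioner is exact and PCG converges in one step.

```python
import numpy as np

# Preconditioned conjugate gradients with an FFT-diagonal preconditioner
# for a circulant SPD system A x = b. Illustrative, not the paper's operator.

rng = np.random.default_rng(3)
n = 128
kernel = np.zeros(n)
kernel[0], kernel[1], kernel[-1] = 2.5, -1.0, -1.0   # SPD circulant stencil
eig = np.fft.fft(kernel).real                        # eigenvalues of A

def A(x):          # circulant matvec via FFT
    return np.fft.ifft(eig * np.fft.fft(x)).real

def M_inv(r):      # Fourier-domain preconditioner (exact inverse here)
    return np.fft.ifft(np.fft.fft(r) / eig).real

b = rng.standard_normal(n)
x = np.zeros(n)
r = b - A(x)
z = M_inv(r)
p = z.copy()
rz = r @ z
for _ in range(5):
    Ap = A(p)
    alpha = rz / (p @ Ap)
    x += alpha * p
    r -= alpha * Ap
    if np.linalg.norm(r) < 1e-10:      # converged (first pass, since M = A)
        break
    z = M_inv(r)
    rz_new = r @ z
    p = z + (rz_new / rz) * p
    rz = rz_new

print(np.linalg.norm(A(x) - b))
```

In practice the operator is not exactly circulant, so the Fourier preconditioner only approximates the inverse and PCG needs a modest number of iterations, each dominated by FFTs, which is the source of the reported speed advantage over multigrid on serial hardware.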
Two Point Resolution of a Defocused Multi-Aperture System Eyelet
Multi-aperture optical systems based on the insect eye offer an alternative to the common optical system based on the human eye. Advantages of a multi-aperture system include the ability to perform parallel processing, achieve super-resolution, and provide large amounts of system redundancy.
An individual eyelet of a multi-aperture system consists of a gradient index lens coupled to optical fibers which transfer the incident light on the lens to individual detectors.
A mathematical model of an individual eyelet was developed. The model is flexible, allowing various system parameters to vary. Computer-based algorithms were developed to locate and resolve two points in space. The model was exercised with experimental data and found to have a resolution of 3.1°. The algorithm was also exercised with the computer model, and the results compared favorably.
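For scale, the diffraction-limited two-point resolution of a single circular aperture follows the Rayleigh criterion, θ = 1.22 λ/D. The wavelength and lens diameter below are assumed values chosen only to show the calculation, not parameters from this work; the quoted 3.1° figure reflects the full eyelet system, not a bare diffraction limit.

```python
import math

# Rayleigh two-point resolution criterion for a circular aperture.
wavelength = 550e-9   # visible light, metres (assumed)
D = 1.0e-3            # aperture diameter, metres (assumed)
theta_rad = 1.22 * wavelength / D
print(math.degrees(theta_rad))   # diffraction limit, degrees
```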