
    Research on High-Resolution and High-Sensitivity Imaging of Scatterer Distributions in Medical Ultrasound

    Ultrasound imaging is widely used in medical diagnosis and NDT (non-destructive testing). In particular, ultrasound imaging plays an important role in medical diagnosis because of its safety, noninvasiveness, low cost, and real-time operation compared with other medical imaging techniques. In general, however, ultrasound images contain more speckle and have lower definition than MRI (magnetic resonance imaging) and X-ray CT (computed tomography) images, so improving ultrasound image quality is important. This study makes three new proposals. The first is the development of a high-sensitivity transducer that uses the piezoelectric charge directly to control an FET (field-effect transistor) channel. The second is a method for estimating the distribution of small scatterers in living tissue using the empirical Bayes method. The third is a super-resolution imaging method for scatterers with strong reflection, such as organ boundaries and blood vessel walls. Each chapter is summarized as follows. Chapter 1: The fundamental characteristics and main applications of ultrasound are discussed, and the advantages and drawbacks of medical ultrasound are highlighted. Based on the drawbacks, the motivations and objectives of this study are stated. Chapter 2: To overcome the disadvantages of medical ultrasound, we advanced our study in two directions: designing a new transducer to improve the acquisition modality itself, and developing new signal processing to improve the acquired echo data. The conventional techniques related to both directions are reviewed. Chapter 3: For high-performance piezoelectric reception, we proposed a structure that directly couples a PZT (lead zirconate titanate) element to the gate of a MOSFET (metal-oxide-semiconductor field-effect transistor), yielding a device called the PZT-FET that acts as an ultrasound receiver.
The reception sensitivity, dynamic range, and -6 dB reception bandwidth of the PZT-FET were investigated experimentally. The proposed PZT-FET receiver offers higher sensitivity and a wider dynamic range than a typical ultrasound transducer. Chapter 4: In medical ultrasound imaging, speckle patterns caused by interference among reflections from small scatterers in living tissue are often suppressed by various methods. However, accurate imaging of small scatterers is important in diagnosis; therefore, we investigated the influence of speckle patterns on ultrasound imaging using empirical Bayesian learning. Since small scatterers are spatially correlated and thereby constitute a microstructure, we assume that the scatterers are distributed according to an AR (autoregressive) model with unknown parameters. Under this assumption, the AR parameters are estimated by maximizing the marginal likelihood function, and the scatterer distribution is obtained as a MAP (maximum a posteriori) estimate. The performance of the method is evaluated by simulations and experiments. The results confirm that the band-limited echo carries sufficient information about the AR parameters and that the power spectrum of the echoes from the scatterers is properly extrapolated. Chapter 5: Medical ultrasound imaging of strongly reflecting scatterers based on the MUSIC algorithm is the main subject of Chapter 5. We previously proposed a super-resolution ultrasound imaging method based on multiple TRs (transmissions/receptions) with different carrier frequencies, called the SCM (super-resolution FM-chirp correlation method). To reduce the number of TRs required by the SCM, the method was extended to an SA (synthetic aperture) version called the SA-SCM. However, since super-resolution processing is performed on each line of data obtained by RBF (reception beamforming) in the SA-SCM, image discontinuities tend to occur in the lateral direction.
Therefore, a new method called the SCM-weighted SA is proposed, in which the SCM is performed on each transducer element and the SCM result is then used as the weight for the RBF. The SCM-weighted SA can generate multiple B-mode images, each corresponding to one carrier frequency, and the appropriate low-frequency images among them have no grating lobes. For further improvement, instead of simple averaging, the SCM is applied again to the SCM-weighted SA results over all frequencies; this is called the SCM-weighted SA-SCM. We evaluated the effectiveness of all the methods by simulations and experiments. The results confirm that the extended SCM framework reduces grating lobes, achieves super-resolution, and improves the SNR (signal-to-noise ratio). Chapter 6: The overall content of the thesis is discussed, and suggestions for further development together with remaining problems are summarized. Tokyo Metropolitan University, 2019-03-25, Doctor of Engineering.
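
    The SCM family described in Chapter 5 builds on the MUSIC algorithm, which localizes strong reflectors by projecting candidate steering vectors onto the noise subspace of an echo covariance matrix. A minimal, self-contained sketch of that pseudospectrum step follows; the carrier frequencies, reflector depths, and noise level below are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def music_pseudospectrum(R, steering, n_sources):
    """Evaluate the MUSIC pseudospectrum 1 / ||E_n^H a(d)||^2 over a grid."""
    eigvals, V = np.linalg.eigh(R)              # eigenvalues in ascending order
    En = V[:, : R.shape[0] - n_sources]         # noise-subspace eigenvectors
    proj = np.abs(En.conj().T @ steering) ** 2  # projection onto noise subspace
    return 1.0 / proj.sum(axis=0)

rng = np.random.default_rng(0)
M, c = 8, 1540.0                                # carrier count, speed of sound (m/s)
freqs = 2e6 + 0.5e6 * np.arange(M)              # hypothetical carrier frequencies (Hz)
depths_true = [10.0e-3, 10.5e-3]                # two closely spaced reflectors (m)
a = lambda d: np.exp(-2j * np.pi * freqs * 2 * d / c)  # round-trip phase model

# Simulated multi-frequency echoes: random reflector amplitudes plus noise
A = np.column_stack([a(d) for d in depths_true])
amps = rng.standard_normal((2, 200)) + 1j * rng.standard_normal((2, 200))
noise = 0.05 * (rng.standard_normal((M, 200)) + 1j * rng.standard_normal((M, 200)))
X = A @ amps + noise
R = X @ X.conj().T / 200                        # sample covariance matrix

grid = np.linspace(9.5e-3, 11.0e-3, 301)        # depth search grid (m)
S = np.column_stack([a(d) for d in grid])
P = music_pseudospectrum(R, S, n_sources=2)     # sharp peaks at reflector depths
```

    Peaks of P occur at grid depths whose steering vectors are nearly orthogonal to the noise subspace; the thesis's SCM additionally combines this with FM-chirp correlation and, in the SA variants, with synthetic-aperture beamforming.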

    Joint Demosaicking / Rectification of Fisheye Camera Images using Multi-color Graph Laplacian Regularization

    To compose one 360-degree image from multiple viewpoint images taken by different fisheye cameras on a rig, for viewing on a head-mounted display (HMD), a conventional processing pipeline first performs demosaicking on each fisheye camera's Bayer-patterned grid, then translates the demosaicked pixels from the camera grid to a rectified image grid. By performing two image interpolation steps in sequence, interpolation errors accumulate, and acquisition noise in each captured pixel can pollute its neighbors, resulting in correlated noise. In this paper, a joint processing framework is proposed that performs demosaicking and grid-to-grid mapping simultaneously, thus limiting noise pollution to one interpolation. Specifically, a reverse mapping function is first obtained from each regular on-grid location in the rectified image to an irregular off-grid location in the camera's Bayer-patterned image. For each pair of adjacent pixels in the rectified grid, the gradient between them is estimated using the pair's neighboring pixel gradients in three colors in the Bayer-patterned grid. A similarity graph is constructed based on the estimated gradients, and pixels are interpolated in the rectified grid directly via graph Laplacian regularization (GLR). To establish ground truth for objective testing, a large dataset containing pairs of simulated images in both the fisheye camera grid and the rectified image grid is built. Experiments show that the proposed joint demosaicking / rectification method outperforms competing schemes that execute demosaicking and rectification in sequence in both objective and subjective measures.
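
    GLR interpolation of this kind solves a quadratic problem of the form min_x ||Hx − y||² + μ xᵀLx, where H samples the known pixels and L is the Laplacian of the similarity graph. A toy single-channel sketch on a path graph follows; the unit-weight chain graph, sample positions, and μ here are illustrative stand-ins, not the paper's gradient-based multi-color graph construction.

```python
import numpy as np

# Toy GLR interpolation: recover a smooth signal on a path graph from a few
# observed samples by solving (H^T H + mu * L) x = H^T y.
n = 20
observed = np.array([0, 4, 9, 14, 19])          # indices with known values
y = np.sin(np.linspace(0, np.pi, n))[observed]  # samples of a smooth signal

# Path-graph Laplacian L = D - W with unit edge weights
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
L = np.diag(W.sum(axis=1)) - W

H = np.zeros((len(observed), n))                # sampling matrix
H[np.arange(len(observed)), observed] = 1.0

mu = 0.1                                        # regularization weight
x = np.linalg.solve(H.T @ H + mu * L, H.T @ y)  # GLR estimate of all n values
```

    Because the objective is quadratic, the whole interpolation reduces to one sparse linear solve; in the paper, L is built from the estimated gradients so that edges crossing image discontinuities receive small weights and are not smoothed over.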

    Computational Imaging Approach to Recovery of Target Coordinates Using Orbital Sensor Data

    This dissertation addresses the components necessary for simulation of an image-based recovery of the position of a target using orbital image sensors. Each component is considered in detail, focusing on the effect that design choices and system parameters have on the accuracy of the position estimate. Changes in sensor resolution, varying amounts of blur, differences in image noise level, the selection of algorithms used for each component, and lag introduced by excessive processing time all contribute to the accuracy of the recovered target coordinates. Using physical targets and sensors in this scenario would be cost-prohibitive in the exploratory setting posed; therefore, a simulated target path is generated using Bezier curves which approximate representative paths followed by the targets of interest. Orbital trajectories for the sensors are designed on an elliptical model representative of the motion of physical orbital sensors. Images from each sensor are simulated based on the position and orientation of the sensor, the position of the target, and the imaging parameters selected for the experiment (resolution, noise level, blur level, etc.). Post-processing of the simulated imagery seeks to reduce noise and blur and increase resolution. The only information available to a fully implemented system for calculating the target position is the sensor position and orientation vectors and the images from each sensor. From these data we develop a reliable method of recovering the target position and analyze the impact on near-real-time processing. We also discuss the influence of adjustments to system components on overall capabilities and address the potential system size, weight, and power requirements of realistic implementation approaches.
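
    The simulated target paths are built from Bezier curves; a cubic segment is simply a Bernstein-polynomial blend of four control points. A small sketch follows (the control points are hypothetical, chosen only to illustrate the evaluation):

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve B(t) in Bernstein form for t in [0, 1]."""
    t = np.asarray(t)[:, None]                  # broadcast over coordinates
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

# Hypothetical 2-D control points for one target-path segment
p0, p1, p2, p3 = map(np.array, ([0.0, 0.0], [1.0, 2.0], [3.0, 2.0], [4.0, 0.0]))
path = cubic_bezier(p0, p1, p2, p3, np.linspace(0.0, 1.0, 50))  # sampled path
```

    Longer simulated trajectories can be assembled by chaining segments whose shared endpoints and tangent directions match, which keeps the path smooth across segment boundaries.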

    Image Restoration

    This book represents a sample of recent contributions of researchers around the world in the field of image restoration. The book consists of 15 chapters organized in three main sections (Theory, Applications, Interdisciplinarity). Topics cover different aspects of the theory of image restoration, but this book is also an occasion to highlight new topics of research related to the emergence of original imaging devices. From these arise genuinely challenging problems of image reconstruction/restoration that open the way to new fundamental scientific questions closely related to the world we interact with.

    Quantitative seismic interpretation in thin-bedded geology using full-wavefield elastic modelling

    Reflection seismics is used to image the subsurface for the exploration of oil and gas, geothermal or carbon storage reservoirs, among others.
In addition to the structural interpretation of the resulting seismic images, the seismic data can be interpreted quantitatively with the goal of obtaining rock and fluid properties. An essential tool in quantitative seismic interpretation is the analysis of the amplitude variation with offset (AVO). Thin-bedded geology below the seismic resolution poses challenges for AVO modelling and interpretation. One problem addressed in this thesis is accurate seismic forward modelling in thin-bedded media. Primaries-only convolutional modelling, commonly used in conventional AVO modelling and inversion, is prone to failure in the presence of thin beds. Better alternatives are finite-difference modelling and the reflectivity method. The reflectivity method is a semi-analytic modelling method for horizontally layered media and is computationally cheaper than finite-difference modelling on densely sampled grids. I show in this thesis that the reflectivity method is well suited for AVO modelling of layered media. The band-limited nature of seismic data is one reason for the non-unique estimation of reservoir properties from seismic data, especially in thin-bedded geology. Probabilistic inversion methods, such as Bayesian methods, honour this non-uniqueness by predicting probabilities that allow the uncertainty to be quantified. In this thesis, I integrate full-wavefield elastic seismic modelling by the reflectivity method with Bayesian classification and inversion. The objective is to address two concrete quantitative seismic interpretation problems: 1) the uncertainty quantification of Bayesian pore-fluid classification in the presence of thin high-impedance layers caused by calcite cementation in sandstone, and 2) the estimation of reservoir properties of turbidite reservoirs characterised by sand-shale interbedding.
In the first application, I show through a modelling study that calcite-cemented beds lead to detectable reflection responses that can interfere with the target reflection at the reservoir top and thereby perturb the AVO behaviour. The observed effect increases the uncertainty of pore-fluid classification based on AVO attributes, as demonstrated by a case study. Consequently, the probability of a false hydrocarbon indication is significantly increased in the presence of calcite-cemented beds. In the second application, I present a Bayesian inversion that takes the AVO intercept and gradient measured at the top of a reservoir as input and estimates the probability density function of the net-to-gross ratio and the net-pay-to-net ratio. The method was applied to synthetic data and AVO attribute maps from the Jotun field on the Norwegian Continental Shelf. It was found that the AVO gradient correlates with the net-to-gross ratio of the reservoir, while the AVO intercept is most sensitive to the type of pore fluid. After inversion, maps of the most likely values of the net-to-gross ratio, net-pay-to-net ratio, net pay and the uncertainty could be generated. These maps help to identify potential zones of high reservoir quality and hydrocarbon saturation. Doctoral thesis.
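
    The AVO intercept and gradient used as inversion inputs come from the standard two-term approximation R(θ) ≈ A + B sin²θ, fitted per reflection event as a small least-squares problem. A sketch with synthetic amplitudes follows; the angle range, noise level, and true A and B values are illustrative assumptions, not values from the thesis.

```python
import numpy as np

# Two-term AVO model: R(theta) = A + B * sin^2(theta)
theta = np.deg2rad(np.arange(0, 40, 5))          # incidence angles (rad)
A_true, B_true = 0.08, -0.20                     # intercept and gradient
rng = np.random.default_rng(1)
R = A_true + B_true * np.sin(theta) ** 2 + 0.002 * rng.standard_normal(theta.size)

# Least-squares fit of the intercept A and gradient B to the angle gather
G = np.column_stack([np.ones_like(theta), np.sin(theta) ** 2])
(A_est, B_est), *_ = np.linalg.lstsq(G, R, rcond=None)
```

    In the thesis's workflow, the (A, B) pair measured at the reservoir top is the input to the Bayesian inversion; the fit itself is this simple regardless of how the amplitudes were modelled.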

    Image processing and synthesis: From hand-crafted to data-driven modeling

    This work investigates image and video restoration problems using effective optimization algorithms. First, we study the problem of single-image dehazing while suppressing artifacts in compressed or noisy images and videos. Our method is based on the linear haze model and minimizes the gradient residual between the input and output images, which successfully suppresses new artifacts that are not obvious in the input images. Second, we propose a new method for image inpainting using deep neural networks. Given a set of training data, deep generative models can generate high-quality natural images following the same distribution. We search for the nearest neighbor in the latent space of the deep generative model using a weighted context loss and a prior loss. The resulting latent code is then decoded into a clean, uncorrupted version of the input. Third, we study the problem of recovering high-quality images from very noisy raw data captured in low-light conditions with short exposures. We build deep neural networks to learn the camera processing pipeline specifically for low-light raw data with an extremely low signal-to-noise ratio (SNR). To train the networks, we capture a new dataset of more than five thousand images with short-exposed and long-exposed pairs. Promising results are obtained compared with the traditional image processing pipeline. Finally, we propose a new method for extreme low-light video processing. The raw video frames are pre-processed using spatio-temporal denoising, and a neural network is trained to remove the error in the pre-processed data, learning to perform the image processing pipeline while encouraging temporal smoothness of the output. Both quantitative and qualitative results demonstrate that the proposed method significantly outperforms existing methods. It also paves the way for future research in this area.
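
    The inpainting step searches the generator's latent space for the code that best explains the uncorrupted pixels. A toy sketch of that search follows, with a fixed linear map standing in for the deep generative model; the mask fraction, loss weights, and step size are illustrative assumptions.

```python
import numpy as np

# Toy latent-space search for inpainting: find z minimizing the masked context
# loss ||M * (G(z) - y)||^2 + lam * ||z||^2, with a linear stand-in generator.
rng = np.random.default_rng(2)
d_img, d_lat = 64, 8
W = rng.standard_normal((d_img, d_lat)) / np.sqrt(d_lat)
G = lambda z: W @ z                              # stand-in for a deep generator

z_true = rng.standard_normal(d_lat)
y = G(z_true)                                    # "clean" image vector
M = (rng.random(d_img) > 0.3).astype(float)      # 1 = observed, 0 = corrupted

lam, lr = 1e-3, 0.05
z = np.zeros(d_lat)
for _ in range(500):                             # gradient descent on z
    grad = W.T @ (M * (G(z) - y)) + lam * z      # closed-form gradient
    z -= lr * grad
x_hat = G(z)                                     # inpainted image vector
```

    With a real deep generative model, the same loop runs with automatic differentiation through the network in place of the closed-form gradient used here, and the prior loss keeps z in the region the generator was trained on.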

    Advances in Sensors and Sensing for Technical Condition Assessment and NDT

    The adequate assessment of the condition of key apparatus is an important topic in all branches of industry. Various online and offline diagnostic methods are widely applied to provide early detection of any abnormality during operation. Furthermore, different sensors may be applied to capture selected physical quantities that can indicate the type of potential fault. The essential steps of signal analysis in the technical condition assessment process are: signal measurement (using relevant sensors), processing, modelling, and classification. In the Special Issue entitled “Advances in Sensors and Sensing for Technical Condition Assessment and NDT”, we present the latest research in various areas of technology.