175 research outputs found

    Digital Signal Processing

    Contains reports on twelve research projects. Supported by: U.S. Navy - Office of Naval Research (Contract N00014-75-C-0951); National Science Foundation (Grant ENG76-24117); National Aeronautics and Space Administration (Grant NSG-5157); Joint Services Electronics Program (Contract DAAB07-76-C-1400); U.S. Navy - Office of Naval Research (Contract N00014-77-C-0196); Woods Hole Oceanographic Institution; U.S. Navy - Office of Naval Research (Contract N00014-75-C-0852); Department of Ocean Engineering, M.I.T.; National Science Foundation subcontract to Grant GX 41962 to Woods Hole Oceanographic Institution

    On adaptive filter structure and performance

    SIGLE. Available from British Library Document Supply Centre (BLDSC) - DSC:D75686/87, GB, United Kingdom

    Sparsity Promoting Regularization for Effective Noise Suppression in SPECT Image Reconstruction

    The purpose of this research is to develop an advanced reconstruction method for low-count, hence high-noise, Single-Photon Emission Computed Tomography (SPECT) image reconstruction. It consists of a novel reconstruction model to suppress noise while conducting reconstruction, and an efficient algorithm to solve the model. A novel regularizer is introduced as the nonconvex denoising term, based on the approximate sparsity of the image in a geometric tight frame transform domain. The deblurring term is based on the negative log-likelihood of the SPECT data model. To solve the resulting nonconvex optimization problem, a Preconditioned Fixed-point Proximity Algorithm (PFPA) is introduced. We prove that, under appropriate assumptions, PFPA converges to a local solution of the optimization problem at a global O(1/k) convergence rate. Substantial numerical results for simulation data are presented to demonstrate the superiority of the proposed method in denoising, artifact suppression, and reconstruction accuracy. We simulate noisy 2D SPECT data from two phantoms, hot Gaussian spheres on a random lumpy warm background and an anthropomorphic brain phantom, at high and low noise levels (64k and 90k counts, respectively), and reconstruct them using PFPA. We also perform limited comparative studies with selected competing state-of-the-art total variation (TV) and higher-order TV (HOTV) transform-based methods, and with the widely used post-filtered maximum-likelihood expectation-maximization. We investigate the imaging performance of these methods using Contrast-to-Noise Ratio (CNR), Ensemble Variance Images (EVI), Background Ensemble Noise (BEN), Normalized Mean-Square Error (NMSE), and Channelized Hotelling Observer (CHO) detectability. Each of the competing methods is independently optimized for each metric. We establish that the proposed method outperforms the other approaches in all image quality metrics except NMSE, where it is matched by HOTV. The superiority of the proposed method is especially evident in the CHO detectability test results. We also perform a qualitative evaluation of the presence and severity of image artifacts, in which the proposed method suppresses staircase artifacts better than the TV methods; however, edge artifacts in high-contrast regions persist. We conclude that the proposed method may offer a powerful tool for detection tasks in high-noise SPECT imaging.
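    To make the shape of such a reconstruction concrete, here is a minimal sketch, not the authors' PFPA: a plain proximal-gradient loop on a Poisson negative log-likelihood fidelity term plus a convex L1 sparsity surrogate. The system matrix A, the transform W, the background b, and the step and regularization constants are all illustrative assumptions; the paper's geometric tight frame, nonconvex regularizer, and preconditioning are not reproduced here.

    ```python
    import numpy as np

    # Toy problem: simulate low-count data d ~ Poisson(A x + b), then
    # reconstruct x by proximal-gradient on the Poisson negative
    # log-likelihood plus an L1 sparsity term (hypothetical stand-ins).
    rng = np.random.default_rng(0)
    n = 64
    b = 0.05                                       # small background term
    A = np.abs(rng.normal(size=(n, n))) / n        # hypothetical system matrix
    x_true = np.abs(rng.normal(size=n))
    d = rng.poisson(A @ x_true + b)                # simulated noisy counts
    W = np.eye(n)                                  # identity stands in for a frame

    def nll_grad(x):
        """Gradient of the fidelity term sum_i ((Ax+b)_i - d_i log (Ax+b)_i)."""
        ax = A @ x + b
        return A.T @ (1.0 - d / ax)

    def soft(z, t):
        """Soft-thresholding, the proximity operator of t*||.||_1."""
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    x = np.ones(n)
    step, lam = 0.05, 0.01
    for _ in range(300):
        z = x - step * nll_grad(x)                 # gradient step on fidelity
        x = W.T @ soft(W @ z, step * lam)          # prox of lam*||Wx||_1 (W orthonormal)
        x = np.maximum(x, 0.0)                     # nonnegativity of activity
    ```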

    Ultrasound Imaging

    In this book, we present a dozen state-of-the-art developments in ultrasound imaging, covering, for example, hardware implementation, transducers, beamforming, signal processing, elasticity measurement, and diagnosis. The editors would like to thank all the chapter authors, whose work made the publication of this book possible.

    Navigating the roadblocks to spectral color reproduction: data-efficient multi-channel imaging and spectral color management

    Commercialization of spectral imaging for color reproduction will require the identification and traversal of roadblocks to its success. Among the drawbacks associated with spectral reproduction is a tremendous increase in data-capture bandwidth and processing throughput. Methods are proposed for attenuating these increases: data-efficient methods based on adaptive multi-channel visible-spectrum capture, and low-dimensional approaches to spectral color management. First, concepts of adaptive spectral capture are explored. Current spectral imaging approaches require tens of camera channels, although previous research has shown that five to nine channels can be sufficient for scenes limited to pre-characterized spectra. New camera systems are proposed and evaluated that incorporate adaptive features, reducing capture demands to a similar few channels, with the advantage that a priori information about expected scenes is not needed at the time of system design. Second, proposals are made to address problems arising from the significant increase in dimensionality within the image-processing stage of a spectral image workflow. An Interim Connection Space (ICS) is proposed as a reduced-dimensionality bottleneck in the processing workflow, allowing support of spectral color management. In combination, these investigations into data-efficient approaches improve two critical points in the spectral reproduction workflow: capture and processing. The progress reported here should help the color reproduction community appreciate that the route to data-efficient multi-channel visible-spectrum imaging is passable and can be considered for many imaging modalities.
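    The low-dimensional idea behind an ICS can be illustrated with ordinary principal-component analysis: project many-band reflectance spectra onto a few basis vectors, carry the compact coordinates through the workflow, and reconstruct spectra at the output. This is a hedged sketch; the synthetic spectra, the basis size of six, and the plain PCA construction are assumptions for illustration, not the dissertation's actual ICS design.

    ```python
    import numpy as np

    # Build smooth synthetic 31-band reflectances (sums of Gaussian bumps),
    # then compress them to a k-dimensional "connection space" via PCA.
    rng = np.random.default_rng(1)
    wavelengths = np.arange(400, 710, 10)           # 31 bands, 400-700 nm
    centers = rng.uniform(420, 680, size=(200, 3))
    spectra = np.zeros((200, wavelengths.size))
    for i, cs in enumerate(centers):
        for c in cs:
            spectra[i] += np.exp(-((wavelengths - c) / 60.0) ** 2)
    spectra /= spectra.max(axis=1, keepdims=True)

    mean = spectra.mean(axis=0)
    _, _, Vt = np.linalg.svd(spectra - mean, full_matrices=False)
    k = 6                                            # illustrative ICS size
    basis = Vt[:k]                                   # k spectral basis functions

    coords = (spectra - mean) @ basis.T              # 31-D spectra -> k-D coordinates
    recon = coords @ basis + mean                    # back to full spectra
    rmse = np.sqrt(np.mean((recon - spectra) ** 2))
    print(f"RMS reconstruction error with {k} channels: {rmse:.4f}")
    ```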

    From 3D Models to 3D Prints: an Overview of the Processing Pipeline

    Due to the wide diffusion of 3D printing technologies, geometric algorithms for Additive Manufacturing are being invented at an impressive speed. Each single step, in particular along the Process Planning pipeline, can now count on dozens of methods that prepare the 3D model for fabrication while analysing and optimizing geometry and machine instructions for various objectives. This report provides a classification of this vast state of the art and makes explicit the relation between each algorithm and a list of desirable objectives during Process Planning. The objectives themselves are listed and discussed, along with possible needs for trade-offs. Additive Manufacturing technologies are broadly categorized to explicitly relate classes of devices and supported features. Finally, this report offers an analysis of the state of the art while discussing open and challenging problems from both an academic and an industrial perspective. Comment: European Union (EU); Horizon 2020; H2020-FoF-2015; RIA - Research and Innovation action; Grant agreement N. 68044

    Residue Number Systems: a Survey

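    A residue number system represents an integer by its residues modulo a set of pairwise-coprime moduli, so addition and multiplication proceed componentwise and carry-free, and the Chinese Remainder Theorem recovers the integer. A minimal sketch with the arbitrary illustrative moduli (3, 5, 7), which cover the range [0, 105):

    ```python
    from math import prod

    MODULI = (3, 5, 7)       # pairwise coprime; illustrative choice
    M = prod(MODULI)         # dynamic range: 105

    def to_rns(x):
        """Encode x as its residues modulo each modulus."""
        return tuple(x % m for m in MODULI)

    def from_rns(residues):
        """CRT reconstruction: x = sum(r_i * M_i * M_i^{-1} mod m_i) mod M."""
        x = 0
        for r, m in zip(residues, MODULI):
            Mi = M // m
            x += r * Mi * pow(Mi, -1, m)   # modular inverse (Python 3.8+)
        return x % M

    a, b = 17, 23
    rns_sum = tuple((p + q) % m
                    for p, q, m in zip(to_rns(a), to_rns(b), MODULI))
    assert from_rns(rns_sum) == (a + b) % M   # carry-free addition: 40
    ```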

    Dynamical Systems in Spiking Neuromorphic Hardware

    Dynamical systems are universal computers. They can perceive stimuli, remember, learn from feedback, plan sequences of actions, and coordinate complex behavioural responses. The Neural Engineering Framework (NEF) provides a general recipe for formulating models of such systems as coupled sets of nonlinear differential equations and compiling them onto recurrently connected spiking neural networks – akin to a programming language for spiking models of computation. The Nengo software ecosystem supports the NEF and compiles such models onto neuromorphic hardware. In this thesis, we analyze the theory driving the success of the NEF and expose several core principles underpinning its correctness, scalability, completeness, robustness, and extensibility. We also derive novel theoretical extensions to the framework that enable it to leverage a wide variety of dynamics in digital hardware far more effectively, and to exploit the device-level physics in analog hardware. At the same time, we propose a novel set of spiking algorithms that recruit an optimal nonlinear encoding of time, which we call the Delay Network (DN). Backpropagation across stacked layers of DNs dramatically outperforms stacked Long Short-Term Memory (LSTM) networks – a state-of-the-art deep recurrent architecture – in accuracy and training time on a continuous-time memory task and a chaotic time-series prediction benchmark. The basic component of this network is shown to function on state-of-the-art spiking neuromorphic hardware, including Braindrop and Loihi. This implementation approaches the energy efficiency of the human brain in the former case, and the precision of conventional computation in the latter case.
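    As a small taste of the NEF recipe in the Nengo ecosystem named above, the sketch below compiles a one-dimensional integrator, dx/dt = u(t), onto a recurrent spiking ensemble. For a first-order synapse with time constant tau, the standard NEF mapping of the dynamics (A = 0, B = 1) gives a recurrent transform of 1 and an input transform of tau; the population size, tau, and input signal are illustrative choices, not values from the thesis.

    ```python
    import nengo

    tau = 0.1  # synaptic time constant used for the dynamics mapping
    with nengo.Network() as model:
        u = nengo.Node(lambda t: 1.0 if t < 0.5 else 0.0)   # step input
        x = nengo.Ensemble(n_neurons=200, dimensions=1)
        nengo.Connection(u, x, transform=tau, synapse=tau)  # B' = tau * B
        nengo.Connection(x, x, transform=1.0, synapse=tau)  # A' = tau*A + I
        probe = nengo.Probe(x, synapse=0.01)                # filtered readout

    with nengo.Simulator(model) as sim:
        sim.run(1.0)
    # sim.data[probe] ramps to about 0.5 during the step, then holds it,
    # as an integrator should.
    ```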

    Autofocus in ultrasonic imaging

    Ultrasonic inspection of components is currently performed with phased-array imaging systems, the industrial counterpart of medical ultrasound scanners. In both cases an array of tens or hundreds of small piezoelectric transducers is used, each controlled individually to focus and steer the ultrasonic beam in transmission and reception. But whereas in medicine the array is in contact with the body, which is flexible, in industry a coupling medium is usually interposed between the array and the component under inspection. When the geometry of the part is not flat, water is used as the coupling medium: it adapts to the shape of the part and provides a continuous, low-attenuation medium for sound transmission. Under these conditions there are two propagation media, which makes determining the focusing delays difficult because of refraction. Since no closed-form expressions exist for this case, computationally expensive iterative procedures have been used to date, which prevent rapid refocusing when the geometry of the part changes (for example, during a scan). These factors have prevented the development of effective autofocus techniques. This thesis contributes three techniques that, together with real-time computation of the focusing parameters and an architectural platform for ultra-high-speed imaging, are among the first practical approaches to solving the autofocus problem in ultrasonic imaging. In fact, one of them (AUTOFOCUS) has been patented and transferred to industry, which markets phased-array equipment with this capability. The dissertation describes the motivation, fundamentals, and known approaches to the problem, as well as the difficulties encountered and the solutions investigated. A second part includes the most relevant publications in which the results were reported, comparing the theoretically expected results with those obtained experimentally.
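    The two-medium delay computation described above can be sketched numerically via Fermat's principle: for each array element, find the entry point on the interface that minimizes the travel time to the focus, then convert the per-element times into focal-law delays. This is an illustration only, not the thesis' AUTOFOCUS method; the flat water/steel interface, sound speeds, and geometry are assumed values.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    c1, c2 = 1480.0, 5900.0             # m/s: water, then longitudinal in steel
    interface_y = 0.0                    # flat interface at y = 0 (assumed)
    elements_x = np.linspace(-8e-3, 8e-3, 16)   # 16-element linear array
    elements_y = 10e-3                   # array 10 mm above the interface
    focus = np.array([2e-3, -15e-3])     # focal point 15 mm deep in the part

    def travel_time(xi, ex):
        """Time from element (ex, elements_y) via interface point (xi, 0)."""
        t1 = np.hypot(xi - ex, elements_y - interface_y) / c1
        t2 = np.hypot(focus[0] - xi, focus[1] - interface_y) / c2
        return t1 + t2

    # Fermat's principle: minimize travel time over the entry point xi.
    times = np.array([
        minimize_scalar(travel_time, args=(ex,), bounds=(-20e-3, 20e-3),
                        method="bounded").fun
        for ex in elements_x
    ])
    delays = times.max() - times   # element with the longest path fires first
    ```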