
    Improving Fiber Alignment in HARDI by Combining Contextual PDE Flow with Constrained Spherical Deconvolution

    We propose two strategies to improve the quality of tractography results computed from diffusion-weighted magnetic resonance imaging (DW-MRI) data. Both methods are based on the same PDE framework, defined in the coupled space of positions and orientations, associated with a stochastic process describing the enhancement of elongated structures while preserving crossing structures. In the first method, we use the enhancement PDE for contextual regularization of a fiber orientation distribution (FOD) that is obtained on individual voxels from high angular resolution diffusion imaging (HARDI) data via constrained spherical deconvolution (CSD), thereby improving the FOD as input for subsequent tractography. In the second, we introduce the fiber-to-bundle coherence (FBC), a measure for quantifying fiber alignment. The FBC is computed from a tractography result using the same PDE framework and provides a criterion for removing spurious fibers. We validate the proposed combination of CSD and enhancement on phantom data and on human data acquired with different scanning protocols. On the phantom data, we find that the PDE enhancements improve both local and global metrics of tractography results compared to CSD without enhancements. On the human data, we show that the enhancements allow for a better reconstruction of crossing fiber bundles and reduce the variability of the tractography output with respect to the acquisition parameters. Finally, we show that both the enhancement of the FODs and the use of the FBC measure on the tractography improve the stability with respect to different stochastic realizations of probabilistic tractography. This is demonstrated in a clinical application: the reconstruction of the optic radiation for epilepsy surgery planning.
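    The FBC idea above scores each streamline by how coherently neighbouring fibers run alongside it, and low-scoring fibers are discarded as spurious. A minimal sketch of that filtering idea, assuming a simple distance-weighted alignment score over segment pairs in place of the paper's PDE-based kernel (the function names and the `sigma`/`threshold` values are illustrative, not from the paper):

    ```python
    import numpy as np

    def fiber_alignment_score(fibers, sigma=2.0):
        """Score each fiber by how well its segments align with segments of
        *other* fibers (a crude stand-in for the kernel-based FBC).
        Each fiber is an (n_points, 3) array of coordinates."""
        mids, dirs, owner = [], [], []
        for i, f in enumerate(fibers):
            seg = np.diff(f, axis=0)
            norm = np.linalg.norm(seg, axis=1, keepdims=True)
            mids.append((f[:-1] + f[1:]) / 2)            # segment midpoints
            dirs.append(seg / np.maximum(norm, 1e-12))   # unit directions
            owner.append(np.full(len(seg), i))
        mids, dirs = np.concatenate(mids), np.concatenate(dirs)
        owner = np.concatenate(owner)

        scores = np.zeros(len(fibers))
        for i in range(len(fibers)):
            mine, other = owner == i, owner != i
            fm, fd = mids[mine], dirs[mine]
            # Gaussian distance weight between midpoints, times |cos| of directions.
            d2 = ((fm[:, None, :] - mids[other][None, :, :]) ** 2).sum(-1)
            w = np.exp(-d2 / (2 * sigma ** 2))
            cos = np.abs(fd @ dirs[other].T)
            scores[i] = (w * cos).sum() / np.maximum(w.sum(), 1e-12)
        return scores

    def remove_spurious(fibers, threshold=0.5):
        """Keep only fibers whose alignment score clears the threshold."""
        s = fiber_alignment_score(fibers)
        return [f for f, sc in zip(fibers, s) if sc >= threshold]
    ```

    On a toy bundle of parallel streamlines plus one perpendicular outlier, the outlier scores near zero and is removed, while the bundle survives.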

    Model based methods for locating, enhancing and recognising low resolution objects in video

    Visual perception is our most important sense, enabling us to detect and recognise objects even in low-detail video scenes. While humans perform such object detection and recognition tasks reliably, most computer vision algorithms struggle with wide-angle surveillance videos, where low resolution and poor object detail make automatic processing difficult. Additional problems arise from varying pose and lighting conditions as well as non-cooperative subjects. All these constraints complicate automatic scene interpretation of surveillance video, including object detection, tracking and object recognition. The aim of this thesis is therefore to detect, enhance and recognise objects by incorporating a priori information and by using model-based approaches. Motivated by the increasing demand for automatic methods for object detection, enhancement and recognition in video surveillance, different aspects of the video processing task are investigated, with a focus on human faces. In particular, the challenge of fully automatic face pose and shape estimation is tackled by fitting a deformable generic 3D face model under varying pose and lighting conditions. Principal Component Analysis (PCA) is used to build an appearance model, which is then employed within a particle-filter-based approach to fit the 3D face mask to the image. This recovers face pose and person-specific shape information simultaneously. Experiments demonstrate its use at different resolutions and under varying pose and lighting conditions. Following that, a combined tracking and super-resolution approach enhances the quality of poor-detail video objects. A 3D object mask is subdivided such that every mask triangle is smaller than a pixel when projected into the image, and is then used for model-based tracking. The mask subdivision then allows for super-resolution of the object by combining several video frames. This approach achieves better results than traditional super-resolution methods, without the use of interpolation or deblurring. Lastly, object recognition is performed in two different ways. The first recognition method is applied to characters and used for license plate recognition. A novel character model is proposed to create different appearances, which are then matched with the image of an unknown character for recognition. This allows for simultaneous character segmentation and recognition, and high recognition rates are achieved for low-resolution characters down to only five pixels in size. While this approach is only feasible for objects with a limited number of different appearances, such as characters, the second recognition method is applicable to any object, including human faces. Here, a generic 3D face model is automatically fitted to an image of a human face and recognition is performed at the mask level rather than the image level. This approach requires neither an initial pose estimation nor the selection of feature points; face alignment is provided implicitly by the mask-fitting process.
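    The character-recognition step above matches generated character appearances against an image patch of an unknown character. A toy sketch of that matching idea, using tiny hand-made glyph bitmaps and normalised cross-correlation in place of the thesis's parametric character model (the glyphs and function names are illustrative assumptions):

    ```python
    import numpy as np

    # Tiny hand-made 5x3 glyph "models" (1 = ink). A real system would render
    # many appearance variants per character from a parametric model.
    GLYPHS = {
        "0": np.array([[1, 1, 1], [1, 0, 1], [1, 0, 1], [1, 0, 1], [1, 1, 1]], float),
        "1": np.array([[0, 1, 0], [1, 1, 0], [0, 1, 0], [0, 1, 0], [1, 1, 1]], float),
        "7": np.array([[1, 1, 1], [0, 0, 1], [0, 1, 0], [0, 1, 0], [0, 1, 0]], float),
    }

    def ncc(a, b):
        """Normalised cross-correlation between two equally sized patches."""
        a = a - a.mean()
        b = b - b.mean()
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float((a * b).sum() / denom) if denom else 0.0

    def recognise(patch):
        """Return the best-matching character label and its match score."""
        return max(((c, ncc(patch, g)) for c, g in GLYPHS.items()),
                   key=lambda t: t[1])
    ```

    Because the templates are compared at such a small size, this kind of matching remains meaningful for characters only a few pixels tall, which is the regime the thesis targets.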

    Deep Learning for Audio Signal Processing

    Given the recent surge in developments of deep learning, this article provides a review of the state-of-the-art deep learning techniques for audio signal processing. Speech, music, and environmental sound processing are considered side by side, in order to point out similarities and differences between the domains, highlighting general methods, problems, key references, and potential for cross-fertilization between areas. The dominant feature representations (in particular, log-mel spectra and raw waveform) and deep learning models are reviewed, including convolutional neural networks, variants of the long short-term memory architecture, as well as more audio-specific neural network models. Subsequently, prominent deep learning application areas are covered, i.e., audio recognition (automatic speech recognition, music information retrieval, environmental sound detection, localization and tracking) and synthesis and transformation (source separation, audio enhancement, generative models for speech, sound, and music synthesis). Finally, key issues and future questions regarding deep learning applied to audio signal processing are identified. Comment: 15 pages, 2 pdf figures.
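    Of the feature representations mentioned, the log-mel spectrogram is the workhorse. A self-contained sketch of how one is typically computed (frame, window, power spectrum, triangular mel filterbank, log), using the common HTK-style mel formula; the parameter values are typical defaults, not taken from the article:

    ```python
    import numpy as np

    def hz_to_mel(f):
        return 2595.0 * np.log10(1.0 + f / 700.0)

    def mel_to_hz(m):
        return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

    def log_mel_spectrogram(x, sr, n_fft=512, hop=128, n_mels=40):
        """Frame -> Hann window -> |FFT|^2 -> mel filterbank -> log."""
        # Slice the 1-D signal into overlapping windowed frames.
        n_frames = 1 + (len(x) - n_fft) // hop
        idx = np.arange(n_fft)[None, :] + hop * np.arange(n_frames)[:, None]
        frames = x[idx] * np.hanning(n_fft)
        power = np.abs(np.fft.rfft(frames, axis=1)) ** 2  # (n_frames, n_fft//2+1)

        # Triangular filters spaced uniformly on the mel scale.
        mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mels + 2)
        bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
        fb = np.zeros((n_mels, n_fft // 2 + 1))
        for m in range(1, n_mels + 1):
            l, c, r = bins[m - 1], bins[m], bins[m + 1]
            for k in range(l, c):
                fb[m - 1, k] = (k - l) / max(c - l, 1)   # rising edge
            for k in range(c, r):
                fb[m - 1, k] = (r - k) / max(r - c, 1)   # falling edge
        return np.log(power @ fb.T + 1e-10)              # (n_frames, n_mels)
    ```

    For a pure 440 Hz tone the energy lands in the low mel bands, illustrating the mel scale's finer resolution at low frequencies.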

    Image enhancement from a stabilised video sequence

    The aim of video stabilisation is to create a new video sequence in which the motions (i.e. rotations, translations) and scale differences between frames (or parts of a frame) have effectively been removed. These stabilisation effects can be obtained via digital video processing techniques that use information extracted from the video sequence itself, with no need for additional hardware or knowledge about the camera's physical motion. A video sequence usually contains a large overlap between successive frames, and regions of the same scene are sampled at different positions. In this paper, this multiple sampling is combined to achieve images with a higher spatial resolution. Higher-resolution imagery plays an important role in assisting in the identification of people, vehicles, structures or objects of interest captured by surveillance cameras or by video cameras used in face recognition, traffic monitoring, traffic law enforcement, driver assistance and automatic vehicle guidance systems.
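    The multi-frame fusion described above can be illustrated with the simplest "shift-and-add" scheme: once frames are registered to sub-pixel accuracy, each low-resolution pixel is placed on a finer grid and overlapping samples are averaged. A toy sketch that assumes the registration offsets are already known (real stabilisation must estimate them from the sequence):

    ```python
    import numpy as np

    def shift_and_add(frames, shifts, scale=2):
        """Fuse registered low-resolution frames into one higher-resolution image.
        `shifts[i]` is frame i's (dy, dx) offset in high-res pixels, here assumed
        to be known integers on the fine grid. Each LR pixel is accumulated at
        its HR position and overlapping samples are averaged."""
        h, w = frames[0].shape
        hr = np.zeros((h * scale, w * scale))
        cnt = np.zeros_like(hr)
        for f, (dy, dx) in zip(frames, shifts):
            hr[dy::scale, dx::scale] += f
            cnt[dy::scale, dx::scale] += 1
        cnt[cnt == 0] = 1          # leave unobserved HR pixels at zero
        return hr / cnt
    ```

    With one frame per sub-pixel phase, the high-resolution image is recovered exactly; with fewer frames or noisy registration, interpolation or regularisation would be needed to fill the gaps.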

    Physics with the ALICE experiment

    The ALICE experiment at the LHC collects data in pp collisions at √s = 0.9, 2.76 and 7 TeV and in PbPb collisions at 2.76 TeV. Highlights of the detector performance and an overview of experimental results measured with ALICE in pp and AA collisions are presented in this paper. Physics with proton-proton collisions is focused on hadron spectroscopy at low and moderate p_T. Measurements with lead-lead collisions are shown in comparison with those in pp collisions, and the properties of hot quark matter are discussed. Comment: Presented at the Conference of the Nuclear Physics Division of the Russian Academy of Science, 11-25.11.2011, ITEP, Moscow. 16 pages, 14 figures.

    An Electron-Tracking Compton Telescope for a Survey of the Deep Universe by MeV gamma-rays

    Photon imaging for MeV gamma rays suffers serious difficulties due to huge backgrounds and unclear images, which originate from incompleteness in determining the physical parameters of Compton scattering in detection, e.g., the lack of directional information on the recoil electrons. The most recent major mission/instrument in the MeV band, the Compton Gamma Ray Observatory/COMPTEL, which was a Compton camera (CC), detected a mere ~30 persistent sources. This is in stark contrast with the ~2000 sources in the GeV band. Here we report the performance of an Electron-Tracking Compton Camera (ETCC), and show that it has good potential to break through this stagnation in MeV gamma-ray astronomy. The ETCC provides all the parameters of Compton scattering by measuring the 3-D recoil-electron tracks; thus the Scatter Plane Deviation (SPD) lost in CCs is recovered. The energy loss rate (dE/dx), which CCs cannot measure, is also obtained, and is found to be indeed helpful for reducing the background under conditions similar to those in space. Accordingly, the significance of gamma detection is improved severalfold. On the other hand, the SPD is essential to determine the point-spread function (PSF) quantitatively. The SPD resolution is improved close to the theoretical limit for multiple scattering of recoil electrons. With such a well-determined PSF, we demonstrate for the first time that it is possible to provide reliable sensitivity in Compton imaging without utilizing an optimization algorithm. As such, this study highlights the fundamental weak points of CCs. In contrast, we demonstrate the possibility of the ETCC reaching a sensitivity below 1×10⁻¹² erg cm⁻² s⁻¹ at 1 MeV. Comment: 19 pages, 12 figures, accepted to the Astrophysical Journal.
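    The kinematics underlying both CCs and the ETCC follow from the Compton formula: the two measured energies fix the photon scattering angle (an event cone), and the ETCC's recoil-electron track then reduces that cone to an arc. A small sketch of the energy-to-angle step (energies in MeV; the function name is illustrative):

    ```python
    import math

    ME_C2 = 0.511  # electron rest energy [MeV]

    def compton_scatter_angle(e_gamma_scattered, e_electron):
        """Photon scattering angle (radians) from the two measured energies.
        A conventional Compton camera stops here, leaving a cone of possible
        incident directions; an ETCC also measures the recoil-electron track,
        which resolves the azimuthal ambiguity of the cone."""
        e_incident = e_gamma_scattered + e_electron
        cos_theta = 1.0 - ME_C2 * (1.0 / e_gamma_scattered - 1.0 / e_incident)
        if not -1.0 <= cos_theta <= 1.0:
            raise ValueError("unphysical event: energies inconsistent "
                             "with Compton kinematics")
        return math.acos(cos_theta)
    ```

    The unphysical-event check is itself a useful background rejection: energy pairs that cannot come from a single Compton scatter are discarded.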

    Pion Freeze-Out Time in Pb+Pb Collisions at 158 A GeV/c Studied via pi-/pi+ and K-/K+ Ratios

    The effect of the final-state Coulomb interaction on particles produced in Pb+Pb collisions at 158 A GeV/c has been investigated in the WA98 experiment through the study of the pi-/pi+ and K-/K+ ratios measured as a function of transverse mass. While the ratio for kaons shows no significant transverse-mass dependence, the pi-/pi+ ratio is enhanced at small transverse mass, with an enhancement that increases with centrality. A silicon pad detector located near the target is used to estimate the contribution of hyperon decays to the pi-/pi+ ratio. Comparison of the results with predictions of the RQMD model, in which the Coulomb interaction has been incorporated, makes it possible to place constraints on the time of pion freeze-out. Comment: 9 pages, 12 figures.
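    The measured observable above, a charge ratio binned in transverse mass, can be sketched as a simple histogram ratio with Poisson error propagation (a toy illustration of the observable, not the WA98 analysis chain):

    ```python
    import numpy as np

    def charge_ratio_vs_mt(mt_minus, mt_plus, bins):
        """pi-/pi+ (or K-/K+) yield ratio in bins of transverse mass.
        Inputs are per-particle transverse-mass values; errors use the usual
        Poisson propagation for a ratio of independent counts:
        sigma_R = R * sqrt(1/N_minus + 1/N_plus)."""
        n_minus, _ = np.histogram(mt_minus, bins)
        n_plus, _ = np.histogram(mt_plus, bins)
        ratio = n_minus / np.maximum(n_plus, 1)
        err = ratio * np.sqrt(1.0 / np.maximum(n_minus, 1)
                              + 1.0 / np.maximum(n_plus, 1))
        return ratio, err
    ```

    A low-transverse-mass excess of negative pions, as in the paper, would show up as a first-bin ratio above unity.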