
    Quadruplet Selection Methods for Deep Embedding Learning

    Recognition of objects with subtle differences arises in many practical applications, such as car model recognition and maritime vessel identification. To discriminate objects in fine-grained detail, we focus on deep embedding learning within a multi-task learning framework, in which the hierarchical labels (coarse and fine) of the samples are used both for classification and in a quadruplet-based loss function. To improve the discriminative strength of the learned features, we present a novel selection method designed specifically for the four training samples of a quadruplet. Experiments show that selecting very hard negative samples together with relatively easy positive ones from the same coarse and fine classes significantly improves performance metrics on a fine-grained dataset compared to selecting the quadruplet samples randomly. The feature embedding learned by the proposed method achieves favorable performance against its state-of-the-art counterparts. Comment: 6 pages, 2 figures, accepted by IEEE ICIP 201
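
    As a rough illustration of the selection idea described above, the sketch below picks, for a given anchor, the closest positive from the same fine class together with the closest negatives from inside and outside the anchor's coarse class. The function and its interface are assumptions for illustration, not the authors' implementation.

    ```python
    import numpy as np

    def select_quadruplet(features, fine_labels, coarse_labels, anchor):
        """Pick (anchor, easy positive, very hard negative from the anchor's
        coarse class, negative from another coarse class).

        Illustrative sketch; assumes each category is represented."""
        d = np.linalg.norm(features - features[anchor], axis=1)
        diff_fine = fine_labels != fine_labels[anchor]
        same_fine = ~diff_fine
        same_fine[anchor] = False                          # exclude the anchor itself
        same_coarse = coarse_labels == coarse_labels[anchor]

        def nearest(mask):                                 # closest sample allowed by `mask`
            idx = np.flatnonzero(mask)
            return idx[np.argmin(d[idx])]

        positive = nearest(same_fine)                      # easy positive: nearest same-fine sample
        hard_negative = nearest(same_coarse & diff_fine)   # very hard: same coarse, different fine
        easy_negative = nearest(~same_coarse)              # negative from a different coarse class
        return anchor, positive, hard_negative, easy_negative
    ```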

    Statistically segregated k-space sampling for accelerating multiple-acquisition MRI

    A central limitation of multiple-acquisition magnetic resonance imaging (MRI) is the degradation in scan efficiency as the number of distinct datasets grows. Sparse recovery techniques can alleviate this limitation via randomly undersampled acquisitions. A common sampling strategy prescribes for each acquisition a different random pattern drawn from a common sampling density. However, naive random patterns often contain gaps or clusters across the acquisition dimension that can in turn degrade reconstruction quality or reduce scan efficiency. To address this problem, a statistically segregated sampling method is proposed for multiple-acquisition MRI. This method generates multiple patterns sequentially, while adaptively modifying the sampling density to minimize k-space overlap across patterns. As a result, it improves incoherence across acquisitions while maintaining a similar sampling density along the radial dimension of k-space. Comprehensive simulations and in vivo results are presented for phase-cycled balanced steady-state free precession and multi-echo T2-weighted imaging. Segregated sampling achieves significantly improved quality in both Fourier and compressed-sensing reconstructions of multiple-acquisition datasets.
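
    The sequential density adaptation can be pictured with a short sketch: each mask is drawn from the current density, and the density at the chosen k-space locations is then suppressed so subsequent acquisitions tend to sample elsewhere. The suppression factor and interface here are assumptions for illustration, not the paper's algorithm.

    ```python
    import numpy as np

    def segregated_masks(density, n_acq, n_samples, suppress=0.1, seed=None):
        """Draw one undersampling mask per acquisition, suppressing the
        density at already-chosen locations so later masks avoid overlap.

        Illustrative sketch; `suppress` is a hypothetical knob."""
        rng = np.random.default_rng(seed)
        density = np.asarray(density, dtype=float).copy()
        masks = []
        for _ in range(n_acq):
            p = density / density.sum()                # renormalize current density
            chosen = rng.choice(density.size, size=n_samples, replace=False, p=p)
            mask = np.zeros(density.size, dtype=bool)
            mask[chosen] = True
            masks.append(mask)
            density[chosen] *= suppress                # discourage re-sampling these locations
        return np.stack(masks)
    ```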

    Deep iterative reconstruction for phase retrieval

    The classical phase retrieval problem is the recovery of a constrained image from the magnitude of its Fourier transform. Although there are several well-known phase retrieval algorithms, including the hybrid input-output (HIO) method, reconstruction performance is generally sensitive to initialization and measurement noise. Recently, deep neural networks (DNNs) have been shown to provide state-of-the-art performance in solving several inverse problems such as denoising, deconvolution, and super-resolution. In this work, we develop a phase retrieval algorithm that utilizes two DNNs together with the model-based HIO method. First, a DNN is trained to remove the HIO artifacts and is used iteratively with the HIO method to improve the reconstructions. After this iterative phase, a second DNN is trained to remove the remaining artifacts. Numerical results demonstrate the effectiveness of our approach, which has little additional computational cost compared to the HIO method. Our approach not only achieves state-of-the-art reconstruction performance but is also more robust to different initializations and noise levels. (C) 2019 Optical Society of America
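
    The alternation between classical HIO iterations and a learned artifact-removal step might look roughly like the sketch below, where `denoiser` stands in for the trained DNN. The constants and loop structure are illustrative assumptions, not the paper's exact scheme.

    ```python
    import numpy as np

    def dnn_assisted_hio(magnitude, support, denoiser, outer_iters=5,
                         hio_iters=50, beta=0.9, seed=None):
        """Alternate classical HIO with a learned artifact-removal step.

        magnitude: measured Fourier magnitudes; support: boolean mask of
        the object-domain support; denoiser: any image -> image callable.
        Illustrative sketch, not the paper's algorithm."""
        rng = np.random.default_rng(seed)
        x = rng.random(magnitude.shape)                  # random initialization
        for _ in range(outer_iters):
            for _ in range(hio_iters):
                X = np.fft.fft2(x)
                # impose the measured magnitude, keep the current phase
                y = np.real(np.fft.ifft2(magnitude * np.exp(1j * np.angle(X))))
                bad = ~support | (y < 0)                 # object-domain constraint violations
                x = np.where(bad, x - beta * y, y)       # standard HIO update
            x = denoiser(x)                              # DNN removes HIO artifacts
        return x
    ```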

    Face Recognition Based on Embedding Learning

    Face recognition is a key task of computer vision research that has been employed in various security and surveillance applications. The importance of this task has recently risen with improvements in camera sensor quality and the growing coverage of camera networks set up throughout cities. Moreover, biometry-based technologies have been developed over the last three decades and are now available on many devices, such as mobile phones. The goal is to identify people based on specific physiological landmarks. Faces are one of the most commonly utilized landmarks because facial recognition systems do not require voluntary actions such as placing hands or fingers on a sensor, unlike other biometric methods. To inhibit cyber-crime and identity theft, effective methods must be developed. In this paper, we address the face recognition problem by visually matching any face image with previously captured ones. First, considering the challenges posed by optical artifacts and environmental factors such as illumination changes and low resolution, we tackle these problems using convolutional neural networks (CNNs) with a state-of-the-art architecture, ResNet. Second, we make use of a large amount of face image data and train these networks with the help of our proposed loss function. CNNs have proven effective in visual recognition compared to traditional methods based on hand-crafted features. In this work, we further improve performance by introducing a novel training policy that utilizes quadruplet pairs. To ameliorate the learning process, we exploit several methods for generating quadruplet pairs from the dataset and define a new loss function corresponding to the generation policy. With the help of the proposed selection methods, we obtain improvements in classification accuracy, recall, and normalized mutual information. Finally, we report results for the end-to-end face recognition system, performing both detection and classification.
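
    Quadruplet losses of the general kind described here are often written as two margin terms: one against the anchor-negative distance and one against the distance between the two negatives. The PyTorch sketch below shows this widely used formulation; the margins and function signature are hypothetical, not the specific loss proposed in the paper.

    ```python
    import torch
    import torch.nn.functional as F

    def quadruplet_loss(anchor, positive, neg1, neg2, margin1=1.0, margin2=0.5):
        """Common quadruplet loss: the anchor-positive distance should
        undercut the anchor-negative distance by margin1 and the
        negative-negative distance by margin2. Margins are illustrative."""
        d_ap = F.pairwise_distance(anchor, positive)
        d_an = F.pairwise_distance(anchor, neg1)
        d_nn = F.pairwise_distance(neg1, neg2)
        return (F.relu(d_ap - d_an + margin1) + F.relu(d_ap - d_nn + margin2)).mean()
    ```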

    A method for quadruplet sample selection in deep feature learning

    Recently, deep learning based feature learning methods have been developed to recognize objects in fine-grained detail. To increase the discriminative power and robustness of the utilized features, this paper proposes a sample selection methodology for quadruplet-based feature learning. The feature space is shaped using the hierarchical structure of the training set. During training, quadruplets are selected by considering the distances between samples in the feature space, improving the effectiveness of training; a batch-level sketch of this idea follows below. Experiments show that the proposed method improves fine-grained recognition accuracy.
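
    Complementing the per-anchor sketch given earlier in this list, a batch-level version of distance-based quadruplet mining might look like the following PyTorch sketch. The masking logic and interface are illustrative assumptions, not the paper's procedure.

    ```python
    import torch

    def mine_quadruplets(embeddings, fine_labels, coarse_labels):
        """For each anchor in the batch, take the nearest same-fine-class
        positive, the nearest same-coarse/different-fine negative, and the
        nearest different-coarse negative. Illustrative sketch only."""
        dist = torch.cdist(embeddings, embeddings)         # pairwise distances
        quads = []
        for a in range(embeddings.size(0)):
            same_fine = fine_labels.eq(fine_labels[a])
            same_coarse = coarse_labels.eq(coarse_labels[a])
            pos_mask = same_fine.clone()
            pos_mask[a] = False                            # exclude the anchor itself
            hard_mask = same_coarse & ~same_fine
            easy_mask = ~same_coarse
            if not (pos_mask.any() and hard_mask.any() and easy_mask.any()):
                continue                                   # batch lacks a needed class
            d = dist[a]
            p = d.masked_fill(~pos_mask, float("inf")).argmin().item()
            n1 = d.masked_fill(~hard_mask, float("inf")).argmin().item()
            n2 = d.masked_fill(~easy_mask, float("inf")).argmin().item()
            quads.append((a, p, n1, n2))
        return quads
    ```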

    The effect of swimming training on adrenomedullin levels, oxidative stress variables, and gastrocnemius muscle contractile properties in hypertensive rats

    Introduction/Aim: Regular exercise may have beneficial effects on high blood pressure, as shown in different types of experimental hypertension models in rats. The present study investigates the effects of 6-week swimming training on blood pressure, oxidative stress variables of selected tissues, serum adrenomedullin (ADM) levels, and in situ muscle contraction in rats with hypertension induced by Nω-nitro-L-arginine methyl ester hydrochloride (L-NAME), an inhibitor of endothelial nitric oxide synthase (eNOS). Materials and Methods: Twenty-six male Sprague Dawley rats, 8 weeks of age, were randomly divided into four groups: (I) normotensive (C), (II) normotensive + exercise (E), (III) hypertensive (L), and (IV) hypertensive + exercise (LE). Hypertension was induced by oral administration of L-NAME (60 mg/kg) for 6 weeks. Exercise was performed 5 times (1 h each) per week for 6 weeks. At the end of the experiment, blood and tissue samples (gastrocnemius muscle, heart, kidney, and thoracic aorta) were collected after the contractile properties of the gastrocnemius muscle were determined in situ. Oxidative stress variables (e.g., lipid peroxidation and antioxidant enzyme activities) were measured in the collected tissues, and ADM levels were measured in serum. Results: Six-week L-NAME administration per se (Group L) led to a significant increase in systolic and diastolic blood pressure compared to the other groups. Importantly, 6-week exercise had a protective effect against high blood pressure in the rats that received L-NAME (Group LE). ADM levels were lower in the rats that received L-NAME than in the control group. L-NAME increased lipid peroxidation in the thoracic aorta, decreased superoxide dismutase in the heart, kidney, and muscle, and decreased catalase and glutathione in the heart. However, the exercise intervention did not have a protective effect against L-NAME-mediated oxidative damage in the collected tissues. Conclusion: The 6-week exercise intervention protected rats from high blood pressure but did not ameliorate the decreased ADM levels.

    Digital computation of linear canonical transforms

    We deal with the problem of efficient and accurate digital computation of the samples of the linear canonical transform (LCT) of a function, from the samples of the original function. Two approaches are presented and compared. The first is based on decomposition of the LCT into chirp multiplication, Fourier transformation, and scaling operations. The second is based on decomposition of the LCT into a fractional Fourier transform followed by scaling and chirp multiplication. Both algorithms take roughly N log N time, where N is the time-bandwidth product of the signals. The only essential deviation from exactness arises from the approximation of a continuous Fourier transform with the discrete Fourier transform. Thus, the algorithms compute LCTs with a performance similar to that of the fast Fourier transform algorithm in computing the Fourier transform, both in terms of speed and accuracy.
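
    A minimal NumPy sketch of the first decomposition, assuming a unit-determinant parameter matrix [[a, b], [c, d]] with b ≠ 0 (c is implied by ad - bc = 1): the input is chirp-multiplied, Fourier transformed with the evaluation points scaled by 1/b, and chirp-multiplied again. Grid choices and normalization are simplifying assumptions; the transform is written as a direct O(N^2) sum for clarity, whereas an FFT with appropriate pre- and post-phase factors attains the N log N cost discussed above.

    ```python
    import numpy as np

    def lct_chirp_ft_chirp(x, a, b, d, dt):
        """Chirp-multiply, Fourier transform, scale, chirp-multiply again.

        Illustrative sketch of the decomposition, not the paper's exact
        algorithm; requires b != 0."""
        N = len(x)
        t = (np.arange(N) - N // 2) * dt                 # input sample grid
        du = abs(b) / (N * dt)                           # output spacing tied to the FT scaling
        u = (np.arange(N) - N // 2) * du
        xin = x * np.exp(1j * np.pi * (a / b) * t**2)    # input chirp multiplication
        kernel = np.exp(-2j * np.pi * np.outer(u, t) / b)
        X = kernel @ xin * dt                            # Fourier transform evaluated at u/b
        out = np.sqrt(1 / (1j * b)) * np.exp(1j * np.pi * (d / b) * u**2) * X
        return out, u                                    # output samples and their grid
    ```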

    Fast Algorithms for Digital Computation of Linear Canonical Transforms

    Fast and accurate algorithms for digital computation of linear canonical transforms (LCTs) are discussed. Direct numerical integration takes O(N^2) time, where N is the number of samples. Designing fast and accurate algorithms that take O(N log N) time is of importance for practical utilization of LCTs. There are several approaches to designing fast algorithms. One approach is to decompose an arbitrary LCT into blocks, all of which have fast implementations, thus obtaining an overall fast algorithm. Another approach is to define a discrete LCT (DLCT), based on which a fast LCT (FLCT) is derived to efficiently compute LCTs. This strategy is similar to that employed for the Fourier transform, where one defines the discrete Fourier transform (DFT), which is then computed with the fast Fourier transform (FFT). A third, hybrid approach involves a DLCT but employs a decomposition-based method to compute it. Algorithms for two-dimensional and complex-parametered LCTs are also discussed.