
    Locating Multiple Multi-scale Electromagnetic Scatterers by A Single Far-field Measurement

    Two inverse scattering schemes were recently developed in \cite{LiLiuShangSun} for locating multiple electromagnetic (EM) scatterers of small size and of regular size, respectively, compared with the detecting EM wavelength. Both schemes make use of a single far-field measurement. The scheme for locating regular-size scatterers requires {\it a priori} knowledge of the possible shapes, orientations and sizes of the underlying scatterer components. In this paper, we extend that imaging scheme to a much more practical setting by relaxing the requirement on the orientations and sizes. We also develop an imaging scheme for locating multiple multi-scale EM scatterers, which may include components of both regular size and small size at the same time. For the second scheme, a novel local re-sampling technique is developed. Furthermore, more robust and accurate reconstructions can be achieved for the second scheme if an additional far-field measurement is used. Rigorous mathematical justifications are provided and numerical results are presented to demonstrate the effectiveness and the promising features of the proposed imaging schemes. Comment: Any comments are welcome.
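
    The abstract does not spell out the indicator functions behind these schemes. As a rough, hypothetical illustration of the general idea of locating small scatterers from a single far-field pattern, the sketch below evaluates a standard sampling-type indicator I(z) = |(1/N) Σ_n u_∞(x̂_n) e^{ik x̂_n·z}| on a grid, using synthetic Born-type point-scatterer data. All names, the data model, and the parameter values are assumptions, not the authors' scheme.

```python
# Hypothetical illustration: locate small scatterers from ONE far-field pattern
# with a sampling-type indicator I(z) = |(1/N) sum_n u_inf(xhat_n) exp(i k xhat_n . z)|.
# Synthetic data under a Born-type point-scatterer model; not the authors' scheme.
import numpy as np

k = 20.0                                             # wavenumber of the detecting EM field
true_locs = np.array([[0.5, -0.3], [-0.6, 0.4]])     # unknown scatterer centres (assumed)
strengths = np.array([1.0, 0.8])                     # scattering strengths (assumed)

# Observation directions on the unit circle (a single far-field measurement).
N = 256
phi = 2 * np.pi * np.arange(N) / N
xhat = np.stack([np.cos(phi), np.sin(phi)], axis=1)              # (N, 2)

# Far-field pattern of a cluster of small scatterers (leading-order model).
u_inf = (strengths * np.exp(-1j * k * xhat @ true_locs.T)).sum(axis=1)   # (N,)

# Sampling grid and indicator: large values flag likely scatterer locations.
g = np.linspace(-1.0, 1.0, 101)
Z = np.stack(np.meshgrid(g, g, indexing="ij"), axis=-1).reshape(-1, 2)   # (M, 2)
indicator = np.abs(np.exp(1j * k * Z @ xhat.T) @ u_inf) / N              # (M,)

best = Z[np.argsort(indicator)[-5:]]                 # grid points with the largest indicator
print("indicator peaks near:", best)
```

    Grid points where the indicator peaks are taken as estimated scatterer locations; the regular-size and multi-scale schemes of the paper build on more refined machinery (shape priors and local re-sampling) than this toy indicator.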

    Asymptotic efficiency and finite-sample properties of the generalized profiling estimation of parameters in ordinary differential equations

    Ordinary differential equations (ODEs) are commonly used to model the dynamic behavior of a system. Because many parameters are unknown and have to be estimated from the observed data, there is growing interest in statistics in developing efficient estimation procedures for these parameters. Among the methods proposed in the literature, the generalized profiling estimation method developed by Ramsay and colleagues is particularly promising for its computational efficiency and good performance. In this approach, the ODE solution is approximated by a linear combination of basis functions. The coefficients of the basis functions are estimated by a penalized smoothing procedure with an ODE-defined penalty. However, the statistical properties of this procedure are not known. In this paper, we first give an upper bound on the uniform norm of the difference between the true solutions and their approximations. Then we use this bound to prove the consistency and asymptotic normality of the estimation procedure. We show that the asymptotic covariance matrix is the same as that of maximum likelihood estimation; the procedure is therefore asymptotically efficient. For a fixed sample and fixed basis functions, we study the limiting behavior of the approximation as the smoothing parameter tends to infinity. We propose an algorithm to choose the smoothing parameters and a method to compute the deviation of the spline approximation from the solution without solving the ODEs. Comment: Published in the Annals of Statistics (http://www.imstat.org/aos/) at http://dx.doi.org/10.1214/09-AOS724 by the Institute of Mathematical Statistics (http://www.imstat.org).
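
    As a rough sketch of how an ODE-defined penalty and profiling fit together, the toy code below estimates the decay rate θ in x'(t) = -θ x(t): the inner step fits basis coefficients by penalized least squares with an ODE-defined penalty, and the outer step profiles the data-fit criterion over θ. The monomial basis, the λ value, and the toy ODE are illustrative assumptions, not the settings studied in the paper.

```python
# Hypothetical sketch of generalized profiling for the toy ODE x'(t) = -theta * x(t).
# Inner step: penalized smoothing with an ODE-defined penalty (quadratic in c, solved by
# linear least squares). Outer step: profile the data-fit criterion over theta.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
theta_true = 1.5
t_obs = np.linspace(0.0, 2.0, 41)
y = 2.0 * np.exp(-theta_true * t_obs) + 0.05 * rng.standard_normal(t_obs.size)

degree = 6                                    # monomial basis: x(t) ~ Phi(t) @ c
t_quad = np.linspace(0.0, 2.0, 201)           # quadrature points for the penalty

def basis(t):
    return np.vander(t, degree + 1, increasing=True)          # [1, t, ..., t^degree]

def basis_deriv(t):
    V = np.vander(t, degree + 1, increasing=True)
    # d/dt t^j = j * t^(j-1); the constant column has zero derivative
    return np.hstack([np.zeros((t.size, 1)), V[:, :-1] * np.arange(1, degree + 1)])

Phi_obs, Phi_q, dPhi_q = basis(t_obs), basis(t_quad), basis_deriv(t_quad)

def inner_coefs(theta, lam=1e2):
    """Penalized smoothing: min_c ||y - Phi c||^2 + lam * ||dPhi c + theta * Phi c||^2."""
    A = np.vstack([Phi_obs, np.sqrt(lam) * (dPhi_q + theta * Phi_q)])
    b = np.concatenate([y, np.zeros(t_quad.size)])
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    return c

def profile_criterion(theta):
    c = inner_coefs(theta)
    return np.sum((y - Phi_obs @ c) ** 2)     # outer (profiled) data-fit criterion

res = minimize_scalar(profile_criterion, bounds=(0.1, 5.0), method="bounded")
print("estimated theta:", res.x)              # expected near 1.5 for this toy setup
```

    The paper's asymptotic results concern this kind of two-level scheme with spline bases; the sketch only conveys the structure of the inner penalized fit and the outer profiled criterion.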

    The effect of conditional EFNB1 deletion in the T cell compartment on T cell development and function

    Background: Eph kinases are the largest family of cell surface receptor tyrosine kinases. The ligands of Ephs, ephrins (EFNs), are also cell surface molecules. Ephs interact with EFNs, transmitting signals in both directions, i.e., from Ephs to EFNs and from EFNs to Ephs. EFNB1 is known to be able to co-stimulate T cells in vitro and to modulate thymocyte development in a model of foetal thymus organ culture. To further understand the role of EFNB1 in T cell immunity, we generated T-cell-specific EFNB1 gene knockout mice and assessed T cell development and function in these mice.

    Results: The KO mice were of normal size, with normal cellularity in the thymus and spleen and normal T cell subpopulations in these organs. Bone marrow progenitors from KO mice and WT control mice repopulated the host spleen T cell pool to similar extents. The activation and proliferation of KO T cells were comparable to those of control T cells. Naïve KO CD4 cells showed an ability to differentiate into Th1, Th2, Th17 and Treg cells similar to that of control CD4 cells.

    Conclusions: Our results suggest that the function of EFNB1 in the T cell compartment could be compensated for by other members of the EFN family, and that such redundancy safeguards the pivotal roles of EFNB1 in T cell development and function.

    Patching Weak Convolutional Neural Network Models through Modularization and Composition

    Despite great success in many applications, deep neural networks are not always robust in practice. For instance, a convolutional neural network (CNN) model for classification tasks often performs unsatisfactorily on some particular classes of objects. In this work, we are concerned with patching the weak part of a CNN model instead of improving it through costly retraining of the entire model. Inspired by the fundamental concepts of modularization and composition in software engineering, we propose a compressed modularization approach, CNNSplitter, which decomposes a strong CNN model for N-class classification into N smaller CNN modules. Each module is a sub-model containing a part of the convolution kernels of the strong model. To patch a weak CNN model that performs unsatisfactorily on a target class (TC), we compose the weak CNN model with the corresponding module obtained from a strong CNN model. The ability of the weak CNN model to recognize the TC can thus be improved through patching. Moreover, the ability to recognize non-TCs is also improved, as samples misclassified as the TC can be correctly classified as non-TCs. Experimental results with two representative CNNs on three widely-used datasets show that the average improvements on the TC in terms of precision and recall are 12.54% and 2.14%, respectively. Moreover, patching improves the accuracy on non-TCs by 1.18%. The results demonstrate that CNNSplitter can patch a weak CNN model through modularization and composition, thus providing a new solution for developing robust CNN models. Comment: Accepted at ASE'2
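
    The abstract does not give the exact composition rule, so the following is only a plausible, hypothetical sketch of the patching step operating on precomputed outputs: a weak model's class probabilities are combined with a target-class (TC) module taken from a stronger model, so that module-confident samples are routed to the TC and samples the weak model wrongly sends to the TC are redirected to its best non-TC class. Function names and the threshold are assumptions, not CNNSplitter's implementation.

```python
# Hypothetical sketch of patching-by-composition: combine a weak N-class model's
# probabilities with a target-class (TC) module from a stronger model.
import numpy as np

def patch_predictions(weak_probs, module_tc_scores, tc, threshold=0.5):
    """weak_probs: (batch, N) softmax outputs of the weak model.
    module_tc_scores: (batch,) probability, from the strong model's TC module,
    that each sample belongs to the target class."""
    non_tc = weak_probs.copy()
    non_tc[:, tc] = -np.inf
    # Module confident it is the TC -> predict TC; otherwise pick the weak
    # model's best non-TC class (so samples wrongly sent to the TC get corrected).
    return np.where(module_tc_scores >= threshold, tc, non_tc.argmax(axis=1))

# Toy usage with random outputs (3 classes, TC = 2).
rng = np.random.default_rng(1)
weak = rng.dirichlet(np.ones(3), size=5)
module = rng.random(5)
print(patch_predictions(weak, module, tc=2))
```

    A rule of this shape would explain the two effects reported in the abstract: recall on the TC improves when the module recognizes missed TC samples, and non-TC accuracy improves when samples misclassified as the TC are redirected.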

    Reusing Deep Neural Network Models through Model Re-engineering

    Training deep neural network (DNN) models, which has become an important task in today's software development, is often costly in terms of computational resources and time. Inspired by software reuse, building DNN models by reusing existing ones has gained increasing attention recently. Prior approaches to DNN model reuse have two main limitations: 1) reusing the entire model, when only a small part of the model's functionalities (labels) is required, incurs significant overhead (e.g., computational and time costs for inference), and 2) model reuse inherits the defects and weaknesses of the reused model, exposing the new system to security threats. To address these problems, we propose SeaM, a tool that re-engineers a trained DNN model to improve its reusability. Specifically, given a target problem and a trained model, SeaM utilizes a gradient-based search method to identify the model's weights that are relevant to the target problem. The re-engineered model, which retains only the relevant weights, is then reused to solve the target problem. Evaluation results on widely-used models show that the re-engineered models produced by SeaM contain only 10.11% of the original models' weights, resulting in a 42.41% reduction in inference time. For the target problem, the re-engineered models even outperform the original models in classification accuracy by 5.85%. Moreover, reusing the re-engineered models inherits an average of 57% fewer defects than reusing the entire models. We believe our approach to reducing reuse overhead and defect inheritance is an important step toward practical model reuse. Comment: Accepted by ICSE'2
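
    As a hypothetical sketch of the gradient-based weight search described above (not SeaM's actual algorithm), the code below learns a relaxed sigmoid mask over a frozen model's weights so that the masked model still solves a binary target problem while a sparsity term pushes most mask entries toward zero; weights whose mask stays above a cut-off would be retained in the re-engineered model. The stand-in model, loss weights, and data are all illustrative assumptions.

```python
# Hypothetical sketch: gradient-based search for target-relevant weights via a relaxed mask.
import torch
import torch.nn as nn

torch.manual_seed(0)

# A stand-in "trained" model (untrained here for brevity); its weights are frozen.
trained = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
for p in trained.parameters():
    p.requires_grad_(False)

# One mask logit per weight entry; sigmoid(mask) in (0, 1) approximates keep/drop.
mask_logits = [nn.Parameter(torch.zeros_like(p)) for p in trained.parameters()]
opt = torch.optim.Adam(mask_logits, lr=0.05)

x = torch.randn(256, 20)                        # toy data for the binary target problem
y = (x[:, 0] > 0).long()

def masked_forward(inp):
    h = inp
    it = iter(mask_logits)
    for layer in trained:
        if isinstance(layer, nn.Linear):
            w = layer.weight * torch.sigmoid(next(it))   # masked weights
            b = layer.bias * torch.sigmoid(next(it))     # masked biases
            h = torch.nn.functional.linear(h, w, b)
        else:
            h = layer(h)
    return h

for step in range(200):
    opt.zero_grad()
    task_loss = nn.functional.cross_entropy(masked_forward(x), y)
    sparsity = sum(torch.sigmoid(m).mean() for m in mask_logits)   # push masks toward 0
    (task_loss + 0.05 * sparsity).backward()
    opt.step()

kept = sum((torch.sigmoid(m) > 0.5).sum().item() for m in mask_logits)
total = sum(m.numel() for m in mask_logits)
print(f"retained weights: {kept}/{total}")
```

    The balance between the task loss and the sparsity term controls how many weights are judged relevant, which is the trade-off behind the reported reduction in model size and inference time.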