
    Traction force microscopy with optimized regularization and automated Bayesian parameter selection for comparing cells

    Adherent cells exert traction forces onto their environment, which allows them to migrate, to maintain tissue integrity, and to form complex multicellular structures. This traction can be measured in a perturbation-free manner with traction force microscopy (TFM). In TFM, traction is usually calculated by solving a linear system, which is complicated by undersampled input data, acquisition noise, and, for some methods, large condition numbers. Therefore, standard TFM algorithms employ either data filtering or regularization. However, these approaches require a manual selection of filter or regularization parameters and consequently exhibit a substantial degree of subjectiveness. This shortcoming is particularly serious when cells in different conditions are to be compared, because optimal noise suppression must be adapted to every situation, which invariably results in systematic errors. Here, we systematically test the performance of new methods from computer vision and Bayesian inference for solving the inverse problem in TFM. We compare two classical schemes, L1- and L2-regularization, with three previously untested schemes, namely Elastic Net regularization, Proximal Gradient Lasso, and Proximal Gradient Elastic Net. Overall, we find that Elastic Net regularization, which combines L1 and L2 regularization, outperforms all other methods with regard to the accuracy of traction reconstruction. Next, we develop two methods, Bayesian L2 regularization and Advanced Bayesian L2 regularization, for automatic, optimal L2 regularization. Using artificial and experimental data, we show that these methods enable robust reconstruction of traction without requiring a manual selection of regularization parameters for each data set. Thus, Bayesian methods can mitigate the considerable uncertainty inherent in comparing cellular traction forces.
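    Since Elastic Net combines the L1 and L2 penalties compared above, a minimal sketch can illustrate the comparison on a synthetic inverse problem. The sketch below is not the authors' implementation: the forward matrix G, the sparse ground-truth traction, the noise level, and the penalty weights (alpha, l1_ratio) are all illustrative assumptions, and scikit-learn's solvers stand in for the paper's reconstruction schemes.

```python
# Illustrative sketch (not the authors' code): traction reconstruction as a
# regularized linear inverse problem u = G t. The forward matrix G, the
# sparse ground-truth traction, the noise level, and the penalty weights
# are placeholder assumptions chosen for demonstration only.
import numpy as np
from sklearn.linear_model import Ridge, Lasso, ElasticNet

rng = np.random.default_rng(0)
n_obs, n_tractions = 200, 300              # undersampled: fewer data than unknowns
G = rng.normal(size=(n_obs, n_tractions))  # stand-in for the elastic Green's matrix
t_true = np.zeros(n_tractions)
t_true[rng.choice(n_tractions, 15, replace=False)] = rng.normal(5.0, 1.0, 15)
u = G @ t_true + 0.5 * rng.normal(size=n_obs)  # displacements + acquisition noise

# L2 (Ridge), L1 (Lasso), and Elastic Net reconstructions; alpha and
# l1_ratio play the role of the regularization parameters whose manual
# selection the Bayesian methods in the paper aim to avoid.
for model in (Ridge(alpha=1.0, fit_intercept=False),
              Lasso(alpha=0.1, fit_intercept=False),
              ElasticNet(alpha=0.1, l1_ratio=0.5, fit_intercept=False)):
    t_hat = model.fit(G, u).coef_
    rel_err = np.linalg.norm(t_hat - t_true) / np.linalg.norm(t_true)
    print(f"{type(model).__name__:>10s}  relative error: {rel_err:.3f}")
```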

    The composite absolute penalties family for grouped and hierarchical variable selection

    Extracting useful information from high-dimensional data is an important focus of today's statistical research and practice. Penalized loss function minimization has been shown to be effective for this task both theoretically and empirically. With the virtues of both regularization and sparsity, the L1-penalized squared error minimization method Lasso has been popular in regression models and beyond. In this paper, we combine different norms, including L1, to form an intelligent penalty that adds side information to the fitting of a regression or classification model and thereby yields reasonable estimates. Specifically, we introduce the Composite Absolute Penalties (CAP) family, which allows given grouping and hierarchical relationships between the predictors to be expressed. CAP penalties are built by defining groups and combining the properties of norm penalties at the across-group and within-group levels. Grouped selection occurs for nonoverlapping groups. Hierarchical variable selection is achieved by defining groups with particular overlapping patterns. We propose using the BLASSO and cross-validation to compute CAP estimates in general. For a subfamily of CAP estimates involving only the L1 and L∞ norms, we introduce the iCAP algorithm to trace the entire regularization path for the grouped selection problem. Within this subfamily, unbiased estimates of the degrees of freedom (df) are derived so that the regularization parameter is selected without cross-validation. CAP is shown to improve on the predictive performance of the LASSO in a series of simulated experiments, including cases with p ≫ n and possibly mis-specified groupings. When the complexity of a model is properly calculated, iCAP is seen to be parsimonious in the experiments. (Published at http://dx.doi.org/10.1214/07-AOS584 in the Annals of Statistics, http://www.imstat.org/aos/, by the Institute of Mathematical Statistics, http://www.imstat.org.)
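    A short sketch may help make the penalty construction concrete. The function below is an illustrative assumption modeled on the abstract, not the paper's BLASSO or iCAP code: it evaluates a CAP-style penalty as a sum of within-group norms over index groups, defaulting to the L∞ within-group norm and L1-style across-group combination of the iCAP subfamily.

```python
# Illustrative sketch of the CAP penalty structure (an assumption modeled on
# the abstract, not the paper's BLASSO/iCAP implementation).
import numpy as np

def cap_penalty(beta, groups, within_norm=np.inf, across_power=1.0):
    """Composite Absolute Penalty: the within-group norm of each group's
    coefficients, raised to the across-group power and summed over groups.
    The defaults (L-infinity within, power 1 across) match the iCAP
    subfamily; `groups` is a list of index arrays, and overlapping groups
    encode hierarchical selection."""
    return sum(np.linalg.norm(beta[idx], ord=within_norm) ** across_power
               for idx in groups)

beta = np.array([0.0, 1.5, -2.0, 0.3, 0.0])

# Nonoverlapping groups -> grouped selection: a group enters or exits whole.
print(cap_penalty(beta, [np.array([0, 1]), np.array([2, 3, 4])]))  # 1.5 + 2.0

# Overlapping groups -> hierarchy: beta[2] is penalized both inside {1, 2}
# and on its own, so it enters the model only with strong support.
print(cap_penalty(beta, [np.array([1, 2]), np.array([2])]))        # 2.0 + 2.0
```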

    Continuous-variable quantum neural networks

    We introduce a general method for building neural networks on quantum computers. The quantum neural network is a variational quantum circuit built in the continuous-variable (CV) architecture, which encodes quantum information in continuous degrees of freedom such as the amplitudes of the electromagnetic field. This circuit contains a layered structure of continuously parameterized gates which is universal for CV quantum computation. Affine transformations and nonlinear activation functions, two key elements in neural networks, are enacted in the quantum network using Gaussian and non-Gaussian gates, respectively. The non-Gaussian gates provide both the nonlinearity and the universality of the model. Due to the structure of the CV model, the CV quantum neural network can encode highly nonlinear transformations while remaining completely unitary. We show how a classical network can be embedded into the quantum formalism and propose quantum versions of various specialized models such as convolutional, recurrent, and residual networks. Finally, we present numerous modeling experiments built with the Strawberry Fields software library. These experiments, including a classifier for fraud detection, a network which generates Tetris images, and a hybrid classical-quantum autoencoder, demonstrate the capability and adaptability of CV quantum neural networks.
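    As a rough illustration of the layer structure described above (Gaussian gates for the affine part, a non-Gaussian gate for the activation), here is a hedged sketch of a single two-mode layer in Strawberry Fields. The gate sequence and all parameter values are assumptions modeled on the abstract rather than the paper's code; in the paper's experiments such parameters would be optimized variationally rather than fixed by hand.

```python
# Hedged sketch of a single two-mode CV neural-network layer, modeled on the
# abstract (interferometer -> squeezing -> interferometer -> displacement ->
# non-Gaussian Kerr gate). Gate parameters are arbitrary placeholders.
import strawberryfields as sf
from strawberryfields import ops

prog = sf.Program(2)  # one layer acting on two optical modes
with prog.context as q:
    # First interferometer: part of the affine (Gaussian) transformation
    ops.BSgate(0.4, 0.0) | (q[0], q[1])
    ops.Rgate(0.3) | q[0]
    # Squeezing supplies the scaling part of the affine map
    ops.Sgate(0.1) | q[0]
    ops.Sgate(0.2) | q[1]
    # Second interferometer
    ops.BSgate(0.5, 0.1) | (q[0], q[1])
    # Displacements act like the bias term of a classical layer
    ops.Dgate(0.2) | q[0]
    ops.Dgate(0.1) | q[1]
    # Non-Gaussian Kerr gates supply the nonlinear activation
    ops.Kgate(0.05) | q[0]
    ops.Kgate(0.05) | q[1]

# Simulate in the Fock backend with a truncated photon-number cutoff
eng = sf.Engine("fock", backend_options={"cutoff_dim": 6})
state = eng.run(prog).state
mean, var = state.mean_photon(0)
print(f"mean photon number in mode 0: {mean:.4f}")
```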