
    Colloquium: Mechanical formalisms for tissue dynamics

    The understanding of morphogenesis in living organisms has been renewed by tremendous progress in experimental techniques that provide access to cell-scale, quantitative information both on the shapes of cells within tissues and on the genes being expressed. This information suggests that our understanding of the respective contributions of gene expression and mechanics, and of their crucial entanglement, will soon leap forward. Biomechanics increasingly benefits from models, which assist the design and interpretation of experiments, point out the main ingredients and assumptions, and ultimately lead to predictions. The newly accessible local information thus calls for a reflection on how to select suitable classes of mechanical models. We review both mechanical ingredients suggested by the current knowledge of tissue behaviour, and modelling methods that can help generate a rheological diagram or a constitutive equation. We distinguish cell-scale ("intra-cell") and tissue-scale ("inter-cell") contributions. We recall the mathematical framework developed for continuum materials and explain how to transform a constitutive equation into a set of partial differential equations amenable to numerical resolution. We show that when plastic behaviour is relevant, the dissipation function formalism appears appropriate to generate constitutive equations; its variational nature facilitates numerical implementation, and we discuss adaptations needed in the case of large deformations. The present article gathers theoretical methods that can readily enhance the significance of the data to be extracted from recent or future high-throughput biomechanical experiments. Comment: 33 pages, 20 figures. This version (26 Sept. 2015) contains a few corrections to the published version, all in Appendix D.2, devoted to large deformations.
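
    For illustration only (this equation is not taken from the review), the simplest viscoelastic closure of the kind discussed above, a Maxwell model coupled to momentum balance, already yields a set of partial differential equations amenable to numerical resolution:

        \sigma + \tau\, \partial_t \sigma = \eta\, \dot{\varepsilon},
        \qquad
        \rho\, \partial_t v = \nabla \cdot \sigma + f

    Here \sigma is the stress, \dot{\varepsilon} the strain rate, \tau a relaxation time, \eta a viscosity and f a body force; richer rheological diagrams add further elements to the constitutive equation.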

    Optimization for and by machine learning

    Optimization and machine learning are both extremely active research topics. In this thesis, we explore problems at the intersection of the two fields. In particular, we develop two main ideas. First, optimization can be used to improve machine learning. We illustrate this idea by considering computer vision tasks that are modelled with dense conditional random fields. Existing solvers for these models are either slow or inaccurate. We show that, by introducing a specialized solver based on proximal minimization and fast filtering, these models can be solved both quickly and accurately. Similarly, we introduce a specialized linear programming solver for block-bounded problems, a common class of problems encountered in machine learning. This solver is efficient, easy to tune and simple to integrate inside larger machine learning algorithms. Second, machine learning can be used to improve optimization, in particular for NP-hard problems. For problems solved using hand-tuned heuristics, machine learning can be used to discover and improve these heuristics. We show that, for the problem of super-optimization, a better heuristic to explore the space of programs can be learnt using reinforcement learning. For problems where no such heuristics exist, machine learning can be used to obtain an approximate solution of the original problem. We use this idea to tackle the problem of program synthesis by reformulating it as the problem of learning a program that performs the required task. We introduce a new differentiable formulation of program execution and show that the fastest programs can be recovered for simple tasks.
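
    As a minimal sketch of the proximal-minimization idea mentioned above (a generic ISTA solver for an L1-regularised least-squares problem; names are illustrative, and this is not the thesis's specialized dense-CRF solver):

        import numpy as np

        def soft_threshold(x, lam):
            # Proximal operator of the L1 norm: prox_{lam * ||.||_1}(x).
            return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

        def proximal_gradient(A, b, lam, step, iters=200):
            # Minimize 0.5 * ||A x - b||^2 + lam * ||x||_1 by alternating a
            # gradient step on the smooth term with the proximal operator.
            x = np.zeros(A.shape[1])
            for _ in range(iters):
                grad = A.T @ (A @ x - b)   # gradient of the smooth term
                x = soft_threshold(x - step * grad, step * lam)
            return x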

    Defect detection in engineering ceramics using different non-destructive testing techniques

    The use of ceramics in mechanical applications at high temperature and over long lifetimes requires very sensitive non-destructive testing techniques. Because of the particular characteristics of these materials and the very small size of the flaws to be detected, the techniques employed must be selected very carefully. Their application to industrial inspection is discussed.

    Fast and green computing with graphics processing units for solving sparse linear systems

    In this paper, we aim to introduce a new perspective when comparing highly parallelized algorithms on graphics processing units (GPUs): the energy consumption of the GPU. We give an analysis of the performance of linear algebra operations, including addition of vectors, element-wise product, dot product and sparse matrix-vector product, in order to validate our experimental protocol. We also analyze their use within the conjugate gradient method for solving the gravity equations on the GPU. The Cusp library is considered and compared to our own implementation on a set of real matrices arising from the Chicxulub crater and obtained by finite element discretization of the gravity equations. The experiments demonstrate the performance and robustness of our implementation in terms of energy efficiency.
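
    For reference, a minimal CPU sketch of the conjugate gradient kernel discussed above (the paper's GPU implementation and energy measurements are not reproduced here); it assumes a symmetric positive-definite matrix A that supports the product A @ p, e.g. a SciPy sparse matrix:

        import numpy as np

        def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
            # Textbook conjugate gradient for A x = b, A symmetric positive-definite.
            x = np.zeros_like(b, dtype=float)
            r = b - A @ x
            p = r.copy()
            rs_old = r @ r
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rs_old / (p @ Ap)    # step length along direction p
                x += alpha * p
                r -= alpha * Ap
                rs_new = r @ r
                if np.sqrt(rs_new) < tol:    # residual small enough: converged
                    break
                p = r + (rs_new / rs_old) * p
                rs_old = rs_new
            return x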

    Microwave plasma enhanced CVD of aluminum oxide films: Influence of the deposition parameters on the film characteristics

    Thin films of aluminum oxide were deposited on silicon wafers at low temperature by remote microwave plasma-enhanced chemical vapor deposition, using an oxygen plasma and a mixture of trimethylaluminum and argon injected into the afterglow. Although the pressure and the total flow rate were low (2 Pa and 178 sccm, respectively), the deposition rate was high (250 nm/min) and the films contained only hydrogen as an impurity. The parameter with the greatest influence on film quality was the temperature, which had to reach 550 °C to obtain good-quality films. A lower pressure allowed better desorption of the by-products, which led to a higher deposition rate and a lower etch rate in a 2 wt% hydrofluoric acid solution. Under the standard conditions, in the presence of a large excess of oxygen (oxygen/trimethylaluminum > 18), the trimethylaluminum precursor was fully converted. The quality of the coatings was almost independent of the microwave power.

    Efficient continuous relaxations for dense CRF

    Dense conditional random fields (CRFs) with Gaussian pairwise potentials have emerged as a popular framework for several computer vision applications such as stereo correspondence and semantic segmentation. By modeling long-range interactions, dense CRFs provide a more detailed labelling compared to their sparse counterparts. Variational inference in these dense models is performed using a filtering-based mean-field algorithm in order to obtain a fully-factorized distribution minimising the Kullback-Leibler divergence to the true distribution. In contrast to the continuous relaxation-based energy minimisation algorithms used for sparse CRFs, the mean-field algorithm fails to provide strong theoretical guarantees on the quality of its solutions. To address this deficiency, we show that it is possible to use the same filtering approach to speed up the optimisation of several continuous relaxations. Specifically, we solve a convex quadratic programming (QP) relaxation using the efficient Frank-Wolfe algorithm. This also allows us to solve difference-of-convex relaxations via the iterative concave-convex procedure, where each iteration requires solving a convex QP. Finally, we develop a novel divide-and-conquer method to compute the subgradients of a linear programming relaxation that provides the best theoretical bounds for energy minimisation. We demonstrate the advantage of continuous relaxations over the widely used mean-field algorithm on publicly available datasets.
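
    A minimal sketch of the conditional-gradient structure of such a QP solver, on a box-constrained toy problem (the paper's relaxation lives on per-pixel probability simplices and uses fast filtering to evaluate the gradient; names here are illustrative):

        import numpy as np

        def frank_wolfe_qp(Q, c, lo, hi, iters=100):
            # Minimize 0.5 * x^T Q x + c^T x over the box [lo, hi]^n.
            x = np.full(c.shape, 0.5 * (lo + hi))
            for t in range(iters):
                grad = Q @ x + c
                # Linear minimization oracle over the box: one corner per coordinate.
                s = np.where(grad > 0, lo, hi)
                gamma = 2.0 / (t + 2.0)      # standard diminishing step size
                x = (1.0 - gamma) * x + gamma * s
            return x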

    Adaptive neural compilation

    This paper proposes an adaptive neural-compilation framework to address the problem of efficient program learning. Traditional code-optimisation strategies used in compilers are based on applying a pre-specified set of transformations that make the code faster to execute without changing its semantics. In contrast, our work involves adapting programs to make them more efficient while considering correctness only on a target input distribution. Our approach is inspired by recent works on differentiable representations of programs. We show that it is possible to compile programs written in a low-level language to a differentiable representation. We also show how programs in this representation can be optimised to make them efficient on a target distribution of inputs. Experimental results demonstrate that our approach enables learning specifically-tuned algorithms for given data distributions with a high success rate.
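
    A toy sketch of the differentiable-execution idea (a hypothetical two-instruction machine, not the paper's actual model): each step blends the outcomes of all candidate instructions, weighted by a learnable softmax distribution, so the choice of instruction becomes differentiable:

        import numpy as np

        def softmax(z):
            e = np.exp(z - z.max())
            return e / e.sum()

        def soft_step(state, logits, ops):
            # Blend the outcome of every candidate operation, weighted by a
            # learnable distribution over instructions.
            w = softmax(logits)
            return sum(wi * op(state) for wi, op in zip(w, ops))

        # Hypothetical two-op machine acting on a scalar register.
        ops = [lambda s: s + 1.0, lambda s: 2.0 * s]
        state = soft_step(1.0, np.array([0.3, 1.2]), ops)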

    Learning to superoptimize programs

    Code super-optimization is the task of transforming any given program into a more efficient version while preserving its input-output behaviour. In some sense, it is similar to the paraphrase problem from natural language processing, where the intention is to change the syntax of an utterance without changing its semantics. Code optimization has been the subject of years of research that has resulted in the development of rule-based transformation strategies that are used by compilers. More recently, however, a class of stochastic search-based methods has been shown to outperform these strategies. This approach involves repeated sampling of modifications to the program from a proposal distribution, which are accepted or rejected based on whether they preserve correctness and on the improvement they achieve. These methods, however, neither learn from past behaviour nor try to leverage the semantics of the program under consideration. Motivated by this observation, we present a novel learning-based approach for code super-optimization. Intuitively, our method works by learning the proposal distribution using unbiased estimators of the gradient of the expected improvement. Experiments on benchmarks comprising automatically generated as well as existing ("Hacker's Delight") programs show that the proposed method is able to significantly outperform state-of-the-art approaches for code super-optimization.
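
    A minimal sketch of such an unbiased gradient estimator (REINFORCE on a categorical proposal over a handful of candidate rewrites; the names and the toy reward are illustrative, not taken from the paper):

        import numpy as np

        rng = np.random.default_rng(0)

        def softmax(z):
            e = np.exp(z - z.max())
            return e / e.sum()

        def reinforce_step(theta, reward_fn, lr=0.1, samples=32):
            # Score-function (REINFORCE) estimate of the gradient of the
            # expected improvement E_{m ~ p_theta}[R(m)], then one ascent step.
            p = softmax(theta)
            grad = np.zeros_like(theta)
            for _ in range(samples):
                m = rng.choice(len(theta), p=p)
                r = reward_fn(m)       # improvement achieved by rewrite m
                score = -p.copy()
                score[m] += 1.0        # d log p(m) / d theta
                grad += r * score
            return theta + lr * grad / samples

        # Toy reward: rewrite 2 gives the largest improvement of four candidates.
        theta = np.zeros(4)
        for _ in range(100):
            theta = reinforce_step(theta, lambda m: [0.1, 0.3, 1.0, 0.2][m])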
