80 research outputs found

    Infrared absorption spectra and vibration modes of some C1 and C2 halogenated derivatives

    No full text
    We measured, for the first time, between the frequencies 1450 and 525 cm⁻¹, the infrared absorption spectra of halogenated derivatives (mainly iodinated derivatives) of methane, ethane, ethylene, and acetylene. Aided by earlier results from one of us and by Raman spectra obtained by various authors, we were able to identify most of the infrared absorption maxima with definite vibration modes of the molecules studied. In particular, we assigned certain frequencies either to carbon-halogen bonds or to carbon-hydrogen-halogen interactions, and we examine to what extent the values of these frequencies are influenced by the nature of the molecules. Furthermore, we were able to name unambiguously the cis and trans isomers of the dichloro- and diiodo-ethylenes.

    Understanding neural network sample complexity and interpretable convergence-guaranteed deep learning with polynomial regression

    No full text
    Thesis: S.M., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, May 2020. Cataloged from the PDF version of the thesis. Includes bibliographical references (pages 83-89).
    We first study the sample complexity of one-layer neural networks, namely the number of examples needed in the training set for such models to learn meaningful information out-of-sample. We empirically derive quantitative relationships between the sample complexity and the parameters of the network, such as its input dimension and its width. Then, we introduce polynomial regression as a proxy for neural networks through a polynomial approximation of their activation function. This method operates in the lifted space of tensor products of input variables and is trained by simply optimizing a standard least-squares objective in this space. We study the scalability of polynomial regression and design a bagging-type algorithm to train it successfully. The method achieves competitive accuracy on simple image datasets while being simpler than neural networks; we also demonstrate that it is more robust and more interpretable than existing approaches, and that it offers stronger convergence guarantees during training. Finally, we empirically show that the widely used stochastic gradient descent algorithm makes the weights of the trained neural networks converge to the optimal polynomial regression weights.
    by Matt V. Emschwiller.
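    A minimal sketch of the method the abstract describes, not the thesis code: a toy one-layer tanh network serves as the teacher, the activation is replaced by a low-degree polynomial so the whole model becomes linear in the lifted space of monomial (tensor-product) features, and the fit reduces to ordinary least squares, a convex problem. The variable names, the degree-3 lift, and the data sizes below are illustrative assumptions.

        import numpy as np
        from sklearn.preprocessing import PolynomialFeatures

        rng = np.random.default_rng(0)

        # Hypothetical toy data: targets produced by a small one-layer tanh network.
        n, d, width = 2000, 5, 16
        X = rng.normal(size=(n, d))
        W = rng.normal(size=(d, width))
        a = rng.normal(size=width)
        y = np.tanh(X @ W) @ a

        # Lift inputs to all monomials up to degree 3, mimicking a low-degree
        # polynomial approximation of the activation function (an assumption here).
        lift = PolynomialFeatures(degree=3, include_bias=True)
        Phi = lift.fit_transform(X)

        # Standard least squares in the lifted space; being convex, this step
        # carries the convergence guarantees the abstract alludes to.
        w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        print("train R^2:", 1 - np.mean((y - Phi @ w) ** 2) / np.var(y))

    A bagging-type variant along the lines the abstract mentions would fit several such regressions on subsamples of the lifted data and average their predictions, which is what makes the approach scale to larger feature spaces.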