5,431 research outputs found

    Optimization techniques in respiratory control system models

    One of the most complex physiological systems whose modeling remains an open problem is the respiratory control system, for which different models have been proposed based on the criterion of minimizing the work of breathing (WOB). The aim of this study is twofold: to compare two known models of the respiratory control system that set the breathing pattern by quantifying respiratory work; and to assess the influence of direct-search versus evolutionary optimization algorithms on the adjustment of model parameters. The study was carried out using experimental data from a group of healthy volunteers under incremental CO2 inhalation, which were used to adjust the model parameters and to evaluate how closely the WOB equations follow a real breathing pattern. This breathing pattern was characterized by the following variables: tidal volume, inspiratory and expiratory time duration, and total minute ventilation. Different optimization algorithms were considered to determine the most appropriate model from a physiological viewpoint. Algorithms were used for a double optimization: first, to minimize the WOB and, second, to adjust the model parameters. The performance of the optimization algorithms was also evaluated in terms of convergence rate, solution accuracy and precision. Results showed strong differences in the performance of the optimization algorithms according to the constraints and the topological features of the function to be optimized. In breathing pattern optimization, the sequential quadratic programming technique (SQP) showed the best performance and convergence speed when respiratory work was low; SQP also made it easiest to implement multiple non-linear constraints through mathematical expressions. Regarding the adjustment of model parameters to experimental data, the covariance matrix adaptation evolution strategy (CMA-ES) provided the best quality solutions, with fast convergence and the best accuracy and precision in both models. CMA-ES reached the best adjustment because of its good performance on noisy and multi-peaked fitness functions. Although one of the studied models has been much more commonly used to simulate the respiratory response to CO2 inhalation, the results showed that an alternative model has a more appropriate cost function for minimizing WOB from a physiological viewpoint according to the experimental data.
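
    As a rough illustration of the two optimization ingredients mentioned above, the sketch below (in Python, with hypothetical cost and constraint functions that are not the paper's respiratory models) minimizes a toy work-of-breathing expression with SciPy's SLSQP, an SQP-type solver, under a non-linear ventilation constraint, and then fits the toy model's parameters to noisy observations with an evolutionary global optimizer, using differential evolution as a stand-in for CMA-ES.

        import numpy as np
        from scipy.optimize import minimize, differential_evolution

        E, R = 10.0, 2.0                # hypothetical elastance and resistance
        demand = 8.0                    # hypothetical required minute ventilation (L/min)

        def wob(x):                     # toy elastic + resistive work per breath
            vt, ti, te = x
            return 0.5 * E * vt**2 + R * vt**2 / ti

        cons = [{"type": "ineq",        # minute ventilation must meet the demand
                 "fun": lambda x: 60.0 * x[0] / (x[1] + x[2]) - demand}]
        bounds = [(0.2, 2.0), (0.5, 3.0), (0.5, 5.0)]

        # SQP-style local solver (SciPy's SLSQP) for the breathing-pattern optimization
        res = minimize(wob, x0=[0.5, 1.0, 2.0], method="SLSQP",
                       bounds=bounds, constraints=cons)
        print("SLSQP breathing pattern (VT, Ti, Te):", res.x)

        # Evolutionary global search (differential evolution here, as a stand-in for
        # CMA-ES) for a toy parameter fit: recover (E, R) from noisy work observations.
        rng = np.random.default_rng(0)
        patterns = rng.uniform([0.3, 0.8, 1.0], [1.5, 2.5, 4.0], size=(50, 3))
        observed = np.array([wob(p) for p in patterns]) + rng.normal(0, 0.05, 50)

        def fit_error(theta):
            e, r = theta
            pred = 0.5 * e * patterns[:, 0]**2 + r * patterns[:, 0]**2 / patterns[:, 1]
            return float(np.sum((pred - observed)**2))

        fit = differential_evolution(fit_error, [(1.0, 30.0), (0.1, 10.0)], seed=0)
        print("Fitted (E, R):", fit.x)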

    Stochastic local search: a state-of-the-art review

    The main objective of this paper is to review, analyze and discuss state-of-the-art stochastic local search techniques used for solving hard combinatorial problems. It begins with a short introduction, motivation and some basic notation on combinatorial problems, search paradigms and other relevant features of search techniques, as needed for background. A brief overview of stochastic local search methods follows, along with an analysis of state-of-the-art stochastic local search algorithms. Finally, the last part of the paper presents and discusses some of the latest trends in the application of stochastic local search algorithms in machine learning, data mining and other areas of science and engineering. We conclude with a discussion of the capabilities and limitations of stochastic local search algorithms.
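
    For readers unfamiliar with the family, the sketch below shows one of the simplest stochastic local search schemes, random-restart hill climbing with occasional noise moves, applied to a toy MAX-CUT instance. It is only an illustrative skeleton under assumed parameters, not any particular algorithm from the review.

        import random

        random.seed(0)
        n = 30                                           # toy random graph
        edges = [(i, j) for i in range(n) for j in range(i + 1, n)
                 if random.random() < 0.2]

        def cut_value(assign):                           # edges crossing the partition
            return sum(1 for i, j in edges if assign[i] != assign[j])

        best_val = -1
        for restart in range(10):                        # restarts escape poor local optima
            assign = [random.randint(0, 1) for _ in range(n)]
            val = cut_value(assign)
            for step in range(2000):
                v = random.randrange(n)                  # candidate move: flip one vertex
                assign[v] ^= 1
                new_val = cut_value(assign)
                if new_val >= val or random.random() < 0.05:   # greedy step plus small noise
                    val = new_val
                else:
                    assign[v] ^= 1                       # reject: undo the move
            best_val = max(best_val, val)

        print("best cut found:", best_val, "of", len(edges), "edges")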

    A Modern Introduction to Online Learning

    In this monograph, I introduce the basic concepts of Online Learning through a modern view of Online Convex Optimization. Here, online learning refers to the framework of regret minimization under worst-case assumptions. I present first-order and second-order algorithms for online learning with convex losses, in Euclidean and non-Euclidean settings. All the algorithms are clearly presented as instantiations of Online Mirror Descent or Follow-The-Regularized-Leader and their variants. Particular attention is given to the issue of tuning the parameters of the algorithms and to learning in unbounded domains, through adaptive and parameter-free online learning algorithms. Non-convex losses are dealt with through convex surrogate losses and through randomization. The bandit setting is also briefly discussed, touching on the problem of adversarial and stochastic multi-armed bandits. These notes do not require prior knowledge of convex analysis, and all the required mathematical tools are rigorously explained. Moreover, all the proofs have been carefully chosen to be as simple and as short as possible.
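
    As a concrete (and assumed, not taken from the monograph) instance of the framework, the sketch below runs online subgradient descent, the Euclidean case of Online Mirror Descent, on a stream of convex losses l_t(x) = |x - z_t| over a bounded interval, and measures regret against the best fixed point in hindsight.

        import numpy as np

        rng = np.random.default_rng(0)
        z = rng.normal(0.5, 0.1, size=1000)          # hypothetical stream of targets z_t
        x, D = 0.0, 1.0                              # current iterate and domain radius [-D, D]

        losses = []
        for t, zt in enumerate(z, start=1):
            losses.append(abs(x - zt))               # suffer loss l_t(x_t) = |x_t - z_t|
            g = np.sign(x - zt)                      # a subgradient of l_t at x_t
            eta = D / np.sqrt(t)                     # standard decreasing step size
            x = float(np.clip(x - eta * g, -D, D))   # gradient step followed by projection

        best_fixed = np.median(z)                    # hindsight minimizer of sum_t |x - z_t|
        regret = sum(losses) - np.sum(np.abs(best_fixed - z))
        print("cumulative regret:", regret)          # grows like O(D * sqrt(T))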

    Practical recommendations for gradient-based training of deep architectures

    Learning algorithms related to artificial neural networks, and to Deep Learning in particular, may seem to involve many bells and whistles, called hyper-parameters. This chapter is meant as a practical guide with recommendations for some of the most commonly used hyper-parameters, in particular in the context of learning algorithms based on back-propagated gradients and gradient-based optimization. It also discusses how to deal with the fact that more interesting results can be obtained when one is allowed to adjust many hyper-parameters. Overall, it describes elements of the practice used to successfully and efficiently train and debug large-scale and often deep multi-layer neural networks. It closes with open questions about the training difficulties observed with deeper architectures.
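
    As an illustration of where such hyper-parameters appear in practice, the sketch below (an assumed toy example, not code from the chapter) trains a logistic regression with mini-batch SGD and exposes the usual knobs: initial learning rate, learning-rate decay, mini-batch size, momentum, number of epochs, and the scale of the random weight initialization.

        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 20))
        y = (X @ rng.normal(size=20) > 0).astype(float)      # toy binary labels

        lr0, decay, batch, momentum, epochs, init_scale = 0.1, 1e-3, 32, 0.9, 20, 0.01
        w = init_scale * rng.normal(size=20)                 # small random initialization
        v = np.zeros_like(w)                                 # momentum buffer

        for epoch in range(epochs):
            order = rng.permutation(len(X))                  # reshuffle examples each epoch
            for start in range(0, len(X), batch):
                idx = order[start:start + batch]
                p = 1.0 / (1.0 + np.exp(-X[idx] @ w))        # logistic regression forward pass
                grad = X[idx].T @ (p - y[idx]) / len(idx)    # gradient of the cross-entropy loss
                lr = lr0 / (1.0 + decay * epoch)             # simple learning-rate decay schedule
                v = momentum * v - lr * grad                 # momentum update
                w = w + v

        acc = np.mean(((1.0 / (1.0 + np.exp(-X @ w))) > 0.5) == y)
        print("final training accuracy:", acc)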

    MUST-CNN: A Multilayer Shift-and-Stitch Deep Convolutional Architecture for Sequence-based Protein Structure Prediction

    Predicting protein properties such as solvent accessibility and secondary structure from the primary amino acid sequence is an important task in bioinformatics. Recently, a few deep learning models have surpassed the traditional window-based multilayer perceptron. Taking inspiration from the image classification domain, we propose a deep convolutional neural network architecture, MUST-CNN, to predict protein properties. This architecture uses a novel multilayer shift-and-stitch (MUST) technique to generate fully dense per-position predictions on protein sequences. Our model is significantly simpler than the state of the art, yet achieves better results. By combining MUST and the efficient convolution operation, we can consider far more parameters while retaining very fast prediction speeds. We beat the state-of-the-art performance on two large protein property prediction datasets.
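
    To make the prediction task concrete, the sketch below (an assumed PyTorch toy, not the MUST-CNN architecture itself) produces dense per-position class scores over a protein sequence using plain padded 1D convolutions; the paper's multilayer shift-and-stitch technique is a more efficient route to the same per-position density.

        import torch
        import torch.nn as nn

        n_amino_acids, n_classes, emb, hidden = 20, 3, 16, 32    # illustrative sizes

        class PerPositionCNN(nn.Module):
            def __init__(self):
                super().__init__()
                self.embed = nn.Embedding(n_amino_acids, emb)
                self.conv1 = nn.Conv1d(emb, hidden, kernel_size=9, padding=4)
                self.conv2 = nn.Conv1d(hidden, hidden, kernel_size=9, padding=4)
                self.out = nn.Conv1d(hidden, n_classes, kernel_size=1)

            def forward(self, seq):                       # seq: (batch, length) residue ids
                x = self.embed(seq).transpose(1, 2)       # -> (batch, emb, length)
                x = torch.relu(self.conv1(x))
                x = torch.relu(self.conv2(x))
                return self.out(x)                        # (batch, n_classes, length)

        seq = torch.randint(0, n_amino_acids, (1, 100))   # one toy protein of length 100
        scores = PerPositionCNN()(seq)
        print(scores.shape)                               # one class score vector per position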

    Extracting Hierarchies of Search Tasks & Subtasks via a Bayesian Nonparametric Approach

    A significant number of search queries originate from real-world information needs or tasks. In order to improve the search experience of end users, it is important to have accurate representations of these tasks. As a result, a significant amount of research has been devoted to extracting proper representations of tasks, in order to enable search systems to help users complete their tasks, as well as to provide end users with better query suggestions, better recommendations, satisfaction prediction, and improved task-based personalization. Most existing task extraction methodologies represent tasks as flat structures. However, tasks often have multiple subtasks associated with them, and a more natural representation of tasks is a hierarchy, where each task can be composed of multiple (sub)tasks. To this end, we propose an efficient Bayesian nonparametric model for extracting hierarchies of such tasks and subtasks. We evaluate our method on real-world query log data through both quantitative and crowdsourced experiments, and highlight the importance of considering task/subtask hierarchies.
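
    The Bayesian nonparametric ingredient can be illustrated with its most common building block, a Chinese Restaurant Process prior, which lets the number of tasks grow with the data rather than being fixed in advance. The sketch below simulates a draw from that prior over hypothetical queries; it is not the paper's task/subtask extraction model, which additionally builds a hierarchy.

        import random

        random.seed(0)
        alpha = 1.0                                  # concentration: larger -> more tasks
        assignments, counts = [], []                 # task id per query, queries per task
        for n in range(200):                         # 200 hypothetical queries, in order
            r = random.random() * (n + alpha)
            for k, c in enumerate(counts):
                if r < c:                            # join task k with prob c / (n + alpha)
                    assignments.append(k)
                    counts[k] += 1
                    break
                r -= c
            else:                                    # new task with prob alpha / (n + alpha)
                assignments.append(len(counts))
                counts.append(1)

        print("number of tasks:", len(counts))
        print("largest task sizes:", sorted(counts, reverse=True)[:5])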