
    Towards an automated method based on Iterated Local Search optimization for tuning the parameters of Support Vector Machines

    We provide preliminary details and a formulation of an optimization strategy, currently under development, that automatically tunes the parameters of a Support Vector Machine on new datasets. The strategy is a heuristic based on Iterated Local Search, a modification of classic hill climbing that iterates calls to a local search routine.
    Comment: 3 pages, Benelearn 2017 conference, Eindhoven
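    As a rough illustration of the idea, the sketch below applies Iterated Local Search to a toy stand-in for cross-validated SVM accuracy over (log2 C, log2 gamma). The score function, neighbourhood, and perturbation are all illustrative assumptions, not the paper's actual formulation.

```python
import random

def iterated_local_search(score, start, neighbors, perturb, n_restarts=10, seed=0):
    """Generic Iterated Local Search: hill-climb to a local optimum,
    then perturb the incumbent and climb again, keeping the best."""
    rng = random.Random(seed)
    best = local = start
    for _ in range(n_restarts):
        improved = True
        while improved:                      # plain hill climbing
            improved = False
            for cand in neighbors(local, rng):
                if score(cand) > score(local):
                    local, improved = cand, True
        if score(local) > score(best):
            best = local
        local = perturb(best, rng)           # kick to escape the local optimum

    return best

# Toy stand-in for cross-validated SVM accuracy over (log2 C, log2 gamma);
# the peak at C = 2^3, gamma = 2^-4 is assumed purely for illustration.
def score(p):
    c, g = p
    return -((c - 3.0) ** 2 + (g + 4.0) ** 2)

# Unit steps in both log-scaled parameters, plus a random restart kick.
neighbors = lambda p, rng: [(p[0] + dc, p[1] + dg)
                            for dc in (-1, 0, 1) for dg in (-1, 0, 1)
                            if (dc, dg) != (0, 0)]
perturb = lambda p, rng: (p[0] + rng.uniform(-3, 3), p[1] + rng.uniform(-3, 3))

best = iterated_local_search(score, (0.0, 0.0), neighbors, perturb)
```

    In a real tuner, `score` would be a cross-validation routine and the returned point would be mapped back to (C, gamma) via exponentiation.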

    InfyNLP at SMM4H Task 2: Stacked Ensemble of Shallow Convolutional Neural Networks for Identifying Personal Medication Intake from Twitter

    This paper describes Infosys's participation in the "2nd Social Media Mining for Health Applications Shared Task at AMIA, 2017, Task 2". Mining social media messages for health- and drug-related information has received significant interest in pharmacovigilance research. This task targets the development of automated classification models for identifying tweets containing descriptions of personal intake of medicines. Towards this objective, we train a stacked ensemble of shallow convolutional neural network (CNN) models on an annotated dataset provided by the organizers. We use random search to tune the hyperparameters of the CNNs and submit an ensemble of the best models for the prediction task. Our system secured first place among 9 teams, with a micro-averaged F-score of 0.693.
    Comment: 2nd Workshop on Social Media Mining for Health
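    The tuning step can be sketched as plain random search that keeps the top-k configurations as ensemble members. The search space and the `evaluate` stand-in below are illustrative assumptions, not the settings used by the paper.

```python
import math
import random

# Hypothetical search space, loosely in the spirit of tuning a small text CNN.
SPACE = {
    "n_filters":   [32, 64, 128, 256],
    "kernel_size": [3, 4, 5],
    "dropout":     (0.1, 0.6),    # continuous, uniform
    "lr":          (1e-4, 1e-2),  # continuous, log-uniform
}

def sample(rng):
    lo, hi = SPACE["lr"]
    return {
        "n_filters":   rng.choice(SPACE["n_filters"]),
        "kernel_size": rng.choice(SPACE["kernel_size"]),
        "dropout":     rng.uniform(*SPACE["dropout"]),
        "lr":          10 ** rng.uniform(math.log10(lo), math.log10(hi)),
    }

def random_search(evaluate, n_trials=50, top_k=5, seed=1):
    """Sample configurations at random and keep the best top_k."""
    rng = random.Random(seed)
    trials = []
    for _ in range(n_trials):
        cfg = sample(rng)
        trials.append((evaluate(cfg), cfg))
    trials.sort(key=lambda t: t[0], reverse=True)
    return [cfg for _, cfg in trials[:top_k]]  # configurations for the ensemble

# Stand-in for a validation F-score; in practice this trains and scores a CNN.
evaluate = lambda cfg: -abs(cfg["dropout"] - 0.3) - 0.05 * (5 - cfg["kernel_size"])
ensemble = random_search(evaluate)
```

    The ensemble prediction would then average (or vote over) the outputs of the models trained with these configurations.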

    Sequential Preference-Based Optimization

    Many real-world engineering problems rely on human preferences to guide their design and optimization. We present PrefOpt, an open-source package that simplifies sequential optimization tasks incorporating human preference feedback. Our approach extends an existing latent variable model for binary preferences to allow for observations of equivalent preference from users.
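    The core idea of learning from pairwise preferences can be sketched with a simple Bradley-Terry-style model: each item gets a latent utility, fitted by gradient ascent on the preference likelihood. This is an illustration of the general approach only; it is not PrefOpt's latent-variable model and it omits the tie/equivalence handling the paper adds.

```python
import math
import random

def fit_utilities(n_items, comparisons, lr=0.1, epochs=200, seed=0):
    """Fit latent utilities u from pairwise preferences.
    comparisons: list of (winner, loser) index pairs."""
    u = [0.0] * n_items
    rng = random.Random(seed)
    comparisons = list(comparisons)
    for _ in range(epochs):
        rng.shuffle(comparisons)
        for w, l in comparisons:
            p = 1.0 / (1.0 + math.exp(u[l] - u[w]))  # P(w preferred over l)
            g = 1.0 - p                               # grad of log-likelihood
            u[w] += lr * g
            u[l] -= lr * g
    return u

# Toy data: item 2 always wins, item 0 always loses.
data = [(2, 0), (2, 1), (1, 0), (2, 0), (1, 0)]
u = fit_utilities(3, data)
# The fitted utilities order the items: u[2] > u[1] > u[0]
```

    A sequential optimizer would then propose the next pair to show the user based on these fitted utilities and their uncertainty.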

    Easy Hyperparameter Search Using Optunity

    Optunity is a free software package dedicated to hyperparameter optimization. It contains various types of solvers, ranging from undirected methods to direct search, particle swarm and evolutionary optimization. The design focuses on ease of use, flexibility, code clarity and interoperability with existing software in all machine learning environments. Optunity is written in Python and contains interfaces to environments such as R and MATLAB. Optunity uses a BSD license and is freely available online at http://www.optunity.net.
    Comment: 5 pages, 1 figure

    Neurology-as-a-Service for the Developing World

    Electroencephalography (EEG) is an extensively-used and well-studied technique in the field of medical diagnostics and treatment for brain disorders, including epilepsy, migraines, and tumors. The analysis and interpretation of EEGs require physicians to have specialized training, which is not common even among most doctors in the developed world, let alone the developing world, where physician shortages plague society. This problem can be addressed by tele-EEG, which uses remote EEG analysis by experts, or by local computer processing of EEGs. However, both of these options are prohibitively expensive, and the second requires abundant computing resources and infrastructure, a further concern in developing countries with constraints on capital and computing infrastructure. In this work, we present a cloud-based deep neural network approach to provide decision support for non-specialist physicians in EEG analysis and interpretation. Named `neurology-as-a-service,' the approach requires almost no manual intervention in feature engineering and in the selection of an optimal architecture and hyperparameters of the neural network. In this study, we deploy a pipeline that includes moving EEG data to the cloud and obtaining optimal models for various classification tasks. Our initial prototype has been tested only in developed-world environments to date, but our intention is to test it in developing-world environments in future work. We demonstrate the performance of our proposed approach using the BCI2000 EEG MMI dataset, on which our service attains 63.4% accuracy for the task of classifying real vs. imaginary activity performed by the subject, which is significantly higher than what is obtained with a shallow approach such as support vector machines.
    Comment: Presented at NIPS 2017 Workshop on Machine Learning for the Developing World

    Structure Optimization for Deep Multimodal Fusion Networks using Graph-Induced Kernels

    A popular testbed for deep learning has been multimodal recognition of human activity or gesture involving diverse inputs such as video, audio, skeletal pose and depth images. Deep learning architectures have excelled on such problems due to their ability to combine modality representations at different levels of nonlinear feature extraction. However, designing an optimal architecture in which to fuse such learned representations has largely been a non-trivial human engineering effort. We treat fusion structure optimization as a hyper-parameter search and cast it as a discrete optimization problem under the Bayesian optimization framework. We propose a novel graph-induced kernel to compute structural similarities in the search space of tree-structured multimodal architectures and demonstrate its effectiveness using two challenging multimodal human activity recognition datasets.
    Comment: Proceedings of the 25th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, April 2017, Bruges, Belgium
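    To show the shape of the idea, here is a toy structural kernel between tree-structured architectures that counts shared (parent-label, child-label) edges. This is a deliberately simplified stand-in for the paper's graph-induced kernel, not its definition.

```python
def edges(tree, parent=None):
    """tree: (label, [children...]) nested tuples; returns labelled edges."""
    label, children = tree
    out = [(parent, label)] if parent is not None else []
    for child in children:
        out += edges(child, label)
    return out

def tree_kernel(t1, t2):
    """Count labelled edges common to both trees (with multiplicity)."""
    e1, e2 = edges(t1), edges(t2)
    return sum(min(e1.count(e), e2.count(e)) for e in set(e1))

# Two hypothetical fusion architectures sharing one modality branch.
a = ("fuse", [("video", []), ("audio", [])])
b = ("fuse", [("video", []), ("pose", [])])
# a and b share the ("fuse", "video") edge; a shares both edges with itself.
```

    A Bayesian optimizer would plug such a kernel into a Gaussian process prior over architectures, so that structurally similar fusion trees are predicted to perform similarly.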

    Fast and Reliable Architecture Selection for Convolutional Neural Networks

    The performance of a Convolutional Neural Network (CNN) depends on its hyperparameters, such as the number of layers, the kernel sizes, or the learning rate. Especially in smaller networks and applications with limited computational resources, optimisation is key. We present a fast and efficient approach for CNN architecture selection. Taking into account time consumption, precision and robustness, we develop a heuristic to quickly and reliably assess a network's performance. In combination with Bayesian optimisation (BO), to effectively cover the vast parameter space, our contribution offers a plain and powerful architecture search for this machine learning technique.
    Comment: As published in the proceedings of the 27th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN), pages 179-184, Bruges 2019. 6 pages, 2 figures, 1 table

    ML-assisted versatile approach to Calorimeter R&D

    Advanced detector R&D for both new and ongoing experiments in HEP requires performing computationally intensive and detailed simulations as part of the detector-design optimisation process. We propose a versatile approach to this task that is based on machine learning and can substitute for the most computationally intensive steps of the process while retaining GEANT4-level accuracy. The approach covers the entire detector representation, from event generation to the evaluation of the physics performance. It allows the use of arbitrary module arrangements, different signal and background conditions, tunable reconstruction algorithms, and desired physics performance metrics. When combined with properties of detector and electronics prototypes obtained from beam tests, the approach becomes even more versatile. We focus on the Phase II Upgrade of the LHCb Calorimeter under the requirements of operation at high luminosity. We discuss the general design of the approach and particular estimations, including the spatial and energy resolution of the future LHCb Calorimeter setup under different pile-up conditions.
    Comment: 8 pages. arXiv admin note: text overlap with arXiv:2003.0511

    Hyperparameter Search in Machine Learning

    We introduce the hyperparameter search problem in the field of machine learning and discuss its main challenges from an optimization perspective. Machine learning methods attempt to build models that capture some element of interest based on given data. Most common learning algorithms feature a set of hyperparameters that must be determined before training commences. The choice of hyperparameters can significantly affect the resulting model's performance, but determining good values can be complex; hence a disciplined, theoretically sound search strategy is essential.
    Comment: 5 pages, accepted for MIC 2015: The XI Metaheuristics International Conference in Agadir, Morocco
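    The simplest disciplined baseline for this problem is exhaustive grid search over a discretised hyperparameter space. The sketch below uses an illustrative stand-in score; in practice `score` would be cross-validated performance of the trained model.

```python
import itertools

def grid_search(score, grid):
    """Evaluate every combination in the grid and return the best one."""
    best_cfg, best_val = None, float("-inf")
    keys = sorted(grid)
    for values in itertools.product(*(grid[k] for k in keys)):
        cfg = dict(zip(keys, values))
        val = score(cfg)
        if val > best_val:
            best_cfg, best_val = cfg, val
    return best_cfg, best_val

# Hypothetical SVM-style grid; the score peaks at C=1, gamma=0.1 by assumption.
grid = {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]}
score = lambda cfg: -abs(cfg["C"] - 1) - abs(cfg["gamma"] - 0.1)
best, _ = grid_search(score, grid)
# best == {"C": 1, "gamma": 0.1}
```

    The cost of grid search grows exponentially with the number of hyperparameters, which is what motivates the metaheuristic strategies discussed in the paper.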

    Fast model selection by limiting SVM training times

    Kernelized Support Vector Machines (SVMs) are among the best performing supervised learning methods. But for optimal predictive performance, time-consuming parameter tuning is crucial, which impedes their application. To tackle this problem, the classic model selection procedure based on grid search and cross-validation has been refined, e.g. by data subsampling and direct search heuristics. Here we focus on a different aspect, the stopping criterion for SVM training. We show that by limiting the training time given to the SVM solver during parameter tuning, we can reduce model selection times by an order of magnitude.
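    The idea of budget-capped model selection can be sketched with a stand-in iterative trainer: every candidate gets the same hard cap on training iterations, so poor configurations cannot stall the search. The "trainer" here is gradient descent on a 1-D quadratic, an illustrative substitute for an SVM solver, and the candidate values are made up for the example.

```python
def train(lr, max_iter, tol=1e-8):
    """Minimise f(w) = (w - 2)^2 by gradient descent under an iteration cap.
    Returns (final loss, iterations used)."""
    w = 0.0
    for it in range(1, max_iter + 1):
        grad = 2.0 * (w - 2.0)
        w -= lr * grad
        if grad * grad < tol:   # early stop once (nearly) converged
            break
    return (w - 2.0) ** 2, it

def select(lrs, max_iter):
    """Pick the candidate with the lowest loss under the shared budget."""
    return min(lrs, key=lambda lr: train(lr, max_iter)[0])

# Even a small cap of 50 iterations ranks these candidates correctly.
best = select([0.001, 0.1, 0.4], max_iter=50)
```

    The paper's point is analogous: a capped SVM solve is usually enough to rank parameter settings, so the full training budget is only needed for the final model.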