32 research outputs found

    A Decision Support System for Effective Scheduling in an F-16 Pilot Training Squadron

    Scheduling of flights for a flight training squadron involves the coordination of time and resources in a dynamic environment. The generation of a daily flight schedule (DFS) requires the proper coordination of resources within established time windows. This research provides a decision support tool to assist in the generation of the DFS. Three different priority rules are investigated for determining an initial ordering of flights, and a shifting bottleneck heuristic is used to establish a candidate DFS. A user interface allows a scheduler to interact with the decision support tool during the DFS generation process. Furthermore, the decision support tool provides the capability to produce a weekly schedule for short-term planning purposes as well as the entire flight training program schedule for long-term planning purposes.
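
    As a rough illustration of the priority-rule step described above, the Python sketch below orders a hypothetical set of flights with an earliest-due-date rule; the data model, rule, and names are assumptions for illustration only, and the actual tool also applies a shifting bottleneck heuristic that is not shown here.

        from dataclasses import dataclass

        @dataclass
        class Flight:
            flight_id: str
            due_time: float   # latest acceptable takeoff time, hours into the day (hypothetical)
            duration: float   # expected sortie length in hours (hypothetical)

        def earliest_due_date(flights):
            """One example priority rule: order flights by earliest due time."""
            return sorted(flights, key=lambda f: f.due_time)

        candidates = [Flight("F16-01", 10.5, 1.5), Flight("F16-02", 8.0, 1.0), Flight("F16-03", 13.0, 2.0)]
        for f in earliest_due_date(candidates):
            print(f.flight_id, f.due_time)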

    Accelerated Molecular Dynamics for the Exascale

    A range of specialized Molecular Dynamics (MD) methods have been developed in order to overcome the challenge of reaching longer timescales in systems that evolve through sequences of rare events. In this talk, we consider Parallel Trajectory Splicing (ParSplice), which works by generating a large number of MD trajectory segments in parallel in such a way that they can later be assembled into a single statistically correct state-to-state trajectory, enabling parallel speedups up to N, the number of parallel workers. The prospect of strong-scaling MD is extremely enticing given the continuously increasing scale of available computational resources: on current peta-scale platforms N can be in the hundreds of thousands, which opens the door to MD-accurate millisecond-long atomistic simulations; extending such a capability into the exascale era could be transformative. In practice, however, the ability of ParSplice to scale increasingly relies on predicting where the trajectory will be found in the future. With this insight in mind, we develop a maximum likelihood transition model that is updated on the fly and make use of an uncertainty-driven estimator to approximate the optimal distribution of trajectory segments to be generated next. In addition, we investigate resource optimization schemes designed to fully utilize computational resources in order to generate the maximum expected throughput.
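
    As a sketch of the on-the-fly maximum likelihood transition model mentioned above, the fragment below keeps state-to-state counts and turns them into transition probabilities that could guide where to generate the next segments; the function names and update scheme are illustrative assumptions, not the ParSplice implementation.

        from collections import defaultdict

        counts = defaultdict(lambda: defaultdict(int))   # counts[from_state][to_state]

        def observe(start_state, end_state):
            """Update transition counts whenever a finished segment is spliced in."""
            counts[start_state][end_state] += 1

        def transition_probs(state):
            """Maximum likelihood estimate of where the trajectory goes next."""
            total = sum(counts[state].values())
            return {s: c / total for s, c in counts[state].items()} if total else {}

        observe("A", "A"); observe("A", "B"); observe("A", "A")
        print(transition_probs("A"))   # roughly {'A': 0.67, 'B': 0.33}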

    Bayesian Optimization for Multi-Armed Bandit Problems Using Probabilistic Graphical Models (確率的グラフィカルモデルによる多腕バンディット問題のベイズ最適化)

    筑波大学 (University of Tsukuba), 201

    Business process improvement with performance-based sequential experiments

    Various lifecycle approaches to Business Process Management (BPM) share the common assumption that a process is incrementally improved in the redesign phase. While this assumption is hardly questioned in BPM research, there is evidence from the field of AB testing that improvement concepts often do not lead to actual improvements. If incremental process improvement can only be achieved in a fraction of the cases, there is a need to rapidly validate the assumed benefits. Contemporary BPM research does not provide techniques and guidelines on testing and validating the supposed improvements in a fair manner. In this research, we address these challenges by integrating business process execution concepts with ideas from a set of software engineering practices known as DevOps. We propose a business process improvement methodology named AB-BPM, and a set of techniques that allow us to enact the steps in this methodology. As a first technique, we develop a simulation technique that estimates the performance of a new version in an offline setting using historical data of the old version. Since the results of simulation can be speculative, we propose shadow testing as the next step. Our shadow testing technique partially executes the new version in production alongside the old version in such a way that the new version does not throttle the old version. Finally, we develop techniques that offer AB testing for redesigned processes with immediate feedback at runtime. AB testing compares two versions of a deployed product (e.g., a Web page) by observing users' responses to versions A and B, and determines which one performs better. We propose two algorithms, LTAvgR and ProcessBandit, that dynamically adjust request allocation to the two versions during the test based on their performance.
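
    To make the runtime allocation idea concrete, the sketch below routes incoming requests between two deployed process versions with a simple epsilon-greedy rule that favors the version with the better observed performance; it is only an illustration of the general bandit idea, not the LTAvgR or ProcessBandit algorithms, and the reward signal is a hypothetical placeholder.

        import random

        rewards = {"A": [], "B": []}   # observed performance per version, e.g. negated cycle time

        def choose_version(epsilon=0.1):
            """Pick a version: explore with probability epsilon, otherwise exploit."""
            if random.random() < epsilon or not (rewards["A"] and rewards["B"]):
                return random.choice(["A", "B"])
            means = {v: sum(r) / len(r) for v, r in rewards.items()}
            return max(means, key=means.get)

        def record_outcome(version, reward):
            rewards[version].append(reward)

        v = choose_version()
        record_outcome(v, 1.0)   # hypothetical outcome for the routed request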

    Optimal Sensing and Transmission in Energy Harvesting Sensor Networks

    Sensor networks equipped with energy harvesting (EH) devices have attracted great attention recently. Compared with conventional sensor networks powered by batteries, the energy harvesting abilities of the sensor nodes make sustainable and environment-friendly sensor networks possible. However, the random, scarce, and non-uniform nature of the energy supply also necessitates a completely different approach to energy management. A typical EH wireless sensor node consists of an EH module that converts ambient energy to electrical energy, which is stored in a rechargeable battery and used to power the sensing and transmission operations of the sensor. Therefore, both sensing and transmission are subject to the stochastic energy constraint imposed by the EH process. In this dissertation, we investigate optimal sensing and transmission policies for EH sensor networks under such constraints. For EH sensing, our objective is to understand how the temporal and spatial variabilities of the EH processes affect the sensing performance of the network, and how sensor nodes should coordinate their data collection procedures with each other to cope with the random and non-uniform energy supply and provide reliable sensing performance with analytically provable guarantees. Specifically, we investigate optimal sensing policies for a single sensor node with infinite and finite battery sizes in Chapter 2, the status updating/transmission strategy of an EH source in Chapter 3, and a collaborative sensing policy for a multi-node EH sensor network in Chapter 4. For EH communication, our objective is to evaluate the impacts of stochastic variability of the EH process and practical battery usage constraints on EH systems, and to develop optimal transmission policies that take such impacts into consideration. Specifically, we consider throughput optimization in an EH system under a battery usage constraint in Chapter 5.
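
    As a toy illustration of transmission under a stochastic energy constraint, the simulation below applies a simple threshold policy: harvest a random amount of energy each slot and transmit only when the stored energy exceeds a threshold. The harvesting model, units, and threshold are assumptions for illustration and not the dissertation's optimal policies.

        import random

        def simulate(steps=10_000, battery_cap=10.0, tx_cost=1.0, threshold=2.0):
            battery, sent = 0.0, 0
            for _ in range(steps):
                battery = min(battery + random.expovariate(1.0), battery_cap)  # random energy arrival
                if battery >= threshold:   # transmit only when enough energy is stored
                    battery -= tx_cost
                    sent += 1
            return sent / steps            # empirical throughput, transmissions per slot

        print(simulate())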

    Cooperation, Reliability, and Matching in Inland Freight Transport

    Accounting for variance and hyperparameter optimization in machine learning benchmarks

    The recent revolution in machine learning has relied heavily on the use of standardized benchmarks. Providing clear targets and undeniable measures of improvement for learning algorithms, they are at the center of the scientific methodology in machine learning. They do not, however, guarantee the validity of results, which means that some scientific conclusions about advances in artificial intelligence may prove to be wrong. In this thesis we address this question by first raising the issue (Chapter 5), then studying it in more depth to propose solutions and recommendations (Chapter 6), and finally building a tool to help improve the methodology of researchers (Chapter 7).

    In the first article, Chapter 5, we demonstrate the reproducibility issue on stable and consensual benchmarks, implying that these problems are endemic to a much larger set of machine learning applications that may be less stable and less consensual. We highlight the important impact of the stochasticity of benchmarks, even for the most stable tasks such as image classification, and contend that solutions for reproducible benchmarks must account for this stochasticity.

    In the second article, Chapter 6, we study the sources of variation that are typical of machine learning benchmarks, measure their effect on methods for comparing algorithms, and provide recommendations based on our results. One important contribution of this work is measuring the reliability of a cheap-to-compute but biased estimator of the average performance of algorithms. As explained in the article, an ideal estimator involves multiple rounds of hyperparameter optimization, which makes it too expensive to compute; most researchers must therefore resort to the biased alternative, but until now the magnitude of the resulting degradation was unknown. Based on our results, we provide recommendations for comparing algorithms on benchmarks with limited compute budgets. First, as many sources of variation as possible should be randomized. Second, this randomization should include the random partitioning of data into training, validation, and test sets, which turns out to be the most important source of variation. Third, statistical tests such as the variant of the Mann-Whitney U test presented in our article should be used instead of simple comparisons of averages, so that the uncertainty of performance measurements is accounted for when comparing machine learning algorithms.

    In Chapter 7, we present a hyperparameter optimization framework developed with the main goal of encouraging good hyperparameter optimization practices. The framework is designed around a simple and intuitive interface adapted to the workflow of machine learning researchers. It includes a new experiment versioning system to help researchers organize their rounds of experimentation and leverage prior results for more efficient hyperparameter optimization. Hyperparameter optimization plays an important role in benchmarks, hyperparameters being a significant confounding factor, and providing researchers with an instrument to properly control this confounding factor is complementary to the recommendations of Chapter 6 for accounting for sources of variation. Our recommendations, together with our hyperparameter optimization tool, provide a solid basis for a robust and reliable benchmarking methodology in machine learning.
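
    The following sketch shows the kind of statistical comparison recommended above, applying SciPy's standard Mann-Whitney U test to scores from randomized benchmark runs; the thesis presents its own variant of the test, and the score distributions below are hypothetical.

        import numpy as np
        from scipy.stats import mannwhitneyu

        rng = np.random.default_rng(0)
        scores_a = rng.normal(0.81, 0.02, size=20)   # algorithm A over 20 randomized splits/seeds
        scores_b = rng.normal(0.80, 0.02, size=20)   # algorithm B over 20 randomized splits/seeds

        stat, p_value = mannwhitneyu(scores_a, scores_b, alternative="two-sided")
        print(f"U={stat:.1f}, p={p_value:.3f}")      # accounts for run-to-run variation, not just means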