555 research outputs found

    Uncovering the physics of flapping flat plates with artificial evolution

    We consider an experiment in which a rectangular flat plate is flapped with two degrees of freedom, and a genetic algorithm tunes its trajectory parameters to maximize the average lift force, evolving a population of trajectories that all yield optimal lift. We cluster the converged population by defining a dynamical formation number for a flapping flat plate, showing that optimal unsteady force generation is linked to the formation of a leading-edge vortex with maximum circulation. Force and digital particle image velocimetry measurements confirm this result.
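The evolutionary tuning loop described in the abstract can be sketched in a few lines. Everything below is an illustrative assumption: the real experiment measures lift on a physical plate, whereas here a smooth surrogate function stands in for the measured average lift, and the two trajectory parameters (amplitude, pitch) with their ranges are hypothetical.

```python
import random

random.seed(0)

def lift_surrogate(params):
    # Hypothetical smooth surrogate for average lift (peak at amp=1.2,
    # pitch=0.6); the actual experiment evaluates force measurements.
    amp, pitch = params
    return -(amp - 1.2) ** 2 - (pitch - 0.6) ** 2

def evolve(pop_size=30, generations=50, sigma=0.1):
    # Random initial population of trajectory parameter pairs.
    pop = [(random.uniform(0, 2), random.uniform(0, 1.5))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lift_surrogate, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, p = random.choice(parents)
            # Gaussian mutation of a surviving trajectory.
            children.append((a + random.gauss(0, sigma),
                             p + random.gauss(0, sigma)))
        pop = parents + children
    return max(pop, key=lift_surrogate)

best = evolve()
```

Because the best half of the population is carried over unchanged (elitism), the best fitness never decreases across generations.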

    A min-flow algorithm for Minimal Critical Set detection in Resource Constrained Project Scheduling

    We propose a min-flow algorithm for detecting Minimal Critical Sets (MCS) in Resource-Constrained Project Scheduling Problems (RCPSP). MCS detection is a fundamental step in Precedence Constraint Posting (PCP), one of the most successful approaches to the RCPSP. The proposed approach is considerably simpler than existing flow-based MCS detection procedures and scales better than enumeration- and envelope-based ones, while still providing good-quality Critical Sets. The method is suitable for problem variants with generalized precedence relations or uncertain/variable durations.
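To illustrate what a Minimal Critical Set is (this is a naive enumeration sketch, not the paper's min-flow detection procedure), consider a toy single-resource instance; the activity names, demands, and capacity below are made up:

```python
from itertools import combinations

# Toy instance: activities that could run concurrently, each with a
# demand on one renewable resource of the given capacity.
demand = {"A": 3, "B": 2, "C": 2, "D": 1}
capacity = 4

def minimal_critical_sets(demand, capacity):
    """A critical set over-uses the resource; it is *minimal* when every
    proper subset fits within capacity. Naive enumeration for
    illustration only."""
    acts = list(demand)
    result = []
    for r in range(2, len(acts) + 1):
        for combo in combinations(acts, r):
            used = sum(demand[a] for a in combo)
            # Over capacity, but feasible after removing any one activity.
            if used > capacity and all(
                    used - demand[a] <= capacity for a in combo):
                result.append(set(combo))
    return result

mcss = minimal_critical_sets(demand, capacity)
```

In PCP, each detected MCS is resolved by posting a precedence constraint between two of its activities, which prevents them from overlapping.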

    Boosting Combinatorial Problem Modeling with Machine Learning

    In the past few years, the area of Machine Learning (ML) has witnessed tremendous advancements, becoming a pervasive technology in a wide range of applications. One area that can significantly benefit from ML is Combinatorial Optimization. The three pillars of constraint satisfaction and optimization problem solving, i.e., modeling, search, and optimization, can exploit ML techniques to boost their accuracy, efficiency, and effectiveness. In this survey we focus on the modeling component, whose effectiveness is crucial for solving the problem. Modeling has traditionally been shaped by optimization and domain experts interacting to produce realistic results. Machine Learning techniques can greatly ease this process and exploit the available data to either create models or refine expert-designed ones. We cover approaches recently proposed to enhance the modeling process by learning single constraints, objective functions, or whole models, highlight themes common to multiple approaches, and draw connections with related fields of research.

    Anomaly Detection using Autoencoders in High Performance Computing Systems

    Anomaly detection in supercomputers is a very difficult problem due to the large scale of the systems and the high number of components. The current state of the art for automated anomaly detection employs Machine Learning methods or statistical regression models in a supervised fashion, meaning that the detection tool is trained to distinguish among a fixed set of behaviour classes (healthy and unhealthy states). We propose a novel approach for anomaly detection in High Performance Computing systems based on a (Deep) Machine Learning technique, namely a type of neural network called an autoencoder. The key idea is to train a set of autoencoders to learn the normal (healthy) behaviour of the supercomputer nodes and, after training, use them to identify abnormal conditions. This differs from previous approaches, which were based on learning the abnormal condition, for which much smaller datasets exist (since anomalies are very hard to identify in the first place). We test our approach on a real supercomputer equipped with a fine-grained, scalable monitoring infrastructure that provides large amounts of data to characterize the system behaviour. The results are extremely promising: after a training phase to learn normal system behaviour, our method detects previously unseen anomalies with very good accuracy (between 88% and 96%).
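The core idea, train on healthy data only and flag inputs that reconstruct poorly, can be sketched with a toy linear autoencoder. The synthetic telemetry, the one-unit architecture, and the 99th-percentile threshold below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "healthy node" telemetry: 3 correlated metrics, a toy
# stand-in for fine-grained monitoring data.
t = rng.normal(size=(500, 1))
normal = np.hstack([t, 2 * t, -t]) + 0.05 * rng.normal(size=(500, 3))

# One-hidden-unit linear autoencoder trained by gradient descent.
W_enc = rng.normal(scale=0.1, size=(3, 1))
W_dec = rng.normal(scale=0.1, size=(1, 3))
for _ in range(2000):
    h = normal @ W_enc
    err = h @ W_dec - normal
    W_dec -= 1e-3 * (h.T @ err) / len(normal)
    W_enc -= 1e-3 * (normal.T @ (err @ W_dec.T)) / len(normal)

def score(x):
    """Reconstruction error: low on healthy-like data, high otherwise."""
    return float(np.mean((x @ W_enc @ W_dec - x) ** 2))

# Threshold on the tail of the healthy reconstruction errors.
threshold = np.percentile([score(row[None, :]) for row in normal], 99)
anomaly = np.array([[5.0, -5.0, 5.0]])   # breaks the learned correlations
```

Any input whose score exceeds the threshold is flagged as anomalous; no labeled anomalies are needed at training time.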

    Informed Deep Learning for Epidemics Forecasting

    The SARS-CoV-2 pandemic has galvanized the interest of the scientific community toward methodologies capable of predicting the trend of the epidemiological curve, namely the daily number of infected individuals in the population. One of the critical issues is providing reliable predictions based on interventions enacted by policy-makers, which is of crucial relevance for assessing their effectiveness. In this paper, we provide a novel data-driven application incorporating sub-symbolic knowledge to forecast the spreading of an epidemic depending on a set of interventions. More specifically, we focus on embedding classical epidemiological approaches, i.e., compartmental models, into Deep Learning models to enhance the learning process and provide higher predictive accuracy.
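The compartmental models mentioned above have a simple discrete-time form. The sketch below is a plain SIR model with Euler integration, where an intervention is represented as a reduction of the transmission rate beta; all parameter values are illustrative, not fitted to any data:

```python
# Discrete-time SIR compartmental model (Euler steps). s, i, r are
# population fractions; beta is the transmission rate, gamma the
# recovery rate (illustrative values only).
def sir(beta, gamma, s0, i0, r0, days, dt=1.0):
    s, i, r = s0, i0, r0
    infected = [i]
    for _ in range(days):
        new_inf = beta * s * i      # mass-action incidence
        new_rec = gamma * i
        s -= dt * new_inf
        i += dt * (new_inf - new_rec)
        r += dt * new_rec
        infected.append(i)
    return infected

# An intervention (e.g. a lockdown) can be modeled as lowering beta.
baseline = sir(beta=0.3, gamma=0.1, s0=0.99, i0=0.01, r0=0.0, days=120)
lockdown = sir(beta=0.12, gamma=0.1, s0=0.99, i0=0.01, r0=0.0, days=120)
```

Embedding such a model into a neural network amounts to letting the network predict its time-varying parameters (here, beta as a function of the interventions) while the compartmental structure constrains the resulting curve.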

    Teaching the Old Dog New Tricks: Supervised Learning with Constraints

    Adding constraint support in Machine Learning has the potential to address outstanding issues in data-driven AI systems, such as safety and fairness. Existing approaches typically apply constrained optimization techniques to ML training, enforce constraint satisfaction by adjusting the model design, or use constraints to correct the output. Here, we investigate a different, complementary strategy based on "teaching" constraint satisfaction to a supervised ML method via the direct use of a state-of-the-art constraint solver: this enables taking advantage of decades of research on constrained optimization with limited effort. In practice, we use a decomposition scheme alternating master steps (in charge of enforcing the constraints) and learner steps (where any supervised ML model and training algorithm can be employed). The process leads to approximate constraint satisfaction in general, and convergence properties are difficult to establish; despite this, we found empirically that even a naïve setup of our approach performs well on ML tasks with fairness constraints, and on classical datasets with synthetic constraints.
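The alternating decomposition can be sketched with stand-ins for both components: ordinary least squares as the learner and a simple clipping projection as the master (the paper uses a full constraint solver for the master step). The data, the toy constraint, and the iteration count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=200)

def learner_step(X, targets):
    """Any supervised model works here; ordinary least squares as a
    stand-in for the learner."""
    w, *_ = np.linalg.lstsq(X, targets, rcond=None)
    return w

def master_step(preds, cap=1.0):
    """Enforce a toy constraint: predictions clipped to [-cap, cap].
    The paper's master step calls a constraint solver instead."""
    return np.clip(preds, -cap, cap)

# Alternate learner and master steps: fit, constrain the predictions,
# use them as the next round of training targets.
targets = y.copy()
for _ in range(10):
    w = learner_step(X, targets)
    targets = master_step(X @ w)

final_preds = X @ w
```

As the abstract notes, this yields approximate satisfaction: the final predictions are pulled toward the feasible region but are not guaranteed to lie inside it.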

    Estimation of elastic and viscous properties of the left ventricle based on annulus plane harmonic behavior

    Assessment of left ventricular (LV) function, with an emphasis on contractility, has been a challenge in cardiac mechanics in recent decades. LV function is usually described by the LV pressure-volume (P-V) diagram. Standard P-V diagrams are easy to interpret but difficult to obtain, requiring invasive instrumentation to measure the corresponding volume and pressure data. In the present study, we introduce a technique that estimates the viscoelastic properties of the LV from the harmonic behavior of the ventricular chamber and can be applied non-invasively. The estimation technique models the actual long-axis displacement of the mitral annulus plane toward the cardiac base as a linear damped oscillator with time-varying coefficients. The time-varying parameters of the model were estimated by a standard Recursive Linear Least Squares (RLLS) technique. LV stiffness was in the range of 61.86-136.00 dyne/g.cm at end-systole and 1.25-21.02 dyne/g.cm at end-diastole. The only input used in this model was the long-axis displacement of the annulus plane, which can also be obtained non-invasively using tissue Doppler or MR imaging.

    Injective Domain Knowledge in Neural Networks for Transprecision Computing

    Machine Learning (ML) models are very effective in many learning tasks, thanks to their ability to extract meaningful information from large data sets. Nevertheless, some learning problems cannot be easily solved with pure data alone, e.g., when data are scarce or the functions to be approximated are very complex. Fortunately, in many contexts domain knowledge is explicitly available and can be used to train better ML models. This paper studies the improvements that can be obtained by integrating prior knowledge into a non-trivial learning task, namely precision tuning of transprecision computing applications. The domain information is injected into the ML models in different ways: I) additional features, II) an ad-hoc graph-based network topology, and III) regularization schemes. The results clearly show that ML models exploiting problem-specific information outperform purely data-driven ones, with an average accuracy improvement of around 38%.
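Route III (regularization) has a particularly compact form: penalize deviation of the model from expert knowledge. The sketch below biases a least-squares fit toward a hypothetical expert prior on the weights; the data, the prior, and the strength lam are illustrative assumptions, not the paper's scheme for transprecision tuning:

```python
import numpy as np

rng = np.random.default_rng(2)

# Scarce-data regime: few noisy samples (illustrative).
X = rng.normal(size=(10, 4))
true_w = np.array([2.0, 0.0, -1.0, 0.5])
y = X @ true_w + 0.5 * rng.normal(size=10)

prior_w = np.array([1.8, 0.0, -0.9, 0.4])   # hypothetical expert knowledge

def fit(X, y, prior=None, lam=0.0):
    """Least squares with a knowledge-based regularizer
    lam * ||w - prior||^2; closed form:
    (X'X + lam*I) w = X'y + lam*prior."""
    n = X.shape[1]
    if prior is None:
        prior, lam = np.zeros(n), 0.0
    return np.linalg.solve(X.T @ X + lam * np.eye(n),
                           X.T @ y + lam * prior)

w_plain = fit(X, y)
w_informed = fit(X, y, prior=prior_w, lam=5.0)
```

With scarce data the regularizer dominates and the fit stays near the expert prior; with abundant data the data term takes over.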

    An Analysis of Regularized Approaches for Constrained Machine Learning

    Lombardi, Michele; Baldo, Federico; Borghesi, Andrea; Milano, Michela