
    The Digital Revolution and COVID-19

    We develop a simple model of digital markets to analyze the impact of Covid-19 on the digital transformation of sectors. The lockdown due to Covid-19 is modeled as a shock that wipes out the physical market, temporarily leaving digital consumption as the only option. Under plausible assumptions on digital demand and supply, the model predicts that such a temporary shock produces an irreversible rise of the digital markets. This happens for three distinct reasons. First, by temporarily eliminating the physical market, Covid-19 provides a strong incentive for firms to carry out the fixed investments necessary to venture into the digital market (supply channel). Secondly, by forcing even the most reluctant consumers into the digital market, Covid-19 pushes them to familiarize themselves with digital platforms, and this confidence endures in the post-Covid era (demand channel). Finally, if consumers' taste for digitalization is affected by the size of the digital market, a market may be entrapped in a low-digital equilibrium indefinitely. In such a context, the lockdown due to the pandemic is the shock that may unleash the forces of digitalization and tilt the entire sector towards a high-digital equilibrium (network externalities channel).
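The network externalities channel can be illustrated with a toy simulation (a sketch of the mechanism, not the paper's model; all names and parameter values here are our own): when consumers' taste for digital consumption depends on the current size of the digital market, the dynamics admit a low-digital and a high-digital stable equilibrium, and a temporary lockdown shock can tip the market from one to the other.

```python
import math

# Toy illustration of the network-externalities channel: next-period
# digital share is an S-shaped best response to the current share,
# which yields two stable equilibria (low-digital and high-digital).

def step(share, lockdown=False):
    """One period of digital-market-share dynamics."""
    if lockdown:
        return 1.0  # physical market wiped out: digital is the only option
    # Steep logistic response: taste for digital grows with market size.
    return 1.0 / (1.0 + math.exp(-12.0 * (share - 0.5)))

def simulate(share0, lockdown_periods, horizon=50):
    """Iterate the dynamics, applying the lockdown shock in the given periods."""
    share = share0
    for t in range(horizon):
        share = step(share, lockdown=t in lockdown_periods)
    return share

no_shock = simulate(0.10, lockdown_periods=set())     # stays low-digital
shock = simulate(0.10, lockdown_periods={10, 11})     # tips to high-digital
```

Starting below the tipping point, the market converges to the low-digital equilibrium; a two-period lockdown forces everyone online, after which the externality keeps the market at the high-digital equilibrium even once the shock is over.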

    Classification of Oncologic Data with Genetic Programming

    Discovering the models explaining the hidden relationship between genetic material and tumor pathologies is one of the most important open challenges in biology and medicine. Given the large amount of data made available by the DNA Microarray technique, Machine Learning is becoming a popular tool for this kind of investigation. In the last few years, we have been particularly involved in the study of Genetic Programming for mining large sets of biomedical data. In this paper, we present a comparison between four variants of Genetic Programming for the classification of two different oncologic datasets: the first one contains data from healthy colon tissues and colon tissues affected by cancer; the second one contains data from patients affected by two kinds of leukemia (acute myeloid leukemia and acute lymphoblastic leukemia). We report experimental results obtained using two different fitness criteria: the receiver operating characteristic and the percentage of correctly classified instances. These results, and their comparison with the ones obtained by three non-evolutionary Machine Learning methods (Support Vector Machines, MultiBoosting, and Random Forests) on the same data, seem to hint that Genetic Programming is a promising technique for this kind of classification.
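The two fitness criteria are generic scoring functions over a classifier's outputs, so they can be sketched independently of the GP machinery. A minimal, library-free sketch (illustrative only; the function names are ours): the area under the ROC curve computed via the Mann-Whitney formulation, plus plain accuracy.

```python
def roc_auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability that a randomly chosen positive instance is scored
    above a randomly chosen negative one (ties count one half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative example")
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def accuracy(labels, predictions):
    """Fraction of correctly classified instances."""
    return sum(y == p for y, p in zip(labels, predictions)) / len(labels)
```

Either function can serve as the fitness of a GP individual: evaluate the evolved expression on the training instances and score its outputs against the known class labels.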

    Probabilistic measures of edge criticality in graphs: a study in water distribution networks

    Abstract The issue of vulnerability and robustness in networks has been addressed by several methods. The goal is to identify the critical components (i.e., nodes/edges) whose failure impairs the functioning of the network, and to quantify the ensuing increase in vulnerability. In this paper we consider the drop in network robustness as measured by the increase in vulnerability of the perturbed network compared with the original one. Traditional robustness metrics are based on centrality measures, the loss of efficiency, and spectral analysis. The approach proposed in this paper views the graph as a set of probability distributions, specifically the probability distribution of its node-to-node distances, and computes an index of vulnerability as the distance between the node-to-node distributions associated with the original network and the one obtained by the removal of nodes and edges. Two such distances are proposed for this analysis, Jensen–Shannon and Wasserstein, based respectively on information theory and optimal transport theory, which are shown to offer a different characterization of vulnerability. Extensive computational results, including two real-world water distribution networks, are reported comparing the new approach to the traditional metrics. This modelling and algorithmic framework can also support the analysis of other networked infrastructures, among which power grids, gas distribution and transit networks.
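The Jensen–Shannon variant of this index can be sketched in a few lines of plain Python (an illustrative sketch with hypothetical function names, restricted to unweighted, connected graphs and single-edge removal; the paper's setting is more general):

```python
import math
from collections import Counter, deque

def distance_distribution(adj):
    """Probability distribution of node-to-node shortest-path lengths
    in an unweighted graph, via BFS from every node."""
    counts = Counter()
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        for v, d in dist.items():
            if v != src:
                counts[d] += 1  # only reachable pairs contribute
    total = sum(counts.values())
    return {d: c / total for d, c in counts.items()}

def js_divergence(p, q):
    """Jensen-Shannon divergence (base 2) between discrete distributions."""
    support = set(p) | set(q)
    m = {d: 0.5 * (p.get(d, 0.0) + q.get(d, 0.0)) for d in support}
    def kl(a, b):
        return sum(a[d] * math.log2(a[d] / b[d]) for d in a if a[d] > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def edge_criticality(adj, edge):
    """Vulnerability index of one edge: JS divergence between the distance
    distributions of the original and the perturbed graph."""
    u, v = edge
    pert = {n: [w for w in nbrs if (n, w) not in ((u, v), (v, u))]
            for n, nbrs in adj.items()}
    return js_divergence(distance_distribution(adj), distance_distribution(pert))
```

Removing an edge that lies on many shortest paths shifts the distance distribution and yields a large divergence; a redundant edge leaves the distribution nearly unchanged.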

    A Hyper-Solution Framework for SVM Classification: Improving Damage Detection on Helicopter Fuselage Panels

    Abstract The on-line assessment of the structural health of aircraft fuselage panels and their remaining useful life is crucial in both military and civilian settings. This paper presents an application of a Support Vector Machine (SVM) classification framework aimed at improving the diagnosis task based on the strain values acquired through a monitoring sensor network deployed on the helicopter fuselage panels. More in detail, diagnosis is usually defined as detecting a damage, identifying the specific component affected (i.e., bay or stringer), and then characterizing the damage in terms of center and size. Here, the first two steps are performed through the SVM classification framework, while the last one is based on an Artificial Neural Network (ANN) hierarchy already presented in a previous work by the authors. The training dataset was built through Finite Element Method (FEM) simulation, able to reproduce the behavior of any type of panel and damage according to specific parameters; the result of the FE simulation consists of the strain fields at different locations. As a result, the proposed SVM classification framework improves the reliability of the detection and characterization tasks with respect to the previous approach, which was entirely based on ANN hierarchies. Finally, the remaining useful life is estimated by using another ANN, different for damage on a bay and on a stringer, able to predict the values of two parameters of the NASGRO equation, which is used to estimate the damage propagation.
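For reference, the NASGRO crack-growth equation in its commonly cited form is (the abstract does not state which two of its parameters the ANN predicts):

```latex
\frac{da}{dN} = C \left[ \left( \frac{1-f}{1-R} \right) \Delta K \right]^{n}
\frac{\left( 1 - \dfrac{\Delta K_{th}}{\Delta K} \right)^{p}}
     {\left( 1 - \dfrac{K_{max}}{K_{crit}} \right)^{q}}
```

where a is the crack length, N the number of load cycles, ΔK the stress-intensity-factor range, R the stress ratio, f the crack-opening function, ΔK_th the threshold stress-intensity range, K_max and K_crit the maximum and critical stress intensities, and C, n, p, q empirical material parameters.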

    Hyperparameter optimization for recommender systems through Bayesian optimization

    Abstract Recommender systems represent one of the most successful applications of machine learning in B2C online services, helping users make choices across many web services. A recommender system aims to predict the user's preferences from a huge amount of data, essentially the past behaviour of the user, using an efficient prediction algorithm; one of the most widely used is the matrix-factorization algorithm. Like many machine learning algorithms, its effectiveness depends on the tuning of its hyper-parameters, and the associated optimization problem is called hyper-parameter optimization. This is a noisy, time-consuming, black-box optimization problem: the objective function maps any possible hyper-parameter configuration to a numeric score quantifying the algorithm's performance. In this work, we show how Bayesian optimization can help in tuning three hyper-parameters: the number of latent factors, the regularization parameter, and the learning rate. Numerical results are obtained on a benchmark problem and show that Bayesian optimization obtains better results than both the default setting of the hyper-parameters and random search.
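The black-box objective being tuned can be made concrete with a minimal matrix-factorization trainer (an illustrative sketch with hypothetical names, not the paper's implementation): the three hyper-parameters below are exactly the ones handed to the optimizer, and the resulting prediction error is the score it would minimize.

```python
import random

def train_mf(ratings, n_factors, reg, lr, epochs=500, seed=0):
    """Minimal matrix factorization by SGD on the observed (user, item,
    rating) triples, exposing the three hyper-parameters tuned in the
    paper: number of latent factors, regularization weight, learning rate."""
    rng = random.Random(seed)
    users = {u for u, _, _ in ratings}
    items = {i for _, i, _ in ratings}
    P = {u: [rng.gauss(0, 0.1) for _ in range(n_factors)] for u in users}
    Q = {i: [rng.gauss(0, 0.1) for _ in range(n_factors)] for i in items}
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - sum(pu * qi for pu, qi in zip(P[u], Q[i]))
            for k in range(n_factors):
                pu, qi = P[u][k], Q[i][k]  # read before updating either
                P[u][k] += lr * (err * qi - reg * pu)
                Q[i][k] += lr * (err * pu - reg * qi)
    return P, Q

def rmse(ratings, P, Q):
    """Root-mean-square error of the factor model on the given triples."""
    se = [(r - sum(pu * qi for pu, qi in zip(P[u], Q[i]))) ** 2
          for u, i, r in ratings]
    return (sum(se) / len(se)) ** 0.5
```

A Bayesian optimizer would then treat the mapping from a configuration `(n_factors, reg, lr)` to the validation `rmse` of the trained model as the noisy, expensive black-box objective.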