770 research outputs found

    Light hadron spectroscopy with O(a) improved dynamical fermions

    We present results for the hadron spectrum and static quark potential from a simulation with two flavours of O(a) improved dynamical Wilson fermions at β=5.2. We address the issues of sea quark dependence of observables and finite-size effects. Comment: LATTICE98(spectrum), 3 pages, 4 figures

    How Can Network-Pharmacology Contribute to Antiepileptic Drug Development?

    Network-pharmacology is a field of pharmacology emerging from the observation that most clinical drugs have multiple targets, in contrast with the previously dominant magic-bullet paradigm, which proposed the search for exquisitely selective drugs. Moreover, drug targets are often involved in multiple diseases and frequently present co-expression patterns. Therefore, useful therapeutic information can be drawn from network representations of drug targets. Here, we discuss potential applications of drug-target networks in the field of antiepileptic drug development. Di Ianni, Mauricio Emiliano; Talevi, Alan (Universidad Nacional de La Plata, Facultad de Ciencias Exactas, Cátedra de Química Medicinal; CONICET, Centro Científico Tecnológico Conicet - La Plata; Argentina)

    A First Taste of Dynamical Fermions with an O(a) Improved Action

    We present the first results obtained by the UKQCD Collaboration using a non-perturbatively O(a) improved Wilson quark action with two degenerate dynamical flavours. Comment: Talk presented at Lattice '97, Edinburgh (UK), July 1997. LaTeX, 3 pages, uses espcrc2, 3 figures

    Economic and non-economic drivers of the low-carbon energy transition: evidence from households in the UK, rural India, and refugee settlements in Sub-Saharan Africa

    In this thesis I investigate the drivers of household clean energy technology adoption, looking at the role of economic variables, such as prices and monetary incentives, but also at non-strictly economic dimensions, such as geography, peer influence, health concerns, and heterogeneity in experience, priorities and perceptions of the technology. The topic develops into two main lines of inquiry. The first explores the uptake of residential solar PV systems in the UK. In Chapter 1 I look at how the UK feed-in tariff (FIT) scheme contributed to shaping the distribution of decentralised electricity generation around the country. In particular, I ask how effective the policy was at triggering the siting of solar installations in locations with better generation potential. In Chapter 2 I show that peer effects contribute to the diffusion of this technology and act as complements to the monetary incentives. I discuss two possible channels through which peer effects may operate: social utility derived from imitation, and social learning from information sharing among neighbours. I find evidence consistent with a dominant role of the latter. The second line of research focuses on the valuation of non-traditional cookstoves in Sub-Saharan refugee settlements (Chapter 3) and rural villages in Odisha, India (Chapter 4). I use stated preferences to investigate how different features of the cooking technologies and household heterogeneity affect willingness to pay. In the context of refugee settlements in Sub-Saharan Africa (Chapter 3), I complement the analysis by looking at how the non-traditional cookstoves distributed among the residents affect fuel efficiency, health and safety, time use, and the gendered distribution of the cooking workload. In Chapter 4, I focus instead on how positive and negative experiences with biogas for cooking affect the stated willingness to pay for that technology in rural India, and how experience interacts with risk aversion, time preferences, and credit constraints.

    Lattice quark masses: a non-perturbative measurement

    We discuss the renormalization of different definitions of quark masses in the Wilson and the tree-level improved SW-Clover fermionic action. For the improved case we give the correct relationship between the quark mass and the hopping parameter. Using perturbative and non-perturbative renormalization constants, we extract quark masses in the MSbar scheme from Lattice QCD in the quenched approximation at β=6.0, β=6.2 and β=6.4 for both actions. We find, in the MSbar scheme: m̄(2 GeV) = 5.7 ± 0.1 ± 0.8 MeV, m_s(2 GeV) = 130 ± 2 ± 18 MeV and m_c(2 GeV) = 1662 ± 30 ± 230 MeV. Comment: 21 pages, 4 figures, typos corrected, no result change
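    The relationship between the bare quark mass and the hopping parameter that the abstract refers to is, for the unimproved Wilson case, the standard textbook relation (a sketch only; κ_c is the critical hopping parameter and a the lattice spacing — the improved action modifies this relation, and the improved form is not reproduced here):

    ```latex
    a m_q = \frac{1}{2}\left(\frac{1}{\kappa} - \frac{1}{\kappa_c}\right)
    ```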

    Optimal and Automated Microservice Deployment: formal definition, implementation and validation of a deployment engine

    The main purpose of this work was to study the problem of optimal and automated deployment and reconfiguration (at the architectural level) of microservice systems, proving formal properties and implementing a working solution. It started from the Aeolus component model, which was used to formally define the problem of deploying component-based software systems and to prove various results about decidability and complexity. In particular, the Aeolus authors formally proved that, in the general case, this problem is undecidable. Starting from these results, we expanded the analysis of automated deployment and scaling, focusing on microservice architectures. Using a model inspired by Aeolus and accounting for the characteristics of microservices, we formally proved that optimal and automated deployment and scaling for microservice architectures are algorithmically tractable. However, the decision version of the problem is NP-complete, and obtaining the optimal solution requires solving an NP-optimization problem. To show the applicability of our approach, we also built a model of a simple but realistic case study. The model is developed in the Abstract Behavioral Specification (ABS) language, and to compute the different deployment and scaling plans we used an ABS tool called SmartDepl. To solve the problem, SmartDepl relies on Zephyrus2, a configuration optimizer that computes the optimal deployment configuration of the described applications. This work resulted in an extended abstract accepted at the Microservices 2019 conference in Dortmund (Germany), a paper accepted at the FASE 2019 conference (part of ETAPS) in Prague (Czech Republic), and an accepted book chapter.
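    The NP-hardness mentioned in the abstract can be seen in miniature: placing services with resource demands onto the fewest machines of fixed capacity is a bin-packing instance. The sketch below is purely illustrative (it is not SmartDepl or Zephyrus2) and finds the optimum by brute force, which only works for tiny instances:

    ```python
    from itertools import product

    def min_machines(demands, capacity):
        # Brute-force optimal deployment: fewest machines of the given
        # capacity that can host all services. Exhaustive search over all
        # assignments illustrates why the decision problem is NP-complete.
        n = len(demands)
        for k in range(1, n + 1):
            for assignment in product(range(k), repeat=n):
                loads = [0] * k
                for demand, machine in zip(demands, assignment):
                    loads[machine] += demand
                if all(load <= capacity for load in loads):
                    return k
        return None  # some single service exceeds the capacity

    # Four services with memory demands (GB) on 4 GB machines.
    print(min_machines([3, 2, 2, 1], 4))  # → 2  ({3,1} and {2,2})
    ```

    A real deployment engine replaces this enumeration with a constraint or SMT solver, which is exactly the role Zephyrus2 plays in the work above.
    
    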

    Big Data Analytics and Application Deployment on Cloud Infrastructure

    This dissertation describes a project that began in October 2016. It was born from the collaboration between Mr. Alessandro Bandini and me, and has been developed under the supervision of Professor Gianluigi Zavattaro. The main objective was to study, and in particular to experiment with, cloud computing in general and its potential in the field of data processing. Cloud computing is a utility-oriented and Internet-centric way of delivering IT services on demand. The first chapter is a theoretical introduction to cloud computing, analyzing the main aspects, the keywords, and the technologies behind clouds, as well as the reasons for the success of this technology and its problems. After the introduction, I briefly describe the three main cloud platforms on the market. During this project we developed a simple social network. Consequently, in the third chapter I analyze the social network's development, with the initial solution realized through Amazon Web Services and the steps we took to obtain the final version using Google Cloud Platform with its characteristics. To conclude, the last section is specific to data processing and contains an initial theoretical part describing MapReduce and Hadoop, followed by a description of our analysis. We used Google App Engine to execute these computations on a large dataset. I explain the basic idea, the code, and the problems encountered.
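    The MapReduce model described in the final section can be illustrated with a minimal in-memory word-count sketch (the classic example; function names here are illustrative and not taken from the thesis — Hadoop distributes the same three phases across a cluster):

    ```python
    from collections import defaultdict
    from itertools import chain

    def map_phase(documents):
        # Map: emit a (word, 1) pair for every word in every document.
        return chain.from_iterable(
            ((word, 1) for word in doc.split()) for doc in documents
        )

    def shuffle_phase(pairs):
        # Shuffle: group all emitted values by key (word).
        groups = defaultdict(list)
        for key, value in pairs:
            groups[key].append(value)
        return groups

    def reduce_phase(groups):
        # Reduce: sum the counts collected for each word.
        return {word: sum(counts) for word, counts in groups.items()}

    docs = ["big data on the cloud", "big cloud"]
    counts = reduce_phase(shuffle_phase(map_phase(docs)))
    print(counts["big"], counts["cloud"])  # → 2 2
    ```
    
    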

    Quark masses and the chiral condensate with a non-perturbative renormalization procedure

    We determine the quark masses and the chiral condensate in the MSbar scheme at NNLO from Lattice QCD in the quenched approximation at β=6.0, β=6.2 and β=6.4, using both the Wilson and the tree-level improved SW-Clover fermion action. We extract these quantities using the Vector and the Axial Ward Identities and non-perturbative values of the renormalization constants. We compare the results obtained with the two methods and study the O(a) dependence of the quark masses for both actions. Comment: LATTICE98(spectrum), 3 pages, 1 figure, Edinburgh 98/1
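    The Axial Ward Identity determination mentioned above conventionally extracts a bare quark mass from a ratio of lattice two-point functions; schematically (a sketch of the standard form, with P the pseudoscalar density, A_μ the axial current, and renormalization factors such as Z_A/Z_P omitted):

    ```latex
    2\, a m_{\mathrm{AWI}} = \frac{\sum_{\vec{x}} \langle \partial_\mu A_\mu(x)\, P^\dagger(0) \rangle}{\sum_{\vec{x}} \langle P(x)\, P^\dagger(0) \rangle}
    ```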

    Non-Perturbative Renormalisation of the Lattice ΔS=2 Four-Fermion Operator

    We compute the renormalised four-fermion operator O^{ΔS=2} using a non-perturbative method recently introduced for determining the renormalisation constants of generic lattice composite operators. Because of the presence of the Wilson term, O^{ΔS=2} mixes with operators of different chiralities. A projection method to determine the mixing coefficients is implemented. The numerical results for the renormalisation constants have been obtained from a simulation performed using the SW-Clover quark action, on a 16³ × 32 lattice, at β=6.0. We show that the use of the constants determined non-perturbatively improves the chiral behaviour of the lattice kaon matrix element ⟨K̄⁰|O^{ΔS=2}|K⁰⟩_latt. Comment: LaTeX, 16 pages, 2 postscript figures
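    The chirality mixing induced by the Wilson term is conventionally handled by subtracting a basis of wrong-chirality operators before multiplicative renormalisation; schematically (a sketch of the standard pattern, with the Δ_i the mixing coefficients fixed by the projection method and the operator basis O_i left implicit):

    ```latex
    \hat{O}^{\Delta S=2}(\mu) = Z(\mu a)\left[\, O^{\Delta S=2}(a) + \sum_i \Delta_i(a)\, O_i(a) \,\right]
    ```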

    Decoding movement intentions from the macaque posterior parietal cortex using the Deep Learning paradigm

    Invasive Brain Computer Interfaces (BCIs) make it possible to restore mobility to patients who have lost control of their limbs: this is achieved by decoding bioelectric signals recorded from cortical areas of interest in order to drive a prosthetic limb. The decoding of neural signals is therefore a critical point in BCIs, requiring the development of high-performing, reliable and robust algorithms. These requirements are met in many fields by Deep Neural Networks (DNNs), adaptive algorithms whose performance scales with the amount of data provided, in line with the growing number of electrodes in implants. Using signals pre-recorded from the cortex of two macaques during reach-to-grasp movements towards 5 different objects, I tested three basic, notable examples of DNNs (a dense multilayer network, a Convolutional Neural Network (CNN) and a Recurrent NN (RNN)) on the task of discriminating, continuously and in real time, the movement intention towards each object. In particular, I tested the ability of each model to decode a generic intention (single-class), the performance of the best resulting network in discriminating between intentions (multi-class), with or without ensemble-learning methods, and its response to degradation of the input signal. To facilitate comparison, each network was built and subjected to hyperparameter search following common criteria. The CNN architecture obtained particularly interesting results, achieving F-scores above 0.6 and AUCs above 0.9 in the single-class case with half the parameters of the other networks, yet with greater robustness. It also showed a quasi-linear relationship with signal degradation, without unpredictable performance collapses. The DNNs employed proved to be high-performing and robust despite their simplicity, making purpose-built ad-hoc architectures promising for establishing a new state of the art in neuroprosthetic control.
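    The convolutional decoding pipeline described above can be sketched in NumPy: a 1-D convolution over the multichannel signal, temporal pooling, and a softmax over the object classes. All dimensions and weights here are illustrative toy values, not the actual networks or data from the thesis:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def conv1d(signal, kernels):
        # Valid 1-D convolution of a (channels, time) signal with
        # (n_kernels, channels, width) kernels, followed by ReLU.
        n_k, n_ch, width = kernels.shape
        t_out = signal.shape[1] - width + 1
        out = np.zeros((n_k, t_out))
        for k in range(n_k):
            for t in range(t_out):
                out[k, t] = np.sum(kernels[k] * signal[:, t:t + width])
        return np.maximum(out, 0.0)

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    # Toy example: 96 electrode channels, 50 time bins, 5 object classes.
    signal = rng.standard_normal((96, 50))
    kernels = rng.standard_normal((8, 96, 5)) * 0.1
    features = conv1d(signal, kernels).mean(axis=1)  # temporal pooling → (8,)
    weights = rng.standard_normal((5, 8)) * 0.1
    probs = softmax(weights @ features)              # per-class probabilities
    ```

    A real-time decoder would slide this computation over the incoming signal and emit `probs` at every step; the trained CNN in the thesis plays the role of the random `kernels` and `weights` here.
    
    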