462 research outputs found

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Get PDF
    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Specialized translation at work for a small, expanding company: my experience of internationalizing Bioretics© S.r.l. into Chinese

    Get PDF
    Global markets are currently immersed in two all-encompassing and unstoppable processes: internationalization and globalization. While the former pushes companies to look beyond the borders of their country of origin and forge relationships with foreign trading partners, the latter fosters standardization across countries by reducing spatiotemporal distances and breaking down geographical, political, economic and socio-cultural barriers. In recent decades, another field has emerged to propel these unifying drives: Artificial Intelligence, with its technologies aimed at reproducing human cognitive abilities in machines. The “Language Toolkit – Le lingue straniere al servizio dell’internazionalizzazione dell’impresa” project, promoted by the Department of Interpreting and Translation (Forlì Campus) in collaboration with the Romagna Chamber of Commerce (Forlì-Cesena and Rimini), seeks to help Italian SMEs enter the global market. This dissertation was conceived within that project. Its purpose is to present the translation and localization project from English into Chinese of a series of texts produced by Bioretics© S.r.l.: an investor deck, the company website, and part of the installation and use manual of the Aliquis© framework software, its flagship product. The dissertation is structured as follows: Chapter 1 presents the project and the company in detail; Chapter 2 outlines the internationalization and globalization processes and the Artificial Intelligence market in both Italy and China; Chapter 3 provides the theoretical foundations of every aspect of specialized translation, including website localization; Chapter 4 describes the resources and tools used to perform the translations; Chapter 5 analyses the source texts; Chapter 6 comments on translation strategies and choices.

    Advances and Applications of DSmT for Information Fusion. Collected Works, Volume 5

    Get PDF
    This fifth volume on Advances and Applications of DSmT for Information Fusion collects theoretical and applied contributions of researchers working in different fields of application and in mathematics, and is available in open access. The contributions collected in this volume have either been published or presented in international conferences, seminars, workshops and journals after the fourth volume was disseminated in 2015, or they are new. The contributions in each part of this volume are ordered chronologically. The first part of the book presents theoretical advances on DSmT, dealing mainly with modified Proportional Conflict Redistribution (PCR) rules of combination with degree of intersection, coarsening techniques, interval calculus for PCR thanks to set inversion via interval analysis (SIVIA), rough set classifiers, canonical decomposition of dichotomous belief functions, fast PCR fusion, fast inter-criteria analysis with PCR, and improved PCR5 and PCR6 rules preserving the (quasi-)neutrality of (quasi-)vacuous belief assignment in the fusion of sources of evidence, with their Matlab codes. Because more applications of DSmT have emerged since the fourth book appeared in 2015, the second part of this volume covers selected applications of DSmT, mainly in building change detection, object recognition, quality of data association in tracking, perception in robotics, risk assessment for torrent protection and multi-criteria decision-making, multi-modal image fusion, coarsening techniques, recommender systems, levee characterization and assessment, human heading perception, trust assessment, robotics, biometrics, failure detection, GPS systems, inter-criteria analysis, group decision, human activity recognition, storm prediction, data association for autonomous vehicles, identification of maritime vessels, fusion of support vector machines (SVM), the Silx-Furtif RUST code library for information fusion including PCR rules, and networks for ship classification. Finally, the third part presents contributions related to belief functions in general, published or presented since 2015. These contributions concern decision-making under uncertainty, belief approximations, probability transformations, new distances between belief functions, non-classical multi-criteria decision-making problems with belief functions, generalization of Bayes' theorem, image processing, data association, entropy and cross-entropy measures, fuzzy evidence numbers, the negator of a belief mass, human activity recognition, information fusion for breast cancer therapy, imbalanced data classification, and hybrid techniques mixing deep learning with belief functions.
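
    A flavour of the PCR rules discussed in the first part can be given in a few lines of code. The sketch below is a minimal Python implementation of the PCR5 combination of two basic belief assignments restricted to singleton focal elements; the frame of discernment, the input masses and the function name are illustrative assumptions, not material from the book.

    # Minimal sketch of the PCR5 combination rule for two basic belief
    # assignments (BBAs) with singleton focal elements only.
    def pcr5_singletons(m1, m2):
        """Combine two BBAs given as dicts {hypothesis: mass} with PCR5."""
        frame = set(m1) | set(m2)
        combined = {}
        for x in frame:
            m1x, m2x = m1.get(x, 0.0), m2.get(x, 0.0)
            mass = m1x * m2x  # conjunctive consensus (a singleton only intersects itself)
            for y in frame:
                if y == x:
                    continue
                m1y, m2y = m1.get(y, 0.0), m2.get(y, 0.0)
                # Redistribute the partial conflicts m1(x)m2(y) and m2(x)m1(y)
                # back to x proportionally to the masses involved.
                if m1x + m2y > 0:
                    mass += m1x ** 2 * m2y / (m1x + m2y)
                if m2x + m1y > 0:
                    mass += m2x ** 2 * m1y / (m2x + m1y)
            combined[x] = mass
        return combined

    # Two partially conflicting sources over the frame {A, B}; the combined
    # masses sum to 1 and the conflict is redistributed, not normalized away.
    print(pcr5_singletons({"A": 0.6, "B": 0.4}, {"A": 0.2, "B": 0.8}))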

    Computational Approaches to Drug Profiling and Drug-Protein Interactions

    Get PDF
    Despite substantial increases in R&D spending within the pharmaceutical industry, de novo drug design has become a time-consuming endeavour. High attrition rates have led to a long period of stagnation in drug approvals. Given the extreme costs associated with bringing a drug to market, locating and understanding the reasons for clinical failure is key to future productivity. As part of this PhD, three main contributions were made in this respect. First, the web platform LigNFam enables users to interactively explore similarity relationships between ‘drug-like’ molecules and the proteins they bind. Secondly, two deep-learning-based binding-site comparison tools were developed, competing with the state of the art on benchmark datasets. The models can predict off-target interactions and potential candidates for target-based drug repurposing. Finally, the open-source ScaffoldGraph software was presented for the analysis of hierarchical scaffold relationships; it has already been used in multiple projects, including integration into a virtual screening pipeline to increase the tractability of ultra-large screening experiments. Together with existing tools, these contributions will aid the understanding of drug-protein relationships, particularly in the fields of off-target prediction and drug repurposing, helping to design better drugs faster.
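
    To illustrate the kind of hierarchical scaffold relationship that ScaffoldGraph is built around, the sketch below extracts a Bemis-Murcko scaffold and its generic framework with RDKit. This is a generic illustration under the assumption that RDKit is available; it does not use the ScaffoldGraph or LigNFam APIs themselves, and the example molecule is arbitrary.

    # Generic illustration of hierarchical scaffold extraction with RDKit;
    # this is not the ScaffoldGraph or LigNFam API, and the molecule is an
    # arbitrary example.
    from rdkit import Chem
    from rdkit.Chem.Scaffolds import MurckoScaffold

    mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")  # aspirin

    # Bemis-Murcko scaffold: ring systems plus linkers, side chains removed.
    scaffold = MurckoScaffold.GetScaffoldForMol(mol)
    # Generic framework: atom and bond types abstracted to a carbon skeleton.
    framework = MurckoScaffold.MakeScaffoldGeneric(scaffold)

    print(Chem.MolToSmiles(scaffold))   # expected: c1ccccc1
    print(Chem.MolToSmiles(framework))  # expected: C1CCCCC1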

    Perspective on unconventional computing using magnetic skyrmions

    Full text link
    Learning and pattern recognition inevitably require memory of previous events, a feature that conventional CMOS hardware needs to simulate artificially. Dynamical systems naturally provide the memory, complexity, and nonlinearity needed for a plethora of different unconventional computing approaches. In this perspective article, we focus on the unconventional computing concept of reservoir computing and provide an overview of the key physical reservoir works reported to date. We focus on the promising platform of magnetic structures and, in particular, skyrmions, which potentially allow for low-power applications. Moreover, we discuss skyrmion-based implementations of Brownian computing, which has recently been combined with reservoir computing. This computing paradigm leverages the thermal fluctuations present in many skyrmion systems. Finally, we provide an outlook on the most important challenges in this field. Comment: 19 pages and 3 figures.
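
    Whatever the physical substrate (magnetic textures, skyrmions or otherwise), the reservoir computing scheme referred to here has the same computational skeleton: a fixed, random, nonlinear dynamical system supplies high-dimensional transient responses, and only a linear readout is trained. The following is a minimal echo-state-network sketch in numpy; the sizes, the toy prediction task and the hyperparameters are arbitrary assumptions, and no skyrmion dynamics are modelled.

    # Minimal echo state network: fixed random reservoir, trained linear readout.
    # Sizes, the toy task and all hyperparameters are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_res, n_steps = 1, 200, 1000

    # Fixed random input and recurrent weights; the spectral radius is scaled
    # below 1 so the reservoir has fading memory (echo state property).
    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
    W = rng.normal(0.0, 1.0, (n_res, n_res))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

    # Toy task: one-step-ahead prediction of a sine wave.
    u = np.sin(0.1 * np.arange(n_steps + 1))[:, None]
    x = np.zeros(n_res)
    states = []
    for t in range(n_steps):
        x = np.tanh(W @ x + W_in @ u[t])  # nonlinear reservoir update
        states.append(x.copy())
    X, y = np.array(states), u[1:]

    # Only the linear readout is trained, here by ridge regression.
    W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
    print("train MSE:", float(np.mean((X @ W_out - y) ** 2)))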

    The Quantum Monadology

    Full text link
    The modern theory of functional programming languages uses monads for encoding computational side-effects and side-contexts, beyond bare-bone program logic. Even though quantum computing is intrinsically side-effectful (as in quantum measurement) and context-dependent (as on mixed ancillary states), little of this monadic paradigm has previously been brought to bear on quantum programming languages. Here we systematically analyze the (co)monads on categories of parameterized module spectra which are induced by Grothendieck's "motivic yoga of operations" -- for the present purpose specialized to HC-modules and further to set-indexed complex vector spaces. Interpreting an indexed vector space as a collection of alternative possible quantum state spaces parameterized by quantum measurement results, as familiar from Proto-Quipper semantics, we find that these (co)monads provide a comprehensive natural language for functional quantum programming with classical control and with "dynamic lifting" of quantum measurement results back into classical contexts. We close by indicating a domain-specific quantum programming language (QS) expressing these monadic quantum effects in transparent do-notation, embeddable into the recently constructed Linear Homotopy Type Theory (LHoTT), which interprets into parameterized module spectra. Once embedded into LHoTT, this should make for formally verifiable universal quantum programming with linear quantum types, classical control, dynamic lifting, and notably also with topological effects. Comment: 120 pages, various figures.
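
    The idea of measurement results indexing a family of possible post-measurement state spaces, and of "dynamic lifting" of those results into classical control, can be illustrated informally. The toy numpy sketch below represents a Z-measurement as a map from a state vector to an outcome-indexed dictionary of (probability, collapsed state) pairs and then branches classically on a sampled outcome; it is only a loose analogy and makes no attempt to capture the paper's categorical construction in parameterized module spectra or LHoTT.

    # Toy numpy sketch: a Z-measurement returns an outcome-indexed family of
    # (probability, collapsed state) pairs; a sampled outcome then steers
    # classical control flow ("dynamic lifting"). Loose analogy only.
    import numpy as np

    def measure_z(psi):
        """Map a state vector to {classical outcome: (probability, collapsed state)}."""
        branches = {}
        for outcome, basis in enumerate(np.eye(len(psi), dtype=complex)):
            p = abs(np.vdot(basis, psi)) ** 2
            if p > 0:
                branches[outcome] = (p, basis)
        return branches

    plus = np.array([1, 1], dtype=complex) / np.sqrt(2)  # |+> state
    branches = measure_z(plus)                           # indexed by outcomes 0 and 1

    # Sample an outcome and branch classically on it.
    outcomes = list(branches)
    probs = [branches[o][0] for o in outcomes]
    result = np.random.default_rng(1).choice(outcomes, p=probs)
    next_gate = "I" if result == 0 else "X"  # classical control on the measurement result
    print(result, next_gate)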

    Quantum Europe, Quantum Poland

    Get PDF
    The promises of QIT (Quantum Information Technologies) are very serious, going far beyond the merely technical and market levels. Developing QIT in Europe, treated as the building of a new infrastructural level of civilization, requires a broader view of coordination, funding and priority-setting policy. Simple measures that are used for developing new technologies but do not create a significant ecosystem are insufficient in this case. Quantum technologies are poised to create a new information layer of the knowledge-based society. In this essay, the author subjectively addresses some of the relevant issues: what we already know and what we do not yet know, and what efforts are being made in Europe. A Polish version of this paper was published in Przegl.Telekom. 2.23.

    Brain Computations and Connectivity [2nd edition]

    Get PDF
    This is an open access title available under the terms of a CC BY-NC-ND 4.0 International licence. It is free to read on the Oxford Academic platform and offered as a free PDF download from OUP and selected open access locations. Brain Computations and Connectivity is about how the brain works. In order to understand this, it is essential to know what is computed by different brain systems and how those computations are performed. The aim of this book is to elucidate what is computed in different brain systems, and to describe current biologically plausible computational approaches and models of how each of these brain systems computes. Understanding the brain in this way has enormous potential for understanding ourselves better, in health and in disease. This understanding can potentially be applied to the treatment of the brain in disease, and to artificial intelligence, which will benefit from knowledge of how the brain performs many of its extraordinarily impressive functions. The book is pioneering in taking this approach to brain function, considering what is computed by many of our brain systems and how it is computed, and it updates the earlier book, Rolls (2021) Brain Computations: What and How, Oxford University Press, with much new evidence, including on the connectivity of the human brain. Brain Computations and Connectivity will be of interest to all scientists interested in brain function and how the brain works, whether they come from neuroscience, from the medical sciences including neurology and psychiatry, from computational science including machine learning and artificial intelligence, or from areas such as theoretical physics.

    Computer-based methods of knowledge generation in science - What can the computer tell us about the world?

    Get PDF
    The computer has significantly changed scientific practice in almost all disciplines. Alongside traditional sources of new knowledge such as observations, deductive arguments or experiments, computer-based methods such as 'computer simulations' and 'machine learning' are now regularly cited as such sources. This shift in science raises philosophy-of-science questions about these new methods. One of the most obvious questions is whether these new methods are suited to serve as sources of new knowledge. This question is pursued in the present work, with a particular focus on one of the central problems of computer-based methods: opacity. Computer-based methods are called opaque when the causal connection between input and result cannot be traced. The central questions of this work are whether computer simulations and machine learning algorithms are opaque, whether the opacity of the two methods is of the same nature, and whether opacity prevents new knowledge from being gained with computer-based methods. These questions are examined close to scientific practice; in particular, particle physics and the ATLAS experiment at CERN serve as important case studies. The work is based on five articles. In the first two articles, computer simulations are compared with two other methods - experiments and arguments - in order to classify them methodologically and to work out which challenges in gaining knowledge distinguish computer simulations from the other methods. The first article compares computer simulations and experiments. Given the diversity of computer simulations, however, a blanket comparison with experiments is not meaningful. Various epistemic aspects are worked out, on the basis of which the comparison should be carried out depending on the context of application. The second article discusses a position formulated by Claus Beisbart that understands computer simulations as arguments. This 'argument view' describes the functioning of computer simulations very well and thus makes it possible to answer questions about the opacity and the inductive character of computer simulations. However, the argument view alone cannot sufficiently answer how new knowledge can be gained with computer simulations. The third article deals with the role of models in theoretical ecology. Models are a central component of computer simulations and machine learning algorithms. The questions about the relationship between phenomena and models, considered here using examples from ecology, are therefore of central importance for the epistemic questions of this work. The fourth article forms the link between the topics of computer simulation and machine learning. In this article, different kinds of opacity are defined, and computer simulations and machine learning algorithms are examined, using examples from particle physics, to determine which kinds of opacity are present in each case. It is argued that opacity poses no fundamental problem for gaining knowledge with computer simulations, but that model opacity could be a source of fundamental opacity for machine learning algorithms.
    In the fifth article, the same terminology is applied to the domain of chess computers. The comparison between a traditional chess computer and a chess computer based on a neural network makes it possible to illustrate the consequences of the different kinds of opacity. Overall, the work provides a methodological classification of computer simulations and shows that neither a reference to experiments nor to arguments alone can clarify how computer simulations lead to new knowledge. A clear definition of the kinds of opacity present in each case allows them to be distinguished from the closely related machine learning algorithms.

    An Efficient Canonical Narrowing Implementation with Irreducibility and SMT Constraints for Generic Symbolic Protocol Analysis

    Full text link
    Narrowing and unification are very useful tools for the symbolic analysis of rewrite theories, and thus of any model that can be specified in that way. A very clear example of their application is the field of formal cryptographic protocol analysis, which is why narrowing and unification are used in tools such as Maude-NPA, Tamarin and Akiss. In this work we present the implementation of a canonical narrowing algorithm, which improves the standard narrowing algorithm and is extended to process rewrite theories with conditional rules. The conditions of the rules contain SMT constraints, which are carried throughout the execution of the algorithm to determine whether the solutions have satisfiable or unsatisfiable associated constraints and, in the latter case, to discard them. Comment: 41 pages, 7 tables, 1 algorithm, 9 examples.
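
    Unification, the basic operation that narrowing builds on, can be sketched compactly. The following is a minimal syntactic (non-equational) unification routine in Python with an occurs check; the term representation and names are illustrative assumptions, and the equational reasoning, irreducibility and SMT constraints handled by the canonical narrowing algorithm are deliberately left out.

    # Minimal syntactic unification with occurs check (Robinson-style).
    # Terms: variables are strings starting with '?', applications are tuples
    # (symbol, arg1, ..., argn). Representation and names are illustrative.
    def is_var(t):
        return isinstance(t, str) and t.startswith("?")

    def walk(t, subst):
        while is_var(t) and t in subst:
            t = subst[t]
        return t

    def occurs(v, t, subst):
        t = walk(t, subst)
        if t == v:
            return True
        return isinstance(t, tuple) and any(occurs(v, a, subst) for a in t[1:])

    def unify(s, t, subst=None):
        """Return a most general unifier as a dict, or None if there is none."""
        subst = {} if subst is None else subst
        s, t = walk(s, subst), walk(t, subst)
        if s == t:
            return subst
        if is_var(s):
            return None if occurs(s, t, subst) else {**subst, s: t}
        if is_var(t):
            return None if occurs(t, s, subst) else {**subst, t: s}
        if isinstance(s, tuple) and isinstance(t, tuple) and s[0] == t[0] and len(s) == len(t):
            for a, b in zip(s[1:], t[1:]):
                subst = unify(a, b, subst)
                if subst is None:
                    return None
            return subst
        return None

    # Unifying f(?X, g(a)) with f(b, ?Y) yields {?X: b, ?Y: g(a)}.
    print(unify(("f", "?X", ("g", ("a",))), ("f", ("b",), "?Y")))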