
    Specialized translation at work for a small expanding company: my experience internationalizing Bioretics© S.r.l. into Chinese

    Global markets are currently immersed in two all-encompassing and unstoppable processes: internationalization and globalization. While the former pushes companies to look beyond the borders of their country of origin to forge relationships with foreign trading partners, the latter fosters standardization across countries by reducing spatiotemporal distances and breaking down geographical, political, economic and socio-cultural barriers. In recent decades, another domain has emerged to propel these unifying drives: Artificial Intelligence, together with the advanced technologies that aim to reproduce human cognitive abilities in machines. The “Language Toolkit – Le lingue straniere al servizio dell’internazionalizzazione dell’impresa” project, promoted by the Department of Interpreting and Translation (Forlì Campus) in collaboration with the Romagna Chamber of Commerce (Forlì-Cesena and Rimini), seeks to help Italian SMEs make their way into the global market. This dissertation was conceived within that project. Its purpose is to present the translation and localization project, from English into Chinese, of a series of texts produced by Bioretics© S.r.l.: an investor deck, the company website and part of the installation and use manual of the Aliquis© framework software, its flagship product. The dissertation is structured as follows: Chapter 1 presents the project and the company in detail; Chapter 2 outlines the internationalization and globalization processes and the Artificial Intelligence market in both Italy and China; Chapter 3 provides the theoretical foundations for every aspect of specialized translation, including website localization; Chapter 4 describes the resources and tools used to perform the translations; Chapter 5 proposes an analysis of the source texts; Chapter 6 is a commentary on translation strategies and choices.

    Tackling Universal Properties of Minimal Trap Spaces of Boolean Networks

    Minimal trap spaces (MTSs) capture subspaces in which the Boolean dynamics is trapped, whatever the update mode; they correspond to the attractors of the most permissive mode. Due to their versatility, the computation of MTSs has recently gained traction, essentially focused on their enumeration. In this paper, we address logical reasoning about universal properties of MTSs in the scope of two problems: the reprogramming of Boolean networks, i.e., identifying the permanent freezes of Boolean variables that enforce a given property on all MTSs, and the synthesis of Boolean networks from universal properties of their MTSs. Both problems reduce to deciding the satisfiability of a quantified propositional logic formula with three levels of quantifiers (∃∀∃). We introduce a Counter-Example Guided Abstraction Refinement (CEGAR) approach that solves these problems efficiently by coupling the resolution of two simpler formulas. We provide a prototype relying on Answer-Set Programming for each formula and show its tractability on a wide range of Boolean models of biological networks. Comment: Accepted at the 21st International Conference on Computational Methods in Systems Biology (CMSB 2023).
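
    The CEGAR scheme the abstract refers to can be pictured as a loop between a candidate solver and a verifier. Below is a minimal, generic sketch in Python; the propose and find_counterexample callbacks are hypothetical placeholders standing in for the paper's two ASP-encoded formulas, not the authors' implementation.

```python
# Generic counter-example guided abstraction refinement (CEGAR) loop.
# `propose` solves the simpler existential formula under the learned
# constraints; `find_counterexample` checks the universal property and,
# if it fails, returns a witness (e.g. a violating minimal trap space).
from typing import Callable, Optional

def cegar(propose: Callable[[list], Optional[object]],
          find_counterexample: Callable[[object], Optional[object]],
          max_iterations: int = 1000) -> Optional[object]:
    constraints: list = []                    # refinements learned so far
    for _ in range(max_iterations):
        candidate = propose(constraints)      # outer exists: pick a candidate
        if candidate is None:
            return None                       # abstraction unsatisfiable
        cex = find_counterexample(candidate)  # inner forall: try to refute it
        if cex is None:
            return candidate                  # no violating MTS: done
        constraints.append(cex)               # refine the abstraction
    raise RuntimeError("iteration budget exhausted")
```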

    Higher-Order MSL Horn Constraints


    Measuring the impact of COVID-19 on hospital care pathways

    Care pathways in hospitals around the world reported significant disruption during the recent COVID-19 pandemic, but measuring the actual impact is more problematic. Process mining can be useful for hospital management to measure the conformance of real-life care to what might be considered normal operations. In this study, we aim to demonstrate that process mining can be used to investigate process changes associated with complex disruptive events. We studied perturbations to accident and emergency (A&E) and maternity pathways in a UK public hospital during the COVID-19 pandemic. Coincidentally, the hospital had implemented a Command Centre approach to patient-flow management, affording an opportunity to study both the planned improvement and the disruption due to the pandemic. Our study proposes and demonstrates a method for measuring and investigating the impact of such planned and unplanned disruptions on hospital care pathways. We found that during the pandemic, both A&E and maternity pathways showed measurable reductions in mean length of stay and a measurable drop in the percentage of pathways conforming to normative models. There were no distinctive patterns in monthly mean length of stay or conformance throughout the phases of the installation of the hospital’s new Command Centre approach. Due to a deficit in the available A&E data, the findings for A&E pathways could not be interpreted.
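
    As an illustration of the kind of conformance measurement described above, the open-source pm4py library can discover a normative model from a baseline period and replay later cases against it. This is a minimal sketch, not the study's pipeline; the file name, date ranges and the choice of token-based replay are assumptions.

```python
import pm4py

# Load a hospital event log (XES format and file name are hypothetical).
log = pm4py.read_xes("ae_pathways.xes")

# Discover a normative Petri net from a pre-pandemic baseline period.
baseline = pm4py.filter_time_range(log, "2019-01-01 00:00:00", "2020-01-31 23:59:59")
net, im, fm = pm4py.discover_petri_net_inductive(baseline)

# Replay pandemic-era cases against the baseline model; the returned
# dictionary includes aggregate fitness and the share of fitting traces.
pandemic = pm4py.filter_time_range(log, "2020-03-01 00:00:00", "2021-03-01 00:00:00")
print(pm4py.fitness_token_based_replay(pandemic, net, im, fm))
```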

    Participation for health equity: a comparison of citizens’ juries and health impact assessment

    Despite research demonstrating that the social determinants of health are the primary cause of health inequities, policy efforts in high-income countries have largely failed to produce more equitable health outcomes. Recent initiatives have aimed to create ‘healthier’ policies by incorporating public perspectives into their design, and scholarship has focused on improving participatory technologies. Yet how participation can improve health equity through policymaking for the social determinants of health remains unclear. The thesis addresses this gap by examining how two participatory technologies implemented in Australia and the UK, citizens’ juries and health impact assessment, affected health equity. I conducted a qualitative comparative analysis of four case studies of participatory processes, including interviews and document analysis, and found that the intersection between context, positionality and process generated a range of direct and distal outcomes for health equity. In doing so, I examined how processes were contextually designed and delivered and personally experienced, and how their adaptive and interpretive nature produced outcomes relevant to health equity. Though participatory technologies were often designed and promoted as uniform tools, the context in which they were employed profoundly affected their implementation. Processes were embedded within different participatory ecologies (histories, spaces and practices) that shaped their aims, design and delivery. Similarly, individual characteristics of participants (especially their positionality) affected how they interpreted the process: what it could achieve and how they should participate. In turn, participants’ experiences resulted from the (in)congruence between their expectations and outcomes. The participatory experience led to various personal outcomes, including civic skills, social capital and empowerment, which can benefit health equity. ‘Having a say’ was often described as the vital ingredient for why participants experienced empowerment, yet what mattered most for generating this outcome was whether participants ‘felt heard.’ This dialogic process between participants ‘voicing’ and decision-makers ‘listening’ was core to the experience of empowerment. The processes also led to governance outcomes. The level of impact on the intended decision varied: some processes created direct effects, but more commonly, by being situated in participatory ecologies, the processes effected change through non-linear or diffuse channels. Though public participation is often structured to achieve a technocratic goal, the processes accomplished other participatory, epistemic and institutional aims. These non-technocratic outcomes, combined with decision-making changes, could improve governance for the social determinants of health. Power acted as a mechanism underpinning the other elements of the processes. Public health theories have begun to focus on the role of power as a fundamental determinant of health inequities, and this thesis contributes to this emerging body of evidence by examining how instrumental, structural and discursive forms of power were enacted and influenced how processes were implemented and experienced, and what outcomes they produced. By examining not just what outcomes occurred but how they arose, this research develops a better understanding of the underlying mechanisms that generate outcomes.
This shifts the evidence from ‘perfecting the form’ toward building an understanding of how to utilise participatory approaches within specific contexts to achieve health equity benefits. The thesis highlights the need for greater consideration of context, positionality and the variability of experiences in public participation. If participatory processes are to achieve the specific outcomes (healthy public policy and empowerment) that improve health equity, then consideration must be given to the mechanisms that can produce these effects.

    Approximate Inference in Probabilistic Answer Set Programming for Statistical Probabilities

    Type 1 statements were introduced by Halpern in 1990 with the goal of representing statistical information about a domain of interest. These are of the form “x of the elements share the same property”. The recently proposed language PASTA (Probabilistic Answer set programming for STAtistical probabilities) extends Probabilistic Logic Programs under the Distribution Semantics and allows the definition of this type of statement. To perform exact inference, PASTA programs are converted into probabilistic answer set programs under the Credal Semantics. However, exact inference is infeasible when more than a few random variables are involved. Here, we propose several algorithms to perform both conditional and unconditional approximate inference in PASTA programs and test them on different benchmarks. The results show that the approximate algorithms scale to hundreds of variables and thus can manage real-world domains.
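
    To make the idea of approximate inference under the Credal Semantics concrete, here is a minimal sketch (not the PASTA implementation) using the clingo Python API: worlds are sampled from independent probabilistic facts, and for each world the query is checked against all answer sets, yielding approximate lower and upper probabilities. The program, facts and probabilities are hypothetical.

```python
import random
import clingo

PROB_FACTS = {"bird(1).": 0.9, "bird(2).": 0.9}   # hypothetical probabilistic facts
RULES = "fly(X) ; not_fly(X) :- bird(X).\n"        # hypothetical rules
QUERY = "fly(1)"

def query_in_answer_sets(world_facts):
    """Return (holds in every answer set, holds in some answer set)."""
    ctl = clingo.Control(["0"])                    # "0" = enumerate all answer sets
    ctl.add("base", [], RULES + "\n".join(world_facts))
    ctl.ground([("base", [])])
    in_all, in_some, any_model = True, False, False
    with ctl.solve(yield_=True) as handle:
        for model in handle:
            any_model = True
            holds = any(str(atom) == QUERY for atom in model.symbols(shown=True))
            in_all = in_all and holds
            in_some = in_some or holds
    return in_all and any_model, in_some

def approximate_probability(n_samples=1000):
    lower = upper = 0
    for _ in range(n_samples):
        # Sample a world: each probabilistic fact is included independently.
        world = [f for f, p in PROB_FACTS.items() if random.random() < p]
        in_all, in_some = query_in_answer_sets(world)
        lower += in_all                            # cautious (lower) count
        upper += in_some                           # brave (upper) count
    return lower / n_samples, upper / n_samples

print(approximate_probability())
```

    Conditional queries can be approximated in the same loop by rejecting sampled worlds that are inconsistent with the evidence.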

    China-US Competition

    This open access edited book offers a close examination of European and Asian responses to the escalating rivalry between the US and China. As a new Cold War has become a perceptible reality in the post-COVID era, the topic is of great importance to policymakers, academic researchers, and the interested public. Furthermore, the book addresses an under-studied and increasingly important phenomenon in international relations: the impact of the growing strategic competition between the United States and China on third parties, such as small and middle powers in arguably the two most affected regions of the world, Europe and East Asia. The European side has been under-studied, and explicitly comparative work on Europe and East Asia is extremely rare. Given that the book focuses heavily on recent developments, many of them quite dramatic, very few publications cover the same topics.

    If interpretability is the answer, what is the question?

    Due to the ability to model even complex dependencies, machine learning (ML) can be used to tackle a broad range of (high-stakes) prediction problems. The complexity of the resulting models comes at the cost of transparency, meaning that it is difficult to understand a model by inspecting its parameters. This opacity is considered problematic since it hampers the transfer of knowledge from the model, undermines the agency of individuals affected by algorithmic decisions, and makes it more challenging to expose non-robust or unethical behaviour. To tackle the opacity of ML models, the field of interpretable machine learning (IML) has emerged. The field is motivated by the idea that if we could understand a model's behaviour, either by making the model itself interpretable or by inspecting post-hoc explanations, we could also expose unethical and non-robust behaviour, learn about the data-generating process, and restore the agency of affected individuals. IML is not only a highly active area of research; the developed techniques are also widely applied in both industry and the sciences. Despite its popularity, the field faces fundamental criticism questioning whether IML actually helps in tackling the aforementioned problems of ML, and even whether it should be a field of research in the first place. First and foremost, IML is criticised for lacking a clear goal and, thus, a clear definition of what it means for a model to be interpretable. On a similar note, the meaning of existing methods is often unclear, so they may be misunderstood or even misused to hide unethical behaviour. Moreover, estimating conditional-sampling-based techniques poses a significant computational challenge. With the contributions included in this thesis, we tackle these three challenges for IML. We join a range of work in arguing that the field struggles to define and evaluate "interpretability" because incoherent interpretation goals are conflated. However, the different goals can be disentangled such that coherent requirements can inform the derivation of the respective target estimands. We demonstrate this with the examples of two interpretation contexts: recourse and scientific inference. To tackle the misinterpretation of IML methods, we suggest deriving formal interpretation rules that link explanations to aspects of the model and data. In our work, we specifically focus on interpreting feature importance. Furthermore, we collect interpretation pitfalls and communicate them to a broader audience. To efficiently estimate conditional-sampling-based interpretation techniques, we propose two methods that leverage the dependence structure in the data to simplify the estimation problems for Conditional Feature Importance (CFI) and SAGE. A causal perspective proved vital in tackling these challenges: first, because IML problems such as algorithmic recourse are inherently causal; second, because causality helps to disentangle the different aspects of model and data and, therefore, to distinguish the insights that different methods provide; and third, because algorithms developed for causal structure learning can be leveraged for the efficient estimation of conditional-sampling-based IML methods.
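
    As a concrete illustration of a conditional-sampling-based technique, the following sketch computes a simple variant of Conditional Feature Importance by permuting a feature within quantile bins of a correlated feature, so the perturbation roughly respects the dependence structure of the data. The data, model and binning scheme are illustrative assumptions, not the thesis' estimator.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Synthetic data with two correlated features (illustrative only).
n = 2000
x1 = rng.normal(size=n)
x2 = x1 + 0.5 * rng.normal(size=n)                 # x2 depends on x1
y = x1 + x2 + rng.normal(scale=0.5, size=n)
X = np.column_stack([x1, x2])

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
base_loss = mean_squared_error(y, model.predict(X))

def conditional_feature_importance(feature, cond, n_bins=10):
    """Permute `feature` within quantile bins of `cond`; report the loss increase."""
    edges = np.quantile(X[:, cond], np.linspace(0, 1, n_bins + 1)[1:-1])
    bins = np.digitize(X[:, cond], edges)
    Xp = X.copy()
    for b in np.unique(bins):
        idx = np.where(bins == b)[0]
        Xp[idx, feature] = rng.permutation(Xp[idx, feature])
    return mean_squared_error(y, model.predict(Xp)) - base_loss

print("CFI of x2 given x1:", conditional_feature_importance(feature=1, cond=0))
```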
