
    Specialized translation at work for a small, expanding company: my experience of internationalizing Bioretics© S.r.l. into Chinese

    Global markets are currently immersed in two all-encompassing and unstoppable processes: internationalization and globalization. While the former pushes companies to look beyond the borders of their country of origin to forge relationships with foreign trading partners, the latter fosters standardization across countries by reducing spatiotemporal distances and breaking down geographical, political, economic and socio-cultural barriers. In recent decades, another domain has emerged to propel these unifying drives: Artificial Intelligence, together with the advanced technologies through which it aims to reproduce human cognitive abilities in machines. The “Language Toolkit – Le lingue straniere al servizio dell’internazionalizzazione dell’impresa” project, promoted by the Department of Interpreting and Translation (Forlì Campus) in collaboration with the Romagna Chamber of Commerce (Forlì-Cesena and Rimini), seeks to help Italian SMEs make their way into the global market. It is within this project that this dissertation was conceived. Its purpose is to present the translation and localization project, from English into Chinese, of a series of texts produced by Bioretics© S.r.l.: an investor deck, the company website and part of the installation and use manual of the Aliquis© framework software, its flagship product. The dissertation is structured as follows: Chapter 1 presents the project and the company in detail; Chapter 2 outlines the internationalization and globalization processes and the Artificial Intelligence market in both Italy and China; Chapter 3 provides the theoretical foundations for every aspect of Specialized Translation, including website localization; Chapter 4 describes the resources and tools used to perform the translations; Chapter 5 proposes an analysis of the source texts; Chapter 6 is a commentary on translation strategies and choices.

    Copyright as a constraint on creating technological value

    Defence date: 8 January 2019. Examining Board: Giovanni Sartor, EUI; Peter Drahos, EUI; Jane C. Ginsburg, Columbia Law School; Raquel Xalabarder, Universitat Oberta de Catalunya. How do we legislate for the unknown? This work tackles the question from the perspective of copyright, analysing the judicial practice emerging from case law on new uses of intellectual property resulting from technological change. Starting off by comparing the outcomes of actual innovation-related cases decided in jurisdictions with and without the fair use defence available, it delves deeper into the pathways of judicial reasoning and doctrinal debate arising in the two copyright realities, describing the dark sides of legal flexibility, the attempts to ‘bring order into chaos’ on one side and, on the other, the effort of judges actively looking for ways not to close the door on valuable innovation where inflexible legislation was about to become an impassable choke point. The analysis then moves away from the high-budget, large-scale innovation projects financed by the giants of the Internet era. Instead, building upon the findings of Yochai Benkler on the subject of networked creativity, it brings forth a type of innovation that brings together networked individuals, sharing and building upon each other’s results instead of competing, often working for non-economic motivations. It is seemingly the same type of innovation, deeply rooted in the so-called ‘nerd culture’, that powered the early years of the 20th-century digital revolution. As this culture was put on trial when Oracle famously sued Google over the reuse of Java in the Android mobile operating system, the commentary emerging from the surrounding debate made it possible to draw more general conclusions about what powers digital evolution in a networked environment. Lastly, turning to current trends in European case law, the analysis concludes by offering a rationale as to why a transformative use exception would allow courts to openly engage in the types of reasoning that seem to have become a necessity in cases on the fringes of copyright.

    Towards a Peaceful Development of Cyberspace - Challenges and Technical Measures for the De-escalation of State-led Cyberconflicts and Arms Control of Cyberweapons

    Cyberspace, now a few decades old, has become a matter of course for most of us, a part of our everyday life. At the same time, this space and the global infrastructure behind it are essential for our civilization, economy and administration, and thus a vital expression and lifeline of a globalized world. However, these developments also create vulnerabilities, and cyberspace is increasingly developing into an intelligence and military operational domain – for the defense and security of states, but also as a component of offensive military planning, visible in the creation of military cyber-departments and the integration of cyberspace into states' security and defense strategies. To contain and regulate the conflict and escalation potential of technology used by military forces, a complex tool set of transparency, de-escalation and arms control measures has been developed and proven over recent decades. Unfortunately, many of these established measures do not work for cyberspace because of its specific technical characteristics. Moreover, the very concept of what constitutes a weapon – an essential prerequisite for regulation – starts to blur in this domain. Against this background, this thesis aims to answer how measures for the de-escalation of state-led conflicts in cyberspace and arms control of cyberweapons can be developed. To answer this question, the dissertation takes a specifically technical perspective on these problems and on the underlying political challenges of state behavior and international humanitarian law in cyberspace, in order to identify starting points for technical measures of transparency, arms control and verification. Building on this approach of adapting existing technical measures from other fields of computer science, the thesis provides proof-of-concept approaches for some of these challenges: a classification system for cyberweapons based on technically measurable features, an approach for the mutual reduction of vulnerability stockpiles, and an approach to plausibly assure non-involvement in a cyberconflict as a de-escalation measure. These initial approaches, and the question of how and by which measures arms control and conflict reduction can work for cyberspace, are still quite new and have so far received little debate. Indeed, deliberately restricting the capabilities of technology in order to serve a larger goal, such as reducing its destructive use, is not yet common in the engineering mindset of computer science. This dissertation therefore also aims to provide some impulses regarding the responsibility and creative options of computer science with a view to the peaceful development and use of cyberspace.

    Assessing the feasibility of phylogenetic models for the classification of malicious applications

    Advisor: André Ricardo Abed Grégio. Doctoral thesis - Universidade Federal do Paraná, Setor de Ciências Exatas, Programa de Pós-Graduação em Informática. Defence: Curitiba, 03/02/2023. Includes references: p. 150-170. Area of concentration: Computer Science. Abstract: Thousands of malicious programs are created, modified with the support of automation tools, and released daily on the world wide web. Among these threats, malware are programs specifically designed to disrupt, damage, or gain unauthorized access to a system or device. To facilitate the identification and categorization of common behaviors, structures and other characteristics of malware, and thereby enable the development of defense solutions, analysis strategies exist that classify malware into groups known as families. One of these strategies is Phylogeny, a technique drawn from Biology that investigates the historical and evolutionary relationships of a species or other group of elements. In addition, applying clustering techniques to similar sets eases reverse engineering tasks for the analysis of unknown variants; a variant is a new version of malicious code created by modifying existing malware. The present work investigates the feasibility of using phylogenies and clustering methods in the classification of malware variants for the Android platform. Initially, 82 related works were analyzed to survey state-of-the-art experimental configurations. Four experiments were then carried out to evaluate the use of similarity metrics and clustering algorithms in the classification of variants and in the similarity analysis between families. An Activity Flow for Malware Clustering, with five distinct phases, was then proposed to help define the parameters of clustering techniques, including similarity metrics, the type of clustering algorithm to be used, and feature selection. As a proof of concept, the Androidgyny framework was proposed for sample analysis, feature extraction, and classification of variants based on medoids (the representative central elements of each group) and on features exclusive to known families. To validate Androidgyny, two experiments were carried out: a comparison with the related tool Gefdroid, and another using samples from the 25 most populous families of the Androzoo dataset.
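
    A purely illustrative sketch (not the Androidgyny implementation) of the medoid idea described above: each known family is summarised by its medoid, the member with the smallest total distance to the other members, and an unknown sample is assigned to the family of its nearest medoid. The binary feature vectors and the Jaccard distance below are assumptions made for the example.

```python
# Hypothetical medoid-based variant classification; the feature vectors and the
# Jaccard distance are illustrative assumptions, not the thesis's actual features.
import numpy as np
from scipy.spatial.distance import jaccard


def family_medoid(samples: np.ndarray) -> np.ndarray:
    """Return the member with the minimal summed distance to all other members."""
    dists = np.array([[jaccard(a, b) for b in samples] for a in samples])
    return samples[dists.sum(axis=1).argmin()]


def classify_variant(sample: np.ndarray, medoids: dict) -> str:
    """Assign a sample to the family whose medoid is closest."""
    return min(medoids, key=lambda family: jaccard(sample, medoids[family]))


# Toy usage: binary feature vectors (e.g. requested permissions, API calls).
families = {
    "familyA": np.array([[1, 1, 0, 0], [1, 1, 1, 0]]),
    "familyB": np.array([[0, 0, 1, 1], [0, 1, 1, 1]]),
}
medoids = {name: family_medoid(vectors) for name, vectors in families.items()}
print(classify_variant(np.array([1, 1, 0, 1]), medoids))  # -> "familyA"
```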

    Language variation, automatic speech recognition and algorithmic bias

    In this thesis, I situate the impacts of automatic speech recognition systems in relation to sociolinguistic theory (in particular drawing on concepts of language variation, language ideology and language policy) and contemporary debates in AI ethics (especially regarding algorithmic bias and fairness). In recent years, automatic speech recognition systems, alongside other language technologies, have been adopted by a growing number of users and have been embedded in an increasing number of algorithmic systems. This expansion into new application domains and language varieties can be understood as an expansion into new sociolinguistic contexts. In this thesis, I am interested in how automatic speech recognition tools interact with this sociolinguistic context, and how they affect speakers, speech communities and their language varieties. Focussing on commercial automatic speech recognition systems for British Englishes, I first explore the extent and consequences of performance differences of these systems for different user groups depending on their linguistic background. When situating this predictive bias within the wider sociolinguistic context, it becomes apparent that these systems reproduce and potentially entrench existing linguistic discrimination and could therefore cause direct and indirect harms to already marginalised speaker groups. To understand the benefits and potentials of automatic transcription tools, I highlight two case studies: transcribing sociolinguistic data in English and transcribing personal voice messages in isiXhosa. The central role of the sociolinguistic context in developing these tools is emphasised in this comparison. Design choices, such as the choice of training data, are particularly consequential because they interact with existing processes of language standardisation. To better understand the impacts of these choices, and the role of the developers making them, I draw on theory from language policy research and critical data studies. These conceptual frameworks are intended to help practitioners and researchers in anticipating and mitigating predictive bias and other potential harms of speech technologies. Beyond looking at individual choices, I also investigate the discourses about language variation and linguistic diversity deployed in the context of language technologies. These discourses, put forward by researchers, developers and commercial providers, not only have a direct effect on the wider sociolinguistic context, but also highlight how this context (e.g., existing beliefs about language(s)) affects technology development. Finally, I explore ways of building better automatic speech recognition tools, focussing in particular on well-documented, naturalistic and diverse benchmark datasets. However, inclusive datasets are not necessarily a panacea, as they still raise important questions about the nature of linguistic data and language variation (especially in relation to identity), and may not mitigate or prevent all potential harms of automatic speech recognition systems as embedded in larger algorithmic systems and sociolinguistic contexts.
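
    To make the notion of predictive bias concrete, recognition accuracy can be compared across speaker groups. The sketch below is not taken from the thesis: it simply computes a word error rate per group with the jiwer library over an assumed list of (group, reference transcript, ASR hypothesis) triples; systematically higher error rates for some groups would indicate the kind of disparity discussed above.

```python
# Hypothetical per-group word error rate (WER) comparison; the group labels and
# data layout are assumptions for illustration only.
from collections import defaultdict

import jiwer


def wer_by_group(samples):
    """samples: iterable of (group_label, reference_transcript, asr_hypothesis)."""
    refs, hyps = defaultdict(list), defaultdict(list)
    for group, reference, hypothesis in samples:
        refs[group].append(reference)
        hyps[group].append(hypothesis)
    # Aggregate WER per group; large gaps between groups point to predictive bias.
    return {group: jiwer.wer(refs[group], hyps[group]) for group in refs}


# Toy usage with invented transcripts:
print(wer_by_group([
    ("Group A", "the cat sat on the mat", "the cat sat on the mat"),
    ("Group B", "the cat sat on the mat", "the cat sat on a mat"),
]))
```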

    Big data analytics in cardiovascular sciences

    Introduction: It has been challenging for researchers to access granular electronic health record (EHR) data at scale in England. The National Institute for Health Research (NIHR) Health Informatics Collaborative (HIC) enables the sharing of routine EHR data across NHS hospitals for research. One emerging prospect is to use big data to traverse the translational spectrum. As an example of an early discovery phase study, I assessed the effect of invasive versus non-invasive management on the survival of patients aged 80 years or older with non-ST elevation myocardial infarction (NSTEMI) (SENIOR-NSTEMI Study). As an example of a later implementation phase study, I determined the relationship between the full spectrum of troponin level and mortality in patients in whom troponin testing was performed for clinical purposes (TROP-RISK Study).
    Methods: Five NHS Trusts contributed data: Imperial, University College London, Oxford, King’s and Guy’s and St Thomas’. Microsoft SQL was used to develop a dataset of 257,948 consecutive patients who had a troponin measured between 2010 and 2017. Phenotypically detailed data were extracted, including patient demographics, blood tests, procedural data, and survival status. All studies conducted were retrospective cohort studies. For the SENIOR-NSTEMI Study, eligible patients were aged 80 years or older and diagnosed with NSTEMI. Mortality hazard ratios were estimated comparing invasive with non-invasive management. For the TROP-RISK Study, the relation between peak troponin level and all-cause mortality was modelled using multivariable adjusted restricted cubic spline Cox regression analyses.
    Results: For the SENIOR-NSTEMI Study, 2672 patients with NSTEMI were included, with a median age of 85 (interquartile range (IQR) 82-89) years, of whom 59.8% received non-invasive management. During a median follow-up of 2.7 (IQR 1.0-4.5) years, the adjusted cumulative five-year mortality was 40% in the invasive and 63% in the non-invasive group (hazard ratio 0.52, 95% confidence interval 0.43-0.62). For the TROP-RISK Study, during a median follow-up of 1198 days (IQR 514-1866 days), 55,850 (21.7%) deaths occurred. There was an unexpected inverted U-shaped relation between troponin level and mortality in acute coronary syndrome (ACS) patients (n=120,049). The paradoxical decline in mortality at very high troponin levels may be driven in part by the changing case mix as troponin levels increase; a higher proportion of patients with very high troponin levels received invasive management.
    Conclusion: Routinely collected EHR data can be aggregated across multiple sites to create highly granular datasets for research, which can be used to answer research questions that cross the translational spectrum. The SENIOR-NSTEMI Study demonstrates a survival advantage of invasive compared with non-invasive management in NSTEMI patients aged 80 years or older, who were underrepresented in previous trials. In the TROP-RISK Study, the inverted U-shaped relationship between troponin level and mortality in ACS patients demonstrates that assembling sufficiently large datasets can cast light on patterns of disease that are impossible to adequately define in single-centre studies.
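
    For illustration only (this is not the study's analysis code), a restricted cubic spline Cox model of the kind described above could be set up roughly as follows with patsy and lifelines, assuming a dataframe with hypothetical columns follow_up_days, died, peak_troponin, age and sex (numerically encoded).

```python
# Hypothetical restricted-cubic-spline Cox regression; the column names, the
# number of spline degrees of freedom and the adjustment covariates are assumptions.
import pandas as pd
from lifelines import CoxPHFitter
from patsy import dmatrix


def fit_spline_cox(df: pd.DataFrame) -> CoxPHFitter:
    # Natural cubic regression spline basis for peak troponin (4 df, no intercept).
    spline = dmatrix("cr(peak_troponin, df=4) - 1", df, return_type="dataframe")
    spline.columns = [f"trop_spline_{i}" for i in range(spline.shape[1])]

    # Combine the spline terms with the adjustment covariates.
    model_df = pd.concat([df[["follow_up_days", "died", "age", "sex"]], spline], axis=1)

    cph = CoxPHFitter()
    cph.fit(model_df, duration_col="follow_up_days", event_col="died")
    return cph
```

    Evaluating the fitted model over a grid of troponin values would then show whether the estimated hazard rises monotonically or, as reported for the TROP-RISK cohort, follows an inverted U shape.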

    Web Information System for Inventory Control at the Company Chemical Color Nicaragua S.A.

    This study is a proposal in support of the development of Chemical Color Nicaragua S.A., intended to serve as a valuable decision-making tool tailored to the particular requirements of the company and the demands of its environment. It does so through a web information system that integrates all of the company's information for proper systematization and use, providing substantial benefits aimed at the company's exponential growth.

    Challenges and perspectives of hate speech research

    This book is the result of a conference that could not take place. It is a collection of 26 texts that address and discuss the latest developments in international hate speech research from a wide range of disciplinary perspectives. This includes case studies from Brazil, Lebanon, Poland, Nigeria, and India, theoretical introductions to the concepts of hate speech, dangerous speech, incivility, toxicity, extreme speech, and dark participation, as well as reflections on methodological challenges such as scraping, annotation, datafication, implicitness, explainability, and machine learning. As such, it provides a much-needed forum for cross-national and cross-disciplinary conversations in what is currently a very vibrant field of research.