
    Specialized translation at work for a small expanding company: my experience with the Chinese internationalization of Bioretics© S.r.l.

    Global markets are currently immersed in two all-encompassing and unstoppable processes: internationalization and globalization. While the former pushes companies to look beyond the borders of their country of origin and forge relationships with foreign trading partners, the latter fosters standardization across countries by reducing spatiotemporal distances and breaking down geographical, political, economic and socio-cultural barriers. In recent decades, another domain has emerged to propel these unifying drives: Artificial Intelligence, together with the advanced technologies that aim to reproduce human cognitive abilities in machines. The “Language Toolkit – Le lingue straniere al servizio dell’internazionalizzazione dell’impresa” project, promoted by the Department of Interpreting and Translation (Forlì Campus) in collaboration with the Romagna Chamber of Commerce (Forlì-Cesena and Rimini), seeks to help Italian SMEs make their way into the global market. It is precisely within this project that this dissertation was conceived. Its purpose is to present the translation and localization project, from English into Chinese, of a series of texts produced by Bioretics© S.r.l.: an investor deck, the company website and part of the installation and use manual of the Aliquis© framework software, its flagship product. This dissertation is structured as follows: Chapter 1 presents the project and the company in detail; Chapter 2 outlines the internationalization and globalization processes and the Artificial Intelligence market in both Italy and China; Chapter 3 provides the theoretical foundations for every aspect of specialized translation, including website localization; Chapter 4 describes the resources and tools used to perform the translations; Chapter 5 proposes an analysis of the source texts; Chapter 6 is a commentary on translation strategies and choices.

    Cognitive Machine Individualism in a Symbiotic Cybersecurity Policy Framework for the Preservation of Internet of Things Integrity: A Quantitative Study

    This quantitative study examined the complex nature of modern cyber threats to propose the establishment of cyber as an interdisciplinary field of public policy, initiated through the creation of a symbiotic cybersecurity policy framework. For the public good (and to maintain ideological balance), there must be recognition that public policies are at a transition point where the digital public square is a tangible reality that is more than a collection of technological widgets. The academic contribution of this research project is the fusion of humanistic principles with Internet of Things (IoT) technologies, altering our perception of the machine from an instrument of human engineering into a thinking peer and elevating cyber from technical esoterism into an interdisciplinary field of public policy. The contribution to the US national cybersecurity policy body of knowledge is a unified policy framework (manifested in the symbiotic cybersecurity policy triad) that could transform cybersecurity policies from network-based to entity-based. A correlational archival data design was used, with the frequency of malicious software attacks as the dependent variable and the diversity of intrusion techniques as the independent variable for RQ1. For RQ2, the frequency of detection events was the dependent variable and the diversity of intrusion techniques was the independent variable. Self-Determination Theory serves as the theoretical framework, as the cognitive machine can recognize, self-endorse, and maintain its own identity based on a sense of self-motivation that is progressively shaped by the machine’s ability to learn. The transformation of cyber policies from technical esoterism into an interdisciplinary field of public policy starts with the recognition that the cognitive machine is an independent consumer of, advisor into, and entity influenced by public policy theories, philosophical constructs, and societal initiatives.
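
    As a toy illustration of the correlational archival design described above, the Python sketch below computes Pearson's r between the RQ1 variables; the figures are invented placeholders, not data from the study.

        from statistics import correlation  # Pearson's r (Python 3.10+)

        # Hypothetical archival observations, one per reporting period.
        technique_diversity = [3, 5, 8, 12, 15, 20]   # IV: distinct intrusion techniques
        attack_frequency = [14, 22, 35, 48, 61, 90]   # DV: malicious software attacks

        r = correlation(technique_diversity, attack_frequency)
        print(f"RQ1 Pearson r = {r:.3f}")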

    Concurrency Controls in Event-Driven Programs

    Functional reactive programming (FRP) is a programming paradigm that uses the concepts of functional programming and time-varying data types to create event-driven applications. In this paradigm, data types whose values can change over time are primitives and can be passed to functions. These values are composable and can be combined with functions to create values that react to changes in values from multiple sources. Events can be modeled as values that change in discrete time steps, and computation can be encoded as values that produce events; with combination operators, this lets us write concurrent event-driven programs by combining concurrent computations as events. Combined with the denotational approach of functional programming, we can write such programs concisely. The event-driven style has been widely adopted for developing graphical user interface applications, since they need to process events concurrently to stay responsive. This makes FRP a fitting approach for managing complex state and handling events concurrently. In recent years, real-time systems such as IoT (Internet of Things) applications have become an important field of computation, and applying FRP to real-time systems is still an active area of research. IoT applications are commonly tasked with capturing data in real time and transmitting it to other devices. They need to exchange data with other applications over the internet and respond in a timely manner. The data needs to be processed, whether for simple analysis or for more computation-intensive work such as machine learning. Designing applications that perform these tasks while remaining efficient and responsive can be challenging. In this thesis, we demonstrate that FRP is a suitable approach for real-time applications with soft real-time requirements, where the system can tolerate tasks that miss their deadlines and the results of such tasks may still be useful. First, we design the concurrency abstractions needed to support asynchronous computation and use them as the basis for building the FRP abstraction. Our implementation is in Haskell, a functional programming language with a rich type system that allows us to model abstractions with ease. The concurrency abstraction is based on ideas from the Haskell solution for asynchronous computation, which elegantly supports cancellation in a composable way. Building on the Haskell implementation, we extend our design with operators that are better suited to building web applications. We translate our implementation to JavaScript, as it is more commonly used for web application development, implementing the RxJS interface. RxJS is a popular JavaScript library for reactive programming in web applications. By implementing the RxJS interface, we argue that our programming model implemented in Haskell is also applicable in mainstream languages such as JavaScript.
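
    To make the paradigm concrete, here is a minimal Python sketch of the core FRP idea: events as first-class, composable values. The names (Event, merge) are illustrative inventions, not the thesis' actual Haskell or RxJS API.

        from typing import Callable, Generic, TypeVar

        A = TypeVar("A")
        B = TypeVar("B")

        class Event(Generic[A]):
            """A stream of discrete occurrences; subscribers react to each one."""
            def __init__(self) -> None:
                self._subscribers: list[Callable[[A], None]] = []

            def subscribe(self, f: Callable[[A], None]) -> None:
                self._subscribers.append(f)

            def emit(self, value: A) -> None:
                for f in self._subscribers:
                    f(value)

            def map(self, f: Callable[[A], B]) -> "Event[B]":
                out: "Event[B]" = Event()
                self.subscribe(lambda x: out.emit(f(x)))
                return out

        def merge(left: "Event[A]", right: "Event[A]") -> "Event[A]":
            """Combine occurrences from two sources into one stream."""
            out: "Event[A]" = Event()
            left.subscribe(out.emit)
            right.subscribe(out.emit)
            return out

        # Values that react to changes from multiple sources:
        clicks: "Event[int]" = Event()
        keys: "Event[int]" = Event()
        merge(clicks, keys).map(lambda n: n * 2).subscribe(print)
        clicks.emit(1)  # prints 2
        keys.emit(3)    # prints 6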

    Copyright as a constraint on creating technological value

    Defence date: 8 January 2019. Examining Board: Giovanni Sartor, EUI; Peter Drahos, EUI; Jane C. Ginsburg, Columbia Law School; Raquel Xalabarder, Universitat Oberta de Catalunya. How do we legislate for the unknown? This work tackles the question from the perspective of copyright, analysing the judicial practice emerging from case law on new uses of intellectual property resulting from technological change. Starting off by comparing the outcomes of actual innovation-related cases decided in jurisdictions with and without the fair use defence available, it delves deeper into the pathways of judicial reasoning and doctrinal debate arising in the two copyright realities, describing the dark sides of legal flexibility, the attempts to ‘bring order into chaos’ on one side and, on the other, the effort of judges actively looking for ways not to close the door on valuable innovation where inflexible legislation was about to become an impassable choke point. The analysis then moves away from the high-budget, large-scale innovation projects financed by the giants of the Internet era. Instead, building upon the findings of Yochai Benkler on the subject of networked creativity, it brings forth a type of innovation that brings together networked individuals, sharing and building upon each other’s results instead of competing, while often working from non-economic motivations. It is seemingly this same type of innovation, deeply rooted in the so-called ‘nerd culture’, that powered the early years of the digital revolution. As this culture was put on trial when Oracle famously sued Google over the reuse of Java in the Android mobile operating system, the commentary emerging from the surrounding debate allowed more general conclusions to be drawn about what powers digital evolution in a networked environment. Lastly, examining current trends in European case law, the analysis concludes by offering a rationale as to why a transformative use exception would allow courts to openly engage in the types of reasoning that seem to have become a necessity in cases on the fringes of copyright.

    Remote file access over low-speed lines

    A link between a microcomputer and a mainframe can be useful in several ways, even when, as is usually the case, the link is only a normal terminal line. One interesting example is the ‘integrated application’, which divides a task between the microcomputer and the mainframe and can offer several benefits; in particular, it reduces load on the mainframe and permits a more advanced user interface than is possible on a conventional terminal. Because integrated applications consist of two co-operating programs, they are much more difficult to construct than a single program. It would be much easier to implement integrated applications concerned with the display and/or modification of data in mainframe files if the microcomputer could confine its dealings with the mainframe to a suitable file server. However, file servers do not appear practical for use over terminal lines, which are slow compared to disc access speeds. It was proposed to alleviate the problems caused by the slow link with extended file operations, which would allow time-consuming file operations such as searching, or copying between files, to be carried out in the file server. After building such a system, it was discovered that extended file operations are not, by themselves, sufficient; but, allied to a record-based file model and asynchronous operations (i.e. file operations that do not suspend the user program until they complete), useful results could be obtained. This thesis describes FLAP, a file server for use over terminal lines which incorporates these ideas, and MMMS, an inter-application transport protocol used by FLAP for communication between the microcomputer file interface and the mainframe server. Two simple FLAP applications are presented: a customer records maintenance program and a screen editor. Details are given of their construction and their response times in use at various line speeds.
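
    The Python sketch below illustrates, under invented names (FileServer, search_async), the two ideas FLAP combines: an extended operation that searches records inside the server so only matches cross the slow line, issued asynchronously so the user program is not suspended while it runs.

        from concurrent.futures import Future, ThreadPoolExecutor

        class FileServer:
            """Stand-in for a mainframe file server reached over a slow terminal line."""
            def __init__(self, files: "dict[str, list[str]]") -> None:
                self._files = files          # file name -> list of records
                self._pool = ThreadPoolExecutor(max_workers=2)

            def _search(self, name: str, needle: str) -> "list[str]":
                # Runs server-side: only matching records cross the slow link.
                return [r for r in self._files[name] if needle in r]

            def search_async(self, name: str, needle: str) -> "Future[list[str]]":
                # Returns immediately; the caller is free to keep working.
                return self._pool.submit(self._search, name, needle)

        server = FileServer({"customers": ["Ada 01", "Bob 02", "Ada 03"]})
        pending = server.search_async("customers", "Ada")
        # ... the microcomputer could update its display here ...
        print(pending.result())  # ['Ada 01', 'Ada 03']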

    Test Flakiness Prediction Techniques for Evolving Software Systems


    Digital Twins of production systems - Automated validation and update of material flow simulation models with real data

    To remain economical and sustainable, production systems must be operated at high productivity over long periods of time. This poses major challenges for manufacturing companies, particularly in times of increased volatility triggered, for example, by technological upheavals in mobility and by political and societal change, because the requirements placed on the production system are constantly shifting. The frequency of necessary adaptation decisions and subsequent optimization measures is rising, and with it the need for ways to evaluate scenarios and possible system configurations. A powerful tool for this purpose is material flow simulation, whose use is currently limited by its labor-intensive manual creation and its time-limited, project-based application. Longer-term use across the system's life cycle is at present hindered by the labor-intensive maintenance of the simulation model, i.e. the manual adaptation of the model whenever the real system changes. The goal of this thesis is to develop and implement a concept, including the required methods, for automating the maintenance and adaptation of the simulation model to reality. To this end, the real data that is increasingly available thanks to trends such as Industrie 4.0 and digitalization in general is exploited. The vision pursued in this work is a Digital Twin of the production system which, fed by this data, constitutes a realistic image of the system at every point in time and can be used for the realistic evaluation of scenarios. For this purpose, the required overall concept was designed and the mechanisms for automatic validation and updating of the model were developed. The focus lay, among other things, on the development of algorithms for detecting changes in the structure and processes of the production system, and on investigating the influence of the available data. The developed components were successfully deployed in a real use case at Robert Bosch GmbH and increased the fidelity of the Digital Twin, which was successfully used for production planning and optimization. The potential of localization data for building Digital Twins of production systems was demonstrated in the test environment of the learning factory of the wbk Institut für Produktionstechnik.
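
    A minimal Python sketch of the automatic validation-and-update loop described above; the station names, cycle times and 5% tolerance are illustrative assumptions, not data from the Bosch use case.

        from statistics import mean

        real_cycle_times = {                 # seconds, from (hypothetical) shop-floor data
            "press": [42.1, 41.8, 43.0],
            "weld": [55.2, 58.9, 57.5],
        }
        model_cycle_times = {"press": 42.0, "weld": 50.0}  # current simulation parameters
        TOLERANCE = 0.05                     # 5% relative deviation triggers an update

        for station, samples in real_cycle_times.items():
            observed = mean(samples)
            modelled = model_cycle_times[station]
            if abs(observed - modelled) / modelled > TOLERANCE:
                print(f"{station}: drift detected, updating {modelled} -> {observed:.1f}")
                model_cycle_times[station] = observed   # automatic model update
            else:
                print(f"{station}: model still valid")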

    Developing resilient cyber-physical systems: A review of state-of-the-art malware detection approaches, gaps, and future directions

    Cyber-physical systems (CPSes) are rapidly evolving in critical infrastructure (CI) domains such as the smart grid, healthcare, the military, and telecommunications. These systems are continually threatened by malicious software (malware) attacks from adversaries whose tactics and attack methods are constantly evolving. A minor configuration change in a CPS introduced through malware can have devastating effects, as the world has seen with Stuxnet, BlackEnergy, Industroyer, and Triton. This paper is a comprehensive review of the malware analysis practices currently used to secure CPSes, together with their limitations and efficacy. Using well-known real-world incidents, we cover the significant impacts of a compromised CPS. In particular, we present exhaustive hypothetical scenarios to discuss the implications of false positives for CPSes. To improve the security of critical systems, we believe that nature-inspired metaheuristic algorithms can effectively counter the overwhelming malware threats geared toward CPSes. However, our detailed review shows that these algorithms have not yet been adapted to their full potential to counter malicious software. Finally, the gaps identified through this research lead us to propose future research directions using nature-inspired algorithms that would bring optimization by reducing false positives, thereby increasing the security of such systems.
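
    As a hint of the proposed direction, the Python sketch below uses a tiny evolutionary search, a simple stand-in for the nature-inspired metaheuristics the paper surveys, to tune a detector threshold while penalizing false positives more heavily; all scores are invented.

        import random

        benign_scores = [0.1, 0.2, 0.35, 0.4, 0.15]   # hypothetical detector outputs
        malware_scores = [0.7, 0.8, 0.65, 0.9, 0.85]

        def fitness(threshold: float) -> float:
            false_pos = sum(s >= threshold for s in benign_scores)
            false_neg = sum(s < threshold for s in malware_scores)
            return -(2 * false_pos + false_neg)       # false positives cost double

        population = [random.random() for _ in range(20)]
        for _ in range(50):                           # generations
            population.sort(key=fitness, reverse=True)
            parents = population[:10]                 # survival of the fittest
            children = [min(1.0, max(0.0, random.choice(parents) + random.gauss(0, 0.05)))
                        for _ in range(10)]           # mutated offspring
            population = parents + children

        print(f"best threshold: {max(population, key=fitness):.2f}")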

    Deep language models for software testing and optimisation

    Developing software is difficult. A challenging part of production development is ensuring that programs are correct and fast, two properties addressed by software testing and optimisation. While both tasks still rely on manual effort and expertise, the recent surge in software applications has made them tedious and time-consuming. In this fast-paced environment, manual testing and optimisation hinder productivity significantly and lead to error-prone or sub-optimal programs that waste energy and frustrate users. In this thesis, we propose three novel approaches to automating software testing and optimisation with modern language models based on deep learning. In contrast to our methods, the few existing techniques in these two domains have limited scalability and struggle when they face real-world applications. Our first contribution lies in the field of software testing and aims to automate the test oracle problem, i.e. the procedure of determining the correctness of test executions. The test oracle is still largely manual, relying on human experts. Automating the oracle is a non-trivial task that requires software specifications or derived information that are often too difficult to extract. We present the first application of deep language models over program execution traces to predict runtime correctness. Our technique classifies test executions of large-scale codebases used in production as “pass” or “fail”. Our proposed approach reduces by 86% the number of test inputs an expert has to label, by training on only 14% and classifying the rest automatically. Our next two contributions improve the effectiveness of compiler optimisation. Compilers optimise programs by applying heuristic-based transformations constructed by compiler engineers. Selecting the right transformations requires extensive knowledge of the compiler, the subject program and the target architecture. Predictive models have been used successfully to automate heuristic construction, but their performance is hindered by a shortage of training benchmarks in both quantity and feature diversity. Our next contributions address this scarcity of compiler benchmarks by generating human-like synthetic programs to improve the performance of predictive models. Our second contribution is BENCHPRESS, the first steerable deep learning synthesizer for executable compiler benchmarks. BENCHPRESS produces human-like programs that compile at a rate of 87%. It targets parts of the feature space previously unreachable by other synthesizers, addressing the scarcity of high-quality training data for compilers. BENCHPRESS improves the performance of a device mapping predictive model by 50% when its synthetic benchmarks are introduced into the training data. BENCHPRESS is restricted by a feature-agnostic synthesizer that requires thousands of random inferences to select a few samples that target the desired features. Our third contribution addresses this inefficiency. We develop BENCHDIRECT, a directed language model for compiler benchmark generation. BENCHDIRECT synthesizes programs by jointly observing the source code context and the compiler features that are targeted, enabling efficient steerable generation on large-scale tasks. Compared to BENCHPRESS, BENCHDIRECT successfully matches 1.8× more Rodinia target benchmarks, while being up to 36% more accurate and up to 72% faster in targeting three different feature spaces for compilers. All three contributions demonstrate the exciting potential of deep learning and language models to simplify the testing of programs and the construction of better optimisation heuristics for compilers. The outcomes of this thesis provide developers with tools to keep up with the rapidly evolving landscape of software engineering.
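
    To illustrate the oracle-automation pipeline, here is a toy Python sketch: a handful of expert-labelled execution traces train a classifier that then labels the rest. A bag-of-tokens perceptron stands in for the deep language model, and all traces are invented.

        from collections import Counter

        labelled = [
            ("open read close exit0", 1),             # pass
            ("open read error retry exit1", 0),       # fail
            ("open write close exit0", 1),
            ("open write timeout exit1", 0),
        ]
        weights: Counter = Counter()

        for _ in range(10):                           # perceptron training epochs
            for trace, label in labelled:
                feats = Counter(trace.split())
                score = sum(weights[t] * c for t, c in feats.items())
                pred = 1 if score >= 0 else 0
                if pred != label:                     # mistake-driven update
                    for t, c in feats.items():
                        weights[t] += (label - pred) * c

        unlabelled = "open read retry error exit1"
        score = sum(weights[t] for t in unlabelled.split())
        print("pass" if score >= 0 else "fail")      # -> fail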

    Development of a Software Module for Working with PDF Files Using Qt Framework

    This thesis is devoted to the development of a software module for working with PDF files. In the course of the work, the main concepts of developing software modules with C++ and the Qt framework are considered, together with the advantages and disadvantages of this language. General information about existing solutions for working with PDF files is reviewed, the module development process is described, and testing is carried out. In addition, the explanatory note describes the supported operating systems and the module's configuration for them. The result of the work can be used by any user who is familiar with programming. Contents: Introduction; 1 Justification of the relevance of the development (1.1 Description of the informatization object; 1.2 Analysis of existing solutions; 1.3 Formulation of the problem); 2 Design components: requirements analysis and software selection (2.1 Description of the subject area; 2.2 Algorithm functioning components; 2.3 Interface design; 2.4 Justification of technologies and implementation means); 3 Testing and development of the component (3.1 Implementation of the user interface; 3.2 Description and implementation of modules; 3.3 Testing components); 4 Life safety, basics of labor protection (4.1 Effects of electromagnetic radiation on the human body; 4.2 Types of hazards; 4.3 Conclusions); Conclusions; References; Appendices.