733 research outputs found

    On the Learnability of Programming Language Semantics

    Get PDF
    Game semantics is a powerful method of semantic analysis for programming languages. It gives mathematically accurate models ("fully abstract") for a wide variety of programming languages. Game-semantic models are combinatorial characterisations of all possible interactions between a term and its syntactic context. Because such interactions can be concretely represented as sets of sequences, it is possible to ask whether they can be learned from examples. Concretely, we use long short-term memory neural nets (LSTMs), a technique which has proved effective in learning natural languages for automatic translation and text synthesis, to learn game-semantic models of sequential and concurrent versions of Idealised Algol (IA), which are algorithmically complex yet can be concisely described. We measure how accurate the learned models are as a function of the degree of the term and the number of free variables involved. Finally, we show how to use the learned model to perform latent semantic analysis between concurrent and sequential Idealised Algol.
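
    Illustrative sketch (not from the paper): the abstract treats game-semantic plays as sequences of moves that an LSTM learns to continue. A minimal next-move LSTM in PyTorch might look as follows; the move alphabet, example plays, and hyperparameters are hypothetical placeholders, not the authors' actual setup.

        # Toy next-move LSTM over game-semantic "plays" encoded as move tokens.
        # The move alphabet and the example traces are invented for illustration.
        import torch
        import torch.nn as nn

        moves = ["q", "run", "done", "a0", "a1"]          # hypothetical move alphabet
        stoi = {m: i for i, m in enumerate(moves)}

        # Hypothetical interaction sequences observed for some IA term.
        plays = [["q", "run", "done", "a0"], ["q", "run", "done", "a1"]]

        class PlayModel(nn.Module):
            def __init__(self, vocab, dim=32):
                super().__init__()
                self.emb = nn.Embedding(vocab, dim)
                self.lstm = nn.LSTM(dim, dim, batch_first=True)
                self.out = nn.Linear(dim, vocab)

            def forward(self, x):                         # x: (batch, time) move ids
                h, _ = self.lstm(self.emb(x))
                return self.out(h)                        # next-move logits per step

        model = PlayModel(len(moves))
        opt = torch.optim.Adam(model.parameters(), lr=1e-2)
        loss_fn = nn.CrossEntropyLoss()

        for _ in range(200):                              # tiny training loop
            for play in plays:
                ids = torch.tensor([[stoi[m] for m in play]])
                logits = model(ids[:, :-1])               # predict move t+1 from prefix
                loss = loss_fn(logits.reshape(-1, len(moves)), ids[:, 1:].reshape(-1))
                opt.zero_grad(); loss.backward(); opt.step()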

    Deep Learning for Audio Signal Processing

    Full text link
    Given the recent surge in developments of deep learning, this article provides a review of the state-of-the-art deep learning techniques for audio signal processing. Speech, music, and environmental sound processing are considered side-by-side, in order to point out similarities and differences between the domains, highlighting general methods, problems, key references, and potential for cross-fertilization between areas. The dominant feature representations (in particular, log-mel spectra and raw waveform) and deep learning models are reviewed, including convolutional neural networks, variants of the long short-term memory architecture, as well as more audio-specific neural network models. Subsequently, prominent deep learning application areas are covered, i.e. audio recognition (automatic speech recognition, music information retrieval, environmental sound detection, localization and tracking) and synthesis and transformation (source separation, audio enhancement, generative models for speech, sound, and music synthesis). Finally, key issues and future questions regarding deep learning applied to audio signal processing are identified. Comment: 15 pages, 2 PDF figures
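
    Illustrative sketch (not from the paper): the log-mel spectrogram named above as a dominant feature representation can be computed in a few lines with librosa; the file name and parameter values below are arbitrary examples.

        # Compute a log-mel spectrogram, a common input representation for audio DNNs.
        import librosa

        y, sr = librosa.load("example.wav", sr=16000)     # hypothetical input file
        mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=512,
                                             hop_length=160, n_mels=64)
        log_mel = librosa.power_to_db(mel)                # shape: (64, n_frames)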

    Latent semantic analysis of game models using LSTMs

    Get PDF

    Design of a Controlled Language for Critical Infrastructures Protection

    Get PDF
    We describe a project for the construction of a controlled language for critical infrastructures protection (CIP). This project originates from the need to coordinate and categorize communications on CIP at the European level. These communications can be physically represented by official documents, reports on incidents, informal communications, and plain e-mail. To achieve our goal, we explore the application of traditional library science tools for the construction of controlled languages. Our starting point is an analogous work done during the sixties in the field of nuclear science, known as the Euratom Thesaurus. JRC.G.6 - Security technology assessment
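
    Illustrative sketch (not from the project): controlled languages of the kind described are typically organised as a thesaurus of preferred terms with broader/narrower/related links and non-preferred synonyms. The fragment below is hypothetical and is not taken from the Euratom Thesaurus or any actual CIP vocabulary.

        # A tiny thesaurus-style controlled vocabulary: each preferred term carries
        # broader/narrower/related links and "use-for" synonyms that normalise free text.
        thesaurus = {
            "critical infrastructure": {
                "narrower": ["power grid", "water supply"],
                "related": ["incident report"],
                "use_for": ["vital infrastructure"],      # non-preferred synonyms
            },
            "power grid": {
                "broader": ["critical infrastructure"],
                "use_for": ["electricity network"],
            },
        }

        def preferred_term(word):
            """Map a free-text term to its preferred (controlled) term, if any."""
            for term, rels in thesaurus.items():
                if word == term or word in rels.get("use_for", []):
                    return term
            return None

        print(preferred_term("electricity network"))      # -> "power grid"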

    Automated IT Service Fault Diagnosis Based on Event Correlation Techniques

    Get PDF
    In recent years, a paradigm shift in the area of IT service management could be witnessed. IT management no longer deals only with the network, end systems, or applications, but is increasingly concerned with IT services. This is caused by the need of organizations to monitor the efficiency of internal IT departments and to be able to subscribe to IT services from external providers. This trend has raised new challenges in the area of IT service management, especially with respect to service level agreements laying down the quality of service to be guaranteed by a service provider. Fault management also faces new challenges related to ensuring compliance with these service level agreements. For example, high utilization of network links in the infrastructure can imply an increased delay in the delivery of services with respect to agreed time constraints. Such relationships have to be detected and treated in a service-oriented fault diagnosis which therefore does not deal with faults in a narrow sense, but with service quality degradations. This thesis aims to provide a concept for service fault diagnosis, which is an important part of IT service fault management. First, the need for further examination of this issue is motivated, based on an analysis of services offered by a large IT service provider. A generalization of the scenario forms the basis for the specification of requirements, which are used for a review of related research work and commercial products. Even though solutions for some particular challenges have already been provided, a general approach to service fault diagnosis is still missing. To address this issue, the main part of this thesis presents a framework with an event correlation component as its central part. Event correlation techniques which have been successfully applied to fault management in the area of network and systems management are adapted and extended accordingly. Guidelines for applying the framework to a given scenario are then provided. To show their feasibility in a real-world scenario, they are applied to both example services referenced earlier.
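
    Illustrative sketch (not from the thesis): a minimal form of the rule-based event correlation described above links a resource-level event (high link utilisation) to a service-level symptom (response time exceeding an agreed threshold). The event fields, thresholds, dependency map, and service names are invented for illustration.

        # Toy event correlation: link a resource event to a service-quality degradation.
        from dataclasses import dataclass

        @dataclass
        class Event:
            source: str      # e.g. "link-42" or "web-shop"
            kind: str        # e.g. "utilisation" or "response_time"
            value: float

        dependencies = {"web-shop": ["link-42"]}          # hypothetical service-to-resource map

        def correlate(service_event, resource_events, util_threshold=0.9):
            """Return resource events that plausibly explain a service degradation."""
            causes = []
            for ev in resource_events:
                if (ev.kind == "utilisation" and ev.value > util_threshold
                        and ev.source in dependencies.get(service_event.source, [])):
                    causes.append(ev)
            return causes

        slow = Event("web-shop", "response_time", 3.5)    # SLA threshold exceeded
        candidates = [Event("link-42", "utilisation", 0.97),
                      Event("link-7", "utilisation", 0.40)]
        print(correlate(slow, candidates))                # -> [Event(source='link-42', ...)]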

    Deep Learning Techniques for Music Generation -- A Survey

    Full text link
    This paper is a survey and an analysis of different ways of using deep learning (deep artificial neural networks) to generate musical content. We propose a methodology based on five dimensions for our analysis:
    - Objective: What musical content is to be generated (e.g. melody, polyphony, accompaniment, or counterpoint)? For what destination and for what use: to be performed by humans (in the case of a musical score) or by a machine (in the case of an audio file)?
    - Representation: What are the concepts to be manipulated (e.g. waveform, spectrogram, note, chord, meter, beat)? What format is to be used (e.g. MIDI, piano roll, or text)? How will the representation be encoded (e.g. scalar, one-hot, or many-hot)?
    - Architecture: What type(s) of deep neural network are to be used (e.g. feedforward network, recurrent network, autoencoder, or generative adversarial networks)?
    - Challenge: What are the limitations and open challenges (e.g. variability, interactivity, creativity)?
    - Strategy: How do we model and control the process of generation (e.g. single-step feedforward, iterative feedforward, sampling, or input manipulation)?
    For each dimension, we conduct a comparative analysis of various models and techniques and propose a tentative multidimensional typology. This typology is bottom-up, based on the analysis of many existing deep-learning-based systems for music generation selected from the relevant literature. These systems are described and used to exemplify the various choices of objective, representation, architecture, challenge, and strategy. The last section includes some discussion and some prospects. Comment: 209 pages. This paper is a simplified version of the book: J.-P. Briot, G. Hadjeres and F.-D. Pachet, Deep Learning Techniques for Music Generation, Computational Synthesis and Creative Systems, Springer, 201
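
    Illustrative sketch (not from the survey): as a concrete instance of the representation dimension, a short melody can be encoded as a one-hot piano-roll matrix; the melody and pitch range below are arbitrary examples.

        # Encode a melody as a one-hot "piano roll": one row per time step, one column per pitch.
        import numpy as np

        melody = [60, 62, 64, 65, 64, 62, 60]     # hypothetical MIDI note numbers
        low, high = 48, 84                        # assumed pitch range of the encoding

        roll = np.zeros((len(melody), high - low), dtype=np.float32)
        for t, pitch in enumerate(melody):
            roll[t, pitch - low] = 1.0            # exactly one active pitch per step (one-hot)

        print(roll.shape)                         # (7, 36), a suitable input for an RNN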

    Space Settlements: A Design Study

    Get PDF
    Nineteen professors of engineering, physical science, social science, and architecture, three volunteers, six students, a technical director, and two co-directors worked for ten weeks to construct a convincing picture of how people might permanently sustain life in space on a large scale, and to design a system for the colonization of space. Because the idea of colonizing space has awakened strong public interest, the document presented is written to be understood by the educated public and by specialists in other fields. It also includes considerable background material. A table of units and conversion factors is included to aid the reader in interpreting the units of the metric system used in the report.

    On the Learnability of Programming Language Semantics

    Get PDF
    Game semantics is a powerful method of semantic analysis for programming languages. It gives mathematically accurate models ("fully abstract") for a wide variety of programming languages. Game-semantic models are combinatorial characterisations of all possible interactions between a term and its syntactic context. Because such interactions can be concretely represented as sets of sequences, it is possible to ask whether they can be learned from examples. Concretely, we use long short-term memory neural nets (LSTMs), a technique which has proved effective in learning natural languages for automatic translation and text synthesis, to learn game-semantic models of sequential and concurrent versions of Idealised Algol (IA), which are algorithmically complex yet can be concisely described. We measure how accurate the learned models are as a function of the degree of the term and the number of free variables involved. Finally, we show how to use the learned model to perform latent semantic analysis between concurrent and sequential Idealised Algol. Comment: In Proceedings ICE 2017, arXiv:1711.1070

    Adaptive Mid-term and Short-term Scheduling of Mixed-criticality Systems

    Get PDF
    A mixed-criticality real-time system is a real-time system with multiple tasks classified according to their criticality. Research on mixed-criticality systems started in order to provide an effective and cost-efficient a priori verification process for safety-critical systems. The higher the criticality of a task within a system, the more strongly the system should guarantee the required level of service for it. However, such a model poses new challenges with respect to scheduling and fault tolerance within real-time systems. Current mixed-criticality scheduling protocols severely degrade lower-criticality tasks in case of resource shortage in order to provide the required level of service for the most critical ones. The research challenge in this field is to devise robust scheduling protocols that minimise the impact on less critical tasks. This dissertation introduces two approaches, one short-term and the other medium-term, to appropriately allocate computing resources to tasks within mixed-criticality systems on both uniprocessor and multiprocessor systems. The short-term strategy consists of a protocol named Lazy Bailout Protocol (LBP) to schedule mixed-criticality task sets on single-core architectures. Scheduling decisions are made about tasks that are active in the ready queue and have to be dispatched to the CPU. LBP minimises the service degradation for lower-criticality tasks by giving them background execution during system idle time. I then refined LBP with variants that aim to further increase the service level provided for lower-criticality tasks; however, this is achieved at the cost of either additional offline analysis or increased complexity at runtime. The second approach, named Adaptive Tolerance-based Mixed-criticality Protocol (ATMP), decides at runtime which tasks are allocated to the active cores according to the available resources. ATMP optimises the overall system utility by tuning the system workload in case of a shortage of computing capacity at runtime. Unlike the majority of current mixed-criticality approaches, ATMP also allows higher-criticality tasks to be degraded smoothly in order to keep lower-criticality ones allocated.
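
    Illustrative sketch (not the actual LBP or ATMP): a much-simplified dispatcher capturing the idea described above, where higher-criticality jobs are served first and lower-criticality jobs only receive background execution when no higher-criticality job is ready. The task attributes and names are invented.

        # Toy dispatcher: HI-criticality jobs run first; LO-criticality jobs only get
        # background execution when no HI job is ready (a simplification of LBP's idea).
        import heapq
        from dataclasses import dataclass, field

        @dataclass(order=True)
        class Job:
            deadline: int                               # earlier deadline = higher priority
            name: str = field(compare=False)
            criticality: str = field(compare=False)     # "HI" or "LO"

        def dispatch(jobs):
            hi = [j for j in jobs if j.criticality == "HI"]
            lo = [j for j in jobs if j.criticality == "LO"]
            heapq.heapify(hi)
            heapq.heapify(lo)
            order = []
            while hi or lo:
                queue = hi if hi else lo                # LO jobs only in "idle" time
                order.append(heapq.heappop(queue).name)
            return order

        jobs = [Job(10, "brake-control", "HI"), Job(5, "logging", "LO"), Job(7, "sensor-fusion", "HI")]
        print(dispatch(jobs))    # -> ['sensor-fusion', 'brake-control', 'logging']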

    Translators in the Making: An Empirical Longitudinal Study on Translation Competence and its Development

    Get PDF
    2013/2014
    ABSTRACT. In the last few decades, research on translation competence (TC) has been quite productive and has fostered the conceptualisation and analysis of translation-specific skills. TC is generally assumed to be a non-innate ability (Shreve 1997, 121), which is “qualitatively different from bilingual competence” (PACTE 2002, 44–45) and, as a “basic translation ability[,] is a necessary condition, but no guarantee, for further development of a (professional) competence as a translator” (Englund Dimitrova 2005, 12). However, apart from these agreed-on assumptions, the definition and modelling of TC still remain open questions and have resulted in a wide variety of concurrent (near-synonymic) terms and conceptual frameworks aiming to identify the essential constitutive components of such competence. From the mid-1980s, empirical studies have contributed considerably to the investigation of TC and, in some cases, led to the development of empirically validated definitions and models (e.g. PACTE 2003; Göpferich 2009). However, most empirical analyses focus on the translation process, i.e. the behavioural and procedural features of (un)experienced translators, and aim to identify possible patterns which might be conducive to high (or poor) translation quality. To provide a complementary perspective to this approach, an empirical longitudinal study was designed which is mainly product-oriented but also encompasses process-related data. The aim of the study is to observe whether different levels of competence are reflected in different linguistic patterns and common procedural practices, which might be used to define TC and the stages of its development. The study monitored the performance of a sample of professional translators and BA- and MA-level translation trainees, who carried out six translation tasks over a three-year period. Each translation task involved the translation of a non-specialist English source text into the participants’ L1 (i.e. Italian) as well as the compilation of a post-task questionnaire inquiring into their translation processes. The synchronic and diachronic analysis of the data mainly adopted a descriptive perspective which considered both product-related data, i.e. mainly lexical and syntactic features, and process-related data concerning delivery time and the participants’ responses to the post-task questionnaires. Moreover, the assessment of translation acceptability and errors allowed specific descriptive trends to be associated with the different levels of translation quality identified. The findings led to the profiling of three different stages in the acquisition of TC (i.e. novice, intermediate, and professional translator) and to the development of training guidelines, for both translation trainers and trainees, which may help anticipate and prevent possible unsuccessful behaviours and speed up the learning process.
    RIASSUNTO. Since the second half of the twentieth century, research on translation competence has developed considerably, leading to the identification of skills specific to translation. Translation competence is generally conceived as a non-innate ability (Shreve 1997, 121) that is distinct from bilingual competence (PACTE 2002, 44–45); the latter, in its embryonic form, remains a necessary but not sufficient condition for the development of professional translation competence (Englund Dimitrova 2005, 12). These premises aside, the nature and structure of translation competence still remain to be defined. In the attempt to identify its components, research has produced a wide variety of similar and often overlapping terms and concepts. Since the mid-1980s, empirical research has made a significant contribution to the study of translation competence, making it possible to develop and test some of the proposed models and definitions (e.g. PACTE 2003; Göpferich 2009). Empirical studies on translation competence have generally adopted a process-oriented approach, aimed at identifying the behavioural and procedural features of more or less experienced translators that could be associated with given levels of quality of the translated text. In order to provide an approach complementary to the one just mentioned, an empirical study was designed to investigate translation primarily as translated text and, secondarily, as a process. The main aim of the analysis is to observe whether translators with similar levels of competence and experience produce translations with similar features and/or follow the same procedural patterns, so as to define competence on the basis of any trends emerging from the analysis of both the translated text and the translation process. To this end, the study monitored over three years the translation performance of a sample of professional translators and of students of the BA and MA translation programmes at the University of Trieste. Six translation tasks were carried out in total (two per academic year), each consisting of the translation of a non-specialist text from English into Italian (the participants’ mother tongue), followed by the compilation of a questionnaire on the translation process. The study adopted a synchronic and diachronic, mainly descriptive approach, focused on the lexical and syntactic analysis of the translated texts and on the data concerning delivery times and the procedural aspects analysed through the questionnaire responses. A qualitative analysis of the translations was also carried out, based on the assessment of the acceptability of the translated text and of translation errors, so as to associate the trends identified in the descriptive analysis with specific quality levels. The findings made it possible to profile three stages in the development of translation competence (‘novice’, ‘intermediate’, and ‘professional’) and to develop guidelines for trainers and trainees that can help anticipate and prevent procedural errors and speed up the learning process. XXVII Ciclo.