A corpus-based study of ethically sensitive issues in EU directives, national transposition measures and the press
This paper is set in the framework of the Eurolect Observatory Project, which is studying the differences between the EU varieties of legislative language (Eurolects) and their corresponding national legal varieties in 11 languages (Mori 2018). In this paper, our focus is on ethics and legislation: more specifically, the research question is whether any differences can be detected in the discursive construction of ethically sensitive issues in the English version of EU directives, their related national transposition measures adopted in the UK, and press articles reporting on the introduction, revision or implementation of such laws. In this sense, news reports and comments are seen as sitting at the end of a genre chain covering the whole spectrum of knowledge dissemination, from the expert (legislation) to the popularising level (newspaper article). The ethically sensitive issues in question concern human health and animal welfare, and the corpora used for the study were selected from the English section of the EOMC (Eurolect Observatory Multilingual Corpus) and from the Lexis-Nexis database of press articles.
Argumentative topoi seen from a discourse analytic perspective
One core aspect of argumentation is the inferential reasoning that justifies the transition from the premises to the conclusion. Classical rhetoric accounted for such inference in terms of topoi (or topics), while contemporary approaches have introduced the notion of argumentation schemes, even if the two concepts still largely coexist. Different approaches exist to the analysis and classification of topoi/schemes. This paper considers how two different approaches, the Argumentum Model of Topics (AMT) and the pragma-dialectical account of schemes, can serve the purposes of discourse analysts interested in argumentation. While discourse analysis tends to approach topoi from a content-based perspective, in this paper the view is taken that relying on more formalised accounts may add methodological rigour to the analysis of real-life argumentation, while enhancing points of contact between discourse analysis and argumentation theory. In particular, the AMT and the pragma-dialectical schemes are applied to the analysis of arguments used in editorials on Brexit, with a focus on populism. Building on a previous study in which recurrent topoi were analysed drawing on a content-based approach, this paper tries to establish connections between the topoi thus identified and more formalised classifications of argument schemes, considering the pros and cons of the two approaches.
Here you can't: context-aware security.
Adaptive systems improve their efficiency by modifying their behaviour to respond to changes in their operational environment. Security, too, must adapt to these changes, and policy enforcement becomes dependent on the dynamic contexts. We address some issues of context-aware security from a language-based perspective. More precisely, we extend a core adaptive functional language, recently introduced by some of the authors, with primitives to enforce security policies on the code execution. Then, we accordingly extend the existing static analysis in order to insert checks in a program. The inserted checks guarantee that no violation of the required security policies occurs.
Context-aware security: Linguistic mechanisms and static analysis
Adaptive systems improve their efficiency by modifying their behaviour to respond to changes in their operational environment. Security, too, must adapt to these changes, and policy enforcement becomes dependent on the dynamic contexts. We study these issues within MLCoDa, (the core of) an adaptive declarative language proposed recently. A main characteristic of MLCoDa is that it has two components: a logical one for handling the context and a functional one for computing. We extend this language with security policies expressed in logical terms, of two different kinds: context policies and application policies. The former, unknown a priori to an application, protect the context from unwanted changes. The latter protect applications from malicious actions of the context; they can be nested and can be activated and deactivated according to their scope. An execution step can only occur if all the policies in force hold, under the control of an execution monitor. This enforcement is supported by a type and effect system, which safely approximates the behaviour of an application, and by a further static analysis based on the computed effect. This last analysis can only be carried out at load time, when the execution context is known, and it enables us to efficiently enforce the security policies on the code execution by instrumenting applications. The monitor is thus implemented within MLCoDa; it is activated only on those policies that may be infringed, and switched off otherwise.
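The core idea of that scheme can be illustrated with a minimal sketch (this is not MLCoDa, and all names here are hypothetical): a load-time analysis keeps the execution monitor switched on only for the policies that the statically computed effect says may be infringed, and every step is then checked against just those policies.

```python
# Hypothetical sketch: policies are predicates over a dynamic context;
# a load-time analysis activates the monitor only for policies that the
# statically computed effect (here, a set of action names) may infringe.
from dataclasses import dataclass
from typing import Callable

Context = dict  # the dynamic context: facts about the environment

@dataclass
class Policy:
    name: str
    holds: Callable[[Context], bool]  # logical predicate over the context

def load_time_analysis(effect: set, policies: list) -> list:
    """Keep only the policies that the computed effect could infringe."""
    return [p for p in policies if p.name in effect]

def monitored_step(action: str, ctx: Context, active: list) -> None:
    """An execution step proceeds only if every active policy holds."""
    for p in active:
        if not p.holds(ctx):
            raise RuntimeError(f"policy {p.name!r} violated before {action!r}")
    # ... perform the action ...

policies = [
    Policy("no_camera", lambda c: not c.get("camera_on", False)),
    Policy("no_gps", lambda c: not c.get("gps_on", False)),
]
# The effect says only 'no_camera' may be infringed, so the monitor
# is switched off for 'no_gps'.
active = load_time_analysis({"no_camera"}, policies)
monitored_step("take_photo", {"camera_on": False}, active)  # passes
```

The point of the filtering step is exactly the one made in the abstract: the run-time monitor pays only for the policies that static analysis could not discharge.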
Statically detecting message confusions in a multi-protocol setting
In a multi-protocol setting, different protocols are concurrently executed, and each principal can participate in more than one. The possibilities of attack therefore increase, often due to the presence of similar patterns in messages: messages coming from one protocol can be confused with similar messages coming from another. As a consequence, data of one type may be interpreted as data of another; it is also possible that the type is the expected one, but the message is addressed to another protocol. In this paper, we present an extension of the LySa calculus [7, 4] that decorates encryptions with tags including the protocol identifier, the protocol step identifier and the intended types of the encrypted terms. The additional information allows us to find the messages that can be confused, and therefore gives hints for reconstructing the attack. We accordingly extend the standard static Control Flow Analysis for LySa, which over-approximates all the possible behaviours of the studied protocols, including the possible message confusions that may occur at run-time. Our analysis has been implemented and successfully applied to small sets of protocols. In particular, we discovered an undocumented family of attacks that may arise when the Bauer-Berson-Feiertag and the Woo-Lam authentication protocols run in parallel. The implementation complexity of the analysis is low polynomial.
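The tagging idea can be conveyed with a toy sketch (a deliberate simplification, not the LySa analysis itself): each encrypted message carries its protocol identifier, step identifier and the type pattern of its payload, and two messages from different protocols with coinciding type patterns are flagged as candidates for confusion.

```python
# Hypothetical sketch of tag-based confusion detection: a message is a
# (protocol, step, type_pattern) triple; two messages can be confused
# when they carry the same type pattern but belong to different protocols.
from itertools import combinations

def confusable(messages):
    """Return all pairs of messages from different protocols whose
    payload type patterns coincide."""
    hits = []
    for a, b in combinations(messages, 2):
        if a[0] != b[0] and a[2] == b[2]:
            hits.append((a, b))
    return hits

# Illustrative message shapes, not taken from the actual protocols.
msgs = [
    ("WooLam", 3, ("nonce", "agent")),
    ("BBF", 2, ("nonce", "agent")),   # same shape, different protocol
    ("WooLam", 1, ("key",)),
]
print(confusable(msgs))
```

The real analysis over-approximates run-time behaviour rather than comparing static message lists, but the flagged pairs play the same role: they are the hints from which an attack can be reconstructed.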
Tracing where IoT data are collected and aggregated
The Internet of Things (IoT) offers the infrastructure of the information society. It hosts smart objects that automatically collect and exchange data of various kinds, directly gathered from sensors or generated by aggregations. Suitable coordination primitives and analysis mechanisms are needed to design and reason about IoT systems, and to keep pace with the technological shifts they imply. We address these issues from a foundational point of view. To study them, we define IoT-LySa, a process calculus endowed with a static analysis that tracks the provenance and the manipulation of IoT data, and how they flow in the system. The results of the analysis can be used by a designer to check the behaviour of smart objects, in particular to verify non-functional properties, among which security.
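The kind of provenance tracking described here can be pictured with a small sketch (a toy model, not IoT-LySa): every datum carries the set of nodes that produced or touched it, and an aggregation unions the provenances of its inputs, so a designer can inspect where any value ultimately came from.

```python
# Toy provenance tracking: a sensed datum records its origin node, and
# aggregation propagates the union of the input provenances.
from dataclasses import dataclass

@dataclass(frozen=True)
class Datum:
    value: float
    provenance: frozenset  # ids of nodes that produced or touched this datum

def sense(node: str, value: float) -> Datum:
    """A sensor reading originates at a single node."""
    return Datum(value, frozenset({node}))

def aggregate(node: str, *inputs: Datum) -> Datum:
    """Average the inputs; the result's provenance is the aggregating
    node plus everything the inputs passed through."""
    prov = frozenset({node}).union(*(d.provenance for d in inputs))
    return Datum(sum(d.value for d in inputs) / len(inputs), prov)

t1 = sense("sensor1", 20.0)
t2 = sense("sensor2", 22.0)
avg = aggregate("gateway", t1, t2)
print(sorted(avg.provenance))  # → ['gateway', 'sensor1', 'sensor2']
```

A static analysis in the style of the abstract computes such provenance sets for all possible runs, rather than observing a single execution as this sketch does.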
A Step Towards Checking Security in IoT
The Internet of Things (IoT) is smartifying our everyday life. Our starting point is IoT-LySa, a calculus for describing IoT systems, and its static analysis, which will be presented at Coordination 2016. We extend that proposal to begin an investigation of security issues, in particular the static verification of secrecy and some other security properties.
Where Do Your IoT Ingredients Come From?
The Internet of Things (IoT) is here: smart objects are pervading our everyday life. Smart devices automatically collect and exchange data of various kinds, directly gathered from sensors or generated by aggregations. Suitable coordination primitives and analysis mechanisms are needed to design and reason about IoT systems, and to keep pace with the technology shifts they imply. We address these issues by defining IoT-LySa, a process calculus endowed with a static analysis that tracks the provenance and the route of IoT data, and detects how they affect the behaviour of smart objects.
Last Mile’s Resources
We extend an existing two-phase static analysis for an adaptive programming language to also deal with dynamic resources. The focus of our analysis is on predicting how these resources are used, in spite of the different, ever-changing operating environments to which applications automatically adapt their behaviour. Our approach is based on a type and effect system at compile time, followed by a control flow analysis carried out at load time. Remarkably, the second analysis cannot be anticipated, because information about the availability, implementation and other aspects of resources is unknown until the application is injected into the current environment.
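The two-phase structure can be sketched in miniature (hypothetical names throughout; the actual analysis is a type and effect system plus a control flow analysis, far richer than this): phase one over-approximates, at compile time, which resource kinds the program may use; phase two, which can only run at load time, checks that approximation against the resources the current environment actually offers.

```python
# Hypothetical two-phase sketch: compile-time effect + load-time check.

def compile_time_effect(program_actions):
    """Phase 1: over-approximate the resource kinds the program may use,
    from a list of (resource_kind, operation) pairs."""
    return {kind for (kind, _op) in program_actions}

def load_time_check(effect, environment):
    """Phase 2: only possible once the environment is known; report the
    resource kinds the effect demands but the environment lacks."""
    return effect - set(environment)

actions = [("printer", "print"), ("gps", "locate")]
effect = compile_time_effect(actions)
# This environment offers a printer but no GPS.
print(load_time_check(effect, {"printer": "hp-laser"}))  # → {'gps'}
```

The split mirrors the abstract's point: the second phase genuinely cannot be anticipated, since the `environment` argument only exists once the application lands in a concrete context.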