
    Supervisory Control and Analysis of Partially-observed Discrete Event Systems

    Nowadays, a variety of real-world systems fall into the class of discrete event systems (DES). In practical scenarios, due to factors such as limited sensing technology, sensor failures, unstable networks, and even the intrusion of malicious agents, it may happen that some events are unobservable, multiple events are indistinguishable in observations, and observations of some events are nondeterministic. Motivated by such practical scenarios, the DES community has paid increasing attention to partially-observed DES, which in this thesis refer broadly to DES with partial and/or unreliable observations. In this thesis, we focus on two topics of partially-observed DES, namely supervisory control and analysis. The first topic includes two research directions in terms of system models. One is the supervisory control of DES with both unobservable and uncontrollable events, focusing on the forbidden state problem; the other is the supervisory control of DES vulnerable to sensor-reading disguising attacks (SD-attacks), which can also be interpreted as DES with nondeterministic observations, addressing both the forbidden state problem and the liveness-enforcing problem. Petri nets (PN) are used as the reference formalism in this topic. First, we study the forbidden state problem in the framework of PN with both unobservable and uncontrollable transitions, assuming that unobservable transitions are uncontrollable. For ordinary PN subject to an admissible Generalized Mutual Exclusion Constraint (GMEC), an optimal on-line control policy with polynomial complexity is proposed, provided that a particular subnet, called the observation subnet, satisfies certain structural conditions. We then discuss how to obtain an optimal on-line control policy for PN subject to an arbitrary GMEC. Next, we consider the forbidden state problem in PN vulnerable to SD-attacks. Assuming a control specification given as a GMEC, we propose three methods to derive on-line control policies. The first two lead to an optimal policy but are computationally inefficient for large-size systems, while the third computes a policy with timely response even for large-size systems, at the expense of optimality. Finally, we investigate the liveness-enforcing problem, again assuming that the system is vulnerable to SD-attacks. In this problem, the plant is modelled as a bounded PN, which allows us to compute a supervisor off-line, starting from the reachability graph of the PN. Then, by repeatedly computing a more restrictive liveness-enforcing supervisor under no attack and constructing a basic supervisor, we propose an off-line method that synthesizes a liveness-enforcing supervisor tolerant to an SD-attack.
    The second topic concerns the verification of properties related to system security. Two properties are considered, namely fault-predictability and event-based opacity. The former is a property from the literature, characterizing the situation in which the occurrence of any fault in a system is predictable, while the latter is a property newly proposed in this thesis, describing the fact that secret events of a system cannot be revealed to an external observer within their critical horizons. In the case of fault-predictability, DES are modeled by labeled PN. A necessary and sufficient condition for fault-predictability is derived by characterizing the structure of the Predictor Graph. Furthermore, two rules are proposed to reduce the size of a PN, which allow us to analyze the fault-predictability of the original net by verifying that of the reduced net. When studying event-based opacity, we use deterministic finite-state automata as the reference formalism. Considering different scenarios, we propose four notions, namely K-observation event-opacity, infinite-observation event-opacity, event-opacity, and combinational event-opacity. Moreover, verifiers are proposed to analyze these properties.
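    As a rough, self-contained illustration of the kind of specification this topic works with, the sketch below checks a Generalized Mutual Exclusion Constraint (a linear inequality w·m ≤ k on markings) against the markings reached by firing Petri net transitions. The net, weight vector, and bound are invented for illustration, and the sketch deliberately ignores the observability and controllability issues that the thesis actually addresses.

```python
import numpy as np

# Hypothetical 3-place, 2-transition Petri net (not taken from the thesis).
pre  = np.array([[1, 0],   # tokens consumed by each transition
                 [0, 1],
                 [0, 0]])
post = np.array([[0, 1],   # tokens produced by each transition
                 [1, 0],
                 [0, 1]])

w, k = np.array([0, 1, 2]), 3   # GMEC: legal markings satisfy w . m <= k

def enabled(m, t):
    """Transition t is enabled if every input place holds enough tokens."""
    return np.all(m >= pre[:, t])

def fire(m, t):
    """Standard firing rule: m' = m - Pre[:, t] + Post[:, t]."""
    return m - pre[:, t] + post[:, t]

def admissible(m, t):
    """Allow t only if it is enabled and the next marking stays legal."""
    return enabled(m, t) and w @ fire(m, t) <= k

m0 = np.array([2, 1, 0])
for t in range(pre.shape[1]):
    print(f"t{t}: enabled={enabled(m0, t)}, keeps GMEC={admissible(m0, t)}")
```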

    Bridging the Global Divide in AI Regulation: A Proposal for a Contextual, Coherent, and Commensurable Framework

    This paper examines the current landscape of AI regulations, highlighting the divergent approaches being taken, and proposes an alternative contextual, coherent, and commensurable (3C) framework. The EU, Canada, South Korea, and Brazil follow a horizontal or lateral approach that postulates the homogeneity of AI systems, seeks to identify common causes of harm, and demands uniform human interventions. In contrast, the U.K., Israel, Switzerland, Japan, and China have pursued a context-specific or modular approach, tailoring regulations to the specific use cases of AI systems. The U.S. is reevaluating its strategy, with growing support for controlling existential risks associated with AI. Addressing such fragmentation of AI regulations is crucial to ensure the interoperability of AI. The present degree of proportionality, granularity, and foreseeability of the EU AI Act is not sufficient to garner consensus. The context-specific approach holds greater promise but requires further development in terms of details, coherency, and commensurability. To strike a balance, this paper proposes a hybrid 3C framework. To ensure contextuality, the framework categorizes AI into distinct types based on their usage and interaction with humans: autonomous, allocative, punitive, cognitive, and generative AI. To ensure coherency, each category is assigned specific regulatory objectives: safety for autonomous AI; fairness and explainability for allocative AI; accuracy and explainability for punitive AI; accuracy, robustness, and privacy for cognitive AI; and the mitigation of infringement and misuse for generative AI. To ensure commensurability, the framework promotes the adoption of international industry standards that convert principles into quantifiable metrics. In doing so, the framework is expected to foster international collaboration and standardization without imposing excessive compliance costs.

    Property Enforcement for Partially-Observed Discrete-Event Systems

    Engineering systems in which physical elements, such as automobiles, aircraft, or electric power plants, are controlled by a computational infrastructure consisting of several computers communicating over a network are called Cyber-Physical Systems. Ever-increasing demands for safety, security, performance, and certification of these critical systems put stringent constraints on their design and necessitate the use of formal model-based approaches to synthesize provably-correct feedback controllers. This dissertation aims to tackle these challenges by developing a novel methodology for the synthesis of control and sensing strategies for Discrete Event Systems (DES), an important class of cyber-physical systems. First, we develop a uniform approach for synthesizing property-enforcing supervisors for a wide class of properties called information-state-based (IS-based) properties. We then consider the enforcement of non-blockingness in addition to IS-based properties. We develop a finite structure called the All Enforcement Structure (AES) that embeds all valid supervisors. Furthermore, we propose novel and general approaches to solve the sensor activation problem for partially-observed DES. We extend our results for the sensor activation problem from the centralized case to the decentralized case. The methodology in the dissertation has the following novel features: (i) it explicitly considers and handles imperfect state information, due to sensor noise, and limited controllability, due to unexpected environmental disturbances; (ii) it is a uniform information-state-based approach that can be applied to a variety of user-specified requirements; (iii) it is a formal model-based approach, which results in provably correct solutions; and (iv) the methodology and associated theoretical foundations developed are generic and applicable to many types of networked cyber-physical systems with safety-critical requirements, in particular networked systems such as aircraft electric power systems and intelligent transportation systems.
    PhD dissertation, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/137097/1/xiangyin_1.pd
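    To give a flavor of the information-state idea underlying IS-based properties, the following minimal sketch maintains a state estimate for a partially-observed finite automaton by alternating an unobservable-reach closure with an observable step. The automaton, event names, and unobservable event set are invented for illustration; this is only a generic observer-style update, not the dissertation's All Enforcement Structure.

```python
# Hypothetical partially observed automaton: states, transitions, and the
# set of unobservable events are invented for illustration.
trans = {            # (state, event) -> next state
    (0, "a"): 1, (0, "u"): 2,
    (1, "b"): 3, (2, "a"): 3, (3, "u"): 0,
}
unobservable = {"u"}

def unobservable_reach(states):
    """Close a state estimate under unobservable moves."""
    frontier, reach = list(states), set(states)
    while frontier:
        s = frontier.pop()
        for (q, e), nxt in trans.items():
            if q == s and e in unobservable and nxt not in reach:
                reach.add(nxt)
                frontier.append(nxt)
    return frozenset(reach)

def observe(estimate, event):
    """Update the information state after seeing an observable event."""
    step = {trans[(s, event)] for s in estimate if (s, event) in trans}
    return unobservable_reach(step)

est = unobservable_reach({0})        # initial estimate
for e in ["a", "b"]:
    est = observe(est, e)
    print(f"after '{e}': estimate = {set(est)}")
```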

    Composition and synchronization of real-time components upon one processor

    Many industrial systems have various hardware and software functions for controlling mechanics. If these functions act independently, as they do in legacy situations, their overall performance is not optimal. There is a trend towards optimizing the overall system performance and creating a synergy between the different functions in a system, which is achieved by replacing more and more dedicated, single-function hardware by software components running on programmable platforms. This increases the re-usability of the functions, but their synergy also requires that (parts of) the multiple software functions share the same embedded platform. In this work, we look at the composition of inter-dependent software functions on a shared platform from a timing perspective. We consider platforms consisting of one preemptive processor resource and, optionally, multiple non-preemptive resources. Each function is implemented by a set of tasks; the group of tasks of a function that executes on the same processor, along with its scheduler, is called a component. The tasks of a component typically have hard timing constraints. Fulfilling these timing constraints of a component requires analysis. Looking at a single function, co-operative scheduling of the tasks within a component has already proven to be a powerful tool to make the implementation of a function more predictable. For example, co-operative scheduling can accelerate the execution of a task (making it easier to satisfy timing constraints), it can reduce the cost of arbitrary preemptions (leading to more realistic execution-time estimates), and it can guarantee access to other resources without the need for arbitration by other protocols. Since timeliness is an important functional requirement, (re-)use of a component for composition and integration on a platform must deal with timing. To enable us to analyze and specify the timing requirements of a particular component in isolation from other components, we reserve and enforce the availability of all its specified resources during run-time. The real-time systems community has proposed hierarchical scheduling frameworks (HSFs) to implement this isolation between components. After admitting a component on a shared platform, a component in an HSF keeps meeting its timing constraints as long as it behaves as specified. If it violates its specification, it may be penalized, but other components are temporally isolated from the harmful effects. A component in an HSF is said to execute on a virtual platform with a dedicated processor at a speed proportional to its reserved processor supply. Three effects disturb this point of view. Firstly, processor time is supplied discontinuously. Secondly, the actual processor is faster. Thirdly, the HSF no longer guarantees the isolation of an individual component when two arbitrary components violate their specification during access to non-preemptive resources, even when access is arbitrated via well-defined real-time protocols. The scientific contributions of this work focus on these three issues. Our solutions to these issues cover the system design from component requirements to run-time allocation. Firstly, we present a novel scheduling method that enables us to integrate the component into an HSF. It guarantees that each integrated component executes its tasks in exactly the same order regardless of a continuous or a discontinuous supply of processor time.
    Using our method, the component executes on a virtual platform, and the only difference it experiences is that the processor speed differs from the actual processor speed. As a result, we can focus on the traditional scheduling problem of meeting deadline constraints of tasks on a uni-processor platform. For such platforms, we show how scheduling tasks co-operatively within a component helps to meet the deadlines of this component. We compare the strength of these co-operative scheduling techniques to theoretically optimal schedulers. Secondly, we standardize the way of computing the resource requirements of a component, even in the presence of non-preemptive resources. We can therefore apply the same timing analysis to the components in an HSF as to the tasks inside, regardless of their scheduling or the protocol used for non-preemptive resources. This increases the re-usability of the timing analysis of components. We also make non-preemptive resources transparent during the development cycle of a component, i.e., the developer of a component can be unaware of the actual protocol being used in an HSF. Components can therefore be unaware that access to non-preemptive resources requires arbitration. Finally, we complement the existing real-time protocols for arbitrating access to non-preemptive resources with mechanisms to confine temporal faults to those components in the HSF that share the same non-preemptive resources. We compare the overheads of sharing non-preemptive resources between components with and without mechanisms for confinement of temporal faults. We do this by means of experiments within an HSF-enabled real-time operating system.
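    As a rough illustration of reasoning about discontinuous processor supply, the sketch below compares an EDF demand-bound function for a hypothetical task set against the commonly used linear lower bound on the supply of a periodic processor reservation (budget Θ every period Π). The reservation and task parameters are invented, the test is only a sufficient check, and it is not the analysis developed in this thesis.

```python
from math import floor

# Hypothetical periodic reservation: budget (theta) units of processor time
# guaranteed in every period (pi) time units.
theta, pi = 5.0, 10.0

# Hypothetical sporadic tasks: (worst-case execution time, deadline, period).
tasks = [(1.0, 14.0, 20.0), (2.0, 25.0, 40.0)]

def lsbf(t):
    """Linear lower bound on processor supply in any window of length t."""
    return max(0.0, (theta / pi) * (t - 2 * (pi - theta)))

def dbf(t):
    """EDF demand bound: work that must finish within any window of length t."""
    return sum(max(0, floor((t - d) / p) + 1) * c for c, d, p in tasks)

# Check demand against supply at the absolute deadlines in a bounded horizon.
horizon = 120.0
checkpoints = sorted({d + k * p for c, d, p in tasks
                      for k in range(int(horizon / p) + 1)})
ok = all(dbf(t) <= lsbf(t) for t in checkpoints)
print("schedulable under the reservation (sufficient test):", ok)
```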

    Complexity in Translation. An English-Norwegian Study of Two Text Types

    The present study discusses two primary research questions. Firstly, we have tried to investigate to what extent it is possible to compute the actual translation relation found in a selection of English-Norwegian parallel texts. By this we understand the generation of translations with no human intervention, and we assume an approach to machine translation (MT) based on linguistic knowledge. In order to answer this question, a measurement of translational complexity is applied to the parallel texts. Secondly, we have tried to find out if there is a difference in the degree of translational complexity between the two text types, law and fiction, included in the empirical material. The study is a strictly product-oriented approach to complexity in translation: it disregards aspects related to translation methods, and to the cognitive processes behind translation. What we have analysed are intersubjectively available relations between source texts and existing translations. The degree of translational complexity in a given translation task is determined by the types and amounts of information needed to solve it, as well as by the accessibility of these information sources, and the effort required when they are processed. For the purpose of measuring the complexity of the relation between a source text unit and its target correspondent, we apply a set of four correspondence types, organised in a hierarchy reflecting divisions between different linguistic levels, along with a gradual increase in the degree of translational complexity. In type 1, the least complex type, the corresponding strings are pragmatically, semantically, and syntactically equivalent, down to the level of the sequence of word forms. In type 2, source and target string are pragmatically and semantically equivalent, and equivalent with respect to syntactic functions, but there is at least one mismatch in the sequence of constituents or in the use of grammatical form words. Within type 3, source and target string are pragmatically and semantically equivalent, but there is at least one structural difference violating syntactic functional equivalence between the strings. In type 4, there is at least one linguistically non-predictable, semantic discrepancy between source and target string. The correspondence type hierarchy, ranging from 1 to 4, is characterised by an increase with respect to linguistic divergence between source and target string, an increase in the need for information and in the amount of effort required to translate, and a decrease in the extent to which there exist implications between relations of source-target equivalence at different linguistic levels. We assume that there is a translational relation between the inventories of simple and complex linguistic signs in two languages which is predictable, and hence computable, from information about source and target language systems, and about how the systems correspond. Thus, computable translations are predictable from the linguistic information coded in the source text, together with given, general information about the two languages and their interrelations. Further, we regard non-computable translations to be correspondences where it is not possible to predict the target expression from the information encoded in the source expression, together with given, general information about SL and TL and their interrelations. 
    Non-computable translations require access to additional information sources, such as various kinds of general or task-specific extra-linguistic information, or task-specific linguistic information from the context surrounding the source expression. In our approach, correspondences of types 1–3 constitute the domain of linguistically predictable, or computable, translations, whereas type 4 correspondences belong to the non-predictable, or non-computable, domain, where semantic equivalence is not fulfilled. The empirical method involves extracting translationally corresponding strings from parallel texts, and assigning one of the types defined by the correspondence hierarchy to each recorded string pair. The analysis is applied to running text, omitting no parts of it. Thus, the distribution of the four types of translational correspondence within a set of data provides a measurement of the degree of translational complexity in the parallel texts that the data are extracted from. The complexity measurements of this study are meant to show to what extent we assume that an ideal, rule-based MT system could simulate the given translations, and for this reason the finite clause is chosen as the primary unit of analysis. The work of extracting and classifying translational correspondences is done manually, as it requires a bilingually competent human analyst. In the present study, the recorded data cover about 68,000 words. They are compiled from six different text pairs: two of them are law texts, and the remaining four are fiction texts. Comparable amounts of text are included for each text type, and both directions of translation are covered. Since the scope of the investigation is limited, we cannot, on the basis of our analysis, generalise about the degree of translational complexity in the chosen text types and in the language pair English-Norwegian. Calculated in terms of string lengths, the complexity measurement across the entire collection of data shows that as little as 44.8% of all recorded string pairs are classified as computable translational correspondences, i.e. as type 1, 2, or 3, and non-computable string pairs of type 4 constitute a majority (55.2%) of the compiled data. On average, the proportion of computable correspondences is 50.2% in the law data, and 39.6% in fiction. In relation to the question of whether it would be fruitful to apply automatic translation to the selected texts, we have considered the workload potentially involved in correcting machine output, and in this respect the difference in restrictedness between the two text types is relevant. Within the non-computable correspondences, the frequency of cases exhibiting only one minimal semantic deviation between source and target string is considerably higher among the data extracted from the law texts than among those recorded from fiction. For this reason we tentatively regard the investigated pairs of law texts as representing a text type where tools for automatic translation may be helpful, if the effort required by post-editing is smaller than that of manual translation. This is possibly the case in one of the law text pairs, where 60.9% of the data involve computable translation tasks. In the other pair of law texts the corresponding figure is merely 38.8%, and the potential helpfulness of automatisation would be even more strongly determined by the edit cost. That text might be a task for computer-aided translation, rather than for MT.
    As regards the investigated fiction texts, it is our view that post-editing of automatically generated translations would be laborious and not cost effective, even in the case of one text pair showing a relatively low degree of translational complexity. Hence, we concur with the common view that the translation of fiction is not a task for MT.
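    To make the measurement concrete, here is a minimal sketch of how such proportions could be computed from annotated string pairs, weighting by string length as in the study; the pairs, lengths, and type labels are invented, and types 1–3 count as computable per the definitions above.

```python
# Hypothetical annotated correspondences: (source length in words, type 1-4).
# In the actual study, extracted string pairs are classified manually.
pairs = [(12, 1), (9, 4), (15, 2), (7, 4), (11, 3), (20, 4)]

total = sum(length for length, _ in pairs)
computable = sum(length for length, ctype in pairs if ctype in (1, 2, 3))

# Share of computable (type 1-3) correspondences, weighted by string length,
# mirroring the "calculated in terms of string lengths" measure.
print(f"computable share: {100 * computable / total:.1f}%")
```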

    The semantic transparency of English compound nouns

    What is semantic transparency, why is it important, and which factors play a role in its assessment? This work approaches these questions by investigating English compound nouns. The first part of the book gives an overview of semantic transparency in the analysis of compound nouns, discussing its role in models of morphological processing and differentiating it from related notions. After a chapter on the semantic analysis of complex nominals, it closes with a chapter on previous attempts to model semantic transparency. The second part introduces new empirical work on semantic transparency, presenting two different sets of statistical models for compound transparency. In particular, two semantic factors were explored: the semantic relations holding between compound constituents and the role of different readings of the constituents and the whole compound, operationalized in terms of meaning shifts and in terms of the distribution of specific readings across constituent families. All semantic annotations used in the book are freely available.
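    As a loose illustration of what a statistical model of compound transparency can look like, the sketch below fits an ordinary least-squares model predicting a transparency rating from two invented predictors (a meaning-shift flag and a coded constituent relation). The data, features, and model form are hypothetical and are not the models developed in the book.

```python
import numpy as np

# Hypothetical compounds with a transparency rating (0-1) and two invented
# predictors: a binary meaning-shift flag and a coded constituent relation.
#                   shift  relation  rating
data = np.array([[0, 1, 0.90],
                 [1, 2, 0.35],
                 [0, 2, 0.75],
                 [1, 1, 0.40],
                 [0, 3, 0.80]])

X = np.column_stack([np.ones(len(data)), data[:, :2]])  # intercept + features
y = data[:, 2]

# Ordinary least squares: minimize ||X @ beta - y||^2.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("intercept, shift effect, relation effect:", np.round(beta, 2))
```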
