
    Supervisory Control and Analysis of Partially-observed Discrete Event Systems

    Nowadays, a variety of real-world systems fall into the class of discrete event systems (DES). In practical scenarios, due to factors such as limited sensor technology, sensor failures, unstable networks, and even the intrusion of malicious agents, it may happen that some events are unobservable, multiple events are indistinguishable in observations, and the observations of some events are nondeterministic. By considering various practical scenarios, increasing attention in the DES community has been paid to partially-observed DES, which in this thesis refer broadly to those DES with partial and/or unreliable observations. In this thesis, we focus on two topics of partially-observed DES, namely, supervisory control and analysis. The first topic includes two research directions in terms of system models. One is the supervisory control of DES with both unobservable and uncontrollable events, focusing on the forbidden state problem; the other is the supervisory control of DES vulnerable to sensor-reading disguising attacks (SD-attacks), which is also interpreted as DES with nondeterministic observations, addressing both the forbidden state problem and the liveness-enforcing problem. Petri nets (PN) are used as a reference formalism in this topic. First, we study the forbidden state problem in the framework of PN with both unobservable and uncontrollable transitions, assuming that unobservable transitions are uncontrollable. For ordinary PN subject to an admissible Generalized Mutual Exclusion Constraint (GMEC), an optimal on-line control policy with polynomial complexity is proposed, provided that a particular subnet, called the observation subnet, satisfies certain structural conditions. It is then discussed how to obtain an optimal on-line control policy for PN subject to an arbitrary GMEC. Next, we again consider the forbidden state problem, but in PN vulnerable to SD-attacks. Assuming that the control specification is given as a GMEC, we propose three methods to derive on-line control policies. The first two lead to an optimal policy but are computationally inefficient for large-size systems, while the third method computes a policy with timely response even for large-size systems, but at the expense of optimality. Finally, we investigate the liveness-enforcing problem, still assuming that the system is vulnerable to SD-attacks. In this problem, the plant is modelled as a bounded PN, which allows us to compute a supervisor off-line, starting from the construction of the reachability graph of the PN. Then, based on repeatedly computing a more restrictive liveness-enforcing supervisor under no attack and constructing a basic supervisor, an off-line method that synthesizes a liveness-enforcing supervisor tolerant to an SD-attack is proposed. The second topic concerns the verification of properties related to system security. Two properties are considered, namely fault-predictability and event-based opacity. The former is a property from the literature, characterizing the situation in which the occurrence of any fault in a system is predictable, while the latter is a property newly proposed in this thesis, describing the fact that the secret events of a system cannot be revealed to an external observer within their critical horizons. In the case of fault-predictability, DES are modelled by labeled PN. A necessary and sufficient condition for fault-predictability is derived by characterizing the structure of the Predictor Graph. Furthermore, two rules are proposed to reduce the size of a PN, which allow us to analyze the fault-predictability of the original net by verifying that of the reduced net. When studying event-based opacity, we use deterministic finite-state automata as the reference formalism. Considering different scenarios, we propose four notions, namely K-observation event-opacity, infinite-observation event-opacity, event-opacity, and combinational event-opacity. Moreover, verifiers are proposed to analyze these properties.
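
    As background for the control specification above, a GMEC is a linear inequality w·m ≤ k over the markings m of the net. The sketch below (Python, with an invented three-place net, weights, and bound) shows only a naive one-step check that a supervisor could perform for a controllable transition; it does not reproduce the thesis's optimal on-line policies for nets with unobservable or uncontrollable transitions.

```python
import numpy as np

# Minimal GMEC check on a small Petri net (hypothetical 3-place, 2-transition net).
# A GMEC is the linear constraint w . m <= k over markings m; this is only a naive
# one-step admissibility check, not the optimal on-line policy from the thesis.

PRE  = np.array([[1, 0],   # tokens consumed by each transition (rows = places)
                 [0, 1],
                 [0, 0]])
POST = np.array([[0, 1],   # tokens produced by each transition
                 [0, 0],
                 [1, 1]])

w = np.array([0, 1, 2])    # GMEC weights (assumed for illustration)
k = 3                      # GMEC bound:  w . m <= k

def enabled(m, t):
    """Transition t is enabled if every input place holds enough tokens."""
    return np.all(m >= PRE[:, t])

def fire(m, t):
    """Return the marking reached by firing transition t."""
    return m - PRE[:, t] + POST[:, t]

def allow(m, t):
    """Permit a controllable transition only if the next marking satisfies the GMEC."""
    return enabled(m, t) and w @ fire(m, t) <= k

m0 = np.array([1, 1, 0])
print([t for t in range(PRE.shape[1]) if allow(m0, t)])
```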

    Sequence-Oriented Diagnosis of Discrete-Event Systems

    Model-based diagnosis has always been conceived as set-oriented, meaning that a candidate is a set of faults, or faulty components, that explains a collection of observations. This perspective applies equally to both static and dynamical systems. Diagnosis of discrete-event systems (DESs) is no exception: a candidate is traditionally a set of faults, or faulty events, occurring in a trajectory of the DES that conforms with a given sequence of observations. As such, a candidate does not embed any temporal relationship among faults, nor does it account for multiple occurrences of the same fault. To improve diagnostic explanation and support decision making, a sequence-oriented perspective on diagnosis of DESs is presented, where a candidate is a sequence of faults occurring in a trajectory of the DES, called a fault sequence. Since a fault sequence is possibly unbounded, as the same fault may occur an unlimited number of times in the trajectory, the set of (output) candidates may also be unbounded, which contrasts with set-oriented diagnosis, where the set of candidates is bounded by the powerset of the domain of faults. Still, a possibly unbounded set of fault sequences is shown to be a regular language, which can be defined by a regular expression over the domain of faults, a property that makes sequence-oriented diagnosis feasible in practice. The task of monitoring-based diagnosis is considered, where a new candidate set is generated at the occurrence of each observation. The approach is based on three different techniques: (1) blind diagnosis, with no compiled knowledge, (2) greedy diagnosis, with total knowledge compilation, and (3) lazy diagnosis, with partial knowledge compilation. By knowledge we mean a data structure somewhat similar to a classical DES diagnoser, which can be generated (compiled) either entirely offline (greedy diagnosis) or incrementally online (lazy diagnosis). Experimental evidence suggests that, among these techniques, only lazy diagnosis may be viable in non-trivial application domains.
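
    To illustrate the monitoring-based setting in its simplest form, the sketch below (Python, with an invented five-state DES and event names) performs a blind-diagnosis-style update: at each observation it recomputes the set of fault sequences consistent with what has been observed so far. The toy model has no unobservable cycles, so the candidate set stays finite here; in general it is a regular language and would be represented by an automaton or regular expression rather than enumerated.

```python
# Blind, monitoring-based diagnosis on a tiny hypothetical DES: states and events
# are invented for illustration. The model has no unobservable cycles, so the set
# of candidate fault sequences stays finite; in general it is a regular language.

TRANS = {            # state -> list of (event, next_state)
    "s0": [("o1", "s1"), ("f1", "s2")],
    "s1": [("o2", "s3")],
    "s2": [("o1", "s3"), ("f2", "s4")],
    "s3": [("o2", "s3")],
    "s4": [("o1", "s3")],
}
OBSERVABLE = {"o1", "o2"}          # all other events are unobservable faults

def silent_closure(state, faults):
    """All (state, fault-sequence) pairs reachable through unobservable events."""
    result = {(state, faults)}
    for ev, nxt in TRANS.get(state, []):
        if ev not in OBSERVABLE:
            result |= silent_closure(nxt, faults + (ev,))
    return result

def diagnose(observations, initial="s0"):
    """Recompute the candidate fault sequences after each observation."""
    belief = silent_closure(initial, ())
    for obs in observations:
        step = set()
        for state, faults in belief:
            for ev, target in TRANS.get(state, []):
                if ev == obs:
                    step |= silent_closure(target, faults)
        belief = step
        print(obs, "-> candidates:", sorted(faults for _, faults in belief))
    return {faults for _, faults in belief}

diagnose(["o1", "o2"])
```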

    Power Distribution System Event Classification Using Fuzzy Logic

    This dissertation describes an on-line, non-intrusive classification system for identifying and reporting normal and abnormal power system events occurring on a distribution feeder based on their underlying cause, using signals acquired at the distribution substation. The event classification system extracts features from the acquired signals using signal processing and shape analysis techniques. It then analyzes the features and classifies events based on their cause using a fuzzy logic expert system classifier. The classification system also extracts and reports parameters to assist utilities in locating faulty components. A detailed illustration of the classifier design process is presented. The power distribution system event classification problem is shown to be a large-scale classification problem. The reasoning behind the choice of a fuzzy logic based hierarchical expert system classifier to solve this problem is explained in detail. The fuzzy logic based expert system classifier uses generic features, shape-based features, and event-specific features extracted from the acquired signals. The design of feature extractors for each of these feature categories is explained. A new, fuzzy logic based, modified Dynamic Time Warping (DTW) algorithm was developed for extracting shape-based features. The design of event-specific feature extractors for capacitor problems, arcing, and overcurrent events is discussed in detail. The fuzzy logic based hierarchical expert system classifier required a new fuzzy inference engine that could efficiently handle a large number of rules and rule chaining. A new fuzzy inference engine was designed for this purpose and the design process is explained in detail. To avoid information overload, an intelligent reporting framework that processes the raw classification information generated by the fuzzy classifier and reports events of interest in a timely and user-friendly manner was developed. Finally, performance studies were carried out to validate the performance of the designed fuzzy logic based expert system classifier and the intelligent reporting system. The data needed to design and validate the classification system were obtained through the Distribution Fault Anticipation (DFA) data collection platform developed by the Power System Automation Laboratory (PSAL) at Texas A&M University, sponsored by the Electric Power Research Institute (EPRI) and multiple partner utilities.
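
    For reference, the sketch below (Python) shows the textbook DTW recursion that such shape-based feature extraction builds on, applied to a synthetic waveform and template; the fuzzy-logic modifications developed in the dissertation are not reproduced here.

```python
import numpy as np

# Classic dynamic time warping (DTW) distance between two 1-D signals.
# This is only the textbook algorithm; the dissertation's fuzzy-logic-modified
# variant is not reproduced here. The example signals are synthetic.

def dtw_distance(x, y):
    n, m = len(x), len(y)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(x[i - 1] - y[j - 1])               # local mismatch
            cost[i, j] = d + min(cost[i - 1, j],        # insertion
                                 cost[i, j - 1],        # deletion
                                 cost[i - 1, j - 1])    # match
    return cost[n, m]

# Example: compare a recorded waveform segment against a template shape.
template = np.sin(np.linspace(0, 2 * np.pi, 64))
signal   = np.sin(np.linspace(0, 2 * np.pi, 80)) + 0.05 * np.random.randn(80)
print(dtw_distance(signal, template))
```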

    The 6 April 2009 earthquake at L'Aquila: a preliminary analysis of magnetic field measurements

    Several investigations have reported the possible identification of anomalous geomagnetic field signals prior to earthquake occurrence. In the ULF frequency range, candidates for precursory signatures have been proposed in the increase of the noise background and of the polarization parameter (i.e. the ratio between the amplitude/power of the vertical component and that of the horizontal component), in the changing characteristics of the slope of the power spectrum and of the fractal dimension, and in the possible occurrence of short-duration pulses. Using conventional data-processing techniques, we conducted a preliminary analysis of the magnetic field observations performed at L'Aquila during the three months preceding the 6 April 2009 earthquake, focusing attention on the possible occurrence of features similar to those identified in previous events. Within the limits of this analysis, we do not find compelling evidence for any of the features which have been proposed as earthquake precursors: indeed, most aspects of our observations (which, in some cases, appear consistent with previous findings) might be interpreted in terms of general magnetospheric conditions and/or of different sources.
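
    As an illustration of the polarization parameter mentioned above, the sketch below (Python) estimates the ratio of vertical-to-horizontal power in a chosen ULF band using Welch spectra; the sampling rate, band limits, and synthetic data are assumptions for illustration, not the processing actually used in the study.

```python
import numpy as np
from scipy.signal import welch

# Minimal sketch of the ULF "polarization parameter": the ratio of vertical-component
# power to horizontal-component power in a chosen ULF band. Sampling rate, band
# limits and the synthetic data below are assumptions for illustration only.

def polarization_ratio(bz, bx, by, fs, band=(0.01, 0.1)):
    def band_power(x):
        f, pxx = welch(x, fs=fs, nperseg=4096)
        mask = (f >= band[0]) & (f <= band[1])
        return pxx[mask].sum() * (f[1] - f[0])   # integrate the PSD over the band
    return band_power(bz) / (band_power(bx) + band_power(by))

fs = 1.0                                         # 1 Hz sampling (assumed)
t = np.arange(6 * 3600) / fs                     # six hours of synthetic data
rng = np.random.default_rng(0)
bx, by, bz = (rng.standard_normal(t.size) for _ in range(3))
print(polarization_ratio(bz, bx, by, fs))
```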

    Using strontium isotopes to track Pacific salmon migrations in Alaska

    Thesis (Ph.D.), University of Alaska Fairbanks, 2014. Pacific salmon (Oncorhynchus spp.) are an important cultural, ecological, and economic natural resource in Alaska. Not only do salmon maintain an important mechanism of nutrient transport between marine, aquatic, and terrestrial ecosystems, but they also provide a sustainable food and economic resource for human communities. A challenging issue in the management, conservation, and research of Pacific salmon is tracking their responses to perturbations across the multiple scales of population structure that characterize these species. Research has shown how the inherent biodiversity of Pacific salmon imparts resiliency to environmental change and temporal stability to their overall productivity and the human systems dependent upon such productivity (e.g., fisheries). The vast biodiversity of salmon arises primarily via precise natal homing of adults to their rivers of origin, resulting in locally adapted populations. Thus, there have been considerable efforts to develop methods to effectively manage and monitor Pacific salmon biodiversity. One important example is using genetic differentiation among populations to discern the relative contributions of genetically distinct stocks in mixed stock fishery harvests. In the Bristol Bay region, sockeye salmon (O. nerka) harvests can be discerned at the watershed level (i.e., the nine major watersheds contributing to the fishery). However, tens to hundreds of locally adapted populations exist within each of these watersheds, and methods to apportion fishery harvests to this finer-scale population structure are lacking. This dissertation presents a new method in Alaska to discern fine-scale population structure (i.e., within watersheds) of Chinook salmon (O. tshawytscha) harvests using a naturally occurring geochemical tracer in rivers, strontium (Sr) isotopes (⁸⁷Sr/⁸⁶Sr). To this end, in Chapter 1, I characterize the statewide geographic variation on multiple spatial scales in the ⁸⁷Sr/⁸⁶Sr ratios of Alaska's rivers and discuss the geochemical and geological controls of the observed ⁸⁷Sr/⁸⁶Sr ratios. In Chapter 2, I approach the persistent problem of evaluating site-specific temporal variation, especially in remote Subarctic and Arctic regions, by employing the non-migratory behavioral ecology of slimy sculpin (Cottus cognatus). Finally, in Chapter 3, I demonstrate how temporally and spatially robust ⁸⁷Sr/⁸⁶Sr baseline datasets within the Nushagak River can be used to apportion a mixed stock fishery harvest of Chinook salmon conducted in Nushagak Bay back to natal sources at the sub-basin watershed level. Because of the conservative nature of the ⁸⁷Sr/⁸⁶Sr ratio during physical and biological processes, this method is applicable not only to Chinook salmon, but also to other salmon species (e.g., sockeye and coho salmon, O. kisutch). Additionally, the development of baseline ⁸⁷Sr/⁸⁶Sr information (e.g., waters) and an overall research framework to employ this tracer in provenance studies have statewide implications for the research and management of other migratory animals.
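
    As a minimal illustration of the mass-balance idea behind ⁸⁷Sr/⁸⁶Sr apportionment, the sketch below (Python) solves the two-endmember mixing case with invented baseline ratios; the dissertation's mixed-stock analysis involves many sources and statistical mixing models, and the simple form below additionally assumes comparable Sr concentrations in the two sources.

```python
# Two-endmember mixing sketch for 87Sr/86Sr: apportion a mixture between two
# hypothetical natal sources A and B with distinct baseline ratios. Real mixed-stock
# analyses involve many sources and statistical mixing models; this only shows the
# underlying mass-balance idea and assumes similar Sr concentrations in both sources.

def fraction_from_source_a(r_mix, r_a, r_b):
    """Fraction contributed by source A, from ratio mass balance:
       r_mix = f * r_a + (1 - f) * r_b  =>  f = (r_mix - r_b) / (r_a - r_b)."""
    return (r_mix - r_b) / (r_a - r_b)

r_a, r_b = 0.7120, 0.7055          # assumed baseline ratios for two sub-basins
r_mix = 0.7080                     # ratio measured in a harvest sample (assumed)
print(f"fraction from source A: {fraction_from_source_a(r_mix, r_a, r_b):.2f}")
```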

    Identifying the Occurrence Time of the Destructive Kahramanmaraş-Gazientep Earthquake of Magnitude M7.8 in Turkey on 6 February 2023

    Here, we employ natural time analysis of seismicity together with non-extensive statistical mechanics, aiming to shorten the occurrence time window of the Kahramanmaraş-Gazientep M7.8 earthquake. The results obtained are positive, indicating that after 3 February 2023 at 11:05:58 UTC a strong earthquake was imminent. Natural time analysis also reveals a minimum fluctuation of the order parameter of seismicity almost three and a half months before the M7.8 earthquake, pointing to the initiation of seismic electrical activity. Moreover, before this earthquake occurrence, detrended fluctuation analysis of the earthquake magnitude time series reveals random behavior. Finally, when applying earthquake nowcasting, we find average earthquake potential score values which are compatible with those previously observed before strong (M≥7.1) earthquakes. The results obtained may improve our understanding of the physics of the crustal phenomena that lead to strong earthquakes.
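
    For readers unfamiliar with natural time analysis, the sketch below (Python, on a synthetic catalogue) computes the variance κ1 of natural time, a quantity commonly used as an order parameter of seismicity: events are ordered as χ_k = k/N and weighted by normalized energies p_k ∝ 10^(1.5M). The paper's full procedure (sliding windows, magnitude thresholds, detection of fluctuation minima) is not reproduced here.

```python
import numpy as np

# Minimal sketch of the natural-time variance kappa_1, often used as an order
# parameter of seismicity. Events are ordered in "natural time" chi_k = k / N and
# weighted by normalized seismic energies p_k (proportional to 10^(1.5 M)).
# The magnitudes below are synthetic; the paper's windowing procedure is omitted.

def kappa1(magnitudes):
    energies = 10.0 ** (1.5 * np.asarray(magnitudes))
    p = energies / energies.sum()                 # normalized energies p_k
    chi = np.arange(1, len(p) + 1) / len(p)       # natural time chi_k = k / N
    return np.sum(p * chi**2) - np.sum(p * chi) ** 2

rng = np.random.default_rng(1)
mags = 3.0 + rng.exponential(0.5, size=200)       # synthetic catalogue
print(f"kappa_1 = {kappa1(mags):.4f}")
```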

    Twin‐engined diagnosis of discrete‐event systems

    Diagnosis of discrete-event systems (DESs) is computationally complex. This is why a variety of knowledge compilation techniques have been proposed, the most notable of which rely on a diagnoser. However, the construction of a diagnoser requires the generation of the whole system space, thereby making the approach impractical even for DESs of moderate size. To avoid total knowledge compilation while preserving efficiency, a twin-engined diagnosis technique is proposed in this paper, which is inspired by the two operational modes of the human mind. If the symptom of the DES is part of the knowledge or experience of the diagnosis engine, then Engine 1 allows for efficient diagnosis. If, instead, the symptom is unknown, then Engine 2 comes into play, which is far less efficient than Engine 1. Still, the experience acquired by Engine 2 is then integrated into the symptom dictionary of the DES. This way, if the same diagnosis problem arises anew, it will be solved by Engine 1 in linear time. The symptom dictionary can also be extended by specialized knowledge coming from scenarios, which are the most critical/probable behavioral patterns of the DES and need to be diagnosed quickly.
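
    The division of labour between the two engines can be caricatured in a few lines: Engine 1 is a constant-time lookup in the symptom dictionary, while Engine 2 is the expensive model-based computation whose result is then stored. The sketch below (Python) uses invented symptom and fault names and reduces Engine 2 to a placeholder.

```python
# Sketch of the twin-engined idea: Engine 1 is a constant-time lookup in a symptom
# dictionary; Engine 2 is a (much slower) model-based diagnosis routine, reduced
# here to a placeholder. Whatever Engine 2 computes is stored so that the same
# symptom is later handled by Engine 1. All symptom/candidate names are invented.

symptom_dictionary = {
    ("alarm_a", "alarm_b"): [{"f_valve"}],        # previously compiled experience
}

def engine2_full_diagnosis(symptom):
    """Placeholder for the expensive model-based engine (e.g. reconstructing all
    trajectories of the DES consistent with the symptom)."""
    return [{"f_sensor"}, {"f_sensor", "f_link"}]

def diagnose(symptom):
    if symptom in symptom_dictionary:             # Engine 1: known symptom
        return symptom_dictionary[symptom]
    candidates = engine2_full_diagnosis(symptom)  # Engine 2: unknown symptom
    symptom_dictionary[symptom] = candidates      # integrate the new experience
    return candidates

print(diagnose(("alarm_a", "alarm_b")))           # solved by Engine 1
print(diagnose(("alarm_c",)))                     # solved by Engine 2, then cached
print(diagnose(("alarm_c",)))                     # now solved by Engine 1
```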

    Electrical Characterization of Arcing Fault Behavior on 120/208V Secondary Networks

    Arcing faults have been a persistent problem on power systems for over one hundred years, damaging equipment and creating safety hazards for both utility personnel and the public. On low-voltage secondary networks, arcing faults are known to cause specific hazards collectively called "manhole events", which include smoke and fire in underground structures and, in extreme cases, explosions. This research provides the first comprehensive attempt to electrically characterize naturally occurring arcing fault behavior on 120/208V secondary networks. Research was performed in conjunction with the Consolidated Edison Company of New York, whereby a single low-voltage network was instrumented with thirty high-speed, high-fidelity data recording devices. For a nominal one-year period, these devices collected detailed, high-speed waveform recordings of arcing faults and other system transients, as well as statistical power system data, offering new insights into the behavior of arcing faults on low-voltage networks. Data obtained in this project have shown the intensity, persistence, and frequency of arcing faults on low-voltage networks to be much higher than commonly believed. Results indicate that arcing faults may persist on a more-or-less continuous basis for hours without self-extinguishing, may recur over a period of hours, days, or weeks without generating enough physical evidence to be reported by the public or to operate conventional protective devices, and may draw enough current to be observed at the primary substation serving the network. Additionally, simultaneous fault current measurements recorded at multiple locations across the network suggest the possibility of using multi-point secondary monitoring to detect and locate arcing faults before they cause a manhole event.