
    LOT: Logic Optimization with Testability - new transformations for logic synthesis

    A new approach to optimizing multilevel logic circuits is introduced. Given a multilevel circuit, the synthesis method optimizes its area while simultaneously enhancing its random-pattern testability. The method is based on structural transformations at the gate level. New transformations involving EX-OR gates as well as Reed–Muller expansions have been introduced into the synthesis of multilevel circuits. The method is augmented with transformations that specifically enhance random-pattern testability while reducing area. Testability enhancement is an integral part of our synthesis methodology. Experimental results show that the proposed methodology not only achieves lower area than comparable tools but also better testability than available testability enhancement tools such as tstfx. For the ISCAS-85 benchmark circuits in particular, the EX-OR gate-based transformations were observed to produce smaller circuits than other state-of-the-art logic optimization tools.
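
    Since the abstract leans on Reed–Muller expansions, a minimal sketch may help: the positive-polarity Reed–Muller (PPRM) coefficients of a Boolean function can be computed from its truth table with the standard XOR butterfly over GF(2). This illustrates the underlying algebra only; it is not the paper's synthesis tool.

        # PPRM coefficients via the Reed-Muller (binary Moebius) transform.
        # f(x) is then the XOR of coeffs[m] over monomials m "covered" by x.
        def reed_muller_coeffs(truth_table):
            """Truth table of length 2**n -> PPRM coefficients."""
            c = list(truth_table)
            n = len(c).bit_length() - 1
            for i in range(n):
                step = 1 << i
                for j in range(len(c)):
                    if j & step:
                        c[j] ^= c[j ^ step]   # XOR butterfly over GF(2)
            return c

        # Example: f(a, b) = a OR b, truth table indexed as b*2 + a.
        # PPRM form is a XOR b XOR ab, i.e. coefficients [0, 1, 1, 1].
        print(reed_muller_coeffs([0, 1, 1, 1]))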

    Assessing the reliability of adaptive power system protection schemes

    Adaptive power system protection schemes can be used to improve the performance of existing protection under certain network conditions. However, their deployment in the field is impeded by their perceived inferior reliability compared to existing protection arrangements. Moreover, their validation can be problematic due to the perceived high likelihood of failure modes or incorrect setting selection under variable network conditions. Reliability assessment (including risk assessment) is one of the decisive measures that can be used to verify adaptive protection scheme performance. This paper proposes a generic methodology for assessing the reliability of adaptive protection. The method involves identifying the initiating events and scenarios that lead to protection failures and quantifying the probability of occurrence of each failure. A numerical example of the methodology applied to an adaptive distance protection scheme is provided.
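
    To make the quantification step concrete, here is a minimal sketch (not the paper's model) of aggregating scenario-level failure probabilities: each initiating event carries an occurrence probability and a conditional probability that the adaptive scheme fails given that event. All event names and numbers are illustrative assumptions.

        # Illustrative initiating events: (P(event), P(scheme fails | event)).
        scenarios = {
            "line_outage_topology_change": (0.10, 0.02),
            "comms_failure_during_update": (0.05, 0.20),
            "incorrect_setting_selection": (0.02, 0.50),
        }

        # Assuming independent scenarios, P(failure) = 1 - prod(1 - p_i * q_i).
        p_ok = 1.0
        for p_event, p_fail_given_event in scenarios.values():
            p_ok *= 1.0 - p_event * p_fail_given_event
        print(f"P(adaptive protection failure) ~= {1.0 - p_ok:.4f}")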

    An accurate method to correct atmospheric phase delay for InSAR with the ERA5 global atmospheric model

    Differential SAR Interferometry (DInSAR) has demonstrated an unprecedented ability to monitor ground deformation on a large scale with centimeter to millimeter accuracy. However, atmospheric artifacts due to spatial and temporal variations of the atmospheric state often affect the reliability and accuracy of its results. The commonly known Atmospheric Phase Screen (APS) appears in the interferograms as ghost fringes unrelated to either topography or deformation. Atmospheric artifact mitigation remains one of the biggest challenges to be addressed within the DInSAR community. State-of-the-art research has shown that atmospheric artifacts can be partially compensated with empirical models, point-wise GPS zenith path delay, and numerical weather prediction models. In this study, we implement an accurate and realistic computing strategy that uses atmospheric reanalysis ERA5 data to estimate atmospheric artifacts. With this approach, the Line-of-Sight (LOS) path between the satellite trajectory and the monitored points is considered, rather than being estimated from the zenith path delay. Compared with the zenith delay-based method, the key advantage is that it avoids errors caused by anisotropic atmospheric phenomena. The method is validated with Sentinel-1 data in three different test sites: Tenerife island (Spain), Almería (Spain), and Crete island (Greece). The effectiveness and performance of the method in removing APS from interferograms is evaluated in the three test sites, showing a great improvement with respect to the zenith-based approach.
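
    The core idea, integrating along the actual LOS instead of scaling a zenith delay by 1/cos(incidence), can be sketched as follows. The refractivity field below is a toy assumption; in the paper's setting, ERA5 supplies pressure, temperature, and humidity on a 3-D grid.

        import numpy as np

        def refractivity(x_km, h_km):
            # Toy laterally varying field (N-units); real values come from ERA5.
            return 300.0 * np.exp(-h_km / 7.0) * (1.0 + 0.05 * np.sin(x_km / 20.0))

        inc = np.deg2rad(35.0)            # incidence angle of the SAR LOS
        h = np.linspace(0.0, 30.0, 3000)  # integration heights, km
        dh = h[1] - h[0]

        # Zenith-scaled estimate: integrate straight up, divide by cos(incidence).
        zenith = 1e-6 * np.sum(refractivity(0.0, h)) * dh * 1e3  # metres
        zenith_scaled = zenith / np.cos(inc)

        # LOS estimate: walk the slant ray, sampling the field where the ray is.
        x_along = h * np.tan(inc)  # horizontal offset of the ray, km
        los = 1e-6 * np.sum(refractivity(x_along, h)) * dh * 1e3 / np.cos(inc)

        print(f"zenith-scaled: {zenith_scaled:.3f} m, LOS: {los:.3f} m")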

    Inter-organizational fault management: Functional and organizational core aspects of management architectures

    Outsourcing -- successful, and sometimes painful -- has become one of the hottest topics in IT service management discussions over the past decade. IT services are outsourced to external service providers in order to reduce the effort and overhead of delivering these services within the organization itself. More recently, IT service providers themselves have also started either to outsource parts of their services or to deliver those services in a non-hierarchical cooperation with other providers. Splitting a service into several service parts is a non-trivial task, as the parts have to be implemented, operated, and maintained by different providers. One key aspect of such inter-organizational cooperation is fault management, because it is crucial to locate and solve problems that reduce the quality of service quickly and reliably. In this article we present the results of a thorough use-case-based requirements analysis for an architecture for inter-organizational fault management (ioFMA). Furthermore, a concept of the organizational and functional model of the ioFMA is given.
    Comment: International Journal of Computer Networks & Communications (IJCNC)
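
    Since the article centers on an architecture, a small hypothetical sketch may illustrate the coordination problem: the kind of fault record providers would need to exchange so that a fault in one provider's service part can be correlated with the end-to-end services it degrades. All field names are illustrative assumptions, not the ioFMA specification.

        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class FaultReport:
            report_id: str
            provider: str               # organization operating the faulty part
            service_part: str           # sub-service implemented by that provider
            affected_services: List[str] = field(default_factory=list)
            severity: str = "degraded"  # e.g. "degraded" or "outage"

        def route_report(report: FaultReport, subscriptions: dict) -> List[str]:
            """Return the providers that must be notified about this report."""
            notified = set()
            for service in report.affected_services:
                notified.update(subscriptions.get(service, []))
            notified.discard(report.provider)  # the reporter already knows
            return sorted(notified)

        subs = {"end2end-vpn": ["provider-A", "provider-B"]}
        r = FaultReport("f-001", "provider-B", "backbone-link", ["end2end-vpn"])
        print(route_report(r, subs))  # ['provider-A']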

    Machine Learning in Wireless Sensor Networks: Algorithms, Strategies, and Applications

    Wireless sensor networks monitor dynamic environments that change rapidly over time. This dynamic behavior is either caused by external factors or initiated by the system designers themselves. To adapt to such conditions, sensor networks often adopt machine learning techniques to eliminate the need for unnecessary redesign. Machine learning also inspires many practical solutions that maximize resource utilization and prolong the lifespan of the network. In this paper, we present an extensive literature review, covering the period 2002-2013, of machine learning methods that have been used to address common issues in wireless sensor networks (WSNs). The advantages and disadvantages of each proposed algorithm are evaluated against the corresponding problem. We also provide a comparative guide to aid WSN designers in developing suitable machine learning solutions for their specific application challenges.
    Comment: Accepted for publication in IEEE Communications Surveys and Tutorials
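
    As a flavor of the techniques such a survey covers, here is a minimal sketch of k-means clustering of node positions, a common building block for cluster-based routing and energy management in WSNs. The data and parameters are toy assumptions.

        import numpy as np

        def kmeans(points, k, iters=50, seed=0):
            """Plain k-means on node coordinates; returns labels and centers."""
            rng = np.random.default_rng(seed)
            centers = points[rng.choice(len(points), size=k, replace=False)]
            for _ in range(iters):
                # Assign every node to its nearest center.
                d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
                labels = d.argmin(axis=1)
                # Move each center to the mean of its members.
                for j in range(k):
                    if np.any(labels == j):
                        centers[j] = points[labels == j].mean(axis=0)
            return labels, centers

        nodes = np.random.default_rng(1).uniform(0, 100, size=(60, 2))  # x, y in m
        labels, centers = kmeans(nodes, k=4)
        print("cluster sizes:", np.bincount(labels, minlength=4))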

    Experimental analysis of computer system dependability

    This paper reviews an area that has evolved over the past 15 years: experimental analysis of computer system dependability. Methodologies and advances are discussed for the three basic approaches used in the area: simulated fault injection, physical fault injection, and measurement-based analysis. The three approaches are suited, respectively, to dependability evaluation in the three phases of a system's life: the design phase, the prototype phase, and the operational phase. Before the discussion of these phases, several statistical techniques used in the area are introduced. For each phase, a classification of research methods or study topics is outlined, followed by a discussion of these methods or topics as well as representative studies. The statistical techniques introduced include the estimation of parameters and confidence intervals, probability distribution characterization, and several multivariate analysis methods. Importance sampling, a statistical technique used to accelerate Monte Carlo simulation, is also introduced. The discussion of simulated fault injection covers electrical-level, logic-level, and function-level fault injection methods as well as representative simulation environments such as FOCUS and DEPEND. The discussion of physical fault injection covers hardware, software, and radiation fault injection methods as well as several software and hybrid tools, including FIAT, FERRARI, HYBRID, and FINE. The discussion of measurement-based analysis covers measurement and data processing techniques, basic error characterization, dependency analysis, Markov reward modeling, software dependability, and fault diagnosis. The discussion involves several important issues studied in the area, including fault models, fast simulation techniques, workload/failure dependency, correlated failures, and software fault tolerance.
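
    Of the statistical techniques the survey introduces, importance sampling is the easiest to demonstrate in a few lines. The sketch below estimates a textbook rare-event probability, P(Z > 4) for standard normal Z, by sampling from a shifted proposal and reweighting; the target is a stand-in for a dependability model, not one of the surveyed tools.

        import numpy as np

        rng = np.random.default_rng(0)
        n, t = 100_000, 4.0

        # Plain Monte Carlo: almost no samples land in the rare region.
        z = rng.standard_normal(n)
        print("plain MC estimate:", np.mean(z > t))

        # Importance sampling from N(t, 1); the likelihood ratio
        # phi(y) / phi(y - t) = exp(-t*y + t**2 / 2) corrects the bias.
        y = rng.standard_normal(n) + t
        w = np.exp(-t * y + t * t / 2.0)
        print("importance sampling estimate:", np.mean((y > t) * w))  # ~3.2e-5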

    Improving InSAR geodesy using global atmospheric models

    Spatial and temporal variations of pressure, temperature, and water vapor content in the atmosphere introduce significant confounding delays in Interferometric Synthetic Aperture Radar (InSAR) observations of ground deformation and bias estimates of regional strain rates. Producing robust estimates of tropospheric delays remains one of the key challenges in increasing the accuracy of ground deformation measurements using InSAR. Recent studies have revealed the efficiency of global atmospheric reanalyses in mitigating the impact of tropospheric delays, motivating further exploration of their potential. Here, we explore the effectiveness of these models in several geographic and tectonic settings, on both single interferograms and time series analysis products. Both the hydrostatic and the wet contributions to the phase delay are important to account for. We validate these path delay corrections by comparing them with estimates of vertically integrated atmospheric water vapor content derived from the passive multi-spectral imager MERIS, onboard the ENVISAT satellite. In general, the performance of the prediction depends on the vigor of atmospheric turbulence. We discuss (1) how separating atmospheric and orbital contributions allows one to better measure long-wavelength deformation, (2) how atmospheric delays affect measurements of surface deformation following earthquakes, and (3) how such a method allows us to reduce biases in multi-year strain rate estimates by reducing the influence of unevenly sampled seasonal oscillations of the tropospheric delay.
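
    The hydrostatic contribution mentioned above has a standard closed form (the Saastamoinen model); the wet contribution requires the full humidity profile from the reanalysis and is omitted here. A minimal sketch, with the usual constants quoted from memory:

        import math

        def zenith_hydrostatic_delay(p_hpa, lat_deg, height_m):
            """Zenith hydrostatic delay in metres (Saastamoinen model)."""
            f = (1.0 - 0.00266 * math.cos(2.0 * math.radians(lat_deg))
                 - 0.28e-6 * height_m)
            return 0.0022768 * p_hpa / f

        # Sea-level pressure of 1013 hPa at 35 degrees latitude gives ~2.3 m,
        # which is why the hydrostatic term cannot be ignored.
        print(f"{zenith_hydrostatic_delay(1013.0, 35.0, 0.0):.3f} m")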

    Some thoughts on the use of InSAR data to constrain models of surface deformation: Noise structure and data downsampling

    Repeat-pass Interferometric Synthetic Aperture Radar (InSAR) provides spatially dense maps of surface deformation with potentially tens of millions of data points. Here we estimate the actual covariance structure of noise in InSAR data. We compare the results for several independent interferograms with a large ensemble of GPS observations of tropospheric delay and discuss how the common approaches used during processing of InSAR data affect the inferred covariance structure. Motivated by computational concerns associated with numerical modeling of deformation sources, we then combine the data-covariance information with the inherent resolution of an assumed source model to develop an efficient algorithm for spatially variable data resampling (or averaging). We illustrate these technical developments with two earthquake scenarios at opposite ends of the earthquake magnitude spectrum. For the larger event, our goal is to invert for the coseismic fault slip distribution; for the smaller event, we infer the hypocenter location and moment. We compare the results of inversions using several different resampling algorithms, and we assess the importance of using the full noise covariance matrix.
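
    A minimal sketch of why the full covariance matters: an exponential covariance model C(r) = sigma**2 * exp(-r / L), a common empirical fit to InSAR noise structure (sigma and L below are assumptions), feeds a generalized least-squares estimate of toy model parameters.

        import numpy as np

        rng = np.random.default_rng(0)
        pts = rng.uniform(0, 50, size=(200, 2))  # data point locations, km
        sigma, L = 0.01, 10.0                    # 1 cm noise, 10 km length scale

        r = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
        C = sigma**2 * np.exp(-r / L)            # full noise covariance matrix

        G = np.column_stack([np.ones(len(pts)), pts])  # toy design: planar ramp
        m_true = np.array([0.05, 1e-3, -5e-4])
        d = G @ m_true + rng.multivariate_normal(np.zeros(len(pts)), C)

        # GLS, m = (G^T C^-1 G)^-1 G^T C^-1 d, done by Cholesky whitening.
        Lc = np.linalg.cholesky(C)
        Gw = np.linalg.solve(Lc, G)
        dw = np.linalg.solve(Lc, d)
        m_hat, *_ = np.linalg.lstsq(Gw, dw, rcond=None)
        print("true:", m_true, "estimated:", m_hat)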