
    Computing Loops With at Most One External Support Rule

    If a loop has no external support rules, then its loop formula is equivalent to a set of unit clauses; and if it has exactly one external support rule, then its loop formula is equivalent to a set of binary clauses. In this paper, we consider how to compute these loops and their loop formulas in a normal logic program, and use them to derive consequences of a logic program. We show that an iterative procedure based on unit propagation, the program completion, and the loop formulas of loops with no external support rules can compute the same consequences as the “Expand” operator in smodels, which is known to compute the well-founded model when the given normal logic program has no constraints. We also show that, using the loop formulas of loops with at most one external support rule, the same procedure can compute more consequences, and these extra consequences can help ASP solvers such as cmodels to find answer sets of certain logic programs.
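The loops in question are the strongly connected components of a program's positive dependency graph, and a rule externally supports a loop when its head is in the loop but its positive body does not touch the loop. The following is a minimal sketch of both computations, assuming a hypothetical rule representation of `(head, positive_body)` pairs; it is illustrative only and is not the algorithm of the paper.

```python
def positive_sccs(rules, atoms):
    """Tarjan's algorithm over the positive dependency graph:
    an edge head -> b for every positive body atom b of a rule."""
    graph = {a: set() for a in atoms}
    for head, pos_body in rules:
        graph[head].update(pos_body)
    index, low, stack, on_stack, sccs = {}, {}, [], set(), []
    counter = [0]

    def strongconnect(v):
        index[v] = low[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph[v]:
            if w not in index:
                strongconnect(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:
            scc = set()
            while True:
                w = stack.pop()
                on_stack.discard(w)
                scc.add(w)
                if w == v:
                    break
            sccs.append(scc)

    for a in atoms:
        if a not in index:
            strongconnect(a)
    return sccs

def external_supports(loop, rules):
    """Rules whose head is in the loop but whose positive body
    is disjoint from the loop."""
    return [(h, b) for h, b in rules
            if h in loop and not (set(b) & loop)]
```

For the program `a :- b.  b :- a.`, the loop `{a, b}` has no external support rule, so its loop formula reduces to the unit clauses `¬a` and `¬b`.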

    First-Order Models for Configuration Analysis

    Our world teems with networked devices. Their configuration exerts an ever-expanding influence on our daily lives. Yet correctly configuring systems, networks, and access-control policies is notoriously difficult, even for trained professionals. Automated static analysis techniques provide a way to both verify a configuration's correctness and explore its implications. One such approach is scenario-finding: showing concrete scenarios that illustrate potential (mis-)behavior. Scenarios even benefit users without technical expertise, as concrete examples can both trigger and improve users' intuition about their system. This thesis describes a concerted research effort toward improving scenario-finding tools for configuration analysis. We developed Margrave, a scenario-finding tool with special features designed for security policies and configurations. Margrave is not tied to any one specific policy language; rather, it provides an intermediate input language as expressive as first-order logic. This flexibility allows Margrave to reason about many different types of policy. We show Margrave in action on Cisco IOS, a common language for configuring firewalls, demonstrating that scenario-finding with Margrave is useful for debugging and validating real-world configurations. This thesis also presents a theorem showing that, for a restricted subclass of first-order logic, if a sentence is satisfiable then there must exist a satisfying scenario no larger than a computable bound. For such sentences scenario-finding is complete: one can be certain that no scenarios are missed by the analysis, provided that one checks up to the computed bound. We demonstrate that many common configurations fall into this subclass and give algorithmic tests for both sentence membership and counting. We have implemented both in Margrave. Aluminum is a tool that eliminates superfluous information in scenarios and allows users' goals to guide which scenarios are displayed. We quantitatively show that our methods of scenario reduction and exploration are effective and quite efficient in practice. Our work on Aluminum is making its way into other scenario-finding tools. Finally, we describe FlowLog, a language for network programming that we created with analysis in mind. We show that FlowLog can express many common network programs, and demonstrate that automated analysis and bug-finding for FlowLog are both feasible and complete.
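The completeness theorem above says that, for sentences in the restricted subclass, checking all scenarios up to the computed bound misses nothing. Margrave itself compiles queries to SAT; the following is only a brute-force sketch of the underlying idea, with a made-up toy constraint (`permits_without_logging`) standing in for a real policy query.

```python
from itertools import product

def find_scenario(constraint, max_size):
    """Brute-force bounded model finding: enumerate every binary
    relation over domains {0..n-1} for n up to max_size and return
    the first (domain, relation) pair satisfying `constraint`,
    or None if no scenario exists up to the bound."""
    for n in range(1, max_size + 1):
        domain = range(n)
        pairs = list(product(domain, repeat=2))
        for bits in product([False, True], repeat=len(pairs)):
            rel = {p for p, keep in zip(pairs, bits) if keep}
            if constraint(domain, rel):
                return domain, rel
    return None

# Toy query: "some (subject, resource) pair is granted, and no
# subject is granted access to itself" -- a stand-in for comparing
# the consequences of two policies.
def permits_without_logging(domain, rel):
    return bool(rel) and all(s != r for s, r in rel)
```

If the subclass theorem applies and `find_scenario` returns `None` at the computed bound, the query is unsatisfiable outright; the enumeration here is exponential and exists purely to make the completeness argument concrete.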

    Margrave: An Improved Analyzer for Access-Control and Configuration Policies

    As our society grows more dependent on digital systems, policies that regulate access to electronic resources are becoming more common. However, such policies are notoriously difficult to configure properly, even for trained professionals. An incorrectly written access-control policy can result in inconvenience, financial damage, or even physical danger. The difficulty is more pronounced when multiple types of policy interact with each other, such as in routers on a network. This thesis presents a policy-analysis tool called Margrave. Given a query about a set of policies, Margrave returns a complete collection of scenarios that satisfy the query. Since the query language allows multiple policies to be compared, Margrave can be used to obtain an exhaustive list of the consequences of a seemingly innocent policy change. This feature gives policy authors the benefits of formal analysis without requiring that they state any formal properties about their policies. Our query language is equivalent to order-sorted first-order logic (OSL). Therefore, our scenario-finding approach is, in general, only complete up to a user-provided bound on scenario size. To mitigate this limitation, we identify a class of OSL that we call Order-Sorted Effectively Propositional Logic (OS-EPL). We give a linear-time algorithm for testing membership in OS-EPL. Sentences in this class have the Finite Model Property, and thus Margrave's results on such queries are complete without user intervention.
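The unsorted ancestor of OS-EPL is the Bernays-Schoenfinkel class: prenex sentences with an ∃*∀* quantifier prefix and no function symbols, which have the Finite Model Property. A minimal sketch of the prefix check, over a hypothetical tuple-based AST of my own invention (the thesis's actual OS-EPL test also handles sorts and function symbols, which this omits):

```python
# AST: ("exists", var, body) | ("forall", var, body) | quantifier-free matrix.
def is_epl(formula):
    """Check the exists*-forall* quantifier prefix in one left-to-right
    pass: an existential may not appear after a universal.  Linear in
    the prefix length; matrix checks are omitted for brevity."""
    seen_forall = False
    node = formula
    while isinstance(node, tuple) and node[0] in ("exists", "forall"):
        kind, _var, body = node
        if kind == "forall":
            seen_forall = True
        elif seen_forall:  # an exists after a forall: outside the class
            return False
        node = body
    return True
```

For sentences passing this test, a satisfying model needs no more elements than the existential variables (plus constants), which is exactly why scenario-finding becomes complete without a user-supplied bound.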

    A Framework for Exploring Finite Models

    This thesis presents a framework for understanding first-order theories by investigating their models. A common application is to help users, who are not necessarily experts in formal methods, analyze software artifacts such as access-control policies, system configurations, protocol specifications, and software designs. The framework suggests a strategy for exploring the space of finite models of a theory via augmentation. It also introduces a notion of provenance information for understanding the elements and facts in models with respect to the statements of the theory. The primary mathematical tool is an information-preserving preorder, induced by homomorphisms on models, which defines the paths along which models are explored. The central algorithmic idea consists of a controlled construction of the Herbrand base of the input theory, followed by SMT solving to generate models that are minimal under the homomorphism preorder. Our framework for model exploration is realized in Razor, a model-finding assistant that provides the user with a read-eval-print loop for investigating models.
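The homomorphism preorder orders models by how much they commit to: model A precedes model B when some map on elements carries every fact of A to a fact of B. A minimal brute-force sketch of that check between two concrete finite models, assuming a hypothetical dict-of-relations representation (Razor itself decides this via SMT, not enumeration):

```python
from itertools import product

def homomorphism_exists(dom_a, rels_a, dom_b, rels_b):
    """Exhaustively search for a homomorphism h: A -> B, i.e. a map on
    elements such that every tuple in each relation of A lands in the
    corresponding relation of B.  Models are (ordered domain list,
    dict mapping relation name -> set of tuples)."""
    for images in product(dom_b, repeat=len(dom_a)):
        h = dict(zip(dom_a, images))
        if all(tuple(h[x] for x in t) in rels_b[name]
               for name, tuples in rels_a.items()
               for t in tuples):
            return True
    return False
```

A model that maps homomorphically into every other model of the theory contains no fact the theory does not force, which is what makes homomorphism-minimal models the natural starting points for exploration by augmentation.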

    Modeling wildland fire radiance in synthetic remote sensing scenes

    This thesis develops a framework for implementing radiometric modeling and visualization of wildland fire. The ability to accurately model the physical and optical properties of wildfire and burn areas in an infrared remote sensing system will assist efforts in phenomenology studies, algorithm development, and sensor evaluation. Synthetic scenes are also needed for Wildland Fire Dynamic Data Driven Application Systems (DDDAS) for model feedback and update. A fast approach is presented to predict 3D flame geometry based on real-time measured heat flux, fuel loading, and wind speed. 3D flame geometry enables more realistic radiometric simulation. A Coupled Atmosphere-Fire Model is used to derive the parameters of the motion field and to simulate fire dynamics and evolution. Broadband target (fire, smoke, and burn scar) spectra are synthesized from ground measurements and MODTRAN runs. Combining the temporal and spatial distribution of fire parameters with the target spectra, a physics-based model is used to generate radiance scenes depicting what the target might look like as seen by an airborne sensor. Radiance scene rendering of the 3D flame includes 2D hot ground and burn scar cooling, 3D flame direct radiation, and 3D indirect reflected radiation. Fire Radiative Energy (FRE) is a parameter derived from infrared remote sensing data that is used to determine the radiative energy released during a wildland fire. FRE derived with the bi-spectral method and the MIR radiance method is used to verify the fire radiance scenes synthesized in this research. The results for the synthetic scenes agree well with published values derived from wildland fire images.
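The MIR radiance method works because, by Planck's law, a flaming front at fire temperatures outshines ambient ground by orders of magnitude in the mid-infrared. A minimal sketch of that physical contrast; the wavelength and temperatures below are illustrative choices, and real FRE retrieval adds sensor-specific coefficients and atmospheric correction (e.g. via MODTRAN) that are not shown here.

```python
import math

# Physical constants (SI)
H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temp_k):
    """Blackbody spectral radiance L(lambda, T) in W m^-2 sr^-1 m^-1."""
    num = 2.0 * H * C**2 / wavelength_m**5
    den = math.exp(H * C / (wavelength_m * KB * temp_k)) - 1.0
    return num / den

# Contrast at ~3.9 um between a flaming front (~1000 K) and
# ambient ground (~300 K): several orders of magnitude.
mir = 3.9e-6
fire = planck_radiance(mir, 1000.0)
background = planck_radiance(mir, 300.0)
```

That large contrast is what lets the fire pixel's MIR radiance, with the background subtracted, be related almost linearly to the radiative power of the fire.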

    Adequate model complexity and data resolution for effective constraint of simulation models by 4D seismic data

    4D seismic data bears valuable spatial information about production-related changes in the reservoir. It is a challenging task, though, to make simulation models honour it. A strict spatial tie to seismic data requires adequate model complexity in order to assimilate the details of the seismic signature. On the other hand, not all details in the seismic signal are critical, or even relevant, to the flow characteristics of the simulation model, so fitting them may compromise the predictive capability of the models. So how complex should a model be to take advantage of the information in seismic data, and which details should be matched? This work aims to show how the choice of parameterisation affects the efficiency of assimilating spatial information from seismic data. It also demonstrates, in light of the limited detectability of events on the seismic map and of modelling errors, the level of detail at which the seismic signal carries useful information for the simulation model. The problem of optimal model complexity is investigated in the context of choosing a model parameterisation that allows effective assimilation of the spatial information in the seismic map. In this study, a parameterisation scheme based on deterministic objects derived from seismic interpretation creates bias in model predictions, which results in a poor fit to historic data. The key to rectifying the bias was to increase the flexibility of the parameterisation, either by increasing the number of parameters or by using a scheme that does not impose prior information incompatible with the data, such as pilot points in this case. Using history-matching experiments with a combined dataset of production and seismic data, we identify a level of match of the seismic maps that results in an optimal constraint on the simulation models. Better-constrained models were identified by the quality of their forecasts and the closeness of their pressure and saturation states to the truth case. The results indicate that a significant amount of the detail in the seismic maps does not contribute to a constructive constraint by the seismic data, for two reasons. First, smaller details are a specific response of the system that generated the observed data, and as such are not relevant to the flow characteristics of the model; second, the resolution of the seismic map itself is limited by the seismic bandwidth and noise. The results suggest that the notion of a good match for 4D seismic maps, commonly equated with a visually close match, is not universally applicable.
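History matching against a combined dataset typically means minimizing a misfit objective with one term per data type, weighted to trade production fit against seismic fit. A minimal least-squares sketch of such an objective; the `model` dictionary, field names, and weights are hypothetical placeholders, not the thesis's actual formulation.

```python
def combined_misfit(model, prod_obs, seis_obs, w_prod=1.0, w_seis=1.0):
    """Weighted least-squares misfit combining production data and a
    4D seismic map.  `model` supplies simulated analogues of both
    data types; the weights control how strongly each constrains
    the history match."""
    prod_sim, seis_sim = model["production"], model["seismic"]
    prod_term = sum((s - o) ** 2 for s, o in zip(prod_sim, prod_obs))
    seis_term = sum((s - o) ** 2 for s, o in zip(seis_sim, seis_obs))
    return w_prod * prod_term + w_seis * seis_term
```

Down-weighting the seismic term is one crude way to avoid over-fitting seismic detail that, per the findings above, lies below the detectability limit of the map.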

    Explanation of Defects and Identification of Dependencies in Interrelated Feature Models

    Software product line engineering has proven to be a successful development approach for variable software in many industrial applications, especially in the automotive domain. Variability is at the center of software product line engineering: by modeling variability on a high level of abstraction, software product lines enable a wide variety of product variants. Nevertheless, modeling variability is not a trivial task. It becomes more complex with a growing number of variants and, hence, more error-prone. For instance, modeling errors may render product configurations impossible. The detection of such defects is well researched. A user-friendly explanation of defect causes, the subject of the present work, is still a challenge and is becoming increasingly important. In the scope of this thesis, we elaborate a generic algorithm that generates explanations for any kind of modeling defect based on predicate logic. A resulting explanation is generated in natural language in a user-friendly, structured manner and displayed as a tooltip during modeling. Additionally, the basic algorithm is extended to find the shortest explanation and to compute and visually highlight the relevance of the individual parts of an explanation. Furthermore, we detect hidden dependencies among interrelated product lines and apply the generic explanation algorithm mentioned above to explain such dependencies. In a quantitative and qualitative analysis, we evaluate the explanation algorithm and the resulting explanations with respect to their correctness, understandability, performance impact, and length, using existing examples. For hidden dependencies, we additionally inspect the typical situations in which such dependencies arise most often. By analyzing the respective explanations, we can furthermore determine how many product lines are involved in a hidden dependency. To summarize the results: the explanations are correct and understandable, and the generation of explanations scales to product lines of different sizes. Generating an explanation with the basic algorithm approximately doubles the computational time of the underlying model analysis, while the extended algorithm (which searches for a shortest explanation) approximately triples it. Notably, the first explanation found is most often already the shortest; otherwise, the shortest explanation is usually 25% - 50% shorter. Explanation length grows only slightly with the size of a product line. Finally, we observe that, in the evaluated product lines, up to five interrelated feature models may lead to a hidden dependency.
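Feature models reduce to propositional constraints, so a defect such as a dead feature (one selectable in no valid configuration) can, for small models, be found by exhaustive enumeration; the constraints that rule the feature out are the raw material for an explanation. A minimal sketch with a made-up toy model (the thesis's algorithm works on predicate logic and scales far beyond brute force):

```python
from itertools import product

def valid_configs(features, constraints):
    """Enumerate all true/false assignments of the features and keep
    those satisfying every constraint (each constraint is a predicate
    on the assignment dict)."""
    results = []
    for bits in product([False, True], repeat=len(features)):
        cfg = dict(zip(features, bits))
        if all(c(cfg) for c in constraints):
            results.append(cfg)
    return results

def dead_features(features, constraints):
    """A feature is dead if no valid configuration selects it."""
    configs = valid_configs(features, constraints)
    return [f for f in features if not any(cfg[f] for cfg in configs)]

# Toy defect: the root is mandatory, A requires B, and B excludes
# the root -- which silently kills both A and B.
features = ["root", "A", "B"]
constraints = [
    lambda c: c["root"],                    # root is always selected
    lambda c: not c["A"] or c["B"],         # A implies B
    lambda c: not c["B"] or not c["root"],  # B excludes root
]
```

Here the chain "A implies B, B excludes root, root is mandatory" is precisely the kind of constraint sequence a generated explanation would present to the modeler.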