1,941 research outputs found

    Logic programming in the context of multiparadigm programming: the Oz experience

    Oz is a multiparadigm language that supports logic programming as one of its major paradigms. A multiparadigm language is designed to support different programming paradigms (logic, functional, constraint, object-oriented, sequential, concurrent, etc.) with equal ease. This article has two goals: to give a tutorial of logic programming in Oz and to show how logic programming fits naturally into the wider context of multiparadigm programming. Our experience shows that there are two classes of problems, which we call algorithmic and search problems, for which logic programming can help formulate practical solutions. Algorithmic problems have known efficient algorithms. Search problems do not have known efficient algorithms but can be solved with search. The Oz support for logic programming targets these two problem classes specifically, using the concepts needed for each. This is in contrast to the Prolog approach, which targets both classes with one set of concepts, which results in less than optimal support for each class. To explain the essential difference between algorithmic and search programs, we define the Oz execution model. This model subsumes both concurrent logic programming (committed-choice-style) and search-based logic programming (Prolog-style). Instead of Horn clause syntax, Oz has a simple, fully compositional, higher-order syntax that accommodates the abilities of the language. We conclude with lessons learned from this work, a brief history of Oz, and many entry points into the Oz literature.

    Comment: 48 pages, to appear in the journal "Theory and Practice of Logic Programming".
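
    The algorithmic/search distinction the abstract draws can be made concrete with a small sketch. The fragment below is Python rather than Oz, purely for illustration, and the function names and example data are hypothetical: it contrasts an algorithmic problem, which has a known efficient procedure, with a search problem, which is solved by enumerating choices under constraints.

        # Algorithmic problem: merging two sorted lists has a known
        # efficient (linear-time) procedure; no search is involved.
        def merge(xs, ys):
            out, i, j = [], 0, 0
            while i < len(xs) and j < len(ys):
                if xs[i] <= ys[j]:
                    out.append(xs[i]); i += 1
                else:
                    out.append(ys[j]); j += 1
            return out + xs[i:] + ys[j:]

        # Search problem: map coloring has no known efficient algorithm,
        # so we try choices and backtrack on constraint violations.
        def color(regions, adjacent, colors, assignment=None):
            assignment = assignment or {}
            if len(assignment) == len(regions):
                return assignment
            region = regions[len(assignment)]
            for c in colors:
                if all(assignment.get(n) != c for n in adjacent[region]):
                    result = color(regions, adjacent, colors,
                                   {**assignment, region: c})
                    if result is not None:
                        return result
            return None

        # Example: four mutually constrained regions, three colors.
        regions = ["WA", "NT", "SA", "Q"]
        adjacent = {"WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"],
                    "SA": ["WA", "NT", "Q"], "Q": ["NT", "SA"]}
        print(color(regions, adjacent, ["red", "green", "blue"]))

    In Oz itself, the second style would be written with the choice and search abstractions the article describes; the explicit Python recursion merely stands in for that machinery.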

    The Uses of Jurisdictional Redundancy: Interest, Ideology, and Innovation

    Instead of viewing the persistence of concurrency as a dysfunctional relic, one may hypothesize that it is a product of an institutional evolution. The persistence of the anomaly over time requires a search for a strong functional explanation. With such an approach, one makes the working assumption that the historical explanation of the origin of the structure of complex concurrency of jurisdiction, even if accurate, does not suffice to explain its persistence. It is this approach that I shall pursue here.

    An analysis of management control in a complex large-scale endeavor

    This study examines management control as it was performed in a large-scale complex endeavor. The analysis assesses the application of integrated management control in the Safeguard Ballistic Missile Defense (BMD) System program. It examines changes both in the management control situation and in the associated managerial response. The technique used for the analysis is the Parameter-Phase-Level (PPL) analysis matrix, which is fully developed and defined in the study.

    This study concludes that management control should be offensive rather than defensive, should be preventive in preference to curative, and should favor preview before the fact in lieu of review after the fact. It should be equally sensitive to quantitative and qualitative management information, should satisfy management needs, and should enhance the decision-making process. Integrated and proactive tools and techniques are the preferred foundation for management control of large-scale complex endeavors.

    The specific objectives of the study are threefold. First, the need for integrated management control in large-scale complex endeavors is addressed. The reality of integrated control as experienced in the Safeguard BMD System program is considered in the same context, and so is the relative importance of the three cardinal program parameters of cost, schedule, and technical performance over time. Second, having completed the critical examination of the individual cells in the PPL analysis matrix, the matrix is reassembled and refined in a manner dictated by the results of the analysis. This resulted in a reconfiguration of the matrix that differed from the original model. Finally, it is proposed that management can and should have a baseline for management control that is transferable, adaptive, and dynamic. This objective centers on the interrelationships among the cost, schedule, and technical performance parameters and the compelling need for proactive management control in large-scale complex endeavors.
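
    As a rough illustration of the matrix idea only: the PPL structure can be pictured as cells indexed by parameter, phase, and level. In the Python sketch below, only the three cardinal parameters come from the abstract; the phase and level names are invented placeholders, not the study's actual ones.

        # Hypothetical rendering of a Parameter-Phase-Level (PPL) matrix.
        parameters = ["cost", "schedule", "technical performance"]
        phases = ["definition", "development", "deployment"]      # assumed
        levels = ["program office", "contractor", "field"]        # assumed

        # One cell per (parameter, phase, level) combination, each holding
        # the management-control observation for that slice of the program.
        ppl = {(p, ph, lv): None
               for p in parameters for ph in phases for lv in levels}

        ppl[("cost", "development", "contractor")] = "overrun flagged in preview"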

    Exploiting Fine-Grain Concurrency Analytical Insights in Superscalar Processor Design

    This dissertation develops analytical models to provide insight into various design issues associated with superscalar-type processors, i.e., processors capable of executing multiple instructions per cycle. A survey of the existing machines and literature has been completed, with a proposed classification of various approaches for exploiting fine-grain concurrency. Optimization of a single pipeline is discussed based on an analytical model. The model-predicted performance curves are found to be in close proximity to published results using simulation techniques. A model is also developed for comparing different branch strategies for single-pipeline processors in terms of their effectiveness in reducing branch delay. The additional instruction fetch traffic generated by certain branch strategies is also studied and is shown to be a useful criterion for choosing between equally well-performing strategies.

    Next, processors with multiple pipelines are modelled to study the tradeoffs associated with deeper pipelines versus multiple pipelines. The model developed can reveal the cause of a performance bottleneck: insufficient resources to exploit discovered parallelism, insufficient instruction stream parallelism, or insufficient scope of concurrency detection. The cost associated with speculative (i.e., beyond basic block) execution is examined via probability distributions that characterize the inherent parallelism in the instruction stream. The throughput prediction of the analytic model is shown, using a variety of benchmarks, to be close to the measured static throughput of the compiler output, under resource and scope constraints. Further experiments provide misprediction delay estimates for these benchmarks under scope constraints, assuming beyond-basic-block, out-of-order execution and run-time scheduling. These results were derived using traces generated by the Multiflow TRACE SCHEDULING™ (*) compacting C and FORTRAN 77 compilers.

    A simplified extension of the model to include multiprocessors is also proposed. The extended model is used to analyze combined systems, such as superpipelined multiprocessors and superscalar multiprocessors, both with shared memory. It is shown that the number of pipelines (or processors) at which the maximum throughput is obtained is increasingly sensitive to the ratio of memory access time to network access delay as memory access time increases. Further, as a function of inter-iteration dependency distance, optimum throughput is shown to vary nonlinearly, whereas the corresponding optimum number of processors varies linearly. The predictions from the analytical model agree with published results based on simulations.

    (*) TRACE SCHEDULING is a trademark of Multiflow Computer, Inc.
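
    The flavor of such analytical models can be conveyed with a generic single-pipeline branch-delay calculation. The Python sketch below illustrates the modeling style only; it is not the dissertation's actual equations, and all parameter values are made up.

        # Generic analytical model of branch cost in a scalar pipeline:
        # effective CPI = 1 + branch_freq * average branch penalty,
        # and throughput (IPC) is its reciprocal.

        def effective_cpi(branch_freq, taken_frac, penalty_taken,
                          penalty_not_taken=0.0):
            """Average cycles per instruction, given the fraction of
            instructions that are branches, the fraction of branches
            taken, and the stall penalties for each outcome."""
            avg_penalty = (taken_frac * penalty_taken
                           + (1.0 - taken_frac) * penalty_not_taken)
            return 1.0 + branch_freq * avg_penalty

        def throughput(branch_freq, taken_frac, penalty_taken,
                       penalty_not_taken=0.0):
            """Instructions per cycle under the same assumptions."""
            return 1.0 / effective_cpi(branch_freq, taken_frac,
                                       penalty_taken, penalty_not_taken)

        # Compare two branch strategies, assuming 20% branches, 60% of
        # branches taken, and a 3-cycle redirect penalty (values invented).
        stall_all = throughput(0.20, 0.60, 3, 3)   # stall on every branch
        pred_nt   = throughput(0.20, 0.60, 3, 0)   # predict not taken
        print(f"stall-all: {stall_all:.2f} IPC, "
              f"predict-not-taken: {pred_nt:.2f} IPC")

    A model of this shape already supports the kind of comparison the abstract describes: two strategies with similar throughput can still differ in instruction fetch traffic, which then becomes the tie-breaking criterion.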

    Establishing a Framework for the Oversight of Major Defense Acquisition Programs - A Historical Analysis

    The Department of Defense (DoD) has budgeted over $134.5 billion for Fiscal Year 2004 for Acquisition, yet little is written about the personnel responsible for managing and evaluating Major Defense Acquisition Programs (MDAPs) -- those who perform Acquisition Oversight (AO). The AO process has not been studied in a disciplined manner during its 40-year history. Congress, past Administrations, and the DoD Inspector General have commissioned several studies on the AO process. Recommendations were considered and implemented such that the process evolved to where it stands today. Over 40 years separate the first iteration from the latest version. Commission reports, countless studies, and historians agree on the need for oversight in military acquisitions; they agree that the system takes too much money, takes too long, and does not perform as well as most would wish; yet they disagree on who should perform oversight. This thesis has three objectives: define, document, and utilize available literature to identify the organizations involved with the process as it evolved to its form today; build models of the AO process with emphasis on the chain of command as it existed in the 1950s, 1960s, 1970s, 1980s, and today; and evaluate each model on its ability to accomplish seven goals derived from Clinton's 1994 Process Action Team on AO report. The thesis was limited to the DoD AO process as it historically existed between the Air Force and the Secretary of Defense, or those serving in similar positions. The author reviewed relevant literature to model historical oversight hierarchies. Then expert opinions were gathered from that literature on how well the oversight process models performed. As expected, the oversight process has improved over time, but further improvements are currently being sought. Those seeking improvement would do well to study past processes and learn from their mistakes.

    Cogitator: a parallel, fuzzy, database-driven expert system

    The quest to build anthropomorphic machines has led researchers to focus on knowledge and the manipulation thereof. Recently, the expert system was proposed as a solution, working well in small, well-understood domains. However, these initial attempts highlighted the tedious process associated with building systems to display intelligence, the most notable obstacle being the Knowledge Acquisition Bottleneck. Attempts to circumvent this problem have led researchers to propose the use of machine learning on databases as a source of knowledge. Attempts to utilise databases as sources of knowledge have led to the development of Database-Driven Expert Systems. Furthermore, it has been ascertained that a requisite for intelligent systems is powerful computation. In response to these problems and proposals, a new type of database-driven expert system, Cogitator, is proposed. It is shown to circumvent the Knowledge Acquisition Bottleneck and to possess many other advantages over both traditional expert systems and connectionist systems, whilst having only minor disadvantages.
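
    For readers unfamiliar with the fuzzy component, a minimal sketch of graded rule firing may help. The Python below shows generic fuzzy matching of the kind used in fuzzy expert systems, not Cogitator's actual design; the linguistic terms and values are invented.

        # A fuzzy expert system grades how strongly each rule applies
        # instead of firing rules only on exact matches.

        def triangular(a, b, c):
            """Return a triangular membership function peaking at b."""
            def mu(x):
                if x <= a or x >= c:
                    return 0.0
                return (x - a) / (b - a) if x < b else (c - x) / (c - b)
            return mu

        # Hypothetical linguistic terms for a temperature reading.
        cold = triangular(-10.0, 0.0, 12.0)
        warm = triangular(8.0, 20.0, 32.0)

        reading = 10.0
        # Graded antecedents instead of a single exact match:
        print(f"cold applies {cold(reading):.2f}, "
              f"warm applies {warm(reading):.2f}")

        # Rule: IF temperature is warm AND humidity is high THEN ...
        # Fuzzy AND is the minimum of the antecedent grades.
        strength = min(warm(reading), 0.7)  # 0.7: hypothetical humidity grade
        print(f"rule applies with degree {strength:.2f}")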