Logic programming in the context of multiparadigm programming: the Oz experience
Oz is a multiparadigm language that supports logic programming as one of its
major paradigms. A multiparadigm language is designed to support different
programming paradigms (logic, functional, constraint, object-oriented,
sequential, concurrent, etc.) with equal ease. This article has two goals: to
give a tutorial of logic programming in Oz and to show how logic programming
fits naturally into the wider context of multiparadigm programming. Our
experience shows that there are two classes of problems, which we call
algorithmic and search problems, for which logic programming can help formulate
practical solutions. Algorithmic problems have known efficient algorithms.
Search problems do not have known efficient algorithms but can be solved with
search. The Oz support for logic programming targets these two problem classes
specifically, using the concepts needed for each. This is in contrast to the
Prolog approach, which targets both classes with one set of concepts, which
results in less than optimal support for each class. To explain the essential
difference between algorithmic and search programs, we define the Oz execution
model. This model subsumes both concurrent logic programming
(committed-choice-style) and search-based logic programming (Prolog-style).
Instead of Horn clause syntax, Oz has a simple, fully compositional,
higher-order syntax that accommodates the abilities of the language. We
conclude with lessons learned from this work, a brief history of Oz, and many
entry points into the Oz literature.
Comment: 48 pages, to appear in the journal "Theory and Practice of Logic Programming".
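To make the abstract's distinction between the two problem classes concrete, the sketch below contrasts an algorithmic problem (a deterministic computation with a known efficient algorithm) with a search problem (solved by backtracking over candidate placements). It is written in Python rather than Oz, and the toy problems and function names are illustrative assumptions, not taken from the article.

```python
# Minimal sketch (Python, not Oz) of the two problem classes the abstract
# distinguishes; the toy problems below are illustrative assumptions.

def reverse_list(xs):
    """Algorithmic problem: deterministic, with a known efficient algorithm."""
    out = []
    for x in xs:
        out.append(x)
    out.reverse()
    return out

def solve_queens(n, partial=()):
    """Search problem: no known efficient algorithm; solved by backtracking
    search over candidate column placements (n-queens)."""
    row = len(partial)
    if row == n:
        yield partial
        return
    for col in range(n):
        if all(col != c and abs(col - c) != row - r
               for r, c in enumerate(partial)):
            yield from solve_queens(n, partial + (col,))

print(reverse_list([1, 2, 3]))   # [3, 2, 1]
print(next(solve_queens(6)))     # first solution found by the search
```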
The Uses of Jurisdictional Redundancy: Interest, Ideology, and Innovation
Instead of viewing the persistence of concurrency as a dysfunctional relic, one may hypothesize that it is a product of an institutional evolution. The persistence of the anomaly over time requires a search for a strong functional explanation. With such an approach, one makes the working assumption that the historical explanation of the origin of the structure of complex concurrency of jurisdiction, even if accurate, does not suffice to explain its persistence. It is this approach that I shall pursue here.
Using visual representations to improve instructional materials for distance education computing students
Understanding how to develop instructional materials for distance education students is a challenging problem, but it is exacerbated when a domain is complex to teach, such as computer science. Visual representations have a history of use in computing as a means to alleviate the difficulties of learning abstract concepts. However, it is not clear whether the improvements observed are a result of improvements in the visual representations used in instructional materials or due to individual differences in students. This research examines the two themes of individual differences and visual representation in order to investigate how they collectively impact on improving instructional materials for distance education students studying computer science. It investigates the impact of different representations on learning while additionally investigating the relationship between individual differences and student learning. The research in this thesis shows that visual representations are important in designing instructional materials. In particular, texts with visual representations have the power to cue students to perceive instructional materials as easier to process and more engaging. Investigation into the impact of concrete high-imagery versus abstract low-imagery visual representations illustrated that concrete visual representations incurred fewer cognitive overheads for computer science students and were able to ameliorate the challenges of learning computing. The research in this thesis into individual differences demonstrated that Imagers did benefit more from studying instructional materials containing text with visual components. However, the research indicates that appropriate selection of individual difference tests is dependent upon the application, i.e., whether the results are to be used to assess generalised tendencies or episodes in learning, and whether the tests examine underlying approaches to cognition or practices in education. An underlying question was whether students studying instructional materials containing low-imagery visual representations would cope as well as those studying high-imagery ones. Accomplished learners demonstrated that they could perform as well as those receiving high-imagery visual representations; however, studying and recalling these materials did incur more cognitive processing. This thesis argues that improving instructional materials by including appropriate visual representations is a useful basis for improving learning for distance education computer science students.
An analysis of management control in a complex large-scale endeavor
This study examines management control as it was performed in a large-scale complex endeavor. The analysis assesses the application of integrated management control in the Safeguard Ballistic Missile Defense (BMD) System program. It examines changes both in the management control situation and in the associated managerial response. The technique used for the analysis is the Parameter-Phase-Level (PPL) analysis matrix, which is fully developed and defined in the study. This study concludes that management control should be offensive rather than defensive, should be preventive in preference to curative, and should favor preview before the fact in lieu of review after the fact. It should be equally sensitive to quantitative and qualitative management information, should satisfy management needs, and should enhance the decision-making process. Integrated and proactive tools and techniques are the preferred foundation for management control of large-scale complex endeavors. The specific objectives of the study are threefold. First, the need for integrated management control in large-scale complex endeavors is addressed. The reality of integrated control as experienced in the Safeguard BMD System program is considered in the same context, and so is the relative importance of the three cardinal program parameters of cost, schedule, and technical performance over time. Secondly, having completed the critical examination of the individual cells in the PPL analysis matrix, the matrix is reassembled and refined in a manner dictated by the results of the analysis. This resulted in a reconfiguration of the matrix that differed from the original model. Finally, it is proposed that management can and should have a baseline for management control that is transferable, adaptive, and dynamic. This objective centers on the interrelationships among the cost, schedule, and technical performance parameters and the compelling need for proactive management control in large-scale complex endeavors.
Exploiting Fine-Grain Concurrency Analytical Insights in Superscalar Processor Design
This dissertation develops analytical models to provide insight into various design issues associated with superscalar-type processors, i.e., processors capable of executing multiple instructions per cycle. A survey of the existing machines and literature has been completed with a proposed classification of various approaches for exploiting fine-grain concurrency. Optimization of a single pipeline is discussed based on an analytical model. The model-predicted performance curves are found to be in close proximity to published results using simulation techniques. A model is also developed for comparing different branch strategies for single-pipeline processors in terms of their effectiveness in reducing branch delay. The additional instruction fetch traffic generated by certain branch strategies is also studied and is shown to be a useful criterion for choosing between equally well-performing strategies. Next, processors with multiple pipelines are modelled to study the tradeoffs associated with deeper pipelines versus multiple pipelines. The model developed can reveal the cause of a performance bottleneck: insufficient resources to exploit discovered parallelism, insufficient instruction stream parallelism, or insufficient scope of concurrency detection. The cost associated with speculative (i.e., beyond basic block) execution is examined via probability distributions that characterize the inherent parallelism in the instruction stream. The throughput prediction of the analytic model is shown, using a variety of benchmarks, to be close to the measured static throughput of the compiler output, under resource and scope constraints. Further experiments provide misprediction delay estimates for these benchmarks under scope constraints, assuming beyond-basic-block, out-of-order execution and run-time scheduling. These results were derived using traces generated by the Multiflow TRACE SCHEDULING™(*) compacting C and FORTRAN 77 compilers. A simplified extension to the model to include multiprocessors is also proposed. The extended model is used to analyze combined systems, such as superpipelined multiprocessors and superscalar multiprocessors, both with shared memory. It is shown that the number of pipelines (or processors) at which the maximum throughput is obtained is increasingly sensitive to the ratio of memory access time to network access delay, as memory access time increases. Further, as a function of inter-iteration dependency distance, optimum throughput is shown to vary nonlinearly, whereas the corresponding optimum number of processors varies linearly. The predictions from the analytical model agree with published results based on simulations. (*) TRACE SCHEDULING is a trademark of Multiflow Computer, Inc.
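As a hedged illustration of the kind of tradeoff such branch-strategy comparisons quantify (not the dissertation's own analytical model), the Python snippet below computes effective CPI and throughput for a few hypothetical strategies; all rates and penalties are assumed values.

```python
# Illustrative only: a textbook-style effective-CPI calculation for comparing
# branch strategies. It is not the dissertation's model; every number below
# is a hypothetical assumption.

def effective_cpi(base_cpi, branch_freq, penalty, mispredict_rate):
    """Average cycles per instruction when a fraction `branch_freq` of
    instructions are branches and each misprediction costs `penalty` cycles."""
    return base_cpi + branch_freq * mispredict_rate * penalty

strategies = {
    "predict-not-taken": 0.40,   # assumed misprediction rates
    "static-BTFN":       0.30,
    "dynamic-2-bit":     0.10,
}

for name, rate in strategies.items():
    cpi = effective_cpi(base_cpi=1.0, branch_freq=0.2, penalty=3,
                        mispredict_rate=rate)
    print(f"{name:20s} effective CPI = {cpi:.2f}, throughput = {1/cpi:.2f} IPC")
```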
Establishing a Framework for the Oversight of Major Defense Acquisition Programs - A Historical Analysis
The Department of Defense (DoD) has budgeted over $134.5 billion for Fiscal Year 2004 for Acquisition, yet little is written about the personnel responsible for managing and evaluating Major Defense Acquisition Programs (MDAPs) -- those who perform Acquisition Oversight (AO). The AO process has not been studied in a disciplined manner during its 40-year history. Congress, past Administrations, and the DoD Inspector General have commissioned several studies on the AO process. Recommendations were considered and implemented such that the process evolved to where it stands today. Over 40 years separate the first iteration from the latest version. Commission reports, countless studies, and historians agree on the need for oversight in military acquisitions; they agree that the system takes too much money, takes too long, and does not perform as well as most would wish; yet they disagree on who should perform oversight. This thesis has three objectives: define, document, and utilize available literature to identify the organizations involved with the process as it evolved to its form today; build models of the AO process with emphasis on the chain of command as it existed in the 1950s, 1960s, 1970s, 1980s, and today; and evaluate each model on its ability to accomplish seven goals derived from Clinton's 1994 Process Action Team on AO report. The thesis was limited to the DoD AO process as it historically existed between the Air Force and the Secretary of Defense, or those serving in similar positions. The author reviewed relevant literature to model historical oversight hierarchies. Then expert opinions were gathered from that literature on how well the oversight process models performed. As expected, the oversight process has improved over time, but further improvements are currently being sought. Those seeking improvement would do well to study past processes and learn from their mistakes.
Cogitator : a parallel, fuzzy, database-driven expert system
The quest to build anthropomorphic machines has led researchers to focus on knowledge and the manipulation thereof. Recently, the expert system was proposed as a solution, working well in small, well-understood domains. However, these initial attempts highlighted the tedious process associated with building systems to display intelligence, the most notable problem being the Knowledge Acquisition Bottleneck. Attempts to circumvent this problem have led researchers to propose the use of machine learning databases as a source of knowledge. Attempts to utilise databases as sources of knowledge have led to the development of Database-Driven Expert Systems. Furthermore, it has been ascertained that a requisite for intelligent systems is powerful computation. In response to these problems and proposals, a new type of database-driven expert system, Cogitator, is proposed. It is shown to circumvent the Knowledge Acquisition Bottleneck and to possess many other advantages over both traditional expert systems and connectionist systems, whilst its disadvantages are not serious.
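For readers unfamiliar with fuzzy expert systems, the short Python sketch below shows the general style of fuzzy rule evaluation such a system might perform; it is not Cogitator's architecture, and the membership functions, variables, and rule are hypothetical.

```python
# A minimal sketch of fuzzy rule evaluation of the kind a fuzzy expert system
# might perform. It is not Cogitator's actual design; variable names,
# membership functions, and the rule itself are illustrative assumptions.

def triangular(x, a, b, c):
    """Triangular membership function peaking at b over the interval [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def rule_strength(temperature, pressure):
    """Fire one rule: IF temperature is high AND pressure is low THEN alert.
    The fuzzy AND is the usual min-combination of membership degrees."""
    temp_is_high = triangular(temperature, 60.0, 90.0, 120.0)
    pressure_is_low = triangular(pressure, 0.0, 10.0, 30.0)
    return min(temp_is_high, pressure_is_low)

# Records as they might be drawn from a driving database (hypothetical values).
for temp, pres in [(85.0, 12.0), (65.0, 25.0), (100.0, 5.0)]:
    print(f"temp={temp}, pressure={pres} -> alert degree {rule_strength(temp, pres):.2f}")
```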