Further characterizations of cubic lattice graphs
A cubic lattice graph with characteristic n is a graph whose points can be identified with the ordered triplets on n symbols, two points being adjacent whenever the corresponding triplets have two coordinates in common. An L2 graph is a graph whose points can be identified with the ordered pairs on n symbols such that two points are adjacent if and only if the corresponding pairs have a common coordinate. The main result of this paper consists of two new characterizations and shows the relation between cubic lattice graphs and L2 graphs. The main result also suggests a conjecture concerning the characterization of interchange graphs of complete m-partite graphs.
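The two adjacency rules are simple enough to state as code. The sketch below is illustrative only (the symbol set {0, ..., n-1} and the function name are assumptions, not taken from the paper): it builds both graphs by brute force and checks the coordinate-agreement condition directly.

```python
from itertools import product

def adjacency_edges(tuples, common):
    """Edges between distinct tuples that agree in exactly `common` coordinates."""
    agree = lambda u, v: sum(a == b for a, b in zip(u, v))
    return [(u, v) for i, u in enumerate(tuples)
            for v in tuples[i + 1:] if agree(u, v) == common]

n = 3
cubic = adjacency_edges(list(product(range(n), repeat=3)), common=2)  # cubic lattice graph
l2 = adjacency_edges(list(product(range(n), repeat=2)), common=1)     # L2 graph
print(len(cubic), len(l2))  # 81 and 18 for n = 3, i.e. degree 3(n-1) and 2(n-1) per vertex
```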
A Profile of Decision Making in Federal Contract Management
This paper presents the preliminary findings of a research project conducted by the author in partial fulfillment of the requirements for the Ph.D. degree at George Washington University. The objective of the research was to determine what factors contribute to decision-making behavior of contract managers in the federal arena. While the overall study employs a regression model to identify the factors that contribute most to the multiple prediction of decision-making behavior, this paper reports early findings from the initial univariate analysis in the form of a profile of the random sample of respondents surveyed.
Thirty-four variables were studied, three of which were multidimensional: decision type (structured or unstructured), degree of bureaucratization of the work environment, and the decision processes (satisficing or optimizing) used when contract managers face various types of decisions. Other variables, such as sector of employment; job complexity in terms of position held and the size and types of contracts managed; amount of training, education, and experience possessed; certification status; and a number of demographic factors, were also included in the study.
Decision Support in the Source Selection Process
The complexity of federal acquisition is increasing. Our ability as humans to mentally assimilate new laws, regulations, policies, and procedures into the existing body of knowledge in the acquisition field has already been surpassed. This paper explores an emerging technique, known as the Analytic Hierarchy Process (AHP), that promises to help acquisition managers make rational decisions in the face of this increasing complexity. The advent of inexpensive microcomputers and powerful new decision support systems (DSS) makes this possible. One such software product, Expert Choice, is examined and applied to a typical complex Defense Department decision: the task of selecting a source in competitive negotiations. Using Expert Choice, the author developed a DSS to conduct a hypothetical source selection. Selection criteria and alternative proposals were incorporated into the model, as were the judgments made by technical, cost, and management evaluation teams. The DSS synthesizes the judgments into a comprehensive ranking of the proposals and, perhaps most importantly, helps source selection team members communicate their findings to one another and to the Source Selection Authority. The advantages of using decision support systems to help both government and industry decision makers in a variety of complex decision scenarios are discussed.
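The abstract does not reproduce Expert Choice's internal computations, so the following is only a hedged sketch of the standard AHP synthesis step: pairwise comparisons of the evaluation criteria are reduced to a priority vector (here by the geometric-mean approximation of the principal eigenvector), and the proposals' local scores are weighted and summed into an overall ranking. All numbers, criteria, and proposal names are hypothetical.

```python
import numpy as np

def ahp_priorities(pairwise):
    """Approximate AHP priority vector via the geometric-mean method."""
    m = np.asarray(pairwise, dtype=float)
    gm = m.prod(axis=1) ** (1.0 / m.shape[0])
    return gm / gm.sum()

# Hypothetical pairwise comparisons of three criteria: technical, cost, management.
criteria = ahp_priorities([
    [1,   3,   5],
    [1/3, 1,   2],
    [1/5, 1/2, 1],
])

# Hypothetical local scores of two proposals under each criterion.
local = np.array([
    [0.7, 0.4, 0.6],   # proposal A under technical, cost, management
    [0.3, 0.6, 0.4],   # proposal B
])

overall = local @ criteria   # synthesize the judgments into one ranking
print(dict(zip("AB", overall.round(3))))
```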
Herpes in Pregnancy
The management of genital herpesvirus infections in pregnancy has seen many changes over the past decade as we have continued to learn more about the epidemiology of the disease. This article reviews these changes and highlights ongoing controversies. Clinical management schemes are proposed based upon this most recent information.
Information theory metric for assembly language
This paper presents a study of the distribution of assembly language instructions in two avionics software applications and the development of an information theoretic software complexity metric based on the study. The instructions were grouped into classes, e.g. load and store, arithmetic, jump, compare, etc. As expected, the distribution of instruction classes over the modules was generally uniform, and nearly three-fourths of the instructions were load, store, or jump instructions. The biggest surprise was that nearly one-half of the 320 different instructions were not used.
The assembly language complexity metric is based on the premise that programs with familiar instructions are much easier to understand and work with than programs with unfamiliar instructions. Familiarity was approximated by frequency of instruction use. The metric is similar to one developed by Berlinger, except that we computed the information content per instruction class and the average information content per instruction, while he summed the information content of the instructions. Maintenance programmers selected what they felt were the most difficult modules. Our metric gave the highest complexity values to most of these modules.
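The exact formula is not given in the abstract, so the sketch below assumes the usual self-information reading of "familiarity approximated by frequency": an instruction that appears rarely in the corpus carries -log2 of its relative frequency, and a module's score is the average over its instructions. The names and counts are hypothetical.

```python
import math
from collections import Counter

def average_information_content(module_instructions, corpus_frequencies):
    """Average information content per instruction for one module.

    Rarely used (unfamiliar) instructions carry more information, so modules
    built from uncommon instructions score as more complex.
    """
    total = sum(corpus_frequencies.values())
    def info(instr):
        return -math.log2(corpus_frequencies[instr] / total)
    return sum(info(i) for i in module_instructions) / len(module_instructions)

# Hypothetical corpus-wide instruction counts and one module's instruction stream.
corpus = Counter({"LOAD": 500, "STORE": 400, "JUMP": 300, "DIV": 5, "XOR": 2})
module = ["LOAD", "STORE", "DIV", "XOR", "JUMP"]
print(round(average_information_content(module, corpus), 3))
```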
Graph theoretic program complexity measures
Recently, several simply computed graph theoretic measures of computer program complexity, testing, and unstructuredness have been proposed. Most of them are based on a static analysis of the program graph. One of the best known and most widely accepted is the cyclomatic number. Another is the number of knots, or crossings of flow-of-control arrows in the program text. We show that the number of knots is equal to the number of edges in the overlap graph of the program, where each program branch gives rise to an interval. The number of knots is highly dependent on the order of the statements in the program, so that equivalent programs can have different values. We show that the two operations of rearranging and replicating program segments can substantially reduce the number of knots and thereby improve the readability and structure of the program. The rearrangement operation is a useful tool in converting "old" Fortran programs into more readable and structured Fortran 77 programs.
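As a hedged illustration of the interval reading of knots described above, the sketch below reduces each branch to a (source line, target line) pair, an assumption of this sketch rather than the paper's formal definition, and counts pairs of intervals that properly overlap.

```python
def count_knots(branches):
    """Count knots as pairwise overlaps of branch intervals.

    Two branches knot (their arrows cross in the program text) when their
    intervals overlap without one containing the other.
    """
    intervals = [tuple(sorted(b)) for b in branches]
    knots = 0
    for i, (a1, b1) in enumerate(intervals):
        for a2, b2 in intervals[i + 1:]:
            if a1 < a2 < b1 < b2 or a2 < a1 < b2 < b1:
                knots += 1
    return knots

print(count_knots([(2, 6), (4, 8)]))   # 1: the two arrows cross
print(count_knots([(2, 8), (4, 6)]))   # 0: one interval nests inside the other
```

Rearranging statements changes the intervals and can therefore remove crossings, which is exactly the improvement the abstract describes.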
Context-free grammars with graph control
Context-free grammars with graph control provide a general framework for the various types of context-free grammars with regulated rewriting. The vertices or edges of the directed graph are labeled with the productions of the grammar. The only strings in the language generated by the grammar are those whose derivations correspond to labeled paths in the graph. Inclusion relations among the various classes of context-free grammars with regulated rewriting such as programmed grammars (without failure fields), matrix grammars, periodically time-variant grammars, state grammars, and grammars with regular control are easily obtained and the graph provides insight into the nature of the restriction. Adding negative context to context-free grammars with graph control, we obtain a class of grammars equivalent to the context-free programmed grammars.
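The mechanism can be illustrated with a small, hedged sketch. The grammar below is a textbook-style example for a^n b^n c^n (not taken from the paper): productions label the vertices of a control graph, and only derivations that follow a labeled path from the start label count, which is what synchronizes the otherwise independent context-free rules.

```python
# Productions, label -> (nonterminal, replacement). Lowercase letters are terminals.
PRODUCTIONS = {
    "p0": ("S", "ABC"),
    "p1": ("A", "aA"), "p2": ("B", "bB"), "p3": ("C", "cC"),
    "p4": ("A", "a"),  "p5": ("B", "b"),  "p6": ("C", "c"),
}
# Control graph: edges say which production label may be applied next.
CONTROL = {
    "p0": {"p1", "p4"},
    "p1": {"p2"}, "p2": {"p3"}, "p3": {"p1", "p4"},
    "p4": {"p5"}, "p5": {"p6"}, "p6": set(),
}

def derive(path, start="S"):
    """Rewrite along a control-graph path; None if the path is invalid or a step cannot apply."""
    if not path or path[0] != "p0":
        return None
    if any(nxt not in CONTROL[cur] for cur, nxt in zip(path, path[1:])):
        return None
    form = start
    for label in path:
        lhs, rhs = PRODUCTIONS[label]
        if lhs not in form:
            return None                      # no failure fields: every step must apply
        form = form.replace(lhs, rhs, 1)     # rewrite the leftmost occurrence
    return form if form.islower() else None  # only fully terminal strings are in the language

print(derive(["p0", "p4", "p5", "p6"]))                    # abc
print(derive(["p0", "p1", "p2", "p3", "p4", "p5", "p6"]))  # aabbcc
print(derive(["p0", "p1", "p4", "p5", "p6"]))              # None: p1 -> p4 is not a control edge
```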
A letter oriented minimal perfect hashing function
Cichelli has presented a simple method for constructing minimal perfect hash tables of identifiers for small static word sets. The hash function value for a word is computed as the sum of the length of the word and the values associated with the first and last letters of the word. Cichelli's backtracking algorithm considers one word at a time and performs an exhaustive search to find the letter value assignments. In considering heuristics to improve his algorithm, we were led to develop a letter oriented algorithm that handles more than one word per iteration and that frequently outperforms Cichelli's. We also investigate the impact of relaxing the minimality requirement and allowing blank spaces in the constructed table; this substantially reduces the execution time of the algorithm. This relaxation and the partitioning of data sets are shown to be two useful schemes for handling large data sets. Key words: hash tables, perfect hashing functions, minimal hash tables.
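Cichelli's hash function itself is fully specified in the abstract; the search below is only a simplified letter-at-a-time sketch of the value-assignment step (the paper's letter-oriented algorithm places several words per iteration, and reducing the hash modulo the table size is a simplification of this sketch, not part of Cichelli's scheme).

```python
def cichelli_hash(word, letter_values):
    """Cichelli-style hash: word length plus the values of its first and last letters."""
    return len(word) + letter_values[word[0]] + letter_values[word[-1]]

def assign_letter_values(words, max_value=10):
    """Backtracking search for letter values that make the hash perfect and minimal."""
    table_size = len(words)
    letters = sorted({w[0] for w in words} | {w[-1] for w in words})

    def collides(values):
        hashes = [cichelli_hash(w, values) % table_size
                  for w in words if w[0] in values and w[-1] in values]
        return len(hashes) != len(set(hashes))

    def backtrack(i, values):
        if i == len(letters):
            return dict(values)
        for v in range(max_value + 1):
            values[letters[i]] = v
            if not collides(values) and (result := backtrack(i + 1, values)):
                return result
        del values[letters[i]]
        return None

    return backtrack(0, {})

keywords = ["if", "else", "while", "for", "return"]
values = assign_letter_values(keywords)
print({w: cichelli_hash(w, values) % len(keywords) for w in keywords})
```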
A note on the style metric of Berry and Meekings
The "style metric" of Berry and Meekings is purported to quantify the lucidity of software written in the C programming language. We used a modification of this metric to try and identify error-prone software. Our results indicate that this metric seems to bear little relationship to the density of errors found in programs
A preliminary investigation of the effects of FORTRAN data declaration and type conversion features on program comprehension
FORTRAN permits both explicit and implicit options for both data declaration and type conversion. This study investigated the effects of explicit and implicit data declaration and type conversion on a program comprehension task performed by FORTRAN programmers in an introductory programming course. The results indicated that both factors had significant effects on programmer performance and that there was a significant interaction between data declaration and type conversion. In addition, hand-simulation tasks performed by the subjects provided insights into low-level errors made by the novices and into their misconceptions.
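The study materials are not reproduced in the abstract; as a reminder of the two FORTRAN features involved, the sketch below (in Python, purely for illustration) encodes the default implicit typing rule, identifiers beginning with I through N are INTEGER and all others REAL, and the truncation that implicit type conversion performs when a REAL value is assigned to an INTEGER variable.

```python
import math

def fortran_implicit_type(identifier):
    """FORTRAN's default implicit typing rule: names beginning with I-N are
    INTEGER, all others REAL, unless an explicit declaration overrides it."""
    return "INTEGER" if identifier[0].upper() in "IJKLMN" else "REAL"

def mixed_mode_assign(lhs_type, value):
    """Implicit type conversion on assignment: a REAL value stored into an
    INTEGER variable is truncated toward zero."""
    return math.trunc(value) if lhs_type == "INTEGER" else float(value)

print(fortran_implicit_type("index"), fortran_implicit_type("total"))  # INTEGER REAL
print(mixed_mode_assign("INTEGER", 3.5))                               # 3: fraction silently dropped
```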