7 research outputs found
Sensitivity Analysis of the Thunder Combat Simulation Model to Command and Control Inputs Accomplished in a Parallel Environment
This research had two objectives. The first was to develop a methodology to demonstrate the parallel processing capability provided by the Air Force's Aeronautical Systems Command (ASC) Major Shared Resource Center (MSRC) and to apply that methodology to the SIMAF Proof of Concept project. Second, AFSAA/SAAB requested a sensitivity analysis of THUNDER with respect to the modeled command and control (C2) inputs. The power of parallelization cannot be overemphasized. The data collection phase of this thesis was accomplished at the MSRC using a script developed to automate the processing of an experimental design, providing the analyst with a launch-and-leave capability. On average, a single replication of THUNDER took 45 minutes to process. For this thesis we made 1,560 runs in slightly less than 3 days; to accomplish the same number of runs on a single-CPU machine would have taken slightly more than 3 months. For our sensitivity analysis we used a Plackett-Burman Resolution III screening design to identify which of 11 input variables had a statistically significant impact on THUNDER. The decision to investigate only the significant variables reduced the number of input variables from 11 to 5. This reduced the number of design points necessary to obtain the same Resolution V information from 128 to 16 and eliminated the need for 3,360 THUNDER runs, a significant savings. Using response surface methodology (RSM) techniques, we were then able to generate a response surface depicting the relationships between the input parameters and the output measures.
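As a rough illustration of the screening step this abstract describes, the 12-run, 11-factor Plackett-Burman design can be generated in a few lines of pure Python. The generator row below is the standard published one; the function and variable names are illustrative, not taken from the thesis:

```python
# Sketch of the 12-run Plackett-Burman design for screening 11 two-level
# factors, of the kind used above to find THUNDER's significant C2 inputs.
# Rows 1-11 are cyclic shifts of the standard generator row; the last row
# sets every factor low.

def plackett_burman_12():
    """Return the 12-run, 11-factor Plackett-Burman design as rows of +/-1."""
    gen = [+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1]  # standard generator
    rows = [gen[k:] + gen[:k] for k in range(11)]       # 11 cyclic shifts
    rows.append([-1] * 11)                              # all-factors-low run
    return rows

design = plackett_burman_12()
```

Each row fixes all 11 factors high (+1) or low (-1) for one simulation run; because the columns are balanced and mutually orthogonal, all 11 main effects can be screened from only 12 runs per replication.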
Strategies for building, managing and implementing geographic information systems (GIS) capabilities in transit agencies
Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Urban Studies and Planning, 1995. Includes bibliographical references (pp. 336-345). By Kamal T. Azar.
Development and implementation of models and methods in temporal GIS for spatial network planning decision support
Geographical Information Systems (GIS) are today widely used for the management of spatial data, particularly data relating to network infrastructure for telecommunications, utilities and transport. GIS also form a valuable tool for planning the future development of such networks, and many organisations use GIS packages for this despite the fact that it is not necessarily a task for which they were designed. They may therefore lack many features that are beneficial, or even essential, for the efficient storage and analysis of data relating to future designs. This thesis considers what the characteristics of such data may be and what shortcomings exist in current GIS in this regard, and then describes the development, implementation and testing of suitable models and methods to address those shortcomings. Of particular importance is found to be the need for a network-planning GIS application to incorporate an appropriate model of time for handling situations where there may be many alternative scenarios, a subject which has hitherto been largely unaddressed by GIS research despite having obvious applications. Existing temporal models are therefore examined to find the most suitable, which is then developed from a broad conceptual model into a model specifically designed for application to spatial network planning: Temporal Topology. The possibility of automated design optimisation using this model is then introduced, and some appropriate methods for performing this task are given. Issues which may affect the implementation of an application using the Temporal Topology model and these optimisation methods are then considered, before the description of an implementation which was used to carry out a network-planning case study with the aim of testing the concepts developed in this thesis. The implications of this research for the wider field of GIS, and particularly Temporal GIS, are then considered. EThOS - Electronic Theses Online Service, United Kingdom.
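The alternative-scenario requirement can be pictured with a small sketch: each planning scenario records only its own changes and inherits everything else from a parent, so many alternative futures branch from one as-built state. This is a generic version-tree toy with invented class and attribute names, not the thesis's Temporal Topology model itself:

```python
# Toy branching-scenario store for network-plan data.  Each Scenario holds
# only its own attribute overrides; view() resolves a feature's state by
# walking from the root scenario down to this one.

class Scenario:
    def __init__(self, name, parent=None):
        self.name, self.parent = name, parent
        self.changes = {}                    # feature id -> attribute overrides

    def set(self, feature, **attrs):
        self.changes.setdefault(feature, {}).update(attrs)

    def view(self, feature):
        chain, s = [], self
        while s:                             # collect ancestors, root last
            chain.append(s)
            s = s.parent
        attrs = {}
        for s in reversed(chain):            # apply overrides root-first
            attrs.update(s.changes.get(feature, {}))
        return attrs

asis = Scenario("as-built")
asis.set("cable-7", capacity=64)
plan_a = Scenario("plan A", parent=asis)     # one alternative future
plan_a.set("cable-7", capacity=128)
plan_b = Scenario("plan B", parent=asis)     # another, leaving cable-7 as-is
```

Both planned futures coexist without duplicating the as-built data, which is the storage-and-analysis gap the thesis identifies in conventional GIS.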
Documents as functions
Treating variable data documents as functions over their data bindings opens opportunities for building more powerful, robust and flexible document architectures to meet the needs arising from the confluence of developments in document engineering, digital printing technologies and marketing analysis.
This thesis describes a combination of several XML-based technologies both to represent and to process variable documents and their data, leading to extensible, high-quality and 'higher-order' document generation solutions. The architecture (DDF) uses XML uniformly throughout the documents and their processing tools with interspersing of different semantic spaces being achieved through namespacing.
An XML-based functional programming language (XSLT) is used to describe all intra-document variability and for implementing most of the tools. Document layout intent is declared within a document as a hierarchical set of combinators attached to a tree-based graphical presentation. Evaluation of a document bound to an instance of data involves using a compiler to create an executable from the document, running this with the data instance as argument to create a new document with layout intent described, followed by resolution of that layout by an extensible layout processor.
The use of these technologies, with design paradigms and coding protocols, makes it possible to construct documents that not only have high flexibility and quality, but also perform in higher-order ways. A document can be partially bound to data and evaluated, modifying its presentation while still remaining variably responsive to future data. Layout intent can be re-satisfied as presentation trees are modified by programmatic sections embedded within them. The key enablers are described and illustrated through examples.
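The higher-order behaviour described above, where a document partially bound to data yields another, still-variable document, can be sketched in miniature. This toy uses plain Python format strings rather than the XML/XSLT machinery of DDF, and all names in it are illustrative:

```python
import string

def fields(template):
    """Names of the data bindings a template is variable over."""
    return {name for _, name, _, _ in string.Formatter().parse(template) if name}

def make_document(template):
    """Treat a variable document as a function over its data bindings."""
    def document(**bindings):
        missing = fields(template) - set(bindings)
        if missing:
            # Partial binding: the result is itself a document, still
            # variably responsive to the remaining data.
            def partial(**more):
                return document(**{**bindings, **more})
            return partial
        return template.format(**bindings)
    return document

letter = make_document("Dear {name}, your order {order} has shipped.")
later = letter(name="Ada")     # partially bound: still a function/document
final = later(order="42")      # fully bound: rendered text
```

Partial application is the essence of the idea: binding `name` produces a new document awaiting `order`, just as the abstract describes a document that is evaluated yet remains responsive to future data.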
An Evaluation of Structured Navigation for Subject Searching in Online Catalogues
Understanding and improving subject searching in online library catalogues is the focus of this study. Against the backdrop of current research and developments in online catalogues, an analysis of the problems and prospects for subject access in the expanding online catalogue is presented. Developments in recent information retrieval theory and practice are reviewed, and a case is made for a new model of information seeking and retrieval that more closely describes much of the subject searching and browsing activity actually conducted by library users. The centerpiece of this study is an experiment conducted using an experimental online catalogue developed to investigate and evaluate the effect of alternative browse-and-navigate search methods on overall retrieval effectiveness and subject searching performance. The objectives, methodology, and findings of this online catalogue search experiment are discussed. The primary aim of the experimental study was to evaluate the usability and retrieval performance of a pre-structured "navigation" approach to subject searching and browsing in library catalogues. The main hypothesis tested was that the provision and use of a navigation search-and-browse function would significantly improve overall OPAC retrieval effectiveness and the subject searching performance of OPAC users. The OPAC used in the study was designed and implemented by this author using the database management and retrieval software known as "TINMAN", provided by Information Management & Engineering, Ltd. TINMAN employs an entity-relational database structure which permits the linking of any field in the stored bibliographic record to any other field. These linkages establish browse and navigation pathways among data fields ("entities") and citations to support guided but flexible searching and browsing through the collection by users. Thus, a rudimentary form of hypertext is provided for the users of the OPAC.
The test database consisted of 30,000 Library of Congress MARC bibliographic records selected at random from all LC catalog records for publications through 1988 in the English language in the LC classes HB-HJ (Economics, Business, etc.). For each record, the verbal description of the assigned LC class number found in the printed schedules was added as a subject descriptor to augment the subject cataloging provided by the Library of Congress. Three different OPACs were tested for comparison purposes. The control OPAC lacked the navigation feature. The other two OPACs supported related-record navigation, one on title words only, the other on subject headings only. Searchers were encouraged to use the OPAC's features and search options in whatever manner they wished. Subjects in Group 1 were permitted to navigate only on the subject headings from the controlled subject vocabulary assigned to the work cited (augmented by the verbal meanings of the Library of Congress class number). Subjects in Group 2 were permitted to navigate, but only from title words of the work cited and displayed. Navigating from one of these title words would result in the retrieval of all works whose titles had at least one occurrence of the selected word. Subjects in the control group were not permitted to navigate; that is, it was not possible for them to point to a selected data element in a displayed citation to move on to related terms or citations associated with that data element. The positive value of related-record navigation in improving subject searching in OPACs was not clearly determined. The navigation groups performed significantly better than the control group on the first search task, but all three groups performed nearly equally well on the second search task. Navigation on subject headings or title keywords resulted in higher recall scores, especially among first-time, novice users of the system, but precision suffered significantly in title-word navigation.
In fact, the control group achieved higher precision scores on both search tasks. Navigation did not seem to aid subject searching performance after greater familiarity with the system was achieved, except perhaps to increase recall in persistent searches without much decrease in precision. Online bookshelf browsing seems to improve recall without a significant decrease in precision, and may be a more positive factor than navigation on either subject headings or title words.
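For reference, the recall and precision scores discussed above follow the standard set-based definitions; a minimal computation (a generic textbook sketch, not the study's actual scoring code) looks like this:

```python
def recall_precision(retrieved, relevant):
    """Standard set-based recall and precision for one search task."""
    hits = len(set(retrieved) & set(relevant))
    recall = hits / len(relevant) if relevant else 0.0
    precision = hits / len(retrieved) if retrieved else 0.0
    return recall, precision

# A broad navigation-style search: many records retrieved, few relevant hits.
# Record identifiers here are made up for illustration.
r, p = recall_precision(retrieved=[1, 2, 3, 4, 5, 6, 7, 8],
                        relevant=[2, 5, 9, 11])
```

Here `r` is 0.5 and `p` is 0.25: broadening retrieval, as title-word navigation did, tends to raise recall while lowering precision, which is the trade-off the findings report.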
U.S. Commission on Immigration Reform
The U.S. Commission on Immigration Reform was created by Congress to assess U.S. immigration policy and make recommendations regarding its implementation and effects. Mandated in the Immigration Act of 1990 to submit an interim report in 1994 and a final report in 1997, the Commission has undertaken public hearings, fact-finding missions, and expert consultations to identify the major immigration-related issues facing the United States today. LBJ School of Public Affairs.