
    NACA documents database project

    The plan to put the entire National Advisory Committee on Aeronautics (NACA) collection online, with quality records, led to the NACA Documents Database Project. The project has a twofold purpose: (1) to develop the definitive bibliography of documents produced and/or held by NACA; and (2) to make that bibliography and the associated documents available to the aerospace community. This study supports the first objective by providing an analysis of the NACA collection and its bibliographic records, and supports the second objective by defining the NACA archive and recommending methodologies for meeting the project objectives.

    The Use of Concept Mapping/Pattern Matching to Determine the Content Domain for Information Literacy in Baccalaureate Education

    This study assessed the relevance of a national association's standards for developing information literacy competency in undergraduate students at a mid-sized, regional university in Maryland. Key stakeholders responsible for ensuring student success in achieving information literacy competency at the institution were solicited for their expertise to identify the outcomes they consider indicative of information literacy competency. The group of 14 participants included six faculty, three librarians, three student affairs professionals, and two students. Trochim's Concept Mapping/Pattern Matching methodology was used for gathering and analyzing the data to conceptualize the domain of information literacy competencies. The key stakeholders generated 80 student learning outcomes for information literacy. Using multidimensional scaling and hierarchical cluster analysis, the outcomes were grouped into eight clusters representing the content domain for information literacy. Following the creation of the concept maps, the resulting priorities and their conceptualization schema were compared to the national organization's standards for similarities and differences in a qualitative document analysis. They were also compared to the learning outcomes for information literacy currently associated with the institution's general education curriculum and the library's instruction program.
    The study revealed four conclusions. First, the national standards for information literacy are relevant at the local level. Second, academic libraries need to reevaluate their existing information literacy outcomes to reflect the shift in information dissemination from a textual bias to include multimedia. Third, it is important for academic institutions to include representation of all stakeholders when developing student learning outcomes. Fourth, ambiguity still exists among stakeholders regarding the effectiveness of teaching information literacy.
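    The clustering step of the Concept Mapping/Pattern Matching methodology can be illustrated with a small sketch. The co-sorting matrix and the six sample outcomes below are invented for illustration only, not data from the study: participants' pile-sorts of outcome statements are aggregated into a similarity matrix, converted to distances, and partitioned with hierarchical cluster analysis.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Hypothetical co-sorting counts for six outcome statements: entry (i, j)
# is how many of 14 sorters placed outcomes i and j in the same pile.
co_sort = np.array([
    [14, 12, 11,  2,  1,  0],
    [12, 14, 10,  3,  2,  1],
    [11, 10, 14,  1,  0,  2],
    [ 2,  3,  1, 14, 12, 11],
    [ 1,  2,  0, 12, 14, 13],
    [ 0,  1,  2, 11, 13, 14],
])

# Convert agreement to distance: pairs sorted together often are "close".
dist = 1.0 - co_sort / 14.0
np.fill_diagonal(dist, 0.0)

# Average-linkage clustering on the condensed distance matrix,
# then cut the dendrogram into two clusters.
Z = linkage(squareform(dist), method="average")
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)
```

The study's actual analysis produced eight clusters from 80 outcomes; the cut level (`t`) would be chosen accordingly.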

    ANOPP programmer's reference manual for the executive system

    Documentation for the Aircraft Noise Prediction Program (ANOPP) as of release level 01/00/00 is presented in a manual designed for programmers who need to understand the internal design and logical concepts of the executive system software. Emphasis is placed on providing sufficient information to modify the system for enhancements or error correction. The ANOPP executive system includes software for the operating system interface, executive control, and database management for the Aircraft Noise Prediction Program. It is written in Fortran IV for use on the CDC Cyber series of computers.

    Aerospace medicine and biology: A continuing bibliography with indexes (supplement 313)

    This bibliography lists 227 reports, articles, and other documents introduced into the NASA scientific and technical information system in July 1988.

    1996 Eighth Annual IMSA Presentation Day

    Attached is the Eighth Annual Presentation Day 1996 Program and Abstract Packet. We will be showcasing the research and achievements of students and staff of the IMSA community, as well as several off-campus presenters.

    Historical GIS Research in Canada

    Fundamentally concerned with place, and with our ability to understand human relationships with the environment over time, Historical Geographic Information Systems (HGIS) as a tool and a subject has direct bearing on the study of contemporary environmental issues and realities. To date, HGIS projects in Canada are few, and publications that discuss these projects directly are even fewer. This book brings together case studies of HGIS projects in historical geography, social and cultural history, and environmental history from Canada's diverse regions. Topics include religion and ethnicity, migration, indigenous land practices, rebuilding a nineteenth-century neighborhood, and working with Google Earth. With contributions by: Colleen Beard, Stephen Bocking, Jennifer Bonnell, Jim Clifford, Joanna Dean, François Dufaux, Patrick A. Dunae, Marcel Fortin, Jason Gilliland, William M. Glen, Megan Harvey, Matthew G. Hatvany, Sally Hermansen, Andrew Hinson, Don Lafreniere, John S. Lutz, Joshua D. MacFadyen, Daniel Macfarlane, Jennifer Marvin, Cameron Metcalf, Byron Moldofsky, Sherry Olson, Jon Pasher, Daniel Rueck, R. W. Sandwell, Henry Yu, Barbara Znamirowsk

    Information visualisation and data analysis using web mash-up systems

    A thesis submitted in partial fulfilment for the degree of Doctor of Philosophy.
    The arrival of e-commerce systems has contributed greatly to the economy and has played a vital role in collecting huge amounts of transactional data. Analysing business and consumer behaviour is becoming more difficult by the day given the production of such a colossal volume of data. Enterprise 2.0 can store and create an enormous amount of transactional data, yet the purpose for which the data were collected is easily lost as the essential information goes unnoticed in large and complex data sets. Information overflow is a major contributor to this dilemma. In the current environment, where hardware systems can store such large volumes of data and software systems can produce them, data exploration problems are on the rise. The problem lies not with the production or storage of data but with the effectiveness of the systems and techniques by which essential information can be retrieved from complex data sets in a comprehensive and logical way as questions are asked of the data. With existing information retrieval systems and visualisation tools, the more specific the question asked, the more definitive and unambiguous the visualised results that can be attained; but for complex and large data sets there are no elementary or simple questions. A profound information visualisation model and system is therefore required to analyse complex data sets through data analysis and information visualisation, making it possible for decision makers to identify the expected and discover the unexpected. To address complex data problems, a comprehensive and robust visualisation model and system is introduced.
    The visualisation model consists of four major layers: (i) acquisition and data analysis, (ii) data representation, (iii) user and computer interaction, and (iv) results repositories. There are major contributions in all four layers, particularly in data acquisition and data representation. Multiple-attribute and multidimensional data visualisation techniques are identified in the Enterprise 2.0 and Web 2.0 environment. Transactional tagging and linked data are unearthed, which is a novel contribution in information visualisation. The visualisation model and system is first realised as a tangible software system, which is then validated against large data sets of different types in three experiments. The first experiment is based on the large Royal Mail postcode data set. The second experiment is based on a large transactional data set in an enterprise environment, while the same data set is also processed in a non-enterprise environment. System interaction, facilitated through new mashup techniques, enables users to interact more fluently with the data and the representation layer. The results are exported into various reusable formats and can be retrieved for further comparison and analysis. The information visualisation model introduced in this research is a compact process for data sets of any size and type, which is a major contribution in information visualisation and data analysis. Advanced data representation techniques are employed using various web mashup technologies. New visualisation techniques have emerged from the research, such as transactional tagging visualisation and linked data visualisation. The information visualisation model and system is extremely useful in addressing complex data problems with strategies that are easy to interact with and integrate.
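    The four-layer flow described above can be sketched in outline. The function names, tag-aggregation example, and sample records below are hypothetical illustrations, not the thesis's actual code: the acquisition layer aggregates transactions (here, by tag, echoing the transactional tagging idea), the representation layer ranks them for a tag-cloud style view, and the results-repository layer serialises the output into a reusable format. The interaction layer (iii) is omitted from this sketch.

```python
import json

def acquire(raw_records):
    """Layer (i): acquisition and data analysis - sum amounts per tag."""
    totals = {}
    for rec in raw_records:
        for tag in rec["tags"]:
            totals[tag] = totals.get(tag, 0) + rec["amount"]
    return totals

def represent(totals):
    """Layer (ii): data representation - rank tags for a tag-cloud view."""
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

def export_results(ranked):
    """Layer (iv): results repository - serialise to a reusable format."""
    return json.dumps([{"tag": t, "total": v} for t, v in ranked])

# Invented sample transactions with tags.
records = [
    {"amount": 120, "tags": ["retail", "online"]},
    {"amount": 80,  "tags": ["retail"]},
    {"amount": 200, "tags": ["online"]},
]
ranked = represent(acquire(records))
print(export_results(ranked))
```

Exporting to a neutral format such as JSON is one way the "reusable formats" goal could be met, since the output can be re-loaded later for comparison and further analysis.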