
    Perceptual Regions of Minnesota

    Economists, demographers, politicians, recreationalists, and others have divided the state of Minnesota into many different regions. In so doing, each claims to represent a meaningful division of the state into districts based upon their own particular viewpoint, need, or bias. Whether these regions are logically derived or merely delineated for convenience can, however, be questioned. Regardless of how the divisions came about, they do, to a degree, represent an abstraction of space. Whether or not the average Minnesotan agrees with, or can even relate to, such abstractions is debatable. The way in which residents perceive the state and mentally delineate subdivisions, or perceptual regions, is of paramount importance. This study attempts to identify and map these perceptual regions of Minnesota as identified by state residents. Perceptual studies of regions at various levels have been conducted by numerous researchers, and the number and scope of such studies reveal their popularity. Inherent in such studies are some basic, though often debated, assumptions. The first is suggested by Arthur Robinson and Barbara Petchenik, who note that all functioning human beings show evidence of possessing spatial concepts and abilities. The second assumption germane to this study is that a strong relationship exists between perceived or discerned images and actual behavior. Humans respond to their milieu as they perceive, interpret, and categorize it through their own experience and knowledge. They are also decision makers who base their decisions upon personal images (perceptions) of the real world. These perceptions result in spatial behavior, defined as any form of human behavior that involves or exhibits an interaction between the individual [or group] and one or more points in space.

    A Collaborative Clearinghouse for Data Management Training and Education Resources

    Objective: The main objectives of this breakout session are for the Data Management Training (DMT) Clearinghouse team to: 1) introduce the Clearinghouse and its current design and implementation, 2) solicit submissions to its learning resource inventory, and 3) collect feedback on its web interface and future development. Features of the Clearinghouse that will be demonstrated include how to search and browse its inventory as well as how to submit a learning resource using the LRMI (Learning Resource Metadata Initiative) metadata format. The team will also share the roadmap for the Clearinghouse's upcoming features. To gather feedback on the Clearinghouse's usability, the team will invite session attendees to test its services and will encourage comments to guide future development.
    Setting/Participants/Resources: Because the DMT Clearinghouse is accessed entirely via the web, the presenter will need a reliable (and preferably free of charge) internet connection and overhead projection capability to demonstrate the Clearinghouse successfully. It would also be useful for attendees to have access to the same internet connection so that, if they wish, they can follow along with the steps of the demonstration and contribute to the Clearinghouse inventory. The main presenter will bring her own laptop with built-in standard HDMI and USB ports, so it would be helpful if an HDMI or USB cable could be provided to connect her laptop to the projection equipment.
    Method: Many research organizations, government agencies, and academic institutions have developed excellent learning resources to support and meet the needs of data management training. However, these resources are often hosted on disparate websites and spread across scientific domains. Consequently, they can be difficult to locate, especially by those who are not already familiar with the creators/authors. This is a barrier to the use and reuse of these resources and can have a significant impact on the promotion and propagation of best practices for data management. To address this need within the Earth sciences, the U.S. Geological Survey's (USGS) Community for Data Integration (CDI), the Federation of Earth Science Information Partners (ESIP), and the Data Observation Network for Earth (DataONE) have collaborated to create a web-based Clearinghouse (http://dmtclearinghouse.esipfed.org) for collecting data management learning resources focused on the Earth sciences. The initial seed funding for the effort was provided by a grant from the USGS CDI earlier in 2016, and ESIP's Drupal site provided the hosting infrastructure for the Clearinghouse. Members from the USGS, DataONE, ESIP's Data Stewardship Committee and its Data Management Training Working Group, Knowledge Motifs LLC, and Blue Dot Lab met regularly between April and October 2016 to discuss, create, and implement the content structure and infrastructure components necessary to build the current revision of the Clearinghouse.
    Results: As a registry of information about educational resources on topics related to research data management (initially focused on the Earth sciences), the Clearinghouse serves as a centralized location for searching or browsing an inventory of these learning resources. Currently, the Clearinghouse offers search and browse functionality that is open to all, and submission of information about educational resources upon login with a free ESIP account. To assist with discoverability, the learning resources are described using the Learning Resource Metadata Initiative (LRMI) schema. Additionally, resources may be associated with the steps of data and research life cycles, such as the USGS CDI's Science Support Framework (https://my.usgs.gov/confluence/display/cdi/CDI+Science+Support+Framework) and DataONE's Data Life Cycle (https://www.dataone.org/data-life-cycle). Leveraging the team's collective experience in creating, presenting, and distributing data management learning resources, the Clearinghouse included resources from USGS, ESIP, and DataONE as its initial inventory, but is expanding to resources from NASA and others. Crowdsourcing is currently the main mechanism for sustaining the Clearinghouse. Going forward, in addition to the built-in workflow that allows anyone from the public to submit descriptive information about data management learning resources they wish to share, future capabilities will be added to enable contributors to review, edit, and rank submissions.
    Discussion/Conclusion: The DMT Clearinghouse team completed the initial development phase on schedule within the first six months of its funding, including some informal usability testing of the interface. The team aims to continue developing and enhancing the Clearinghouse's capabilities, including evaluation of its usability, through collaboration with additional communities and, if feasible, the addition of bulk-loading of learning resources. Presenting the Clearinghouse at the eScience Symposium would not only allow those who are involved with, or would like to learn about, data management to leverage the Clearinghouse's resources, but also connect those who would like to contribute to the project with the Clearinghouse team. Ultimately, the Clearinghouse is designed so that the resources in its inventory can be used in a variety of data management training and education environments. By exposing the Clearinghouse to diverse users and communities, the team can better assess how the Clearinghouse should be updated and what technological enhancements to pursue in order to improve support of research data management training needs.
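    To make the LRMI-based description concrete, the sketch below shows one way a learning resource record might be expressed with common LRMI/schema.org-style properties before submission to a registry such as the Clearinghouse. The property names follow widespread LRMI usage, but the example values, the URL, and the life-cycle tags are illustrative assumptions, not the Clearinghouse's actual submission schema.

```python
import json

# Minimal sketch of a learning resource described with LRMI / schema.org-style
# properties. Field names follow common LRMI usage; the values, URL, and
# life-cycle keywords below are illustrative only, not Clearinghouse terms.
resource = {
    "@context": "https://schema.org",
    "@type": "LearningResource",
    "name": "Introduction to Metadata for Earth Science Data",
    "description": "A self-paced module on creating discovery metadata.",
    "url": "https://example.org/lessons/metadata-intro",  # hypothetical URL
    "learningResourceType": "online course module",
    "educationalUse": "professional development",
    "timeRequired": "PT1H",  # ISO 8601 duration: about one hour
    "inLanguage": "en",
    "license": "https://creativecommons.org/licenses/by/4.0/",
    # Association with data life cycle steps, as described in the abstract;
    # these keyword values are assumptions, not controlled vocabulary terms.
    "about": ["USGS SSF: Describe", "DataONE Data Life Cycle: Document"],
}

print(json.dumps(resource, indent=2))
```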

    Towards a Holistic Approach to Policy Interoperability in Digital Libraries and Digital Repositories

    Underpinning every digital library and digital repository there is a policy framework, which makes the digital library viable; without a policy framework, a digital library is little more than a container for content. Policy governs how a digital library is instantiated and run. It is therefore a meta-domain that is situated both outside the digital library and any technologies used to deliver it, and within the digital library itself. Policy is also a key aspect of digital library and digital repository interoperability in a common and integrated information space. Policy interoperability, that is, the exchange and reuse of policies, is a step beyond policy standardisation. Furthermore, effective and efficient policy frameworks are among the core criteria for digital repositories defined by the Digital Curation Centre (DCC), DigitalPreservationEurope (DPE), nestor, and the Center for Research Libraries (CRL). In this article, we share our research on policy interoperability levels and an experimental survey on policy interoperability conducted with real-life digital libraries, as a contribution towards the definition of a Policy Interoperability Framework.
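    As a purely illustrative aside on what "exchange and reuse of policies" could look like in practice, the sketch below expresses a repository policy as a structured record that another repository could adopt with local overrides. The fields, values, and adoption logic are hypothetical and are not drawn from the Policy Interoperability Framework discussed in the article.

```python
# Hypothetical sketch: a repository policy as a machine-readable record that
# can be exchanged and reused, rather than merely standardised on paper.
retention_policy = {
    "policy_id": "retention-001",   # hypothetical identifier
    "scope": "all deposited datasets",
    "rule": "retain master copies for a minimum number of years",
    "parameters": {"min_retention_years": 10},
    "review_cycle_months": 24,
}

def adopt_policy(policy: dict, local_overrides: dict) -> dict:
    """Reuse another repository's policy, overriding only local parameters."""
    return {**policy, "parameters": {**policy["parameters"], **local_overrides}}

# A second repository reuses the policy but shortens the retention period.
print(adopt_policy(retention_policy, {"min_retention_years": 7}))
```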

    Developing Criteria to Establish Trusted Digital Repositories


    The challenge of archiving and preserving remotely sensed data

    Few would question the need to archive the scientific and technical (S&T) data generated by researchers. At a minimum, the data are needed for change analysis. Likewise, most people would value efforts to ensure the preservation of the archived S&T data; future generations will use analysis techniques not even considered today. Until recently, archiving and preserving these data were usually accomplished within existing infrastructures and budgets. As the volume of archived data increases, however, organizations charged with archiving S&T data will be increasingly challenged (U.S. General Accounting Office, 2002). The U.S. Geological Survey has had experience in this area and has developed strategies to deal with the mountain of land remote sensing data currently being managed and the tidal wave of expected new data. The Agency has dealt with archiving issues such as selection criteria, purging, advisory panels, and data access, and has confronted preservation challenges involving photographic and digital media. That experience has allowed the USGS to develop management approaches, which this paper outlines.

    The Evolution, Approval and Implementation of the U.S. Geological Survey Science Data Lifecycle Model

    This paper details how the U.S. Geological Survey (USGS) Community for Data Integration (CDI) Data Management Working Group developed a Science Data Lifecycle Model, and the role the Model plays in shaping agency-wide policies and data management applications. Starting with an extensive literature review of existing data lifecycle models, representatives from various backgrounds in the USGS attended a two-day meeting where the basic elements of the Science Data Lifecycle Model were determined. Refinements and reviews spanned two years, leading to finalization of the Model and documentation in a formal agency publication. The Model serves as a critical framework for data management policy, instructional resources, and tools. It helps the USGS address both the Office of Science and Technology Policy (OSTP) directive for increased public access to federally funded research and the Office of Management and Budget (OMB) 2013 Open Data directives, serving as the foundation for a series of agency policies related to data management planning, metadata development, data release procedures, and the long-term preservation of data. Additionally, the agency website devoted to data management instruction and best practices (www2.usgs.gov/datamanagement) is designed around the Model's structure and concepts. This paper also illustrates how the Model is being used to develop tools for supporting USGS research and data management processes.
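    The sketch below shows one way a planning or policy tool might encode lifecycle stages so that checklist items or policy requirements can be attached to each step. The primary stage names and cross-cutting elements follow the USGS Science Data Lifecycle Model as commonly cited; the data structure and the checklist items themselves are illustrative assumptions, not part of the published model.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: lifecycle stages as a structure a data management
# planning tool might traverse. Stage names follow the USGS Science Data
# Lifecycle Model as commonly cited; checklist items are hypothetical.
@dataclass
class Stage:
    name: str
    checklist: list[str] = field(default_factory=list)

PRIMARY_STAGES = [
    Stage("Plan", ["Draft data management plan", "Identify target repository"]),
    Stage("Acquire", ["Record provenance for each data source"]),
    Stage("Process", ["Document processing steps and software versions"]),
    Stage("Analyze", ["Link analysis outputs to input datasets"]),
    Stage("Preserve", ["Select archival formats", "Assign persistent identifiers"]),
    Stage("Publish/Share", ["Complete metadata record", "Obtain data release approval"]),
]

# Cross-cutting elements apply across every primary stage.
CROSS_CUTTING = ["Describe (Metadata, Documentation)", "Manage Quality", "Backup & Secure"]

for stage in PRIMARY_STAGES:
    print(f"{stage.name}: {', '.join(stage.checklist)}")
```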

    Co-Author Affiliation

    Organizations that create, preserve, and provide access to large volumes of scientific and technical data must plan strategically to ensure that the collections remain viable and accessible. The variety and number of data choices are expanding quickly, which lends credence to an appraisal process that can help direct resources toward collections that have been approved through a formal records management process. This approach results in retention decisions that are more defensible, and it helps in tight budget situations by identifying and prioritizing the most pertinent scientific and technical collections. This paper expands upon this records management principle and is based on two years of practical use and experience with the process, aided by a public software tool developed to assist the appraisal.
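    To illustrate the kind of prioritization an appraisal process can support, the sketch below combines per-criterion ratings into a single retention-priority score. The criteria, weights, and example collections are invented for illustration; the abstract does not describe the actual scoring method used by the public software tool it mentions.

```python
# Hypothetical weighted scoring for collection appraisal. The criteria,
# weights, and example collections below are illustrative only.
CRITERIA_WEIGHTS = {
    "scientific_value": 0.4,    # usefulness for future research and change analysis
    "uniqueness": 0.3,          # can the data be recollected or rederived?
    "usage": 0.2,               # demonstrated demand for access
    "preservation_cost": -0.1,  # higher cost lowers retention priority
}

def appraisal_score(collection: dict) -> float:
    """Combine per-criterion ratings (0-10) into a single priority score."""
    return sum(weight * collection.get(criterion, 0)
               for criterion, weight in CRITERIA_WEIGHTS.items())

collections = [
    {"name": "Aerial film archive", "scientific_value": 9, "uniqueness": 10,
     "usage": 7, "preservation_cost": 8},
    {"name": "Duplicate field notebooks", "scientific_value": 4, "uniqueness": 2,
     "usage": 1, "preservation_cost": 3},
]

for c in sorted(collections, key=appraisal_score, reverse=True):
    print(f"{c['name']}: priority score {appraisal_score(c):.1f}")
```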