282 research outputs found

    Hydroponics: Creating Food for Today and for Tomorrow

    To analyze how the plant-growing technique of hydroponics is currently being used, and to determine what possible future implications for its usage exist, I have examined research pertaining to this topic. From this research, I have selected information generated by several universities, a professor who is considered an authority on the subject, and the National Aeronautics and Space Administration (NASA). Although there is a plethora of knowledge concerning hydroponics, this paper covers the basics. For the survey of the literature, I focused on gathering information related to the history of hydroponics, to gain a better understanding of how it has been used in the past and how it could be used in the future. I also consider the various types of hydroponic systems. The survey of the literature addresses three main questions: what are the various methods of growing vegetables with hydroponics as compared to traditional soil cultivation; what are the current and projected costs of hydroponics; and what public perceptions of hydroponics, if any, would influence willingness to participate in or support its usage. The results and findings of this paper are relevant for people in every country and demonstrate that, depending on the size of the operation, even people at home could create their own reliable source of food. As the human population continues to increase, it is hoped that this paper will elaborate on an option that is not only effective but also viable for feeding people. Furthermore, it has implications as a potential avenue for growing food for space travel.
    Kayla Siddell. Honors Diploma, Honors College; Cunningham Memorial Library, Indiana State University, Terre Haute. Undergraduate; 21 pages.

    My Made for TV Life or How We Survived My Psycho-Killer Dad

    The critical afterword discusses my struggles writing a memoir after a lifetime of primarily fictional influences, the ethics of truth and memory, and my attempts to find a style that would do justice to my mother's struggles. The memoir began its life as a portrayal of my mother's story, but in the end it became the story of a girl growing up with the knowledge of her father's attempted murder and the strength of her mother's guidance: my story.

    Repository of NSF Funded Publications and Data Sets: "Back of Envelope" 15 year Cost Estimate

    In this back-of-envelope study we calculate the 15-year fixed and variable costs of setting up and running a data repository (or database) to store and serve the publications and datasets derived from research funded by the National Science Foundation (NSF). Costs are computed on a yearly basis using a fixed estimate of the number of papers published each year that list NSF as their funding agency. We assume each paper has one dataset and estimate the size of that dataset based on experience. By our estimates, the number of papers generated each year is 64,340. The average dataset size over all seven directorates of NSF is 32 gigabytes (GB). The total amount of data added to the repository is two petabytes (PB) per year, or 30 PB over 15 years. The architecture of the data/paper repository is based on a hierarchical storage model that uses a combination of fast disk for rapid access and tape for high reliability and cost-efficient long-term storage. Data are ingested through workflows of the kind used in university institutional repositories, which add metadata and ensure data integrity. Average fixed costs are approximately $0.90/GB over the 15-year span. Variable costs are estimated at a sliding scale of $150 to $100 per new dataset for up-front curation, or $4.87 to $3.22 per GB. Variable costs reflect a 3% annual decrease in curation costs, as efficiency and automated metadata and provenance capture are anticipated to help reduce what are now largely manual curation efforts. The total projected cost of the data and paper repository is estimated at $167,000,000 over 15 years of operation, curating close to one million datasets and one million papers. After 15 years and 30 PB of data accumulated and curated, we estimate the cost per gigabyte at $5.56. This $167 million cost is a direct cost in that it does not include federally allowable indirect cost return (ICR). After 15 years, it is reasonable to assume that some datasets will be compressed and rarely accessed. Others may be deemed no longer valuable, e.g., because they are replaced by more accurate results. Therefore, at some point the data growth in the repository will need to be adjusted by use of strategic preservation.
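    As a rough check, the back-of-envelope arithmetic behind these figures can be reproduced in a few lines of Python. The sketch below takes its constants (papers per year, average dataset size, the $167 million total) from the abstract; treating 1 PB as 10^6 GB and rounding 15 years of growth to 30 PB before dividing are assumptions made here, so the derived per-gigabyte value differs from the quoted $5.56 only by rounding.

    # Back-of-envelope check of the repository sizing figures quoted above.
    # Inputs come from the abstract; 1 PB = 1e6 GB and the 30 PB rounding
    # are assumptions made for this illustration.
    PAPERS_PER_YEAR = 64_340        # NSF-funded papers per year (from abstract)
    AVG_DATASET_GB = 32             # average dataset size in GB (from abstract)
    YEARS = 15
    TOTAL_COST_USD = 167_000_000    # projected 15-year direct cost (from abstract)

    gb_per_year = PAPERS_PER_YEAR * AVG_DATASET_GB   # ~2.06e6 GB, i.e. ~2 PB/year
    total_gb = gb_per_year * YEARS                   # ~30.9e6 GB, i.e. ~30 PB
    cost_per_gb = TOTAL_COST_USD / 30e6              # abstract rounds storage to 30 PB

    print(f"Data added per year: {gb_per_year / 1e6:.2f} PB")
    print(f"Data after {YEARS} years: {total_gb / 1e6:.1f} PB")
    print(f"Cost per GB at 30 PB: ${cost_per_gb:.2f}")   # close to the $5.56/GB quoted above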

    The GAMMA Project: A Cooperative Cataloging Venture

    Archival and historical organizations have traditionally suffered from a lack of funding and personnel. One way to combat this classic problem is through the development of collaborative grant-funded projects. By bonding like institutions together and creating a cooperative venture with a common goal, institutions can share funds, personnel, and knowledge in an undertaking that provides assistance to all without placing undue stress upon individual organizations.

    The Data Capsule for Non-Consumptive Research: Final Report

    Digital texts with access and use protections form a unique and fast-growing collection of materials. Growing equally quickly is the development of text and data mining algorithms that process large text-based collections for purposes of exploring the content computationally. There is a strong need for research to establish the foundations for secure computational and data technologies that can ensure a non-consumptive environment for use-protected texts such as the copyrighted works in the HathiTrust Digital Library. Developing a secure computation and data environment for non-consumptive research for the HathiTrust Research Center is funded through a grant from the Alfred P. Sloan Foundation. In this research, researchers at HTRC and the University of Michigan are developing a “data capsule framework” that is founded on a principle of “trust but verify”. The project has resulted in a novel experimental framework that permits analytical investigation of a corpus but prohibits data from leaving the capsule. The HTRC Data Capsule is both a system architecture and a set of policies that enable computational investigation over the protected content of the HT digital repository that is carried out and controlled directly by a researcher.

    Dating of Bush Turkey Rockshelter 3 in the Calvert Ranges establishes Early Holocene Occupation of the Little Sandy Desert, Western Australia

    Systematic excavation of occupied rockshelters that occur in ranges along the Canning Stock Route of the Western Desert has seen the establishment of both a Pleistocene signal (c. 24 ka BP) as well as the fleshing out of a Holocene sequence. Recent dating of a perched rockshelter in the Calvert Ranges, east of the Durba Hills, has provided a Holocene record filling in previous occupational gaps from the Calvert Ranges. The extrapolated basal date of the site is in the order of 12,000 BP. Assemblages from this site illustrate repeated occupation through the Holocene with a notable shift in raw materials procured for artefact production and their technology of manufacture in the last 1000 years. Engraved and pigment art is thought to span the length of occupation of the shelter. The site illustrates a significant increase in the discard of cultural materials during the last 800 years, a trend observed at other desert sites. Much of the pigment art in this shelter seems likely to date to this most recent period.

    Little Big Stories: Case Studies in Diversifying the Archival Record through Community Oral Histories

    The use and development of oral history programs has become a popular way for archives to document events and communities, either as a supplement to traditional records or as discrete collections. In particular, projects that focus on involving groups traditionally underrepresented within the archival record are becoming increasingly common in both large institutions and small community archives. This article presents three case studies of oral history projects dedicated to forging ties in the community and increasing diversity in their collections. In these case studies, the authors discuss the inceptions of their projects and the ups and downs of developing community oral history programs, including building trust, engaging community members, participation of volunteers and students, consideration of alternative models such as story circles, establishment of processes and procedures that can be replicated and sustained, lessons learned, and future steps. The authors also reflect on the impact of unexpected roadblocks, particularly the COVID-19 pandemic. By understanding the ways these elements shape oral history programs, archivists can find new ways to frame those programs around the communities in question, creating more inclusive collections and better serving both institution and community.

    HathiTrust Research Center Data Capsule v1.0: An Overview of Functionality

    The first mode of access to the copyrighted content of the HathiTrust digital repository for the community of digital humanities and informatics researchers and educators will be through extracted statistical and aggregated information about the copyrighted texts. But can the HathiTrust Research Center support scientific research that allows a researcher to carry out their own analysis and extract their own information? This question is the focus of a 3-year, $606,000 grant from the Alfred P. Sloan Foundation (Plale, Prakash 2011-2014), which has resulted in a novel experimental framework that permits analytical investigation of a corpus but prohibits data from leaving the capsule. The HTRC Data Capsule is both a system architecture and a set of policies that enable computational investigation over the protected content of the HT digital repository that is carried out and controlled directly by a researcher. It leverages the foundational security principles of the Data Capsules of A. Prakash of the University of Michigan, which allow privileged access to sensitive data while also restricting the channels through which that data can be released. Ongoing work extends the HTRC Data Capsule to give researchers more compute power at their fingertips. The new thrust, HT-DC Cloud, extends existing security guarantees and features to allow researchers to carry out compute-heavy tasks, like LDA topic modeling, on large-scale compute resources. The HTRC Data Capsule works by giving a researcher their own virtual machine that runs within the HTRC domain. The researcher can configure the VM as they would their own desktop, with their own tools. After they are done, the VM switches into a "secure" mode, where network and other data channels are restricted in exchange for access to the data being protected. Results are emailed to the user. In this talk we discuss the motivations for the HTRC Data Capsule and its successes and challenges. The HTRC Data Capsule runs at Indiana University. See more at http://d2i.indiana.edu/non-consumptive-researc
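    As a rough illustration of the mode-switching policy described in this abstract, the hypothetical Python sketch below models the trade-off between an open network in maintenance mode and access to protected data in secure mode, with results leaving only through a reviewed delivery channel. The class and method names are invented for illustration and are not part of the actual HTRC codebase.

    # Hypothetical sketch of the capsule's mode-switching policy: the VM has
    # either an open network (maintenance) or the protected corpus (secure),
    # never both, and results exit only through a reviewed delivery channel.
    from dataclasses import dataclass

    @dataclass
    class CapsulePolicy:
        mode: str = "maintenance"   # researcher installs tools; no protected data

        def enter_secure_mode(self) -> None:
            # Outbound network and other data channels are closed before the
            # protected corpus becomes available.
            self.mode = "secure"

        @property
        def network_allowed(self) -> bool:
            return self.mode == "maintenance"

        @property
        def protected_data_mounted(self) -> bool:
            return self.mode == "secure"

        def release_results(self, path: str) -> str:
            # The only exit channel in secure mode: results are queued for
            # review and emailed to the researcher, not copied out directly.
            if self.mode != "secure":
                raise RuntimeError("results channel only exists in secure mode")
            return f"queued {path} for review and email delivery"

    capsule = CapsulePolicy()
    capsule.enter_secure_mode()
    assert capsule.protected_data_mounted and not capsule.network_allowed
    print(capsule.release_results("/home/researcher/topic_model_output.csv"))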