1,881 research outputs found

    Essays in Visual History: Making Use of the International Mission Photography Archive

    An extraordinary resource for comparative research in the humanities can be found in the historical images that comprise the International Mission Photography Archive (IMPA). The 62,000 photographs presently in the database represent cultures across Africa, India, China, Korea, Japan, Oceania, the Caribbean, and Papua New Guinea. The requested NEH Level 1 start-up grant will support a workshop devoted to the design of a series of visual essays authored by accomplished scholars who will use images from IMPA to explore topics in their areas of expertise. Called Essays in Visual History, the series will be hosted by the USC Digital Library and featured on the website of the Center for Religion and Civic Culture (CRCC). The workshop will explore relationships with other publication initiatives at USC, specifically those under development by the Center for Transformative Scholarship and The Alliance for Networking Visual Culture, which offer opportunities to maximize the visibility of the proposed series.

    Bye


    Truth


    Walt Whitman and Abram S. Hewitt: A Previously Unknown Connection


    The Woods


    Mark Doty. What Is the Grass: Walt Whitman in My Life

    Review of Mark Doty. What Is the Grass: Walt Whitman in My Life

    Three-Dimensional Scene Reconstruction Using Multiple Microsoft Kinects

    The Microsoft Kinect represents a leap forward in cheap, consumer-friendly depth-sensing cameras. Through the use of the depth information as well as the accompanying RGB camera image, it becomes possible to represent the scene that the camera sees as a three-dimensional geometric model. In this thesis, we explore how to obtain useful data from the Kinect and how to use it to create a three-dimensional geometric model of the scene. We develop and test multiple ways of improving the depth information received from the Kinect in order to create smoother three-dimensional models. We use OpenGL to create a polygonal model combining the RGB camera image and depth values. Finally, we explore the possibility of combining the three-dimensional models from two Kinects to create a better representation of the scene.
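
    The reconstruction pipeline the abstract outlines begins by turning each Kinect depth frame into 3-D geometry before any meshing or multi-camera merging. The sketch below illustrates that first step, assuming standard pinhole back-projection; the function name, the synthetic test frame, and the intrinsic values (rough figures for the original 640x480 Kinect depth sensor, not calibrated ones) are illustrative and not taken from the thesis.

    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // One reconstructed 3-D point with the RGB colour sampled at the same pixel.
    struct Point3D {
        float x, y, z;
        std::uint8_t r, g, b;
    };

    // Back-project a depth frame into camera-space points with the pinhole model.
    // fx, fy, cx, cy are assumed, uncalibrated values; a real pipeline would use
    // per-device calibration.
    std::vector<Point3D> depthToPointCloud(const std::vector<std::uint16_t>& depthMm,
                                           const std::vector<std::uint8_t>& rgb,
                                           int width, int height) {
        const float fx = 594.2f, fy = 591.0f;   // focal lengths in pixels (assumed)
        const float cx = 339.3f, cy = 242.7f;   // principal point (assumed)

        std::vector<Point3D> cloud;
        cloud.reserve(static_cast<std::size_t>(width) * height);

        for (int v = 0; v < height; ++v) {
            for (int u = 0; u < width; ++u) {
                std::uint16_t d = depthMm[static_cast<std::size_t>(v) * width + u];
                if (d == 0) continue;            // the Kinect reports 0 for "no reading"

                float z = d * 0.001f;            // millimetres -> metres
                Point3D p;
                p.x = (static_cast<float>(u) - cx) * z / fx;  // pinhole back-projection
                p.y = (static_cast<float>(v) - cy) * z / fy;
                p.z = z;

                std::size_t i = (static_cast<std::size_t>(v) * width + u) * 3;
                p.r = rgb[i]; p.g = rgb[i + 1]; p.b = rgb[i + 2];
                cloud.push_back(p);
            }
        }
        return cloud;
    }

    int main() {
        // Tiny synthetic 4x3 frame standing in for a real Kinect capture.
        const int w = 4, h = 3;
        std::vector<std::uint16_t> depth(w * h, 1500);  // every pixel 1.5 m away
        std::vector<std::uint8_t> rgb(w * h * 3, 128);  // uniform grey image

        std::vector<Point3D> cloud = depthToPointCloud(depth, rgb, w, h);
        std::printf("reconstructed %zu points; first point: (%.3f, %.3f, %.3f)\n",
                    cloud.size(), cloud[0].x, cloud[0].y, cloud[0].z);
        return 0;
    }

    The resulting point cloud is what a later stage would triangulate into the polygonal model rendered with OpenGL, and what a multi-camera step would align and merge across the two Kinects mentioned in the abstract.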