
    Single-picture reconstruction and rendering of trees for plausible vegetation synthesis

    State-of-the-art approaches for tree reconstruction either put limiting constraints on the input side (requiring multiple photographs, a scanned point cloud or intensive user input) or provide a representation only suitable for front views of the tree. In this paper we present a complete pipeline for synthesizing and rendering detailed trees from a single photograph with minimal user effort. Since the overall shape and appearance of each tree is recovered from a single photograph of the tree crown, artists can benefit from georeferenced images to populate landscapes with native tree species. A key element of our approach is a compact representation of dense tree crowns through a radial distance map. Our first contribution is an automatic algorithm for generating such representations from a single exemplar image of a tree. We create a rough estimate of the crown shape by solving a thin-plate energy minimization problem, and then add detail through a simplified shape-from-shading approach. The use of seamless texture synthesis results in an image-based representation that can be rendered from arbitrary view directions at different levels of detail. Distant trees benefit from an output-sensitive algorithm inspired by relief mapping. For close-up trees we use a billboard cloud where leaflets are distributed inside the crown shape through a space colonization algorithm. In both cases our representation ensures efficient preservation of the crown shape. Major benefits of our approach include: it recovers the overall shape from a single tree image, it involves no tree modeling knowledge and minimal authoring effort, and the associated image-based representation is easy to compress and thus suitable for network streaming. Peer reviewed. Postprint (author's final draft).
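    The space colonization algorithm mentioned in the abstract can be sketched as follows. This is a minimal illustrative version, not the paper's implementation: all parameters (radii, step length, point counts) are hypothetical, and a unit sphere stands in for the recovered crown shape. Attraction points fill the crown volume; branch nodes repeatedly step toward the average direction of their nearby attractors, and attractors that a node reaches are consumed.

```python
import math
import random

random.seed(0)

ATTRACT_RADIUS = 0.5  # an attractor influences its nearest node within this range
KILL_RADIUS = 0.1     # attractors this close to any node are consumed
STEP = 0.05           # growth step length

def random_point_in_sphere(r=1.0):
    # Rejection sampling: uniform points inside a sphere of radius r.
    while True:
        p = [random.uniform(-r, r) for _ in range(3)]
        if sum(c * c for c in p) <= r * r:
            return p

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

attractors = [random_point_in_sphere() for _ in range(150)]
nodes = [[0.0, -0.8, 0.0]]  # seed node near the crown base

for _ in range(60):
    # Each attractor pulls on its nearest branch node, if close enough.
    pull = {}
    for a in attractors:
        i = min(range(len(nodes)), key=lambda k: dist(nodes[k], a))
        if dist(nodes[i], a) <= ATTRACT_RADIUS:
            d = [x - y for x, y in zip(a, nodes[i])]
            acc = pull.setdefault(i, [0.0, 0.0, 0.0])
            pull[i] = [g + c for g, c in zip(acc, d)]
    if not pull:
        break
    # Grow a new node from every pulled node along the averaged direction.
    for i, d in pull.items():
        n = math.sqrt(sum(c * c for c in d)) or 1.0
        nodes.append([p + STEP * c / n for p, c in zip(nodes[i], d)])
    # Consume attractors that some node has reached.
    attractors = [a for a in attractors
                  if min(dist(node, a) for node in nodes) > KILL_RADIUS]
```

    In a full pipeline the resulting node positions would serve as leaflet placements inside the silhouette-defined crown volume; the sketch above only demonstrates the growth loop.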

    Motion graphics documentary video of Deaf artists of the 21st century

    Deaf art reflects a unique culture in which Deaf people express life experiences that differ from those of hearing people. Deaf art also shows the joy and community among Deaf people with their shared language and experiences, expressed through art that includes painting, sculpture, acting, and writing. In other words, Deaf culture is a celebration where we as Deaf people can bond and share our similar experiences of life struggles in this majority world of hearing people. We often seek out other Deaf artists to connect with and get the sense of “home.” That “sense of home” includes not just gathering in person, but also interacting through communications technologies such as email, websites, blogs, videos, and chat rooms. However, even though there are many examples of videos of Deaf people expressing their deaf experiences in ASL, these were strictly two-dimensional and flat, with limited or no motion graphics. Motion graphics allows for a more lifelike, three-dimensional representation of visual images, an appropriate medium for representing two Deaf artists who use a three-dimensional means of communication: American Sign Language (ASL). Creating this 30-minute three-dimensional motion graphics video documentary about two Deaf artists, Jengy Geller and Carl Lil Bear, and their backgrounds and inspirations has brought the language of ASL to where the audience can appreciate the three-dimensional visual images along with special effects that include a flythrough into virtual worlds of rich, contrasting colors that portray knowledge.

    UNH Offers Series on Native Americans


    Pedestrian Liveness Detection Based on mmWave Radar and Camera Fusion


    Image-based tree variations

    The automatic generation of realistic vegetation closely reproducing the appearance of specific plant species is still a challenging topic in computer graphics. In this paper, we present a new approach to generate new tree models from a small collection of frontal RGBA images of trees. The new models are represented either as single billboards (suitable for still image generation in areas such as architecture rendering) or as billboard clouds (providing parallax effects in interactive applications). Key ingredients of our method include the synthesis of new contours through convex combinations of exemplar contours, the automatic segmentation into crown/trunk classes and the transfer of RGBA colour from the exemplar images to the synthetic target. We also describe a fully automatic approach to convert a single tree image into a billboard cloud by extracting superpixels and distributing them inside a silhouette-defined 3D volume. Our algorithm allows for the automatic generation of an arbitrary number of tree variations from minimal input, and thus provides a fast solution to add vegetation variety in outdoor scenes. Peer reviewed. Postprint (author's final draft).
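    The contour-blending idea can be sketched as a per-vertex convex combination. This is a hypothetical illustration, not the paper's method: it assumes each exemplar contour is star-shaped and sampled at the same angular parameterization, so that blending corresponding vertices with non-negative weights summing to one yields a valid closed contour.

```python
import math

def sample_contour(radius_fn, n=64):
    """Sample a star-shaped contour as n (x, y) points around its centroid."""
    pts = []
    for i in range(n):
        t = 2.0 * math.pi * i / n
        r = radius_fn(t)
        pts.append((r * math.cos(t), r * math.sin(t)))
    return pts

def blend(contours, weights):
    """Convex combination of contours that share one parameterization."""
    assert abs(sum(weights) - 1.0) < 1e-9 and all(w >= 0 for w in weights)
    n = len(contours[0])
    return [(sum(w * c[i][0] for w, c in zip(weights, contours)),
             sum(w * c[i][1] for w, c in zip(weights, contours)))
            for i in range(n)]

# Two toy exemplar crowns: a circle and a slightly lobed contour.
a = sample_contour(lambda t: 1.0)
b = sample_contour(lambda t: 1.0 + 0.3 * math.sin(5 * t))
mix = blend([a, b], [0.75, 0.25])
```

    Varying the weight vector produces a continuous family of in-between silhouettes from a few exemplars, which is the essence of generating tree variations from minimal input.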

    Who owns native nature? Discourses of rights to land, culture, and knowledge in New Zealand

    Michael Brown famously asked ‘Who owns native culture?’ This paper revisits that question by analyzing what happens to culture when the culturally defined boundary between it and nature becomes salient in the context of disputes between indigenous and settler populations. My case study is the dispute between the New Zealand government and Maori tribal groupings concerning ownership of the foreshore and seabed. Having been granted the right to test their claims in court in 2003, Maori groups were enraged when the government legislated the right out of existence in 2004. Though the reasons for doing so were clearly political, contrasting cultural assumptions appeared to set Maori and Pakeha (New Zealanders of European origin) at odds. While couching ownership of part of nature as an IPR issue may seem counter-intuitive, I argue that as soon as a property claim destabilizes the nature/culture boundary, IPR discourse becomes pertinent.