    Learning Beyond the Classroom: Photography as a major design element in graphic design layouts for print and web

    Learning Beyond the Classroom: Photography as a Major Design Element in Graphic Design Layouts for Print and Web is a thesis that explores alternative methods for learning and teaching the design process through shared experiences and expert testimony. The final thesis provides a resource for designers and students to discuss the effective and ineffective application of design principles, along with basic lessons in the use of photography in graphic design. Design elements discussed include, but are not limited to, typography, hierarchy, color, scale, and placement. While the discussions may branch into additional topics, the main categories are effective design examples, ineffective design examples, and lessons. The “Examples” sections discuss what is effective and ineffective in redesigned advertisements and/or web pages; where a design is ineffective, alternative designs are provided. The “Lessons” section contains written and/or video demonstrations on how to create a successful advertisement using photography as the primary design element, along with basic tools for choosing and manipulating photographs. In addition to these online resources, the thesis includes a course outline and syllabus for a potential undergraduate course, “The Use of Photography in Design,” covering all aspects of photography for designers: designing with photography, working with photographers, using stock photography, and manipulating photos for use in designs. Overall, this thesis is intended to help designers and instructors continue to learn and discuss basic design principles in an open environment, outside of the traditional classroom.

    Programmable Image-Based Light Capture for Previsualization

    Previsualization is a class of techniques for creating approximate previews of a movie sequence in order to visualize a scene prior to shooting it on the set. These techniques are often used to convey the artistic direction of the story in terms of cinematic elements such as camera movement, angle, lighting, dialogue, and character motion. Essentially, a movie director uses previsualization (previs) to convey movie visuals as he sees them in his mind's eye. Traditional methods for previs include hand-drawn sketches, storyboards, scaled models, and photographs, which are created by artists to convey how a scene or character might look or move. A recent trend has been to use 3D graphics applications such as video game engines to perform previs, called 3D previs; this type of previs is generally used prior to shooting a scene in order to choreograph camera or character movements. To visualize a scene while it is being recorded on set, directors and cinematographers use a technique called on-set previs, which provides a real-time view with little to no processing. Other types of previs, such as technical previs, emphasize accurately capturing scene properties but lack any interactive manipulation and are usually employed by visual effects crews rather than cinematographers or directors. This dissertation's focus is on creating a new method for interactive visualization that automatically captures the on-set lighting and provides interactive manipulation of cinematic elements, to facilitate the movie maker's artistic expression, validate cinematic choices, and provide guidance to production crews. Our method overcomes the drawbacks of all previous previs methods by combining photorealistic rendering with accurately captured scene details, interactively displayed on a mobile capture and rendering platform. This dissertation describes a new hardware and software previs framework that enables interactive visualization of on-set post-production elements. The three-tiered framework, which is the main contribution of this dissertation, consists of: 1) a novel programmable camera architecture that provides programmability of low-level features and a visual programming interface, 2) new algorithms that analyze and decompose the scene photometrically, and 3) a previs interface that leverages the previous two to perform interactive rendering and manipulation of the photometric and computer-generated elements. For this dissertation we implemented a programmable camera with a novel visual programming interface. We developed the photometric theory and implementation of our novel relighting technique, called Symmetric lighting, which can be used to relight a scene with multiple illuminants with respect to color, intensity, and location on our programmable camera. We analyzed the performance of Symmetric lighting on synthetic and real scenes to evaluate its benefits and limitations with respect to the reflectance composition of the scene and the number and color of lights within the scene. Because our method is based on a Lambertian reflectance assumption, it works well under that assumption, but scenes with large amounts of specular reflection can show higher relighting errors, and additional steps are required to mitigate this limitation. Scenes containing lights whose colors are too similar can also lead to degenerate cases in relighting. Despite these limitations, an important contribution of our work is that Symmetric lighting can also be leveraged to perform multi-illuminant white balancing and light color estimation within a scene with multiple illuminants, without limits on the color range or number of lights. We compared our method to other white balance methods and show that it is superior when at least one of the light colors is known a priori.
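
    As a rough illustration of the Lambertian relighting idea described above, the sketch below re-renders a two-light scene with new light colours. It is a minimal sketch, not the dissertation's Symmetric lighting algorithm: the per-pixel mixing weight, the function and variable names, and the capture procedure are all assumptions made for illustration.

        # Hypothetical sketch of relighting a two-light Lambertian scene; the
        # actual Symmetric lighting algorithm and capture pipeline are not
        # reproduced here.
        import numpy as np

        def relight_two_lights(image, alpha, c1, c2, new_c1, new_c2, eps=1e-6):
            """Re-render `image` as if lit by new light colours.

            image : HxWx3 array observed under lights with colours c1 and c2
            alpha : HxW per-pixel mixing weight of light 1 (assumed known, e.g.
                    recovered by a camera-side photometric decomposition step)
            """
            # Lambertian model: image = albedo * (alpha*c1 + (1-alpha)*c2)
            mix_old = alpha[..., None] * c1 + (1.0 - alpha[..., None]) * c2
            albedo = image / (mix_old + eps)        # recover per-pixel albedo
            mix_new = alpha[..., None] * new_c1 + (1.0 - alpha[..., None]) * new_c2
            return np.clip(albedo * mix_new, 0.0, 1.0)

        # Example: swap a warm key light and a cool fill light.
        img = np.random.rand(4, 4, 3)               # stand-in for a captured frame
        alpha = np.full((4, 4), 0.6)                # stand-in mixing weights
        out = relight_two_lights(img, alpha,
                                 c1=np.array([1.0, 0.9, 0.7]),
                                 c2=np.array([0.6, 0.7, 1.0]),
                                 new_c1=np.array([0.5, 0.6, 1.0]),
                                 new_c2=np.array([1.0, 0.8, 0.6]))

    A sketch like this also hints at why similar light colours are a degenerate case: when c1 and c2 are nearly equal, the mixing weight has almost no effect on the observed pixel and cannot be recovered reliably.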

    Ultrafast Light and Electrons: Imaging the Invisible

    In this chapter, the evolutionary and revolutionary developments of microscopic imaging are overviewed, with a focus on ultrashort light and electron pulses; for simplicity, we shall use the term “ultrafast” for both. From Alhazen’s camera obscura, to Hooke and van Leeuwenhoek’s optical micrography, and on to three- and four-dimensional (4D) electron microscopy, the developments over a millennium have transformed humans’ scope of visualization. The changes in the length and time scales involved are unimaginable, beginning with the visible shadows of candles at the centimeter and second scales, and ending with invisible atoms with space and time dimensions of sub-nanometer and femtosecond, respectively. With these advances it has become possible to determine the structures of matter and to observe their elementary dynamics as they fold and unfold in real time, providing the means for visualizing materials behavior and biological function, with the aim of understanding emergent phenomena in complex systems. Both light and light-generated electrons are now at the forefront of femtosecond and attosecond science and technology, and the scope of applications has reached beyond nuclear motion as electron dynamics become accessible.

    Light field coding with field of view scalability and exemplar-based inter-layer prediction

    Light field imaging based on microlens arrays—also known as holoscopic, plenoptic, or integral imaging—has recently emerged as a feasible and promising technology for future image and video applications. However, deploying actual light field applications will require identifying more powerful representations and coding solutions that support emerging manipulation and interaction functionalities. In this context, this paper proposes a novel scalable coding solution that supports a new type of scalability, referred to as field-of-view scalability. The proposed scalable coding solution comprises a base layer compliant with the High Efficiency Video Coding (HEVC) standard, complemented by one or more enhancement layers that progressively allow richer versions of the same light field content in terms of content manipulation and interaction possibilities. In addition, to achieve high compression performance in the enhancement layers, novel exemplar-based inter-layer coding tools are also proposed, namely: 1) a direct prediction based on exemplar texture samples from lower layers, and 2) an inter-layer compensated prediction using a reference picture built with an exemplar-based texture synthesis algorithm. Experimental results demonstrate the advantages of the proposed scalable coding solution in catering to users with different preferences/requirements in terms of interaction functionalities, while providing better rate-distortion performance (independently of the optical setup used for acquisition) compared to HEVC and other scalable light field coding solutions in the literature.
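
    The abstract does not spell out how views are assigned to layers; the sketch below shows one plausible arrangement of field-of-view scalable layers for a light field given as a grid of sub-aperture views, with a deliberately simplified stand-in for exemplar-based inter-layer prediction. The layer rule, the names, and the prediction step are illustrative assumptions, not the paper's actual coding tools.

        # Illustrative layering for field-of-view scalability on a UxV grid of
        # sub-aperture views; simplified assumptions, not the paper's codec.
        import numpy as np

        def fov_layers(views, num_layers):
            """Split a UxVxHxWx3 view grid into nested field-of-view layers.

            Layer 0 (base) holds only the central view; each enhancement layer
            adds the next concentric ring of views around the centre.
            """
            U, V = views.shape[:2]
            cu, cv = U // 2, V // 2
            layers = [[] for _ in range(num_layers)]
            for u in range(U):
                for v in range(V):
                    ring = max(abs(u - cu), abs(v - cv))   # Chebyshev distance
                    layers[min(ring, num_layers - 1)].append(((u, v), views[u, v]))
            return layers

        def predict_from_lower_layer(decoded, u, v):
            """Trivial stand-in for exemplar-based inter-layer prediction:
            predict a view from the nearest already-decoded lower-layer view."""
            (du, dv), ref = min(decoded,
                                key=lambda item: abs(item[0][0] - u) + abs(item[0][1] - v))
            return ref   # the residual w.r.t. this prediction would then be coded

        # Example: a 9x9 grid of 64x64 views split into a base layer plus two
        # enhancement layers, then a simple prediction of view (3, 5).
        lf = np.random.rand(9, 9, 64, 64, 3)
        layers = fov_layers(lf, num_layers=3)
        prediction = predict_from_lower_layer(layers[0], u=3, v=5)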

    Holographic colour prints for enhanced optical security by combined phase and amplitude control.

    Conventional optical security devices provide authentication by manipulating a specific property of light to produce a distinctive optical signature. For instance, microscopic colour prints modulate the amplitude of light, whereas holograms typically modulate its phase. However, their relatively simple structure and behaviour are easily imitated. We designed a pixel that overlays a structural colour element onto a phase plate to control both the phase and amplitude of light, and arrayed these pixels into monolithic prints that exhibit complex behaviour. Our fabricated prints appear as colour images under white light, while projecting up to three different holograms under red, green, or blue laser illumination. These holographic colour prints are readily verified but challenging to emulate, and can provide enhanced security in anti-counterfeiting applications. As the prints encode information only in the surface relief of a single polymeric material, nanoscale 3D printing of customised masters may enable their mass manufacture by nanoimprint lithography.
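
    To make the combined phase and amplitude control concrete, the sketch below treats each print pixel as one sample of a complex aperture, with amplitude set by its colour filter and phase by its relief height, and previews the projected hologram as the far-field (Fraunhofer) diffraction pattern. This is a generic illustration of the principle; the material constants, array sizes, and random inputs are placeholders, not values or methods from the paper.

        # Minimal sketch: a complex aperture combining per-pixel amplitude and
        # phase, with its far-field hologram previewed via an FFT. All values
        # below are illustrative placeholders.
        import numpy as np

        N = 256
        amplitude = np.random.rand(N, N)                  # per-pixel transmission, 0..1
        height = np.random.rand(N, N) * 600e-9            # phase-plate relief in metres
        wavelength = 532e-9                               # green laser illumination
        delta_n = 0.5                                     # assumed index contrast of the polymer

        phase = 2 * np.pi * delta_n * height / wavelength  # optical phase delay per pixel
        aperture = amplitude * np.exp(1j * phase)          # complex field leaving the print

        # Far-field hologram preview (Fraunhofer approximation).
        far_field = np.fft.fftshift(np.fft.fft2(aperture))
        hologram_intensity = np.abs(far_field) ** 2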

    PlenoPatch: patch-based plenoptic image manipulation

    Patch-based image synthesis methods have been successfully applied to various editing tasks on still images, videos, and stereo pairs. In this work we extend patch-based synthesis to plenoptic images captured by consumer-level lenslet-based devices for interactive, efficient light field editing. In our method the light field is represented as a set of images captured from different viewpoints. We decompose the central view into different depth layers and present it to the user for specifying the editing goals. Given an editing task, our method performs patch-based image synthesis on all affected layers of the central view, and then propagates the edits to all other views. Interaction is done through a conventional 2D image editing user interface that is familiar to novice users. Our method correctly handles object boundary occlusion with semi-transparency, and thus can generate more realistic results than previous methods. We demonstrate compelling results on a wide range of applications such as hole-filling, object reshuffling and resizing, changing object depth, light field upscaling, and parallax magnification.
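
    As a toy illustration of the edit-propagation step described above, the sketch below warps depth layers edited in the central view into the other views by shifting each layer according to a per-layer disparity. It omits the patch-based synthesis, occlusion handling, and semi-transparency treatment that the actual method provides; the names, the linear disparity model, and the axis convention are assumptions.

        # Toy propagation of central-view edits to other light field views by
        # disparity-shifting depth layers; a simplified stand-in, not the
        # paper's algorithm.
        import numpy as np

        def propagate_edit(edited_layers, layer_disparities, view_offsets, shape):
            """Warp edited central-view depth layers into every other view.

            edited_layers     : HxWx4 RGBA layers, ordered back-to-front
            layer_disparities : per-layer disparity in pixels per unit view offset
            view_offsets      : (du, dv) offsets of each view from the centre
            shape             : (H, W) of the output views
            """
            views = []
            for du, dv in view_offsets:
                canvas = np.zeros(shape + (4,))
                for layer, d in zip(edited_layers, layer_disparities):
                    # shift sign/axis convention depends on the camera layout
                    shifted = np.roll(layer, (int(round(d * dv)), int(round(d * du))), axis=(0, 1))
                    a = shifted[..., 3:4]
                    canvas = shifted * a + canvas * (1 - a)   # alpha-over compositing
                views.append(canvas)
            return views

        # Example: a background and a foreground layer propagated to a 3x3 grid
        # of views around the centre.
        H, W = 32, 32
        bg = np.random.rand(H, W, 4); bg[..., 3] = 1.0        # opaque background
        fg = np.zeros((H, W, 4)); fg[10:20, 10:20] = 1.0      # small opaque square
        offsets = [(du, dv) for du in (-1, 0, 1) for dv in (-1, 0, 1)]
        views = propagate_edit([bg, fg], [0.0, 2.0], offsets, (H, W))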

    3-D Cinema: Immersive Media Technology

    3-D cinema is a largely overlooked medium within geographical critique. This omission is notable given the sustained academic consideration afforded to other popular media, the medium’s significant commercial and popular success, and its status as an ‘affective’ and captivating storytelling medium. With reference to film industry advertisements, the experiential dimensions of the 3-D cinematic encounter and its (popular) framing as an ‘immersive’ consumer experience are explored. In particular, the notion of ‘immersion’ is unpacked with reference to the medium’s engineering and production techniques. In so doing, the intertwinement of the industrial desire for an ever more ‘immersive’ and ‘realistic’ consumer experience with engineering techniques exhibiting perceptual mimicry, or what could be termed ‘mimetic engineering’, is explored. The association between 3-D cinema and ‘tactile’ images is then explored with reference to geographic literatures on ‘haptics’ and technologies of touch. A number of recent ‘innovations’ in these fields are drawn upon in order to complicate 3-D cinema’s association with ‘tactility’, and a technological shift towards the increasingly pervasive and sophisticated engagement of the wider multi-sensory palette is traced. Drawing upon recent media technology ‘innovations’, this persistent desire for ever more ‘immersive’ and perceptually convincing media technology is explored in light of developing media geographies.

    Cross-Platform Methods in Computer Graphics That Boost Experimental Film Making

    Computer graphics arts such as animations, video games, and special effects in live-action movies have become essential for people seeking entertainment and education. This study aims to explore the potential of experimental film for presenting scientific theory, as well as to assess different production strategies in 3D image creation. To invite people into abstract or complicated scientific topics more readily, non-narrative film form is a viable way to relay this type of information, and it is crucial to look at how independent filmmakers employ various approaches to fulfill their particular creative purposes. I’ll be demonstrating how these processes worked in making my film, Discontinuity, a short 3-D animated experimental work that attempts to illuminate some of the mysteries of quantum theory for an audience. I plan to use my analysis of the film’s production time, its overall quality, and the feedback it received to build ideas for future research as well as an overall vision for computer graphics arts.

    THE REALISM OF ALGORITHMIC HUMAN FIGURES: A Study of Selected Examples, 1964 to 2001

    It is more than forty years since the first wireframe images of the Boeing Man revealed a stylized human pilot in a simulated pilot's cabin. Since then, it has almost become standard to include scenes in Hollywood movies which incorporate virtual human actors. A trait particularly recognizable in the games industry worldwide is the eagerness to render athletic muscular young men, and young women with hourglass body shapes, traversing dangerous cyberworlds as invincible heroic figures. Tremendous efforts in algorithmic modeling, animation, and rendering are spent to produce a realistic and believable appearance for these algorithmic humans. This thesis develops two main strands of research by interpreting a selection of examples. Firstly, in the computer graphics context, it documents the development over those forty years of the creation of a naturalistic appearance of images (usually called photorealism). In particular, it describes and reviews the impact of key algorithms in the course of the journey of algorithmic human figures towards realism. Secondly, taking a historical perspective, this work provides an analysis of computer graphics in relation to the concept of realism. A comparison of realistic images of human figures throughout history with their algorithmically generated counterparts shows that computer graphics has learned from previous and contemporary art movements such as photorealism, but has also taken elements, symbols, and properties from these art movements out of context with a questionable naivety. Therefore, this work also offers a critique of the justification for their typical conceptualization in computer graphics. Although the astounding technical achievements in the field of algorithmically generated human figures are paralleled by an equally astounding disregard for the history of visual culture, from the beginning in 1964 to the breakthrough in 2001, in the period of the digital information processing machine, a new approach has emerged to meet the apparently incessant desire of humans to create artificial counterparts of themselves. Conversely, the theories of traditional realism have to be extended to include the new problems that these active algorithmic human figures present.

    The Impossible Qualities Of Illusionary Spaces: Stop Motion Animation, Visual Effects And Metalepsis

    This thesis examines stop motion animation, its role as a special effect, and how the stop motion form impacts narrative. In particular, it is concerned with the relationship between stop motion animation and the rhetorical concept of metalepsis, as well as the disruption and transgression of narrative spaces in fiction. The studio component of the work is an installation titled All The Nice Things Come From Here, which uses an early film special effects technique, the Schüfftan process. The Schüfftan process is a form of in-camera compositing that uses mirrors to align two separate spaces to form the illusion of one cohesive space. The installation uses Newcastle’s light industrial landscape as a backdrop to create impossible miniature narrative spaces that can only be understood when the viewer is aligned to a station point forced by the placement of the mirrors. The theoretical portion of the thesis examines how this exploded view of an animated special effect can be used to explore ideas of narrative, narrative layers, and the visual forms of stop motion animation. The thesis argues that object stop motion animation has aspects that are inherently metaleptic, as stop motion’s use of real objects doing impossible things creates its own subtle and impossible metaleptic spaces that simultaneously refer to both the world within the film and the world outside the film.