
    EVA agent table

    The purpose of the EVA agent table project is to develop a tool that supports architectural and urban design by providing pedestrian feedback information. Although many pedestrian simulation programs exist, none offer a physical interface for interacting with designers while they are designing. This project therefore employs a pen-and-paper sketching interface for interacting with pedestrian simulations. Because designers are familiar with this traditional interface, they can sketch naturally while interacting with the simulation. One advantage of adding this simulation is a reduction in the time and cost invested in the design process, as designers can adjust their design immediately. However, sketching is a thinking process that designers use to communicate with themselves; if feedback about pedestrian movement interferes with designers' thinking while sketching, it will not be useful at all. Hence, the hypothesis to be tested is that the movement of the pedestrian simulation does not interfere with designers' thinking but instead helps them evaluate their design. The hypothesis is tested by having designers use the EVA agent table to design. The experiment also shows how designers' sketching interacts with the real-time pedestrian simulation. Adding this feedback information to a sketch has a beneficial effect on designers because it facilitates the design process.
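
    The abstract does not give implementation details of the simulation itself. As a rough, hypothetical illustration of the kind of real-time feedback loop it describes, the following Python sketch updates simple pedestrian agents that walk toward a goal while avoiding sketched obstacle points; all names and parameters are illustrative assumptions, not taken from the EVA agent table project.

    import numpy as np

    def step_agents(positions, goal, obstacles, speed=1.0, avoid_radius=2.0, dt=0.1):
        # One update of a toy pedestrian model (hypothetical, not the project's
        # actual simulation): agents head for a shared goal and are pushed away
        # from sketched obstacle points.
        to_goal = goal - positions
        to_goal /= np.linalg.norm(to_goal, axis=1, keepdims=True) + 1e-9

        repulsion = np.zeros_like(positions)
        for obs in obstacles:
            diff = positions - obs                               # obstacle -> agent vectors
            dist = np.linalg.norm(diff, axis=1, keepdims=True) + 1e-9
            near = (dist < avoid_radius).astype(float)           # only nearby agents react
            repulsion += near * diff / dist**2                   # inverse-square push

        velocity = speed * to_goal + repulsion
        return positions + dt * velocity

    # Example: five agents walking toward (10, 0) around one sketched obstacle at (5, 0).
    agents = np.random.rand(5, 2) * 2.0
    for _ in range(100):
        agents = step_agents(agents, goal=np.array([10.0, 0.0]),
                             obstacles=[np.array([5.0, 0.0])])

    In a system like the one described, the obstacle list would be refreshed from the recognized pen strokes on every redraw, so the agents react to the sketch as it evolves.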

    Programmable Image-Based Light Capture for Previsualization

    Previsualization is a class of techniques for creating approximate previews of a movie sequence in order to visualize a scene prior to shooting it on the set. These techniques are often used to convey the artistic direction of the story in terms of cinematic elements such as camera movement, angle, lighting, dialogue, and character motion. Essentially, a movie director uses previsualization (previs) to convey movie visuals as he sees them in his mind's eye. Traditional methods for previs include hand-drawn sketches, storyboards, scaled models, and photographs, which are created by artists to convey how a scene or character might look or move. A recent trend is to use 3D graphics applications such as video game engines to perform previs, which is called 3D previs. This type of previs is generally used prior to shooting a scene in order to choreograph camera or character movements. To visualize a scene while it is being recorded on set, directors and cinematographers use a technique called on-set previs, which provides a real-time view with little to no processing. Other types of previs, such as technical previs, emphasize accurately capturing scene properties but lack interactive manipulation; they are usually employed by visual effects crews rather than by cinematographers or directors.

    This dissertation focuses on creating a new method for interactive visualization that automatically captures the on-set lighting and provides interactive manipulation of cinematic elements, in order to facilitate the movie maker's artistic expression, validate cinematic choices, and provide guidance to production crews. Our method overcomes the drawbacks of all previous previs methods by combining photorealistic rendering with accurately captured scene details, displayed interactively on a mobile capture and rendering platform. The dissertation describes a new hardware and software previs framework that enables interactive visualization of on-set post-production elements. The three-tiered framework, which is the main contribution of this dissertation, consists of: 1) a novel programmable camera architecture that provides programmability of low-level features and a visual programming interface, 2) new algorithms that analyze and decompose the scene photometrically, and 3) a previs interface that leverages the previous two to perform interactive rendering and manipulation of the photometric and computer-generated elements. For this dissertation we implemented a programmable camera with a novel visual programming interface. We developed the photometric theory and implementation of our novel relighting technique, called Symmetric lighting, which can be used on our programmable camera to relight a scene with multiple illuminants with respect to color, intensity, and location. We analyzed the performance of Symmetric lighting on synthetic and real scenes to evaluate its benefits and limitations with respect to the reflectance composition of the scene and the number and color of lights within the scene. We found that, since our method is based on a Lambertian reflectance assumption, it works well under that assumption, but that scenes with large amounts of specular reflection can show higher relighting errors, and additional steps are required to mitigate this limitation. Also, scenes containing lights whose colors are too similar can lead to degenerate cases in terms of relighting.

    Despite these limitations, an important contribution of our work is that Symmetric lighting can also be leveraged as a solution for multi-illuminant white balancing and light color estimation within a scene with multiple illuminants, without limits on the color range or number of lights. We compared our method to other white balance methods and show that it is superior when at least one of the light colors is known a priori.
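
    The Symmetric lighting algorithm itself is not spelled out in the abstract. The following Python sketch only illustrates, under the same Lambertian assumption mentioned above, how a per-pixel decomposition into two light contributions could be used for relighting and multi-illuminant white balancing; the function, the alpha map, and the light colors are hypothetical stand-ins, not the dissertation's actual method.

    import numpy as np

    def relight_two_lights(image, alpha, light_a, light_b, new_light_a):
        # Toy two-illuminant relighting under a Lambertian assumption (an
        # illustrative stand-in, not the dissertation's Symmetric lighting).
        #   image:   H x W x 3 linear-RGB image lit by lights A and B
        #   alpha:   H x W per-pixel fraction of illumination due to light A
        #            (assumed known from some photometric decomposition)
        #   light_*: RGB colors of the lights
        alpha = alpha[..., None]                       # broadcast over the RGB channels
        old_mix = alpha * light_a + (1.0 - alpha) * light_b
        new_mix = alpha * new_light_a + (1.0 - alpha) * light_b
        # Divide out the old illuminant mix (a per-pixel white balance),
        # then apply the new mix.
        return image * new_mix / (old_mix + 1e-9)

    # Example: warm key light A and cool fill light B; re-render A as pure white.
    img = np.random.rand(4, 4, 3)
    alpha_map = np.full((4, 4), 0.7)
    out = relight_two_lights(img, alpha_map,
                             light_a=np.array([1.0, 0.8, 0.6]),
                             light_b=np.array([0.6, 0.7, 1.0]),
                             new_light_a=np.array([1.0, 1.0, 1.0]))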

    Holoimages on Diffraction Screens


    Visualizing individual microtubules using bright-field microscopy

    Microtubules are filament-shaped polymeric proteins (~25 nm in diameter) involved in cellular structure and organization. We demonstrate the imaging of individual microtubules using a conventional bright-field microscope, without any additional phase or polarization optics. Light scattered by microtubules is discriminated through extensive use of digital image processing, which removes background, reduces noise, and enhances contrast. The setup builds on a commercial microscope with the inclusion of a minimal and inexpensive set of components, suitable for implementation in the student laboratory. We show how this technique can be applied to a demonstrative biophysical assay by tracking the motions of microtubules driven by the motor protein kinesin.
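
    As a hedged illustration of the processing steps named above (background removal, noise reduction, contrast enhancement), here is a minimal Python sketch using NumPy and SciPy. The exact filters and parameters used in the paper are not given in the abstract, so everything here is an assumption.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def enhance_frames(frames, sigma=1.0, p_low=1, p_high=99):
        # frames: (T, H, W) stack of bright-field images as floats.
        # 1) estimate a static background as the temporal median,
        # 2) subtract it so only moving / scattering structures remain,
        # 3) smooth lightly to suppress noise,
        # 4) stretch contrast between low and high percentiles.
        background = np.median(frames, axis=0)
        out = []
        for frame in frames:
            diff = gaussian_filter(frame - background, sigma)
            lo, hi = np.percentile(diff, [p_low, p_high])
            out.append(np.clip((diff - lo) / (hi - lo + 1e-9), 0.0, 1.0))
        return np.stack(out)

    # Example with synthetic data standing in for a recorded image stack.
    stack = np.random.rand(20, 128, 128)
    enhanced = enhance_frames(stack)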

    Vision Science and Technology at NASA: Results of a Workshop

    A broad review is given of vision science and technology within NASA. The subject is defined, and its applications both within NASA and in the nation at large are noted. A survey of current NASA efforts is given, noting strengths and weaknesses of the NASA program.

    Segmentation of the glottal space from laryngeal images using the watershed transform

    The present work describes a new method for the automatic detection of the glottal space from laryngeal images obtained either with high-speed or with conventional video cameras attached to a laryngoscope. The detection is based on a combination of several relevant digital image processing techniques: the image is segmented with a watershed transform followed by region merging, and the final decision is taken using a simple linear predictor. This scheme successfully segmented the glottal space in all of the test images used. The method can be considered a generalist approach to segmenting the glottal space because, in contrast with other methods found in the literature, it requires neither initialization nor strict environmental conditions extracted from the images to be processed. The main advantage is therefore that the user does not have to outline the region of interest with a mouse click. Some a priori knowledge about the glottal space is still needed, but it can be considered weak compared with the environmental conditions fixed in earlier works.
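
    A minimal sketch of a watershed-based segmentation in Python (using scikit-image) is given below. The marker thresholds and the final region-selection rule are hypothetical stand-ins for the paper's region merging and linear predictor, which are not detailed in the abstract.

    import numpy as np
    from scipy import ndimage as ndi
    from skimage.filters import sobel
    from skimage.segmentation import watershed

    def segment_glottis(gray):
        # gray: 2D grayscale laryngeal image scaled to [0, 1].
        gradient = sobel(gray)                 # region boundaries show up as high gradient

        # Seed markers: very dark pixels as glottal-gap candidates, very bright
        # pixels as surrounding tissue (thresholds are illustrative).
        markers = np.zeros(gray.shape, dtype=int)
        markers[gray < 0.2] = 1
        markers[gray > 0.7] = 2
        labels = watershed(gradient, markers)

        # Crude decision rule standing in for region merging + linear predictor:
        # keep the largest connected dark region.
        dark = labels == 1
        components, n = ndi.label(dark)
        if n == 0:
            return np.zeros(gray.shape, dtype=bool)
        sizes = ndi.sum(dark, components, index=np.arange(1, n + 1))
        return components == (np.argmax(sizes) + 1)

    # Example with a synthetic image standing in for a laryngoscope frame.
    mask = segment_glottis(np.random.rand(256, 256))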

    Visualizing the Motion Flow of Crowds

    In modern cities, massive populations cause problems such as congestion, accidents, violence, and crime. Video surveillance systems such as closed-circuit television cameras are widely used by security guards to monitor human behaviors and activities in order to manage, direct, or protect people. Given the quantity and prolonged duration of the recorded videos, examining these recordings and keeping track of activities and events requires a huge amount of human resources. In recent years, new techniques in the computer vision field have lowered the barrier to entry, allowing developers to experiment more with intelligent video surveillance systems. Unlike previous research, this dissertation does not address algorithm design concerns related to object detection or object tracking. Instead, it focuses on the technological side, applying data visualization methodologies to build a model for detecting anomalies. It aims to provide an understanding of how pedestrian behavior in video can be examined, and how anomalies or abnormal cases can be identified, using data visualization techniques.
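
    The dissertation's specific visualization pipeline is not described in the abstract. A common way to visualize crowd motion flow is dense optical flow rendered as a color map, sketched below in Python with OpenCV; the frame sources and parameters are illustrative assumptions.

    import cv2
    import numpy as np

    def motion_flow_image(prev_gray, next_gray):
        # Dense optical flow between two consecutive grayscale frames
        # (Farneback); hue encodes direction, brightness encodes magnitude.
        flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])

        hsv = np.zeros((prev_gray.shape[0], prev_gray.shape[1], 3), dtype=np.uint8)
        hsv[..., 0] = (ang * 180 / np.pi / 2).astype(np.uint8)   # direction -> hue
        hsv[..., 1] = 255                                        # full saturation
        hsv[..., 2] = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)

    # Example: two consecutive frames from a surveillance clip (synthetic here).
    frame_a = np.random.randint(0, 256, (240, 320), dtype=np.uint8)
    frame_b = np.random.randint(0, 256, (240, 320), dtype=np.uint8)
    flow_vis = motion_flow_image(frame_a, frame_b)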

    Application of 3ds Max for 3D Modelling and Rendering

    In this article, the application of 3ds Max for 3D modelling and rendering of a car model is described. The process of creating a 3D car model is explained, including setting up the references, working with Editable Poly, detailing the car interior, and using the TurboSmooth and Symmetry modifiers. The manner in which materials are applied to the model is described, as well as lighting the scene and setting up the render. The rendering methods and techniques are also described. Final render results from several rendering plugins, such as V-Ray, Mental Ray, Iray, Scanline, Maxwell, Corona, Octane, and LuxRender, are presented and compared.

    3D + time blood flow mapping using SPIM-microPIV in the developing zebrafish heart

    We present SPIM-μPIV as a flow imaging system capable of measuring in vivo flow information with 3D micron-scale resolution. Our system was validated using a phantom experiment consisting of a flow of beads in a 50 μm diameter FEP tube. Then, with the help of optical gating techniques, we obtained 3D + time flow fields throughout the full heartbeat in a ~3 day old zebrafish larva using fluorescent red blood cells as tracer particles. From this we were able to recover 3D flow fields at 31 separate phases in the heartbeat. From our measurements of this specimen, we found the net pumped blood volume through the atrium to be 0.239 nL per beat. SPIM-μPIV enables high-quality in vivo measurements of flow fields that will be valuable for studies of heart function and fluid-structure interaction in a range of small-animal models.
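
    For readers unfamiliar with particle image velocimetry (PIV), the core operation is estimating particle displacement between two interrogation windows via cross-correlation. The following Python sketch shows that single step in its simplest form; it is an illustrative toy, not the SPIM-μPIV processing pipeline.

    import numpy as np

    def window_displacement(window_a, window_b):
        # Estimate the shift between two interrogation windows from the peak
        # of their cross-correlation, computed via the FFT (a toy single-window
        # PIV step, not the SPIM-microPIV pipeline).
        a = window_a - window_a.mean()
        b = window_b - window_b.mean()
        corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
        corr = np.fft.fftshift(corr)
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        center = np.array(corr.shape) // 2
        return center - np.array(peak)             # (dy, dx) displacement in pixels

    # Example: window_b is window_a shifted by 3 pixels along x.
    rng = np.random.default_rng(0)
    win_a = rng.random((32, 32))
    win_b = np.roll(win_a, shift=3, axis=1)
    print(window_displacement(win_a, win_b))       # expect [0 3]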