40 research outputs found

    Screen Space Reconfigured

    Screen Space Reconfigured is the first edited volume that critically and theoretically examines the many novel renderings of space brought to us by 21st-century screens. Exploring key cases such as post-perspectival space, 3D, vertical framing, haptics, and layering, this volume takes stock of emerging forms of screen space and spatialities as they move from the margins to the centre of contemporary media practice. Recent years have seen a marked scholarly interest in spatial dimensions and conceptions of moving image culture, with some theorists claiming that a 'spatial turn' has taken place in media studies and screen practices alike. Yet this is the first book-length study dedicated to on-screen spatiality as such. Spanning mainstream cinema, experimental film, video art, mobile screens, and stadium entertainment, the volume includes contributions from such acclaimed authors as Giuliana Bruno and Tom Gunning as well as a younger generation of scholars.

    The benefits of an additional practice in a descriptive geometry course: a non-obligatory workshop at the Faculty of Civil Engineering in Belgrade

    At the Faculty of Civil Engineering in Belgrade, a non-obligatory workshop named the “facultative task” has been held in the Descriptive Geometry (DG) course for three generations of freshman students, with the aim of giving students the opportunity to earn a higher final grade on the exam. The content of this workshop was a creative task, performed by groups of three students, offering a free choice of topic, i.e. a geometric structure associated with some real or imaginary architectural or artistic object. After the workshops a questionnaire (composed by the professors of the course) was given to the students in order to get their response on the teaching/learning materials for the DG course and the workshop. During the workshop the students also performed one of the common tests of spatial abilities, known as “paper folding”. Based on the results of the questionnaire, the linkages between students’ final achievements and their spatial abilities are investigated, as well as students’ expectations of their performance on the exam and how the students’ capacity to correctly estimate their grades was associated with the expected and final grades. The goal was to provide evidence that creative work performed by a small group of students, together with self-assessment of their performance, is a good way of helping students to maintain motivation and improve their achievement. The final conclusion addresses the benefits of employing additional workshops in the course, which is confirmed by higher final scores and grades, the achievement of creative results (facultative tasks), and confirmation of the adoption of DG knowledge.
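
    The linkage analysis described above is essentially a correlation study between test scores and grades. As a rough illustration of such an analysis (a generic sketch with invented data and variable names, not the authors' actual procedure), the relationship between paper-folding scores, expected grades and final grades could be quantified like this:

```python
import numpy as np
from scipy import stats

# Hypothetical per-student data: paper-folding test scores and exam grades.
paper_folding = np.array([12, 15, 9, 18, 14, 11, 16, 13])
final_grade = np.array([7, 8, 6, 10, 8, 6, 9, 7])
expected_grade = np.array([8, 8, 7, 10, 9, 7, 9, 8])

# Spearman rank correlation is a common choice for ordinal grade data.
rho, p_value = stats.spearmanr(paper_folding, final_grade)
print(f"spatial ability vs. final grade: rho = {rho:.2f}, p = {p_value:.3f}")

# Accuracy of the students' self-assessment: expected minus achieved grade.
estimation_error = expected_grade - final_grade
print(f"mean over-estimation: {estimation_error.mean():.2f} grade points")
```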

    Contemporary visualization and modelling technologies and techniques for the design of green roofs

    Contemporary design solutions are blurring the boundaries between the real and the virtual world. Landscape architecture, like other interdisciplinary fields, has stepped into the area of contemporary technologies, where, besides the good execution of works, design solutions have to be more realistic and “touchable”. The opportunities provided by Virtual Reality are certainly not negligible: designs around the world are already presented in this way, and Virtual Reality is increasingly used. Following the example of the application of virtual reality in landscape architecture, this paper deals with proposals for the use of virtual reality in landscape architecture so that designers, clients and users can get a virtual sense of the scope of, e.g., a rooftop garden, urban areas, parks, roads, etc. A programming language is used to create a series of images forming a whole, so that individual parts can be controlled or even modified in VR. Virtual reality today requires a specific device, such as the Oculus, HTC Vive, Samsung Gear VR and similar headsets. The aim of this paper is to acquire new theoretical and practical knowledge in the interdisciplinary field of virtual reality and its display capabilities, and to present, through a brief overview, the plant species used in the design and construction of an intensive roof garden in a Mediterranean climate, the basic characteristics of roof gardens, and the benefits they bring. Virtual and augmented reality technology is a very powerful tool for landscape architects when modelling roof gardens, parks, and urban areas. One of the most popular tools used by landscape architects is Google Tilt Brush, which enables fast modelling. The Google Tilt Brush VR app allows modelling in three-dimensional virtual space using a palette and a three-dimensional brush. The two "programmed" realities - virtual reality and augmented reality - are often confused. One thing they have in common, though, is VRML, the Virtual Reality Modeling Language. This paper shows ways in which this issue can be addressed, introduces the term Virtual Reality (VR) and the opportunities that virtual reality offers, and also presents the conditions of the Mediterranean climate, the conceptual solution, and the plant species to be used in the execution of the intensive green roof on the motel “Marković”.

    6th International Meshing Roundtable '97


    Bringing the Physical to the Digital

    This dissertation describes an exploration of digital tabletop interaction styles, with the ultimate goal of informing the design of a new model for tabletop interaction. In the context of this thesis the term digital tabletop refers to an emerging class of devices that afford many novel ways of interacting with the digital by allowing users to directly touch information presented on large, horizontal displays. Being a relatively young field, many developments are in flux; hardware and software change at a fast pace and many interesting alternative approaches are available at the same time. In our research we are especially interested in systems that are capable of sensing multiple contacts (e.g., fingers) and richer information such as the outline of whole hands or other physical objects. New sensor hardware enables new ways to interact with the digital. When embarking on the research for this thesis, the question of which interaction styles could be appropriate for this new class of devices was an open question with many equally promising answers. Many everyday activities rely on our hands' ability to skillfully control and manipulate physical objects. We seek to open up different possibilities to exploit our manual dexterity and provide users with richer interaction possibilities. This could be achieved through the use of physical objects as input mediators or through virtual interfaces that behave in a more realistic fashion. In order to gain a better understanding of the underlying design space we chose an approach organized into two phases. First, two different prototypes, each representing a specific interaction style – namely gesture-based interaction and tangible interaction – have been implemented. The flexibility of use afforded by the interface and the level of physicality afforded by the interface elements are introduced as criteria for evaluation. Each approach's suitability to support the highly dynamic and often unstructured interactions typical for digital tabletops is analyzed based on these criteria. In a second stage the learnings from these initial explorations are applied to inform the design of a novel model for digital tabletop interaction. This model is based on the combination of rich multi-touch sensing and a three-dimensional environment enriched by a gaming physics simulation. The proposed approach enables users to interact with the virtual through richer quantities such as collision and friction, enabling a variety of fine-grained interactions using multiple fingers, whole hands and physical objects. Our model makes digital tabletop interaction even more “natural”. However, because the interaction – the sensed input and the displayed output – is still bound to the surface, there is a fundamental limitation in manipulating objects using the third dimension. To address this issue, we present a technique that allows users to – conceptually – pick objects off the surface and control their position in 3D. Our goal has been to define a technique that completes our model for on-surface interaction and allows for “as-direct-as-possible” interactions. We also present two hardware prototypes capable of sensing the users' interactions beyond the table's surface. Finally, we present visual feedback mechanisms to give the users the sense that they are actually lifting the objects off the surface. This thesis contributes on various levels. We present several novel prototypes that we built and evaluated.
We use these prototypes to systematically explore the design space of digital tabletop interaction. The flexibility of use afforded by the interaction style is introduced as a criterion alongside the physicality of the user interface elements. Each approach's suitability to support the highly dynamic and often unstructured interactions typical for digital tabletops is analyzed. We present a new model for tabletop interaction that increases the fidelity of interaction possible in such settings. Finally, we extend this model so as to enable as-direct-as-possible interactions with 3D data, interacting from above the table's surface.
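
    The core idea of the proposed model is that sensed contacts act on virtual objects through simulated physical quantities such as collision and friction rather than through abstract gesture commands. The following Python sketch is a rough, self-contained illustration of that coupling (all names, constants and the force model are assumptions for illustration, not the thesis's actual implementation): each sensed touch is treated as a proxy that drags an overlapping virtual object via a friction-like force.

```python
from dataclasses import dataclass

@dataclass
class VirtualObject:
    x: float
    y: float
    vx: float = 0.0
    vy: float = 0.0
    radius: float = 40.0   # footprint used for the overlap test
    mass: float = 1.0

@dataclass
class Contact:             # one sensed touch point (finger, hand, tangible object)
    x: float
    y: float
    vx: float              # velocity estimated from successive sensor frames
    vy: float

FRICTION = 8.0             # assumed contact-coupling coefficient (illustrative)
DRAG = 2.0                 # velocity decay when nothing touches the object

def step(obj: VirtualObject, contacts: list[Contact], dt: float) -> None:
    """Advance the simulation one step; contacts exert friction-like forces."""
    fx = fy = 0.0
    for c in contacts:
        # A contact only acts on the object while it overlaps its footprint.
        if (c.x - obj.x) ** 2 + (c.y - obj.y) ** 2 <= obj.radius ** 2:
            fx += FRICTION * (c.vx - obj.vx)
            fy += FRICTION * (c.vy - obj.vy)
    fx -= DRAG * obj.vx    # free objects slowly come to rest
    fy -= DRAG * obj.vy
    obj.vx += fx / obj.mass * dt
    obj.vy += fy / obj.mass * dt
    obj.x += obj.vx * dt
    obj.y += obj.vy * dt

obj = VirtualObject(x=100.0, y=100.0)
finger = Contact(x=110.0, y=100.0, vx=50.0, vy=0.0)   # finger sliding right
for _ in range(60):                                    # one second at 60 Hz
    step(obj, [finger], dt=1 / 60)
print(round(obj.x, 1))                                 # object dragged to the right
```

    In the thesis itself this role is played by a full three-dimensional gaming physics simulation, which additionally provides collision response and richer contact geometry for whole hands and tangible objects.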

    Integrative approach to analyze the proximal region of the Trichonympha agilis centriole

    The centrosome, the major microtubule organizing center in animal cells, is composed of two orthogonally arranged centrioles and surrounding pericentriolar material. Centrioles are evolutionarily conserved organelles characterized by a nine-fold symmetry of microtubules and are built around a cartwheel in their proximal region. The overall architecture at ~40 Å resolution of the exceptionally long proximal region of the Trichonympha spp. basal body has been uncovered previously using cryo-electron tomography (cryo-ET) followed by subtomogram averaging. The resulting 3D map revealed not only a central cartwheel hub that can accommodate rings of the evolutionarily conserved SAS-6 proteins, but also novel features whose identity is unknown. In this thesis, we used T. agilis, which is the only Trichonympha sp. with an assembled genome and transcriptomic data, to gain further insight into the structure, composition and first steps of cartwheel assembly. With the aim of generating a high-resolution 3D model of the T. agilis proximal region, we used cryo-ET and subtomogram averaging to refine the structure of the cartwheel hub and the cartwheel inner densities (CID). We uncovered that the CID is polarized, and discovered inter-cartwheel pillars (ICP) between superimposed cartwheel rings. Moreover, we discovered and analysed a new type of cartwheel in Teranympha mirabilis with a novel type of cartwheel central densities (CCD). We also analyzed the composition of the proximal region of the T. agilis centriole using a proteomic approach. We optimized a density gradient centrifugation protocol for isolation of the T. agilis rostrum, which harbors ~1400 centrioles, and performed protein correlation profiling. In this manner, we identified 47 candidates of the rostrum proteome, including three new T. agilis SAS-6 homologs, which were further characterized. Furthermore, we addressed the oligomerization properties of human and two T. agilis SAS-6 proteins (HsSAS-6, TaSAS-6_1 and TaSAS-6_2) in vitro. We found that HsSAS-6 requires an interaction partner in order to be stabilized and to form rings and stacks. In contrast, TaSAS-6_1 did not oligomerize in our setup, whereas TaSAS-6_2 was able to oligomerize and form different types of oligomers, including rings. In conclusion, the structure and composition of the proximal region of the T. agilis centriole uncovered here are anticipated to be of general significance for studies of the biogenesis and structure of centrioles.
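
    Protein correlation profiling of this kind ranks proteins by how closely their abundance profiles across the gradient fractions follow that of a known reference. A minimal sketch of that ranking step (illustrative only, with invented fraction intensities and a hypothetical marker profile; not the analysis pipeline used in the thesis) could look like this:

```python
import numpy as np

def correlation_profile(profiles, reference):
    """Pearson correlation of each protein's fraction-intensity profile
    with a reference profile (e.g. a known centriolar marker)."""
    ref = np.asarray(reference, dtype=float)
    ref = (ref - ref.mean()) / ref.std()
    scores = {}
    for protein, intensities in profiles.items():
        x = np.asarray(intensities, dtype=float)
        x = (x - x.mean()) / x.std()
        scores[protein] = float(np.mean(x * ref))
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical intensities across five density-gradient fractions.
reference = [0.1, 0.3, 1.0, 0.4, 0.1]          # profile of a known marker
profiles = {
    "candidate_A": [0.2, 0.4, 0.9, 0.5, 0.1],  # co-sediments with the marker
    "candidate_B": [1.0, 0.6, 0.2, 0.1, 0.0],  # peaks in a different fraction
}
for protein, score in correlation_profile(profiles, reference):
    print(f"{protein}: r = {score:.2f}")
```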

    Drivers of solar coronal dynamics

    Multi-wavelength observations from various solar missions have revealed the dynamic nature of the solar corona. The work presented in this thesis represents a contribution towards understanding some of the physical mechanisms that drive the activity observed in the corona and out into the heliosphere. In particular, the role of reconnection in active region (AR) outflows and AR-coronal hole (CH) interactions is examined using observations of the associated plasma flow signatures and their relationship to the underlying magnetic field topology. Persistent outflows discovered by the Hinode EUV Imaging Spectrometer (EIS) occur at the boundary of all ARs over monopolar magnetic regions. It is demonstrated that the outflows originate from specific locations of the magnetic topology where field lines display strong gradients of magnetic connectivity, namely quasi-separatrix layers (QSLs). Magnetic reconnection at QSLs is shown to be a viable mechanism for driving AR outflows, which are likely sources of the slow solar wind. Observational signatures and consequences of interchange reconnection (IR) are identified and analyzed in a number of solar configurations. Jet light curves of several emission lines show a post-jet enhancement in cooler coronal lines which has not been previously observed. In the case of flux emerging near a CH, it is shown that the closed loops forming between the AR and the CH lead to the retreat of the CH and a dimming of the corona in the vicinity of the like-polarity region. A filament eruption and coronal mass ejection (CME) from an AR inside a CH are observed from the solar disk into the heliosphere. The anemone structure of the erupting AR and the in-situ passage of an interplanetary CME (ICME) with open magnetic topology are interpreted as a direct result of IR. Plasma flows resulting from the interaction of an AR embedded in a CH, observed by Hinode EIS, are also investigated. Velocity profiles of hotter coronal lines reveal an intensification of outflow velocities prior to a CME. The AR's plasma flows are compared with 3D magnetohydrodynamic (MHD) numerical simulations, which show that the expansion of AR loops drives outflows along the neighboring CH field. The intensification of outflows observed prior to the CME is likely to result from the expansion of a flux rope containing a filament, further compressing the neighboring CH field.
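
    For reference, quasi-separatrix layers of the kind invoked above are conventionally quantified by the squashing factor of the field-line mapping; this is the standard definition from the literature, quoted here for context rather than taken from the thesis. Writing the footpoint mapping as (x, y) -> (X(x, y), Y(x, y)) with Jacobian elements a = ∂X/∂x, b = ∂X/∂y, c = ∂Y/∂x and d = ∂Y/∂y, the squashing factor is

\[ Q = \frac{a^2 + b^2 + c^2 + d^2}{\lvert\, ad - bc \,\rvert}, \]

    and QSLs are the thin layers where Q >> 2, i.e. where the gradients of magnetic connectivity are strong.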

    Automatic video segmentation employing object/camera modeling techniques

    Practically established video compression and storage techniques still process video sequences as rectangular images without further semantic structure. However, humans watching a video sequence immediately recognize acting objects as semantic units. This semantic object separation is currently not reflected in the technical system, making it difficult to manipulate the video at the object level. The realization of object-based manipulation will introduce many new possibilities for working with videos, like composing new scenes from pre-existing video objects or enabling user interaction with the scene. Moreover, object-based video compression, as defined in the MPEG-4 standard, can provide high compression ratios because the foreground objects can be sent independently from the background. In the case that the scene background is static, the background views can even be combined into a large panoramic sprite image, from which the current camera view is extracted. This results in a higher compression ratio since the sprite image for each scene only has to be sent once. A prerequisite for employing object-based video processing is automatic (or at least user-assisted semi-automatic) segmentation of the input video into semantic units, the video objects. This segmentation is a difficult problem because the computer does not have the vast amount of pre-knowledge that humans subconsciously use for object detection. Thus, even the simple definition of the desired output of a segmentation system is difficult. The subject of this thesis is to provide algorithms for segmentation that are applicable to common video material and that are computationally efficient. The thesis is conceptually separated into three parts. In Part I, an automatic segmentation system for general video content is described in detail. Part II introduces object models as a tool to incorporate user-defined knowledge about the objects to be extracted into the segmentation process. Part III concentrates on the modeling of camera motion in order to relate the observed camera motion to real-world camera parameters. The segmentation system that is described in Part I is based on a background-subtraction technique. The pure background image that is required for this technique is synthesized from the input video itself. Sequences that contain rotational camera motion can also be processed since the camera motion is estimated and the input images are aligned into a panoramic scene background. This approach is fully compatible with the MPEG-4 video-encoding framework, such that the segmentation system can be easily combined with an object-based MPEG-4 video codec. After an introduction to the theory of projective geometry in Chapter 2, which is required for the derivation of camera-motion models, the estimation of camera motion is discussed in Chapters 3 and 4. It is important that the camera-motion estimation is not influenced by foreground object motion. At the same time, the estimation should provide accurate motion parameters such that all input frames can be combined seamlessly into a background image. The core motion estimation is based on a feature-based approach in which the motion parameters are determined with a robust-estimation algorithm (RANSAC) in order to distinguish the camera motion from simultaneously visible object motion. Our experiments showed that the robustness of the original RANSAC algorithm in practice does not reach the theoretically predicted performance. An analysis of the problem revealed that this is caused by numerical instabilities, which can be significantly reduced by a modification that we describe in Chapter 4.
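
    As a point of reference for this kind of feature-based robust estimation, the sketch below fits a global homography between two frames with OpenCV's standard RANSAC routine, so that feature matches on independently moving foreground objects are rejected as outliers. It is a generic baseline for illustration, not the modified estimator developed in Chapter 4.

```python
import cv2
import numpy as np

def estimate_camera_motion(prev_frame, curr_frame):
    """Fit a global homography between two grayscale frames with RANSAC,
    discarding matches on moving foreground objects as outliers."""
    orb = cv2.ORB_create(nfeatures=2000)
    k1, d1 = orb.detectAndCompute(prev_frame, None)
    k2, d2 = orb.detectAndCompute(curr_frame, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(d1, d2)
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC keeps the consensus set that explains the dominant (camera) motion.
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC,
                                        ransacReprojThreshold=3.0)
    return H, inlier_mask

# H can then be used to warp frames into a common panoramic background frame.
```
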
The synthetization of static-background images is discussed in Chapter 5. In particular, we present a new algorithm for the removal of the foreground objects from the background image such that a pure scene background remains. The proposed algorithm is optimized to synthesize the background even for difficult scenes in which the background is only visible for short periods of time. The problem is solved by clustering the image content for each region over time, such that each cluster comprises static content. Furthermore, it is exploited that the times in which foreground objects appear in an image region are similar to the corresponding times of neighboring image areas. The reconstructed background could be used directly as the sprite image in an MPEG-4 video coder. However, we have discovered that the counterintuitive approach of splitting the background into several independent parts can reduce the overall amount of data. In the case of general camera motion, the construction of a single sprite image is even impossible. In Chapter 6, a multi-sprite partitioning algorithm is presented, which separates the video sequence into a number of segments for which independent sprites are synthesized. The partitioning is computed in such a way that the total area of the resulting sprites is minimized, while simultaneously satisfying additional constraints. These include a limited sprite-buffer size at the decoder and the restriction that the image resolution in the sprite should never fall below the input-image resolution. The described multi-sprite approach is fully compatible with the MPEG-4 standard, but provides three advantages. First, any arbitrary rotational camera motion can be processed. Second, the coding cost for transmitting the sprite images is lower, and finally, the quality of the decoded sprite images is better than in previously proposed sprite-generation algorithms. Segmentation masks for the foreground objects are computed with a change-detection algorithm that compares the pure background image with the input images. A special problem that occurs in the change detection is image misregistration. Since the change detection compares co-located image pixels in the camera-motion compensated images, a small error in the motion estimation can introduce segmentation errors because non-corresponding pixels are compared. We approach this problem in Chapter 7 by integrating risk maps into the segmentation algorithm that identify pixels for which misregistration would probably result in errors. For these image areas, the change-detection algorithm is modified to disregard the difference values for the pixels marked in the risk map. This modification significantly reduces the number of false object detections in fine-textured image areas. The algorithmic building blocks described above can be combined into a segmentation system in various ways, depending on whether camera motion has to be considered or whether real-time execution is required. These different systems and example applications are discussed in Chapter 8.
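
    To make the change-detection step concrete, the following sketch thresholds the difference between the motion-compensated background and the current frame and suppresses decisions at pixels marked in a simple gradient-based risk map. It is an illustrative baseline only; the actual algorithm and risk-map construction of Chapter 7 differ.

```python
import numpy as np

def build_risk_map(background, grad_threshold=30.0):
    """Mark pixels with strong local gradients; there, a one-pixel
    misregistration already causes a large intensity difference."""
    gy, gx = np.gradient(background.astype(np.float32))
    return np.hypot(gx, gy) > grad_threshold

def detect_foreground(frame, background, risk_map, threshold=25):
    """Binary foreground mask from background subtraction.

    frame, background : aligned (camera-motion compensated) uint8 grayscale images
    risk_map          : boolean mask of pixels where misregistration is likely
    """
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    changed = diff > threshold
    # Ignore the difference values at risky pixels so that small alignment
    # errors in fine-textured regions do not produce false object detections.
    changed[risk_map] = False
    return changed
```
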
Part II of the thesis extends the described segmentation system to consider object models in the analysis. Object models allow the user to specify which objects should be extracted from the video. In Chapters 9 and 10, a graph-based object model is presented in which the features of the main object regions are summarized in the graph nodes, and the spatial relations between these regions are expressed with the graph edges. The segmentation algorithm is extended by an object-detection algorithm that searches the input image for the user-defined object model. We provide two object-detection algorithms. The first one is specific to cartoon sequences and uses an efficient sub-graph matching algorithm, whereas the second processes natural video sequences. With the object-model extension, the segmentation system can be controlled to extract individual objects, even if the input sequence comprises many objects. Chapter 11 proposes an alternative approach to incorporating object models into a segmentation algorithm. The chapter describes a semi-automatic segmentation algorithm in which the user coarsely marks the object and the computer refines this to the exact object boundary. Afterwards, the object is tracked automatically through the sequence. In this algorithm, the object model is defined as the texture along the object contour. This texture is extracted in the first frame and then used during the object tracking to localize the original object. The core of the algorithm uses a graph representation of the image and a newly developed algorithm for computing shortest circular paths in planar graphs. The proposed algorithm is faster than the currently known algorithms for this problem, and it can also be applied to many alternative problems like shape matching. Part III of the thesis elaborates on different techniques to derive information about the physical 3-D world from the camera motion. In the segmentation system, we employ camera-motion estimation, but the obtained parameters have no direct physical meaning. Chapter 12 discusses an extension to the camera-motion estimation to factorize the motion parameters into physically meaningful parameters (rotation angles, focal length) using camera autocalibration techniques. The speciality of the algorithm is that it can process camera motion that spans several sprites by employing the above multi-sprite technique. Consequently, the algorithm can be applied to arbitrary rotational camera motion. For the analysis of video sequences, it is often required to determine and follow the position of the objects. Clearly, the object position in image coordinates provides little information if the viewing direction of the camera is not known. Chapter 13 provides a new algorithm to deduce the transformation between the image coordinates and the real-world coordinates for the special application of sport-video analysis. In sport videos, the camera view can be derived from markings on the playing field. For this reason, we employ a model of the playing field that describes the arrangement of lines. After detecting significant lines in the input image, a combinatorial search is carried out to establish correspondences between lines in the input image and lines in the model. The algorithm requires no information about the specific color of the playing field and it is very robust to occlusions or poor lighting conditions. Moreover, the algorithm is generic in the sense that it can be applied to any type of sport by simply exchanging the model of the playing field. In Chapter 14, we again consider panoramic background images and particularly focus on their visualization.
Apart from the planar background sprites discussed previously, a frequently used visualization technique for panoramic images is the projection onto a cylinder surface, which is unwrapped into a rectangular image. However, the disadvantage of this approach is that the viewer has no good orientation in the panoramic image because they look in all directions at the same time. In order to provide a more intuitive presentation of wide-angle views, we have developed a visualization technique specialized for the case of indoor environments. We present an algorithm to determine the 3-D shape of the room in which the image was captured, or, more generally, to compute a complete floor plan if several panoramic images captured in each of the rooms are provided. Based on the obtained 3-D geometry, a graphical model of the rooms is constructed, where the walls are displayed with textures that are extracted from the panoramic images. This representation enables the user to conduct virtual walk-throughs in the reconstructed rooms and therefore provides a better orientation. Summarizing, we can conclude that all segmentation techniques employ some definition of foreground objects. These definitions are either explicit, using object models as in Part II of this thesis, or they are implicitly defined, as in the background synthetization in Part I. The results of this thesis show that implicit descriptions, which extract their definition from the video content, work well when the sequence is long enough to extract this information reliably. However, high-level semantics are difficult to integrate into segmentation approaches that are based on implicit models. Instead, those semantics should be added as post-processing steps. On the other hand, explicit object models apply semantic pre-knowledge at early stages of the segmentation. Moreover, they can be applied to short video sequences or even still pictures since no background model has to be extracted from the video. The definition of a general object-modeling technique that is widely applicable and that also enables an accurate segmentation remains an important yet challenging problem for further research.

    High-Energy Gamma-Ray Astronomy

    This volume celebrates the 30th anniversary of the first very-high-energy (VHE) gamma-ray source detection: the Crab Nebula, observed by the pioneering ground-based Cherenkov telescope Whipple at teraelectronvolt (TeV) energies in 1989. As we enter a new era in TeV astronomy, with the imminent start of operations of the Cherenkov Telescope Array (CTA) and new facilities such as LHAASO and the proposed Southern Wide-Field Gamma-ray Observatory (SWGO), we conceived of this volume as a broad reflection on how far we have evolved in the astrophysics topics that dominated the field of TeV astronomy for much of its recent history. In the past two decades, H.E.S.S., MAGIC and VERITAS have pushed TeV astronomy from a few to hundreds of TeV emitters, consolidating the field of TeV astrophysics. Today, this is a mature field, covering almost every topic of modern astrophysics. TeV astrophysics is also at the center of the multi-messenger astrophysics revolution, as the extreme photon energies involved provide an effective probe of cosmic-ray acceleration, propagation and interaction, and of dark matter and exotic physics searches. The improvements that CTA will carry forward, and the fact that CTA will operate as the first open observatory in the field, mean that gamma-ray astronomy is about to enter a new, precise and productive era. This book aims to serve as an introduction to the field and its state of the art, presenting a series of authoritative reviews on a broad range of topics in which TeV astronomy has provided essential contributions, and where some of the most relevant questions for future research lie.

    Microfluidics and Nanofluidics Handbook

    The Microfluidics and Nanofluidics Handbook: Two-Volume Set comprehensively captures the cross-disciplinary breadth of the fields of micro- and nanofluidics, which encompass the biological sciences, chemistry, physics and engineering applications. To fill the knowledge gap between engineering and the basic sciences, the editors pulled together key individuals, well known in their respective areas, to author chapters that help graduate students, scientists, and practicing engineers understand the overall area of microfluidics and nanofluidics. Topics covered include the Finite Volume Method for Numerical Simulation; the Lattice Boltzmann Method and Its Applications in Microfluidics; Microparticle and Nanoparticle Manipulation; and Methane Solubility Enhancement in Water Confined to Nanoscale Pores. Volume Two: Fabrication, Implementation, and Applications focuses on topics related to experimental and numerical methods. It also covers fabrication and applications in a variety of areas, from aerospace to biological systems. Reflecting the inherent nature of microfluidics and nanofluidics, the book includes as much interdisciplinary knowledge as possible. It provides the fundamental science background for newcomers and advanced techniques and concepts for experienced researchers and professionals.