4 research outputs found

    Advancing Creative Visual Thinking with Constructive Function-based Modelling.

    Modern educational technologies are bound to reflect the realities of the digital age. The juxtaposition of real and synthetic (computer-generated) worlds, as well as a greater emphasis on the visual dimension, are especially important characteristics that have to be taken into account in learning and teaching. We describe the ways in which an approach to constructive shape modelling can be used to advance creative visual thinking in artistic and technical education. This approach assumes the use of a simple programming language or interactive software tools for creating a shape model, generating its images, and finally fabricating a real object from that model. It can be considered an educational technology suitable not only for children and students but also for researchers, artists, and designers. The corresponding modelling language and software tools are being developed within the international HyperFun Project. These tools are easy for students of different ages, specializations, and abilities to use, and can easily be extended and adapted for various educational purposes in different areas.
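
    As a concrete illustration of the constructive function-based (F-rep) idea described above, the following is a minimal Python sketch, not HyperFun code: shapes are defined by real-valued functions that are positive inside the solid, and set-theoretic operations are approximated with min/max (a simple stand-in for R-functions). All names here are illustrative assumptions rather than the project's actual API.

```python
# A minimal sketch of constructive function-based (F-rep) modelling in Python.
# A solid is the set where f(x, y, z) >= 0; min/max approximate set operations.

import numpy as np

def sphere(cx, cy, cz, r):
    """Defining function of a ball: positive inside, negative outside."""
    return lambda x, y, z: r**2 - ((x - cx)**2 + (y - cy)**2 + (z - cz)**2)

def union(f, g):
    return lambda x, y, z: np.maximum(f(x, y, z), g(x, y, z))

def intersection(f, g):
    return lambda x, y, z: np.minimum(f(x, y, z), g(x, y, z))

def subtraction(f, g):
    return lambda x, y, z: np.minimum(f(x, y, z), -g(x, y, z))

# A constructive model: two overlapping balls with a third ball cut away.
model = subtraction(union(sphere(-0.5, 0, 0, 1.0), sphere(0.5, 0, 0, 1.0)),
                    sphere(0.0, 0.0, 0.8, 0.5))

# Evaluate the model on a voxel grid; the zero level set is the surface,
# which could then be polygonised and sent to a 3D printer.
grid = np.linspace(-2.0, 2.0, 64)
x, y, z = np.meshgrid(grid, grid, grid, indexing="ij")
inside = model(x, y, z) >= 0.0
print("occupied voxels:", int(inside.sum()))
```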

    Differential equation-based shape interpolation for surface blending and facial blendshapes.

    Differential equation-based shape interpolation has been widely applied in geometric modelling and computer animation. Its advantages include a physics-based formulation, good realism, easy attainment of high-order continuity, a strong ability to describe complicated shapes, and compact geometric model data. Among the various applications of differential equation-based shape interpolation, surface blending and facial blendshapes are two active and important topics. Differential equation-based surface blending can be time-independent or time-dependent. Existing differential equation-based surface blending only tackles time-dependent …
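
    To make the underlying idea concrete, the following is a minimal Python sketch assuming a drastic simplification of differential equation-based blending to one parameter direction: a fourth-order ordinary differential equation P'''' = 0 is solved with positional and tangential boundary conditions taken from the two surfaces to be joined, which yields a C1-continuous blend. The equation, the helper names, and the toy boundary data are illustrative; the abstract above does not specify the actual formulation.

```python
# A minimal 1D sketch of differential equation-based blending: solve
# d^4 P / du^4 = 0 on u in [0, 1] with position and tangent prescribed at
# each end by the two primary surfaces. The closed-form solution is the
# cubic Hermite blend, which meets both surfaces with C1 continuity.

import numpy as np

def blend_curve(p0, t0, p1, t1, u):
    """Solution of P'''' = 0 with P(0)=p0, P'(0)=t0, P(1)=p1, P'(1)=t1."""
    h00 = 2*u**3 - 3*u**2 + 1
    h10 = u**3 - 2*u**2 + u
    h01 = -2*u**3 + 3*u**2
    h11 = u**3 - u**2
    return h00*p0 + h10*t0 + h01*p1 + h11*t1

# Toy example: blend the height fields z = 1 - x (left) and z = x**2 (right)
# across the gap x in [0, 1], matching value and slope on each side.
u = np.linspace(0.0, 1.0, 11)
z = blend_curve(p0=1.0, t0=-1.0, p1=1.0, t1=2.0, u=u)
print(np.round(z, 3))  # starts at 1 with slope -1, ends at 1 with slope 2
```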

    ImMApp: An immersive database of sound art

    The ImMApp (Immersive Mapping Application) thesis addresses contemporary and historical sound art from a position informed, on one hand, by post-structural critical theory and, on the other, by a practice-based exploration of contemporary digital technologies (MySQL, XML, XSLT, X3D). It proposes a critical ontological schema derived from Michel Foucault's Archaeology of Knowledge (1972) and applies this to pre-existing information resources dealing with sound art. First, an analysis of print-based discourses (Sound by Artists, Lander and Lexier (1990); Noise, Water, Meat, Kahn (2001); and Background Noise: Perspectives on Sound Art, LaBelle (2006)) is carried out according to Foucauldian notions of genealogy, subject positions, the statement, institutional affordances, and the productive nature of discursive formation. The discursive field (the archive) presented by these major canonical texts is then contrasted with a formulation derived from Gilles Deleuze and Félix Guattari: that of a 'minor' history of sound art practices. This is then extended by media theory (McLuhan, Kittler, Manovich) into a critique of two digital sound art resources (The Australian Sound Design Project, Bandt and Paine (2005), and soundtoys.net, Stanza (1998)). The divergences between the two forms of information technologies (print vs. digital) are discussed. The means by which such digitised methodologies may enhance Foucauldian discourse analysis points onwards towards the two practice-based elements of the thesis. Surface, the first of these, is a web-browser-based database built on an Apache/MySQL/XML architecture. It is the most extensive mapping of sound art undertaken to date and extends the theoretical framework discussed above into the digital domain. Immersion, the second part, is a re-presentation of this material in an immersive digital environment, following the transformation of the source material via XSLT into X3D. Immersion is a real-time, large-format video, surround sound (5.1) installation, and the thesis concludes with a discussion of how this outcome has articulated Foucauldian archaeological method and unframed pre-existing notions of the nature of sound art.
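
    Purely as an illustration of the XML-to-X3D step mentioned above (source material transformed via XSLT into X3D), the following is a minimal Python sketch using lxml's binding of libxslt. The record format and the generated X3D fragment are invented for this example and are not the thesis's actual schema or stylesheet.

```python
# A minimal sketch of an XML -> XSLT -> X3D transformation with lxml.
# Both the source records and the stylesheet are illustrative only.

from lxml import etree

SOURCE = b"""<works>
  <work year="1998"><title>soundtoys.net</title></work>
  <work year="2005"><title>Australian Sound Design Project</title></work>
</works>"""

STYLESHEET = b"""<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/works">
    <X3D><Scene>
      <xsl:for-each select="work">
        <!-- place one box per catalogued work, spaced along the x axis -->
        <Transform translation="{position()} 0 0">
          <Shape><Box size="0.5 0.5 0.5"/></Shape>
        </Transform>
      </xsl:for-each>
    </Scene></X3D>
  </xsl:template>
</xsl:stylesheet>"""

transform = etree.XSLT(etree.XML(STYLESHEET))
x3d_scene = transform(etree.XML(SOURCE))
print(etree.tostring(x3d_scene, pretty_print=True).decode())
```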

    Visual-auditory visualisation of dynamic multi-scale heterogeneous objects.

    Multi-scale phenomena analysis is an area of active research that connects simulations with experiments to gain correct insight into compound dynamic structures. Visualisation is a challenging task due to the large amount of data and the wide range of complex data representations. The analysis of dynamic multi-scale phenomena requires a combination of geometric modelling and rendering techniques for analysing changes in the internal structure when the data come from different sources of various natures. Moreover, the area often addresses the limitations of solely visual data representation and considers the introduction of other sensory stimuli as a well-known tool to enhance visual analysis. However, there is a lack of software tools that allow advanced real-time analysis of the properties of heterogeneous phenomena. Hardware-accelerated volume rendering provides insight into the internal structure of complex multi-scale phenomena. The technique is convenient for detailed visual analysis, highlights the features of interest in complex structures, and is an area of active research. However, conventional volume visualisation is limited to transfer functions that operate on homogeneous material and, as a result, does not provide the flexibility in geometry and material distribution modelling that is crucial for the analysis of heterogeneous objects. Moreover, the extension to visual-auditory analysis makes it necessary to review the entire conventional volume visualisation pipeline. Multi-sensory feedback depends heavily on modern hardware and software advances for real-time modelling and evaluation. In this work, we explore the design of visual-auditory pipelines for the analysis of the dynamic multi-scale properties of heterogeneous objects, which can help overcome well-known problems of the solely visual analysis of complex representations. We consider the similarities between light and sound propagation as a solution to the problem. The approach benefits from a combination of GPU-accelerated ray-casting and the modelling of geometry, optical properties, and auditory properties. We discuss how the application of modern GPU techniques in these areas allows us to introduce a unified approach to the visual-auditory analysis of dynamic multi-scale heterogeneous objects. Similarly to the conventional volume rendering technique based on light propagation, we model auditory feedback as the result of an initial impulse propagating through 3D space, with its digital representation as a sampled sound wave obtained with the ray-casting procedure. The auditory stimuli can complement the visual ones in the analysis of dynamic multi-scale heterogeneous objects. We propose a framework that facilitates the design of visual-auditory pipelines for dynamic multi-scale heterogeneous objects and discuss its application to two case studies. The first is a study of molecular phenomena resulting from molecular dynamics simulation and quantum simulation. The second explores microstructures in digital fabrication with arbitrary irregular lattice structures. For the considered case studies, the visual-auditory techniques facilitate the interactive analysis of both the spatial structure and the internal multi-scale properties of a volumetric nature in complex heterogeneous objects. A GPU-accelerated framework for the visual-auditory analysis of heterogeneous objects can be applied and extended beyond this research.
Thus, to specify the main direction of such an extension from the point of view of potential users, strengthen the value of this research, and evaluate our vision of how the techniques described above might be applied, we carry out a preliminary evaluation. The user study aims to compare our expectations of the visual-auditory approach with the views of potential users of this system if it were implemented as a software product. The preliminary evaluation study was carried out with the limitations imposed by the 2020/2021 restrictions. However, it confirms that the main direction for the visual-auditory analysis of heterogeneous objects has been identified correctly and that visual and auditory stimuli can complement each other in the analysis of both the volume and the spatial distribution properties of heterogeneous phenomena. The user reviews also highlight the enhancements that should be introduced to the approach in terms of the design of more complex user interfaces and the consideration of additional application cases. To provide a more detailed picture of the evaluation results and the recommendations introduced, we also identify the key factors that define the users' vision of the further enhancement of the approach and its possible application areas, such as their experience in the analysis of complex physical phenomena or in the multi-sensory area. The aspects of the heterogeneous object analysis task discussed in this work, together with the theoretical and practical solutions, allow the application, further development, and enhancement of the results in the multidisciplinary areas of GPU-accelerated high-performance visualisation pipeline design and multi-sensory analysis.
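
To give a concrete, if heavily simplified, picture of the two sides of such a pipeline, the following is a minimal CPU-only Python sketch: a ray is cast through a toy heterogeneous density field with front-to-back compositing, and the samples gathered along the ray are then mapped to a sampled sound wave by frequency modulation. The density field, transfer function, and audio mapping are assumptions made for illustration; they do not reproduce the GPU implementation or the mappings used in the thesis.

```python
# Minimal sketch: (1) front-to-back ray-casting of a heterogeneous density
# field and (2) sonification of the samples collected along a probe ray.

import numpy as np

def density(p):
    """Toy heterogeneous object: a blob whose density varies with radius."""
    r = np.linalg.norm(p)
    return np.clip(1.0 - r, 0.0, 1.0) * (0.5 + 0.5 * np.sin(8.0 * r))

def cast_ray(origin, direction, n_steps=128, step=0.03):
    """Front-to-back compositing; also returns the raw samples for audio."""
    colour, alpha, samples = 0.0, 0.0, []
    for i in range(n_steps):
        d = density(origin + (i * step) * direction)
        samples.append(d)
        a = 1.0 - np.exp(-2.0 * d * step)   # simple opacity transfer function
        colour += (1.0 - alpha) * a * d      # greyscale emission = density
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:                     # early ray termination
            break
    return colour, np.array(samples)

def sonify(samples, sr=8000, seconds=0.5):
    """Map sampled densities to frequency modulation of a sine carrier."""
    t = np.linspace(0.0, seconds, int(sr * seconds), endpoint=False)
    envelope = np.interp(t, np.linspace(0.0, seconds, len(samples)), samples)
    freq = 220.0 + 660.0 * envelope          # denser material -> higher pitch
    phase = 2.0 * np.pi * np.cumsum(freq) / sr
    return 0.3 * np.sin(phase)

shade, probe = cast_ray(np.array([0.0, 0.0, -2.0]), np.array([0.0, 0.0, 1.0]))
wave = sonify(probe)
print(f"pixel value {shade:.3f}, {len(wave)} audio samples")
```

In a GPU implementation the per-ray loop would run in a compute shader or CUDA kernel, with one thread per image pixel or per probe ray.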