
    The Data Big Bang and the Expanding Digital Universe: High-Dimensional, Complex and Massive Data Sets in an Inflationary Epoch

    Recent and forthcoming advances in instrumentation, and giant new surveys, are creating astronomical data sets that are not amenable to the methods of analysis familiar to astronomers. Traditional methods are often inadequate not merely because of the size in bytes of the data sets, but also because of their complexity. Mathematical limitations of familiar algorithms and techniques in dealing with such data sets create a critical need for new paradigms for the representation, analysis and scientific visualization (as opposed to illustrative visualization) of heterogeneous, multiresolution data across application domains. Some of the problems presented by the new data sets have already been addressed by disciplines such as applied mathematics, statistics and machine learning, and their solutions have been adopted by other sciences such as space-based geosciences. Unfortunately, valuable results pertaining to these problems are mostly to be found only in publications outside of astronomy. Here we offer brief overviews of a number of concepts, techniques and developments, some "old" and some new. These are generally unknown to most of the astronomical community, yet they are vital to the analysis and visualization of complex data sets and images. For astronomers to take advantage of the richness and complexity of the new era of data, and to be able to identify, adopt and apply new solutions, the astronomical community needs a certain degree of awareness and understanding of the new concepts. One of the goals of this paper is to help bridge the gap between applied mathematics, artificial intelligence and computer science on the one side and astronomy on the other.

    Comment: 24 pages, 8 figures, 1 table. Accepted for publication in Advances in Astronomy, special issue "Robotic Astronomy".
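    As a concrete illustration of one multiresolution technique of the kind such surveys draw on, the sketch below performs a 2-D wavelet decomposition of an image followed by a simple threshold-based denoising pass. It assumes NumPy and PyWavelets are available; the synthetic image, the choice of wavelet and the threshold rule are all illustrative assumptions, not taken from the paper.

```python
# A minimal sketch of 2-D wavelet multiresolution analysis,
# assuming NumPy and PyWavelets (pywt) are installed.
import numpy as np
import pywt

# Hypothetical stand-in for an astronomical image.
rng = np.random.default_rng(0)
image = rng.normal(size=(256, 256))

# Decompose into a coarse approximation plus detail coefficients
# at three successively finer scales.
coeffs = pywt.wavedec2(image, wavelet="db2", level=3)
approx, details = coeffs[0], coeffs[1:]

# Simple denoising: hard-threshold the detail coefficients using a
# scale estimated from the finest level, then reconstruct.
threshold = 3.0 * np.median(np.abs(details[-1][0]))
coeffs = [approx] + [
    tuple(pywt.threshold(d, threshold, mode="hard") for d in level)
    for level in details
]
denoised = pywt.waverec2(coeffs, wavelet="db2")
```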

    Data, Data Everywhere, Not a Lot in Sync: Reconciling Visual Meaning with Data

    This study addresses two sets of questions. First, what is visual data, how is it converted into useful information, and where should we look for it? Second, is data causing a mismatch between mind and environment? Data has emerged as our modern zeitgeist. Up to 100 billion devices will be seeking to visually map out our existence over the internet by 2020 (UK Government Chief Scientific Adviser 2014). Information provides meaning to human and non-human data in two ways. The first is in how humans convert data into information in order to understand inanimate objects; but inanimate objects also convert data, and humans now exist as inanimate constructs themselves, as data points, both as prey and as predator. The second is in how humans and inanimate objects are both virtual actants: humans as subconscious beings, and inanimate objects as digital constructs. These similarities highlight the allure of data to the individual, and vice versa. Meaning drives us to “discover where the real power lies” (Appleyard 1979, 146), and the power that data possesses appears problematic, as it is perceived to increasingly blur life’s boundaries. This paper is theoretical; empirical examples are intended only to illustrate a philosophically driven point: that to be visually sustainable, our world depends on data. It suggests that data is an unseen and unspent force struggling to meaningfully sync with our visual world. It is centred on the premise that philosophy, not technology, underpins visual sustainability. Lastly, it adds to the conversation by exploring three conceptual studies around the past, present and future states of data production, and by introducing three new categories: data we get from data, data produced from objects, and objects now produced from data. It then considers what all this might mean for how we are sustained by our visual world.

    Progressive refinement rendering of implicit surfaces

    The visualisation of implicit surfaces can be an inefficient task when such surfaces are complex and highly detailed. Visualising a surface by first converting it to a polygon mesh may lead to an excessive polygon count, while visualising it by direct ray casting is often slow. In this paper we present a progressive refinement renderer for implicit surfaces that are Lipschitz continuous. The renderer first displays a low-resolution estimate of the final image and, as the computation progresses, refines this estimate at an interactive frame rate. This provides a quick previewing facility that significantly reduces the design cycle of a new and complex implicit surface. The renderer is also capable of completing an image faster than a conventional implicit surface rendering algorithm based on ray casting.
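    The Lipschitz condition is what makes safe ray marching possible: if f has Lipschitz constant L, a ray at point p can advance by |f(p)|/L without crossing the surface f = 0. The sketch below shows this standard per-ray primitive (commonly called sphere tracing); it is not necessarily the paper's exact algorithm, and the function names and the example sphere are illustrative. The paper's contribution, progressive image-level refinement, would sit on top of casts like this one.

```python
# Minimal sphere-tracing sketch for a Lipschitz-continuous implicit
# surface f(p) = 0 with known Lipschitz constant.
import numpy as np

def sphere_trace(f, lipschitz, origin, direction,
                 t_max=100.0, eps=1e-4, max_steps=256):
    """March along origin + t*direction; a step of |f(p)|/lipschitz
    provably cannot cross the surface f(p) = 0."""
    t = 0.0
    for _ in range(max_steps):
        p = origin + t * direction
        d = f(p)
        if abs(d) < eps:
            return t            # hit: p is (nearly) on the surface
        t += abs(d) / lipschitz  # largest provably safe step
        if t > t_max:
            break
    return None                  # ray missed within t_max

# Example: a unit sphere, f(p) = |p| - 1, which has Lipschitz constant 1.
f = lambda p: np.linalg.norm(p) - 1.0
t_hit = sphere_trace(f, 1.0, np.array([0.0, 0.0, -3.0]),
                     np.array([0.0, 0.0, 1.0]))
print(t_hit)  # ~2.0: the ray travels two units before hitting the sphere
```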

    Multi-agent evolutionary systems for the generation of complex virtual worlds

    Modern films, games and virtual reality applications depend on convincing computer graphics. Highly complex models are required for the successful delivery of many scenes and environments. While workflows such as rendering, compositing and animation have been streamlined to accommodate increasing demands, building such complex models is still a laborious task. This paper brings the computational benefits of an Interactive Genetic Algorithm (IGA) to computer graphics modelling while compensating for user fatigue, a common issue in Interactive Evolutionary Computation. An intelligent agent is used in conjunction with the IGA to reduce the effects of user fatigue by learning from the choices made by the human designer and directing the search accordingly. This workflow accelerates the layout and distribution of basic elements to form complex models. It captures the designer's intent through interaction, and encourages playful discovery.
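    The general fatigue-reducing idea can be sketched briefly: an evolutionary loop in which a learned agent stands in for the human on most evaluations, so the designer is consulted only occasionally. The toy real-valued genome, the nearest-neighbour agent and every name below (random_genome, agent_score, ask_user) are hypothetical illustrations, not the paper's implementation.

```python
# Minimal IGA sketch: an agent predicts the user's ratings so the
# human only rates one candidate per generation.
import random

GENES = 16                      # hypothetical genome length

def random_genome():
    return [random.random() for _ in range(GENES)]

def mutate(g, rate=0.1):
    return [x + random.gauss(0, 0.2) if random.random() < rate else x
            for x in g]

def crossover(a, b):
    cut = random.randrange(1, GENES)
    return a[:cut] + b[cut:]

def agent_score(g, memory):
    """Predict the user's rating from the closest remembered example."""
    if not memory:
        return random.random()
    nearest = min(memory,
                  key=lambda m: sum((x - y) ** 2 for x, y in zip(m[0], g)))
    return nearest[1]

def ask_user(g):
    # Placeholder for an interactive rating of the rendered candidate;
    # this simulated user simply prefers larger gene values.
    return sum(g) / GENES

memory = []
pop = [random_genome() for _ in range(20)]
for generation in range(10):
    # The agent ranks the whole population; only the top candidate is
    # shown to the human, which is how user fatigue is kept low.
    pop.sort(key=lambda g: agent_score(g, memory), reverse=True)
    memory.append((pop[0], ask_user(pop[0])))
    parents = pop[:10]
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(10)]
```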

    Combining Procedural and Hand Modeling Techniques for Creating Animated Digital 3D Natural Environments

    This thesis focuses on a systematic solution for rendering 3D photorealistic natural environments using Maya's procedural methods and ZBrush. The work began by comparing two industry-specific procedural applications, Vue and Maya's Paint Effects, to determine which is better suited to applying animated procedural effects with the highest level of fidelity and expandability. Objects generated with Paint Effects showed the greatest potential through their object attributes, texturing and lighting. To optimize results further, compatibility with sculpting programs such as ZBrush is required in order to sculpt higher levels of detail. The final combined workflow produced the results used in the short film Fall. The demand for these effects stems from the visual effects industry's growing ability to deliver realistic simulations of the complexities of nature and, in turn, the public's insatiable appetite to see them on screen. The requirements for delivering a photorealistic digital environment usually come with tight deadlines, however, because the phases of a visual effects project are interconnected across multiple production houses, so effective methods are needed to deliver a high-end visual presentation. The use of a procedural system, such as an L-system, is often an initial step within a workflow leading toward photorealistic vegetation for visual effects environments. Procedure-based systems such as Maya's Paint Effects feature robust controls that can generate many natural objects. A trade-off thus arises between modelling objects quickly and retaining detail and control; methods outside this system must be used to achieve higher levels of fidelity through attributes, expressions, lighting and texturing. The procedural engine within Maya's Paint Effects supports the beginning stages of modeling a 3D natural environment, and ZBrush's manual approach can then bring the aesthetics to a much finer degree of fidelity. Leveraging both types of systems yields photorealistic objects that preserve all of the procedural and dynamic forces specified within the Paint Effects procedural engine.
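    For reference, the L-system the thesis cites as a typical procedural starting point works by parallel string rewriting. The short sketch below shows the mechanism with a classic bracketed plant rule; the rule and axiom are textbook examples, not those used in Paint Effects.

```python
# Classic bracketed L-system: rewrite every symbol in parallel.
rules = {"F": "F[+F]F[-F]F"}   # textbook plant rule (illustrative)

def expand(axiom, rules, iterations):
    """Apply all rewrite rules to every symbol, once per iteration."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# After two rewrites the single stroke "F" is already a branching
# structure; a turtle interpreter ('+'/'-' turn, '['/']' push/pop
# the drawing state) turns the string into geometry.
print(expand("F", rules, 2))
```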

    Active Estimation of Distance in a Robotic Vision System that Replicates Human Eye Movement

    Many visual cues, both binocular and monocular, provide 3D information. When an agent moves with respect to a scene, an important cue is the differing motion of objects located at various distances. While motion parallax is evident for large translations of the agent, in most head/eye systems a small parallax also occurs during rotations of the cameras. A similar parallax is present in the human eye: during a relocation of gaze, the shift in the retinal projection of an object depends not only on the amplitude of the movement, but also on the distance of the object from the observer. This study proposes a method for estimating distance on the basis of the parallax that emerges from rotations of a camera. A pan/tilt system specifically designed to reproduce the oculomotor parallax present in the human eye was used to replicate the oculomotor strategy by which humans scan visual scenes. We show that oculomotor parallax provides accurate estimation of distance during sequences of eye movements. In a system that actively scans a visual scene, challenging tasks such as image segmentation and figure/ground segregation benefit greatly from this cue.

    National Science Foundation (BIC-0432104, CCF-0130851)
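    The geometric core of the cue can be sketched in a few lines: because the nodal point of the eye (or camera) sits at an offset from the rotation axis, a pure rotation displaces the nodal point, and the resulting retinal shift of a target depends on its distance. The sketch below simulates that 2-D geometry and inverts it by grid search; the offset value, function names and numbers are illustrative assumptions, not the paper's calibration.

```python
# Distance from oculomotor parallax: a 2-D geometric sketch.
import numpy as np

R_OFFSET = 0.006  # assumed nodal-point offset from the rotation axis (m)

def retinal_angle(d, theta):
    """Angle of a target, initially on-axis at distance d from the nodal
    point, relative to the optical axis after a rotation by theta (rad)
    about an axis R_OFFSET behind the nodal point."""
    target = np.array([0.0, R_OFFSET + d])           # fixed in the world
    c, s = np.cos(theta), np.sin(theta)
    nodal = np.array([-s * R_OFFSET, c * R_OFFSET])  # displaced nodal point
    axis = np.array([-s, c])                         # rotated optical axis
    v = target - nodal
    # Signed angle between the optical axis and the target direction.
    return np.arctan2(axis[0] * v[1] - axis[1] * v[0], axis @ v)

def estimate_distance(measured_angle, theta):
    """Recover distance by matching a measured retinal angle against the
    model over a grid of candidate distances (0.1 m to 5 m)."""
    grid = np.linspace(0.1, 5.0, 5000)
    errors = np.abs([retinal_angle(d, theta) - measured_angle for d in grid])
    return grid[np.argmin(errors)]

theta = np.radians(10.0)
print(estimate_distance(retinal_angle(0.75, theta), theta))  # ~0.75 m
```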