
    What May Visualization Processes Optimize?

    In this paper, we present an abstract model of visualization and inference processes and describe an information-theoretic measure for optimizing such processes. To obtain such an abstraction, we first examined six classes of workflows in data analysis and visualization, and identified four levels of typical visualization components, namely disseminative, observational, analytical and model-developmental visualization. We noticed a common phenomenon at different levels of visualization: the transformation of data spaces (referred to as alphabets) usually corresponds to a reduction of maximal entropy along a workflow. Based on this observation, we establish an information-theoretic measure of cost-benefit ratio that may be used as a cost function for optimizing a data visualization process. To demonstrate the validity of this measure, we examined a number of successful visualization processes in the literature, and showed that the information-theoretic measure can mathematically explain the advantages of such processes over possible alternatives.
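    As a rough illustration of the entropy-reduction phenomenon described in this abstract, here is a minimal Python sketch, assuming the conventional definition of maximal entropy as log2 of the alphabet size. The pipeline stages, alphabet sizes and costs are invented for illustration, and the toy benefit/cost below counts only entropy reduction against an assumed cost, omitting any measure of information loss that a full cost-benefit analysis would also have to weigh.

    ```python
    import math

    # Hypothetical pipeline: each stage maps its input alphabet to a
    # smaller output alphabet (e.g. raw samples -> statistics -> glyphs).
    # The alphabet sizes and cost figures below are invented.
    stages = [
        ("raw data",      2**20, None),  # (name, |alphabet|, cost to reach it)
        ("statistics",    2**10, 50.0),
        ("visual glyphs", 2**4,  10.0),
    ]

    def max_entropy(alphabet_size: int) -> float:
        """Maximal Shannon entropy of an alphabet, in bits: log2 of its size."""
        return math.log2(alphabet_size)

    for (src, size_in, _), (dst, size_out, cost) in zip(stages, stages[1:]):
        drop = max_entropy(size_in) - max_entropy(size_out)  # entropy reduction
        print(f"{src} -> {dst}: -{drop:.0f} bits, toy benefit/cost = {drop / cost:.2f}")
    ```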

    Visual and interactive exploration of point data

    Point data, such as Unit Postcodes (UPC), can provide very detailed information at fine scales of resolution. For instance, socio-economic attributes are commonly assigned to UPC, so they can be represented as points and observed at the postcode level. Using UPC as a common field allows the concatenation of variables from disparate data sources, which can potentially support sophisticated spatial analysis. However, visualising UPC in urban areas has at least three limitations. First, at small scales UPC occurrences can be very dense, making their visualisation as points difficult, while patterns in the associated attribute values are often hardly recognisable at large scales. Second, using UPC as a common field concatenates highly multivariate data sets, whose many attributes are difficult to explore together. Third, socio-economic variables assigned to UPC (such as the ones used here) can be non-Normal in their distributions, as a result of a large presence of zero values and high variances, which constrains their analysis using traditional statistics. This paper discusses a Point Visualisation Tool (PVT), a proof-of-concept system developed to visually explore point data. Various well-known visualisation techniques were implemented to enable their interactive and dynamic interrogation. PVT provides multiple representations of point data to facilitate the understanding of the relations between attributes or variables as well as their spatial characteristics. Brushing between alternative views is used to link several representations of a single attribute, as well as to simultaneously explore more than one variable. PVT's functionality shows how visual techniques embedded in an interactive environment enable the exploration of large amounts of multivariate point data.
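    As a minimal sketch of the linked brushing described above (not the original PVT code), the following Python example links a spatial view and an attribute view of the same points with matplotlib; the synthetic zero-inflated attributes stand in for the non-Normal socio-economic variables mentioned in the abstract.

    ```python
    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib.widgets import RectangleSelector

    rng = np.random.default_rng(0)
    n = 500
    # Hypothetical stand-ins for unit-postcode points: x/y locations plus
    # two zero-inflated attributes (many zeros, high variance).
    xy = rng.uniform(0, 10, size=(n, 2))
    attr_a = rng.exponential(1.0, n) * (rng.random(n) > 0.4)
    attr_b = rng.exponential(2.0, n) * (rng.random(n) > 0.4)

    fig, (ax_map, ax_attr) = plt.subplots(1, 2, figsize=(10, 5))
    map_pts = ax_map.scatter(xy[:, 0], xy[:, 1], s=8, c="grey")
    att_pts = ax_attr.scatter(attr_a, attr_b, s=8, c="grey")
    ax_map.set_title("Point locations")
    ax_attr.set_title("Attribute space")

    def brush(eclick, erelease):
        # Points falling inside the rectangle dragged on the map view...
        x0, x1 = sorted((eclick.xdata, erelease.xdata))
        y0, y1 = sorted((eclick.ydata, erelease.ydata))
        sel = ((xy[:, 0] >= x0) & (xy[:, 0] <= x1) &
               (xy[:, 1] >= y0) & (xy[:, 1] <= y1))
        colors = np.where(sel, "red", "grey")
        map_pts.set_color(colors)  # ...are highlighted here...
        att_pts.set_color(colors)  # ...and in the linked attribute view.
        fig.canvas.draw_idle()

    selector = RectangleSelector(ax_map, brush, useblit=True)
    plt.show()
    ```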

    The importance of being accessible: The graphics calculator in mathematics education

    The first decade of the availability of graphics calculators in secondary schools has just concluded, although evidence for this is easier to find in some countries and schools than in others, since there are gross socio-economic differences in both cases. It is now almost the end of the second decade since the invention of microcomputers and their appearance in mathematics educational settings. Most of the interest in technology for mathematics education has been concerned with microcomputers, but there has been a steady increase in interest in graphics calculators among students, teachers, curriculum developers and examination authorities, in growing recognition that accessibility of technology at the level of the individual student is the key factor in responding appropriately to technological change. The experience of the last decade suggests very strongly that mathematics teachers are well advised to pay more attention to graphics calculators than to microcomputers. There are clear signs that the commercial marketplace, especially in the United States, is acutely aware of this trend. It was recently reported that current US sales of graphics calculators are around six million units per year, and rising. There are now four major corporations developing products aimed directly at the high school market, all four producing graphics calculators of high quality and beginning to understand the educational needs of students and their teachers. To get some evidence of this interest, I scanned a recent issue (April 1995) of The Mathematics Teacher, the NCTM journal focussed on high school mathematics. The evidence was very strong: of almost 20 full pages devoted to paid advertising, nine featured graphics calculators, only two featured computer products, and two more featured both. The main purposes of this paper are to explain and justify this heightened level of interest in graphics calculators at the secondary school level, and to identify some of the resulting implications for mathematics education, both generally and in the South-East Asian region.

    Cartography, GIS and maps in perspective


    Numerical Methods for Obtaining Multimedia Graphical Effects

    This paper is an explanatory document about how several animation effects can be obtained using different numerical methods, as well as an investigation of the possibility of implementing them on very simple yet powerful massively parallel machines. The methods are clearly described, with graphical examples of the effects as well as workflows for the algorithms. All of the methods presented in this paper use only numerical matrix manipulations, which are usually fast and do not require the use of any other graphical software application. Keywords: raster graphics, numerical matrix manipulation, animation effects.
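    By way of illustration, here is a minimal sketch of the kind of effect the abstract describes: an animation produced purely by numerical matrix manipulation of raster data. The cross-fade below is an assumed example and is not necessarily one of the effects presented in the paper.

    ```python
    import numpy as np

    def crossfade(frame_a: np.ndarray, frame_b: np.ndarray, steps: int):
        """Yield intermediate frames blending frame_a into frame_b.

        Both inputs are H x W (x C) uint8 raster matrices of equal shape.
        Each output frame is a convex combination of the two matrices,
        so the whole effect is one weighted matrix sum per step.
        """
        a = frame_a.astype(np.float32)
        b = frame_b.astype(np.float32)
        for i in range(steps + 1):
            t = i / steps                      # blend weight in [0, 1]
            yield ((1.0 - t) * a + t * b).astype(np.uint8)

    # Usage with synthetic 64x64 grayscale frames:
    black = np.zeros((64, 64), dtype=np.uint8)
    white = np.full((64, 64), 255, dtype=np.uint8)
    frames = list(crossfade(black, white, steps=10))
    ```

    Because each step is an independent element-wise operation, a sketch like this maps naturally onto the massively parallel hardware the paper mentions.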

    Trends and concerns in digital cartography

    CISRG discussion paper

    Knowledge-based systems and geological survey

    This personal and pragmatic review of the philosophy underpinning methods of geological surveying suggests that important influences of information technology have yet to make their impact. Early approaches took existing systems as metaphors, retaining the separation of maps, map explanations and information archives, organised around map sheets of fixed boundaries, scale and content. But system design should look ahead: a computer-based knowledge system for the same purpose can be built around hierarchies of spatial objects and their relationships, with maps as one means of visualisation, and information types linked as hypermedia and integrated in mark-up languages. The system framework and ontology, derived from the general geoscience model, could support consistent representation of the underlying concepts and maintain reference information on object classes and their behaviour. Models of processes and historical configurations could clarify the reasoning at any level of object detail and introduce new concepts such as complex systems. The up-to-date interpretation might centre on spatial models, constructed with explicit geological reasoning and evaluation of uncertainties. Assuming (at a future time) full computer support, field survey results could be collected in real time as a multimedia stream, hyperlinked to and interacting with the other parts of the system as appropriate. Throughout, the knowledge is seen as human knowledge, with interactive computer support for recording and storing the information and processing it by such means as interpolating, correlating, browsing, selecting, retrieving, manipulating, calculating, analysing, generalising, filtering, visualising and delivering the results. Responsibilities may have to be reconsidered for various aspects of the system, such as field surveying; spatial models and interpretation; geological processes, past configurations and reasoning; standard setting, system framework and ontology maintenance; training; and storage, preservation and dissemination of digital records.
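    As one plausible reading of the hierarchy of spatial objects proposed above, the following Python sketch models geological entities as nested objects with typed relationships and a simple mark-up serialisation. Every class, field and relation name here is a hypothetical illustration, not the framework or ontology the paper specifies.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class SpatialObject:
        name: str
        object_class: str                      # e.g. "formation", "fault"
        children: list["SpatialObject"] = field(default_factory=list)
        relations: dict[str, str] = field(default_factory=dict)

        def to_xml(self, indent: int = 0) -> str:
            """Serialise the hierarchy as simple XML-style mark-up."""
            pad = "  " * indent
            rels = "".join(f' {k}="{v}"' for k, v in self.relations.items())
            inner = "\n".join(c.to_xml(indent + 1) for c in self.children)
            body = f"\n{inner}\n{pad}" if self.children else ""
            return (f'{pad}<{self.object_class} name="{self.name}"{rels}>'
                    f"{body}</{self.object_class}>")

    # A tiny hierarchy, free of fixed map-sheet boundaries: a survey area
    # containing a formation that is cut by a fault.
    area = SpatialObject("SurveyAreaA", "area")
    formation = SpatialObject("LowerSandstone", "formation",
                              relations={"overlies": "BasalShale"})
    fault = SpatialObject("F1", "fault", relations={"cuts": "LowerSandstone"})
    area.children += [formation, fault]
    print(area.to_xml())
    ```

    A map then becomes just one rendering of such a structure, with the mark-up output serving the hypermedia linking the abstract envisages.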
