
    Comparison of JavaScript Graph Frameworks

    Creating complex JavaScript visualizations in the browser with large amounts of data can cause serious performance issues. This thesis compares the simplest form of data visualization: graphs. By comparing four different JavaScript graphing frameworks, we answer which platform is best suited for rendering graphs on the Web and how the selected frameworks differ from one another.

    Teaching Visually Impaired College Students in Introductory Statistics

    Instructors of postsecondary classes in statistics rely heavily on visuals in their teaching, both within the classroom and in resources like textbooks, handouts, and software, but this information is often inaccessible to students who are blind or visually impaired (BVI). The unique challenges involved in adapting both pedagogy and course materials to accommodate a BVI student may provoke anxiety among instructors teaching a BVI student for the first time, and instructors may end up feeling unprepared or “reinventing the wheel.” We discuss a wide variety of accommodations inside and outside of the classroom, grounded in the empirical literature on cognition and learning and informed by our own experience teaching a blind student in an introductory statistics course.

    Development of a construction quality assessment tool for houses in South Africa

    Housing is a critical socio-economic driver in the vast majority of developing countries, including South Africa. It involves many aspects such as construction quality, affordability, geographic location, long-term financing, and the environment. A key research concern is the quantification of the construction quality of houses and how this may be used to assist in the delivery of better-quality houses. This article is based on studies undertaken on housing construction sites in South Africa. A construction assessment tool is developed using principles similar to those used by CONQUAS in Singapore and Malaysia. The tool thus developed is capable of measuring the quality of ‘as-built’ construction elements of a house against national technical standards and specifications, within reasonable time and cost. Studies on the quality of houses were then conducted on 700 houses (two low-income projects and one middle-income project). The results showed that the two low-income projects had average quality scores of 58% and 64%, while the middle-income project scored 80%. Details of the sub-elements of the scores indicated the developmental needs of the contractors involved in the projects. Using the construction quality assessment tool, the government and other authorities can make better-informed decisions when awarding contracts. If introduced and implemented correctly, the quality of the houses delivered across the entire housing spectrum can be measured and monitored, and improvement measures put in place. The data collected through this quality assessment tool will be invaluable for national authorities, regulators, and Statistics South Africa in evaluating and reporting whether the housing stock being delivered is consistently improving. Risk assessment studies will assist the regulators in developing proper quality management strategies.

    Text and Spatial-Temporal Data Visualization

    In this dissertation, we discuss a text visualization system, a tree drawing algorithm, a spatial-temporal data visualization paradigm, and a tennis match visualization system. Corpora and corpus tools have become an important part of language teaching and learning, and yet text visualization is rarely used in this area. We present Text X-Ray, a Web tool for corpus-based language teaching and learning; its interactive text visualizations allow users to quickly examine a corpus or corpora at different levels of detail: articles, paragraphs, sentences, and words. Level-based tree drawing is a common algorithm that produces intuitive and clear presentations of hierarchically structured information. However, new applications often introduce new aesthetic requirements that call for new tree drawing methods. We present an indented level-based tree drawing algorithm for visualizing parse trees of the English language. This algorithm displays a tree with an aspect ratio that fits newer computer displays, while presenting the words in a way that is easy to read. We discuss the design of the algorithm and its application in text visualization for linguistic analysis and language learning. A story is a chain of events, and each event has multiple dimensions, including time, location, characters, actions, and context. Storyline visualizations attempt to visually present the many dimensions of a story’s events and their relationships. Integrating the temporal and spatial dimensions in a single visualization view is often desirable but highly challenging, chiefly because spatial data is inherently 2D while temporal data is inherently 1D. We present a storyline visualization technique that integrates both time and location information in a single view. Sports data visualization can be a useful tool for analyzing or presenting sports data. We present a new technique for visualizing tennis match data. It is designed as a supplement to online live streaming or live blogging of tennis matches; it can retrieve data directly from a tennis match live-blogging website and display a 2D interactive view of match statistics, so it can be easily integrated with the live-blogging platforms used by many news organizations. The visualization addresses the limitations of current live coverage of tennis matches by providing both a quick overview and a great amount of detail on demand.
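The abstract above does not give the details of the thesis's indented level-based algorithm, but the basic indentation idea can be sketched minimally: place each node on its own line, indented by its depth, so sibling order and nesting stay readable. The parse-tree labels below are illustrative placeholders, not data from the dissertation.

```python
def draw_indented(node, depth=0, lines=None):
    """Render a tree as indented text, one node per line.

    A node is a (label, children) pair; children is a list of nodes.
    """
    if lines is None:
        lines = []
    label, children = node
    lines.append("  " * depth + label)   # indent by tree depth
    for child in children:
        draw_indented(child, depth + 1, lines)
    return lines

# A toy parse tree: S -> NP (Det, N), VP (V, NP)
parse_tree = ("S", [("NP", [("Det", []), ("N", [])]),
                    ("VP", [("V", []), ("NP", [])])])
print("\n".join(draw_indented(parse_tree)))
```

This sketch produces a tall, narrow rendering; the thesis's algorithm additionally tunes the layout's aspect ratio for modern displays, which a depth-only indentation scheme does not capture.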

    The State of the Art in Cartograms

    Cartograms combine statistical and geographical information in thematic maps, where areas of geographical regions (e.g., countries, states) are scaled in proportion to some statistic (e.g., population, income). Cartograms make it possible to gain insight into patterns and trends in the world around us and have been very popular visualizations for geo-referenced data for over a century. This work surveys cartogram research in visualization, cartography and geometry, covering a broad spectrum of different cartogram types: from the traditional rectangular and table cartograms, to Dorling and diffusion cartograms. A particular focus is the study of the major cartogram dimensions: statistical accuracy, geographical accuracy, and topological accuracy. We review the history of cartograms, describe the algorithms for generating them, and consider task taxonomies. We also review quantitative and qualitative evaluations, and we use these to arrive at design guidelines and research challenges
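The core scaling idea behind any cartogram, making each region's area proportional to its statistic, can be sketched in a few lines. The region names and figures below are illustrative placeholders; real cartogram algorithms (rectangular, Dorling, diffusion) additionally decide how to deform or place the shapes while preserving recognizability and topology.

```python
def target_areas(current_areas, statistic):
    """Return the area each region should occupy so that areas become
    proportional to the statistic while total map area is preserved."""
    total_area = sum(current_areas.values())
    total_stat = sum(statistic.values())
    return {name: total_area * statistic[name] / total_stat
            for name in current_areas}

areas = {"A": 40.0, "B": 40.0, "C": 20.0}   # current map areas
pop   = {"A": 10,   "B": 60,   "C": 30}     # statistic, e.g. population
print(target_areas(areas, pop))             # {'A': 10.0, 'B': 60.0, 'C': 30.0}
```

The gap between a region's current and target area is what the survey's "statistical accuracy" dimension measures; geographical and topological accuracy then constrain how the reshaping is done.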

    Common metrics for cellular automata models of complex systems

    The creation and use of models is critical not only to the scientific process, but also to life in general. Selected features of a system are abstracted into a model that can then be used to gain knowledge of the workings of the observed system and even anticipate its future behaviour. A key feature of the modelling process is the identification of commonality, which allows previous experience with one model to be used in a new or unfamiliar situation. This recognition of commonality between models allows standards to be formed, especially in areas such as measurement: how everyday physical objects are measured is built on an ingrained acceptance of their underlying commonality. Complex systems, often with their layers of interwoven interactions, are harder to model and, therefore, to measure and predict. Indeed, the inability to compute and model a complex system, except at a localised and temporal level, can be seen as one of its defining attributes. Establishing commonality between complex systems provides the opportunity to find common metrics. This work looks at two-dimensional cellular automata, which are widely used as a simple modelling tool for a variety of systems. This has led to a very diverse range of systems using a common modelling environment based on a lattice of cells, providing a possible common link between systems that could be exploited to find a common metric giving information on a diverse range of systems. An enhancement of a categorisation of cellular automata model types used for biological studies is proposed and expanded to include other disciplines. The thesis outlines a new metric, the C-Value, created by the author. This metric, based on the connectedness of the active elements on the cellular automata grid, is then tested with three models built to represent three of the four categories of cellular automata model types. The results show that the C-Value is a good indicator of the gathering of active cells on a grid into a single, compact cluster and, when correlated with the mean density of active cells on the lattice, of whether their distribution is random. This provides a range defining the disordered and ordered states of a grid. The use of the C-Value in a localised context shows potential for identifying patterns of clusters on the grid.
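The C-Value's exact definition is given in the thesis itself; as an illustration of the underlying notion, quantifying how gathered the active cells are, the sketch below counts 4-connected clusters of active cells on a binary grid. A single compact cluster yields a count of 1, while scattered cells yield many. The grids and function name are illustrative, not the author's metric.

```python
def count_clusters(grid):
    """Count 4-connected clusters of active (truthy) cells in a 2D grid."""
    rows, cols = len(grid), len(grid[0])
    seen, clusters = set(), 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and (r, c) not in seen:
                clusters += 1
                stack = [(r, c)]          # flood-fill one cluster
                while stack:
                    y, x = stack.pop()
                    if ((y, x) in seen
                            or not (0 <= y < rows and 0 <= x < cols)
                            or not grid[y][x]):
                        continue
                    seen.add((y, x))
                    stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return clusters

compact   = [[0, 1, 1], [0, 1, 1], [0, 0, 0]]   # one tight block
scattered = [[1, 0, 1], [0, 0, 0], [1, 0, 1]]   # four lone cells
print(count_clusters(compact), count_clusters(scattered))   # 1 4
```

Correlating such a connectedness measure with the mean density of active cells is what lets a metric of this kind distinguish clustered, random, and dispersed grid states.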

    Logic learning and optimized drawing: two hard combinatorial problems

    Nowadays, information extraction from large datasets is a recurring operation in countless fields of application. The purpose of this thesis is to follow the data flow along its journey, describing some hard combinatorial problems that arise from two key processes, one consecutive to the other: information extraction and representation. The approaches considered here focus mainly on metaheuristic algorithms, to address the need for fast and effective optimization methods. The problems studied include data extraction instances, such as Supervised Learning in Logic Domains and the Max Cut-Clique Problem, as well as two different Graph Drawing Problems. Moreover, stemming from these main topics, other themes are discussed, namely two different approaches to handling Information Variability in Combinatorial Optimization Problems (COPs), and Topology Optimization of lightweight concrete structures.

    Visualizing Multidimensional Data with General Line Coordinates and Pareto Optimization

    This work shows that Linear General Line Coordinates (GLC-L) can visualize multidimensional data better than typical methods such as Parallel Coordinates (PC). Visuals produced with GLC-L display less clutter than PC and make changes from one graph to the next easier to see. Visualizing the Pareto frontier with GLC-L allows n-D data to be viewed all at once, whereas typical methods are limited to 2 or 3 objectives at a time. The method details the process of selecting a "best" case from a group of equals in the Pareto subset and comparing it against an optimal solution. Selecting a "best" case from a Pareto subset is difficult, because every individual is better than its peers in some respects. The "best" case is the solution to the specific task for each dataset.
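As background for the Pareto subset discussed above, the sketch below extracts the non-dominated points from a set of n-D objective vectors, assuming all objectives are to be minimized. The sample points are illustrative; the thesis's GLC-L visualization and "best"-case selection operate on such a subset but are not reproduced here.

```python
def dominates(a, b):
    """a dominates b if a is <= b in every objective and < in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_subset(points):
    """Return the points not dominated by any other point (the Pareto set)."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

points = [(1, 5), (2, 2), (5, 1), (4, 4), (3, 3)]
print(pareto_subset(points))   # [(1, 5), (2, 2), (5, 1)]
```

Every surviving point is incomparable to the others, trading one objective off against another, which is exactly why picking a single "best" case from the subset requires extra, task-specific criteria.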
