    What May Visualization Processes Optimize?

    In this paper, we present an abstract model of visualization and inference processes and describe an information-theoretic measure for optimizing such processes. To obtain this abstraction, we first examined six classes of workflows in data analysis and visualization, and identified four levels of typical visualization components, namely disseminative, observational, analytical, and model-developmental visualization. We observed a common phenomenon across these levels: the transformation of data spaces (referred to as alphabets) usually corresponds to a reduction of maximal entropy along a workflow. Based on this observation, we establish an information-theoretic measure of cost-benefit ratio that may be used as a cost function for optimizing a data visualization process. To demonstrate the validity of this measure, we examined a number of successful visualization processes in the literature and showed that the measure can mathematically explain their advantages over possible alternatives.
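
    A minimal sketch of the cost-benefit idea described above, assuming Shannon entropy over discrete alphabets; the function names and the example numbers are illustrative and do not reproduce the paper's notation.

        import math

        def entropy(probs):
            """Shannon entropy (bits) of a discrete distribution."""
            return -sum(p * math.log2(p) for p in probs if p > 0)

        def cost_benefit_ratio(h_in, h_out, distortion, cost):
            # Benefit = entropy reduction across the transformation (alphabet
            # compression) minus any distortion introduced, divided by the
            # cost of performing the transformation.
            return ((h_in - h_out) - distortion) / cost

        # Example: a step that maps a 256-letter data alphabet (uniform,
        # 8 bits) onto a 16-letter visual alphabet (uniform, 4 bits).
        h_in = entropy([1 / 256] * 256)   # 8.0 bits
        h_out = entropy([1 / 16] * 16)    # 4.0 bits
        print(cost_benefit_ratio(h_in, h_out, distortion=1.0, cost=2.0))  # 1.5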

    Analysis of (iso)surface reconstructions: Quantitative metrics and methods

    Due to sampling processes, volumetric data is inherently discrete, and knowledge of the underlying continuous model is most often not available. Surface rendering techniques attempt to reconstruct the continuous model from the discrete data using isosurfaces. It is therefore natural to ask how accurate the reconstructed isosurfaces are with respect to the underlying continuous model. A reconstructed isosurface may look impressive when rendered (photorealism), but how well does it reflect reality (physical realism)?

    The users of volume visualization packages must be aware of the shortcomings of the algorithms used to produce the images so that they may properly interpret, and interact with, what they see. However, very little work has been done to quantify the accuracy of volumetric data reconstructions. Most analysis to date has been qualitative: simple visual inspection determines whether characteristics known to exist in the real-world object are present in the rendered image. Our research suggests metrics and methods for quantifying the physical realism of reconstructed isosurfaces.

    Physical realism is a many-faceted notion; a different metric could be defined for each physical property one wishes to consider. We have defined four metrics: Global Surface Area Preservation (GSAP), Volume Preservation (VP), Point Distance Preservation (PDP), and Isovalue Preservation (IVP). We present experimental results for each of these metrics and discuss their validity with respect to those results.

    We also present the Reconstruction Quantification (sub)System (RQS), which provides a flexible framework for measuring physical realism. This system can be embedded in existing visualization systems with little modification of the host system. Two types of analysis can be performed: reconstruction analysis, which allows users to determine the accuracy of individual surface reconstructions, and algorithm analysis, which allows developers of visualization systems to determine the efficacy of the visualization system based on several reconstructions.
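
    As a rough illustration of the GSAP and VP style of metric, the sketch below compares the surface area and enclosed volume of a reconstructed triangle mesh against reference values; the metric formulations here are assumptions for illustration, not the dissertation's exact definitions.

        import numpy as np

        def mesh_area_volume(vertices, triangles):
            """Total surface area and enclosed volume of a closed triangle
            mesh; volume uses the signed-tetrahedron (divergence theorem)
            formula."""
            v = vertices[triangles]                      # (n_tri, 3, 3)
            cross = np.cross(v[:, 1] - v[:, 0], v[:, 2] - v[:, 0])
            area = 0.5 * np.linalg.norm(cross, axis=1).sum()
            volume = abs(np.einsum('ij,ij->i', v[:, 0],
                                   np.cross(v[:, 1], v[:, 2])).sum()) / 6.0
            return area, volume

        def relative_error(measured, reference):
            return abs(measured - reference) / reference

        # Example: a unit tetrahedron standing in for a reconstructed
        # isosurface; for GSAP/VP the reference values would come from the
        # underlying continuous model.
        verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
        tris = np.array([[0, 2, 1], [0, 1, 3], [0, 3, 2], [1, 2, 3]])
        area, vol = mesh_area_volume(verts, tris)
        print(relative_error(vol, 1 / 6))   # 0.0 for this exact mesh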

    A Survey on Economic-driven Evaluations of Information Technology

    The economic-driven evaluation of information technology (IT) has become an important instrument in the management of IT projects. Numerous approaches have been developed to quantify the costs of an IT investment and its assumed profit, to evaluate its impact on business process performance, and to analyze the role of IT in the achievement of enterprise objectives. This paper discusses approaches for evaluating IT from an economic-driven perspective. Our comparison is based on a framework distinguishing between classification criteria and evaluation criteria. The former allow for the categorization of evaluation approaches based on their similarities and differences; the latter, by contrast, represent attributes used to assess the discussed approaches. Finally, we give an example of a typical economic-driven IT evaluation.
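
    As a concrete illustration of one widely used economic-driven evaluation, the sketch below computes net present value and a simple ROI for a hypothetical IT investment; the cash-flow figures are invented for illustration and are not taken from the paper.

        def npv(rate, cashflows):
            """Net present value of yearly cash flows; cashflows[0] is year 0
            (typically the negative up-front investment)."""
            return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

        def roi(gain, cost):
            """Simple return on investment: net gain relative to cost."""
            return (gain - cost) / cost

        # Example: a 100k IT investment returning 40k per year for four
        # years, discounted at 8%.
        flows = [-100_000, 40_000, 40_000, 40_000, 40_000]
        print(round(npv(0.08, flows), 2))      # ~32485.07 -> positive NPV
        print(roi(sum(flows[1:]), -flows[0]))  # 0.6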

    User-centered visual analysis using a hybrid reasoning architecture for intensive care units

    One problem with Intensive Care Unit information systems is that, in some cases, they produce a very dense display of data. To ensure the overview and readability of increasing volumes of data, special features are required (e.g., data prioritization, clustering, and selection mechanisms) together with analytical methods (e.g., temporal data abstraction, principal component analysis, and detection of events). This paper addresses the problem of improving the integration of the visual and analytical methods applied to medical monitoring systems. We present a knowledge- and machine learning-based approach to support the knowledge discovery process with appropriate analytical and visual methods. It can benefit the development of user interfaces for intelligent monitors that assist with the detection and explanation of new, potentially threatening medical events. The proposed hybrid reasoning architecture provides an interactive graphical user interface for adjusting the parameters of the analytical methods based on the user's task at hand. The action sequences performed on the graphical user interface are consolidated in a dynamic knowledge base with specific hybrid reasoning that integrates symbolic and connectionist approaches. These sequences of expert knowledge acquisition can make knowledge emergence easier during similar experiences and positively impact the monitoring of critical situations. The graphical user interface, incorporating user-centered visual analysis, facilitates the natural and effective representation of clinical information for patient care.
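
    A minimal sketch of two of the analytical methods mentioned above, principal component analysis for reducing display density and simple threshold-based event detection; the channel layout, the synthetic data, and the clinical threshold are assumptions for illustration, not the system's actual parameters.

        import numpy as np

        def pca_project(samples, n_components=2):
            """Project multichannel monitor data onto its leading principal
            components to reduce display density."""
            centered = samples - samples.mean(axis=0)
            _, _, vt = np.linalg.svd(centered, full_matrices=False)
            return centered @ vt[:n_components].T

        def detect_events(signal, threshold):
            """Flag time indices where a vital sign exceeds a threshold."""
            return np.flatnonzero(signal > threshold)

        # Example: 100 time steps of 8 monitored channels; channel 0 is taken
        # to be heart rate, and one synthetic tachycardia spike is injected.
        rng = np.random.default_rng(0)
        vitals = rng.normal(80, 10, size=(100, 8))
        vitals[40, 0] = 130
        overview = pca_project(vitals)            # dense display -> 2-D overview
        print(detect_events(vitals[:, 0], 110))   # [40]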

    The interaction of lean and building information modeling in construction

    Lean construction and Building Information Modeling are quite different initiatives, but both are having profound impacts on the construction industry. A rigorous analysis of the myriad specific interactions between them indicates that a synergy exists which, if properly understood in theoretical terms, can be exploited to improve construction processes beyond the degree possible with either paradigm applied independently. Using a matrix that juxtaposes BIM functionalities with prescriptive lean construction principles, fifty-six interactions have been identified, all but four of which represent constructive interaction. Although evidence has been found for the majority of these, the matrix is not considered complete; rather, it is a framework for research to explore the degree of validity of the interactions. Construction executives, managers, designers, and developers of IT systems for construction can also benefit from the framework as an aid to recognizing potential synergies when planning their lean and BIM adoption strategies.
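
    A minimal sketch of the kind of interaction matrix described above, juxtaposing BIM functionalities with lean principles; the entries and labels are invented placeholders rather than the paper's fifty-six documented interactions.

        # Positive entries mark constructive interactions, negative entries
        # detrimental ones. All labels below are hypothetical examples.
        CONSTRUCTIVE, NEGATIVE = 1, -1

        matrix = {
            ("visualization of form", "reduce variability"): CONSTRUCTIVE,
            ("rapid design alternatives", "reduce cycle time"): CONSTRUCTIVE,
            ("automated clash checking", "reduce rework"): CONSTRUCTIVE,
            ("single-discipline model silos", "improve flow"): NEGATIVE,
        }

        constructive = sum(1 for v in matrix.values() if v == CONSTRUCTIVE)
        print(f"{constructive} of {len(matrix)} interactions are constructive")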

    What do faculties specializing in brain and neural sciences think about, and how do they approach, brain-friendly teaching-learning in Iran?

    Objective: To investigate the perspectives and experiences of faculty members specializing in brain and neural sciences regarding brain-friendly teaching-learning in Iran. Methods: 17 faculty members from 5 universities were selected by purposive sampling (2018). In-depth semi-structured interviews with directed content analysis were used. Results: 31 sub-subcategories, 10 subcategories, and 4 categories were formed according to the “General teaching model”. “Mentorship” was a newly added category. Conclusions: A neuro-educational approach that considers the learner’s brain uniqueness, executive function facilitation, and the valence system is important to learning. Such learning can be facilitated through cognitive load considerations, repetition, deep questioning, visualization, feedback, and reflection. Contextualized, problem-oriented, social, multi-sensory, experiential, and spaced learning, as well as brain-friendly evaluation, must be considered. Mentorship is important for coaching and emotional facilitation.

    A document-like software visualization method for effective cognition of c-based software systems

    Maintenance is a crucial and very costly process in the software life cycle. Many software systems, particularly legacy systems, are maintained from time to time as new requirements arise. One important source for understanding a software system before it is maintained is its documentation, particularly system documentation. Unfortunately, not all software systems are accompanied by reliable, up-to-date documents; in such cases, the source code is the only reliable source for programmers. A number of studies have been carried out to assist cognition based on source code. One way is through tool automation via reverse engineering, in which source code is parsed and the extracted information visualized using certain visualization methods. Most software visualization methods use graphs as the main element to represent extracted software artifacts. Nevertheless, current methods tend to produce complicated graphs and do not provide an explicit, document-like re-documentation environment. Hence, this thesis proposes a document-like software visualization method called DocLike Modularized Graph (DMG). The method is realized in a prototype tool named DocLike Viewer that targets C-based software systems. The main contribution of the DMG method is to provide an explicit structural re-documentation mechanism in the software visualization tool. In addition, the DMG method provides more levels of information abstraction via a less complex graph that includes inter-module dependencies, inter-program dependencies, procedural abstraction, and parameter passing. The DMG method was empirically evaluated based on the Goal/Question/Metric (GQM) paradigm, and the findings show that the method can improve productivity and quality in the aspect of cognition or program comprehension. A usability study was also conducted, and DocLike Viewer received the most positive responses from the software practitioners.
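
    A minimal sketch of the reverse-engineering step described above: extracting a call graph from C source for later visualization. The regex pass is a crude stand-in for a real C parser, and the example source is an assumption for illustration.

        import re
        from collections import defaultdict

        FUNC_DEF = re.compile(r'^[ \t]*\w[\w\s\*]*\b(\w+)\s*\([^;]*\)\s*\{',
                              re.MULTILINE)
        CALL = re.compile(r'\b(\w+)\s*\(')

        def call_graph(source):
            """Map each function defined in `source` to the names it calls."""
            graph = defaultdict(set)
            defs = list(FUNC_DEF.finditer(source))
            for i, d in enumerate(defs):
                end = defs[i + 1].start() if i + 1 < len(defs) else len(source)
                graph[d.group(1)].update(CALL.findall(source[d.end():end]))
            return graph

        example = '''
        int helper(int x) { return x + 1; }
        int main(void) { return helper(printf("hi")); }
        '''
        print(dict(call_graph(example)))
        # e.g. {'helper': set(), 'main': {'helper', 'printf'}}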