
    A Content Analysis-Based Approach to Explore Simulation Verification and Identify Its Current Challenges

    Verification is a crucial process that facilitates the identification and removal of errors within simulations. This study explores semantic changes to the concept of simulation verification over the past six decades using a data-supported, automated content analysis approach. We collect and utilize a corpus of 4,047 peer-reviewed Modeling and Simulation (M&S) publications dealing with a wide range of studies of simulation verification from 1963 to 2015. We group the selected papers by decade of publication to provide insights and explore the corpus from four perspectives: (i) the positioning of prominent concepts across the corpus as a whole; (ii) a comparison of the prominence of verification, validation, and Verification and Validation (V&V) as separate concepts; (iii) the positioning of the concepts specifically associated with verification; and (iv) an evaluation of verification's defining characteristics within each decade. Our analysis reveals unique characterizations of verification in each decade. The insights gathered helped to identify and discuss three categories of verification challenges as avenues of future research, awareness, and understanding for researchers, students, and practitioners. These categories include conveying confidence and maintaining ease of use; techniques' coverage abilities for handling increasing simulation complexities; and new ways to provide error feedback to model users.
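The decade-grouping at the heart of such a corpus analysis can be sketched in a few lines; the corpus entries, years, and tokenization below are invented stand-ins rather than the authors' actual pipeline:

```python
# Hypothetical sketch of decade-grouped term frequency for a corpus of
# abstracts; documents and terms are invented, not the study's data.
from collections import Counter, defaultdict

corpus = [
    (1968, "verification of simulation logic and program debugging"),
    (1995, "verification and validation of simulation models"),
    (2012, "automated verification techniques for agent-based simulation"),
]

by_decade = defaultdict(Counter)
for year, abstract in corpus:
    decade = (year // 10) * 10          # 1968 -> 1960, 2012 -> 2010
    by_decade[decade].update(abstract.lower().split())

# Most prominent terms per decade (stop words would be removed in practice)
for decade in sorted(by_decade):
    print(decade, by_decade[decade].most_common(3))
```

A real content analysis would add stop-word removal, stemming, and a weighting scheme such as TF-IDF, but the grouping step is the same.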

    Interactive 3D visualization for theoretical Virtual Observatories

    Virtual Observatories (VOs) are online hubs of scientific knowledge. They encompass a collection of platforms dedicated to the storage and dissemination of astronomical data, from simple data archives to e-research platforms offering advanced tools for data exploration and analysis. Whilst the more mature platforms within VOs primarily serve the observational community, there are also services fulfilling a similar role for theoretical data. Scientific visualization can be an effective tool for the analysis and exploration of datasets made accessible through web platforms for theoretical data, which often contain spatial dimensions and properties inherently suitable for visualization via, e.g., mock imaging in 2D or volume rendering in 3D. We analyze the current state of 3D visualization for big theoretical astronomical datasets through scientific web portals and virtual observatory services. We discuss some of the challenges for interactive 3D visualization and how it can augment the workflow of users in a virtual observatory context. Finally, we showcase a lightweight client-server visualization tool for particle-based datasets allowing quantitative visualization via data filtering, highlighting two example use cases within the Theoretical Astrophysical Observatory.
    Comment: 10 pages, 13 figures. Accepted for publication in Monthly Notices of the Royal Astronomical Society.
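As a rough illustration of the quantitative, filter-based selection such a client-server tool performs before sending geometry to the client, here is a minimal sketch; the particle records, field names, and thresholds are invented:

```python
# Minimal sketch of server-side data filtering for a particle dataset;
# records and the "mass" cut are invented for illustration.
particles = [
    {"x": 0.1, "y": 0.2, "z": 0.3, "mass": 1e10},
    {"x": 0.5, "y": 0.1, "z": 0.9, "mass": 3e11},
    {"x": 0.7, "y": 0.8, "z": 0.2, "mass": 5e9},
]

def filter_particles(data, field, lo, hi):
    """Quantitative filter: keep particles whose field lies in [lo, hi]."""
    return [p for p in data if lo <= p[field] <= hi]

# Only the surviving particles would be serialized and rendered client-side
massive = filter_particles(particles, "mass", 1e10, 1e12)
print(len(massive))
```

Filtering server-side keeps the payload sent to the browser small, which matters for big theoretical datasets.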

    Pedestrian Flow Simulation Validation and Verification Techniques

    For the verification and validation of microscopic simulation models of pedestrian flow, we have performed experiments for different kinds of facilities and sites where most conflicts and congestion happen, e.g. corridors, narrow passages, and crosswalks. To assess the validity of a model, simulation results are compared with video recordings carried out under the same real-life conditions, e.g. pedestrian flux and density distributions. The aim of this technique is to achieve the level of accuracy required of the simulation model, and the method is effective at detecting critical points in pedestrian walking areas. For the calibration of suitable models we use the results obtained from analyzing the video recordings of Hajj 2009, and these results can be used to check the design of sections of pedestrian facilities and exits. As practical examples, we present the simulation of pilgrim streams on the Jamarat bridge. The objectives of this study are twofold: first, to show through verification and validation that simulation tools can be used to reproduce realistic scenarios, and second, to gather data for accurate predictions for designers and decision makers.
    Comment: 19 pages, 10 figures.
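The flux comparison underlying such a validation step can be sketched as follows; the flux values and the 10% tolerance are invented for illustration, not the study's actual measurements or acceptance criterion:

```python
# Hedged sketch of validating simulated pedestrian flux against video
# observations; all numbers below are invented.
observed_flux = [1.2, 1.5, 1.8, 1.4]   # pedestrians per metre per second
simulated_flux = [1.1, 1.6, 1.7, 1.5]

def relative_error(obs, sim):
    """Mean relative deviation of simulation from observation."""
    return sum(abs(o - s) / o for o, s in zip(obs, sim)) / len(obs)

err = relative_error(observed_flux, simulated_flux)
TOLERANCE = 0.10  # hypothetical: accept if mean relative error < 10%
print(f"mean relative error = {err:.3f}, valid = {err < TOLERANCE}")
```

In practice the comparison would cover density distributions and multiple measurement cross-sections, not a single flux series.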

    HPC Cloud for Scientific and Business Applications: Taxonomy, Vision, and Research Challenges

    High Performance Computing (HPC) clouds are becoming an alternative to on-premise clusters for executing scientific applications and business analytics services. Most research efforts in HPC cloud aim to understand the cost-benefit of moving resource-intensive applications from on-premise environments to public cloud platforms. Industry trends show that hybrid environments are the natural path to getting the best of on-premise and cloud resources: steady (and sensitive) workloads can run on on-premise resources, while peak demand can leverage remote resources in a pay-as-you-go manner. Nevertheless, there are plenty of questions to be answered in HPC cloud, ranging from how to extract the best performance from an unknown underlying platform to what services are essential to make its usage easier. Moreover, the discussion of the right pricing and contractual models to fit small and large users is relevant for the sustainability of HPC clouds. This paper brings a survey and taxonomy of efforts in HPC cloud and a vision of what we believe lies ahead, including a set of research challenges that, once tackled, can help advance businesses and scientific discoveries. This becomes particularly relevant due to the fast-increasing wave of new HPC applications coming from big data and artificial intelligence.
    Comment: 29 pages, 5 figures. Published in ACM Computing Surveys (CSUR).

    The more the merrier? Increasing group size may be detrimental to decision-making performance in nominal groups

    Demonstrability—the extent to which group members can recognize a correct solution to a problem—has a significant effect on group performance. However, the interplay between group size, demonstrability, and performance is not well understood. This paper addresses these gaps by studying the joint effect of two factors—the difficulty of solving a problem and the difficulty of verifying the correctness of a solution—on the ability of groups of varying sizes to converge to correct solutions. Our empirical investigations use problem instances from different computational complexity classes, NP-complete (NPC) and PSPACE-complete (PSC), that exhibit similar solution difficulty but differ in verification difficulty. Our study focuses on nominal groups to isolate the effect of problem complexity on performance. We show that NPC problems have higher demonstrability than PSC problems: participants were significantly more likely to recognize correct and incorrect solutions for NPC problems than for PSC problems. We further show that increasing the group size can actually *decrease* group performance for some problems of low demonstrability. We analytically derive the boundary that distinguishes these problems from others for which group performance monotonically improves with group size. These findings increase our understanding of the mechanisms that underlie group problem-solving processes, and can inform the design of systems and processes that would better facilitate collective decision-making.
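The asymmetry the study exploits (solutions to NP-complete problems are hard to find but cheap to verify) can be illustrated with a vertex-cover check; the graph below is an invented example, not one of the study's instances:

```python
# Why NPC problems have easy *verification*: checking a proposed vertex
# cover is polynomial time, even though finding one is hard.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # invented 4-node graph

def is_vertex_cover(edges, cover):
    """Polynomial-time check: every edge must touch a vertex in the cover."""
    cover = set(cover)
    return all(u in cover or v in cover for u, v in edges)

print(is_vertex_cover(edges, {0, 2}))  # True: {0, 2} touches every edge
print(is_vertex_cover(edges, {1, 3}))  # False: edge (0, 2) is uncovered
```

For PSPACE-complete problems no comparably cheap certificate check is believed to exist, which is what makes their solutions harder to demonstrate to a group.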

    The Magic of Vision: Understanding What Happens in the Process

    How important is human vision? Simply speaking, it is central for domain-related users to understand a design, a framework, a process, or an application in terms of human-centered cognition. This thesis focuses on facilitating visual comprehension for users working with specific industrial processes characterized by tomography. The thesis illustrates work done during the past two years within three application areas: real-time condition monitoring, tomographic image segmentation, and affective colormap design, featuring four research papers, of which three are published and one is under review.

    The first paper provides effective deep learning algorithms, accompanied by comparative studies, to support real-time condition monitoring of a specialized microwave drying process for porous foams taking place in a confined chamber. The tools provided give users the capability to gain visually based insights into and understanding of specific processes. We verify that our state-of-the-art deep learning techniques based on infrared (IR) images significantly benefit condition monitoring, providing an increase in fault-finding accuracy over conventional methods. Nevertheless, we note that transfer learning and deep residual network techniques do not yield increased performance over plain convolutional neural networks in our case.

    After a drying process, images are reconstructed from sensor data, such as that of a microwave tomography (MWT) sensor. Hence, enabling users to visually judge the success of the process from the reconstructed MWT images becomes the core task. The second paper proposes an automatic segmentation algorithm named MWTS-KM to visualize the desired low-moisture areas of the foam on the MWT images, effectively enhancing users' understanding of tomographic image data. We also show through a comparative study that its performance is superior to two other preeminent methods.

    To further boost human comprehension of the reconstructed MWT images, colormap design research based on the same segmentation task as in the second paper is elaborated in the third and fourth papers. A quantitative evaluation in the third paper shows that different colormaps can influence task accuracy in MWT-related analytics, and that the schemes autumn, viridis, and parula provide the best performance. As a full extension of the third paper, the fourth paper introduces a systematic crowdsourced study, verifying our prior hypothesis that colormaps triggering affect in the positive-exciting quadrant of the valence-arousal model facilitate more precise visual comprehension in the context of MWT than the other three quadrants. Interestingly, we also discover the counter-finding that colormaps eliciting affect in the negative-calm quadrant are undesirable. A synthetic colormap design guideline is provided to benefit domain-related users.

    In the end, we re-emphasize the importance of keeping humans central in every context, and we take first steps down the future path of human-centered machine learning (HCML), an emerging subfield of computer science that combines the expertise of data-driven ML with the domain knowledge of HCI. This novel interdisciplinary research field is being explored to support the development of real-time industrial decision-support systems.
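A generic 1-D k-means clustering of pixel intensities, in the spirit of (but not identical to) the MWTS-KM segmentation described above, can be sketched as follows; the pixel values, initial centers, and the "low moisture" labelling are invented:

```python
# Generic 1-D k-means sketch of intensity-based image segmentation;
# this is an illustrative stand-in, not the thesis' MWTS-KM algorithm.
pixels = [0.05, 0.08, 0.10, 0.55, 0.60, 0.62, 0.90, 0.95]  # invented intensities

def kmeans_1d(values, centers, iters=10):
    """Lloyd's algorithm on scalars: assign to nearest center, recompute means."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for v in values:
            nearest = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            clusters[nearest].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

centers, clusters = kmeans_1d(pixels, centers=[0.0, 0.5, 1.0])
# In an MWT setting, the low-intensity cluster might be mapped to a
# "low moisture" label and colored accordingly in the visualization.
print(centers)
```

On a real MWT image the same assignment runs per pixel over a 2-D array, and the cluster labels form the segmentation mask shown to the user.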