2,772 research outputs found

    Integrating Algorithmic and Systemic Load Balancing Strategies in Parallel Scientific Applications

    Load imbalance is a major source of performance degradation in parallel scientific applications. Load balancing increases the efficient use of existing resources and improves the performance of parallel applications running in distributed environments. At a coarse level of granularity, advances in runtime systems for parallel programs have been proposed to control available resources as efficiently as possible by utilizing idle resources and migrating tasks. At a finer level of granularity, advances in algorithmic strategies for dynamically balancing computational loads by data redistribution have been proposed to respond to variations in processor performance during the execution of a given parallel application. Algorithmic and systemic load balancing strategies have complementary sets of advantages. An integration of the two techniques is possible and should result in a system that delivers advantages over either technique used in isolation. This thesis presents the design and implementation of a system that combines an algorithmic, fine-grained, data-parallel load balancing strategy called Fractiling with a systemic, coarse-grained, task-parallel load balancing system called Hector. It also reports experimental results of running N-body simulations under this integrated system. The results indicate that a distributed runtime environment combining both algorithmic and systemic load balancing strategies can provide performance advantages with little overhead, underscoring the importance of this approach in large, complex scientific applications.
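The algorithmic strategy named above, Fractiling, draws on the factoring family of dynamic loop-scheduling schemes, in which chunk sizes shrink as work is handed out so that faster processors naturally claim more iterations. A minimal sketch of that idea follows; the function name and the exact chunk rule (each batch splits half the remaining iterations evenly among processors) are our own illustrative assumptions, not the thesis's implementation:

```python
# Illustrative sketch of factoring-style dynamic scheduling, the family
# of techniques Fractiling draws on. The chunk rule below is an
# assumption for illustration, not the thesis's implementation.

def factoring_chunks(total_iters, num_procs):
    """Yield chunk sizes; chunks shrink so stragglers hold less work."""
    remaining = total_iters
    while remaining > 0:
        chunk = max(1, -(-remaining // (2 * num_procs)))  # ceil division
        for _ in range(num_procs):
            if remaining <= 0:
                break
            size = min(chunk, remaining)
            yield size
            remaining -= size

chunks = list(factoring_chunks(100, 4))
print(chunks)       # chunk sizes decrease: large chunks first, then smaller
print(sum(chunks))  # all 100 iterations are assigned exactly once
```

Because late chunks are small, a processor that slows down mid-run forfeits little work, which is the property that lets such schemes absorb variations in processor performance.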

    Aligning Problem Solving and Gameplay : A Model for Future Research and Design

    Problem solving is often discussed as one of the benefits of games and game-based learning (e.g., Gee, 2007a; Van Eck, 2006a), yet little empirical research exists to support this assertion. It will be critical to establish and validate models of problem solving in games (Van Eck, 2007), but this will be difficult if not impossible without a better understanding of problem solving than currently exists in the field of serious games. While games can be used to teach a variety of content across multiple domains (Van Eck, 2006b, 2008), the ability of games to promote problem solving may be more important to the field of serious games because problem solving skills cross all domains and are among the most difficult learning outcomes to achieve. This may be particularly important in science, technology, engineering, and math (STEM), which is why serious game researchers are building games to promote problem solving in science (e.g., Gaydos & Squire, this volume; Van Eck, Hung, Bowman, & Love, 2009). Current research and design theory in serious games are insufficient to explain the relationship between problem solving and games, and they do not support the design of educational games intended to promote problem solving. Problem solving and problem-based learning (PBL) have been studied intensely in both Europe and the United States for more than 75 years, and while the focus of that study and the conceptualization of problem solving have evolved during that time, there is a tremendous body of knowledge to draw from. Most recently, researchers (e.g., Jonassen, 1997, 2000, 2002; Hung, 2006a; Jonassen & Hung, 2008) have made advances in both the delineation and definition of problem types and models for designing effective problems and PBL. Any models and research on the relation of games and problem solving must build on the existing research base in problem solving and PBL rather than unwittingly covering old ground in these areas. In this chapter, we present an overview of the dimensions upon which different problems vary, including domain knowledge, structuredness, and their associated learning outcomes. We then propose a classification of gameplay (as opposed to game genre) that accounts for the cognitive skills encountered during gameplay, relying in part on previous classification systems (e.g., Apperley, 2006), Mark Wolf’s (2006) concept of grids of interactivity (which we call iGrids), and our own cognitive analysis of gameplay. We then use this classification system, the iGrids, and example games to describe eleven different types of problems, the ways in which they differ, and the gameplay types most likely to support them. We conclude with a description of the ability of problems and games themselves to address specific learning outcomes independent of problem solving, including domain-specific learning, higher-order thinking, psychomotor skills, and attitude change. Implications for future research are also described. We believe that this approach can guide the design of games intended to promote problem solving and points the way toward future research in problem solving and games.

    Infrastructuring the Digital Public Sphere

    The idea of a public sphere --a shared, ideologically neutral domain where ideas and arguments may be shared, encountered, and contested--serves as a powerful imaginary in legal and policy discourse, informing both assumptions about how public communication works and ideals to which inevitably imperfect realities are compared. In debates about feasible and legally permissible content governance mechanisms for digital platforms, the public sphere ideal has counseled attention to questions of ownership and control rather than to other, arguably more pressing questions about systemic configuration. This essay interrogates such debates through the lens of infrastructure, with particular reference to the ways that digital tracking and advertising infrastructures perform systemic content governance functions.

    Immersive Learning Environments for Computer Science Education

    This master's thesis explores the effectiveness of an educational intervention using an interactive notebook to support and supplement instruction in a foundational-level programming course. A quantitative, quasi-experimental group comparison method was employed, in which students were placed into either a control or a treatment group. Data were collected from assignment and final grades, as well as self-reported time spent using the notebook. Independent t-tests and correlation were used for data analysis. Results were inconclusive but indicated a possible effect of the intervention. Further studies may explore the efficacy, implementation, and satisfaction of interactive notebooks across a larger population and multiple class topics.
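The independent t-tests mentioned in the abstract can be illustrated with a minimal Welch's t statistic in pure Python. The grade values below are invented for illustration, and this sketch is not the thesis's actual analysis code:

```python
# Minimal sketch of an independent-samples (Welch's) t statistic.
# The data are hypothetical; only the formula itself is standard.
from math import sqrt

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    n1, n2 = len(a), len(b)
    m1, m2 = sum(a) / n1, sum(b) / n2
    v1 = sum((x - m1) ** 2 for x in a) / (n1 - 1)  # sample variance
    v2 = sum((x - m2) ** 2 for x in b) / (n2 - 1)
    return (m1 - m2) / sqrt(v1 / n1 + v2 / n2)

control   = [72, 68, 75, 70, 66]  # hypothetical final grades
treatment = [78, 74, 80, 71, 77]

t = welch_t(treatment, control)
print(round(t, 2))  # about 2.61 for this toy data
```

In practice one would also compute the Welch-Satterthwaite degrees of freedom and a p-value (e.g., via `scipy.stats.ttest_ind` with `equal_var=False`) rather than stopping at the statistic.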

    Journal of Mathematics and Science: Collaborative Explorations


    Graph-Theoretical Tools for the Analysis of Complex Networks

    We are currently experiencing an explosive growth in data collection technology that threatens to dwarf the commensurate gains in computational power predicted by Moore’s Law. At the same time, researchers across numerous domain sciences are finding success using network models to represent their data. Graph algorithms are then applied to study the topological structure and tease out latent relationships between variables. Unfortunately, the problems of interest, such as finding dense subgraphs, are often the most difficult to solve from a computational point of view. Together, these issues motivate the need for novel algorithmic techniques in the study of graphs derived from large, complex data sources. This dissertation describes the development and application of graph-theoretic tools for the study of complex networks. Algorithms are presented that leverage efficient, exact solutions to difficult combinatorial problems for epigenetic biomarker detection and disease subtyping based on gene expression signatures. Extensive testing on publicly available data is presented, supporting the efficacy of these approaches. To address efficient algorithm design, the two core tenets of fixed-parameter tractability (branching and kernelization) are studied in the context of a parallel implementation of vertex cover. Results of testing on a wide variety of graphs derived from both real and synthetic data are presented; the relative success of kernelization versus branching is found to depend largely on the degree distribution of the graph. Throughout, an emphasis is placed upon the practicality of the resulting implementations to advance the limits of effective computation.
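The two fixed-parameter techniques named above, kernelization and branching, can be sketched for vertex cover. The toy recursion below applies only the classic high-degree reduction rule before branching on an arbitrary edge; it is our own illustration, not the parallel implementation the dissertation studies:

```python
# Toy vertex cover solver combining one kernelization rule with branching.
# Illustration only; not the dissertation's parallel implementation.
from collections import Counter

def vertex_cover(edges, k):
    """Return True iff the graph has a vertex cover of size <= k."""
    edges = {frozenset(e) for e in edges}
    if not edges:
        return True
    if k <= 0:
        return False
    # Kernelization: a vertex of degree > k must belong to any cover
    # of size <= k, so take it without branching.
    deg = Counter(v for e in edges for v in e)
    for v, d in deg.items():
        if d > k:
            return vertex_cover({e for e in edges if v not in e}, k - 1)
    # Branching: for any edge (u, w), some endpoint is in the cover.
    u, w = next(iter(edges))
    return (vertex_cover({e for e in edges if u not in e}, k - 1)
            or vertex_cover({e for e in edges if w not in e}, k - 1))

# A 4-cycle needs exactly 2 vertices to cover its edges.
square = [(1, 2), (2, 3), (3, 4), (4, 1)]
```

The degree distribution governs how often the reduction rule fires before branching must take over, which is consistent with the abstract's observation that the relative success of kernelization versus branching depends on the degree distribution.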