
    On standards and values: Between finite actuality and infinite possibility

    This article explores the relation between subjects and standards in a way that is informed by a process orientation to theoretical psychology. Standards are presented as objectifications of values designed to generalize and stabilize experiences of value. Standards are nevertheless prone to becoming “parodic” in the sense that they can become obstacles to the actualization of the values they were designed to incarnate. Furthermore, much critical social science has mishandled the nature of standards by insisting that values are nothing but local and specific constructions in the mundane world of human activity. To rectify this problem, this article reactivates a sense of the difference between the idea of a finite world of activity and a world of value which points beyond and exceeds passing circumstance. Resources for the reactivation of this difference, which is core to a processual grasp of self, memory, and value, are found in the thinking of A. N. Whitehead, Max Weber, Marcel Proust, and Søren Kierkegaard.

    Towards a Generalised Pedagogical Framework for Creating Mixed-Mode Role-Play in a 3D Virtual Environment

    Role-play has proved itself to be an effective teaching method, and role-play within a virtual environment has been found to be even more so. Consequently, many studies have combined role-play with computer simulation; however, limitations remain in the work done in this area. Some of the major outstanding problems associated with creating virtual environments for learning are: finding the simplest way to model and represent abstract concepts as 3D objects; and implementing the students’ interactions - with each other, with their instructor, and with the represented objects. Also, many projects have focused on only one pedagogical topic. My vision is to introduce a generalized method that facilitates the construction of learning scenarios and renders them as message-passing role-play activities. These activities can then be deployed in a virtual environment (VE) to help students become more immersed in the learning process. Each such activity is constructed by humanizing a ‘non-human’ object, whereby the students embody and imitate an (often abstract) object which is part of a technological system and which occurs in a virtual world. This can bring many benefits, such as better supporting the students’ ability to imagine and visualize such objects, making them more engaged with their learning, enhancing their conceptual understanding, strengthening their reasoning when solving problems related to the topic area, and reinvigorating their interest in learning. This research presents an evaluation of an approach for creating a role-play simulation in a role-play-supporting virtual environment, which harnesses the advantages of 3D virtual environments effectively in order to benefit the students’ learning by improving their understanding of abstract concepts. Moreover, this approach is generalized and thus extends previous studies by offering a system that can be applied to a wide range of topics involving message-passing role-play scenarios. The approach is presented within a conceptual pedagogical framework that is supported by an analysis of the findings and results from experiments conducted to validate the framework from both the learning and technical perspectives.

    A Case for Cooperative and Incentive-Based Coupling of Distributed Clusters

    Research interest in Grid computing has grown significantly over the past five years. Management of distributed resources is one of the key issues in Grid computing, and central to it is the effectiveness of resource allocation, as this determines the overall utility of the system. Current approaches to superscheduling in a grid environment are non-coordinated, since application-level schedulers or brokers make scheduling decisions independently of one another. Clearly, this can exacerbate the load-sharing and utilization problems of distributed resources due to the suboptimal schedules that are likely to occur. To overcome these limitations, we propose a mechanism for coordinated sharing of distributed clusters based on computational economy. The resulting environment, called Grid-Federation, allows the transparent use of resources from the federation when a cluster's local resources are insufficient to meet its users' requirements. The use of a computational economy methodology in coordinating resource allocation not only facilitates QoS-based scheduling but also enhances the utility delivered by resources.

    Comment: 22 pages; extended version of the conference paper published at IEEE Cluster'05, Boston, MA.
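The economy-based coordination idea can be sketched in a few lines: each cluster quotes a price for the resources it contributes to the federation, and a job is placed on the cheapest cluster that can still host it. The names and the pricing rule below are illustrative assumptions, not the paper's actual Grid-Federation protocol, which also handles QoS constraints and queuing.

```python
from dataclasses import dataclass, field

@dataclass
class Cluster:
    name: str
    capacity: int            # free CPUs the cluster contributes
    price: float             # quoted cost per CPU to the federation
    jobs: list = field(default_factory=list)

def federate(job_cpus, clusters):
    """Place a job on the cheapest cluster that can still host it;
    return the chosen cluster's name, or None if the job must wait."""
    candidates = [c for c in clusters if c.capacity >= job_cpus]
    if not candidates:
        return None
    best = min(candidates, key=lambda c: c.price)
    best.capacity -= job_cpus
    best.jobs.append(job_cpus)
    return best.name
```

With two clusters quoting different prices, jobs flow to the cheaper one until it fills, then spill over to the dearer one, which is the incentive mechanism in miniature.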

    Parallel Implementation of the PHOENIX Generalized Stellar Atmosphere Program. II: Wavelength Parallelization

    We describe an important addition to the parallel implementation of our generalized NLTE stellar atmosphere and radiative transfer computer program PHOENIX. In a previous paper in this series we described the data- and task-parallel algorithms we have developed for radiative transfer, spectral line opacity, and NLTE opacity and rate calculations. These algorithms divided the work spatially or by spectral lines (that is, distributing the radial zones, individual spectral lines, or characteristic rays among different processors) and employ, in addition, task parallelism for logically independent functions (such as atomic and molecular line opacities). For finite, monotonic velocity fields, the radiative transfer equation is an initial value problem in wavelength, and hence each wavelength point depends upon the previous one. However, for the sophisticated NLTE models of both static and moving atmospheres needed to accurately describe, e.g., novae and supernovae, the number of wavelength points is very large (200,000–300,000), and hence parallelization over wavelength can lead both to considerable speedup in calculation time and to the ability to make use of the aggregate memory available on massively parallel supercomputers. Here, we describe an implementation of a pipelined design for the wavelength parallelization of PHOENIX, in which the necessary data from the processor working on a previous wavelength point are sent to the processor working on the succeeding wavelength point as soon as they are known. Our implementation uses a MIMD design based on a relatively small number of standard MPI library calls and is fully portable between serial and parallel computers.

    Comment: AAS-TeX, 15 pages; full text with figures available at ftp://calvin.physast.uga.edu/pub/preprints/Wavelength-Parallel.ps.gz. ApJ, in press.
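The pipelined wavelength design can be illustrated with a toy, single-process simulation: ownership of wavelength points is round-robin across workers, and each point's solve consumes the previous point's result, which in the real code would be an MPI receive followed by a send. Here `solve_point` is a trivial stand-in for the radiative transfer solve, and the names are illustrative rather than PHOENIX's own.

```python
def solve_point(i, upstream):
    """Stand-in for the radiative transfer solve at wavelength point i,
    which needs point i-1's result (the initial-value coupling)."""
    return upstream + 1.0

def pipeline(n_points, n_workers):
    """Round-robin ownership: worker p computes points p, p+P, p+2P, ...
    and forwards each finished result to the next point's owner
    (a send/recv pair in the MPI implementation)."""
    results = [0.0] * n_points
    owners = [i % n_workers for i in range(n_points)]
    for i in range(n_points):
        upstream = results[i - 1] if i > 0 else 0.0
        results[i] = solve_point(i, upstream)
    return results, owners
```

Because each worker can begin its next owned point as soon as the immediately preceding point is finished, the workers overlap in a pipeline rather than waiting for a full sweep, which is where the speedup comes from.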

    An analytical performance model for the Spidergon NoC

    Networks on chip (NoCs) have emerged as a promising alternative to bus-based interconnect networks for handling the increasing communication requirements of large systems on chip. Employing an appropriate topology for a NoC is highly important, mainly because the choice typically trades off cross-cutting concerns such as performance and cost. The Spidergon topology is a novel architecture recently proposed for the NoC domain. The objective of the Spidergon NoC has been to address the need for a fixed and optimized topology for cost-effective multi-processor SoC (MPSoC) development [7]. In this paper we analyze the traffic behavior of the Spidergon scheme and present an analytical evaluation of the average message latency in the architecture. We validate the analysis by comparing the model against results produced by a discrete-event simulator.
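Under uniform traffic, the hop-count component of such a latency model follows from the Spidergon's structure: each node has two ring links plus one cross link to the diametrically opposite node, so a minimal route either stays on the ring or takes the cross link once and then walks the ring. The sketch below assumes that connectivity; the paper's full model also accounts for queueing delay, which is omitted here.

```python
def ring_dist(a, b, n):
    """Hops between nodes a and b using only the ring links."""
    d = abs(a - b) % n
    return min(d, n - d)

def spidergon_hops(i, j, n):
    """Minimal hops from i to j in an n-node Spidergon (n even):
    either stay on the ring, or cross once to the opposite node
    and walk the ring from there."""
    return min(ring_dist(i, j, n), 1 + ring_dist((i + n // 2) % n, j, n))

def avg_hops(n):
    """Mean hop distance under uniform traffic (source 0, by symmetry)."""
    return sum(spidergon_hops(0, j, n) for j in range(1, n)) / (n - 1)
```

Under light load the average message latency is then roughly the mean hop count times the per-hop delay; the analytical model refines this with contention terms.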

    Cooperative Synchronization in Wireless Networks

    Synchronization is a key functionality in wireless networks, enabling a wide variety of services. We consider a Bayesian inference framework whereby network nodes can achieve phase and skew synchronization in a fully distributed way. In particular, under the assumption of Gaussian measurement noise, we derive two message passing methods (belief propagation and mean field), analyze their convergence behavior, and perform a qualitative and quantitative comparison with a number of competing algorithms. We also show that both methods can be applied in networks with and without master nodes. Our performance results are complemented by, and compared with, the relevant Bayesian Cramér-Rao bounds.
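A minimal, mean-field-flavoured sketch of the phase part of the idea: each node repeatedly re-estimates its phase from its neighbours' current estimates and the noisy pairwise offset measurements, with a master node anchoring the solution. This illustrates the distributed-averaging principle under Gaussian noise, not the paper's exact belief propagation message schedule, and the function names are assumptions.

```python
def synchronize(offsets, n, master=0, iters=50):
    """offsets[(i, j)] holds a noisy measurement of phase_j - phase_i."""
    phase = [0.0] * n
    neighbours = {k: [] for k in range(n)}
    for (i, j), o in offsets.items():
        neighbours[i].append((j, o))    # i's view: phase_j - phase_i = o
        neighbours[j].append((i, -o))   # j's view: phase_i - phase_j = -o
    for _ in range(iters):
        new = phase[:]
        for k in range(n):
            if k == master or not neighbours[k]:
                continue  # the master's phase stays fixed as the anchor
            # each neighbour j implies the estimate phase_k = phase_j - o
            new[k] = sum(phase[j] - o
                         for j, o in neighbours[k]) / len(neighbours[k])
        phase = new
    return phase
```

On a noiseless chain of three nodes with unit offsets, the iteration converges to phases 0, 1, 2; with noisy measurements it converges to the least-squares estimate instead.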

    Performance Comparison of Parallel Bees Algorithm on Rosenbrock Function

    Optimization algorithms that imitate nature have attracted much attention, principally as mechanisms for solving difficult problems such as the travelling salesman problem (TSP), which involves routing and scheduling of tasks. This thesis presents a parallel Bees Algorithm as a new approach for improving on the results of the Bees Algorithm. The Bees Algorithm is an optimization algorithm inspired by the natural foraging behaviour by which honey bees find the best solution. It is a series of activities based on a searching algorithm designed to reach the best solutions. Because it is an iterative algorithm, it suffers from slow convergence; a further downside is that it performs needless computation. This means the Bees Algorithm takes a long time to converge to the optimum solution. In this study, a parallel Bees Algorithm technique is proposed to overcome this issue, reducing the time required to reach a solution while matching the accuracy of the original Bees Algorithm.
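A serial Bees Algorithm on the 2-D Rosenbrock function can be sketched as follows; the site counts, patch size, and shrinking rule are common textbook choices, not necessarily the thesis's settings. The parallel version would run the elite-site local searches concurrently.

```python
import random

def rosenbrock(p):
    """2-D Rosenbrock function; global minimum 0 at (1, 1)."""
    x, y = p
    return (1 - x) ** 2 + 100 * (y - x * x) ** 2

def bees_algorithm(scouts_n=20, elite=5, recruits=10, patch=0.5,
                   iters=200, seed=1):
    rng = random.Random(seed)
    scouts = [[rng.uniform(-2, 2), rng.uniform(-2, 2)]
              for _ in range(scouts_n)]
    for _ in range(iters):
        scouts.sort(key=rosenbrock)          # best sites first
        for i in range(elite):               # recruited local search
            best = scouts[i]
            for _ in range(recruits):
                cand = [c + rng.uniform(-patch, patch) for c in best]
                if rosenbrock(cand) < rosenbrock(best):
                    best = cand
            scouts[i] = best
        for i in range(elite, scouts_n):     # the rest keep scouting
            scouts[i] = [rng.uniform(-2, 2), rng.uniform(-2, 2)]
        patch *= 0.99                        # shrink neighbourhoods
    return min(scouts, key=rosenbrock)
```

The inner loop over elite sites is the natural parallelization point: each site's recruited search is independent within an iteration, so distributing sites across processors attacks exactly the slow-convergence cost the abstract describes.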

    An occam Style Communications System for UNIX Networks

    This document describes the design of a communications system which provides occam-style communication primitives under a Unix environment, using TCP/IP protocols and any number of other protocols deemed suitable as underlying transport layers. The system will integrate with a low-overhead scheduler/kernel without incurring significant costs to the execution of processes within the run-time environment. A survey of relevant occam and occam3 features and related research is followed by a look at the Unix and TCP/IP facilities that determine our working constraints, and a description of the T9000 transputer's Virtual Channel Processor, which was instrumental in our formulation. Drawing from the information presented here, a design for the communications system is subsequently proposed. Finally, we make a preliminary investigation of methods for lightweight access control to shared resources in an environment that provides no support for critical sections, semaphores, or busy waiting. This is presented with relevance to the mutual exclusion problems that arise within the proposed design. Future directions for the evolution of this project are discussed in conclusion.
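The occam-style primitives at the heart of such a design are synchronous, unbuffered channels: a send does not complete until the matching receive does. A sketch of this rendezvous semantics using Python threads follows; the document's system realizes the same semantics over TCP/IP, which is abstracted away here, and the class and method names are illustrative.

```python
import threading

class Channel:
    """Unbuffered point-to-point channel: send blocks until the
    receiver has taken the value, as with occam's ! and ? operators."""
    def __init__(self):
        self._lock = threading.Lock()          # one sender at a time
        self._ready = threading.Semaphore(0)   # a value is available
        self._taken = threading.Semaphore(0)   # the value was consumed
        self._item = None

    def send(self, value):                     # occam: chan ! value
        with self._lock:
            self._item = value
            self._ready.release()
            self._taken.acquire()              # rendezvous: wait for receiver

    def receive(self):                         # occam: chan ? value
        self._ready.acquire()                  # wait for a sender
        value = self._item
        self._taken.release()                  # let the sender continue
        return value
```

The blocking send is what distinguishes occam channels from ordinary message queues, and it is also why the document's mutual exclusion concerns arise: every communication is a synchronization point.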