
    Evaluating a virtual learning environment in the context of its community of practice

    The evaluation of virtual learning environments (VLEs) and similar applications has, to date, largely consisted of checklists of system features, phenomenological studies or measures of specific forms of educational efficacy. Although these approaches offer some value, they are unable to capture the complex and holistic nature of a group of individuals using a common system to support the wide range of activities that make up a course or programme of study over time. This paper employs Wenger's theories of 'communities of practice' to provide a formal structure for looking at how a VLE supports a pre-existing course community. Wenger proposes a Learning Architecture Framework for a learning community of practice, which the authors have taken to provide an evaluation framework. This approach is complementary to both the holistic and complex natures of course environments, in that particular VLE affordances are less important than the activities of the course community in respect of the system. Thus, the VLE's efficacy in its context of use is the prime area of investigation rather than a reductionist analysis of its tools and components. An example of this approach in use is presented, evaluating the VLE that supports the undergraduate medical course at the University of Edinburgh. The paper provides a theoretical grounding, derives an evaluation instrument, analyses the efficacy and validity of the instrument in practice and draws conclusions as to how and where it may best be used.

    Computerā€based interactive tutorial versus traditional lecture for teaching introductory aspects of pain

    In the health sciences, a wide range of computer-based courseware is now available. The aim of the study described in this paper was to compare the effectiveness of a computer-based learning (CBL) software package and a traditional lecture (TL) for the delivery of introductory material on pain. Nineteen undergraduate nursing students were divided into two groups to attend a one-hour learning session which introduced clinical aspects of pain and which was delivered by either CBL or TL. Students were assessed for prior knowledge by a pre-session test, and for knowledge gain by an identical post-session test. In addition, a multiple-choice question paper was used to examine differences in pain knowledge between the two groups, and a questionnaire was used to examine the students' views on their experience during the learning session. The results demonstrated that both groups showed significant knowledge gain after their respective learning sessions. No significant differences between the groups in the magnitude of knowledge gain were found for clinical aspects of pain delivered during the learning sessions. The attitude questionnaire revealed that students attending CBL reported similar learning experiences to those attending the lecture.
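    The pre/post-test design described above can be sketched as a paired comparison. The scores below are invented for illustration only (the study's actual data are not reproduced here), and the analysis uses standard SciPy t-tests rather than whatever the original authors used:

```python
# Hypothetical sketch of a pre/post-test knowledge-gain analysis.
# All scores are fabricated for illustration.
from scipy import stats

# Pre- and post-session test scores (out of 20) for two invented groups
cbl_pre  = [8, 10, 7, 9, 11, 8, 10, 9, 7, 10]
cbl_post = [13, 14, 12, 15, 16, 12, 15, 14, 13, 15]
tl_pre   = [9, 8, 10, 7, 9, 10, 8, 9, 11]
tl_post  = [13, 13, 15, 11, 14, 15, 12, 14, 16]

# Within-group knowledge gain: paired t-test on pre vs post scores
t_cbl, p_cbl = stats.ttest_rel(cbl_post, cbl_pre)
t_tl, p_tl = stats.ttest_rel(tl_post, tl_pre)

# Between-group comparison: independent t-test on the gain scores
cbl_gain = [b - a for a, b in zip(cbl_pre, cbl_post)]
tl_gain = [b - a for a, b in zip(tl_pre, tl_post)]
t_diff, p_diff = stats.ttest_ind(cbl_gain, tl_gain)

print(f"CBL within-group gain: p = {p_cbl:.4f}")
print(f"TL within-group gain:  p = {p_tl:.4f}")
print(f"Between-group gains:   p = {p_diff:.4f}")
```

    With data like these, both within-group tests come out significant while the between-group comparison need not, mirroring the pattern the abstract reports.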

    Conceptual and cognitive problems in cybernetics

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Controversies have existed for some time about cybernetics as a subject and difficulties have existed for students in obtaining an overview, despite the fact that at some level several cybernetics concepts can be grasped by twelve year olds. An attempt is made to unpack the notion of a subject entity and to indicate how far elements in cybernetics conform to such a concept within a generally acceptable philosophy of science. Ambiguities and controversies among key themes of cybernetics are examined and resolutions offered. How far the nature of cybernetics is likely to create problems of understanding is discussed, along with approaches towards the empirical examination of how cybernetic ideas are understood. An approach to better understanding is formulated and used in an investigation of how and how effectively the concept of feedback is grasped by various groups. Suggestions are offered from the foregoing analysis as to the balance of problems within cybernetics and effective strategies for the future.

    Essays on modeling and analysis of dynamic sociotechnical systems

    A sociotechnical system is a collection of humans and algorithms that interact under the partial supervision of a decentralized controller. These systems often display intricate dynamics and can be characterized by their unique emergent behavior. In this work, we describe, analyze, and model aspects of three distinct classes of sociotechnical systems: financial markets, social media platforms, and elections. Though our work is diverse in subject matter, it is unified through the study of evolution- and adaptation-driven change in social systems and the development of methods used to infer this change. We first analyze evolutionary financial market microstructure dynamics in the context of an agent-based model (ABM). The ABM's matching engine implements a frequent batch auction, a recently-developed type of price-discovery mechanism. We subject simple agents to evolutionary pressure using a variety of selection mechanisms, demonstrating that quantile-based selection mechanisms are associated with lower market-wide volatility. We then evolve deep neural networks in the ABM and demonstrate that elite individuals are profitable in backtesting on real foreign exchange data, even though their fitness had never been evaluated on any real financial data during evolution. We then turn to the extraction of multi-timescale functional signals from large panels of timeseries generated by sociotechnical systems. We introduce the discrete shocklet transform (DST) and associated similarity search algorithm, the shocklet transform and ranking (STAR) algorithm, to accomplish this task. We empirically demonstrate the STAR algorithm's invariance to quantitative functional parameterization and provide use case examples. The STAR algorithm compares favorably with Twitter's anomaly detection algorithm on a feature extraction task.
We close by using STAR to automatically construct a narrative timeline of societally-significant events using a panel of Twitter word usage timeseries. Finally, we model strategic interactions between the foreign intelligence service (Red team) of a country that is attempting to interfere with an election occurring in another country, and the domestic intelligence service of the country in which the election is taking place (Blue team). We derive subgame-perfect Nash equilibrium strategies for both Red and Blue and demonstrate the emergence of arms race interference dynamics when either player has "all-or-nothing" attitudes about the result of the interference episode. We then confront our model with data from the 2016 U.S. presidential election contest, in which Russian military intelligence interfered. We demonstrate that our model captures the qualitative dynamics of this interference for most of the time under study.
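    The quantile-based selection mechanism that the abstract credits with lower market-wide volatility can be sketched as follows. The function name, threshold, and random fitness values are illustrative assumptions, not the thesis's actual implementation:

```python
# Minimal sketch of quantile-based selection in an evolutionary ABM:
# only agents at or above the q-th fitness quantile survive to reproduce.
import numpy as np

rng = np.random.default_rng(0)

def quantile_select(population, fitness, q=0.75):
    """Keep only agents whose fitness meets the q-th quantile cutoff."""
    cutoff = np.quantile(fitness, q)
    keep = fitness >= cutoff
    return population[keep], fitness[keep]

# 100 hypothetical agents, each represented by a parameter vector
pop = rng.normal(size=(100, 4))
fit = rng.normal(size=100)   # stand-in for per-agent trading profit

survivors, surviving_fit = quantile_select(pop, fit, q=0.75)
# Survivors would then repopulate the market, e.g. via mutation/crossover
print(len(survivors), "agents survive selection")
```

    Selecting on a quantile rather than, say, tournament rank fixes the surviving fraction of the population regardless of how fitness values are scaled, which is one plausible reason such mechanisms behave differently from alternatives.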

    Some results on a class of functional optimization problems

    We first describe a general class of optimization problems that describe many natural, economic, and statistical phenomena. After noting the existence of a conserved quantity in a transformed coordinate system, we outline several instances of these problems in statistical physics, facility allocation, and machine learning. A dynamic description and statement of a partial inverse problem follow. When attempting to optimize the state of a system governed by the generalized equipartitioning principle, it is vital to understand the nature of the governing probability distribution. We show that optimization for the incorrect probability distribution can have catastrophic results, e.g., infinite expected cost, and describe a method for continuous Bayesian update of the posterior predictive distribution when it is stationary. We also introduce and prove convergence properties of a time-dependent nonparametric kernel density estimate (KDE) for use in predicting distributions over paths. Finally, we extend the theory to the case of networks, in which an event probability density is defined over nodes and edges and a system resource is to be partitioned among the nodes and edges as well. We close by giving an example of the theory's application by considering a model of risk propagation on a power grid.
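    The general idea of a time-dependent KDE can be illustrated with a Gaussian kernel whose sample weights decay exponentially with age, so that recent observations dominate the estimate. This is a generic sketch of the concept, not the estimator the thesis actually proves convergence results for; bandwidth and decay rate are arbitrary choices here:

```python
# Sketch of a time-weighted Gaussian KDE: older samples count for less.
import numpy as np

def time_weighted_kde(samples, times, x, now, bandwidth=0.5, decay=0.1):
    """Evaluate a Gaussian KDE over `samples` on grid `x`,
    down-weighting observations by their age (now - times)."""
    w = np.exp(-decay * (now - times))   # exponential time decay
    w = w / w.sum()                      # weights sum to one
    # Each column is one sample's kernel evaluated on the grid
    diffs = (x[:, None] - samples[None, :]) / bandwidth
    kernels = np.exp(-0.5 * diffs**2) / (bandwidth * np.sqrt(2 * np.pi))
    return kernels @ w                   # weighted mixture of kernels

rng = np.random.default_rng(1)
samples = rng.normal(0.0, 1.0, size=200)
times = np.sort(rng.uniform(0.0, 10.0, size=200))
x = np.linspace(-5.0, 5.0, 501)
density = time_weighted_kde(samples, times, x, now=10.0)

# A valid density estimate should integrate to approximately one
integral = density.sum() * (x[1] - x[0])
print(f"integral ≈ {integral:.3f}")
```

    Because the weights are normalized, the estimate remains a proper density at every evaluation time; only the relative influence of old versus new samples changes.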

    Enabling object storage via shims for grid middleware

    The Object Store model has quickly become the basis of most commercially successful mass storage infrastructure, backing so-called "Cloud" storage such as Amazon S3, but also underlying the implementation of most parallel distributed storage systems. Many of the assumptions in Object Store design are similar, but not identical, to concepts in the design of Grid Storage Elements, although the requirement for "POSIX-like" filesystem structures on top of SEs makes the disjunction seem larger. As modern Object Stores provide many features that most Grid SEs do not (block level striping, parallel access, automatic file repair, etc.), it is of interest to see how easily we can provide interfaces to typical Object Stores via plugins and shims for Grid tools, and how well experiments can adapt their data models to them. We present an evaluation of, and first-deployment experiences with, (for example) Xrootd-Ceph interfaces for direct object-store access, as part of an initiative within GridPP[1] hosted at RAL. Additionally, we discuss the tradeoffs and experience of developing plugins for the currently-popular Ceph parallel distributed filesystem for the GFAL2 access layer, at Glasgow.
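    The core "shim" idea, presenting a POSIX-like hierarchical interface on top of a flat object namespace, can be shown with a toy model. A real plugin (e.g. for GFAL2 or Xrootd) would speak to Ceph or S3 over the network; here a dictionary stands in for the object store, and all class and path names are invented:

```python
# Toy illustration of a POSIX-like shim over a flat key/value object store.
class ObjectStore:
    """Flat namespace: opaque keys mapped to immutable byte blobs."""
    def __init__(self):
        self._objects = {}

    def put(self, key, data):
        self._objects[key] = bytes(data)

    def get(self, key):
        return self._objects[key]

    def keys(self):
        return list(self._objects)

class PosixShim:
    """Maps hierarchical paths onto object keys by a prefix convention."""
    def __init__(self, store):
        self.store = store

    def write(self, path, data):
        self.store.put(path.lstrip("/"), data)

    def read(self, path):
        return self.store.get(path.lstrip("/"))

    def listdir(self, path):
        # Emulate a directory listing by filtering keys on the prefix
        prefix = path.strip("/") + "/"
        names = {k[len(prefix):].split("/")[0]
                 for k in self.store.keys() if k.startswith(prefix)}
        return sorted(names)

fs = PosixShim(ObjectStore())
fs.write("/atlas/run1/events.dat", b"...")
fs.write("/atlas/run2/events.dat", b"...")
print(fs.listdir("/atlas"))
```

    The listing operation shows why the disjunction matters: "directories" do not exist in the store and must be synthesized from key prefixes, which is exactly the kind of work a shim layer absorbs.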

    Storageless and caching Tier-2 models in the UK context

    Operational and other pressures have led to WLCG experiments moving increasingly to a stratified model for Tier-2 resources, where "fat" Tier-2s ("T2Ds") and "thin" Tier-2s ("T2Cs") provide different levels of service. In the UK, this distinction is also encouraged by the terms of the current GridPP5 funding model. In anticipation of this, testing has been performed on the implications, and potential implementation, of such a distinction in our resources. In particular, this paper presents the results of testing storage models for T2Cs, where the "thin" nature is expressed by the site having either no local data storage, or only a thin caching layer; data is streamed or copied from a "nearby" T2D when needed by jobs. In OSG, this model has been adopted successfully for CMS AAA sites; but the network topology and capacity in the USA is significantly different to that in the UK (and much of Europe). We present the results of several operational tests: the in-production University College London (UCL) site, which runs ATLAS workloads using storage at the Queen Mary University of London (QMUL) site; the Oxford site, which has had scaling tests performed against T2Ds in various locations in the UK (to test network effects); and the Durham site, which has been testing the specific ATLAS caching solution of "Rucio Cache" integration with ARC's caching layer.
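    The "thin caching layer" model amounts to a read-through cache in front of a remote site: serve locally on a hit, stream from the T2D on a miss. The sketch below is a deliberately simplified FIFO cache; real deployments use systems like Rucio Cache or ARC's caching layer, and the filenames are invented:

```python
# Sketch of a "thin" T2C: local cache backed by a remote "fat" T2D.
class TierTwoCache:
    def __init__(self, remote_fetch, capacity=2):
        self.remote_fetch = remote_fetch   # callable: filename -> bytes
        self.capacity = capacity
        self.cache = {}                    # dicts preserve insertion order
        self.misses = 0

    def read(self, filename):
        if filename in self.cache:
            return self.cache[filename]    # hit: served locally
        self.misses += 1
        data = self.remote_fetch(filename) # miss: stream from the T2D
        if len(self.cache) >= self.capacity:
            self.cache.pop(next(iter(self.cache)))  # evict oldest (FIFO)
        self.cache[filename] = data
        return data

# A dict stands in for the remote fat Tier-2's storage element
t2d_storage = {"AOD.root": b"event data", "ESD.root": b"more events"}
site = TierTwoCache(t2d_storage.__getitem__)

site.read("AOD.root")   # miss: fetched over the network
site.read("AOD.root")   # hit: no second remote transfer
print(site.misses, "remote fetch(es)")
```

    The operational question the tests above probe is precisely the cost of the miss path: how far away the T2D can be before streaming on a miss dominates job runtime.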