7,065 research outputs found

    The difference between presence-based education and distance learning

    Get PDF
    Attempts to define distance learning always involve comparisons with presence-based education, as the latter is the most direct reference that the former has. It is on this basis that the points of convergence, similarities and differences between the two approaches are established. This article opens with such a comparison, before going on to focus mainly on distance learning and to examine methodological strategies that should be borne in mind when implementing an e-learning system

    Cloud engineering is search based software engineering too

    Get PDF
    Many of the problems posed by the migration of computation to cloud platforms can be formulated and solved using techniques associated with Search Based Software Engineering (SBSE). Much of cloud software engineering involves problems of optimisation: performance, allocation, assignment and the dynamic balancing of resources to achieve pragmatic trade-offs between many competing technical and business objectives. SBSE is concerned with the application of computational search and optimisation to solve precisely these kinds of software engineering challenges. Interest in both cloud computing and SBSE has grown rapidly in the past five years, yet there has been little work on SBSE as a means of addressing cloud computing challenges. Like many computationally demanding activities, SBSE has the potential to benefit from the cloud; 'SBSE in the cloud'. However, this paper focuses instead on the ways in which SBSE can benefit cloud computing. It thus develops the theme of 'SBSE for the cloud', formulating cloud computing challenges in ways that can be addressed using SBSE
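
    The 'SBSE for the cloud' formulation can be made concrete with a toy example. The sketch below is not from the paper; the task sizes, VM capacities, prices and penalty weight are all invented. It encodes a task-to-VM allocation problem as a fitness function and searches it with a plain hill climber; published SBSE work would normally apply genetic algorithms or other metaheuristics over much richer cost models.

        import random

        # Toy instance (illustrative numbers only): CPU demand of eight tasks and a
        # pool of four candidate VMs, each with a capacity and an hourly price.
        TASK_CPU    = [2, 4, 1, 3, 2, 5, 1, 2]
        VM_CAPACITY = [4, 8, 8, 16]
        VM_PRICE    = [0.10, 0.20, 0.20, 0.40]

        def cost(assignment):
            """Fitness to minimise: price of the VMs actually used, plus a penalty
            for every unit of CPU demand that exceeds a VM's capacity."""
            load = [0] * len(VM_CAPACITY)
            for task, vm in enumerate(assignment):
                load[vm] += TASK_CPU[task]
            price = sum(VM_PRICE[vm] for vm in set(assignment))
            overflow = sum(max(0, load[vm] - VM_CAPACITY[vm]) for vm in range(len(load)))
            return price + 10.0 * overflow

        def hill_climb(steps=5000, seed=0):
            """Search the space of task-to-VM assignments with single-task moves."""
            rng = random.Random(seed)
            current = [rng.randrange(len(VM_CAPACITY)) for _ in TASK_CPU]
            best = cost(current)
            for _ in range(steps):
                candidate = list(current)
                candidate[rng.randrange(len(TASK_CPU))] = rng.randrange(len(VM_CAPACITY))
                c = cost(candidate)
                if c <= best:          # accept non-worsening moves only
                    current, best = candidate, c
            return current, best

        if __name__ == "__main__":
            allocation, fitness = hill_climb()
            print("allocation:", allocation, "fitness:", round(fitness, 2))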

    Offline Specialisation in Prolog Using a Hand-Written Compiler Generator

    No full text
    The so-called "cogen approach" to program specialisation, writing a compiler generator instead of a specialiser, has been used with considerable success in partial evaluation of both functional and imperative languages. This paper demonstrates that the "cogen" approach is also applicable to the specialisation of logic programs (called partial deduction when applied to pure logic programs) and leads to effective specialisers. Moreover, using good binding-time annotations, the speed-ups of the specialised programs are comparable to the speed-ups obtained with online specialisers. The paper first develops a generic approach to offline partial deduction and then a specific offline partial deduction method, leading to the offline system LIX for pure logic programs. While this is a usable specialiser by itself, its specialisation strategy is used to develop the "cogen" system LOGEN. Given a program, a specification of what inputs will be static, and an annotation specifying which calls should be unfolded, LOGEN generates a specialised specialiser for the program at hand. Running this specialiser with particular values for the static inputs results in the specialised program. While this requires two steps instead of one, the efficiency of the specialisation process is improved in situations where the same program is specialised multiple times. The paper also presents and evaluates an automatic binding-time analysis that is able to derive the annotations. While the derived annotations are still suboptimal compared to hand-crafted ones, they enable non-expert users to use the LOGEN system in a fully automated way. Finally, LOGEN is extended so as to directly support a large part of Prolog's declarative and non-declarative features and so as to be able to perform so-called mixline specialisations. In mixline specialisation some unfolding decisions depend on the outcome of tests performed at specialisation time instead of being hardwired into the specialiser
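
    The binding-time intuition behind offline specialisation can be illustrated with the textbook power example, sketched below in Python rather than Prolog. This is not LOGEN itself (which generates a specialiser instead of calling one); it only shows how annotating the exponent as static and the base as dynamic lets the recursion be unfolded at specialisation time, leaving just the dynamic multiplications in the residual program.

        def specialise_power(n):
            """Generate the source of a residual program power_n(x) = x ** n."""
            expr = "1"
            for _ in range(n):           # static recursion: unfolded here
                expr = f"x * ({expr})"   # dynamic multiplication: kept in the residue
            return f"def power_{n}(x):\n    return {expr}\n"

        source = specialise_power(5)
        print(source)                    # the specialised program, as source text
        namespace = {}
        exec(source, namespace)          # load the residual program
        print(namespace["power_5"](2))   # -> 32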

    Using the online cross-entropy method to learn relational policies for playing different games

    Get PDF
    By defining a video-game environment as a collection of objects, relations, actions and rewards, the relational reinforcement learning algorithm presented in this paper generates and optimises a set of concise, human-readable relational rules for achieving maximal reward. Rule learning is achieved using a combination of incremental specialisation of rules and a modified online cross-entropy method, which dynamically adjusts the rate of learning as the agent progresses. The algorithm is tested on the Ms. Pac-Man and Mario environments, with results indicating the agent learns an effective policy for acting within each environment
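
    The paper's modified online cross-entropy method adjusts its learning rate as the agent acts; the sketch below shows only the standard batch cross-entropy method it builds on. The toy objective and all parameters are invented, and the relational rule representation is omitted: candidate parameter vectors stand in for weighted rule sets, and the score function stands in for episode reward.

        import random, statistics

        def cross_entropy_method(score, dim, iters=50, pop=30, elite_frac=0.2, seed=0):
            """Plain cross-entropy method: sample candidates from a Gaussian,
            keep the elite fraction, refit the Gaussian to the elites, repeat."""
            rng = random.Random(seed)
            mean = [0.0] * dim
            std = [1.0] * dim
            n_elite = max(1, int(pop * elite_frac))
            for _ in range(iters):
                samples = [[rng.gauss(mean[d], std[d]) for d in range(dim)]
                           for _ in range(pop)]
                samples.sort(key=score, reverse=True)
                elites = samples[:n_elite]
                for d in range(dim):
                    column = [e[d] for e in elites]
                    mean[d] = statistics.fmean(column)
                    std[d] = statistics.pstdev(column) + 1e-3   # keep some exploration
            return mean

        # Toy objective standing in for "reward of the rule set weighted by x".
        target = [1.0, -2.0, 0.5]
        best = cross_entropy_method(
            lambda x: -sum((a - b) ** 2 for a, b in zip(x, target)), dim=3)
        print([round(v, 2) for v in best])   # converges towards the target weights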

    Regional ICT industries growth: Common prejudices and empirical evidence

    Get PDF
    Many advocates of an active regional economic policy tend to consider regional sectoral concentration as something which can be created, as a recipe rather than just a development which sometimes occurs and sometimes does not. Whether such a policy can be successful implicitly depends on a number of assumptions which are empirically tested in this paper: The first assumption is that regional concentration and regional specialisation foster growth. We test this hypothesis, using data for nine ICT industries in 97 German regions. According to preliminary results, the effect of concentration is positive as expected, but small, and its significance depends on specification and industry. Second, the common perception is that East Germany is still a special case. This is clearly supported for the ICT industry, which lags behind not only with respect to level, but also with respect to growth rates for 1998 to 2002. Third, if regional sectoral agglomeration is an advantage for regions due to technological externalities, for example, then the same reasons might lead to spillover effects between neighbouring regions. Following Myrdal, we call this a "spread effect". However, the opposite, called a "backwash effect" by Myrdal, is also quite possible: If a certain sector is growing in region A, that might be due to firms moving in from the neighbouring regions. While the spread effect is the more popular hypothesis, we find evidence for both effects, using standard spatial econometrics techniques. We also discuss the possible reasons why neighbourhood effects are positive in some ICT industries and negative in others. Fourth, we investigate whether regional economic policy has a positive impact. Again using spatial econometrics techniques, less standard in this case, our preliminary results show that the 16 German Laender seem unable to change the path determined for the ICT industries by spatial and other variables.
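
    As a rough illustration of the spatial-lag idea behind the spread/backwash distinction (not the paper's model or data; the weights matrix, regressor and coefficients below are simulated), the sign of the coefficient on neighbours' growth separates the two effects. Note that ordinary least squares on a spatially lagged dependent variable is biased; real spatial econometric studies use maximum likelihood or instrumental-variable estimators.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 97                                    # number of regions, as in the paper

        # Hypothetical row-standardised contiguity matrix W (random sparsity here;
        # in practice it is built from which regions share a border).
        W = (rng.random((n, n)) < 0.05).astype(float)
        np.fill_diagonal(W, 0.0)
        W = W / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)

        concentration = rng.normal(size=n)        # stand-in for sectoral concentration
        rho = 0.4                                 # true "spread" effect in the simulation
        noise = rng.normal(scale=0.5, size=n)
        growth = np.linalg.solve(np.eye(n) - rho * W, 0.3 * concentration + noise)

        # Spatial lag model: growth = rho * (W @ growth) + beta * concentration + eps.
        X = np.column_stack([np.ones(n), concentration, W @ growth])
        coef, *_ = np.linalg.lstsq(X, growth, rcond=None)
        print("neighbour coefficient (spread if > 0, backwash if < 0):", round(coef[2], 3))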

    Specializing Interpreters using Offline Partial Deduction

    No full text
    We present the latest version of the Logen partial evaluation system for logic programs. In particular we present new binding-types, and show how they can be used to effectively specialise a wide variety of interpreters. We show how to achieve Jones-optimality in a systematic way for several interpreters. Finally, we present and specialise a non-trivial interpreter for a small functional programming language. Experimental results are also presented, highlighting that the Logen system can be a good basis for generating compilers for high-level languages
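
    A Jones-optimal specialiser removes all of the interpretation overhead when an interpreter is specialised with respect to a fixed object program (the first Futamura projection). The sketch below illustrates that idea for a toy expression interpreter in Python; it is not Logen, and the mini-language and helper names are invented. The dispatch over the static program is unfolded away, so only operations on the dynamic environment survive in the residual code.

        def interpret(expr, env):
            """Tiny expression interpreter: ('const', n) | ('var', name) | ('add', a, b)."""
            tag = expr[0]
            if tag == "const":
                return expr[1]
            if tag == "var":
                return env[expr[1]]
            if tag == "add":
                return interpret(expr[1], env) + interpret(expr[2], env)

        def specialise(expr):
            """Residual code for interpret(expr, env), with expr static and env dynamic."""
            tag = expr[0]
            if tag == "const":
                return str(expr[1])
            if tag == "var":
                return f"env[{expr[1]!r}]"
            if tag == "add":
                return f"({specialise(expr[1])} + {specialise(expr[2])})"

        program = ("add", ("var", "x"), ("add", ("var", "y"), ("const", 1)))
        print(specialise(program))                    # -> (env['x'] + (env['y'] + 1))
        residual = eval("lambda env: " + specialise(program))
        print(residual({"x": 2, "y": 3}),             # residual program: 6
              interpret(program, {"x": 2, "y": 3}))   # interpreter agrees: 6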

    A Polyvariant Binding-Time Analysis for Off-line Partial Deduction

    Full text link
    We study the notion of binding-time analysis for logic programs. We formalise the unfolding aspect of an on-line partial deduction system as a Prolog program. Using abstract interpretation, we collect information about the run-time behaviour of the program. We use this information to make the control decisions about the unfolding at analysis time and to turn the on-line system into an off-line system. We report on some initial experiments. Comment: 19 pages (including appendix). Paper (without appendix) appeared in Programming Languages and Systems, Proceedings of the European Symposium on Programming (ESOP'98), part of ETAPS'98 (Chris Hankin, eds.), LNCS, vol. 1381, 1998, pp. 27-4
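
    The core of a binding-time analysis can be pictured as a fixed-point computation over a dataflow graph: an argument becomes dynamic as soon as any value flowing into it is dynamic. The monovariant Python toy below is far simpler than the polyvariant, abstract-interpretation-based analysis described in the paper, and the dataflow edges and predicate names are invented.

        DYNAMIC, STATIC = "dynamic", "static"

        def bta(flows, entry_annotation):
            """flows: iterable of (source_arg, target_arg) dataflow edges.
            entry_annotation: dict mapping entry arguments to 'static' / 'dynamic'."""
            binding_time = dict(entry_annotation)
            changed = True
            while changed:                       # fixed-point iteration
                changed = False
                for src, dst in flows:
                    src_bt = binding_time.get(src, STATIC)
                    dst_bt = binding_time.get(dst, STATIC)
                    if src_bt == DYNAMIC and dst_bt != DYNAMIC:
                        binding_time[dst] = DYNAMIC   # dynamicness propagates forward
                        changed = True
            return binding_time

        # Hypothetical dataflow: in p(A, B, C), argument C is computed from A and B.
        flows = [("p/A", "p/C"), ("p/B", "p/C")]
        print(bta(flows, {"p/A": STATIC, "p/B": DYNAMIC}))
        # -> {'p/A': 'static', 'p/B': 'dynamic', 'p/C': 'dynamic'}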

    A study into the feasibility of generic programming for the construction of complex software

    Get PDF
    A high degree of abstraction and capacity for reuse can be obtained in software design through the use of Generic Programming (GP) concepts. Despite widespread use of GP in computing, some areas such as the construction of generic component libraries as the skeleton for complex computing systems with extensive domains have been neglected. Here we consider the design of a library of generic components based on the GP paradigm implemented with Java. Our aim is to investigate the feasibility of using the GP paradigm in the construction of complex computer systems where the management of users interacting with the system and the optimisation of the system's resources is required.
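
    The paper's library is written in Java; as a stand-in, the sketch below shows the flavour of a reusable generic component using Python's typing generics instead. The Repository abstraction and its methods are invented for illustration and are not taken from the paper: one type-parameterised component is instantiated for two unrelated domains without any code duplication.

        from typing import Dict, Generic, Iterator, Optional, TypeVar

        K = TypeVar("K")          # key type of the stored items
        V = TypeVar("V")          # type of the stored items

        class Repository(Generic[K, V]):
            """A minimal in-memory generic container, reusable for any key/value types."""

            def __init__(self) -> None:
                self._items: Dict[K, V] = {}

            def add(self, key: K, value: V) -> None:
                self._items[key] = value

            def find(self, key: K) -> Optional[V]:
                return self._items.get(key)

            def __iter__(self) -> Iterator[V]:
                return iter(self._items.values())

        # The same component instantiated for two unrelated domains:
        users: Repository[int, str] = Repository()
        users.add(1, "alice")
        quotas: Repository[str, float] = Repository()
        quotas.add("cpu", 0.75)
        print(users.find(1), quotas.find("cpu"))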