
    Towards European-wide Quality and Benchmarking of Open Educational Resources

    Backed by a multi-partner Consortium, MORIL will deliver high-quality Open Educational Resources (OER) with pedagogically rich content, specifically designed and developed for distance learning. MORIL stands for "Multilingual Open Educational Resources for Independent Learning". It constitutes a new generation of open resources, with a strong focus on the development and delivery of quality-assured materials for off-campus target groups. MORIL adds value because face-to-face didactics are not obligatory, in contrast to on-campus education. Besides open offerings, formal offerings are provided as well, establishing a transparent prospective learning path into higher education for those who seek recognition and/or certification. MORIL will provide a single European access point for lifelong open and flexible learning: a referatory linking to the participating local repository portals. For courses of interest to domestic markets, universities can use multilingual versioning and localisation. Combining MORIL with leading-edge quality assurance and benchmarking gives the Consortium a head start. European-wide quality and benchmarking is enabled by E-xcellence, a web-based instrument for assessing the quality of e-learning in higher education. Although many instruments already cover the organisational and content-related quality assurance of higher education institutions and programmes, only a few address the parameters of quality assurance specific to e-learning, and fewer still, if any, focus on OER. Supplementing MORIL with E-xcellence therefore caters for open and accessible quality assurance and benchmarking. MORIL is supported by the William and Flora Hewlett Foundation.

    Labour Market Dynamics in Greek Regions: a Bayesian Markov Chain Approach Using Proportions Data

    This paper focuses on Greek labour market dynamics at the regional level, covering 16 provinces as defined by NUTS levels 1 and 2 (Eurostat, 2008), and uses Markov chains for proportions data for the first time in the literature. We apply a Bayesian approach that employs a Monte Carlo integration procedure to uncover the entire empirical posterior distribution of the transition probabilities from full employment to part employment, unemployment and economically unregistered unemployment, and vice versa. Our results show that the transition probabilities differ across regions, implying that the convergence of the Greek labour market at the regional level is far from complete. However, some common patterns emerge: regions in the south of the country exhibit similar transition probabilities between the different labour market states.
    Keywords: Greek Regions, Employment, Unemployment, Markov Chains
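    As a rough illustration of the kind of inference the abstract describes, the sketch below recovers the empirical posterior distribution of Markov-chain transition probabilities by Monte Carlo sampling. It assumes individual-level transition counts and a conjugate Dirichlet prior, which is a standard textbook setup rather than the paper's proportions-data estimator; the state labels and counts are synthetic.

```python
import numpy as np

# Illustrative sketch of Bayesian inference for Markov-chain transition
# probabilities. NOTE: this assumes individual-level transition counts and
# a conjugate Dirichlet prior -- a standard conjugate setup, not the
# proportions-data estimator used in the paper. All numbers are synthetic.

rng = np.random.default_rng(0)

states = ["full_emp", "part_emp", "unemployed", "unregistered"]

# Hypothetical counts of observed transitions between states (rows: from).
counts = np.array([
    [900,  40,  50,  10],
    [ 60, 300,  30,  10],
    [ 80,  40, 250,  30],
    [ 20,  10,  40, 130],
])

# With a Dirichlet(1,...,1) prior on each row, the posterior of row i is
# Dirichlet(counts[i] + 1). Monte Carlo sampling then yields the full
# empirical posterior distribution of every transition probability.
n_draws = 10_000
posterior = np.stack([
    rng.dirichlet(counts[i] + 1.0, size=n_draws) for i in range(len(states))
], axis=1)  # shape: (n_draws, n_states, n_states)

# Posterior mean and 95% credible interval for P(full_emp -> unemployed).
p = posterior[:, 0, 2]
print(f"mean={p.mean():.3f}, 95% CI=({np.quantile(p, 0.025):.3f}, "
      f"{np.quantile(p, 0.975):.3f})")
```

    The same machinery gives credible intervals for every entry of the transition matrix, which is the kind of output that supports the region-by-region comparisons reported in the paper.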

    Constrained LQR for Low-Precision Data Representation

    Performing computations with a low-bit number representation results in a faster implementation that uses less silicon, and hence allows an algorithm to be implemented on smaller and cheaper processors without loss of performance. We propose a novel formulation to efficiently exploit the low (or non-standard) precision number representation of some computer architectures when computing the solution to constrained LQR problems, such as those that arise in predictive control. The main idea is to include suitably defined decision variables in the quadratic program, in addition to the states and the inputs, to allow for smaller roundoff errors in the solver. This enables one to trade off the number of bits used for data representation against speed and/or hardware resources, so that smaller numerical errors can be achieved for the same number of bits (same silicon area). Because of data dependencies, the algorithm complexity, in terms of computation time and hardware resources, does not necessarily increase despite the larger number of decision variables. Examples show that a 10-fold reduction in hardware resources is possible compared to using double-precision floating point, without loss of closed-loop performance.
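    A minimal sketch of the underlying problem may help: the snippet below condenses a constrained LQR problem into a quadratic program and solves it with projected gradient descent, crudely emulating a low-precision datapath by rounding every iterate to a fixed-point grid. The system, horizon, weights and bit width are all hypothetical, and the paper's actual reformulation with auxiliary decision variables is not reproduced here.

```python
import numpy as np

# Sketch of constrained LQR as a QP, with low-precision arithmetic emulated
# by quantizing solver iterates. Illustrates the baseline problem only; the
# paper's reformulation with extra decision variables is not reproduced.

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # hypothetical double-integrator dynamics
B = np.array([[0.005], [0.1]])
Q = np.eye(2)                             # state weight
R = np.array([[0.1]])                     # input weight
N = 20                                    # horizon
x0 = np.array([1.0, 0.0])
u_max = 0.5                               # input constraint |u| <= u_max

# Condense the QP: x_{k+1} = A^{k+1} x0 + sum_{j<=k} A^{k-j} B u_j, so the
# cost is quadratic in the stacked input vector u = (u_0, ..., u_{N-1}).
n, m = A.shape[0], B.shape[1]
G = np.zeros((N * n, N * m))              # maps stacked inputs to stacked states
F = np.zeros((N * n, n))                  # maps x0 to stacked states
Ak = np.eye(n)
for k in range(N):
    F[k*n:(k+1)*n] = Ak @ A
    for j in range(k + 1):
        G[k*n:(k+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, k - j) @ B
    Ak = Ak @ A

Qbar = np.kron(np.eye(N), Q)
Rbar = np.kron(np.eye(N), R)
H = G.T @ Qbar @ G + Rbar                 # QP Hessian
f = G.T @ Qbar @ F @ x0                   # QP linear term

def quantize(v, frac_bits):
    """Round to a fixed-point grid with `frac_bits` fractional bits."""
    scale = 2.0 ** frac_bits
    return np.round(v * scale) / scale

# Projected gradient descent on 0.5 u'Hu + f'u s.t. |u| <= u_max, with each
# iterate quantized to mimic a low-precision datapath.
L = np.linalg.eigvalsh(H).max()           # Lipschitz constant of the gradient
u = np.zeros(N * m)
for _ in range(500):
    u = u - (1.0 / L) * (H @ u + f)
    u = np.clip(u, -u_max, u_max)
    u = quantize(u, frac_bits=10)         # e.g. 10 fractional bits

print("first input move:", u[0])
```

    Varying `frac_bits` in this sketch makes the bits-versus-accuracy trade-off discussed in the abstract directly observable in the solver's output.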