Comprehensibility of UML-based Formal Model: A Series of Controlled Experiments
This paper summarises two controlled experiments conducted on a model that integrates a semi-formal notation, the Unified Modelling Language (UML), with a formal notation, B. The experiments assessed the comprehensibility of the resulting model, UML-B. The first experiment compared the comprehensibility of a UML-B model and a B model. In the second experiment, the UML-B model was compared with an Event-B model; Event-B is a new generation of B. The experiments assessed the ability of the model to present information and to promote understanding of the problem domain, with measurement focused on efficiency in performing comprehension tasks. The experiments employed a cross-over design and were conducted with third-year and masters students. The results suggest that the integration of semi-formal and formal notations helps subjects complete comprehension tasks quickly and accurately, even with limited hours of training.
Kaleidoscope JEIRP on Learning Patterns for the Design and Deployment of Mathematical Games: Final Report
Project deliverable (D40.05.01-F). The last few years have witnessed a growing recognition of the educational potential of computer games. However, it is generally agreed that the process of designing and deploying TEL resources in general, and games for mathematical learning in particular, is a difficult task. The Kaleidoscope project, "Learning patterns for the design and deployment of mathematical games", aims to investigate this problem. We work from the premise that designing and deploying games for mathematical learning requires the assimilation and integration of deep knowledge from diverse domains of expertise, including mathematics, games development, software engineering, learning and teaching. We promote the use of a design-patterns approach to address this problem. This deliverable reports on the project by presenting both a connected account of the prior deliverables and a detailed description of the methodology involved in producing those deliverables. We regard the setting out of our methodology as very significant for conducting the future work that this report envisages. The central deliverable includes reference to a large set of learning patterns for use by educators, researchers, practitioners, designers and software developers when designing and deploying TEL-based mathematical games. Our pattern language is suggested as an enabling tool for good practice, facilitating pattern-specific communication and knowledge sharing between participants. We provide a set of trails as a "way in" to using the learning pattern language. We also report how the project has enabled the synergistic collaboration of what started out as two distinct strands, design and deployment, to the extent that it is now difficult to identify those strands within the processes and deliverables of the project. The tools and outcomes from the project can be found at: http://lp.noe-kaleidoscope.org
Ongoing Emergence: A Core Concept in Epigenetic Robotics
We propose ongoing emergence as a core concept in epigenetic robotics. Ongoing emergence refers to the continuous development and integration of new skills and is exhibited when six criteria are satisfied: (1) continuous skill acquisition, (2) incorporation of new skills with existing skills, (3) autonomous development of values and goals, (4) bootstrapping of initial skills, (5) stability of skills, and (6) reproducibility. In this paper we: (a) provide a conceptual synthesis of ongoing emergence based on previous theorizing, (b) review current research in epigenetic robotics in light of ongoing emergence, (c) provide prototypical examples of ongoing emergence from infant development, and (d) outline computational issues relevant to creating robots exhibiting ongoing emergence.
Grounding Language for Transfer in Deep Reinforcement Learning
In this paper, we explore the utilization of natural language to drive transfer for reinforcement learning (RL). Despite the widespread application of deep RL techniques, learning generalized policy representations that work across domains remains a challenging problem. We demonstrate that textual descriptions of environments provide a compact intermediate channel to facilitate effective policy transfer. Specifically, by learning to ground the meaning of text to the dynamics of the environment, such as transitions and rewards, an autonomous agent can effectively bootstrap policy learning on a new domain given its description. We employ a model-based RL approach consisting of a differentiable planning module, a model-free component and a factorized state representation to effectively use entity descriptions. Our model outperforms prior work on both transfer and multi-task scenarios in a variety of different environments. For instance, we achieve up to 14% and 11.5% absolute improvement over previously existing models in terms of average and initial rewards, respectively. Comment: JAIR 201
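
The paper's full architecture (including the differentiable planning module) is richer than can be reproduced from the abstract alone, but a minimal sketch in PyTorch conveys the grounding idea it describes: condition a learned transition/reward model on an embedding of the environment's text description, so a new domain can be bootstrapped from its description. The class name, dimensions, and mean-pooling choice below are illustrative assumptions, not the authors' code.

    import torch
    import torch.nn as nn

    class TextConditionedDynamics(nn.Module):
        """Predict next-state features and reward, conditioned on a text description."""
        def __init__(self, vocab_size, n_entities, state_dim, n_actions, emb_dim=64):
            super().__init__()
            self.word_emb = nn.Embedding(vocab_size, emb_dim)    # tokens of the description
            self.entity_emb = nn.Embedding(n_entities, emb_dim)  # factorized state: one id per entity
            self.head = nn.Sequential(
                nn.Linear(2 * emb_dim + n_actions, 128), nn.ReLU(),
                nn.Linear(128, state_dim + 1),                   # state features + scalar reward
            )

        def forward(self, desc_tokens, entity_ids, action_onehot):
            text = self.word_emb(desc_tokens).mean(dim=1)        # ground the description (mean pooling)
            ents = self.entity_emb(entity_ids).mean(dim=1)
            out = self.head(torch.cat([text, ents, action_onehot], dim=-1))
            return out[:, :-1], out[:, -1]                       # predicted features, predicted reward

    # Toy usage: a new domain is specified only by its (tokenized) description.
    model = TextConditionedDynamics(vocab_size=1000, n_entities=50, state_dim=16, n_actions=4)
    desc = torch.randint(0, 1000, (1, 12))                       # hypothetical token ids
    ents = torch.randint(0, 50, (1, 5))
    act = torch.nn.functional.one_hot(torch.tensor([2]), num_classes=4).float()
    feats, reward = model(desc, ents, act)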
Tea: A High-level Language and Runtime System for Automating Statistical Analysis
Though statistical analyses are centered on research questions and hypotheses, current statistical analysis tools are not. Users must first translate their hypotheses into specific statistical tests and then perform API calls with functions and parameters. To do so accurately requires that users have statistical expertise. To lower this barrier to valid, replicable statistical analysis, we introduce Tea, a high-level declarative language and runtime system. In Tea, users express their study design, any parametric assumptions, and their hypotheses. Tea compiles these high-level specifications into a constraint satisfaction problem that determines the set of valid statistical tests, and then executes them to test the hypothesis. We evaluate Tea using a suite of statistical analyses drawn from popular tutorials. We show that Tea generally matches the choices of experts while automatically switching to non-parametric tests when parametric assumptions are not met. We simulate the effect of mistakes made by non-expert users and show that Tea automatically avoids both false negatives and false positives that could be produced by the application of incorrect statistical tests. Comment: 11 page
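
Tea's concrete syntax is not shown in the abstract, so the sketch below does not use Tea's real API; it is a plain-Python illustration (with SciPy; the function name and alpha threshold are assumptions) of the fallback behaviour the abstract describes: check parametric assumptions, and switch to a non-parametric test when they do not hold.

    from scipy import stats

    def compare_two_groups(a, b, alpha=0.05):
        """Choose a two-sample test from assumption checks (in the spirit of
        Tea's constraint-based test selection, not its actual API)."""
        normal = (stats.shapiro(a).pvalue > alpha and
                  stats.shapiro(b).pvalue > alpha)        # normality of each group
        equal_var = stats.levene(a, b).pvalue > alpha     # homogeneity of variance
        if normal and equal_var:
            return "Student t-test", stats.ttest_ind(a, b)
        if normal:
            return "Welch t-test", stats.ttest_ind(a, b, equal_var=False)
        return "Mann-Whitney U", stats.mannwhitneyu(a, b)

    # Example: skewed data falls through to the non-parametric branch.
    import numpy as np
    rng = np.random.default_rng(0)
    name, result = compare_two_groups(rng.exponential(1.0, 40), rng.exponential(1.5, 40))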
Advanced Knowledge Technologies at the Midterm: Tools and Methods for the Semantic Web
The University of Edinburgh and research sponsors are authorised to reproduce and distribute reprints and on-line copies for their purposes notwithstanding any copyright annotation hereon. The views and conclusions contained herein are the author's and shouldn't be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of other parties.

In a celebrated essay on the new electronic media, Marshall McLuhan wrote in 1962: "Our private senses are not closed systems but are endlessly translated into each other in that experience which we call consciousness. Our extended senses, tools, technologies, through the ages, have been closed systems incapable of interplay or collective awareness. Now, in the electric age, the very instantaneous nature of co-existence among our technological instruments has created a crisis quite new in human history. Our extended faculties and senses now constitute a single field of experience which demands that they become collectively conscious. Our technologies, like our private senses, now demand an interplay and ratio that makes rational co-existence possible. As long as our technologies were as slow as the wheel or the alphabet or money, the fact that they were separate, closed systems was socially and psychically supportable. This is not true now when sight and sound and movement are simultaneous and global in extent." (McLuhan 1962, p.5, emphasis in original)

Over forty years later, the seamless interplay that McLuhan demanded between our technologies is still barely visible. McLuhan's predictions of the spread, and increased importance, of electronic media have of course been borne out, and the worlds of business, science and knowledge storage and transfer have been revolutionised. Yet the integration of electronic systems as open systems remains in its infancy.

Advanced Knowledge Technologies (AKT) aims to address this problem: to create a view of knowledge and its management across its lifecycle, and to research and create the services and technologies that such unification will require. Halfway through its six-year span, the results are beginning to come through, and this paper explores some of the services, technologies and methodologies that have been developed. We hope to give a sense of the potential for the next three years, to discuss the insights and lessons learnt in the first phase of the project, and to articulate the challenges and issues that remain.

The WWW provided the original context that made the AKT approach to knowledge management (KM) possible. When AKT was initially proposed in 1999, it brought together an interdisciplinary consortium with the technological breadth and complementarity to create the conditions for a unified approach to knowledge across its lifecycle. The combination of this expertise, and the time and space afforded the consortium by the IRC structure, suggested the opportunity for a concerted effort to develop an approach to advanced knowledge technologies, based on the WWW as a basic infrastructure. The technological context of AKT altered for the better in the short period between the development of the proposal and the beginning of the project itself, with the development of the semantic web (SW), which foresaw much more intelligent manipulation and querying of knowledge. The opportunities that the SW provided for, e.g., more intelligent retrieval put AKT at the centre of information technology innovation and knowledge management services; the AKT skill set would clearly be central to the exploitation of those opportunities.

The SW, as an extension of the WWW, provides an interesting set of constraints on the knowledge management services AKT tries to provide. As a medium for the semantically-informed coordination of information, it has suggested a number of ways in which the objectives of AKT can be achieved, most obviously through the provision of knowledge management services delivered over the web, as opposed to the creation and provision of technologies to manage knowledge. AKT is working on the assumption that many web services will be developed and provided for users. The KM problem in the near future will be one of deciding which services are needed and of coordinating them. Many of these services will be largely or entirely legacies of the WWW, and so the capabilities of the services will vary. As well as providing useful KM services in their own right, AKT will be aiming to exploit this opportunity by reasoning over services, brokering between them, and providing essential meta-services for SW knowledge service management.

Ontologies will be a crucial tool for the SW. The AKT consortium brings together a great deal of expertise on ontologies, and ontologies were always going to be a key part of the strategy. All kinds of knowledge sharing and transfer activities will be mediated by ontologies, and ontology management will be an important enabling task. Different applications will need to cope with inconsistent ontologies, or with the problems that will follow the automatic creation of ontologies (e.g. merging pre-existing ontologies to create a third). Ontology mapping, and the elimination of conflicts of reference, will be important tasks. All of these issues are discussed along with our proposed technologies.
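
As a toy illustration of the merging and mapping tasks just described (an assumption made for exposition, not AKT's actual tooling; the file names are hypothetical), one could union two RDF graphs with rdflib and link same-labelled resources:

    from rdflib import Graph
    from rdflib.namespace import OWL, RDFS

    def naive_merge(g1: Graph, g2: Graph) -> Graph:
        """Union two ontologies, then add owl:sameAs links for shared labels."""
        merged = g1 + g2                                   # set union of all triples
        by_label = {o: s for s, _, o in g1.triples((None, RDFS.label, None))}
        for s2, _, label in g2.triples((None, RDFS.label, None)):
            s1 = by_label.get(label)
            if s1 is not None and s1 != s2:
                merged.add((s1, OWL.sameAs, s2))           # crude identity mapping
        return merged

    # Hypothetical usage; real ontology mapping needs far more than label equality.
    g1, g2 = Graph(), Graph()
    g1.parse("ontology_a.ttl", format="turtle")
    g2.parse("ontology_b.ttl", format="turtle")
    merged = naive_merge(g1, g2)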
Similarly, specifications of tasks will be used for the deployment of knowledge services over the SW, but in general it cannot be expected that in the medium term there will be standards for task (or service) specifications. The brokering meta-services that are envisaged will have to deal with this heterogeneity.

The emerging picture of the SW is one of great opportunity, but it will not be a well-ordered, certain or consistent environment. It will comprise many repositories of legacy data, outdated and inconsistent stores, and requirements for common understandings across divergent formalisms. There is clearly a role for standards to play in bringing much of this context together, and AKT is playing a significant role in these efforts. But standards take time to emerge, they take political power to enforce, and they have been known to stifle innovation (in the short term). AKT is keen to understand the balance between principled inference and statistical processing of web content. Logical inference on the Web is tough: complex queries using traditional AI inference methods bring most distributed computer systems to their knees. Do we set up semantically well-behaved areas of the Web? Is any part of the Web in which semantic hygiene prevails interesting enough to reason in? These and many other questions need to be addressed if we are to provide effective knowledge technologies for our content on the web.