
    The Impact of Cultural Familiarity on Students’ Social Media Usage in Higher Education

    Using social media (SM) in higher education (HE) has become unavoidable in contemporary teaching and learning pedagogy. The current generation of students creates its own groups on SM for collaboration. However, SM can be a primary source of learning distraction because, by its nature, it does not support structured learning. Hence, drawing on the literature, this study proposes three learning-customised system features to be implemented on SM when used in HE. Nevertheless, some psychological factors appear to have a stronger impact on students' adoption of SM in learning than the proposed features. A quantitative survey was conducted at a university in Uzbekistan to collect 52 undergraduate students' perceptions of the proposed SM learning-customised features in Moodle. These features aim to provide a localised, personalised, and privacy-controlled self-management environment for collaboration in Moodle, and could be significant in predicting students' engagement with SM in HE. The data analysis showed mostly positive feedback towards the proposed learning-customised SM; however, the surveyed students' engagement with these features was observed to be minimal. The course leader initiated semi-structured interviews to investigate the reason. Although the students confirmed their acceptance of the learning-customised features, their preference for an alternative SM platform, Telegram, overrode their usage of the proposed learning-customised SM platform, Twitter. The students avoided the Moodle-integrated Twitter (which provided the highly accepted features) and chose Telegram as an external collaboration platform, driven by their familiarity with and social preference for Telegram, the most popular SM platform in Uzbekistan. This study is part of ongoing PhD research involving a deeper framing of learners' cognitive usage of the learning management system; however, this paper exclusively discusses the impact of cultural familiarity on students' adoption of SM in HE.

    The Requirements for Ontologies in Medical Data Integration: A Case Study

    Evidence-based medicine is critically dependent on three sources of information: a medical knowledge base, the patient's medical record, and knowledge of available resources including, where appropriate, clinical protocols. Patient data is often scattered across a variety of databases and may, in a distributed model, be held across several disparate repositories. Consequently, addressing the needs of an evidence-based medicine community presents issues of biomedical data integration, clinical interpretation, and knowledge management. This paper outlines how the Health-e-Child project has approached the challenge of requirements specification for (bio-)medical data integration, from the level of cellular data, through disease, to that of patient and population. The approach is illuminated through the requirements elicitation and analysis of Juvenile Idiopathic Arthritis (JIA), one of three diseases being studied in the EC-funded Health-e-Child project. Comment: 6 pages, 1 figure. Presented at the 11th International Database Engineering & Applications Symposium (Ideas2007), Banff, Canada, September 2007.
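    The integration challenge described here, unifying patient data scattered across disparate repositories under a shared vocabulary, can be illustrated with a small sketch. The Python example below is purely illustrative and is not taken from the Health-e-Child project; the repository field names and the canonical terms are hypothetical assumptions.

```python
# Illustrative sketch: mapping heterogeneous patient records from two
# hypothetical repositories onto one shared (ontology-like) vocabulary,
# so that a single query can run over both sources.

# Alignment tables: local field name -> canonical term (assumed, not from the paper)
HOSPITAL_A_MAP = {"pt_name": "patient_name", "dob": "birth_date", "dx": "diagnosis"}
HOSPITAL_B_MAP = {"fullName": "patient_name", "birthDate": "birth_date",
                  "condition": "diagnosis"}

def to_canonical(record: dict, mapping: dict) -> dict:
    """Rewrite a source record's keys into the canonical vocabulary,
    dropping fields the shared schema does not cover."""
    return {mapping[k]: v for k, v in record.items() if k in mapping}

source_a = [{"pt_name": "Alice", "dob": "2001-03-14", "dx": "JIA"}]
source_b = [{"fullName": "Bob", "birthDate": "1999-07-02", "condition": "asthma"}]

# Integrated view: one list of records in one vocabulary.
integrated = ([to_canonical(r, HOSPITAL_A_MAP) for r in source_a] +
              [to_canonical(r, HOSPITAL_B_MAP) for r in source_b])

# A query that would otherwise need per-repository logic.
jia_patients = [r["patient_name"] for r in integrated if r["diagnosis"] == "JIA"]
print(jia_patients)  # ['Alice']
```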

    Transforming pedagogy using mobile Web 2.0

    Blogs, wikis, podcasting, and a host of free, easy-to-use Web 2.0 social software tools provide opportunities for creating social constructivist learning environments focused on student-centred learning and end-user content creation and sharing. Building on this foundation, mobile Web 2.0 has emerged as a viable teaching and learning tool, facilitating engaging learning environments that bridge multiple contexts. Today's dual 3G- and wifi-enabled smartphones provide a ubiquitous connection to mobile Web 2.0 social software and the ability to view, create, edit, upload, and share user-generated Web 2.0 content. This article outlines how a Product Design course has moved from a traditional face-to-face, studio-based learning environment to one using mobile Web 2.0 technologies to enhance and engage students in a social constructivist learning paradigm. Keywords: m-learning; Web 2.0; pedagogy 2.0; social constructivism; product design

    Advanced Knowledge Technologies at the Midterm: Tools and Methods for the Semantic Web

    The University of Edinburgh and research sponsors are authorised to reproduce and distribute reprints and on-line copies for their purposes notwithstanding any copyright annotation hereon. The views and conclusions contained herein are the author's and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of other parties.

    In a celebrated essay on the new electronic media, Marshall McLuhan wrote in 1962: "Our private senses are not closed systems but are endlessly translated into each other in that experience which we call consciousness. Our extended senses, tools, technologies, through the ages, have been closed systems incapable of interplay or collective awareness. Now, in the electric age, the very instantaneous nature of co-existence among our technological instruments has created a crisis quite new in human history. Our extended faculties and senses now constitute a single field of experience which demands that they become collectively conscious. Our technologies, like our private senses, now demand an interplay and ratio that makes rational co-existence possible. As long as our technologies were as slow as the wheel or the alphabet or money, the fact that they were separate, closed systems was socially and psychically supportable. This is not true now when sight and sound and movement are simultaneous and global in extent." (McLuhan 1962, p.5, emphasis in original)

    Over forty years later, the seamless interplay that McLuhan demanded between our technologies is still barely visible. McLuhan's predictions of the spread, and increased importance, of electronic media have of course been borne out, and the worlds of business, science, and knowledge storage and transfer have been revolutionised. Yet the integration of electronic systems as open systems remains in its infancy.

    Advanced Knowledge Technologies (AKT) aims to address this problem: to create a view of knowledge and its management across its lifecycle, and to research and create the services and technologies that such unification will require. Half way through its six-year span, the results are beginning to come through, and this paper will explore some of the services, technologies, and methodologies that have been developed. We hope to give a sense in this paper of the potential for the next three years, to discuss the insights and lessons learnt in the first phase of the project, and to articulate the challenges and issues that remain.

    The WWW provided the original context that made the AKT approach to knowledge management (KM) possible. When AKT was initially proposed in 1999, it brought together an interdisciplinary consortium with the technological breadth and complementarity to create the conditions for a unified approach to knowledge across its lifecycle. The combination of this expertise, and the time and space afforded the consortium by the IRC structure, suggested the opportunity for a concerted effort to develop an approach to advanced knowledge technologies, based on the WWW as a basic infrastructure.

    The technological context of AKT altered for the better in the short period between the development of the proposal and the beginning of the project itself, with the development of the semantic web (SW), which foresaw much more intelligent manipulation and querying of knowledge. The opportunities that the SW provided for, e.g., more intelligent retrieval put AKT at the centre of information technology innovation and knowledge management services; the AKT skill set would clearly be central to the exploitation of those opportunities.

    The SW, as an extension of the WWW, provides an interesting set of constraints on the knowledge management services AKT tries to provide. As a medium for the semantically-informed coordination of information, it has suggested a number of ways in which the objectives of AKT can be achieved, most obviously through the provision of knowledge management services delivered over the web, as opposed to the creation and provision of technologies to manage knowledge.

    AKT is working on the assumption that many web services will be developed and provided for users. The KM problem in the near future will be one of deciding which services are needed and of coordinating them. Many of these services will be largely or entirely legacies of the WWW, and so the capabilities of the services will vary. As well as providing useful KM services in their own right, AKT will be aiming to exploit this opportunity by reasoning over services, brokering between them, and providing essential meta-services for SW knowledge service management.

    Ontologies will be a crucial tool for the SW. The AKT consortium brings together a great deal of expertise on ontologies, and ontologies were always going to be a key part of the strategy. All kinds of knowledge sharing and transfer activities will be mediated by ontologies, and ontology management will be an important enabling task. Different applications will need to cope with inconsistent ontologies, or with the problems that will follow the automatic creation of ontologies (e.g. the merging of pre-existing ontologies to create a third). Ontology mapping, and the elimination of conflicts of reference, will be important tasks. All of these issues are discussed along with our proposed technologies.

    Similarly, specifications of tasks will be used for the deployment of knowledge services over the SW, but in general it cannot be expected that in the medium term there will be standards for task (or service) specifications. The brokering meta-services that are envisaged will have to deal with this heterogeneity.

    The emerging picture of the SW is one of great opportunity, but it will not be a well-ordered, certain, or consistent environment. It will comprise many repositories of legacy data, outdated and inconsistent stores, and requirements for common understandings across divergent formalisms. There is clearly a role for standards to play in bringing much of this context together, and AKT is playing a significant role in these efforts. But standards take time to emerge, they take political power to enforce, and they have been known to stifle innovation (in the short term). AKT is keen to understand the balance between principled inference and statistical processing of web content. Logical inference on the Web is tough: complex queries using traditional AI inference methods bring most distributed computer systems to their knees. Do we set up semantically well-behaved areas of the Web? Is any part of the Web in which semantic hygiene prevails interesting enough to reason in? These and many other questions need to be addressed if we are to provide effective knowledge technologies for our content on the web.
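    The brokering idea in this abstract, a meta-service that matches a knowledge-management request against heterogeneous service descriptions, can be made concrete with a small sketch. The Python example below is a hypothetical illustration, not AKT's actual architecture; the service names and the capability vocabulary are assumptions.

```python
# Illustrative sketch of a minimal service broker: services advertise
# capabilities, and the broker selects one that satisfies a request.
# Names and the capability vocabulary are hypothetical.

from dataclasses import dataclass, field

@dataclass
class ServiceDescription:
    name: str
    capabilities: set = field(default_factory=set)  # e.g. {"retrieval", "rdf"}

class Broker:
    def __init__(self):
        self.registry: list[ServiceDescription] = []

    def advertise(self, service: ServiceDescription) -> None:
        self.registry.append(service)

    def match(self, required: set) -> "ServiceDescription | None":
        """Return the first advertised service whose capabilities cover
        everything the request requires (None if nothing matches)."""
        for service in self.registry:
            if required <= service.capabilities:
                return service
        return None

broker = Broker()
broker.advertise(ServiceDescription("legacy-keyword-search", {"retrieval"}))
broker.advertise(ServiceDescription("ontology-aware-search",
                                    {"retrieval", "rdf", "inference"}))

chosen = broker.match({"retrieval", "rdf"})
print(chosen.name if chosen else "no matching service")  # ontology-aware-search
```

    In a real SW setting the capability sets would be ontology terms rather than bare strings, which is exactly where the ontology mapping and conflict-of-reference work described above comes in.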

    Modelling of Multi-Agent Systems: Experiences with Membrane Computing and Future Challenges

    Formal modelling of Multi-Agent Systems (MAS) is a challenging task due to high complexity, interaction, parallelism, and the continuous change of roles and organisation between agents. In this paper we record our research experience on the formal modelling of MAS. We review our research throughout the last decade, describing the problems we have encountered and the decisions we have made towards resolving them and providing solutions. Much of this work involved membrane computing and classes of P Systems, such as Tissue and Population P Systems, targeted at the modelling of MAS whose dynamic structure is a prominent characteristic. More particularly, social insects (such as colonies of ants, bees, etc.), biology-inspired swarms, and systems with emergent behaviour are indicative examples for which we developed formal MAS models. Here we aim to review our work and disseminate our findings to fellow researchers who might face similar challenges and, furthermore, to discuss important issues for advancing research on the application of membrane computing to MAS modelling. Comment: In Proceedings AMCA-POP 2010, arXiv:1008.314
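    To give a flavour of the membrane-computing machinery the abstract refers to, the sketch below simulates one evolution step of a toy P-system-like membrane: a multiset of objects rewritten by rules applied in an approximately maximally parallel way. The objects and rules are invented for illustration (loosely themed on an ant colony) and are not a model from the paper.

```python
# Toy illustration of one evolution step in a P-system-like membrane:
# the membrane holds a multiset of objects, and rewriting rules are
# applied in a (greedy approximation of) maximally parallel manner.
# Objects and rules are invented for illustration.

from collections import Counter

# Each rule consumes one multiset of objects and produces another.
# Example reading: an ant "a" meeting food "f" becomes a fed ant "A".
rules = [
    (Counter({"a": 1, "f": 1}), Counter({"A": 1})),  # ant picks up food
    (Counter({"a": 2}), Counter({"a": 2, "s": 1})),  # two ants meet, emit signal
]

def step(membrane: Counter) -> Counter:
    """Apply each rule as many times as the remaining objects allow
    (a greedy, rule-ordered stand-in for nondeterministic maximal
    parallelism)."""
    produced = Counter()
    for lhs, rhs in rules:
        # How many disjoint copies of lhs fit into what is left?
        times = min(membrane[obj] // n for obj, n in lhs.items())
        for obj, n in lhs.items():
            membrane[obj] -= n * times
        for obj, n in rhs.items():
            produced[obj] += n * times
    return membrane + produced

m = Counter({"a": 5, "f": 2})
print(step(m))  # Counter({'a': 3, 'A': 2, 's': 1})
```

    A full P System model would add a membrane hierarchy, target indications for moving objects between membranes, and genuinely nondeterministic rule application; the sketch only shows the multiset-rewriting core.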

    On the Applications of Interactive Theorem Proving in Computational Sciences and Engineering

    Interactive Theorem Proving (ITP) is one of the most rigorous methods used in the formal verification of computing systems. While ITP provides a high level of confidence in the correctness of the system under verification, it suffers from a steep learning curve and the laborious nature of interaction with a theorem prover. As such, it is desirable to investigate whether ITP can be used in unexplored (but high-impact) domains where other verification methods fail to deliver. To this end, the focus of this dissertation is on two important domains, namely the design of parameterized self-stabilizing systems, and the mechanical verification of numerical approximations for Riemann integration. Self-stabilization is an important property of distributed systems that enables recovery from any system configuration/state. There are important applications for self-stabilization in network protocols, game theory, socioeconomic systems, multi-agent systems, and robust data structures. Most existing techniques for the design of self-stabilization rely on a 'manual design and after-the-fact verification' method. In a paradigm shift, we present a novel hybrid method of 'synthesize in small scale and generalize', where we combine the power of a finite-state synthesizer with theorem proving. We have used our method for the design of network protocols that are self-stabilizing irrespective of the number of network nodes (i.e., parameterized protocols). The second domain of application of ITP that we investigate concentrates on the formal verification of numerical propositions about the Riemann integral in formal proofs. This is a high-impact problem, as the Riemann integral is considered one of the most indispensable tools of modern calculus, with significant applications in the development of mission-critical systems in many engineering fields that require rigorous computations, such as aeronautics, space mechanics, and electrodynamics. Our contribution to this problem is threefold: first, we formally specify and verify the fundamental Riemann integral inclusion theorem in interval arithmetic; second, we propose a general method to verify numerical propositions on the Riemann integral for a large class of integrable functions; third, we develop a set of practical automatic proof strategies based on formally verified theorems. The contributions of Part II have become part of the ultra-reliable NASA PVS standard library.
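    The interval-arithmetic idea behind the Riemann integral inclusion theorem mentioned above can be sketched numerically: partition [a, b], enclose f over each subinterval, and sum width times enclosure to obtain an interval guaranteed to contain the integral. The Python sketch below illustrates this for a monotone increasing function, where the enclosure on each piece is simply [f(l), f(u)]; it is an informal illustration (ignoring floating-point rounding, which a rigorous development would handle with outward rounding), not the PVS formalisation from the dissertation.

```python
# Interval enclosure of a Riemann integral: on each subinterval,
# width * [min f, max f] encloses the integral over that piece, so the
# sum of these interval terms encloses the whole integral.
# f is assumed monotone increasing, so on [l, u] the enclosure of f
# is simply [f(l), f(u)].

import math

def integral_enclosure(f, a: float, b: float, n: int) -> tuple:
    """Return (lo, hi) with lo <= integral of f over [a, b] <= hi,
    assuming f is monotone increasing on [a, b]."""
    lo = hi = 0.0
    h = (b - a) / n
    for i in range(n):
        left, right = a + i * h, a + (i + 1) * h
        lo += h * f(left)   # lower bound of f on the subinterval
        hi += h * f(right)  # upper bound of f on the subinterval
    return lo, hi

lo, hi = integral_enclosure(math.exp, 0.0, 1.0, 1000)
print(lo, hi)  # brackets e - 1 ~ 1.71828; the bracket tightens as n grows
```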

    Analysing the visual dynamics of spatial morphology

    Recently there has been a revival of interest in the visibility analysis of architectural configurations. The new analyses rely heavily on computing power and statistical analysis, two factors which, according to the postpositivist school of geography, should immediately cause us to be wary. The danger, they would suggest, lies in applying a reductionist formal mathematical description in order to 'explain' multilayered sociospatial phenomena. The author presents an attempt to rationalise how we can use visibility analysis to explore architecture in this multilayered context by considering the dynamics that lead to the visual experience. In particular, it is recommended that we assess the visual process of inhabitation, rather than assess visibility in vacuo. In order to investigate the possibilities and limitations of the methodology, an urban environment is analysed by means of an agent-based model of visual actors within the configuration. The results obtained from the model are compared with actual pedestrian movement and other analytic measurements of the area: the agents correlate well both with human movement patterns and with configurational relationships as analysed by space-syntax methods. The application of both methods in combination improves on the correlation with observed movement of either alone, which in turn implies that an understanding of both the process of inhabitation and the principles of configuration may play a crucial role in determining the social usage of space.
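    As an illustration of the kind of agent-based visibility model described in this abstract, the sketch below walks an agent across a small occupancy grid, at each step casting rays in the four cardinal directions and preferring the direction with the longest unobstructed line of sight. The grid, movement rule, and parameters are invented for illustration and are a crude stand-in for the author's isovist-driven agents, not the actual model.

```python
# Toy visually guided agent on a grid: at each step the agent measures
# how far it can see in each cardinal direction and moves towards the
# longest line of sight (a crude stand-in for isovist-driven movement).
# Grid and movement rule are illustrative only.

import random

GRID = [  # 1 = obstacle (building), 0 = open space
    [0, 0, 0, 1, 0],
    [0, 1, 0, 1, 0],
    [0, 1, 0, 0, 0],
    [0, 0, 0, 1, 0],
]
DIRS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # N, S, W, E

def sight(pos, d):
    """Number of open cells visible from pos in direction d."""
    r, c, n = pos[0] + d[0], pos[1] + d[1], 0
    while 0 <= r < len(GRID) and 0 <= c < len(GRID[0]) and GRID[r][c] == 0:
        n += 1
        r, c = r + d[0], c + d[1]
    return n

def step(pos):
    """Move one cell towards the (randomly tie-broken) longest view."""
    views = [(sight(pos, d), d) for d in DIRS]
    best = max(v for v, _ in views)
    if best == 0:
        return pos  # boxed in; stay put
    _, d = random.choice([vd for vd in views if vd[0] == best])
    return (pos[0] + d[0], pos[1] + d[1])

pos = (0, 0)
for _ in range(10):
    pos = step(pos)
print(pos)  # final cell after ten visually guided steps
```

    Releasing many such agents and counting cell visits gives the aggregate "movement" field that can then be correlated with observed pedestrian counts, which is the spirit of the comparison reported above.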

    Non-continuous and variable rate processes: Optimisation for energy use

    The need to develop new and improved ways of reducing energy use and increasing energy efficiency in industrial processes is currently a major issue in New Zealand. Little attention has been given in the past to the optimisation of non-continuous processes, due to their complexity, yet they remain an essential and often energy-intensive component of many industrial sites. Novel models based on pinch analysis that aid in minimising utility usage have been constructed here through the adaptation of proven continuous techniques. The knowledge has been integrated into a user-friendly software package, allowing the optimisation of processes under variable operating rates and batch conditions. An example problem demonstrates the improvements in energy use that can be gained when using these techniques to analyse non-continuous data. A comparison with results achieved using a pseudo-continuous method shows that the method described can provide simultaneous reductions in capital and operating costs.
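    For readers unfamiliar with the continuous pinch-analysis machinery being adapted here, the sketch below implements the classic problem table (heat cascade) algorithm, which yields minimum hot and cold utility targets for a set of streams at a given ΔTmin. The stream data are invented for illustration, and this is the standard textbook algorithm, not the paper's non-continuous extension.

```python
# Classic pinch-analysis problem table: shift stream temperatures by
# DTmin/2, compute the net heat balance of each shifted-temperature
# interval, then cascade the balances to obtain minimum utility targets.
# Stream data are invented for illustration.

DTMIN = 10.0  # minimum approach temperature, degC

# (supply T, target T, CP = heat capacity flowrate in kW/degC)
streams = [
    (170.0, 30.0, 2.0),   # hot stream (cools)
    (30.0, 135.0, 1.8),   # cold stream (heats)
    (80.0, 160.0, 1.0),   # cold stream (heats)
]

def shift(ts, tt, cp):
    """Hot streams shift down by DTmin/2, cold streams shift up."""
    d = -DTMIN / 2 if ts > tt else DTMIN / 2
    return ts + d, tt + d, cp

shifted = [shift(*s) for s in streams]
bounds = sorted({t for ts, tt, _ in shifted for t in (ts, tt)}, reverse=True)

# Cascade the interval heat balances from the hottest interval down.
cascade, heat = [0.0], 0.0
for hi_t, lo_t in zip(bounds, bounds[1:]):
    net = 0.0
    for ts, tt, cp in shifted:
        if min(ts, tt) <= lo_t and max(ts, tt) >= hi_t:  # stream covers interval
            net += cp * (hi_t - lo_t) * (1 if ts > tt else -1)
    heat += net
    cascade.append(heat)

q_hot = max(0.0, -min(cascade))  # hot utility must remove the worst deficit
q_cold = cascade[-1] + q_hot     # overall balance gives the cold utility
print(f"Q_hot,min = {q_hot} kW, Q_cold,min = {q_cold} kW")  # 19.0 and 30.0
```

    The paper's contribution is to extend this kind of targeting to processes whose stream rates vary in time or occur in batches, where the simple steady-state cascade above no longer applies directly.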