
    Mobile support in CSCW applications and groupware development frameworks

    Computer Supported Cooperative Work (CSCW) is an established subset of the field of Human-Computer Interaction that deals with how people use computing technology to enhance group interaction and collaboration. Mobile CSCW has emerged as a result of the progression from personal desktop computing to the mobile device platforms that are ubiquitous today. CSCW aims not only to connect people and facilitate communication through computers, but also to provide conceptual models coupled with technology to manage, mediate, and assist collaborative processes. Mobile CSCW research looks to fulfil these aims through the adoption of mobile technology and consideration for the mobile user. Facilitating collaboration using mobile devices brings new challenges. Some of these challenges are inherent to the nature of the device hardware, while others concern how to engineer software to maximize effectiveness for end-users. This paper reviews seminal and state-of-the-art cooperative software applications and development frameworks, and their support for mobile devices.

    Generating collaborative systems for digital libraries: A model-driven approach

    This is an open access article shared under a Creative Commons Attribution 3.0 Licence (http://creativecommons.org/licenses/by/3.0/); copyright © 2010 The Authors. The design and development of a digital library involves different stakeholders, such as information architects, librarians, and domain experts, who need to agree on a common language to describe, discuss, and negotiate the services the library has to offer. To this end, high-level, language-neutral models have to be devised. Metamodeling techniques favor the definition of domain-specific visual languages through which stakeholders can share their views and directly manipulate representations of the domain entities. This paper describes CRADLE (Cooperative-Relational Approach to Digital Library Environments), a metamodel-based framework and visual language for the definition of notions and services related to the development of digital libraries. A collection of tools allows the automatic generation of several services, defined with the CRADLE visual language, and of the graphical user interfaces providing access to them for the end user. The effectiveness of the approach is illustrated by presenting digital libraries generated with CRADLE, while the CRADLE environment has been evaluated using the cognitive dimensions framework.
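
    To make the model-driven idea concrete, the sketch below shows, in miniature, how a declarative model of library entities can drive the generation of service stubs. The metamodel, entity names and generator here are invented for illustration; they are not CRADLE's actual notation or toolchain.

```python
# A minimal, invented sketch of model-driven service generation:
# entities declared in a model drive the emission of service stubs.
# This is NOT CRADLE's metamodel; names and structure are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Entity:
    name: str
    attributes: dict = field(default_factory=dict)  # attribute name -> type

@dataclass
class LibraryModel:
    entities: list = field(default_factory=list)

def generate_search_services(model):
    """Emit a (toy) search-service stub for each modelled entity."""
    stubs = []
    for e in model.entities:
        args = ", ".join(f"{a}: {t.__name__}" for a, t in e.attributes.items())
        stubs.append(f"def search_{e.name.lower()}({args}): ...")
    return "\n".join(stubs)

model = LibraryModel(entities=[
    Entity("Document", {"title": str, "year": int}),
    Entity("Author", {"name": str}),
])
print(generate_search_services(model))
# def search_document(title: str, year: int): ...
# def search_author(name: str): ...
```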

    Advanced Knowledge Technologies at the Midterm: Tools and Methods for the Semantic Web

    The University of Edinburgh and research sponsors are authorised to reproduce and distribute reprints and on-line copies for their purposes notwithstanding any copyright annotation hereon. The views and conclusions contained herein are the author's and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of other parties.

    In a celebrated essay on the new electronic media, Marshall McLuhan wrote in 1962: "Our private senses are not closed systems but are endlessly translated into each other in that experience which we call consciousness. Our extended senses, tools, technologies, through the ages, have been closed systems incapable of interplay or collective awareness. Now, in the electric age, the very instantaneous nature of co-existence among our technological instruments has created a crisis quite new in human history. Our extended faculties and senses now constitute a single field of experience which demands that they become collectively conscious. Our technologies, like our private senses, now demand an interplay and ratio that makes rational co-existence possible. As long as our technologies were as slow as the wheel or the alphabet or money, the fact that they were separate, closed systems was socially and psychically supportable. This is not true now when sight and sound and movement are simultaneous and global in extent." (McLuhan 1962, p.5, emphasis in original)

    Over forty years later, the seamless interplay that McLuhan demanded between our technologies is still barely visible. McLuhan's predictions of the spread, and increased importance, of electronic media have of course been borne out, and the worlds of business, science and knowledge storage and transfer have been revolutionised. Yet the integration of electronic systems as open systems remains in its infancy.

    Advanced Knowledge Technologies (AKT) aims to address this problem, to create a view of knowledge and its management across its lifecycle, and to research and create the services and technologies that such unification will require. Half way through its six-year span, the results are beginning to come through, and this paper will explore some of the services, technologies and methodologies that have been developed. We hope in this paper to give a sense of the potential for the next three years, to discuss the insights and lessons learnt in the first phase of the project, and to articulate the challenges and issues that remain.

    The WWW provided the original context that made the AKT approach to knowledge management (KM) possible. When AKT was initially proposed in 1999, it brought together an interdisciplinary consortium with the technological breadth and complementarity to create the conditions for a unified approach to knowledge across its lifecycle. The combination of this expertise, and the time and space afforded the consortium by the IRC structure, suggested the opportunity for a concerted effort to develop an approach to advanced knowledge technologies, based on the WWW as a basic infrastructure. The technological context of AKT altered for the better in the short period between the development of the proposal and the beginning of the project itself, with the development of the semantic web (SW), which foresaw much more intelligent manipulation and querying of knowledge.

    The opportunities that the SW provided, e.g. for more intelligent retrieval, put AKT at the centre of information technology innovation and knowledge management services; the AKT skill set would clearly be central to the exploitation of those opportunities. The SW, as an extension of the WWW, provides an interesting set of constraints on the knowledge management services AKT tries to provide. As a medium for the semantically-informed coordination of information, it has suggested a number of ways in which the objectives of AKT can be achieved, most obviously through the provision of knowledge management services delivered over the web, as opposed to the creation and provision of technologies to manage knowledge.

    AKT is working on the assumption that many web services will be developed and provided for users. The KM problem in the near future will be one of deciding which services are needed and of coordinating them. Many of these services will be largely or entirely legacies of the WWW, and so the capabilities of the services will vary. As well as providing useful KM services in their own right, AKT will aim to exploit this opportunity by reasoning over services, brokering between them, and providing essential meta-services for SW knowledge service management.

    Ontologies will be a crucial tool for the SW. The AKT consortium brings together a great deal of expertise on ontologies, and ontologies were always going to be a key part of the strategy. All kinds of knowledge sharing and transfer activities will be mediated by ontologies, and ontology management will be an important enabling task. Different applications will need to cope with inconsistent ontologies, or with the problems that will follow the automatic creation of ontologies (e.g. the merging of pre-existing ontologies to create a third). Ontology mapping, and the elimination of conflicts of reference, will be important tasks. All of these issues are discussed along with our proposed technologies.

    Similarly, specifications of tasks will be used for the deployment of knowledge services over the SW, but in general it cannot be expected that in the medium term there will be standards for task (or service) specifications. The brokering meta-services that are envisaged will have to deal with this heterogeneity.

    The emerging picture of the SW is one of great opportunity, but it will not be a well-ordered, certain or consistent environment. It will comprise many repositories of legacy data, outdated and inconsistent stores, and requirements for common understandings across divergent formalisms. There is clearly a role for standards to play in bringing much of this context together, and AKT is playing a significant role in these efforts. But standards take time to emerge, they take political power to enforce, and they have been known to stifle innovation (in the short term). AKT is keen to understand the balance between principled inference and statistical processing of web content. Logical inference on the Web is tough: complex queries using traditional AI inference methods bring most distributed computer systems to their knees. Do we set up semantically well-behaved areas of the Web? Is any part of the Web in which semantic hygiene prevails interesting enough to reason in? These and many other questions need to be addressed if we are to provide effective knowledge technologies for our content on the web.
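
    As a small, concrete illustration of the kind of ontology-mediated retrieval discussed above (a sketch only, using the third-party rdflib library, with an invented namespace, ontology and data), a subclass axiom lets a query for "persons" also retrieve instances typed only as "researchers":

```python
# Sketch of ontology-mediated retrieval over Semantic Web data using
# rdflib. The namespace, classes and instances are invented.
from rdflib import Graph, Namespace, Literal, RDF, RDFS

EX = Namespace("http://example.org/akt#")
g = Graph()

# Tiny ontology: every Researcher is a Person.
g.add((EX.Researcher, RDFS.subClassOf, EX.Person))

# Instance data, as it might be harvested from heterogeneous sources.
g.add((EX.r1, RDF.type, EX.Researcher))
g.add((EX.r1, EX.name, Literal("A. Researcher")))

# The property path rdfs:subClassOf* makes the subclass axiom count:
# asking for Persons also finds things typed only as Researcher.
q = """
SELECT ?name WHERE {
    ?x a ?cls .
    ?cls rdfs:subClassOf* ex:Person .
    ?x ex:name ?name .
}
"""
for row in g.query(q, initNs={"ex": EX, "rdfs": RDFS}):
    print(row.name)   # -> A. Researcher
```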

    Personalised trails and learner profiling within e-learning environments

    This deliverable focuses on personalisation and personalised trails. We begin by introducing and defining the concepts of personalisation and personalised trails. Personalisation requires that a user profile be stored, so we assess currently available standard profile schemas and discuss the requirements for a profile to support personalised learning. We then review techniques for providing personalisation and some systems that implement these techniques, and discuss some of the issues around evaluating personalisation systems. We look especially at the use of learning and cognitive styles to support personalised learning. We also consider personalisation in the field of mobile learning, which has a slightly different take on the subject, and in commercially available systems, where support for personalisation is currently found only at quite a low level. We conclude with a summary of the lessons to be learned from our review of personalisation and personalised trails.
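
    As an illustration of the sort of rule-based personalisation reviewed here, the sketch below picks the next learning object on a trail by matching resource metadata against the learner's cognitive style. The profile schema and style taxonomy are invented, not any of the standard schemas assessed in the deliverable (real systems would use e.g. IMS LIP or PAPI profiles).

```python
# Minimal sketch of style-based trail personalisation; all field names,
# resource ids and the style vocabulary are hypothetical.
from dataclasses import dataclass, field

@dataclass
class LearnerProfile:
    learner_id: str
    cognitive_style: str                          # e.g. "visual" or "verbal"
    completed: set = field(default_factory=set)   # learning objects already seen

RESOURCES = {
    "lo-001": {"style": "visual", "topic": "recursion"},
    "lo-002": {"style": "verbal", "topic": "recursion"},
}

def next_trail_step(profile, topic):
    """Pick the next unseen learning object matching the learner's style."""
    for lo_id, meta in RESOURCES.items():
        if (meta["topic"] == topic
                and meta["style"] == profile.cognitive_style
                and lo_id not in profile.completed):
            return lo_id
    return None   # nothing suitable left: fall back to an unpersonalised choice

alice = LearnerProfile("alice", "visual")
print(next_trail_step(alice, "recursion"))   # -> lo-001
```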

    Authentication and authorisation in entrusted unions

    This paper reports on the status of a project whose aim is to implement and demonstrate, in a real-life environment, an integrated eAuthentication and eAuthorisation framework to enable trusted collaborations and delivery of services across different organisational/governmental jurisdictions. This aim will be achieved by designing a framework with assurance of claims, trust indicators, policy enforcement mechanisms and processing under encryption to address the security and confidentiality requirements of large distributed infrastructures. The framework supports collaborative secure distributed storage, secure data processing and management in both cloud and offline scenarios, and is intended to be deployed and tested in two pilot studies in two different domains, viz. bio-security incident management and Ambient Assisted Living (eHealth). Interim results in terms of security requirements, privacy-preserving authentication, and authorisation are reported.
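
    The flavour of claims-based authorisation with assurance of claims can be sketched as follows. This is an illustration only, not the project's actual framework; the claim format, issuer registry and assurance scale are all invented.

```python
# Invented sketch of claims-based authorisation: a policy grants access
# only if the claim carries the right attribute, comes from a trusted
# issuer, and meets a minimum level of assurance.
from dataclasses import dataclass

@dataclass(frozen=True)
class Claim:
    subject: str
    attribute: str       # e.g. "role=epidemiologist"
    issuer: str          # identity provider vouching for the claim
    assurance: int       # hypothetical eIDAS-style level, 1..3

# Hypothetical registry: each trusted issuer and the highest assurance
# level this jurisdiction is willing to accept from it.
TRUSTED_ISSUERS = {"idp.health.example": 3, "idp.partner.example": 2}

def authorise(claim, required_attribute, min_assurance):
    """Enforce policy: right attribute, trusted issuer, enough assurance."""
    issuer_cap = TRUSTED_ISSUERS.get(claim.issuer, 0)
    return (claim.attribute == required_attribute
            and min(claim.assurance, issuer_cap) >= min_assurance)

c = Claim("alice", "role=epidemiologist", "idp.health.example", assurance=3)
print(authorise(c, "role=epidemiologist", min_assurance=2))   # -> True
```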

    Semantic Component Composition

    Building complex software systems necessitates the use of component-based architectures. In theory, of the set of components needed for a design, only a small portion are "custom"; the rest are reused or refactored existing pieces of software. Unfortunately, this is an idealized situation. Just because two components should work together does not mean that they will work together. The "glue" that holds components together is not just technology. The contracts that bind complex systems together implicitly define more than their explicit type. These "conceptual contracts" describe essential aspects of extra-system semantics: e.g., object models, type systems, data representation, interface action semantics, legal and contractual obligations, and more. Designers and developers spend inordinate amounts of time technologically duct-taping systems to fulfill these conceptual contracts because system-wide semantics have not been rigorously characterized or codified. This paper describes a formal characterization of the problem and discusses an initial implementation of the resulting theoretical system.
    Comment: 9 pages, submitted to GCSE/SAIG '0
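
    The core observation, that components can agree on explicit types yet violate each other's conceptual contracts, can be shown with a toy example. The annotation scheme below is invented, not the paper's formalism: both ports trade in floats, but one speaks feet and the other metres.

```python
# Toy illustration: a type-level check says the components compose,
# but their (invented) semantic annotations reveal a unit mismatch.
from dataclasses import dataclass

@dataclass
class Port:
    python_type: type
    semantics: dict      # "conceptual contract": units, frame, encoding, ...

altimeter_out = Port(float, {"quantity": "altitude", "unit": "ft"})
autopilot_in  = Port(float, {"quantity": "altitude", "unit": "m"})

def composable(out_port, in_port):
    """Types must match AND the conceptual contracts must agree."""
    return (out_port.python_type is in_port.python_type
            and out_port.semantics == in_port.semantics)

# A purely type-based check would say yes; the semantic check says no.
print(composable(altimeter_out, autopilot_in))   # -> False (ft vs m)
```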

    A story environment for learning object annotation and collection : a thesis presented in partial fulfilment of the requirements for the degree of Master of Science in Computer Science at Massey University, Palmerston North, New Zealand

    With the increase in computer power, network bandwidth and availability, e-learning is used more and more widely. In practice e-learning can be applied in a variety of ways, such as providing electronic resources to support teaching and learning, developing computer-based tutoring programs, or building computer-supported collaborative learning environments. E-learning has become significantly important because it can improve the quality of learning through interactive computers, online communications and information systems in ways that other teaching methods cannot achieve. An important advantage of e-learning is that it offers learners a large amount of sharable and reusable learning resources. Current approaches, such as Internet search and learning object repositories, do not effectively help users to search for appropriate learning objects. The story concept introduces a new semantic layer between collections of learning objects and learning material. The basic idea is to add an interpretative, semantically rich layer, informally called a 'story', between learning objects and learning material that links learning objects according to specific themes and subjects (Heinrich & Andres, 2003a). One motivation behind this approach is to put a more focused, semantic layer on top of the untargeted metadata that are commonly used to describe a single learning object. In an e-learning context, the stories build on learning objects and become information resources for learning material. The overall aim of this project was to design and build a story environment that realizes the above story concept. The development of the story environment includes story metadata, story environment components, the story browsing and authoring processes, and the tools involved in story browsing and authoring. The story concept suggests that different types of metadata should be used in a story; this project developed those metadata specifications to support the story environment. Two prototype tools have been designed and implemented in this project to allow users to evaluate the story concept and story environment. The story browser helps readers to follow the story narrative and look at a story from different perspectives, while the story authoring tool is used by authors to author a story. Future work has been identified in the areas of extending the current tools, user testing, and further implementation of the story environment.
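
    A minimal sketch of the story layer's data model, with an invented schema rather than the thesis's actual metadata specifications: a story carries its own theme and interpretative narrative and links existing learning objects.

```python
# Hypothetical data model for the 'story' semantic layer; the fields
# and learning-object ids are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Story:
    title: str
    theme: str
    narrative: str                                  # interpretative text by the author
    learning_objects: list = field(default_factory=list)   # linked LO ids/URIs

s = Story(
    title="Sorting, three ways",
    theme="algorithms",
    narrative="Start with insertion sort, then contrast merge and quick sort.",
    learning_objects=["lo:insertion-sort", "lo:merge-sort", "lo:quick-sort"],
)
print(len(s.learning_objects))   # -> 3
```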

    Adaptive Process Management in Cyber-Physical Domains

    The increasing application of process-oriented approaches in new, challenging cyber-physical domains beyond business computing (e.g., personalized healthcare, emergency management, factories of the future, home automation) has prompted a reconsideration of the level of flexibility and support required to manage complex processes in such domains. A cyber-physical domain is characterized by the presence of a cyber-physical system coordinating heterogeneous ICT components (PCs, smartphones, sensors, actuators) and involving real-world entities (humans, machines, agents, robots, etc.) that perform complex tasks in the "physical" real world to achieve a common goal. The physical world, however, is not entirely predictable, and processes enacted in cyber-physical domains must be robust to unexpected conditions and adaptable to unanticipated exceptions. This demands a more flexible approach to process design and enactment, recognizing that in real-world environments it is not adequate to assume that all possible recovery activities can be predefined to deal with the exceptions that may ensue. In this chapter, we tackle this issue and propose a general approach, a concrete framework and a process management system implementation, called SmartPM, for automatically adapting processes enacted in cyber-physical domains in the case of unanticipated exceptions and exogenous events. The adaptation mechanism provided by SmartPM is based on declarative task specifications, execution monitoring for detecting failures and context changes at run-time, and automated planning techniques to self-repair the running process, without requiring any specific adaptation policy or exception handler to be predefined at design-time.
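
    The adaptation loop the abstract describes can be sketched at a high level. This is an illustration only, not SmartPM's implementation: a stubbed planner stands in for the automated planning component, and a simulated physical failure triggers self-repair with no predefined handler.

```python
# Invented sketch of monitor-and-replan process adaptation: execute a
# task, compare expected vs. sensed state, and on a mismatch splice in
# a recovery plan synthesised by a (stubbed) planner.
def plan(actual, goal):
    """Stand-in for an automated planner: one 'set' action per wrong fluent."""
    return [f"set:{k}:{v}" for k, v in goal.items() if actual.get(k) != v]

state = {"pos": 0}     # sensed physical state
slipped = [True]       # simulate one physical failure

def execute(task):
    kind, *args = task.split(":")
    if kind == "move":
        if slipped and slipped.pop():   # slippage: the action has no effect
            return
        state["pos"] = int(args[0])
    elif kind == "set":                 # recovery action from the planner
        state[args[0]] = int(args[1])

def expected_after(task):
    kind, *args = task.split(":")
    if kind == "move":
        return {"pos": int(args[0])}
    return {args[0]: int(args[1])}

queue = ["move:1", "move:2"]
while queue:
    task = queue.pop(0)
    execute(task)
    goal = expected_after(task)
    if any(state.get(k) != v for k, v in goal.items()):   # deviation detected
        queue = plan(state, goal) + queue                 # self-repair, then resume
print(state)   # -> {'pos': 2} despite the simulated failure
```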

    Open-TEE - An Open Virtual Trusted Execution Environment

    Hardware-based Trusted Execution Environments (TEEs) are widely deployed in mobile devices, yet their use has been limited primarily to applications developed by the device vendors. Recent standardization of TEE interfaces by GlobalPlatform (GP) promises to partially address this problem by enabling GP-compliant trusted applications to run on TEEs from different vendors. Nevertheless, ordinary developers wishing to develop trusted applications face significant challenges. Access to hardware TEE interfaces is difficult to obtain without support from vendors, and the tools and software needed to develop and debug trusted applications may be expensive or non-existent. In this paper, we describe Open-TEE, a virtual, hardware-independent TEE implemented in software. Open-TEE conforms to GP specifications and allows developers to develop and debug trusted applications with the same tools they use for developing software in general. Once a trusted application is fully debugged, it can be compiled for any actual hardware TEE. Through performance measurements and a user study we demonstrate that Open-TEE is efficient and easy to use. We have made Open-TEE freely available as open source.
    Comment: Author's version of an article to appear in the 14th IEEE International Conference on Trust, Security and Privacy in Computing and Communications, TrustCom 2015, Helsinki, Finland, August 20-22, 2015