
    Improving sustainability through intelligent cargo and adaptive decision making

    Today's logistics sector faces the challenge of meeting increasingly stringent sustainability goals. Shippers and transport service providers both aim to reduce the carbon footprint of their logistics operations, which requires optimal use of logistics resources and physical infrastructure. An adaptive decision-making process for selecting a specific transport modality, transport provider, and timeslot, aimed at minimising the carbon footprint, enables shippers to achieve this. It requires shippers to have access to up-to-date capacity information from transport providers (e.g. current and scheduled loading status of the various transport means and information on carbon footprint) and to traffic information (e.g. city logistics and current traffic information). A prerequisite is an adequate infrastructure for collaboration and open exchange of information between the various stakeholders in the logistics value chain. This paper gives a view on how such an advanced information infrastructure can be realised; it is currently being developed within the EU iCargo project. The paper describes a reference logistics value chain, including the business benefits that aiming for sustainability brings to each role in the chain. A case analysis reflects a practical situation in which the various roles collaborate and exchange information to realise sustainability goals, using adaptive decision making to select a transport modality, transport provider, and timeslot. A high-level overview is provided of the requirements on, and the technical implementation of, the supporting infrastructure for collaboration and open information exchange.
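
    The adaptive selection step the abstract describes can be illustrated with a minimal sketch: given up-to-date capacity offers from providers, pick the feasible (modality, provider, timeslot) combination with the lowest estimated carbon footprint. The iCargo project does not publish this API; every class, field name, and figure below is an illustrative assumption, not the project's actual implementation.

```python
# Hypothetical sketch of carbon-minimising transport selection; all names
# and numbers are invented for illustration, not taken from iCargo.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TransportOffer:
    provider: str              # transport service provider
    modality: str              # e.g. "road", "rail", "barge"
    timeslot: str              # departure window reported by the provider
    free_capacity_teu: float   # remaining capacity (up-to-date loading status)
    co2_kg_per_teu: float      # carbon-footprint estimate for this leg
    arrival_day: int           # days until delivery

def select_offer(offers: List[TransportOffer], load_teu: float,
                 deadline_days: int) -> Optional[TransportOffer]:
    """Pick the feasible offer with the lowest total carbon footprint."""
    feasible = [o for o in offers
                if o.free_capacity_teu >= load_teu
                and o.arrival_day <= deadline_days]
    if not feasible:
        return None  # shipper must relax the deadline or split the load
    return min(feasible, key=lambda o: o.co2_kg_per_teu * load_teu)

offers = [
    TransportOffer("RoadCo", "road", "2024-05-01T08:00", 10, 90.0, 1),
    TransportOffer("RailCo", "rail", "2024-05-01T14:00", 40, 25.0, 2),
]
best = select_offer(offers, load_teu=5, deadline_days=3)
print(best.provider if best else "no feasible offer")  # -> RailCo
```

    In this toy setting the slower rail option wins because it meets the deadline at a much lower footprint, which is exactly the trade-off an adaptive, sustainability-driven selection process is meant to surface.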

    A Semantic Grid Oriented to E-Tourism

    With the increasing complexity of tourism business models and tasks, there is a clear need for a next-generation e-Tourism infrastructure that supports flexible automation, integration, computation, storage, and collaboration. Several enabling technologies, such as the semantic Web, Web services, agents, and grid computing, have been applied in different e-Tourism applications, but there is no unified framework able to integrate all of them. This paper therefore presents a promising e-Tourism framework based on the emerging semantic grid, and discusses a number of key design issues, including architecture, ontology structure, semantic reconciliation, service and resource discovery, role-based authorization, and intelligent agents. The paper finally describes an implementation of the framework.
    Comment: 12 pages, 7 figures
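
    Of the design issues listed, role-based authorization is easy to make concrete. The sketch below is a minimal illustration of the idea, assuming invented roles and permissions; the paper's actual role model and grid security machinery are far richer.

```python
# Illustrative role-based authorization check for an e-Tourism grid;
# all roles, permissions, and actions here are invented assumptions.
ROLE_PERMISSIONS = {
    "tourist":       {"search_tours", "book_tour"},
    "tour_operator": {"search_tours", "publish_tour", "view_bookings"},
    "grid_admin":    {"search_tours", "publish_tour", "view_bookings",
                      "register_service"},
}

def is_authorised(role: str, action: str) -> bool:
    """Grant an action only if the caller's role carries that permission."""
    return action in ROLE_PERMISSIONS.get(role, set())

assert is_authorised("tour_operator", "publish_tour")
assert not is_authorised("tourist", "register_service")
```

    The point of routing every service and resource request through such a check is that grid nodes can share one vocabulary of roles instead of each application hard-coding its own access rules.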

    Advanced Knowledge Technologies at the Midterm: Tools and Methods for the Semantic Web

    In a celebrated essay on the new electronic media, Marshall McLuhan wrote in 1962: "Our private senses are not closed systems but are endlessly translated into each other in that experience which we call consciousness. Our extended senses, tools, technologies, through the ages, have been closed systems incapable of interplay or collective awareness. Now, in the electric age, the very instantaneous nature of co-existence among our technological instruments has created a crisis quite new in human history. Our extended faculties and senses now constitute a single field of experience which demands that they become collectively conscious. Our technologies, like our private senses, now demand an interplay and ratio that makes rational co-existence possible. As long as our technologies were as slow as the wheel or the alphabet or money, the fact that they were separate, closed systems was socially and psychically supportable. This is not true now when sight and sound and movement are simultaneous and global in extent." (McLuhan 1962, p. 5, emphasis in original)

    Over forty years later, the seamless interplay that McLuhan demanded between our technologies is still barely visible. McLuhan's predictions of the spread, and increased importance, of electronic media have of course been borne out, and the worlds of business, science, and knowledge storage and transfer have been revolutionised. Yet the integration of electronic systems as open systems remains in its infancy.

    Advanced Knowledge Technologies (AKT) aims to address this problem: to create a view of knowledge and its management across its lifecycle, and to research and create the services and technologies that such unification will require. Halfway through its six-year span, the results are beginning to come through, and this paper explores some of the services, technologies, and methodologies that have been developed. We hope to give a sense of the potential for the next three years, to discuss the insights and lessons learnt in the first phase of the project, and to articulate the challenges and issues that remain.

    The WWW provided the original context that made the AKT approach to knowledge management (KM) possible. When AKT was proposed in 1999, it brought together an interdisciplinary consortium with the technological breadth and complementarity to create the conditions for a unified approach to knowledge across its lifecycle. The combination of this expertise, and the time and space afforded the consortium by the IRC structure, suggested the opportunity for a concerted effort to develop an approach to advanced knowledge technologies based on the WWW as a basic infrastructure.

    The technological context of AKT altered for the better in the short period between the development of the proposal and the beginning of the project itself, with the development of the semantic web (SW), which foresaw much more intelligent manipulation and querying of knowledge. The opportunities that the SW provided for, e.g., more intelligent retrieval put AKT at the centre of information technology innovation and knowledge management services; the AKT skill set would clearly be central to the exploitation of those opportunities.

    The SW, as an extension of the WWW, provides an interesting set of constraints on the knowledge management services AKT tries to provide. As a medium for the semantically informed coordination of information, it has suggested a number of ways in which the objectives of AKT can be achieved, most obviously through the provision of knowledge management services delivered over the web, as opposed to the creation and provision of technologies to manage knowledge.

    AKT is working on the assumption that many web services will be developed and provided for users. The KM problem in the near future will be one of deciding which services are needed and of coordinating them. Many of these services will be largely or entirely legacies of the WWW, and so the capabilities of the services will vary. As well as providing useful KM services in their own right, AKT will aim to exploit this opportunity by reasoning over services, brokering between them, and providing essential meta-services for SW knowledge service management.

    Ontologies will be a crucial tool for the SW. The AKT consortium brings together considerable expertise on ontologies, and ontologies were always going to be a key part of the strategy. All kinds of knowledge sharing and transfer activities will be mediated by ontologies, and ontology management will be an important enabling task. Different applications will need to cope with inconsistent ontologies, or with the problems that follow the automatic creation of ontologies (e.g. merging pre-existing ontologies to create a third). Ontology mapping, and the elimination of conflicts of reference, will be important tasks. All of these issues are discussed, along with our proposed technologies.

    Similarly, specifications of tasks will be used for the deployment of knowledge services over the SW, but in general it cannot be expected that in the medium term there will be standards for task (or service) specifications. The brokering meta-services that are envisaged will have to deal with this heterogeneity.

    The emerging picture of the SW is one of great opportunity, but it will not be a well-ordered, certain, or consistent environment. It will comprise many repositories of legacy data, outdated and inconsistent stores, and requirements for common understandings across divergent formalisms. There is clearly a role for standards to play in bringing much of this context together, and AKT is playing a significant role in these efforts. But standards take time to emerge, they take political power to enforce, and they have been known to stifle innovation (in the short term). AKT is keen to understand the balance between principled inference and statistical processing of web content. Logical inference on the Web is tough: complex queries using traditional AI inference methods bring most distributed computer systems to their knees. Do we set up semantically well-behaved areas of the Web? Is any part of the Web in which semantic hygiene prevails interesting enough to reason in? These and many other questions need to be addressed if we are to provide effective knowledge technologies for our content on the web.
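
    The ontology-mapping task mentioned above can be illustrated with a deliberately tiny sketch: two independently built vocabularies are aligned by normalised label before their class hierarchies are merged. Real AKT tooling handles far more than label matching; the ontologies, labels, and helper functions below are invented for illustration only.

```python
# Toy ontology alignment by normalised label; everything here is an
# invented example, not an AKT tool or its API.
def normalise(label: str) -> str:
    """Reduce a class label to a comparable canonical form."""
    return label.lower().replace("-", " ").replace("_", " ").strip()

ontology_a = {"Person": ["staff-member", "student"]}
ontology_b = {"person": ["Staff_Member", "visitor"]}

def align(a: dict, b: dict) -> dict:
    """Map each class in b to the class in a with the same normalised label."""
    index = {normalise(cls): cls for cls in a}
    return {cls: index.get(normalise(cls)) for cls in b}

def merge(a: dict, b: dict) -> dict:
    """Union the sub-class labels of aligned classes into one ontology."""
    mapping = align(a, b)
    merged = {cls: {normalise(s) for s in subs} for cls, subs in a.items()}
    for cls, subs in b.items():
        target = mapping[cls] or cls   # unmatched classes are kept as-is
        merged.setdefault(target, set()).update(normalise(s) for s in subs)
    return merged

print(align(ontology_a, ontology_b))   # {'person': 'Person'}
print(merge(ontology_a, ontology_b))   # 'Person' now also covers 'visitor'
```

    Even this toy shows why automatic merging creates the reference-conflict problems the abstract raises: once two labels are unified, every statement made about either class is silently asserted of both.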

    Toward a script theory of guidance in computer-supported collaborative learning

    This article presents an outline of a script theory of guidance for computer-supported collaborative learning (CSCL). With its four types of components of internal and external scripts (play, scene, role, and scriptlet) and seven principles, the theory addresses the question of how CSCL practices are shaped by the dynamically re-configured internal collaboration scripts of the participating learners. Furthermore, it explains how internal collaboration scripts develop through participation in CSCL practices. It emphasizes the importance of actively applying subject-matter knowledge in CSCL practices, and it prioritizes transactive over non-transactive forms of knowledge application in order to facilitate learning. Further, the theory explains how external collaboration scripts modify CSCL practices and how they influence the development of internal collaboration scripts. The principles specify an optimal scaffolding level for external collaboration scripts and allow for the formulation of hypotheses about the fading of external collaboration scripts. Finally, the article points towards conceptual challenges and future research questions.
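
    The four component types nest naturally, and a minimal data-structure sketch may make their relationship concrete: a play contains scenes, a scene assigns roles, and a role is enacted through scriptlets. The example content (a peer-review play) is invented and not taken from the article, and this structural reading is only one possible rendering of the theory.

```python
# Minimal sketch of the four script components (play, scene, role,
# scriptlet); the example content below is an invented assumption.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Scriptlet:          # smallest unit: a procedure an actor carries out
    steps: List[str]

@dataclass
class Role:               # the activities expected of one participant
    name: str
    scriptlets: List[Scriptlet] = field(default_factory=list)

@dataclass
class Scene:              # one situation within the collaboration
    name: str
    roles: List[Role] = field(default_factory=list)

@dataclass
class Play:               # the overall collaboration pattern
    name: str
    scenes: List[Scene] = field(default_factory=list)

peer_review = Play("peer review", [
    Scene("critique", [
        Role("reviewer", [Scriptlet(["read draft", "write transactive comment"])]),
        Role("author",   [Scriptlet(["respond to comment", "revise draft"])]),
    ]),
])
print(len(peer_review.scenes[0].roles))  # -> 2
```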

    Mobile support in CSCW applications and groupware development frameworks

    Computer Supported Cooperative Work (CSCW) is an established subset of the field of Human-Computer Interaction that deals with how people use computing technology to enhance group interaction and collaboration. Mobile CSCW has emerged from the progression from personal desktop computing to the mobile device platforms that are ubiquitous today. CSCW aims not only to connect people and facilitate communication through computers; it also aims to provide conceptual models, coupled with technology, to manage, mediate, and assist collaborative processes. Mobile CSCW research looks to fulfil these aims through the adoption of mobile technology and consideration for the mobile user. Facilitating collaboration using mobile devices brings new challenges: some are inherent to the nature of the device hardware, while others concern how to engineer software that maximizes effectiveness for the end users. This paper reviews seminal and state-of-the-art cooperative software applications and development frameworks, and their support for mobile devices.

    Universal Resource Lifecycle Management

    This paper presents a model and a tool that allow Web users to define, execute, and manage lifecycles for any artifact available on the Web. We show the need for lifecycle management of Web artifacts and, in particular, why it is important that non-programmers are also able to perform it. We then discuss why current models do not allow this, and we present a model and a system implementation that achieve lifecycle management for any URI-identifiable and accessible object. The most challenging parts of the work lie in the definition of a simple but universal model and system (in particular, in allowing universality and simplicity to coexist) and in the ability to hide from the lifecycle modeler the complexity intrinsic in having to access and manage a variety of resources, which differ in nature, in the operations that are allowed on them, and in the protocols and data formats required to access them.
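
    The core idea, a lifecycle attached to an arbitrary URI-identified resource, can be sketched as a small state machine. The state names and transition table below are invented assumptions in the spirit of the abstract, not the paper's actual model or API.

```python
# Minimal sketch of a lifecycle bound to a URI-identified resource;
# states and transitions are illustrative assumptions.
class Lifecycle:
    def __init__(self, uri: str, transitions: dict, initial: str):
        self.uri = uri
        self.transitions = transitions  # state -> set of allowed next states
        self.state = initial

    def move_to(self, new_state: str) -> None:
        """Advance the resource, rejecting transitions the model forbids."""
        if new_state not in self.transitions.get(self.state, set()):
            raise ValueError(f"{self.uri}: {self.state} -> {new_state} not allowed")
        self.state = new_state

doc = Lifecycle(
    "http://example.org/report.pdf",
    {"draft": {"under-review"}, "under-review": {"draft", "published"}},
    initial="draft",
)
doc.move_to("under-review")
doc.move_to("published")
print(doc.state)  # -> published
```

    The declarative transition table is what lets non-programmers define lifecycles: the modeler edits the table, while the machinery for actually reaching each resource over its own protocol and data format stays hidden behind the tool.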

    Support for collaborative component-based software engineering

    Collaborative system composition during design has been poorly supported by traditional CASE tools (which have usually concentrated on supporting individual projects) and has focused almost exclusively on static composition. Little support has been developed for maintaining large distributed collections of heterogeneous software components across a number of projects. The CoDEEDS project addresses the collaborative determination, elaboration, and evolution of design spaces that describe both static and dynamic compositions of software components from sources such as component libraries, software service directories, and reuse repositories. The GENESIS project has focused, in the development of OSCAR, on the creation and maintenance of large software artefact repositories. The most recent extensions explicitly address the provision of cross-project global views of large software collections and historical views of individual artefacts within a collection; a sketch of these two views follows below. The long-term benefits of such support can only be realised if OSCAR and CoDEEDS are widely adopted, and steps to facilitate this are described.

    This book continues to provide a forum, started by a recent book, Software Evolution with UML and XML, where expert insights on the subject are presented. In that book, initial efforts were made to link together three current phenomena: software evolution, UML, and XML. This book focuses on the practical side of linking them, that is, how UML and XML and their related methods and tools can assist software evolution in practice. Considering that nowadays software starts evolving before it is delivered, an apparent feature of software evolution is that it happens over all stages and over all aspects; therefore, all possible techniques should be explored. This book explores techniques based on UML and XML and their combination with other techniques (i.e., all techniques from theory to tools). Software evolution happens at all stages: chapters in this book describe software evolution issues arising during software architecting, modeling/specifying, assessing, coding, validating, design recovery, program understanding, and reuse. Software evolution happens in all aspects: chapters illustrate that software evolution issues are involved in Web applications, embedded systems, software repositories, component-based development, object models, development environments, software metrics, UML use case diagrams, system models, legacy systems, safety-critical systems, user interfaces, software reuse, evolution management, and variability modeling. Software evolution needs to be facilitated with all possible techniques: chapters demonstrate techniques such as formal methods, program transformation, empirical study, tool development, standardisation, and visualisation to control system changes to meet organisational and business objectives in a cost-effective way. On the journey of the grand challenge posed by software evolution, a journey that we have to make, the contributing authors of this book have already made further advances.
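
    As referenced above, here is a minimal sketch of the two repository views the abstract names: a historical view of one artefact's revisions and a cross-project global view over a collection. OSCAR's real interfaces are not reproduced here; the class, method names, and data are all assumptions for illustration.

```python
# Illustrative artefact repository with historical and cross-project
# views; all names and data below are invented assumptions.
from collections import defaultdict

class ArtefactRepository:
    def __init__(self):
        # (project, artefact) -> ordered list of stored revisions
        self._versions = defaultdict(list)

    def commit(self, project: str, artefact: str, content: str) -> None:
        self._versions[(project, artefact)].append(content)

    def history(self, project: str, artefact: str) -> list:
        """Historical view: every stored revision of one artefact."""
        return list(self._versions[(project, artefact)])

    def global_view(self, artefact: str) -> dict:
        """Cross-project view: the latest revision of an artefact per project."""
        return {proj: revs[-1]
                for (proj, name), revs in self._versions.items()
                if name == artefact and revs}

repo = ArtefactRepository()
repo.commit("genesis", "parser.java", "v1")
repo.commit("genesis", "parser.java", "v2")
repo.commit("codeeds", "parser.java", "v1")
print(repo.history("genesis", "parser.java"))  # ['v1', 'v2']
print(repo.global_view("parser.java"))         # {'genesis': 'v2', 'codeeds': 'v1'}
```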