Current trends on ICT technologies for enterprise information systems
This paper discusses current trends in ICT technologies for Enterprise Information Systems. It begins by defining four big challenges for the next generation of information systems: (1) Data Value Chain Management; (2) Context Awareness; (3) Interaction and Visualization; and (4) Human Learning. The major contributions towards the next generation of information systems are elaborated based on the work and experience of the authors and their teams. These include: (1) ontology-based solutions for semantic interoperability; (2) context-aware infrastructures; (3) Product Avatar based interactions; and (4) human learning. Finally, the current state of research is discussed, highlighting the impact of these solutions on the economic and social landscape.
The Design and Use of a Smartphone Data Collection Tool and Accompanying Configuration Language
Understanding human behaviour is key to understanding the spread of epidemics, habit dispersion, and the efficacy of health interventions. Investigation into the patterns of and drivers for human behaviour has often been facilitated by paper tools such as surveys, journals, and diaries. These tools have drawbacks in that they can be forgotten, go unfilled, and depend on often unreliable human memories. Researcher-driven data collection mechanisms, such as interviews and direct observation, alleviate some of these problems while introducing others, such as bias and observer effects. In response to this, technological means such as special-purpose data collection hardware, wireless sensor networks, and apps for smart devices have been built to collect behavioural data. These technologies further reduce the problems experienced by more traditional behavioural research tools, but often experience problems of reliability, generality, extensibility, and ease of configuration.
This document details the construction of a smartphone-based app designed to collect data on human behaviour such that the difficulties of traditional tools are alleviated while still addressing the problems faced by modern supplemental technology. I describe the app's main data collection engine and its construction, architecture, reliability, generality, and extensibility, as well as the programming language developed to configure it and its feature set. To demonstrate the utility of the tool and its configuration language, I describe how they have been used to collect data in the field. Specifically, eleven case studies are presented in which the tool's architecture, flexibility, generality, extensibility, modularity, and ease of configuration have been exploited to facilitate a variety of behavioural monitoring endeavours. I further explain how the engine performs data collection, the major abstractions it employs, how its design and the development techniques used ensure ongoing reliability, and how the engine and its configuration language could be extended in the future to facilitate a greater range of experiments that require behavioural data to be collected. Finally, features and modules of the engine's encompassing system, iEpi, that have not otherwise been documented are presented to give the reader an understanding of where the work fits into the larger data collection and processing endeavour that spawned it.
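To make the role of the configuration language concrete, the following minimal Python sketch shows the kind of declarative study configuration the abstract describes. All names here (SensorTask, schedule, the sensor list, the intervals) are hypothetical illustrations, not the actual iEpi configuration language.

```python
# Hypothetical sketch of a declarative sensing configuration, in the
# spirit of the engine described above. All names are illustrative;
# the real iEpi configuration language differs.
from dataclasses import dataclass

@dataclass
class SensorTask:
    sensor: str        # e.g. "gps", "accelerometer", "bluetooth"
    interval_s: int    # sampling period in seconds
    duration_s: int    # how long each sample burst lasts

# A study is configured as data, not code, so non-programmers can edit it.
STUDY = [
    SensorTask("gps", interval_s=300, duration_s=30),
    SensorTask("accelerometer", interval_s=60, duration_s=10),
    SensorTask("bluetooth", interval_s=600, duration_s=60),
]

def schedule(tasks: list[SensorTask]) -> list[tuple[int, str]]:
    """Expand the configuration into (start_time, sensor) events for one hour."""
    events = []
    for t in tasks:
        events += [(start, t.sensor) for start in range(0, 3600, t.interval_s)]
    return sorted(events)

if __name__ == "__main__":
    for when, sensor in schedule(STUDY)[:5]:
        print(f"t+{when:4d}s  sample {sensor}")
```

The design point such a sketch illustrates is that a study is data rather than code, so researchers can reconfigure data collection without modifying or redeploying the engine itself.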
Adaptive object management for distributed systems
This thesis describes an architecture supporting the management of pluggable software components and evaluates it against the requirements for an enterprise integration platform for the manufacturing and petrochemical industries. In a distributed environment, we need mechanisms to manage objects and their interactions. At the least, we must be able to create objects in different processes on different nodes; we must be able to link them together so that they can pass messages to each other across the network; and we must deliver their messages in a timely and reliable manner. Object-based environments which support these services already exist, for example ANSAware (ANSA, 1989), DEC's ObjectBroker (ACA, 1992), and Iona's Orbix (Orbix, 1994). Yet such environments provide limited support for composing applications from pluggable components.

Pluggability is the ability to install and configure a component into an environment dynamically when the component is used, without specifying static dependencies between components when they are produced. Pluggability is supported to a degree by dynamic binding: components may be programmed to import references to other components and to explore their interfaces at runtime, without using static type dependencies. Yet this overloads the component with the responsibility to explore bindings. What is still generally missing is an efficient general-purpose binding model for managing bindings between independently produced components. In addition, existing environments provide no clear strategy for dealing with fine-grained objects: the overhead of runtime binding and remote messaging will severely reduce performance where there are many objects with complex patterns of interaction.

We need an adaptive approach to managing configurations of pluggable components according to the needs and constraints of the environment. Management is made difficult by embedding bindings in component implementations and by relying on strong typing as the only means of verifying and validating bindings. To solve these problems we have built a set of configuration tools on top of an existing distributed support environment. Specification tools facilitate the construction of independent pluggable components. Visual composition tools facilitate the configuration of components into applications and the verification of composite behaviours. A configuration model is constructed which maintains the environmental state. Adaptive management is made possible by changing the management policy according to this state. Such policy changes affect the location of objects, their bindings, and the choice of messaging system.
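The binding problem described above can be made concrete with a small Python sketch: components are produced independently, and a registry resolves bindings by name at configuration time rather than through static dependencies baked into the components. The Registry and MessageSink names are invented for illustration and are not the thesis's actual tooling.

```python
# Minimal sketch of configuration-time binding between independently
# produced components. The registry and interfaces are illustrative only.
from typing import Protocol

class MessageSink(Protocol):
    def receive(self, msg: str) -> None: ...

class Logger:
    def receive(self, msg: str) -> None:
        print(f"[log] {msg}")

class Registry:
    """Resolves bindings by name at configuration time, so components
    carry no static dependencies on each other."""
    def __init__(self) -> None:
        self._components: dict[str, object] = {}

    def register(self, name: str, component: object) -> None:
        self._components[name] = component

    def bind(self, name: str) -> MessageSink:
        return self._components[name]  # type: ignore[return-value]

registry = Registry()
registry.register("sink", Logger())

# A producer component imports only the registry, never Logger itself.
sink = registry.bind("sink")
sink.receive("component plugged in without a static dependency")
```

Moving binding decisions out of component implementations and into such a registry is what makes the management policy (placement, rebinding, messaging choice) changeable without touching the components.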
"You Tube and I Find" - personalizing multimedia content access
Recent growth in broadband access and the proliferation of small personal devices that capture images and videos have led to explosive growth of multimedia content available everywhere, from personal disks to the Web. While digital media capture and upload have become nearly universal with newer device technology, there is still a need for better tools and technologies to search large collections of multimedia data and to find and deliver the right content to a user according to her current needs and preferences. A renewed focus on the subjective dimension in the multimedia lifecycle, from creation and distribution to delivery and consumption, is required to address this need beyond what is feasible today. Integration of the subjective aspects of the media itself (its affective, perceptual, and physiological potential, both intended and achieved) together with those of the users themselves will allow for personalizing content access beyond today's facility. This integration, transforming traditional multimedia information retrieval (MIR) indexes to more effectively answer specific user needs, will allow a richer degree of personalization predicated on user intention and mode of interaction, relationship to the producer, content of the media, and the user's history and lifestyle. In this paper, we identify the challenges in achieving this integration, survey current approaches to interpreting content creation processes, to user modelling and profiling, and to personalized content selection, and we detail future directions. The structure of the paper is as follows: in Section I, we introduce the problem and present some definitions. In Section II, we review the aspects of personalized content and current approaches to it. Section III discusses the problem of obtaining the metadata required for personalized media creation and presents eMediate as a case study of an integrated media capture environment. Section IV presents the MAGIC system as a case study of capturing effective descriptive data and putting users first in distributed learning delivery. Section V presents a case study of modelling the user, using the user's personality to personalize summaries. Finally, Section VI concludes the paper with a discussion of the emerging challenges and open problems.
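As a toy illustration of the integration argued for here, the sketch below blends an objective relevance score with a subjective user-affinity score when ranking results; the weighting scheme and the scores are invented for illustration and are not the paper's model.

```python
# Toy sketch of personalized re-ranking: blend content relevance with a
# user-profile affinity term. Weights and scores are illustrative only.
def personalized_score(relevance: float,
                       affinity: float,
                       alpha: float = 0.6) -> float:
    """Blend an objective relevance score with subjective user affinity."""
    return alpha * relevance + (1 - alpha) * affinity

# (relevance, affinity) pairs for two hypothetical clips
results = {"clip_a": (0.9, 0.2), "clip_b": (0.7, 0.9)}
ranked = sorted(results, key=lambda r: personalized_score(*results[r]),
                reverse=True)
print(ranked)  # clip_b outranks clip_a once user affinity is included
```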
An intelligent framework for dynamic web services composition in the semantic web
As Web services are increasingly adopted as the distributed computing technology of choice for securely publishing application services beyond the firewall, the importance of composing them to create new, value-added services is increasing. Thus far, the most successful practical approach to Web services composition, largely endorsed by industry, falls under the static composition category, where service selection and flow management are done a priori and manually. The second approach to Web services composition aspires to achieve more dynamic composition by semantically describing the process model of Web services and thus making it comprehensible to reasoning engines or software agents. The practical implementation of the dynamic composition approach is still in its infancy, and many complex problems need to be resolved before it can be adopted outside the research communities.
The investigation of automatic discovery and composition of Web services in this thesis resulted in the development of the eXtended Semantic Case Based Reasoner (XSCBR), which utilizes the Semantic Web and the AI methodology of Case-Based Reasoning (CBR). Our framework uses OWL semantic descriptions extensively for implementing both the matchmaking profiles of the Web services and the components of the CBR engine.
In this research, we have introduced the concept of the runtime behaviour of services and its consideration in Web service selection. The runtime behaviour of a service emerges from service execution, describing how the service behaves under different circumstances, which is difficult to presume prior to execution. Moreover, we demonstrate that the accuracy of automatic matchmaking of Web services can be further improved by taking into account the adequacy of past matchmaking experiences for the requested task. Our XSCBR framework allows annotating such runtime experiences by storing the execution values of non-functional Web service parameters, such as availability and response time, in a case library. The XSCBR algorithm for matchmaking and discovery considers these stored execution experiences to determine the adequacy of services for a particular task.
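A minimal Python sketch of this idea follows; the case fields and the weighting in the adequacy score are assumptions made for illustration, not XSCBR's actual representation or algorithm.

```python
# Illustrative sketch of case-based matchmaking over stored execution
# experiences. Case fields and weights are assumptions, not XSCBR's.
from dataclasses import dataclass

@dataclass
class Case:
    service: str
    task: str
    availability: float   # observed fraction of successful invocations
    response_ms: float    # observed mean response time

LIBRARY = [
    Case("WeatherSvcA", "forecast", availability=0.99, response_ms=120),
    Case("WeatherSvcB", "forecast", availability=0.80, response_ms=40),
]

def adequacy(case: Case, max_ms: float = 500.0) -> float:
    """Score past experience: high availability and low latency are adequate."""
    return 0.7 * case.availability + 0.3 * (1 - min(case.response_ms, max_ms) / max_ms)

def select(task: str) -> str:
    candidates = [c for c in LIBRARY if c.task == task]
    return max(candidates, key=adequacy).service

print(select("forecast"))  # prefers the service with the better runtime history
```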
We further extended our fundamental discovery and matchmaking algorithm to cater for Web services composition. A knowledge-intensive substitution approach was proposed to adapt the candidate service experiences to the requested solution before resorting to more complex and computationally taxing AI planning-based transformations. The inconsistency problem that occurs while adapting existing service composition solutions is addressed with a novel methodology based on the Constraint Satisfaction Problem (CSP).
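The CSP-based consistency check can be sketched as follows; the composition slots, candidate services, and the single constraint are invented to illustrate the formulation, not the thesis's concrete encoding.

```python
# Toy CSP sketch: assign a concrete service to each composition slot so
# that simple inter-service constraints hold. Exhaustive search suffices
# at this scale; real encodings would use a proper CSP solver.
from itertools import product

slots = {
    "payment":  ["PayX", "PayY"],
    "shipping": ["ShipFast", "ShipCheap"],
}

def consistent(assignment: dict[str, str]) -> bool:
    # Invented constraint: PayY's output format is incompatible with ShipFast.
    return not (assignment.get("payment") == "PayY"
                and assignment.get("shipping") == "ShipFast")

def solve() -> dict[str, str] | None:
    for combo in product(*slots.values()):
        assignment = dict(zip(slots, combo))
        if consistent(assignment):
            return assignment
    return None

print(solve())  # first consistent composition, e.g. PayX + ShipFast
```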
From the outset, we adopted a pragmatic approach that focused on delivering an automated Web services discovery and composition solution with the minimum possible involvement of all composition participants: the service provider, the requestor and the service composer. The qualitative evaluation of the framework and the composition tools, together with the performance study of the XSCBR framework, has verified that we were successful in achieving our goal.
Proceedings of the International Workshop on EuroPLOT Persuasive Technology for Learning, Education and Teaching (IWEPLET 2013)
"This book contains the proceedings of the International Workshop on EuroPLOT Persuasive Technology for Learning, Education and Teaching (IWEPLET) 2013 which was held on 16.-17.September 2013 in Paphos (Cyprus) in conjunction with the EC-TEL conference. The workshop and hence the proceedings are divided in two parts: on Day 1 the EuroPLOT project and its results are introduced, with papers about the specific case studies and their evaluation. On Day 2, peer-reviewed papers are presented which address specific topics and issues going beyond the EuroPLOT scope. This workshop is one of the deliverables (D 2.6) of the EuroPLOT project, which has been funded from November 2010 – October 2013 by the Education, Audiovisual and Culture Executive Agency (EACEA) of the European Commission through the Lifelong Learning Programme (LLL) by grant #511633. The purpose of this project was to develop and evaluate Persuasive Learning Objects and Technologies (PLOTS), based on ideas of BJ Fogg. The purpose of this workshop is to summarize the findings obtained during this project and disseminate them to an interested audience. Furthermore, it shall foster discussions about the future of persuasive technology and design in the context of learning, education and teaching. The international community working in this area of research is relatively small. Nevertheless, we have received a number of high-quality submissions which went through a peer-review process before being selected for presentation and publication. We hope that the information found in this book is useful to the reader and that more interest in this novel approach of persuasive design for teaching/education/learning is stimulated. We are very grateful to the organisers of EC-TEL 2013 for allowing to host IWEPLET 2013 within their organisational facilities which helped us a lot in preparing this event. I am also very grateful to everyone in the EuroPLOT team for collaborating so effectively in these three years towards creating excellent outputs, and for being such a nice group with a very positive spirit also beyond work. And finally I would like to thank the EACEA for providing the financial resources for the EuroPLOT project and for being very helpful when needed. This funding made it possible to organise the IWEPLET workshop without charging a fee from the participants.
Development of a context-aware internet of things framework for remote monitoring services
Asset management is concerned with the management practices necessary to maximise the value delivered by physical engineering assets. Internet of Things (IoT)-generated data are increasingly considered an asset, and the value of this data asset needs to be maximised too. However, asset-generated data in practice are often collected in non-actionable form. Moreover, IoT data create challenges for data management and processing. One way to handle these challenges is to introduce context information management, wherein data and service delivery are determined by resolving the context of a service or data request.

This research aimed at developing a context awareness framework and implementing it in an architecture integrating IoT with cloud computing for industrial monitoring services. The overall aim was achieved through a methodological investigation consisting of four phases: establishing the research baseline, defining experimentation materials and methods, framework design and development, and case study validation and expert judgement. The framework comprises three layers: the edge, context information management, and application. Moreover, a maintenance context ontology for the framework has been developed, focused on modelling failure analysis of mechanical components, so as to drive the adaptation of monitoring services. The developed context-awareness architecture is expressed through business, usage, functional and implementation viewpoints to frame the concerns of relevant stakeholders. The developed framework was validated through a case study and expert judgement, which provided supporting evidence for its validity and applicability in industrial contexts.

The outcomes of the work can be used in other industrially relevant application scenarios to drive maintenance service adaptation. Context-adaptive services can help manufacturing companies better manage the value of their assets, while ensuring that they continue to function properly over their lifecycle.
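A minimal sketch of what context-resolved service delivery can look like is given below; the context keys, roles, and delivery strategies are hypothetical examples, not the ontology or rules developed in this research.

```python
# Illustrative sketch of resolving the context of a monitoring request to
# decide how data are delivered. Context keys and rules are hypothetical.
def resolve_delivery(context: dict[str, str]) -> str:
    """Map the context of a data request onto a delivery strategy."""
    if context.get("asset_state") == "failure_suspected":
        return "stream_raw_vibration_data"   # maintenance engineer needs detail
    if context.get("role") == "manager":
        return "daily_health_summary"        # aggregated, actionable form
    return "threshold_alerts_only"           # default low-bandwidth delivery

request = {"role": "maintenance_engineer", "asset_state": "failure_suspected"}
print(resolve_delivery(request))
```

The point of such resolution is that the same asset data can be delivered in actionable form to different stakeholders, rather than being collected once in a fixed, non-actionable shape.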
AI-native Interconnect Framework for Integration of Large Language Model Technologies in 6G Systems
The evolution towards 6G architecture promises a transformative shift in communication networks, with artificial intelligence (AI) playing a pivotal role. This paper delves deep into the seamless integration of Large Language Models (LLMs) and Generalized Pretrained Transformers (GPT) within 6G systems. Their ability to grasp intent, strategize, and execute intricate commands will be pivotal in redefining network functionalities and interactions. Central to this is the AI Interconnect framework, intricately woven to facilitate AI-centric operations within the network. Building on the continuously evolving current state of the art, we present a new architectural perspective for the upcoming generation of mobile networks. Here, LLMs and GPTs will collaboratively take center stage alongside traditional pre-generative AI and machine learning (ML) algorithms. This union promises a novel confluence of the old and new, melding tried-and-tested methods with transformative AI technologies. Along with providing a conceptual overview of this evolution, we delve into the nuances of practical applications arising from such an integration. Through this paper, we envisage a symbiotic integration where AI becomes the cornerstone of the next-generation communication paradigm, offering insights into the structural and functional facets of an AI-native 6G network
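As a purely illustrative sketch of the intent-to-action loop such an AI Interconnect implies, the Python below parses a natural-language operator request into a structured intent and maps it to network actions; the stub parser stands in for an LLM call, and the intent schema and action names are assumptions, not part of the paper's framework.

```python
# Toy sketch of an intent-driven loop: a natural-language operator request
# is parsed into a structured intent, then mapped to network actions.
# The parser is a stub standing in for an LLM call; the schema is invented.
import json

def parse_intent(request: str) -> dict:
    """Stand-in for an LLM that turns free text into a structured intent."""
    if "latency" in request:
        return {"objective": "reduce_latency", "slice": "urllc", "target_ms": 5}
    return {"objective": "noop"}

def plan_actions(intent: dict) -> list[str]:
    if intent["objective"] == "reduce_latency":
        return [f"reprioritize slice {intent['slice']}",
                f"move workload to edge (target {intent['target_ms']} ms)"]
    return []

request = "Users in cell 42 report high latency during peak hours."
intent = parse_intent(request)
print(json.dumps(intent))
for action in plan_actions(intent):
    print("->", action)
```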
- …