121 research outputs found

    Data provisioning in simulation workflows

    Computer-based simulations are becoming increasingly important, e.g., to imitate real-world experiments such as crash tests that would otherwise be too expensive or not feasible at all. Simulation workflows may be used to control the interaction with simulation tools performing the necessary numerical calculations. The input data needed by these tools often come from diverse data sources that manage their data in a multiplicity of proprietary formats. Hence, simulation workflows additionally have to carry out many complex data provisioning tasks. These tasks filter and transform heterogeneous input data so that the underlying simulation tools can properly ingest them. Furthermore, some simulations use different tools that need to exchange data with each other. Here, even more complex data transformations are needed to cope with the differences in data formats and data granularity expected by the involved tools. Nowadays, scientists conducting simulations typically have to design their simulation workflows on their own. They therefore have to implement many low-level data transformations that realize the data provisioning for, and the data exchange between, simulation tools. In doing so, they waste time on workflow design, which keeps them from concentrating on their core issue, i.e., the simulation itself. This thesis introduces several novel concepts and methods that significantly alleviate the design of the complex data provisioning in simulation workflows. Firstly, it addresses the issue that most existing workflow systems offer multiple and diverse data provisioning techniques, so that scientists are frequently overwhelmed when selecting techniques appropriate for their workflows. This thesis discusses how to conquer the multiplicity and diversity of available techniques through their systematic classification.
The resulting classes of techniques are then compared with each other considering relevant functional and non-functional requirements for data provisioning in simulation workflows. The major outcome of this classification and comparison is a set of guidelines that assist scientists in choosing proper data provisioning techniques. Another problem with existing workflow systems is that they often do not support all kinds of data resources or data management operations required by concrete computer-based simulations. So, this thesis proposes extensions of conventional workflow languages that offer a generic solution to data provisioning in arbitrary simulation workflows. These extensions allow for specifying any data management operation that may be described via the query or command languages of involved data resources, e.g., arbitrary SQL statements or shell commands. The proposed extensions of workflow languages still do not remove the burden from scientists to specify many complex data management operations using low-level query and command languages. Hence, this thesis introduces a novel pattern-based approach that even further enhances the abstraction support for simulation workflow design. Instead of specifying many workflow tasks, scientists only need to select a small number of abstract patterns to describe the high-level simulation process they have in mind. Furthermore, scientists are familiar with the parameters to be specified for the patterns, because these parameters correspond to terms or concepts that are related to their domain-specific simulation methodology. A rule-based transformation approach offers flexible means to finally map high-level patterns onto executable simulation workflows. Another major contribution is a pattern hierarchy arranging different kinds of patterns according to clearly distinguished abstraction levels. 
This facilitates a holistic separation of concerns and provides a systematic framework to incorporate different kinds of persons and their various skills into workflow design, e.g., not only scientists, but also data engineers. Altogether, the pattern-based approach conquers the data complexity associated with simulation workflows, which allows scientists to concentrate on their core issue again, namely on the simulation itself. The last contribution is a complementary optimization method to increase the performance of local data processing in simulation workflows. This method introduces various techniques that partition relevant local data processing tasks between the components of a workflow system in a smart way. Thereby, such tasks are either assigned to the workflow execution engine or to a tightly integrated local database system. Corresponding experiments revealed that, even for a moderate data size of about 0.5 MB, this method is able to reduce workflow duration by nearly a factor of nine.
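The partitioning idea behind this optimization method can be illustrated with a small, hypothetical sketch (all names invented here; this is not the thesis' implementation): the same filtering task is assigned once to the workflow execution engine (plain Python) and once to a tightly integrated local database system (in-memory SQLite), and both partitionings compute the same result.

```python
import sqlite3

# Synthetic "simulation output" rows: (id, value) pairs.
rows = [(i, i * 0.5) for i in range(1000)]

# Variant 1: the task is assigned to the workflow execution engine.
engine_result = [r for r in rows if r[1] > 400.0]

# Variant 2: the same task is pushed down into a local database system.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE results (id INTEGER, value REAL)")
con.executemany("INSERT INTO results VALUES (?, ?)", rows)
db_result = con.execute(
    "SELECT id, value FROM results WHERE value > 400.0 ORDER BY id"
).fetchall()

# Both assignments are semantically equivalent; which one is faster
# depends on data size and system setup, which is what the thesis'
# optimization method decides.
assert engine_result == db_result
```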

    Privacy in Mobile Agent Systems: Untraceability

    Agent-based Internet environments are an interesting alternative to existing approaches to building software systems. The enabling feature of agents is that they allow software development based on an abstraction (a "metaphor") of elements of the real world. In other words, they allow building software systems that work like human societies, in which members share products and services and cooperate or compete with each other. Organisational, behavioural and functional models, etc., applied in such systems can be copied from the real world. The growing interest in agent technologies in the European Union was expressed through the foundation of the Coordination Action for Agent-Based Computing, funded under the European Commission's Sixth Framework Programme (FP6). The action, called AgentLink III, is run by the Information Society Technologies (IST) programme. The long-term goal of AgentLink is to put Europe at the leading edge of international competitiveness in this increasingly important area. According to the AgentLink "Roadmap for Agent Based Computing", agent-based systems are perceived as "one of the most vibrant and important areas of research and development to have emerged in information technology in recent years, underpinning many aspects of broader information society technologies". However, with the emergence of the new paradigm came new challenges. One of them is that agent environments, especially those which allow for mobility of agents, are much more difficult to protect from intruders than conventional systems. Agent environments still lack sufficient and effective solutions to assure their security. The problem which until now has not been addressed sufficiently in agent-based systems is privacy, and particularly the anonymity of agent users.
Although anonymity has been studied extensively for traditional message-based communication, for which various techniques have been proposed over the past twenty-five years, the problem has never been directly addressed for agent systems. The research presented in this report aimed at filling this gap. This report summarises the results of studies aiming at the identification of threats to privacy in agent-based systems and the methods of their protection. JRC.G.6 - Sensors, radar technologies and cybersecurity

    Distributed control of reconfigurable mobile network agents for resource coordination

    Considering the tremendous growth of internet applications and the network resource federation proposed towards the future open access network (FOAN), the need to analyze the robustness of the classical signalling mechanisms across multiple network operators cannot be over-emphasized. It is envisaged that there will be additional challenges in meeting the bandwidth requirements and network management... The first objective of this project is to describe the networking environment based on the support for heterogeneity of network components...

    Building the Hyperconnected Society- Internet of Things Research and Innovation Value Chains, Ecosystems and Markets

    This book aims to provide a broad overview of various topics of the Internet of Things (IoT), ranging from research, innovation and development priorities to enabling technologies, nanoelectronics, cyber-physical systems, architecture, interoperability and industrial applications. All this is happening in a global context, building towards intelligent, interconnected decision making as an essential driver for new growth and co-competition across a wider set of markets. It is intended to be a standalone book in a series that covers the Internet of Things activities of the IERC – Internet of Things European Research Cluster from research to technological innovation, validation and deployment. The book builds on the ideas put forward by the European Research Cluster on the Internet of Things Strategic Research and Innovation Agenda, and presents global views and state-of-the-art results on the challenges facing the research, innovation, development and deployment of IoT in future years. The concept of IoT could disrupt consumer and industrial product markets, generating new revenues and serving as a growth driver for semiconductor, networking equipment, and service provider end-markets globally. This will create new application and product end-markets, change the value chain of companies that create the IoT technology and deploy it in various end sectors, and impact the business models of semiconductor, software, device, communication and service provider stakeholders. The proliferation of intelligent devices at the edge of the network with the introduction of embedded software and app-driven hardware into manufactured devices, and the ability, through embedded software/hardware developments, to monetize those device functions and features by offering novel solutions, could generate completely new types of revenue streams.
    Intelligent and IoT devices leverage software, software licensing, entitlement management, and Internet connectivity in ways that address many of the societal challenges that we will face in the next decade.

    An Agent-Based Variogram Modeller: Investigating Intelligent, Distributed-Component Geographical Information Systems

    Geo-Information Science (GIScience) is the field of study that addresses substantive questions concerning the handling, analysis and visualisation of spatial data. Geo-Information Systems (GIS), including software, data acquisition and organisational arrangements, are the key technologies underpinning GIScience. A GIS is normally tailored to the service it is supposed to perform. However, there is often the need to perform a function that might not be supported by the GIS tool being used. The normal solution in these circumstances is to go out and look for another tool that can provide the service, and often an expert to use that tool. This is expensive, time consuming and certainly stressful for the geographical data analyst. On the other hand, GIS is often used in conjunction with other technologies to form a geocomputational environment. One of the complex tools in geocomputation is geostatistics. One of its functions is to provide the means to determine the extent of spatial dependencies within geographical data and processes. Spatial datasets are often large and complex. Currently, agent systems are being integrated into GIS to offer flexibility and allow better data analysis. The thesis examines the current application of agents within the GIS community, determining whether they are used to represent data or processes, or to act as a service. The thesis seeks to prove the applicability of an agent-oriented paradigm as a service-based GIS, with the possibility of providing greater interoperability and reducing resource requirements (human and tools). In particular, analysis was undertaken to determine the need to introduce enhanced features to agents in order to maximise their effectiveness in GIS. This was achieved by addressing the software agent complexity in design and implementation for the GIS environment and by suggesting possible solutions to encountered problems.
The software agent characteristics and features (which include the dynamic binding of plans to software agents in order to tackle the levels of complexity and range of contexts) were examined, as well as current GIScience, the applications of agent technology to GIS, and agents as entities, objects and processes. These concepts and their functionalities in GIS are then analysed and discussed. The extent of agent functionality, an analysis of the gaps and the use of these technologies to express a distributed service providing an agent-based GIS framework are then presented. Thus, a general agent-based framework for GIS and a novel agent-based architecture for a specific part of GIS, the variogram, were devised to examine the applicability of the agent-oriented paradigm to GIS. An examination of the current mechanisms for constructing variograms and their underlying processes and functions was undertaken, and these processes were then embedded into a novel agent architecture for GIS. Once a successful software agent implementation had been achieved, the corresponding tool was tested and validated - internally for code errors and externally to determine its functional requirements and whether it enhances the GIS process of dealing with data. Thereafter, it is compared with other known service-based GIS agents and its advantages and disadvantages are analysed.
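As context for the variogram component discussed above, the classical empirical semivariogram can be sketched in a few lines for equally spaced one-dimensional samples (illustrative Python only; the thesis' agent-based implementation is not shown here): gamma(h) = (1 / (2 * N(h))) * sum over all pairs at lag h of (z(x_i) - z(x_i + h))^2.

```python
# Empirical semivariogram for an equally spaced 1-D transect (illustrative sketch).
# For each lag h, average half the squared differences of all sample pairs
# separated by h positions.
def empirical_semivariogram(values, max_lag):
    gammas = {}
    for h in range(1, max_lag + 1):
        pairs = [(values[i], values[i + h]) for i in range(len(values) - h)]
        gammas[h] = sum((a - b) ** 2 for a, b in pairs) / (2 * len(pairs))
    return gammas

# Example: gamma(1) for this transect is (1 + 4 + 9 + 16) / (2 * 4) = 3.75
print(empirical_semivariogram([1.0, 2.0, 4.0, 7.0, 11.0], max_lag=2)[1])
```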

    Social shaping of digital publishing: exploring the interplay between culture and technology

    The processes and forms of electronic publishing have been changing since the advent of the Web. In recent years, the open access movement has been a major driver of scholarly communication, and change is also evident in other fields such as e-government and e-learning. Whilst many changes are driven by technological advances, an altered social reality is also pushing the boundaries of digital publishing. With 23 articles and 10 posters, Elpub 2012 focuses on the social shaping of digital publishing and explores the interplay between culture and technology. This book contains the proceedings of the conference, consisting of 11 accepted full articles and 12 articles accepted as extended abstracts. The articles are presented in groups, and cover the topics: digital scholarship and publishing; special archives; libraries and repositories; digital texts and readings; and future solutions and innovations. Offering an overview of the current situation and exploring the trends of the future, this book will be of interest to all those whose work involves digital publishing.

    A Study of Information Fragment Association in Information Management and Retrieval Applications

    As we strive to identify useful information by sifting through the vast number of resources available to us, we often find that the desired information resides in a small section within a larger body of content which does not necessarily contain similar information. This can make such an Information Fragment difficult to find. A Web search engine may not give a good ranking to a page of unrelated content if it contains only a very small yet invaluable piece of relevant information. This means that our processes often fail to bring together related Information Fragments. We can easily conceive of two Information Fragments which, according to a scholar, bear a strong association with each other, yet contain no common keywords enabling them to be collocated by a keyword search. This dissertation attempts to address this issue by determining the benefits of enhancing information management and retrieval applications with the capability of establishing and storing associations between Information Fragments. It estimates the extent to which the efficiency and quality of information retrieval can be improved if users are allowed to capture the mental associations they form while reading Information Fragments and to share these associations with others using a functional registry-based design. In order to test these benefits, three subject groups were recruited and assigned tasks involving Information Fragments. The first two tasks compared the performance and usability of a mainstream social bookmarking tool with a tool enhanced with Information Fragment Association capabilities. The tests demonstrated that the use of Information Fragment Association offers significant advantages both in the efficiency of retrieval and in user satisfaction. Analysis of the results of the third task demonstrated that a mainstream Web search engine performed poorly in collocating interrelated fragments when a query designed to retrieve one of these fragments was submitted.
The fourth task demonstrated that Information Fragment Association improves the precision and recall of searches performed on Information Fragment datasets. The results of this study indicate that mainstream information management and retrieval applications provide inadequate support for Information Fragment retrieval and that their enhancement with Information Fragment Association capabilities would be beneficial.
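The registry-based design described above might be sketched, purely hypothetically (the dissertation's actual data model is not given here; class and identifier names are invented), as a bidirectional association store keyed by fragment identifiers, so that two fragments with no shared keywords can still be collocated:

```python
from collections import defaultdict

class FragmentRegistry:
    """Hypothetical registry of user-created associations between
    Information Fragments, identified here by (source, offset) pairs."""

    def __init__(self):
        self._links = defaultdict(set)

    def associate(self, frag_a, frag_b):
        # Associations are symmetric: store both directions.
        self._links[frag_a].add(frag_b)
        self._links[frag_b].add(frag_a)

    def related(self, frag):
        # Return every fragment a user has associated with `frag`.
        return self._links.get(frag, set())
```

A lookup for either fragment then retrieves the other, regardless of keyword overlap between their texts.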

    Multiple Media Correlation: Theory and Applications

    This thesis introduces multiple media correlation, a new technology for the automatic alignment of multiple media objects such as text, audio, and video. This research began with the question: what can be learned when multiple multimedia components are analyzed simultaneously? Most ongoing research in computational multimedia has focused on queries, indexing, and retrieval within a single media type. Video is compressed and searched independently of audio; text is indexed without regard to temporal relationships it may have to other media data. Multiple media correlation provides a framework for locating and exploiting correlations between multiple, potentially heterogeneous, media streams. The goal is computed synchronization: the determination of temporal and spatial alignments that optimize a correlation function and indicate commonality and synchronization between media objects. The model also provides a basis for comparison of media in unrelated domains. There are many real-world applications for this technology, including speaker localization, musical score alignment, and degraded media realignment. Two applications, text-to-speech alignment and parallel text alignment, are described in detail with experimental validation. Text-to-speech alignment computes the alignment between a textual transcript and speech-based audio. The presented solutions are effective for a wide variety of content and are useful not only for retrieval of content, but also in support of automatic captioning of movies and video. Parallel text alignment provides a tool for the comparison of alternative translations of the same document that is particularly useful to the classics scholar interested in comparing translation techniques or styles.
The results presented in this thesis include (a) new media models more useful in analysis applications, (b) a theoretical model for multiple media correlation, (c) two practical application solutions that have widespread applicability, and (d) Xtrieve, a multimedia database retrieval system that demonstrates this new technology and its application to information retrieval. This thesis demonstrates that computed alignment of media objects is practical and can provide immediate solutions to many information retrieval and content presentation problems. It also introduces a new area for research in media data analysis.
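Computed synchronization, as described above, amounts to finding an alignment that optimizes a correlation (cost) function over two media streams. A generic dynamic-programming alignment of two numeric feature sequences gives the flavour (an illustrative sketch only, not the thesis' actual algorithm or media models):

```python
# Dynamic-programming alignment of two feature sequences: find the
# monotonic alignment that minimizes total pairwise cost, allowing one
# element of either sequence to match several of the other (the classic
# dynamic-time-warping recurrence).
def dtw_cost(a, b):
    n, m = len(a), len(b)
    INF = float("inf")
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])  # local dissimilarity
            # Extend the cheapest of: match, or skip in either sequence.
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

# Identical content at different "rates" aligns with zero cost:
print(dtw_cost([1, 2, 3], [1, 2, 2, 3]))  # 0.0
```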

    Proceedings, MSVSCC 2011

    Proceedings of the 5th Annual Modeling, Simulation & Visualization Student Capstone Conference held on April 14, 2011 at VMASC in Suffolk, Virginia. 186 pp.
