
    Overview of Digital Library Components and Developments

    Digital libraries are being built upon a firm foundation of prior work as the high-end information systems of the future. A component architecture approach is becoming popular, with well-established support for key components like the repository, especially through the Open Archives Initiative. We consider digital objects, metadata, harvesting, indexing, searching, browsing, rights management, linking, and powerful interfaces. Flexible interaction will be possible through a variety of architectures, using buses, agents, and other technologies. The field as a whole is undergoing rapid growth, supported by advances in storage, processing, networking, algorithms, and interaction. There are many initiatives and developments, including those supporting education, and these will certainly be of benefit in Latin America.
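
    The Open Archives Initiative harvesting mentioned above can be pictured with a short sketch. The following Python snippet issues a standard OAI-PMH ListRecords request with Dublin Core metadata and prints the titles it finds; the repository endpoint is a placeholder and the code is an illustration, not taken from the paper.

    # Minimal OAI-PMH harvesting sketch; the endpoint URL is a placeholder.
    # Only the first response page is read (no resumptionToken handling).
    import requests
    import xml.etree.ElementTree as ET

    OAI = "{http://www.openarchives.org/OAI/2.0/}"
    DC = "{http://purl.org/dc/elements/1.1/}"

    def harvest_titles(base_url):
        """Issue a ListRecords request and yield Dublin Core titles."""
        params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}
        response = requests.get(base_url, params=params, timeout=30)
        root = ET.fromstring(response.content)
        for record in root.iter(OAI + "record"):
            title = record.find(".//" + DC + "title")
            if title is not None:
                yield title.text

    for title in harvest_titles("https://repository.example.org/oai"):  # placeholder URL
        print(title)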

    An EAI based integration solution for science and research outcomes information management

    In this paper we present an Enterprise Application Integration (EAI) based proposal for research outcomes information management. The proposal is contextualized in terms of national and international science and research outcomes information management, and the corresponding supporting information systems and ecosystems. Information systems interoperability problems, approaches, technologies and tools are presented and applied to the research outcomes information management case. A business and technological perspective is provided, including the conceptual analysis and modelling, an integration solution based on a Domain-Specific Language (DSL), and the orchestration engine to execute the proposed solution. For illustrative purposes, the role and information system needs of a research unit are assumed as the representative case.
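
    As a rough illustration of the kind of step such an integration solution orchestrates, the sketch below maps a record from a hypothetical source-repository schema into a hypothetical CRIS-style schema and hands it to a target port; all field and function names are invented for the example and do not come from the paper.

    # Illustrative field-mapping and orchestration step in an integration flow.
    # System, field, and function names are hypothetical placeholders.
    def map_publication(source_record: dict) -> dict:
        """Translate a source-repository record into a target CRIS-style schema."""
        return {
            "title": source_record.get("name"),
            "doi": source_record.get("identifiers", {}).get("doi"),
            "year": source_record.get("publicationYear"),
            "authors": [a["fullName"] for a in source_record.get("creators", [])],
        }

    def integrate(source_records, publish):
        """Orchestrate the flow: transform each record and hand it to the target port."""
        for record in source_records:
            publish(map_publication(record))

    # Usage with both ends stubbed out:
    integrate(
        [{"name": "Sample paper", "publicationYear": 2015,
          "creators": [{"fullName": "A. Author"}]}],
        print,
    )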

    A Platform Independent Game Technology Model for Model Driven Serious Games Development

    Game‑based learning (GBL) combines pedagogy and interactive entertainment to create a virtual learning environment in an effort to motivate and regain the interest of a new generation of ‘digital native’ learners. However, this approach is impeded by the limited availability of suitable ‘serious’ games and high‑level design tools to enable domain experts to develop or customise serious games. Model Driven Engineering (MDE) goes some way to provide the techniques required to generate a wide variety of interoperable serious games software solutions whilst encapsulating and shielding the technicality of the full software development process. In this paper, we present our Game Technology Model (GTM), which models serious game software in a manner independent of any hardware or operating platform specifications, for use in our Model Driven Serious Game Development Framework.
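
    The idea of a platform-independent game model can be sketched briefly: game elements are captured abstractly, and a transformation emits a platform-specific configuration. The class and field names below are assumptions made for illustration, not the actual GTM metamodel.

    # Hypothetical sketch: a platform-independent game model plus a
    # model-to-platform transformation. Names are assumptions, not GTM's.
    from dataclasses import dataclass, field

    @dataclass
    class GameEntity:
        name: str
        behaviours: list = field(default_factory=list)  # e.g. ["displayable", "answerable"]

    @dataclass
    class GameModel:
        title: str
        entities: list

    def to_engine_config(model: GameModel) -> dict:
        """Transform the abstract model into a generic, engine-flavoured config dict."""
        return {
            "scene": model.title,
            "objects": [{"id": e.name, "components": e.behaviours} for e in model.entities],
        }

    quiz = GameModel("HistoryQuiz", [GameEntity("Question", ["displayable", "answerable"])])
    print(to_engine_config(quiz))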

    Mining software development process variations

    Process tailoring aims to customize a software process to better suit the specific needs of an organization executing a software project, or the social context in which the process is embedded. Tailoring generally happens through variations in process elements such as activities, artifacts, and control flows. This paper introduces a technique that uses process mining to uncover elements of a software process that are candidates for tailoring. The proposed approach analyzes the execution logs of several process instances that share a common standard process. As a result, execution traces that differ from the standard process flow are identified and assessed to uncover their variable elements. The proposed technique was evaluated with data extracted from a real software development scenario in which a large system was under development for a set of Brazilian Federal Institutes of Education, Science and Technology.
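
    The core of the approach, comparing execution traces from an event log against the standard process flow, can be illustrated with a toy example; the activities and cases below are invented and are not taken from the evaluated project.

    # Toy trace comparison: flag execution traces that deviate from the
    # standard process flow and report what differs. Data is illustrative only.
    standard = ["specify", "design", "implement", "test", "deploy"]

    traces = {
        "case-01": ["specify", "design", "implement", "test", "deploy"],
        "case-02": ["specify", "implement", "design", "test", "deploy"],  # swapped order
        "case-03": ["specify", "design", "implement", "deploy"],          # skipped "test"
    }

    def find_variations(standard, traces):
        variations = {}
        for case, trace in traces.items():
            if trace != standard:
                missing = [a for a in standard if a not in trace]
                reordered = trace != [a for a in standard if a in trace]
                variations[case] = {"missing": missing, "reordered": reordered}
        return variations

    print(find_variations(standard, traces))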

    Volume 24, Number 2, June 2004 OLAC Newsletter

    Digitized June 2004 issue of the OLAC Newsletter

    Big Data Management Using Scientific Workflows

    Humanity is rapidly approaching a new era, where every sphere of activity will be informed by the ever-increasing amount of data. Making use of big data has the potential to improve numerous avenues of human activity, including scientific research, healthcare, energy, education, transportation, environmental science, and urban planning, just to name a few. However, making such progress requires managing terabytes and even petabytes of data, generated by billions of devices, products, and events, often in real time, in different protocols, formats and types. The volume, velocity, and variety of big data, known as the 3 Vs, present formidable challenges unmet by traditional data management approaches. Traditionally, many data analyses have been performed using scientific workflows, tools for formalizing and structuring complex computational processes. While scientific workflows have been used extensively in structuring complex scientific data analysis processes, little work has been done to enable scientific workflows to cope with the three big data challenges on the one hand, and to leverage the dynamic resource provisioning capability of cloud computing to analyze big data on the other hand.

    In this dissertation, to facilitate efficient composition, verification, and execution of distributed large-scale scientific workflows, we first propose a formal approach to scientific workflow verification, including a workflow model and the notion of a well-typed workflow. Our approach translates a scientific workflow into an equivalent typed lambda expression and type-checks the workflow. We then propose a type-theoretic approach to the shimming problem in scientific workflows, which occurs when connecting related but incompatible components. We reduce the shimming problem to a runtime coercion problem in the theory of type systems and propose a fully automated and transparent solution. Our technique algorithmically inserts invisible shims into the workflow specification, thereby resolving the shimming problem for any well-typed workflow. Next, we identify a set of important challenges for running big data workflows in the cloud. We then propose a generic, implementation-independent system architecture that addresses many of these challenges. Finally, we develop a cloud-enabled big data workflow management system, called DATAVIEW, that delivers a specific implementation of our proposed architecture. To further validate our proposed architecture, we conduct a case study in which we design and run a big data workflow from the automotive domain using the Amazon EC2 cloud environment.
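
    The shimming idea described above can be sketched in a few lines: when a producer's output type and a consumer's input type are related but incompatible, a known coercion is inserted on the edge, and an edge with no applicable coercion is rejected as ill-typed. The types and coercion table below are placeholders, not the DATAVIEW implementation.

    # Placeholder types and coercions, not the DATAVIEW implementation.
    COERCIONS = {("csv", "dataframe"): "parse_csv", ("dataframe", "csv"): "serialize_csv"}

    def shim_for(producer_out: str, consumer_in: str):
        """Return the shim needed on a workflow edge, or None if the types already match."""
        if producer_out == consumer_in:
            return None
        shim = COERCIONS.get((producer_out, consumer_in))
        if shim is None:
            raise TypeError(f"ill-typed edge: {producer_out} -> {consumer_in}")
        return shim

    print(shim_for("csv", "dataframe"))  # 'parse_csv' is inserted invisibly
    print(shim_for("csv", "csv"))        # None: no shim needed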

    Design and implementation of an enterprise application integration solution for science and research outcomes information management using Guaraná technology

    This document presents an Enterprise Application Integration based proposal for research outcomes and technological information management. The proposal addresses national and international science and research outcomes information management, and the corresponding information systems. Information systems interoperability problems, approaches, technologies and integration tools are presented and applied to the research outcomes information management case. A business and technological perspective is provided, including the conceptual analysis and modelling, an integration solution based on a Domain-Specific Language (DSL), and the integration platform to execute the proposed solution. For illustrative purposes, the role and information system needs of a research unit are assumed as the representative case.
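
    In the spirit of an EAI domain-specific language, an integration flow can be written declaratively and handed to a small engine for execution. The sketch below is a generic illustration with invented task and port names; it is not Guaraná syntax.

    # A toy declarative integration flow and a minimal engine to run it.
    # This is NOT Guaraná syntax; task and port names are invented.
    flow = {
        "entry": {"port": "repository_api"},
        "tasks": [
            {"name": "filter_new_outputs"},   # drop records already registered
            {"name": "to_cris_schema"},       # translate fields to the target schema
        ],
        "exit": {"port": "cris_service"},
    }

    def run(flow, fetch, transforms, publish):
        """Fetch messages from the entry port, apply each task in order, publish survivors."""
        for message in fetch(flow["entry"]):
            for task in flow["tasks"]:
                message = transforms[task["name"]](message)
                if message is None:           # a filter task dropped the message
                    break
            else:
                publish(flow["exit"], message)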

    A Software Product Line Approach to Ontology-based Recommendations in E-Tourism Systems

    This study tackles two concerns of developers of Tourism Information Systems (TIS). First is the need for more dependable recommendation services, given the intangible nature of the tourism product, where it is impossible for customers to physically evaluate the services on offer prior to practical experience. Second is the need to manage dynamic user requirements in tourism, driven by the advent of new technologies such as the semantic web and mobile computing, so that e-tourism systems can evolve proactively with emerging user needs at minimal time and development cost without performance trade-offs. However, TIS have very predictable characteristics and are functionally identical in most cases, with minimal variations, which makes them attractive for software product line development. The Software Product Line Engineering (SPLE) paradigm enables the strategic and systematic reuse of common core assets in the development of a family of software products that share some degree of commonality, in order to realise a significant improvement in the cost and time of development. Hence, this thesis introduces a novel and systematic approach, called Product Line for Ontology-based Tourism Recommendation (PLONTOREC), focusing on the creation of variants of TIS products within a product line. PLONTOREC tackles the aforementioned problems in an engineering-like way by hybridizing concepts from ontology engineering and software product line engineering. The approach is a systematic process model consisting of product line management, ontology engineering, domain engineering, and application engineering. The unique feature of PLONTOREC is that it allows common TIS product requirements to be defined, commonalities and differences of content in TIS product variants to be planned and limited in advance using a conceptual model, and variant TIS products to be created according to a construction specification. We demonstrate the novelty of this approach using a case study of product line development of e-tourism systems for three countries in the West African region of Africa.
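
    The product-line side of the approach can be illustrated with a toy feature model: mandatory and optional features plus a cross-tree constraint, validated when a TIS variant is configured. The feature names and constraints below are invented for illustration and are not PLONTOREC's.

    # Toy feature model for TIS product variants; names and constraints are invented.
    MANDATORY = {"destination_catalogue", "recommendation_engine"}
    OPTIONAL = {"mobile_client", "multilingual_content", "offline_maps"}
    REQUIRES = {"offline_maps": {"mobile_client"}}  # cross-tree constraint

    def configure(selected: set) -> set:
        """Validate a feature selection against the model and return the full product."""
        unknown = selected - OPTIONAL
        if unknown:
            raise ValueError(f"unknown features: {unknown}")
        product = MANDATORY | selected
        for feature, dependencies in REQUIRES.items():
            if feature in product and not dependencies <= product:
                raise ValueError(f"{feature} requires {dependencies}")
        return product

    print(configure({"mobile_client", "offline_maps"}))  # a valid variant for one market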