438 research outputs found

    MACE: connecting architectural content repositories to enable new educational experiences inside a collective external memory

    In the practice and learning of Architecture and Civil Engineering, access to a large amount of learning material is fundamental. A considerable part of the knowledge that was once recorded in books is now being moved to digital media, and today most content is produced and presented in digital format only. Digital content repositories spread around the world hold a wealth of knowledge, but they are often unknown and disconnected from one another; as a consequence, they are not yet very efficient resources for learning. The European research project MACE (Metadata for Architectural Contents in Europe) aims at connecting digital architectural repositories by harvesting their metadata and enriching it through the integration of content and domain, context, competence and process, and usage and social metadata. The resulting network will allow federated access and search over all connected repositories, enabling a new way of exploring knowledge in the architectural domain and using the web as a "collective external memory".

    The Use of Multi-Agents' Systems in e-Learning Platforms


    D1.1 Analysis Report on Federated Infrastructure and Application Profile

    Kawese, R., Fisichella, M., Deng, F., Friedrich, M., Niemann, K., Börner, D., Holtkamp, P., Hun-Ha, K., Maxwell, K., Parodi, E., Pawlowski, J., Pirkkalainen, H., Rodrigo, C., & Schwertel, U. (2010). D1.1 Analysis Report on Federated Infrastructure and Application Profile. OpenScout project deliverable. The present deliverable reports on the functionalities of the first step of the described process: it describes how the consortium will gather learning object metadata, centralize access to existing learning resources, and form a suitable application profile that will contribute to proper modeling, retrieval and presentation of the required information about the learning objects to interested users. The described approach is the foundation for federated, skill-based search and learning object retrieval. The deliverable focuses on the analysis of the available repositories and the infrastructure best suited to support OpenScout's initiative, and explains the motivations behind the chosen infrastructure based on the study of available information, previous research and the literature. The work on this publication has been sponsored by the OpenScout (Skill based scouting of open user-generated and community-improved content for management education and training) Targeted Project, funded by the European Commission's 7th Framework Programme, contract ECP-2008-EDU-42801.
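    The harvesting step such federations build on is typically implemented with the OAI-PMH protocol, in which a harvester requests `ListRecords` responses and extracts Dublin Core fields. Whether OpenScout used OAI-PMH is not stated in the abstract; the sketch below is a minimal, hypothetical illustration of parsing one such response, with a fabricated sample record.

    ```python
    # Minimal sketch: extracting Dublin Core titles from an OAI-PMH
    # ListRecords response, the kind of step a metadata harvester for a
    # federated repository network would perform. The SAMPLE response is
    # fabricated for illustration.
    import xml.etree.ElementTree as ET

    DC = "{http://purl.org/dc/elements/1.1/}"  # Dublin Core namespace

    SAMPLE = """<?xml version="1.0"?>
    <OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
      <ListRecords>
        <record>
          <metadata>
            <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                       xmlns:dc="http://purl.org/dc/elements/1.1/">
              <dc:title>Sample learning object</dc:title>
              <dc:language>en</dc:language>
            </oai_dc:dc>
          </metadata>
        </record>
      </ListRecords>
    </OAI-PMH>"""

    def extract_titles(xml_text: str) -> list[str]:
        """Pull all dc:title values out of a ListRecords response."""
        root = ET.fromstring(xml_text)
        return [el.text for el in root.iter(f"{DC}title")]

    print(extract_titles(SAMPLE))  # ['Sample learning object']
    ```

    A real harvester would additionally page through `resumptionToken`s and store the full record set, but the namespace-qualified extraction shown here is the core of mapping harvested records into a common application profile.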

    Experiences of Revalidating the Undergraduate and Postgraduate Courses Within the Information Systems Curricula at University of Westminster, UK

    Information Systems (IS) is probably the most frequently used name for a variety of academic programs focusing on applied information technology, whose curricula are offered by a range of schools and university departments. For more than a decade we have successfully run BSc and MSc courses in IS at the University of Westminster, UK, within the IS department of the Cavendish School of Computer Science. Major developments in curriculum design, relating to subject content, the construction of courses and teaching/learning strategies, have triggered changes in our IS programs, which were implemented through the IS course reviews in 2002. This paper addresses the purpose of course reviews within the UK Higher Education (HE) environment, gives a rationale for our curriculum changes, describes the revalidated IS courses at both BSc and MSc levels, including our teaching and assessment strategies, and comments on our progress to date.

    Solving Complex Logistics Problems with Multi-Artificial Intelligent System

    The economy, which has become more information-intensive, more global and more technologically dependent, is undergoing dramatic changes, and the role of logistics is becoming ever more important. In logistics, the objective of service providers is to fulfill all customers' demands while adapting to the dynamic changes of logistics networks, so as to achieve a higher degree of customer satisfaction and therefore a higher return on investment. To provide high-quality service, knowledge and information sharing among departments is a must in this fast-changing market environment. In particular, artificial intelligence (AI) technologies have attracted significant attention for enhancing the agility of supply chain management and logistics operations. In this research, a multi-artificial-intelligence system named the Integrated Intelligent Logistics System (IILS) is proposed. The objective of IILS is to provide quality logistics solutions that achieve high levels of service performance in the logistics industry. The new feature of this agile intelligent system is the incorporation of intelligence modules combining the capabilities of case-based reasoning, multi-agent systems, fuzzy logic and artificial neural networks, optimizing organizational performance.

    Metadata quality issues in learning repositories

    Metadata lies at the heart of every digital repository project, in the sense that it defines and drives the description of the digital content stored in repositories. Metadata allows content to be successfully stored, managed and retrieved, but also preserved in the long term. Despite the widely recognized importance of metadata in digital repositories, studies indicate that metadata quality is relatively low in most of them. Metadata quality is loosely defined as "fitness for purpose", meaning that low-quality metadata cannot fulfill its purpose, which is to allow the successful storage, management and retrieval of resources. In practice, low metadata quality leads to ineffective searches for content, ones that recall the wrong resources or, even worse, no resources at all, making them invisible to the intended user, the "client" of each digital repository. The present dissertation approaches this problem by proposing a comprehensive metadata quality assurance method, the Metadata Quality Assurance Certification Process (MQACP). The basic idea is to propose a set of methods that can be deployed throughout the lifecycle of a repository to ensure that the metadata generated by content providers is of high quality. These methods have to be straightforward and simple to apply, with measurable results, and adaptable with minimum effort so that they can easily be used in different contexts. This set of methods is described analytically, taking into account the actors needed to apply them, describing the tools needed and defining the anticipated outcomes. To test our proposal, we applied it to a Learning Federation of repositories, from day one of its existence until it reached maturity and regular operation.
    We supported the metadata creation process throughout the different phases of the repositories involved by setting up specific experiments using the methods and tools of the MQACP. Throughout each phase, we measured the resulting metadata quality to certify that the anticipated improvement actually took place. Through these phases, the cost of applying the MQACP was also measured, to provide a comparison basis for future applications. Based on the success of this first application, we validated the MQACP approach by applying it to two further cases, a Cultural and a Research Federation of repositories, allowing us to prove the transferability of the approach to cases that present some similarities with the initial one but also significant differences. The results showed that the MQACP was successfully adapted to the new contexts with minimal adaptation, producing similar results at comparable costs. In addition, looking closer at the common experiments carried out in each phase of each use case, we identified interesting patterns in the behavior of content providers that can be researched further. The dissertation concludes with a set of future research directions arising from the cases examined, which can be explored to support the next version of the MQACP in terms of the methods deployed, the tools used to assess metadata quality and the cost analysis of the MQACP methods.
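    Measuring metadata quality is usually operationalized through metrics such as completeness, i.e. the fraction of a chosen element set that a record actually fills in. The sketch below is illustrative only: the field list is a small Dublin Core subset chosen for the example, not the MQACP's actual metric suite.

    ```python
    # Minimal sketch of a completeness metric for repository metadata:
    # the share of required fields that a record fills in with a
    # non-empty value. The REQUIRED_FIELDS list is an illustrative
    # Dublin Core subset, not the MQACP's actual element set.
    REQUIRED_FIELDS = ["title", "description", "subject", "language", "rights"]

    def completeness(record: dict) -> float:
        """Return the fraction of required fields that are non-empty."""
        filled = sum(1 for f in REQUIRED_FIELDS if record.get(f))
        return filled / len(REQUIRED_FIELDS)

    record = {
        "title": "Metadata quality issues in learning repositories",
        "description": "Doctoral dissertation on metadata quality.",
        "subject": "",      # empty string counts as missing
        "language": "en",   # "rights" is absent entirely
    }
    print(completeness(record))  # 3 of 5 fields filled -> 0.6
    ```

    Tracking such a score per repository phase gives exactly the kind of measurable, comparable result the abstract describes: an improvement between phases can be certified by comparing the aggregate scores before and after an MQACP intervention.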

    Eco‑evo‑devo and iterated learning : towards an integrated approach in the light of niche construction

    Get PDF
    In this paper we argue that ecological evolutionary developmental biology (eco-evo-devo) accounts of cognitive modernity are compatible with cultural evolution theories of language built upon iterated learning models. Cultural evolution models show that the emergence of near-universal properties of language does not require the preexistence of strong specific constraints. Instead, the development of general abilities unrelated to informational specificity, such as the copying of complex signals and the sharing of communicative intentions, is required for cultural evolution to yield specific properties, such as language structure. We argue that eco-evo-devo provides the appropriate conceptual background to ground an account of the many interconnected genetic, environmental and developmental factors that facilitated the emergence of an organic system able to develop language through the iterated transmission of information. We use the concept of niche construction to connect evolutionary developmental accounts of sensory-guided motor capacities with cultural evolution guided by iterated learning models. This integrated theoretical model aims to build bridges between biological and cultural approaches.

    Semantic and pragmatic characterization of learning objects

    Doctoral thesis. Informatics Engineering. Universidade do Porto, Faculdade de Engenharia. 201