
    A meta-semantic language for smart component-adapters

    The issues confronting the software development community today are significantly different from the problems it faced only a decade ago. Advances in software development tools and technologies during the last two decades have greatly enhanced the ability to leverage large amounts of software for creating new applications through the reuse of software libraries and application frameworks. The problems facing organizations today are increasingly focused on systems integration and the creation of information flows. Software modeling based on the assembly of reusable components has not been successfully implemented on a wide scale. Several models for reusable software components have been suggested, but they primarily address the wiring-level connectivity problem. While this is necessary, it is not sufficient to support an automated process of component assembly. Two critical issues remain unresolved: (1) semantic modeling of components, and (2) a deployment process that supports automated assembly. The first issue can be addressed through domain-based standardization, which would make it possible for independent developers to produce interoperable components based on a common vocabulary and a shared understanding of the problem domain. This is important not only for providing a semantic basis for developing components but also for interoperability between systems. The second issue is important for two reasons: (a) it eliminates the need for developers to be involved in the final assembly of software components, and (b) it provides a basis for a development process that can potentially be driven by the user. Resolving issues (1) and (2) requires a late binding mechanism between components based on meta-protocols. In this dissertation we address these issues by proposing a generic framework for the development of software components and an interconnection language, COMPILE, for the specification of software systems from components. The computational model of the COMPILE language is based on late and dynamic binding of the components' control, data, and function properties. The use of asynchronous callbacks for method invocation allows control binding among components to be late and dynamic. Data exchanged between components is defined through a meta-language that can describe the semantics of the information without being bound to any specific programming language's type representation. Late binding to functions is accomplished by maintaining domain-based semantics as component meta-information. This information allows clients of components to map a generic requested service to specific functions.
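
    The late, dynamic control binding described above can be pictured with a short sketch. The following Python fragment is a hypothetical illustration only, not part of COMPILE: the registry, the temperature-conversion service and all other names are invented, the dictionary payload stands in for the language-neutral meta-language description of data, and the asynchronous callback stands in for the late control binding the abstract describes.

        import asyncio

        # Hypothetical sketch of late, dynamic binding between components.
        # None of these names come from COMPILE itself.

        class ServiceRegistry:
            """Maps domain-level (semantic) service names to concrete component functions."""

            def __init__(self):
                self._providers = {}  # semantic service name -> callable

            def register(self, semantic_name, func):
                self._providers[semantic_name] = func

            def resolve(self, semantic_name):
                # Binding happens at request time (late binding), not at build time.
                return self._providers[semantic_name]

        async def fahrenheit_to_celsius(payload, callback):
            # Data arrives as a language-neutral dictionary rather than as a
            # programming-language-specific type.
            celsius = (payload["value"] - 32) * 5 / 9
            await callback({"unit": "celsius", "value": celsius})

        async def main():
            registry = ServiceRegistry()
            registry.register("temperature-conversion", fahrenheit_to_celsius)

            results = []

            async def on_result(result):
                # Asynchronous callback: the caller does not block on the provider.
                results.append(result)

            service = registry.resolve("temperature-conversion")  # late binding
            await service({"unit": "fahrenheit", "value": 212.0}, on_result)
            print(results)  # [{'unit': 'celsius', 'value': 100.0}]

        asyncio.run(main())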

    Early aspects: aspect-oriented requirements engineering and architecture design

    This paper reports on the third Early Aspects: Aspect-Oriented Requirements Engineering and Architecture Design Workshop, which was held in Lancaster, UK, on March 21, 2004. The workshop included a presentation session and working sessions in which particular topics on early aspects were discussed. The primary goal of the workshop was to focus on the challenges of defining methodical software development processes for aspects from early on in the software life cycle, and to explore the potential of proposed methods and techniques to scale up to industrial applications.

    SOFTWARE REUSE: SURVEY AND RESEARCH DIRECTIONS

    Software reuse is the use of software resources from all stages of the software development process in new applications. Given the high cost and difficulty of developing high-quality software, the idea of capitalizing on previous software investments is appealing. However, software reuse has not been as effective as expected and has not been very broadly or systematically used in industry. This paper surveys recent software reuse research using a framework that helps identify and organize the many factors that must be considered to achieve the benefits of software reuse in practice. We argue that software reuse needs to be viewed in the context of a total systems approach that addresses a broad range of technical, economic, managerial, organizational and legal issues, and we conclude with a summary of the major research issues in each of these areas.
    Information Systems Working Papers Series

    A generic architecture for interactive intelligent tutoring systems

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University, 07/06/2001. This research is focused on developing a generic intelligent architecture for an interactive tutoring system. A review of the literature was conducted in the areas of instructional theories, cognitive and social views of learning, intelligent tutoring systems development methodologies, and knowledge representation methods. As a result, a generic ITS development architecture (GeNisa) has been proposed, which combines the features of knowledge-based systems (KBS) with object-oriented methodology. The GeNisa architecture consists of the following components: a tutorial events communication module, which encapsulates the interactive processes and other independent computations between different components; a software design toolkit; and an autonomous knowledge-acquisition component based on a probabilistic knowledge base. A graphical application development environment provides tools to support application development and learning environments that use case scenarios as a basis for instruction. The generic architecture is designed to support client-side execution in a Web browser environment, and further testing will show that it can disseminate applications over the World Wide Web. Such an architecture can be adapted to different teaching styles and domains, and automatically reusing instructional materials can reduce the effort (and hence the cost and time) required of the courseware developer in authoring new materials. GeNisa was implemented using Java scripts and subsequently evaluated at various commercial and academic organisations. Parameters chosen for the evaluation include quality of courseware, relevancy of case scenarios, portability to other platforms, ease of use, content, user-friendliness, screen display, clarity, topic interest, and overall satisfaction with GeNisa. In general, the evaluation focused on the novel characteristics and performance of the GeNisa architecture in comparison with other ITSs, and the results obtained are discussed and analysed. On the basis of the experience gained during the literature research and the development and evaluation of GeNisa, a generic methodology for ITS development is proposed, as well as requirements for the further development of ITS tools. Finally, conclusions are drawn and areas for further research are identified.
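
    The tutorial events communication module described above suggests a publish/subscribe style of decoupling between the tutoring components. The sketch below is a minimal, hypothetical Python illustration of that idea; the class, event and handler names are assumptions and do not reflect GeNisa's actual implementation.

        # Minimal publish/subscribe sketch; all names here are hypothetical.

        class TutorialEventBus:
            """Routes named tutorial events so components stay decoupled."""

            def __init__(self):
                self._subscribers = {}  # event name -> list of handlers

            def subscribe(self, event, handler):
                self._subscribers.setdefault(event, []).append(handler)

            def publish(self, event, **payload):
                for handler in self._subscribers.get(event, []):
                    handler(**payload)

        def update_learner_model(learner_id, answer_correct):
            # Stand-in for updating a probabilistic learner/knowledge model.
            print(f"learner {learner_id}: observed correct={answer_correct}")

        def select_next_case_scenario(learner_id, answer_correct):
            # Stand-in for choosing the next case scenario to present.
            print(f"learner {learner_id}: selecting next case scenario")

        bus = TutorialEventBus()
        bus.subscribe("answer-submitted", update_learner_model)
        bus.subscribe("answer-submitted", select_next_case_scenario)
        bus.publish("answer-submitted", learner_id="s01", answer_correct=True)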

    Recommender Systems Based on Deep Learning Techniques

    Master's thesis in Data Science, Universidade de Lisboa, Faculdade de Ciências, 2020. The current increase in available options makes individuals feel overwhelmed whenever facing a decision, resulting in a frustrating and time-consuming user experience. Recommender systems are a fundamental tool to solve this issue, filtering out the options that are most likely to be irrelevant for each person. Developing these systems presents a vast number of challenges, making it a difficult task to accomplish. To this end, various frameworks have been proposed to aid their development, helping to reduce development costs by offering reusable tools as well as implementations of common strategies and popular models. However, it is still hard to find a framework that also provides full abstraction over data set conversion, support for deep learning-based approaches, extensible models, and reproducible evaluations. This work introduces DRecPy, a novel framework that not only provides several modules to avoid repetitive development work, but also assists practitioners with the above challenges.
    DRecPy contains modules to deal with: data set import and conversion tasks; splitting data sets for model training, validation, and testing; sampling data points using distinct strategies; and creating extensible and complex recommenders that follow a defined but flexible model structure; together with many evaluation procedures that provide deterministic results by default. To evaluate this new framework, its consistency is analyzed by comparing the results generated by DRecPy against the results published by others using the same algorithms. Also, to show that DRecPy can be a valuable tool for the recommender systems community, several framework characteristics, such as extensibility, reusability, and reproducibility, are evaluated and compared against existing tools.
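
    As a rough illustration of the workflow such a framework automates (data set loading, deterministic splitting, model training and evaluation), the sketch below uses plain Python with invented names. It deliberately does not use DRecPy's actual API; the fixed random seed is what makes the split, and hence the evaluation, reproducible.

        import random

        # Generic sketch of a reproducible recommender evaluation; every name
        # below (load_interactions, PopularityRecommender, hit_rate_at_k) is
        # hypothetical and unrelated to DRecPy's real modules.

        def load_interactions():
            # Stand-in for the data set import/conversion step.
            return [("u1", "i1"), ("u1", "i2"), ("u2", "i2"), ("u2", "i3"), ("u3", "i1")]

        def train_test_split(interactions, test_ratio=0.2, seed=42):
            rng = random.Random(seed)  # fixed seed -> deterministic split
            shuffled = interactions[:]
            rng.shuffle(shuffled)
            cut = int(len(shuffled) * (1 - test_ratio))
            return shuffled[:cut], shuffled[cut:]

        class PopularityRecommender:
            """Trivial baseline: always recommend the globally most popular items."""

            def fit(self, interactions):
                counts = {}
                for _, item in interactions:
                    counts[item] = counts.get(item, 0) + 1
                self.ranked = sorted(counts, key=counts.get, reverse=True)

            def recommend(self, user, k=2):
                return self.ranked[:k]

        def hit_rate_at_k(model, test, k=2):
            hits = sum(1 for user, item in test if item in model.recommend(user, k))
            return hits / len(test)

        train, test = train_test_split(load_interactions())
        model = PopularityRecommender()
        model.fit(train)
        print(f"hit rate@2: {hit_rate_at_k(model, test):.2f}")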

    A hybrid e-learning framework: Process-based, semantically-enriched and service-oriented

    Despite recent innovations in e-Learning, much development is needed to ensure a better learning experience for everyone and to bridge the research gap in current state-of-the-art e-Learning artefacts. Contemporary e-learning artefacts have several limitations. First, they offer inadequate variations of adaptivity, since their recommendations are limited to e-learning resources, peers or communities. Second, they are often overwhelmed with technology at the expense of proper pedagogy and the learning theories underpinning e-learning practices. Third, they do not comprehensively capture e-learning experiences, as their focus shifts to e-learning activities instead of e-learning processes. In reality, learning is a complex process that includes various activities and interactions between different roles to achieve certain goals in a continuously evolving environment. Fourth, they tend towards legacy systems and lack agility and flexibility in their structure and design. To respond to these limitations, this research investigates the effectiveness of combining three advanced technologies (business process modelling and enactment, semantics, and Service-Oriented Computing (SOC)) with learning pedagogy in order to enhance the e-learner experience. The key design artefact of this research is the HeLPS e-Learning Framework, a Hybrid e-Learning Framework that is Process-based, Semantically-enriched and Service-Oriented-enabled. In this framework, a generic e-learning process has been developed bottom-up by surveying a wide range of e-learning models (i.e., practical artefacts) and their underpinning pedagogies and concepts (i.e., theories). Furthermore, an e-Learning Meta-Model has been developed in order to capture the semantics of the e-learning domain and its processes. These processes have been formally modelled and dynamically enacted using a service-oriented-enabled architecture. The framework has been evaluated using a concern-based evaluation employing both static and dynamic approaches. The HeLPS e-Learning Framework and its components have been evaluated by applying a data-driven approach and an artificially constructed case study to check its effectiveness in capturing the semantics, enriching e-learning processes and deriving services that can enhance the e-learner experience. Results revealed the effectiveness of combining the above-mentioned technologies to enhance the e-learner experience, and further research directions have been suggested. This research contributes to enhancing the e-learner experience by making e-learning artefacts driven by pedagogy and informed by the latest technologies. One major novel contribution of this research is the introduction of a layered architectural framework (HeLPS) that combines business process modelling and enactment, semantics and SOC. Another novel contribution is adopting a process-based approach in the e-learning domain by identifying these processes and developing a generic business process model from a set of related e-learning business process models that share the same goals and associated objectives. A third key contribution is the development of the e-Learning Meta-Model, which captures a high-level, abstract view of the learning domain and encapsulates various domain rules using the Semantic Web Rule Language.
    A fourth contribution is promoting the utilisation of service orientation in e-learning through a semantically enriched approach to identifying and discovering web services from e-learning business process models. Fifth, an e-Learner Experience Model (eLEM) and an e-Learning Capability Maturity Model (eLCMM) have been developed, where the former aims at identifying and quantifying the e-learner experience and the latter represents a well-defined evolutionary plateau towards achieving a mature e-learning process from a technological perspective. Both models have been combined with a newly developed, data-driven Validation and Verification Model to form a Concern-based Evaluation Approach for e-Learning artefacts, which constitutes a further contribution.
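
    The semantically enriched identification of web services from process models can be pictured with a small sketch. The Python fragment below is a hypothetical illustration under assumed annotations, concepts and service names; it is not taken from the HeLPS framework.

        # Hypothetical sketch: match annotated process activities to services
        # that declare (realise) the same domain concept.

        process_activities = [
            {"name": "Assess prior knowledge", "concept": "LearnerAssessment"},
            {"name": "Recommend learning resources", "concept": "ResourceRecommendation"},
        ]

        service_registry = [
            {"service": "QuizService", "realises": {"LearnerAssessment"}},
            {"service": "RecommenderService", "realises": {"ResourceRecommendation"}},
            {"service": "ForumService", "realises": {"PeerDiscussion"}},
        ]

        def discover_services(activities, registry):
            """Return, for each activity, the services whose annotations cover its concept."""
            matches = {}
            for activity in activities:
                matches[activity["name"]] = [
                    entry["service"]
                    for entry in registry
                    if activity["concept"] in entry["realises"]
                ]
            return matches

        print(discover_services(process_activities, service_registry))
        # -> {'Assess prior knowledge': ['QuizService'], 'Recommend learning resources': ['RecommenderService']}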

    Serious games for learning : a model and a reference architecture for efficient game development

