
    Towards a Reference Architecture with Modular Design for Large-scale Genotyping and Phenotyping Data Analysis: A Case Study with Image Data

    With the rapid advancement of computing technologies, scientific research communities have been making extensive use of cloud-based software tools and applications. Cloud-based applications let users access software from a web browser, relieving them of installing anything in their desktop environment. For example, Galaxy, GenAP, and iPlant Collaborative are popular cloud-based systems for scientific workflow analysis in the plant Genotyping and Phenotyping domain. These systems are used to conduct research, devise new techniques, and share computer-assisted analysis results among collaborators. Researchers need to integrate their new workflows/pipelines, tools, or techniques with the base system over time, and large-scale data must be processed within a reasonable timeline for effective analysis. Big Data technologies have recently emerged to facilitate large-scale data processing on commodity hardware; among the systems above, only GenAP uses them, and only for specific cases. The structure of such cloud-based systems is highly variable and complex, so software architects and developers must consider quite different properties and challenges during development and maintenance compared with traditional business/service-oriented systems. Recent studies report that software engineers and data engineers struggle to develop analytic tools that support large-scale, heterogeneous data analysis. Unfortunately, software researchers have given little attention to well-defined methodologies and frameworks for the flexible design of cloud systems in the Genotyping and Phenotyping domain, so more effective design methodologies and frameworks are urgently needed for developing cloud-based Genotyping and Phenotyping analysis systems that also support large-scale data processing. In this thesis, we conduct several studies to devise a stable reference architecture and modularity model for software developers and data engineers in the Genotyping and Phenotyping domain. In the first study, we analyze the architectural changes of existing candidate systems to identify stability issues. We then extract architectural patterns from the candidate systems and propose a conceptual reference architectural model. Finally, we present a case study on the modularity of computation-intensive tasks as an extension of data-centric development, showing that the data-centric modularity model is at the core of the flexible development of a Genotyping and Phenotyping analysis system. Our proposed model and a case study with thousands of images provide a useful knowledge base for software researchers, developers, and data engineers building cloud-based Genotyping and Phenotyping analysis systems.
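To make the data-centric modularity idea concrete, the following is a minimal illustrative sketch (not the thesis's actual code): each analysis step is a pluggable top-level module, and the pipeline scales over an image collection with Python's multiprocessing. All module names and trait values are hypothetical.

# Minimal sketch of a data-centric, modular image-analysis pipeline.
# Every step name below is hypothetical, not a module from the thesis.
from functools import partial
from multiprocessing import Pool


def load_image(path: str) -> dict:
    """Hypothetical loader; a real module would decode pixels here."""
    return {"path": path, "pixels": None}


def segment(img: dict) -> dict:
    """Hypothetical segmentation module."""
    img["mask"] = "plant-mask"  # placeholder segmentation result
    return img


def extract_traits(img: dict) -> dict:
    """Hypothetical trait extraction (the computation-intensive step)."""
    return {"path": img["path"], "leaf_area": 42.0}  # placeholder trait


def run_pipeline(item, steps):
    """Apply the configured modules in order; swapping one module never
    touches the rest of the pipeline, which is the point of the model."""
    for step in steps:
        item = step(item)
    return item


if __name__ == "__main__":
    paths = [f"plot_{i}.png" for i in range(8)]  # stands in for thousands of images
    pipeline = partial(run_pipeline, steps=[load_image, segment, extract_traits])
    with Pool(4) as pool:  # scale the computation-intensive work across cores
        results = pool.map(pipeline, paths)
    print(results[0])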

    A process model in platform independent and neutral formal representation for design engineering automation

    An engineering design process, as part of product development (PD), needs to satisfy ever-changing customer demands by striking a balance between time, cost, and quality. To achieve faster lead times, improved quality, and reduced PD costs for increased profits, automation methods have been developed with the help of virtual engineering. There are various methods of achieving Design Engineering Automation (DEA) with Computer-Aided (CAx) tools such as CAD/CAE/CAM, Product Lifecycle Management (PLM), and Knowledge Based Engineering (KBE). For example, Computer Aided Design (CAD) tools enable Geometry Automation (GA), while PLM systems allow product knowledge to be shared and exchanged throughout the PD lifecycle. Traditional automation methods are specific to individual products, hard-coded, and bound to proprietary tool formats; existing CAx tools and PLM systems thus offer bespoke islands of automation, in contrast to KBE. KBE, as a design method, incorporates complete design intent by including re-usable geometric and non-geometric product knowledge as well as engineering process knowledge for DEA, covering processes such as mechanical design, analysis, and manufacturing. An extensive literature review identified a research gap: there is no generic, structured method for knowledge modelling, both informal and formal, of the mechanical design process together with manufacturing knowledge (DFM/DFA) as part of model-based systems engineering (MBSE) for DEA with a KBE approach. In particular, there is no structured technique that uses platform-independent, neutral formal standards for DEA with generative modelling of the mechanical product design process and DFM while preserving semantics; a neutral formal representation in a computer- or machine-understandable format enables open-standard usage. This thesis contributes to knowledge by addressing this gap in two steps:
• In the first step, a coherent process model, GPM-DEA, is developed as part of MBSE for modelling mechanical design with manufacturing knowledge. It takes a hybrid approach, building on the strengths of existing modelling standards such as IDEF0, UML, and SysML, with additional constructs from the author's metamodel. The process model is highly granular, capturing complex interdependencies such as activity, object, function, and rule associations, and it records the effect of the process on the product at both the component and geometric attribute levels.
• In the second step, a method is provided to map the schema of the process model to equivalent platform-independent, neutral formal standards using an OWL/SWRL ontology, developed with the Protégé tool. This enables machine interpretability with semantic clarity for DEA with generative modelling, by building queries and reasoning on a set of generic SWRL functions developed by the author.
Model development was performed with the aid of literature analysis and pilot use cases. Experimental verification with test use cases confirmed that reasoning and querying over the formal axioms generate accurate results. Among the other key strengths, the knowledge base is generic, scalable, and extensible, and hence supports re-usability and wider design-space exploration. The generative modelling capability allows the model to generate activities and objects based on the functional requirements of the mechanical design process with DFM/DFA, and rules based on logic. Through an application programming interface, a platform-specific DEA system, such as a KBE tool or a CAD tool enabling GA, or a web page incorporating engineering knowledge for decision support, can consume the relevant parts of the knowledge base.
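As an illustration of the second step, the sketch below encodes a tiny process-model fragment as an OWL ontology with one generic SWRL rule, using the owlready2 library (an assumption made here; the thesis develops its ontology in the Protégé tool). All class, property, and individual names are hypothetical.

# Illustrative sketch with owlready2; requires Java for the Pellet reasoner.
from owlready2 import Thing, Imp, get_ontology, sync_reasoner_pellet

onto = get_ontology("http://example.org/gpm-dea.owl")  # hypothetical IRI

with onto:
    class Activity(Thing): pass          # a design-process activity
    class DesignObject(Thing): pass      # an object the process acts on
    class Function(Thing): pass          # a required/provided function

    class requiresFunction(Activity >> Function): pass
    class providesFunction(DesignObject >> Function): pass
    class usesObject(Activity >> DesignObject): pass

    # A generic SWRL rule: an activity uses any object that provides
    # a function the activity requires.
    rule = Imp()
    rule.set_as_rule(
        "Activity(?a), requiresFunction(?a, ?f), "
        "providesFunction(?o, ?f) -> usesObject(?a, ?o)"
    )

    drilling = Activity("drilling")
    make_hole = Function("make_hole")
    drill_bit = DesignObject("drill_bit")
    drilling.requiresFunction = [make_hole]
    drill_bit.providesFunction = [make_hole]

# Reasoning infers drilling.usesObject = [drill_bit] from the rule above.
sync_reasoner_pellet(infer_property_values=True)
print(drilling.usesObject)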

    Towards using intelligent techniques to assist software specialists in their tasks

    Automation and intelligence are major preoccupations in the field of software engineering. With the rapid evolution of Artificial Intelligence, researchers and industry have turned to Machine Learning and Deep Learning models to optimize tasks, automate pipelines, and build intelligent systems. The capabilities of Artificial Intelligence make it possible to imitate, and in some cases outperform, human intelligence, and to automate manual tasks while raising accuracy, quality, and efficiency. Accomplishing software-related tasks requires specific knowledge and skills; thanks to the powerful capabilities of Artificial Intelligence, we can infer that expertise from historical experience using machine learning techniques. This would alleviate the burden on software specialists and allow them to focus on valuable tasks. In particular, Model-Driven Engineering is an evolving field that aims to raise the abstraction level of languages and to focus more on domain specificities. It shifts the effort put into implementation and low-level programming to a higher point of view focused on design, architecture, and decision making, thereby increasing the efficiency and productivity of creating applications. For its part, the design of metamodels is a substantial task in Model-Driven Engineering, and it is important to maintain high-quality metamodels because they constitute a primary and fundamental artifact. However, bad design choices, as well as repetitive design modifications due to the evolution of requirements, can deteriorate the quality of a metamodel, and the accumulation of bad design choices and quality degradation can lead to negative outcomes in the long term. Refactoring metamodels is therefore a very important task: it aims to improve and maintain good quality characteristics of metamodels, such as maintainability, reusability, and extendibility. Moreover, refactoring metamodels is complex, especially when dealing with large designs, so automating the task and assisting architects is advantageous, since they can then focus on more valuable tasks that require human intuition. In this thesis, we propose a cartography of the tasks that could be automated or improved using Artificial Intelligence techniques. We then select the metamodeling task and tackle the problem of metamodel refactoring. We suggest two different approaches: a first approach that uses a genetic algorithm to optimize a set of quality attributes and recommend candidate metamodel refactoring solutions, and a second approach, based on mathematical logic, that defines the specification of an input metamodel, encodes the quality attributes and the absence of design smells as a set of constraints, and satisfies these constraints using Alloy.
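To illustrate the first approach, here is a minimal, self-contained sketch of a genetic algorithm searching for a refactoring sequence that maximizes a weighted quality score. The refactoring catalogue, quality effects, and weights are toy assumptions, not the thesis's actual operators or quality model.

# Toy genetic algorithm over sequences of metamodel refactorings.
import random

REFACTORINGS = ["extract_superclass", "pull_up_attribute",
                "remove_duplicate_feature", "flatten_hierarchy"]

# Hypothetical effect of each refactoring on (maintainability, reusability).
EFFECT = {"extract_superclass": (2, 3), "pull_up_attribute": (1, 2),
          "remove_duplicate_feature": (3, 1), "flatten_hierarchy": (-1, 2)}


def fitness(solution):
    """Weighted sum of quality-attribute gains, penalizing long sequences."""
    maint = sum(EFFECT[r][0] for r in solution)
    reuse = sum(EFFECT[r][1] for r in solution)
    return 0.6 * maint + 0.4 * reuse - 0.5 * len(solution)


def crossover(a, b):
    """Single-point crossover of two refactoring sequences."""
    cut = random.randint(1, min(len(a), len(b)) - 1)
    return a[:cut] + b[cut:]


def mutate(sol):
    """Replace one randomly chosen refactoring in the sequence."""
    sol = sol[:]
    sol[random.randrange(len(sol))] = random.choice(REFACTORINGS)
    return sol


def evolve(generations=50, pop_size=20):
    pop = [[random.choice(REFACTORINGS) for _ in range(random.randint(2, 6))]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # elitist selection of the best half
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)


print(evolve())  # a recommended refactoring sequence under the toy model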

    Full Stack Application Generation for Insurance Sales based on Product Models

    The insurance market is segregated into various lines of business, such as Life, Health, and Property & Casualty, among others. This segregation allows product engineers to focus on the rules and details of a specific insurance area. However, having different conceptual models adds complexity when a generic presentation-layer application has to be continuously adapted to work with these distinct models. With the objective of streamlining these continuous adaptations in an existing presentation layer, this work investigates and proposes the use of code generators to allow complete application generation, able to communicate with a given insurance product model. To this end, this work compares and combines different code-generation tools to accomplish the desired application generation. An existing framework is chosen to create several software layers and their respective components: the classes that represent the domain model, database mappings, a service layer, a REST Application Program Interface (API), and a rich JavaScript-based presentation layer. In conclusion, this project demonstrates that the proposed tool can generate an application already adapted to, and able to communicate with, the provided conceptual model, proving that this autonomous process is faster than the current manual development processes for adapting a presentation layer to an insurance product model.
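As a hint of how such generation works, the sketch below emits a domain class and a REST-style endpoint stub from a product-model description using simple string templates. The model layout and the generated code shape are hypothetical; the actual project relies on an existing code-generation framework.

# Toy template-based generator: product model in, presentation-layer code out.
from string import Template

CLASS_TMPL = Template("""\
class $name:
    def __init__(self, $args):
$assigns

# Minimal REST-style endpoint stub for the generated entity.
def get_$route(entity_id):
    return {"entity": "$name", "id": entity_id}
""")


def generate(model: dict) -> str:
    """Render a domain class and endpoint stub from a product-model dict."""
    fields = model["fields"]
    args = ", ".join(fields)
    assigns = "\n".join(f"        self.{f} = {f}" for f in fields)
    return CLASS_TMPL.substitute(name=model["name"],
                                 route=model["name"].lower(),
                                 args=args, assigns=assigns)


# A toy Life product model; a real one would come from the product engine.
life_model = {"name": "LifePolicy",
              "fields": ["holder", "premium", "coverage"]}
print(generate(life_model))  # emits ready-to-run presentation-layer code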

    Intelligent IoT and Dynamic Network Semantic Maps for more Trustworthy Systems

    As technology evolves, the Internet of Things (IoT) concept is gaining importance as a foundation for optimal connectivity between people and things. For this to happen, and to ease the integration of sensors and other devices into these technological environments (or networks), configuration is a key process: it promotes interoperability between heterogeneous devices and provides strategies and processes to enhance the network's capabilities. Optimizing this important process of creating a truly dynamic network must be based on models that standardize communication patterns, protocols, and technologies between the sensors. Despite being a major trend today, many obstacles still arise when implementing an intelligent dynamic network: existing models are not as widely adopted as expected, and semantics are often not properly represented, resulting in complex and unsuitably long configuration times. This work therefore aims to identify suitable models and ontologies for architectures and semantic maps that allow management and redundancy based on information from the whole network, without compromising performance, and to develop a competent configuration of sensors for integration into a typical contemporary industrial dynamic network.
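As a toy illustration of a semantic map that supports redundancy, the following sketch registers heterogeneous sensors with semantic metadata and finds devices that can stand in for one another. The vocabulary is a hypothetical simplification of a real ontology.

# Tiny in-memory "semantic map" for a dynamic sensor network.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Sensor:
    sensor_id: str
    observes: str      # observed property, e.g. "Temperature"
    location: str      # semantic location, e.g. "Cell-A"
    protocol: str      # e.g. "MQTT", "CoAP"


@dataclass
class SemanticMap:
    sensors: List[Sensor] = field(default_factory=list)

    def register(self, sensor: Sensor) -> None:
        """Configuration step: add a device with its semantic description."""
        self.sensors.append(sensor)

    def redundant_for(self, sensor: Sensor) -> List[Sensor]:
        """Devices observing the same property at the same location."""
        return [s for s in self.sensors
                if s.sensor_id != sensor.sensor_id
                and s.observes == sensor.observes
                and s.location == sensor.location]


smap = SemanticMap()
t1 = Sensor("t1", "Temperature", "Cell-A", "MQTT")
t2 = Sensor("t2", "Temperature", "Cell-A", "CoAP")  # same role, other protocol
smap.register(t1)
smap.register(t2)
print([s.sensor_id for s in smap.redundant_for(t1)])  # ['t2'] -> failover candidate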

    Tagungsband Dagstuhl-Workshop MBEES: Modellbasierte Entwicklung eingebetteter Systeme 2005 (Proceedings of the Dagstuhl Workshop MBEES: Model-Based Development of Embedded Systems 2005)


    APIbuster Testing Framework

    In recent years, not only has the Service-Oriented Architecture (SOA) become a popular paradigm for the development of distributed systems, but there has also been significant progress in their testing. Nonetheless, the multiple testing platforms available fail to fulfil the specific requirements of the Moodbuster platform from Instituto de Engenharia de Sistemas e Computadores, Tecnologia e Ciência (INESC TEC): providing a systematic process to update the test knowledge and to configure and test several Representational State Transfer (REST) Application Programming Interface (API) instances. Moreover, the solution should itself be implemented as another REST API. The goal is to design, implement, and test a platform dedicated to the testing of REST API instances. This new testing platform should allow the addition of new instances to test and the configuration and execution of sets of dedicated tests, as well as collect and store the results. Furthermore, it should support updating the testing knowledge with new test categories and properties on a needs basis. This dissertation describes the design, development, and testing of APIbuster, a platform dedicated to the testing of REST API instances such as Moodbuster. The approach relies on creating the test-knowledge ontology and converting it into the persistent data model, followed by deployment of the platform (REST API and user dashboard) through a data-modelling pipeline. The APIbuster prototype was thoroughly and successfully tested along the functional, performance, load, and usability dimensions. To validate the implementation, functional and performance tests were performed for each API call. To ascertain the scalability of the platform, the load tests focused on the most demanding functionality. Finally, a standard usability questionnaire was distributed among users to establish the platform's usability score. The results show that the data-modelling pipeline supports the creation and subsequent updating of the testing platform with new test attributes and classes. The pipeline not only converts the testing-knowledge ontology into the corresponding persistent data model but also generates a fully operational testing platform instance.
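For flavour, a minimal configurable REST API test runner in the spirit of APIbuster might look like the sketch below, using Python's requests library. The endpoints and expected values are hypothetical; the real platform derives its test configuration from the ontology-based data model.

# Toy REST API test runner: configured cases in, collected results out.
import requests

# Test knowledge: each case names an API instance, a call, and expectations.
TEST_CASES = [
    {"base": "https://api.example.org", "method": "GET",
     "path": "/health", "expect_status": 200},
    {"base": "https://api.example.org", "method": "POST",
     "path": "/items", "json": {"name": "probe"}, "expect_status": 201},
]


def run_case(case: dict) -> dict:
    """Execute one test case and record the outcome."""
    try:
        resp = requests.request(case["method"], case["base"] + case["path"],
                                json=case.get("json"), timeout=5)
        return {"case": case["path"],
                "passed": resp.status_code == case["expect_status"],
                "status": resp.status_code}
    except requests.RequestException as exc:
        return {"case": case["path"], "passed": False, "error": str(exc)}


results = [run_case(c) for c in TEST_CASES]  # collect and store the results
print(results)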