    Serious Games for Software Refactoring

    Software design issues can severely impede software development and maintenance. Thus, it is important for the success of software projects that developers are aware of bad smells in code artifacts and improve their skills to reduce these issues via refactoring. However, software refactoring is a complex activity and involves multiple tasks and aspects. Therefore, imparting competences for identifying bad smells and refactoring code efficiently is challenging for software engineering education and training. The approaches proposed for teaching software refactoring in recent years mostly concentrate on small and artificial tasks and fall short in terms of higher-level competences, such as analysis and evaluation. In this paper, we investigate the possibilities and challenges of designing serious games for software refactoring on real-world code artifacts. In particular, we propose a game design in which students can compete either against a predefined benchmark (technical debt) or against each other. In addition, we describe a lightweight architecture as the technical foundation for the game design that integrates pre-existing analysis tools such as test frameworks and software-quality analyzers. Finally, we provide an exemplary game scenario to illustrate the application of serious games in a learning setting.
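
    The competitive element could, for example, be realized by turning the output of the integrated quality analyzer and test framework into a single score relative to the technical-debt benchmark. The minimal Python sketch below illustrates one such scheme; the class, the zero score for failing tests, and the normalization are illustrative assumptions, not the scoring actually used in the paper.

        # Hypothetical scoring sketch: a submission is rated against a technical-debt
        # benchmark by combining smell reduction with test outcomes (assumed rules).
        from dataclasses import dataclass

        @dataclass
        class AnalysisResult:
            smells: int          # code smells reported by a quality analyzer
            failing_tests: int   # failures reported by the test framework

        def score(benchmark: AnalysisResult, submission: AnalysisResult) -> float:
            """Reward smell reduction relative to the benchmark; disqualify broken builds."""
            if submission.failing_tests > 0:
                return 0.0  # refactoring must preserve behaviour
            removed = max(benchmark.smells - submission.smells, 0)
            return removed / max(benchmark.smells, 1)

        # Example: benchmark code has 40 smells, a submission removes 10 of them.
        print(score(AnalysisResult(40, 0), AnalysisResult(30, 0)))  # 0.25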

    Models and metaphors: complexity theory and through-life management in the built environment

    Complexity thinking may have both modelling and metaphorical applications in the through-life management of the built environment. These two distinct approaches are examined and compared. In the first instance, some of the sources of complexity in the design, construction and maintenance of the built environment are identified. The metaphorical use of complexity in management thinking and its application in the built environment are briefly examined. This is followed by an exploration of modelling techniques relevant to built environment concerns. Non-linear and complex mathematical techniques such as fuzzy logic, cellular automata and attractors may be applicable to their analysis. Existing software tools are identified and examples of successful built environment applications of complexity modelling are given. Some issues that arise include the definition of phenomena in a mathematically usable way, the functionality of available software, and the possibility of going beyond representational modelling. Further questions arising from the application of complexity thinking are discussed, including the possibilities for confusion that arise from the use of metaphor. The metaphor of a 'commentary machine' is proposed as a possible way forward, and it is suggested that an appropriate linguistic analysis can, in certain situations, reduce perceived complexity.
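
    As a generic illustration of the modelling techniques mentioned above, the short Python sketch below runs an elementary one-dimensional cellular automaton (rule 110), a standard toy example of simple local rules producing complex global behaviour; it is not one of the built-environment applications cited in the paper.

        # Elementary cellular automaton (rule 110): each cell's next value is read
        # from the rule number, indexed by its three-cell neighbourhood.
        def step(cells, rule=110):
            n = len(cells)
            return [
                (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
                for i in range(n)
            ]

        cells = [0] * 40 + [1] + [0] * 40   # single seed cell
        for _ in range(20):
            print("".join("#" if c else "." for c in cells))
            cells = step(cells)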

    Intelligent Systems and Advanced User Interfaces for Design, Operation, and Maintenance of Command Management Systems

    Historically, Command Management Systems (CMS) have been large, expensive, spacecraft-specific software systems that were costly to build, operate, and maintain. Current and emerging hardware, software, and user interface technologies may offer an opportunity to facilitate the initial formulation and design of a spacecraft-specific CMS as well as to develop a more generic CMS or a set of core components for CMS systems. Current MOC (mission operations center) hardware and software include Unix workstations, the C/C++ and Java programming languages, and X and Java window interfaces. This configuration provides the power and flexibility to support sophisticated systems and intelligent user interfaces that exploit state-of-the-art technologies in human-machine systems engineering, decision making, artificial intelligence, and software engineering. One of the goals of this research is to explore the extent to which technologies developed in the research laboratory can be productively applied in a complex system such as spacecraft command management. Initial examination of some of the issues in CMS design and operation suggests that application of technologies such as intelligent planning, case-based reasoning, design and analysis tools from a human-machine systems engineering point of view (e.g., operator and designer models), and human-computer interaction tools (e.g., graphics, visualization, and animation) may provide significant savings in the design, operation, and maintenance of a spacecraft-specific CMS as well as continuity for CMS design and development across spacecraft with varying needs. The savings in this case are in software reuse at all stages of the software engineering process.
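
    Case-based reasoning, one of the technologies named above, can be illustrated with a minimal retrieval step: given a new situation, find the stored case that most resembles it. The Python sketch below uses Jaccard similarity over feature sets; the case names and features are hypothetical and not drawn from any actual CMS.

        # Minimal case-based retrieval: pick the stored case most similar to a query.
        def similarity(case, query):
            shared = case["features"] & query["features"]
            union = case["features"] | query["features"]
            return len(shared) / len(union) if union else 0.0  # Jaccard similarity

        case_base = [
            {"name": "routine-pass-upload", "features": {"nominal", "single-pass", "stored-commands"}},
            {"name": "safe-mode-recovery",  "features": {"anomaly", "real-time", "priority-commands"}},
        ]

        query = {"features": {"anomaly", "real-time"}}
        best = max(case_base, key=lambda c: similarity(c, query))
        print(best["name"])  # safe-mode-recovery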

    Towards a Reference Architecture with Modular Design for Large-scale Genotyping and Phenotyping Data Analysis: A Case Study with Image Data

    With the rapid advancement of computing technologies, various scientific research communities have been extensively using cloud-based software tools or applications. Cloud-based applications allow users to access software applications from web browsers while relieving them from the installation of any software applications in their desktop environment. For example, Galaxy, GenAP, and iPlant Collaborative are popular cloud-based systems for scientific workflow analysis in the domain of plant Genotyping and Phenotyping. These systems are being used for conducting research, devising new techniques, and sharing computer-assisted analysis results among collaborators. Researchers need to integrate their new workflows/pipelines, tools, or techniques with the base system over time. Moreover, large-scale data need to be processed within the timeline for more effective analysis. Recently, Big Data technologies have emerged to facilitate large-scale data processing with commodity hardware. Among the above-mentioned systems, GenAP utilizes Big Data technologies for specific cases only. The structure of such a cloud-based system is highly variable and complex in nature. Software architects and developers need to consider totally different properties and challenges during the development and maintenance phases compared to traditional business/service-oriented systems. Recent studies report that software engineers and data engineers face challenges in developing analytic tools that support large-scale and heterogeneous data analysis. Unfortunately, software researchers have given little attention to devising well-defined methodologies and frameworks for the flexible design of cloud systems in the Genotyping and Phenotyping domain. To that end, more effective design methodologies and frameworks are urgently needed for the development of cloud-based Genotyping and Phenotyping analysis systems that also support large-scale data processing. In our thesis, we conduct a few studies to devise a stable reference architecture and modularity model for software developers and data engineers in the domain of Genotyping and Phenotyping. In the first study, we analyze the architectural changes of existing candidate systems to identify stability issues. Then, we extract architectural patterns of the candidate systems and propose a conceptual reference architectural model. Finally, we present a case study on the modularity of computation-intensive tasks as an extension of data-centric development. We show that the data-centric modularity model is at the core of the flexible development of a Genotyping and Phenotyping analysis system. Our proposed model and case study with thousands of images provide a useful knowledge base for software researchers, developers, and data engineers for cloud-based Genotyping and Phenotyping analysis system development.
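
    The data-centric modularity idea can be sketched as a pipeline of interchangeable processing modules that all consume and produce a shared record type, so new workflows can be composed without touching existing modules. The Python example below is a minimal sketch under that assumption; the stage names and record fields are illustrative, not the thesis's actual model.

        # Data-centric modularity sketch: pluggable stages over a common record type.
        from typing import Callable, Dict, List

        Record = Dict[str, object]           # e.g. one phenotyping image plus metadata
        Stage = Callable[[Record], Record]   # a pluggable processing module

        def run_pipeline(stages: List[Stage], records: List[Record]) -> List[Record]:
            for stage in stages:             # stages are interchangeable modules
                records = [stage(r) for r in records]
            return records

        def segment(r: Record) -> Record:
            return {**r, "segmented": True}

        def extract_traits(r: Record) -> Record:
            return {**r, "traits": {"leaf_area": 0.0}}

        print(run_pipeline([segment, extract_traits], [{"image": "plot_001.png"}]))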

    A Computational Framework for Designing Interleaved Workflow and Groupware Tasks

    Organizations are adopting a variety of process coordination tools such as groupware and workflow management systems to support seamless process execution and streamline individual and group knowledge worker activities. Such process support systems are being deployed in organizations in an ad hoc manner without any overall guiding process design principles, leading to additional costly overheads of systems modeling and software maintenance without the requisite benefits. This paper presents a conceptual framework illustrating a structured approach to organizational process design, providing effective task coordination and information management to address some of the relevant issues. Contributions of the research discussed in this paper include: a) a declarative, AI-planning-based representation formalism to describe both individual and group activities, b) a structured top-down design process that enables the design of group and individual activities in an explicit manner, c) computational procedures to automate the generation of process design alternatives and role assignment to tasks, and to support the detailed design of group activities. The feasibility of the integrated representation is evaluated based on extant literature on process models and case studies. The benefits of the formalism are evaluated by prototyping intelligent build-time tools for process design and using them in the design of processes for tasks such as new product development, requirements analysis, and drug discovery. This paper summarizes the work done so far as well as ongoing work by the author as a part of his doctoral dissertation.
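
    A declarative, planning-based task representation of the kind described above can be sketched in a STRIPS-like style: each task carries preconditions, effects, and the roles allowed to perform it. The Python sketch below is a minimal illustration under that assumption; the task names, predicates, and role labels are hypothetical, not the paper's formalism.

        # STRIPS-style sketch of a declarative task with roles for group/individual work.
        from dataclasses import dataclass, field

        @dataclass(frozen=True)
        class Task:
            name: str
            preconditions: frozenset = field(default_factory=frozenset)
            effects: frozenset = field(default_factory=frozenset)
            roles: frozenset = field(default_factory=frozenset)   # who may perform it

        def applicable(task: Task, state: frozenset) -> bool:
            return task.preconditions <= state   # all preconditions hold in the state

        draft = Task("draft_requirements",
                     preconditions=frozenset({"project_started"}),
                     effects=frozenset({"requirements_drafted"}),
                     roles=frozenset({"analyst"}))

        state = frozenset({"project_started"})
        if applicable(draft, state):
            state = state | draft.effects
        print(sorted(state))  # ['project_started', 'requirements_drafted']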

    Blockchain Software Verification and Optimization

    In the last decade, blockchain technology has undergone a strong evolution. The maturity reached and the consolidation obtained have aroused the interest of companies and businesses, transforming it into a possible response to various industrial needs. However, the lack of standards and tools for the development and maintenance of blockchain software leaves open challenges and various possibilities for improvements. The goal of this thesis is to tackle some of the challenges posed by blockchain technology and to design and implement analyses, processes, and architectures that may be applied in the real world. In particular, two topics are addressed: the verification of blockchain software and the code optimization of smart contracts. As regards verification, the thesis focuses on the original development of tools and analyses able to detect statically, i.e. without code execution, issues related to non-determinism, untrusted cross-contract invocation, and numerical overflow/underflow. Moreover, an approach based on on-chain verification is investigated to proactively involve the blockchain in verifying the code before and after its deployment. On the optimization side, the thesis describes an optimization process for translating code from the Solidity language to Takamaka, also proposing an efficient algorithm to compute snapshots for fungible and non-fungible tokens. The results of this thesis are an important first step towards improving blockchain software development, empirically demonstrating the applicability of the proposed approaches, including in the industrial field.
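
    The token-snapshot problem can be illustrated with a lazy-checkpoint technique that is commonly used for fungible tokens: taking a snapshot costs O(1), and an account's balance is checkpointed only when it is first modified after a snapshot, so historical balances can be answered by binary search. The Python sketch below shows this general pattern as an assumption about the idea, not the Takamaka algorithm proposed in the thesis.

        # Fungible-token balances with cheap snapshots via lazy per-account checkpoints.
        import bisect
        from collections import defaultdict

        class SnapshotToken:
            def __init__(self):
                self.balance = defaultdict(int)
                self.cp_ids = defaultdict(list)    # snapshot ids at which a checkpoint was taken
                self.cp_vals = defaultdict(list)   # balance the account had at that snapshot
                self.current = 0                   # id of the latest snapshot

            def snapshot(self):
                self.current += 1                  # O(1): nothing is copied here
                return self.current

            def _checkpoint(self, account):
                ids = self.cp_ids[account]
                if self.current and (not ids or ids[-1] < self.current):
                    ids.append(self.current)
                    self.cp_vals[account].append(self.balance[account])

            def transfer(self, src, dst, amount):
                self._checkpoint(src)              # checkpoint lazily, on first write
                self._checkpoint(dst)
                self.balance[src] -= amount
                self.balance[dst] += amount

            def balance_at(self, account, snap):
                ids = self.cp_ids[account]
                i = bisect.bisect_left(ids, snap)  # first write made after snapshot `snap`
                return self.cp_vals[account][i] if i < len(ids) else self.balance[account]

        t = SnapshotToken()
        t.transfer("mint", "alice", 100)
        s1 = t.snapshot()
        t.transfer("alice", "bob", 30)
        print(t.balance_at("alice", s1), t.balance["alice"])  # 100 70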

    Model Driven Development and Maintenance of Business Logic for Information Systems

    As information systems become increasingly important in today's society, business firms, organizations, and individuals rely on these systems to manage their daily business and social activities. The dependency of possibly critical business processes on complex IT systems requires a strategy that supports IT departments in reducing the time needed to implement changed or new domain requirements of functional departments. In this context, software models help to manage a system's complexity and provide a tool for communication and documentation purposes. Moreover, software engineers tend to use automated software model processing such as code generation to improve development and maintenance processes. Particularly in the context of web-based information systems, a number of model driven approaches have been developed. However, we believe that, compared to the user interface layer and the persistency layer, the business logic layer lacks comparable support in the form of consistent approaches that provide a suitable architecture for model driven development. To ameliorate this situation, we developed an architectural blueprint consisting of meta models, tools, and method support for the model driven development and maintenance of business logic from analysis through system maintenance. This blueprint, which we call Amabulo infrastructure, consists of five layers and provides concepts and tools to set up and apply concrete infrastructures for model driven development projects. Modeling languages can be applied as needed. In this thesis, we focus on business logic layers of J2EE applications. However, concrete code generation rules can be adapted easily for different target platforms. After providing a high-level overview of our Amabulo infrastructure, we describe its layers in detail: The Visual Model Layer is responsible for all visual modeling tasks. For this purpose, we discuss requirements for visual software models for business logic, analyze several visual modeling languages concerning their usefulness, and provide a UML profile for business logic models. The Abstract Model Layer provides an abstract view of the business logic model in the form of a domain-specific model, which we call Amabulo model. An Amabulo model is reduced to pure logical information concerning business logic aspects. It focuses on information that is relevant for code generation. For this purpose, an Amabulo model integrates model elements for process modeling, state modeling, and structural modeling. It is used as a common interface between visual modeling languages and code generators. Visual models of the Visual Model Layer are automatically transformed into an Amabulo model. The Abstract System Layer provides a formal view of the system in the form of a Coloured Petri Net (CPN). A Coloured Petri Net representation of the modeled business logic is a formal structure and independent of the actual business logic implementation. After an Amabulo model is automatically transformed into a CPN, it can be analyzed and simulated before any line of code is generated. The Code Generation Layer is responsible for code generation. To support the design and implementation of project-specific code generators, we discuss several code integration issues and provide object-oriented design approaches to tackle them. Then, we provide a conceptual mapping of Amabulo model elements into architectural elements of a J2EE infrastructure. 
This mapping explicitly considers robustness features, which support the later manual integration of generated critical code artifacts and external systems. The Application Layer is the target layer of an Amabulo infrastructure and comprises generated code artifacts. These artifacts are instances of a specific target platform specification, and they can be modified with development tools for integration purposes. Through the contributions in this thesis, we aim to provide an integrated set of solutions to support an efficient model driven development and maintenance process for the business logic of information systems. Therefore, we provide a consistent infrastructure blueprint that considers modeling tasks, model analysis tasks, and code generation tasks. As a result, we see potential for reducing the development and maintenance efforts for changed domain requirements and simultaneously guaranteeing robustness and maintainability even after several changes.
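
    The core idea of mapping an abstract, tool-independent model onto platform-specific code can be sketched as a small model-to-text transformation. The Python example below is a minimal sketch under stated assumptions: the element classes and the generated J2EE-style session-bean skeleton are illustrative, not the actual Amabulo meta model or code generator.

        # Sketch: abstract process/state elements mapped onto a J2EE-style skeleton.
        from dataclasses import dataclass
        from typing import List

        @dataclass
        class State:
            name: str

        @dataclass
        class Process:
            name: str
            states: List[State]

        def generate_session_bean(process: Process) -> str:
            """Map one abstract process element onto a stateless session bean skeleton."""
            methods = "\n".join(
                f"    public void enter{s.name.capitalize()}() {{ /* generated hook */ }}"
                for s in process.states
            )
            class_name = process.name[0].upper() + process.name[1:]
            return (f"@Stateless\npublic class {class_name}Bean {{\n"
                    f"{methods}\n}}\n")

        order = Process("approveOrder", [State("submitted"), State("approved")])
        print(generate_session_bean(order))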

    Advanced Techniques for Assets Maintenance Management

    16th IFAC Symposium on Information Control Problems in Manufacturing (INCOM 2018), Bergamo, Italy, 11–13 June 2018. Edited by Marco Macchi, László Monostori, Roberto Pinto. The aim of this paper is to highlight the importance of new and advanced techniques supporting decision making in different business processes for maintenance and assets management, as well as the basic need to adopt a certain management framework with a clear process map and the corresponding IT supporting systems. The framework processes and systems will be the key fundamental enablers for success and for continuous improvement. The suggested framework will help to define and improve business policies and work procedures for the operation and maintenance of assets along their life cycle. The following sections present some achievements in this area and finally propose possible future lines for a research agenda within this field of assets management.