
    A heuristic-based approach to code-smell detection

    Encapsulation and data hiding are central tenets of the object-oriented paradigm. Deciding what data and behaviour to form into a class, and where to draw the line between its public and private details, can make the difference between a class that is an understandable, flexible and reusable abstraction and one that is not. This decision is a difficult one and may easily result in poor encapsulation, which can have serious implications for a number of system qualities. It is often hard to identify such encapsulation problems within large software systems until they cause a maintenance problem (which is usually too late), and attempting to perform such analysis manually can be tedious and error-prone. Two common encapsulation problems that can arise as a consequence of this decomposition process are data classes and god classes. Typically, the two occur together: data classes lack functionality that has instead been absorbed into an over-complicated and domineering god class. This paper describes the architecture of a tool, developed as a plug-in for the Eclipse IDE, that automatically detects data and god classes. The technique has been evaluated in a controlled study on two large open source systems, comparing the tool's results with similar work by Marinescu, who employs a metrics-based approach to detecting such features. The study provides some valuable insights into the strengths and weaknesses of the two approaches.
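    The kind of heuristic the paper describes can be illustrated with a small sketch. The metric names and threshold values below are illustrative assumptions, not the actual rules of the paper's tool:

```python
from dataclasses import dataclass

# Per-class metrics assumed to be collected elsewhere (e.g. by an AST walker).
# The threshold values below are illustrative, not those of the paper's tool.
@dataclass
class ClassMetrics:
    name: str
    num_methods: int            # methods excluding accessors
    num_accessors: int          # getters/setters
    num_fields: int
    wmc: int                    # weighted methods per class (summed complexity)
    foreign_data_accesses: int  # accesses to attributes of other classes

def is_data_class(m: ClassMetrics) -> bool:
    """Heuristic: almost no behaviour beyond exposing its own data."""
    return m.num_methods <= 2 and m.num_accessors >= m.num_fields // 2

def is_god_class(m: ClassMetrics) -> bool:
    """Heuristic: complex, behaviour-heavy class using other classes' data."""
    return m.wmc > 47 and m.num_methods > 20 and m.foreign_data_accesses > 5

account = ClassMetrics("Account", num_methods=1, num_accessors=6,
                       num_fields=6, wmc=3, foreign_data_accesses=0)
manager = ClassMetrics("SystemManager", num_methods=35, num_accessors=2,
                       num_fields=12, wmc=80, foreign_data_accesses=11)

print(is_data_class(account))  # True
print(is_god_class(manager))   # True
```

A metrics-based detector in the style of Marinescu would combine several such threshold rules; the heuristic approach of the paper differs in how the rules are derived, but the basic shape of the classification is the same.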

    Assessing and Improving Interoperability of Distributed Systems

    Achieving interoperability of distributed systems offers means for the development of new and innovative business solutions. Interoperability allows existing services provided on different systems to be combined into new or extended services, and such integration can also increase the reliability of the provided services. However, achieving and assessing interoperability is a technical challenge that requires high effort in time and cost. The reasons are manifold and include differing implementations of standards as well as the provision of proprietary interfaces. Techniques that assess and improve interoperability systematically are therefore required. To this aim, we present the Interoperability Assessment and Improvement (IAI) process, which describes in three phases how the interoperability of distributed homogeneous and heterogeneous systems can be assessed and improved systematically. The interoperability assessment is achieved by means of interoperability testing, which is typically performed manually. For the automation of interoperability test execution, we present a new methodology that includes a generic development process for a complete and automated interoperability test system. This methodology provides means for a formalized and systematic assessment of systems' interoperability in an automated manner. Compared to manual interoperability testing, its application offers wider test coverage, consistent test execution, and test repeatability.
    We evaluate the IAI process and the methodology for automated interoperability testing in three case studies. In the first case study, we instantiate the process and the methodology for Internet Protocol Multimedia Subsystem (IMS) networks, whose interoperability had previously been assessed only manually. In the second and third case studies, we apply the IAI process to assess and improve the interoperability of grid and cloud computing systems. Their assessment and improvement is challenging since, in contrast to IMS networks, grid and cloud systems are heterogeneous. We develop integration and interoperability solutions for grids and Infrastructure as a Service (IaaS) clouds as well as for grids and Platform as a Service (PaaS) clouds. These solutions have not previously been documented in the literature; they foster complementary usage of grids and clouds, simplified migration of grid applications into the cloud, and efficient resource utilization. We then assess the interoperability of these grid-cloud solutions: the tests for grid-IaaS clouds were performed manually, while grid-PaaS cloud interoperability was assessed with our methodology for automated interoperability testing. Such interoperability assessments have not previously been discussed in the grid and cloud community, and they provide a basis for the development of standardized interfaces improving the interoperability between grids and clouds.
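    The shape of such an automated interoperability test can be sketched as follows. The message format and the grid/cloud endpoints are hypothetical in-process stand-ins for illustration, not the actual IMS or grid-PaaS test system described in the thesis:

```python
# Illustrative sketch of automated interoperability testing: a shared test
# case is executed against two cooperating implementations, and the harness
# checks that the message exchange satisfies the expected behaviour. The two
# "systems" here are in-process stubs; a real IAI-style test system would
# drive actual network endpoints.

def grid_submit(job):
    """Stub grid side: wraps a job in a message for the other system."""
    return {"id": 1, "payload": job, "state": "QUEUED"}

def cloud_execute(msg):
    """Stub PaaS cloud side: accepts the grid's message format and runs it."""
    if "payload" not in msg:
        return {"state": "REJECTED"}
    return {"id": msg["id"], "state": "DONE", "result": msg["payload"].upper()}

def interoperability_test(submit, execute, job):
    """One automated test case: submit on system A, execute on system B."""
    msg = submit(job)
    reply = execute(msg)
    assert reply["state"] == "DONE", "systems failed to interoperate"
    assert reply["id"] == msg["id"], "correlation id lost across the boundary"
    return reply

print(interoperability_test(grid_submit, cloud_execute, "render")["result"])
# prints RENDER
```

Automating the exchange in this way is what yields the repeatability and consistent execution claimed over manual testing: the same test sequence can be rerun unchanged against every implementation pair.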

    A conformance test framework for the DeviceNet fieldbus

    The DeviceNet fieldbus technology is introduced and discussed. DeviceNet is an open standard fieldbus which uses the proven Controller Area Network technology. As an open standard fieldbus, device conformance is extremely important to ensure smooth operation. The error management in the DeviceNet protocol is highlighted, and an error injection technique is devised to test the implementation under test for correct error-recovery conformance. The designed Error Frame Generator prototype allows the error management and recovery of DeviceNet implementations to be conformance tested; it can also be used with other Controller Area Network based protocols. In addition, an automated Conformance Test Engine framework has been defined for realising the conformance testing of DeviceNet implementations. Automated conformance testing is used to achieve consistent and reliable test results, in addition to savings in time and personnel. This involved investigating the feasibility of adapting the ISO 9646 conformance test standards for use with the DeviceNet fieldbus. The Unique Input/Output sequences method is used for the generation of DeviceNet conformance tests; it does not require a fully specified protocol specification and gives shorter test sequences, since only specific state information is needed. As conformance testing addresses only protocol verification, it is foreseen that formal-method validation of the DeviceNet protocol must be performed at some stage to validate the DeviceNet specification.
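    The Unique Input/Output idea can be sketched on a toy state machine. The two-state Mealy machine and its input alphabet below are illustrative inventions, not the DeviceNet protocol; the point is only how an input sequence whose outputs are unique to one state identifies that state:

```python
from itertools import product

# Minimal Mealy machine: transitions[state][input] = (next_state, output).
# This machine is illustrative, not the DeviceNet protocol itself.
transitions = {
    "IDLE": {"open": ("OPEN", "ack"), "poll": ("IDLE", "nak")},
    "OPEN": {"open": ("OPEN", "nak"), "poll": ("IDLE", "data")},
}
inputs = ["open", "poll"]

def run(state, seq):
    """Apply an input sequence, returning the produced output sequence."""
    outputs = []
    for sym in seq:
        state, out = transitions[state][sym]
        outputs.append(out)
    return tuple(outputs)

def uio_sequence(target, max_len=3):
    """Shortest input sequence whose outputs identify `target` uniquely."""
    for length in range(1, max_len + 1):
        for seq in product(inputs, repeat=length):
            outs = run(target, seq)
            if all(run(s, seq) != outs for s in transitions if s != target):
                return list(seq)
    return None  # no UIO sequence of length <= max_len

print(uio_sequence("IDLE"))  # ['open'] -- output 'ack' occurs only from IDLE
```

Because a UIO sequence only has to distinguish one state from the others, it stays short and tolerates a partially specified protocol, which is the advantage the abstract cites.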

    Supporting web programming assignment assessment with test automation and RPA

    Automated software solutions that support and assist in the assessment of student-implemented applications are not a rarity, but they often need to be custom-engineered to fit a specific learning environment or course. When such a system can be properly fielded, it has tremendous potential to lighten the workload of course personnel by automating repetitive manual tasks and testing student submissions against assignment requirements. These support systems are also often able to shorten the feedback loop, which is seen to have a direct impact on student learning. In this thesis, test automation and robotic process automation are researched to discover how they can be used to support the assessment of web programming assignments. The background on software testing, automation and feedback-related pedagogy is researched mainly through literature review and expert interviews. A third methodology, design science, is then applied to verify and extend the learnt theory empirically. A research artifact is created in the form of a prototype capable of supporting assessment tasks. The performance of the prototype is measured by recording set execution metrics while assessing anonymized case-study student submissions from a web development course arranged by the University of Turku: DTEK2040 Web and Mobile Programming.
    The thesis concludes that supporting assessment through test automation means focusing on unit- and system-level testing of functionality while assuming that the exact implementation at code level cannot be fully known. It suggests that relying on assignment descriptions as the basis for test design is not enough; instead, requirements engineering should be done together with course personnel to take advantage of their experience of what sorts of errors are to be tolerated in student submissions. The thesis also concludes that automation can perform interaction with student submissions, file manipulation, record keeping and tracking tasks at a satisfactory level. The potential to shorten the feedback loop and to summarize quantitative feedback for the student is recognized; however, building an automated system to identify, gather and summarize formative, pedagogically more valuable feedback was noted to be out of scope for this thesis and is suggested as future work to extend the prototype.
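    A requirement-driven assessment harness of the kind the thesis prototypes can be sketched minimally. The submission model, check names and requirements below are hypothetical; a real harness would exercise the student's running application through system-level tests rather than inspect a data structure:

```python
# Hypothetical sketch of a requirement-driven assessment harness: each check
# is a (name, predicate) pair derived from assignment requirements agreed
# with course personnel; the harness runs them against a submission and
# summarizes quantitative feedback.

def assess(submission, checks):
    results = {name: bool(check(submission)) for name, check in checks}
    passed = sum(results.values())
    summary = f"{passed}/{len(checks)} requirements met"
    return results, summary

# A "submission" stub standing in for the student's deployed web app.
submission = {"routes": ["/", "/login"], "uses_https": True}

checks = [
    ("has landing page", lambda s: "/" in s["routes"]),
    ("has login route", lambda s: "/login" in s["routes"]),
    ("serves HTTPS", lambda s: s["uses_https"]),
    ("has logout route", lambda s: "/logout" in s["routes"]),
]

results, summary = assess(submission, checks)
print(summary)  # 3/4 requirements met
```

Summaries like this cover the quantitative-feedback side the thesis demonstrates; the formative feedback it defers to future work would require interpreting *why* a check failed, not just counting failures.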

    A Model-Driven Methodology for Critical Systems Engineering

    Model-Driven Engineering (MDE) promises to enhance system development by reducing development time and increasing productivity and quality. MDE is gaining popularity in several industry sectors and is attractive also for critical systems, where it can reduce the effort and cost of verification and validation (V&V) and can ease certification. This thesis proposes a novel model-driven life cycle tailored to the development of critical railway systems. It also integrates an original approach to model-driven system validation, based on a new model named the Computation Independent Test model (CIT). Moreover, the process supports Failure Modes and Effects Analysis (FMEA) with a novel approach to model-driven FMEA, based on a custom SysML diagram, the FMEA Diagram, and on Prolog. The approaches have been evaluated in multiple real-world case studies from the railway and automotive domains.
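    The core FMEA computation that such an approach automates can be shown in a few lines. The failure modes and ratings below are invented examples; the thesis captures these on its custom SysML FMEA Diagram and reasons over them in Prolog rather than in Python:

```python
# Classical FMEA arithmetic: each failure mode gets severity, occurrence,
# and detection ratings (conventionally 1-10), and a Risk Priority Number
# RPN = S * O * D used to rank mitigation effort. Values are illustrative.

def rpn(severity, occurrence, detection):
    for v in (severity, occurrence, detection):
        assert 1 <= v <= 10, "FMEA ratings are conventionally 1-10"
    return severity * occurrence * detection

failure_modes = [
    ("brake command lost",   9, 2, 3),
    ("sensor reading stale", 6, 5, 4),
    ("log write fails",      2, 4, 2),
]

ranked = sorted(failure_modes, key=lambda fm: rpn(*fm[1:]), reverse=True)
for name, s, o, d in ranked:
    print(f"{name}: RPN={rpn(s, o, d)}")
# sensor reading stale: RPN=120
# brake command lost: RPN=54
# log write fails: RPN=16
```

Deriving the failure modes from system models instead of maintaining them by hand in spreadsheets is what the model-driven variant adds; the ranking arithmetic itself is unchanged.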

    Contribution to Quality-driven Evolutionary Software Development process for Service-Oriented Architectures

    The quality of software is a key element for the success of a system. Currently, with the advance of technology, consumers demand more and better services, and models of the development process have to be adapted to new requirements. This is particularly true for service-oriented systems (the domain of this thesis), where an unpredictable number of users can access one or several services. This work proposes an improvement in models of the software development process based on the theory of evolutionary software development. The main objective is to maintain and improve the quality of software for as long as possible and with minimum effort and cost. Usually, this process is supported by methods known in the literature as agile software development methods. The other key element in this thesis is service-oriented software architecture. Software architecture plays an important role in the quality of any software system, and service-oriented architecture adds service flexibility: services are autonomous and compact assets, and they can be improved and integrated more easily. The model proposed in this thesis for evolutionary software development emphasizes the quality of services. Therefore, some principles of evolutionary development are redefined and new processes are introduced, such as architecture assessment, architecture recovery and architecture conformance. Every new process is evaluated with case studies considering quality aspects selected according to market demand: performance, security and evolvability. Other aspects could be treated in the same way as these three, but we believe these quality attributes are enough to demonstrate the viability of our proposal.
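    An architecture conformance check of the kind introduced here can be sketched as a comparison of declared dependency rules against observed dependencies. The layer names and rules below are hypothetical:

```python
# Illustrative architecture conformance check: declared layer rules are
# compared against dependencies observed in the code base, and any edge
# not covered by a rule is flagged as a violation.

allowed = {          # layer -> layers it may depend on
    "ui": {"service"},
    "service": {"data"},
    "data": set(),
}

observed = [         # (from_layer, to_layer) extracted from the code base
    ("ui", "service"),
    ("service", "data"),
    ("ui", "data"),  # skips the service layer
]

violations = [(a, b) for a, b in observed if b not in allowed.get(a, set())]
print(violations)  # [('ui', 'data')]
```

In an evolutionary process, rerunning such a check on every iteration keeps the implemented architecture from drifting away from the intended one, which is the role architecture conformance plays alongside assessment and recovery.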

    Certifications of Critical Systems – The CECRIS Experience

    In recent years, a considerable amount of effort has been devoted, both in industry and academia, to the development, validation and verification of critical systems, i.e. those systems whose malfunctions or failures reach a critical level both in terms of risk to human life and in economic impact. Certifications of Critical Systems – The CECRIS Experience documents the main insights on cost-effective verification and validation processes gained during the European research project CECRIS (an acronym for Certification of Critical Systems). The objective of the research was to tackle the challenges of certification by focusing on the aspects that turn out to be most difficult and important for the current and future critical-systems industry: the effective use of methodologies, processes and tools. The CECRIS project took a step forward in the growing field of development, verification, validation and certification of critical systems. Starting from both the scientific and industrial state-of-the-art methodologies for system development, and from the impact of their usage on the verification, validation and certification of critical systems, the project developed strategies and techniques, supported by automatic or semi-automatic tools and methods, for these activities, setting guidelines to support engineers during the planning of the verification and validation phases.