
    Mega-modeling of complex, distributed, heterogeneous CPS systems

    Model-Driven Design (MDD) has proven to be a powerful technology to address the development of increasingly complex embedded systems. Beyond complexity itself, challenges come from the need to deal with parallelism and heterogeneity. System design must target different execution platforms with different OSs and HW resources, even bare-metal, support local and distributed systems, and integrate on top of these heterogeneous platforms multiple functional components coming from different sources (developed from scratch, legacy code and third-party code), with different behaviors operating under different models of computation and communication. Additionally, system optimization to improve performance, power consumption, cost, etc. requires analyzing huge lists of possible design solutions. Addressing these challenges requires flexible design technologies able to support, from a single-source model, its architectural mapping to different computing resources of different kinds and on different platforms. Traditional MDD methods and tools typically rely on fixed elements, which makes their integration under this variability difficult. For example, integrating legacy code with a third-party component in the same system is usually not possible without some re-coding to enable the interconnection. This paper proposes a UML/MARTE system modeling methodology able to address the challenges mentioned above by improving flexibility and scalability. The approach is illustrated and demonstrated on a flight management system. The model is flexible enough to be adapted to different architectural solutions with minimal effort by changing its underlying Model of Computation and Communication (MoCC). Being completely platform independent, the same model can be used to explore various solutions on different execution platforms. This work has been partially funded by the EU and the Spanish MICINN through the ECSEL MegaMart and Comp4Drones projects and the TEC2017-86722-C4-3-R PLATINO project.
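
    As an illustration of the single-source idea (a minimal Java sketch, not taken from the paper; the Channel, QueueChannel and SensorFusion names are hypothetical), a functional component can be written against an abstract channel so that swapping the underlying MoCC, e.g. from a synchronous call to asynchronous message passing, does not require touching the component code:

        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;

        // Hypothetical sketch: a functional component depends only on this abstract
        // channel, so the MoCC implementation can be swapped without re-coding it.
        interface Channel<T> {
            void send(T msg) throws InterruptedException;
            T receive() throws InterruptedException;
        }

        // One possible MoCC: asynchronous message passing over a bounded queue.
        class QueueChannel<T> implements Channel<T> {
            private final BlockingQueue<T> queue = new LinkedBlockingQueue<>(16);
            public void send(T msg) throws InterruptedException { queue.put(msg); }
            public T receive() throws InterruptedException { return queue.take(); }
        }

        // A functional component (e.g. one step of a flight management system)
        // written once, independently of the platform and of the chosen MoCC.
        class SensorFusion implements Runnable {
            private final Channel<double[]> in;
            private final Channel<Double> out;
            SensorFusion(Channel<double[]> in, Channel<Double> out) { this.in = in; this.out = out; }
            public void run() {
                try {
                    double[] samples = in.receive();
                    double sum = 0;
                    for (double s : samples) sum += s;
                    out.send(sum / samples.length);   // trivial fusion: the mean
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }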

    JWalk: a tool for lazy, systematic testing of Java classes by design introspection and user interaction

    Popular software testing tools, such as JUnit, allow frequent retesting of modified code; yet the manually created test scripts are often seriously incomplete. A unit-testing tool called JWalk has therefore been developed to address the need for systematic unit testing within the context of agile methods. The tool operates directly on the compiled code for Java classes and uses a new lazy method for inducing the changing design of a class on the fly. This is achieved partly through introspection, using Java’s reflection capability, and partly through interaction with the user, constructing and saving test oracles as testing proceeds. Predictive rules reduce the number of oracle values that must be confirmed by the tester. Without human intervention, JWalk performs bounded exhaustive exploration of the class’s method protocols and may be directed to explore the space of algebraic constructions, or the intended design state-space of the tested class. With some human interaction, JWalk performs up to the equivalent of fully automated state-based testing, from a specification that was acquired incrementally.
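
    As a rough illustration of the introspection step only (a sketch under the assumption of zero-argument methods, not JWalk's actual algorithm), Java's reflection API can enumerate a compiled class's public protocol and replay all method sequences up to a small depth bound, printing each observed result as a candidate oracle value:

        import java.lang.reflect.Method;
        import java.lang.reflect.Modifier;
        import java.util.ArrayList;
        import java.util.List;

        // Hypothetical sketch of reflection-based design introspection and
        // bounded exhaustive exploration of a class's method protocol.
        public class ProtocolExplorer {
            public static void explore(Class<?> cls, int depth) throws Exception {
                List<Method> protocol = new ArrayList<>();
                for (Method m : cls.getDeclaredMethods()) {
                    if (Modifier.isPublic(m.getModifiers()) && m.getParameterCount() == 0) {
                        protocol.add(m);   // restrict to zero-argument methods for simplicity
                    }
                }
                explore(cls, protocol, new ArrayList<>(), depth);
            }

            private static void explore(Class<?> cls, List<Method> protocol,
                                        List<Method> prefix, int remaining) throws Exception {
                if (remaining == 0) return;
                for (Method next : protocol) {
                    List<Method> sequence = new ArrayList<>(prefix);
                    sequence.add(next);
                    Object target = cls.getDeclaredConstructor().newInstance(); // fresh instance per sequence
                    Object result = null;
                    for (Method m : sequence) result = m.invoke(target);
                    System.out.println(sequence + " -> " + result);             // candidate oracle value
                    explore(cls, protocol, sequence, remaining - 1);
                }
            }
        }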

    A heuristic-based approach to code-smell detection

    Encapsulation and data hiding are central tenets of the object-oriented paradigm. Deciding what data and behaviour to form into a class and where to draw the line between its public and private details can make the difference between a class that is an understandable, flexible and reusable abstraction and one which is not. This decision is a difficult one and may easily result in poor encapsulation, which can then have serious implications for a number of system qualities. It is often hard to identify such encapsulation problems within large software systems until they cause a maintenance problem (which is usually too late), and attempting to perform such analysis manually can be tedious and error prone. Two of the common encapsulation problems that can arise as a consequence of this decomposition process are data classes and god classes. These two problems often occur together: data classes lack functionality that has typically been absorbed into an over-complicated and domineering god class. This paper describes the architecture of a tool, developed as a plug-in for the Eclipse IDE, which automatically detects data classes and god classes. The technique has been evaluated in a controlled study on two large open source systems which compares the tool's results to similar work by Marinescu, who employs a metrics-based approach to detecting such features. The study provides some valuable insights into the strengths and weaknesses of the two approaches.
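
    A minimal sketch of the kind of heuristics involved (the metric names and thresholds below are illustrative assumptions, not those of the tool or of Marinescu's detection strategies): a class whose public interface is almost entirely accessors is flagged as a data class, while a very large class that reaches into the data of many other classes is flagged as a god class.

        // Illustrative heuristics only; the actual plug-in works on Eclipse's
        // program representation and applies its own metrics and thresholds.
        class ClassMetrics {
            String name;
            int methodCount;          // all declared methods
            int accessorCount;        // plain getters and setters
            int foreignDataAccesses;  // attributes of other classes used directly
            ClassMetrics(String name, int methods, int accessors, int foreign) {
                this.name = name;
                this.methodCount = methods;
                this.accessorCount = accessors;
                this.foreignDataAccesses = foreign;
            }
        }

        class SmellDetector {
            // Data class: almost all of its interface consists of plain accessors.
            static boolean isDataClass(ClassMetrics c) {
                return c.methodCount > 0 && (double) c.accessorCount / c.methodCount > 0.8;
            }
            // God class: large, and dependent on the data of many other classes.
            static boolean isGodClass(ClassMetrics c) {
                return c.methodCount > 40 && c.foreignDataAccesses > 10;
            }
        }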

    Deliverable JRA1.1: Evaluation of current network control and management planes for multi-domain network infrastructure

    This deliverable includes a compilation and evaluation of available control and management architectures and protocols applicable to a multilayer infrastructure in a multi-domain Virtual Network environment. The scope of this deliverable is mainly focused on the virtualisation of the resources within a network and at processing nodes. The virtualization of the FEDERICA infrastructure allows the provisioning of its available resources to users by means of FEDERICA slices. A slice is seen by the user as a real physical network under his/her domain; however, it maps to a logical partition (a virtual instance) of the physical FEDERICA resources. A slice is built to exhibit, to the highest degree, all the principles applicable to a physical network (isolation, reproducibility, manageability, ...). Currently, there are no standard definitions available for network virtualization or its associated architectures. Therefore, this deliverable proposes the Virtual Network layer architecture and evaluates a set of management and control planes that can be used for the partitioning and virtualization of the FEDERICA network resources. This evaluation has been performed taking into account an initial set of FEDERICA requirements; a possible extension of the selected tools will be evaluated in future deliverables. The studies described in this deliverable define the virtual architecture of the FEDERICA infrastructure. During this activity, the need has been recognised to establish a new set of basic definitions (a taxonomy) for the building blocks that compose the so-called slice, i.e. the virtual network instantiation (which is virtual with regard to the abstracted view made of the building blocks of the FEDERICA infrastructure) and its architectural plane representation. These definitions will be established as a common nomenclature for the FEDERICA project. Another important aspect when defining a new architecture is the set of user requirements: it is crucial that the resulting architecture fits the demands that users may have. Since this deliverable was produced in parallel with the user-contact process carried out by the project activities related to the Use Case definitions, JRA1 has proposed a set of basic Use Cases to be considered as a starting point for its internal studies. When researchers want to experiment with their developments, they need not only network resources on their slices, but also a slice of the processing resources. These processing resources are understood as virtual machine instances that users can configure to behave as software routers or end nodes, onto which they can deploy the software protocols or applications they have produced and want to assess in a realistic environment. Hence, this deliverable also studies the APIs of several virtual machine management software products in order to identify which best suits FEDERICA's needs.
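
    The slice concept can be pictured with a small sketch (hypothetical types, not part of the deliverable): a slice groups virtual nodes and links, each recording the physical FEDERICA resource it is mapped onto, which is what lets the partition appear to its user as a real network while remaining a logical view.

        import java.util.ArrayList;
        import java.util.List;

        // Hypothetical data model of a slice: a virtual network whose elements
        // are mapped onto (partitions of) physical FEDERICA resources.
        class VirtualNode {
            String name;
            String physicalHost;   // physical node or hypervisor hosting this virtual machine
            VirtualNode(String name, String physicalHost) { this.name = name; this.physicalHost = physicalHost; }
        }

        class VirtualLink {
            VirtualNode a, b;
            String physicalPath;   // e.g. a VLAN or circuit on the physical substrate
            VirtualLink(VirtualNode a, VirtualNode b, String physicalPath) {
                this.a = a; this.b = b; this.physicalPath = physicalPath;
            }
        }

        class Slice {
            String owner;                                 // the researcher the slice is provisioned for
            List<VirtualNode> nodes = new ArrayList<>();
            List<VirtualLink> links = new ArrayList<>();
            Slice(String owner) { this.owner = owner; }
        }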

    Services in pervasive computing environments: from design to delivery

    The work presented in this thesis is based on the assumption that modern computer technologies are already potentially pervasive: CPUs are embedded in every sort of device; the RAM and storage memory of a modern PDA are comparable to those of a Unix workstation from ten years ago; Wi-Fi, GPRS and UMTS are leveraging the development of the wireless Internet. Nevertheless, computing is not pervasive, because we do not have a clear conceptual model of the pervasive computer and we lack the tools, methodologies and middleware to write services and seamlessly deliver them at once over a multitude of heterogeneous devices and different delivery contexts. Our thesis addresses these issues starting from the analysis of the forces in a pervasive computing environment: user mobility, user profile, user position, and device profile. The conceptual model, or metaphor, we use to drive our work is to consider the environment as surrounded by a multitude of services and objects, with devices acting as the communicating gates between the real world and the virtual dimension of pervasive computing around us. Our thesis is thus built upon three main “pillars”. The first pillar is a domain-object-driven methodology which allows the developer to abstract from low-level details of the final delivery platform and provides the user with the ability to access services in a multi-channel way. The rationale is that domain objects are self-contained pieces of software able to represent data and to compute functions and procedures. Our approach fills the gap between users and domain objects by building an appropriate user interface which is adapted both to the domain object and to the end user device. As an example, we present how to design, implement and deliver an electronic mail application over various platforms. The second pillar of this thesis analyzes in more detail the forces that make direct object manipulation inadequate in a pervasive context. These forces are the user profile, the device profile, the context of use, and the combinatorial explosion of domain objects. From the analysis of the electronic mail application presented as an example, we notice that, depending on the end user device or on particular circumstances during access to the service (for instance, if the user accesses the service through the interactive TV while having breakfast), some functionalities are not compulsory and do not fit an adequate task sequence. So we decided to make task models explicit in the design of a service and to integrate the capability to automatically generate user interfaces for domain objects with the formal definition of task models adapted to the final delivery context. Finally, the third pillar of our thesis is about the lifecycle of services in a pervasive computing environment. Our solutions are based upon an existing framework, the Jini connection technology, and enrich this framework with new services and architectures for the deployment and discovery of services, for user session management, and for the management of offline agents.
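
    A rough sketch of the domain-object idea (all names hypothetical, not taken from the thesis): the same domain object exposes its data and operations once, and a renderer chosen from the device profile builds the concrete user interface for the current delivery context.

        import java.util.List;

        // Hypothetical sketch: one domain object, several delivery channels.
        interface DomainObject {
            String title();
            List<String> operations();   // e.g. "compose", "reply", "delete" for a mail object
        }

        interface Renderer {
            String render(DomainObject obj);   // produces a UI suited to one device class
        }

        class WebRenderer implements Renderer {
            public String render(DomainObject obj) {
                return "<h1>" + obj.title() + "</h1><ul><li>"
                       + String.join("</li><li>", obj.operations()) + "</li></ul>";
            }
        }

        class PhoneRenderer implements Renderer {
            public String render(DomainObject obj) {
                return obj.title() + ": " + String.join(" | ", obj.operations());
            }
        }

        class DeliveryContext {
            // Select the renderer from a (much simplified) device profile.
            static Renderer rendererFor(String deviceProfile) {
                return "phone".equals(deviceProfile) ? new PhoneRenderer() : new WebRenderer();
            }
        }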

    Forum Session at the First International Conference on Service Oriented Computing (ICSOC03)

    The First International Conference on Service Oriented Computing (ICSOC) was held in Trento, December 15-18, 2003. The focus of the conference ---Service Oriented Computing (SOC)--- is the new emerging paradigm for distributed computing and e-business processing that has evolved from object-oriented and component computing to enable building agile networks of collaborating business applications distributed within and across organizational boundaries. Of the 181 papers submitted to the ICSOC conference, 10 were selected for the forum session, which took place on December 16th, 2003. The papers were chosen based on their technical quality, originality, relevance to SOC, and for being best suited to a poster presentation or a demonstration. This technical report contains the 10 papers presented during the forum session at the ICSOC conference. In particular, the last two papers in the report were submitted as industrial papers.

    UML-Based co-design framework for body sensor network applications

    Ph.D. thesis (Doctor of Philosophy)

    Visual Unified Modeling Language for the Composition of Scenarios in Modeling and Simulation Systems

    The Department of Defense uses modeling and simulation systems in many roles, from research and training to modeling the likely outcomes of command decisions. Simulation systems have been increasing in complexity as low-cost computer systems have become increasingly capable of supporting these DOD requirements. The demand for scenarios is also increasing, but the complexity of the simulation systems has caused a bottleneck in scenario development due to the limited number of individuals with knowledge of the arcane simulator languages in which these scenarios are written. This research combines the results of previous AFIT efforts in visual modeling languages to create a language that unifies the description of the entities within a scenario with their behavior, using a visual tool that was developed in the course of this research. The resulting language has a grammar and syntax that can be parsed from the visual representation of the scenario. The language is designed so that scenarios can be described in a generic manner, not tied to a specific simulation system, allowing the future development of modules to translate the generic scenario into simulation-system-specific scenarios.
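
    The translation architecture implied above can be sketched as follows (hypothetical interfaces, not the actual AFIT tool): the parsed, simulator-independent scenario is handed to pluggable translator modules, one per target simulation system.

        import java.util.ArrayList;
        import java.util.List;

        // Hypothetical sketch of the generic-scenario / per-simulator translator split.
        class Entity {
            String type;          // e.g. an aircraft or ground unit type
            String behaviorSpec;  // behaviour captured in the generic visual language
            Entity(String type, String behaviorSpec) { this.type = type; this.behaviorSpec = behaviorSpec; }
        }

        class GenericScenario {
            String name;
            List<Entity> entities = new ArrayList<>();
            GenericScenario(String name) { this.name = name; }
        }

        // One translator module per target simulation system.
        interface ScenarioTranslator {
            String translate(GenericScenario scenario);   // emits the simulator's native scenario text
        }

        class ExampleSimTranslator implements ScenarioTranslator {
            public String translate(GenericScenario scenario) {
                StringBuilder out = new StringBuilder("SCENARIO " + scenario.name + "\n");
                for (Entity e : scenario.entities) {
                    out.append("ENTITY ").append(e.type)
                       .append(" BEHAVIOR ").append(e.behaviorSpec).append("\n");
                }
                return out.toString();
            }
        }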