343 research outputs found

    Binary Pattern for Nested Cardinality Constraints for Software Product Line of IoT-Based Feature Models

    A software product line (SPL) is widely used to reuse resources across a family of products. Feature modeling is an important technique for managing the common and variable features of an SPL in applications such as the Internet of Things (IoT). To adopt an SPL for application development, organizations require information such as cost, scope, complexity, the number of features, the total number of products, and the combination of features in each product. IoT application development varies with context, such as a heat sensor deployed in an indoor versus an outdoor environment. Variability management of IoT applications makes it possible to determine cost, scope, and complexity, and knowing all possible feature combinations makes it easy to determine the cost of each individual application. However, the exact number of possible products, together with the feature combination of each, is even more valuable information for an organization adopting a product line. In this paper, we propose binary pattern for nested cardinality constraints (BPNCC), a simple and effective approach to calculating the exact number of products in feature models with complex relationships. Furthermore, BPNCC identifies the feasible feature combinations of each IoT application by tracing constraint relationships from top to bottom. BPNCC is an open-source, tool-independent approach that does not hide the internal information of selected and non-selected IoT features. The proposed method is validated by applying it to small and large IoT application feature models with an arbitrary number of constraints, and it computes the total number of products and all feature combinations in each product without any constraint violation.
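
    The paper's exact counting procedure is not reproduced here, but the core idea of enumerating binary feature-selection patterns and filtering them against constraints can be sketched as follows. This is a minimal Python illustration; the feature names and constraint rules are hypothetical, not the authors' implementation.

        # Sketch: count the valid products of a toy feature model by
        # enumerating binary selection patterns and keeping only those
        # that satisfy the constraints (illustrative, not BPNCC itself).
        from itertools import product

        FEATURES = ["sensor", "indoor", "outdoor", "logging"]  # hypothetical

        def is_valid(selected):
            if "sensor" not in selected:                  # mandatory feature
                return False
            if ("indoor" in selected) == ("outdoor" in selected):
                return False                              # alternative group: exactly one
            if "logging" in selected and "indoor" not in selected:
                return False                              # requires-constraint
            return True

        def enumerate_products(features):
            for bits in product([0, 1], repeat=len(features)):  # binary pattern
                selected = {f for f, b in zip(features, bits) if b}
                if is_valid(selected):
                    yield selected

        products = list(enumerate_products(FEATURES))
        print(len(products))          # exact number of products
        for p in products:
            print(sorted(p))          # feature combination of each product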

    Clafer: Lightweight Modeling of Structure, Behaviour, and Variability

    Embedded software is growing fast in size and complexity, leading to an intimate mixture of complex architectures and complex control. Consequently, software specification requires modeling both the structure and the behaviour of systems. Unfortunately, existing languages do not integrate these aspects well, usually prioritizing one of them; it is common to develop a separate language for each facet. In this paper, we contribute Clafer: a small language that attempts to tackle this challenge. It combines rich structural modeling with state-of-the-art behavioural formalisms. We are not aware of any other modeling language that seamlessly combines these facets common to system and software modeling. We show how Clafer, in a single unified syntax and semantics, captures feature models (variability), component models, discrete control models (automata), and variability encompassing all these aspects. The language is built on top of first-order logic with quantifiers over basic entities (for modeling structure) combined with linear temporal logic (for modeling behaviour). On top of this semantic foundation we build a simple but expressive syntax, enriched with carefully selected syntactic expansions that cover hierarchical modeling, associations, automata, scenarios, and Dwyer's property patterns. We evaluate Clafer using a power window case study and by comparing it against other notations that substantially overlap with its scope (SysML, AADL, Temporal OCL, and Live Sequence Charts), discussing the benefits and perils of using a single notation for the purpose.
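
    The kind of question Clafer answers natively, a temporal property checked against a structural model, can be illustrated with a small Python sketch. Here a toy power-window controller is explored exhaustively and a safety property in the spirit of LTL's "G !(motor_up & motor_down)" is checked; the model and names are hypothetical, and this is not Clafer syntax.

        # Toy power-window controller: explore all reachable states and
        # check a safety invariant, i.e. LTL "G !(motor_up & motor_down)".
        STATE0 = {"motor_up": False, "motor_down": False}

        EVENTS = [
            lambda s: {**s, "motor_down": True, "motor_up": False},   # press down
            lambda s: {**s, "motor_up": True, "motor_down": False},   # press up
            lambda s: {**s, "motor_up": False, "motor_down": False},  # release
        ]

        def reachable(state0):
            seen, frontier = [], [state0]
            while frontier:
                s = frontier.pop()
                if s in seen:
                    continue
                seen.append(s)
                frontier.extend(event(s) for event in EVENTS)
            return seen

        states = reachable(STATE0)
        # The motor is never driven up and down at the same time.
        assert all(not (s["motor_up"] and s["motor_down"]) for s in states)
        print("safety property holds in all", len(states), "states")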

    Multi-Objective Optimum Solutions for IoT-Based Feature Models of Software Product Line

    A software product line is used to develop a family of products by reusing existing resources, lowering cost and time to market. A feature model (FM) is used extensively to manage the common and variable features of a family of products, such as Internet of Things (IoT) applications. In the literature, the binary pattern for nested cardinality constraints (BPNCC) approach has been proposed to compute all possible combinations of development features for IoT applications without violating any relationship constraints, where relationship constraints are a predefined set of rules for selecting features from an FM. Due to the high probability of constraint violations, obtaining optimum feature combinations from large IoT-based FMs is a challenging task. Therefore, to obtain optimum solutions, in this paper we propose multi-objective optimum-BPNCC, which consists of three independent paths. Applying heuristics to these paths, we found that the first path is infeasible due to its space and execution-time complexity. The second path reduces the space complexity, but its time complexity increases as feature groups grow. The third path performs best, as it removes optional features that are not required for optimization. In experiments, the outcomes of all three paths show a significant improvement in optimum solutions without any constraint violations. We argue theoretically that the proposed approach improves on previously proposed optimization algorithms, such as the non-dominated sorting genetic algorithm and the indicator-based evolutionary algorithm.
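
    The multi-objective step can be pictured as a non-dominated filter over the valid products that a BPNCC-style enumeration yields. The Python sketch below shows such a filter for two hypothetical objectives (minimize cost, maximize benefit); the products and values are illustrative, not the paper's data.

        # Pareto filter over enumerated products (illustrative values).
        PRODUCTS = [
            ("p1", 10, 3),   # (name, cost, benefit)
            ("p2", 12, 5),
            ("p3", 15, 5),   # dominated by p2: higher cost, same benefit
            ("p4", 8, 2),
        ]

        def dominates(a, b):
            """a dominates b if it is no worse on both objectives and
            strictly better on at least one (min cost, max benefit)."""
            _, ca, ba = a
            _, cb, bb = b
            return ca <= cb and ba >= bb and (ca < cb or ba > bb)

        pareto = [p for p in PRODUCTS
                  if not any(dominates(q, p) for q in PRODUCTS)]
        print(pareto)   # p1, p2, and p4 form the non-dominated front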

    Vertical Optimizations of Convolutional Neural Networks for Embedded Systems

    The abstract is in the attachment.

    A Context- and Template-Based Data Compression Approach to Improve Resource-Constrained IoT Systems Interoperability

    The goal of the Internet of Things (IoT) is to interconnect all kinds of things, from simple devices, such as a light bulb or a thermostat, to more complex and abstract elements, such as a machine or a house. These devices and elements vary enormously, especially in the capabilities they possess and the types of technology they use. This heterogeneity makes integration processes highly complex as far as interoperability is concerned. A common approach to addressing interoperability at the data-representation level in IoT systems is to structure data according to a standard data model and text-based data formats (e.g., XML). However, the devices typically used in IoT systems have limited capabilities and scarce processing and communication resources. Because of these limitations, text-based data formats cannot be integrated simply and efficiently on resource-constrained devices and networks. In this thesis, we present a novel data compression solution for text-based data formats, designed specifically around the limitations of resource-constrained devices and networks. We call this solution Context- and Template-based Compression (CTC). CTC improves the data-level interoperability of IoT systems while requiring very few resources in terms of communication bandwidth, memory size, and processing power.
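
    The thesis specifies the CTC codec in detail; the Python sketch below only illustrates the underlying template idea, namely that sender and receiver share the recurring markup out of band so that only the variable field values travel over the constrained link. The template, field names, and wire format here are hypothetical.

        # Illustrative template-based compression (not the CTC codec itself):
        # the recurring XML markup is shared out of band as a template, and
        # only the variable values are transmitted.
        TEMPLATE = "<obs><id>{}</id><temp>{}</temp></obs>"  # hypothetical

        def compress(sensor_id, temp):
            return f"{sensor_id};{temp}".encode()   # values only on the wire

        def decompress(payload):
            sensor_id, temp = payload.decode().split(";")
            return TEMPLATE.format(sensor_id, temp)

        wire = compress("s42", 21.5)
        print(len(wire), "bytes on the wire instead of",
              len(decompress(wire)), "bytes of XML")
        print(decompress(wire))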

    Correct-by-Construction Development of Dynamic Topology Control Algorithms

    Wireless devices are influencing our everyday lives today and will do so even more in the future. A wireless sensor network (WSN) consists of dozens to hundreds of small, cheap, battery-powered, resource-constrained sensor devices (motes) that cooperate to serve a common purpose. These networks are applied in safety- and security-critical areas (e.g., e-health, intrusion detection). The topology of such a system is an attributed graph whose nodes represent the devices and whose edges represent the communication links between them. Topology control (TC) improves the energy-consumption behavior of a WSN by blocking costly links, which allows a mote to reduce its transmission power. A TC algorithm must fulfill important consistency properties (e.g., that the resulting topology is connected). The traditional development process for TC algorithms considers consistency properties only during the initial specification phase. The actual implementation is carried out manually, which is error prone and time consuming, so it is difficult to verify that the implementation fulfills the required consistency properties. The problem becomes even more severe if the development process is iterative. Additionally, many TC algorithms are batch algorithms that process the entire topology, irrespective of the extent of the topology modifications since the last execution. Dynamic TC, which reacts to change events of the topology, is therefore desirable. In this thesis, we propose a model-driven correct-by-construction methodology for developing dynamic TC algorithms. We model local consistency properties using graph constraints and global consistency properties using second-order logic. Graph transformation rules capture the different types of topology modifications. To specify the control flow of a TC algorithm, we employ the programmed graph transformation language story-driven modeling. We presume that the local consistency properties jointly imply the global consistency properties, and we ensure the fulfillment of the local consistency properties by synthesizing a weakest precondition for each rule. The synthesized preconditions prohibit the application of a rule if and only if the application would lead to a violation of a consistency property. Still, this restriction is infeasible for topology modifications that need to be executed in any case. Therefore, as a major contribution of this thesis, we propose the anticipation loop synthesis algorithm, which transforms the synthesized preconditions into routines that anticipate all violations of these preconditions. This algorithm also enables the correct-by-construction runtime reconfiguration of adaptive WSNs. We provide tooling for both common evaluation steps: Cobolt allows the specified TC algorithms to be evaluated rapidly using the network simulator Simonstrator, and cMoflon generates embedded C code for hardware testbeds that build on the sensor operating system Contiki.
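
    The guarded-rule idea at the heart of the methodology can be sketched compactly: a "block link" rule fires only when a precondition guarantees the consistency property (here, connectivity) still holds afterwards. In the thesis these preconditions are synthesized from graph constraints; in this Python sketch the check is hand-written and the topology is hypothetical.

        # A "block link" rule guarded by a connectivity precondition
        # (illustrative; the thesis synthesizes such preconditions).
        def connected(nodes, edges):
            """Reachability over an undirected edge set."""
            if not nodes:
                return True
            seen, frontier = set(), [next(iter(nodes))]
            while frontier:
                n = frontier.pop()
                if n in seen:
                    continue
                seen.add(n)
                frontier += [b if a == n else a
                             for a, b in edges if n in (a, b)]
            return seen == set(nodes)

        def try_block_link(nodes, edges, link):
            """Apply the rule only when its precondition holds."""
            remaining = edges - {link}
            if connected(nodes, remaining):   # precondition
                return remaining              # rule applied: link blocked
            return edges                      # rule rejected: keep the link

        NODES = {"a", "b", "c"}
        EDGES = {("a", "b"), ("b", "c"), ("a", "c")}
        EDGES = try_block_link(NODES, EDGES, ("a", "c"))  # ok: still connected
        EDGES = try_block_link(NODES, EDGES, ("a", "b"))  # rejected: would cut off "a"
        print(EDGES)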

    A Two-Level Information Modelling Translation Methodology and Framework to Achieve Semantic Interoperability in Constrained GeoObservational Sensor Systems

    As geographical observational data capture, storage, and sharing technologies such as in situ remote monitoring systems and spatial data infrastructures (SDIs) evolve, the vision of a Digital Earth, first articulated by Al Gore in 1998, is getting ever closer. However, there are still many challenges and open research questions. For example, data quality, provenance, and heterogeneity remain an issue due to the complexity of geo-spatial data and information representation. Observational data are often inadequately semantically enriched by geo-observational information systems or spatial data infrastructures, and so they often do not fully capture the true meaning of the associated datasets. Furthermore, the data models underpinning these information systems are typically too rigid in their data representation to allow for the ever-changing and evolving nature of geo-spatial domain concepts. This impoverished approach to observational data representation reduces the ability of multi-disciplinary practitioners to share information in an interoperable and computable way. The health domain experiences similar challenges in representing complex and evolving domain information concepts. Within any complex domain (such as Earth system science or health), two levels of domain concepts exist: those that remain stable over a long period of time, and those that are prone to change as the domain knowledge evolves and new discoveries are made. Health informaticians have developed a sophisticated two-level modelling systems design approach for electronic health documentation over many years and, with the use of archetypes, have shown how data, information, and knowledge interoperability among heterogeneous systems can be achieved. This research investigates whether two-level modelling can be translated from the health domain to the geo-spatial domain and applied to observing scenarios to achieve semantic interoperability within and between spatial data infrastructures, beyond what is possible with current state-of-the-art approaches. A detailed review of state-of-the-art SDIs, geo-spatial standards, and the two-level modelling methodology was performed. A cross-domain translation methodology was developed, and a proof-of-concept geo-spatial two-level modelling framework was defined and implemented. The Open Geospatial Consortium's (OGC) Observations & Measurements (O&M) standard was re-profiled to aid investigation of the two-level information modelling approach. The method was evaluated using two specific use-case scenarios. Information modelling was performed using the two-level method to show how existing historical ocean observing datasets can be expressed semantically and harmonized. The flexibility of the approach was also investigated by applying the method to an air quality monitoring scenario using a technologically constrained monitoring sensor system. This work has demonstrated that two-level modelling can be translated to the geo-spatial domain and further developed for use within a constrained technological sensor system, using traditional wireless sensor networks, semantic web technologies, and Internet of Things based technologies. Domain-specific evaluation results show that two-level modelling presents a viable approach to achieving semantic interoperability between constrained geo-observational sensor systems and spatial data infrastructures for ocean observing and city-based air quality observing scenarios.

    This has been demonstrated through the re-purposing of selected existing geo-spatial data models and standards. However, re-using existing standards was found to require careful ontological analysis per domain concept, so caution is recommended in assuming the wider applicability of the approach. While the benefits of adopting a two-level information modelling approach to geo-spatial information modelling are potentially great, translation to a new domain proved complex, and this complexity was found to be a barrier to adoption, especially in commercial projects where standards implementation is low on implementation road maps and the perceived benefits of standards adherence are low. Arising from this work, a novel set of base software components, methods, and fundamental geo-archetypes has been developed. However, during this work it was not possible to form the rich community of supporters required to fully validate the geo-archetypes; the findings are therefore not exhaustive, and the archetype models produced are only indicative. The findings of this work can be used as the basis to encourage further investigation and uptake of two-level modelling within the Earth system science and geo-spatial domains. Ultimately, this work recommends further development and evaluation of the approach, building on the positive results thus far and on the base software artefacts developed to support it.
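
    The two-level split can be made concrete with a small sketch: a stable reference model (level one) paired with a separate, evolvable archetype (level two) that constrains it per domain concept. The Python below is only illustrative; the Observation fields and the sea-temperature archetype are hypothetical stand-ins, not the geo-archetypes developed in this work.

        # Two-level modelling in miniature: stable reference model plus
        # a per-concept archetype that constrains valid instances.
        from dataclasses import dataclass

        @dataclass
        class Observation:              # level 1: stable reference model
            observed_property: str
            value: float
            unit: str

        # level 2: an evolvable archetype constraining the reference model
        SEA_TEMPERATURE_ARCHETYPE = {
            "observed_property": {"sea_water_temperature"},
            "unit": {"degC"},
            "value_range": (-2.0, 40.0),
        }

        def conforms(obs, archetype):
            lo, hi = archetype["value_range"]
            return (obs.observed_property in archetype["observed_property"]
                    and obs.unit in archetype["unit"]
                    and lo <= obs.value <= hi)

        obs = Observation("sea_water_temperature", 12.3, "degC")
        print(conforms(obs, SEA_TEMPERATURE_ARCHETYPE))   # True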