28 research outputs found

    A Fitted Random Sampling Scheme for Load Distribution in Grid Networks

    Grid networks make high-throughput computing possible by harnessing the resources of many networked computers to solve large-scale computation problems. As Grid networks have grown in popularity, the load needs to be distributed efficiently among the resources accessible on the network. In this paper, we present a stochastic network system that provides a distributed load-balancing scheme by generating almost-regular networks. This network system is self-organized and depends only on local information for load distribution and resource discovery. The in-degree of each node reflects its free resources, and the job assignment and resource discovery processes required for load balancing are accomplished using fitted random sampling. Simulation results show that the generated network system provides an effective, scalable, and reliable load-balancing scheme for the distributed resources accessible on Grid networks.
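    The core idea described above is that a node's in-degree in the overlay tracks its free resources, so a random sample of the overlay tends to land on lightly loaded nodes. The Python sketch below illustrates that idea under simplifying assumptions: the node class, the wiring rule, and the walk length are invented for illustration and do not reproduce the paper's exact fitted-random-sampling construction.

```python
import random

# Illustrative sketch only: the wiring rule and walk length below are
# assumptions, not the paper's exact fitted-random-sampling scheme.

class Node:
    def __init__(self, name, free_slots):
        self.name = name
        self.free_slots = free_slots  # free resources advertised by this node
        self.out_links = []           # overlay edges this node can forward along

def build_overlay(nodes):
    """Wire a directed overlay so each node's in-degree tracks its free slots."""
    for target in nodes:
        others = [n for n in nodes if n is not target]
        for _ in range(target.free_slots):
            random.choice(others).out_links.append(target)

def sample_target(start, walk_length=3):
    """Short random walk: nodes with higher in-degree (more free slots)
    are reached proportionally more often."""
    current = start
    for _ in range(walk_length):
        if not current.out_links:
            break
        current = random.choice(current.out_links)
    return current

def assign_job(nodes):
    target = sample_target(random.choice(nodes))
    if target.free_slots > 0:
        target.free_slots -= 1  # one unit of capacity consumed by the job
        return target.name
    return None  # retry/resampling on a loaded node is omitted for brevity

nodes = [Node(f"n{i}", free_slots=random.randint(1, 5)) for i in range(10)]
build_overlay(nodes)
print(assign_job(nodes))
```

    Because only local link information is used, each node can rebuild or adjust its links as its free resources change, which is what makes the scheme self-organizing.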

    Rômulo Silva de Oliveira


    Creating telecommunication services based on object-oriented frameworks and SDL

    This paper describes the tools and techniques being applied in the TINA Open Service Creation Architecture (TOSCA) project to develop object-oriented models of distributed telecommunication services in SDL. The paper also describes the way in which Tree and Tabular Combined Notation (TTCN) test cases are derived from these models and subsequently executed against the CORBA-based implementations of these services through a TTCN/CORBA gateway.
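    As a rough illustration of the gateway idea, the sketch below maps abstract TTCN-style test steps onto calls against a service implementation and collects a verdict. It is a hypothetical, in-process stand-in: the class and method names are invented, and a real TTCN/CORBA gateway would drive CORBA object references generated from the service's IDL.

```python
# Hypothetical, simplified stand-in for the gateway idea: abstract TTCN-style
# test steps are mapped onto method calls of a service implementation.
# All names here are invented for illustration.

class FakeServiceImpl:
    """Stands in for a CORBA servant; a real gateway would hold an object reference."""
    def start_session(self, user):
        return "OK" if user else "ERROR"

class Gateway:
    def __init__(self, service):
        self.service = service

    def execute(self, test_step):
        """Translate one abstract test step into a concrete call and compare results."""
        operation, args, expected = test_step
        actual = getattr(self.service, operation)(*args)
        return "pass" if actual == expected else "fail"

test_case = [
    ("start_session", ("alice",), "OK"),
    ("start_session", ("",), "ERROR"),
]

gw = Gateway(FakeServiceImpl())
verdicts = [gw.execute(step) for step in test_case]
print("overall verdict:", "pass" if all(v == "pass" for v in verdicts) else "fail")
```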

    Attribute-based filtering for embedded systems


    Transmitting sensor data from a physical twin to a digital twin (Anturidatan lähettäminen fyysiseltä kaksoselta digitaaliselle kaksoselle)

    A digital twin is a digital counterpart of a physical thing, such as a machine. The term digital twin was first introduced in 2010 and has since received extensive interest because of the numerous benefits it is expected to offer throughout the product life cycle. Currently, the concept is being developed by some of the world's largest companies, such as Siemens. The purpose of this thesis is to examine which application layer protocols and communication technologies are the most suitable for transmitting sensor data from a physical twin to a digital twin. In addition, a platform enabling this data transmission is developed. As the concept of a digital twin is relatively new, a comprehensive review of how a digital twin is defined in the scientific literature is presented. The vision of a digital twin is found to have evolved from the concepts of 'intelligent products' presented at the beginning of the 2000s. The most widely adopted definition states that a digital twin accurately mirrors the current state of its corresponding twin. However, the definition of a digital twin is not yet standardized and varies across fields. Based on the literature review, the communication needs of a digital twin are derived. Thereafter, the suitability of HTTP, MQTT, CoAP, XMPP, AMQP, DDS, and OPC UA for sensor data transmission is examined through a literature review. In addition, a review of 4G, 5G, NB-IoT, LoRa, Sigfox, Bluetooth, Wi-Fi, Z-Wave, ZigBee, and WirelessHART is presented. A platform for the management of the sensors is developed. The platform narrows the gap between the concept and the realization of a digital twin by enabling sensor data transmission. It allows sensors to be added to a physical twin easily and provides an interface for configuring them remotely over the Internet. It supports multiple sensor types and application protocols and offers both a web user interface and a REST API.

    A digital twin is a digital counterpart of a physical product that holds information about its current state. The concept of a digital twin was first introduced in 2010. Since then, digital twins have received a great deal of attention, and the world's largest companies, such as Siemens, have begun developing them. The purpose of this work is to examine which application layer protocols and wireless networks are best suited for transmitting the data collected by sensors from a physical twin to a digital twin. In addition, the work presents a platform that enables this data transfer. A broad literature review of the digital twin is presented, laying the groundwork for the later parts of the work. The digital twin concept builds on the ideas of 'intelligent products' introduced at the beginning of the 2000s. According to the most widely used definition, a digital twin mirrors the current state of its physical counterpart. However, the definition varies between fields and is not yet established in the scientific literature. The communication needs of a digital twin are derived from the literature review. The suitability of the following application layer protocols for transmitting sensor data is then assessed through a literature review: HTTP, MQTT, CoAP, XMPP, AMQP, DDS, and OPC UA. The suitability of the following wireless networks for data transfer is also studied: 4G, 5G, NB-IoT, LoRaWAN, Sigfox, Bluetooth, Wi-Fi, Z-Wave, ZigBee, and WirelessHART. As part of the work, a software platform was also developed that enables remote management of the sensors over the Internet. The platform is a small step towards a practical realization of a digital twin, as it enables data collection from the physical counterpart. It makes adding sensors to a physical twin easy and supports multiple sensor types as well as application layer protocols. The platform supports a REST API and includes a web user interface.
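    As a minimal illustration of the kind of sensor-to-twin transmission the platform enables, the Python sketch below pushes one reading to a REST endpoint over HTTP, one of the application layer protocols examined in the thesis. The endpoint URL, payload fields, and sensor identifier are invented for illustration and are not the platform's actual API.

```python
import json
import time
import urllib.request

# Minimal sketch, assuming the digital-twin platform exposes an HTTP/REST
# ingestion endpoint. URL, payload shape, and sensor id are hypothetical.

ENDPOINT = "http://example.com/api/sensors/temp-01/readings"  # hypothetical

def read_temperature():
    """Placeholder for an actual sensor driver on the physical twin."""
    return 21.7

def push_reading():
    payload = {
        "timestamp": time.time(),
        "value": read_temperature(),
        "unit": "C",
    }
    request = urllib.request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=5) as response:
        return response.status  # 2xx expected on successful ingestion

if __name__ == "__main__":
    print(push_reading())
```

    A production physical twin would typically publish such readings continuously, and could swap HTTP for MQTT or CoAP where bandwidth or power is constrained, which is exactly the trade-off the thesis reviews.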

    Preliminary definition of CORTEX interaction model

    As scheduled in the Technical Annex, WP2-D3 comprises work on the basic communication abstractions and on context and environmental awareness. It is structured into an introduction, providing a short survey of the content, and four technical chapters. Chapter 2 describes the notion of event channels as a basic middleware abstraction of the interaction model. The concept of event channels accommodates an event-based, generative, many-to-many, anonymous communication model. It contributes to resolving the trade-off between autonomy and the need for coordination. Rather than explicitly coordinating actions by transferring control, an event channel allows interaction via a shared data space, thereby maintaining the autonomy of components. A comparison with alternative schemes is presented in chapter 3, where the impact of the interaction scheme on the modelling and implementation of a complex robotic application is analysed. It provides additional arguments in favour of a publisher/subscriber communication architecture. One of the challenges in CORTEX is to integrate the cooperation of components through the environment into the general interaction concept. The sensor capabilities of the sentient components and their ability to interact with the environment open new ways of cooperation. A mechanism called stigmergy, borrowed from biology and discussed in the CORTEX context, is presented in chapter 4. Any activity carried out in the physical world needs to adapt to the pace and dependability requirements dictated by the environment. In technical terms, this means that non-functional properties of the system, such as timeliness and reliability of operation, have to be included. These Quality of Service (QoS) attributes have to be guaranteed even in an environment where unanticipated dynamic change is one of the inherent properties. Chapter 5 introduces an adaptive QoS mechanism based on a reliable and timely system service. This service, called the Timely Computing Base (TCB), is able to monitor distributed system activities and to provide an "early warning system" for temporal and functional failures. The TCB thus provides part of the context and environmental awareness needed for adaptation.
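    The event-channel abstraction of chapter 2 can be pictured with a small publish/subscribe sketch: producers publish to a subject on a shared channel without knowing the consumers, and any number of subscribers receive the event. The Python code below is illustrative only; the channel subjects and event shape are invented, and the QoS attributes (timeliness, reliability) that CORTEX attaches to event channels are omitted.

```python
from collections import defaultdict

# Minimal sketch of a generative, many-to-many, anonymous event channel.
# Subject names and the event payload are illustrative assumptions.

class EventChannel:
    def __init__(self):
        self._subscribers = defaultdict(list)  # subject -> list of callbacks

    def subscribe(self, subject, callback):
        """Consumers register interest in a subject, not in a specific producer."""
        self._subscribers[subject].append(callback)

    def publish(self, subject, event):
        """Producers emit events without knowing who (if anyone) consumes them."""
        for callback in self._subscribers[subject]:
            callback(event)

channel = EventChannel()
channel.subscribe("obstacle", lambda e: print("planner sees", e))
channel.subscribe("obstacle", lambda e: print("logger records", e))
channel.publish("obstacle", {"distance_m": 1.2, "bearing_deg": 30})
```

    Because producers and subscribers only share the channel, components stay autonomous: either side can be added or removed without the other transferring control to it, which is the trade-off resolution the deliverable argues for.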

    A model for providing dynamic real-time guarantees in component-based middleware (Um modelo para provisão de garantia dinâmica de tempo real em middleware baseado em componentes)

    Doctoral thesis - Universidade Federal de Santa Catarina, Centro Tecnológico, Programa de Pós-Graduação em Engenharia Elétrica, Florianópolis, 2007.

    The component-based approach was developed in response to the need to deal with application complexity and to shorten the software development cycle. Separating a component's application logic from its non-functional part allows timing requirements to be configured rather than embedded throughout the code. As a result, components become less dependent on the underlying platform and can be reused in different applications. This work presents a model for providing dynamic guarantees in component-based distributed real-time systems. The model makes the acceptance of a client conditional on the availability of resources to satisfy the timing requirements of that client and of previously accepted clients. It allows different algorithms to be adopted for the acceptance test, matching the model of the scheduled tasks or the capacity of the platform. Another contribution is a service for monitoring component response times, initially developed to provide input data for the dynamic guarantee model. The monitoring service keeps the dynamic guarantee mechanism accurate despite fluctuations in the server's computational load and enables probabilistic algorithms to be applied in the dynamic guarantee model.

    Abstract: The component-based approach was developed in response to the need to cope with application complexity and reduce software development time. The component separation of concerns allows real-time constraints to be configured instead of hard coded. As a result, components become less dependent on the underlying platform and can be reused in different applications. This work presents a model for real-time dynamic guarantees for component-based distributed systems. According to the model, the acceptance of a client into the system is subject to the availability of resources to satisfy all clients' real-time constraints. This model allows the use of different algorithms for the acceptance test, according to the application task model or the platform capacity. Another contribution is the response time monitoring service, developed to provide input data for the dynamic guarantee model. This service provides updates for the dynamic guarantee model and also allows the use of probabilistic approaches for the acceptance test.
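    The acceptance test at the heart of the dynamic guarantee model can be illustrated with a simple utilization-based check: a new client is admitted only if the combined demand of the already accepted clients and the candidate stays within a schedulable bound. The Python sketch below uses an EDF-style utilization bound as one possible algorithm among those the model admits; the task parameters and field names are invented, and the response-time monitoring feedback described in the thesis is omitted.

```python
# Minimal sketch of a utilization-based acceptance test (EDF bound of 1.0),
# one of several admission algorithms the model allows. Parameters invented.

def accepts(accepted_clients, candidate):
    """Admit the candidate only if total utilization stays within the EDF bound."""
    total = sum(c["cost"] / c["period"] for c in accepted_clients)
    total += candidate["cost"] / candidate["period"]
    return total <= 1.0

accepted = [
    {"name": "client-a", "cost": 2.0, "period": 10.0},   # utilization 0.2
    {"name": "client-b", "cost": 15.0, "period": 50.0},  # utilization 0.3
]

new_client = {"name": "client-c", "cost": 20.0, "period": 40.0}  # utilization 0.5
if accepts(accepted, new_client):
    accepted.append(new_client)
print([c["name"] for c in accepted])
```

    In the thesis, the "cost" side of such a test would be fed by the response-time monitoring service rather than fixed values, which is what keeps the admission decision accurate as the server's load fluctuates.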

    Model Checking Cyber-Physical Systems

    2017 - 2018

    Cyber-Physical Systems (CPSs) are integrations of computation with physical processes. Applications of CPSs arguably have the potential to overshadow the 20th-century IT revolution. Nowadays, CPS applications in many sectors, such as smart grids, transportation, and health, help us run our lives and businesses smoothly, successfully, and safely. Since malfunctions in these CPSs can have serious, expensive, sometimes fatal consequences, Simulation-Based Verification (SBV) tools are vital to minimize the probability of errors occurring during the development process and beyond. Their applicability is supported by the increasingly widespread use of Model-Based Design (MBD) tools. MBD enables the simulation of CPS models in order to check their correct behaviour from the very first design phase. The disadvantage is that SBV for complex CPSs is an extremely resource- and time-consuming process, which typically requires several months of simulation. Current SBV tools aim at accelerating the verification process with multiple simulators working simultaneously. To this end, they compute all the scenarios in advance so that they can be split and simulated in parallel. Nevertheless, there are still limitations that prevent a more widespread adoption of SBV tools. We therefore present an MBD methodology aimed at the acausal modelling and verification of the system under verification (SUV) via formal methods, specifically model checking. Our approach relies on two steps. First, the steady states of the CPS are analysed and the system's state is bounded in parallel with the simulation, so that the state space is identified by simulating the system only once and is then represented as a Finite State Machine (FSM). Second, the resulting FSM is exhaustively verified with a symbolic model checker, with the desired properties expressed in classical temporal logic. The application to a power management system is presented as a case study. [edited by Author]
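    The two-step methodology, simulating the system once to bound and discretize its state space into an FSM and then exhaustively checking properties on that FSM, can be outlined in a short sketch. The Python code below uses an invented toy dynamical system, and a plain reachability check stands in for the symbolic model checker and temporal-logic properties the thesis actually uses.

```python
from collections import defaultdict

# Minimal sketch of the two-step flow on a toy system (all parameters invented):
# (1) simulate once, quantize the continuous state, record observed transitions;
# (2) exhaustively check a property over the reachable abstract states.
# A real flow would export the FSM to a symbolic model checker and express
# properties in temporal logic; a simple invariant check stands in here.

def simulate(steps=200, dt=0.1):
    """Toy 'physical' dynamics: a value that charges and discharges."""
    x, trace = 0.0, []
    for k in range(steps):
        u = 1.0 if (k // 50) % 2 == 0 else -1.0   # alternating input
        x = x + dt * (u - 0.2 * x)
        trace.append(x)
    return trace

def quantize(x, step=0.5):
    return round(x / step)                         # bounded abstract state

def build_fsm(trace):
    transitions = defaultdict(set)
    for a, b in zip(trace, trace[1:]):
        transitions[quantize(a)].add(quantize(b))
    return transitions

def check_invariant(fsm, initial, ok):
    """Exhaustive reachability check: does 'ok' hold in every reachable state?"""
    seen, frontier = set(), [initial]
    while frontier:
        s = frontier.pop()
        if s in seen:
            continue
        seen.add(s)
        if not ok(s):
            return False
        frontier.extend(fsm.get(s, ()))
    return True

trace = simulate()
fsm = build_fsm(trace)
print(check_invariant(fsm, quantize(trace[0]), ok=lambda s: abs(s) <= 12))
```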