103 research outputs found

    Designing Distributed, Component-Based Systems for Industrial Robotic Applications

    Get PDF
    M. Amoretti; S. Caselli; M. Reggiani

    Abstracting Application Development for Resource Constrained Wireless Sensor Networks

    Get PDF
    Ubiquitous computing is a concept whereby computing is distributed across smart objects surrounding users, creating ambient intelligence. Ubiquitous applications use technologies such as the Internet, sensors, actuators, embedded computers, wireless communication, and new user interfaces. The Internet-of-Things (IoT) is one of the key concepts in the realization of ubiquitous computing, whereby smart objects communicate with each other and with the Internet. Wireless Sensor Networks (WSNs) are a sub-group of IoT technologies that consist of geographically distributed devices, or nodes, capable of sensing and actuating in the environment.

    WSNs typically contain tens to thousands of nodes that organize and operate autonomously to perform application-dependent sensing and sensor data processing tasks. The projected applications require nodes that are physically small and low-cost, and that have a long lifetime with limited energy resources, while performing complex computing and communications tasks. As a result, WSNs are complex distributed systems constrained by communications, computing, and energy resources. WSN functionality is dynamic, following the environment and application requirements. Dynamic multitasking, task distribution, task injection, and software updates are required in field experiments for possibly thousands of nodes functioning in harsh environments.

    The development of WSN application software requires the abstraction of computing, communication, data access, and heterogeneous sensor data sources to reduce these complexities. Abstractions enable faster development of new applications with better reuse of existing software, as applications are composed of high-level tasks that use the services provided by the devices to execute the application logic.

    The main research question of this thesis is: what abstractions are needed for application development for resource constrained WSNs? This thesis models WSN abstractions with three levels that build on top of each other: 1) node abstraction, 2) network abstraction, and 3) infrastructure abstraction. The node abstraction hides the details of using the sensing, communication, and processing hardware. The network abstraction specifies methods for discovering and accessing services and for distributing processing in the network. The infrastructure abstraction unifies different sensing technologies and infrastructure computing platforms.

    As a contribution, this thesis presents the abstraction model with a review of each abstraction level. Several designs for each of the levels are tested and verified with proofs of concept and analyses of field experiments. The resulting designs consist of an operating system kernel, a software update method, a data unification interface, and an abstraction combining all the levels, called an embedded cloud.

    The presented operating system kernel has a scalable overhead and provides a programming approach similar to that of a desktop operating system, with threads and processes. An over-the-air update method combines low-overhead, robust software updating with application task dissemination. The data unification interface homogenizes access to the data of heterogeneous sensor networks; a single unification model serves various use cases by mapping everything as measurements. The embedded cloud allows resource constrained WSNs to share services and data, and to expand their resources with other technologies. It also allows the distributed processing of applications according to the available services; the applications are implemented as processes using a hardware-independent description language that can be executed on resource constrained WSNs.

    The lessons of practical field experimenting are analyzed to study the importance of the abstractions. Software complexities encountered in the field experiments highlight the need for suitable abstractions. The results of this thesis are tested using proof-of-concept implementations on real WSN hardware, which is constrained by computing power on the order of a few MIPS, memory sizes of a few kilobytes, and small batteries. The results will remain usable in the future, as the vast number, tight integration, and low cost of future IoT devices require combining complex computation with resource constrained platforms.
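
    The "mapping everything as measurements" idea lends itself to a small illustration. The sketch below (Python purely for brevity; the thesis targets kilobyte-scale nodes and a hardware-independent description language, not Python) shows how a unified measurement record could homogenize heterogeneous sensor data behind one interface. The record fields, the driver function, and the thermistor transfer function are illustrative assumptions, not the thesis's actual interface.

        from dataclasses import dataclass
        import time

        @dataclass
        class Measurement:
            node_id: int      # originating WSN node
            quantity: str     # e.g. "temperature"
            unit: str         # unit string, e.g. "Cel"
            value: float
            timestamp: float  # seconds since epoch

        def temperature_from_adc(node_id: int, adc_value: int) -> Measurement:
            # Hypothetical driver: convert a raw 10-bit ADC reading from a
            # thermistor into the unified record (assumed transfer function).
            celsius = (adc_value / 1023.0) * 3.3 * 100.0 - 50.0
            return Measurement(node_id, "temperature", "Cel", celsius, time.time())

        print(temperature_from_adc(7, 512))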

    Energy-Efficient and Reliable Computing in Dark Silicon Era

    Get PDF
    Dark silicon denotes the phenomenon that, due to thermal and power constraints, the fraction of transistors that can operate at full frequency decreases with each technology generation. Moore's law and Dennard scaling were coupled for five decades, delivering commensurate exponential performance gains via single-core and later multi-core designs. However, recalculating Dennard scaling for recent small technology nodes shows that the ongoing multi-core growth demands exponentially increasing thermal design power to achieve a linear performance increase. This process hits a power wall that raises the amount of dark or dim silicon on future multi/many-core chips more and more. Furthermore, as the number of transistors on a single chip grows, susceptibility to internal defects and aging phenomena, both exacerbated by high thermal density, makes monitoring and managing chip reliability before and after deployment a necessity. The approaches and experimental investigations proposed in this thesis follow two main tracks: 1) power awareness and 2) reliability awareness in the dark silicon era; the two tracks are later combined. On the first track, the main goal is to maximize returns in terms of key chip design metrics, such as performance and throughput, while honoring a maximum power limit. In fact, we show that by managing power in the presence of dark silicon, all the traditional benefits of following Moore's law can still be achieved in the dark silicon era, although to a lesser degree. On the reliability-awareness track, we show that dark silicon can be treated as an opportunity to be exploited for several kinds of benefit, namely lifetime extension and online testing. We discuss how dark silicon can be exploited to guarantee that the system lifetime stays above a certain target value and, furthermore, how it can be exploited to apply low-cost, non-intrusive online testing to the cores. After demonstrating power and reliability awareness in the presence of dark silicon, two approaches are discussed as case studies in which the two are combined. The first approach demonstrates how chip reliability can be used as a supplementary metric for power-reliability management. The second approach provides a trade-off between workload performance and system reliability by simultaneously honoring the given power budget and a target reliability.
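
    The power-wall arithmetic summarized above can be made concrete with a back-of-the-envelope sketch. The scaling exponents used here are the standard Dennardian and post-Dennardian textbook values, not figures taken from the thesis.

        # One process generation shrinks features by ~0.7x, i.e. a linear
        # scaling factor S ~ 1.4, so the transistor budget grows as S^2.
        S = 1.4

        # Dennardian scaling: per-transistor switching power falls as 1/S^3
        # (C * V^2 * f with C ~ 1/S, V ~ 1/S, f ~ S), keeping full-chip
        # power roughly constant. Post-Dennard, voltage barely scales, so
        # per-transistor power falls only as ~1/S.
        full_chip_power_growth = (S ** 2) * (1 / S)   # = S per generation

        # At a fixed power budget, the usable (lit) fraction shrinks by 1/S
        # per generation; the rest is dark or dim silicon.
        usable = 1.0
        for gen in range(1, 6):
            usable /= full_chip_power_growth
            print(f"generation {gen}: usable fraction ~ {usable:.2f}")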

    Flexible Scheduling in Middleware for Distributed Rate-Based Real-Time Applications - Doctoral Dissertation, May 2002

    Get PDF
    Distributed rate-based real-time systems, such as process control and avionics mission computing systems, have traditionally been scheduled statically. Static scheduling provides assurance of schedulability prior to run-time and incurs low run-time overhead. However, static scheduling is brittle in the face of unanticipated overload, and treats invocation-to-invocation variations in resource requirements inflexibly. As a consequence, processing resources are often under-utilized in the average case, and the resulting systems are hard to adapt to meet new real-time processing requirements. Dynamic scheduling offers relief from the limitations of static scheduling. However, dynamic scheduling often has a high run-time cost because certain decisions are enforced on-line. Furthermore, under conditions of overload, tasks can be scheduled dynamically that may never be dispatched, or that upon dispatch would miss their deadlines. We review the implications of these factors on rate-based distributed systems, and posit the necessity to combine static and dynamic approaches to exploit the strengths and compensate for the weaknesses of either approach in isolation.

    We present a general hybrid approach to real-time scheduling and dispatching in middleware that can employ both static and dynamic components. This approach provides (1) feasibility assurance for the most critical tasks, (2) the ability to extend this assurance incrementally to operations in successively lower criticality equivalence classes, (3) the ability to trade off bounds on feasible utilization and dispatching overhead in cases where, for example, execution jitter is a factor or rates are not harmonically related, and (4) overall flexibility to make better use of scarce computing resources and to enforce a wider range of application-specified execution requirements. This approach also meets additional constraints of an increasingly important class of rate-based systems: those with requirements for robust management of real-time performance in the face of rapidly and widely changing operating conditions. To support these requirements, we present a middleware framework that implements the hybrid scheduling and dispatching approach described above, and also provides support for (1) adaptive re-scheduling of operations at run-time and (2) reflective alternation among several scheduling strategies to improve real-time performance in the face of changing operating conditions.

    Adaptive re-scheduling must be performed whenever operating conditions exceed the ability of the scheduling and dispatching infrastructure to meet the critical real-time requirements of the system under the currently specified rates and execution times of operations. Adaptive re-scheduling relies on the ability to change the rates of execution of at least some operations, and may occur under the control of a higher-level middleware resource manager. Different rates of execution may be specified under different operating conditions, and the number of such possible combinations may be arbitrarily large. Furthermore, adaptive re-scheduling may in turn require notification of rate-sensitive application components. It is therefore desirable to handle variations in operating conditions entirely within the scheduling and dispatching infrastructure when possible. A rate-based distributed real-time application, or a higher-level resource manager, could thus fall back on adaptive re-scheduling only when it cannot achieve acceptable real-time performance through self-adaptation. Reflective alternation among scheduling heuristics offers a way to tune real-time performance internally, and we offer foundational support for this approach. In particular, run-time observable information such as that provided by our metrics-feedback framework makes it possible to detect that the current scheduling heuristic is underperforming the level of service another could provide; this forms the basis for guided adaptation. Furthermore, we present empirical results for our framework in a realistic avionics mission computing environment.

    This dissertation makes five contributions in support of flexible and adaptive scheduling and dispatching in middleware. First, we provide a middleware scheduling framework that supports arbitrary and fine-grained composition of static and dynamic scheduling, to assure critical timeliness constraints while improving noncritical performance under a range of conditions. Second, we provide a flexible dispatching infrastructure framework composed of fine-grained primitives, and describe how appropriate configurations can be generated automatically based on the output of the scheduling framework. Third, we describe algorithms to reduce the overhead and duration of adaptive re-scheduling, based on sorting for rate selection and priority assignment. Fourth, we provide timely and efficient performance information through an optimized metrics-feedback framework, to support higher-level reflection and adaptation decisions. Fifth, we present the results of empirical studies to quantify and evaluate the performance of alternative canonical scheduling heuristics across a range of load and load-jitter conditions. These studies were conducted within an avionics mission computing application framework running on realistic middleware and embedded hardware. The results obtained from these studies (1) demonstrate the potential benefits of reflective alternation among distinct scheduling heuristics at run-time, and (2) suggest performance factors of interest for future work on adaptive control policies and mechanisms using this framework.
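
    A minimal sketch of the hybrid idea described above, assuming a two-tier design: critical operations carry static rate-monotonic priorities and always run first, while noncritical operations are ordered dynamically by deadline and shed once their deadlines have passed. The class and field names are illustrative, not the dissertation's middleware API.

        import heapq

        class HybridDispatcher:
            def __init__(self):
                self.critical = []     # (period, seq, op): static, rate-monotonic
                self.noncritical = []  # (deadline, seq, op): dynamic, EDF-ordered
                self.seq = 0

            def submit(self, op, period=None, deadline=None):
                self.seq += 1
                if period is not None:
                    # Rate-monotonic: shorter period = higher priority.
                    heapq.heappush(self.critical, (period, self.seq, op))
                else:
                    heapq.heappush(self.noncritical, (deadline, self.seq, op))

            def dispatch_next(self, now):
                # Critical operations preempt all noncritical work.
                if self.critical:
                    return heapq.heappop(self.critical)[2]
                # Shed noncritical operations whose deadlines have passed.
                while self.noncritical:
                    deadline, _, op = heapq.heappop(self.noncritical)
                    if deadline >= now:
                        return op
                return None

        d = HybridDispatcher()
        d.submit("sensor_fusion", period=20)
        d.submit("display_update", deadline=35)
        print(d.dispatch_next(now=0))   # -> "sensor_fusion"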

    JISC Preservation of Web Resources (PoWR) Handbook

    Get PDF
    Handbook of Web Preservation produced by the JISC-PoWR project, which ran from April to November 2008. The handbook specifically addresses digital preservation issues that are relevant to the UK HE/FE web management community. The project was undertaken jointly by UKOLN at the University of Bath and the ULCC Digital Archives department.

    Adaptive Knobs for Resource Efficient Computing

    Get PDF
    Performance demands of emerging domains such as artificial intelligence, machine learning, computer vision, and the Internet-of-Things continue to grow. Meeting such requirements on modern multi/many-core systems with higher power densities, fixed power and energy budgets, and thermal constraints exacerbates the run-time management challenge. This leaves an open problem: extracting the required performance within the power and energy limits while also ensuring thermal safety. Existing architectural solutions, including asymmetric and heterogeneous cores and custom acceleration, improve performance-per-watt in specific design-time and static scenarios. However, satisfying applications' performance requirements under dynamic and unknown workload scenarios, subject to varying system dynamics of power, temperature, and energy, requires intelligent run-time management. Adaptive strategies are necessary for maximizing resource efficiency, considering i) the diverse requirements and characteristics of concurrent applications, ii) dynamic workload variation, iii) core-level heterogeneity, and iv) power, thermal, and energy constraints. This dissertation proposes such adaptive techniques for efficient run-time resource management to maximize performance within fixed budgets under unknown and dynamic workload scenarios. The resource management strategies proposed in this dissertation comprehensively consider application and workload characteristics and the variable effect of power actuation on performance to make proactive and appropriate allocation decisions. Specific contributions include i) a run-time mapping approach that improves the use of power budgets for higher throughput, ii) thermal-aware performance boosting for efficient utilization of the power budget and higher performance, iii) approximation as a run-time knob that exploits accuracy-performance trade-offs to maximize performance under power caps at minimal loss of accuracy, and iv) coordinated approximation for heterogeneous systems through joint actuation of dynamic approximation and power knobs for performance guarantees with minimal power consumption. The approaches presented in this dissertation focus on adapting existing mapping techniques, performance boosting strategies, and software and dynamic approximations to meet performance requirements while simultaneously considering system constraints. The proposed strategies are compared against relevant state-of-the-art run-time management frameworks to qualitatively evaluate their efficacy.
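
    Contribution iii), approximation as a run-time knob under a power cap, can be sketched as a simple feedback loop, shown below. The power cap, knob range, and step rule are illustrative assumptions; the dissertation's actual controllers and interfaces are not reproduced here.

        POWER_CAP_W = 15.0   # assumed power cap
        MAX_LEVEL = 8        # hypothetical knob: 0 = exact, 8 = most approximate

        def control_step(level: int, measured_power: float) -> int:
            # Over the cap: trade accuracy for power by approximating more.
            if measured_power > POWER_CAP_W and level < MAX_LEVEL:
                return level + 1
            # Comfortable headroom: reclaim accuracy.
            if measured_power < 0.9 * POWER_CAP_W and level > 0:
                return level - 1
            return level

        level = 0
        for power in [16.2, 15.8, 14.9, 12.1, 11.0]:  # sample power readings
            level = control_step(level, power)
            print(f"power={power:.1f} W -> approximation level {level}")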

    A Model-Based Development and Verification Framework for Distributed System-on-Chip Architecture

    Get PDF
    The capabilities, and thus the design complexity, of VLSI-based embedded systems have increased tremendously in recent years, riding the wave of Moore's law. Time-to-market requirements are also shrinking, imposing challenges on designers, who in turn seek to adopt new design methods to increase their productivity. As an answer to these new pressures, modern-day systems have moved towards on-chip multiprocessing technologies, and new on-chip multiprocessing architectures have emerged to exploit the tremendous advances in fabrication technology. Platform-based design is a possible solution to these challenges. The principle behind the approach is to separate the functionality of an application from the organization and communication architecture of the hardware platform at several levels of abstraction. Existing design methodologies for platform-based design do not provide full automation at every level of the design process, and the co-design of platform-based systems sometimes leads to sub-optimal systems. In addition, the design productivity gap in multiprocessor systems remains a key challenge under existing design methodologies. This thesis addresses the aforementioned challenges and discusses the creation of a development framework for platform-based system design in the context of the SegBus platform, a distributed communication architecture. This research aims to provide automated procedures for platform design and application mapping. Structural verification support is also featured, thus ensuring correct-by-design platforms. The solution is based on a model-based process. Both the platform and the application are modeled using the Unified Modeling Language. This thesis develops a Domain Specific Language to support platform modeling based on a corresponding UML profile. Object Constraint Language constraints are used to support structurally correct platform construction. An emulator is introduced to allow performance estimation of the solution that is as accurate as possible at high abstraction levels. VHDL code is automatically generated in the form of "snippets" to be employed in the arbiter modules of the platform, as required by the application. The resulting framework is applied in building an actual design solution for an MP3 stereo audio decoder application.
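
    The role of the Object Constraint Language constraints mentioned above is to reject structurally ill-formed platform models before code generation. A rough Python analogue of one such check appears below; the rule itself (every segment hosts at least one processing element and exactly one local arbiter) is an illustrative assumption, not a constraint quoted from the SegBus profile.

        def check_platform(segments):
            # Return a list of structural violations, empty if well-formed.
            errors = []
            for seg in segments:
                if not seg.get("processing_elements"):
                    errors.append(f"segment {seg['id']}: no processing elements")
                if len(seg.get("arbiters", [])) != 1:
                    errors.append(f"segment {seg['id']}: expected exactly one arbiter")
            return errors

        model = [
            {"id": 0, "processing_elements": ["PE0", "PE1"], "arbiters": ["A0"]},
            {"id": 1, "processing_elements": [], "arbiters": []},
        ]
        print(check_platform(model))  # flags both problems in segment 1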

    Model for WCET prediction, scheduling and task allocation for emergent agent-behaviours in real-time scenarios

    Get PDF
    To date, no real-time models are known that were developed specifically for use in open systems such as Virtual Organizations of Agents (VOs). Conventionally, real-time models are applied to closed systems where all variables are known a priori. This thesis presents new contributions and a novel integration of real-time agents within VOs. To the best of our knowledge, this is the first model specifically designed for application in VOs with hard real-time constraints. This thesis provides a new perspective that combines the openness and dynamism required in a VO with real-time constraints. This is a complicated aspect, since the first paradigm is not strict, as the very term open system indicates, whereas the second paradigm must meet strict constraints. In summary, the presented model makes it possible to define the actions that a VO must carry out within a given deadline, considering the changes that may occur during the execution of a particular plan; it is real-time planning in a VO. Another main contribution of this thesis is a model for computing the worst-case execution time (WCET). The proposal is an effective model for computing the worst-case scenario when an agent wishes to join a VO and must therefore include its tasks or behaviours within the real-time system; that is, the WCET of emergent behaviours is computed at run-time. Also included are a local scheduler for each execution node, based on the FPS algorithm, and a distribution of tasks among the available nodes in the system. For both models, advanced mathematical and statistical models are used to create an adaptable, robust, and efficient mechanism for intelligent agents in VOs. The absence, despite the survey carried out, of a platform for open systems that supports agents with real-time constraints and provides the mechanisms needed for the control and management of VOs is the main motivation for the development of the PANGEA+RT agent platform. PANGEA+RT is an innovative multi-agent platform that supports the execution of agents in real-time environments. Finally, a case study is presented in which heterogeneous robots collaborate to perform surveillance tasks. The case study was developed with the PANGEA+RT platform, into which the proposed model is integrated. Thus, at the end of the thesis, this case study yields the results and conclusions that validate the model.
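
    The node-local FPS scheduling mentioned above presupposes a schedulability check once WCETs are known. A minimal sketch of the classic response-time test for fixed-priority scheduling is given below; the task set is invented, and the thesis's statistical WCET model is not reproduced.

        import math

        def response_time(task, higher_prio):
            # Iterate R = C + sum(ceil(R / T_j) * C_j) to a fixed point.
            C, T = task
            R = C
            while True:
                R_next = C + sum(math.ceil(R / Tj) * Cj for Cj, Tj in higher_prio)
                if R_next == R:
                    return R        # converged: worst-case response time
                if R_next > T:
                    return None     # exceeds period (= deadline): unschedulable
                R = R_next

        tasks = [(1, 4), (2, 6), (3, 12)]  # (WCET, period), rate-monotonic order
        for i, t in enumerate(tasks):
            print(t, "->", response_time(t, tasks[:i]))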

    Towards Optimal Application Mapping for Energy-Efficient Many-Core Platforms

    Get PDF