149 research outputs found

    Inferential networked control with accessibility constraints in both the sensor and actuator channels

    The predictor and controller design for an inferential control scheme over a network is addressed. A linear plant with disturbances and measurement noise is assumed to be controlled by a controller that communicates with the sensors and the actuators through a constrained network. An algorithm is proposed in which the scarce available outputs are used to predict the system evolution with an observer that takes into account the amount of data lost between successful measurement transmissions. The state prediction is then used to calculate the control actions sent to the actuator. The possibility of control action drops due to network constraints is also taken into account. This networked control scheme is analyzed, and both the predictor and controller designs are addressed considering the disturbances, the measurement noise, the scarce availability of output samples and the scarce capability of control action updates. The time-varying sampling periods that result for the process inputs and outputs due to network constraints are modelled as a function of the probability of successful transmission at a given time with a Bernoulli distribution. For both designs, H∞ performance has been established, and LMI design techniques have been used to obtain a numerical solution.
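As a rough illustration of the scheme described above, the loop below simulates a prediction-based controller with Bernoulli dropouts in both the sensor and actuator channels. All numbers (the one-state plant, gains and success probabilities) are hand-picked assumptions for the sketch, not values from the paper, which designs the gains via LMIs.

```python
import numpy as np

rng = np.random.default_rng(0)

# One-state plant x+ = a*x + b*u + w, y = x + v; illustrative values only.
a, b = 1.1, 1.0
p_meas, p_ctrl = 0.7, 0.7    # Bernoulli success probabilities per channel
L, K = 0.8, -0.9             # hand-picked observer and feedback gains

x, xhat, u_applied = 1.0, 0.0, 0.0
for k in range(100):
    u = K * xhat                         # controller acts on the prediction
    if rng.random() < p_ctrl:            # control packet gets through...
        u_applied = u                    # ...otherwise the actuator holds
    x = a * x + b * u_applied + 0.01 * rng.standard_normal()
    y = x + 0.01 * rng.standard_normal()
    xhat = a * xhat + b * u              # open-loop prediction step
    if rng.random() < p_meas:            # measurement packet gets through
        xhat += L * (y - xhat)           # correct with the received output

print(f"final |x| = {abs(x):.3f}")
```

Despite the open-loop plant being unstable (a = 1.1), the prediction-based loop keeps the state near the noise floor as long as enough packets get through.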

    Networked gain-scheduled fault diagnosis under control input dropouts without data delivery acknowledgement

    This paper investigates the fault diagnosis problem for discrete-time networked control systems under dropouts in both the control and measurement channels with no delivery acknowledgement. We propose to use a proportional-integral observer-based fault diagnoser collocated with the controller. The observer estimates the faults and computes a residual signal whose comparison with a threshold raises an alarm when a fault appears. We employ the expected value of the arriving control input for the open-loop estimation and the measurement reception scenario for the correction with a jump observer. The jumping gains are scheduled in real time with rational functions depending on a statistic of the difference between the control command being applied in the plant and the one being used in the observer. We design the observer, the residual and the threshold to maximize the sensitivity to faults while guaranteeing some minimum detectable faults under a predefined false alarm rate. Exploiting sum-of-squares decomposition techniques, the design procedure becomes an optimization problem over polynomials.
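To make the residual-and-threshold idea concrete, here is a scalar toy: an observer tracks the plant, the residual is the innovation, and an alarm fires when the residual exceeds a threshold sized well above the nominal noise level. The gain, threshold and additive fault are hand-picked assumptions for the sketch, not the paper's optimized design.

```python
import numpy as np

rng = np.random.default_rng(1)

a, L = 0.9, 0.5                  # toy plant pole and observer gain
noise_std = 0.02
threshold = 0.15                 # several sigmas above the nominal residual

x, xhat = 0.0, 0.0
alarms = []
for k in range(200):
    fault = 0.5 if k >= 100 else 0.0     # additive fault appears at k = 100
    x = a * x + fault + noise_std * rng.standard_normal()
    y = x + noise_std * rng.standard_normal()
    residual = y - xhat                  # innovation used as residual
    xhat = a * xhat + L * residual       # observer correction
    alarms.append(abs(residual) > threshold)

print(f"first alarm at k = {alarms.index(True)}")
```

With these numbers the residual stays well below the threshold before the fault and jumps far above it immediately after, so the alarm fires at the fault onset with no false alarms beforehand.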

    Jump state estimation with multiple sensors with packet dropping and delaying channels

    This work addresses the design of a state observer for systems whose outputs are measured through a communication network. The measurements from each sensor node are assumed to arrive randomly, scarcely and with a time-varying delay. The proposed model of the plant and the network measurement scenarios covers the cases of multiple sensors, out-of-sequence measurements, buffered measurements in a single packet and multirate sensor measurements. A jump observer is proposed that selects a different gain depending on the number of periods elapsed between successfully received measurements and on the available data. A finite set of gains is pre-calculated offline with a tractable optimisation problem, where the complexity of the observer implementation is a design parameter. The computational cost of the observer implementation is much lower than that of the Kalman filter, whilst the performance is similar. Several examples illustrate the observer design for different measurement scenarios and observer complexities and show the achievable performance.
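The core mechanism, a gain looked up by the number of periods since the last received measurement, can be sketched as below. The gain table here is hand-picked for illustration; in the paper the finite gain set comes from an offline optimisation, and the table size is the complexity design parameter.

```python
import numpy as np

rng = np.random.default_rng(2)

a = 0.95                              # toy stable plant
gains = {1: 0.6, 2: 0.75, 3: 0.85}    # larger gain after longer silences
cap = max(gains)                      # gaps beyond the table reuse the last gain

x, xhat = 1.0, 0.0
since_last = 0
for k in range(100):
    x = a * x + 0.01 * rng.standard_normal()
    xhat = a * xhat                       # open-loop prediction
    since_last += 1
    if rng.random() < 0.5:                # a measurement arrives
        y = x + 0.01 * rng.standard_normal()
        L = gains[min(since_last, cap)]   # jump to the gain for this gap
        xhat += L * (y - xhat)
        since_last = 0

print(f"estimation error = {abs(x - xhat):.4f}")
```

Unlike a Kalman filter, no covariance recursion runs online: the per-step work is one table lookup and one correction, which is the source of the low computational cost noted in the abstract.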

    An aesthetics of touch: investigating the language of design relating to form

    How well can designers communicate qualities of touch? This paper presents evidence that they have some capability to do so, much of which appears to have been learned, but that at present they make limited use of such language. Interviews with graduate designer-makers suggest that they are aware of and value the importance of touch and materiality in their work, but lack a vocabulary that fully matches their detailed explanations of other aspects, such as their intent or selection of materials. We believe that more attention should be paid to the verbal dialogue that happens in the design process, particularly as other researchers show that even making-based learning has a strong verbal element. However, verbal language alone does not appear to be adequate for a comprehensive language of touch. Graduate designer-makers’ descriptive practices combined non-verbal manipulation with verbal accounts. We thus argue that haptic vocabularies do not simply describe material qualities, but rather are situated competences that physically demonstrate the presence of haptic qualities. Such competences are more important than verbal vocabularies in isolation. Design support for developing and extending haptic competences must take this wide range of considerations into account to comprehensively improve designers’ capabilities.

    Modelling, Monitoring, Control and Optimization for Complex Industrial Processes

    This reprint includes 22 research papers and an editorial, collected from the Special Issue "Modelling, Monitoring, Control and Optimization for Complex Industrial Processes", highlighting recent research advances and emerging research directions in complex industrial processes. This reprint aims to promote the research field and benefit readers from both academic communities and industrial sectors.

    Ubiquitous Robotics System for Knowledge-based Auto-configuration System for Service Delivery within Smart Home Environments

    The future smart home will be enhanced and driven by the recent advance of the Internet of Things (IoT), which advocates the integration of computational devices within an Internet architecture on a global scale [1, 2]. In the IoT paradigm, the smart home will be developed by interconnecting a plethora of smart objects both inside and outside the home environment [3-5]. The recent take-up of these connected devices within home environments is slowly but surely transforming traditional home living environments. Such connected and integrated home environments lead to the concept of the smart home, which has attracted significant research efforts to enhance the functionality of home environments with a wide range of novel services. The wide availability of services and devices within contemporary smart home environments makes their management a challenging and rewarding task. The trend whereby the development of smart home services is decoupled from that of smart home devices increases the complexity of this task. As such, it is desirable that smart home services are developed and deployed independently, rather than pre-bundled with specific devices, although it must be recognised that this is not always practical. Moreover, systems need to facilitate the deployment process and cope with any changes in the target environment after deployment. Maintaining complex smart home systems throughout their lifecycle entails considerable resources and effort. These challenges have stimulated the need for dynamic auto-configurable services amongst such distributed systems. Although significant research has been directed towards achieving auto-configuration, none of the existing solutions is sufficient to achieve auto-configuration within smart home environments. All such solutions are considered incomplete, as they lack the ability to meet all smart home requirements efficiently.
    These requirements include the ability to adapt flexibly to new and dynamic home environments without direct user intervention. Fulfilling these requirements would enhance the performance of smart home systems and help to address cost-effectiveness, considering the financial implications of the manual configuration of smart home environments. Current configuration approaches fail to meet one or more of the requirements of smart homes. Where one of these approaches meets the flexibility criterion, the configuration either cannot be executed online without affecting the system or requires direct user intervention. In other words, there is no adequate solution that allows smart home systems to adapt dynamically to changing circumstances and hence to establish the correct interconnections among their components without direct user intervention or interruption of the whole system. Therefore, it is necessary to develop an efficient, adaptive, agile and flexible system that adapts dynamically to each new requirement of the smart home environment. This research aims to devise methods to automate the activities associated with customised service delivery for dynamic home environments by exploiting recent advances in the field of ubiquitous robotics and Semantic Web technologies. It introduces a novel approach called the Knowledge-based Auto-configuration Software Robot (Sobot) for Smart Home Environments, which utilises the Sobot to achieve auto-configuration of the system. The research work was conducted under the Distributed Integrated Care Services and Systems (iCARE) project, which was designed to accomplish and deliver integrated distributed ecosystems with a homecare focus. The auto-configuration Sobot, which is the focus of this thesis, is a key component of the iCARE project. It will become one of the key enabling technologies for generic smart home environments, and it has a profound impact on designing and implementing a high-quality system.
    Its main role is to generate a feasible configuration that meets the given requirements using the knowledgebase of the smart home environment as a core component. The knowledgebase plays a pivotal role in helping the Sobot to automatically select the most appropriate resources in a given context-aware system via semantic searching and matching. Ontology, as a technique of knowledgebase representation, generally helps to design and develop a specific domain. It is also a key technology for the Semantic Web, which enables a common understanding amongst software agents and people, clarifies the domain assumptions and facilitates the reuse and analysis of its knowledge. The main advantages of the Sobot over traditional applications are its awareness of the changing digital and physical environments and its ability to interpret these changes, extract the relevant contextual data and merge any new information or knowledge. The Sobot is capable of creating new or alternative feasible configurations to meet the system’s goal by utilising inferred facts based on the smart home ontological model, so that the system can adapt to the changed environment. Furthermore, the Sobot has the capability to execute the generated reconfiguration plan without interrupting the running of the system. A proof-of-concept testbed has been designed and implemented. The case studies carried out have shown the potential of the proposed approach to achieve flexible and reliable auto-configuration of the smart home system, with promising directions for future research.
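The selection step, matching a service's required capabilities against devices described in a knowledge base, can be illustrated with a deliberately simple toy. The device names, capability sets and `configure` helper below are invented for this sketch; the thesis uses an ontological model and semantic reasoning rather than a Python dictionary.

```python
# Toy knowledge base: each device advertises capabilities and a location.
knowledge_base = {
    "ceiling_lamp": {"capabilities": {"lighting"},         "location": "living_room"},
    "hall_speaker": {"capabilities": {"audio"},            "location": "hallway"},
    "tv":           {"capabilities": {"audio", "display"}, "location": "living_room"},
}

def configure(required, location):
    """Return devices in `location` whose capabilities cover `required`."""
    return [name for name, desc in knowledge_base.items()
            if desc["location"] == location
            and required <= desc["capabilities"]]     # set-inclusion match

print(configure({"audio"}, "living_room"))
```

A semantic matcher generalises this subset test with inference (e.g. a device typed as a `Speaker` is inferred to provide `audio`), which is what lets the Sobot find alternative configurations when the environment changes.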

    The internet of ontological things: On symmetries between ubiquitous problems and their computational solutions in the age of smart objects

    This dissertation is about an abstract form of computer network that has recently earned a new physical incarnation called “the Internet of Things.” It surveys the ontological transformations that have occurred over recent decades to the computational components of this network, objects—initially designed as abstract algorithmic agents in the source code of computer programming but now transplanted into real-world objects. Embodying the ideal of modularity, objects have provided computer programmers with more intuitive means to construct a software application from many simple and reusable functional building blocks. Their capability of being reassembled into many different networks for a variety of applications has also embodied another ideal of computing machines, namely general-purposiveness. In the algorithmic cultures of the past century, these objects existed as mere abstractions to help humans understand the electromagnetic signals that had infiltrated every corner of automatized spaces, from private to public. As an instrumental means to domesticate these elusive signals into programmable architectures according to the goals imposed by professional programmers and amateur end-users, objects promised a universal language for any computable human activity. This utopian vision for the object-oriented domestication of the digital has had enough traction for the growth of the software industry, as it has provided an alibi to hide another process of colonization occurring on the flipside of their interfacing between humans and machines: making as many online and offline human activities as possible programmable. A more recent media age, which this dissertation calls the age of the Internet of Things, refers to the second phase of this colonization of human cultures by algorithmic objects, no longer trapped in the hard-wired circuit boards of personal computers, but now residing in real-life objects with new wireless communicability.
    Each chapter of this dissertation examines a different computer application—a navigation system in a smart car, the smart home, open-world video games, and neuro-prosthetics—as a particular case of this object-oriented redefinition of human cultures.

    Re-use of tests and arguments for assessing dependable mixed-criticality systems

    The safety assessment of mixed-criticality systems (MCS) is a challenging activity due to system heterogeneity, design constraints and increasing complexity. The foundation for MCSs is the integrated architecture paradigm, where compact hardware comprises multiple execution platforms and communication interfaces to implement concurrent functions with different safety requirements. Besides a computing platform providing adequate isolation and fault tolerance mechanisms, the development of an MCS application must also comply with the guidelines defined by the safety standards. A way to lower the overall MCS certification cost is to adopt a platform-based design (PBD) development approach. PBD is a model-based development (MBD) approach, where separate models of logic, hardware and deployment support the analysis of the resulting system properties and behaviour. The PBD development of MCSs benefits from a composition of modular safety properties (e.g. modular safety cases), which supports the derivation of mixed-criticality product lines. The validation and verification (V&V) activities demand substantial effort during the development of programmable electronics for safety-critical applications. As for the MCS dependability assessment, the purpose of V&V is to provide evidence supporting the safety claims. The model-based development of MCSs adds more V&V tasks, because additional analyses (e.g., simulations) need to be carried out during the design phase. During the MCS integration phase, hardware-in-the-loop (HiL) plant simulators typically support the V&V campaigns, where test automation and fault injection are the key to test repeatability and to thoroughly exercising the safety mechanisms.
    This dissertation proposes several V&V artefact re-use strategies to perform early verification at the system level for a distributed MCS, artefacts that are later reused up to the final stages of the development process: test code re-use to verify the fault-tolerance mechanisms on a functional model of the system combined with non-intrusive software fault injection; model-to-X-in-the-loop (XiL) and code-to-XiL re-use to provide models of the plant and distributed embedded nodes suited to the HiL simulator; and, finally, an argumentation framework to support the automated composition and staged completion of modular safety cases for dependability assessment, in the context of the platform-based development of mixed-criticality systems relying on the DREAMS harmonized platform.
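The first re-use strategy, exercising a fault-tolerance mechanism on a functional model via software fault injection, can be sketched in miniature. The triplex voter and the `inject` helper below are illustrative assumptions, not artefacts from the dissertation; the point is that the injection perturbs the test stimulus rather than instrumenting the model itself, which is what "non-intrusive" refers to.

```python
from statistics import median

def voter(readings):
    # Functional model of a triplex voting mechanism: the median masks
    # one arbitrarily wrong channel out of three.
    return median(readings)

def inject(readings, channel, value):
    # Non-intrusive software fault injection: corrupt one channel of the
    # stimulus fed to the model, leaving the model code untouched.
    faulty = list(readings)
    faulty[channel] = value
    return faulty

nominal = [5.0, 5.0, 5.0]
assert voter(nominal) == 5.0                      # nominal behaviour
assert voter(inject(nominal, 1, 0.0)) == 5.0      # stuck-at-0 fault masked
print("single-channel fault masked by the voter")
```

Because the same test code drives the functional model here and, later, the HiL setup, the fault-injection campaign written at this early stage is the artefact that gets re-used downstream.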

    Parkinson's Disease Management through ICT

    Parkinson's Disease (PD) is a neurodegenerative disorder that manifests with motor and non-motor symptoms. PD treatment is symptomatic and tries to alleviate the associated symptoms through adjustment of the medication. As the disease evolves, and this evolution is patient-specific, it can be very difficult to manage properly. Currently available technology (electronics, communication, computing, etc.), correctly combined with wearables, can be of great use for obtaining and processing useful information for both clinicians and patients, allowing them to become actively involved in their condition. Parkinson's Disease Management through ICT: The REMPARK Approach presents the work done, main results and conclusions of the REMPARK project (2011–2015), funded by the European Union under contract FP7-ICT-2011-7-287677. The REMPARK system was proposed and developed as a real Personal Health Device for the Remote and Autonomous Management of Parkinson's Disease, composed of different levels of interaction with the patient, clinician and carers, and integrating a set of interconnected subsystems: sensor, auditory cueing, Smartphone and server. The sensor subsystem, using embedded algorithms, is able to detect the motor symptoms associated with PD in real time. This information, sent through the Smartphone to the REMPARK server, is used for efficient management of the disease.
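As a toy stand-in for the kind of real-time detection the sensor subsystem performs, the sketch below flags 2-second windows of a simulated wrist-accelerometer trace whose signal power exceeds a fixed level. The signal, sampling rate and threshold are invented for illustration; REMPARK's embedded algorithms are far more elaborate than a power threshold.

```python
import numpy as np

rng = np.random.default_rng(3)
fs, window = 50, 100                 # 50 Hz sampling, 100-sample (2 s) windows

t = np.arange(10 * fs) / fs          # 10 s of simulated signal
signal = 0.05 * rng.standard_normal(t.size)          # sensor noise floor
signal[t >= 4] += 0.5 * np.sin(2 * np.pi * 5 * t[t >= 4])  # 5 Hz tremor-like burst

powers = [float(np.mean(signal[i:i + window] ** 2))  # mean power per window
          for i in range(0, signal.size, window)]
flags = [p > 0.05 for p in powers]   # threshold chosen for this toy signal
print(flags)
```

In the REMPARK architecture this per-window decision would run on the embedded sensor node, with only the detected symptom episodes forwarded via the Smartphone to the server.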