
    A Scheduling Genetic Algorithm For Real-Time Data Freshness And Cloud Data Security Over Keywords Searching

    Cloud storage services allow customers to access stored data from any device at any time. With the growth of the Internet, the number of users who need to search online databases without a deep understanding of their schema or query languages has risen dramatically, and keyword search lets such users retrieve the desired data from secured cloud storage. On the other hand, there are fundamental difficulties, above all security, since users' personal information must be protected. A hybrid scheduling genetic algorithm (SGA) is proposed in this research. The SGA technique enhances the security level and provides data freshness. Parameters such as execution time and throughput are used for evaluation and comparison. According to the experimental results, the proposed technique secures user data against unauthorized parties. Furthermore, on the compared parameters, SGA is stronger and more effective than existing algorithms such as the Data Encryption Standard (DES), Blowfish, and the Advanced Encryption Standard (AES).
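
    The abstract does not detail the SGA's internals, so the following is only a generic genetic-algorithm scheduling loop, sketched in Python under invented assumptions: each candidate is a vector of priority genes that orders pending requests, and fitness is a made-up weighted blend of a security score and a freshness score.

        import random

        # Toy pending requests; 'sensitive' and 'deadline' are invented fields.
        REQUESTS = [{"sensitive": bool(i % 2), "deadline": i} for i in range(8)]

        def schedule_of(genes):
            # Serve requests in descending gene (priority) order.
            order = sorted(range(len(genes)), key=lambda i: -genes[i])
            return [REQUESTS[i] for i in order]

        def fitness(genes):
            sched = schedule_of(genes)
            security = sum(1.0 / (pos + 1) for pos, r in enumerate(sched) if r["sensitive"])
            freshness = sum(1.0 for pos, r in enumerate(sched) if pos <= r["deadline"])
            return 0.7 * security + 0.3 * freshness      # invented weights

        def evolve(pop_size=30, generations=200, mutation_rate=0.1):
            n = len(REQUESTS)
            pop = [[random.random() for _ in range(n)] for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=fitness, reverse=True)
                parents = pop[: pop_size // 2]           # elitist selection
                children = []
                while len(parents) + len(children) < pop_size:
                    a, b = random.sample(parents, 2)
                    cut = random.randrange(1, n)
                    child = a[:cut] + b[cut:]            # one-point crossover
                    if random.random() < mutation_rate:
                        child[random.randrange(n)] = random.random()  # point mutation
                    children.append(child)
                pop = parents + children
            return schedule_of(max(pop, key=fitness))

        print(evolve())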

    Data Freshness Over-Engineering: Formulation and Results

    In many application scenarios, data consumed by real-time tasks are required to meet a maximum age, or freshness, guarantee. In this paper, we consider the end-to-end freshness constraint of data that is passed along a chain of tasks in a uniprocessor setting. We do so with few assumptions regarding the scheduling algorithm used. We present a method for selecting the periods of tasks in chains of length two and three such that the end-to-end freshness requirement is satisfied, and then extend our method to arbitrary chains. We evaluate both methods using parameters from an embedded benchmark suite (E3S) and several schedulers to support our results.
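
    The paper's exact analysis is not reproduced in the abstract, but a common conservative bound illustrates the idea: if task i in the chain has period T_i and worst-case response time R_i, the end-to-end age of a datum can be bounded by the sum of (T_i + R_i) over the chain, so periods can be chosen to keep that sum within the freshness requirement F. A minimal Python sketch under that assumption (not the paper's method):

        def max_periods(response_times, freshness, weights=None):
            # Pick task periods for a chain so that a conservative
            # end-to-end age bound  sum_i (T_i + R_i) <= F  holds.
            #   response_times: worst-case response time R_i of each chain task
            #   freshness:      end-to-end freshness requirement F
            #   weights:        how to split the remaining budget among periods
            #                   (equal split by default)
            slack = freshness - sum(response_times)
            if slack <= 0:
                raise ValueError("infeasible: response times alone exceed F")
            if weights is None:
                weights = [1.0] * len(response_times)
            total = sum(weights)
            return [slack * w / total for w in weights]

        # Chain of three tasks with R = (2, 3, 1) ms and F = 30 ms.
        print(max_periods([2, 3, 1], 30))   # -> [8.0, 8.0, 8.0]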

    Analysis of distributed multi-periodic systems to achieve consistent data matching

    The distributed real-time architecture of an embedded system is often described as a set of communicating components. Such a system is data-flow oriented (in its description) and time-triggered (in its execution). This work addresses these concerns and focuses on controlling the time compatibility of a set of interdependent data used by the system components. The architecture of a component-based system forms a graph of communicating components in which more than one path can link two components. These paths may have different timing characteristics, but the flows of information that transit along them may need to be adequately matched, so that a component uses inputs which all (directly or indirectly) depend on the same production step. In this paper, we define this temporal data-matching property, we show how to analyze the architecture to detect situations that can cause data-matching inconsistencies, and we describe an approach to managing data matching that uses queues to delay overly fast paths and timestamps to recognize consistent data.
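
    As an illustration of the queue-and-timestamp idea (an assumption-laden sketch, not the paper's exact mechanism), the Python snippet below buffers each input path and releases an input set only when every path holds a datum stamped with the same production step:

        from collections import deque

        class MatchingBuffer:
            # Buffer one queue per input path; release an input set only when
            # every path holds a datum carrying the same production-step stamp.
            def __init__(self, paths):
                self.queues = {p: deque() for p in paths}

            def push(self, path, step, value):
                self.queues[path].append((step, value))

            def pop_consistent(self):
                while all(self.queues.values()):
                    heads = {p: q[0][0] for p, q in self.queues.items()}
                    target = max(heads.values())      # newest common candidate
                    if all(s == target for s in heads.values()):
                        return {p: q.popleft()[1] for p, q in self.queues.items()}
                    # Drop stale heads on the faster paths and retry.
                    for p, q in self.queues.items():
                        if q[0][0] < target:
                            q.popleft()
                return None   # no consistent input set available yet

        buf = MatchingBuffer(["fast", "slow"])
        buf.push("fast", 1, "a1"); buf.push("fast", 2, "a2")
        buf.push("slow", 2, "b2")
        print(buf.pop_consistent())   # {'fast': 'a2', 'slow': 'b2'}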

    Managing Query and Update Transactions Under Quality Contracts in Web-Databases

    In modern Web-database systems, users typically perform read-only queries, whereas all write-only data updates are performed in the background, concurrently with the queries. For most of these services to be successful and their users to be kept satisfied, two criteria must be met: user requests must be answered in a timely fashion, and they must return fresh data. This is relatively easy when the system is lightly loaded, since both queries and updates can then execute quickly. It becomes hard to achieve in real systems, however, due to high volumes of queries and updates, especially during flash crowds. In this work, we argue that it is beneficial to let users specify their preferences and have the system optimize toward satisfying those preferences, instead of simply improving the average case. We believe this user-centric approach will empower the system to deal gracefully with a broader spectrum of workloads. Toward user-centric Web-databases, we propose a Quality Contracts framework that helps users express their preferences over multiple quality specifications. Moreover, we propose a suite of algorithms to effectively perform load balancing and scheduling for both queries and updates according to user preferences. We evaluate the proposed framework and algorithms through simulation with real traces from disk accesses and from a stock information website. Finally, to increase the applicability of Quality-Contracts-enhanced Web-database systems, we propose an algorithm that helps users adapt to the Web-database system's behavior and maximize their query success ratio.
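
    The abstract leaves the contract structure open. Purely as a hedged illustration, a quality contract could pair each query with latency and staleness bounds plus a reward, with the scheduler serving the most urgent feasible request; every field and policy below is invented, not the thesis's actual framework:

        import time

        class Request:
            def __init__(self, query, max_latency, max_staleness, reward):
                self.query = query
                self.arrival = time.monotonic()
                self.max_latency = max_latency      # seconds until the contract lapses
                self.max_staleness = max_staleness  # tolerated data age, in seconds
                self.reward = reward                # payoff if both bounds are met

        def urgency(req, now):
            # Time remaining before this request's latency bound is violated.
            return (req.arrival + req.max_latency) - now

        def pick_next(pending, data_age):
            # Serve the most urgent request whose staleness bound the current
            # data can still satisfy; if none is feasible, the system should
            # run updates first to refresh the data.
            now = time.monotonic()
            feasible = [r for r in pending if data_age <= r.max_staleness]
            if not feasible:
                return None
            return min(feasible, key=lambda r: (urgency(r, now), -r.reward))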

    Timing analysis in existing and emerging cyber physical systems

    A main mission of safety-critical cyber-physical systems is to guarantee timing correctness. Examples of safety-critical systems are avionic, automotive, and medical systems, in which timing violations could have disastrous effects, from loss of human life to damage to machines or the environment. Over the past decade, multicore processors have become increasingly common for their efficiency potential, making new single-core processors relatively scarce and creating a pressing need to transition to multicore processors. However, existing safety-critical software certified on single-core processors may not be fielded on a multicore system as is. The issue stems from serious inter-core interference on shared resources in current multicore processors, which creates non-deterministic timing behavior. Since meeting timing constraints is the crucial requirement of safety-critical real-time systems, the use of more than one core in a multicore chip has not yet been certified by the authorities. Academia has paid relatively little attention to non-determinism due to uncoordinated I/O communication, as compared with other resources such as cache or memory, although industry considers it one of the most troublesome challenges. We therefore focused on I/O synchronization that requires no Worst-Case Execution Time (WCET) information, since WCETs can be impacted by other interference sources. Traditionally, two-level scheduling, as in Integrated Modular Avionics (IMA) systems, has been used to provide temporal isolation. However, such hierarchical approaches introduce significant priority inversions across applications, especially on multicore systems, ultimately lowering system utilization. To address these issues, we have proposed a novel scheduling mechanism called budgeted generalized rate-monotonic analysis (Budgeted GRMS), in which different applications' tasks are scheduled globally to avoid unnecessary priority inversions, while the CPU resource is still partitioned for temporal isolation among applications. By accommodating missing WCET information and I/O synchronization, this new scheduling paradigm enables the "safe" use of multicore processors in safety-critical real-time systems.

    Recently, emerging Internet of Things (IoT) and Smart City applications are becoming part of cyber-physical systems, as the need arises and the feasibility becomes visible. The promises and challenges arising from IoT and Smart City applications provide new research landscapes and opportunities and fundamentally transform real-time scheduling. As mentioned earlier, in traditional real-time systems an instance of a program execution (a process) is the scheduling entity, whereas in the emerging applications the fundamental schedulable units are chunks of data transported over communication media. Another transformation is that IoT and Smart City applications offer multiple options and combinations to utilize and schedule, since heterogeneous sensing devices are massively deployed; this is contrary to existing real-time work, which is given a fixed task set to analyze. For that reason, these applications also suggest variants of performance or quality optimization problems. Consider a disaster-response infrastructure intended to ensure the safety of humanitarian missions in a troubled area. Cameras and other sensors are deployed along key routes to monitor local conditions, but they are turned off by default and activated on demand to save limited battery life. To determine a safe route for humanitarian shipments, a decision-maker must collect reconnaissance information and schedule the retrieval of data items to support timely decision-making. Such data items, acquired from the time-evolving physical world, are in general time-sensitive: a retrieved item may become stale and no longer accurate or relevant as conditions in the physical environment change. Therefore, "when to acquire" affects the performance and correctness of such applications, so overall system safety and data timeliness must be considered together. For the addressed problem, we explored various algorithmic options for maximizing quality of information and developed the optimal algorithm for ordering the retrievals of data items to make multiple decisions. I believe this is a significant initial step toward expanding the timing-safety research landscape and opportunities in the emerging CPS area.
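
    The thesis's optimal retrieval-ordering algorithm is not given in the abstract. The sketch below is only a greedy stand-in under invented parameters: fetch the items with the longest validity windows first, so the most volatile data arrive closest to decision time, and report failure if any item would be stale when the decision is made.

        def plan_retrievals(items, fetch_time):
            # Greedy illustration only: items with longer validity windows are
            # fetched earlier, so the most volatile data arrive closest to the
            # moment the decision is made.
            #   items:      dict of item name -> validity window (seconds)
            #   fetch_time: time needed to retrieve one item (seconds)
            # Returns a retrieval order, or None if some item must go stale.
            order = sorted(items, key=lambda name: items[name], reverse=True)
            decision_at = len(order) * fetch_time
            for slot, name in enumerate(order):
                age_at_decision = decision_at - (slot + 1) * fetch_time
                if age_at_decision > items[name]:
                    return None  # this item would be stale at decision time
            return order

        # Invented example: validity windows (s) for three reconnaissance feeds.
        routes = {"bridge_cam": 30.0, "road_sensor": 120.0, "weather": 600.0}
        print(plan_retrievals(routes, fetch_time=20.0))
        # -> ['weather', 'road_sensor', 'bridge_cam']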

    Qduino: a cyber-physical programming platform for multicore Systems-on-Chip

    Emerging multicore Systems-on-Chip are enabling new cyber-physical applications such as autonomous drones, driverless cars, and smart manufacturing with web-connected 3D printers. Common to these applications is a communicating task pipeline that acquires and processes sensor data and produces outputs that control actuators. As a result, these applications usually have timing requirements both for individual tasks and for the task pipelines formed for sensor data processing and actuation. Current cyber-physical programming platforms, such as Arduino and embedded Linux with the POSIX interface, do not allow application developers to specify those timing requirements. Moreover, none of them provides a programming interface to schedule tasks and map them to processor cores while managing I/O in a predictable manner on multicore hardware platforms. Hence, this thesis presents the Qduino programming platform. Qduino adopts the simplicity of the Arduino API, with additional support for real-time multithreaded sketches on multicore architectures. Qduino allows application developers to specify the timing properties of individual tasks as well as task pipelines at the design stage. To this end, we propose a mathematical framework to derive each task's budget and period from the specified end-to-end timing requirements. The second part of the thesis is motivated by the observation that at the center of these pipelines are tasks that typically require complex software support, such as sensor data fusion or image processing algorithms. These features usually represent many man-years of engineering effort and are thus commonly found on general-purpose operating systems (GPOS). Therefore, in order to support modern, intelligent cyber-physical applications, we enhance the Qduino platform's extensibility by taking advantage of the Quest-V virtualized partitioning kernel. The platform's usability is demonstrated by building a novel web-connected 3D printer and a prototypical autonomous drone framework in Qduino.
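
    The abstract does not spell out the budget/period derivation, so the following is only a sketch of one plausible policy under stated assumptions: every stage of an n-stage pipeline gets the same period T, a fresh sample is assumed to traverse the pipeline in at most 2nT (one period of sampling delay plus one period of execution per stage), and each stage's budget is sized from its requested CPU share. None of this is Qduino's actual framework.

        def derive_stage_params(end_to_end_ms, utilizations):
            # Derive (budget, period) per pipeline stage from an end-to-end
            # timing requirement, assuming each stage finishes within its
            # period so a sample traverses n stages in at most 2*n*T.
            #   end_to_end_ms: pipeline deadline E (milliseconds)
            #   utilizations:  CPU share u_i requested by each stage, 0 < u_i <= 1
            n = len(utilizations)
            period = end_to_end_ms / (2 * n)          # largest T with 2*n*T <= E
            return [(u * period, period) for u in utilizations]

        # A 3-stage sense -> fuse -> actuate pipeline with a 100 ms requirement.
        for budget, period in derive_stage_params(100.0, [0.2, 0.5, 0.1]):
            print(f"budget={budget:.2f} ms, period={period:.2f} ms")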

    Location-Dependent Query Processing Under Soft Real-Time Constraints


    Robust and cheating-resilient power auctioning on Resource Constrained Smart Micro-Grids

    The principle of Continuous Double Auctioning (CDA) is known to provide an efficient way of matching supply and demand among distributed, selfish participants with limited information. However, the literature indicates that the classic CDA algorithms developed for grid-like applications are centralised and insensitive to processing-resource capacity, which hinders their application on resource-constrained smart micro-grids (RCSMG). An RCSMG loosely describes a micro-grid whose distributed generation and demand are controlled by selfish participants with limited information, limited power storage capacity, and low literacy, who communicate over an unreliable infrastructure burdened by limited bandwidth and computationally weak devices. In this thesis, we design and evaluate a CDA algorithm for power allocation in an RCSMG. Specifically, we offer the following contributions towards power auctioning on RCSMGs. First, we extend the original CDA scheme to enable decentralised auctioning. We do this by integrating a token-based mutual-exclusion (MUTEX) distributed primitive, which lets the CDA operate at a reasonably efficient time and message complexity of O(N) and O(log N), respectively, per critical-section invocation (auction market execution). Our CDA algorithm scales better and avoids the single point of failure associated with centralised CDAs (which could be exploited to adversarially provoke a breakdown of the grid's marketing mechanism). In addition, the decentralised approach in our algorithm can help eliminate the privacy and security concerns associated with centralised CDAs. Second, to handle CDA performance issues caused by malfunctioning devices on an unreliable (for example, lossy) network, we extend our proposed CDA scheme to be robust to failure. Using node redundancy, we modify the MUTEX protocol supporting our CDA algorithm to handle fail-stop and some Byzantine faults of sites. This yields a time complexity of O(N), where N is the number of cluster-head nodes, and a message complexity of O(log N + W), where W is the number of check-pointing messages. These results indicate that fault tolerance can be added to a decentralised CDA, guaranteeing continued participation in the auction while retaining reasonable performance overheads. In addition, we propose a decentralised consumption-scheduling scheme that complements the auctioning scheme in guaranteeing successful power allocation within the RCSMG. Third, since grid participants are self-interested, we must consider the power theft provoked when participants cheat. We propose threat models centred on cheating attacks aimed at foiling the extended CDA scheme. More specifically, we focus on the Victim Strategy Downgrade, Collusion by Dynamic Strategy Change, Profiling with Market Prediction, and Strategy Manipulation cheating attacks, which are carried out by internal adversaries (auction participants). Internal adversaries are participants who want more benefits and have no interest in provoking a breakdown of the grid; their behaviour is nonetheless dangerous because it could result in such a breakdown. Fourth, to mitigate these cheating attacks, we propose an exception-handling (EH) scheme in which sentinel agents use allocative efficiency and message overheads to detect and mitigate these forms of cheating. Sentinel agents monitor trading agents to detect cheating and reprimand misbehaving participants.

    Under light demand, the overall expected message complexity is O(N log N), and the detection and resolution algorithm is expected to run in linear time, O(M). Overall, the main aim of our study is achieved by designing a resilient, cheating-resilient CDA algorithm that is scalable and performs well on resource-constrained micro-grids. With the growing popularity of the CDA and its resource-allocation applications, specifically in low-resourced micro-grids, this thesis highlights further avenues for future research. First, we intend to extend the decentralised CDA algorithm to allow participants' mobile phones to connect (and reconnect) at different shared smart meters; such mobility should preserve the desired CDA properties, reliability, and adequate security. Second, we seek to develop a simulation of the decentralised CDA based on the formal proofs presented in this thesis; such a simulation platform can be used in future studies involving decentralised CDAs. Third, we seek an optimal and efficient way to integrate and deploy the decentralised CDA and the scheduling algorithm in a low-resourced smart micro-grid, which matters for system developers interested in exploiting the benefits of both schemes while maintaining system efficiency. Fourth, we aim to improve the cheating detection and mitigation mechanism by developing an intrusion-tolerance protocol that allows continued auctioning in the presence of cheating attacks while incurring performance overheads low enough for an RCSMG.
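
    For readers unfamiliar with the CDA core, here is a minimal single-market order-matching loop in Python; it shows only the generic CDA mechanics (a trade clears whenever the best bid meets or exceeds the best ask), not this thesis's decentralised, fault-tolerant protocol, and the midpoint pricing rule and energy-market example are assumptions:

        import heapq

        class ContinuousDoubleAuction:
            # Generic CDA order book: a trade clears whenever the best bid
            # meets or exceeds the best ask.
            def __init__(self):
                self.bids = []   # max-heap via negated prices: (-price, buyer)
                self.asks = []   # min-heap: (price, seller)

            def submit(self, side, trader, price):
                if side == "buy":
                    heapq.heappush(self.bids, (-price, trader))
                else:
                    heapq.heappush(self.asks, (price, trader))
                return self._match()

            def _match(self):
                trades = []
                while self.bids and self.asks and -self.bids[0][0] >= self.asks[0][0]:
                    bid_price, buyer = heapq.heappop(self.bids)
                    ask_price, seller = heapq.heappop(self.asks)
                    clearing = (-bid_price + ask_price) / 2   # midpoint pricing
                    trades.append((buyer, seller, clearing))
                return trades

        market = ContinuousDoubleAuction()
        market.submit("sell", "houseA", 0.14)          # asks 0.14 per kWh
        print(market.submit("buy", "houseB", 0.16))    # [('houseB', 'houseA', 0.15)]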

    Real-time communications over switched Ethernet supporting dynamic QoS management

    During the last decade we have witnessed a massive deployment of embedded systems across a wide range of applications, from industrial automation and process control to avionics, cars, and even robotics. Many of these applications have an inherently high level of criticality and must perform tasks within tight temporal constraints. Additionally, the configuration of such systems is often distributed, with several computing nodes that rely on a communication infrastructure to cooperate and achieve the application's global goals. Therefore, the communications are also subject to the temporal constraints set by the application requirements. Many applications relying on such networked embedded systems (NES) are complex and heterogeneous, comprising different activities with different requirements and properties. For example, the communication between subsystems may follow a strict temporal synchronization with respect to a global time base (time-triggered), as in a distributed feedback control loop, or it may be issued asynchronously upon the occurrence of events (event-triggered). Regardless of the traffic characteristics and its activation model, it is of paramount importance to have a communication framework that provides seamless integration of heterogeneous traffic sources while guaranteeing the application requirements. Another property that has been emerging as important for NES design and operation is flexibility. The need to reduce installation and operational costs while facilitating maintenance is promoting a more rational use of the available resources at run time, exploring the ability to tune service parameters as the system evolves. However, such operational flexibility comes at the cost of increased system complexity to handle the dynamic resource management, which in turn demands additional system resources. Moreover, the capacity to dynamically modify the system properties also increases the complexity of designing and specifying the system, since the operational state space grows with the system's degrees of flexibility. Therefore, in order to bound this complexity, appropriate operational models are needed to handle the system dynamics and carry out an efficient and fair resource-management strategy based on quality-of-service (QoS) metrics.

    This thesis states that the properties of flexibility and timeliness needed for dynamic QoS management can be provided to switched Ethernet-based systems. Switched Ethernet, although initially designed for general-purpose Internet access and file transfers, is becoming widely used in NES-based applications. However, COTS switched Ethernet is insufficient for real-time predictability and for supporting the aforementioned properties, due to the use of FIFO queues, too few priority levels, and limited stream-level management capabilities. In this dissertation we propose a protocol to overcome those limitations, namely Flexible Time-Triggered communication over Switched Ethernet (FTT-SE). The protocol is based on the FTT paradigm, which generically defines a protocol architecture suitable for enforcing real-time determinism on a communication network while supporting the desired flexibility properties. This dissertation presents the motivation for FTT-SE and describes the protocol as well as its schedulability analysis. It additionally covers resource distribution, proposing several models to manage the network capacity among competing services while considering each service's QoS level requirements. Finally, two application cases are shown that support the aforementioned thesis.
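
    As a rough illustration of the FTT paradigm's master-driven cycles (a sketch under assumed parameters, not the actual FTT-SE encoding or scheduler), the snippet below builds the synchronous-window schedule a master could announce in its trigger message for one elementary cycle:

        def build_ec_schedule(streams, cycle_index, sync_window_us):
            # Build the synchronous-window schedule an FTT-style master would
            # encode in its trigger message for one elementary cycle (EC).
            #   streams: list of dicts with 'id', 'period_ecs', and 'tx_us'
            #            (period in ECs, transmission time in microseconds)
            ready = [s for s in streams if cycle_index % s["period_ecs"] == 0]
            ready.sort(key=lambda s: s["period_ecs"])     # rate-monotonic order
            schedule, used = [], 0
            for s in ready:
                if used + s["tx_us"] <= sync_window_us:   # fits this EC's window
                    schedule.append(s["id"])
                    used += s["tx_us"]
                # otherwise the message waits for a later EC
            return schedule

        streams = [
            {"id": "ctrl_loop", "period_ecs": 1, "tx_us": 120},
            {"id": "telemetry", "period_ecs": 4, "tx_us": 300},
        ]
        print(build_ec_schedule(streams, cycle_index=0, sync_window_us=500))
        # -> ['ctrl_loop', 'telemetry']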