
    Managing deadline miss ratio and sensor data freshness in real-time databases


    Data Freshness Over-Engineering: Formulation and Results

    In many application scenarios, data consumed by real-time tasks are required to meet a maximum age, or freshness, guarantee. In this paper, we consider the end-to-end freshness constraint of data that is passed along a chain of tasks in a uniprocessor setting. We do so with few assumptions regarding the scheduling algorithm used. We present a method for selecting the periods of tasks in chains of length two and three such that the end-to-end freshness requirement is satisfied, and then extend our method to arbitrary chains. We evaluate both methods using parameters from an embedded benchmark suite (E3S) and several schedulers to support our results.
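    The abstract does not give the selection rule itself. As a rough illustration of the kind of constraint involved, the sketch below assumes a simple, pessimistic bound in which each hop of a register-communicating chain adds at most one period plus that task's worst-case response time to the data age; the bound, the function names, and the uniform-period choice are assumptions for illustration, not the paper's method.

```python
def max_end_to_end_age(periods, wcrts):
    """Pessimistic end-to-end data age for a register-based task chain:
    each hop can add up to one period (sampling delay) plus the task's
    worst-case response time. This bound is an illustrative assumption."""
    return sum(p + r for p, r in zip(periods, wcrts))

def select_uniform_periods(freshness_req, wcrts):
    """Pick one common period T for all n tasks in the chain such that
    n*T + sum(WCRT) <= the end-to-end freshness requirement."""
    slack = freshness_req - sum(wcrts)
    if slack <= 0:
        raise ValueError("freshness requirement infeasible for these WCRTs")
    return slack / len(wcrts)

# Example: a three-task chain with a 100 ms end-to-end freshness requirement
wcrts = [5.0, 8.0, 12.0]                      # worst-case response times (ms)
T = select_uniform_periods(100.0, wcrts)
print(T, max_end_to_end_age([T] * 3, wcrts))  # -> 25.0 100.0
```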

    A New Approach to Manage QoS in Distributed Multimedia Systems

    Handling network congestion is one way to improve quality of service (QoS) in distributed multimedia systems. Existing solutions to network congestion scale poorly because they maintain a separate classification for each video stream. In this paper, we propose a new method that controls the QoS provided to clients under network congestion by discarding some frames when needed. The proposed technique, called (m,k)-frame, is scalable and causes little degradation in application performance. The (m,k)-frame method derives from the notion of (m,k)-firm real-time constraints, which require that among any k consecutive invocations of a task, at least m must meet their deadlines. Our simulation studies show that the (m,k)-frame method adapts the QoS of a multimedia application to actual conditions, according to the current system load. Notably, the system must adjust the QoS provided to active clients as their number varies, i.e. as clients arrive dynamically.
    Comment: 10 pages, International Journal of Computer Science and Information Security (IJCSIS)
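    As a concrete illustration of (m,k)-firm frame discarding, the sketch below enforces the constraint greedily: a frame may be dropped under congestion only if the window of k consecutive frames ending at it would still contain at least m delivered frames. The class and the drop policy are illustrative, not the paper's exact (m,k)-frame mechanism.

```python
from collections import deque

class MKFrameFilter:
    """Greedy (m,k)-frame admission: in every window of k consecutive
    frames, at least m must be delivered. A frame may be discarded only
    if the window ending at it still contains m delivered frames."""
    def __init__(self, m, k):
        self.m, self.k = m, k
        self.history = deque(maxlen=k - 1)  # True = delivered

    def offer(self, congested):
        # Prefer dropping under congestion, but never violate (m,k).
        delivered_recent = sum(self.history)
        can_drop = delivered_recent >= self.m
        deliver = not (congested and can_drop)
        self.history.append(deliver)
        return deliver

# Example: a (3,5)-frame stream under constant congestion delivers a
# repeating pattern in which at most 2 of any 5 consecutive frames drop.
f = MKFrameFilter(3, 5)
print([f.offer(congested=True) for _ in range(10)])
```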

    Performance assessment of real-time data management on wireless sensor networks

    Technological advances in recent years have allowed the maturity of Wireless Sensor Networks (WSNs), which aim at environmental monitoring and data collection. This sort of network is composed of hundreds, thousands or even millions of tiny smart computers known as wireless sensor nodes, which may be battery powered and equipped with sensors, a radio transceiver, a Central Processing Unit (CPU) and some memory. However, due to their small size and low-cost requirements, sensor node resources such as processing power, storage and especially energy are very limited. Once the sensors perform their measurements of the environment, the problem of storing and querying data arises: the sensors have restricted storage capacity, and the ongoing interaction between sensors and environment results in huge amounts of data. Techniques for data storage and querying in WSNs can be based on either external or local storage. In external storage, called the warehousing approach, a centralized system periodically receives the data gathered by the sensors at a central database server, where user queries are processed. Local storage, on the other hand, called the distributed approach, exploits the computational capabilities of the sensors, which act as local databases; data is stored both in a central database server and in the devices themselves, enabling queries against either.

    WSNs are used in a wide variety of applications, which may perform various operations on the collected sensor data. For certain applications, such as real-time applications, the sensor data must closely reflect the current state of the targeted environment. The environment changes constantly, however, while data is collected at discrete moments in time. As such, the collected data has a temporal validity: as time advances it becomes less accurate, until it no longer reflects the state of the environment. Applications such as industrial automation, aviation, and sensor networks must therefore query and analyze the data within bounded time in order to make decisions and react efficiently. In this context, efficient real-time data management solutions are needed to deal with both time constraints and energy consumption.

    This thesis studies real-time data management techniques for WSNs. In particular, it focuses on the challenges of handling real-time data storage and querying in WSNs and on efficient real-time data management solutions. First, the main specifications of real-time data management are identified and the real-time data management solutions for WSNs available in the literature are presented. Secondly, in order to provide an energy-efficient real-time data management solution, the techniques used to manage data and queries in WSNs under the distributed paradigm are studied in depth. Many research works argue that the distributed approach is the most energy-efficient way of managing data and queries in WSNs, rather than warehousing; in addition, this approach can provide quasi-real-time query processing, because the most current data is retrieved from the network. Thirdly, based on these two studies and considering the complexity of developing, testing, and debugging this kind of system, a model of a simulation framework for real-time database management on WSNs based on the distributed approach is proposed, together with its implementation. This helps explore real-time database techniques for WSNs before deployment, saving time and money; moreover, the proposed model can be improved by adding protocol simulation or by porting part of the simulator to another available simulator. To validate the model, a case study considering both real-time and energy constraints is discussed. Fourth, a new architecture is proposed that combines statistical modeling techniques with the distributed approach and a query processing algorithm to optimize real-time user query processing. This combination enables a query processing algorithm based on admission control that uses the error tolerance and the probabilistic confidence interval as admission parameters. Experiments on real-world as well as synthetic data sets demonstrate that the proposed solution optimizes real-time query processing to save more energy while maintaining low latency.
    Fundação para a Ciência e a Tecnologia
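    The admission test itself is not spelled out in the abstract. A minimal sketch, assuming the base station keeps a simple Gaussian model per sensor (the model, the z-table, and all names here are illustrative): a query is answered from the model only when the model's confidence interval at the requested confidence level fits within the user's error tolerance, and is otherwise pushed into the network.

```python
import math

# z-scores for common two-sided confidence levels (illustrative subset)
Z = {0.90: 1.645, 0.95: 1.960, 0.99: 2.576}

def admit_from_model(mean, variance, confidence, error_tolerance):
    """Return (answered_locally, estimate). The query is admitted to the
    statistical model only if the half-width of its confidence interval
    fits within the user's error tolerance; otherwise the (energy-costly)
    in-network query must run."""
    half_width = Z[confidence] * math.sqrt(variance)
    if half_width <= error_tolerance:
        return True, mean
    return False, None

ok, est = admit_from_model(mean=21.4, variance=0.04, confidence=0.95,
                           error_tolerance=0.5)
print(ok, est)  # -> True 21.4 (no radio traffic, energy saved)
```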

    Managing Query and Update Transactions under Quality Contracts in Web-Databases

    In modern Web-database systems, users typically perform read-only queries, whereas all write-only data updates are performed in the background, concurrently with queries. For most of these services to be successful and their users to be kept satisfied, two criteria need to be met: user requests must be answered in a timely fashion and must return fresh data. This is relatively easy when the system is lightly loaded, since both queries and updates can then be executed quickly. However, it becomes hard to achieve in real systems under high volumes of queries and updates, especially during flash crowds. In this work, we argue that it is beneficial to let users specify their preferences and have the system optimize toward satisfying those preferences, instead of simply improving the average case. We believe this user-centric approach will enable the system to gracefully handle a broader spectrum of workloads. Toward user-centric Web-databases, we propose a Quality Contracts framework that helps users express their preferences over multiple quality specifications. Moreover, we propose a suite of algorithms that perform load balancing and scheduling for both queries and updates according to user preferences. We evaluate the proposed framework and algorithms through simulation with real traces from disk accesses and from a stock information website. Finally, to increase the applicability of Quality-Contracts-enhanced Web-database systems, we propose an algorithm that helps users adapt to the Web-database system's behavior and maximize their query success ratio.
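    To make the idea of optimizing toward user preferences concrete, here is a minimal sketch assuming a step-shaped contract that pays in full when latency and staleness bounds are met and half otherwise, with queries served in decreasing order of payoff density. The contract shape and the greedy policy are assumptions for illustration, not the thesis's algorithms.

```python
class QualityContract:
    """A user's preferences: full payoff when latency and staleness
    bounds hold, half payoff per violated bound (shape is illustrative)."""
    def __init__(self, payoff, max_latency, max_staleness):
        self.payoff = payoff
        self.max_latency = max_latency
        self.max_staleness = max_staleness

    def value(self, latency, staleness):
        v = self.payoff
        if latency > self.max_latency:
            v /= 2
        if staleness > self.max_staleness:
            v /= 2
        return v

def schedule(queries, now):
    """Greedy: serve queries in decreasing payoff per unit service time."""
    ranked = sorted(
        (-q["qc"].value(now - q["arrival"] + q["cost"], q["staleness"])
         / q["cost"], q["id"])
        for q in queries)
    return [qid for _, qid in ranked]

q1 = {"id": "A", "qc": QualityContract(10, 2.0, 5.0),
      "arrival": 0.0, "cost": 1.0, "staleness": 1.0}
q2 = {"id": "B", "qc": QualityContract(4, 9.0, 5.0),
      "arrival": 0.0, "cost": 0.5, "staleness": 9.0}
print(schedule([q1, q2], now=1.0))  # -> ['A', 'B']
```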

    Evaluation of Load Scheduling Strategies for Real-Time Data Warehouse Environments

    The demand for so-called living or real-time data warehouses is increasing in many application areas, including manufacturing, event monitoring and telecommunications. In such fields, users expect short response times for their queries and high freshness of the requested data. However, it is truly challenging to meet both requirements at once because of the continuous flow of write-only updates and read-only queries, as well as the latency caused by arbitrarily complex ETL processes. To optimize the update flow in terms of data-freshness maximization and load minimization, we propose two algorithms, local and global scheduling, that operate on different levels of system information. We discuss the benefits and drawbacks of both approaches in detail and derive recommendations for the optimal scheduling strategy for any given system setup and workload.
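    One plausible reading of global scheduling, shown purely as a sketch: use system-wide information (which partitions pending queries will read) to apply first the updates with the highest freshness benefit. The demand-counting policy below is illustrative, not necessarily the paper's algorithm.

```python
import heapq

def order_updates(pending_updates, waiting_queries):
    """Apply first the updates whose target partitions are demanded by
    the most waiting queries, so results are fresh where it matters."""
    demand = {}
    for q in waiting_queries:
        for part in q["reads"]:
            demand[part] = demand.get(part, 0) + 1
    heap = [(-demand.get(u["partition"], 0), i, u)
            for i, u in enumerate(pending_updates)]
    heapq.heapify(heap)
    while heap:
        _, _, u = heapq.heappop(heap)
        yield u

updates = [{"partition": "sales"}, {"partition": "logs"}, {"partition": "hr"}]
queries = [{"reads": ["sales"]}, {"reads": ["sales", "hr"]}]
print([u["partition"] for u in order_updates(updates, queries)])
# -> ['sales', 'hr', 'logs']
```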

    Location-Dependent Query Processing Under Soft Real-Time Constraints


    Timing analysis in existing and emerging cyber physical systems

    A main mission of safety-critical cyber-physical systems is to guarantee timing correctness. Examples of safety-critical systems are avionic, automotive or medical systems, in which timing violations could have disastrous effects, from loss of human life to damage to machines and/or the environment. Over the past decade, multicore processors have become increasingly common because of their potential efficiency, and new single-core processors have become relatively scarce, creating a pressing need to transition to multicore processors. However, existing safety-critical software certified on single-core processors is not allowed to be fielded on a multicore system as is. The issue stems from serious inter-core interference on shared resources in current multicore processors, which creates non-deterministic timing behavior. Since meeting timing constraints is the crucial requirement of safety-critical real-time systems, the use of more than one core in a multicore chip is not yet certified by the authorities.

    Academia has paid relatively little attention to non-determinism caused by uncoordinated I/O communications, compared with other resources such as cache or memory, even though industry considers it one of the most troublesome challenges. Hence we focused on I/O synchronization, requiring no information about Worst Case Execution Time (WCET), which can be affected by other interference sources. Traditionally, two-level scheduling, such as the Integrated Modular Avionics (IMA) system, has been used to provide temporal isolation. However, such hierarchical approaches introduce significant priority inversions across applications, especially on multicore systems, ultimately leading to lower system utilization. To address these issues, we propose a novel scheduling mechanism called budgeted generalized rate-monotonic analysis (Budgeted GRMS), in which different applications' tasks are globally scheduled to avoid unnecessary priority inversions, yet the CPU resource is still partitioned for temporal isolation among applications. By accommodating missing WCET information and I/O synchronization, this new scheduling paradigm enables the "safe" use of multicore processors in safety-critical real-time systems.

    Recently, emerging Internet of Things (IoT) and Smart City applications are becoming part of cyber-physical systems, as the need arises and the feasibility becomes visible. The promises and challenges arising from these applications provide new research landscapes and opportunities and fundamentally transform real-time scheduling. In traditional real-time systems, an instance of a program execution (a process) is the scheduling entity, whereas in the emerging applications the fundamental schedulable units are chunks of data transported over communication media. Another transformation is that IoT and Smart City applications offer multiple options and combinations to utilize and schedule, since heterogeneous kinds of sensing devices are massively deployed; this is contrary to existing real-time work, which analyzes a fixed, given task set. For that reason, these applications also suggest variants of performance or quality optimization problems.

    Consider a disaster-response infrastructure in a troubled area that must ensure the safety of humanitarian missions. Cameras and other sensors are deployed along key routes to monitor local conditions, but they are turned off by default and turned on on demand to save limited battery life. To determine a safe route for delivering humanitarian shipments, a decision-maker must collect reconnaissance information and schedule the data items to support timely decision-making. Such data items, acquired from the time-evolving physical world, are in general time-sensitive: a retrieved item may become stale and no longer accurate or relevant as conditions in the physical environment change. Therefore, "when to acquire" affects the performance and correctness of such applications, and overall system safety and data timeliness must be carefully considered. For this problem, we explored various algorithmic options for maximizing quality of information and developed the optimal algorithm for ordering the retrievals of data items to make multiple decisions. I believe this is a significant initial step toward expanding timing-safety research landscapes and opportunities in the emerging CPS area.
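    To make the retrieval-ordering problem concrete: if items are fetched sequentially and each retrieved item ages until the decision is made, fetching long-lived items first keeps the fastest-decaying items freshest at decision time. The greedy sketch below illustrates that intuition only; it is not the optimal algorithm the thesis develops, and all names and validity values are hypothetical.

```python
def retrieval_order(items, fetch_time):
    """Sequential retrieval sketch: fetch long-lived items first. The item
    fetched i-th from the end has age i * fetch_time when the decision is
    made, so it stays fresh if its validity interval covers that age."""
    ordered = sorted(items, key=lambda it: it["validity"], reverse=True)
    n = len(ordered)
    for i, it in enumerate(ordered):
        age_at_decision = (n - 1 - i) * fetch_time
        it["fresh_at_decision"] = it["validity"] >= age_at_decision
    return ordered

items = [{"name": "camera-7", "validity": 30.0},
         {"name": "road-sensor", "validity": 5.0},
         {"name": "weather-feed", "validity": 120.0}]
for it in retrieval_order(items, fetch_time=2.0):
    print(it["name"], it["fresh_at_decision"])
```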

    Service level agreement specification for IoT application workflow activity deployment, configuration and monitoring

    Currently, we see the use of the Internet of Things (IoT) within various domains such as healthcare, smart homes, smart cars, smart-x applications, and smart cities. The number of applications based on IoT and cloud computing is projected to increase rapidly over the next few years. IoT-based services must meet guaranteed levels of quality of service (QoS) to match users' expectations. Ensuring QoS by specifying QoS constraints through service level agreements (SLAs) is crucial. Also, because of the potentially highly complex nature of multi-layered IoT applications, lifecycle management (deployment, dynamic reconfiguration, and monitoring) needs to be automated. To achieve this, it is essential to be able to specify SLAs in a machine-readable format; currently available SLA specification languages cannot accommodate the unique characteristics of the IoT domain, namely the interdependency of its multiple layers. Therefore, in this research, we propose a grammar for the syntactical structure of an SLA specification for IoT. The grammar is based on a proposed conceptual model covering the main concepts that express the requirements of the most common hardware and software components of an IoT application on an end-to-end basis. We follow the Goal Question Metric (GQM) approach to evaluate the generality and expressiveness of the proposed grammar, reviewing its concepts and their predefined vocabularies against two use cases with a number of participants whose research interests are mainly related to IoT. The analysis shows that the proposed grammar achieved 91.70% of its generality goal and 93.43% of its expressiveness goal.

    To ease the specification of SLA terms, we then developed a toolkit that simplifies capturing the requirements of IoT applications. We demonstrate the effectiveness of the toolkit on a remote health monitoring service (RHMS) use case and evaluate it with a questionnaire-based user-experience measure. We discuss the applicability of our tool by including it as a core component of two different applications: 1) a context-aware recommender system for IoT configuration across layers; and 2) a tool that automatically translates an SLA from JSON to a smart contract and deploys it on the peer nodes representing the contractual parties, so that the smart contract can monitor the created SLA using Blockchain technology. These two applications are utilized within our proposed SLA management framework for IoT. Furthermore, we propose a greedy heuristic algorithm that decentralizes the workflow activities of an IoT application across Edge and Cloud resources to improve response time, cost, energy consumption and network usage. We evaluated the efficiency of the proposed approach using the iFogSim simulator; the performance analysis shows that the algorithm reduced cost, execution time, networking, and Cloud energy consumption compared with Cloud-only and edge-ward placement approaches.
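    As a flavor of the greedy placement idea (only a sketch; the thesis's heuristic also weighs cost, energy, and network usage across iFogSim topologies), the snippet below gives latency-sensitive activities priority for the limited Edge capacity and pushes the rest to the Cloud. All field names, units, and capacities are illustrative assumptions.

```python
def place_activities(activities, edge_capacity):
    """Greedy placement sketch: consider activities in order of how
    latency-sensitive they are and keep each on the Edge while capacity
    allows, pushing the remainder to the Cloud."""
    placement, used = {}, 0
    for act in sorted(activities, key=lambda a: a["max_latency_ms"]):
        if used + act["mips"] <= edge_capacity:
            placement[act["name"]] = "edge"
            used += act["mips"]
        else:
            placement[act["name"]] = "cloud"
    return placement

workflow = [{"name": "sense", "mips": 100, "max_latency_ms": 10},
            {"name": "filter", "mips": 300, "max_latency_ms": 50},
            {"name": "analytics", "mips": 2000, "max_latency_ms": 500}]
print(place_activities(workflow, edge_capacity=500))
# -> {'sense': 'edge', 'filter': 'edge', 'analytics': 'cloud'}
```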