    Concurrent Design of Embedded Control Software

    Embedded software design for mechatronic systems is becoming an increasingly time-consuming and error-prone task. To cope with the heterogeneity and complexity, a systematic model-driven design approach is needed in which several parts of the system can be designed concurrently. There is, however, a trade-off between concurrency efficiency and integration efficiency. In this paper, we present a case study on the development of the embedded control software for a real-world mechatronic system in order to evaluate how concurrently and largely independently designed embedded software parts can be integrated efficiently. The case study was executed using our embedded control system design methodology, which employs a systematic model-based approach that supports a concurrent design process while still allowing a fast integration phase through automatic code synthesis. The result was a predictable, concurrently designed embedded software realization with a short integration time.

    Taming Numbers and Durations in the Model Checking Integrated Planning System

    The Model Checking Integrated Planning System (MIPS) is a temporal least-commitment heuristic search planner based on a flexible object-oriented workbench architecture. Its design clearly separates explicit and symbolic directed exploration algorithms from the set of on-line and off-line computed estimates and associated data structures. MIPS has shown distinguished performance in the last two international planning competitions. In the most recent event, the description language was extended from pure propositional planning to include numerical state variables, action durations, and plan quality objective functions. Plans were no longer sequences of actions but time-stamped schedules. As a participant in the fully automated track of the competition, MIPS has proven to be a general system; in each track and every benchmark domain it efficiently computed plans of remarkable quality. This article introduces and analyzes the most important algorithmic novelties that were necessary to tackle the new layers of expressiveness in the benchmark problems and to achieve a high level of performance. The extensions include critical path analysis of sequentially generated plans to generate corresponding optimal parallel plans. The linear-time algorithm to compute the parallel plan bypasses known NP-hardness results for partial ordering by scheduling plans with respect to the set of actions and the imposed precedence relations. The efficiency of this algorithm also allows us to improve the exploration guidance: for each encountered planning state, the corresponding approximate sequential plan is scheduled. One major strength of MIPS is its static analysis phase, which grounds and simplifies parameterized predicates, functions, and operators, infers knowledge to minimize the state description length, and detects domain object symmetries. The latter aspect is analyzed in detail. MIPS has been developed to serve as a complete and optimal state space planner, with admissible estimates, exploration engines, and branching cuts. In the competition version, however, certain performance compromises had to be made, including floating-point arithmetic, weighted heuristic search exploration according to an inadmissible estimate, and parameterized optimization.
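
    The scheduling step described above admits a compact illustration. Below is a minimal sketch (not the MIPS implementation; the action names, durations, and precedence sets are made up for illustration) of how a sequential plan plus a precedence relation yields a parallel schedule in a single linear pass: since a valid sequential plan is a topological order of its precedence graph, each action's earliest start time is simply the maximum finish time of its predecessors.

        # Schedule a sequential plan into a parallel (time-stamped) plan.
        # plan: actions in their sequential order, which is a valid
        #       topological order of the precedence relation;
        # duration[a]: duration of action a;
        # preds[a]: actions that must finish before a may start.
        def schedule(plan, duration, preds):
            start = {}
            for a in plan:  # one linear pass over the sequential plan
                start[a] = max((start[p] + duration[p] for p in preds[a]),
                               default=0.0)
            makespan = max(start[a] + duration[a] for a in plan)
            return start, makespan

        plan = ["load", "drive", "unload"]
        duration = {"load": 2.0, "drive": 5.0, "unload": 2.0}
        preds = {"load": [], "drive": ["load"], "unload": ["drive"]}
        print(schedule(plan, duration, preds))
        # ({'load': 0.0, 'drive': 2.0, 'unload': 7.0}, 9.0)

    Because every action is visited once and every precedence edge is examined once, the pass runs in time linear in the size of the plan and the relation, which is what makes rescheduling the approximate sequential plan for every encountered state affordable.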

    Smart Ontology Framework for Multi-Tenant Cloud Architecture

    The exponential growth of data volume and complexity, driven by the rapid expansion of the computing environment, has increased the demand for scalable and effective systems. At the centre of this paradigm is data management, the stage that accelerates the processing of enormous amounts of data. Scientific workflows must be coordinated in order to orchestrate the management of large datasets within this complex ecosystem. These workflows differ from generic workflows in that they involve a complex interplay of scheduling, algorithms, data flow, processes, and operational protocols, with a focus on data-intensive systems. Multi-tenancy, the distinctive feature of Software as a Service (SaaS), is inextricably tied to the growth of the industry. Within this fabric, the investigation of scientific processes reveals a mutually beneficial relationship with the multi-tenant cloud orchestration environment, one that goes beyond simple control and data propagation: it opens a fresh path for system development and makes previously hidden facets of service delivery visible. This study pioneers a thorough framework for scientific operations in the context of multi-tenant cloud orchestration. Semantics-based workflows, which leverage semantics to help users manage the complexities of data orchestration, form the basis of this paradigm. In addition, policy-based processes add another level, giving users a flexible way to manoeuvre the complex environment of multi-tenancy, orchestration, and service identification. The study focuses on the fundamentals of orchestrating scientific workflows in a multi-tenant cloud environment, where a creative, scalable, and effective composition results from the harmonious integration of data and semantics under the guidance of rules.

    Near-Memory Address Translation

    Memory and logic integration on the same chip is becoming increasingly cost-effective, creating the opportunity to offload data-intensive functionality to processing units placed inside memory chips. The introduction of memory-side processing units (MPUs) into conventional systems faces virtual memory as the first big showstopper: without efficient hardware support for address translation, MPUs have highly limited applicability. Unfortunately, conventional translation mechanisms fall short of providing fast translations as contemporary memories exceed the reach of TLBs, making expensive page walks common. In this paper, we are the first to show that the historically important flexibility to map any virtual page to any page frame is unnecessary in today's servers. We find that limiting the associativity of the virtual-to-physical mapping incurs no penalty and, combined with careful data placement in the MPU's memory, breaks the translate-then-fetch serialization, allowing translation and data fetch to proceed independently and in parallel. We propose the Distributed Inverted Page Table (DIPTA), a near-memory structure in which the smallest memory partition keeps the translation information for its data share, ensuring that the translation completes together with the data fetch. DIPTA completely eliminates the performance overhead of translation, achieving speedups of up to 3.81x and 2.13x over conventional translation using 4KB and 1GB pages, respectively.
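
    The key enabler is the restricted (set-associative) virtual-to-physical mapping: once associativity is limited, the virtual page number by itself determines the small set of frames its data can occupy, so candidate fetches can be issued before translation resolves. The following minimal sketch (our own illustration, not the paper's hardware design; PAGE_BITS, ASSOC, and NUM_SETS are assumed example parameters) shows how the candidate frames are derived from the virtual address alone:

        # With a set-associative virtual-to-physical mapping, a virtual
        # page can reside only in one of ASSOC frames, all computable
        # from the virtual page number. A memory-side unit can therefore
        # start fetching the candidates while translation picks the way.
        PAGE_BITS = 12          # 4 KB pages
        ASSOC = 4               # associativity of the VA->PA mapping
        NUM_SETS = 1 << 20      # physical frames / ASSOC (example size)

        def candidate_frames(vaddr):
            vpn = vaddr >> PAGE_BITS       # virtual page number
            s = vpn % NUM_SETS             # the set is fixed by the VPN
            return [s * ASSOC + way for way in range(ASSOC)]

        print(candidate_frames(0x7f3a2000))
        # [2084488, 2084489, 2084490, 2084491]

    Translation then only has to select one of ASSOC ways, which is the work DIPTA co-locates with the data partition so that it finishes no later than the data fetch itself.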

    RAD Applied in the Context of Investment Banking

    RAD as a methodology for implementing information systems has been used in a broad range of domains that rely on technology as an informational backbone, but perhaps one of the main areas where this approach has proven to be a natural fit is the investment banking (IB) industry, most notably when applied to trading systems. This paper introduces some of the main tenets of RAD development and focuses on a number of case studies where RAD has proven to be an extremely suitable method for implementing solutions required in the IB industry, as well as explaining why RAD may be more successful than other classic development methods when applied to IB-related solutions.
    Keywords: RAD, Information Systems, Investment Banking, Trading Systems

    Smart e-Learning Systems with Big Data

    Nowadays, the Internet connects people, multimedia, and physical objects, leading to a new wave of services. This includes learning applications, which must manage huge and mixed volumes of information coming from the Web and social media, smart cities, and Internet of Things nodes. Unfortunately, designing smart e-learning systems able to take advantage of such a complex technological space raises several challenges. In this perspective, this paper introduces a reference architecture for the development of future, big-data-capable e-learning platforms. It also showcases how data can be used to enrich the learning process.