123 research outputs found

    An Innovative Approach to Achieve Compositionality Efficiently using Multi-Version Object Based Transactional Systems

    Full text link
    In the modern era of multicore processors, utilizing the cores well is a tedious job: synchronization and communication among processors involve high cost. Software Transactional Memory systems (STMs) address these issues and provide better concurrency, so the programmer need not worry about consistency issues. Another advantage of STMs is that they facilitate compositionality of concurrent programs with great ease: different concurrent operations that need to be composed into a single atomic unit are encapsulated in a single transaction. In this paper, we introduce a new STM system, the multi-version object-based STM (MVOSTM), which combines both of these ideas to harness greater concurrency in STMs. As the name suggests, MVOSTM works at a higher level and maintains multiple versions corresponding to each key. We have developed MVOSTM with an unlimited number of versions per key, along with a garbage-collection scheme (MVOSTM-GC) that deletes unwanted versions of keys to reduce traversal overhead. MVOSTM provides greater concurrency while reducing the number of aborts, and it ensures compositionality by making the transactions atomic. Here, we apply MVOSTM to the list and hash-table data structures as list-MVOSTM and HT-MVOSTM. Experimentally, list-MVOSTM achieves a speedup of almost two to twenty times over existing state-of-the-art list-based STMs (Trans-list, Boosting-list, NOrec-list, list-MVTO, and list-OSTM). HT-MVOSTM shows a significant performance gain of almost two to nineteen times over existing state-of-the-art hash-table-based STMs (ESTM, RWSTMs, HT-MVTO, and HT-OSTM). MVOSTM with the list and hash-table shows the fewest aborts among all the existing STM algorithms. MVOSTM satisfies opacity as its correctness criterion. Comment: 35 pages, 23 figures
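
    A minimal sketch of the multi-versioning idea the abstract describes (not the paper's actual MVOSTM protocol): each key keeps a list of timestamped versions, so a reading transaction can be served an older version instead of aborting. The coarse global lock here is a simplification; names are illustrative.

```python
# Illustration of per-key multi-versioning with garbage collection,
# in the spirit of MVOSTM/MVOSTM-GC; not the authors' algorithm.
import itertools
import threading

class MultiVersionStore:
    def __init__(self):
        self._versions = {}            # key -> list of (ts, value), ts ascending
        self._clock = itertools.count(1)
        self._lock = threading.Lock()  # coarse lock; real STMs are finer-grained

    def begin(self):
        with self._lock:
            return next(self._clock)   # transaction timestamp

    def write(self, ts, key, value):
        with self._lock:
            self._versions.setdefault(key, []).append((ts, value))

    def lookup(self, ts, key):
        # Return the latest version written at or before ts; serving older
        # versions is what lets readers commit without aborting.
        with self._lock:
            for vts, value in reversed(self._versions.get(key, [])):
                if vts <= ts:
                    return value
        return None

    def gc(self, oldest_live_ts):
        # Analogue of MVOSTM-GC: drop versions no live transaction can read,
        # keeping the newest version at or before the oldest live timestamp.
        with self._lock:
            for key, vs in self._versions.items():
                keep = [v for v in vs if v[0] >= oldest_live_ts]
                older = [v for v in vs if v[0] < oldest_live_ts]
                if older:
                    keep.insert(0, older[-1])
                self._versions[key] = keep

store = MultiVersionStore()
t1 = store.begin(); store.write(t1, "k", 10)
t2 = store.begin()
print(store.lookup(t2, "k"))   # 10: t2 reads the version written at t1
```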

    Efficient Concurrent Execution of Smart Contracts in Blockchains using Object-based Transactional Memory

    Full text link
    This paper proposes an efficient framework to execute Smart Contract Transactions (SCTs) concurrently based on object semantics, using optimistic Single-Version Object-based Software Transactional Memory systems (SVOSTMs) and Multi-Version OSTMs (MVOSTMs). In our framework, a multi-threaded miner constructs a Block Graph (BG), capturing the object-conflict relations between SCTs, and stores it in the block. Later, validators re-execute the same SCTs concurrently and deterministically relying on this BG. A malicious miner can modify the BG to harm the blockchain, e.g., to cause double-spending. To identify malicious miners, we propose a Smart Multi-threaded Validator (SMV). Experimental analysis shows that the proposed multi-threaded miner and validator achieve significant performance gains over a state-of-the-art SCT execution framework. Comment: 49 pages, 26 figures, 11 tables
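
    A hedged sketch of the block-graph idea: vertices are transactions, edges are object conflicts, and a validator may run any transactions whose conflicting predecessors have all finished. This illustrates the scheduling principle only, not the paper's SMV protocol; all names are assumptions.

```python
# Deterministic concurrent re-execution order derived from a block graph.
from collections import defaultdict, deque

def validator_schedule(n_txns, conflict_edges):
    """Return batches of transaction ids that can run concurrently.

    conflict_edges: iterable of (u, v) meaning txn u must run before txn v
    because they access a common object.
    """
    succ = defaultdict(list)
    indeg = [0] * n_txns
    for u, v in conflict_edges:
        succ[u].append(v)
        indeg[v] += 1

    ready = deque(t for t in range(n_txns) if indeg[t] == 0)
    batches = []
    while ready:
        batch = sorted(ready)          # these are mutually conflict-free
        ready.clear()
        batches.append(batch)
        for t in batch:                # completing t may release successors
            for s in succ[t]:
                indeg[s] -= 1
                if indeg[s] == 0:
                    ready.append(s)
    return batches

# Example: txn 0 and 1 conflict on an object, txn 2 is independent.
print(validator_schedule(3, [(0, 1)]))   # [[0, 2], [1]]
```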

    An Efficient Approach to Achieve Compositionality using Optimized Multi-Version Object Based Transactional Systems

    Get PDF
    In the modern era of multi-core systems, the main aim is to utilize the cores properly. This utilization can be achieved through concurrent programming, but developing a flawless and well-organized concurrent program is difficult. Software Transactional Memory systems (STMs) are a convenient programming interface that assists the programmer in accessing shared memory concurrently without worrying about consistency issues such as priority inversion, deadlock, livelock, etc. Another important feature that STMs facilitate is compositionality of concurrent programs with great ease: they compose different concurrent operations into a single atomic unit by encapsulating them in a transaction. Many STMs available in the literature execute read/write primitive operations on memory buffers; we refer to them as Read-Write STMs or RWSTMs. There also exist STMs (transactional boosting and its variants) which work on higher-level operations such as insert, delete, and lookup on a hash table; we refer to these as Object-Based STMs or OSTMs. The literature on databases and RWSTMs shows that maintaining multiple versions ensures greater concurrency. This motivates us to maintain multiple versions at the higher level with object semantics and achieve greater concurrency. So, this paper proposes the notion of Optimized Multi-Version Object-Based STMs or OPT-MVOSTMs, which incorporate the idea of multiple versions into OSTMs to harness greater concurrency efficiently.
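
    To make the compositionality claim concrete, here is a minimal sketch of composing object-level operations (insert, delete, lookup on hash tables) into one atomic transaction. The global commit lock is a stand-in for a real OSTM protocol, and the class and method names are illustrative assumptions.

```python
# Composing higher-level hash-table operations into one atomic unit.
import threading

_commit_lock = threading.Lock()

class Txn:
    """Buffers object-level operations and applies them atomically."""
    def __init__(self):
        self._log = []     # deferred (table, op, key, value) updates
        self._local = {}   # (id(table), key) -> txn-local value, None = deleted

    def lookup(self, table, key):
        k = (id(table), key)
        if k in self._local:           # read your own writes first
            return self._local[k]
        return table.get(key)

    def insert(self, table, key, value):
        self._local[(id(table), key)] = value
        self._log.append((table, "insert", key, value))

    def delete(self, table, key):
        self._local[(id(table), key)] = None
        self._log.append((table, "delete", key, None))

    def commit(self):
        with _commit_lock:             # the whole log becomes visible at once
            for table, op, key, value in self._log:
                if op == "insert":
                    table[key] = value
                else:
                    table.pop(key, None)

# Atomically move a key between two hash tables: a composed operation
# that plain concurrent hash tables cannot express as one atomic step.
src, dst = {"a": 1}, {}
t = Txn()
t.insert(dst, "a", t.lookup(src, "a"))
t.delete(src, "a")
t.commit()
print(src, dst)   # {} {'a': 1}
```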

    Achieving Starvation-Freedom with Greater Concurrency in Multi-Version Object-based Transactional Memory Systems

    Full text link
    To utilize multi-core processors properly, concurrent programming is needed, and concurrency control is the main challenge in designing a correct and efficient concurrent program. Software Transactional Memory systems (STMs) provide ease of multithreading to the programmer without worrying about concurrency issues such as deadlock, livelock, priority inversion, etc. Most STMs work on read-write operations and are known as RWSTMs. Some STMs work at the level of high-level operations and ensure greater concurrency than RWSTMs; such STMs are known as Object-Based STMs (OSTMs). An OSTM transaction can commit or abort, and aborted transactions retry. But in the current setting of OSTMs, transactions may starve. So, we propose a Starvation-Free OSTM (SF-OSTM) which ensures starvation-freedom in object-based STM systems while satisfying co-opacity as the correctness criterion. The literature on databases, RWSTMs, and OSTMs shows that maintaining multiple versions corresponding to each key reduces the number of aborts and improves the throughput. So, to achieve greater concurrency, we propose a Starvation-Free Multi-Version OSTM (SF-MVOSTM) which ensures starvation-freedom while storing multiple versions corresponding to each key and satisfies the correctness criterion of local opacity. To show the performance benefits, we implemented three variants of SF-MVOSTM (SF-MVOSTM, SF-MVOSTM-GC, and SF-KOSTM) and compared them with state-of-the-art STMs. Comment: 68 pages, 24 figures. arXiv admin note: text overlap with arXiv:1709.0103
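
    A sketch of the standard route to starvation-freedom, shown here only to illustrate the property the abstract names (it is not the SF-KOSTM algorithm): a transaction keeps the timestamp of its first attempt across retries, and conflicts are resolved in favour of the older transaction, so every transaction eventually becomes the oldest and commits.

```python
# Age-based conflict resolution as a starvation-freedom mechanism.
import itertools

_clock = itertools.count(1)

class Txn:
    def __init__(self):
        self.its = next(_clock)    # initial timestamp, kept across retries
        self.cts = self.its        # current-attempt timestamp

    def retry(self):
        self.cts = next(_clock)    # new attempt, same initial timestamp

def wins_conflict(a, b):
    """Older initial timestamp wins, so a repeatedly aborted transaction
    eventually has the highest priority and cannot starve."""
    return a.its < b.its

t1, t2 = Txn(), Txn()
t2.retry(); t2.retry()
print(wins_conflict(t1, t2))   # True: t1 is older, so it wins the conflict
```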

    Modeling and Simulation Methodologies for Digital Twin in Industry 4.0

    Get PDF
    The concept of Industry 4.0 represents an innovative vision of the factory of the future. The principles of this new paradigm are based on interoperability and data exchange between different industrial equipment. In this context, Cyber-Physical Systems (CPSs) play one of the main roles in this revolution. The combination of models and the integration of real data coming from the field makes it possible to obtain a virtual copy of the real plant, also called a Digital Twin. The entire factory can be seen as a set of CPSs, and the resulting system is also called a Cyber-Physical Production System (CPPS). This CPPS represents the Digital Twin of the factory, with which it becomes possible to analyze the real factory. The interoperability between the real industrial equipment and the Digital Twin makes it possible to predict the quality of the products. In more detail, these analyses concern the variability of production quality, the prediction of the maintenance cycle, the accurate estimation of energy consumption, and other extra-functional properties of the system. Several tools [2] allow modeling a production line, considering different aspects of the factory (e.g., geometrical properties, information flows, etc.). However, these simulators do not natively provide any solution for the design integration of CPSs, making precise analyses of the real factory impossible. Furthermore, to the best of our knowledge, there is no solution for cleanly integrating data coming from real equipment into the CPS models that compose the entire production line. In this context, this thesis aims to define a unified methodology to design and simulate the Digital Twin of a plant, integrating data coming from real equipment. In detail, the presented methodologies focus mainly on: integration of heterogeneous models in production-line simulators; integration of heterogeneous models with ad-hoc simulation strategies; and multi-level simulation of CPSs with integration of real data coming from sensors into the models. All the presented contributions produce an environment that allows simulation of the plant based not only on synthetic data, but also on real data coming from the equipment.
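
    A hedged sketch of the core loop described here: a simulation model advanced in lockstep with the plant, with its state periodically corrected from real sensor readings before making a prediction. The class, fields, and numbers are hypothetical illustrations, not the thesis's actual tooling.

```python
# Toy digital twin: synthetic model stepped forward, then corrected
# with measurements from the real equipment (simple blending gain).
class MachineModel:
    def __init__(self, wear_per_cycle=0.001):
        self.wear = 0.0
        self.wear_per_cycle = wear_per_cycle

    def step(self, cycles=1):
        # purely synthetic prediction of wear accumulation
        self.wear += self.wear_per_cycle * cycles

    def assimilate(self, measured_wear, gain=0.5):
        # blend real sensor data into the model state
        self.wear += gain * (measured_wear - self.wear)

    def cycles_until_maintenance(self, threshold=1.0):
        # extra-functional prediction enabled by the corrected state
        remaining = max(threshold - self.wear, 0.0)
        return remaining / self.wear_per_cycle

twin = MachineModel()
for measured in [0.010, 0.021, 0.035]:   # readings from the real equipment
    twin.step(10)
    twin.assimilate(measured)
print(round(twin.cycles_until_maintenance()), "cycles to maintenance")
```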

    Representation Learning Methods for Sequential Information in Marketing and Customer Level Transactions

    Get PDF
    The rapid growth of data generated by businesses has surpassed human capabilities to produce actionable insights. Modern marketing applications depend on vast amounts of labelled customer data and supervised machine learning algorithms to predict customer behaviour and their potential next actions. However, this process requires significant effort in data pre-processing and the involvement of domain experts, which can be costly and time-consuming. This work reviews representation learning techniques as an alternative to feature engineering, aiming to eliminate the need for hand-crafted features and accelerate the process of extracting insights from data. Techniques such as Bayesian neural networks, general embeddings, and encoder-decoder architectures are explored to compress information obtained directly from raw input data into a dense probabilistic space. This thesis introduces the necessary technical aspects of neural networks and representation learning, from traditional methods like principal component analysis (PCA) and embeddings to latent-variable and generative methods that use deep neural networks, such as variational auto-encoders and Bayesian neural networks. It also explores the theoretical background of survival analysis and recommender systems, which serve as the foundation for the applications presented in this work: predicting when individuals are likely to end their relationship with a business in a non-contractual setting, and which items individuals are most likely to interact with in their next purchase. Experiments conducted on real-world retail and benchmark datasets demonstrate comparable predictive performance and superior computational efficiency compared to existing methods.
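
    As a minimal sketch of the "traditional" end of the methods surveyed, here is PCA used as representation learning: compressing per-customer transaction features into a dense low-dimensional space. The feature matrix is synthetic and the function name is an assumption.

```python
# PCA via SVD: one dense embedding per customer, no hand-crafted features.
import numpy as np

rng = np.random.default_rng(0)
X = rng.poisson(3.0, size=(500, 20)).astype(float)  # customers x raw features

def pca_embed(X, k=3):
    Xc = X - X.mean(axis=0)                  # center each feature
    # right singular vectors = principal directions of the data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                     # project onto top-k directions

Z = pca_embed(X)
print(Z.shape)   # (500, 3): one dense representation per customer
```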

    Interoperability of Enterprise Software and Applications

    Get PDF

    Adaptive object management for distributed systems

    Get PDF
    This thesis describes an architecture supporting the management of pluggable software components and evaluates it against the requirements for an enterprise integration platform for the manufacturing and petrochemical industries. In a distributed environment, we need mechanisms to manage objects and their interactions. At the least, we must be able to create objects in different processes on different nodes; we must be able to link them together so that they can pass messages to each other across the network; and we must deliver their messages in a timely and reliable manner. Object-based environments which support these services already exist, for example ANSAware (ANSA, 1989), DEC's ObjectBroker (ACA, 1992), and Iona's Orbix (Orbix, 1994). Yet such environments provide limited support for composing applications from pluggable components. Pluggability is the ability to install and configure a component into an environment dynamically when the component is used, without specifying static dependencies between components when they are produced. Pluggability is supported to a degree by dynamic binding: components may be programmed to import references to other components and to explore their interfaces at runtime, without using static type dependencies. Yet this overloads the component with the responsibility to explore bindings. What is still generally missing is an efficient general-purpose binding model for managing bindings between independently produced components. In addition, existing environments provide no clear strategy for dealing with fine-grained objects: the overhead of runtime binding and remote messaging will severely reduce performance where there are many objects with complex patterns of interaction. We need an adaptive approach to managing configurations of pluggable components according to the needs and constraints of the environment. Management is made difficult by embedding bindings in component implementations and by relying on strong typing as the only means of verifying and validating bindings. To solve these problems we have built a set of configuration tools on top of an existing distributed support environment. Specification tools facilitate the construction of independent pluggable components; visual composition tools facilitate the configuration of components into applications and the verification of composite behaviours. A configuration model is constructed which maintains the environmental state, and adaptive management is made possible by changing the management policy according to this state. Such policy changes affect the location of objects, their bindings, and the choice of messaging system.
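
    A small sketch of the pluggability idea being argued for: components declare the roles they require, and a configuration model binds them at runtime, so the binding policy can change without touching the components. All names here are illustrative assumptions, not the thesis's actual tools.

```python
# Runtime binding through a registry instead of static dependencies.
class Registry:
    def __init__(self):
        self._impls = {}

    def register(self, interface, factory):
        self._impls[interface] = factory

    def bind(self, interface):
        # The binding decision lives here, so policy (local vs. remote
        # placement, messaging choice) can change without editing components.
        return self._impls[interface]()

class Logger:                    # a role/interface a component may require
    def log(self, msg): ...

class ConsoleLogger(Logger):
    def log(self, msg):
        print("[log]", msg)

class OrderService:
    def __init__(self, registry):
        # the component imports a reference by role, not by concrete type
        self.logger = registry.bind(Logger)

    def place(self, item):
        self.logger.log(f"order placed: {item}")

reg = Registry()
reg.register(Logger, ConsoleLogger)   # swap implementations per environment
OrderService(reg).place("valve")
```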