
    Designing Software Architectures As a Composition of Specializations of Knowledge Domains

    This paper summarizes our experimental research and software development activities in designing robust, adaptable and reusable software architectures. Several years ago, based on our previous experiences in object-oriented software development, we made the following assumption: ‘A software architecture should be a composition of specializations of knowledge domains’. To verify this assumption we carried out three pilot projects. In addition to applying popular domain analysis techniques such as use cases, we identified the invariant compositional structures of the software architectures and the related knowledge domains. Knowledge domains define the boundaries of the adaptability and reusability capabilities of software systems. Next, knowledge domains were mapped to object-oriented concepts. We found that some aspects of knowledge could not be directly modeled in terms of object-oriented concepts. In this paper we describe our approach, the pilot projects, the problems we encountered, and the solutions we adopted for realizing the software architectures. We conclude the paper with the lessons that we learned from this experience.

    Performance models of concurrency control protocols for transaction processing systems

    Transaction processing plays a key role in many IT infrastructures. It is widely used in a variety of contexts, ranging from database management systems to concurrent programming tools. Transaction processing systems rely on concurrency control protocols, which allow them to process transactions concurrently while preserving essential properties such as isolation and atomicity. Performance is a critical aspect of transaction processing systems, and it is unavoidably affected by concurrency control. For this reason, methods and techniques to assess and predict the performance of concurrency control protocols are of interest to many IT players, including application designers, developers and system administrators. Analyzing and properly understanding the impact of these protocols on system performance requires quantitative approaches. Analytical modeling is a practical approach for building cost-effective computer system performance models, enabling us to quantitatively describe the complex dynamics characterizing these systems. In this dissertation we present analytical performance models of concurrency control protocols. We deal with both traditional transaction processing systems, such as database management systems, and emerging ones, such as transactional memories. The analysis focuses on widely used protocols, providing detailed performance models and validation studies. In addition, we propose new modeling approaches, which also broaden the scope of our study towards a more realistic, application-oriented performance analysis.
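
    As an illustration of the kind of closed-form reasoning such analytical models enable (a standard first-order approximation, not the model developed in the dissertation), consider N concurrent transactions, each accessing k items chosen uniformly at random from a database of D items. The probability that a given transaction conflicts with at least one concurrent transaction is approximately

    \[ P_{\text{conflict}} \approx 1 - \left(1 - \frac{k}{D}\right)^{k(N-1)} \approx \frac{k^{2}(N-1)}{D} \quad \text{for } k \ll D. \]

    Even this crude estimate makes the qualitative behaviour visible: contention grows roughly linearly with the degree of concurrency and quadratically with transaction size, dependencies that a full analytical model captures in much finer detail.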

    Neuroimaging study designs, computational analyses and data provenance using the LONI pipeline.

    Modern computational neuroscience employs diverse software tools and multidisciplinary expertise to analyze heterogeneous brain data. The classical problems of gathering meaningful data, fitting specific models, and discovering appropriate analysis and visualization tools give way to a new class of computational challenges: management of large and incongruous data, integration and interoperability of computational resources, and data provenance. We designed, implemented and validated a new paradigm for addressing these challenges in the neuroimaging field. Our solution is based on the LONI Pipeline environment [3], [4], a graphical workflow environment for constructing and executing complex data processing protocols. We developed study-design, database and visual language programming functionalities within the LONI Pipeline that enable the construction of complete, elaborate and robust graphical workflows for analyzing neuroimaging and other data. These workflows facilitate open sharing and communication of data and metadata, concrete processing protocols, result validation, and study replication among different investigators and research groups. The LONI Pipeline features include a distributed grid-enabled infrastructure, a virtualized execution environment, efficient integration, data provenance, validation and distribution of new computational tools, automated data format conversion, and an intuitive graphical user interface. We demonstrate the new LONI Pipeline features using large-scale neuroimaging studies based on data from the International Consortium for Brain Mapping [5] and the Alzheimer's Disease Neuroimaging Initiative [6]. User guides, forums, instructions and downloads of the LONI Pipeline environment are available at http://pipeline.loni.ucla.edu.
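
    The LONI Pipeline itself is a graphical tool with its own workflow format; purely as a conceptual sketch of the underlying idea (all class and method names below are hypothetical and not part of the LONI Pipeline API), the following Java fragment chains processing steps into a small workflow while recording a provenance entry for each produced artifact:

    import java.util.*;
    import java.util.function.Function;

    // Conceptual sketch only: a linear "workflow" of processing steps,
    // each recording which step produced which output and when.
    public class WorkflowSketch {
        record Provenance(String step, String input, String output, long timestamp) {}

        static final List<Provenance> log = new ArrayList<>();

        static String runStep(String stepName, String input, Function<String, String> tool) {
            String output = tool.apply(input);
            log.add(new Provenance(stepName, input, output, System.currentTimeMillis()));
            return output;
        }

        public static void main(String[] args) {
            // Hypothetical neuroimaging-style steps chained as a workflow.
            String raw = "subject01_raw.nii";
            String stripped = runStep("skull-strip", raw, in -> in.replace("raw", "brain"));
            String registered = runStep("register", stripped, in -> in.replace("brain", "mni"));
            log.forEach(p -> System.out.println(p.step() + ": " + p.input() + " -> " + p.output()));
            System.out.println("Final output: " + registered);
        }
    }

    A real workflow environment generalizes this linear chain to an arbitrary directed acyclic graph, executes independent steps in parallel on distributed resources, and stores the provenance records alongside the data so that studies can be validated and replicated.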

    Architectural Principles for Database Systems on Storage-Class Memory

    Database systems have long been optimized to hide the higher latency of storage media, yielding complex persistence mechanisms. With the advent of large DRAM capacities, it became possible to keep a full copy of the data in DRAM. Systems that leverage this possibility, such as main-memory databases, keep two copies of the data in two different formats: one in main memory and the other one in storage. The two copies are kept synchronized using snapshotting and logging. This main-memory-centric architecture yields nearly two orders of magnitude faster analytical processing than traditional, disk-centric ones. The rise of Big Data emphasized the importance of such systems with an ever-increasing need for more main memory. However, DRAM is hitting its scalability limits: It is intrinsically hard to further increase its density. Storage-Class Memory (SCM) is a group of novel memory technologies that promise to alleviate DRAM’s scalability limits. They combine the non-volatility, density, and economic characteristics of storage media with the byte-addressability and a latency close to that of DRAM. Therefore, SCM can serve as persistent main memory, thereby bridging the gap between main memory and storage. In this dissertation, we explore the impact of SCM as persistent main memory on database systems. Assuming a hybrid SCM-DRAM hardware architecture, we propose a novel software architecture for database systems that places primary data in SCM and directly operates on it, eliminating the need for explicit IO. This architecture yields many benefits: First, it obviates the need to reload data from storage to main memory during recovery, as data is discovered and accessed directly in SCM. Second, it allows replacing the traditional logging infrastructure by fine-grained, cheap micro-logging at data-structure level. Third, secondary data can be stored in DRAM and reconstructed during recovery. Fourth, system runtime information can be stored in SCM to improve recovery time. Finally, the system may retain and continue in-flight transactions in case of system failures. However, SCM is no panacea as it raises unprecedented programming challenges. Given its byte-addressability and low latency, processors can access, read, modify, and persist data in SCM using load/store instructions at a CPU cache line granularity. The path from CPU registers to SCM is long and mostly volatile, including store buffers and CPU caches, leaving the programmer with little control over when data is persisted. Therefore, there is a need to enforce the order and durability of SCM writes using persistence primitives, such as cache line flushing instructions. This in turn creates new failure scenarios, such as missing or misplaced persistence primitives. We devise several building blocks to overcome these challenges. First, we identify the programming challenges of SCM and present a sound programming model that solves them. Then, we tackle memory management, as the first required building block to build a database system, by designing a highly scalable SCM allocator, named PAllocator, that fulfills the versatile needs of database systems. Thereafter, we propose the FPTree, a highly scalable hybrid SCM-DRAM persistent B+-Tree that bridges the gap between the performance of transient and persistent B+-Trees. Using these building blocks, we realize our envisioned database architecture in SOFORT, a hybrid SCM-DRAM columnar transactional engine. 
We propose an SCM-optimized MVCC scheme that eliminates write-ahead logging from the critical path of transactions. Since SCM-resident data is near-instantly available upon recovery, the new recovery bottleneck is rebuilding DRAM-based data. To alleviate this bottleneck, we propose a novel recovery technique that achieves nearly instant responsiveness of the database by accepting queries right after recovering SCM-based data, while rebuilding DRAM-based data in the background. Additionally, SCM brings new failure scenarios that existing testing tools cannot detect. Hence, we propose an online testing framework that is able to automatically simulate power failures and detect missing or misplaced persistence primitives. Finally, our proposed building blocks can serve to build more complex systems, paving the way for future database systems on SCM.
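
    As a language-level illustration of the recovery idea described above (a minimal sketch, not SOFORT's actual implementation; all names are hypothetical), the fragment below accepts lookups against the already-recovered primary data immediately after restart, while a background thread rebuilds a transient secondary index and a scan serves as a fallback in the meantime:

    import java.util.*;
    import java.util.concurrent.*;

    // Conceptual sketch: primary data is assumed already recovered (standing in for
    // SCM-resident data); a transient secondary index (DRAM-style) is rebuilt in the background.
    public class InstantRecoverySketch {
        private final Map<Integer, String> primary = new ConcurrentHashMap<>();
        private final ConcurrentSkipListMap<String, Integer> byValue = new ConcurrentSkipListMap<>();
        private volatile boolean indexReady = false;

        InstantRecoverySketch(Map<Integer, String> recovered) {
            primary.putAll(recovered);
            // Rebuild the secondary index in the background; queries are accepted meanwhile.
            Thread rebuild = new Thread(() -> {
                primary.forEach((k, v) -> byValue.put(v, k));
                indexReady = true;
            });
            rebuild.setDaemon(true);
            rebuild.start();
        }

        String lookupByKey(int key) {          // available immediately after "recovery"
            return primary.get(key);
        }

        Integer lookupByValue(String value) {  // falls back to a scan until the index is rebuilt
            if (indexReady) return byValue.get(value);
            return primary.entrySet().stream()
                    .filter(e -> e.getValue().equals(value))
                    .map(Map.Entry::getKey).findFirst().orElse(null);
        }

        public static void main(String[] args) {
            var db = new InstantRecoverySketch(Map.of(1, "alpha", 2, "beta"));
            System.out.println(db.lookupByKey(1));        // served right away
            System.out.println(db.lookupByValue("beta")); // correct before and after the rebuild completes
        }
    }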

    RepComp - replicated software components for improved performance

    Work presented in the context of the Master's in Computer Engineering, as a partial requirement for obtaining the degree of Master in Computer Engineering. The current trend of evolution in CPU architectures favours increasing the number of processing cores in lieu of improving the clock speed of an individual core. While improving clock rates automatically benefits any software executing on that processor, the same is not valid for adding new cores. To take advantage of an increased number of cores, software must include explicit support for parallel execution. This work explores a solution based on diverse replication which allows applications to transparently exploit parallel processing power: macro-components. Applications typically make use of components with well-defined interfaces that have a number of possible underlying implementations with different characteristics. A macro-component is a component which encloses several of these implementations while offering the same interface as a regular implementation. Inside the macro-component, the implementations are used as replicas to process incoming operations. By using the best replica for each incoming operation, the macro-component is able to improve global performance. This dissertation provides initial research on the use of these macro-components, detailing the technical challenges faced and proposing a design for the macro-component support system. Additionally, an implementation and subsequent validation of the proposed system are presented. These examples show that macro-components can achieve improved performance versus simple component implementations.
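
    As a minimal sketch of the macro-component idea (illustrative only; the class is hypothetical and this is not the RepComp implementation), the fragment below wraps two implementations of the same set abstraction as replicas, applies updates to both, and routes each read to the replica expected to answer it fastest:

    import java.util.*;

    // Illustrative macro-component: two replicas of a set behind one interface.
    // Updates go to all replicas; each read goes to the replica best suited for it.
    public class MacroSetSketch<E extends Comparable<E>> {
        private final HashSet<E> hashReplica = new HashSet<>();  // fast membership tests
        private final TreeSet<E> treeReplica = new TreeSet<>();  // fast ordered access

        public void add(E e) {            // update: keep all replicas consistent
            hashReplica.add(e);
            treeReplica.add(e);
        }

        public boolean contains(E e) {    // read: the hash replica is the better choice
            return hashReplica.contains(e);
        }

        public E min() {                  // read: the tree replica is the better choice
            return treeReplica.first();
        }

        public static void main(String[] args) {
            MacroSetSketch<Integer> s = new MacroSetSketch<>();
            s.add(3); s.add(1); s.add(2);
            System.out.println(s.contains(2)); // true, answered by the hash replica
            System.out.println(s.min());       // 1, answered by the tree replica
        }
    }

    The replica choice is hard-wired here for brevity; the dissertation's support system addresses the harder problems this sketch avoids, such as selecting replicas dynamically and keeping them consistent under concurrent operations.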

    A modular distributed transactional memory framework

    Dissertation for obtaining the degree of Master in Computer Engineering. Traditional lock-based concurrency control is complex and error-prone due to its low-level nature and composability challenges. Software transactional memory (STM), inherited from the database world, has risen as an exciting alternative, sparing the programmer from dealing explicitly with such low-level mechanisms. In real-world scenarios, software is often faced with requirements such as high availability and scalability, and the solution usually consists of building a distributed system. Given the benefits of STM over traditional concurrency controls, Distributed Software Transactional Memory (DSTM) is now being investigated as an attractive alternative for distributed concurrency control. Our long-term objective is to transparently enable multithreaded applications to execute over a DSTM setting. In this work we intend to pave the way by defining a modular DSTM framework for the Java programming language. We extend an existing, efficient STM framework with a new software layer to create a DSTM framework. This new layer interacts with the local STM using well-defined interfaces, and allows the implementation of different distributed memory models while providing a non-intrusive, familiar programming model to applications, unlike any other DSTM framework. Using the proposed DSTM framework we have successfully, and easily, implemented a replicated STM which uses a Certification protocol to commit transactions. An evaluation using common STM benchmarks showcases the efficiency of the replicated STM, and its modularity enables us to provide insight on the relevance of different implementations of the Group Communication System required by the Certification scheme, with respect to performance under different workloads. Fundação para a Ciência e Tecnologia - project PTDC/EIA-EIA/113613/2009.
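
    Purely as a sketch of how a distribution layer can sit on top of a local STM through narrow interfaces (the interface and class names below are hypothetical and not the framework's actual API), the fragment separates the local STM from a pluggable commit protocol; a full certification-based implementation would additionally broadcast the transaction's read and write sets through a group communication system before certifying them:

    import java.util.*;

    // Sketch of a modular DSTM layering: the distribution layer talks to the local STM
    // and to a pluggable distributed commit protocol through small interfaces.
    public class DstmLayerSketch {

        interface LocalStm {                 // what the distribution layer needs from the local STM
            Map<String, Integer> readSet();  // key -> version observed
            Map<String, Integer> writeSet(); // key -> new value
            void applyLocally(Map<String, Integer> writes);
        }

        interface CommitProtocol {           // pluggable distributed memory model / commit scheme
            boolean commit(Map<String, Integer> readSet, Map<String, Integer> writeSet);
        }

        // Toy single-node stand-in for certification: a transaction commits only if no key
        // it read was overwritten by a previously committed transaction (version check).
        static class CertificationProtocol implements CommitProtocol {
            private final Map<String, Integer> committedVersions = new HashMap<>();
            public synchronized boolean commit(Map<String, Integer> readSet, Map<String, Integer> writeSet) {
                for (var read : readSet.entrySet()) {
                    int current = committedVersions.getOrDefault(read.getKey(), 0);
                    if (current != read.getValue()) return false;  // stale read: abort
                }
                writeSet.keySet().forEach(k -> committedVersions.merge(k, 1, Integer::sum));
                return true;  // a distributed variant would broadcast via group communication first
            }
        }

        public static void main(String[] args) {
            CommitProtocol protocol = new CertificationProtocol();
            System.out.println(protocol.commit(Map.of("x", 0), Map.of("x", 42))); // true: x unchanged since read
            System.out.println(protocol.commit(Map.of("x", 0), Map.of("x", 43))); // false: x was updated meanwhile
        }
    }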

    Optimized traffic scheduling and routing in smart home networks

    Home networks are evolving rapidly to include heterogeneous physical access and a large number of smart devices that generate different types of traffic with different distributions and different Quality of Service (QoS) requirements. Due to their particular architectures, which are very dense and very dynamic, the traditional one-pair-node shortest path solution is no longer efficient for handling inter-smart home network (inter-SHN) routing constraints such as delay, packet loss, and bandwidth over all-pair-node heterogeneous links. In addition, current QoS-aware scheduling methods consider only the conventional priority metrics based on the IP Type of Service (ToS) field to make decisions for bandwidth allocation. Such priority-based scheduling methods are not optimal for providing both QoS and Quality of Experience (QoE), especially for smart home applications, since higher-priority traffic does not necessarily require more stringent delay than lower-priority traffic. Moreover, current QoS-aware scheduling methods in the intra-smart home network (intra-SHN) do not consider concurrent traffic caused by the fluctuation of intra-SH network traffic distributions. Thus, the goal of this dissertation is to build an efficient heterogeneous multi-constrained routing mechanism and an optimized traffic scheduling tool in order to maintain cost-effective communication between all wired-wireless connected devices in inter-SHNs and to effectively process concurrent and non-concurrent traffic in intra-SHN. This will help Internet service providers (ISPs) and home users to enhance the overall QoS and QoE of their applications while maintaining relevant communication in both inter-SHNs and intra-SHN. In order to meet this goal, three key issues must be addressed in our framework, summarized as follows: i) how to build a cost-effective routing mechanism in heterogeneous inter-SHNs? ii) how to efficiently schedule the multi-sourced intra-SHN traffic based on both QoS and QoE? and iii) how to design an optimized queuing model for intra-SHN concurrent traffic while considering its QoS requirements? As part of our contributions to solve the first problem highlighted above, we present an analytical framework for dynamically optimizing data flows in inter-SHNs using Software-Defined Networking (SDN). We formulate a QoS-based routing optimization problem as a constrained shortest path problem and then propose an optimized solution (QASDN) to determine the minimal cost between all pairs of nodes in the network, taking into account the different types of physical access and the network utilization patterns. To address the second issue and to bridge the gap between QoS and QoE, we propose a new queuing model for QoS-level pair traffic with mixed arrival distributions in smart home networks (QP-SH) to make dynamic QoS-aware scheduling decisions that meet the delay requirements of all traffic while preserving its degree of criticality. A new metric is defined, combining the ToS field and the maximum number of packets that the system's service can process within the maximum required delay. Finally, as part of our contribution to address the third issue, we present an analytic model for QoS-aware scheduling optimization of concurrent intra-SHN traffic with mixed arrival distributions, using probabilistic queuing disciplines. We formulate a hybrid QoS-aware scheduling problem for concurrent traffic in intra-SHN, propose an innovative queuing model (QC-SH) based on the auction economic model of game theory to provide fair multiple access over different communication channels/ports, and design an applicable model to implement the auction game on both sides, traffic sources and the home gateway, without changing the structure of the IEEE 802.11 standard. The results of our work offer SHNs more effective data transfer between all heterogeneous connected devices with optimal resource utilization, dynamic QoS/QoE-aware traffic processing in SHNs, and an innovative model for optimizing concurrent SHN traffic scheduling with an enhanced fairness strategy. Numerical results show improvements of up to 90% for network resource utilization, 77% for bandwidth, 40% for scheduling with QoS and QoE, and 57% for concurrent traffic scheduling delay using our proposed solutions compared with traditional methods.
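
    For concreteness, the constrained shortest path problem mentioned above can be stated in its standard textbook form (an illustrative formulation under generic assumptions, not necessarily the exact one adopted by QASDN): given a graph G = (V, E) with per-link cost c_e and delay d_e, a source s, a destination t, and an end-to-end delay budget \Delta, find a least-cost path whose total delay respects the budget:

    \[
    \min \sum_{e \in E} c_e\, x_e
    \quad \text{s.t.} \quad
    \sum_{e \in E} d_e\, x_e \le \Delta,
    \qquad
    \sum_{e \in \delta^{+}(v)} x_e - \sum_{e \in \delta^{-}(v)} x_e =
    \begin{cases}
     1 & v = s,\\
    -1 & v = t,\\
     0 & \text{otherwise,}
    \end{cases}
    \qquad x_e \in \{0,1\}.
    \]

    The delay-constrained least-cost path problem is NP-hard in general, which is why practical SDN-based schemes typically rely on heuristics, relaxations, or restricted search tuned to the observed network utilization patterns.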