
    Analysis, classification and comparison of scheduling techniques for software transactional memories

    Transactional Memory (TM) is a practical programming paradigm for developing concurrent applications. Performance is a critical factor for TM implementations, and various studies have demonstrated that specialised transaction/thread scheduling support is essential for building performance-effective TM systems. After a decade of research, this article reviews the wide variety of scheduling techniques proposed for Software Transactional Memories. Based on the peculiarities and differences of the adopted scheduling strategies, we propose a classification of the existing techniques and discuss the specific characteristics of each one. We also analyse the results of previous evaluation and comparison studies, and we present the results of a new experimental study encompassing techniques based on different scheduling strategies. Finally, we identify potential strengths and weaknesses of the different techniques, as well as the issues that require further investigation.
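
    As a minimal illustration of one family the survey covers (reactive serialization, where a transaction aborted by a conflict waits for its "enemy" transaction to finish before retrying), here is a hypothetical Python sketch; the class and method names are illustrative assumptions, not taken from any surveyed system.

```python
import threading

class SerializingScheduler:
    """Toy reactive scheduler: after a conflict-induced abort, the aborted
    thread waits for the enemy transaction to commit or abort before
    retrying, serializing the two contenders instead of letting them
    clash again immediately."""

    def __init__(self):
        self._done = {}          # transaction id -> Event set when it finishes
        self._lock = threading.Lock()

    def begin(self, tx_id):
        with self._lock:
            self._done[tx_id] = threading.Event()

    def finish(self, tx_id):
        with self._lock:
            ev = self._done.pop(tx_id, None)
        if ev is not None:
            ev.set()             # wake any transaction serialized behind us

    def on_abort(self, tx_id, enemy_id):
        # Block the aborted transaction until its enemy finishes.
        with self._lock:
            enemy_done = self._done.get(enemy_id)
        if enemy_done is not None:
            enemy_done.wait()
```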

    Preemptive Software Transactional Memory

    In state-of-the-art Software Transactional Memory (STM) systems, threads carry out the execution of transactions as non-interruptible tasks. Hence, a thread can react to the injection of a higher-priority transactional task, and take care of its processing, only at the end of the currently executed transaction. In this article we pursue a paradigm shift where the execution of an in-memory transaction is carried out as a preemptable task, so that a thread can start processing a higher-priority transactional task before finalizing its current transaction. We achieve this goal in an application-transparent manner, relying only on Operating System facilities that we include in our preemptive STM architecture. With our approach we are able to re-evaluate CPU assignment across transactions along the same thread every few tens of microseconds. This is mandatory for an effective priority-aware architecture, given the typically finer-grained nature of in-memory transactions compared to their counterparts in database systems. We integrated our preemptive STM architecture with the TinySTM package and released it as open source. We also provide the results of an experimental assessment of our proposal based on running a port of the TPC-C benchmark to the STM environment.
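
    The paper's mechanism is built on OS-level facilities inside TinySTM; purely as a hypothetical illustration of the control flow (quantum-based checks for higher-priority arrivals, with abort-and-requeue of the running transaction), a Python sketch might look as follows. The task interface (steps(), abort()) is an assumption made for illustration.

```python
import heapq
import itertools
import time

class PreemptiveExecutor:
    """Toy illustration of the control flow: the worker checks, every
    'quantum' seconds (standing in for the tens-of-microseconds granularity
    in the article), whether a higher-priority transactional task arrived;
    if so, it aborts and requeues the transaction it is currently running."""

    def __init__(self, quantum=50e-6):
        self.queue = []              # min-heap: (priority, seq, task); lower = higher priority
        self._seq = itertools.count()
        self.quantum = quantum

    def submit(self, priority, task):
        heapq.heappush(self.queue, (priority, next(self._seq), task))

    def run(self):
        while self.queue:
            prio, _, task = heapq.heappop(self.queue)
            last_check = time.monotonic()
            for step in task.steps():            # hypothetical: one tx operation per step
                step()
                now = time.monotonic()
                if now - last_check >= self.quantum:
                    last_check = now
                    if self.queue and self.queue[0][0] < prio:
                        task.abort()             # hypothetical: roll back speculative state
                        self.submit(prio, task)  # retry after the higher-priority task
                        break
```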

    Model-Based Proactive Read-Validation in Transaction Processing Systems

    Concurrency control protocols based on read-validation schemes allow transactions that are doomed to abort to keep running until a subsequent validation check reveals them as invalid. These late aborts do not favor the reduction of wasted computation and can penalize performance. To counteract this problem, we present an analytical model that predicts the abort probability of transactions handled via read-validation schemes. Our goal is to determine the suitable points, along a transaction's lifetime, at which to carry out a validation check. This may lead to early aborting doomed transactions, thus saving CPU time. We show how to exploit the abort probability predictions returned by the model in combination with a threshold-based scheme to trigger read-validations. We also show how this approach can markedly improve performance, leading to up to 14% better turnaround, as demonstrated by experiments carried out with a port of the TPC-C benchmark to Software Transactional Memory.
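
    As a hedged sketch of the idea (not the paper's actual analytical model), one can approximate the abort probability under a simple Poisson-style assumption, that each item in the read set is invalidated at a fixed rate by concurrent commits, and trigger validation only once a threshold is crossed; the transaction interface below is hypothetical.

```python
import math

def abort_probability(read_set_size, invalidation_rate, elapsed):
    # Poisson-style assumption (not the paper's exact model): each read item
    # is invalidated by concurrent commits at a fixed rate; the transaction
    # is doomed as soon as any item in its read set becomes stale.
    return 1.0 - math.exp(-invalidation_rate * read_set_size * elapsed)

def maybe_validate(tx, threshold=0.2):
    # Threshold-based trigger: run the (costly) read-set validation only when
    # the predicted abort probability is high enough, so doomed transactions
    # are aborted early instead of wasting CPU until commit time.
    # 'tx' and its attributes are hypothetical placeholders.
    p = abort_probability(len(tx.read_set), tx.invalidation_rate, tx.elapsed())
    if p >= threshold and not tx.validate():
        tx.abort()
```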

    Dagstuhl News January - December 2008

    "Dagstuhl News" is a publication edited especially for the members of the Foundation "Informatikzentrum Schloss Dagstuhl" to thank them for their support. The News give a summary of the scientific work being done in Dagstuhl. Each Dagstuhl Seminar is presented by a small abstract describing the contents and scientific highlights of the seminar as well as the perspectives or challenges of the research topic

    A Machine Learning Enhanced Scheme for Intelligent Network Management

    Versatile networking services strongly influence daily life, while the amount and diversity of services cause high complexity in network systems. Network scale and complexity grow with increasing infrastructure apparatuses, networking functions, network slices, and the evolution of the underlying architecture. The conventional approach of maintaining such a large and complex platform through manual administration makes effective and insightful management troublesome. A feasible and promising alternative is to extract insightful information from the large volumes of network data produced. The goal of this thesis is to use learning-based algorithms inspired by the machine learning community to discover valuable knowledge from substantial network data, directly promoting intelligent management and maintenance. The thesis focuses on two management and maintenance schemes: network anomaly detection with root cause localization, and critical traffic resource control and optimization.

    Firstly, the abundant network data wrap up informative messages, but their heterogeneity and perplexity make diagnosis challenging. For unstructured logs, abstract and formatted log templates are extracted to regularize log records. An in-depth analysis framework based on heterogeneous data is proposed to detect the occurrence of faults and anomalies. It employs representation learning methods to map unstructured data into numerical features, and fuses the extracted features for network anomaly and fault detection. The representation learning makes use of word2vec-based embedding technologies for semantic expression. Next, since fault and anomaly detection only reveals the occurrence of events without identifying their root causes, fault localization is introduced to narrow down the source of systematic anomalies. The extracted features are turned into an anomaly degree, coupled with an importance ranking method to highlight the locations of anomalies in network systems. Two ranking modes, instantiated by PageRank and by operation errors, jointly highlight the latent problem locations.

    Besides fault and anomaly detection, network traffic engineering manages communication and computation resources to optimize the efficiency of data transfer. Especially when network traffic is constrained by communication conditions, a proactive path planning scheme helps drive efficient traffic control actions. A learning-based traffic planning algorithm is therefore proposed, based on a sequence-to-sequence model, to discover hidden reasonable paths from abundant traffic history data over a Software Defined Network architecture. Finally, traffic engineering based merely on empirical data is likely to result in stale and sub-optimal solutions, or even worse situations, so a resilient mechanism is required to adapt network flows to a dynamic environment based on context. Thus, a reinforcement learning-based scheme is put forward for dynamic data forwarding that considers network resource status, and it shows a promising performance improvement. In the end, the proposed anomaly processing framework strengthens analysis and diagnosis for network system administrators through combined fault detection and root cause localization, while the learning-based traffic engineering improves network flow management via experience data and points to a promising direction for flexible traffic adjustment in ever-changing environments.
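
    As a hypothetical sketch of the word2vec-based log-embedding step described above (illustrative only; the templating regex, model parameters, and centroid-distance scoring are assumptions, not the thesis implementation):

```python
import re
import numpy as np
from gensim.models import Word2Vec

def to_template(line):
    # Crude templating: mask numbers and hex ids so variable fields collapse
    # into a shared "<*>" token (a stand-in for real template extraction).
    return re.sub(r"0x[0-9a-f]+|\d+", "<*>", line.lower()).split()

def train_embeddings(log_lines, dim=32):
    corpus = [to_template(l) for l in log_lines]
    model = Word2Vec(corpus, vector_size=dim, window=3, min_count=1, epochs=20)
    return model, corpus

def line_vector(model, tokens):
    # Average the token embeddings into one numerical feature per log line.
    vecs = [model.wv[t] for t in tokens if t in model.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(model.wv.vector_size)

def anomaly_scores(model, corpus):
    # Distance from the centroid of (mostly normal) lines: larger = stranger.
    X = np.stack([line_vector(model, toks) for toks in corpus])
    return np.linalg.norm(X - X.mean(axis=0), axis=1)
```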

    Accelerated Molecular Dynamics for the Exascale

    A range of specialized Molecular Dynamics (MD) methods have been developed to overcome the challenge of reaching longer timescales in systems that evolve through sequences of rare events. In this talk, we consider Parallel Trajectory Splicing (ParSplice), which works by generating a large number of MD trajectory segments in parallel in such a way that they can later be assembled into a single statistically correct state-to-state trajectory, enabling parallel speedups of up to N, the number of parallel workers. The prospect of strong-scaling MD is extremely enticing given the continuously increasing scale of available computational resources: on current peta-scale platforms N can be in the hundreds of thousands, which opens the door to MD-accurate millisecond-long atomistic simulations; extending such a capability into the exascale era could be transformative. In practice, however, the ability of ParSplice to scale increasingly relies on predicting where the trajectory will be found in the future. With this insight in mind, we develop a maximum likelihood transition model that is updated on the fly, and we make use of an uncertainty-driven estimator to approximate the optimal distribution of trajectory segments to be generated next. In addition, we investigate resource optimization schemes designed to fully utilize computational resources in order to generate the maximum expected throughput.
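
    Purely as an illustrative sketch (not the ParSplice implementation) of the two ingredients described above, a maximum-likelihood Markov transition model updated on the fly, and a speculative allocation of the next batch of segment-generating workers, consider:

```python
from collections import Counter, defaultdict

class TransitionModel:
    def __init__(self):
        self.counts = defaultdict(Counter)   # counts[i][j]: observed i -> j segments

    def observe(self, src, dst):
        self.counts[src][dst] += 1           # on-the-fly maximum-likelihood update

    def prob(self, src):
        row = self.counts[src]
        total = sum(row.values())
        # Unseen states fall back to a self-loop (a simplifying assumption).
        return {j: c / total for j, c in row.items()} if total else {src: 1.0}

def allocate_workers(model, current_state, n_workers, horizon=10):
    # Propagate the state distribution 'horizon' steps ahead and assign
    # workers proportionally to predicted occupancy (rounding ignored here),
    # so segments are pre-generated where the spliced trajectory will likely be.
    dist, occupancy = {current_state: 1.0}, Counter()
    for _ in range(horizon):
        nxt = Counter()
        for s, p in dist.items():
            for t, q in model.prob(s).items():
                nxt[t] += p * q
        dist = dict(nxt)
        occupancy.update(dist)
    total = sum(occupancy.values())
    return {s: round(n_workers * w / total) for s, w in occupancy.items()}
```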

    Mining Event Logs to Support Workflow Resource Allocation

    Workflow technology is widely used to facilitate business processes in enterprise information systems (EIS), and it has the potential to reduce design time, enhance product quality, and decrease product cost. However, significant limitations still exist: although resource allocation is an important task in the context of workflow, many resource allocation operations are still performed manually, which is time-consuming. This paper presents a data mining approach to address the resource allocation problem (RAP) and improve the productivity of workflow resource management. Specifically, an Apriori-like algorithm is used to find frequent patterns in the event log, and association rules are generated according to predefined resource allocation constraints. Subsequently, a correlation measure named lift is utilized to annotate the negatively correlated resource allocation rules for resource reservation. Finally, the rules are ranked using confidence measures as resource allocation rules. Comparative experiments are performed using C4.5, SVM, ID3, Naïve Bayes, and the presented approach, and the results show that the presented approach is effective in both accuracy and candidate resource recommendations.
    Comment: T. Liu et al., Mining event logs to support workflow resource allocation, Knowl. Based Syst. (2012), http://dx.doi.org/10.1016/j.knosys.2012.05.01
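
    As a small hypothetical sketch of the rule-measure step (support, confidence, and lift over an event log of task/resource pairs; function names and thresholds are illustrative, not from the paper):

```python
from collections import Counter

def mine_rules(event_log, min_support=0.01):
    # event_log: iterable of (task, resource) pairs extracted from the log.
    n = len(event_log)
    task_cnt, res_cnt, pair_cnt = Counter(), Counter(), Counter()
    for task, resource in event_log:
        task_cnt[task] += 1
        res_cnt[resource] += 1
        pair_cnt[(task, resource)] += 1
    rules = []
    for (task, resource), c in pair_cnt.items():
        support = c / n
        if support < min_support:        # Apriori-style frequency pruning
            continue
        confidence = c / task_cnt[task]
        lift = confidence / (res_cnt[resource] / n)
        rules.append({"rule": (task, resource), "support": support,
                      "confidence": confidence, "lift": lift,
                      "negative": lift < 1.0})  # lift < 1: candidate for reservation
    # Rank candidate allocations by confidence, as the paper describes.
    return sorted(rules, key=lambda r: r["confidence"], reverse=True)
```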

    Modeling and formal verification of probabilistic reconfigurable systems

    In this thesis, we propose a new approach for the formal modeling and verification of adaptive probabilistic systems. Dynamic reconfigurable systems are the trend for future technological systems, such as flight control systems, vehicle electronic systems, and manufacturing systems. In order to meet user and environmental requirements, such a dynamic reconfigurable system has to actively adjust its configuration at run-time, by modifying its components and connections, whenever changes are detected in the internal or external execution environment. On the other hand, these changes may violate memory usage, energy, and real-time constraints, since the behavior of the system is unpredictable. They might also make the system's functions unavailable for some time and cause potential harm to human life or large financial losses. Thus, updating a system with any new configuration requires that the reconfigured system fully satisfies the related constraints. We introduce the GR-TNCES formalism, an extension of Petri nets, for the optimal functional and temporal specification of probabilistic reconfigurable systems under resource constraints. It enables the optimal specification of the probabilistic, energy, and memory constraints of such a system. To formally verify the correctness and safety of such a probabilistic system specification, and the non-violation of its properties, an automatic transformation from GR-TNCES models into PRISM models is introduced, so that the resulting models can be checked with the established probabilistic model checker PRISM. Moreover, a new logic, XCTL, is proposed to formally verify reconfigurable systems; it enables the formal certification of incomplete and reconfigurable systems. A new version of the software ZIZO is also proposed to model, simulate, and verify such GR-TNCES models. To prove its relevance, the approach was applied to case studies: it was used to model and simulate the behavior of an IPv4 protocol to prevent energy and memory resource violations, and to optimize the energy consumption of an automotive skid conveyor.
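
    As a toy illustration of the transformation idea (not the ZIZO/GR-TNCES implementation), a probabilistic state-transition structure can be flattened into PRISM's DTMC input language, which the PRISM model checker accepts:

```python
def to_prism(n_states, transitions, init=0):
    """transitions: {state: [(probability, successor), ...]};
    the probabilities of each state's outgoing list must sum to 1."""
    lines = ["dtmc", "", "module reconfig",
             f"  s : [0..{n_states - 1}] init {init};"]
    for state, outs in sorted(transitions.items()):
        rhs = " + ".join(f"{p}:(s'={t})" for p, t in outs)
        lines.append(f"  [] s={state} -> {rhs};")
    lines.append("endmodule")
    return "\n".join(lines)

# Hypothetical example: a reconfiguration that succeeds with probability 0.95
# (state 1) and otherwise falls back to a safe mode (state 2).
print(to_prism(3, {0: [(0.95, 1), (0.05, 2)],
                   1: [(1.0, 1)],
                   2: [(1.0, 2)]}))
```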