
    RULE BASED ADAPTATION: LITERATURE REVIEW

    Rule-based adaptive systems are growing in popularity, and rules have come to be seen as an effective and flexible way to adapt systems. A rule-based approach allows transparent monitoring of the adaptation actions performed and offers the important advantage of an easily modifiable adaptation process. The goal of this paper is to summarize a literature review on rule-based adaptation systems. The emphasis is put on rule types, the semantics used for defining rules, and the measurement of the effectiveness and correctness of rule-based adaptation systems. The literature review followed a systematic approach consisting of three steps: planning, reviewing and analysis. Targeted research questions were used to guide the review process. The review results are to be used for conducting further research in the area of rule-based context-aware adaptive systems. This paper highlights the potential of using rules as a means to perform adaptive actions in enterprise applications, taking contextual factors into account, and points out the challenges, difficulties and open issues in planning, developing, implementing and running such systems.
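
    To make the idea concrete, here is a minimal sketch of a rule-based adaptation loop (all names and thresholds are hypothetical, invented for illustration; no specific surveyed system is implied): rules are condition-action pairs evaluated over a context snapshot, each fired rule is logged by name (transparent monitoring), and changing the adaptation behaviour only requires editing the rule set, not the engine.

```cpp
#include <functional>
#include <iostream>
#include <map>
#include <string>
#include <vector>

// Context: a simple key-value snapshot of the environment (hypothetical keys).
using Context = std::map<std::string, double>;

// A rule pairs a condition over the context with an adaptation action.
struct Rule {
    std::string name;
    std::function<bool(const Context&)> condition;
    std::function<void()> action;
};

int main() {
    std::vector<Rule> rules = {
        {"scale-out",
         [](const Context& c) { return c.at("cpu_load") > 0.8; },
         [] { std::cout << "adapt: add a worker instance\n"; }},
        {"reduce-quality",
         [](const Context& c) { return c.at("bandwidth_mbps") < 1.0; },
         [] { std::cout << "adapt: lower media quality\n"; }},
    };

    Context ctx{{"cpu_load", 0.9}, {"bandwidth_mbps", 5.0}};

    // Transparent monitoring: every fired rule is logged by name,
    // and the rule set above can be modified without touching this loop.
    for (const auto& r : rules)
        if (r.condition(ctx)) {
            std::cout << "rule fired: " << r.name << "\n";
            r.action();
        }
}
```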

    The Design, Implementation, and Refinement of Wait-Free Algorithms and Containers

    My research has been on the development of concurrent algorithms for shared memory systems that provide guarantees of progress. Research into such algorithms is important to developers implementing applications on mission-critical and time-sensitive systems. These guarantees of progress provide safety properties and freedom from many hazards, such as deadlock, livelock, and thread starvation. In addition to the safety benefits, the fine-grained synchronization used in implementing these algorithms promises to provide scalable performance in massively parallel systems. My research has resulted in the development of wait-free versions of the stack, hash map, ring buffer, and vector, as well as a multi-word compare-and-swap algorithm. Through this experience, I have learned and developed new techniques and methodologies for implementing non-blocking and wait-free algorithms. I have worked with and refined existing techniques to improve their practicality and applicability. In the creation of the aforementioned algorithms, I have developed an association model for use with descriptor-based operations. This model, originally developed for the multi-word compare-and-swap algorithm, has been applied to the design of the vector and ring buffer algorithms. To unify these algorithms and techniques, I have released Tervel, a wait-free library of common algorithms and containers. This library includes a framework that simplifies and improves the design of non-blocking algorithms. I have reimplemented several algorithms using this framework, and the resulting implementations exhibit less code duplication and fewer perceivable states. When reimplementing algorithms, I have adapted their Application Programming Interface (API) specifications to remove the ambiguity and non-deterministic behavior found when using a sequential API in a concurrent environment. To improve the performance of my algorithm implementations, I extended OVIS's Lightweight Distributed Metric Service (LDMS)'s data collection and transport system to support performance monitoring using the perf_event and PAPI libraries. These libraries have provided me with deeper insights into the behavior of my algorithms, and I was able to use these insights to improve their design and performance.
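
    As an illustration of the descriptor-based style this work builds on, below is a highly simplified single-word sketch (not Tervel's actual API; all types and names are invented, and memory reclamation is omitted): an operation publishes a descriptor at the target location so that any thread encountering it can help complete the operation rather than block behind it.

```cpp
#include <atomic>
#include <cstdint>
#include <iostream>

struct Descriptor {
    std::atomic<uintptr_t>* addr;  // location being modified
    uintptr_t expected;            // value we expect to find there
    uintptr_t desired;             // value we want to install
};

// Tag the low bit to mark a published descriptor (assumes aligned pointers).
inline uintptr_t mark(Descriptor* d) { return reinterpret_cast<uintptr_t>(d) | 1u; }
inline bool is_marked(uintptr_t v) { return (v & 1u) != 0; }
inline Descriptor* unmark(uintptr_t v) { return reinterpret_cast<Descriptor*>(v & ~uintptr_t{1}); }

// Any thread (owner or helper) can finish the operation described by d.
void help_complete(Descriptor* d) {
    uintptr_t placed = mark(d);
    d->addr->compare_exchange_strong(placed, d->desired);
}

bool descriptor_cas(std::atomic<uintptr_t>& target, Descriptor* d) {
    uintptr_t cur = d->expected;
    // Publish the descriptor in place of the expected value.
    if (!target.compare_exchange_strong(cur, mark(d))) {
        if (is_marked(cur)) help_complete(unmark(cur));  // help the other op
        return false;
    }
    help_complete(d);  // finish our own operation
    return true;
}

int main() {
    std::atomic<uintptr_t> slot{10};
    Descriptor d{&slot, 10, 42};
    std::cout << descriptor_cas(slot, &d) << " " << slot.load() << "\n";  // 1 42
}
```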

    New IR & Ranking Algorithm for Top-K Keyword Search on Relational Databases ‘Smart Search’

    Database management systems are as old as computers, and research and development in databases continues on a large scale, drawing the interest of many database vendors and researchers. Many researchers work on solving problems in, and developing new modules and frameworks for, more efficient and effective information retrieval based on free-form search by users with no knowledge of the structure of the database. Our work, as an extension of previous work, introduces new algorithms and components for existing databases that enable the user to search for keywords with high performance and effective top-k results. This work aims at introducing a new table structure for the indexing of keywords, which helps the algorithms understand the semantics of keywords and generate only the correct CNs (Candidate Networks) for fast retrieval of information, with results ranked according to the user's history, the semantics of the keywords, the distance between keywords, and keyword matches. Three modules were developed for this purpose. We implemented the three proposed modules and created the necessary tables, and developed a web search interface called 'Smart Search' to test our work with different users. The interface records all user interaction with Smart Search for analysis; the analysis of the results shows improvements in performance and in the effectiveness of the results returned to the user. We conducted hundreds of searches with randomly generated search terms of different sizes and with multiple users; all results recorded and analyzed by the system were based on different factors and parameters. We also compared our results with previous work by other researchers on the DBLP database, which we used in our research. Our final analysis shows the importance of introducing new components to the database for top-k keyword search, and demonstrates the performance of our proposed system with highly effective results.
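
    As a rough illustration of the approach (hypothetical structures, not the thesis's actual schema or code), the sketch below shows an inverted keyword index over relational rows and a toy score combining factors the abstract names: keyword match, the user's search history, and a crude stand-in for keyword distance based on position.

```cpp
#include <iostream>
#include <map>
#include <set>
#include <string>
#include <utility>
#include <vector>

// One occurrence of a keyword in the database (illustrative structure).
struct Posting { std::string table; int row; int position; };

int main() {
    // Keyword index table: keyword -> where it occurs.
    std::map<std::string, std::vector<Posting>> index = {
        {"smart",  {{"papers", 7, 0}}},
        {"search", {{"papers", 7, 1}, {"papers", 9, 4}}},
    };
    std::set<std::string> user_history = {"search"};  // user's past queries

    std::vector<std::string> query = {"smart", "search"};
    std::map<std::pair<std::string, int>, double> score;  // (table,row) -> score

    for (const auto& kw : query)
        for (const auto& p : index[kw]) {
            double s = 1.0;                        // base score: keyword match
            if (user_history.count(kw)) s += 0.5;  // user-history boost
            s += 1.0 / (1 + p.position);           // crude keyword-distance proxy
            score[{p.table, p.row}] += s;
        }

    for (const auto& [key, s] : score)             // rank candidates for top-k
        std::cout << key.first << " row " << key.second << " score " << s << "\n";
}
```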

    Revised submission for MOF 2.0 query / views / transformations RFP.

    This submission presents the QVT-Partners proposal for the MOF 2.0 QVT standard. The proposal consists of a number of key ingredients, which we briefly discuss in this section.
    - Specification and implementation: A common scenario in the development of any artifact is to first create a specification of the form and behaviour of the artifact, and then realise an implementation which satisfies the specification. The specification is characterised by a lack of implementation detail and a close correspondence to the requirements; conversely, an implementation may lack close correspondence to the requirements. This submission maintains this important distinction. Relations provide a specification-oriented view of the relationship between models and are specified in a language that can be easily understood. They say what it means to translate between several models, but without saying precisely how the translation is achieved. Those details are realised by mappings, which characterise the means by which models are translated. It should be noted, though, that while the mappings language is rich enough to provide an implementation of relations, it also manages to maintain a requirements-oriented focus. This may give rise to a scenario where developers prefer to omit relations and directly define mappings.
    - Scalability and reuse: Decomposition is a key approach to managing complexity. This submission provides a number of composition mechanisms whereby relations and mappings can be composed to form more complex specifications. These mechanisms also aid reuse, since mappings and relations can be treated as reusable components which are composed for specific contexts.
    - Usability: Diagrammatic notations have been important to the success of many OMG standards. This proposal presents a diagrammatic notation which is an extension of collaboration object diagrams and is therefore familiar to many end users. A criticism often levelled at diagrammatic notations is their scalability, so this submission also presents a textual syntax; constructs of the diagrammatic notation are closely aligned with their textual counterparts. Considering the domains of relations and mappings at the generic type level is often too limiting; instead, it is often specific types of things that are of interest. This submission uses patterns to describe the domains of both relations and mappings. Patterns are a means of succinctly describing specific types of model elements, and they enable domains of interest to be stated rapidly and with ease.
    - Semantic soundness: By definition, a standard should give rise to consistency across differing implementations: an end user should get the same results on two different implementations. For this reason, this submission goes to some effort to ensure that all the constructs have a well-defined semantic basis. This is achieved by treating the submission in two parts. The infrastructure part has a small number of constructs which can be easily and consistently understood from informal descriptions (although a mathematical semantics is given in Appendix B for the sake of completeness and rigour). The superstructure part uses the infrastructure as its semantic basis and defines the syntax that the end user deals with. The relationship between the superstructure and the infrastructure is expressed as a translation.
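
    The relation/mapping distinction can be loosely illustrated as follows (hypothetical types and names; QVT itself is a modelling standard, not a C++ API): a relation only checks that a source and a target model correspond, while a mapping constructs the target from the source, and a correct mapping always satisfies the corresponding relation.

```cpp
#include <algorithm>
#include <cctype>
#include <iostream>
#include <string>

struct ClassModel { std::string name; };        // source metamodel element
struct TableModel { std::string table_name; };  // target metamodel element

std::string to_lower(std::string s) {
    std::transform(s.begin(), s.end(), s.begin(),
                   [](unsigned char c) { return std::tolower(c); });
    return s;
}

// Relation: specification level -- says *what* correspondence must hold.
bool class_to_table_relation(const ClassModel& c, const TableModel& t) {
    return t.table_name == to_lower(c.name);
}

// Mapping: implementation level -- says *how* the target is produced.
TableModel class_to_table_mapping(const ClassModel& c) {
    return TableModel{to_lower(c.name)};
}

int main() {
    ClassModel c{"Customer"};
    TableModel t = class_to_table_mapping(c);            // construct the target
    std::cout << std::boolalpha
              << class_to_table_relation(c, t) << "\n";  // check: prints true
}
```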

    A GPU-based algorithm for fast node label learning in large and unbalanced biomolecular networks

    Background: Several problems in network biology and medicine can be cast into a framework where entities are represented through partially labeled networks, and the aim is to infer the labels (usually binary) of the unlabeled part. Connections represent functional or genetic similarity between entities, while the labellings are often highly unbalanced, that is, one class is largely under-represented: for instance, in automated protein function prediction (AFP), for most Gene Ontology terms only a few proteins are annotated, and in the disease-gene prioritization problem only a few genes are actually known to be involved in the etiology of a given disease. Imbalance-aware approaches to accurately predict node labels in biological networks are therefore required. Furthermore, such methods must be scalable, since input data can be large, as, for instance, in the context of multi-species protein networks. Results: We propose a novel semi-supervised parallel enhancement of COSNet, an imbalance-aware algorithm built on the Hopfield neural model and recently proposed to solve the AFP problem. By adopting an efficient representation of the graph and assuming a sparse network topology, we empirically show that it can be efficiently applied to networks with millions of nodes. The key strategy for speeding up the computation is to partition the nodes into independent sets, so that each set can be processed in parallel by exploiting the power of GPU accelerators. This parallel technique ensures convergence to asymptotically stable attractors while preserving the asynchronous dynamics of the original model. Detailed experiments on real data and on artificially generated big instances of the problem highlight the scalability and efficiency of the proposed method. Conclusions: By parallelizing COSNet we achieved an average speed-up of 180x in solving the AFP problem for the S. cerevisiae, Mus musculus and Homo sapiens organisms, while lowering memory requirements. In addition, to show the potential applicability of the method to huge biomolecular networks, we predicted node labels in artificially generated sparse networks involving hundreds of thousands to millions of nodes.
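
    The core parallelization idea can be sketched as follows (a simplified, CPU-side illustration, not the paper's GPU code; the graph, weights and thresholds are invented): greedily color the graph so that each color class is an independent set, then update all nodes of one class at a time. Nodes in a class share no edges, so their Hopfield-style updates cannot interfere, each class can be dispatched to GPU threads in parallel, and the asynchronous dynamics of the original model is preserved.

```cpp
#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    // Tiny undirected graph as adjacency lists (illustrative only).
    std::vector<std::vector<int>> adj = {{1, 2}, {0, 2}, {0, 1, 3}, {2}};
    int n = static_cast<int>(adj.size());

    // Greedy coloring: each color class is an independent set.
    std::vector<int> color(n, -1);
    int num_colors = 0;
    for (int v = 0; v < n; ++v) {
        std::vector<bool> used(n + 1, false);
        for (int u : adj[v])
            if (color[u] >= 0) used[color[u]] = true;
        int c = 0;
        while (used[c]) ++c;  // smallest color unused by neighbors
        color[v] = c;
        num_colors = std::max(num_colors, c + 1);
    }

    // Hopfield-style dynamics: process one independent set at a time.
    std::vector<double> state(n, -1.0), threshold(n, 0.5);
    for (int c = 0; c < num_colors; ++c) {
        // All nodes of color c could be updated by one GPU thread each;
        // shown here as a plain loop for clarity.
        for (int v = 0; v < n; ++v) {
            if (color[v] != c) continue;
            double input = 0.0;
            for (int u : adj[v]) input += state[u];  // unit edge weights here
            state[v] = (input > threshold[v]) ? 1.0 : -1.0;
        }
    }

    for (int v = 0; v < n; ++v)
        std::cout << "node " << v << " color " << color[v]
                  << " state " << state[v] << "\n";
}
```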

    The NASA computer science research program plan

    A taxonomy of computer science is included, and the state of the art of each of the major computer science categories is summarized. A functional breakdown of NASA programs under Aeronautics R&D, Space R&T, and institutional support is also included. These areas were assessed against the computer science categories. Concurrent processing, highly reliable computing, and information management are identified.

    Efficient means of Achieving Composability using Transactional Memory

    The major focus of software transactional memory systems (STMs) has been to facilitate multiprocessor programming and provide parallel programmers with an abstraction for the fast development of concurrent and parallel applications. STMs allow parallel programmers to focus on the logic of parallel programs rather than worrying about synchronization. At the heart of such applications is the underlying concurrent data structure, whose design is the deciding factor in whether the software application will be efficient, scalable and composable. However, achieving composition in concurrent data structures such that they are efficient as well as easy to program poses many consistency and design challenges. We say concurrent data structures compose when multiple operations from the same or different object instances can be glued together such that the new operation also behaves atomically. For example, assume we have a linked list as the concurrent data structure, with lookup, insert and delete as its atomic operations, and we want to implement a new move operation, which deletes a node from one position in the list and inserts it into another (or the same) list. Such a move operation may not be atomic (transactional), as it may result in an execution where another process accesses an inconsistent state of the linked list in which the node has been deleted but not yet inserted. This inability to compose concurrent data structures may hinder their practical use. In this context, the compositionality provided by the transactions in STMs can be handy: STMs provide an easy-to-program, composable transactional interface which can be used to develop concurrent data structures and thus parallel software applications. Whether this can be achieved efficiently is the question we try to answer in this thesis. Most of the STMs proposed in the literature are based on read/write primitive operations (or methods) on memory buffers and are hence denoted RWSTMs. These lower-level read/write primitives provide no useful information except that a write operation always needs to be ordered with respect to any other read or write, which limits the number of possible concurrent executions. In this thesis, we consider object-based STMs, or OSTMs, which operate on higher-level objects rather than on reads and writes of memory locations. The main advantage of OSTMs is that, with the greater semantic information provided by the methods of the object, conflicts among transactions can be reduced, and as a result the number of aborts is lower. This allows a larger number of permissive concurrent executions, leading to more concurrency. Hence, OSTMs could be an efficient means of achieving composability of higher-level operations in software applications built on concurrent data structures, allowing parallel programmers to leverage the underlying multi-core architecture. To design the OSTM, we adopt the transactional tree model developed for databases. We extend the traditional notions of conflicts and legality to higher-level operations in STMs, which allows efficient composability, and use these notions to define the standard STM correctness criterion of conflict-opacity. The OSTM model can easily be extended to implement concurrent lists, sets, queues or other concurrent data structures.
    We use the proposed theoretical OSTM model to design HT-OSTM, an OSTM with an underlying hash table object. We noticed that the major concurrency hot-spot is the chaining data structure within the hash table, so we used a lazy skip-list approach, which is more time-efficient than ordinary lists in terms of traversal overhead. At the transactional level, we use a timestamp-ordering protocol to ensure that the executions are conflict-opaque. We provide a detailed handcrafted proof of correctness from the operational level up to the transactional level: at the operational level we show that HT-OSTM generates legal sequential histories, and at the transactional level we show that every such sequential history is opaque and thus conflict-opaque. HT-OSTM exports STM insert, STM lookup and STM delete methods to the programmer, along with STM begin and STM trycommit. Using these higher-level operations, a user may easily and efficiently program any parallel software application involving a concurrent hash table. To demonstrate the efficiency of composition, we built a test application which executes a number of hash table methods (generated with a given probability) atomically in a transaction, as sketched below. Finally, we evaluate HT-OSTM against the ESTM-based hash table from Synchrobench and a hash table built on an RWSTM using the basic timestamp-ordering protocol. We observe that HT-OSTM outperforms ESTM by an average of 10^6 transactions per second (throughput) for both lookup-intensive and update-intensive workloads, and outperforms the RWSTM by 3% and 3.4% for update-intensive and lookup-intensive workloads, respectively.
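
    The composed move operation described above can be sketched against an interface shaped like the exported HT-OSTM methods (STM begin, STM insert, STM delete, STM trycommit). The C++ signatures and the toy single-threaded backing below are illustrative assumptions so the pattern can be run end to end, not the thesis's actual implementation; a real OSTM validates at commit time and may abort on conflict instead of always committing.

```cpp
#include <iostream>
#include <map>
#include <set>

// Toy stand-in for the transactional hash table: operations are staged in
// the transaction and applied atomically at commit.
struct Txn {
    std::map<int, int> inserts;  // writes staged by this transaction
    std::set<int> deletes;       // removals staged by this transaction
};
static std::map<int, int> g_table;  // shared hash table stand-in

Txn* STM_begin() { return new Txn; }
bool STM_delete(Txn* t, int key) {
    if (g_table.find(key) == g_table.end()) return false;
    t->deletes.insert(key);
    return true;
}
void STM_insert(Txn* t, int key, int value) { t->inserts[key] = value; }
bool STM_trycommit(Txn* t) {
    for (int k : t->deletes) g_table.erase(k);
    for (const auto& [k, v] : t->inserts) g_table[k] = v;
    delete t;
    return true;  // the toy version never conflicts, so it always commits
}

// The composed operation from the abstract: delete `key` and reinsert it,
// atomically -- no concurrent observer ever sees the key "in flight".
bool move_key(int key, int value) {
    while (true) {
        Txn* t = STM_begin();
        if (!STM_delete(t, key)) { delete t; return false; }  // nothing to move
        STM_insert(t, key, value);
        if (STM_trycommit(t)) return true;  // both operations took effect
        // on abort: retry with a fresh transaction
    }
}

int main() {
    g_table[7] = 1;
    std::cout << move_key(7, 99) << " -> value " << g_table[7] << "\n";  // 1 -> 99
}
```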

    A semi-formal comparison between the Common Object Request Broker Architecture (CORBA) and the Distributed Component Object Model (DCOM)

    The way in which application systems and software are built has changed dramatically over the past few years. This is mainly due to advances in hardware technology and programming languages, as well as the requirement to build better software application systems in less time. The importance of worldwide communication between systems is also growing rapidly. People use network-based applications daily, communicating not only locally but also globally. The Internet, the global network, therefore plays a significant role in the development of new software. Distributed object computing is one of the computing paradigms that promises to address the need to develop client/server application systems communicating over heterogeneous environments. This study, of limited scope, concentrates on one crucial element without which distributed object computing cannot be implemented: the communication software, also called middleware, which allows objects situated on different hardware platforms to communicate over a network. Two of the most important middleware standards for distributed object computing today are the Common Object Request Broker Architecture (CORBA) from the Object Management Group, and the Distributed Component Object Model (DCOM) from Microsoft Corporation. Each of these standards is implemented in commercially available products, allowing distributed objects to communicate over heterogeneous networks. In studying each of the middleware standards, a formal way of comparing CORBA and DCOM is presented, namely meta-modelling. For each of these two distributed object infrastructures (middleware), meta-models are constructed. Based on this uniform and unbiased approach, a comparison of the two distributed object infrastructures is then performed. The results are given as a set of tables in which the differences and similarities of each distributed object infrastructure are exhibited. By adopting this approach, errors caused by misunderstanding or misinterpretation are minimised. Consequently, an accurate and unbiased comparison between CORBA and DCOM is made possible, which constitutes the main aim of this dissertation.