
    Progressive views of distributed computations

    Advisor: Luiz Eduardo Buzato. Thesis (doctorate), Universidade Estadual de Campinas, Instituto de Computação.
    Abstract: A checkpoint is a state selected by a process during its execution. A global checkpoint is composed of one checkpoint from each process, and it is consistent if it represents a snapshot of the computation that could have been taken by an external observer. The solution to many problems in distributed systems requires a sequence of consistent global checkpoints that describes the progress of a distributed computation. As the first contribution of this thesis, we present a set of algorithms for the construction of these sequences, called progressive views. Additionally, the analysis of properties during the progress of a distributed computation allowed us to show that some assumptions made in the literature were false. Some checkpoint patterns exhibit only rollback dependencies that can be tracked on-line. This property is enforced by taking a checkpoint immediately before the formation of a message pattern that could produce a non-trackable rollback dependency. Theoretical and simulation studies have shown that, most often, the more restricted the message pattern, the fewer checkpoints are induced. The minimal characterization of such patterns was believed to be known, and its implementation was believed to require the processes of the computation to maintain and propagate O(n²) control information, where n is the number of processes in the computation. This quadratic complexity made the protocol based on the minimal characterization less attractive than protocols based on wider characterizations with linear complexity. The second contribution of this thesis is a proof that the characterization previously considered minimal can be reduced further, although the complexity required by a protocol based on the new minimal characterization still appeared to be quadratic. The third contribution is a protocol based on a slightly weaker condition than the minimal characterization, but with linear complexity and performance similar to that of the quadratic solution. As the last contribution, through a detailed analysis of the control information computed and transmitted during the progress of distributed computations, we propose a protocol that implements exactly the minimal characterization with linear complexity.
    Doctorate in Computer Science.
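    The consistency condition above (a global checkpoint that could have been captured by an external observer) can be illustrated with vector timestamps: a global checkpoint is inconsistent if one process's checkpoint reflects events that another process performed only after its own checkpoint. A minimal sketch, not one of the thesis's algorithms:

```python
def consistent(global_ckpt):
    """A global checkpoint, given as one vector timestamp per process,
    is consistent iff no checkpoint 'knows more' about process i than
    process i's own checkpoint: for all i, j: V_j[i] <= V_i[i]."""
    n = len(global_ckpt)
    return all(global_ckpt[j][i] <= global_ckpt[i][i]
               for i in range(n) for j in range(n))

# Two processes: P0's checkpoint has vector [2, 0]; P1's has [0, 3].
assert consistent([[2, 0], [0, 3]])   # no cross dependency
# P1's checkpoint records 3 events of P0 (V_1[0] = 3 > V_0[0] = 2):
# P1 received a message P0 sent after P0's checkpoint -> inconsistent.
assert not consistent([[2, 0], [3, 3]])
```

    A progressive view would be a sequence of global checkpoints, each passing this test, advancing monotonically over the computation.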

    Cognition procedures for optical network design and optimization

    Telecom carriers have to adapt their networks to accommodate a growing volume of users, services and traffic, so they must continuously maximize efficiency and reduce costs. This thesis identifies an opportunity to achieve this aim by reducing the operation margins applied to optical link power budgets in optical transport networks. From an operational perspective, margin reduction lowers the investment required in transceivers across the whole transport network. Based on how humans learn, a cognitive approach is introduced and evaluated to reduce the System Margin. This operation margin accounts for, among other constraints, the long-term ageing of the network infrastructure. Telecom operators normally apply a conservative, fixed value established during the design and commissioning phases. The cognitive approach instead proposes a flexible, variable value adapted to the current network conditions. It is based on the case-based reasoning machine-learning technique, which is further developed here, and novel learning schemes are presented and evaluated. The cognitive solution proposes a new, lower launch power that still guarantees the quality of service of the incoming lightpath, yielding transmission-power savings with appropriate success rates. To this end, it relies on the transmission values applied in similar past network situations that were successful, stored in a knowledge base that acts as the memory of the system. Both a static and a dynamic knowledge-base approach have been developed and presented; in the dynamic case, five new dynamic learning algorithms are presented and evaluated. In the static context, savings in transmission power of up to 48% are achieved, with the corresponding System Margin reduction. Furthermore, the dynamic renewal of the knowledge base improves mean savings in launched power by up to 7% or 18% with respect to the static approach, depending on the path. Thus, the cognitive approach appears useful for commercial optical transport networks with the aim of reducing the operational System Margin.
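    The case-based reasoning cycle described above retrieves the most similar successful past situation and reuses its launch power. A minimal sketch; the case attributes and distance metric are illustrative assumptions, not the system's actual feature set:

```python
def retrieve_power(knowledge_base, new_case):
    """Return the launch power (dBm) of the stored case most similar to
    the new lightpath; similarity = Euclidean distance over the
    attributes present in the query."""
    def dist(case):
        return sum((case[k] - new_case[k]) ** 2 for k in new_case) ** 0.5
    best = min(knowledge_base, key=dist)
    return best["power_dbm"]

# Hypothetical cases described by path length (km) and number of spans.
kb = [
    {"length_km": 100, "spans": 2, "power_dbm": -1.0},
    {"length_km": 400, "spans": 6, "power_dbm": 2.0},
]
assert retrieve_power(kb, {"length_km": 120, "spans": 2}) == -1.0
```

    A dynamic knowledge base would additionally insert each newly established lightpath as a fresh case and retire stale ones, which is what the five learning algorithms in the thesis address.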


    Maintaining retrieval knowledge in a case-based reasoning system.

    The knowledge stored in a case base is central to the problem solving of a case-based reasoning (CBR) system. Therefore, case-base maintenance is a key component of maintaining a CBR system. However, other knowledge sources, such as indexing and similarity knowledge for improved case retrieval, also play an important role in CBR problem solving. For many CBR applications, the refinement of this retrieval knowledge is a necessary component of CBR maintenance. This article focuses on optimization of the parameters and feature selections/weights for the indexing and nearest-neighbor algorithms used by CBR retrieval. Optimization is applied after case-base maintenance and refines CBR retrieval to reflect changes that have occurred to cases in the case base. The optimization process is generic and automatic, using knowledge contained in the cases. In this article we demonstrate its effectiveness on a real tablet-formulation application in two maintenance scenarios. One scenario, a growing case base, is provided by two snapshots of a formulation database. A change in the company's formulation policy provides a second, more fundamental requirement for CBR maintenance. We show that, after case-base maintenance, the CBR system did indeed benefit from also refining the retrieval knowledge. We believe that existing CBR shells would benefit from including an option to automatically optimize the retrieval process.
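    The refinement of retrieval knowledge described above, tuning feature weights for nearest-neighbor retrieval using knowledge contained in the cases themselves, can be sketched as a leave-one-out search over candidate weight vectors. This is a toy illustration, not the article's actual optimization procedure:

```python
from itertools import product

def loo_accuracy(cases, weights):
    """Leave-one-out: retrieve each case's weighted nearest neighbour
    from the remaining cases and check the retrieved solution matches."""
    hits = 0
    for i, (x, y) in enumerate(cases):
        rest = [c for j, c in enumerate(cases) if j != i]
        nx, ny = min(rest, key=lambda c: sum(
            w * (a - b) ** 2 for w, a, b in zip(weights, x, c[0])))
        hits += (ny == y)
    return hits / len(cases)

def optimise_weights(cases, grid=(0.0, 0.5, 1.0)):
    """Grid-search weight vectors; keep the one maximising LOO accuracy."""
    n = len(cases[0][0])
    return max(product(grid, repeat=n),
               key=lambda w: loo_accuracy(cases, w))

# Feature 0 separates the classes; feature 1 is noise. The search
# should learn to weight feature 0 above feature 1.
cases = [((0, 5), "A"), ((1, 9), "A"), ((10, 5), "B"), ((11, 9), "B")]
w = optimise_weights(cases)
assert w[0] > w[1]
```

    A real optimizer would search continuously (e.g. by hill climbing or a genetic algorithm) rather than over a coarse grid, but the fitness function, retrieval accuracy measured on the cases themselves, is the same idea.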

    AI Solutions for MDS: Artificial Intelligence Techniques for Misuse Detection and Localisation in Telecommunication Environments

    This report considers the application of Artificial Intelligence (AI) techniques to the problem of misuse detection and misuse localisation within telecommunications environments. A broad survey of techniques is provided, covering inter alia rule-based systems, model-based systems, case-based reasoning, pattern matching, clustering and feature extraction, artificial neural networks, genetic algorithms, artificial immune systems, agent-based systems, data mining and a variety of hybrid approaches. The report then considers the central issue of event correlation, which is at the heart of many misuse detection and localisation systems. The notion of inferring misuse by correlating individual, temporally distributed events within a multiple-data-stream environment is explored, and a range of techniques is surveyed, covering model-based approaches, 'programmed' AI and machine-learning paradigms. It is found that, in general, correlation is best achieved via rule-based approaches, but that these suffer from a number of drawbacks, such as the difficulty of developing and maintaining an appropriate knowledge base, and the lack of ability to generalise from known misuses to new, unseen misuses. Two distinct approaches are evident. One attempts to encode knowledge of known misuses, typically within rules, and uses this to screen events. This approach cannot generally detect misuses for which it has not been programmed, i.e. it is prone to issuing false negatives. The other attempts to learn the features of event patterns that constitute normal behaviour and, by observing patterns that do not match expected behaviour, to detect when a misuse has occurred. This approach is prone to issuing false positives, i.e. inferring misuse from innocent patterns of behaviour that the system was not trained to recognise.
    Contemporary approaches are seen to favour hybridisation, often combining detection or localisation mechanisms for both abnormal and normal behaviour, the former to capture known cases of misuse, the latter to capture unknown cases. In some systems, these mechanisms even update each other to increase detection rates and lower false-positive rates. It is concluded that hybridisation offers the most promising future direction, but that a rule- or state-based component is likely to remain, being the most natural approach to the correlation of complex events. The challenge, then, is to mitigate the weaknesses of canonical programmed systems so that learning, generalisation and adaptation are more readily facilitated.
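    The two complementary approaches identified above, encoding known misuses as rules (prone to false negatives) and learning normal behaviour to flag deviations (prone to false positives), can be contrasted in a toy sketch; the event fields and thresholds are illustrative assumptions:

```python
# Signature side: rules for misuses we already know about.
KNOWN_MISUSE_RULES = [
    lambda e: e["calls_per_min"] > 100,     # flooding
    lambda e: e["dest"] == "premium-rate",  # toll fraud
]

def signature_detect(event):
    """Flags only misuses it was programmed for (false negatives on
    anything novel)."""
    return any(rule(event) for rule in KNOWN_MISUSE_RULES)

def make_anomaly_detector(normal_events, k=3.0):
    """Learns the normal call-rate profile and flags deviations beyond
    k standard deviations (false positives on unseen-but-innocent
    behaviour)."""
    rates = [e["calls_per_min"] for e in normal_events]
    mean = sum(rates) / len(rates)
    std = (sum((r - mean) ** 2 for r in rates) / len(rates)) ** 0.5
    return lambda e: abs(e["calls_per_min"] - mean) > k * std

normal = [{"calls_per_min": r, "dest": "local"} for r in (4, 5, 6, 5, 4, 6)]
anomaly_detect = make_anomaly_detector(normal)
novel = {"calls_per_min": 40, "dest": "local"}  # misuse with no rule
assert not signature_detect(novel)   # signature misses it
assert anomaly_detect(novel)         # anomaly model catches it
```

    A hybrid system of the kind the report favours would run both detectors and use each one's confirmed outputs to refine the other (new rules from confirmed anomalies, updated baselines from confirmed-innocent traffic).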

    On Random Subspace Optimization-Based Hybrid Computing Models Predicting the California Bearing Ratio of Soils

    The California Bearing Ratio (CBR) is an important index for evaluating the bearing capacity of pavement subgrade materials. In this research, random subspace optimization-based hybrid computing models were trained and developed for the prediction of the CBR of soil. Three models were developed, namely reduced error pruning trees (REPT), random subspace-based REPT (RSS-REPT), and RSS-based extra trees (RSS-ET). An experimental database was compiled from a total of 214 soil samples, classified according to AASHTO M 145: 26 samples of A-2-6 (clayey gravel and sand soil), 3 samples of A-4 (silty soil), 89 samples of A-6 (clayey soil), and 96 samples of A-7-6 (clayey soil). All CBR tests were performed in soaked conditions. The input parameters of the models included the particle size distribution, gravel content (G), coarse sand content (CS), fine sand content (FS), silt clay content (SC), organic content (O), liquid limit (LL), plastic limit (PL), plasticity index (PI), optimum moisture content (OMC), and maximum dry density (MDD). The accuracy of the developed models was assessed using several performance indices, such as the coefficient of determination, relative error, MAE, and RMSE. The results show that the highest prediction accuracy was obtained using the RSS-based extra trees optimization technique.
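    The random subspace technique named above trains each base learner on a random subset of the input features and averages their predictions. A dependency-free sketch, using a simple 1-nearest-neighbour regressor in place of the REPT/extra-tree base learners the research actually uses:

```python
import random

def rss_ensemble_predict(X, y, x_new, n_models=25, subspace=4, seed=0):
    """Random subspace ensemble: each of n_models base learners sees
    only a random subset of `subspace` features; predictions (here from
    a 1-nearest-neighbour lookup restricted to those features) are
    averaged across the ensemble."""
    rng = random.Random(seed)
    n_features = len(X[0])
    preds = []
    for _ in range(n_models):
        feats = rng.sample(range(n_features), subspace)
        nearest = min(range(len(X)), key=lambda i: sum(
            (X[i][f] - x_new[f]) ** 2 for f in feats))
        preds.append(y[nearest])
    return sum(preds) / len(preds)

# Hypothetical soil feature vectors (e.g. G, CS, FS, SC, LL, PI, ...)
# with CBR values as targets:
X = [[0] * 6, [1] * 6, [10] * 6]
y = [10.0, 12.0, 30.0]
assert rss_ensemble_predict(X, y, [0.1] * 6, subspace=3) == 10.0
```

    Varying the feature subset per learner decorrelates the ensemble members, which is what gives the RSS variants their edge over a single REPT in the study's results.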

    Flexible distributed computing with volunteered resources

    PhD thesis. Nowadays, computational grids have evolved to a stage where they can comprise many volunteered resources owned by different individual users and/or institutions, as in desktop grids and volunteer computing grids. This brings benefits for large-scale computing, as more resources are available to exploit. On the other hand, the inherent characteristics of volunteered resources bring challenges to exploiting them efficiently. For example, some resources may be unable to execute certain jobs, as the computing resources can be heterogeneous. Furthermore, the resources can be volatile, as resource owners usually have the right to decide when and how to donate the idle Central Processing Unit (CPU) cycles of their computers. Therefore, in order to utilise volunteered resources efficiently, this research investigated solutions from several angles. Firstly, it proposes a new computational Grid architecture based on Java and Java application migration technologies to provide fundamental support for coping with these challenges. The proposed architecture supports heterogeneous resources, ensures local activities are not affected by Grid jobs, and enables resources to carry out live and automatic Java application migration. Secondly, this research proposes job-scheduling and migration algorithms based on resource availability prediction and/or artificial intelligence techniques. To examine the proposed algorithms, the work includes a series of experiments in both synthetic and practical scenarios and compares the performance of the proposed algorithms with existing ones. According to this critical assessment, each algorithm has its own distinct advantages and performs well when certain conditions are met. In addition, this research analyses the characteristics of resources in terms of the availability patterns of practical volunteer-based grids. The analysis shows that each environment has its own characteristics and that each volunteered resource's availability tends to be only weakly correlated across different days and times of day.
    British Telecom
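    The availability analysis above suggests a simple scheduling heuristic: predict each volunteered resource's availability for a time-of-day slot from its history, and place jobs on the resource most likely to stay up. A toy sketch (the thesis's algorithms additionally use AI techniques and job migration):

```python
def predict_availability(history, slot):
    """Estimate P(resource available) in a time-of-day slot from past
    observations: history[day][slot] is 1 (up) or 0 (down)."""
    obs = [day[slot] for day in history]
    return sum(obs) / len(obs)

def pick_resource(histories, slot):
    """Schedule the job on the resource with the highest predicted
    availability for the given slot."""
    return max(histories,
               key=lambda r: predict_availability(histories[r], slot))

# Hypothetical per-day availability logs with two time slots.
histories = {
    "desktop-a": [[1, 0], [1, 0], [1, 1]],  # usually up in slot 0
    "desktop-b": [[0, 1], [0, 1], [1, 1]],  # usually up in slot 1
}
assert pick_resource(histories, 0) == "desktop-a"
assert pick_resource(histories, 1) == "desktop-b"
```

    With migration support, a wrong prediction is recoverable: the running Java application can be moved live to another resource when its host becomes unavailable.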

    Using features for automated problem solving

    We motivate and present an architecture for problem solving in which an abstraction layer of "features" plays the key role in determining the methods to apply. The system is presented in the context of theorem proving with Isabelle, and we demonstrate how this approach to encoding control knowledge differs expressively from other common techniques. We look closely at two areas where the feature layer may offer benefits to theorem proving, semi-automation and learning, and find strong evidence that in these particular domains the approach shows compelling promise. The system includes a graphical theorem-proving user interface for Eclipse ProofGeneral and is available from the project web page, http://feasch.heneveld.org.
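    The feature layer described above maps features of a goal to the methods that may apply. A minimal dispatcher illustrates the idea; the feature names and method names are hypothetical, not Isabelle's actual tactics:

```python
# Each method advertises the goal features it handles.
METHOD_FEATURES = {
    "induction": {"recursive_def", "universal_goal"},
    "simp":      {"equational"},
    "arith":     {"linear_arithmetic"},
}

def select_methods(goal_features):
    """Rank applicable methods by how many goal features they cover;
    methods sharing no feature with the goal are filtered out."""
    scored = [(len(feats & goal_features), name)
              for name, feats in METHOD_FEATURES.items()
              if feats & goal_features]
    return [name for score, name in sorted(scored, reverse=True)]

assert select_methods({"recursive_def", "universal_goal"}) == ["induction"]
```

    Because control knowledge lives in the feature-to-method table rather than inside the methods, both semi-automation (suggesting ranked methods to the user) and learning (adjusting the table from proof attempts) operate on the same layer.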

    Clinical evaluation of a novel adaptive bolus calculator and safety system in Type 1 diabetes

    Bolus calculators are considered state-of-the-art insulin dosing decision support for people with Type 1 diabetes (T1D). However, they all lack the ability to adapt automatically in real time to respond to an individual's needs or changes in insulin sensitivity. A novel insulin recommender system based on artificial intelligence has been developed to provide personalised bolus advice: the Patient Empowerment through Predictive Personalised Decision Support (PEPPER) system. Besides adaptive bolus advice, the decision support system is coupled with a safety system that includes alarms, predictive glucose alerts, predictive low-glucose suspend for insulin pump users, personalised carbohydrate recommendations and a dynamic bolus insulin constraint. This thesis outlines the clinical evaluation of the PEPPER system in adults with T1D on multiple daily injections (MDI) and insulin pump therapy. The hypothesis was that the PEPPER system is safe, feasible and effective for use by people with T1D using MDI or pump therapy. The safety and feasibility of the safety system were evaluated in the first phase, with the second phase evaluating the feasibility of the complete system (safety system and adaptive bolus advisor). Finally, the whole system was clinically evaluated in a randomised crossover trial with 58 participants. No significant differences were observed in percentage time in range between the PEPPER and control groups. For quality of life, participants reported higher perceived hypoglycaemia with the PEPPER system despite no objective difference in time spent in hypoglycaemia. Overall, the studies demonstrated that the PEPPER system is safe and feasible for use when compared to conventional therapy (continuous glucose monitoring and a standard bolus calculator). Further studies are required to confirm overall effectiveness.
    Open Access
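    A conventional bolus calculator of the kind PEPPER adapts computes a carbohydrate dose plus a correction dose, minus insulin still on board. The sketch below shows that standard formula; the PEPPER contribution is adapting the ratios and constraining the dose in real time, which is not shown here:

```python
def bolus(carbs_g, bg, target_bg, icr, isf, iob):
    """Conventional bolus: carbs/ICR + (BG - target)/ISF - IOB,
    floored at zero. ICR: grams of carbohydrate covered per unit of
    insulin; ISF: BG drop (mmol/L) per unit; IOB: insulin on board."""
    dose = carbs_g / icr + (bg - target_bg) / isf - iob
    return max(0.0, round(dose, 1))

# 60 g meal, BG 9 mmol/L vs target 6, ICR 10 g/U, ISF 3 mmol/L/U,
# 1 U still on board: 6 + 1 - 1 = 6 units.
assert bolus(60, 9.0, 6.0, icr=10, isf=3.0, iob=1.0) == 6.0
```

    In the PEPPER system, case-based reasoning personalises the parameters of such a calculation from the outcomes of previous doses, while the safety system bounds the final recommendation.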