6 research outputs found

    Case-based reasoning combined with statistics for diagnostics and prognosis

    Many approaches used for diagnostics today are based on a precise model. This excludes diagnostics of many complex types of machinery that cannot be modelled and simulated easily or without great effort. Our aim is to show that, by including human experience, it is possible to diagnose complex machinery when no models or simulations, or only limited ones, are available. This also enables diagnostics in dynamic applications where conditions change and new cases are added often; in fact, every newly solved case increases the diagnostic power of the system. We present a number of successful projects in which we used feature extraction together with case-based reasoning to diagnose faults in industrial robots, welding, and cutting machinery, and we also present our latest project, diagnosing transmissions by combining Case-Based Reasoning (CBR) with statistics. We view the fault diagnosis process as three consecutive steps. In the first step, sensor fault signals from machines and/or input from human operators are collected. The second step consists of extracting relevant fault features. In the final diagnosis/prognosis step, status and faults are identified and classified. We view prognosis as a special case of diagnosis in which the prognosis module predicts a stream of future features.
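    The three-step process described in the abstract maps naturally onto a small retrieval loop. The sketch below is a minimal illustration, not the authors' implementation: the statistical features (mean, RMS, peak) and the toy case base are assumptions chosen only to make the collect / extract / diagnose steps concrete.

        # Minimal sketch of the three consecutive steps described above:
        # (1) collect a sensor signal, (2) extract fault features,
        # (3) identify the fault by retrieving the most similar solved case.
        # Feature choice and case data are illustrative assumptions.
        import math

        def extract_features(signal):
            """Step 2: reduce a raw sensor signal to a small feature vector."""
            n = len(signal)
            mean = sum(signal) / n
            rms = math.sqrt(sum(x * x for x in signal) / n)
            peak = max(abs(x) for x in signal)
            return [mean, rms, peak]

        def diagnose(signal, case_base):
            """Step 3: nearest-neighbour retrieval over solved cases."""
            query = extract_features(signal)
            best = min(case_base, key=lambda c: math.dist(query, c["features"]))
            return best["diagnosis"]

        # Every newly solved case added here increases the system's
        # diagnostic power, as the abstract points out.
        case_base = [
            {"features": [0.00, 0.2, 0.4], "diagnosis": "healthy"},
            {"features": [0.01, 0.9, 2.1], "diagnosis": "bearing wear"},
        ]
        print(diagnose([0.3, -0.2, 1.9, -2.0, 0.1], case_base))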

    Case Authoring from Text and Historical Experiences


    Raciocínio baseado em casos: uma abordagem fuzzy para diagnóstico nutricional [Case-based reasoning: a fuzzy approach to nutritional diagnosis]

    Doctoral thesis, Universidade Federal de Santa Catarina, Centro Tecnológico. Case-Based Reasoning (CBR) is a recently developed Artificial Intelligence (AI) technique that eases the rule-authoring effort of expert systems that model human cognition to solve problems. CBR can be used in specific circumstances such as determining a nutritional diagnosis and a dietary prescription. In many respects it differs fundamentally from other major AI approaches. Because we believe classical mathematics alone does not capture all aspects of the choices made by the human mind, we felt the need for a more flexible tool, one that can give answers in shades of grey. CBR is able to use specific knowledge from previous experiences, i.e. problems with concrete situations (cases), and to repeat the human act of recalling previous episodes, solving a given problem by recognising its affinities with others. To this end, we integrated the CBR methodology and a fuzzy-logic model in the development of this system. Nutritional diagnosis and dietary prescription are very complex, and we describe their considerations with fuzzy parameters such as obesity, individual behaviour, age, and genetic tendencies. The objective of this study was to develop an intelligent system that satisfies the needs of an expert nutritionist when determining a nutritional diagnosis and providing a dietary prescription for an individual, using tools that are fast and close to human cognition. The case base of this system was obtained from a study carried out by the Peel Heart Survey of the Regional Municipality of Peel, Ontario, Canada, in 1997. The research is characterised as a cross-sectional, qualitative case study. The Peel institution's objective in that study was to determine the nutritional status of the population and to diagnose chronic degenerative non-communicable diseases. Peel collected a sample of 2000 subjects, randomly selected from an adult population between 18 and 59 years of age, of both sexes. The nutritional risks of this population were determined from body mass index (BMI), dietary assessment, total energy requirements, physical activity level, blood pressure, blood cholesterol, family history, smoking, sex, and age. The Peel samples were taken and the data were processed and analysed. The fuzzy model was applied to value the attributes, and the Case-Based Reasoning methodology was used to compose the real cases, with their corresponding solutions, in the case base. A set of prototypes was constructed to facilitate case acquisition, speeding up retrieval and reducing the need for adaptation. The tool used to test the weights was the Esteem 1.4 shell from Esteem Software, and the statistical program used was SPSS, version 8.0. The sample size was adequate. The sustainability of the model's capability was verified, given the importance of learning from experience. Validation of the fuzzy model and the CBR came close to 100%.
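    The abstract names fuzzy valuation of attributes such as BMI but gives no membership functions. As a loose illustration only, the sketch below grades a BMI value into overlapping fuzzy categories with trapezoidal memberships; the category breakpoints are assumptions, not the thesis's model.

        # Illustrative only: grading the BMI attribute of a case into fuzzy
        # sets, so CBR retrieval can match cases by degree rather than by a
        # crisp category. Shapes and breakpoints are assumptions.
        def trapezoid(x, a, b, c, d):
            """Membership rising over [a, b], flat over [b, c], falling over [c, d]."""
            if x <= a or x >= d:
                return 0.0
            if b <= x <= c:
                return 1.0
            if x < b:
                return (x - a) / (b - a)
            return (d - x) / (d - c)

        def fuzzify_bmi(bmi):
            return {
                "underweight": trapezoid(bmi, 0.0, 0.0, 17.0, 18.5),
                "normal": trapezoid(bmi, 17.0, 18.5, 24.9, 27.0),
                "overweight": trapezoid(bmi, 24.9, 27.0, 29.9, 32.0),
                "obese": trapezoid(bmi, 29.9, 32.0, 60.0, 60.0),
            }

        print(fuzzify_bmi(26.0))  # partly "normal", partly "overweight"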

    Finding hidden semantics of text tables

    Combining data from different sources for further automatic processing is often hindered by differences in the underlying semantics and representation. When linking information presented in documents in tabular form with data held in databases, it is therefore important to determine as much information as possible about the table and its content. Important information about the table data is often given in the text surrounding the table in that document: the table's creators cannot clarify all the semantics in the table itself, so they use the table's context, the text around it, to give further information. These semantics are very useful when integrating and using the data, but are often difficult to detect automatically. We propose a solution to part of this problem based on a domain ontology. The input to our system is a document that contains tabular data, and the system aims to find semantics in the document that are related to the tabular data. The output of our system is a set of detected semantics linked to the corresponding table. The system uses elements of semantic detection, semantic representation, and data integration. Semantic detection uses a domain ontology in which we store concepts of the domain. This allows us to analyse the content of the document (text) and detect context information about the tables present in a document containing tabular data. Our approach consists of two components: (1) extracting, from the domain ontology, the concepts, synonyms, and relations that correspond to the table data; and (2) building a tree for the paragraphs and using this tree to detect the hidden semantics by searching for words matching the extracted concepts. Semantic representation techniques then allow representation of the detected semantics of the table data. Our system represents the detected semantics as either 'semantic units' or 'enhanced metadata'. Semantic units are a flexible set of meta-attributes that describe the meaning of a data item along with the detected semantics. In addition, each semantic unit has a concept label associated with it that specifies the relationship between the unit and the real-world aspects it describes. In the enhanced metadata, table metadata is enhanced with the semantics and representation context found in the text. Integrating data in our proposed system takes place in two steps. First, the semantic units are converted to a common context reflecting the application; this is achieved by using appropriate conversion functions. Secondly, semantically identical semantic units are identified and integrated into a common representation; the latter is the subject of future work. Thus the research has shown that semantics about a table are present in the text, and how it is possible to locate and use these semantics by transforming them into an appropriate form to enhance the basic table metadata.
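    A minimal sketch of the semantic-detection step is given below, under stated assumptions: a toy concept-to-synonyms ontology and an invented semantic-unit record with 'table', 'concept', and 'evidence' fields. The actual system's ontology, paragraph tree, and unit schema are richer than this.

        # Illustrative sketch: scan the paragraphs around a table for terms
        # matching domain-ontology concepts and attach the hits to the table
        # as simple "semantic units". Ontology and fields are assumptions.
        ontology = {  # concept -> synonyms drawn from a domain ontology
            "currency": ["usd", "dollars", "eur"],
            "year": ["year", "fiscal year", "fy"],
        }

        def detect_semantics(context_paragraphs, table_id):
            units = []
            for para in context_paragraphs:
                text = para.lower()
                for concept, synonyms in ontology.items():
                    terms = dict.fromkeys([concept] + synonyms)  # dedupe
                    matched = [t for t in terms if t in text]
                    if matched:
                        units.append({
                            "table": table_id,    # link the unit to its table
                            "concept": concept,   # concept label for the unit
                            "evidence": matched,  # terms found in the context
                        })
            return units

        paras = ["All amounts below are in USD for fiscal year 2003."]
        print(detect_semantics(paras, table_id="t1"))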

    A case-based reasoning methodology to formulating polyurethanes

    Formulation of polyurethanes is a complex problem, poorly understood as it has developed more as an art than a science. Only a few experts have mastered polyurethane (PU) formulation after years of experience, and such expertise is largely held by the major raw-material manufacturers. Understanding of PU formulation is at present insufficient for it to be developed from first principles. The first-principles approach requires time, a detailed understanding of the underlying principles that govern the formulation process (e.g. PU chemistry, kinetics), and a number of measurements of process conditions. Even in the simplest formulations there are more than 20 variables, often interacting with each other in very intricate ways. In this doctoral thesis the use of the Case-Based Reasoning and Artificial Neural Network paradigms is proposed to support PU formulation tasks by providing a framework for the collection, structuring, and representation of real formulating knowledge. The framework also aims to facilitate the sharing and deployment of solutions in a consistent and referable way, when appropriate, for future problem solving. Two basic problems were studied in the development of a Case-Based Reasoning tool that uses past flexible PU foam formulation recipes, or cases, to solve new problems. A PU case was divided into a problem description (i.e. the PU's measured mechanical properties) and a solution description (i.e. the ingredients and their quantities to produce the PU). The problems investigated relate to the retrieval of former PU cases that are similar to a new problem description, and the adaptation of the retrieved case to meet the problem constraints. For retrieval, an alternative similarity measure was studied, based on the moment description of a case when it is represented as a two-dimensional image. Retrieval using geometric, central, and Legendre moments was also studied and compared with a standard nearest-neighbour algorithm using nine different distance functions (e.g. Euclidean, Canberra, and City Block, among others). It was concluded that when cases are represented as 2D images and matching is performed using moment functions, in a similar fashion to the approaches studied in image analysis and pattern recognition, low-order geometric and Legendre moments and central moments of any order retrieve the same case as the Euclidean distance does when used in a nearest-neighbour algorithm. This means that the Euclidean distance acts as a low-order moment function that represents gross-level case features. Higher-order (order > 3) geometric and Legendre moments, while enabling finer details of an image to be represented, had no standard distance-function counterpart. For the adaptation of retrieved cases, a feed-forward back-propagation artificial neural network was proposed, to reduce the adaptation knowledge-acquisition effort that has prevented building complete CBR systems and to generate a mapping between changes in mechanical properties and changes in formulation ingredients. The proposed network was trained with the differences between problem descriptions (i.e. the mechanical properties of a pair of foams) as input patterns and the differences between solution descriptions (i.e. formulation ingredients) as output patterns. A complete data set based on 34 initial formulations was used, and a network trained for 16950 epochs with 1102 training exemplars, produced from the case differences, gave only 4% error.
    However, further work with a data set consisting of a training set and a small validation set failed to generalise, returning a high percentage of errors. Further tests on different training/test splits of the data also failed to generalise. The conclusion reached is that the data as such has insufficient common structure to form any general conclusions. Other evidence suggesting that the data does not contain generalisable structure includes the large number of hidden nodes necessary to achieve convergence on the complete data set.
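    The retrieval idea lends itself to a short illustration. The sketch below computes raw geometric moments m_pq = Σx Σy x^p y^q I(x, y) of a case rendered as a small 2D image and retrieves the nearest case in moment space; the case-to-image encoding and the toy data are assumptions, not the thesis's representation.

        # Illustrative sketch: describe each case-as-image by low-order
        # geometric moments and retrieve the nearest case in moment space.
        # Toy images stand in for the thesis's actual case encoding.
        import math

        def geometric_moments(image, max_order=1):
            """Raw geometric moments m_pq for all p + q <= max_order."""
            moments = []
            for p in range(max_order + 1):
                for q in range(max_order + 1 - p):
                    m = sum(x**p * y**q * val
                            for y, row in enumerate(image)
                            for x, val in enumerate(row))
                    moments.append(m)
            return moments

        def retrieve(query_image, case_base):
            q = geometric_moments(query_image)
            return min(case_base,
                       key=lambda c: math.dist(q, geometric_moments(c["image"])))

        case_base = [
            {"image": [[0.9, 0.1], [0.2, 0.8]], "recipe": "formulation A"},
            {"image": [[0.1, 0.9], [0.7, 0.3]], "recipe": "formulation B"},
        ]
        print(retrieve([[0.8, 0.2], [0.3, 0.7]], case_base)["recipe"])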

    Resource-aware plan recognition in instrumented environments

    This thesis addresses the problem of plan recognition in instrumented environments, that is, inferring an agent's plans by observing its behavior. In instrumented environments such observations are made by physical sensors. This introduces specific challenges, of which the following two are considered in this thesis:
    - Physical sensors often observe state information instead of actions. As classical plan recognition approaches can usually deal only with action observations, this requires a cumbersome and error-prone inference of executed actions from observed states.
    - Due to the limited physical resources of the environment, it is often not possible to run all sensors at the same time, so sensor selection techniques have to be applied. Current plan recognition approaches are not able to support the environment in selecting relevant subsets of sensors.
    This thesis proposes a two-stage approach to solve the problems described above. Firstly, a DBN-based plan recognition approach is presented which allows for the explicit representation and consideration of state knowledge. Secondly, a POMDP-based utility model for observation sources is presented which can be used with generic utility-based sensor selection algorithms. Further contributions include the presentation of a software toolkit that realizes plan recognition and sensor selection in instrumented environments, and an empirical evaluation of the validity and performance of the proposed models.
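    The abstract does not spell out the DBN, so the sketch below is only a generic illustration of recognising plans from observed states: a single Bayes-filter update over a belief across candidate plans. The plans, states, and likelihood numbers are invented for the example; the thesis's actual model is richer than this one update rule.

        # Generic illustration of state-based plan recognition: maintain a
        # belief over candidate plans and update it from observed *states*
        # rather than actions. All numbers below are invented.

        # P(observed state | plan): how likely each plan makes each state.
        likelihood = {
            "make_coffee": {"kitchen_light_on": 0.9, "tv_on": 0.1},
            "watch_tv":    {"kitchen_light_on": 0.2, "tv_on": 0.9},
        }

        def update_belief(belief, observed_state):
            """One Bayes-filter step: belief'(plan) ∝ P(state | plan) * belief(plan)."""
            posterior = {plan: likelihood[plan].get(observed_state, 0.01) * p
                         for plan, p in belief.items()}
            total = sum(posterior.values())
            return {plan: p / total for plan, p in posterior.items()}

        belief = {"make_coffee": 0.5, "watch_tv": 0.5}  # uniform prior
        for state in ["kitchen_light_on", "kitchen_light_on"]:
            belief = update_belief(belief, state)
        print(belief)  # probability mass shifts toward "make_coffee"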