7 research outputs found

    Transductive-Weighted Neuro-fuzzy Inference System for Tool Wear Prediction in a Turning Process

    Get PDF
    This paper presents the application of a novel artificial-intelligence technique to process modeling. Through a transductive learning process, a neuro-fuzzy inference system creates a different model for each input presented to the system. Each model is built from a given number of known data points with features similar to the input. Together, these individual models yield greater accuracy than a single general model because they take the particularities of each input into account. To demonstrate the benefits of this kind of modeling, the system is applied to tool wear modeling for a turning process. This work was supported by the DPI2008-01978 COGNETCON and CIT-420000-2008-13 NANOCUT-INT projects of the Spanish Ministry of Science and Innovation. Peer reviewed
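    The core idea, a local model built per query from the most similar known samples, can be sketched roughly as follows; the k-nearest-neighbour selection and the linear local model are illustrative assumptions, not the authors' exact neuro-fuzzy formulation:

```python
# Sketch of the general transductive idea: for every query, a local model
# is fitted only on the k training samples most similar to it. This is NOT
# the paper's neuro-fuzzy formulation; k and the linear local model are
# illustrative assumptions.
import numpy as np

def transductive_predict(X_train, y_train, x_query, k=10):
    # Select the k known samples closest to the query input.
    dists = np.linalg.norm(X_train - x_query, axis=1)
    idx = np.argsort(dists)[:k]
    Xk, yk = X_train[idx], y_train[idx]
    # Fit a simple local model (least squares with a bias term) on them.
    A = np.hstack([Xk, np.ones((k, 1))])
    w, *_ = np.linalg.lstsq(A, yk, rcond=None)
    # Predict tool wear for this particular query.
    return np.append(x_query, 1.0) @ w
```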

    Investigation of the possibility of using vibrations in the cutting system for indirect diagnosis of cutting tool condition during turning of hardened steel

    Get PDF
    The article presents the results of an experimental investigation of the dependence of vibration signal power on the cutting conditions (depth, feed, and speed) and on the flank wear of the cutting tool during longitudinal turning of hardened steel. It is shown that the vibration signal power during machining can be used to diagnose the condition of the cutting tool.
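    As a rough illustration of how vibration signal power could drive such an indirect diagnosis, consider the following sketch; the window length and decision threshold are assumed placeholders, not values from the study:

```python
# Minimal sketch of the indirect-diagnosis idea: estimate vibration signal
# power over short windows and flag probable flank wear when it exceeds a
# threshold calibrated for the given cutting conditions. Window length and
# threshold are illustrative assumptions.
import numpy as np

def window_power(signal, fs, window_s=0.1):
    """Mean-square power of the vibration signal per window."""
    n = int(fs * window_s)
    trimmed = signal[: len(signal) // n * n]
    windows = trimmed.reshape(-1, n)
    return (windows ** 2).mean(axis=1)

def worn_tool(signal, fs, threshold):
    # Flag the tool as worn if the median window power exceeds the
    # threshold established for these cutting conditions (depth, feed, speed).
    return np.median(window_power(signal, fs)) > threshold
```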

    Embedded computational intelligence for the supervision of microfabrication processes

    Full text link
    This article presents the development and implementation of a supervision strategy for a microfabrication process. The proposed method is based on Artificial Intelligence techniques embedded in a real-time platform for intelligent process monitoring. The contribution centers on two models for the in-process (on-line) estimation of surface roughness (Ra) from the minimum possible sensory information. The first model is based on a hybrid incremental modeling (HIM) algorithm whose optimal parameters are obtained by a stochastic method, namely simulated annealing. The second is based on a generalized fuzzy clustering algorithm (GFCM) incorporated into the inference system of a neuro-fuzzy structure. This strategy is embedded in a platform for real-time execution, in parallel with the remaining strategies and methods. Finally, it is validated on an experimental platform used as technological support, which allows the mutual exploitation of the experience gained and the improvement of the results obtained. This scientific and technical result represents an important and unprecedented qualitative leap in industrial research in the field of microfabrication.
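    The stochastic tuning step, simulated annealing searching for model parameters that minimize Ra prediction error, might look roughly like the sketch below; the toy linear model and the cooling schedule are illustrative assumptions, not the paper's HIM algorithm:

```python
# Hedged sketch of simulated annealing tuning the parameters of a
# surface-roughness (Ra) prediction model. The linear toy model and the
# geometric cooling schedule are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def ra_model(params, X):
    # Toy stand-in for the hybrid incremental model: linear in the features.
    return X @ params

def anneal(X, y, n_params, T0=1.0, cooling=0.995, steps=5000):
    params = rng.normal(size=n_params)
    cost = np.mean((ra_model(params, X) - y) ** 2)
    T = T0
    for _ in range(steps):
        cand = params + rng.normal(scale=0.1, size=n_params)
        c = np.mean((ra_model(cand, X) - y) ** 2)
        # Always accept improvements; accept worse moves with probability
        # exp(-(c - cost) / T), which shrinks as the temperature cools.
        if c < cost or rng.random() < np.exp(-(c - cost) / T):
            params, cost = cand, c
        T *= cooling
    return params, cost
```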

    Digital twin-based Optimization on the basis of Grey Wolf Method. A Case Study on Motion Control Systems

    Get PDF
    Nowadays, digital twins are fostering the development of plug, simulate, and optimize behavior in industrial cyber-physical systems. This paper presents a digital twin-based optimization of a motion system using the grey wolf optimization (GWO) method. The digital twin of the whole ultraprecision motion system, with friction and backlash and including a P-PI cascade controller, is used as the basis for minimizing the maximum position error. A simulation study and real-time trajectory-control experiments are performed to compare the performance of the proposed GWO algorithm with the industrial Fine Tune (FT) method. The simulation study shows that digital twin-based optimization using GWO outperforms the FT method, reducing the maximum position error by 66.4%. The real-time experimental results also show the advantage of the GWO method, with an 18% improvement in the maximum peak error and 16% in accuracy.
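    A minimal sketch of the grey wolf optimization loop applied to controller tuning follows; `simulate_max_error` is a hypothetical placeholder for the digital twin, and the gain bounds are assumptions:

```python
# Rough sketch of grey wolf optimization (GWO) tuning controller gains
# against a simulation, in the spirit of the digital-twin loop described
# above. `simulate_max_error` stands in for the digital twin of the motion
# system (friction, backlash, P-PI cascade) and is a hypothetical placeholder.
import numpy as np

rng = np.random.default_rng(1)

def gwo(objective, dim, bounds, n_wolves=20, iters=100):
    lo, hi = bounds
    wolves = rng.uniform(lo, hi, size=(n_wolves, dim))
    for t in range(iters):
        fitness = np.array([objective(w) for w in wolves])
        # Alpha, beta, delta: the three best wolves lead the pack.
        alpha, beta, delta = wolves[np.argsort(fitness)[:3]]
        a = 2 - 2 * t / iters  # linearly decreasing exploration factor
        for i in range(n_wolves):
            new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                new += leader - A * np.abs(C * leader - wolves[i])
            wolves[i] = np.clip(new / 3, lo, hi)
    fitness = np.array([objective(w) for w in wolves])
    return wolves[np.argmin(fitness)]

# Example: tune P-PI gains [kp_pos, kp_vel, ki_vel] to minimize the maximum
# position error reported by the digital twin (placeholder function):
# best = gwo(simulate_max_error, dim=3, bounds=(0.0, 100.0))
```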

    Intelligent Monitoring of Cutting Processes at the Nanoscale.

    Get PDF
    Nowadays, one of the keys to manufacturing processes is the generation of new technical and scientific knowledge through the study and applied research of micro- and nanoscale cutting of alloys for aeronautical and aerospace applications, together with the design of an intelligent, networked monitoring system. This report reviews the state of the art in the fields related to micro- and nanoscale cutting, artificial intelligence techniques, and networked monitoring of manufacturing processes. It also presents the design and implementation of a monitoring system for the cutting process and the measurement of the signals that characterize it.

    Semi-supervised machine learning techniques for classification of evolving data in pattern recognition

    Get PDF
    The amount of data recorded and processed in recent years has increased exponentially. To create intelligent systems that can learn from this data, we need to identify the patterns hidden in the data, learn those patterns, and predict future results based on current observations. If we consider such a system in the context of time, the data evolve, and so does the nature of the classification problem. As more data become available, different classification algorithms become suitable for a particular setting. At the beginning of the learning cycle, when only a limited amount of data is available, online learning algorithms are more suitable. When truly large amounts of data become available, we need algorithms that can handle data that may be only partially labeled, a consequence of the bottleneck that human labeling creates in the learning pipeline.

    An excellent example of evolving data is gesture recognition, and it is present throughout our work. A gesture recognition system must work fast and with very few examples at the beginning. Over time, more data can be collected and the system can improve. As the system evolves, the user expects it to work better and not to have to become involved when the classifier is unsure about a decision; this latter situation produces additional unlabeled data. Another example is medical classification, where experts' time is a scarce resource and the gap between received and labeled data grows disproportionately over time.

    Although the process of data evolution is continuous, we identify three main discrete scenarios of contribution. When the system is very new and not enough data are available, online learning is used to learn from every single example and capture knowledge very quickly. With increasing amounts of data, offline learning techniques become applicable. Once the amount of data is overwhelming and the teacher cannot provide labels for all of it, we have a third setup that combines labeled and unlabeled data. These three setups define our areas of contribution, and our techniques contribute to each of them with applications to pattern recognition scenarios such as gesture recognition and sketch recognition.

    An online learning setup significantly restricts the range of techniques that can be used. In our case, the selected baseline technique is the Evolving TS-Fuzzy Model. The semi-supervised aspect we exploit is the relation between the rules created by this model. Specifically, we propose a transductive similarity model that uses the relationship between the generated rules, based on their decisions about a query sample at inference time. The activation of each rule is adjusted according to the transductive similarity, and the new decision is obtained using the adjusted activations. We also propose several new variations of the transductive similarity itself.
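    A minimal sketch of this activation adjustment, under the assumption of a Gaussian similarity between each rule's local decision and the activation-weighted consensus, could look as follows; this is an illustration, not the thesis's exact formulation:

```python
# Hedged sketch of rule-activation adjustment in a TS fuzzy system: each
# rule produces a local decision for the query, and activations are then
# reweighted by how similar each rule's decision is to the activation-
# weighted consensus. The Gaussian kernel is an illustrative assumption.
import numpy as np

def adjusted_inference(activations, rule_outputs, gamma=1.0):
    """activations: firing strengths of the rules for the query (shape R,)
    rule_outputs: each rule's local prediction for the query (shape R,)."""
    w = activations / activations.sum()
    consensus = w @ rule_outputs
    # Transductive similarity: rules that agree with the consensus decision
    # on this particular query are trusted more.
    sim = np.exp(-gamma * (rule_outputs - consensus) ** 2)
    adj = activations * sim
    return (adj / adj.sum()) @ rule_outputs
```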
    Once the amount of data increases, we are no longer limited to the online learning setup and can take advantage of the offline learning scenario, which normally performs better than the online one because it is independent of sample ordering and optimizes globally over all samples. We use generative methods to obtain data outside the training set. Specifically, we aim to improve the previously mentioned TS Fuzzy Model by incorporating semi-supervised learning in the offline setup without unlabeled data. We use the Universum learning approach and have developed a method called UFuzzy. This method relies on artificially generated examples with high uncertainty (the Universum set) and adjusts the cost function of the algorithm to force the decision boundary close to the Universum data. We were able to confirm the hypothesis behind the design of the UFuzzy classifier, that Universum learning can improve the TS Fuzzy Model, and achieved improved performance on more than two dozen datasets and applications.

    With further increases in the amount of data, we reach the last scenario, in which the data comprise both labeled and unlabeled samples. This setting is one of the most common ones for semi-supervised learning. In this part of our work, we aim to improve the widely popular techniques of self-training (and its successor, help-training), which are both meta-frameworks over regular classifiers but require a probabilistic representation of the output, which can be hard to obtain for discriminative classifiers. We therefore develop a new algorithm that uses a modified active learning technique, Query-by-Committee (QbC), to sample data with high certainty from the unlabeled set and subsequently embed them into the original training set. Our new method achieves increased performance over both a range of datasets and a range of classifiers.

    These three works are connected by gradually relaxing the constraints on the learning setting in which we operate. Although our main motivation was to increase performance in various real-world tasks (gesture recognition, sketch recognition), we formulated our work as general methods that can be used outside a specific application setup, the only restriction being that the underlying data evolve over time. Each of these methods can successfully stand on its own. The best setting for them is a learning problem where the data evolve over time and the evolutionary process can be discretized. Overall, this work represents a significant contribution to the areas of both semi-supervised learning and pattern recognition. It presents new state-of-the-art techniques that outperform baseline solutions, and it opens up new possibilities for future research.
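    As a rough illustration of the QbC-based self-training loop described above, the following sketch trains a bootstrap committee and moves only unanimously classified samples into the labeled set; scikit-learn, the decision-tree base learner, and the unanimity criterion are illustrative assumptions, not the thesis's exact method:

```python
# Minimal sketch of QbC-flavoured self-training: a committee of classifiers
# votes on the unlabeled pool, and only samples on which the committee
# fully agrees are moved into the training set. The base learner and the
# unanimity criterion are illustrative assumptions.
import numpy as np
from sklearn.base import clone
from sklearn.tree import DecisionTreeClassifier

def qbc_self_train(X, y, X_unlab, n_members=5, rounds=3):
    base = DecisionTreeClassifier()
    rng = np.random.default_rng(2)
    for _ in range(rounds):
        if len(X_unlab) == 0:
            break
        # Train the committee on bootstrap resamples of the labeled data.
        votes = []
        for _ in range(n_members):
            idx = rng.integers(0, len(X), len(X))
            votes.append(clone(base).fit(X[idx], y[idx]).predict(X_unlab))
        votes = np.array(votes)
        # High-certainty samples: every committee member casts the same vote.
        agree = (votes == votes[0]).all(axis=0)
        if not agree.any():
            break
        X = np.vstack([X, X_unlab[agree]])
        y = np.concatenate([y, votes[0][agree]])
        X_unlab = X_unlab[~agree]
    return clone(base).fit(X, y)
```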