15 research outputs found

    User-Friendly MES Interfaces: Recommendations for an AI-Based Chatbot Assistance in Industry 4.0 Shop Floors

    The purpose of this paper is to study an Industry 4.0 scenario of ‘technical assistance’ and use manufacturing execution systems (MES) to address the need for easy information extraction on the shop floor. We identify specific requirements for a user-friendly MES interface to develop (and test) an approach for technical assistance, and introduce a chatbot with a prediction system as an interface layer for MES. The chatbot is aimed at production coordination, assisting the shop floor workforce and learning from their inputs, thus acting as an intelligent assistant. We programmed a prototype chatbot as a proof of concept, where the new interface layer provided live production updates in natural language and added predictive power to MES. The results indicate that the chatbot interface for MES benefits the shop floor workforce and provides easier information extraction than traditional search techniques. The paper contributes to the manufacturing information systems field and demonstrates a human-AI collaboration system in a factory. In particular, it recommends how MES-based technical assistance systems can be developed for easy information retrieval.
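    As a rough illustration of the idea (not the authors' system), a chatbot layer over an MES can start as simple keyword-based intent matching over live production state; all intent names, keywords, and MES entries below are invented:

    ```python
    # Hypothetical sketch: a thin natural-language interface over MES lookups.
    MES_STATE = {
        "order_status": "Order 4711: 60% complete, on schedule",
        "machine_status": "Line 2: running, OEE 87%",
    }

    INTENTS = {
        "order_status": ("order", "status", "progress"),
        "machine_status": ("machine", "line", "running"),
    }

    def answer(query):
        """Match the worker's question to an intent by keyword overlap."""
        words = set(query.lower().split())
        best = max(INTENTS, key=lambda i: len(words & set(INTENTS[i])))
        if not words & set(INTENTS[best]):
            return "Sorry, I did not understand that."
        return MES_STATE[best]
    ```

    A real assistant would replace the keyword matcher with a trained intent classifier and query the MES database live, which is where the learning and prediction components described above come in.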

    Confidence in prediction: an approach for dynamic weighted ensemble.

    Combining classifiers in an ensemble achieves better prediction than using a single classifier, and each classifier can be associated with a weight in the aggregation to further boost the performance of the ensemble system. In this work, we propose a novel dynamic weighted ensemble method. Based on the observation that each classifier provides a different level of confidence in its prediction, we propose to encode this confidence by associating each classifier with a credibility threshold, computed from the entire training set by minimizing the entropy loss function with mini-batch gradient descent. On each test sample, we measure the confidence of each classifier’s output and compare it to the credibility threshold to determine whether the classifier should take part in the aggregation. If the condition is satisfied, the confidence level and credibility threshold are used to compute the classifier’s weight of contribution in the aggregation. In this way, we consider not only the presence but also the contribution of each classifier, based on the confidence of its prediction on each test sample. Experiments conducted on a number of datasets show that the proposed method outperforms several benchmark algorithms, including a non-weighted ensemble method, two dynamic ensemble selection methods, and two Boosting methods.
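    A minimal sketch of the selection-and-weighting step described above, assuming the per-classifier credibility thresholds have already been learned on the training set; the exact weighting formula used here (confidence margin over the threshold) is an illustrative choice, not necessarily the paper's:

    ```python
    import numpy as np

    def dynamic_weighted_ensemble(probas, thresholds):
        """Combine per-classifier probability outputs for one test sample.

        probas: (n_classifiers, n_classes) predicted class probabilities
        thresholds: (n_classifiers,) credibility thresholds (assumed already
            learned, e.g. by minimizing an entropy loss with mini-batch SGD)
        """
        probas = np.asarray(probas, dtype=float)
        thresholds = np.asarray(thresholds, dtype=float)
        confidence = probas.max(axis=1)        # each classifier's confidence
        selected = confidence >= thresholds    # keep only credible classifiers
        if not selected.any():                 # fall back to plain averaging
            return probas.mean(axis=0)
        weights = confidence[selected] - thresholds[selected] + 1e-12
        weights /= weights.sum()               # normalize contributions
        return weights @ probas[selected]
    ```

    On each test sample this both excludes under-confident classifiers and scales the remaining ones, matching the "presence and contribution" idea in the abstract.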

    Radio Frequency Identification (RFID) in health care: where are we? A scoping review

    Purpose: Radio frequency identification (RFID) is a technology that uses radio waves for data collection and transfer, so data is captured efficiently, automatically, and in real time without human intervention. This technology, alone or in combination with other technologies, has been considered as a possible solution to reduce problems that endanger public health or to improve its management. This scoping review aims to provide readers with an up-to-date picture of the use of this technology in health care settings. Methods: This scoping review examines the state of RFID technology in the healthcare area for the period 2017-2022, specifically addressing RFID versatility and investigating how this technology can contribute to radically changing the management of public health. The guidelines of the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) have been followed. Literature reviews and surveys were excluded; only articles describing technologies implemented in a real environment or on prototypes were included. Results: The search returned 366 results. After screening based on title and abstract, 58 articles were considered suitable for this work; 11 articles were reviewed because they met the qualifying requirements. The study of the selected articles highlighted six matters that can be profitably impacted by this technology. Conclusion: The selected papers show that this technology can improve patient safety by reducing medical errors that can occur within operating rooms. It can also help overcome the black market in counterfeit drugs, or serve as a prevention tool. Further research is needed, especially on data management, security, and privacy, given the sensitive nature of medical information.

    Application of Machine Learning in Melanoma Detection and the Identification of 'Ugly Duckling' and Suspicious Naevi: A Review

    Skin lesions known as naevi exhibit diverse characteristics such as size, shape, and colouration. The concept of an "Ugly Duckling" naevus comes into play when monitoring for melanoma, referring to a lesion with distinctive features that set it apart from the other lesions in its vicinity. As lesions within the same individual typically share similarities and follow a predictable pattern, an ugly duckling naevus stands out as unusual and may indicate the presence of a cancerous melanoma. Computer-aided diagnosis (CAD) has become a significant player in research and development, as it combines machine learning techniques with a variety of patient analysis methods. Its aim is to increase accuracy and simplify decision-making, all while responding to the shortage of specialized professionals. These automated systems are especially important in skin cancer diagnosis, where specialist availability is limited; as a result, their use could lead to life-saving benefits and cost reductions within healthcare. Given the drastic change in survival when comparing early-stage to late-stage melanoma, early detection is vital for effective treatment and patient outcomes. Machine learning (ML) and deep learning (DL) techniques have gained popularity in skin cancer classification, effectively addressing challenges and providing results equivalent to those of specialists. This article extensively covers modern ML and DL algorithms for detecting melanoma and suspicious naevi. It begins with general information on skin cancer and the different types of naevi, then introduces AI, ML, DL, and CAD. The article then discusses successful applications of various ML techniques, such as convolutional neural networks (CNNs), for melanoma detection compared to dermatologists' performance. Lastly, it examines ML methods for ugly duckling naevus detection and for identifying suspicious naevi.
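    The "ugly duckling" idea, flagging the lesion that deviates most from a patient's other lesions, can be sketched as a simple outlier score over per-lesion feature vectors; this is a hypothetical illustration of the concept, not a clinical method from the review:

    ```python
    import numpy as np

    def ugly_duckling_scores(features):
        """Score each lesion by its distance from the patient's typical lesion.

        features: (n_lesions, n_features) array, e.g. size/shape/colour
        descriptors extracted per lesion (the descriptors are assumed inputs).
        Returns robust z-score-like values; high scores flag candidates.
        """
        features = np.asarray(features, dtype=float)
        centre = np.median(features, axis=0)       # robust per-patient baseline
        dists = np.linalg.norm(features - centre, axis=1)
        mad = np.median(np.abs(dists - np.median(dists))) + 1e-12
        return (dists - np.median(dists)) / mad    # median absolute deviation scaling
    ```

    In the DL systems the article surveys, the hand-picked descriptors would be replaced by CNN embeddings, but the within-patient outlier logic is the same.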

    Content-Aware Quantization Index Modulation: Leveraging Data Statistics for Enhanced Image Watermarking

    Image watermarking techniques have continuously evolved to address new challenges and incorporate advanced features. The advent of data-driven approaches has enabled the processing and analysis of large volumes of data, extracting valuable insights and patterns. In this paper, we propose two content-aware quantization index modulation (QIM) algorithms: Content-Aware QIM (CA-QIM) and Content-Aware Minimum Distortion QIM (CAMD-QIM). These algorithms aim to reduce the embedding distortion of QIM-based watermarking schemes by considering the statistics of the cover signal vectors and messages. CA-QIM introduces a canonical labeling approach, where the closest coset to each cover vector is determined during the embedding process; an adjacency matrix is constructed to capture the relationships between the cover vectors and messages. CAMD-QIM extends the minimum distortion (MD) principle to content-aware QIM: instead of quantizing the carriers to lattice points, it quantizes them to close points in the correct decoding region. Canonical labeling is also employed in CAMD-QIM to enhance its performance. Simulation results demonstrate the effectiveness of CA-QIM and CAMD-QIM in reducing embedding distortion compared to traditional QIM. The combination of canonical labeling and the minimum distortion principle proves powerful, minimizing the need for changes to most cover vectors/carriers. These content-aware QIM algorithms provide improved performance and robustness for watermarking applications.

    Knowledge Management and Data Analysis Techniques for Data-Driven Financial Companies

    In today’s fast-paced financial industry, knowledge management and data-driven decision making have become essential for the success of financial technology (FinTech) companies. Big data (BD) is a prevalent phenomenon across many industries, including finance. Despite its complexity, big data is a critical component of financial services enterprises and technology architectures. We examine BD from various aspects, considering data science (DS) techniques and methodologies that can be applied during the operation of an enterprise. Our aim is to provide an overview of knowledge management (KM) practices and data analysis (DA) strategies and techniques in the daily operations of financial companies. We address the role of knowledge management and data analytics in financial institutions, and demonstrate how technological advancements enable financial institutions to offer new services.

    Enabling the Smart Factory with Industrial Internet of Things-Connected MES/MOM

    Sentiment Analysis for Fake News Detection

    In recent years, we have witnessed a rise in fake news, i.e., provably false pieces of information created with the intention of deception. The dissemination of this type of news poses a serious threat to cohesion and social well-being, since it fosters political polarization and distrust of people with respect to their leaders. The huge amount of news disseminated through social media makes manual verification unfeasible, which has promoted the design and implementation of automatic systems for fake news detection. The creators of fake news use various stylistic tricks to promote the success of their creations, one of them being to excite the sentiments of the recipients. This has led sentiment analysis, the part of text analytics in charge of determining the polarity and strength of the sentiments expressed in a text, to be used in fake news detection approaches, either as the basis of the system or as a complementary element. In this article, we study the different uses of sentiment analysis in the detection of fake news, with a discussion of the most relevant elements and shortcomings, and the requirements that should be met in the near future, such as multilingualism, explainability, mitigation of biases, or treatment of multimedia elements.
    Funding: Xunta de Galicia (ED431G 2019/01; ED431C 2020/11). This work has been funded by FEDER/Ministerio de Ciencia, Innovación y Universidades - Agencia Estatal de Investigación through the ANSWERASAP project (TIN2017-85160-C2-1-R), and by Xunta de Galicia through a Competitive Reference Group grant (ED431C 2020/11). CITIC, as a Research Center of the Galician University System, is funded by the Consellería de Educación, Universidade e Formación Profesional of the Xunta de Galicia through the European Regional Development Fund (ERDF/FEDER, 80%), the Galicia ERDF 2014-20 Operational Programme, and the remaining 20% from the Secretaría Xeral de Universidades (ref. ED431G 2019/01). David Vilares is also supported by a 2020 Leonardo Grant for Researchers and Cultural Creators from the BBVA Foundation. Carlos Gómez-Rodríguez has also received funding from the European Research Council (ERC), under the European Union’s Horizon 2020 research and innovation programme (FASTPARSE, grant No. 714150).
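    A toy sketch of sentiment analysis used as a feature extractor for a downstream fake-news classifier; the lexicon here is invented and far simpler than the resources the article surveys:

    ```python
    # Hypothetical mini-lexicon; real systems use large curated lexicons
    # or trained sentiment models.
    POSITIVE = {"great", "amazing", "incredible", "win"}
    NEGATIVE = {"terrible", "disaster", "shocking", "outrage"}

    def sentiment_features(text):
        """Return (polarity, strength) of a text from simple lexicon counts.

        polarity in [-1, 1]; strength is the fraction of sentiment-bearing
        words, capturing how emotionally charged the text is.
        """
        words = text.lower().split()
        pos = sum(w in POSITIVE for w in words)
        neg = sum(w in NEGATIVE for w in words)
        total = pos + neg
        polarity = 0.0 if total == 0 else (pos - neg) / total
        strength = total / max(len(words), 1)
        return polarity, strength
    ```

    The resulting pair would be appended to other stylistic and content features before training a classifier, which is the "complementary element" role the abstract describes.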

    Empirical Perturbation Analysis of Two Adversarial Attacks: Black Box versus White Box

    Through the addition of humanly imperceptible noise to an image classified as belonging to a category c_a, targeted adversarial attacks can lead convolutional neural networks (CNNs) to classify the modified image as belonging to any predefined target class c_t ≠ c_a. To achieve a better understanding of the inner workings of adversarial attacks, this study analyzes the adversarial images created by two completely opposite attacks against 10 ImageNet-trained CNNs. A total of 2×437 adversarial images are created by EA_{target,C}, a black-box evolutionary algorithm (EA), and by the basic iterative method (BIM), a white-box, gradient-based attack. We inspect and compare these two sets of adversarial images from different perspectives: the behavior of CNNs at smaller image regions, the image noise frequency, the adversarial image transferability, the image texture change, and the penultimate CNN layer activations. We find that texture change is a side effect rather than a means for the attacks, and that c_t-relevant features only build up significantly from image regions of size 56×56 onwards. In the penultimate CNN layers, both attacks increase the activation of units that are positively related to c_t and of units that are negatively related to c_a. In contrast to EA_{target,C}’s white-noise nature, BIM predominantly introduces low-frequency noise. BIM affects the original c_a features more than EA_{target,C}, thus producing slightly more transferable adversarial images. However, transferability with both attacks is low, since the attacks’ c_t-related information is specific to the output layers of the targeted CNN. We find that the adversarial images are actually more transferable at regions of size 56×56 than at full scale.
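    BIM's iterative sign-gradient update can be sketched on a toy logistic model; the paper attacks ImageNet CNNs, so this linear stand-in only illustrates the update rule and the L-infinity clipping, with all parameter values chosen for illustration:

    ```python
    import numpy as np

    def bim_targeted(x, w, b, target, alpha=0.05, eps=0.3, steps=10):
        """Basic Iterative Method on a toy model p(y=1) = sigmoid(w.x + b).

        White-box: uses the model gradient directly. target is 0 or 1.
        Each step moves x against the gradient of the target-class loss,
        then projects back into the eps-ball around the original input.
        """
        x_adv = x.copy()
        for _ in range(steps):
            p = 1.0 / (1.0 + np.exp(-(w @ x_adv + b)))
            grad = (p - target) * w                 # d(cross-entropy)/dx for label=target
            x_adv = x_adv - alpha * np.sign(grad)   # descend the target-class loss
            x_adv = np.clip(x_adv, x - eps, x + eps)  # stay in the L_inf eps-ball
        return x_adv
    ```

    With a CNN, `grad` would come from backpropagation and a pixel-range clip would be added, but the sign step and the eps-ball projection are exactly the BIM mechanics compared against the black-box EA in the study.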