
    Optimization of Manufacturing Production and Process

    This chapter mainly introduces production process optimization, especially machining optimization on CNC machines. A sensor collects raw time-domain vibration data, which a 5G AI edge-computing node converts into a main feature vector using signal processing techniques such as the fast Fourier transform (FFT), short-time Fourier transform (STFT), and wavelet packet decomposition. The main features are then sent to the cloud, where methods such as genetic programming and Support Vector Machines (SVM) produce optimization results. The optimization parameters in this work are spindle rotation velocity, cutting speed, and cutting depth; the result is an optimized spindle rotation speed range for the CNC machine that meets machining roughness requirements. Finally, the relationship between vibration velocity and machining quality is studied further to optimize the three operational parameters.
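
    The edge-side feature extraction described here can be illustrated with a minimal sketch. The snippet below is an assumption-laden toy, not the chapter's implementation: it uses NumPy's FFT to pull the dominant spectral peaks out of a synthetic vibration trace, the kind of main feature vector that would then be shipped to the cloud for SVM or genetic-programming optimization. All names and numbers are illustrative.

        import numpy as np

        def vibration_features(signal, fs, n_peaks=3):
            """Reduce a time-domain vibration trace to its dominant spectral peaks."""
            window = np.hanning(len(signal))              # taper to reduce spectral leakage
            spectrum = np.abs(np.fft.rfft(signal * window))
            freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
            top = np.argsort(spectrum)[-n_peaks:][::-1]   # strongest components first
            return np.column_stack((freqs[top], spectrum[top])).ravel()

        # Synthetic example: a 120 Hz spindle harmonic buried in noise, sampled at 2 kHz
        fs = 2000
        t = np.arange(0, 1.0, 1.0 / fs)
        trace = np.sin(2 * np.pi * 120 * t) + 0.3 * np.random.randn(len(t))
        features = vibration_features(trace, fs)          # this vector goes to the cloud side
        print(features)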

    Radio Systems and Computing at the Edge for IoT Sensor Nodes

    Many Internet of Things (IoT) applications use wireless links to communicate data back. Wireless system performance limits data rates, and this limit ultimately drives where computing resources are located: at the edge or in the cloud. To understand the limits of performance, it is instructive to look at the evolution of cellular and other radio systems. The emphasis will be on RF front-end architectures and requirements as well as the modulation schemes used. Wireless sensor nodes will often need to run off batteries and be low-cost, which constrains the choice of wireless communications system. Generally, cheap and power-efficient radio front ends will not support high data rates, meaning more computing will need to move to the edge. We will look at some examples to understand the choice of radio system for communication, and we will also consider the use of radio in the sensor itself with a radar sensor system.
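
    To make the data-rate limit concrete, the standard Shannon capacity bound (a textbook result, not taken from this chapter) relates achievable bit rate to channel bandwidth and signal-to-noise ratio. The numbers below are illustrative of a narrowband low-power IoT link versus a wideband cellular link.

        import math

        def shannon_capacity_bps(bandwidth_hz, snr_db):
            """Upper bound on error-free data rate: C = B * log2(1 + SNR)."""
            snr_linear = 10 ** (snr_db / 10)
            return bandwidth_hz * math.log2(1 + snr_linear)

        # A 125 kHz channel at 5 dB SNR (LoRa-class) vs. 20 MHz at 20 dB (LTE-class)
        print(f"{shannon_capacity_bps(125e3, 5) / 1e6:.2f} Mbit/s")   # ~0.26 Mbit/s
        print(f"{shannon_capacity_bps(20e6, 20) / 1e6:.0f} Mbit/s")   # ~133 Mbit/s

    A cheap, power-efficient front end is confined to the first regime, which is why heavier processing tends to move onto the edge node rather than streaming raw data to the cloud.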

    Applications of Machine Learning in Healthcare

    Machine learning techniques in healthcare use the increasing amount of health data provided by the Internet of Things to improve patient outcomes. These techniques offer promising applications as well as significant challenges. The three main areas in which machine learning is applied are medical imaging, natural language processing of medical documents, and genetic information; much of this work focuses on diagnosis, detection, and prediction. A large infrastructure of medical devices currently generates data, but a supporting infrastructure is often not in place to use such data effectively. The many different forms in which medical information exists also create challenges in data formatting and can increase noise. We examine a brief history of machine learning, some basic knowledge of the techniques, and the current state of this technology in healthcare.

    Energy Harvesting Technology for IoT Edge Applications

    The integration of energy harvesting technologies with the Internet of Things (IoT) enables the automation of buildings and homes. IoT edge devices, which include end-user equipment connected to networks and interacting with other networks and devices, may be located in remote places where mains power is not available or battery replacement is not feasible. Energy harvesting technologies can reduce or eliminate the need for batteries in edge devices by recharging supercapacitors or rechargeable batteries in the field. This chapter provides a brief discussion of candidate energy harvesting technologies and their potential power densities, along with techniques to minimize the power requirements of edge devices so that energy harvesting solutions are sufficient to meet them.
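
    A quick duty-cycle power budget shows when harvesting can replace a battery. The figures below are hypothetical, not from the chapter; they illustrate the kind of calculation the chapter's power-minimization techniques feed into.

        # Illustrative duty-cycle budget for an IoT edge node
        active_mw, sleep_mw = 60.0, 0.005      # radio TX burst vs. deep-sleep draw
        active_s, period_s = 0.2, 600.0        # one 200 ms transmission every 10 minutes

        duty = active_s / period_s
        avg_mw = duty * active_mw + (1 - duty) * sleep_mw
        print(f"average draw: {avg_mw * 1000:.1f} uW")    # ~25 uW

        # An indoor photovoltaic cell at ~10 uW/cm^2 would need roughly this area:
        harvest_uw_per_cm2 = 10.0
        print(f"required PV area: {avg_mw * 1000 / harvest_uw_per_cm2:.1f} cm^2")   # ~2.5 cm^2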

    Remote Management of Autonomous Factory

    In today’s mass-production era, the world makes things (products and systems) quickly and systematically, in huge volumes. Demand for these products is very high and, at the same time, consumers increasingly want production to be personalized, so the “one mold fits all” approach may no longer be enough. The present approach lacks networking between the automation pyramid levels, especially between the enterprise resource planning (ERP) and manufacturing execution system (MES) layers, so communicating directly with the lower layers is not possible. This missing communication with process equipment such as machinery and field control systems such as PLCs on the production shop floor means that, in classical manufacturing, customization at the product layer for the consumer is still a work in progress. Mini-MES is a new concept introduced here to address the shortcomings of existing techniques reported in the literature, informed by industry best practices. The novel mini-MES platform provides an avenue for the technology process level (the bottommost layer) to achieve interconnectivity and interoperability with the higher levels until the pain points above are addressed holistically. The chapter focuses mainly on factory production in digital manufacturing and on describing the 3-Cs implementation plan, the enabling technology, and the achievable outcomes ahead.
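
    The interconnectivity the mini-MES platform promises can be pictured as a lightweight event bus in which the bottommost process level publishes directly to subscribers at higher pyramid levels. The sketch below is a toy illustration of that idea, not the chapter's platform; all names and the structure are hypothetical.

        from collections import defaultdict
        from dataclasses import dataclass, field

        @dataclass
        class Event:
            source: str          # e.g. a PLC or machine identifier
            kind: str            # e.g. "cycle_complete", "alarm"
            payload: dict = field(default_factory=dict)

        class MiniMESBus:
            """Toy broker: field-level events reach MES/ERP-level handlers directly."""
            def __init__(self):
                self._subscribers = defaultdict(list)

            def subscribe(self, kind, handler):
                self._subscribers[kind].append(handler)

            def publish(self, event):
                for handler in self._subscribers[event.kind]:
                    handler(event)

        bus = MiniMESBus()
        # An ERP-level consumer sees a shop-floor event without intermediate layers:
        bus.subscribe("cycle_complete", lambda e: print("ERP update from", e.source, e.payload))
        bus.publish(Event("PLC-07", "cycle_complete", {"parts": 12}))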

    Artificial IoT and Data Interoperability: Future Directions and Research Agenda

    The Internet of Things (IoT) has grown from devices being connected to and controlled through the Internet into autonomous platforms and networked devices that communicate with each other. The use of Artificial Intelligence with IoT (AIoT) has further increased the capabilities and services provided by devices but has also imposed various challenges, such as data interoperability. This panel is composed of leading experts from academia and industry who will discuss the current state of AIoT and data interoperability, identifying opportunities, challenges, and future directions for research.

    The Use of Machine Learning Methods for Image Classification in Medical Data

    Integrating medical imaging with computing technologies such as Artificial Intelligence (AI) and its subsets, Machine Learning (ML) and Deep Learning (DL), has become an essential facet of present-day medicine, playing a pivotal role in diagnostic decision-making and treatment planning (Huang et al., 2023). The significance of medical imaging is heightened by its sustained growth within modern healthcare (Varoquaux and Cheplygina, 2022). Nevertheless, the ever-increasing volume of medical images relative to the availability of imaging experts, biomedical experts, and radiologists has produced a widening disparity, placing an excessive and overwhelming workload on these healthcare professionals (Chen et al., 2021). Several studies indicate that the present-day radiologist must interpret an image roughly every 10 seconds to keep pace with burgeoning clinical demands (McDonald et al., 2015; Hosny et al., 2018; Lantsman et al., 2022). This cognitive drain has led to inevitable consequences such as delays in diagnosis and an amplified risk of diagnostic errors; biomedical imaging is therefore in dire need of methods that aid accurate diagnostics and analytics for improved decision-making. In this review, AI-related technologies such as ML and DL methods are reviewed in relation to the processing of medical and biomedical images, along with their potential, their challenges, and possible directions for future studies in the health landscape. The focus is on machine learning methods for medical image classification.

    Medical Image Classification with Machine Learning Classifier

    In contemporary healthcare, medical image categorization is essential for illness prediction, diagnosis, and therapy planning. The emergence of digital imaging technology has led to a significant increase in research into the use of machine learning (ML) techniques for classifying images in medical data. In this review, we provide a thorough summary of recent developments in the area, drawing on the most recent research and cutting-edge methods. We begin by discussing the unique challenges and opportunities associated with medical image classification, including the complexity of anatomical structures, variability in imaging modalities, and the need for interpretability and reliability in clinical settings. Subsequently, we survey a wide range of ML algorithms and techniques employed for medical image classification, including traditional methods such as support vector machines and k-nearest neighbors, as well as deep learning approaches like convolutional neural networks (CNNs) and recurrent neural networks (RNNs). Furthermore, we examine key considerations in dataset preparation, feature extraction, and model evaluation specific to medical image classification tasks. We highlight the importance of large annotated datasets, transfer learning, and data augmentation techniques in enhancing model performance and generalization.
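
    As a concrete instance of the “traditional methods” surveyed, the sketch below trains a support vector machine on scikit-learn's small digits dataset, standing in for a medical imaging set; it is a generic baseline under those assumptions, not a method from the review.

        from sklearn.datasets import load_digits
        from sklearn.metrics import classification_report
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        X, y = load_digits(return_X_y=True)    # 8x8 grayscale images, flattened to 64 features
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale"))
        clf.fit(X_tr, y_tr)
        print(classification_report(y_te, clf.predict(X_te)))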

    Recent progress on artificial intelligence application for COVID-19 monitoring and mitigation system in Indonesia

    COVID-19 is a serious public health issue whose spread has gone global, and Indonesia is among the countries with high numbers of infectious cases. Multiple scientific disciplines have come together to fight COVID-19 in Indonesia, and artificial intelligence (AI), which has been reported to play a vital role in public health, is one of them. As a developing country, Indonesia has brought innovative AI approaches to help people affected by COVID-19, including classification, detection, diagnosis, prediction, telemedicine, and more. This study elaborates on recent progress in AI applications in Indonesia contributing to the fight against COVID-19, as reported in scientific papers and reports. Although research in this field is still not highly developed, the existing innovations provide new enthusiasm for public health and show the benefits of the latest technology in supporting human health, especially when facing a pandemic in Indonesia. We find that future research can make greater use of AI approaches in public health and medicine, particularly under pandemic conditions.

    Academic Libraries

    As we begin to fundamentally redefine our world through the lens of the Fourth Industrial Revolution (4IR), entire industries are gearing up for this disruptive event, and library practices have been no exception. With the advent of advanced digital technology, knowledge is becoming more readily accessible. This book focuses on how libraries need to respond, adapt, and transform to become meaningful spaces in our rapidly changing 21st century, shaped by the 4IR and by the restrictions of the pandemic. Tracing the evolution of technology over the centuries, the changing role of the library as a response to disruptions is discussed.