9 research outputs found

    A new method of mobile object measurement by using radio frequency identification

    Get PDF
    In this study, the mobile robot carries the RFID tags while the reader antennas are scattered throughout the indoor-outdoor environment, which represents the novelty of the study, as this arrangement has not been used in previous studies. It spares the mobile robot additional weight and reduces battery consumption. Moreover, the growing number of mobile objects increases the demand for cheap passive Radio Frequency Identification tags in navigation systems. Signal-processing techniques are used together with electromagnetic theory to locate the robot's position, and the use of numerous antennas provides a breadth of comparisons. This work presents a new RFID tracking approach that can also be used for indoor positioning. The technique first employs RSS to gather the signal intensity of the reference tags, then sets up power-level ranges using the reference tags' signal strength as the tuning parameter, and estimates distance from the measured signal intensity. The signal intensity of each track tag is matched against the reference tags, so that track tags installed at indoor locations can be used to monitor the movement of people. The location is computed as the arithmetic mean of the positions of the surrounding reference tags. According to preliminary experimental results, the proposed approach is more precise than the antenna system
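    To make the reference-tag matching step concrete, the sketch below shows one plausible reading of it: a track tag's RSS profile across the reader antennas is compared against reference tags with known positions, and the location estimate is the arithmetic mean of the k best-matching reference tags. The array shapes, the Euclidean RSS-distance measure, and the value of k are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def locate_track_tag(track_rss, ref_rss, ref_positions, k=4):
    """Estimate a track tag's position from RSS readings.

    track_rss     : (n_readers,) RSS of the track tag at each reader antenna
    ref_rss       : (n_refs, n_readers) RSS of each reference tag
    ref_positions : (n_refs, 2) known (x, y) position of each reference tag
    Returns the arithmetic mean of the positions of the k reference tags
    whose RSS profile is closest to the track tag's.
    """
    diffs = ref_rss - track_rss              # RSS mismatch per reference tag
    dist = np.linalg.norm(diffs, axis=1)     # smaller = more similar signal strength
    nearest = np.argsort(dist)[:k]           # k best-matching reference tags
    return ref_positions[nearest].mean(axis=0)

# Toy example: 3 reader antennas, 4 reference tags on a 1 m grid
ref_positions = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
ref_rss = np.array([[-40, -55, -60],
                    [-52, -41, -58],
                    [-58, -57, -42],
                    [-61, -49, -45]], dtype=float)
track_rss = np.array([-45, -50, -52], dtype=float)
print(locate_track_tag(track_rss, ref_rss, ref_positions, k=2))
```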

    Hyperspectral Imagery Target Detection Using Improved Anomaly Detection and Signature Matching Methods

    Get PDF
    This research extends the field of hyperspectral target detection by developing autonomous anomaly detection and signature matching methodologies that reduce false alarms relative to existing benchmark detectors, and are practical for use in an operational environment. The proposed anomaly detection methodology adapts multivariate outlier detection algorithms for use with hyperspectral datasets containing tens of thousands of non-homogeneous, high-dimensional spectral signatures. In so doing, the limitations of existing, non-robust anomaly detectors are identified, an autonomous clustering methodology is developed to divide an image into homogeneous background materials, and competing multivariate outlier detection methods are evaluated for their ability to uncover hyperspectral anomalies. To arrive at a final detection algorithm, robust parameter design methods are employed to determine parameter settings that achieve good detection performance over a range of hyperspectral images and targets, thereby removing the burden of these decisions from the user. The final anomaly detection algorithm is tested against existing local and global anomaly detectors, and is shown to achieve superior detection accuracy when applied to a diverse set of hyperspectral images. The proposed signature matching methodology employs image-based atmospheric correction techniques in an automated process to transform a target reflectance signature library into a set of image signatures. This set of signatures is combined with an existing linear filter to form a target detector that is shown to perform as well as or better than detectors that rely on complicated, information-intensive atmospheric correction schemes. The performance of the proposed methodology is assessed using a range of target materials in both woodland and desert hyperspectral scenes
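    The abstract does not reproduce the thesis's specific robust estimators or parameter-design settings, but the overall structure it describes (cluster the scene into homogeneous background materials, then score each spectrum as a multivariate outlier against its own cluster) can be sketched as follows; plain k-means and a Mahalanobis distance stand in for the more robust choices evaluated in the work.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_mahalanobis_scores(pixels, n_clusters=5):
    """Score each pixel spectrum as an anomaly candidate.

    pixels : (n_pixels, n_bands) array of spectra (image already reshaped).
    Pixels are grouped into background clusters, then each pixel is scored by
    its squared Mahalanobis distance to its own cluster's mean and covariance,
    so spectra that fit no background material receive large scores.
    """
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(pixels)
    n_bands = pixels.shape[1]
    scores = np.empty(len(pixels))
    for c in range(n_clusters):
        members = pixels[labels == c]
        mu = members.mean(axis=0)
        if len(members) < 2:           # degenerate cluster: fall back to identity covariance
            cov = np.eye(n_bands)
        else:
            cov = np.cov(members, rowvar=False) + 1e-6 * np.eye(n_bands)
        inv_cov = np.linalg.inv(cov)
        centered = members - mu
        scores[labels == c] = np.einsum("ij,jk,ik->i", centered, inv_cov, centered)
    return scores  # threshold the largest scores to flag anomalies

# Example with synthetic data: 1000 background spectra plus 5 bright outliers
rng = np.random.default_rng(0)
background = rng.normal(0.3, 0.02, size=(1000, 50))
targets = rng.normal(0.9, 0.02, size=(5, 50))
scores = cluster_mahalanobis_scores(np.vstack([background, targets]))
print(np.argsort(scores)[-5:])  # indices of the most anomalous spectra
```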

    Kernel Methods and Measures for Classification with Transparency, Interpretability and Accuracy in Health Care

    Get PDF
    Support vector machines are a popular method in machine learning. They learn from data about a subject, for example, lung tumors in a set of patients, to classify new data, such as a new patient’s tumor. The new tumor is classified as either cancerous or benign, depending on how similar it is to the tumors of other patients in those two classes—where similarity is judged by a kernel. The adoption and use of support vector machines in health care, however, is inhibited by a perceived and actual lack of rationale, understanding and transparency for how they work and how to interpret information and results from them. For example, a user must select the kernel, or similarity function, to be used, and there are many kernels to choose from but little to no useful guidance on choosing one. The primary goal of this thesis is to create accurate, transparent and interpretable kernels with rationale to select them for classification in health care using SVM—and to do so within a theoretical framework that advances rationale, understanding and transparency for kernel/model selection with atomic data types. The kernels and framework necessarily co-exist. The secondary goal of this thesis is to quantitatively measure model interpretability for kernel/model selection and identify the types of interpretable information which are available from different models for interpretation. Testing my framework and transparent kernels with empirical data, I achieve classification accuracy that is better than or equivalent to the Gaussian RBF kernels. I also validate some of the model interpretability measures I propose
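    As a concrete illustration of the mechanism such kernels plug into, scikit-learn's SVC accepts a user-defined kernel function alongside the built-in Gaussian RBF kernel. The per-feature similarity below is a hypothetical stand-in, not one of the thesis's transparent kernels; it only shows how a custom, feature-decomposable kernel can be swapped in and compared against an RBF baseline.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler

def per_feature_kernel(A, B):
    """Average of per-feature Laplacian similarities exp(-|a_j - b_j|).

    Because the similarity decomposes feature by feature, each feature's
    contribution to the comparison of two patients can be inspected separately.
    (Illustrative kernel only; not taken from the thesis.)
    """
    diffs = np.abs(A[:, None, :] - B[None, :, :])   # (n_a, n_b, n_features)
    return np.exp(-diffs).mean(axis=2)

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)

custom_svm = SVC(kernel=per_feature_kernel)   # scikit-learn allows a callable kernel
rbf_svm = SVC(kernel="rbf", gamma="scale")

print("per-feature kernel:", cross_val_score(custom_svm, X, y, cv=5).mean())
print("Gaussian RBF      :", cross_val_score(rbf_svm, X, y, cv=5).mean())
```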

    A Digital Triplet for Utilizing Offline Environments to Train Condition Monitoring Systems for Rolling Element Bearings

    Get PDF
    Manufacturing competitiveness is related to making a quality product while incurring the lowest costs. Unexpected downtime caused by equipment failure negatively impacts manufacturing competitiveness due to the ensuing defects and delays caused by the downtime. Manufacturers have adopted condition monitoring (CM) techniques to reduce unexpected downtime and to augment maintenance strategies. The CM adoption has transitioned maintenance from Breakdown Maintenance (BM) to Condition-Based Maintenance (CbM) to anticipate impending failures and provide maintenance actions before equipment failure. CbM is the umbrella term for maintenance strategies that use condition monitoring techniques, such as Preventive Maintenance (PM) and Predictive Maintenance (PdM). Preventive Maintenance involves providing periodic checks based on either time or sensory input. Predictive Maintenance utilizes continuous or periodic sensory inputs to determine the machine health state and predict equipment failure. The overall goal of the work is to improve bearing diagnostic and prognostic predictions for equipment health by utilizing surrogate systems to generate failure data that represents production equipment failure, thereby providing training data for condition monitoring solutions without waiting for real-world failure data. This research seeks to address the challenges of obtaining failure data for CM systems by incorporating a third system into monitoring strategies to create a Digital Triplet (DTr) for condition monitoring and thereby increase the amount of data available for condition monitoring. Bearings are a critical component in rotational manufacturing systems with wide application to other industries outside of manufacturing, such as energy and defense. The reinvented DTr system considers three components: the physical, surrogate, and digital systems. The physical system represents the real-world application in production that cannot fail. The surrogate system represents a physical component in a test system in an offline environment where data is generated to fill in gaps from data unavailable in the real-world system. The digital system is the CM system, which provides maintenance recommendations based on the ingested data from the real-world and surrogate systems. In pursuing the research goal, a comprehensive bearing dataset detailing four failure modes over different collection operating parameters was created. The collections occurred under different operating conditions, such as speed-varying, load-varying, and steady-state. Different frequency and time measures were used to analyze and identify differentiating criteria between the different failure classes over the differing operating conditions. These empirical observations were recreated using simulations to filter out potential outliers. The outputs of the physical model were combined with knowledge from the empirical observations to create "spectral deltas" to augment existing bearing data and create new failure data that shares similar frequency criteria with the original data. The primary verification occurred on a laboratory bearing test stand. A conjecture is provided on how to scale to a larger system, based on the analysis of a larger system from a local manufacturer. From the subsequent analysis of machine learning diagnosis and prognosis models, the original and augmented bearing data can complement each other during model training. 
The subsequent data substitution verifies that bearing data collected under different operating conditions and sizes can be substituted between different systems. Ostensibly, the full formulation of the digital triplet system is that bearing data generated at a smaller size can be scaled to train predictive failure models for larger bearing sizes. Future work should consider implementing this method for other systems outside of bearings, such as gears, non-rotational equipment, such as pumps, or even larger complex systems, such as computer numerically controlled machine tools or car engines. In addition, the method and process should not be restricted to only mechanical systems and could be applied to electrical systems, such as batteries. Furthermore, an investigation should consider further data-driven approximations to specific bearing characteristics related to the stiffness and damping parameters needed in modeling. A final consideration is for further investigation into the scalability quantities within the data and how to track these changes through different system levels
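    The abstract does not spell out how the "spectral deltas" are computed, so the following is only one plausible sketch of the augmentation idea: take the difference between the surrogate system's faulty and healthy magnitude spectra and add it to the spectrum of a healthy signal from the physical (production) system to synthesize fault-like training data. Signal lengths, sampling rate, and the magnitude-only treatment are assumptions for illustration.

```python
import numpy as np

def augment_with_spectral_delta(healthy_prod, healthy_surr, faulty_surr):
    """Create a synthetic fault signal for the production bearing.

    All inputs are equal-length vibration segments sampled at the same rate.
    The fault's spectral signature is taken as the difference between the
    surrogate's faulty and healthy magnitude spectra ("spectral delta") and
    added to the production machine's healthy spectrum; the production
    signal's phase is kept so the result remains a plausible time series.
    """
    H_prod = np.fft.rfft(healthy_prod)
    delta = np.abs(np.fft.rfft(faulty_surr)) - np.abs(np.fft.rfft(healthy_surr))
    new_mag = np.clip(np.abs(H_prod) + delta, 0.0, None)   # no negative magnitudes
    synthetic = new_mag * np.exp(1j * np.angle(H_prod))
    return np.fft.irfft(synthetic, n=len(healthy_prod))

# Toy usage: 1 kHz sampling, a fault adds a 120 Hz tone on the surrogate rig
fs = 1000
t = np.arange(2048) / fs
healthy_surr = 0.1 * np.random.randn(2048)
faulty_surr = healthy_surr + 0.5 * np.sin(2 * np.pi * 120 * t)
healthy_prod = 0.2 * np.random.randn(2048) + 0.3 * np.sin(2 * np.pi * 30 * t)
synthetic_fault = augment_with_spectral_delta(healthy_prod, healthy_surr, faulty_surr)
```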

    Collected Papers (on Physics, Artificial Intelligence, Health Issues, Decision Making, Economics, Statistics), Volume XI

    Get PDF
    This eleventh volume of Collected Papers includes 90 papers comprising 988 pages on Physics, Artificial Intelligence, Health Issues, Decision Making, Economics, Statistics, written between 2001 and 2022 by the author alone or in collaboration with the following 84 co-authors (alphabetically ordered) from 19 countries: Abhijit Saha, Abu Sufian, Jack Allen, Shahbaz Ali, Ali Safaa Sadiq, Aliya Fahmi, Atiqa Fakhar, Atiqa Firdous, Sukanto Bhattacharya, Robert N. Boyd, Victor Chang, Victor Christianto, V. Christy, Dao The Son, Debjit Dutta, Azeddine Elhassouny, Fazal Ghani, Fazli Amin, Anirudha Ghosha, Nasruddin Hassan, Hoang Viet Long, Jhulaneswar Baidya, Jin Kim, Jun Ye, Darjan Karabašević, Vasilios N. Katsikis, Ieva Meidutė-Kavaliauskienė, F. Kaymarm, Nour Eldeen M. Khalifa, Madad Khan, Qaisar Khan, M. Khoshnevisan, Kifayat Ullah, Volodymyr Krasnoholovets, Mukesh Kumar, Le Hoang Son, Luong Thi Hong Lan, Tahir Mahmood, Mahmoud Ismail, Mohamed Abdel-Basset, Siti Nurul Fitriah Mohamad, Mohamed Loey, Mai Mohamed, K. Mohana, Kalyan Mondal, Muhammad Gulfam, Muhammad Khalid Mahmood, Muhammad Jamil, Muhammad Yaqub Khan, Muhammad Riaz, Nguyen Dinh Hoa, Cu Nguyen Giap, Nguyen Tho Thong, Peide Liu, Pham Huy Thong, Gabrijela Popović, Surapati Pramanik, Dmitri Rabounski, Roslan Hasni, Rumi Roy, Tapan Kumar Roy, Said Broumi, Saleem Abdullah, Muzafer Saračević, Ganeshsree Selvachandran, Shariful Alam, Shyamal Dalapati, Housila P. Singh, R. Singh, Rajesh Singh, Predrag S. Stanimirović, Kasan Susilo, Dragiša Stanujkić, Alexandra Şandru, Ovidiu Ilie Şandru, Zenonas Turskis, Yunita Umniyati, Alptekin Ulutaș, Maikel Yelandi Leyva Vázquez, Binyamin Yusoff, Edmundas Kazimieras Zavadskas, Zhao Loon Wang.

    Discriminative, generative, and imitative learning

    Get PDF
    Thesis (Ph. D.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2002. Includes bibliographical references (leaves 201-212). I propose a common framework that combines three different paradigms in machine learning: generative, discriminative and imitative learning. A generative probabilistic distribution is a principled way to model many machine learning and machine perception problems. Therein, one provides domain specific knowledge in terms of structure and parameter priors over the joint space of variables. Bayesian networks and Bayesian statistics provide a rich and flexible language for specifying this knowledge and subsequently refining it with data and observations. The final result is a distribution that is a good generator of novel exemplars. Conversely, discriminative algorithms adjust a possibly non-distributional model to data optimizing for a specific task, such as classification or prediction. This typically leads to superior performance yet compromises the flexibility of generative modeling. I present Maximum Entropy Discrimination (MED) as a framework to combine both discriminative estimation and generative probability densities. Calculations involve distributions over parameters, margins, and priors and are provably and uniquely solvable for the exponential family. Extensions include regression, feature selection, and transduction. SVMs are also naturally subsumed and can be augmented with, for example, feature selection, to obtain substantial improvements. To extend to mixtures of exponential families, I derive a discriminative variant of the Expectation-Maximization (EM) algorithm for latent discriminative learning (or latent MED). While EM and Jensen lower bound log-likelihood, a dual upper bound is made possible via a novel reverse-Jensen inequality. The variational upper bound on latent log-likelihood has the same form as EM bounds, is computable efficiently and is globally guaranteed. It permits powerful discriminative learning with the wide range of contemporary probabilistic mixture models (mixtures of Gaussians, mixtures of multinomials and hidden Markov models). We provide empirical results on standardized data sets that demonstrate the viability of the hybrid discriminative-generative approaches of MED and reverse-Jensen bounds over state of the art discriminative techniques or generative approaches. Subsequently, imitative learning is presented as another variation on generative modeling which also learns from exemplars from an observed data source. However, the distinction is that the generative model is an agent that is interacting in a much more complex surrounding external world. It is not efficient to model the aggregate space in a generative setting. I demonstrate that imitative learning (under appropriate conditions) can be adequately addressed as a discriminative prediction task which outperforms the usual generative approach. This discriminative-imitative learning approach is applied with a generative perceptual system to synthesize a real-time agent that learns to engage in social interactive behavior. by Tony Jebara. Ph.D.
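    The MED and reverse-Jensen machinery itself is beyond a short example, but the generative/discriminative split the abstract builds on can be illustrated minimally: a generative classifier models the class-conditional densities and applies Bayes' rule, while a discriminative one fits p(y | x) directly for the classification task. The dataset and the two scikit-learn models below are generic stand-ins, not the thesis's algorithms.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Generative: model p(x, y) with class-conditional Gaussians, classify via Bayes' rule.
gen = GaussianNB().fit(X_tr, y_tr)
# Discriminative: fit p(y | x) directly, optimizing for the classification task.
disc = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

print("generative accuracy    :", gen.score(X_te, y_te))
print("discriminative accuracy:", disc.score(X_te, y_te))
# The generative model can also score or sample novel exemplars via p(x | y),
# which the purely discriminative model cannot do.
```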

    Affective Computing

    Get PDF
    This book provides an overview of state-of-the-art research in Affective Computing. It presents new ideas, original results and practical experiences in this increasingly important research field. The book consists of 23 chapters categorized into four sections. Since one of the most important means of human communication is facial expression, the first section of this book (Chapters 1 to 7) presents research on the synthesis and recognition of facial expressions. Given that we use not only the face but also body movements to express ourselves, in the second section (Chapters 8 to 11) we present research on the perception and generation of emotional expressions using full-body motions. The third section of the book (Chapters 12 to 16) presents computational models of emotion, as well as findings from neuroscience research. In the last section of the book (Chapters 17 to 22) we present applications related to affective computing

    Systematic Approaches for Telemedicine and Data Coordination for COVID-19 in Baja California, Mexico

    Get PDF
    Conference proceedings info: ICICT 2023: 2023 The 6th International Conference on Information and Computer Technologies, Raleigh, NC, United States, March 24-26, 2023, Pages 529-542. We provide a model for systematic implementation of telemedicine within a large evaluation center for COVID-19 in the area of Baja California, Mexico. Our model is based on human-centric design factors and cross-disciplinary collaborations for scalable data-driven enablement of smartphone, cellular, and video teleconsultation technologies to link hospitals, clinics, and emergency medical services for point-of-care assessments of COVID testing, and for subsequent treatment and quarantine decisions. A multidisciplinary team was rapidly created in cooperation with different institutions, including the Autonomous University of Baja California, the Ministry of Health, the Command, Communication and Computer Control Center of the Ministry of the State of Baja California (C4), Colleges of Medicine, and the College of Psychologists. Our objective is to provide information to the public, to evaluate COVID-19 in real time, and to track regional, municipal, and state-wide data in real time that informs supply chains and resource allocation in anticipation of a surge in COVID-19 cases. https://doi.org/10.1007/978-981-99-3236-

    An integrative computational modelling of music structure apprehension

    Get PDF