
    The Encyclopedia of Neutrosophic Researchers - vol. 3

    This is the third volume of the Encyclopedia of Neutrosophic Researchers, edited from materials offered by the authors who responded to the editor’s invitation. The authors are listed alphabetically. The introduction contains a short history of neutrosophics, together with links to the main papers and books. Neutrosophic set, neutrosophic logic, neutrosophic probability, neutrosophic statistics, neutrosophic measure, neutrosophic precalculus, neutrosophic calculus and so on are gaining significant attention in solving many real-life problems that involve uncertainty, imprecision, vagueness, incompleteness, inconsistency, and indeterminacy. In recent years, neutrosophics has been extended and applied in various fields, such as artificial intelligence, data mining, soft computing, decision making in incomplete/indeterminate/inconsistent information systems, image processing, computational modelling, robotics, medical diagnosis, biomedical engineering, investment problems, economic forecasting, social science, and humanistic and practical achievements.

    A Bipolar Single Valued Neutrosophic Isolated Graphs: Revisited

    In this research paper, the graph of the bipolar single-valued neutrosophic set model (BSVNS) is proposed. This graph generalizes the graphs of single-valued neutrosophic set models. For the BSVNS model, several results have been proved on complete and isolated graphs. In addition, a suitable condition for a graph of the BSVNS model to be an isolated graph of the BSVNS model has been demonstrated.

    Generalized Interval Valued Neutrosophic Graphs of First Type

    In this paper, motivated by the notion of generalized single-valued neutrosophic graphs of first type, we define a new class of neutrosophic graphs named generalized interval-valued neutrosophic graphs of first type (GIVNG1), present a matrix representation for them, and study a few properties of this new concept. The concept of GIVNG1 is an extension of generalized fuzzy graphs of first type (GFG1) and generalized single-valued neutrosophic graphs of first type (GSVNG1).
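As an illustration only, the following sketch shows what a matrix representation of an interval-valued neutrosophic graph can look like: each adjacency entry carries interval-valued truth, indeterminacy, and falsity memberships. The paper's GIVNG1 representation has its own specific form; the layout, the `adjacency` helper, and the zero-entry convention below are assumptions for demonstration.

```python
# Hypothetical sketch: dense adjacency matrix for an interval-valued
# neutrosophic graph. Each entry is ((tl, tu), (il, iu), (fl, fu)),
# i.e. interval truth, indeterminacy and falsity memberships.

def adjacency(n, edges):
    """Build an n x n matrix; edges maps (u, v) to a membership triple.
    Absent edges get the assumed 'fully false' entry."""
    zero = ((0.0, 0.0), (0.0, 0.0), (1.0, 1.0))  # assumed no-edge value
    M = [[zero] * n for _ in range(n)]
    for (u, v), memb in edges.items():
        M[u][v] = M[v][u] = memb  # undirected graph assumed
    return M
```
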

    An Evaluation of Triangular Neutrosophic PERT Analysis for Real-Life Project Time and Cost Estimation

    The textile industry's time and cost management issues have led to a search for contemporary tools that provide the best possible project time and cost prediction. Using a case study of Esa Textile in India, this paper assesses quantitative decision-making techniques in the textile industry. An extensive approach is provided so that specialists may use Triangular Neutrosophic Numbers (TNNs) to express their views about identifying the features and indicators of a successful project. Determining the best way to remove interruptions that can cause delays and unnecessary expenses is also an essential responsibility. While commonly employed, traditional estimating methods like the Programme Evaluation and Review Technique (PERT) may find it difficult to adequately address the uncertainties present in real-world projects. To overcome this restriction, this study examines and assesses the use of Triangular Neutrosophic PERT (TNP) analysis for project time and cost estimation. The proposed TNP analysis incorporates neutrosophy, which allows the depiction of the inconsistent, ambiguous and partial data present in project parameters. The efficiency of the proposed strategy has been verified by this analysis, and the network's unknown parameters are represented by triangular neutrosophic numbers. This method expresses each of the three estimates (optimistic, most likely, and pessimistic) with degrees of membership, indeterminacy, and non-membership. The study's objective is to arrange the work network in a logical order across all of the processes at the Esa textile units. Planning is developed using the Triangular Neutrosophic PERT technique even when there are time differences, which speeds up production and cuts expenses. TNP provides a more thorough and adaptable depiction of uncertainty by utilizing the neutrosophic framework, which better captures the dynamic character of real-life projects.

    A Multi Objective Programming Approach to Solve Integer Valued Neutrosophic Shortest Path Problems

    Neutrosophic (NS) set theory provides a new way to deal with the uncertainties of shortest path problems (SPP). Several researchers have worked on the fuzzy shortest path problem (FSPP) in fuzzy graphs with uncertain data, with various applications in real-world scenarios. However, the uncertainty associated with inconsistent and indeterminate information is not properly expressed by fuzzy sets; the neutrosophic set handles these forms of uncertainty. This paper presents a model for the shortest path problem with various arrangements of integer-valued trapezoidal neutrosophic (INVTpNS) and integer-valued triangular neutrosophic (INVTrNS) numbers. We characterize this issue as the Neutrosophic Shortest Path Problem (NSSPP). The established linear programming (LP) model solves the classical SPP with crisp parameters.
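The paper formulates the problem as a linear program; purely as an illustrative sketch, one can instead de-neutrosophify triangular neutrosophic edge weights with a score function assumed from the literature (S = (1/8)(a1+a2+a3)(2+T-I-F)) and run ordinary Dijkstra on the resulting crisp weights. The graph encoding and helper names below are assumptions, not the paper's model.

```python
# Illustrative sketch only: rank triangular neutrosophic edge weights
# with an assumed score function, then run standard Dijkstra.
import heapq

def score(tnn):
    """Score of a triangular neutrosophic number ((a1, a2, a3), T, I, F)."""
    (a1, a2, a3), T, I, F = tnn
    return (a1 + a2 + a3) / 8.0 * (2 + T - I - F)

def neutrosophic_shortest_path(edges, source, target):
    """edges: {(u, v): tnn} directed arcs; returns (total_score, path)."""
    graph = {}
    for (u, v), w in edges.items():
        graph.setdefault(u, []).append((v, score(w)))
    dist, prev = {source: 0.0}, {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    # Reconstruct the path by walking predecessors back to the source.
    path, node = [], target
    while node != source:
        path.append(node)
        node = prev[node]
    path.append(source)
    return dist[target], path[::-1]
```
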

    Fishing for meta-knowledge: a case for transdisciplinary validation

    Purpose – The purpose of this paper is to explore the problem of validating new transdisciplinary knowledge. Validating new knowledge is always hard, but in the case of mono-disciplinary knowledge we at least have the disciplinary knowledge against which to validate. When transdisciplinary knowledge is created, however, two additional problems appear. On the one hand, the new knowledge links to concepts in more than one discipline, which are thus likely to belong to different intellectual traditions. On the other hand, the new knowledge does not belong to any of these disciplines, and thus the usual ways of validating fail us.

    Design/methodology/approach – In this paper we choose the electric car (represented by the Tesla), which we look at from the viewpoints of mathematics, physics, psychology, and economics. For each discipline we consider a simplistic approach that we label ‘dogma’ and a more sophisticated approach that we label ‘philosophy’. We speculate about how new knowledge can be created within these disciplines as well as in a multidisciplinary, interdisciplinary and transdisciplinary manner. Then we examine the problem of validating transdisciplinary knowledge. We conceptualise a three-step validation process for the new transdisciplinary knowledge and show how it can be supported using a knowledge-based expert system.

    Originality/value – Validation is always a difficult problem in academic research, but in the case of transdisciplinary knowledge it gains an additional level of complexity. In contrast, practitioners validate all the time, and their validation is nearly always transdisciplinary. Furthermore, what works well in academic research is validating experimental findings and similar results based on hard evidence. There are continuous attempts to develop validation principles in qualitative research, but there is still no agreement or guideline on how to execute validation correctly or, at least, in an acceptable way. Validation in the case of conceptual results is virtually non-existent; the little that exists can be reduced to examining the consistency of new knowledge with the existing disciplinary knowledge. Therefore, in this paper we initiate what may be a long journey of developing principles of validation for new transdisciplinary knowledge resulting from conceptual inquiry. This is what we call validating meta-knowledge.

    Practical implications – We believe that the most significant implication of our work on transdisciplinary validation will be in education, particularly at the doctoral level. We also believe that creative problem solvers, academics and practitioners alike will benefit from a better understanding of transdisciplinary validation.

    An Enhanced Moth-Flame Optimization with Multiple Flame Guidance Mechanism for Parameter Extraction of Photovoltaic Models

    How to accurately and efficiently extract photovoltaic (PV) model parameters is a primary problem in photovoltaic system optimization. To this end, an enhanced moth-flame optimization (EMFO) with a multiple-flame guidance mechanism is proposed in this study. In EMFO, an adaptive flame-number updating mechanism controls the number of flames, which enhances the local and global exploration capabilities of MFO. Meanwhile, a multiple-flame guidance mechanism is designed to make full use of the position information of the flames, which enhances the global diversity of the population. EMFO is compared with other MFO variants on 25 benchmark functions from CEC2005, 28 functions from CEC2017, and 5 photovoltaic model parameter extraction problems. Experimental results show that EMFO outperforms the compared algorithms, which demonstrates its effectiveness. The method proposed in this study offers MFO researchers ideas for adaptive control and for making full use of flame population information.
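For orientation, here is a sketch of baseline moth-flame optimization, showing the two ingredients the paper modifies: the logarithmic-spiral update around flames and the schedule that shrinks the flame count. The EMFO's adaptive flame-number update and multiple-flame guidance are the paper's contributions and are not reproduced here; this is the standard MFO in simplified form.

```python
# Baseline MFO sketch (not the paper's EMFO). Moths fly logarithmic
# spirals around flames; the flame count shrinks linearly over time.
import math
import random

def mfo(objective, dim, bounds, n_moths=30, max_iter=200, b=1.0):
    lo, hi = bounds
    moths = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_moths)]
    best = min(moths, key=objective)[:]  # global best seen so far
    for it in range(max_iter):
        # Flames: current population sorted by fitness (full MFO also
        # merges in the previous flames; omitted to keep the sketch short).
        flames = sorted((m[:] for m in moths), key=objective)
        if objective(flames[0]) < objective(best):
            best = flames[0][:]
        # Standard schedule: flame count shrinks linearly to 1; the paper
        # replaces this with an adaptive update.
        n_flames = max(1, round(n_moths - it * (n_moths - 1) / max_iter))
        for i, moth in enumerate(moths):
            flame = flames[min(i, n_flames - 1)]
            for j in range(dim):
                dist = abs(flame[j] - moth[j])
                t = random.uniform(-1, 1)
                # Logarithmic spiral flight around the assigned flame.
                moth[j] = dist * math.exp(b * t) * math.cos(2 * math.pi * t) + flame[j]
                moth[j] = min(max(moth[j], lo), hi)  # clamp to bounds
    return best
```
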

    A Comprehensive Review of AI Techniques for Addressing Algorithmic Bias in Job Hiring

    This study comprehensively reviews artificial intelligence (AI) techniques for addressing algorithmic bias in job hiring. More businesses are using AI in curriculum vitae (CV) screening. While this move improves the efficiency of the recruitment process, it is vulnerable to biases, which have adverse effects on organizations and the broader society. This research analyzes case studies on AI hiring to demonstrate both successful implementations and instances of bias, and evaluates the impact of algorithmic bias and the strategies to mitigate it. The study's basic design entails a systematic review of existing literature and research on artificial intelligence techniques employed to mitigate bias in hiring. The results demonstrate that correction of the vector space and data augmentation are effective natural language processing (NLP) and deep learning techniques for mitigating algorithmic bias in hiring. The findings underscore the potential of artificial intelligence techniques to promote fairness and diversity in the hiring process. The study contributes to human resource practice by enhancing the fairness of hiring algorithms. It recommends collaboration between machines and humans to enhance the fairness of the hiring process. The results can help AI developers make the algorithmic changes needed to enhance fairness in AI-driven tools, enabling the development of ethical hiring tools and contributing to fairness in society.
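The "correction of the vector space" this review points to is commonly implemented as hard debiasing (Bolukbasi et al., 2016): remove each word vector's component along an estimated bias direction. A minimal sketch with toy vectors, assuming that specific technique; real systems operate on trained embeddings, and the helper names here are illustrative.

```python
# Hard-debiasing sketch: estimate a bias axis from definitional pairs
# (e.g. "he"/"she" vectors), then project it out of each word vector.
import numpy as np

def bias_direction(pairs):
    """Average of the difference vectors of definitional pairs,
    normalized to unit length."""
    diffs = [a - b for a, b in pairs]
    d = np.mean(diffs, axis=0)
    return d / np.linalg.norm(d)

def debias(vec, direction):
    """Remove the bias component: v' = v - (v . d) d."""
    return vec - np.dot(vec, direction) * direction
```

After debiasing, a vector has zero projection onto the bias axis, so the bias direction no longer influences similarity scores along that axis.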

    Collected Papers (on Neutrosophic Theory and Applications), Volume VI

    This sixth volume of Collected Papers includes 74 papers comprising 974 pages on (theoretical and applied) neutrosophics, written between 2015 and 2021 by the author alone or in collaboration with the following 121 co-authors from 19 countries: Mohamed Abdel-Basset, Abdel Nasser H. Zaied, Abduallah Gamal, Amir Abdullah, Firoz Ahmad, Nadeem Ahmad, Ahmad Yusuf Adhami, Ahmed Aboelfetouh, Ahmed Mostafa Khalil, Shariful Alam, W. Alharbi, Ali Hassan, Mumtaz Ali, Amira S. Ashour, Asmaa Atef, Assia Bakali, Ayoub Bahnasse, A. A. Azzam, Willem K.M. Brauers, Bui Cong Cuong, Fausto Cavallaro, Ahmet Çevik, Robby I. Chandra, Kalaivani Chandran, Victor Chang, Chang Su Kim, Jyotir Moy Chatterjee, Victor Christianto, Chunxin Bo, Mihaela Colhon, Shyamal Dalapati, Arindam Dey, Dunqian Cao, Fahad Alsharari, Faruk Karaaslan, Aleksandra Fedajev, Daniela Gîfu, Hina Gulzar, Haitham A. El-Ghareeb, Masooma Raza Hashmi, Hewayda El-Ghawalby, Hoang Viet Long, Le Hoang Son, F. Nirmala Irudayam, Branislav Ivanov, S. Jafari, Jeong Gon Lee, Milena Jevtić, Sudan Jha, Junhui Kim, Ilanthenral Kandasamy, W.B. Vasantha Kandasamy, Darjan Karabašević, Songül Karabatak, Abdullah Kargın, M. Karthika, Ieva Meidute-Kavaliauskiene, Madad Khan, Majid Khan, Manju Khari, Kifayat Ullah, K. Kishore, Kul Hur, Santanu Kumar Patro, Prem Kumar Singh, Raghvendra Kumar, Tapan Kumar Roy, Malayalan Lathamaheswari, Luu Quoc Dat, T. Madhumathi, Tahir Mahmood, Mladjan Maksimovic, Gunasekaran Manogaran, Nivetha Martin, M. Kasi Mayan, Mai Mohamed, Mohamed Talea, Muhammad Akram, Muhammad Gulistan, Raja Muhammad Hashim, Muhammad Riaz, Muhammad Saeed, Rana Muhammad Zulqarnain, Nada A. Nabeeh, Deivanayagampillai Nagarajan, Xenia Negrea, Nguyen Xuan Thao, Jagan M. Obbineni, Angelo de Oliveira, M. Parimala, Gabrijela Popovic, Ishaani Priyadarshini, Yaser Saber, Mehmet Șahin, Said Broumi, A. A. Salama, M.
Saleh, Ganeshsree Selvachandran, Dönüș Șengür, Shio Gai Quek, Songtao Shao, Dragiša Stanujkić, Surapati Pramanik, Swathi Sundari Sundaramoorthy, Mirela Teodorescu, Selçuk Topal, Muhammed Turhan, Alptekin Ulutaș, Luige Vlădăreanu, Victor Vlădăreanu, Ştefan Vlăduţescu, Dan Valeriu Voinea, Volkan Duran, Navneet Yadav, Yanhui Guo, Naveed Yaqoob, Yongquan Zhou, Young Bae Jun, Xiaohong Zhang, Xiao Long Xin, Edmundas Kazimieras Zavadskas

    Multi-criteria methodology for resource optimization in embedded systems for the implementation of supervised classification algorithms

    In recent years we have seen an increase in the use of embedded systems, owing to their installation flexibility and their ability to collect data through sensors. These systems are based on the combination of Information and Communication Technologies (ICT), the Internet of Things (IoT), and Artificial Intelligence (AI). However, many developers and researchers do not thoroughly assess how faithfully the collected information represents the phenomenon under study. It must be kept in mind that the values obtained from sensors are an approximation of the real value, due to the transformation of a physical signal into an electrical one. As a result, the way this information is stored is oriented more toward quantity than quality. Consequently, extracting useful knowledge from embedded systems by means of machine learning algorithms becomes a complicated task, especially considering that the developer of the electronic device sometimes lacks full knowledge of the application domain in which the system will be deployed. This doctoral thesis proposes a multi-criteria methodology for resource optimization in embedded systems for the implementation of classification algorithms using machine learning criteria. To this end, it seeks to reduce the noise caused by sensor uncertainty through the analysis of signal conditioning criteria. In addition, it has been observed that using an external server for data storage and subsequent analysis affects the system's response time.

    For this reason, once a clean signal has been obtained, the different feature selection criteria are analyzed to reduce the stored dataset, serving two main purposes. The first is to avoid saturating computing services with unnecessarily stored information. The second is to implement these machine learning criteria within the embedded systems themselves, so that they can make their own decisions without human interaction. This transformation makes the system intelligent, since it can choose relevant information and adapt to its working environment. However, the coding of the mathematical models that represent the machine learning algorithms must meet functional requirements based on the computational capacity available in an embedded system. For this reason, a new classification of embedded systems is presented, with a novel taxonomy of sensors focused on data acquisition and analysis. Specifically, a data coupling scheme between the sensor and the information processing system is designed, which recommends a data filtering criterion according to the available computational resources and the way information is transmitted within the embedded system. This process is validated using sensor performance metrics. Finally, once a suitable database is available, a technique is presented for selecting supervised learning algorithms that fit the functional requirements of the embedded system and its information processing capacity. Specifically, feature selection, prototype selection, and dimensionality reduction criteria are analyzed to match the different classification algorithms and choose the most suitable ones.
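The thesis weighs several feature-selection criteria against an embedded system's compute budget. As a minimal stdlib-only illustration of one such criterion (variance thresholding, chosen here as an assumption, not the thesis's specific method): columns of sensor samples whose variance falls below a threshold carry little information and can be dropped before on-device classification.

```python
# Variance-threshold feature selection, kept dependency-free in the
# spirit of running the selection on the embedded device itself.
from statistics import pvariance

def select_features(rows, threshold):
    """rows: list of equal-length sensor sample vectors.
    Returns (kept_indices, reduced_rows)."""
    n_features = len(rows[0])
    kept = [j for j in range(n_features)
            if pvariance([row[j] for row in rows]) > threshold]
    # Keep only the informative columns in every sample.
    return kept, [[row[j] for j in kept] for row in rows]
```
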