
    Modelos y algoritmos de predicción fuzzy

    In such a globalized world, macroeconomic complexity, market volatility, and a country's political and social stability directly affect the prices of that country's listed securities and add complexity to decision-making. An analysis of, or inference about, stock prices must serve as support for decision-making, so it cannot rely on a point forecast alone; herein lies the great relevance of fuzzy time series. Fuzzy time series can handle settings with uncertainty in the data (behaviors of, or relationships among, the data) and/or concepts expressed in linguistic terms (investment risk or expected return), which are very difficult to translate into classical mathematical terms. Additionally, they allow fuzzy forecasts (fuzzy sets) from which more information than a mere point estimate can be extracted (interval forecasts). Given the ambiguity of the data (generally nonlinear) and the asymmetry of volatility over time, this thesis analyzes the price time series of certain benchmark indices from a fuzzy (credibilistic and possibilistic) point of view. In applying fuzzy set theory, it has been observed that many researchers have put forward a multitude of interesting ideas and research lines for the application of fuzzy time series models. Accordingly, this thesis presents several forecasting models. From a possibilistic point of view, some models are based on new weighting operators for fuzzy time series forecasting and others on a convex linear combination of those operators, always producing one-step-ahead fuzzy forecasts. From a credibilistic point of view, other models use fuzzy variables, breaking with the classical schemes of fuzzy time series forecasting. Working from this point of view has made it possible to produce multi-step forecasts and, additionally, interval forecasts. For the models based on weighted fuzzy time series, a set of weights obtained mainly in three ways is proposed: first, from the chronological sequence of fuzzy logical relationships; second, using the information provided by the fuzzy logical relationships one by one together with the jumps; and third, through a generalized version of the jump model using runs. In these models, to account for the possible trend of the fuzzy time series, a set of weightings is presented that measures the relative frequency and the magnitude of the jumps observed in the fuzzy time series. These models also return trapezoidal fuzzy numbers as the result of the forecasting process. Our approach has been tested using historical time series of some of the stocks that make up the IBEX35 and of four stock indices (IBEX35, the Japanese NIKKEI225, the German DAX30, and the Taiwanese TAIEX) for the weighted fuzzy time series models, and the M4 Competition database for the models based on fuzzy variables.
In the numerical experiments, our proposals were compared with other well-known fuzzy time series methods, weighted fuzzy time series methods, and classical statistical forecasting models, and good results were generally achieved, with higher forecasting accuracy. A predictor and a DSS (decision support system) for fuzzy time series analysis were also presented, with the different strategies or models included in its decision tree. Applied to daily price series of stock market indices, it produces one-step-ahead fuzzy forecasts (trapezoidal fuzzy numbers), with promising results
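
The thesis' own operators are not reproduced in this abstract, so the following is only a minimal illustrative sketch in the general spirit of chronologically weighted fuzzy time series forecasting; the interval partition, the weighting rule, and the defuzzification step are assumptions, not the author's models.

```python
# Illustrative sketch only: a minimal chronologically weighted fuzzy time series
# forecaster. It is NOT the thesis' method; the number of intervals, the weighting
# rule and the defuzzification are assumptions.
import numpy as np

def partition(series, n_intervals=7):
    """Split the universe of discourse into equal-width intervals."""
    lo, hi = series.min(), series.max()
    edges = np.linspace(lo, hi, n_intervals + 1)
    mids = (edges[:-1] + edges[1:]) / 2
    return edges, mids

def fuzzify(series, edges):
    """Map each crisp value to the index of the fuzzy set (interval) it falls in."""
    return np.clip(np.searchsorted(edges, series, side="right") - 1, 0, len(edges) - 2)

def forecast_next(series, n_intervals=7):
    """One-step-ahead point forecast from chronologically weighted fuzzy logical relationships."""
    edges, mids = partition(series, n_intervals)
    states = fuzzify(series, edges)
    current = states[-1]
    # Collect the right-hand sides of all FLRs current -> next, in chronological order.
    rhs = [states[t + 1] for t in range(len(states) - 1) if states[t] == current]
    if not rhs:                      # no matching relationship: persist the current state
        return float(mids[current])
    # Chronological weights: later relationships count more (w_k proportional to k).
    w = np.arange(1, len(rhs) + 1, dtype=float)
    w /= w.sum()
    return float(np.dot(w, mids[rhs]))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    prices = np.cumsum(rng.normal(0, 1, 300)) + 100   # synthetic price-like series
    print("next-step forecast:", forecast_next(prices))
```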

    A systematic literature review of soft set theory

    Soft set theory, initially introduced through the seminal article "Soft set theory—First results" in 1999, has gained considerable attention in the field of mathematical modeling and decision-making. Despite its growing prominence, a comprehensive survey of soft set theory, encompassing its foundational concepts, developments, and applications, is notably absent in the existing literature. We aim to bridge this gap. This survey delves into the basic elements of the theory, including the notion of a soft set, the operations on soft sets, and their semantic interpretations. It describes various generalizations and modifications of soft set theory, such as N-soft sets, fuzzy soft sets, and bipolar soft sets, highlighting their specific characteristics. Furthermore, this work outlines the fundamentals of various extensions of mathematical structures from the perspective of soft set theory. In particular, we present basic results of soft topology and other algebraic structures such as soft algebras and sigma-algebras. This article examines a selection of notable applications of soft set theory in different fields, including medicine and economics, underscoring its versatile nature. The survey concludes with a discussion on the challenges and future directions in soft set theory, emphasizing the need for further research to enhance its theoretical foundations and broaden its practical applications. Overall, this survey of soft set theory serves as a valuable resource for practitioners, researchers, and students interested in understanding and utilizing this flexible mathematical framework for tackling uncertainty in decision-making processes
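
As a concrete reminder of the basic notion the survey starts from, here is a small, hedged sketch of a soft set as a parameter-to-subset mapping together with two standard operations; the function names and the toy "houses" example are illustrative assumptions, not material taken from the survey.

```python
# Minimal sketch of a soft set (F, A) over a universe U, i.e. a map from parameters
# to subsets of U, with extended union and restricted intersection. The example
# universe and parameter names are invented for illustration.
from typing import Dict, FrozenSet, Hashable

SoftSet = Dict[Hashable, FrozenSet]   # parameter -> approximate set of objects

def extended_union(f: SoftSet, g: SoftSet) -> SoftSet:
    """(F, A) union (G, B): defined on A ∪ B, unioning approximations on common parameters."""
    return {e: f.get(e, frozenset()) | g.get(e, frozenset()) for e in set(f) | set(g)}

def restricted_intersection(f: SoftSet, g: SoftSet) -> SoftSet:
    """(F, A) restricted-intersect (G, B): defined only on A ∩ B, intersecting approximations."""
    return {e: f[e] & g[e] for e in set(f) & set(g)}

# Example: 'cheap' and 'modern' houses as judged by two evaluators.
F = {"cheap": frozenset({"h1", "h3"}), "modern": frozenset({"h2"})}
G = {"cheap": frozenset({"h3", "h4"}), "wooden": frozenset({"h1"})}
print(extended_union(F, G))
print(restricted_intersection(F, G))
```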

    Data Science: Measuring Uncertainties

    With the increase in data processing and storage capacity, a large amount of data is available, but data without analysis has little value. The demand for data analysis is therefore increasing daily, and the consequence is the appearance of a large number of jobs and published articles. Data science has emerged as a multidisciplinary field to support data-driven activities, integrating and developing ideas, methods, and processes to extract information from data. This includes methods built from different knowledge areas: Statistics, Computer Science, Mathematics, Physics, Information Science, and Engineering. This mixture of areas has given rise to what we call Data Science. New problems that generate large volumes of data are appearing rapidly, and so are new solutions to them. Current and future challenges require greater care in creating solutions that satisfy the rationality of each type of problem. Labels such as Big Data, Data Science, Machine Learning, Statistical Learning, and Artificial Intelligence demand more sophistication in their foundations and in how they are applied. This highlights the importance of building the foundations of Data Science. This book is dedicated to solutions and discussions of measuring uncertainties in data analysis problems

    Visualizing Uncertainty in Sets

    Set visualization facilitates the exploration and analysis of set-type data. However, how sets should be visualized when the data are uncertain is still an open research challenge. To address the problem of depicting uncertainty in set visualization, we ask 1) which aspects of set-type data can be affected by uncertainty and 2) which characteristics of uncertainty influence the visualization design. We answer these research questions by first describing a conceptual framework that brings together 1) the information that is primarily relevant in sets (i.e., set membership, set attributes, and element attributes) and 2) different plausible categories of (un)certainty (i.e., certainty, undefined uncertainty as a binary fact, and defined uncertainty as a quantifiable measure). Following the structure of our framework, we systematically discuss basic visualization examples of integrating uncertainty in set visualizations. We draw on existing knowledge about general uncertainty visualization and previous evidence of its effectiveness
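
As a rough illustration of the framework's two axes (what can be uncertain, and how the uncertainty is characterized), the following hedged data-model sketch is one possible encoding; the class and field names are assumptions, not the paper's implementation.

```python
# Illustrative encoding (an assumption, not the paper's code) of uncertain set membership:
# each record states whether an element's membership in a set is certain, uncertain in an
# unquantified way, or uncertain with a quantifiable measure (e.g. a probability).
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Certainty(Enum):
    CERTAIN = "certain"                 # the fact is known
    UNDEFINED = "undefined_uncertain"   # known to be uncertain, amount unquantified
    DEFINED = "defined_uncertain"       # uncertainty quantified by `measure`

@dataclass
class Membership:
    element: str
    set_name: str
    certainty: Certainty
    measure: Optional[float] = None     # e.g. membership probability, only for DEFINED

# An element that is certainly in set A, and possibly (p = 0.4) in set B.
records = [
    Membership("e1", "A", Certainty.CERTAIN),
    Membership("e1", "B", Certainty.DEFINED, measure=0.4),
]
print(records)
```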

    Access and Resource Management for Clinical Care and Clinical Research in Multi-class Stochastic Queueing Networks.

    In healthcare delivery systems, proper coordination between patient visits and the healthcare resources they rely upon is an area where new planning capabilities can deliver greater value to all stakeholders. Managing supply and demand while providing an appropriate service level for various types of care and for patients of differing levels of urgency is a difficult task. It becomes even more complex when planning for (i) stochastic demand, (ii) multi-class customers (i.e., patients with different urgency levels), and (iii) multiple services/visit types (including multi-visit itineraries of clinical care and/or clinical research visits delivered according to research protocols). These complications in the demand stream involve service waiting times and itineraries of visits that may span multiple days or weeks and may utilize many different resources in the organization (each resource providing at least one specific service). The key objective of this dissertation is to develop planning models for the optimization of capacity allocation, considering the coordination between resources and patient demand in these multi-class stochastic queueing networks, in order to meet the service/access levels required for each patient class. This control can be managed by allocating resources to specific patient types/visits over a planning horizon. In this dissertation, we control key performance metrics that relate to patient access management and resource capacity planning in various healthcare settings, with chapters devoted to outpatient services and clinical research units. The methods developed forecast and optimize (1) the access to care (in a medical specialty) for each patient class, (2) the Time to First Available Visit for clinical research participants enrolling in clinical trials, and (3) the access to downstream resources in an itinerary of care, which we call the itinerary flow time. We also model and control how resources are managed by incorporating (4) workload/utilization metrics, as well as (5) blocking/overtime probabilities of those resources. We control how to allocate resource capacity along the various multi-visit resource requirements of the patient itineraries, and by doing so, we capture the key correlations between patient access and resource allocation, coordination, and utilization.
    PhD, Industrial and Operations Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/116770/1/jivan_1.pd
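
The dissertation's own optimization models are not shown in the abstract; as a hedged illustration of two of the resource-level metrics it mentions, the sketch below computes the blocking probability and utilization of a single capacitated resource via the standard Erlang-B recursion. The arrival and service rates are made-up numbers, and a single-station loss model is a deliberate simplification of the multi-class network setting.

```python
# Illustrative sketch only (not the dissertation's models): utilization and blocking
# probability for one M/M/c/c-style resource, using the Erlang-B recursion.
def erlang_b(servers: int, offered_load: float) -> float:
    """Blocking probability of an M/M/c/c loss system with offered load a = lambda/mu."""
    b = 1.0
    for c in range(1, servers + 1):
        b = offered_load * b / (c + offered_load * b)
    return b

arrival_rate = 18.0      # patient requests per day (assumed)
service_rate = 2.5       # visits one provider completes per day (assumed)
providers = 8

offered = arrival_rate / service_rate
blocking = erlang_b(providers, offered)
utilization = offered * (1 - blocking) / providers   # carried load per provider

print(f"blocking probability ~ {blocking:.3f}, utilization ~ {utilization:.2%}")
```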

    Collected Papers (on Neutrosophic Theory and Applications), Volume VI

    This sixth volume of Collected Papers includes 74 papers comprising 974 pages on (theoretic and applied) neutrosophics, written between 2015-2021 by the author alone or in collaboration with the following 121 co-authors from 19 countries: Mohamed Abdel-Basset, Abdel Nasser H. Zaied, Abduallah Gamal, Amir Abdullah, Firoz Ahmad, Nadeem Ahmad, Ahmad Yusuf Adhami, Ahmed Aboelfetouh, Ahmed Mostafa Khalil, Shariful Alam, W. Alharbi, Ali Hassan, Mumtaz Ali, Amira S. Ashour, Asmaa Atef, Assia Bakali, Ayoub Bahnasse, A. A. Azzam, Willem K.M. Brauers, Bui Cong Cuong, Fausto Cavallaro, Ahmet Çevik, Robby I. Chandra, Kalaivani Chandran, Victor Chang, Chang Su Kim, Jyotir Moy Chatterjee, Victor Christianto, Chunxin Bo, Mihaela Colhon, Shyamal Dalapati, Arindam Dey, Dunqian Cao, Fahad Alsharari, Faruk Karaaslan, Aleksandra Fedajev, Daniela Gîfu, Hina Gulzar, Haitham A. El-Ghareeb, Masooma Raza Hashmi, Hewayda El-Ghawalby, Hoang Viet Long, Le Hoang Son, F. Nirmala Irudayam, Branislav Ivanov, S. Jafari, Jeong Gon Lee, Milena Jevtić, Sudan Jha, Junhui Kim, Ilanthenral Kandasamy, W.B. Vasantha Kandasamy, Darjan Karabašević, Songül Karabatak, Abdullah Kargın, M. Karthika, Ieva Meidute-Kavaliauskiene, Madad Khan, Majid Khan, Manju Khari, Kifayat Ullah, K. Kishore, Kul Hur, Santanu Kumar Patro, Prem Kumar Singh, Raghvendra Kumar, Tapan Kumar Roy, Malayalan Lathamaheswari, Luu Quoc Dat, T. Madhumathi, Tahir Mahmood, Mladjan Maksimovic, Gunasekaran Manogaran, Nivetha Martin, M. Kasi Mayan, Mai Mohamed, Mohamed Talea, Muhammad Akram, Muhammad Gulistan, Raja Muhammad Hashim, Muhammad Riaz, Muhammad Saeed, Rana Muhammad Zulqarnain, Nada A. Nabeeh, Deivanayagampillai Nagarajan, Xenia Negrea, Nguyen Xuan Thao, Jagan M. Obbineni, Angelo de Oliveira, M. Parimala, Gabrijela Popovic, Ishaani Priyadarshini, Yaser Saber, Mehmet Șahin, Said Broumi, A. A. Salama, M. Saleh, Ganeshsree Selvachandran, Dönüș Șengür, Shio Gai Quek, Songtao Shao, Dragiša Stanujkić, Surapati Pramanik, Swathi Sundari Sundaramoorthy, Mirela Teodorescu, Selçuk Topal, Muhammed Turhan, Alptekin Ulutaș, Luige Vlădăreanu, Victor Vlădăreanu, Ştefan Vlăduţescu, Dan Valeriu Voinea, Volkan Duran, Navneet Yadav, Yanhui Guo, Naveed Yaqoob, Yongquan Zhou, Young Bae Jun, Xiaohong Zhang, Xiao Long Xin, Edmundas Kazimieras Zavadskas

    Fuzzy Techniques for Decision Making 2018

    Zadeh's fuzzy set theory incorporates the imprecision of data and evaluations by assigning the degrees to which each object belongs to a set. Its success fostered theories that codify the subjectivity, uncertainty, imprecision, or roughness of the evaluations. Their rationale is to produce new, flexible methodologies in order to model a variety of concrete decision problems more realistically. This Special Issue gathers contributions addressing novel tools, techniques, and methodologies for decision making (both individual and group, single- or multi-criteria) in the context of these theories. It contains 38 research articles that contribute to a variety of setups combining fuzziness, hesitancy, roughness, covering sets, and linguistic approaches, ranging from fundamental or technical to applied approaches

    Z-Numbers-Based Approach to Hotel Service Quality Assessment

    In this study, we analyze the possibility of using Z-numbers for measuring service quality and for decision-making on quality improvement in the hotel industry. The techniques used for these purposes are based on consumer evaluations: expectations and perceptions. As a rule, these evaluations are expressed as crisp numbers (Likert scale) or fuzzy estimates. However, describing respondents' opinions with crisp or fuzzy numbers is not always adequate, because the existing methods do not take into account the respondents' degree of confidence in their assessments. A fuzzy approach better describes the uncertainties associated with human perceptions and expectations, and linguistic values are more acceptable than crisp numbers. To capture the subjective nature of both the service quality estimates and the degree of confidence in them, two-component Z-numbers Z = (A, B) were used; Z-numbers express the opinion of consumers more adequately. The proposed, computationally efficient approach (Z-SERVQUAL, Z-IPA) makes it possible to determine the quality of services and to identify the factors that require improvement and the areas for further development. The suggested method was applied to evaluate service quality in small and medium-sized hotels in Turkey and Azerbaijan, as illustrated by an example
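
The paper's Z-SERVQUAL computations are not reproduced here; the sketch below is only a hedged illustration of how a two-component Z-number Z = (A, B) can be stored as a pair of triangular fuzzy numbers, reduced to an ordinary fuzzy number with the commonly used scale-by-square-root-of-confidence conversion, and then used in a SERVQUAL-style perception-minus-expectation gap. All numeric values and class names are assumptions.

```python
# Hedged sketch, not the paper's implementation: Z-number as (A, B) with triangular
# fuzzy components, converted to an ordinary fuzzy number before taking a gap score.
from dataclasses import dataclass
import math

@dataclass
class TFN:                      # triangular fuzzy number (l, m, u)
    l: float
    m: float
    u: float
    def centroid(self) -> float:
        return (self.l + self.m + self.u) / 3.0
    def scale(self, k: float) -> "TFN":
        return TFN(self.l * k, self.m * k, self.u * k)
    def minus(self, other: "TFN") -> "TFN":
        return TFN(self.l - other.u, self.m - other.m, self.u - other.l)

@dataclass
class ZNumber:                  # Z = (A, B): restriction A with reliability B
    A: TFN
    B: TFN
    def to_fuzzy(self) -> TFN:
        alpha = self.B.centroid()            # defuzzified confidence in [0, 1]
        return self.A.scale(math.sqrt(alpha))

# "Service was good" (~4 on a 1-5 scale) said with high confidence, against the guest's
# expectation "very good" stated with medium confidence (all values assumed).
perception  = ZNumber(TFN(3, 4, 5), TFN(0.7, 0.8, 0.9))
expectation = ZNumber(TFN(4, 5, 5), TFN(0.4, 0.5, 0.6))
gap = perception.to_fuzzy().minus(expectation.to_fuzzy())
print(f"fuzzy gap ~ ({gap.l:.2f}, {gap.m:.2f}, {gap.u:.2f})")
```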