
    Solving a Fully Fuzzy Linear Programming Problem through Compromise Programming

    In the current literature, there are several models of fully fuzzy linear programming (FFLP) problems in which all the parameters and variables are fuzzy numbers but the constraints are crisp equalities or inequalities. In this paper, an FFLP problem with fuzzy equality constraints is discussed, and a method for solving it is proposed. We first transform the fuzzy equality constraints into crisp inequality ones using a similarity measure, which is interpreted as the feasibility degree of the constraints, and then transform the fuzzy objective into two crisp objectives by considering the expected value and the uncertainty of the fuzzy objective. Since the feasibility degree of the constraints conflicts with the optimal value of the objective function, we finally construct an auxiliary three-objective linear programming problem, solved through a compromise programming approach, to solve the initial FFLP problem. To illustrate the proposed method, two numerical examples are solved.
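    To make the two crisp objectives concrete, the following is a minimal sketch assuming triangular fuzzy numbers and the common definitions EV = (a + 2b + c)/4 for the expected value and (c - a)/2 for the uncertainty (spread); the paper's exact operators and weights may differ.

```python
# Minimal sketch of the defuzzification step described above, assuming
# triangular fuzzy numbers (a, b, c); the paper's exact definitions of
# expected value, uncertainty, and compromise weights may differ.

def expected_value(tfn):
    """Expected value of a triangular fuzzy number (a, b, c)."""
    a, b, c = tfn
    return (a + 2 * b + c) / 4.0

def uncertainty(tfn):
    """Spread (half-width of the support) of a triangular fuzzy number."""
    a, b, c = tfn
    return (c - a) / 2.0

def compromise_score(tfn, beta, w=(1.0, 1.0, 1.0)):
    """Weighted compromise of the three objectives: maximise the expected
    value, minimise the uncertainty, maximise the feasibility degree beta."""
    w1, w2, w3 = w
    return w1 * expected_value(tfn) - w2 * uncertainty(tfn) + w3 * beta

# Example: a fuzzy objective value of roughly 30 with support [25, 38],
# attained at feasibility degree 0.8 (all numbers illustrative).
print(compromise_score((25, 30, 38), beta=0.8))
```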

    Interval and fuzzy optimization. Applications to data envelopment analysis

    Growing interest in assessing the efficiency of a set of peer entities, termed Decision Making Units (DMUs), in fields ranging from industry to healthcare has led to the development of efficiency assessment models and tools. Data Envelopment Analysis (DEA) is one of the most important methodologies for measuring efficiency through the comparison of a group of DMUs. It permits the use of multiple inputs/outputs without assuming any functional form, and it is widely applied to production theory in Economics and to benchmarking in Operations Research. In conventional DEA models, the observed inputs and outputs are precise, real-valued data. In the real world, however, some problems involve imprecise and integer data; for example, the number of defect-free lamps, the fleet size, the number of hospital beds, or the number of staff can in some cases only be represented as imprecise integer quantities. This thesis presents several novel approaches for assessing the efficiency of DMUs whose inputs and outputs are interval and fuzzy data. First, an axiomatic derivation of the fuzzy production possibility set is presented and a fuzzy enhanced Russell graph measure is formulated using a fuzzy arithmetic approach. The proposed approach uses polygonal fuzzy sets and LU-fuzzy partial orders and provides crisp efficiency measures (and an associated efficiency ranking) as well as fuzzy efficient targets. The second approach is a new integer interval DEA, with the extension of the corresponding arithmetic and LU-partial orders to integer intervals. Also, a new fuzzy integer DEA approach for efficiency assessment is presented; it considers a hybrid scenario involving trapezoidal fuzzy integer numbers and trapezoidal fuzzy numbers. Fuzzy integer arithmetic and partial orders are introduced, and then, using appropriate axioms, a fuzzy integer DEA technology is derived. Finally, an inverse DEA based on the non-radial slacks-based model in the presence of uncertainty, employing both integer and continuous interval data, is presented.
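    As background for the models the thesis generalises, here is a minimal sketch of the crisp input-oriented CCR DEA model in multiplier form, solved as a linear program; the toy data and function names are illustrative, not taken from the thesis.

```python
# Crisp CCR DEA (multiplier form): for DMU j0, maximise the weighted
# output u . y_j0 subject to v . x_j0 = 1 and u . y_j - v . x_j <= 0
# for every DMU j. This is the standard crisp starting point, not the
# thesis's interval/fuzzy extension.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of DMU j0.
    X: (m inputs x n DMUs), Y: (s outputs x n DMUs)."""
    m, n = X.shape
    s, _ = Y.shape
    # Decision variables: output weights u (s of them), then input weights v (m).
    c = np.concatenate([-Y[:, j0], np.zeros(m)])       # maximise u . y_j0
    A_eq = np.concatenate([np.zeros(s), X[:, j0]]).reshape(1, -1)
    b_eq = [1.0]                                       # v . x_j0 = 1
    A_ub = np.hstack([Y.T, -X.T])                      # u . y_j - v . x_j <= 0
    b_ub = np.zeros(n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (s + m))
    return -res.fun

# Toy data: 3 DMUs, 1 input, 1 output.
X = np.array([[2.0, 4.0, 5.0]])
Y = np.array([[1.0, 2.0, 2.0]])
print([round(ccr_efficiency(X, Y, j), 3) for j in range(3)])
```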

    Improved two-phase solution strategy for multiobjective fuzzy stochastic linear programming problems with uncertain probability distribution

    A Multiobjective Fuzzy Stochastic Linear Programming (MFSLP) problem in which the linear inequalities on the probabilities are fuzzy is called an MFSLP problem with Fuzzy Linear Partial Information on Probability Distribution (MFSLPPFI). The uncertainty presents unique difficulties in constrained optimization owing to the presence of conflicting goals and the randomness surrounding the data. Most existing solution techniques for MFSLPPFI problems rely heavily on the expectation optimization model, the variance minimization model, the probability maximization model, pessimistic/optimistic values, and compromise solutions under partial uncertainty of the random parameters. Although these approaches recognize that the interval values of the probability distribution are significant, they are restricted to the upper and lower limits of the probability distribution and neglect the interior values. This limitation motivated us to search for more efficient strategies for the MFSLPPFI which address both the fuzziness of the probability distributions and the fuzziness and randomness of the parameters. The proposed strategy consists of two phases: a fuzzy transformation and a stochastic transformation. First, a ranking function is used to transform the MFSLPPFI into a Multiobjective Stochastic Linear Programming problem with Fuzzy Linear Partial Information on Probability Distribution (MSLPPFI). The problem is then transformed into its corresponding Multiobjective Linear Programming (MLP) problem by applying an α-cut technique to the uncertain probability distribution together with linguistic hedges. In addition, Chance Constraint Programming (CCP) and the expectation of the random coefficients are applied to the constraints and the objectives, respectively. Finally, the MLP problem is converted into a single-objective Linear Programming (LP) problem via an Adaptive Arithmetic Average Method (AAAM) and then solved using the simplex method. The algorithm requires fewer iterations and generates results faster than existing solutions. Three realistic examples are tested, showing that the approach used in this study is efficient in solving the MFSLPPFI.
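    As a hedged illustration of the final step only, the sketch below reduces a two-objective LP to a single LP by taking a plain arithmetic average of the objectives and solving with a simplex-based solver; the adaptive weighting of the actual AAAM, and the preceding fuzzy/stochastic transformations, may differ.

```python
# One plausible reading of the aggregation step: average k linear
# objectives c_1, ..., c_k into a single objective before the solve.
import numpy as np
from scipy.optimize import linprog

# Two conflicting linear objectives (both maximised) over x1, x2 >= 0.
C = np.array([[3.0, 1.0],
              [1.0, 2.0]])
c_single = -C.mean(axis=0)          # arithmetic average, negated to minimise

# Crisp constraints assumed to remain after the fuzzy/stochastic
# transformations (e.g. chance constraints at a fixed confidence level).
A_ub = np.array([[1.0, 1.0],
                 [2.0, 1.0]])
b_ub = np.array([4.0, 6.0])

res = linprog(c_single, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
print(res.x, -res.fun)              # optimum at (2, 2), averaged value 7
```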

    Conflicting Objectives in Decisions

    This book deals with quantitative approaches to decision making when conflicting objectives are present. This problem is central to many applications of decision analysis, policy analysis, and operational research in a wide range of fields, for example business, economics, engineering, psychology, and planning. The book surveys different approaches to the same problem area, and each approach is discussed in considerable detail, so that the coverage is both broad and deep. The problem of conflicting objectives is of paramount importance in both planned and market economies, and this book represents a cross-cultural mixture of approaches from many countries to the same class of problem.

    How to Normalize Co-Occurrence Data? An Analysis of Some Well-Known Similarity Measures

    In scientometric research, the use of co-occurrence data is very common. In many cases, a similarity measure is employed to normalize the data. However, there is no consensus among researchers on which similarity measure is most appropriate for normalization purposes. In this paper, we theoretically analyze the properties of similarity measures for co-occurrence data, focusing in particular on four well-known measures: the association strength, the cosine, the inclusion index, and the Jaccard index. We also study the behavior of these measures empirically. Our analysis reveals that there exist two fundamentally different types of similarity measures, namely set-theoretic measures and probabilistic measures. The association strength is a probabilistic measure, while the cosine, the inclusion index, and the Jaccard index are set-theoretic measures. Both our theoretical and our empirical results indicate that co-occurrence data can best be normalized using a probabilistic measure. This provides strong support for the use of the association strength in scientometric research.
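    For reference, all four measures can be written directly in terms of the co-occurrence count c_ij and the occurrence counts s_i and s_j. The sketch below uses the standard forms; the association strength is given up to a constant factor.

```python
# The four similarity measures analysed in the paper, for a co-occurrence
# count c_ij and total occurrence counts s_i and s_j.

def association_strength(c_ij, s_i, s_j):
    """Probabilistic measure: observed co-occurrences relative to the
    number expected if occurrences were independent (up to a constant)."""
    return c_ij / (s_i * s_j)

def cosine(c_ij, s_i, s_j):
    return c_ij / (s_i * s_j) ** 0.5

def inclusion(c_ij, s_i, s_j):
    return c_ij / min(s_i, s_j)

def jaccard(c_ij, s_i, s_j):
    return c_ij / (s_i + s_j - c_ij)

# Two terms occurring 100 and 400 times, co-occurring 40 times:
for f in (association_strength, cosine, inclusion, jaccard):
    print(f.__name__, round(f(40, 100, 400), 4))
```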

    Collected Papers (on Neutrosophic Theory and Applications), Volume VII

    This seventh volume of Collected Papers includes 70 papers comprising 974 pages on (theoretic and applied) neutrosophics, written between 2013 and 2021 by the author, alone or in collaboration with the following 122 co-authors from 22 countries: Mohamed Abdel-Basset, Abdel-Nasser Hussian, C. Alexander, Mumtaz Ali, Yaman Akbulut, Amir Abdullah, Amira S. Ashour, Assia Bakali, Kousik Bhattacharya, Kainat Bibi, R. N. Boyd, Ümit Budak, Lulu Cai, Cenap Özel, Chang Su Kim, Victor Christianto, Chunlai Du, Chunxin Bo, Rituparna Chutia, Cu Nguyen Giap, Dao The Son, Vinayak Devvrat, Arindam Dey, Partha Pratim Dey, Fahad Alsharari, Feng Yongfei, S. Ganesan, Shivam Ghildiyal, Bibhas C. Giri, Masooma Raza Hashmi, Ahmed Refaat Hawas, Hoang Viet Long, Le Hoang Son, Hongbo Wang, Hongnian Yu, Mihaiela Iliescu, Saeid Jafari, Temitope Gbolahan Jaiyeola, Naeem Jan, R. Jeevitha, Jun Ye, Anup Khan, Madad Khan, Salma Khan, Ilanthenral Kandasamy, W.B. Vasantha Kandasamy, Darjan Karabašević, Kifayat Ullah, Kishore Kumar P.K., Sujit Kumar De, Prasun Kumar Nayak, Malayalan Lathamaheswari, Luong Thi Hong Lan, Anam Luqman, Luu Quoc Dat, Tahir Mahmood, Hafsa M. Malik, Nivetha Martin, Mai Mohamed, Parimala Mani, Mingcong Deng, Mohammed A. Al Shumrani, Mohammad Hamidi, Mohamed Talea, Kalyan Mondal, Muhammad Akram, Muhammad Gulistan, Farshid Mofidnakhaei, Muhammad Shoaib, Muhammad Riaz, Karthika Muthusamy, Nabeela Ishfaq, Deivanayagampillai Nagarajan, Sumera Naz, Nguyen Dinh Hoa, Nguyen Tho Thong, Nguyen Xuan Thao, Noor ul Amin, Dragan Pamučar, Gabrijela Popović, S. Krishna Prabha, Surapati Pramanik, Priya R, Qiaoyan Li, Yaser Saber, Said Broumi, Saima Anis, Saleem Abdullah, Ganeshsree Selvachandran, Abdulkadir Sengür, Seyed Ahmad Edalatpanah, Shahbaz Ali, Shahzaib Ashraf, Shouzhen Zeng, Shio Gai Quek, Shuangwu Zhu, Shumaiza, Sidra Sayed, Sohail Iqbal, Songtao Shao, Sundas Shahzadi, Dragiša Stanujkić, Željko Stević, Udhayakumar Ramalingam, Zunaira Rashid, Hossein Rashmanlou, Rajkumar Verma, Luige Vlădăreanu, Victor Vlădăreanu, Desmond Jun Yi Tey, Selçuk Topal, Naveed Yaqoob, Yanhui Guo, Yee Fei Gan, Yingcang Ma, Young Bae Jun, Yuping Lai, Hafiz Abdul Wahab, Wei Yang, Xiaohong Zhang, Edmundas Kazimieras Zavadskas, Lemnaouar Zedam.

    Resource Generation from Structured Documents for Low-density Languages

    The availability and use of electronic resources for both manual and automated language-related processing has increased tremendously in recent years. Nevertheless, many resources still exist only in printed form, restricting their availability and use. This holds especially true for low-density languages, that is, languages with limited electronic resources. For these documents, automated conversion into electronic resources is highly desirable. This thesis focuses on the semi-automated conversion of printed structured documents (dictionaries in particular) into usable electronic representations. In the first part we present an entry tagging system that recognizes, parses, and tags the entries of a printed dictionary to reproduce its structured representation. The system uses the consistent layout and structure of the dictionaries, and the features that impose this structure, to capture and recover lexicographic information. We accomplish this by adapting two methods: one rule-based and one HMM-based. The system is designed to produce results quickly, with minimal human assistance and reasonable accuracy. The use of adaptive transformation-based learning as a post-processor at two points in the system yields significant improvements, even with an extremely small amount of user-provided training data. The second part of this thesis presents Morphology Induction from Noisy Data (MIND), a natural language morphology discovery framework that operates on the limited, noisy data obtained from the conversion process. To use the resulting resources effectively, users must be able to search for the root form of a morphologically deformed variant found in the text; stemming and data-driven methods are not suitable when data are sparse. The approach is instead based on a novel application of string searching algorithms. The evaluations show that MIND can segment words into roots and affixes from the noisy, limited data contained in a dictionary, and that it can extract prefixes, suffixes, circumfixes, and infixes. MIND can also identify morphophonemic changes, i.e., phonemic variations between allomorphs of a morpheme, specifically point-of-affixation stem changes. This, in turn, allows non-native speakers to perform multilingual tasks in applications where responses must be rapid and their knowledge of the language is limited. In addition, the analysis can feed other natural language processing tools requiring lexicons.
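    As a rough illustration of the string-searching idea (not the thesis's actual algorithm), the sketch below aligns two variants of a word, takes their longest common substring as a candidate root, and reads off the leftover material as candidate affixes.

```python
# Illustrative root/affix segmentation by longest common substring.
# This is a generic sketch of the technique family MIND belongs to,
# not a reproduction of the MIND framework itself.
from difflib import SequenceMatcher

def candidate_segmentation(w1, w2):
    """Propose a shared root and candidate affixes for two word variants."""
    m = SequenceMatcher(None, w1, w2).find_longest_match(0, len(w1), 0, len(w2))
    root = w1[m.a:m.a + m.size]
    affixes = [w1[:m.a], w1[m.a + m.size:], w2[:m.b], w2[m.b + m.size:]]
    return root, [a for a in affixes if a]

# Two English variants sharing the stem "happi":
print(candidate_segmentation("unhappiness", "happier"))
# -> ('happi', ['un', 'ness', 'er'])
```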

    Methodological aspects of a decision aid for transportation choices under uncertainty

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Civil Engineering, 1982. By Hani Sobhi Mahmassani. Microfiche copy available in Archives and Engineering. Bibliography: leaves 253-266.

    A Hybrid Approach to the Sentiment Analysis Problem at the Sentence Level

    This doctoral thesis addresses a number of challenges in investigating and devising solutions to the Sentiment Analysis problem, a subset of the discipline known as Natural Language Processing (NLP), following a path that differs from the most common approaches currently in use. The majority of research and application building in Sentiment Analysis (SA) / Opinion Mining (OM) has been conducted using supervised machine learning techniques. It is our intention to prove that a hybrid approach merging fuzzy sets, a solid sentiment lexicon, traditional NLP techniques, and aggregation methods will compound the power of all the positive aspects of these tools. In this thesis we prove three main claims:

    1. A Hybrid Classification Model based on the techniques mentioned above is capable of (a) performing as well as or better than established supervised machine learning techniques, namely Naïve Bayes and Maximum Entropy (ME), when the latter are used as the sole classification method for calculating subjectivity polarity, and (b) computing the intensity of the polarity previously estimated.

    2. Cross-ratio uninorms can be used to effectively fuse the classification outputs of several algorithms, producing a compensatory effect.

    3. The Induced Ordered Weighted Averaging (IOWA) operator is a very good choice for modelling the opinion of the majority (consensus) when the outputs of a number of classification methods are combined.

    For academic and experimental purposes we have built the proposed methods and associated prototypes in an iterative fashion. Step 1: we start with the Hybrid Standard Classification (HSC) method, responsible for subjectivity polarity determination. Step 2: we continue with the Hybrid Advanced Classification (HAC) method, which computes the polarity intensity of opinions/sentiments. Step 3: in closing, we present two methods that produce a semantic-specific aggregation of two or more classification methods, as a complement to the HSC/HAC methods when the latter cannot generate a classification value or when we are looking for an aggregation that implies consensus, respectively: the Hybrid Advanced Classification with Aggregation by Cross-ratio Uninorm (HACACU) method
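    As a hedged illustration of the aggregation machinery in claims 2 and 3, the sketch below implements the standard cross-ratio uninorm with neutral element 0.5 and a textbook IOWA operator; the thesis's exact weights and inducing variable are not reproduced here.

```python
# Standard cross-ratio uninorm and a textbook IOWA operator, shown as
# generic fusion tools; the thesis's parameterisation may differ.

def cross_ratio_uninorm(x, y):
    """U(x, y) = xy / (xy + (1-x)(1-y)), neutral element 0.5.
    Inputs both above (or both below) 0.5 reinforce each other,
    which is the compensatory effect mentioned in the abstract."""
    num = x * y
    den = num + (1 - x) * (1 - y)
    return 0.0 if den == 0 else num / den

def iowa(pairs, weights):
    """Induced OWA: reorder the arguments by the inducing variable
    (descending), then take the weighted sum."""
    ordered = [a for _, a in sorted(pairs, key=lambda p: p[0], reverse=True)]
    return sum(w * a for w, a in zip(weights, ordered))

# Fuse two classifier polarity scores that both lean positive:
print(cross_ratio_uninorm(0.7, 0.6))            # about 0.778: reinforcement

# Majority-style IOWA with confidence as the inducing variable:
pairs = [(0.9, 0.8), (0.6, 0.55), (0.4, 0.3)]   # (confidence, polarity)
print(iowa(pairs, weights=[0.5, 0.3, 0.2]))     # 0.625
```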