
    Measuring Technical Efficiency of Dairy Farms with Imprecise Data: A Fuzzy Data Envelopment Analysis Approach

    This article integrates fuzzy set theory into the Data Envelopment Analysis (DEA) framework to compute technical efficiency scores when input and output data are imprecise. The underlying assumption in conventional DEA is that input and output data are measured with precision. However, production agriculture takes place in an uncertain environment and, in some situations, input and output data may be imprecise. We present an approach to measuring efficiency when the data are known to lie within specified intervals and empirically illustrate this approach using a group of 34 dairy producers in Pennsylvania. Compared to conventional DEA scores, which are point estimates, the computed fuzzy efficiency scores allow the decision maker to trace the performance of a decision-making unit at different possibility levels.
    Keywords: fuzzy set theory, Data Envelopment Analysis, membership function, α-cut level, technical efficiency, Farm Management, Production Economics, Productivity Analysis, Research Methods/Statistical Methods, Risk and Uncertainty. JEL codes: D24, Q12, C02, C44, C61
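    The abstract describes reducing fuzzy data to crisp intervals at each α-cut. Below is a minimal sketch of that idea in the spirit of Kao and Liu's interval approach (not necessarily the authors' exact formulation): triangular fuzzy inputs and outputs yield an optimistic, upper-bound input-oriented CCR efficiency at several α levels. All data are toy values.

```python
# Hedged sketch of alpha-cut fuzzy DEA. Triangular fuzzy data (low, mid,
# high) reduce to crisp intervals at each alpha-cut; the optimistic score
# for a DMU uses its favourable interval endpoints while peers take the
# unfavourable ones.
import numpy as np
from scipy.optimize import linprog

def alpha_interval(tri, alpha):
    """Alpha-cut [lo, hi] of a triangular fuzzy number (low, mid, high)."""
    low, mid, high = tri
    return low + alpha * (mid - low), high - alpha * (high - mid)

def ccr_efficiency(x_eval, y_eval, X, Y):
    """Input-oriented CCR: min theta s.t. X@lam <= theta*x_eval, Y@lam >= y_eval."""
    n = X.shape[1]
    c = np.r_[1.0, np.zeros(n)]                          # minimise theta
    A_in = np.hstack([-x_eval[:, None], X])              # X@lam - theta*x_eval <= 0
    A_out = np.hstack([np.zeros((Y.shape[0], 1)), -Y])   # -Y@lam <= -y_eval
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(X.shape[0]), -y_eval],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

# Toy data: 3 farms, one fuzzy input and one fuzzy output each.
X_tri = [(2, 3, 4), (4, 5, 7), (3, 4, 5)]
Y_tri = [(1, 2, 3), (2, 3, 4), (2, 2, 3)]
for alpha in (0.0, 0.5, 1.0):
    x_lo, x_hi = zip(*(alpha_interval(t, alpha) for t in X_tri))
    y_lo, y_hi = zip(*(alpha_interval(t, alpha) for t in Y_tri))
    X_peer = np.array([x_hi], dtype=float); X_peer[0, 0] = x_lo[0]  # farm 0 at its
    Y_peer = np.array([y_lo], dtype=float); Y_peer[0, 0] = y_hi[0]  # favourable bounds
    theta = ccr_efficiency(np.array([x_lo[0]]), np.array([y_hi[0]]), X_peer, Y_peer)
    print(f"alpha={alpha:.1f}  upper-bound efficiency of farm 0: {theta:.3f}")
```

    Repeating the solve with the bounds reversed gives the pessimistic score, so each DMU receives an efficiency interval per α level rather than a single point estimate.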

    Optimization of fuzzy analogy in software cost estimation using linguistic variables

    A longstanding objective of the software engineering community has been the development of useful models that explain the software life cycle and accurately estimate the cost of software development. Although there are many methods for cost estimation, analogy-based approaches are deficient in handling datasets containing categorical variables. Given the nature of the software engineering domain, project attributes are often measured in terms of linguistic values such as very low, low, high, and very high. The imprecise nature of such values introduces uncertainty and vagueness into their interpretation. However, there is no efficient method that can deal directly with categorical variables and tolerate such imprecision and uncertainty without resorting to classical interval and numeric-value approaches. In this paper, a new optimization approach based on fuzzy logic, linguistic quantifiers, and analogy-based reasoning is proposed to improve effort estimation for software projects described by either numerical or categorical data. The proposed method is validated empirically on the historical NASA dataset. The results, analyzed using the prediction criterion, indicate that the proposed method produces more explainable results than other machine learning methods.
    Comment: 14 pages, 8 figures; Journal of Systems and Software, 2011. arXiv admin note: text overlap with arXiv:1112.3877 by another author
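    As a hedged illustration of analogy with linguistic values (a generic sketch, not the paper's exact optimization with linguistic quantifiers), the snippet below encodes the terms as triangular fuzzy sets, scores project similarity by the mean max-min overlap across attributes, and predicts effort as a similarity-weighted mean of historical efforts. All membership functions and project data are made up.

```python
import numpy as np

TERMS = {  # hypothetical membership functions on a normalised [0, 1] scale
    "very low": (0.00, 0.00, 0.25), "low": (0.00, 0.25, 0.50),
    "high":     (0.50, 0.75, 1.00), "very high": (0.75, 1.00, 1.00),
}

def tri(x, abc):
    """Triangular membership with flat shoulders when a == b or b == c."""
    a, b, c = abc
    left = (x - a) / (b - a) if b > a else (x >= a).astype(float)
    right = (c - x) / (c - b) if c > b else (x <= c).astype(float)
    return np.clip(np.minimum(left, right), 0.0, 1.0)

GRID = np.linspace(0.0, 1.0, 201)

def overlap(t1, t2):
    """Possibility sup_x min(mu1, mu2): 1 for identical terms, 0 for distant ones."""
    return float(np.max(np.minimum(tri(GRID, TERMS[t1]), tri(GRID, TERMS[t2]))))

def similarity(p, q):
    return float(np.mean([overlap(a, b) for a, b in zip(p, q)]))

# Historical projects: linguistic attribute vector and known effort (person-months).
history = [(("low", "high"), 12.0), (("high", "very high"), 40.0),
           (("very low", "low"), 6.0)]
new_project = ("low", "very high")
sims = np.array([similarity(new_project, attrs) for attrs, _ in history])
efforts = np.array([e for _, e in history])
print(f"estimated effort: {np.sum(sims * efforts) / np.sum(sims):.1f} person-months")
```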

    Real-time Loss Estimation for Instrumented Buildings

    Motivation. A growing number of buildings have been instrumented to measure and record earthquake motions and to transmit these records to seismic-network data centers to be archived and disseminated for research purposes. At the same time, sensors are growing smaller, less expensive to install, and capable of sensing and transmitting other environmental parameters in addition to acceleration. Finally, recently developed performance-based earthquake engineering methodologies employ structural-response information to estimate probabilistic repair costs, repair durations, and other metrics of seismic performance. The opportunity therefore presents itself to combine these developments into the capability to estimate automatically, in near-real-time, the probabilistic seismic performance of an instrumented building shortly after the cessation of strong motion. We refer to this opportunity as (near-) real-time loss estimation (RTLE).
    Methodology. This report presents a methodology for RTLE for instrumented buildings. Seismic performance is measured in terms of probabilistic repair cost, precise location of likely physical damage, operability, and life safety. The methodology uses the instrument recordings and a Bayesian state-estimation algorithm called a particle filter to estimate the probabilistic structural response of the system, in terms of member forces and deformations. The structural response estimate is then used as input to component fragility functions to estimate the probabilistic damage state of structural and nonstructural components. The probabilistic damage state can be used to direct structural engineers to likely locations of physical damage, even if they are concealed behind architectural finishes. The damage state is used with construction cost-estimation principles to estimate probabilistic repair cost. It is also used as input to a quantified, fuzzy-set version of the FEMA-356 performance-level descriptions to estimate probabilistic safety and operability levels.
    CUREE demonstration building. The procedure for estimating damage locations, repair costs, and post-earthquake safety and operability is illustrated in parallel demonstrations by CUREE and Kajima research teams. The CUREE demonstration uses a real 1960s-era, 7-story, nonductile reinforced-concrete moment-frame building located in Van Nuys, California. The building is instrumented with 16 channels at five levels: ground level, floors 2, 3, and 6, and the roof. We used the records obtained after the 1994 Northridge earthquake to hindcast performance in that earthquake, analyzing the building in its condition prior to the earthquake. It is found that, while hindcasting of the overall system performance level was excellent, prediction of detailed damage locations was poor, implying that either actual conditions differed substantially from those shown on the structural drawings, or inappropriate fragility functions were employed, or both. We also found that Bayesian updating of the structural model using observed structural response above the base of the building adds little information to the performance prediction, probably because structural uncertainties have only a secondary effect on performance uncertainty compared with the uncertainty in assembly damageability as quantified by the fragility functions. The implication is that real-time loss estimation is not sensitive to structural uncertainties (saving costly multiple simulations of structural response) and does not benefit significantly from measuring instruments beyond those at the base of the building.
    Kajima demonstration building. The Kajima demonstration uses a real 1960s-era office building in Kobe, Japan. The building, a 7-story reinforced-concrete shearwall building, was not instrumented in the 1995 Kobe earthquake, so instrument recordings are simulated. The building is analyzed in its condition prior to the earthquake. It is found that, while hindcasting of the overall repair cost was excellent, prediction of detailed damage locations was poor, again implying either that as-built conditions differ substantially from those shown on structural drawings, or that inappropriate fragility functions were used, or both. We find that the parameters of the detailed particle filter needed significant tuning, which would be impractical in actual application; work is needed to prescribe values of these parameters in general.
    Opportunities for implementation and further research. Because much of the cost of applying this RTLE algorithm results from the cost of instrumentation and the effort of setting up a structural model, the readiest application would be to instrumented buildings whose structural models are already available, applying the methodology to important facilities. It would be useful to study under what conditions RTLE would be economically justified. Two other interesting possibilities for further study are (1) updating performance estimates using readily observable damage, and (2) quantifying the value of information for expensive inspections: for example, if one inspects a connection with a modeled 50% failure probability and finds that the connection is undamaged, is it necessary to examine one with a 10% failure probability?
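    To make the loss-estimation step concrete, here is a hedged sketch of how posterior response samples (such as peak storey drift from a particle filter) can be pushed through lognormal component fragility functions to an expected repair cost. The component names, medians, dispersions, and unit costs below are illustrative placeholders, not values from the report.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
# Stand-in for the particle filter's posterior over peak drift ratio.
drift_particles = rng.lognormal(mean=np.log(0.01), sigma=0.3, size=5000)

# Damage states ordered from least to most severe.
fragility = [{"ds": "cracking", "median": 0.005, "beta": 0.4, "cost": 2_000.0},
             {"ds": "spalling", "median": 0.015, "beta": 0.4, "cost": 15_000.0}]

def p_exceed(drift, median, beta):
    """P(damage state >= ds | drift) for a lognormal fragility curve."""
    return norm.cdf(np.log(drift / median) / beta)

expected_cost = 0.0
for k, f in enumerate(fragility):
    p_in_state = p_exceed(drift_particles, f["median"], f["beta"])
    if k + 1 < len(fragility):   # P(exactly ds) = P(>= ds) - P(>= next ds)
        nxt = fragility[k + 1]
        p_in_state = p_in_state - p_exceed(drift_particles, nxt["median"], nxt["beta"])
    # Averaging over particles integrates out the response uncertainty.
    expected_cost += float(np.mean(p_in_state)) * f["cost"]
print(f"expected repair cost for this component: ${expected_cost:,.0f}")
```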

    A new fuzzy set merging technique using inclusion-based fuzzy clustering

    This paper proposes a new method of merging parameterized fuzzy sets based on clustering in parameter space, taking into account the degree of inclusion of each fuzzy set in the cluster prototypes. The merging method is applied to fuzzy rule base simplification by automatically replacing the fuzzy sets corresponding to a given cluster with the set pertaining to the cluster prototype. The feasibility and the performance of the proposed method are studied using an application in mobile robot navigation. The results indicate that the proposed merging and rule base simplification approach leads to good navigation performance in the application considered and to fuzzy models that are interpretable by experts. In this paper, we concentrate mainly on fuzzy systems with Gaussian membership functions, but the general approach can also be applied to other parameterized fuzzy sets.
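    A minimal sketch of the inclusion idea, under assumed formulas rather than the paper's exact ones: each Gaussian membership function is compared with cluster prototypes obtained in the (centre, width) parameter space, and a set is replaced by the prototype that includes it to a sufficient degree.

```python
import numpy as np

def gauss(x, c, s):
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def inclusion(a, b, grid):
    """Degree to which fuzzy set a = (c, s) is included in b: |a ∩ b| / |a|."""
    mu_a, mu_b = gauss(grid, *a), gauss(grid, *b)
    return np.sum(np.minimum(mu_a, mu_b)) / np.sum(mu_a)

grid = np.linspace(-5.0, 5.0, 1001)
sets = [(-2.0, 0.5), (-1.9, 0.6), (0.0, 0.4), (2.1, 0.5), (2.0, 0.45)]
prototypes = [(-1.95, 0.55), (0.0, 0.4), (2.05, 0.5)]  # e.g. from clustering the params

merged = []
for s in sets:
    incl = [inclusion(s, p, grid) for p in prototypes]
    best = int(np.argmax(incl))
    # Replace the set by its best prototype when inclusion exceeds a threshold.
    merged.append(prototypes[best] if incl[best] >= 0.8 else s)
print(merged)  # near-duplicate antecedent sets collapse onto shared prototypes
```

    Replacing near-duplicate antecedent sets with one shared prototype is what shrinks the rule base while keeping the rules readable to experts.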

    POWER TRANSFORMER HEALTH INDEX ESTIMATION USING EVIDENTIAL REASONING

    A market-oriented power distribution system requires a well-planned budget, with scheduled preventive and corrective maintenance and the replacement of units in unsatisfactory condition. In recent years, the concept of the transformer health index has been adopted, as an integral part of resource management, for the condition assessment and ranking of power transformers (ETs). However, because of the lack of regular measurements and inspections, the confidence in the health index value is greatly reduced. This paper proposes a novel methodology for ET condition assessment and lifetime extension through the establishment of priorities for control and maintenance. The solution is based on an upgraded health index, where the confidence in the measurement results is calculated using an evidential reasoning algorithm based on Dempster-Shafer theory. A novel two-level hierarchical model of the ET health index is proposed, with real weighting-factor values. In this way, the methodology for ET ranking includes the value of the information available to describe the current ET state. The proposed methodology is tested on real data from an installed ET and compared with the traditional health index calculation.
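    The evidential-reasoning core is Dempster's rule of combination. The sketch below combines two hypothetical mass assignments over transformer condition states; mass placed on the full frame encodes ignorance from missing or outdated measurements, which is what lowers confidence in the resulting index. The diagnostics and numbers are illustrative, not the paper's.

```python
from itertools import product

FRAME = frozenset({"good", "fair", "poor"})

def combine(m1, m2):
    """Dempster's rule: m(A) proportional to the sum of m1(B)*m2(C) over B & C = A."""
    out, conflict = {}, 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            out[inter] = out.get(inter, 0.0) + mb * mc
        else:
            conflict += mb * mc
    return {a: v / (1.0 - conflict) for a, v in out.items()}

# Hypothetical evidence from two diagnostics (e.g. gas analysis, oil tests).
m_dga = {frozenset({"fair"}): 0.6, frozenset({"fair", "poor"}): 0.2, FRAME: 0.2}
m_oil = {frozenset({"good", "fair"}): 0.5, frozenset({"poor"}): 0.1, FRAME: 0.4}
for state, mass in sorted(combine(m_dga, m_oil).items(), key=lambda kv: -kv[1]):
    print(sorted(state), round(mass, 3))
```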

    The application of ANFIS prediction models for thermal error compensation on CNC machine tools

    Thermal errors can have significant effects on CNC machine tool accuracy. The errors come from thermal deformations of the machine elements caused by heat sources within the machine structure or from ambient temperature change. The effect of temperature can be reduced by error avoidance or numerical compensation. The performance of a thermal error compensation system essentially depends upon the accuracy and robustness of the thermal error model and its input measurements. This paper first reviews different methods of designing thermal error models, before concentrating on employing an adaptive neuro-fuzzy inference system (ANFIS) to design two thermal prediction models: ANFIS by dividing the data space into rectangular sub-spaces (ANFIS-Grid model) and ANFIS by using the fuzzy c-means clustering method (ANFIS-FCM model). Grey system theory is used to obtain the influence ranking of all possible temperature sensors on the thermal response of the machine structure. All the influence weightings of the thermal sensors are clustered into groups using the fuzzy c-means (FCM) clustering method, and the groups are then further reduced by correlation analysis. A study of a small CNC milling machine is used to provide training data for the proposed models and then to provide independent testing data sets. The results of the study show that the ANFIS-FCM model is superior in terms of predictive accuracy, with the benefit of fewer rules. The residual value of the proposed model is smaller than ±4 μm. This combined methodology can provide improved accuracy and robustness of a thermal error compensation system.
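    As a hedged sketch of the sensor-grouping step, the snippet below implements a compact fuzzy c-means and clusters made-up influence weightings (such as the grey-theory ranking might produce) so that one representative sensor per group could feed the ANFIS-FCM model.

```python
import numpy as np

def fcm(X, c, m=2.0, iters=100, seed=0):
    """Minimal FCM: returns (centres, membership matrix U of shape c x n)."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=X.shape[0]).T    # memberships sum to 1
    for _ in range(iters):
        W = U ** m
        centres = (W @ X) / W.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centres[:, None, :], axis=2) + 1e-12
        U = 1.0 / np.sum((d[:, None, :] / d[None, :, :]) ** (2.0 / (m - 1.0)), axis=1)
    return centres, U

# Hypothetical influence weighting of each temperature sensor on thermal drift.
weights = np.array([[0.91], [0.88], [0.52], [0.49], [0.13], [0.11], [0.90]])
centres, U = fcm(weights, c=3)
print("group centres:", centres.ravel().round(2))
print("hard assignment per sensor:", U.argmax(axis=0))
```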

    What attracts vehicle consumers’ buying: A Saaty scale-based VIKOR (SSC-VIKOR) approach from after-sales textual perspective?

    Purpose: The booming development of e-commerce has stimulated vehicle consumers to express individual reviews through online forums. The purpose of this paper is to probe into vehicle consumer consumption behavior and make recommendations for potential consumers from the viewpoint of textual comments.
    Design/methodology/approach: A big data analytic-based approach is designed to discover vehicle consumer consumption behavior from an online perspective. To reduce the subjectivity of expert-based approaches, a parallel Naïve Bayes approach is designed for the sentiment analysis, and the Saaty scale-based (SSC) scoring rule is employed to obtain the specific sentiment value of each attribute class, contributing to multi-grade sentiment classification. To achieve intelligent recommendation for potential vehicle customers, a novel SSC-VIKOR approach is developed to prioritize vehicle brand candidates from a big data analytical viewpoint.
    Findings: The big data analytics indicate that "cost-effectiveness" is the most important factor that vehicle consumers care about, and the data mining results enable automakers to better understand consumer consumption behavior.
    Research limitations/implications: The case study illustrates the effectiveness of the integrated method, contributing to much more precise operations management in marketing strategy, quality improvement and intelligent recommendation.
    Originality/value: Research on consumer consumption behavior is usually based on survey methods, and most previous studies of comment analysis focus on binary analysis. The hybrid SSC-VIKOR approach is developed to fill this gap from the big data perspective.
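    For the ranking step, a minimal VIKOR sketch is shown below: given per-attribute sentiment scores for each brand (invented numbers standing in for the Naïve Bayes and Saaty-scale outputs described above), it computes the compromise index Q and ranks brands by it.

```python
import numpy as np

def vikor(F, w, v=0.5):
    f_best, f_worst = F.max(axis=0), F.min(axis=0)
    norm = (f_best - F) / (f_best - f_worst + 1e-12)
    S = (w * norm).sum(axis=1)          # group utility
    R = (w * norm).max(axis=1)          # individual regret
    Q = (v * (S - S.min()) / (S.max() - S.min() + 1e-12)
         + (1 - v) * (R - R.min()) / (R.max() - R.min() + 1e-12))
    return Q                            # smaller Q = better compromise

F = np.array([[0.82, 0.64, 0.71],      # brand A: rows are brands,
              [0.75, 0.80, 0.60],      # brand B: columns are attributes,
              [0.68, 0.72, 0.90]])     # brand C: higher score is better
w = np.array([0.5, 0.2, 0.3])          # "cost-effectiveness" weighted highest
Q = vikor(F, w)
print("brand ranking, best first:", np.argsort(Q))
```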

    Downtime Estimation of Buildings and Infrastructures Using Fuzzy Logic

    Extreme natural events (e.g. earthquakes, floods, fire) are major sources of threat to society and infrastructure. Communities that are able to absorb impacts, recover quickly after disasters, and adapt to adverse events are considered resilient. Economic and public health consequences of natural disasters have increased over time and have motivated discussion of resilience management worldwide. A key parameter in estimating the resilience of buildings and infrastructure is the downtime (DT). Several strategies have been investigated to reduce disaster risk and evaluate the recovery time of buildings and infrastructure following dangerous events. However, the estimation of DT remains challenging because of the uncertainty and vagueness of the available data. This paper introduces a method to predict the DT of buildings and infrastructure following earthquakes through a fuzzy logic hierarchical scheme. Expert-based systems can help deal with uncertainty, randomness, and limited data availability in the context of risk analysis and management. Fuzzy theory describes the behavior of a complex system through linguistic variables and is based on deterministic functions. Two different DT models are introduced in this work, one for residential buildings and one for infrastructure, since the input parameters involved in the estimation process differ.
    In the first model, the DT is divided into three main components: downtime due to the actual damage (DT1), downtime caused by irrational delays (DT2), and downtime due to utilities disruption (DT3). DT1 is evaluated by relating the building damageability to given repair times of the building’s components. A rapid visual screening survey is filled out by an expert to acquire information about the analyzed building. Fuzzy logic is then implemented to determine the building vulnerability, which is combined with a given earthquake intensity to obtain the building damageability. DT2 and DT3 are estimated using the REDi Guidelines: DT2 accounts for irrational delays through a specific repair sequence, which defines the order in which components are repaired, while DT3 depends on the site seismic hazard and on the infrastructure vulnerability. The downtime of the building is finally estimated by combining the three components above, identifying three recovery states: re-occupancy, functional recovery, and full recovery.
    For estimating the recovery time of buried infrastructure, 31 indicators have been selected from previous publications and from studies of programs and policies intended to reduce risk and improve recovery. The DT model is designed by aggregating four downtime indices: exposed infrastructure, earthquake intensity, human resources, and infrastructure type. The collected information on the potentially damaged lifelines is aggregated into a fuzzy hierarchical scheme and combined to obtain the DT. The methodology can be used to effectively support decision-makers in managing and minimizing the impacts of earthquakes and in recovering damaged infrastructure promptly.
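    As a hedged sketch of one node of such a fuzzy hierarchical scheme, the snippet below maps a building damageability score through Mamdani-style rules to a defuzzified DT1 in days and adds placeholder DT2 and DT3 delays. All membership functions, rules, and delay values are illustrative assumptions, not the paper's calibrated ones.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership with flat shoulders when a == b or b == c."""
    x = np.asarray(x, dtype=float)
    left = (x - a) / (b - a) if b > a else (x >= a).astype(float)
    right = (c - x) / (c - b) if c > b else (x <= c).astype(float)
    return np.clip(np.minimum(left, right), 0.0, 1.0)

days = np.linspace(0.0, 400.0, 801)     # output universe: repair days
out_sets = {"short": tri(days, 0, 0, 120),
            "medium": tri(days, 60, 180, 300),
            "long": tri(days, 220, 400, 400)}

def dt1(damageability):
    """Rules: low damage -> short repair, moderate -> medium, high -> long."""
    fire = {"short": tri(damageability, 0.0, 0.0, 0.4),
            "medium": tri(damageability, 0.2, 0.5, 0.8),
            "long": tri(damageability, 0.6, 1.0, 1.0)}
    agg = np.zeros_like(days)
    for label, strength in fire.items():
        agg = np.maximum(agg, np.minimum(strength, out_sets[label]))  # min-max
    return float(np.sum(agg * days) / np.sum(agg))  # centroid defuzzification

dt2, dt3 = 90.0, 30.0   # placeholder irrational-delay and utility-disruption days
print(f"functional recovery estimate: {dt1(0.55) + dt2 + dt3:.0f} days")
```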