    Fuzzy covering based rough sets revisited

    In this paper, we review four fuzzy extensions of the so-called tight pair of covering-based rough set approximation operators. Furthermore, we propose two new extensions of the tight pair: for the first model, we apply the technique of representation by levels to define the approximation operators, while the second model is an intuitive extension of the crisp operators. For all six models, we study which theoretical properties they satisfy. Moreover, we discuss the interrelationships between the models.
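
    To make concrete the crisp starting point that these fuzzy models extend, the sketch below implements one commonly studied element-based pair of covering-based approximation operators (often referred to as the tight pair); the exact definitions adopted in the paper may differ, and the universe, covering, and target set are invented examples.

        # Minimal sketch (assumed crisp formulation): an element-based pair of
        # covering-based approximations over a finite universe. Fuzzy extensions
        # would replace sets by fuzzy sets and inclusion/overlap by implicators
        # and conjunctors; this is only the crisp baseline.
        def tight_lower(universe, covering, A):
            # x is in the lower approximation if some covering element
            # containing x is entirely included in A
            return {x for x in universe
                    if any(x in K and K <= A for K in covering)}

        def tight_upper(universe, covering, A):
            # dual operator: every covering element containing x overlaps A
            return {x for x in universe
                    if all(x not in K or (K & A) for K in covering)}

        U = {1, 2, 3, 4}
        C = [{1, 2}, {2, 3}, {3, 4}]   # an invented covering of U
        A = {1, 2, 3}
        print(tight_lower(U, C, A))    # {1, 2, 3}
        print(tight_upper(U, C, A))    # {1, 2, 3, 4}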

    Induction, complexity, and economic methodology

    This paper focuses on induction, because the supposed weaknesses of that process are the main reason for favouring falsificationism, which plays an important part in scientific methodology generally; the paper is part of a wider study of economic methodology. The standard objections to, and paradoxes of, induction are reviewed, and this leads to the conclusion that the supposed ‘problem’ or ‘riddle’ of induction is a false one. It is an artefact of two assumptions: that the classic two-valued logic (CL) is appropriate for the contexts in which induction is relevant; and that it is the touchstone of rational thought. The status accorded to CL is the result of historical and cultural factors. The material we need to reason about falls into four distinct domains; these are explored in turn, while progressively relaxing the restrictions that are essential to the valid application of CL. The restrictions include the requirement for a pre-existing, independently-guaranteed classification, into which we can fit all new cases with certainty; and non-ambiguous relationships between antecedents and consequents. Natural kinds, determined by the existence of complex entities whose characteristics cannot be unbundled and altered in a piecemeal, arbitrary fashion, play an important part in the review; so also does fuzzy logic (FL). These are used to resolve two famous paradoxes about induction (the grue and raven paradoxes); and the case for believing that conventional logic is a subset of fuzzy logic is outlined. The latter disposes of all questions of justifying induction deductively. The concept of problem structure is used as the basis for a structured concept of rationality that is appropriate to all four of the domains mentioned above. The rehabilitation of induction supports an alternative definition of science: that it is the business of developing networks of contrastive, constitutive explanations of reproducible, inter-subjective (‘objective’) data. Social and psychological obstacles ensure the progress of science is slow and convoluted; however, the relativist arguments against such a project are rejected.

    Keywords: induction; economics; methodology; complexity

    Trial by Traditional Probability, Relative Plausibility, or Belief Function?

    It is almost incredible that no one has ever formulated an adequate model for applying the standard of proof. What does the law call for? The usual formulation is that the factfinder must roughly test the finding on a scale of likelihood. So, the finding in a civil case must at least be more likely than not or, for the theoretically adventuresome, more than fifty percent probable. Yet everyone concedes that this formulation captures neither how human factfinders actually work nor, more surprisingly, how theory tells us that factfinders should work. An emerging notion that the factfinder should compare the plaintiff’s story to the defendant’s story might be a step forward, but this relative plausibility conjecture has its problems. I contend instead that the mathematical theory of belief functions provides an alternative without those problems, and that the law in fact conforms to this theory. Under it, the standards of proof reveal themselves as instructions for the factfinder to compare the affirmative belief in the finding to any belief in its contradiction, but only after setting aside the range of belief that imperfect evidence leaves uncommitted. Accordingly, rather than requiring a civil case’s elements to exceed fifty percent or comparing best stories, belief functions focus on whether the perhaps smallish imprecise belief exceeds its smallish imprecise contradiction. Belief functions extend easily to the other standards of proof. Moreover, belief functions nicely clarify the workings of burdens of persuasion and production.
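
    The comparison described above can be illustrated with a toy Dempster-Shafer-style mass assignment; the numbers and names below are invented for illustration and are not taken from the article.

        # Toy sketch of the belief-function reading of the civil standard of proof.
        # Mass is split between the claim, its contradiction, and the portion
        # that imperfect evidence leaves uncommitted.
        mass = {
            "claim": 0.35,          # affirmative belief in the finding (made-up)
            "contradiction": 0.20,  # belief in its negation (made-up)
            "uncommitted": 0.45,    # belief left unassigned by imperfect evidence
        }
        assert abs(sum(mass.values()) - 1.0) < 1e-9

        def civil_finding(m):
            # On this reading, preponderance of the evidence sets the uncommitted
            # mass aside and asks only whether belief in the claim exceeds belief
            # in its contradiction, even if both are well below fifty percent.
            return m["claim"] > m["contradiction"]

        print(civil_finding(mass))  # True: 0.35 > 0.20 despite 0.45 uncommitted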

    Enabling security and risk-based operation of container line supply chains under high uncertainties

    Container supply chains are vulnerable to many risks. Vulnerability can be defined as exposure to serious disturbances arising from risks within the supply chain as well as risks external to it; it can also be defined as exposure to serious disturbances arising from a hazard or a threat. Containers are one of the major sources of security concern and have been used, for example, to smuggle illegal immigrants, weapons, and drugs. The consequences of the use of a weapon of mass destruction, or of the discovery of such a device in a container, are serious: estimates suggest that a weapon-of-mass-destruction explosion and the resulting port closure could cost billions of dollars. The annual cost of container losses resulting from serious disturbances arising from hazards is estimated at $500 million. The literature review, historical failure data, and statistical analysis of containership accidents from a safety point of view clearly indicate that container cargo damage, machinery failure, collision, grounding, fire/explosion, and contact are the most significant accident categories, with high percentages of occurrence. Another important finding from the literature review is that the most significant basic event contributing to supply chain vulnerability is human error. Therefore, firstly, this research makes full use of the advantages of Evidential Reasoning (ER) and further develops and extends Fuzzy Evidential Reasoning (FER) through a conceptual and sound methodology for the assessment of a seafarer's reliability. Accordingly, control options to enhance seafarers' reliability are suggested. The proposed methodology enables decision makers to measure the reliability of a seafarer before his or her assignment to any activity and during his or her seafaring period. Secondly, this research makes full use of the advantages of Bayesian Networks (BNs) and further develops and extends Fuzzy Bayesian Networks (FBNs) and a "symmetric method" through a conceptual and sound methodology for the assessment of human reliability. Furthermore, an FBN model (i.e. a dependency network), capable of illustrating the dependency among the variables, is constructed. By exploiting the proposed FBN model, a general equation for the reduction of human reliability attributable to a person's continuous hours of wakefulness, acute sleep loss, and cumulative sleep debt is formulated and tested.

    A container supply chain includes dozens of stakeholders who can physically come into contact with containers and their contents and who are potentially involved in container trade and transportation. Security-based disruptions can occur at various points along the supply chain. Experience has shown that a limited percentage of inspections, coupled with a targeted approach based on risk analysis, can provide an acceptable security level. Thus, in order not to hamper the logistics process in an intolerable manner, the number of physical checks should be chosen cautiously. Thirdly, a conceptual and sound methodology (i.e. an FBN model) for evaluating a container's security score, based on the importer security filing, shipping documents, ocean or sea carriers' reliability, and the security scores of various commercial operators and premises, is developed. Accordingly, control options to avoid unnecessary delays and security scanning are suggested.

    Finally, a decision-making model for assessing the security level of a port associated with the ship/port interface, based on the security scores of the ship's cargo containers, is developed. It is further suggested that, rather than scanning all import cargo containers, one realistic way to secure the supply chain, given the lack of information and the number of variables involved, is to enhance ocean or sea carriers' reliability by enhancing the reliability of their ship staff. Accordingly, a decision-making model for cost-benefit analysis (CBA) is developed.
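
    As a loose illustration of how an FER-style assessment might combine evidence about a seafarer, the sketch below aggregates weighted belief degrees over fuzzy reliability grades; it is a deliberately simplified stand-in for the evidential reasoning algorithm used in the thesis, and every criterion, grade, weight, and number is invented.

        # Simplified sketch (not the thesis's FER algorithm): each criterion rates
        # a seafarer with belief degrees over fuzzy grades; criteria are combined
        # by a weighted average, with any unassigned belief reported as "unknown".
        GRADES = ("low", "medium", "high")

        assessments = {                      # invented belief degrees per grade
            "competence": {"low": 0.1, "medium": 0.3, "high": 0.6},
            "fatigue":    {"low": 0.4, "medium": 0.4, "high": 0.1},
            "stress":     {"low": 0.2, "medium": 0.5, "high": 0.2},
        }
        weights = {"competence": 0.5, "fatigue": 0.3, "stress": 0.2}  # sum to 1

        def aggregate(assess, w):
            combined = {g: 0.0 for g in GRADES}
            for criterion, beliefs in assess.items():
                for g in GRADES:
                    combined[g] += w[criterion] * beliefs.get(g, 0.0)
            combined["unknown"] = round(1.0 - sum(combined.values()), 6)
            return combined

        print(aggregate(assessments, weights))
        # e.g. {'low': 0.21, 'medium': 0.37, 'high': 0.37, 'unknown': 0.05}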

    A systematic review on multi-criteria group decision-making methods based on weights: analysis and classification scheme

    Interest in group decision-making (GDM) has increased prominently over the last decade. Access to global databases, sophisticated sensors that can obtain multiple inputs, and complex problems requiring opinions from several experts have driven interest in data aggregation. Consequently, the field has been widely studied from several viewpoints and multiple approaches have been proposed. Nevertheless, there is a lack of a general framework. Moreover, this problem is exacerbated in the case of experts’ weighting methods, one of the most widely used techniques for dealing with multiple-source aggregation. This lack of a general classification scheme, or of a guide to assist expert knowledge, leads to ambiguity or misreading for readers, who may be overwhelmed by the large amount of unclassified information currently available. To remedy this situation, a general GDM framework is presented which divides and classifies all data aggregation techniques, focusing on and expanding the classification of experts’ weighting methods in terms of analysis type by carrying out an in-depth literature review. Results are not only classified but also analysed and discussed with regard to multiple characteristics, such as the MCDM methods in which they are applied, the type of data used, the ideal solutions considered, and when they are applied. Furthermore, general requirements, such as initial influence or component division considerations, supplement this analysis. As a result, this paper provides not only a general classification scheme and a detailed analysis of experts’ weighting methods but also a road map for researchers working on GDM topics and a guide for experts who use these methods. Furthermore, six significant contributions for future research pathways are provided in the conclusions.

    The first author acknowledges support from the Spanish Ministry of Universities [grant number FPU18/01471]. The second and third authors wish to acknowledge their support from the Serra Hunter program. Finally, this work was supported by the Catalan agency AGAUR through its research group support program (2017SGR00227). This research is part of the R&D project IAQ4EDU, reference no. PID2020-117366RB-I00, funded by MCIN/AEI/10.13039/501100011033.
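
    To make the role of experts’ weights concrete, here is a generic sketch (not taken from the review) in which each expert's decision matrix is aggregated using externally supplied weights; in practice those weights would come from one of the weighting methods the paper classifies.

        import numpy as np

        # Generic group decision-making aggregation (illustrative only): three
        # experts score three alternatives on two criteria; expert weights are
        # hard-coded here but would normally be derived by a weighting method.
        expert_matrices = np.array([
            [[7, 5], [6, 8], [4, 6]],   # expert 1: alternatives x criteria
            [[6, 6], [7, 7], [5, 5]],   # expert 2
            [[8, 4], [6, 9], [3, 7]],   # expert 3
        ], dtype=float)
        expert_weights = np.array([0.5, 0.3, 0.2])   # made-up, sum to 1

        # Weighted average of the individual matrices gives the group matrix
        group_matrix = np.tensordot(expert_weights, expert_matrices, axes=1)

        # A simple ranking step with equal criterion weights (higher is better)
        criterion_weights = np.array([0.5, 0.5])
        scores = group_matrix @ criterion_weights
        print(scores.argsort()[::-1])   # alternatives ordered best to worst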

    A new versatile in-process monitoring system for milling

    Tool condition monitoring (TCM) systems can improve productivity and ensure workpiece quality, yet there is a lack of reliable TCM solutions for small-batch or one-off manufacturing of industrial parts. TCM methods which include the characteristics of the cut seem to be particularly suitable for these demanding applications. In the first section of this paper, three process-based indicators are retrieved from the literature on TCM. They are analysed using a cutting force model, and experiments are carried out in industrial conditions. Specific transient cuts encountered during the machining of the test part reveal these indicators to be unreliable. Consequently, in the second section, a versatile in-process monitoring method is suggested. Based on experiments carried out under a range of different cutting conditions, an adequate indicator is proposed: the relative radial eccentricity of the cutters is estimated at each instant and characterizes the tool state. It is then compared with the previous tool state in order to detect cutter breakage or chipping. Lastly, the new approach is shown to be reliable when implemented during the machining of the test part.
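
    The breakage test outlined above can be caricatured as follows: per-tooth force peaks are compared from one revolution to the next, and a tooth whose share of the load changes sharply is flagged. This is an invented simplification for illustration, not the paper's estimator of relative radial eccentricity.

        # Illustrative sketch only (not the paper's method): compare each tooth's
        # share of the peak cutting force between consecutive revolutions; a
        # sharp change is flagged as possible chipping or breakage.
        def tooth_shares(peak_forces):
            total = sum(peak_forces)
            return [f / total for f in peak_forces]

        def detect_breakage(previous_rev, current_rev, threshold=0.3):
            flagged = []
            shares = zip(tooth_shares(previous_rev), tooth_shares(current_rev))
            for tooth, (before, after) in enumerate(shares):
                # relative change in this tooth's share of the total force
                if abs(after - before) / before > threshold:
                    flagged.append(tooth)
            return flagged

        # Made-up peak forces (N) for a 4-tooth cutter over two revolutions
        rev_n    = [210.0, 195.0, 205.0, 200.0]
        rev_next = [260.0, 240.0,  60.0, 255.0]   # tooth 2 suddenly unloaded
        print(detect_breakage(rev_n, rev_next))   # -> [2]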