
    Rough Set Theory for Real Estate Appraisal: An Application to Directional District of Naples

    This paper proposes an application of Rough Set Theory (RST) to the real estate field, in order to highlight its operational potential for mass appraisal purposes. RST makes it possible to appraise real estate units without assuming a deterministic relationship between the characteristics that contribute to the formation of the property market price and the prices themselves. RST was applied to a real estate sample (office units located in the Directional District of Naples) and was integrated with a functional extension, the so-called Valued Tolerance Relation (VTR), in order to improve its flexibility. A multiple regression analysis (MRA) was developed on the same real estate sample in order to compare RST and MRA results. The case study is followed by a brief discussion of the basic theoretical connotations of this methodology.
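The core RST construct the abstract refers to can be sketched briefly. The following is a minimal illustration of lower and upper approximations under an indiscernibility relation; the attribute names and property data are invented for illustration and are not taken from the paper's sample:

```python
# Sketch of Rough Set Theory's core construct: the lower and upper
# approximations of a target set under an indiscernibility relation.
# All attribute names and values below are illustrative.

def partition(universe, attrs, table):
    """Group objects that are indiscernible on the chosen attributes."""
    blocks = {}
    for obj in universe:
        key = tuple(table[obj][a] for a in attrs)
        blocks.setdefault(key, set()).add(obj)
    return list(blocks.values())

def approximations(universe, attrs, table, target):
    """Return (lower, upper) approximations of `target`."""
    lower, upper = set(), set()
    for block in partition(universe, attrs, table):
        if block <= target:   # block lies entirely inside the target set
            lower |= block
        if block & target:    # block overlaps the target set
            upper |= block
    return lower, upper

# Toy sample: office units described by floor level and condition,
# with "high price" as the decision class to approximate.
table = {
    "u1": {"floor": "high", "cond": "good"},
    "u2": {"floor": "high", "cond": "good"},
    "u3": {"floor": "low",  "cond": "good"},
    "u4": {"floor": "low",  "cond": "good"},
}
universe = set(table)
high_price = {"u1", "u2", "u3"}

lower, upper = approximations(universe, ["floor", "cond"], table, high_price)
print(sorted(lower))  # certainly high-priced given the attributes
print(sorted(upper))  # possibly high-priced
```

Here u3 and u4 are indiscernible but only u3 is high-priced, so they fall in the boundary region: the gap between the two approximations is exactly the non-deterministic part of the price relationship.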

    Tracking Uncertainty Propagation from Model to Formalization: Illustration on Trust Assessment

    This paper investigates the use of the URREF ontology to characterize and track uncertainties arising within the modeling and formalization phases. Estimation of trust in reported information, a real-world problem of interest to practitioners in the field of security, was adopted for illustration purposes. A functional model of trust was developed to describe the analysis of reported information, and it was implemented with belief functions. When assessing trust in reported information, uncertainty arises not only from the quality of sources or information content, but also from the inability of models to capture the complex chain of interactions leading to the final outcome, and from constraints imposed by the representation formalism. A primary goal of this work is to separate known approximations, imperfections, and inaccuracies from potential errors, while explicitly tracking uncertainty from the modeling phase to the formalization phase. A secondary goal is to illustrate how the criteria of the URREF ontology can offer a basis for analyzing the performance of fusion systems at early stages, ahead of implementation. Ideally, since uncertainty analysis runs dynamically, it can use the presence or absence of observed states and processes inducing uncertainty to adjust the tradeoff between precision and performance of systems on the fly.
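The belief-function machinery mentioned in the abstract can be sketched with Dempster's rule of combination over a two-element frame (trusted / not trusted). The mass values and the two "sources" below are illustrative assumptions, not figures from the paper:

```python
from itertools import product

# Minimal Dempster combination over the frame {"T", "F"}
# (information trusted / not trusted). Focal elements are frozensets;
# all mass assignments below are illustrative.

def combine(m1, m2):
    """Dempster's rule: conjunctive combination with conflict renormalization."""
    out, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            out[inter] = out.get(inter, 0.0) + x * y
        else:
            conflict += x * y  # mass assigned to the empty set
    return {k: v / (1.0 - conflict) for k, v in out.items()}

T, F = frozenset({"T"}), frozenset({"F"})
TF = T | F  # total ignorance

# Source-quality evidence leans toward "trusted"; content analysis is vaguer.
m_source  = {T: 0.6, TF: 0.4}
m_content = {T: 0.3, F: 0.2, TF: 0.5}

m = combine(m_source, m_content)
```

The residual mass on the full frame `TF` is what makes belief functions attractive here: ignorance is carried explicitly through the fusion instead of being forced into a point probability.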

    Linear superposition as a core theorem of quantum empiricism

    Clarifying the nature of the quantum state $|\Psi\rangle$ is at the root of the problems with insight into (counterintuitive) quantum postulates. We provide a direct, math-axiom-free, empirical derivation of this object as an element of a vector space. Establishing the linearity of this structure (quantum superposition) is based on a set-theoretic creation of ensemble formations and invokes the following three principia: (I) quantum statics, (II) the doctrine of a number in the physical theory, and (III) mathematization of matching the two observations with each other (quantum invariance). All of the constructs rest upon a formalization of the minimal experimental entity: the observed micro-event, the detector click. This is sufficient for producing the $\mathbb{C}$-numbers, the axioms of a linear vector space (superposition principle), statistical mixtures of states, eigenstates and their spectra, and the non-commutativity of observables. No use is required of the concept of time. As a result, the foundations of the theory are liberated to a significant extent from the issues associated with physical interpretations, philosophical exegeses, and mathematical reconstruction of the entire quantum edifice.
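The linear structure whose derivation the abstract describes is the standard superposition principle; stated compactly (standard notation, not the paper's own formal development):

```latex
% Superposition principle: the target of the derivation.
% If |\Psi_1> and |\Psi_2> are admissible states, so is any
% complex linear combination of them (up to normalization).
\[
  |\Psi\rangle \;=\; c_1\,|\Psi_1\rangle \;+\; c_2\,|\Psi_2\rangle,
  \qquad c_1, c_2 \in \mathbb{C}.
\]
```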

    The quest for effective regulatory enforcement: A goal-displacement perspective


    Knowledge formalization for vector data matching using belief theory

    Nowadays geographic vector data is produced both by public and private institutions, using well-defined specifications or crowdsourcing via Web 2.0 mapping portals. As a result, multiple representations of the same real-world objects exist, without any links between these different representations. This becomes an issue when integration, updates, or multi-level analysis need to be performed, as well as for data quality assessment. In this paper, a multi-criteria data matching approach allowing the automatic definition of links between identical features is proposed. The originality of the approach is that the process is guided by an explicit representation and fusion of knowledge from various sources. Moreover, imperfection (imprecision, uncertainty, and incompleteness) is explicitly modeled in the process. Belief theory is used to represent and fuse knowledge from different sources, to model imperfection, and to make a decision. Experiments are reported on real data coming from different producers, having different scales and representing either relief (isolated points) or road networks (linear data).
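The decision step at the end of such a belief-theoretic matching pipeline can be sketched with a pignistic transform: fused evidence over {match, no-match} is spread into a probability and the winning hypothesis decides whether a link is created. The fused mass values below are illustrative assumptions, not results from the paper:

```python
# Sketch of the decision step after fusing matching criteria (e.g. distance
# and name similarity): a mass function over {"match", "nomatch"} is reduced
# to a pignistic probability. All mass values below are illustrative.

def pignistic(mass):
    """Spread each focal element's mass evenly over its singleton hypotheses."""
    betp = {}
    for focal, m in mass.items():
        share = m / len(focal)
        for hyp in focal:
            betp[hyp] = betp.get(hyp, 0.0) + share
    return betp

# Hypothetical fused evidence for one candidate pair of features:
mass = {
    frozenset({"match"}): 0.55,
    frozenset({"nomatch"}): 0.15,
    frozenset({"match", "nomatch"}): 0.30,  # residual ignorance
}

betp = pignistic(mass)
decision = max(betp, key=betp.get)  # keep the link only if "match" wins
```

Keeping the ignorance mass separate until this last step is what lets the approach defer a match/no-match decision when the criteria genuinely conflict.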

    Making AI Meaningful Again

    Artificial intelligence (AI) research enjoyed an initial period of enthusiasm in the 1970s and 80s. But this enthusiasm was tempered by a long interlude of frustration when genuinely useful AI applications failed to materialize. Today we are once again experiencing a period of enthusiasm, fired above all by the successes of deep neural networks, or deep machine learning. In this paper we draw attention to what we take to be serious problems underlying the current views of artificial intelligence encouraged by these successes, especially in the domain of language processing. We then outline an alternative approach to language-centric AI, in which we identify a role for philosophy.