
    Reduction of attributes in averaged similarities

    Similarity relations may be constructed from a set of fuzzy attributes. Each fuzzy attribute generates a simple similarity, and these simple similarities are afterwards combined into a complex one. The Representation Theorem establishes one such way of combining similarities, while averaging them is a different and, in applied domains, more realistic approach. In this paper, given a similarity obtained by averaging a family of attributes, we propose a method for finding families of new attributes, with fewer elements, that generate the same similarity. More generally, the paper studies the structure of this important class of fuzzy relations.
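A minimal sketch of the two combination strategies the abstract contrasts, assuming the Łukasiewicz t-norm, under which a fuzzy attribute mu generates the simple similarity E_mu(x, y) = 1 - |mu(x) - mu(y)| (the example attributes and domain are hypothetical):

```python
def simple_similarity(mu, x, y):
    """Natural indistinguishability generated by one fuzzy attribute mu
    (Lukasiewicz t-norm): 1 - |mu(x) - mu(y)|."""
    return 1.0 - abs(mu[x] - mu[y])

def rep_theorem_similarity(attributes, x, y):
    """Representation Theorem combination: infimum of the simple
    similarities. The result is T-transitive (a genuine similarity)."""
    return min(simple_similarity(mu, x, y) for mu in attributes)

def averaged_similarity(attributes, x, y):
    """Averaged combination: arithmetic mean of the simple similarities.
    Generally not T-transitive, but often more faithful in applications."""
    vals = [simple_similarity(mu, x, y) for mu in attributes]
    return sum(vals) / len(vals)

# Two hypothetical fuzzy attributes over the domain {a, b, c}
mu1 = {"a": 1.0, "b": 0.6, "c": 0.2}
mu2 = {"a": 0.4, "b": 0.5, "c": 0.9}
attrs = [mu1, mu2]

print(rep_theorem_similarity(attrs, "a", "b"))  # min(0.6, 0.9) = 0.6
print(averaged_similarity(attrs, "a", "b"))     # (0.6 + 0.9) / 2 = 0.75
```

The two combinations already disagree on this tiny example, which is why reducing the attribute family while preserving the *averaged* similarity is a separate problem from the Representation Theorem case.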

    Uncertainty and indistinguishability. Application to modelling with words.

    The concept of equality is a fundamental notion in any theory, since it is essential to the ability to discern the objects it concerns, an ability which in turn is a requirement for any classification mechanism that might be defined. When all the properties involved are entirely precise, what we obtain is classical equality, where two individuals are considered equal if and only if they share the same set of properties. What happens, however, when imprecision arises, as in the case of properties that are fulfilled only up to a degree? Then, because certain individuals will be more similar than others, the need for a gradual notion of equality arises. These considerations show that contexts pervaded with uncertainty require a more flexible concept of equality, one that goes beyond the rigidity of the classical notion. T-indistinguishability operators seem to be good candidates for this more flexible and general version of equality.

    On the other hand, the Dempster-Shafer Theory of Evidence, as a framework for representing and managing general evidence, implicitly conveys a notion of indistinguishability between the elements of the domain of discourse, based on their relative compatibility with the evidence at hand. Chapter two is concerned with methods for defining the T-indistinguishability operator associated with a given body of evidence.

    In chapter three, after providing a comprehensive summary of the state of the art on measures of uncertainty, we tackle the problem of computing entropy when an indistinguishability relation has been defined over the elements of the domain. Entropy should then be measured not according to the occurrence of different events, but according to the variability perceived by an observer equipped with the indistinguishability relation considered. This "observer paradigm" naturally leads to the introduction of the concept of observational entropy.

    Real data is often pervaded with uncertainty, so devising techniques for inducing knowledge in its presence is a practical necessity. The paradigm of computing with words follows this line, providing a computational formalism based on linguistic labels, in contrast to traditional numerical methods. The use of linguistic labels improves the understandability of the representation language, although it also requires adapting classical inductive learning procedures to cope with such labels.

    In chapter four, a novel approach to building decision trees is introduced, addressing the case in which the decision maker's discernment abilities are taken into account when computing node impurity measures. We call the resulting structures "observational decision trees", since the approach incorporates observational entropy into the attribute-selection heuristic. In addition, we present an algorithm that induces linguistic rules from data by properly managing the uncertainty present either in the set of describing labels or in the data itself. A formal comparison with standard algorithms is also provided.
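An illustrative sketch of the observer paradigm: the formula below is one plausible way to make entropy sensitive to an indistinguishability relation E (it is an assumption for illustration, not necessarily the thesis's exact definition). Each outcome x contributes according to the probability mass the observer cannot tell apart from x:

```python
import math

def observational_entropy(p, E):
    """Entropy as perceived through an indistinguishability relation E.
    p: dict element -> probability; E: dict (x, y) -> degree in [0, 1].
    Each x is weighted by the mass m(x) = sum_y E(x, y) * p(y) that the
    observer cannot distinguish from x (illustrative formulation only)."""
    h = 0.0
    for x, px in p.items():
        mass = sum(E[(x, y)] * py for y, py in p.items())
        h -= px * math.log2(mass)
    return h

elems = ["a", "b"]
p = {"a": 0.5, "b": 0.5}

# A perfectly discriminating observer recovers classical Shannon entropy.
crisp = {(x, y): 1.0 if x == y else 0.0 for x in elems for y in elems}
print(observational_entropy(p, crisp))  # 1.0 bit

# An observer who cannot distinguish a from b perceives no variability.
blind = {(x, y): 1.0 for x in elems for y in elems}
print(observational_entropy(p, blind))  # 0.0 bits
```

The two extreme observers bracket the intended behaviour: classical entropy when indistinguishability is crisp identity, and zero perceived variability when everything is indistinguishable.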

    Fifty years of similarity relations: a survey of foundations and applications

    On the occasion of the 50th anniversary of the publication of Zadeh's seminal paper "Similarity Relations and Fuzzy Orderings", an account of the development of similarity relations during this time is given, and the main topics related to these fuzzy relations are reviewed.

    Generation and Characterization of Fuzzy T-preorders

    This article studies T-preorders that can be generated in a natural way by a single fuzzy subset. These T-preorders are called one-dimensional, and they are of great importance because every T-preorder can be generated by combining one-dimensional T-preorders. The relation between fuzzy subsets generating the same T-preorder is given, and one-dimensional T-preorders are characterized in two different ways: they generate linear crisp orderings on X, and they satisfy a Sincov-like functional equation. This last characterization is used to approximate a given T-preorder by a one-dimensional one, relating the issue to the Saaty matrices used in the Analytic Hierarchy Process. Finally, strong complete T-preorders, important in decision-making problems, are also characterized.
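A minimal sketch of the generation step, assuming the Łukasiewicz t-norm: a fuzzy subset mu generates a one-dimensional T-preorder via the residuum, P_mu(x, y) = mu(x) → mu(y), which for Łukasiewicz is min(1, 1 - mu(x) + mu(y)) (the example subset is hypothetical):

```python
def one_dim_preorder(mu, x, y):
    """One-dimensional T-preorder generated by the fuzzy subset mu,
    using the residuum of the Lukasiewicz t-norm."""
    return min(1.0, 1.0 - mu[x] + mu[y])

mu = {"a": 0.9, "b": 0.4, "c": 0.4}

# Reflexivity: P(x, x) = 1 for every x.
print(one_dim_preorder(mu, "a", "a"))  # 1.0
# Asymmetric degrees reflect the linear ordering induced by mu.
print(one_dim_preorder(mu, "a", "b"))  # min(1, 1 - 0.9 + 0.4) = 0.5
print(one_dim_preorder(mu, "b", "a"))  # 1.0
```

Note how the degrees are asymmetric in the direction of decreasing membership, which is exactly the linear crisp ordering mentioned in the characterization above.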

    Practical Isolated Searchable Encryption in a Trusted Computing Environment

    Cloud computing has become a standard computational paradigm due to its numerous advantages, including high availability, elasticity, and ubiquity. Both individual users and companies are adopting more of its services, but not without loss of privacy and control. Outsourcing data and computations to a remote server implies trusting its owners, a problem of which many end-users are aware. Recent news has shown that data stored on Cloud servers is susceptible to leaks from the provider, third-party attackers, or even government surveillance programs, exposing users' private data. Different approaches to tackling these problems have surfaced over the years. Naïve solutions store data encrypted on the server and decrypt it only on the client side; this imposes a high overhead on the client, rendering such schemes impractical. Searchable Symmetric Encryption (SSE) has emerged as a novel research topic in recent years, allowing efficient querying and updating of encrypted datastores on Cloud servers while retaining privacy guarantees. Yet despite relevant recent advances, existing SSE schemes make a critical trade-off between efficiency, security, and query expressiveness, limiting their adoption as a viable technology, particularly in large-scale scenarios. New technologies providing Isolated Execution Environments (IEEs) may help improve the SSE literature. These technologies allow applications to run remotely with privacy guarantees, in isolation from other, possibly privileged, processes on the CPU, such as the operating system kernel. Prominent examples are Intel SGX and ARM TrustZone, which are available in today's commodity CPUs. In this thesis we study these new trusted-hardware technologies in depth, while exploring their application to the problem of searching over encrypted data, focusing primarily on SGX.
    In more detail, we study the application of IEEs to SSE schemes, improving their efficiency, security, and query expressiveness. We design, implement, and evaluate three new SSE schemes for different query types, namely Boolean queries over text, similarity queries over image datastores, and multimodal queries over text and images. These schemes can support queries combining different media formats simultaneously, envisaging applications such as privacy-enhanced medical diagnosis and management of electronic health-care records, or confidential photograph catalogues, running without the danger of privacy breaches in Cloud-provisioned services.
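A toy sketch of the core SSE idea the abstract builds on (illustrative only; this is not one of the thesis's schemes and is not secure as written): the client indexes documents under keyed-hash trapdoors, so the server can match queries without learning the underlying keywords. The key and documents are hypothetical:

```python
import hmac
import hashlib

KEY = b"client-secret-key"  # hypothetical client-held secret

def trapdoor(keyword):
    """Deterministic search token; the server only ever sees these."""
    return hmac.new(KEY, keyword.encode(), hashlib.sha256).hexdigest()

def build_index(docs):
    """Client side: encrypted inverted index from trapdoor -> doc ids."""
    index = {}
    for doc_id, words in docs.items():
        for w in words:
            index.setdefault(trapdoor(w), []).append(doc_id)
    return index

def search(index, keyword):
    """Server side: match the token against the index; the server learns
    which entries match, but not the plaintext keyword."""
    return index.get(trapdoor(keyword), [])

docs = {"d1": ["cloud", "privacy"], "d2": ["privacy", "sgx"]}
index = build_index(docs)
print(search(index, "privacy"))  # ['d1', 'd2']
```

Even this toy version exposes the access pattern (which documents match), which is the kind of leakage/efficiency/expressiveness trade-off that motivates running parts of the scheme inside an IEE such as SGX.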

    Indexed articles published by ETSAB authors

    This document compiles the articles published by ETSAB researchers in journals indexed in the Web of Science and Scopus from 2000 to 2011.

    Advanced Features in Protocol Verification: Theory, Properties, and Efficiency in Maude-NPA

    The area of formal analysis of cryptographic protocols has been an active one since the mid-1980s. The idea is to verify communication protocols that use encryption to guarantee secrecy and authentication of data to ensure security. Formal methods are used in protocol analysis to provide formal proofs of security, and to uncover bugs and security flaws that in some cases had remained unknown long after the original protocol's publication, as in the case of the well-known Needham-Schroeder Public Key (NSPK) protocol. In this thesis we tackle problems regarding the three main pillars of protocol verification: modelling capabilities, verifiable properties, and efficiency. The thesis is devoted to investigating advanced features in the analysis of cryptographic protocols tailored to the Maude-NPA tool. This tool is a model checker for cryptographic protocol analysis that allows for the incorporation of different equational theories and operates in the unbounded-session model without the use of data or control abstraction. An important contribution of this thesis concerns theoretical aspects of protocol verification in Maude-NPA. First, we define a forwards operational semantics, using rewriting logic as the theoretical framework and the Maude programming language as tool support. This is the first time that a forwards rewriting-based semantics has been given for Maude-NPA. Second, we study the problem that arises in cryptographic protocol analysis when it is necessary to guarantee that certain terms generated during a state exploration are in normal form with respect to the protocol's equational theory. We also study techniques to extend Maude-NPA's capabilities to support the verification of a wider class of protocols and security properties. First, we present a framework to specify and verify sequential protocol compositions in which one or more child protocols make use of information obtained from running a parent protocol.
    Second, we present a theoretical framework to specify and verify protocol indistinguishability in Maude-NPA. Such properties aim to verify that an attacker cannot distinguish between two versions of a protocol: for example, one using one secret and one using another, as happens in electronic voting protocols. Finally, this thesis contributes to improving the efficiency of protocol verification in Maude-NPA. We define several techniques which drastically reduce the state space and can often yield a finite state space, so that whether the desired security property holds can in fact be decided automatically, in spite of the general undecidability of such problems.
    Santiago Pinazo, S. (2015). Advanced Features in Protocol Verification: Theory, Properties, and Efficiency in Maude-NPA [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/4852
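The NSPK flaw referred to above is Lowe's well-known man-in-the-middle attack, in which the intruder I relays a session started by A into a parallel session with B. A purely symbolic sketch of that trace (messages as tuples; {m}pk(X) is written ("enc", X, m); this encoding is an illustration, not Maude-NPA syntax):

```python
def enc(agent, payload):
    """Public-key encryption, modelled symbolically: only `agent` can open."""
    return ("enc", agent, payload)

def can_open(agent, msg):
    """An agent can decrypt only messages encrypted under its own key."""
    return msg[0] == "enc" and msg[1] == agent

# Lowe's attack: A runs the protocol with I, and I replays everything
# towards B while posing as A.
trace = [
    ("A", "I", enc("I", ("Na", "A"))),   # 1. A starts a session with I
    ("I", "B", enc("B", ("Na", "A"))),   # 2. I re-encrypts, posing as A
    ("B", "A", enc("A", ("Na", "Nb"))),  # 3. B answers A; I forwards it
    ("A", "I", enc("I", "Nb")),          # 4. A reveals Nb to I
    ("I", "B", enc("B", "Nb")),          # 5. I completes B's session as A
]

# The intruder can open every message A addressed to it, so it learns Nb,
# a nonce B believes is shared only with A.
intruder_knows_nb = any(
    can_open("I", m) and "Nb" in str(m) for (_, _, m) in trace
)
print(intruder_knows_nb)  # True
```

A model checker such as Maude-NPA finds this kind of trace automatically by exploring the protocol's state space modulo the equational theory of the cryptographic operators.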

    Geometry and symmetries of multi-particle systems

    The quantum dynamical evolution of atomic and molecular aggregates, from their compact to their fragmented states, is parametrized by a single collective radial parameter. Treating all the remaining particle coordinates in d dimensions democratically, as a set of angles orthogonal to this collective radius or by equivalent variables, bypasses all independent-particle approximations. The invariance of the total kinetic energy under arbitrary d-dimensional transformations which preserve the radial parameter gives rise to novel quantum numbers and ladder operators interconnecting its eigenstates at each value of the radial parameter. We develop the systematics and technology of this approach, introducing the relevant mathematics in a tutorial fashion, by analogy with the familiar theory of angular momentum in three dimensions. The angular basis functions so obtained are treated in a manifestly coordinate-free manner, thus serving as a flexible generalized basis for carrying out detailed studies of wavefunction evolution in multi-particle systems.
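The collective radial parameter is the hyperradius of the standard hyperspherical approach. As a reminder of the textbook structure the abstract alludes to (assuming the d internal degrees of freedom are collected into mass-weighted coordinates x_i with reduced mass mu):

```latex
% Hyperradius built from the d mass-weighted internal coordinates x_i
\rho^2 = \sum_{i=1}^{d} x_i^2

% Kinetic energy: a radial part plus the grand angular momentum \Lambda^2,
% which is invariant under the rho-preserving d-dimensional rotations
T = -\frac{\hbar^2}{2\mu}\left[
      \frac{1}{\rho^{d-1}} \frac{\partial}{\partial\rho}
      \left( \rho^{d-1} \frac{\partial}{\partial\rho} \right)
      - \frac{\Lambda^2}{\hbar^2 \rho^2}
    \right]

% Eigenvalues of \Lambda^2 on hyperspherical harmonics, \lambda = 0, 1, 2, \dots
\Lambda^2 \, Y_{\lambda}(\Omega) = \hbar^2 \lambda(\lambda + d - 2) \, Y_{\lambda}(\Omega)
```

The quantum number lambda and its associated ladder operators are the d-dimensional analogues of the familiar angular momentum l and raising/lowering operators in three dimensions (where d = 3 gives the usual l(l + 1)).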