
    Cut-and-Choose Bilateral Oblivious Transfer and Its Application in Secure Two-party Computation

    In secure two-party computation protocols, the cut-and-choose paradigm is used to prevent the malicious party who constructs the garbled circuits from cheating. In previous realizations of the cut-and-choose technique on garbled circuits, the delivery of the random keys is divided into multiple stages; the round complexity is therefore high, and the consistency of the cut-and-choose challenge must be proved. In this paper, we introduce a new primitive called cut-and-choose bilateral oblivious transfer, which transfers all necessary keys of the garbled circuits in one process. Specifically, in our oblivious transfer protocol, the sender inputs two pairs (x_0, x_1), (y_0, y_1) and a bit τ; the receiver inputs two bits σ and j. After the protocol execution, the receiver obtains x_τ, y_σ for j = 1, and x_0, x_1, y_0, y_1 for j = 0. With this new primitive, the round complexity of the secure two-party computation protocol is decreased; the cut-and-choose challenge j no longer needs to be opened, so the consistency proof of j is omitted. In addition, the primitive is of independent interest and could be useful in many cut-and-choose scenarios.
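    The input/output behavior of the primitive (its ideal functionality) can be sketched as follows; the function name and the string keys are illustrative, not taken from the paper:

```python
def ccbot(sender_inputs, receiver_inputs):
    """Ideal functionality of cut-and-choose bilateral OT (sketch).

    sender_inputs:   ((x0, x1), (y0, y1), tau)
    receiver_inputs: (sigma, j)
    """
    (x0, x1), (y0, y1), tau = sender_inputs
    sigma, j = receiver_inputs
    if j == 1:
        # Evaluation circuit: receiver learns exactly one key per wire.
        return ((x0, x1)[tau], (y0, y1)[sigma])
    else:
        # Check circuit (j = 0): receiver learns all keys to verify it.
        return (x0, x1, y0, y1)

# Evaluation case: tau = 1 selects x1, sigma = 0 selects y0.
print(ccbot((("x0", "x1"), ("y0", "y1"), 1), (0, 1)))  # ('x1', 'y0')
```

    A real protocol realizes this behavior without the sender learning σ or j and without the receiver learning the unchosen keys when j = 1; the sketch only pins down the target outputs.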

    Secure Comparison Under Ideal/Real Simulation Paradigm

    The secure comparison problem, also known as Yao's millionaires' problem, was introduced by Andrew Yao in 1982 and is a classic, fundamental problem in secure multi-party computation: two millionaires wish to determine who is richer without revealing their actual wealth. The design of secure and efficient solutions to this problem provides effective building blocks for secure multi-party computation. However, only a few of the solutions in the literature have succeeded in resisting attacks by malicious adversaries, and none of these solutions has been proven secure in the malicious model under the ideal/real simulation paradigm. In this paper, we propose two secure solutions to Yao's millionaires' problem in the malicious model. One solution has full simulation security, and the other achieves one-sided simulation security. Both protocols are based only on symmetric cryptography. Experimental results indicate that our protocols solve Yao's millionaires' problem securely, with high efficiency and scalability. Furthermore, our solutions outperform the state-of-the-art solutions in terms of complexity and security. Specifically, our solutions require at most O(|U|) symmetric operations to achieve simulation-based security against malicious adversaries, where U denotes the universal set and |U| denotes its size.
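    For reference, the ideal functionality being realized is simply a comparison that reveals nothing beyond its result; the function name below is illustrative:

```python
def millionaires(a, b):
    """Ideal functionality of Yao's millionaires' problem:
    reveal only who is richer, never the wealth values themselves."""
    return a > b  # True iff the first party is richer

# A secure protocol must compute this result while each party's input
# stays hidden from the other; this sketch only fixes the target output.
print(millionaires(5_000_000, 3_000_000))  # True
```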

    United States-Japan Economic Relations

    The bilateral relationship with Japan now dominates American thinking on the benefits and costs of foreign trade. This paper reevaluates the past and future course of U.S.-Japan economic relations. It identifies six distinct aspects of the relationship that may underlie the continuing friction: bilateral imbalance on merchandise trade, capital flows from Japan to the United States, the yen/dollar exchange rate, sectoral trade distortions, Japan's technological catch-up, and societal differences. For each source of conflict, the main causes and potential remedies are assessed. Several important conclusions emerge from the analysis. First, although the bilateral trade and capital-account imbalances were produced primarily by macroeconomic factors and can therefore be viewed as "temporary" rather than long-term developments, elimination of the imbalances without serious damage may be difficult to achieve. In terms of sectoral adjustments, the U.S.-Japan relationship is entering a new phase as the two nations grow more similar in terms of technology base, abundance of capital and skilled labor, and per capita income. Two-way trade in technology and in technology-based services will become increasingly important, while both nations will cope with similar problems of adjustment to pressure from a new tier of competitors in Asia and elsewhere. As the aggregate imbalances diminish, sectoral trade conflict will be concentrated on the two ends of the technology spectrum, with issues raised both by conflicting approaches to the phasing out of uncompetitive industries and by the nurturing of new technology-based industries.

    The Rules of the Game and the Morality of Efficient Breach

    Moralists have long criticized the theory of efficient breach for its advocacy of promise breaking. But a fully developed theory of efficient breach has an internal morality of its own. It argues that sophisticated parties contract for efficient breach, which in the long run maximizes everyone’s welfare. And the theory marks some breaches—those that are opportunistic, obstructive, or otherwise inefficient—as wrongs that the law should deter, as transgressions that should not be priced but punished. That internal morality, however, does not excuse the theory from moral scrutiny. An extended comparison to Jean Renoir’s 1939 film, La Règle du Jeu (“The Rules of the Game”), illustrates what more sophisticated moral criticisms of the theory might look like. Renoir’s film depicts a society in which marital infidelity is a transgression that is tolerated, but only when done according to society’s rules. Renoir’s attitude toward that society suggests that moral critics of the efficient breach theory should focus not on its celebration of efficient breach, but on the value of the sort of moral community it imagines and on the theory’s effect on parties who are not playing the efficient breach game, whether because they do not understand its rules or because they seek a different type of obligation. The comparison to the film also highlights the theory’s own narrative elements, which both add to its persuasive power and, once identified, mark out its limits.

    Preliminary specification and design documentation for software components to achieve catallaxy in computational systems

    This report provides the preliminary specification and design documentation for software components to achieve Catallaxy in computational systems. It describes the specification and design of software components that implement the concept of Catallaxy in Grid systems. An introduction situates the concept of Catallaxy within existing Grid taxonomies and presents the basic components. These components are then examined for their applicability in existing Application Layer Networks. --Grid Computing

    A theoretical and computational basis for CATNETS

    The main content of this report is the identification and definition of market mechanisms for Application Layer Networks (ALNs). On the basis of the structured Market Engineering process, the work identifies the requirements that adequate market mechanisms for ALNs have to fulfill. Subsequently, two mechanisms each for the centralized and the decentralized case are described in this document. These build the theoretical foundation for the work within the following two years of the CATNETS project. --Grid Computing

    Oblivious Network Optimization and Security Modeling in Sustainable Smart Grids and Cities

    Today's interconnected world requires an inexpensive, fast, and reliable way of transferring information, and there is an increasingly important need for intelligent and adaptable routing of network flows. In the last few years, many researchers have worked toward developing versatile solutions to the problem of routing network flows under unpredictable circumstances. These attempts have evolved into a rich literature in the area of oblivious network design, which typically routes network flows via a routing scheme that makes use of a spanning tree, or a set of trees, of the graph representation of the network. The first chapter provides an introduction to network design, intended to clarify the importance and position of oblivious routing problems in the context of network design and its containing field of research. Part I of this dissertation discusses the fundamental role of linked hierarchical data structures in providing the mathematical tools needed to construct rigorous, versatile routing schemes, and applies hierarchical routing tools to the process of constructing such schemes. Part II applies the routing tools generated in Part I to real-world network optimization problems in the areas of electrical power networks, clusters of microgrids, and content-centric networks. There is increasing concern regarding the security and privacy of both the physical and communication layers of smart, interactive, customer-driven power networks, better known as smart grids. Part III utilizes an interdisciplinary approach to address existing security and privacy issues, proposing legitimate countermeasures for each of them from the standpoint of both computing and electrical engineering. The proposed methods are proven theoretically by mathematical tools and illustrated by real-world examples.
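    The tree-based schemes this literature builds on fix a spanning tree in advance and route every flow along the unique tree path between its endpoints, independent of the demand. A minimal sketch of that idea (the graph and function names are illustrative, not from the dissertation):

```python
from collections import deque

def bfs_tree(adj, root):
    """Parent pointers of a BFS spanning tree of an undirected graph."""
    parent = {root: None}
    q = deque([root])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                q.append(v)
    return parent

def tree_route(parent, u, v):
    """Unique u-to-v path in the spanning tree: climb from u toward the
    root, then splice in v's climb at the first common ancestor."""
    up = []
    while u is not None:
        up.append(u)
        u = parent[u]
    seen = set(up)
    down = []
    while v not in seen:
        down.append(v)
        v = parent[v]
    return up[:up.index(v) + 1] + down[::-1]

# 4-cycle with nodes 0..3; the tree (rooted at 0) fixes all routes obliviously.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
parent = bfs_tree(adj, root=0)
print(tree_route(parent, 2, 3))  # [2, 1, 0, 3]
```

    The scheme is "oblivious" in that the route for a demand pair is fixed by the tree alone; the quality of such a scheme is measured by how much longer tree paths are than the best demand-aware routes.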

    Scene understanding for interactive applications

    In order to interact with the environment, it is necessary to understand what is happening in the scene where the action is occurring. Decades of research in the field of computer vision have contributed toward automatically achieving this scene understanding from visual information. Scene understanding is a very broad area of research within computer vision: we could say that it tries to replicate the human capability of extracting plenty of information from visual data alone. For example, we would like to understand how people perceive the world in three dimensions, or how they can quickly recognize places or objects despite substantial variation in appearance. One of the basic tasks in scene understanding from visual data is to assign a semantic meaning to every element of the image, i.e., a concept or object label to every pixel. This task can be formulated as a dense image labeling problem, which assigns specific values (labels) to each pixel or region of the image. Depending on the application, these labels can represent very different concepts, from a physical magnitude, such as depth information, to high-level semantic information, such as an object category. The general goal of this thesis is to investigate and develop new ways to automatically incorporate human feedback, or prior knowledge, into intelligent systems that require scene understanding capabilities. In particular, the thesis explores two common sources of prior information from users: human interaction and human labeling of sample data. The first part of the thesis is focused on learning scene information from interactive human knowledge. Solutions that involve a user impose limitations on performance, since the feedback given to the user must be obtained at interactive rates. This thesis presents an efficient interaction paradigm that approximates any per-pixel magnitude from a few user strokes, propagating the sparse user input to each pixel of the image. We demonstrate the suitability of the proposed paradigm through three interactive image editing applications that require per-pixel knowledge of a certain magnitude: simulating the effect of depth of field, dehazing, and HDR tone mapping. Another common strategy for learning from user prior knowledge is to design supervised machine-learning approaches. In recent years, Convolutional Neural Networks (CNNs) have pushed the state of the art on a broad variety of visual recognition problems. However, for new tasks, enough training data is not always available, and therefore training from scratch is not always feasible. The second part of this thesis investigates how to improve systems that learn dense semantic labeling of images from user-labeled examples. In particular, we present and validate strategies, based on the two main transfer learning approaches, for semantic segmentation; the goal of these strategies is to learn new specific classes when there is not enough labeled data to train from scratch. We evaluate these strategies across very different realistic environments, including autonomous driving scenes, aerial images, and underwater ones.
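    The stroke-propagation step described above (spreading sparse user input to every pixel) can be illustrated, in a much-simplified form, by a multi-source flood that assigns each pixel the value of its nearest stroke; real systems use edge-aware optimization, and all names here are illustrative:

```python
from collections import deque

def propagate_strokes(h, w, seeds):
    """Spread sparse per-pixel values (user strokes) to every pixel of an
    h-by-w grid via multi-source BFS: each pixel takes the value of its
    nearest seed. seeds: {(row, col): value}"""
    values = dict(seeds)
    q = deque(seeds)
    while q:
        r, c = q.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and (nr, nc) not in values:
                values[(nr, nc)] = values[(r, c)]
                q.append((nr, nc))
    return [[values[(r, c)] for c in range(w)] for r in range(h)]

# Two strokes on a 3x4 image: depth 0.2 on the left edge, 0.9 on the right.
dense = propagate_strokes(3, 4, {(1, 0): 0.2, (1, 3): 0.9})
```

    Here `dense` is a full per-pixel depth map recovered from two strokes; an edge-aware propagation would additionally stop values from leaking across image boundaries such as object contours.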