    Competition in online comparison shopping services

    Quantifying Shannon's Work Function for Cryptanalytic Attacks

    Attacks on cryptographic systems are limited by the available computational resources. A theoretical understanding of these resource limitations is needed to evaluate the security of cryptographic primitives and procedures. This study uses an Attacker versus Environment game formalism based on computability logic to quantify Shannon's work function and evaluate resource use in cryptanalysis. A simple cost function is defined that makes it possible to quantify a wide range of theoretical and real computational resources. With this approach, the use of custom hardware (e.g., FPGA boards) in cryptanalysis can be analyzed. Applied to real cryptanalytic problems, it raises, for instance, the expectation that the computer time needed to break some simple 90-bit cryptographic primitives might theoretically be less than two years.
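
    To get a feel for the scale behind that last figure, here is a back-of-the-envelope calculation that is not taken from the paper: exhausting a 90-bit key space takes on the order of $2^{90}$ trials, and under the purely hypothetical assumption of hardware sustaining $2\times 10^{19}$ trials per second, the search indeed finishes in about two years:

    \[
      2^{90} \approx 1.24\times 10^{27}, \qquad
      \frac{1.24\times 10^{27}\ \text{trials}}{2\times 10^{19}\ \text{trials/s}}
      \approx 6.2\times 10^{7}\ \text{s} \approx 2\ \text{years}.
    \]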

    On the use of self-organizing maps to accelerate vector quantization

    Self-organizing maps (SOMs) are widely used for their topology-preservation property: neighboring input vectors are quantized (or classified) either at the same location or at neighboring ones on a predefined grid. SOMs are also widely used for their more classical vector quantization property. We show in this paper that using SOMs instead of the more classical Simple Competitive Learning (SCL) algorithm drastically increases the speed of convergence of the vector quantization process. This is demonstrated through extensive simulations on artificial and real examples, with specific SOM (fixed and decreasing neighborhoods) and SCL algorithms.
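
    To make the comparison concrete, the C sketch below (not taken from the paper; the dimension, grid size, and learning parameters are illustrative) contrasts one SCL step, which updates only the winning prototype, with one SOM step, which also pulls the winner's grid neighbours toward the input through a Gaussian neighbourhood:

    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define DIM 2   /* input dimension (illustrative)                     */
    #define K   16  /* number of prototypes on a 1-D grid (illustrative)  */

    /* squared Euclidean distance between an input and a prototype */
    static double dist2(const double *x, const double *w) {
        double d = 0.0;
        for (int i = 0; i < DIM; i++) d += (x[i] - w[i]) * (x[i] - w[i]);
        return d;
    }

    /* index of the prototype closest to x */
    static int winner(const double *x, double W[K][DIM]) {
        int best = 0;
        double bestd = dist2(x, W[0]);
        for (int k = 1; k < K; k++) {
            double d = dist2(x, W[k]);
            if (d < bestd) { bestd = d; best = k; }
        }
        return best;
    }

    /* SCL: move only the winning prototype toward the input */
    static void scl_step(const double *x, double W[K][DIM], double eps) {
        int c = winner(x, W);
        for (int i = 0; i < DIM; i++) W[c][i] += eps * (x[i] - W[c][i]);
    }

    /* SOM: move every prototype toward the input, weighted by a Gaussian
     * neighbourhood of width sigma around the winner on the 1-D grid */
    static void som_step(const double *x, double W[K][DIM], double eps, double sigma) {
        int c = winner(x, W);
        for (int k = 0; k < K; k++) {
            double h = exp(-(double)((k - c) * (k - c)) / (2.0 * sigma * sigma));
            for (int i = 0; i < DIM; i++) W[k][i] += eps * h * (x[i] - W[k][i]);
        }
    }

    int main(void) {
        double W[K][DIM];
        srand(42);
        for (int k = 0; k < K; k++)
            for (int i = 0; i < DIM; i++) W[k][i] = (double)rand() / RAND_MAX;

        /* train with a shrinking neighbourhood (the "decreasing" SOM variant);
         * replacing som_step by scl_step gives the plain SCL quantizer */
        for (int t = 0; t < 10000; t++) {
            double x[DIM] = { (double)rand() / RAND_MAX, (double)rand() / RAND_MAX };
            double sigma = 4.0 * exp(-t / 2500.0);
            som_step(x, W, 0.05, sigma);
        }
        printf("first prototype after training: (%.3f, %.3f)\n", W[0][0], W[0][1]);
        return 0;
    }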

    A Heckit Model of Sales Dynamics in Turkish Art Auctions: 2005-2008

    Unsold artworks are excluded from a traditional hedonic price index because no observable price can be attached to them. The question is whether the exclusion of unsold artworks leads to a sample selection bias in traditionally constructed art price indices. In this paper, we examine art auction sales performance in Turkey between January 2005 and February 2008 using a unique database of 11,212 sales records, including unsold items. We employ the two-stage Heckit model. Our empirical model combines demand-side influences with supply-side characteristics as well as the auction microstructure. We find that there is no sample selection bias created by unsold works. This finding also provides an explanation for why attempts in the literature to identify which works are (not) sold have turned out to be largely unsuccessful. On the behavioural side, we confirm the existence of the "afternoon effect" in both sales rates and sales prices in Turkish art auctions. There is also some evidence for the "death effect" and the "master effect" in both sales rates and sales prices. Finally, we find that returns in the Turkish art market serve as a hedge against inflation in our sample period.
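
    For readers who want the mechanics behind the estimator, the standard two-stage Heckit setup (textbook form, not necessarily the paper's exact specification) is:

    \[
      s_i^{*} = z_i'\gamma + u_i, \qquad s_i = \mathbf{1}[\,s_i^{*} > 0\,] \quad \text{(lot sells or not)},
    \]
    \[
      \ln p_i = x_i'\beta + \varepsilon_i, \quad \text{observed only if } s_i = 1,
    \]
    \[
      E[\ln p_i \mid s_i = 1] = x_i'\beta + \rho\,\sigma_{\varepsilon}\,\lambda(z_i'\gamma),
      \qquad \lambda(v) = \frac{\phi(v)}{\Phi(v)}.
    \]

    Stage one estimates the selection equation by probit on all lots and yields the inverse Mills ratio $\lambda(z_i'\hat\gamma)$; stage two regresses log price on $x_i$ and this ratio for the sold lots only. An insignificant coefficient on the Mills-ratio term is the usual evidence of no selection bias, consistent with the paper's finding.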

    Yearly update: exascale projections for 2013.

    The HPC architectures of today are significantly different from those of a decade ago, with high odds that further changes will occur on the road to Exascale. This paper discusses the "perfect storm" in technology that produced this change, the classes of architectures we are dealing with, and probable trends in how they will evolve. These properties and trends are then evaluated in terms of what they likely mean for future Exascale systems and applications.

    GHOST: Building blocks for high performance sparse linear algebra on heterogeneous systems

    While many of the architectural details of future exascale-class high performance computer systems are still a matter of intense research, there appears to be a general consensus that they will be strongly heterogeneous, featuring "standard" as well as "accelerated" resources. Today, such resources are available as multicore processors, graphics processing units (GPUs), and other accelerators such as the Intel Xeon Phi. Any software infrastructure that claims usefulness for such environments must be able to meet their inherent challenges: massive multi-level parallelism, topology, asynchronicity, and abstraction. The "General, Hybrid, and Optimized Sparse Toolkit" (GHOST) is a collection of building blocks that targets algorithms dealing with sparse matrix representations on current and future large-scale systems. It implements the "MPI+X" paradigm, has a pure C interface, and provides hybrid-parallel numerical kernels, intelligent resource management, and truly heterogeneous parallelism for multicore CPUs, Nvidia GPUs, and the Intel Xeon Phi. We describe the details of its design with respect to the challenges posed by modern heterogeneous supercomputers and recent algorithmic developments. Implementation details that are indispensable for achieving high efficiency are pointed out, and their necessity is justified by performance measurements or predictions based on performance models. The library code and several applications are available as open source. We also provide instructions on how to make use of GHOST in existing software packages, together with a case study which demonstrates the applicability and performance of GHOST as a component within a larger software stack.
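
    To make the "MPI+X" idea concrete, the sketch below shows a generic MPI + OpenMP sparse matrix-vector multiply on a tiny CRS matrix. It is a minimal illustration of the programming model only; the data structures and function names are invented here and are not GHOST's actual interface:

    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    /* Tiny CRS (compressed row storage) matrix owned by this MPI rank.
     * Here it is simply the 3x3 identity, purely for illustration. */
    #define NROWS 3
    static const int    rowptr[NROWS + 1] = {0, 1, 2, 3};
    static const int    col[3]            = {0, 1, 2};
    static const double val[3]            = {1.0, 1.0, 1.0};

    /* y = A*x on the local rows; OpenMP threading is the "X" in MPI+X */
    static void spmv(const double *x, double *y) {
        #pragma omp parallel for
        for (int r = 0; r < NROWS; r++) {
            double sum = 0.0;
            for (int j = rowptr[r]; j < rowptr[r + 1]; j++)
                sum += val[j] * x[col[j]];
            y[r] = sum;
        }
    }

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        double x[NROWS] = {1.0, 2.0, 3.0}, y[NROWS];
        spmv(x, y);                       /* node-level threaded kernel */

        /* combine a partial result across ranks (process-level parallelism) */
        double local = y[0] + y[1] + y[2], global;
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
        if (rank == 0) printf("global sum of y entries: %g\n", global);

        MPI_Finalize();
        return 0;
    }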

    High Performance Computing Instrumentation and Research Productivity in U.S. Universities

    This paper studies the relationship between investments in High-Performance Computing (HPC) instrumentation and research competitiveness. Measures of institutional HPC investment are computed from data readily available from the Top 500 list, which has been published twice a year since 1993 and ranks the 500 fastest computers in the world at that time. The institutions studied include US doctoral-granting institutions in the very high or high research categories of the Carnegie Foundation classifications, plus additional institutions that have had entries in the Top 500 list. Research competitiveness is derived from federal funding data, compilations of scholarly publications, and institutional rankings. Correlation and Two-Stage Least Squares (2SLS) regression are used to analyze the research-related returns to investment in HPC. Two models are examined and give results that are both economically and statistically significant. Appearance on the Top 500 list is associated with a contemporaneous increase in NSF funding levels as well as a contemporaneous increase in the number of publications. The rate of depreciation in returns to HPC is rapid. The conclusion is that consistent investments in HPC, even at modest levels, are strongly correlated with research competitiveness.
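
    As context for the estimation strategy, a generic two-stage least squares specification of the kind described (the abstract does not name the instruments, so $z_{it}$ is a placeholder) is:

    \[
      \text{Stage 1:}\quad HPC_{it} = z_{it}'\pi + x_{it}'\kappa + v_{it}
      \;\Rightarrow\; \widehat{HPC}_{it},
      \qquad
      \text{Stage 2:}\quad y_{it} = \alpha + \beta\,\widehat{HPC}_{it} + x_{it}'\delta + e_{it},
    \]

    where $y_{it}$ is a research-competitiveness measure (for example, NSF funding or publication counts), $HPC_{it}$ is the potentially endogenous HPC-investment measure derived from Top 500 appearances, and the fitted values from stage one replace the endogenous regressor in stage two.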