
    KGQuiz: Evaluating the Generalization of Encoded Knowledge in Large Language Models

    Large language models (LLMs) demonstrate remarkable performance on knowledge-intensive tasks, suggesting that real-world knowledge is encoded in their model parameters. However, beyond explorations of a few probing tasks in limited knowledge domains, it is not well understood how to evaluate LLMs' knowledge systematically, or how well that knowledge generalizes across a spectrum of knowledge domains and progressively complex task formats. To this end, we propose KGQuiz, a knowledge-intensive benchmark to comprehensively investigate the knowledge generalization abilities of LLMs. KGQuiz is a scalable framework constructed from triplet-based knowledge that covers three knowledge domains and consists of five tasks of increasing complexity: true-or-false, multiple-choice QA, blank filling, factual editing, and open-ended knowledge generation. To better understand LLMs' knowledge abilities and their generalization, we evaluate 10 open-source and black-box LLMs on the KGQuiz benchmark across all five tasks and three knowledge domains. Extensive experiments demonstrate that LLMs achieve impressive performance on straightforward knowledge QA tasks, while settings that require more complex reasoning or domain-specific facts still present significant challenges. We envision KGQuiz as a testbed for analyzing such nuanced variations in performance across domains and task formats, and ultimately for understanding, evaluating, and improving LLMs' knowledge abilities across a wide spectrum of knowledge domains and tasks.
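    As a concrete illustration of how triplet-based knowledge can be turned into task formats of increasing complexity, here is a minimal sketch of the two simplest formats (true-or-false and multiple-choice QA). The triples, templates, and distractor sampling here are assumptions for exposition; the abstract does not specify the benchmark's actual construction.

```python
import random

# Hypothetical knowledge triples: (subject, relation, object).
TRIPLES = [
    ("Paris", "capital of", "France"),
    ("Berlin", "capital of", "Germany"),
    ("Rome", "capital of", "Italy"),
]

def true_or_false(triple, all_triples, rng):
    """Emit a true statement, or corrupt the object to make a false one."""
    s, r, o = triple
    if rng.random() < 0.5:
        return f"{s} is the {r} {o}.", True
    distractor = rng.choice([t[2] for t in all_triples if t[2] != o])
    return f"{s} is the {r} {distractor}.", False

def multiple_choice(triple, all_triples, rng, n_options=3):
    """Ask for the object given subject and relation, with sampled distractors."""
    s, r, o = triple
    options = [o] + rng.sample([t[2] for t in all_triples if t[2] != o],
                               n_options - 1)
    rng.shuffle(options)
    return f"What is {s} the {r}?", options, options.index(o)

rng = random.Random(0)
print(true_or_false(TRIPLES[0], TRIPLES, rng))
print(multiple_choice(TRIPLES[0], TRIPLES, rng))
```

    The harder formats (blank filling, factual editing, open-ended generation) would build on the same triples with richer templates and scoring.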

    SoK: Cryptographically Protected Database Search

    Protected database search systems cryptographically isolate the roles of reading from, writing to, and administering the database. This separation limits unnecessary administrator access and protects data in the case of system breaches. Since protected search was introduced in 2000, the area has grown rapidly; systems are offered by academia, start-ups, and established companies. However, there is no single best protected search system or set of techniques. Designing such systems is a balancing act between security, functionality, performance, and usability. The challenge is made more difficult by ongoing database specialization, as some users will want the functionality of SQL, NoSQL, or NewSQL databases. This database evolution will continue, and the protected search community should be able to quickly provide functionality consistent with newly invented databases. At the same time, the community must accurately and clearly characterize the tradeoffs between different approaches. To address these challenges, we provide the following contributions: 1) an identification of the important primitive operations across database paradigms; we find that a small number of base operations can be combined to support a large number of database paradigms; 2) an evaluation of the current state of protected search systems in implementing these base operations, describing the main approaches and tradeoffs for each operation and putting protected search in the context of unprotected search to identify key gaps in functionality; 3) an analysis of attacks against protected search for different base queries; and 4) a roadmap and tools for transforming a protected search system into a protected database, including an open-source performance evaluation platform and initial user opinions of protected search. Comment: 20 pages; to appear at IEEE Security and Privacy.
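    To make "base operation" concrete, the sketch below implements the simplest one, equality search, as a deterministic PRF-token index. This is a generic textbook construction for exposition only, not a system from the survey; practical designs also encrypt results, pad responses, and reason explicitly about access-pattern leakage.

```python
import hmac
import hashlib
from collections import defaultdict

def token(key: bytes, keyword: str) -> bytes:
    # Deterministic PRF token; the server only ever sees this opaque value.
    return hmac.new(key, keyword.encode(), hashlib.sha256).digest()

class IndexServer:
    """Stores token -> record-id postings; learns nothing about keywords."""
    def __init__(self):
        self.index = defaultdict(list)
    def insert(self, tok: bytes, record_id: bytes):
        self.index[tok].append(record_id)
    def search(self, tok: bytes):
        return self.index.get(tok, [])

key = b"client-secret-key-0123456789abcd"
server = IndexServer()
server.insert(token(key, "invoice"), b"enc(doc-17)")  # ids would be encrypted too
server.insert(token(key, "invoice"), b"enc(doc-42)")
print(server.search(token(key, "invoice")))   # equality query via PRF token
print(server.search(token(key, "payroll")))   # -> []; access pattern still leaks
```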

    Non-Convex Phase Retrieval Algorithms and Performance Analysis

    University of Minnesota Ph.D. dissertation. April 2018. Major: Electrical Engineering. Advisor: Georgios Giannakis. 1 computer file (PDF); ix, 149 pages. High-dimensional signal estimation plays a fundamental role in various science and engineering applications, including optical and medical imaging, wireless communications, and power system monitoring. The ability to devise solution procedures that maintain high computational and statistical efficiency will facilitate increasing the resolution and speed of lensless imaging, identifying artifacts in products intended for military or national security, and protecting critical infrastructure including the smart power grid. This thesis contributes both theory and methods to the fundamental problem of phase retrieval of high-dimensional (sparse) signals from magnitude-only measurements. Our vision is to leverage exciting advances in non-convex optimization and statistical learning to devise algorithmic tools that are simple, scalable, and easy to implement, while being computationally and statistically (near-)optimal. Phase retrieval is approached from a non-convex optimization perspective. To gain statistical and computational efficiency, the magnitude data (instead of the intensities) are fitted under the least-squares or maximum likelihood criterion, which leads to optimization models that trade off smoothness for 'low-order' non-convexity. To solve the resulting challenging non-convex and non-smooth optimization, this thesis introduces a two-stage algorithmic framework termed amplitude flow. The amplitude flows start with a careful initialization, which is subsequently refined by a sequence of regularized gradient-type iterations. Both stages are lightweight and scale well with problem dimensions. Due to the highly non-convex landscape, judicious gradient regularization techniques such as trimming (i.e., truncation) and iterative reweighting are devised to boost exact phase recovery performance. It is shown that successive iterates of the amplitude flows provably converge to the global optimum at a geometric rate, corroborating their efficiency in terms of computational, storage, and data resources. The amplitude flows are also demonstrated to be stable vis-à-vis additive noise. Sparsity plays an instrumental role in many scientific fields, which has led to the upsurge of research referred to as compressive sampling. In diverse applications, the signal is naturally sparse or admits a sparse representation after a known, deterministic linear transformation. This thesis also accounts for phase retrieval of sparse signals by putting forth sparsity-cognizant amplitude flow variants. Although the analysis, comparisons, and corroborating tests in this thesis focus on non-convex phase retrieval, a succinct overview of other areas is provided to highlight the universality of the novel algorithmic framework and point to a number of intriguing future research directions.
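    A minimal real-valued sketch of the two-stage recipe follows, with an assumed step size and plain (untrimmed, unweighted) gradient steps standing in for the thesis's truncated and reweighted variants.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 100, 800                        # signal dimension, number of measurements
x_true = rng.standard_normal(n)
A = rng.standard_normal((m, n))        # real Gaussian sensing vectors
y = np.abs(A @ x_true)                 # magnitude-only measurements

# Stage 1: initialization from the leading eigenvector of a data-weighted
# matrix (1/m) * sum_i y_i^2 a_i a_i^T, scaled by an energy estimate of ||x||.
Y = (A.T * (y ** 2)) @ A / m
_, V = np.linalg.eigh(Y)
z = V[:, -1] * np.sqrt(np.mean(y ** 2))

# Stage 2: gradient iterations on f(z) = (1/2m) * sum_i (|a_i^T z| - y_i)^2.
step = 0.5
for _ in range(500):
    Az = A @ z
    grad = A.T @ ((np.abs(Az) - y) * np.sign(Az)) / m
    z -= step * grad

# Only magnitudes are observed, so the global sign is unrecoverable;
# report the distance to the truth up to sign.
err = min(np.linalg.norm(z - x_true), np.linalg.norm(z + x_true))
print(f"relative error up to sign: {err / np.linalg.norm(x_true):.3e}")
```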

    Overview of the SV-Ident 2022 Shared Task on Survey Variable Identification in Social Science Publications

    In this paper, we provide an overview of the SV-Ident shared task, part of the 3rd Workshop on Scholarly Document Processing (SDP) at COLING 2022. In the shared task, participants were given a sentence and a vocabulary of variables and asked to identify which variables, if any, are mentioned in individual sentences from the full text of scholarly documents. Two teams made a total of 9 submissions to the shared task leaderboard. While neither team improved on the baseline systems, we still draw insights from their submissions, and we provide a detailed evaluation. Data and baselines for our shared task are freely available at https://github.com/vadis-project/sv-ident.
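    For intuition about the task format, here is a toy lexical-overlap baseline with a hypothetical two-variable vocabulary; the shared task's actual data and baselines are the ones in the repository above.

```python
import math
import re
from collections import Counter

def bow(text: str) -> Counter:
    """Lowercased bag-of-words representation."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical variable vocabulary: id -> natural-language description.
variables = {
    "v001": "respondent's trust in the national parliament",
    "v002": "frequency of internet use per week",
}

sentence = "Trust in parliament was markedly lower among frequent internet users."
sv = bow(sentence)
scores = {vid: cosine(sv, bow(desc)) for vid, desc in variables.items()}
# Rank candidate variables; a threshold would decide "no variable mentioned".
for vid, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(vid, round(score, 3))
```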

    Internet Predictions

    More than a dozen leading experts give their opinions on where the Internet is headed and where it will be in the next decade in terms of technology, policy, and applications. They cover topics ranging from the Internet of Things to climate change to the digital storage of the future. A summary of the articles is available in the Web extras section.

    EHI: End-to-end Learning of Hierarchical Index for Efficient Dense Retrieval

    Dense embedding-based retrieval is now the industry standard for semantic search and ranking problems, such as obtaining relevant web documents for a given query. Such techniques use a two-stage process: (a) contrastive learning to train a dual encoder that embeds both queries and documents, and (b) approximate nearest neighbor search (ANNS) to find similar documents for a given query. These two stages are disjoint; the learned embeddings might be ill-suited to the ANNS method and vice versa, leading to suboptimal performance. In this work, we propose End-to-end Hierarchical Indexing (EHI), which jointly learns both the embeddings and the ANNS structure to optimize retrieval performance. EHI uses a standard dual encoder model for embedding queries and documents while learning an inverted file index (IVF) style tree structure for efficient ANNS. To ensure stable and efficient learning of the discrete tree-based ANNS structure, EHI introduces the notion of dense path embedding, which captures the position of a query or document in the tree. We demonstrate the effectiveness of EHI on several benchmarks, including the de facto industry-standard MS MARCO (Dev set and TREC DL19) datasets. For example, with the same compute budget, EHI outperforms the state of the art (SOTA) by 0.6% (MRR@10) on the MS MARCO dev set and by 4.2% (nDCG@10) on the TREC DL19 benchmark.
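    The flavor of the joint objective can be sketched in a few lines of PyTorch. Below, a single-level soft router stands in for the IVF-style tree, and its bucket distribution for the dense path embedding; the architecture, loss weights, and synthetic data are assumptions for illustration, not the paper's configuration.

```python
import torch
import torch.nn.functional as F

d_model, n_buckets = 64, 16
encoder = torch.nn.Sequential(torch.nn.Linear(300, d_model))   # toy shared encoder
centroids = torch.nn.Parameter(torch.randn(n_buckets, d_model))
opt = torch.optim.Adam(list(encoder.parameters()) + [centroids], lr=1e-3)

def route(z):
    # Soft bucket assignment; at serving time this becomes a hard argmax,
    # and the soft distribution plays the role of a one-level path embedding.
    return F.softmax(z @ centroids.T, dim=-1)

for step in range(200):
    q_feat = torch.randn(32, 300)                  # toy query features
    d_feat = q_feat + 0.1 * torch.randn(32, 300)   # matched "documents"
    q = F.normalize(encoder(q_feat), dim=-1)
    d = F.normalize(encoder(d_feat), dim=-1)

    # (a) contrastive retrieval loss over in-batch negatives
    logits = q @ d.T / 0.05
    labels = torch.arange(q.size(0))
    loss_contrastive = F.cross_entropy(logits, labels)

    # (b) routing agreement: a query and its document should share a bucket
    loss_route = F.kl_div(route(q).log(), route(d), reduction="batchmean")

    loss = loss_contrastive + 0.1 * loss_route
    opt.zero_grad()
    loss.backward()
    opt.step()
```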

    Advanced Methods for Botnet Intrusion Detection Systems
