21 research outputs found

    Research Report: Applying Information Retrieval techniques to the selection of public procurement bid notices for audit.

    The Office of the Comptroller General of Brazil (Controladoria-Geral da União, CGU) comprises the Secretariat for Transparency and Corruption Prevention (STPC), the Federal Internal Control Secretariat (SFC), the Office of the Corregedor-General (CRG), the Secretariat for Combating Corruption (SCC), and the Office of the Ombudsman (OGU)¹. Within the CGU, the SFC is the central body of the Internal Control System of the Federal Executive Branch, with responsibilities assigned to it by Law 10,180², of February 6, 2001.

    Integration of the Kivy and Webix Frameworks in the Easy Development System Mobile Web Framework

    In today's retail industry, the ability to minimize expenses through modern technological innovation has become a key factor in driving business growth, because even small changes in information technology can accelerate performance and increase revenue. One of the largest retail companies in Indonesia, PT. Sumber Alfaria Trijaya, Tbk (Alfamart), built a web-based framework using the Webix framework on the front end and Python on the back end to accommodate the migration from on-premises server applications to cloud computing. However, several problems emerged: the application loads relatively slowly when accessed through a smartphone browser, and the program code is visible to anyone while the application runs in the browser. This study therefore builds a more flexible framework, the Mobile Web Easy Development System (M-EDS), which developers can use to build Android- and iOS-based mobile web applications and which end users can use to access the resulting applications. M-EDS supports user activities such as opening the web-based framework with a single click. On the developer side, the M-EDS web component keeps the application's program code from being viewed by unauthorized parties.

    Re-Rank - Expand - Repeat: Adaptive Query Expansion for Document Retrieval Using Words and Entities

    Sparse and dense pseudo-relevance feedback (PRF) approaches perform poorly on challenging queries due to low precision in first-pass retrieval. However, recent advances in neural language models (NLMs) can re-rank relevant documents to top ranks, even when few are in the re-ranking pool. This paper first addresses the problem of poor pseudo-relevance feedback by simply applying re-ranking prior to query expansion and re-executing this query. We find that this change alone can improve the retrieval effectiveness of sparse and dense PRF approaches by 5-8%. Going further, we propose a new expansion model, Latent Entity Expansion (LEE), a fine-grained word- and entity-based relevance model incorporating localized features. Finally, we include an "adaptive" component in the retrieval process, which iteratively refines the re-ranking pool during scoring using the expansion model, i.e. we "re-rank - expand - repeat". Using LEE, we achieve (to our knowledge) the best NDCG, MAP and R@1000 results on the TREC Robust 2004 and CODEC ad-hoc document datasets, demonstrating a significant advancement in expansion effectiveness.
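The "re-rank - expand - repeat" loop described in the abstract can be sketched as below. This is a minimal toy illustration, not the paper's LEE model: the term-overlap retriever, the inverse-document-frequency re-ranker, and the RM3-style term-frequency expansion are all stand-in assumptions for the neural components the paper actually uses.

```python
# Toy sketch of "re-rank -> expand -> repeat" adaptive query expansion.
# All scoring functions are illustrative stand-ins, not the paper's method.
from collections import Counter

CORPUS = {
    "d1": "neural language models rerank documents",
    "d2": "sparse retrieval uses bm25 term matching",
    "d3": "pseudo relevance feedback expands the query with feedback terms",
}

def first_pass(query, k=3):
    # naive term-overlap retrieval standing in for BM25 first-pass retrieval
    q = set(query.split())
    scored = [(d, len(q & set(t.split()))) for d, t in CORPUS.items()]
    return [d for d, _ in sorted(scored, key=lambda x: -x[1])[:k]]

def rerank(query, pool):
    # stand-in for a neural re-ranker: weight rarer overlapping terms higher
    q = set(query.split())
    df = Counter(t for text in CORPUS.values() for t in set(text.split()))
    def score(doc_id):
        return sum(1.0 / df[t] for t in q & set(CORPUS[doc_id].split()))
    return sorted(pool, key=score, reverse=True)

def expand(query, top_docs, n_terms=2):
    # RM3-style expansion: add the most frequent new terms from the top docs
    counts = Counter(t for d in top_docs for t in CORPUS[d].split())
    extra = [t for t, _ in counts.most_common() if t not in query.split()]
    return query + " " + " ".join(extra[:n_terms])

def rerank_expand_repeat(query, rounds=2):
    pool = first_pass(query)
    for _ in range(rounds):
        pool = rerank(query, pool)       # re-rank the current pool
        query = expand(query, pool[:2])  # expand from the re-ranked top docs
    return query, rerank(query, pool)    # repeat with the refined query

q, ranking = rerank_expand_repeat("relevance feedback")
```

The key structural point the abstract makes is the ordering: re-ranking happens *before* expansion in each round, so the expansion terms are drawn from a higher-precision pool than first-pass PRF would provide.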

    Efficient Document Re-Ranking for Transformers by Precomputing Term Representations

    Deep pretrained transformer networks are effective at various ranking tasks, such as question answering and ad-hoc document ranking. However, their computational expense renders them cost-prohibitive in practice. Our proposed approach, called PreTTR (Precomputing Transformer Term Representations), considerably reduces the query-time latency of deep transformer networks (up to a 42x speedup on web document ranking), making these networks more practical to use in a real-time ranking scenario. Specifically, we precompute part of the document term representations at indexing time (without a query), and merge them with the query representation at query time to compute the final ranking score. Due to the large size of the token representations, we also propose an effective approach to reduce the storage requirement by training a compression layer to match attention scores. Our compression technique reduces the storage required by up to 95%, and it can be applied without a substantial degradation in ranking performance.
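The indexing-time/query-time split that PreTTR describes can be sketched as follows. This is a hedged toy illustration of the idea only: hash-derived vectors stand in for transformer term representations, and all function names are assumptions, not the paper's API. The point is the division of labor, namely that document-side representations are computed once, offline, without any query, and only the cheap merge-and-score step runs at query time.

```python
# Toy sketch of the PreTTR idea: precompute document-side term
# representations at indexing time, merge with the query side at query time.
# Hash-based vectors are illustrative stand-ins for transformer layers.
import hashlib

DIM = 8

def term_vec(term):
    # deterministic toy "representation" derived from a hash of the term
    h = hashlib.sha256(term.encode()).digest()
    return [b / 255.0 for b in h[:DIM]]

def precompute_doc(doc_text):
    # indexing time: one cached vector per document term (no query needed)
    return {t: term_vec(t) for t in doc_text.split()}

def query_time_score(query, doc_reps):
    # query time: build query-side vectors, merge with cached doc vectors
    score = 0.0
    for qt in query.split():
        qv = term_vec(qt)
        for dv in doc_reps.values():
            score += sum(a * b for a, b in zip(qv, dv))
    return score

# build the precomputed index once, offline
index = {d: precompute_doc(text) for d, text in {
    "d1": "transformer ranking with precomputed term representations",
    "d2": "cooking recipes and kitchen tips",
}.items()}

ranked = sorted(index, reverse=True,
                key=lambda d: query_time_score("transformer ranking", index[d]))
```

In the real system the cached representations are the outputs of the transformer's lower layers, which is why the paper also needs a trained compression layer: storing full token representations per document would otherwise dominate index size.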