A Selectivity based approach to Continuous Pattern Detection in Streaming Graphs
Cyber security is one of the most significant technical challenges in current
times. Detecting adversarial activities and preventing the theft of intellectual
property and customer data are high priorities for corporations and government
agencies around the world. Cyber defenders need to analyze massive-scale,
high-resolution network flows to identify, categorize, and mitigate attacks
involving networks spanning institutional and national boundaries. Many of the
cyber attacks can be described as subgraph patterns, with prominent examples
being insider infiltrations (path queries), denial of service (parallel paths)
and malicious spreads (tree queries). This motivates us to explore subgraph
matching on streaming graphs in a continuous setting. The novelty of our work
lies in using the subgraph distributional statistics collected from the
streaming graph to determine the query processing strategy. We introduce a
"Lazy Search" algorithm where the search strategy is decided on a
vertex-to-vertex basis depending on the likelihood of a match in the vertex
neighborhood. We also propose a metric named "Relative Selectivity" that is
used to select between different query processing strategies. Our experiments
performed on real online news, network traffic stream and a synthetic social
network benchmark demonstrate 10-100x speedups over selectivity agnostic
approaches.
Comment: in 18th International Conference on Extending Database Technology (EDBT) (2015)
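The strategy-selection idea above can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the exact definition of "Relative Selectivity", the statistics collected, and all names below are assumptions made for the example.

```python
# Illustrative sketch (not the paper's algorithm): pick a search strategy
# per candidate edge based on how rare each endpoint's neighborhood is in
# the collected stream statistics.

def relative_selectivity(count_a, count_b):
    """Toy 'relative selectivity': ratio of match-frequency estimates for
    two candidate expansion directions (hypothetical definition)."""
    return count_a / max(count_b, 1)

def choose_strategy(stats, edge, threshold=1.0):
    """Expand from whichever endpoint is estimated to be more selective.

    stats maps a vertex label to how often partial matches were observed
    rooted at that label (hypothetical statistics)."""
    u, v = edge
    rs = relative_selectivity(stats.get(u, 0), stats.get(v, 0))
    # A low ratio means u's neighborhood is rarer, so expanding from u
    # ("lazy" expansion from the selective side) prunes more candidates.
    return ("expand_from", u) if rs < threshold else ("expand_from", v)

stats = {"login_server": 12, "workstation": 4800}
print(choose_strategy(stats, ("login_server", "workstation")))
# -> ('expand_from', 'login_server'): the rarer label wins
```

The same comparison, applied vertex by vertex as the stream evolves, is what makes the search adaptive rather than fixed at query-compile time.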
Asymmetric Protocols for Scalable High-Rate Measurement-Device-Independent Quantum Key Distribution Networks
Measurement-device-independent quantum key distribution (MDI-QKD) can
eliminate detector side channels and prevent all attacks on detectors. The
future of MDI-QKD is a quantum network that provides service to many users over
untrusted relay nodes. In a real quantum network, the losses of various
channels are different and users are added and deleted over time. To adapt to
these features, we propose a class of protocols that allows users to
independently choose their optimal intensity settings to compensate for
different channel losses. Such a protocol enables a scalable high-rate MDI-QKD
network that can easily be applied for channels of different losses and allows
users to be dynamically added/deleted at any time without affecting the
performance of existing users.
Comment: Changed the title to better represent the generality of our method,
and added more discussions on its application to alternative protocols (in
Sec. II, the new Table II, and Appendix E with new Fig. 9). Added more
conceptual explanations in Sec. II on the difference between X and Z bases in
MDI-QKD. Added additional discussions on security of the scheme in Sec. II
and Appendix
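The loss-compensation intuition behind such asymmetric protocols can be illustrated with a back-of-envelope calculation. This is only a heuristic sketch, not the paper's joint optimization of intensities and probabilities: it merely scales a user's source intensity so the mean photon number arriving at the untrusted relay matches a reference user's.

```python
# Back-of-envelope illustration (not the paper's optimization): scale each
# user's source intensity so the photon flux *arriving* at the relay is
# balanced despite asymmetric channel losses.

def transmittance(loss_db):
    """Channel transmittance for a given loss in dB."""
    return 10 ** (-loss_db / 10)

def compensating_intensity(mu_ref, loss_ref_db, loss_user_db):
    """Intensity a user should send so that mu_user * eta_user matches
    mu_ref * eta_ref at the relay (a simple heuristic only; the paper
    optimizes each user's settings jointly)."""
    return mu_ref * transmittance(loss_ref_db) / transmittance(loss_user_db)

# Alice sits behind a 10 dB channel and sends mu = 0.1;
# Bob sits behind a 20 dB channel, so he must send 10x the intensity.
mu_bob = compensating_intensity(0.1, loss_ref_db=10, loss_user_db=20)
print(round(mu_bob, 3))  # -> 1.0
```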
Shared and Searchable Encrypted Data for Untrusted Servers
Current security mechanisms pose a risk for organisations that outsource their data management to untrusted servers. Encrypting and decrypting sensitive data at the client side is the normal approach in this situation but has high communication and computation overheads if only a subset of the data is required, for example, selecting records in a database table based on a keyword search. New cryptographic schemes have been proposed that support encrypted queries over encrypted data but all depend on a single set of secret keys, which implies single user access or sharing keys among multiple users, with key revocation requiring costly data re-encryption. In this paper, we propose an encryption scheme where each authorised user in the system has his own keys to encrypt and decrypt data. The scheme supports keyword search which enables the server to return only the encrypted data that satisfies an encrypted query without decrypting it. We provide two constructions of the scheme giving formal proofs of their security. We also report on the results of a prototype implementation.
This research was supported by the UK's EPSRC research grant EP/C537181/1. The authors would like to thank the members of the Policy Research Group at Imperial College for their support
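The core trick of keyword search over encrypted data can be shown with a toy sketch. This is emphatically not the paper's multi-user construction (which gives each authorised user distinct keys and comes with formal proofs); it only illustrates the single-key idea that a server can match opaque keyword tokens without seeing plaintext. All names and the key below are hypothetical.

```python
import hmac
import hashlib

# Toy sketch of searchable encryption's core idea (NOT the paper's
# multi-user scheme): the client stores keyed keyword tokens alongside
# the ciphertext; the server matches an encrypted query token against
# them without ever learning the plaintext keywords.

def token(key: bytes, keyword: str) -> bytes:
    """Deterministic keyword token under the client's secret key."""
    return hmac.new(key, keyword.encode(), hashlib.sha256).digest()

def index_document(key, doc_id, keywords):
    return {"doc": doc_id, "tokens": {token(key, w) for w in keywords}}

def server_search(index, trapdoor):
    """Server-side match: compares opaque tokens only."""
    return [entry["doc"] for entry in index if trapdoor in entry["tokens"]]

key = b"client-secret-key"  # hypothetical single-user key
index = [index_document(key, "r1", ["invoice", "q3"]),
         index_document(key, "r2", ["payroll"])]
print(server_search(index, token(key, "invoice")))  # -> ['r1']
```

Note that deterministic tokens leak search and access patterns; real schemes, including the one in the paper, are designed against stronger adversaries than this toy admits.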
CC-interop : COPAC/Clumps Continuing Technical Cooperation. Final Project Report
As far as is known, CC-interop was the first project of its kind anywhere in the world, and it remains so. Its basic aim was to test the feasibility of cross-searching between physical and virtual union catalogues, using COPAC and the three functioning "clumps" or virtual union catalogues (CAIRNS, InforM25, and RIDING), all funded or part-funded by JISC in recent years. The key issues investigated were technical interoperability of catalogues, use of collection level descriptions to search union catalogues dynamically, quality of standards in cataloguing and indexing practices, and usability of union catalogues for real users. The conclusions of the project were expected to, and indeed do, contribute to the development of the JISC Information Environment and to the ongoing debate over the feasibility and desirability of creating a national UK catalogue. They also inhabit the territory of collection level descriptions (CLDs) and the wider services of JISC's Information Environment Services Registry (IESR). The results of the project also have applicability for the common information environment, particularly through the landscaping work done via SCONE/CAIRNS. This work is relevant not just to HE and not just to digital materials; it encompasses other sectors and domains and caters for print resources as well. Key findings are thematically grouped as follows:
System performance when inter-linking COPAC and the Z39.50 clumps: the various individual Z39.50 configurations permit technical interoperability relatively easily, but only limited semantic interoperability is possible. Disparate cataloguing and indexing practices impair semantic interoperability, not just for catalogues but also for CLDs and descriptions of services (such as those constituting JISC's IESR).
Creating dynamic landscaping through CLDs: routines can be written to allow collection description databases to be output in formats that other UK users of CLDs, including developers of the JISC information environment, can reuse.
Searching a distributed (virtual) catalogue or clump via Z39.50: Z39.50-to-Z39.50 middleware permits a distributed catalogue to be searched via Z39.50 from such disparate user services as another virtual union catalogue or clump, a physical union catalogue like COPAC, an individual Z client, and other IE services. The breakthrough in this Z39.50-to-Z39.50 conundrum came with the discovery that the JISC-funded JAFER software (a result of the 5/99 programme) meets many of the requirements and can be used by the current clumps services. Within this middleware it is technically possible for the user to select all or a sub-set of available end-destination Z39.50 servers (we call this "landscaping").
Comparing results processing between COPAC and clumps: most distributed services (clumps) do not bring back complete result sets from associated Z servers (in order to save time for users). COPAC's on-the-fly routines could feasibly be applied to the clumps services. An automated search set up to repeat its query of 17 catalogues in a clump (InforM25) hourly over nearly 3 months returned surprisingly good results; for example, over 90% of responses were received in less than one second, and no servers showed slower response times in periods of traditionally heavy OPAC use (mid-morning to early evening).
User behaviour when cross-searching catalogues: users attach importance to a number of on-screen features, including the ability to refine a search and a clear indication that a search is processing, and to information about the availability of an item as well as the holdings data. Search tools such as Google and Amazon shape user behaviour and raise expectations of more information than is normally available from a library catalogue. Some librarians interviewed distrusted the data sources in virtual union catalogues, doubting that true interoperability existed
Collaborative searching for video using the Físchlár system and a DiamondTouch table
Físchlár-DT is one of a family of systems which support interactive searching and browsing through an archive of digital video information. Previous Físchlár systems have used a conventional screen, keyboard and mouse interface, but Físchlár-DT operates using a horizontal, multi-user, touch-sensitive tabletop known as a DiamondTouch. We present the Físchlár-DT system partly from a systems perspective, but mostly in terms of how its design and functionality support collaborative searching. The contribution of the paper is thus the introduction of Físchlár-DT and a description of how design concerns for supporting collaborative search can be realised on a tabletop interface
R-PEKS: RBAC Enabled PEKS for Secure Access of Cloud Data
In the recent past, a few works have combined attribute-based access control with multi-user PEKS, i.e., public key encryption with keyword search. Such attribute-enabled searchable encryption is most suitable for applications where privileges change only once in a while. To date, however, no efficient and secure scheme in the literature suits applications where privileges change frequently. In this paper our contributions are twofold. Firstly, we propose a new PEKS scheme for string search which, unlike previous constructions, is free from bilinear mapping and is up to 97% more efficient than the string-search PEKS proposed by Ray et al. at TrustCom 2017. Secondly, we introduce role-based access control (RBAC) to multi-user PEKS, where an arbitrary group of users can search and access the encrypted files depending upon their roles. We term this integrated scheme R-PEKS. The efficiency gain of R-PEKS over the PEKS scheme is up to 90%. We provide formal security proofs for the different components of R-PEKS and validate these schemes using a commercial dataset
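The RBAC layer on top of keyword search can be pictured with a small sketch. This is not the R-PEKS construction (which enforces access cryptographically); it only shows the access-control logic being layered over keyword-search hits, with all role and file names invented for the example.

```python
# Toy sketch of the RBAC layer (not the R-PEKS construction): each file
# is tagged with the roles allowed to read it, and the server intersects
# the keyword-search hits with what the querying user's roles permit.

ROLE_GRANTS = {"alice": {"auditor"}, "bob": {"hr", "auditor"}}  # hypothetical
FILE_ROLES = {"f1": {"hr"}, "f2": {"auditor"}, "f3": {"hr", "auditor"}}

def authorised(user, file_id):
    """True if the user holds at least one role the file admits."""
    return bool(ROLE_GRANTS.get(user, set()) & FILE_ROLES.get(file_id, set()))

def rbac_search(user, keyword_hits):
    """keyword_hits: file ids matched by the (encrypted) keyword search."""
    return [f for f in keyword_hits if authorised(user, f)]

print(rbac_search("alice", ["f1", "f2", "f3"]))  # -> ['f2', 'f3']
```

In a real scheme this filter cannot be a trusted server-side `if`; the point of R-PEKS is to bind the role check into the cryptography itself, so the sketch only conveys the intended semantics.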
Examining the contributions of automatic speech transcriptions and metadata sources for searching spontaneous conversational speech
Searching spontaneous speech can be enhanced by combining automatic speech transcriptions with semantically related metadata. An important question is what can be expected from searching such transcriptions and different sources of related metadata in terms of retrieval effectiveness. The Cross-Language Speech Retrieval (CL-SR) track at recent CLEF workshops provides a spontaneous speech test collection with manual and automatically derived metadata fields. Using this collection we investigate the comparative search effectiveness of individual fields comprising automated transcriptions and the available metadata. A further important question is how transcriptions and metadata should be combined for the greatest benefit to search accuracy. We compare simple merging of individual fields with the extended BM25 model for weighted field combination (BM25F). Results indicate that BM25F can produce improved search accuracy, but that it is currently important to set its parameters using a suitable training set
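The weighted field combination idea can be sketched with a simplified BM25F scorer. The field names, weights, and corpus below are illustrative assumptions, not the paper's tuned configuration; the structure (per-field length normalization, field weights summed into one pseudo-frequency, a single saturation step) is the standard simplified form of BM25F.

```python
# Simplified BM25F sketch (illustrative parameters and documents): each
# field's term frequency is length-normalized and weighted, the weighted
# frequencies are summed into one pseudo-frequency, and saturation with
# k1 is applied once -- which is what lets a noisy transcript field and a
# curated metadata field contribute at different strengths.

def bm25f_score(query_terms, doc_fields, field_weights, field_b,
                avg_len, idf, k1=1.2):
    score = 0.0
    for term in query_terms:
        pseudo_tf = 0.0
        for field, text in doc_fields.items():
            tokens = text.lower().split()
            tf = tokens.count(term)
            # Per-field length normalization with field-specific b.
            norm = 1 - field_b[field] + field_b[field] * len(tokens) / avg_len[field]
            pseudo_tf += field_weights[field] * tf / norm
        score += idf.get(term, 0.0) * pseudo_tf / (k1 + pseudo_tf)
    return score

doc = {"transcript": "the interview discusses the ghetto and liberation",
       "metadata": "liberation testimony"}
weights = {"transcript": 1.0, "metadata": 3.0}  # boost curated metadata
b = {"transcript": 0.75, "metadata": 0.5}
avg = {"transcript": 8.0, "metadata": 2.0}
idf = {"liberation": 2.0, "ghetto": 1.5}
print(round(bm25f_score(["liberation"], doc, weights, b, avg, idf), 3))
# -> 1.547
```

The weights and `b` values are exactly the parameters the abstract says must be set from a suitable training set; with poorly chosen values this combination can underperform simple field merging.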