Greatly improved cache update times for conditions data with Frontier/Squid
The CMS detector project loads copies of conditions data to over 100,000 computer cores worldwide by using a software subsystem called Frontier. This subsystem translates database queries into HTTP, looks up the results in a central database at CERN, and caches the results in an industry-standard HTTP proxy/caching server called Squid. One of the most challenging aspects of any cache system is coherency, that is, ensuring that changes made to the underlying data get propagated out to all clients in a timely manner. Recently, the Frontier system was enhanced to drastically reduce the time for changes to be propagated everywhere without heavily loading servers. The propagation time is now as low as 15 minutes for some kinds of data and no more than 60 minutes for the rest of the data. This was accomplished by taking advantage of an HTTP and Squid feature called If-Modified-Since. In order to use this feature, the Frontier server sends a Last-Modified timestamp, but since modification times are not normally tracked by Oracle databases, a PL/SQL program was developed to track the modification times of database tables. We discuss the details of this caching scheme and the obstacles overcome, including database and Squid bugs.
Attacking and securing Network Time Protocol
Network Time Protocol (NTP) is used to synchronize time between computer systems communicating over unreliable, variable-latency, and untrusted network paths. Time is critical for many applications; in particular it is heavily utilized by cryptographic protocols. Despite its importance, the community still lacks visibility into the robustness of the NTP ecosystem itself, the integrity of the timing information transmitted by NTP, and the impact that any error in NTP might have upon the security of other protocols that rely on timing information. In this thesis, we seek to accomplish the following broad goals:
1. Demonstrate that the current design presents a security risk, by showing that network attackers can exploit NTP and then use it to attack other core Internet protocols that rely on time.
2. Improve NTP to make it more robust, and rigorously analyze the security of the improved protocol.
3. Establish formal and precise security requirements that should be satisfied by a network time-synchronization protocol, and prove that these are sufficient for the security of other protocols that rely on time.
We take the following approach to achieve our goals incrementally.
1. We begin by (a) scrutinizing NTP's core protocol (RFC 5905) and (b) statically analyzing code of its reference implementation to identify vulnerabilities in protocol design, ambiguities in specifications, and flaws in reference implementations. We then leverage these observations to show several off- and on-path denial-of-service and time-shifting attacks on NTP clients. We then show cache-flushing and cache-sticking attacks on DNS(SEC) that leverage NTP. We quantify the attack surface using Internet measurements, and suggest simple countermeasures that can improve the security of NTP and DNS(SEC).
2. Next, we move beyond identifying attacks and leverage ideas from the Universal Composability (UC) security framework to develop a cryptographic model for attacks on NTP's datagram protocol. We use this model to prove the security of a new backwards-compatible protocol that correctly synchronizes time in the face of both off- and on-path network attackers.
3. Next, we propose general security notions for network time-synchronization protocols within the UC framework and formulate ideal functionalities that capture a number of prevalent forms of time measurement within existing systems. We show how they can be realized by real-world protocols (including but not limited to NTP), and how they can be used to assert security of time-reliant applications, specifically cryptographic certificates with revocation and expiration times. Our security framework allows for a clear and modular treatment of the use of time in security-sensitive systems.
Our work makes the core NTP protocol and its implementations more robust and secure, thus improving the security of applications and protocols that rely on time.
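The time-shifting attacks above target NTP's on-wire clock computation. As background, a minimal sketch of the standard offset and round-trip delay calculation from RFC 5905's four timestamps (the function name is ours; the formulas are the standard ones):

```python
def ntp_offset_delay(t1, t2, t3, t4):
    """Standard NTP clock offset and round-trip delay (RFC 5905).

    t1: client transmit time, t2: server receive time,
    t3: server transmit time, t4: client receive time (seconds).
    The formula assumes symmetric network delay; an on-path attacker
    who delays packets asymmetrically shifts the computed offset by
    up to half the added one-way delay.
    """
    offset = ((t2 - t1) + (t3 - t4)) / 2.0
    delay = (t4 - t1) - (t3 - t2)
    return offset, delay
```

The symmetric-delay assumption baked into `offset` is exactly what makes delay-manipulating on-path attackers interesting to model formally.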
Quaestor: Query web caching for database-as-a-service providers
Today, web performance is primarily governed by round-trip latencies between end devices and cloud services. To improve performance, services need to minimize the delay of accessing data. In this paper, we propose a novel approach to low latency that relies on existing content delivery and web caching infrastructure. The main idea is to enable application-independent caching of query results and records with tunable consistency guarantees, in particular bounded staleness. Quaestor (Query Store) employs two key concepts to incorporate both expiration-based and invalidation-based web caches: (1) an Expiring Bloom Filter data structure to indicate potentially stale data, and (2) statistically derived cache expiration times to maximize cache hit rates. Through a distributed query invalidation pipeline, changes to cached query results are detected in real time. The proposed caching algorithms offer a new means for data-centric cloud services to trade latency against staleness bounds, e.g. in a database-as-a-service. Quaestor is the core technology of the backend-as-a-service platform Baqend, a cloud service for low-latency websites. We provide empirical evidence for Quaestor's scalability and performance through both simulation and experiments. The results indicate that for read-heavy workloads, up to tenfold speed-ups can be achieved through Quaestor's caching.
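The Expiring Bloom Filter idea can be sketched as a Bloom filter whose slots hold expiration timestamps instead of bits, so stale-data reports age out automatically. This is an illustrative sketch under assumed parameters; the class name, slot count, and hash scheme are ours, not Quaestor's actual implementation.

```python
import hashlib
import time

class ExpiringBloomFilter:
    """Bloom filter whose slots hold expiration timestamps, not bits.

    A key reads as 'potentially stale' only while all of its slots hold
    a timestamp in the future, so entries expire without explicit
    deletion. False positives are possible, as in any Bloom filter.
    """

    def __init__(self, num_slots=1024, num_hashes=3, clock=time.time):
        self.slots = [0.0] * num_slots
        self.num_hashes = num_hashes
        self.clock = clock

    def _positions(self, key):
        # Derive k slot indices from k salted hashes of the key.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % len(self.slots)

    def report_stale(self, key, ttl):
        """Mark key as potentially stale for the next ttl seconds."""
        expires_at = self.clock() + ttl
        for pos in self._positions(key):
            self.slots[pos] = max(self.slots[pos], expires_at)

    def is_stale(self, key):
        """True if key may be stale; clients then bypass the cache."""
        now = self.clock()
        return all(self.slots[pos] > now for pos in self._positions(key))
```

A client that sees `is_stale(key)` return true would revalidate against the origin instead of trusting an expiration-based cache, which is how such a structure complements TTL-based caching.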
Continuous Nearest Neighbor Queries over Sliding Windows
Abstract—This paper studies continuous monitoring of nearest neighbor (NN) queries over sliding window streams. According to this model, data points continuously stream into the system, and they are considered valid only while they belong to a sliding window that contains 1) the W most recent arrivals (count-based) or 2) the arrivals within a fixed interval W covering the most recent time stamps (time-based). The task of the query processor is to constantly maintain the result of long-running NN queries among the valid data. We present two processing techniques that apply to both count-based and time-based windows. The first one adapts conceptual partitioning, the best existing method for continuous NN monitoring over update streams, to the sliding window model. The second technique reduces the problem to skyline maintenance in the distance-time space and precomputes the future changes in the NN set. We analyze the performance of both algorithms and extend them to variations of NN search. Finally, we compare their efficiency through a comprehensive experimental evaluation. The skyline-based algorithm achieves lower CPU cost, at the expense of slightly larger space overhead.
Index Terms—Location-dependent and sensitive, spatial databases, query processing, nearest neighbors, data streams, sliding windows.
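For concreteness, here is a brute-force baseline for the count-based problem on 1-D points: the NN must be recomputed whenever the window slides, since the old NN may have expired. This naive rescan is exactly the cost the paper's conceptual-partitioning and skyline-based techniques are designed to avoid; the function is our illustration, not the paper's algorithm.

```python
from collections import deque

def sliding_window_nn(stream, w, q):
    """Nearest neighbor of query point q over a count-based sliding
    window of the w most recent 1-D points (brute-force baseline).

    Returns the NN after each arrival. deque(maxlen=w) makes the
    oldest point expire automatically when the (w+1)-th arrives.
    """
    window = deque(maxlen=w)
    results = []
    for p in stream:
        window.append(p)
        # O(w) rescan per arrival; the paper's methods maintain the
        # answer incrementally instead.
        results.append(min(window, key=lambda x: abs(x - q)))
    return results
```

Note how the third step below loses point 1 from the window, forcing the NN to change even though no better point arrived; precomputing such expiration-driven changes is the idea behind the distance-time skyline technique.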
Web browsing optimization over 2.5G and 3G: end-to-end mechanisms vs. usage of performance enhancing proxies
Published version on Wiley's platform: https://onlinelibrary.wiley.com/doi/abs/10.1002/wcm.456
2.5 Generation (2.5G) and Third Generation (3G) cellular wireless networks allow mobile Internet access with bearers specifically designed for data communications. However, Internet protocols under-utilize wireless wide-area network (WWAN) link resources, mainly due to large round trip times (RTTs) and request–reply protocol patterns. Web browsing is a popular service that suffers significant performance degradation over 2.5G and 3G. In this paper, we review and compare the two main approaches for improving web browsing performance over wireless links: (i) using adequate end-to-end parameters and mechanisms and (ii) interposing a performance enhancing proxy (PEP) between the wireless and wired parts. We conclude that PEPs are currently the only feasible way for significantly optimizing web browsing behavior over 2.5G and 3G. In addition, we evaluate the two main current commercial PEPs over live general packet radio service (GPRS) and universal mobile telecommunications system (UMTS) networks. The results show that PEPs can lead to near-ideal web browsing performance in certain scenarios.
Grove: a Separation-Logic Library for Verifying Distributed Systems (Extended Version)
Grove is a concurrent separation logic library for verifying distributed systems. Grove is the first to handle time-based leases, including their interaction with reconfiguration, crash recovery, thread-level concurrency, and unreliable networks. This paper uses Grove to verify several distributed system components written in Go, including GroveKV, a realistic distributed multi-threaded key-value store. GroveKV supports reconfiguration, primary/backup replication, and crash recovery, and uses leases to execute read-only requests on any replica. GroveKV achieves high performance (67-73% of Redis on a single core), scales with more cores and more backup replicas (achieving about 2x the throughput when going from 1 to 3 servers), and can safely execute reads while reconfiguring.
Comment: Extended version of paper appearing at SOSP 202
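The lease mechanism that lets any replica serve reads can be sketched as a local validity check before answering. This is an illustrative Python sketch of the general lease pattern under an assumed bounded-clock-skew margin; GroveKV itself is written in Go, and its actual lease protocol and proof obligations are those of the paper.

```python
import time

class Replica:
    """Lease-protected reads on a backup replica (illustrative sketch).

    A replica serves a read locally only while it holds an unexpired
    lease; otherwise the request must be forwarded to the primary.
    skew_margin is an assumed safety margin against bounded clock skew.
    """

    def __init__(self, clock=time.monotonic, skew_margin=0.05):
        self.lease_expiry = 0.0
        self.clock = clock
        self.skew_margin = skew_margin
        self.store = {}

    def grant_lease(self, duration):
        self.lease_expiry = self.clock() + duration

    def read(self, key):
        # Serve locally only if the lease is valid even under
        # worst-case clock skew; otherwise fall back to the primary.
        if self.clock() + self.skew_margin < self.lease_expiry:
            return ("local", self.store.get(key))
        return ("forward-to-primary", None)
```

The interesting verification burden, which the abstract highlights, is exactly the interaction of this check with reconfiguration and crash recovery: a lease granted under an old configuration must not authorize a stale read after the system reconfigures.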
DotSlash: Providing Dynamic Scalability to Web Applications with On-demand Distributed Query Result Caching
Scalability poses a significant challenge for today's web applications, mainly due to the large population of potential users. To effectively address the problem of short-term dramatic load spikes caused by web hotspots, we developed a self-configuring and scalable rescue system called DotSlash. The primary goal of our system is to provide dynamic scalability to web applications by enabling a web site to obtain resources dynamically, and use them autonomically without any administrative intervention. To address the database server bottleneck, DotSlash allows a web site to set up on-demand distributed query result caching, which greatly reduces the database workload for read-mostly databases, and thus increases the request rate supported at a DotSlash-enabled web site. The novelty of our work is that our query result caching is on demand, and operates based on load conditions. The caching remains inactive as long as the load is normal, but is activated once the load is heavy. This approach offers good data consistency during normal load situations, and good scalability with relaxed data consistency for heavy load periods. We have built a prototype system for the widely used LAMP configuration, and evaluated our system using the RUBBoS bulletin board benchmark. Experiments show that a DotSlash-enhanced web site can improve the maximum request rate supported by a factor of 5 using 8 rescue servers for the RUBBoS submission mix, and by a factor of 10 using 15 rescue servers for the RUBBoS read-only mix.
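The load-driven on/off behavior described above is a hysteresis rule: caching switches on above a high-load threshold and off below a lower one, so it does not flap near a single threshold. The function and both threshold values are hypothetical illustrations; the paper only states that caching activates "once the load is heavy".

```python
def caching_decision(load, active=False, activate_at=0.8, deactivate_at=0.5):
    """On-demand caching switch with hysteresis (illustrative sketch).

    load: current utilization in [0, 1]; active: whether query result
    caching is currently enabled. Below the band caching stays off
    (full consistency); above it caching turns on (relaxed consistency,
    higher supported request rate).
    """
    if not active and load >= activate_at:
        return True   # load spike: activate distributed caching
    if active and load <= deactivate_at:
        return False  # load back to normal: restore strict consistency
    return active     # inside the hysteresis band: keep current state
```

Using two thresholds rather than one means a site hovering around the activation point does not repeatedly toggle between consistency modes.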
To NACK or not to NACK? Negative Acknowledgments in Information-Centric Networking
Information-Centric Networking (ICN) is an internetworking paradigm that offers an alternative to the current IP-based Internet architecture. ICN's most distinguishing feature is its emphasis on information (content) instead of communication endpoints. One important open issue in ICN is whether negative acknowledgments (NACKs) at the network layer are useful for notifying downstream nodes about forwarding failures, or requests for incorrect or non-existent information. In benign settings, NACKs are beneficial for ICN architectures, such as CCNx and NDN, since they flush state in routers and notify consumers. In terms of security, NACKs seem useful as they can help mitigate so-called Interest Flooding attacks. However, as we show in this paper, network-layer NACKs also have some unpleasant security implications. We consider several types of NACKs and discuss their security design requirements and implications. We also demonstrate that providing secure NACKs triggers the threat of producer-bound flooding attacks. Although we discuss some potential countermeasures to these attacks, the main conclusion of this paper is that network-layer NACKs are best avoided, at least for security reasons.
Comment: 10 pages, 7 figures
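The "flush state in routers and notify consumers" benefit can be made concrete with a minimal model of a Pending Interest Table (PIT). This sketch is illustrative of NDN/CCNx-style forwarding state in general; the class and its fields are our assumptions, not any router's actual implementation.

```python
class IcnRouter:
    """Minimal model of NACK handling against a Pending Interest Table.

    Interests for the same content name from different downstream faces
    are aggregated into one PIT entry. A NACK flushes that entry
    immediately (instead of waiting for a timeout) and is forwarded to
    every waiting face, so consumers learn of the failure promptly.
    """

    def __init__(self):
        self.pit = {}  # content name -> set of downstream faces awaiting data

    def on_interest(self, name, face):
        # Aggregate duplicate Interests under a single PIT entry.
        self.pit.setdefault(name, set()).add(face)

    def on_nack(self, name):
        # Flush router state for this name; return faces to NACK downstream.
        faces = self.pit.pop(name, set())
        return sorted(faces)
```

This prompt state flush is also why NACKs help against Interest Flooding, and, as the paper argues, why making NACKs trustworthy (e.g. signing them) shifts load onto producers and opens the producer-bound flooding threat.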