Active Internet Traffic Filtering: Real-time Response to Denial of Service Attacks
Denial of Service (DoS) attacks are one of the most challenging threats to
Internet security. An attacker typically compromises a large number of
vulnerable hosts and uses them to flood the victim's site with malicious
traffic, clogging its tail circuit and interfering with normal traffic. At
present, the network operator of a site under attack has no other resolution
but to respond manually by inserting filters in the appropriate edge routers to
drop attack traffic. However, as DoS attacks become increasingly sophisticated,
manual filter propagation becomes unacceptably slow or even infeasible.
In this paper, we present Active Internet Traffic Filtering, a new automatic
filter propagation protocol. We argue that this system provides a guaranteed,
significant level of protection against DoS attacks in exchange for a
reasonable, bounded amount of router resources. We also argue that the proposed
system cannot be abused by a malicious node to interfere with normal Internet
operation. Finally, we argue that it retains its efficiency in the face of
continued Internet growth.
Comment: Briefly describes the core ideas of AITF, a protocol for countering Denial of Service attacks. 6 pages long.
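The abstract above describes automatic filter propagation with bounded router resources. A minimal sketch of that idea follows: a toy edge router installs temporary drop filters on request, expires them automatically, and evicts the oldest entry when its bounded filter table fills. The class, its policy, and all names are illustrative assumptions, not the AITF wire protocol itself.

```python
import time
from collections import OrderedDict

class EdgeRouter:
    """Toy AITF-style edge router (assumed model, not the real protocol):
    temporary attack filters in a bounded table with automatic expiry."""

    def __init__(self, max_filters=1000, timeout=60.0):
        self.max_filters = max_filters   # bounded router resources
        self.timeout = timeout           # filters expire on their own
        self.filters = OrderedDict()     # (src, dst) -> expiry timestamp

    def request_filter(self, src, dst, now=None):
        """The victim's gateway asks us to drop src -> dst traffic."""
        now = time.time() if now is None else now
        self._expire(now)
        if len(self.filters) >= self.max_filters:
            # Evict the oldest filter to stay within the resource bound.
            self.filters.popitem(last=False)
        self.filters[(src, dst)] = now + self.timeout

    def forward(self, src, dst, now=None):
        """Return False (drop) if a matching filter is active."""
        now = time.time() if now is None else now
        self._expire(now)
        return (src, dst) not in self.filters

    def _expire(self, now):
        for key in [k for k, t in self.filters.items() if t <= now]:
            del self.filters[key]

r = EdgeRouter(max_filters=2, timeout=10.0)
r.request_filter("10.0.0.1", "victim", now=0.0)
assert not r.forward("10.0.0.1", "victim", now=1.0)   # attack dropped
assert r.forward("10.0.0.2", "victim", now=1.0)       # normal traffic passes
assert r.forward("10.0.0.1", "victim", now=20.0)      # filter has expired
```

The bounded table and automatic expiry correspond to the "reasonable, bounded amount of router resources" the paper argues for; a real deployment would also authenticate filter requests so that a malicious node cannot filter legitimate traffic.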
On the complexity of inverting integer and polynomial matrices
Abstract: An algorithm is presented that probabilistically computes the exact inverse of a nonsingular n × n integer matrix A using Õ(n^3 (log ||A|| + log κ(A))) bit operations. Here, ||A|| = max_{i,j} |A_{ij}| denotes the largest entry in absolute value, κ(A) := ||A^{-1}|| ||A|| is the condition number of the input matrix, and the soft-O notation Õ indicates some missing log n and log log ||A|| factors. A variation of the algorithm is presented for polynomial matrices: the inverse of any nonsingular n × n matrix whose entries are polynomials of degree d over a field can be computed using an expected number of Õ(n^3 d) field operations. Both algorithms are randomized of the Las Vegas type: fail may be returned with probability at most 1/2, and if fail is not returned, the output is certified to be correct within the same running-time bound.
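To make the exact-arithmetic setting concrete, here is a minimal sketch that inverts an integer matrix exactly over the rationals using Gauss-Jordan elimination with Python's `Fraction`. This naive deterministic method illustrates what "exact inverse" means; it is not the faster randomized Õ(n^3 ...) algorithm the abstract describes.

```python
from fractions import Fraction

def exact_inverse(A):
    """Exact inverse of a nonsingular integer matrix via Gauss-Jordan
    elimination over the rationals (naive illustration, not the paper's
    randomized algorithm). Raises StopIteration if A is singular."""
    n = len(A)
    # Augment [A | I] with exact rational entries.
    M = [[Fraction(A[i][j]) for j in range(n)] +
         [Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for col in range(n):
        # Pivoting: find a row with a nonzero pivot in this column.
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        inv_p = Fraction(1) / M[col][col]
        M[col] = [x * inv_p for x in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [row[n:] for row in M]

A = [[2, 1], [7, 4]]          # det = 1, so the inverse is integral
assert exact_inverse(A) == [[Fraction(4), Fraction(-1)],
                            [Fraction(-7), Fraction(2)]]
```

The cost of this naive approach is dominated by the growth of the rational entries, which is exactly the bit-complexity issue the abstract's Õ(n^3 (log ||A|| + log κ(A))) bound addresses.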
On the probability of finding non-interfering paths in wireless multihop networks
Abstract. Multipath routing can improve the performance of capacity-limited wireless networks through load balancing. However, even with a single source and destination, intra-flow and inter-flow interference can void any performance improvement. In this paper, we show that establishing non-interfering paths can, in theory, resolve this issue. In practice, however, finding non-interfering paths can be quite complex; in fact, we demonstrate that the problem of finding two non-interfering paths for a single source-destination pair is NP-complete. An interesting problem is therefore to determine whether, given a network topology, non-interfering multipath routing is appropriate. To address this issue, we provide an analytic approximation of the probability of finding two non-interfering paths. The correctness of the analysis is verified by simulations.
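While *finding* two non-interfering paths is NP-complete per the abstract, *verifying* a given pair is straightforward. The sketch below checks a candidate pair under a simple one-hop interference model (an assumed simplification: no intermediate node of one path may equal or neighbor an intermediate node of the other); the topology and function name are illustrative.

```python
def non_interfering(graph, p1, p2):
    """Verify that two paths (node lists sharing only their endpoints)
    are non-interfering under a one-hop interference model: no
    intermediate node of one path equals or neighbors an intermediate
    node of the other. Assumed model, for illustration only."""
    inner1 = set(p1[1:-1])
    inner2 = set(p2[1:-1])
    for u in inner1:
        if u in inner2:                       # shared intermediate node
            return False
        if inner2 & set(graph.get(u, ())):    # within interference range
            return False
    return True

# Adjacency list of a small assumed topology with three s -> d paths;
# the edge b-y makes the first two paths interfere with each other.
g = {
    "s": ["a", "x", "p"],
    "a": ["s", "b"], "b": ["a", "d", "y"],
    "x": ["s", "y"], "y": ["x", "d", "b"],
    "p": ["s", "q"], "q": ["p", "d"],
    "d": ["b", "y", "q"],
}
assert non_interfering(g, ["s", "a", "b", "d"], ["s", "p", "q", "d"])
assert not non_interfering(g, ["s", "a", "b", "d"], ["s", "x", "y", "d"])
```

The gap between cheap verification and NP-complete search is what motivates the paper's analytic approximation: estimating the probability that such a pair exists at all, before attempting to find one.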
Estimating Residual Error Rate in Recognized Handwritten Documents Using Artificial Error Injection
Abstract: Both handwriting recognition systems and their users are error prone. Handwriting recognizers make recognition errors, and users may miss those errors when verifying output. As a result, it is common for recognized documents to contain residual errors. Unfortunately, in some application domains (e.g. health informatics), tolerance for residual errors in recognized handwriting may be very low, and a desire might exist to maximize user accuracy during verification. In this paper, we present a technique that allows us to measure the performance of a user verifying recognizer output. We inject artificial errors into a set of recognized handwritten forms and show that the rates at which injected errors and recognition errors are caught are highly correlated in real time. Systems supporting user verification can make use of this measure of user accuracy in a variety of ways. For example, they can force users to slow down or can highlight injected errors that were missed, thus encouraging users to take more care.
Multiclass learnability and the ERM principle
Abstract: We study the sample complexity of multiclass prediction in several learning settings. For the PAC setting, our analysis reveals a surprising phenomenon: in sharp contrast to binary classification, there exist multiclass hypothesis classes for which some Empirical Risk Minimizers (ERM learners) have lower sample complexity than others. Furthermore, there are classes that are learnable by some ERM learners while other ERM learners fail to learn them. We propose a principle for designing good ERM learners, and use this principle to prove tight bounds on the sample complexity of learning symmetric multiclass hypothesis classes, i.e., classes that are invariant under permutations of label names. We further provide a characterization of mistake and regret bounds for multiclass learning in the online setting and the bandit setting, using new generalizations of Littlestone's dimension.
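To ground the ERM principle the abstract discusses, here is a minimal sketch of an ERM learner over a finite multiclass hypothesis class: it simply returns a hypothesis with the fewest mistakes on the sample. The toy class of constant predictors is an assumed example; note that *which* empirical minimizer ties are broken toward is precisely the degree of freedom that, per the abstract, makes some ERM learners better than others.

```python
def erm(hypotheses, sample):
    """Empirical Risk Minimization over a finite multiclass hypothesis
    class: return a hypothesis minimizing the number of mistakes on the
    sample. Minimal sketch; ties are broken by list order, which is one
    arbitrary choice among the many possible ERM learners."""
    def empirical_risk(h):
        return sum(h(x) != y for x, y in sample)
    return min(hypotheses, key=empirical_risk)

# Toy hypothesis class: constant predictors over labels {0, 1, 2}.
H = [lambda x, c=c: c for c in range(3)]
S = [(0, 2), (1, 2), (2, 2), (3, 1)]   # label 2 wins with one mistake
h = erm(H, S)
assert [h(x) for x, _ in S] == [2, 2, 2, 2]
```

For this symmetric toy class every tie-breaking rule behaves the same under relabeling, which is the invariance the abstract exploits for its tight sample-complexity bounds.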