How to Undo Things with Codes: New Writing Mechanisms and the Un/archivable Dis/appearing text
The discourses of criticism are being transformed at the same time that our writing mechanisms are undergoing a major change. Reflecting on the relationship between our writing tools and our perceptions, and taking programmability and interactivity as the defining characteristics of new writing media, this essay examines how these new scriptural techniques are undoing our perception of such notions as the archive and embodiment. The two works commented on here contain the conditions of unwriting their own written trace: the interactor who makes the text appear paradoxically also causes its disappearance through acts of destruction or dispersion. In AGRIPPA (A Book of The Dead), William Gibson reserves for the reader the role of destroyer of the text through an extreme gesture of interaction which destines the work to erasure and calls for the retrieval of a text that contains the conditions of its own death. In Gary Hill's Writing Corpora, the body's acts are created of, create, and are turned against writing; they embody and disperse the writing traces, while the body experiences the shift from inscription to embodiment.
2.5K-Graphs: from Sampling to Generation
Understanding network structure and having access to realistic graphs play a
central role in computer and social networks research. In this paper, we
propose a complete and practical methodology for generating graphs that
resemble a real graph of interest. The metrics of the original topology we
target to match are the joint degree distribution (JDD) and the
degree-dependent average clustering coefficient c̄(k). We start by
developing efficient estimators for these two metrics based on a node sample
collected via either independence sampling or random walks. Then, we process
the output of the estimators to ensure that the target properties are
realizable. Finally, we propose an efficient algorithm for generating
topologies that have the exact target JDD and a c̄(k) close to the
target. Extensive simulations using real-life graphs show that the graphs
generated by our methodology are similar to the original graph with respect
not only to the two target metrics, but also to a wide range of other
topological metrics; furthermore, our generator is orders of magnitude faster
than state-of-the-art techniques.
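
As a rough illustration of the two target metrics, here is a minimal Python
sketch (using networkx; names and the example graph are illustrative, not the
paper's estimators) that computes the JDD and c̄(k) on a fully known graph.
The paper's contribution is estimating these from node samples; the metric
definitions below are the same.

```python
# Illustrative sketch (not the paper's sampling-based estimators): compute the
# joint degree distribution (JDD) and the degree-dependent average clustering
# coefficient c̄(k) on a fully known graph.
from collections import Counter, defaultdict
import networkx as nx

def joint_degree_distribution(G):
    """Count edges by the (unordered) degree pair of their endpoints."""
    jdd = Counter()
    for u, v in G.edges():
        k, l = sorted((G.degree(u), G.degree(v)))
        jdd[(k, l)] += 1
    return jdd

def degree_dependent_clustering(G):
    """Average the local clustering coefficient over all nodes of degree k."""
    by_degree = defaultdict(list)
    local = nx.clustering(G)              # local clustering per node
    for n, c in local.items():
        by_degree[G.degree(n)].append(c)
    return {k: sum(cs) / len(cs) for k, cs in by_degree.items()}

G = nx.barabasi_albert_graph(1000, 3)     # stand-in for a real graph of interest
jdd = joint_degree_distribution(G)
cbar = degree_dependent_clustering(G)
print(max(jdd.items(), key=lambda kv: kv[1]), cbar.get(3))
```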
Towards Unbiased BFS Sampling
Breadth First Search (BFS) is a widely used approach for sampling large
unknown Internet topologies. Its main advantage over random walks and other
exploration techniques is that a BFS sample is a plausible graph on its own,
and therefore we can study its topological characteristics. However, it has
been empirically observed that incomplete BFS is biased toward high-degree
nodes, which may strongly affect the measurements. In this paper, we first
analytically quantify the degree bias of BFS sampling. In particular, we
calculate the node degree distribution expected to be observed by BFS as a
function of the fraction f of covered nodes, in a random graph RG(p_k) with an
arbitrary degree distribution p_k. We also show that, for RG(p_k), all commonly
used graph traversal techniques (BFS, DFS, Forest Fire, Snowball Sampling, RDS)
suffer from exactly the same bias. Next, based on our theoretical analysis, we
propose a practical BFS-bias correction procedure. It takes as input a
collected BFS sample together with its fraction f. Even though RG(p_k) does not
capture many graph properties common in real-life graphs (such as
assortativity), our RG(p_k)-based correction technique performs well on a broad
range of Internet topologies and on two large BFS samples of Facebook and Orkut
networks. Finally, we consider and evaluate a family of alternative correction
procedures, and demonstrate that, although they are unbiased for an arbitrary
topology, their large variance makes them far less effective than the
RG(p_k)-based technique.
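
The degree bias itself is easy to observe empirically. Below is a hedged
Python sketch (networkx; this demonstrates the bias the paper quantifies, it
is not the paper's RG(p_k)-based correction procedure): an incomplete BFS over
a configuration-model random graph covers a fraction f of the nodes, and the
mean degree of the covered nodes skews well above the true mean.

```python
# Illustrative sketch: observe the degree bias of incomplete BFS on a
# configuration-model random graph RG(p_k). NOT the paper's correction.
from collections import deque
import random
import networkx as nx

def bfs_sample(G, start, budget):
    """Run BFS from `start` until `budget` nodes are covered; return them."""
    seen, queue = {start}, deque([start])
    while queue and len(seen) < budget:
        u = queue.popleft()
        for v in G.neighbors(u):
            if v not in seen:
                seen.add(v)
                queue.append(v)
                if len(seen) >= budget:
                    break
    return seen

random.seed(0)
degrees = [max(1, int(random.paretovariate(2.0))) for _ in range(10000)]
if sum(degrees) % 2:                         # degree sum must be even
    degrees[0] += 1
G = nx.Graph(nx.configuration_model(degrees))  # simple-graph projection
G.remove_edges_from(nx.selfloop_edges(G))

f = 0.1                                      # fraction of nodes to cover
sample = bfs_sample(G, next(iter(G)), int(f * G.number_of_nodes()))
true_mean = sum(d for _, d in G.degree()) / G.number_of_nodes()
obs_mean = sum(G.degree(v) for v in sample) / len(sample)
print(f"true mean degree {true_mean:.2f}, BFS-observed {obs_mean:.2f}")
```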
Intra- and Inter-Session Network Coding in Wireless Networks
In this paper, we are interested in improving the performance of constructive
network coding schemes in lossy wireless environments. We propose I2NC, a
cross-layer approach that combines inter-session and intra-session network
coding and has two strengths. First, the error-correcting capabilities of
intra-session network coding make our scheme resilient to loss. Second,
redundancy allows intermediate nodes to operate without knowledge of the
decoding buffers of their neighbors. Based only on the knowledge of the loss
rates on the direct and overhearing links, intermediate nodes can make
decisions for both intra-session (i.e., how much redundancy to add in each
flow) and inter-session (i.e., what percentage of flows to code together)
coding. Our approach is grounded in a network utility maximization (NUM)
formulation of the problem. We propose two practical schemes, I2NC-state and
I2NC-stateless, which mimic the structure of the NUM optimal solution. We also
address the interaction of our approach with the transport layer. We
demonstrate the benefits of our schemes through simulations.
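
For readers unfamiliar with inter-session coding, here is a minimal Python
sketch of its common XOR building block (COPE-style coding between two flows;
helper names are illustrative, and this is not the paper's I2NC-state or
I2NC-stateless scheme): a relay XORs packets from two crossing flows into one
transmission, and each receiver decodes using the packet it already knows.

```python
# Illustrative sketch of XOR inter-session network coding (COPE-style), a
# common building block of schemes like I2NC. Hypothetical helper names.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length packets."""
    return bytes(x ^ y for x, y in zip(a, b))

# Two flows cross at a relay: A -> relay -> B carries p1, and B -> relay -> A
# carries p2. Each endpoint knows the packet it sent (or overheard).
p1 = b"flow-A-payload--"
p2 = b"flow-B-payload--"

coded = xor_bytes(p1, p2)        # the relay broadcasts ONE coded packet
                                 # instead of forwarding p1 and p2 separately

# Receiver B knows p2 and recovers p1; receiver A symmetrically recovers p2.
assert xor_bytes(coded, p2) == p1
assert xor_bytes(coded, p1) == p2
print("both flows decoded from a single coded transmission")
```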
PhishDef: URL Names Say It All
Phishing is an increasingly sophisticated method to steal personal user
information using sites that pretend to be legitimate. In this paper, we take
the following steps to identify phishing URLs. First, we carefully select
lexical features of the URLs that are resistant to obfuscation techniques used
by attackers. Second, we evaluate the classification accuracy when using only
lexical features, both automatically selected and hand-selected, vs. when using
additional features. We show that lexical features are sufficient for all
practical purposes. Third, we thoroughly compare several classification
algorithms, and we propose to use an online method (AROW) that is able to
overcome noisy training data. Based on the insights gained from our analysis,
we propose PhishDef, a phishing detection system that uses only URL names and
combines the above three elements. PhishDef is a highly accurate method (when
compared to state-of-the-art approaches over real datasets), lightweight (thus
appropriate for online and client-side deployment), proactive (based on online
classification rather than blacklists), and resilient to training data
inaccuracies (thus enabling the use of large noisy training data).
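
To make the pipeline concrete, here is a hedged Python sketch combining a few
hand-picked lexical URL features with a diagonal-covariance AROW-style online
learner (after Crammer et al., 2009). The feature choices and names below are
illustrative stand-ins, not PhishDef's actual feature set or implementation.

```python
# Hedged sketch: lexical URL features plus a diagonal AROW online learner.
# Labels: y = -1 for phishing, y = +1 for legitimate (arbitrary convention).
import re
from urllib.parse import urlparse

def lexical_features(url: str) -> list:
    """A few obfuscation-resistant lexical features of a URL (illustrative)."""
    p = urlparse(url if "://" in url else "http://" + url)
    host, path = p.netloc, p.path
    return [
        1.0,                                            # bias term
        len(url) / 100.0,                               # overall length (scaled)
        float(host.count(".")),                         # subdomain depth
        float(host.count("-")),                         # hyphens in hostname
        sum(c.isdigit() for c in url) / 10.0,           # digit density
        1.0 if re.fullmatch(r"[\d.]+", host) else 0.0,  # raw-IP hostname
        float(len([t for t in path.split("/") if t])),  # path token count
    ]

class DiagonalAROW:
    """Diagonal-covariance AROW for binary labels y in {-1, +1}."""
    def __init__(self, dim: int, r: float = 1.0):
        self.mu = [0.0] * dim        # weight means
        self.sigma = [1.0] * dim     # per-weight confidence (variance)
        self.r = r                   # regularization parameter

    def predict(self, x):
        return sum(m * xi for m, xi in zip(self.mu, x))

    def update(self, x, y):
        m = self.predict(x)
        if m * y >= 1.0:             # confident and correct: no update
            return
        v = sum(s * xi * xi for s, xi in zip(self.sigma, x))
        beta = 1.0 / (v + self.r)
        alpha = max(0.0, 1.0 - y * m) * beta
        for i, xi in enumerate(x):
            self.mu[i] += alpha * y * self.sigma[i] * xi
            self.sigma[i] -= beta * (self.sigma[i] * xi) ** 2

clf = DiagonalAROW(dim=7)
clf.update(lexical_features("http://192.168.0.1/paypal-login/verify"), -1)
clf.update(lexical_features("https://www.example.com/about"), +1)
print(clf.predict(lexical_features("http://10.0.0.5/bank-update/now")))
```

The confidence term is what lets an AROW-style learner absorb noisy labels:
low-confidence (high-variance) weights move a lot on new evidence, while
well-established weights resist being dragged around by mislabeled examples.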