17,638 research outputs found
Long-range dependencies in reading memory pages in the man-computer system interaction
Modern computer systems consist of many interdependent subsystems. There are two possible approaches to their analysis. The first is based on the idea of reductionism proposed by Descartes [1] and can be considered the still-ruling paradigm in computer science and engineering. The second is related to the still new idea of the complex systems approach, where, in order to understand the behaviour of such systems, one needs knowledge not only of the behaviour of the system components but also, more importantly, of how they act together [2, 3, 4, 5, 6, 7, 8]. This paradigm shift can even be seen in the move from the idea of Turing machines to the treatment of interactive processing; the opinion has even been voiced that “computer engineering is not a mathematical science” [9]. It should be noted that the Turing machine is a mathematical idea [10] while its implementations are physical ones [11]; once the physical nature of computer systems is acknowledged, an appropriate physical (thermodynamic) basis for such deliberations is needed. In [12] it has been shown that the analysis of processes in computer systems can be based on non-extensive thermodynamics. However, that analysis considers only spatial correlations, whereas in this paper we focus on time dependencies leading to a long-memory effect. The paper is divided into 5 sections. In Section 2 the concept of the hierarchical structure of the memory system in a computer is presented; this proposal was given in [11], where its influence on computer thermodynamics was discussed. Section 3 gives a short survey of long-range dependencies, while Section 4 discusses the practical use of the theoretical considerations. The last section concludes the paper.
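Long-memory effects of the kind mentioned above are commonly quantified with a self-similarity (Hurst) exponent estimated from a time series. As a rough illustration only, and not the paper's own method, the sketch below estimates a Hurst exponent via rescaled-range (R/S) analysis on a synthetic series standing in for memory-page read times; all names and the data are hypothetical, and an estimate well above 0.5 would be consistent with long-range dependence.

```python
# Minimal sketch: rescaled-range (R/S) estimate of the Hurst exponent.
# Synthetic data only; in practice the series would be measured page-read times.
import numpy as np

def hurst_rs(x, min_chunk=8):
    """Rescaled-range estimate of the Hurst exponent of a 1-D series."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    sizes, rs_values = [], []
    size = min_chunk
    while size <= n // 2:
        rs_per_chunk = []
        for start in range(0, n - size + 1, size):
            chunk = x[start:start + size]
            dev = np.cumsum(chunk - chunk.mean())   # cumulative deviations
            r = dev.max() - dev.min()               # range of deviations
            s = chunk.std()                         # standard deviation
            if s > 0:
                rs_per_chunk.append(r / s)
        if rs_per_chunk:
            sizes.append(size)
            rs_values.append(np.mean(rs_per_chunk))
        size *= 2
    # Slope of log(R/S) versus log(window size) approximates H.
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_values), 1)
    return slope

# A strongly correlated series yields an estimate well above 0.5.
rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=4096)) + rng.normal(size=4096)
print(f"Estimated Hurst exponent: {hurst_rs(series):.2f}")
```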
A practical guide to computer simulations
Here practical aspects of conducting research via computer simulations are
discussed. The following issues are addressed: software engineering,
object-oriented software development, programming style, macros, make files,
scripts, libraries, random numbers, testing, debugging, data plotting, curve
fitting, finite-size scaling, information retrieval, and preparing
presentations.
Because of the limited space, usually only short introductions to the
specific areas are given and references to more extensive literature are cited.
All examples of code are in C/C++.
Comment: 69 pages, with permission of Wiley-VCH, see http://www.wiley-vch.de
(some screenshots with poor quality due to arXiv size restrictions). A
comprehensively extended version will appear in spring 2009 as a book at
World Scientific, see http://www.worldscibooks.com/physics/6988.htm
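To illustrate the kind of topics the guide lists (random numbers, testing, curve fitting), here is a minimal sketch in Python rather than the book's C/C++: a seeded, reproducible Monte Carlo estimate whose error scaling is then checked with a simple least-squares fit. All names and parameters are illustrative, not taken from the guide.

```python
# Minimal sketch: reproducible Monte Carlo estimate plus a scaling check via fitting.
import numpy as np

def mc_pi(n_samples, seed):
    """Estimate pi by sampling points in the unit square (reproducible via seed)."""
    rng = np.random.default_rng(seed)
    pts = rng.random((n_samples, 2))
    inside = np.sum(pts[:, 0]**2 + pts[:, 1]**2 <= 1.0)
    return 4.0 * inside / n_samples

# The statistical error should scale roughly as 1/sqrt(N); check with a fit.
sizes = np.array([10**k for k in range(2, 6)])
errors = np.array([np.mean([abs(mc_pi(n, seed=s) - np.pi) for s in range(20)])
                   for n in sizes])
# Fit log(error) = a*log(N) + b; the exponent a should come out near -0.5.
a, b = np.polyfit(np.log(sizes), np.log(errors), 1)
print(f"fitted scaling exponent: {a:.2f} (expected about -0.5)")
```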
Cognitive constraints and island effects
Competence-based theories of island effects play a central role in generative grammar, yet the graded nature of many syntactic islands has never been properly accounted for. Categorical syntactic accounts of island effects have persisted in spite of a wealth of data suggesting that island effects are not categorical in nature and that nonstructural manipulations that leave island structures intact can radically alter judgments of island violations. We argue here, building on work by Paul Deane, Robert Kluender, and others, that processing factors have the potential to account for this otherwise unexplained variation in acceptability judgments.
We report the results of self-paced reading experiments and controlled acceptability studies that explore the relationship between processing costs and judgments of acceptability. In each of the three self-paced reading studies, the data indicate that the processing cost of different types of island violations can be significantly reduced to a degree comparable to that of nonisland filler-gap constructions by manipulating a single nonstructural factor. Moreover, this reduction in processing cost is accompanied by significant improvements in acceptability. This evidence favors the hypothesis that island-violating constructions involve numerous processing pressures that aggregate to drive processing difficulty above a threshold, resulting in unacceptability. We examine the implications of these findings for the grammar of filler-gap dependencies.
Semantic Object Parsing with Local-Global Long Short-Term Memory
Semantic object parsing is a fundamental task for understanding objects in
detail in the computer vision community, where incorporating multi-level contextual
information is critical for achieving such fine-grained pixel-level
recognition. Prior methods often leverage the contextual information through
post-processing predicted confidence maps. In this work, we propose a novel
deep Local-Global Long Short-Term Memory (LG-LSTM) architecture to seamlessly
incorporate short-distance and long-distance spatial dependencies into the
feature learning over all pixel positions. In each LG-LSTM layer, local
guidance from neighboring positions and global guidance from the whole image
are imposed on each position to better exploit complex local and global
contextual information. Individual LSTMs for distinct spatial dimensions are
also utilized to intrinsically capture various spatial layouts of semantic
parts in the images, yielding distinct hidden and memory cells of each position
for each dimension. In our parsing approach, several LG-LSTM layers are stacked
and appended to the intermediate convolutional layers to directly enhance
visual features, allowing network parameters to be learned in an end-to-end
way. The long chains of sequential computation by stacked LG-LSTM layers also
enable each pixel to sense a much larger region for inference, benefiting from
the memorization of previous dependencies in all positions along all
dimensions. Comprehensive evaluations on three public datasets clearly demonstrate
the significant superiority of our LG-LSTM over other state-of-the-art methods. Comment: 10 pages
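As a rough, hypothetical sketch of the idea rather than the authors' exact LG-LSTM formulation, the layer below updates every spatial position with LSTM-style gates driven by two terms: local guidance from a 3x3 neighbourhood and global guidance from a whole-image summary broadcast back to all positions. Class and variable names are illustrative.

```python
# Toy layer combining local and global guidance in an LSTM-style per-pixel update.
import torch
import torch.nn as nn

class SimplifiedLGLSTMLayer(nn.Module):
    def __init__(self, channels, hidden):
        super().__init__()
        self.hidden = hidden
        # Local guidance: gates computed from each position's 3x3 neighbourhood.
        self.local = nn.Conv2d(channels + hidden, 4 * hidden, kernel_size=3, padding=1)
        # Global guidance: gates computed from a whole-image summary vector.
        self.global_ = nn.Linear(channels + hidden, 4 * hidden)

    def forward(self, x, state=None):
        b, _, h, w = x.shape
        if state is None:
            h_t = x.new_zeros(b, self.hidden, h, w)
            c_t = x.new_zeros(b, self.hidden, h, w)
        else:
            h_t, c_t = state
        z = torch.cat([x, h_t], dim=1)
        gates_local = self.local(z)                               # (b, 4*hidden, h, w)
        gates_global = self.global_(z.mean(dim=(2, 3)))[:, :, None, None]
        i, f, o, g = torch.chunk(gates_local + gates_global, 4, dim=1)
        c_t = torch.sigmoid(f) * c_t + torch.sigmoid(i) * torch.tanh(g)
        h_t = torch.sigmoid(o) * torch.tanh(c_t)
        return h_t, (h_t, c_t)

# Usage: stack such layers on top of intermediate CNN features and train end-to-end.
feats = torch.randn(2, 64, 32, 32)                   # hypothetical backbone features
layer = SimplifiedLGLSTMLayer(channels=64, hidden=32)
out, state = layer(feats)
print(out.shape)                                      # torch.Size([2, 32, 32, 32])
```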
PerfWeb: How to Violate Web Privacy with Hardware Performance Events
The browser history reveals highly sensitive information about users, such as
financial status, health conditions, or political views. Private browsing modes
and anonymity networks are consequently important tools to preserve the privacy
not only of regular users but in particular of whistleblowers and dissidents.
Yet, in this work we show how a malicious application can infer opened websites
from Google Chrome in Incognito mode and from Tor Browser by exploiting
hardware performance events (HPEs). In particular, we analyze the browsers'
microarchitectural footprint with the help of advanced Machine Learning
techniques: k-th Nearest Neighbors, Decision Trees, Support Vector Machines,
and in contrast to previous literature also Convolutional Neural Networks. We
profile 40 different websites, 30 of the top Alexa sites and 10 whistleblowing
portals, on two machines featuring an Intel and an ARM processor. By monitoring
retired instructions, cache accesses, and bus cycles for at most 5 seconds, we
manage to classify the selected websites with a success rate of up to 86.3%.
The results show that hardware performance events can clearly undermine the
privacy of web users. We therefore propose mitigation strategies that impede
our attacks and still allow legitimate use of HPEs.
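To make the classification step concrete, the following hedged sketch (not the paper's code) trains a k-nearest-neighbours classifier on synthetic feature vectors standing in for per-visit summaries of hardware performance event counts; in the actual attack these features come from counters such as retired instructions, cache accesses and bus cycles sampled while a page loads. All sizes and names below are assumptions for illustration.

```python
# Minimal sketch: website fingerprinting from (synthetic) HPE feature vectors.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
n_sites, visits_per_site, n_features = 40, 30, 64   # 40 profiled websites

# Each website gets a hypothetical mean HPE profile; visits are noisy samples of it.
site_profiles = rng.normal(size=(n_sites, n_features))
X = np.vstack([p + 0.3 * rng.normal(size=(visits_per_site, n_features))
               for p in site_profiles])
y = np.repeat(np.arange(n_sites), visits_per_site)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0,
                                          stratify=y)
clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print(f"website identification accuracy: {clf.score(X_te, y_te):.1%}")
```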
AI Solutions for MDS: Artificial Intelligence Techniques for Misuse Detection and Localisation in Telecommunication Environments
This report considers the application of Artificial Intelligence (AI) techniques to
the problem of misuse detection and misuse localisation within telecommunications
environments. A broad survey of techniques is provided that covers, inter alia,
rule-based systems, model-based systems, case-based reasoning, pattern matching,
clustering and feature extraction, artificial neural networks, genetic algorithms,
artificial immune systems, agent-based systems, data mining and a variety of hybrid
approaches. The report then considers the central issue of event correlation, which
is at the heart of many misuse detection and localisation systems. The notion of
being able to infer misuse by correlating individual, temporally distributed
events within a multiple-data-stream environment is explored, together with a range
of techniques covering model-based approaches, `programmed' AI and machine learning
paradigms. It is found that, in general, correlation is best achieved via rule-based approaches,
but that these suffer from a number of drawbacks, such as the difficulty of
developing and maintaining an appropriate knowledge base, and the lack of ability
to generalise from known misuses to new unseen misuses. Two distinct approaches
are evident. One attempts to encode knowledge of known misuses, typically within
rules, and use this to screen events. This approach cannot generally detect misuses
for which it has not been programmed, i.e. it is prone to issuing false negatives.
The other attempts to `learn' the features of event patterns that constitute normal
behaviour, and, by observing patterns that do not match expected behaviour, detect
when a misuse has occurred. This approach is prone to issuing false positives,
i.e. inferring misuse from innocent patterns of behaviour that the system was not
trained to recognise. Contemporary approaches are seen to favour hybridisation,
often combining detection or localisation mechanisms for both abnormal and normal
behaviour, the former to capture known cases of misuse, the latter to capture
unknown cases. In some systems, these mechanisms even work together to update
each other to increase detection rates and lower false positive rates. It is concluded
that hybridisation offers the most promising future direction, but that a rule- or
state-based component is likely to remain, being the most natural approach to the
correlation of complex events. The challenge, then, is to mitigate the weaknesses of
canonical programmed systems such that learning, generalisation and adaptation
are more readily facilitated.
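As an illustration of the two complementary strategies described above and of their hybridisation, the toy sketch below combines a small rule base for known misuse signatures with a simple anomaly score learned from "normal" behaviour; all event fields, rules and thresholds are hypothetical and not drawn from the report.

```python
# Toy hybrid misuse detector: signature rules plus a learned anomaly score.
import statistics

KNOWN_MISUSE_RULES = [
    ("repeated_auth_failure", lambda e: e["type"] == "auth" and e["failures"] >= 5),
    ("premium_rate_burst", lambda e: e["type"] == "call" and e["premium"]
                                     and e["duration_s"] < 2),
]

class HybridDetector:
    def __init__(self, normal_durations):
        # Learn a trivial model of normal behaviour: mean/stdev of call durations.
        self.mu = statistics.mean(normal_durations)
        self.sigma = statistics.stdev(normal_durations)

    def check(self, event):
        # 1) Signature component: catches known misuses, misses novel ones
        #    (prone to false negatives).
        for name, rule in KNOWN_MISUSE_RULES:
            if rule(event):
                return f"known misuse: {name}"
        # 2) Anomaly component: flags deviations from learned normal behaviour
        #    (prone to false positives).
        if event["type"] == "call":
            z = abs(event["duration_s"] - self.mu) / self.sigma
            if z > 3.0:
                return f"anomalous behaviour (z-score {z:.1f})"
        return "ok"

detector = HybridDetector(normal_durations=[60, 120, 90, 300, 45, 180, 75, 240])
events = [
    {"type": "auth", "failures": 7},
    {"type": "call", "duration_s": 5400, "premium": False},
    {"type": "call", "duration_s": 110, "premium": False},
]
for e in events:
    print(detector.check(e))
```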