Unified functional network and nonlinear time series analysis for complex systems science: The pyunicorn package
We introduce the \texttt{pyunicorn} (Pythonic unified complex network and
recurrence analysis toolbox) open source software package for applying and
combining modern methods of data analysis and modeling from complex network
theory and nonlinear time series analysis. \texttt{pyunicorn} is a fully
object-oriented and easily parallelizable package written in the language
Python. It allows for the construction of functional networks such as climate
networks in climatology or functional brain networks in neuroscience
representing the structure of statistical interrelationships in large data sets
of time series and, subsequently, investigating this structure using advanced
methods of complex network theory such as measures and models for spatial
networks, networks of interacting networks, node-weighted statistics or network
surrogates. Additionally, \texttt{pyunicorn} provides insights into the
nonlinear dynamics of complex systems as recorded in uni- and multivariate time
series from a non-traditional perspective by means of recurrence quantification
analysis (RQA), recurrence networks, visibility graphs and construction of
surrogate time series. The range of possible applications of the library is
outlined, drawing on several examples mainly from the field of climatology.
Comment: 28 pages, 17 figures
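The core idea behind one of the toolbox's non-traditional methods, the recurrence network, can be sketched in a few lines of plain Python: delay-embed a scalar time series, link states whose distance falls below a threshold, and read network statistics off the result. This is an illustrative sketch only, with a hypothetical helper function; pyunicorn's own classes in \texttt{pyunicorn.timeseries} provide optimized implementations of this and many related measures.

```python
import math


def recurrence_network(series, dim=2, tau=1, eps=0.5):
    """Return the adjacency list of a recurrence network.

    Each node is a delay-embedded state vector; two nodes are linked
    if their supremum-norm distance is below eps (self-links excluded).
    """
    states = [
        tuple(series[i + k * tau] for k in range(dim))
        for i in range(len(series) - (dim - 1) * tau)
    ]
    n = len(states)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            dist = max(abs(a - b) for a, b in zip(states[i], states[j]))
            if dist < eps:
                adj[i].add(j)
                adj[j].add(i)
    return adj


# A noisy-free sine wave revisits similar states once per period, so its
# recurrence network links temporally distant but dynamically close nodes.
x = [math.sin(0.3 * t) for t in range(100)]
net = recurrence_network(x, dim=2, tau=3, eps=0.2)
degrees = [len(nb) for nb in net.values()]
print(max(degrees), sum(degrees) / len(degrees))
```

The structure of this network (transitivity, path lengths, degree distribution) then characterizes the underlying dynamics, which is what the RQA and recurrence-network machinery in the package quantifies at scale.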
Challenges and Open Questions of Machine Learning in Computer Security
This habilitation thesis presents advancements in machine learning for computer security,
arising from problems in network intrusion detection and steganography.
The thesis puts an emphasis on explaining the traits shared by steganalysis, network intrusion
detection, and other security domains, which make these domains different from
computer vision, speech recognition, and other fields where machine learning is typically
studied. The thesis then presents methods developed to at least partially solve the identified
problems, with the overall goal of making machine-learning-based intrusion detection
systems viable. Most of them are general in the sense that they can be used outside intrusion
detection and steganalysis on problems with similar constraints.
A common feature of all the methods is that they are simple, yet surprisingly
effective. In large-scale experiments they almost always improve on the prior art,
likely because they are tailored to security problems and designed for large volumes
of data.
Specifically, the thesis addresses the following problems:
anomaly detection with low computational and memory complexity, so that efficient
processing of large data is possible;
multiple-instance anomaly detection improving the signal-to-noise ratio by classifying
larger groups of samples;
supervised classification of tree-structured data, simplifying their encoding in neural
networks;
clustering of structured data;
supervised training with an emphasis on precision in the top p% of returned data;
and finally, explanation of anomalies to help humans understand their nature
and speed up their decisions.
Many of the algorithms and methods presented in this thesis are deployed in a real intrusion
detection system protecting millions of computers around the globe.
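The intuition behind the multiple-instance idea above can be illustrated with a toy sketch (not the thesis's actual algorithm, and with made-up numbers): averaging per-instance anomaly scores over a bag of related samples shrinks the score variance roughly by a factor of the bag size, which raises the signal-to-noise ratio compared with judging each instance alone.

```python
import random
import statistics

random.seed(0)

def instance_score(is_anomalous):
    # Hypothetical noisy detector: anomalies shift the score by +1.0,
    # but unit-variance noise swamps any single observation.
    return (1.0 if is_anomalous else 0.0) + random.gauss(0.0, 1.0)

def bag_score(labels):
    # Bag-level score: average over all instances from the same source
    # (e.g. all network flows of one host in a time window).
    return statistics.mean(instance_score(y) for y in labels)

# A single noisy instance is often misleading; a bag of 25 rarely is,
# since the averaged noise has standard deviation 1/sqrt(25) = 0.2.
normal_bags = [bag_score([False] * 25) for _ in range(200)]
anomal_bags = [bag_score([True] * 25) for _ in range(200)]
threshold = 0.5
fp = sum(s > threshold for s in normal_bags)
tp = sum(s > threshold for s in anomal_bags)
print(fp, tp)
```

With the threshold 2.5 averaged-noise standard deviations away from both class means, bag-level classification separates the classes almost perfectly, whereas the same threshold applied per instance would sit only half a standard deviation from each mean.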
The University Defence Research Collaboration In Signal Processing
This chapter describes the development of algorithms for automatic detection of anomalies from multi-dimensional, undersampled and incomplete datasets. The challenge in this work is to identify and classify behaviours as normal or abnormal, safe or threatening, from an irregular and often heterogeneous sensor network. Many defence and civilian applications can be modelled as complex networks of interconnected nodes with unknown or uncertain spatio-temporal relations. The behaviour of such heterogeneous networks can exhibit dynamic properties, reflecting evolution in both network structure (new nodes appearing and existing nodes disappearing) and inter-node relations.
The UDRC work has addressed not only the detection of anomalies, but also the identification of their nature and their statistical characteristics. Normal patterns and changes in behaviour have been incorporated to provide an acceptable balance between true positive rate, false positive rate, performance and computational cost. Data quality measures have been used to ensure the models of normality are not corrupted by unreliable and ambiguous data. The context of each node's activity in a complex network offers an even more efficient anomaly detection mechanism. This has allowed the development of efficient approaches which not only detect anomalies but also go on to classify their behaviour.
The Sixth Annual Workshop on Space Operations Applications and Research (SOAR 1992)
This document contains papers presented at the Space Operations, Applications, and Research Symposium (SOAR) hosted by the U.S. Air Force (USAF) on 4-6 Aug. 1992 and held at the JSC Gilruth Recreation Center. The symposium was cosponsored by the Air Force Materiel Command and by NASA/JSC. Key technical areas covered during the symposium were robotics and telepresence, automation and intelligent systems, human factors, life sciences, and space maintenance and servicing. The SOAR differed from most other conferences in that it was concerned with Government-sponsored research and development relevant to aerospace operations. The symposium's proceedings include papers covering various disciplines presented by experts from NASA, the USAF, universities, and industry.
31st International Conference on Information Modelling and Knowledge Bases
Information modelling is becoming a more and more important topic for researchers, designers, and users of information systems. The amount and complexity of information itself, the number of abstraction levels of information, and the size of databases and knowledge bases are continuously growing. Conceptual modelling is one of the sub-areas of information modelling. The aim of this conference is to bring together experts from different areas of computer science and other disciplines who have a common interest in understanding and solving problems on information modelling and knowledge bases, as well as in applying the results of research to practice. We also aim to recognize and study new areas of modelling and knowledge bases to which more attention should be paid. Therefore philosophy and logic, cognitive science, knowledge management, linguistics, and management science are relevant areas, too. In the conference, there will be three categories of presentations, i.e. full papers, short papers, and position papers.
Reliable and safe autonomy for ground vehicles in unstructured environments
This thesis is concerned with the algorithms and systems that are required to enable safe autonomous operation of an unmanned ground vehicle (UGV) in an unstructured and unknown environment; one in which there is no specific infrastructure to assist the vehicle autonomy and complete a priori information is not available. Under these conditions it is necessary for an autonomous system to perceive the surrounding environment, in order to perform safe and reliable control actions with respect to the context of the vehicle, its task and the world. Specifically, exteroceptive sensors measure physical properties of the world. This information is interpreted to extract a higher-level perception, then mapped to provide a consistent spatial context. This map of perceived information forms an integral part of the autonomous UGV (AUGV) control system architecture; therefore any perception or mapping errors reduce the reliability and safety of the system. Currently, commercially viable autonomous systems achieve the requisite level of reliability and safety by using strong structure within their operational environment. This permits the use of powerful assumptions about the world, which greatly simplify the perception requirements. For example, in an urban context, things that look approximately like roads are roads. In an indoor environment, vertical structure must be avoided and everything else is traversable. By contrast, when this structure is not available, little can be assumed and the burden on perception is very large. In these cases, reliability and safety must currently be provided by a tightly integrated human supervisor. The major contribution of this thesis is a holistic approach to identifying and mitigating the primary sources of error in typical AUGV sensor feedback systems (comprising perception and mapping), to promote reliability and safety.
This includes an analysis of the geometric and temporal errors that occur in the coordinate transformations that are required for mapping, and methods to minimise these errors in real systems. Interpretive errors are also studied and methods to mitigate them are presented. These methods combine information-theoretic measures with multiple sensor modalities, to improve perceptive classification and provide sensor redundancy. The work in this thesis is implemented and tested on a real AUGV system, but the methods do not rely on any particular aspects of this vehicle. They are all generally and widely applicable. This thesis provides a firm base at a low level, from which continued research in autonomous reliability and safety at ever higher levels can be performed.
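The temporal-error problem mentioned above can be made concrete with a small sketch (illustrative numbers, not from the thesis): a point sensed in the vehicle frame is mapped into the world frame using a vehicle pose whose timestamp is stale by some latency, and for a moving vehicle the mapped point lands off by roughly speed times latency.

```python
def world_point(sensor_xy, pose_xy):
    """Map a sensor-frame point into the world frame (translation only,
    rotation omitted to keep the sketch minimal)."""
    return (sensor_xy[0] + pose_xy[0], sensor_xy[1] + pose_xy[1])

v = 5.0          # vehicle speed along x, m/s (assumed for illustration)
latency = 0.1    # pose timestamp error, s
true_pose = (10.0, 0.0)
stale_pose = (10.0 - v * latency, 0.0)   # where the vehicle was 0.1 s ago

obstacle_in_sensor = (2.0, 1.0)
true_map = world_point(obstacle_in_sensor, true_pose)
stale_map = world_point(obstacle_in_sensor, stale_pose)
error = ((true_map[0] - stale_map[0]) ** 2 +
         (true_map[1] - stale_map[1]) ** 2) ** 0.5
print(error)  # 0.5 m of map displacement from a 100 ms timestamp offset
```

A 100 ms pose-timestamp offset at a modest 5 m/s already produces half a metre of map error, often larger than the sensor noise itself, which is why the thesis treats temporal registration as a first-class error source alongside geometric calibration.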
On the Combination of Game-Theoretic Learning and Multi Model Adaptive Filters
This paper casts coordination of a team of robots within the framework of game-theoretic learning algorithms. In particular, a novel variant of fictitious play is proposed, in which multi-model adaptive filters are used to estimate other players' strategies. The proposed algorithm can be used as a coordination mechanism between players when they must take decisions under uncertainty. Each player chooses an action after taking into account the actions of the other players and also the uncertainty. Uncertainty can arise either from noisy observations or from various types of other players. In addition, in contrast to other game-theoretic and heuristic algorithms for distributed optimisation, it is not necessary to find the optimal parameters a priori. Various parameter values can be used initially as inputs to different models, so the resulting decisions are aggregate results across all the parameter values. Simulations are used to test the performance of the proposed methodology against other game-theoretic learning algorithms.
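The baseline the paper builds on, classical fictitious play, is easy to sketch: each player best-responds to the empirical frequency of the other player's past actions. The minimal two-player coordination game below is an illustration of that baseline only, not of the paper's multi-model adaptive-filter variant.

```python
# Shared payoff: both players prefer to match, and matching on action 0
# pays more than matching on action 1 (a simple coordination game).
PAYOFF = [[2.0, 0.0],
          [0.0, 1.0]]   # PAYOFF[my_action][their_action]

def best_response(opponent_counts):
    """Best-respond to the empirical frequency of the opponent's actions."""
    total = sum(opponent_counts)
    freqs = [c / total for c in opponent_counts]
    expected = [sum(PAYOFF[a][b] * freqs[b] for b in range(2))
                for a in range(2)]
    return max(range(2), key=lambda a: expected[a])

# Each player starts with a uniform prior over the opponent's actions.
counts = [[1, 1], [1, 1]]   # counts[i]: player i's tally of opponent actions
history = []
for _ in range(20):
    a0 = best_response(counts[0])
    a1 = best_response(counts[1])
    history.append((a0, a1))
    counts[0][a1] += 1   # player 0 observes player 1's action
    counts[1][a0] += 1
print(history[-1])  # the players coordinate on the higher-paying profile (0, 0)
```

The paper's contribution replaces the simple frequency count above with multi-model adaptive filters, so that each player maintains several competing estimates of the opponents' strategies under noisy observations instead of a single empirical average.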
Gaze-Based Human-Robot Interaction by the Brunswick Model
We present a new paradigm for human-robot interaction based on social signal processing, and in particular on the Brunswick model. Originally, the Brunswick model addresses face-to-face dyadic interaction, assuming that the interactants communicate through a continuous exchange of non-verbal social signals, in addition to the spoken messages. Social signals have to be interpreted through a proper recognition phase that considers visual and audio information. The Brunswick model allows one to quantitatively evaluate the quality of the interaction using statistical tools which measure how effective the recognition phase is. In this paper we cast this theory in the setting where one of the interactants is a robot; in this case, the recognition phases performed by the robot and the human have to be revised w.r.t. the original model. The model is applied to Berrick, a recent open-source low-cost robotic head platform, where gaze is the social signal to be considered.
The University Defence Research Collaboration In Signal Processing: 2013-2018
Signal processing is an enabling technology crucial to all areas
of defence and security. It is called for whenever humans and
autonomous systems are required to interpret data (i.e. the signal)
output from sensors. This leads to the production of the
intelligence on which military outcomes depend. Signal processing
should be timely, accurate and suited to the decisions
to be made. When performed well it is critical, battle-winning
and probably the most important weapon which you’ve never
heard of.
With the plethora of sensors and data sources that are
emerging in the future network-enabled battlespace, sensing
is becoming ubiquitous. This makes signal processing more
complicated but also brings great opportunities.
The second phase of the University Defence Research Collaboration
in Signal Processing was set up to meet these complex
problems head-on while taking advantage of the opportunities.
Its unique structure combines two multi-disciplinary
academic consortia, in which many researchers can approach
different aspects of a problem, with baked-in industrial collaboration
enabling early commercial exploitation.
This phase of the UDRC will have been running for 5 years
by the time it completes in March 2018, with remarkable results.
This book aims to present those accomplishments and
advances in a style accessible to stakeholders, collaborators and
exploiters.