Robot graphic simulation testbed
The objective of this research was twofold. First, the basic capabilities of ROBOSIM (a graphical simulation system) were improved and extended by taking advantage of advanced graphics-workstation technology and artificial intelligence programming techniques. Second, the scope of the graphic simulation testbed was extended to include general problems of Space Station automation. Hardware support for 3-D graphics and high processing performance makes high-resolution solid modeling, collision detection, and simulation of structural dynamics computationally feasible. The Space Station is a complex system with many interacting subsystems. Design and testing of automation concepts demand modeling of the affected processes, their interactions, and the proposed control systems. The automation testbed was designed to facilitate studies of Space Station automation concepts.
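The abstract mentions collision detection as one of the capabilities made feasible by graphics-workstation hardware. A minimal sketch of the idea, using bounding-sphere overlap tests of the kind commonly used as a first-pass check in robot simulators (the function name and the link geometry are illustrative, not ROBOSIM's actual code):

```python
import math

def spheres_collide(center_a, radius_a, center_b, radius_b):
    """Return True if two bounding spheres overlap."""
    return math.dist(center_a, center_b) <= radius_a + radius_b

# Two robot links approximated by bounding spheres (illustrative values).
link1 = ((0.0, 0.0, 0.0), 0.5)
link2 = ((0.7, 0.0, 0.0), 0.3)
print(spheres_collide(*link1, *link2))  # True: distance 0.7 <= 0.8
```

In practice a simulator would use such cheap tests to prune pairs before running an exact mesh-level intersection check.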
Agent Organization and Request Propagation in the Knowledge Plane
In designing and building a network like the Internet, we continue to face the problems of scale and distribution. In particular, network management has become an increasingly difficult task, and network applications often need to maintain efficient connectivity graphs for various purposes. The knowledge plane was proposed as a new construct to improve network management and applications. This proposal presents an application-independent mechanism to support the construction of application-specific connectivity graphs. Specifically, I propose to build a network knowledge plane and multiple sub-planes for different areas of network services. The network knowledge plane provides valuable knowledge about the Internet to the sub-planes, and each sub-plane constructs its own connectivity graph using network knowledge and knowledge in its own specific area. I focus on two key design issues: (1) a region-based architecture for agent organization; and (2) knowledge dissemination and request propagation. Network management and applications benefit from the underlying network knowledge plane and sub-planes. To demonstrate the effectiveness of this mechanism, I conduct case studies in network management and security.
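The core of the region-based design is that a request is resolved locally where possible and escalated beyond the requester's region only on a miss, bounding propagation scope. A toy sketch under that assumption (the region names, knowledge keys, and resolution order are invented for illustration; the proposal's actual protocol is more elaborate):

```python
# Agents grouped by region, each holding a set of knowledge items.
regions = {
    "region-A": {"agent1": {"latency:r1"}, "agent2": set()},
    "region-B": {"agent3": {"topology:r2"}},
}

def resolve(request, home_region):
    """Try the requester's home region first; escalate only on a miss."""
    for region in [home_region] + [r for r in regions if r != home_region]:
        for agent, knowledge in regions[region].items():
            if request in knowledge:
                return agent, region
    return None

print(resolve("topology:r2", "region-A"))  # ('agent3', 'region-B')
```

The local-first ordering is what keeps most requests from ever leaving their region, which is the scalability argument behind region-based agent organization.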
AI Solutions for MDS: Artificial Intelligence Techniques for Misuse Detection and Localisation in Telecommunication Environments
This report considers the application of Artificial Intelligence (AI) techniques to the problem of misuse detection and misuse localisation within telecommunications environments. A broad survey of techniques is provided, covering inter alia rule-based systems, model-based systems, case-based reasoning, pattern matching, clustering and feature extraction, artificial neural networks, genetic algorithms, artificial immune systems, agent-based systems, data mining, and a variety of hybrid approaches. The report then considers the central issue of event correlation, which is at the heart of many misuse detection and localisation systems. The notion of being able to infer misuse by the correlation of individual, temporally distributed events within a multiple-data-stream environment is explored, and a range of techniques is surveyed, covering model-based approaches, `programmed' AI, and machine-learning paradigms. It is found that, in general, correlation is best achieved via rule-based approaches, but that these suffer from a number of drawbacks, such as the difficulty of developing and maintaining an appropriate knowledge base, and the lack of ability to generalise from known misuses to new, unseen misuses. Two distinct approaches are evident. One attempts to encode knowledge of known misuses, typically within rules, and uses this to screen events. This approach cannot generally detect misuses for which it has not been programmed, i.e. it is prone to issuing false negatives. The other attempts to `learn' the features of event patterns that constitute normal behaviour and, by observing patterns that do not match expected behaviour, detects when a misuse has occurred. This approach is prone to issuing false positives, i.e. inferring misuse from innocent patterns of behaviour that the system was not trained to recognise. Contemporary approaches are seen to favour hybridisation, often combining detection or localisation mechanisms for both abnormal and normal behaviour, the former to capture known cases of misuse, the latter to capture unknown cases. In some systems, these mechanisms even work together, updating each other to increase detection rates and lower false-positive rates. It is concluded that hybridisation offers the most promising future direction, but that a rule- or state-based component is likely to remain, being the most natural approach to the correlation of complex events. The challenge, then, is to mitigate the weaknesses of canonical programmed systems such that learning, generalisation, and adaptation are more readily facilitated.
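The two detection styles the report contrasts, and their hybridisation, can be sketched in a few lines (the rule, the normal-behaviour profile, and the thresholds below are invented for the example; real systems learn the profile from traffic):

```python
# Signature side: known misuses encoded as rules (misses novel misuses).
KNOWN_MISUSE_RULES = [
    lambda e: e["calls_per_min"] > 100 and e["dest"] == "premium",
]

# Anomaly side: a learned profile of normal behaviour (illustrative values).
NORMAL_PROFILE = {"mean_calls": 5.0, "std_calls": 2.0}

def signature_detect(event):
    """Misuse iff a known rule fires: prone to false negatives."""
    return any(rule(event) for rule in KNOWN_MISUSE_RULES)

def anomaly_detect(event, z_threshold=3.0):
    """Misuse iff behaviour deviates from the normal profile:
    prone to false positives on unseen-but-innocent behaviour."""
    z = abs(event["calls_per_min"] - NORMAL_PROFILE["mean_calls"])
    return z / NORMAL_PROFILE["std_calls"] > z_threshold

def hybrid_detect(event):
    """Hybridisation: either mechanism can raise an alert."""
    return signature_detect(event) or anomaly_detect(event)
```

The complementary error profiles are visible directly: an event matching no rule but far from the normal profile is caught only by the anomaly side, and vice versa.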
Diagnostic Applications for Micro-Synchrophasor Measurements
This report articulates and justifies the preliminary selection of diagnostic applications for data from micro-synchrophasors (µPMUs) in electric power distribution systems; these applications will be further studied and developed within the scope of the three-year ARPA-E award titled Micro-synchrophasors for Distribution Systems.
Inferring Power Grid Information with Power Line Communications: Review and Insights
High-frequency signals have been widely studied over the last decade as a means of identifying grid and channel conditions in power line networks (PLNs). Power line modems (PLMs) operating on the grid's physical layer can transmit such signals to infer information about the grid. Hence, power line communication (PLC) is a suitable communication technology for smart grid (SG) applications, especially grid monitoring and surveillance. In this paper, we provide several contributions: 1) a classification of PLC-based applications; 2) a taxonomy of the related methodologies; 3) a review of the literature in the area of PLC Grid Information Inference (GII); and 4) insights that can be leveraged to further advance the field. We found research contributions addressing PLMs for three main PLC-GII applications: topology inference, anomaly detection, and physical-layer key generation. In addition, various PLC-GII measurement, processing, and analysis approaches were found to provide distinctive features in measurement resolution, computational complexity, and analysis accuracy. We utilise the outcome of our review to shed light on the current limitations of the research contributions and suggest future research directions in this field.
Comment: IEEE Communications Surveys and Tutorials journal
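One common PLC-GII pattern behind the anomaly-detection application is to compare the measured channel frequency response against a healthy baseline and flag large deviations. A minimal sketch under that assumption (the frequency-response values, deviation metric, and threshold are illustrative, not taken from any surveyed paper):

```python
def channel_deviation(baseline, measured):
    """Mean absolute deviation between two frequency responses (dB)."""
    return sum(abs(b - m) for b, m in zip(baseline, measured)) / len(baseline)

def is_anomalous(baseline, measured, threshold_db=3.0):
    """Flag the channel if it drifts too far from the healthy baseline."""
    return channel_deviation(baseline, measured) > threshold_db

baseline = [-40.0, -42.5, -45.0, -50.0]   # |H(f)| in dB per frequency bin
healthy  = [-40.5, -42.0, -45.5, -49.5]   # small measurement noise
faulted  = [-40.0, -55.0, -60.0, -50.0]   # e.g. a new notch from a fault

print(is_anomalous(baseline, healthy))    # False
print(is_anomalous(baseline, faulted))    # True
```

Surveyed approaches differ mainly in what replaces this crude metric: the measurement resolution, the processing applied to the response, and the statistical test used.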
Diagnosis of an EPS module
Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa for the degree of Master in Electrical and Computer Engineering.
This thesis addresses and contextualises the problem of diagnosis of an Evolvable Production System (EPS). An EPS is a complex and lively entity composed of intelligent modules that interact through bio-inspired mechanisms to ensure high system availability and seamless reconfiguration.
The current economic situation, together with the increasing demand for high-quality, low-priced customised products, has imposed a shift in the production policies of enterprises. Shop floors have to become more agile and flexible to accommodate the new production paradigms. Rather than selling products, enterprises are establishing a trend of offering services to explore business opportunities.
The new production paradigms, enabled by advances in Information Technologies (IT), especially in web-related standards and technologies, as well as by the progressive acceptance of the multi-agent systems (MAS) concept and related technologies, envision collections of modules whose individual and collective function adapts and evolves, ensuring the fitness and adequacy of the shop floor in tackling profitable but volatile business opportunities. Despite the richness of these interactions and the effort invested in modelling them, their potential to favour fault propagation and interference in these complex environments has been ignored from a diagnostic point of view.
With the increase of distributed and autonomous components that interact in the execution of processes, current diagnostic approaches will soon be insufficient. While current system dynamics are complex and, to a certain extent, unpredictable, the adoption of the next generation of approaches and technologies comes at the cost of yet further increased complexity. Whereas most of the research in such distributed industrial systems focuses on the study and establishment of control structures, the problem of diagnosis has been left relatively unattended.
There are, however, significant open challenges in the diagnosis of such modular systems, including: understanding fault propagation and ensuring scalability and co-evolution.
This work provides an implementation of a state-of-the-art agent-based, interaction-oriented architecture compliant with the EPS paradigm. The architecture supports a newly developed diagnostic algorithm able to cope with the challenges of the modern manufacturing paradigm and to provide diagnostic analysis that explores the network dimension of multi-agent systems.
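One way a diagnosis can "explore the network dimension" of interacting modules is to use the interaction log itself: when a module reports a fault, its recent interaction partners become ranked suspects. A toy sketch of that idea (the module names, the log, and the counting heuristic are invented; the thesis's actual algorithm is more sophisticated):

```python
from collections import Counter

# Interaction log between shop-floor modules: (initiator, partner) pairs.
interactions = [
    ("feeder", "gripper"), ("feeder", "conveyor"),
    ("gripper", "conveyor"), ("feeder", "gripper"),
]

def suspects(faulty_module):
    """Rank modules that interacted with the faulty one, most frequent first."""
    counts = Counter()
    for a, b in interactions:
        if a == faulty_module:
            counts[b] += 1
        elif b == faulty_module:
            counts[a] += 1
    return counts.most_common()

print(suspects("gripper"))  # [('feeder', 2), ('conveyor', 1)]
```

The point of such a ranking is exactly the fault-propagation concern raised above: in a richly interacting system, the root cause is often a partner of the module that first shows symptoms.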
Integration of tools for the Design and Assessment of High-Performance, Highly Reliable Computing Systems (DAHPHRS), phase 1
Systems for Strategic Defense Initiative (SDI) space applications typically require both high performance and very high reliability. These requirements present the systems engineer evaluating such systems with the extremely difficult problem of conducting performance and reliability trade-offs over large design spaces. A controlled development process supported by appropriate automated tools must be used to assure that the system will meet design objectives. This report describes an investigation of the methods, tools, and techniques necessary to support performance and reliability modeling for SDI systems development. Models of the JPL Hypercubes, the Encore Multimax, and the C.S. Draper Lab Fault-Tolerant Parallel Processor (FTPP) parallel-computing architectures, using candidate SDI weapons-to-target assignment algorithms as workloads, were built and analyzed as a means of identifying the necessary system models, how the models interact, and what experiments and analyses should be performed. As a result of this effort, weaknesses in the existing methods and tools were revealed, and capabilities that will be required for both individual tools and an integrated toolset were identified.
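A worked example of the kind of reliability trade-off such tools must evaluate: a single processor versus a triple-modular-redundant (TMR) arrangement of the sort a fault-tolerant parallel processor provides, under a textbook exponential failure model (the failure rate is illustrative, and a perfect voter is assumed; this is not a model from the report):

```python
import math

def simplex_reliability(lam, t):
    """R(t) = e^(-lambda*t) for a single processor."""
    return math.exp(-lam * t)

def tmr_reliability(lam, t):
    """TMR survives while at least 2 of 3 replicas survive:
    R_tmr = 3R^2 - 2R^3 (perfect voter assumed)."""
    r = simplex_reliability(lam, t)
    return 3 * r**2 - 2 * r**3

lam = 1e-4  # failures per hour (illustrative)
print(simplex_reliability(lam, 100))  # ~0.9900
print(tmr_reliability(lam, 100))      # ~0.9997
```

The trade-off the report's toolset must capture is visible even here: TMR buys roughly an order of magnitude lower short-mission failure probability at triple the hardware cost, and the advantage inverts for long missions as all replicas age.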