PCLIPS
CLIPS is an expert system shell, created specifically to allow rapid implementation of expert systems. CLIPS is written in C and therefore requires very little memory to run. Parallel CLIPS (PCLIPS) is an extension of CLIPS intended for situations where a group of expert systems is expected to run simultaneously and to communicate with each other occasionally over an integrated network. PCLIPS is a coarse-grained data distribution system: its main goal is to take information in one knowledge base and distribute it to the other knowledge bases, so that all of the executing expert systems can use that knowledge to solve their disparate problems.
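The coarse-grained distribution model the abstract describes can be sketched roughly as follows: each expert system asserts facts locally and forwards selected facts to its peers. This is a minimal illustrative sketch in Python, not the PCLIPS implementation; the class and method names are hypothetical.

```python
# Minimal sketch of coarse-grained fact distribution between peer
# knowledge bases, loosely in the spirit of PCLIPS (hypothetical API,
# not the actual PCLIPS implementation).

class KnowledgeBase:
    def __init__(self, name):
        self.name = name
        self.facts = set()
        self.peers = []          # other KnowledgeBase instances on the network

    def connect(self, peer):
        self.peers.append(peer)

    def assert_fact(self, fact, broadcast=True):
        """Add a fact locally and optionally distribute it to all peers."""
        if fact in self.facts:
            return               # already known; avoids re-broadcast loops
        self.facts.add(fact)
        if broadcast:
            for peer in self.peers:
                peer.assert_fact(fact, broadcast=False)

# Two expert systems sharing knowledge:
diagnosis = KnowledgeBase("diagnosis")
planning = KnowledgeBase("planning")
diagnosis.connect(planning)
planning.connect(diagnosis)

diagnosis.assert_fact(("sensor", "S1", "fault"))
assert ("sensor", "S1", "fault") in planning.facts
```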
Second CLIPS Conference Proceedings, volume 1
Topics covered at the 2nd CLIPS Conference, held at the Johnson Space Center, September 23-25, 1991, are given. Topics include rule groupings, fault detection using expert systems, decision making using expert systems, knowledge representation, computer-aided design, and debugging expert systems.
Processing Diabetes mellitus composite events in MAGPIE
The focus of this research is the definition of programmable expert Personal Health Systems (PHS) to monitor patients affected by chronic diseases, using agent-oriented programming and mobile computing to represent the interactions happening amongst the components of the system. The paper also discusses issues of knowledge representation within the medical domain when dealing with temporal patterns concerning the physiological values of the patient. In the presented agent-based PHS, doctors can personalize for each patient monitoring rules that can be defined graphically. Furthermore, to achieve better scalability, the computations for monitoring the patients are distributed among their devices rather than being performed on a centralized server. The system is evaluated using data from 21 diabetic patients to detect temporal patterns according to a set of defined monitoring rules. The system's scalability is evaluated by comparing it with a centralized approach. The evaluation concerning the detection of temporal patterns highlights the system's ability to monitor chronic patients affected by diabetes. Regarding scalability, the results show that an approach exploiting mobile computing is more scalable than a centralized approach, and therefore more likely to satisfy the needs of next-generation PHSs.

PHSs are becoming an adopted technology to deal with the surge of patients affected by chronic illnesses. This paper discusses architectural choices to make an agent-based PHS more scalable by using a distributed mobile computing approach. It also discusses how to model the medical knowledge in the PHS in such a way that it is modifiable at run time. The evaluation highlights the necessity of distributing the reasoning to the mobile part of the system, and shows that modifiable rules are able to deal with changes in the lifestyle of patients affected by chronic illnesses.
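As an illustration of the kind of run-time-modifiable temporal monitoring rule the abstract describes, the following Python sketch checks a patient's glucose readings over a sliding time window. The rule structure, threshold values and names are hypothetical assumptions for illustration, not the MAGPIE rule language.

```python
from datetime import datetime, timedelta

# Hypothetical monitoring rule: flag a temporal pattern when several
# glucose readings within a time window exceed a per-patient threshold.
# Illustrative sketch only, not the MAGPIE rule language.

def hyperglycemia_pattern(readings, threshold=180.0,
                          window=timedelta(hours=24), min_events=3):
    """readings: list of (timestamp, mg_dl) pairs sorted by timestamp."""
    high = [(t, v) for t, v in readings if v > threshold]
    for i in range(len(high)):
        # Count threshold violations falling inside one window.
        in_window = [t for t, _ in high[i:] if t - high[i][0] <= window]
        if len(in_window) >= min_events:
            return True
    return False

readings = [
    (datetime(2024, 5, 1, 8), 195.0),
    (datetime(2024, 5, 1, 13), 210.0),
    (datetime(2024, 5, 1, 20), 188.0),
    (datetime(2024, 5, 2, 9), 140.0),
]
print(hyperglycemia_pattern(readings))  # True: three highs within 24 h
```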
Adaptive pattern recognition in a real-world environment
This thesis introduces and explores the notion of a real-world environment with respect to adaptive pattern recognition and neural network systems. It then examines the individual properties of a real-world environment and proposes Continuous Adaptation, Persistence of Information and Context-Sensitive Recognition as the major design criteria that a neural network system in a real-world environment should satisfy.

Based on these criteria, it then assesses the performance of Hopfield networks and Associative Memory systems and identifies their operational limitations. This leads to the introduction of Randomized Internal Representations, a novel class of neural network systems which stores information in a fully distributed way yet is capable of encoding and utilizing context. It then assesses the performance of Competitive Learning and Adaptive Resonance Theory systems and, again having identified their operational weaknesses, describes the Dynamic Adaptation Scheme, which satisfies all three design criteria for a real-world environment.
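For readers unfamiliar with the models the thesis evaluates, a minimal Hopfield network illustrating associative recall can be sketched in a few lines of Python using NumPy. This is an illustrative sketch of the standard model, not code from the thesis.

```python
import numpy as np

# Minimal Hopfield network: store bipolar patterns with Hebbian weights,
# then recover a stored pattern from a corrupted cue by iterated updates.

def train(patterns):
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)      # Hebbian outer-product learning
    np.fill_diagonal(W, 0)       # no self-connections
    return W / patterns.shape[0]

def recall(W, state, steps=10):
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1    # break ties deterministically
    return state

patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, -1, -1, 1, 1]])
W = train(patterns)

noisy = patterns[0].copy()
noisy[0] = -noisy[0]             # flip one bit of the first pattern
print(recall(W, noisy))          # converges back to [ 1 -1  1 -1  1 -1]
```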
Developing Real-Time Emergency Management Applications: Methodology for a Novel Programming Model Approach
Recent years have been characterized by the emergence of highly distributed computing platforms composed of a heterogeneous mix of computing and communication resources, including centralized high-performance computing architectures (e.g. clusters or large shared-memory machines) as well as multi-/many-core components, also integrated into mobile nodes and network facilities. The emergence of computational paradigms such as Grid and Cloud Computing provides potential solutions for integrating such platforms with data systems, natural phenomena simulations, knowledge discovery and decision support systems, responding to a dynamic demand for remote computing and communication resources and services.

In this context, time-critical applications, notably emergency management systems, are composed of complex sets of application components specialized for executing specific computations, which cooperate to achieve a global goal in a distributed manner. In recent years the scientific community has been addressing the programming issues of distributed systems, aiming to define applications of increasing complexity in the number of distributed components, in the spatial distribution of and cooperation between interested parties, and in their degree of heterogeneity.

Over the last decade the research trend in distributed computing has focused on a crucial objective. The wide-ranging composition of distributed platforms in terms of different classes of computing nodes and network technologies, together with the strong diffusion of applications that require real-time elaboration and online compute-intensive processing, as in the case of emergency management systems, leads to a pronounced tendency of systems towards properties like self-management, self-organization, self-control and, in short, adaptivity.
Adaptivity implies the development, deployment, execution and management of applications that are, in general, dynamic in nature. Dynamicity concerns the number and the specific identity of the cooperating components, and the deployment and composition of the most suitable versions of software components on processing and networking resources and services, i.e., both the quantity and the quality of the application components needed to achieve the required Quality of Service (QoS). In time-critical applications the QoS specification can vary dynamically during execution, according to the user's intentions and the
information produced by sensors and services, as well as according to the monitored state
and performance of networks and nodes.
The general reference point for this kind of system is the Grid paradigm which, by definition, aims to enable the access, selection and aggregation of a variety of distributed and heterogeneous resources and services. However, although notable advances have been achieved in recent years, current Grid technology is not yet able to supply software tools that offer the high adaptivity, ubiquity, proactivity, self-organization, scalability and performance, interoperability, fault tolerance and security demanded by emerging applications.
For this reason, in this chapter we will study a methodology for designing high-performance computations able to exploit the heterogeneity and dynamicity of distributed environments by expressing adaptivity and QoS-awareness directly at the application level. An effective approach needs to address issues like the QoS predictability of different application configurations, as well as the predictability of reconfiguration costs. Moreover, adaptation strategies need to be developed that assure properties like the stability of a reconfiguration decision and execution optimality (i.e. selecting reconfigurations according to proper trade-offs among different QoS objectives). In this chapter we will present the basic points of a novel approach that lays the foundations for future programming model environments for time-critical applications such as emergency management systems.
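The chapter's adaptive parallel module is developed later in Section 5; as a rough intuition for what QoS-aware reconfiguration at the application level can look like, consider the following Python sketch, in which a module re-selects its parallelism degree from a predicted completion time and a deadline. All names and the cost model are hypothetical assumptions, not the authors' API.

```python
# Rough intuition for QoS-aware reconfiguration at the application level:
# a parallel module re-selects its parallelism degree whenever the
# predicted completion time violates the deadline. Hypothetical names
# and cost model, not the authors' API.

def predicted_time(workload, workers, t_seq, overhead):
    """Simple performance model: parallel work plus per-worker overhead."""
    return workload * t_seq / workers + overhead * workers

def choose_workers(workload, deadline, t_seq=1.0, overhead=0.05,
                   max_workers=64):
    # Pick the cheapest configuration whose predicted QoS meets the deadline.
    for n in range(1, max_workers + 1):
        if predicted_time(workload, n, t_seq, overhead) <= deadline:
            return n
    return max_workers          # degrade gracefully if the deadline is too tight

# A workload spike triggers a reconfiguration decision:
print(choose_workers(workload=100, deadline=30))   # modest parallelism
print(choose_workers(workload=1000, deadline=30))  # many more workers
```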
The organization of this chapter is as follows. In Section 2 we will compare existing research works on developing adaptive systems in critical environments, highlighting their drawbacks and inefficiencies. In Section 3, in order to clarify the application scenarios that we are considering, we will present an emergency management system in which the run-time selection of proper application configuration parameters is of great importance for meeting the desired QoS constraints. In Section 4 we will describe the basic points of our approach in terms of how compute-intensive operations can be programmed, how they can be dynamically modified and how adaptation strategies can be expressed. In Section 5 our approach will be contextualized in the definition of an adaptive parallel module, which is a building block for composing complex and distributed adaptive computations. Finally, in Section 6 we will describe a set of experimental results that show the viability of our approach, and in Section 7 we will give the concluding remarks of this chapter.
Energy Management
Forecasts point to a huge increase in energy demand over the next 25 years, with a direct and immediate impact on the exhaustion of fossil fuels, rising pollution levels and global warming, which will have significant consequences for all sectors of society. Irrespective of the likelihood of these predictions, or of what researchers in different scientific disciplines may believe or publicly say about how critical the energy situation may be at a world level, it is without doubt one of the great debates that has stirred up public interest in modern times. We should probably already be thinking about the design of a worldwide strategic plan for energy management across the planet. It would include measures to raise awareness, educate the different actors involved, develop policies, provide resources, prioritise actions and establish contingency plans. This process is complex and depends on political, social, economic and technological factors that are hard to take into account simultaneously. Even before such a plan is formulated, studies such as those described in this book can serve to illustrate what Information and Communication Technologies have to offer in this sphere and, with luck, to create a reference that encourages investigators in the pursuit of new and better solutions.
The 1990 progress report and future plans
This document describes the progress and plans of the Artificial Intelligence Research Branch (RIA) at ARC in 1990. Activities span a range from basic scientific research through engineering development to fielded NASA applications, particularly those applications enabled by basic research carried out at RIA. Work is conducted in-house and through collaborative partners in academia and industry. Our major focus is on a limited number of research themes with a dual commitment to technical excellence and proven applicability to NASA's short-, medium- and long-term problems. RIA acts as the Agency's lead organization for the research aspects of artificial intelligence, working closely with a second research laboratory at JPL and with AI applications groups at all NASA centers.
A Connectionist Theory of Phenomenal Experience
When cognitive scientists apply computational theory to the problem of phenomenal consciousness, as many of them have been doing recently, there are two fundamentally distinct approaches available. Either consciousness is to be explained in terms of the nature of the representational vehicles the brain deploys, or it is to be explained in terms of the computational processes defined over these vehicles. We call versions of these two approaches vehicle and process theories of consciousness, respectively. However, while there may be space for vehicle theories of consciousness in cognitive science, they are relatively rare. This is because of the influence exerted, on the one hand, by a large body of research which purports to show that the explicit representation of information in the brain and conscious experience are dissociable, and on the other, by the classical computational theory of mind, the theory that takes human cognition to be a species of symbol manipulation. But two recent developments in cognitive science combine to suggest that a reappraisal of this situation is in order. First, a number of theorists have recently been highly critical of the experimental methodologies employed in the dissociation studies; so critical, in fact, that it is no longer reasonable to assume that the dissociability of conscious experience and explicit representation has been adequately demonstrated. Second, classicism, as a theory of human cognition, is no longer as dominant in cognitive science as it once was. It now has a lively competitor in the form of connectionism, and connectionism, unlike classicism, does have the computational resources to support a robust vehicle theory of consciousness. In this paper we develop and defend this connectionist vehicle theory of consciousness. It takes the form of the following simple empirical hypothesis: phenomenal experience consists in the explicit representation of information in neurally realized PDP networks. This hypothesis leads us to re-assess some common wisdom about consciousness, but, we will argue, in fruitful and ultimately plausible ways.
AI Solutions for MDS: Artificial Intelligence Techniques for Misuse Detection and Localisation in Telecommunication Environments
This report considers the application of Artificial Intelligence (AI) techniques to the problem of misuse detection and misuse localisation within telecommunications environments. A broad survey of techniques is provided that covers, inter alia, rule-based systems, model-based systems, case-based reasoning, pattern matching, clustering and feature extraction, artificial neural networks, genetic algorithms, artificial immune systems, agent-based systems, data mining and a variety of hybrid approaches. The report then considers the central issue of event correlation, which is at the heart of many misuse detection and localisation systems. The notion of being able to infer misuse by the correlation of individual temporally distributed events within a multiple data stream environment is explored, and a range of techniques is surveyed, covering model-based approaches, 'programmed' AI and machine learning paradigms. It is found that, in general, correlation is best achieved via rule-based approaches, but that these suffer from a number of drawbacks, such as the difficulty of developing and maintaining an appropriate knowledge base, and a limited ability to generalise from known misuses to new, unseen misuses.

Two distinct approaches are evident. One attempts to encode knowledge of known misuses, typically within rules, and uses this to screen events. This approach cannot generally detect misuses for which it has not been programmed, i.e. it is prone to issuing false negatives. The other attempts to 'learn' the features of event patterns that constitute normal behaviour and, by observing patterns that do not match expected behaviour, to detect when a misuse has occurred. This approach is prone to issuing false positives, i.e. inferring misuse from innocent patterns of behaviour that the system was not trained to recognise. Contemporary approaches are seen to favour hybridisation, often combining detection or localisation mechanisms for both abnormal and normal behaviour, the former to capture known cases of misuse, the latter to capture unknown cases. In some systems, these mechanisms even work together, updating each other to increase detection rates and lower false positive rates. It is concluded that hybridisation offers the most promising future direction, but that a rule- or state-based component is likely to remain, being the most natural approach to the correlation of complex events. The challenge, then, is to mitigate the weaknesses of canonical programmed systems such that learning, generalisation and adaptation are more readily facilitated.
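As a toy illustration of the hybrid direction the report favours, the following Python sketch combines a rule-based screen for known misuses with a simple learned model of normal behaviour that flags unseen anomalies. The rules, features and thresholds are invented for illustration only, not taken from the report.

```python
import statistics

# Toy hybrid misuse detector: a rule base screens for known misuses
# (risking false negatives on novel attacks), while an anomaly model of
# normal behaviour catches unknown ones (risking false positives).
# Rules, features and thresholds are invented for illustration.

KNOWN_MISUSE_RULES = [
    lambda e: e["failed_logins"] >= 5,            # brute-force pattern
    lambda e: e["call_minutes"] > 600,            # known toll-fraud signature
]

class AnomalyModel:
    """Flags events far from the mean of observed normal behaviour."""
    def __init__(self, normal_events, key="call_minutes", z_cut=3.0):
        values = [e[key] for e in normal_events]
        self.mu = statistics.mean(values)
        self.sigma = statistics.pstdev(values) or 1.0
        self.key, self.z_cut = key, z_cut

    def is_anomalous(self, event):
        z = abs(event[self.key] - self.mu) / self.sigma
        return z > self.z_cut

def detect(event, model):
    if any(rule(event) for rule in KNOWN_MISUSE_RULES):
        return "known misuse"
    if model.is_anomalous(event):
        return "possible novel misuse"            # candidate for a new rule
    return "normal"

normal = [{"call_minutes": m, "failed_logins": 0} for m in (20, 35, 25, 30)]
model = AnomalyModel(normal)
print(detect({"call_minutes": 30, "failed_logins": 7}, model))   # known misuse
print(detect({"call_minutes": 400, "failed_logins": 0}, model))  # possible novel misuse
```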