Self-tuning routine alarm analysis of vibration signals in steam turbine generators
This paper presents a self-tuning framework for knowledge-based diagnosis of routine alarms in steam turbine generators. The techniques provide a novel basis for initialising and updating time series feature extraction parameters used in automated decision support for vibration events arising from operational transients. The data-driven nature of the algorithms allows machine-specific characteristics of individual turbines to be learned and reasoned about. The paper provides a case study illustrating the routine alarm paradigm and the applicability of systems using such techniques.
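The self-tuning idea can be illustrated with a minimal sketch: an online estimator that learns a machine-specific baseline for one vibration feature and flags readings that deviate from it. The smoothing factor, deviation multiplier, and warm-up length are illustrative assumptions, not parameters from the paper.

```python
class SelfTuningAlarm:
    """Learns a machine-specific baseline for one vibration feature."""

    def __init__(self, alpha=0.1, k=3.0, warmup=10):
        self.alpha = alpha    # smoothing factor for the running baseline
        self.k = k            # deviation multiplier for the alarm limit
        self.warmup = warmup  # samples to observe before alarming
        self.mean = None      # learned baseline of the feature value
        self.var = 0.0        # learned spread around the baseline
        self.n = 0            # number of samples seen so far

    def update(self, x):
        """Fold a new feature value into the model; return True on alarm."""
        self.n += 1
        if self.mean is None:
            self.mean = x
            return False
        deviation = x - self.mean
        ready = self.n > self.warmup and self.var > 0
        alarm = bool(ready and abs(deviation) > self.k * self.var ** 0.5)
        # Adapt only on routine readings, so the limits track gradual
        # operational drift without being pulled toward faults.
        if not alarm:
            self.mean += self.alpha * deviation
            self.var = (1 - self.alpha) * (self.var + self.alpha * deviation ** 2)
        return alarm
```

Because the threshold is learned per instance rather than fixed globally, two turbines with different vibration signatures end up with different alarm limits, which is the machine-specific behaviour the abstract describes.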
Diagnosis: Reasoning from first principles and experiential knowledge
Completeness, efficiency and autonomy are requirements for future diagnostic reasoning systems. Methods for automating diagnostic reasoning systems include diagnosis from first principles (i.e., reasoning from a thorough description of structure and behavior) and diagnosis from experiential knowledge (i.e., reasoning from a set of examples obtained from experts). However, implementation of either as a single reasoning method fails to meet these requirements. The approach of combining reasoning from first principles and reasoning from experiential knowledge does address the requirements discussed above and can ease some of the difficulties associated with knowledge acquisition by allowing developers to systematically enumerate a portion of the knowledge necessary to build the diagnosis program. The ability to enumerate knowledge systematically facilitates defining the program's scope, completeness, and competence and assists in bounding, controlling, and guiding the knowledge acquisition process.
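The two reasoning methods, and one way of combining them, can be sketched as follows. The rule table, component names, and transfer functions are hypothetical; the sketch only shows the shape of the combination: try cheap experiential matching first, then fall back to reasoning over a structural model.

```python
def experiential_diagnosis(symptoms, rules):
    """Experiential pass: match observed symptoms against known fault signatures."""
    return [fault for fault, signature in rules.items() if signature <= symptoms]

def first_principles_diagnosis(readings, model, x):
    """First-principles pass: propagate the input through the component chain
    and flag each component whose sensed output disagrees with its prediction."""
    suspects = []
    value = x
    for name, transfer in model:
        predicted = transfer(value)
        if abs(predicted - readings[name]) > 1e-9:
            suspects.append(name)
        value = readings[name]  # downstream predictions start from sensed values
    return suspects

def diagnose(symptoms, rules, readings, model, x):
    """Combined method: experiential lookup first, model-based fallback."""
    return (experiential_diagnosis(symptoms, rules)
            or first_principles_diagnosis(readings, model, x))
```

The experiential pass covers faults experts have already catalogued, while the first-principles pass handles novel faults from the behavioral description alone; enumerating the rule table separately from the model is one way to bound the knowledge-acquisition effort the abstract mentions.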
A design and implementation methodology for diagnostic systems
A methodology for design and implementation of diagnostic systems is presented. Also discussed are the advantages of embedding a diagnostic system in a host system environment. The methodology utilizes an architecture for diagnostic system development that is hierarchical and makes use of object-oriented representation techniques. Additionally, qualitative models are used to describe the host system components and their behavior. The methodology architecture includes a diagnostic engine that utilizes heuristic knowledge to control the sequence of diagnostic reasoning. The methodology provides an integrated approach to development of diagnostic system requirements that is more rigorous than standard systems engineering techniques. The advantages of using this methodology during various life cycle phases of the host systems (e.g., National Aerospace Plane (NASP)) include: the capability to analyze diagnostic instrumentation requirements during the host system design phase, a ready software architecture for implementation of diagnostics in the host system, and the opportunity to analyze instrumentation for failure coverage in safety critical host system operations.
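A toy sketch of the hierarchical, object-oriented style described here: host-system components carry a simple qualitative check of their behaviour, and a diagnostic engine walks the hierarchy using a heuristic ordering. All class names, component names, and the suspicion heuristic are illustrative assumptions, not details from the paper.

```python
class Component:
    """A host-system component with an attached qualitative behaviour check."""

    def __init__(self, name, children=(), suspicion=0.0):
        self.name = name
        self.children = list(children)
        self.suspicion = suspicion  # heuristic prior used to order the search

    def consistent(self, observations):
        """Qualitative check: does observed behaviour match the nominal model?"""
        return observations.get(self.name, "nominal") == "nominal"

def diagnose(root, observations):
    """Depth-first search of the component hierarchy, most-suspect child
    first; returns the leaf components that explain the anomaly."""
    if root.consistent(observations):
        return []
    if not root.children:
        return [root.name]
    faults = []
    for child in sorted(root.children, key=lambda c: -c.suspicion):
        faults += diagnose(child, observations)
    return faults
```

The hierarchy lets the engine prune whole subsystems whose behaviour is nominal, which is what makes the approach attractive for instrumenting large host systems.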
Applying Tropos to socio-technical system design and runtime configuration
Recent trends in Software Engineering have introduced the importance of reconsidering the traditional idea of software design as a socio-technical problem, where human agents are an integral part of the system along with hardware and software components. Design and runtime support for Socio-Technical Systems (STSs) requires appropriate modeling techniques and
non-traditional infrastructures. Agent-oriented software methodologies are natural solutions to the development of STSs, since both humans and technical components are conceptualized and analyzed as part of the same system. In this paper, we illustrate a number of Tropos features that we believe are fundamental to support the development and runtime reconfiguration of STSs.
In particular, we focus on two critical design issues: risk analysis and location variability. We show how they are integrated into a planning-based approach and used to support the designer in evaluating and choosing the best design alternative. Finally, we present a generic framework to develop self-reconfigurable STSs.
Unattended network operations technology assessment study. Technical support for defining advanced satellite systems concepts
The results are summarized of an unattended network operations technology assessment study for the Space Exploration Initiative (SEI). The scope of the work included: (1) identifying possible enhancements to the proposed Mars communications network; (2) identifying network operations on Mars; (3) performing a technology assessment of possible supporting technologies based on current and future approaches to network operations; and (4) developing a plan for the testing and development of these technologies. The most important results obtained are as follows: (1) addition of a third Mars Relay Satellite (MRS) and MRS cross link capabilities will enhance the network's fault tolerance through improved connectivity; (2) network functions can be divided into the six basic ISO network functional groups; (3) distributed artificial intelligence technologies will augment more traditional network management technologies to form the technological infrastructure of a virtually unattended network; and (4) a great effort is required to bring the current network technology levels for manned space communications up to the level needed for an automated, fault-tolerant Mars communications network.
Why (and How) Networks Should Run Themselves
The proliferation of networked devices, systems, and applications that we
depend on every day makes managing networks more important than ever. The
increasing security, availability, and performance demands of these
applications suggest that these increasingly difficult network management
problems be solved in real time, across a complex web of interacting protocols
and systems. Alas, just as the importance of network management has increased,
the network has grown so complex that it is seemingly unmanageable. In this new
era, network management requires a fundamentally new approach. Instead of
optimizations based on closed-form analysis of individual protocols, network
operators need data-driven, machine-learning-based models of end-to-end and
application performance based on high-level policy goals and a holistic view of
the underlying components. Instead of anomaly detection algorithms that operate
on offline analysis of network traces, operators need classification and
detection algorithms that can make real-time, closed-loop decisions. Networks
should learn to drive themselves. This paper explores this concept, discussing
how we might attain this ambitious goal by more closely coupling measurement
with real-time control and by relying on learning for inference and prediction
about a networked application or system, as opposed to closed-form analysis of
individual protocols.
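The measure-learn-act loop the authors argue for, as opposed to offline trace analysis, can be sketched in miniature. The window size, latency limit, and "reroute" action are placeholders; a real controller would learn a richer model of end-to-end performance.

```python
class ClosedLoopController:
    """Continuously classifies fresh measurements and reacts in real time."""

    def __init__(self, window=5, limit_ms=100.0):
        self.window = window      # how many recent measurements to keep
        self.limit_ms = limit_ms  # policy goal: tolerable end-to-end latency
        self.samples = []         # recent end-to-end latency measurements

    def observe(self, latency_ms):
        """Measurement step: fold in a fresh sample, discard stale ones."""
        self.samples.append(latency_ms)
        self.samples = self.samples[-self.window:]

    def decide(self):
        """Control step: a real-time decision from the (here: trivially
        averaged) model of recent application performance."""
        if not self.samples:
            return "steady"
        avg = sum(self.samples) / len(self.samples)
        return "reroute" if avg > self.limit_ms else "steady"
```

The point of the sketch is the coupling: every new measurement can immediately change the control decision, rather than being archived for later offline analysis of a single protocol.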