    Ontological approach to development of computing with words based systems

    Computing with words, introduced by Zadeh, has become a very important concept in processing knowledge represented in the form of propositions. Two aspects of this concept, approximation and personalization, are essential to the process of building intelligent systems for human-centric computing. For the last several years, the Artificial Intelligence community has used ontology as a means for representing knowledge. Recently, the development of a new Internet paradigm, the Semantic Web, has led to the introduction of another form of ontology. It allows for defining concepts, identifying relationships among these concepts, and representing concrete information. In other words, an ontology has become a very powerful way of representing not only information but also its semantics. The paper proposes an application of ontology, in the sense of the Semantic Web, for the development of computing-with-words-based systems capable of performing operations on propositions, including their semantics. The ontology-based approach is very flexible and provides a rich environment for expressing different types of information, including perceptions. It also provides a simple way of personalizing propositions. An architecture of a computing-with-words-based system is proposed, and a prototype of such a system is described.
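    To make the idea concrete, the following minimal Java sketch (class, attribute, and membership values are hypothetical, not the paper's ontology) shows how a proposition such as "outdoor temperature is high" can pair a concept with a linguistic term whose fuzzy meaning is personalized per user.

```java
// Minimal sketch (hypothetical classes): a computing-with-words proposition
// represented as subject-attribute-term triples, where the linguistic term
// "high" carries a fuzzy membership function that can be personalized.
public class CwwPropositionSketch {

    // A linguistic term with a trapezoidal membership function (a, b, c, d).
    record LinguisticTerm(String label, double a, double b, double c, double d) {
        double membership(double x) {
            if (x <= a || x >= d) return 0.0;
            if (x >= b && x <= c) return 1.0;
            return x < b ? (x - a) / (b - a) : (d - x) / (d - c);
        }
    }

    // A proposition relates a subject concept and an attribute to a linguistic term.
    record Proposition(String subject, String attribute, LinguisticTerm term) {}

    public static void main(String[] args) {
        // Two personalized meanings of "high" for the same attribute.
        LinguisticTerm highGeneric  = new LinguisticTerm("high", 20, 25, 35, 40);
        LinguisticTerm highForAlice = new LinguisticTerm("high", 24, 28, 35, 40);

        Proposition generic  = new Proposition("OutdoorTemperature", "value", highGeneric);
        Proposition personal = new Proposition("OutdoorTemperature", "value", highForAlice);

        double reading = 26.0; // degrees Celsius
        System.out.printf("generic 'high' degree: %.2f%n", generic.term().membership(reading));
        System.out.printf("Alice's 'high' degree: %.2f%n", personal.term().membership(reading));
    }
}
```

    In a Semantic Web setting the same subject-attribute-term triples would be expressed in an ontology language such as OWL/RDF, which is what lets the system carry the semantics of a proposition along with the proposition itself.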

    Monitoring Agents for Assisting NASA Engineers with Shuttle Ground Processing

    The Spaceport Processing Systems Branch at NASA Kennedy Space Center has designed, developed, and deployed a rule-based agent to monitor the Space Shuttle's ground processing telemetry stream. The NASA Engineering Shuttle Telemetry Agent increases situational awareness for system and hardware engineers during ground processing of the Shuttle's subsystems. The agent provides autonomous monitoring of the telemetry stream and automatically alerts system engineers when user-defined conditions are satisfied. Efficiency and safety are improved through increased automation. Sandia National Labs' Java Expert System Shell is employed as the agent's rule engine. The shell's predicate logic lends itself well to capturing the heuristics and specifying the engineering rules within this domain. The declarative paradigm of the rule-based agent yields a highly modular and scalable design spanning multiple subsystems of the Shuttle. Several hundred monitoring rules have been written thus far, with corresponding notifications sent to Shuttle engineers. This chapter discusses the rule-based telemetry agent used for Space Shuttle ground processing. We present the problem domain along with design and development considerations such as information modeling, knowledge capture, and the deployment of the product. We also present ongoing work with other condition monitoring agents.
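    As a concrete illustration, here is a minimal sketch of the kind of user-defined rule such an agent evaluates, written against Jess (the Java Expert System Shell named above); the telemetry template, sensor name, and threshold are illustrative assumptions, not the agent's actual rules.

```java
// Minimal sketch of a user-defined telemetry alert rule in Jess.
// The template, rule, and threshold below are hypothetical examples.
import jess.JessException;
import jess.Rete;

public class TelemetryRuleSketch {
    public static void main(String[] args) throws JessException {
        Rete engine = new Rete();

        // Declare a telemetry fact template and one monitoring rule:
        // alert when a hypothetical hydraulic-pressure reading exceeds a limit.
        engine.executeCommand(
            "(deftemplate telemetry (slot sensor) (slot value))");
        engine.executeCommand(
            "(defrule pressure-alert " +
            "  (telemetry (sensor hydraulic-pressure) (value ?v&:(> ?v 3000))) " +
            "  => " +
            "  (printout t \"ALERT: hydraulic pressure \" ?v \" psi\" crlf))");

        // Assert a sample reading from the telemetry stream and fire the rules.
        engine.executeCommand("(assert (telemetry (sensor hydraulic-pressure) (value 3150)))");
        engine.run();
    }
}
```

    The declarative style is what the abstract credits for modularity: each monitoring condition is an independent rule, so rules for new subsystems can be added without touching existing ones.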

    The New Hampshire, Vol. 109, No. 21 (Mar. 26, 2020)

    An independent, student-produced newspaper from the University of New Hampshire.

    Developing a Computational Framework for a Construction Scheduling Decision Support Web Based Expert System

    Decision-making is one of the basic cognitive processes of human behavior, by which a preferred option or a course of action is chosen from among a set of alternatives based on certain criteria. Decision-making is the thought process of selecting a logical choice from the available options. When trying to make a good decision, all the positives and negatives of each option should be evaluated. This decision-making process is particularly challenging during the preparation of a construction schedule, where it is difficult for a human to analyze all possible outcomes of each and every situation, because construction of a project is performed in a real-time environment with real-time events that are subject to change at any time. The development of a construction schedule requires knowledge of the construction process that takes place to complete a project. Most of this knowledge is acquired through years of work/practical experience. Currently, working professionals and/or students develop construction schedules without the assistance of a decision support system (one that provides work/practical experience captured in previous jobs or by other people). Therefore, a scheduling decision support expert system will help in decision-making by expediting and automating the situation analysis to discover the best possible solution. However, the algorithm/framework needed to develop such a decision support expert system does not exist so far. Thus, the focus of my research is to develop a computational framework for a web-based expert system that helps the decision-making process during the preparation of a construction schedule. My research to develop a new computational framework for construction scheduling follows an action research methodology. The main foundation components for my research are scheduling techniques (such as the job shop problem), path-finding techniques (such as the travelling salesman problem), and rule-based languages (such as JESS). My computational framework is developed by combining these theories. The main contribution of my dissertation to computational science is the new scheduling framework, which consists of a combination of scheduling algorithms tested with construction scenarios. This framework could be useful in other areas where automatic job and/or task scheduling is necessary.
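    As a small illustration of the kind of reasoning such a decision support system automates (an illustration only, not the framework developed in the dissertation), the Java sketch below computes earliest start times for construction activities under precedence constraints; the activity names and durations are hypothetical.

```java
// Illustrative sketch: earliest start times for construction activities
// under precedence constraints (a forward pass over a hypothetical plan).
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class PrecedenceScheduleSketch {

    record Activity(String name, int durationDays, List<String> predecessors) {}

    public static void main(String[] args) {
        List<Activity> plan = List.of(
            new Activity("excavation", 5,  List.of()),
            new Activity("foundation", 10, List.of("excavation")),
            new Activity("framing",    15, List.of("foundation")),
            new Activity("electrical", 7,  List.of("framing")),
            new Activity("plumbing",   6,  List.of("framing")),
            new Activity("drywall",    4,  List.of("electrical", "plumbing")));

        // Forward pass: an activity starts when all of its predecessors finish.
        Map<String, Integer> finish = new LinkedHashMap<>();
        for (Activity a : plan) { // list is already in precedence order
            int start = a.predecessors().stream()
                .mapToInt(finish::get).max().orElse(0);
            finish.put(a.name(), start + a.durationDays());
            System.out.printf("%-11s start day %2d, finish day %2d%n",
                a.name(), start, start + a.durationDays());
        }
    }
}
```

    An expert system layer on top of a pass like this would use rules to encode the practical experience the abstract mentions, for example resequencing work when a real-time event invalidates an assumption.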

    Efficient branch and node testing

    Software testing evaluates the correctness of a program’s implementation through a test suite. The quality of a test case or suite is assessed with a coverage metric indicating what percentage of a program’s structure was exercised (covered) during execution. Coverage of every execution path is impossible due to infeasible paths and loops that result in an exponential or infinite number of paths. Instead, metrics such as the number of statements (nodes) or control-flow branches covered are used. Node and branch coverage require instrumentation probes to be present during program runtime. Traditionally, probes were statically inserted during compilation. These static probes remain even after coverage is recorded, incurring unnecessary overhead, reducing the number of tests that can be run, or requiring large amounts of memory. In this dissertation, I present three novel techniques for improving branch and node coverage performance for the Java runtime. First, Demand-driven Structural Testing (DDST) uses dynamic insertion and removal of probes so they can be removed after recording coverage, avoiding the unnecessary overhead of static instrumentation. DDST is built on a new framework for developing and researching coverage techniques, Jazz. DDST for node coverage averages 19.7% faster than statically-inserted instrumentation on an industry-standard benchmark suite, SPECjvm98. Due to DDST’s higher-cost probes, no single branch coverage technique performs best on all programs or methods. To address this, I developed Hybrid Structural Testing (HST). HST combines different test techniques, including static and DDST, into one run. HST uses a cost model for analysis, reducing the cost of branch coverage testing by an average of 48% versus static instrumentation and 56% versus DDST on SPECjvm98. HST never chooses certain techniques due to expensive analysis. I developed a third technique, Test Plan Caching (TPC), that exploits the inherent repetition in testing over a suite. TPC saves analysis results to avoid recomputation. Combined with HST, TPC produces a mix of techniques that record coverage quickly and efficiently. My three techniques reduce the average cost of branch coverage by 51.6–90.8% over previous approaches on SPECjvm98, allowing twice as many test cases in a given time budget.
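    The core idea behind demand-driven probes can be sketched in a few lines of Java; the class below is only a conceptual stand-in (not Jazz's API), where an early return models a probe that has been removed after recording its node.

```java
// Conceptual sketch of demand-driven node coverage: each probe records its
// node once and then becomes a no-op, standing in for the dynamic probe
// removal that DDST performs inside the JVM. Names are illustrative.
public class CoverageProbeSketch {

    private final boolean[] covered;

    public CoverageProbeSketch(int nodeCount) {
        covered = new boolean[nodeCount];
    }

    // In real DDST the probe is physically removed after firing; here the
    // early return models the probe being gone on later executions.
    public void probe(int nodeId) {
        if (covered[nodeId]) return;
        covered[nodeId] = true;
    }

    public double nodeCoverage() {
        int hit = 0;
        for (boolean b : covered) if (b) hit++;
        return 100.0 * hit / covered.length;
    }

    public static void main(String[] args) {
        CoverageProbeSketch cov = new CoverageProbeSketch(4);
        cov.probe(0); // method entry
        cov.probe(1); // then-branch taken
        cov.probe(1); // second hit: probe already "removed", no work done
        System.out.printf("node coverage: %.1f%%%n", cov.nodeCoverage());
    }
}
```

    Static instrumentation pays the probe cost on every execution; the point of removal is that hot loops stop paying it once their nodes and branches have been recorded.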

    Continuous Process Auditing (CPA): an Audit Rule Ontology Approach to Compliance and Operational Audits

    Continuous Auditing (CA) has been investigated over time and is, to some extent, already in practice within financial and transactional auditing as part of continuous assurance and monitoring. Enterprise Information Systems (EIS) that run their activities in the form of processes require continuous auditing of a process that invokes the action(s) specified in the policies and rules in a continuous manner and/or sometimes in real time. This leads to the question: how much could continuous auditing mimic the actual auditing procedures performed by auditing professionals? We investigate some of these questions through Continuous Process Auditing (CPA), relying on heterogeneous activities of processes in the EIS, as well as detecting exceptions and evidence in current and historic databases to provide audit assurance.
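    As a rough illustration of the kind of check CPA would run continuously (not the paper's audit rule ontology), the Java sketch below expresses two hypothetical compliance rules as predicates over transaction records and collects the exceptions as audit evidence.

```java
// Illustrative sketch: audit rules as predicates over transaction records,
// evaluated over a batch to collect exceptions. Rules and data are hypothetical.
import java.util.List;
import java.util.function.Predicate;

public class AuditRuleSketch {

    record Transaction(String id, String approver, double amount) {}

    public static void main(String[] args) {
        List<Transaction> batch = List.of(
            new Transaction("T-001", "alice", 950.00),
            new Transaction("T-002", "",      4200.00),    // missing approver
            new Transaction("T-003", "bob",   125000.00)); // over limit

        // Two compliance rules: payments above a limit and payments without approval.
        Predicate<Transaction> overLimit  = t -> t.amount() > 100_000;
        Predicate<Transaction> unapproved = t -> t.approver().isBlank();

        batch.stream()
             .filter(overLimit.or(unapproved))
             .forEach(t -> System.out.println("audit exception: " + t));
    }
}
```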

    Design and implementation of a multi-agent opportunistic grid computing platform

    Opportunistic Grid Computing involves joining idle computing resources in enterprises into a converged high-performance commodity infrastructure. The research described in this dissertation investigates the viability of public resource computing in offering a plethora of possibilities through seamless access to shared compute and storage resources. The research proposes and conceptualizes the Multi-Agent Opportunistic Grid (MAOG) solution in an Information and Communication Technologies for Development (ICT4D) initiative to address some limitations prevalent in traditional distributed system implementations. Proof-of-concept software components based on JADE (Java Agent Development Framework) validated Multi-Agent Systems (MAS) as an important tool for provisioning of Opportunistic Grid Computing platforms. Exploration of agent technologies within the research context identified two key components which improve access to extended computer capabilities. The first component is a Mobile Agent (MA) compute component in which a group of agents interact to pool shared processor cycles. The compute component integrates dynamic resource identification and allocation strategies by incorporating the Contract Net Protocol (CNP) and rule-based reasoning concepts. The second service is a MAS-based storage component realized through disk mirroring and the Google File System’s chunking with atomic-append storage techniques. This research provides a candidate Opportunistic Grid Computing platform design and implementation through the use of MAS. Experiments conducted validated the design and implementation of the compute and storage services. The results validated the MA compute component's support for processing user applications, resource identification and allocation, and rule-based reasoning. A MAS-based file system that implements chunking optimizations was considered optimal based on the evaluations. The findings from the undertaken experiments also validated the functional adequacy of the implementation and show the suitability of MAS for provisioning robust, autonomous, and intelligent platforms. The context of this research, ICT4D, provides a solution to optimizing and increasing the utilization of computing resources that are usually idle in these contexts.
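    The Contract Net Protocol step at the heart of the compute component can be illustrated with a short Java sketch; the node names and load-based bid below are hypothetical, and the actual platform realizes this exchange with JADE agents and ACL messages.

```java
// Illustrative sketch of a contract-net-style allocation: a task is announced,
// idle nodes bid with their current load, and the least-loaded bidder wins.
import java.util.Comparator;
import java.util.List;

public class ContractNetSketch {

    record Bid(String node, double loadFactor) {}

    public static void main(String[] args) {
        String task = "render-chunk-17";

        // Call for proposals: each idle workstation answers with its load.
        List<Bid> bids = List.of(
            new Bid("lab-pc-04", 0.35),
            new Bid("lab-pc-11", 0.10),
            new Bid("lab-pc-23", 0.80));

        // Award the task to the least-loaded bidder (lowest cost wins).
        Bid winner = bids.stream()
            .min(Comparator.comparingDouble(Bid::loadFactor))
            .orElseThrow();

        System.out.println("task " + task + " awarded to " + winner.node());
    }
}
```

    Because bids reflect the current state of each workstation, this kind of negotiation is what lets an opportunistic grid absorb idle cycles without a central scheduler that must know every node's load in advance.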