
    Expert system verification and validation study. Phase 2: Requirements identification. Delivery 1: Updated survey report

    The purpose is to report the state-of-the-practice in Verification and Validation (V and V) of Expert Systems (ESs) in current NASA and industry applications. This is the first task of a series whose ultimate purpose is to ensure that adequate ES V and V tools and techniques are available for Space Station Knowledge Based Systems development. The strategy for determining the state-of-the-practice is to check how well each of the known ES V and V issues is being addressed and to what extent these issues have impacted the development of Expert Systems.

    Expert system verification and validation survey. Delivery 3: Recommendations

    The purpose is to determine the state-of-the-practice in Verification and Validation (V and V) of Expert Systems (ESs) in current NASA and industry applications. This is the first task of a series whose ultimate purpose is to ensure that adequate ES V and V tools and techniques are available for Space Station Knowledge Based Systems development. The strategy for determining the state-of-the-practice is to check how well each of the known ES V and V issues is being addressed and to what extent these issues have impacted the development of ESs.

    Rethinking Map Legends with Visualization

    This design paper presents new guidance for creating map legends in a dynamic environment. Our contribution is a set of guidelines for legend design in a visualization context and a series of illustrative themes through which they may be expressed. These are demonstrated in an applications context through interactive software prototypes. The guidelines are derived from the cartographic literature and in liaison with EDINA, who provide digital mapping services for UK tertiary education. They extend approaches to legend design that evolved for static media into visualization by considering selection, layout, symbols, position, dynamism, and design and process. Broad visualization legend themes include: The Ground Truth Legend, The Legend as Statistical Graphic, and The Map is the Legend. Together, these concepts enable us to augment legends with dynamic properties that address specific needs, to rethink their nature and role, and to contribute to a wider re-evaluation of maps as artifacts of usage rather than statements of fact. As a consequence of this work, EDINA has acquired funding to provide its clients with visualization legends that use these concepts. The guidance applies to the design of a wide range of legends and keys used in cartography and information visualization.
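
    As a concrete illustration of the "Legend as Statistical Graphic" theme, the hedged sketch below (not from the paper; the data, colormap, and binning are invented for illustration) renders a legend as a histogram of the mapped attribute, coloured with the same scheme as the map, so the legend also communicates the value distribution.

```python
# Sketch of a "legend as statistical graphic": a histogram of the mapped values,
# coloured with the same colormap the map would use. Data is synthetic.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm

values = np.random.default_rng(0).lognormal(mean=2.0, sigma=0.6, size=500)  # stand-in for the mapped attribute
counts, edges = np.histogram(values, bins=20)
norm = plt.Normalize(edges[0], edges[-1])

fig, ax = plt.subplots(figsize=(3, 2))
for count, left, right in zip(counts, edges[:-1], edges[1:]):
    ax.bar(left, count, width=right - left, align="edge",
           color=cm.viridis(norm((left + right) / 2)))  # bar colour matches the map classes
ax.set_xlabel("attribute value")
ax.set_ylabel("feature count")
ax.set_title("Legend as statistical graphic")
plt.tight_layout()
plt.show()
```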

    Searching, Selecting, and Synthesizing Source Code Components

    As programmers develop software, they instinctively sense that source code exists that could be reused if found --- many programming tasks are common to many software projects across different domains. Oftentimes, a programmer will attempt to create new software from this existing source code, such as third-party libraries or code from online repositories. Unfortunately, several major challenges make it difficult to locate the relevant source code and to reuse it. First, there is a fundamental mismatch between the high-level intent reflected in the descriptions of source code and the low-level implementation details. This mismatch is known as the concept assignment problem, and refers to the frequent case when the keywords from comments or identifiers in code do not match the features implemented in the code. Second, even if relevant source code is found, programmers must invest significant intellectual effort into understanding how to reuse the different functions, classes, or other components present in the source code. These components may be specific to a particular application and difficult to reuse. One key source of information that programmers use to understand source code is the set of relationships among the source code components. These relationships are typically structural data, such as function calls or class instantiations. This structural data has been repeatedly suggested as an alternative to textual analysis for search and reuse; however, as yet no comprehensive strategy exists for locating relevant and reusable source code. In my research program, I harness this structural data in a unified approach to creating and evolving software from existing components. For locating relevant source code, I present a search engine for finding applications based on the underlying Application Programming Interface (API) calls, and a technique for finding chains of relevant function invocations from repositories of millions of lines of code. Next, for reusing source code, I introduce a system to facilitate building software prototypes from existing packages, and an approach to detecting similar software applications.
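
    To make the structural-search idea concrete, here is a minimal sketch (my own illustration, not the dissertation's system) that indexes Python projects by the names of the API calls they make and ranks projects by Jaccard overlap with a query set of API names; the repository paths and query APIs are hypothetical.

```python
# Rank projects by how closely their set of API call names matches a query set.
import ast
import os

def api_calls_in_project(root):
    """Collect the names of functions/methods called anywhere under `root`."""
    calls = set()
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(".py"):
                continue
            try:
                tree = ast.parse(open(os.path.join(dirpath, name), encoding="utf-8").read())
            except SyntaxError:
                continue
            for node in ast.walk(tree):
                if isinstance(node, ast.Call):
                    if isinstance(node.func, ast.Name):
                        calls.add(node.func.id)
                    elif isinstance(node.func, ast.Attribute):
                        calls.add(node.func.attr)
    return calls

def rank_projects(project_dirs, query_apis):
    """Rank projects by Jaccard similarity between their calls and the query APIs."""
    scored = []
    for project in project_dirs:
        calls = api_calls_in_project(project)
        if calls:
            scored.append((len(calls & query_apis) / len(calls | query_apis), project))
    return sorted(scored, reverse=True)

# Hypothetical usage: find projects that exercise a socket-style API surface.
print(rank_projects(["./repos/app_a", "./repos/app_b"], {"connect", "send", "recv"}))
```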

    Metamorphoses of ONAV console operations: From prototype to real time application

    The ONAV (Onboard Navigation) Expert System is being developed as a real-time console assistant to the ONAV flight controller for use in the Mission Control Center at the Johnson Space Center. Currently the entry and rendezvous systems are in verification, and the ascent system is being prototyped. To arrive at this stage, from prototype to real-world application, the ONAV project has had to deal with not only AI issues but also operating-environment issues. The AI issues included the maturity of AI languages and their debugging tools, the question of what constitutes verification, and the availability, stability, and size of the expert pool. The environmental issues included real-time data acquisition, hardware stability, and how to achieve acceptance by users and management.

    Simplifying Deep-Learning-Based Model for Code Search

    To accelerate software development, developers frequently search and reuse existing code snippets from large-scale codebases such as GitHub. Over the years, researchers have proposed many information retrieval (IR) based models for code search, which match keywords in a query with code text, but these fail to bridge the semantic gap between query and code. To overcome this challenge, Gu et al. proposed a deep-learning-based model named DeepCS. It jointly embeds method code and natural language descriptions into a shared vector space, where methods related to a natural language query are retrieved according to their vector similarities. However, DeepCS' working process is complicated and time-consuming. To address this issue, we propose a simplified model, CodeMatcher, that leverages IR techniques while retaining many of DeepCS' features. In essence, CodeMatcher keeps the query keywords in their original order, performs a fuzzy search over method name and body strings, and returns the methods that match the longest sequence of query keywords. We verified its effectiveness on a large-scale codebase of about 41k repositories. Experimental results show that CodeMatcher outperforms DeepCS by 97% in terms of MRR (a widely used accuracy measure for code search) and is over 66 times faster than DeepCS. Moreover, compared with the state-of-the-art IR-based model CodeHow, CodeMatcher improves MRR by 73%. We also observed that fusing the advantages of IR-based and deep-learning-based models is promising, because they complement each other by nature, and that improving the quality of method naming helps code search, since method names play an important role in connecting queries and code.
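
    The ranking idea (keep query keywords in order and fuzzily match them against method names and bodies) can be sketched as follows. This is a simplified illustration under my own assumptions, not the published CodeMatcher implementation; the scoring heuristic and example methods are invented.

```python
# Score methods by the longest in-order run of query keywords found in their
# name tokens, then in their body tokens.
import re

def tokenize(identifier_or_text):
    """Split camelCase / snake_case identifiers and free text into lowercase tokens."""
    parts = re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])|\d+", identifier_or_text)
    return [p.lower() for p in parts]

def matched_prefix_length(query_tokens, method_tokens):
    """How many leading query tokens appear, in order, within the method tokens."""
    pos, matched = 0, 0
    for q in query_tokens:
        try:
            pos = method_tokens.index(q, pos) + 1
            matched += 1
        except ValueError:
            break
    return matched

def search(query, methods):
    """Rank (name, body) pairs by in-order keyword coverage of the name, then the body."""
    q = tokenize(query)
    scored = []
    for name, body in methods:
        score = (matched_prefix_length(q, tokenize(name)),
                 matched_prefix_length(q, tokenize(body)))
        scored.append((score, name))
    return sorted(scored, reverse=True)

methods = [("readFileToString", "return new String(Files.readAllBytes(path))"),
           ("writeStringToFile", "Files.write(path, text.getBytes())")]
print(search("read file to string", methods))
```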

    IMPLEMENTATION OF RAPID APPLICATION DEVELOPMENT (RAD) METHOD IN THE DESIGN OF RESEARCH PARTNER RECOMMENDATION SYSTEM IN HIGHER EDUCATION

    In a higher-education environment that carries out the tridharma obligations, one of a lecturer's roles is to be a researcher. Collaboration between researchers with different expertise is important for providing solutions to problems and can also improve the quality of the research itself. This research provides a research-partner recommendation system built by applying the Naïve Bayes machine learning algorithm and the Rapid Application Development (RAD) method. The software is developed with the PHP programming language and a MySQL database, and is tested using black-box testing. The application provides recommendations for research partners based on keywords entered in the form of research topics. Recommendations are given as a weighted percentage indicating suitability for the entered keywords. The algorithm was applied successfully and all tests passed, so researchers can use this software to find the right research partner for the research topic concerned.
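
    The classification step can be illustrated with a small sketch. The paper's system is implemented in PHP with MySQL; the Python sketch below only demonstrates the Naïve Bayes idea, and the researcher names, keyword profiles, and query topic are made up.

```python
# Recommend research partners by the predicted probability (as a percentage)
# that a topic matches each researcher's keyword profile.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

profiles = {
    "Researcher A": "machine learning classification text mining naive bayes",
    "Researcher B": "computer networks routing protocols wireless sensors",
    "Researcher C": "databases query optimization indexing data warehousing",
}
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(list(profiles.values()), list(profiles.keys()))

def recommend(topic):
    probs = model.predict_proba([topic])[0]
    ranked = sorted(zip(model.classes_, probs), key=lambda x: -x[1])
    return [(name, round(100 * p, 1)) for name, p in ranked]

print(recommend("text classification with naive bayes"))
```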

    Collaborative-demographic hybrid for financial product recommendation

    Internship Report presented as the partial requirement for obtaining a Master's degree in Data Science and Advanced Analytics. Due to the increased availability of mature data mining and analysis technologies supporting CRM processes, several financial institutions are striving to leverage customer data and integrate insights regarding customer behaviour, needs, and preferences into their marketing approach. As decision support systems assisting marketing and commercial efforts, Recommender Systems applied to the financial domain have been gaining increased attention. This thesis studies a Collaborative-Demographic Hybrid Recommendation System, applied to the financial services sector, based on real data provided by a Portuguese private commercial bank. This work establishes a framework to support account managers’ advice on which financial product is most suitable for each of the bank’s corporate clients. The recommendation problem is further developed by conducting a performance comparison of multi-output regression and multiclass classification prediction approaches. Experimental results indicate that multiclass architectures are better suited for the prediction task, outperforming alternative multi-output regression models on the evaluation metrics considered. A multiclass Feed-Forward Neural Network, combined with Recursive Feature Elimination, is identified as the top-performing algorithm, yielding a 10-fold cross-validated F1 measure of 83.16%, with corresponding Precision and Recall values of 84.34% and 85.29%, respectively. Overall, this study provides important contributions to positioning the bank’s commercial efforts around customers’ future requirements. By allowing for a better understanding of customers’ needs and preferences, the proposed Recommender allows for more personalized and targeted marketing contacts, leading to higher conversion rates, corporate profitability, and customer satisfaction and loyalty.
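
    For readers unfamiliar with this kind of setup, the following is a hedged sketch of the pipeline described (recursive feature elimination followed by a multiclass feed-forward neural network, evaluated with 10-fold cross-validated F1). Since RFE needs an estimator exposing coefficients, a logistic-regression selector stands in here; the thesis' exact features, data, and hyperparameters are not reproduced, and synthetic data is used.

```python
# RFE-based feature selection followed by a multiclass MLP, scored with
# 10-fold cross-validated macro F1. Data is synthetic, for illustration only.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=600, n_features=40, n_informative=12,
                           n_classes=4, random_state=0)  # stand-in for client/product data

pipeline = make_pipeline(
    StandardScaler(),
    RFE(LogisticRegression(max_iter=1000), n_features_to_select=15),
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0),
)
scores = cross_val_score(pipeline, X, y, cv=10, scoring="f1_macro")
print(f"10-fold CV macro F1: {scores.mean():.3f} +/- {scores.std():.3f}")
```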

    Creating a Software Assembly Line

    This thesis describes a technical solution that improved the software development efforts needed to verify and validate a medical device, herein referred to as the “medical device.” The medical device had many software and hardware configurations that had to be developed, integrated, managed, and tested. There were a number of problems with the manual processes being used to verify and validate the product, so this project developed a system called the “Software Assembly Line” to continuously build software and automatically test it on multiple hardware configurations. As a result, software quality and predictability were improved, and the number of cycles required for formal verification and validation was reduced. The final project recommendation was to validate the Software Assembly Line according to 21 CFR 820.75, Process Validation.
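
    To illustrate the continuous-build-and-test flow described above, here is a minimal orchestration sketch under my own assumptions: the build and test commands, configuration names, and test-rig identifiers are placeholders, not the thesis' actual tooling.

```python
# Build every software configuration for a change, then run automated tests
# against each hardware configuration and collect the results.
import subprocess
from itertools import product

SOFTWARE_CONFIGS = ["release", "debug"]   # hypothetical build variants
HARDWARE_CONFIGS = ["rig-01", "rig-02"]   # hypothetical test rigs

def run(cmd):
    """Run a shell command, returning True on success."""
    return subprocess.run(cmd, shell=True).returncode == 0

def assembly_line(change_id):
    results = {}
    for sw in SOFTWARE_CONFIGS:
        if not run(f"./build.sh --config {sw} --change {change_id}"):   # placeholder build step
            results[(sw, "build")] = "FAILED"
            continue
        for hw in HARDWARE_CONFIGS:
            ok = run(f"./run_tests.sh --target {hw} --config {sw}")     # placeholder test step
            results[(sw, hw)] = "PASSED" if ok else "FAILED"
    return results

if __name__ == "__main__":
    for key, status in assembly_line("change-1234").items():
        print(key, status)
```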