1,113 research outputs found

    TrusNet: Peer-to-Peer Cryptographic Authentication

    Get PDF
    Originally, the Internet was meant as a general-purpose communication protocol, transferring primarily text documents between interested parties. Over time, documents expanded to include pictures, videos and even web pages. Increasingly, the Internet is being used to transfer a new kind of data which it was never designed for. In most ways, this new data type fits naturally into the Internet, taking advantage of the near-limitless expanse of the protocol. Unlike previous data types, however, hardware protocols pose a unique set of security problems. Much like financial data, hardware protocols extended across the Internet must be protected with authentication. Currently, systems which do authenticate do so through a central server, using an authentication model similar to that of the HTTPS protocol. This hierarchical model is often at odds with the needs of hardware protocols, particularly in ad-hoc networks where peer-to-peer communication is prioritized over a hierarchy. Our project implements a peer-to-peer cryptographic authentication protocol for protecting hardware protocols that extend over the Internet. The TrusNet project uses public-key cryptography to authenticate nodes on a distributed network, with each node locally managing a record of the public keys of the nodes it has encountered. These keys are used to secure data transmission between nodes and to authenticate node identities. TrusNet is designed to run over multiple types of network interfaces, but currently has explicit hooks only for Internet Protocol connections. As of June 2016, TrusNet has successfully achieved a basic authentication and communication protocol on Windows 7, OS X, Linux 14 and the Intel Edison. TrusNet uses RC4 as its stream cipher and RSA as its public-key algorithm, although both are easily configurable. Along with the library, the TrusNet build also produces a unit-testing suite, a simple UI application designed to visualize the basics of the system, and a build with hooks into the I/O pins of the Intel Edison, allowing a basic demonstration of the system.
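    The key-record scheme the abstract describes, where each node caches the public keys of peers it has encountered and checks later sessions against that cache, resembles a trust-on-first-use policy. A minimal sketch of that idea in Python (the class and method names are illustrative assumptions, not TrusNet's actual API):

```python
import hashlib

class PeerKeyRecord:
    """Per-node cache of encountered peers' public keys (trust on first use).

    Hypothetical sketch in the spirit of the abstract, not TrusNet's API.
    """

    def __init__(self):
        self._fingerprints = {}  # node_id -> SHA-256 fingerprint of public key

    @staticmethod
    def _fingerprint(public_key_der: bytes) -> str:
        return hashlib.sha256(public_key_der).hexdigest()

    def authenticate(self, node_id: str, public_key_der: bytes) -> bool:
        """First contact records the peer's key; later contacts must match it."""
        fp = self._fingerprint(public_key_der)
        known = self._fingerprints.setdefault(node_id, fp)
        return known == fp
```

    Under this policy the first encounter is trusted by construction; what the record protects against is a different key later impersonating a previously seen node, without requiring any central certificate authority.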

    Integration of Abductive and Deductive Inference Diagnosis Model and Its Application in Intelligent Tutoring System

    Get PDF
    This dissertation presents a diagnosis model, the Integration of Abductive and Deductive Inference (IADI) diagnosis model, designed in light of the cognitive processes of human diagnosticians. In contrast with other diagnosis models, which are based on enumerating, tracking and classifying approaches, the IADI diagnosis model relies on different inferences to solve diagnosis problems. Studies of human diagnosticians' processes show that a diagnosis process is actually a hypothesizing process followed by a verification process. The IADI diagnosis model integrates abduction and deduction to simulate these processes: abductive inference captures the plausible nature of the hypothesizing process, while deductive inference reflects the nature of the verification process. The IADI diagnosis model combines the two inference mechanisms with a structure analysis to form the three steps of diagnosis: mistake detection by structure analysis, misconception hypothesizing by abductive inference, and misconception verification by deductive inference. An intelligent tutoring system, the Recursive Programming Tutor (RPT), has been designed and developed to teach students the basic concepts of recursive programming. The RPT prototype illustrates the basic features of the IADI diagnosis approach, and also demonstrates a hypertext-based tutoring environment and tutoring strategies such as concentrating diagnosis on the key steps of problem solving, organizing explanations by design plans, and incorporating the process of tutoring into diagnosis.
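    The three diagnosis steps map naturally onto a small pipeline. A hedged sketch in Python (the rule tables and program representation here are illustrative assumptions, not the RPT implementation):

```python
# Illustrative three-step IADI-style diagnosis pipeline.

def detect_mistakes(student_steps, reference_slots):
    """Step 1: structure analysis flags steps that deviate from the plan."""
    return [s for s in student_steps if s["plan_slot"] not in reference_slots]

def hypothesize_misconceptions(mistakes, abduction_rules):
    """Step 2: abduction proposes misconceptions that could explain each mistake."""
    return {rule["misconception"]
            for m in mistakes
            for rule in abduction_rules
            if rule["symptom"] == m["plan_slot"]}

def verify_misconceptions(hypotheses, observed_errors, deduction_rules):
    """Step 3: deduction keeps only hypotheses whose predicted errors
    actually appear in the student's work."""
    return {h for h in hypotheses
            if deduction_rules.get(h, set()) & observed_errors}
```

    The point of the split is that abduction is allowed to over-generate plausible causes cheaply, and the deductive pass then filters them against what the student actually did.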

    WheaCha: A Method for Explaining the Predictions of Models of Code

    Full text link
    Attribution methods have emerged as a popular approach to interpreting model predictions based on the relevance of input features. Although a feature-importance ranking can provide insight into how a model arrives at a prediction from a raw input, it does not give a clear-cut definition of the key features the model uses for the prediction. In this paper, we present a new method, called WheaCha, for explaining the predictions of code models. Although WheaCha employs the same mechanism of tracing model predictions back to input features, it differs from all existing attribution methods in crucial ways. Specifically, for any prediction of a learned code model, WheaCha divides an input program into "wheat" (i.e., the defining features that are the reason the model predicts the label it predicts) and the remaining "chaff". We realize WheaCha in a tool, HuoYan, and use it to explain four prominent code models: code2vec, seq-GNN, GGNN, and CodeBERT. Results show that (1) HuoYan is efficient, taking on average under twenty seconds to compute the wheat for an input program in an end-to-end fashion (i.e., including model prediction time); (2) the wheat that all models use to predict input programs is made of simple syntactic or even lexical properties (i.e., identifier names); and (3) based on wheat, we present a novel approach to explaining the predictions of code models through the lens of training data.
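    The wheat/chaff split can be pictured as a search for a minimal subset of input tokens that still yields the original prediction. A rough sketch of one such reduction loop in Python (an illustrative greedy strategy under that reading, not necessarily the algorithm HuoYan implements):

```python
def find_wheat(tokens, predict, pad="<unk>"):
    """Greedily mask tokens that don't change the model's prediction.

    `predict` maps a token sequence to a label; `pad` stands in for
    removed tokens. Whatever cannot be masked without flipping the
    label is returned as the 'wheat'.
    """
    original = predict(tokens)
    kept = list(tokens)
    for i in range(len(kept)):
        trial = kept[:i] + [pad] + kept[i + 1:]
        if predict(trial) == original:
            kept = trial  # token i is chaff: masking it preserves the label
    return [t for t in kept if t != pad]
```

    Each candidate mask costs one model call, which is consistent with the abstract's framing of end-to-end cost in terms of model prediction time.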

    Supporting feature-level software maintenance

    Get PDF
    Software maintenance is the process of modifying a software system to fix defects, improve performance, add new functionality, or adapt the system to a new environment. A maintenance task is often initiated by a bug report or a request for new functionality. Bug reports typically describe problems with incorrect behaviors or functionalities. These behaviors or functionalities are known as features. Even in very well-designed systems, the source code that implements features is often not completely modularized. The delocalized nature of features makes maintaining them challenging. Since maintenance tasks are expressed in terms of features, the goal of this dissertation is to support software maintenance at the feature level. We focus on two tasks in particular: feature location and impact analysis via feature coupling.

    Feature location is the process of identifying the source code that implements a feature, and it is an essential first step in any maintenance task. There are many existing techniques for feature location that incorporate various types of analyses, such as static, dynamic, and textual. In this dissertation, we recognize the advantages of leveraging several types of analyses and introduce a new approach to feature location based on combining dynamic analysis, textual analysis, and web mining algorithms applied to software. The use of web mining for feature location is a novel contribution, and we show that our new techniques based on web mining are significantly more effective than the current state of the art.

    After using feature location to identify a feature's source code, maintenance can be completed on that feature. Impact analysis should then be performed to revalidate the system and determine which other features may have been affected by the modifications. We define three feature coupling metrics that capture the relationship between features based on structural information, textual information, and their combination. Our novel feature coupling metrics can be used for impact analysis to quantify the strength of coupling between pairs of features. We performed three empirical studies on open-source software systems to assess the feature coupling metrics and established three major results. First, there is a moderate to strong statistically significant correlation between feature coupling and faults. Second, feature coupling can be used to correctly determine about half of the other features that would be affected by a change to a given feature. Finally, we found that the metrics align with developers' opinions about pairs of features that are actually coupled.
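    "Web mining applied to software" treats the program's call graph like a web graph, so link-analysis algorithms can rank methods by relevance to a feature. A hedged sketch of combining dynamic, textual, and web-mining information using networkx PageRank (the scoring scheme below is an illustrative assumption, not the dissertation's exact technique):

```python
import networkx as nx

def rank_methods(call_edges, textual_scores, alpha=0.85):
    """Rank methods for feature location by biasing PageRank over the
    call graph with per-method textual similarity to the feature query.

    call_edges:     iterable of (caller, callee) pairs, e.g. from an
                    execution trace of the feature (dynamic analysis)
    textual_scores: dict method -> similarity of its identifiers and
                    comments to the feature query (textual analysis)
    """
    graph = nx.DiGraph(call_edges)
    # Use textual relevance as the personalization vector, so the random
    # surfer restarts preferentially at methods whose text matches the query.
    weights = {m: textual_scores.get(m, 1e-6) for m in graph}
    scores = nx.pagerank(graph, alpha=alpha, personalization=weights)
    return sorted(scores, key=scores.get, reverse=True)
```

    Restricting the graph to methods that executed during the feature's scenario is what lets the dynamic and web-mining analyses reinforce each other rather than rank the whole system.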