
    A Study of Student Performance in Combined Courses

    Here at the University of Alabama in Huntsville, certain graduate courses are open to both graduate students and upper-level undergraduate students. In the Computer Science Department, due to a lack of teaching faculty, for a period of time some of these courses became required courses for both undergraduates and certain graduate students, rather than having separate required courses for each. Similarly, during the same period, most of the elective courses available to undergraduate students were courses also open to, and thus heavily populated by, graduate students. This study investigates whether undergraduate students suffer overall by being placed in courses with graduate students, and conversely whether graduate students suffer by being placed in courses with undergraduate students. Both required and elective courses are examined. Variations such as additional preparation in the form of an extra prerequisite for undergraduates are investigated. The impact of student quality, as indicated by ACT and GRE scores, is also taken into account. The study found that undergraduate students perform about as well in courses with graduate students as they do in courses where normally only undergraduate students are present.
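    As an illustration of the kind of group comparison the study describes, the sketch below contrasts grades of two hypothetical undergraduate cohorts with Welch's t-test. The grade arrays, variable names, and choice of test are assumptions for illustration, not the study's actual data or methodology.

```python
# Illustrative sketch (not the study's actual analysis): comparing undergraduate
# grades in combined grad/undergrad courses vs. undergraduate-only courses.
# The GPA-scale grades below are hypothetical placeholders.
from scipy import stats

combined_course_grades = [3.1, 2.8, 3.5, 3.0, 2.9, 3.3]  # undergrads in mixed courses
undergrad_only_grades = [3.2, 3.0, 3.4, 2.7, 3.1, 3.0]   # undergrads in undergrad-only courses

# Welch's t-test does not assume equal variances between the two groups.
t_stat, p_value = stats.ttest_ind(combined_course_grades, undergrad_only_grades,
                                  equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
# A large p-value is consistent with the study's finding that undergraduates
# perform about the same in both settings.
```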

    Semantic Metrics for Analysis of Software

    A recently conceived suite of object-oriented software metrics focuses on semantic aspects of software, in contradistinction to traditional software metrics, which focus on syntactic aspects. Semantic metrics represent a more human-oriented view of software than do syntactic metrics. The semantic metrics of a given computer program are calculated from the output of a knowledge-based analysis of the program, and are substantially more representative of software quality, and more readily comprehensible from a human perspective, than syntactic metrics.
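    To make the syntactic/semantic distinction concrete, here is a minimal sketch contrasting a syntactic metric (method count) with a semantic-style count of domain concepts a class touches. The concept lexicon, the Account class, and the simple word matching are illustrative stand-ins for the output of a real knowledge-based analysis.

```python
# Hypothetical contrast: syntactic metric (method count) vs. a semantic-style
# metric (distinct domain concepts the class's identifiers and comments map to).
DOMAIN_CONCEPTS = {"account", "balance", "deposit", "withdraw", "interest"}

class_source = {
    "name": "Account",
    "methods": ["deposit", "withdraw", "get_balance", "apply_interest"],
    "comments": ["track the account balance", "apply monthly interest"],
}

# Syntactic view: size of the interface, blind to meaning.
syntactic_metric = len(class_source["methods"])

# Semantic view: how many domain concepts the class's text touches,
# standing in for a knowledge-based program-understanding pass.
words = set()
for text in class_source["methods"] + class_source["comments"]:
    words.update(text.replace("_", " ").lower().split())
semantic_metric = len(words & DOMAIN_CONCEPTS)

print(f"syntactic (method count): {syntactic_metric}")  # 4
print(f"semantic (concepts hit):  {semantic_metric}")   # 5
```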

    Empirical Validation of the RCDC and RCDE Semantic Complexity Metrics for Object-Oriented Software

    The Relative Class Domain Complexity (RCDC) and Relative Class Definition Entropy (RCDE) semantic metrics have been proposed as complexity metrics for object-oriented software. These semantic metrics are calculated on a knowledge-based representation of the software, produced by a knowledge-based program-understanding examination of the software. The metrics have great potential because they can be applied during the software design phase, whereas most complexity metrics cannot be applied until development is complete. In this paper, we present the results of a study to empirically validate the RCDC and RCDE metrics. We show that the metrics compare favorably with the findings of human experts and that they correlate well with the results of conventional complexity metrics.
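    As a rough sketch of what an entropy-style class metric in the spirit of RCDE might look like (the published formula may differ), the example below computes normalized Shannon entropy over hypothetical concept frequencies attached to a class by a knowledge-based analysis.

```python
# Hedged sketch, not the published RCDE formula: Shannon entropy of the
# frequency distribution of domain concepts attached to a class, normalized
# to [0, 1]. The concept counts are hypothetical analysis output.
import math
from collections import Counter

concept_occurrences = Counter({"account": 5, "balance": 3, "interest": 2, "audit": 1})

total = sum(concept_occurrences.values())
entropy = -sum((n / total) * math.log2(n / total)
               for n in concept_occurrences.values())

# Normalize by the maximum possible entropy for this many distinct concepts,
# so classes of different sizes can be compared.
max_entropy = math.log2(len(concept_occurrences))
relative_entropy = entropy / max_entropy if max_entropy > 0 else 0.0

print(f"relative definition entropy: {relative_entropy:.3f}")
```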

    Several Different Definitions of the Lack of Cohesion of Methods (LCOM) Metric

    Several different definitions of the Lack of Cohesion of Methods (LCOM) metric exist. Various implementations of the LCOM metric are possible, differing in how inheritance and the use of the constructor and destructor are treated in the calculation. This paper discusses the pros and cons of the possible definitions and implementations of the LCOM metric. An experiment that compared each implementation and definition of LCOM to cohesiveness as determined by seven experts is described. Linear regression analyses comparing cohesiveness to the various LCOM metrics are discussed. Software metrics for the procedural software development paradigm have been extensively studied. Metrics such as McCabe's cyclomatic complexity metric [1] and Halstead's Software Science metrics [2] are well known and frequently used to measure software complexity in the procedural paradigm. More recently, software metrics tailored to the measurement of design complexity in the object-oriented paradigm have been developed. Chid..
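    For concreteness, the sketch below implements one widely cited LCOM definition (Chidamber and Kemerer's pair-counting form) over a hypothetical class. Whether inherited attributes or constructor/destructor methods enter the calculation is precisely the kind of implementation choice the paper compares; this sketch simply takes whatever method-to-attribute mapping it is given.

```python
# One common LCOM definition: count method pairs sharing no instance
# attributes (P) versus pairs sharing at least one (Q); LCOM = |P| - |Q|,
# floored at zero. The example class data is hypothetical.
from itertools import combinations

def lcom_ck(method_attrs: dict[str, set[str]]) -> int:
    """method_attrs maps each method name to the set of attributes it uses."""
    p = q = 0
    for (_, a1), (_, a2) in combinations(method_attrs.items(), 2):
        if a1 & a2:
            q += 1  # pair shares at least one attribute
        else:
            p += 1  # pair shares nothing
    return max(p - q, 0)

# Hypothetical class: two methods share 'balance'; the third touches neither.
print(lcom_ck({
    "deposit":  {"balance"},
    "withdraw": {"balance", "overdraft"},
    "audit":    {"log"},
}))  # P=2, Q=1 -> LCOM = 1
```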

    Confidence-Based Cheat Detection Through Constrained Order Inference of Temporal Sequences

    For particular domains, duplication may be indicative of cheating or an adversarial act intended to skew data. For Sony's PlayStation Network (PSN), which services the world's most popular gaming platform, we observe cheating through duplication of user data in the context of trophies/achievements. This domain is representative of the challenges of increasingly prevalent temporal data, where conventional similarity- and distance-based deduplication techniques struggle. We adapt the Adaptive Sorted Neighborhood Method (ASNM) to temporal domains by applying ASNM, inferring attribute metadata, and inferring temporal ordering requirements with the subsequence discovery techniques Longest Common Subsequence (LCS) and Needleman-Wunsch (NW). For records of a shared type, we split each record's time-ordered events into constrained and unconstrained sequences. Through both a binary classification and a confidence-based approach, we flag suspicious (errant) records that do not adhere to the inferred constrained order, and may mark a record as a duplicate if its unconstrained order matches that of another record. ASNM, ASNM+LCS, and ASNM+NW were evaluated against a labeled dataset of 22,794 records from PSN trophy data where duplication may be indicative of cheating. ASNM+LCS achieved an F1 of 0.949 using the confidence-based approach, outperforming ASNM and ASNM+NW. ASNM's best performance was an F1 of 0.708 at the 0.99 similarity threshold; ASNM+NW's best was an F1 of 0.942 using the confidence-based approach. The significant performance improvement costs little overhead, as ASNM+LCS and ASNM+NW averaged only 3.79% and 5.75% additional runtime, respectively.
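    The core of the LCS-based order check can be sketched as follows. This is a minimal illustration under assumed event names and an assumed confidence threshold, not the paper's full ASNM+LCS pipeline: a record's constrained (time-ordered) event sequence is scored against an inferred canonical order, and records whose score falls below the threshold are flagged as suspicious.

```python
# Minimal sketch of an LCS-based order-adherence check; event names,
# canonical order, and the 0.9 threshold are hypothetical.
def lcs_length(a: list[str], b: list[str]) -> int:
    """Classic dynamic-programming longest common subsequence."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if x == y else max(dp[i-1][j], dp[i][j-1])
    return dp[len(a)][len(b)]

def order_confidence(record_events: list[str], canonical: list[str]) -> float:
    """Fraction of the record's constrained events that respect the inferred order."""
    return lcs_length(record_events, canonical) / len(record_events)

canonical = ["tutorial", "level_1", "level_2", "boss", "ending"]
honest = ["tutorial", "level_1", "level_2", "boss", "ending"]
errant = ["ending", "boss", "tutorial", "level_1", "level_2"]  # trophies out of order

print(order_confidence(honest, canonical))  # 1.0
print(order_confidence(errant, canonical))  # 0.6 -> below a 0.9 threshold: suspicious
```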