    Defining Expertise in the Use of Constraint-based CAD Tools by Examining Practicing Professionals

    Academic engineering graphics curricula face a rapidly changing knowledge base, and current teaching and assessment methods are struggling to keep pace.  This paper is the second in a two-part series that examines practicing engineering graphics professionals to discover how they developed expertise in the use of constraint-based CAD tools.  It presents the results of a knowledge-mapping task and a think-aloud modeling task conducted with five practicing product designers to examine the solid modeling strategies they used when creating a 3D model and their organization of the concepts that make up the knowledge domain of constraint-based CAD tools.  The think-aloud modeling task yielded five specific modeling procedures, which were distilled into one common modeling procedure for the given object.  The knowledge-mapping task produced five separate knowledge maps, whose common elements were combined into a generic knowledge map for the use of constraint-based CAD tools.  Together, these two sets of results form the initial elements of a definition of expertise in the use of constraint-based CAD tools, based on these five participants.  This article provides an initial look at an approach to creating geometry with constraint-based CAD tools, as well as specific topics to include in a curriculum that covers constraint-based CAD tools.  These conclusions also suggest potential teaching and assessment methodologies.

    Modeling the object-oriented software process: OPEN and the unified process

    A short introduction to software process modeling is presented, with a focus on object-oriented modeling. Two major industrial process models are discussed: the OPEN model and the Unified Process model. In more detail, quality assurance in the Unified Process tool (formerly called Objectory) is reviewed.

    CT Automated Exposure Control Using A Generalized Detectability Index

    Purpose: Identifying an appropriate tube current setting can be challenging when using iterative reconstruction because the relationship between spatial resolution, contrast, noise, and dose varies across algorithms. This study developed and investigated a generalized detectability index (d′gen) for determining the noise parameter to input to existing automated exposure control (AEC) systems, so as to provide consistent image quality (IQ) across different reconstruction approaches.

    Methods: This study proposes a task-based AEC method built on the generalized detectability index d′gen. The method leverages existing AEC systems that are driven by a prescribed noise level. The d′gen metric is calculated from lookup tables of the task-based modulation transfer function (MTF) and noise power spectrum (NPS). To generate the lookup tables, the American College of Radiology (ACR) CT accreditation phantom was scanned on a multidetector CT scanner (Revolution CT, GE Healthcare) at 120 kV, with the tube current–time product varied manually from 20 to 240 mAs. Images were reconstructed using a reference reconstruction algorithm and four levels of an in-house iterative reconstruction algorithm with different regularization strengths (IR1–IR4). The task-based MTF and NPS were estimated from the measured images to create lookup tables of scaling factors that convert between d′gen and noise standard deviation. The ability of the d′gen-AEC method to deliver a desired IQ level across the iterative reconstruction algorithms was evaluated on the ACR phantom with an elliptical shell and through a human reader study on anthropomorphic phantom images.

    Results: The study of the ACR phantom with the elliptical shell demonstrated reasonable agreement between the d′gen predicted by the lookup table and the d′ measured in the images, with a mean absolute error of 15% across all dose levels and a maximum error of 45% at the lowest dose level. In the anthropomorphic phantom study, the mean reader scores for images produced with the d′gen-AEC method were 3.3 (reference image), 3.5 (IR1), 3.6 (IR2), 3.5 (IR3), and 2.2 (IR4). The observers' IQ scores for the reference reconstruction were statistically equivalent to those for the IR1, IR2, and IR3 reconstructions (P > 0.35), and the d′gen-AEC method achieved this equivalent IQ at lower dose for the IR scans than for the reference scans.

    Conclusions: A novel AEC method based on a generalized detectability index was investigated. The method can be used with some existing AEC systems to derive the tube current profile for iterative reconstruction algorithms. The results provide preliminary evidence that d′gen-AEC can produce similar IQ across different iterative reconstruction approaches at different dose levels.
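    The abstract does not spell out the formula behind d′gen, but a common starting point for task-based detectability is the non-prewhitening (NPW) model observer, which combines a task function with the measured MTF and NPS. The Python sketch below illustrates that idea, plus the lookup-table inversion that would hand a noise parameter to a noise-prescription AEC system; the NPW observer choice, the function names, and the interpolation step are our assumptions, not the paper's method.

        import numpy as np

        def detectability_npw(task_w, mtf, nps, df):
            """Non-prewhitening (NPW) model-observer detectability index d'.

            task_w : |W(f)|, magnitude of the task function (Fourier transform
                     of the signal to be detected) on a 2D frequency grid
            mtf    : task-based modulation transfer function, same grid
            nps    : noise power spectrum, same grid
            df     : area of one frequency bin (du * dv)
            """
            filtered = (task_w * mtf) ** 2
            signal = (filtered.sum() * df) ** 2      # [integral |W|^2 MTF^2 df]^2
            noise = (filtered * nps).sum() * df      # integral |W|^2 MTF^2 NPS df
            return np.sqrt(signal / noise)

        def noise_std_for_target(d_target, noise_std_grid, d_values):
            """Invert a (noise std -> d') lookup table by interpolation.

            d' falls as noise rises, so the table is sorted by d' first; the
            result is the noise standard deviation to prescribe to the AEC.
            """
            order = np.argsort(d_values)
            return float(np.interp(d_target,
                                   np.asarray(d_values)[order],
                                   np.asarray(noise_std_grid)[order]))

    In this reading, one lookup table would be built per reconstruction algorithm (reference, IR1–IR4), so a single target d′gen maps to a different, algorithm-appropriate noise prescription for each.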

    Neuro-fuzzy knowledge processing in intelligent learning environments for improved student diagnosis

    In this paper, a neural network implementation of a fuzzy logic-based model of the diagnostic process is proposed as a means to achieve accurate student diagnosis and student-model updates in Intelligent Learning Environments. The neuro-fuzzy synergy allows the diagnostic model to "imitate" teachers, to some extent, in diagnosing students' characteristics, and equips the intelligent learning environment with reasoning capabilities that can further drive pedagogical decisions depending on the student's learning style. The neuro-fuzzy implementation encodes both structured and unstructured teacher knowledge: when teachers' reasoning is available and well defined, it can be encoded in the form of fuzzy rules; when it is not well defined but is available through practical examples illustrating their experience, the networks can be trained to represent that experience. The proposed approach has been tested in diagnosing aspects of students' learning style in a discovery-learning environment that aims to help students construct the concepts of vectors in physics and mathematics. The model's diagnosis outcomes were compared against the recommendations of a group of five experienced teachers and against the results produced by two alternative soft computing methods. The results of our pilot study show that the neuro-fuzzy model successfully manages the inherent uncertainty of the diagnostic process, especially for marginal cases, i.e., those where it is very difficult, even for human tutors, to diagnose and accurately evaluate students by directly synthesizing subjective and sometimes conflicting judgments.
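    The paper's actual rule base and input variables are not given in the abstract, but the "structured knowledge" half of the approach, encoding well-defined teacher reasoning as fuzzy rules before any network training, can be illustrated with a minimal Python sketch. The variables exploration and accuracy, the linguistic terms, and both rules below are hypothetical.

        import numpy as np

        def tri(x, a, b, c):
            """Triangular membership: rises from a, peaks at b, falls to c."""
            return float(np.clip(min((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0))

        # Hypothetical linguistic terms over a normalized score in [0, 1].
        def low(x):  return tri(x, -0.5, 0.0, 0.5)
        def high(x): return tri(x,  0.5, 1.0, 1.5)

        def diagnose(exploration, accuracy):
            """Fire two illustrative Mamdani-style rules (AND = min).

            Rule 1: IF exploration IS high AND accuracy IS high
                    THEN the learner shows a discovery-oriented style
            Rule 2: IF exploration IS low AND accuracy IS low
                    THEN the learner needs more guidance
            """
            return {
                "discovery-oriented": min(high(exploration), high(accuracy)),
                "needs-guidance":     min(low(exploration), low(accuracy)),
            }

        result = diagnose(exploration=0.8, accuracy=0.7)
        print({k: round(v, 2) for k, v in result.items()})
        # {'discovery-oriented': 0.4, 'needs-guidance': 0.0}

    In the paper's neuro-fuzzy setting, rule strengths like these would correspond to network weights that can also be trained from teachers' worked examples when no explicit rules exist.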