
    A quantitative analysis of parametric CAD model complexity and its relationship to perceived modeling complexity

    Digital product data quality and reusability have been shown to be critical aspects of the Model-Based Enterprise, enabling the efficient design and redesign of products. The extent to which a history-based parametric CAD model can be edited or reused depends on the geometric complexity of the part and the procedure employed to build it. As a prerequisite for defining metrics that can quantify the quality of the modeling process, it is necessary to have CAD datasets that are sorted and ranked according to the complexity of the modeling process. In this paper, we examine the concept of perceived CAD modeling complexity, defined as the degree to which a parametric CAD model is perceived as difficult to create, use, and/or modify by expert CAD designers. We present a novel method to integrate pair-wise comparisons of CAD modeling complexity made by experts into a single metric that can be used as ground truth. Next, we discuss a comprehensive study of quantitative metrics derived primarily from the geometric characteristics of the models and the graph structure that represents the parent/child relationships between features. Our results show that the perceived CAD modeling complexity metric derived from the experts’ assessment correlates particularly strongly with graph-based metrics. The Spearman coefficients for five of these metrics suggest that they can be effectively used to study the parameters that influence the reusability of models and as a basis for implementing effective personalized learning strategies in online CAD training scenarios.
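    The paper's graph-based metrics are derived from the parent/child feature dependency graph. As a rough illustration of the idea (not the paper's actual metric set), the sketch below computes a few simple structural measures on toy feature DAGs with networkx and correlates them with hypothetical perceived-complexity scores via SciPy's Spearman coefficient; all model names and scores are invented.

    ```python
    # Illustrative graph-based complexity metrics for a parametric CAD
    # feature dependency DAG; the paper's exact metrics may differ.
    import networkx as nx
    from scipy.stats import spearmanr

    def graph_metrics(g: nx.DiGraph) -> dict:
        """Simple structural measures on a parent/child feature graph."""
        n, e = g.number_of_nodes(), g.number_of_edges()
        return {
            "features": n,
            "dependencies": e,
            "avg_out_degree": e / n if n else 0.0,
            "max_depth": nx.dag_longest_path_length(g),  # longest rebuild chain
        }

    # Toy stand-ins for real feature trees: a linear history and a bushier one.
    simple = nx.DiGraph([("sketch", "extrude"), ("extrude", "fillet")])
    tangled = nx.DiGraph([("sketch", "extrude"), ("extrude", "shell"),
                          ("extrude", "pattern"), ("pattern", "fillet"),
                          ("shell", "fillet")])
    models = [simple, tangled]
    perceived = [1.2, 3.7]  # hypothetical expert-derived complexity scores

    for name in ["features", "dependencies", "max_depth"]:
        values = [graph_metrics(g)[name] for g in models]
        rho, _ = spearmanr(values, perceived)
        print(name, "Spearman rho =", rho)
    ```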

    Computational Approaches to Understanding Structure-Function Relationships at the Intersection of Cellular Organization, Mechanics, and Electrophysiology

    The heart is a complex mechanical and electrical environment, and small changes at the cellular and subcellular scale can have profound impacts at the tissue, organ, and organ system levels. The goal of this research is to better understand structure-function relationships at these cellular and subcellular levels of the cardiac environment. This improved understanding may prove increasingly important as medicine begins shifting toward engineered replacement tissues and organs. Specifically, we work towards this goal by presenting a framework to automatically create finite element models of cells based on optical images. This framework can be customized to model the effects of subcellular structure and organization on mechanical and electrophysiological properties at the cellular level and has the potential for extension to the tissue level and beyond. In part one of this work, we present a novel algorithm that can generate physiologically relevant distributions of myofibrils within adult cardiomyocytes from confocal microscopy images. This is achieved by modelling these distributions as directed acyclic graphs, assigning a cost to each node based on observations of cardiac structure and function, and determining the minimum-cost flow through the network. The resulting flow represents the optimal distribution of myofibrils within the cell. In part two, these generated geometries are used as inputs to a finite element model (FEM) to determine the role that myofibrillar organization plays in the axial and transverse mechanics of the whole cell. The cardiomyocytes are modeled as a composite of fiber trusses within an elastic solid matrix. The behavior of the model is validated by comparison to data from combined Atomic Force Microscopy (AFM) and Carbon Fiber manipulation. Recommendations for extending the FEM framework are also explored. A secondary goal, discussed in part three of this work, is to make computational models and simulation tools more accessible to novice learners. Doing so allows active learning of complicated course materials to take place. Working towards this goal, we present CellSpark: a simulation tool developed for teaching cellular electrophysiology and modelling to undergraduate bioengineering students. We discuss the details of its implementation and implications for improved student learning outcomes when used as part of a discovery learning assignment.
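    Part one's core mechanic is a minimum-cost network flow over a DAG of candidate myofibril positions. The sketch below shows the general pattern with networkx, using the standard node-splitting trick to place a node's cost on an internal edge; the graph, costs, and capacities are invented for illustration and are not taken from the thesis.

    ```python
    # Sketch: myofibril paths as a minimum-cost flow on a DAG.
    import networkx as nx

    # Each candidate segment becomes an in/out node pair whose internal
    # edge carries the segment's cost (in the thesis, derived from observed
    # cardiac structure and function); values here are made up.
    G = nx.DiGraph()
    segments = {"a": 2, "b": 5, "c": 1}
    for v, cost in segments.items():
        G.add_edge(f"{v}_in", f"{v}_out", weight=cost, capacity=2)

    # Allowed transitions along the cell's long axis (a DAG by construction).
    G.add_edge("src", "a_in", weight=0, capacity=2)
    G.add_edge("src", "b_in", weight=0, capacity=2)
    G.add_edge("a_out", "c_in", weight=0, capacity=2)
    G.add_edge("b_out", "c_in", weight=0, capacity=2)
    G.add_edge("c_out", "sink", weight=0, capacity=2)

    # Route flow (myofibrils) from one cell end to the other at minimum
    # total cost; the flow pattern is the optimal distribution.
    flow = nx.max_flow_min_cost(G, "src", "sink")
    print(flow)
    ```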

    Knowledge Discovery with Bayesian Networks

    Ph.D. (Doctor of Philosophy)

    Self-adaptive isogeometric spatial discretisations of the first and second-order forms of the neutron transport equation with dual-weighted residual error measures and diffusion acceleration

    NURBS-based isogeometric analysis (IGA) spatial discretisations and self-adaptive mesh refinement (AMR) algorithms, implemented in a new modern-Fortran code, are developed and applied to the first-order and second-order forms of the neutron transport equation (NTE). These AMR algorithms are shown to be computationally efficient and numerically accurate when compared to standard approaches. IGA methods are very competitive and offer certain unique advantages over standard finite element methods (FEM), not least because the numerical analysis is performed over an exact representation of the underlying geometry, which is generally available in some computer-aided design (CAD) software description. Furthermore, mesh refinement can be performed within the analysis program at run-time, without the need to revisit any ancillary mesh generator. Two error measures are described for the IGA-based AMR algorithms, both of which can be employed in conjunction with energy-dependent meshes. The first heuristically minimises any local contributions to the global discretisation error, in an appropriate user-prescribed norm. The second employs duality arguments to minimise important local contributions to the error as measured in some quantity of interest; this is commonly known as a dual-weighted residual (DWR) error measure, and it demands the solution to both the forward (primal) and the adjoint (dual) NTE. Finally, convergent and stable diffusion acceleration and generalised minimal residual (GMRes) algorithms, compatible with the aforementioned AMR algorithms, are introduced to accelerate the convergence of the within-group self-scattering sources for scattering-dominated problems for the first and second-order forms of the NTE. A variety of verification benchmark problems are analysed to demonstrate the computational performance and efficiency of these acceleration techniques.
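    For readers unfamiliar with the second error measure, a generic form of the dual-weighted residual identity (standard in the AMR literature; the thesis's exact notation and discretisation details may differ) is:

    ```latex
    % Generic DWR error representation: the error in a quantity of interest
    % J is approximated by local primal residuals weighted by the dual solution.
    J(u) - J(u_h) \approx \sum_{K \in \mathcal{T}_h} \rho_K(u_h)\, \omega_K(z)
    ```

    Here \rho_K(u_h) is the local residual of the forward (primal) solution on element or patch K, \omega_K(z) weights it with the adjoint (dual) solution z, and the elements with the largest weighted contributions are marked for refinement.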

    Dynamic task scheduling and binding for many-core systems through stream rewriting

    This thesis proposes a novel model of computation, called stream rewriting, for the specification and implementation of highly concurrent applications. Basically, the active tasks of an application and their dependencies are encoded as a token stream, which is iteratively modified by a set of rewriting rules at runtime. To estimate the performance and scalability of stream rewriting, a large number of experiments were conducted on many-core systems, and the task management was implemented in both software and hardware.
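    As a minimal illustration of the model of computation (with invented tokens and rules, far simpler than the thesis's encoding), the sketch below treats the program state as a token stream and repeatedly applies search-and-replace rules until no active task tokens remain:

    ```python
    # Minimal stream-rewriting interpreter: one pass replaces each rule's
    # left-hand side with its expansion; unmatched (terminal) tokens pass
    # through unchanged.
    def rewrite(stream, rules):
        out, i = [], 0
        while i < len(stream):
            for lhs, rhs in rules:
                if stream[i:i + len(lhs)] == lhs:  # match an active task pattern
                    out.extend(rhs)                 # spawn its subtasks in place
                    i += len(lhs)
                    break
            else:
                out.append(stream[i])               # terminal token: keep as-is
                i += 1
        return out

    # Hypothetical rule: a 'job' token expands into two subtasks plus a join.
    rules = [(["job"], ["taskA", "taskB", "join"])]
    stream = ["job", "job"]
    while any(tok == "job" for tok in stream):
        stream = rewrite(stream, rules)
    print(stream)  # ['taskA', 'taskB', 'join', 'taskA', 'taskB', 'join']
    ```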

    First CLIPS Conference Proceedings, volume 1

    The proceedings of the first Conference on CLIPS (C Language Integrated Production System), hosted by the NASA-Lyndon B. Johnson Space Center in August 1990, are presented. Articles included engineering applications, intelligent tutors and training, intelligent software engineering, automated knowledge acquisition, network applications, verification and validation, enhancements to CLIPS, space shuttle quality control/diagnosis applications, space shuttle and real-time applications, and medical, biological, and agricultural applications.

    Logic learning and optimized drawing: two hard combinatorial problems

    Nowadays, information extraction from large datasets is a recurring operation in countless fields of application. The aim of this thesis is to follow the data flow along its journey, describing some hard combinatorial problems that arise from two key processes, one consecutive to the other: information extraction and representation. The approaches considered here focus mainly on metaheuristic algorithms, to address the need for fast and effective optimization methods. The problems studied include data extraction instances, such as Supervised Learning in Logic Domains and the Max Cut-Clique Problem, as well as two different Graph Drawing Problems. Moreover, stemming from these main topics, other additional themes are discussed, namely two different approaches to handling Information Variability in Combinatorial Optimization Problems (COPs), and Topology Optimization of lightweight concrete structures.
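    As one concrete example of the problems listed, the Max Cut-Clique Problem is commonly defined as finding a clique C that maximises the number of edges with exactly one endpoint in C. The sketch below is a simple multi-start greedy heuristic for that definition; it is illustrative only and is not the metaheuristic developed in the thesis.

    ```python
    # Multi-start greedy heuristic for the Max Cut-Clique Problem
    # (common definition assumed; not the thesis's algorithm).
    import networkx as nx

    def cut_size(G, C):
        """Number of edges with exactly one endpoint inside the clique C."""
        return sum(1 for u, v in G.edges if (u in C) != (v in C))

    def greedy_max_cut_clique(G):
        best_C, best_val = set(), -1
        for seed in G.nodes:                    # restart from every vertex
            C, cand = {seed}, set(G[seed])
            while cand:
                # Grow the clique with the candidate that maximises the cut.
                v = max(cand, key=lambda x: cut_size(G, C | {x}))
                C.add(v)
                cand &= set(G[v])               # keep only common neighbours
                if cut_size(G, C) > best_val:
                    best_C, best_val = set(C), cut_size(G, C)
        return best_C, best_val

    C, val = greedy_max_cut_clique(nx.karate_club_graph())
    print(sorted(C), val)
    ```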

    Emergence through conflict : the Multi-Disciplinary Design System (MDDS)

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Architecture, 2009. Includes bibliographical references (p. 413-430).
    This dissertation proposes a framework and a group of systematic methodologies to construct a computational Multi-Disciplinary Design System (MDDS) that can support the design of complex systems within a variety of domains. The way in which the resulting design system is constructed, and the capabilities it brings to bear, are fundamentally different from the methods used in traditional sequential design. The MDDS embraces diverse areas of research that include design science, systems theory, artificial intelligence, design synthesis and generative algorithms, mathematical modeling and disciplinary analyses, optimization theory, data management and model integration, and experimental design, among many others. The MDDS is generated in five phases: decomposition, formulation, modeling, integration, and exploration. These phases are not carried out sequentially, but rather in a continuous move back and forth between the different phases. The process of building the MDDS begins with a top-down decomposition of a design concept. The design, seen as an object, is decomposed into its components and aspects, while the design, seen as a process, is decomposed into developmental levels and design activities. Then, based on the process decomposition, the architecture of the MDDS is formulated into hierarchical levels, each of which comprises a group of design cycles that include design modules at different degrees of abstraction. Based on the design object decomposition, the design activities, which include synthesis, analysis, evaluation, and optimization, are modeled within the design modules. Subsequently, through a bottom-up approach, the design modules are integrated into a data flow network. This network forms the MDDS as an integrated system that acts as a holistic structured functional unit, exploring the design space in search of satisfactory solutions. The emergent properties of the MDDS are not detectable through the properties and behaviors of its parts, and can only be revealed through a holistic approach. The MDDS is an adaptable system that is continuously dependent on, and responsive to, the uncertainties of the design process. The evolving MDDS is thus characterized as a multi-level, multi-module, multi-variable and multi-resolution system. Although the MDDS framework is intended to be domain-independent, several MDDS prototypes were developed within this dissertation to generate exploratory building designs. By Anas Alfaris.
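    The bottom-up integration step, in which design modules are wired into a data flow network, can be pictured as a toy module graph executed in dependency order. The module names, functions, and outputs below are hypothetical placeholders, not the actual MDDS modules:

    ```python
    # Toy sketch of module integration: each design activity is a module
    # whose outputs feed downstream modules via a shared data-flow dict.
    from graphlib import TopologicalSorter

    modules = {
        "synthesis":    lambda inp: {"form": "candidate_geometry"},
        "analysis":     lambda inp: {"performance": f"metrics({inp['form']})"},
        "evaluation":   lambda inp: {"score": f"score({inp['performance']})"},
        "optimization": lambda inp: {"form": f"improved({inp['score']})"},
    }
    # Each module mapped to the upstream modules it consumes data from.
    depends_on = {
        "synthesis": set(),
        "analysis": {"synthesis"},
        "evaluation": {"analysis"},
        "optimization": {"evaluation"},
    }

    data = {}
    for name in TopologicalSorter(depends_on).static_order():
        data.update(modules[name](data))   # run the module, publish its outputs
    print(data)
    ```

    In the real system each module would encapsulate a synthesis, analysis, evaluation, or optimization activity at some level of abstraction, and the network would be traversed repeatedly as the design space is explored.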

    Seventh Biennial Report : June 2003 - March 2005
