
    Model-Free Data-Driven Methods in Mechanics: Material Data Identification and Solvers

    This paper presents an integrated model-free data-driven approach to solid mechanics that allows numerical simulations of structures to be performed on the basis of displacement-field measurements on representative samples, without postulating a specific constitutive model. A material data identification procedure, which infers strain-stress pairs from displacement fields and boundary conditions, is used to build a material database from a set of multiaxial tests on a non-conventional sample. This database is in turn used by a data-driven solver, based on an algorithm that minimizes the distance between the manifold of compatible and balanced mechanical states and the given database, to predict the response of structures made of the same material, with arbitrary geometry and boundary conditions. Examples illustrate this modelling cycle and demonstrate how the data-driven identification method enables importance sampling of the material state space, yielding faster convergence of simulation results with increasing database size compared to synthetic material databases with regular sampling patterns.
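
    The distance-minimising idea behind such data-driven solvers can be illustrated with a deliberately trivial one-element example. The Python sketch below alternates between projecting a state onto the set of compatible and balanced states and re-assigning the nearest point in the material database; the synthetic database, the Young's modulus, the load, and the cross-section are all invented for illustration, and a single bar under a prescribed force stands in for a general structure. It is a minimal sketch of the idea, not the paper's solver.

    ```python
    # Minimal sketch (not the paper's implementation): the distance-minimising
    # data-driven solver idea for a single 1D bar under a prescribed axial force.
    # Material behaviour is represented only by a database of (strain, stress) pairs.
    import numpy as np

    def nearest_database_state(state, database, C):
        """Database point closest to 'state' in the weighted strain-stress
        metric d^2 = C*(d_strain)^2 + (1/C)*(d_stress)^2."""
        d2 = C * (database[:, 0] - state[0])**2 + (database[:, 1] - state[1])**2 / C
        return database[np.argmin(d2)]

    def project_onto_constraints(state, sigma_eq):
        """Project onto the set of compatible and balanced states. For a single
        bar with a prescribed force, equilibrium fixes the stress sigma_eq = F/A,
        while the strain is unconstrained."""
        return np.array([state[0], sigma_eq])

    def data_driven_solve(database, sigma_eq, C, max_iter=50):
        # Alternate between the constraint set and the material database
        # until the database assignment no longer changes (fixed point).
        db_state = database[0]
        for _ in range(max_iter):
            mech_state = project_onto_constraints(db_state, sigma_eq)
            new_db_state = nearest_database_state(mech_state, database, C)
            if np.allclose(new_db_state, db_state):
                break
            db_state = new_db_state
        return mech_state, db_state

    # Synthetic database sampled from a noisy linear material (illustration only).
    rng = np.random.default_rng(0)
    strains = np.linspace(0.0, 0.01, 200)
    E_mod = 200e9                                   # assumed Young's modulus [Pa]
    database = np.column_stack(
        [strains, E_mod * strains + rng.normal(0.0, 1e6, strains.size)])

    force, area = 50e3, 1e-4                        # assumed load and cross-section
    mech, db = data_driven_solve(database, force / area, C=E_mod)
    print("accepted (strain, stress):", mech)
    ```

    In the full method the same alternation runs over all material points of a discretised structure, with the projection step replaced by a constrained linear solve enforcing compatibility and equilibrium globally.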

    Managing evolution and change in web-based teaching and learning environments

    The state of the art in information technology and educational technologies is evolving constantly. Courses taught are subject to constant change for organisational and subject-specific reasons. Evolution and change affect educators and developers of computer-based teaching and learning environments alike, and both are often unprepared to respond effectively. A large number of educational systems are designed and developed without change and evolution in mind. We present our approach to the design and maintenance of these systems in rapidly evolving environments and illustrate the consequences of evolution and change for these systems and for the educators and developers responsible for their implementation and deployment. We discuss various factors of change, illustrated by a Web-based virtual course, with the objective of raising awareness of evolution and change in computer-supported teaching and learning environments. This discussion leads towards the establishment of a development and management framework for teaching and learning systems.

    git2net - Mining Time-Stamped Co-Editing Networks from Large git Repositories

    Data from software repositories have become an important foundation for the empirical study of software engineering processes. A recurring theme in the repository mining literature is the inference of developer networks capturing, e.g., collaboration, coordination, or communication from the commit history of projects. Most of the studied networks are based on the co-authorship of software artefacts defined at the level of files, modules, or packages. While this approach has led to insights into the social aspects of software development, it neglects the detailed information on code changes and code ownership contained in the commit log of software projects, e.g. which exact lines of code have been authored by which developers. Addressing this issue, we introduce git2net, a scalable Python tool that facilitates the extraction of fine-grained co-editing networks from large git repositories. It uses text mining techniques to analyse the detailed history of textual modifications within files. This information allows us to construct directed, weighted, and time-stamped networks, where a link signifies that one developer has edited a block of source code originally written by another developer. We apply our tool in case studies of an Open Source and a commercial software project, and argue that it opens up a massive new source of high-resolution data on human collaboration patterns.
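
    The general idea of a co-editing network, with an edge drawn from an editor to the original author of the lines they changed, can be sketched with off-the-shelf libraries. The snippet below uses GitPython and networkx rather than git2net itself, attributes only the parent-side lines removed or replaced by each commit, and treats the repository path and parameters as placeholders; it is a rough approximation of the approach, not the tool's implementation.

    ```python
    # Minimal sketch (not git2net): build a weighted co-editing network from a
    # git repository. An edge editor -> original author is added for every line
    # the editor changed that someone else originally wrote.
    # Assumes: pip install gitpython networkx ; a local clone at repo_path.
    import re
    from collections import Counter

    import networkx as nx
    from git import Repo

    HUNK_RE = re.compile(r"@@ -(\d+)")

    def removed_line_numbers(patch_text):
        """Parent-side line numbers that a unified diff removes or replaces."""
        removed, old_lineno = [], 0
        for line in patch_text.splitlines():
            m = HUNK_RE.match(line)
            if m:
                old_lineno = int(m.group(1))
            elif line.startswith("-") and not line.startswith("---"):
                removed.append(old_lineno)
                old_lineno += 1
            elif line.startswith(" "):          # unchanged context line
                old_lineno += 1
        return removed

    def line_authors(repo, rev, path):
        """Map 1-based line numbers to author names at a given revision."""
        authors, lineno = {}, 1
        for commit, lines in repo.blame(rev, path):
            for _ in lines:
                authors[lineno] = commit.author.name
                lineno += 1
        return authors

    def coediting_network(repo_path, branch="HEAD", max_commits=None):
        repo = Repo(repo_path)
        edges = Counter()
        for commit in repo.iter_commits(branch, max_count=max_commits):
            if not commit.parents:               # skip the root commit
                continue
            parent = commit.parents[0]
            for diff in parent.diff(commit, create_patch=True):
                if diff.new_file or diff.diff is None:
                    continue                     # no parent-side lines to blame
                try:
                    authors = line_authors(repo, parent.hexsha, diff.a_path)
                except Exception:
                    continue                     # e.g. binary files
                patch = diff.diff.decode("utf-8", errors="replace")
                for lineno in removed_line_numbers(patch):
                    original = authors.get(lineno)
                    if original and original != commit.author.name:
                        edges[(commit.author.name, original)] += 1
        graph = nx.DiGraph()
        for (editor, original), weight in edges.items():
            graph.add_edge(editor, original, weight=weight)
        return graph

    # Hypothetical usage:
    # g = coediting_network("/path/to/repo", max_commits=500)
    # print(g.number_of_nodes(), "developers,", g.number_of_edges(), "edges")
    ```

    Time stamps could be attached to each edge from commit.committed_datetime to obtain the time-stamped networks described in the abstract.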

    A Novel Approach to Face Recognition using Image Segmentation based on SPCA-KNN Method

    In this paper we propose a novel method for face recognition using a hybrid SPCA-KNN (SIFT-PCA-KNN) approach. The proposed method consists of three parts. The first part preprocesses face images using a graph-based algorithm and the SIFT (Scale Invariant Feature Transform) descriptor; the graph-based topology is used for matching two face images. In the second part, eigenvalues and eigenvectors are extracted from each input face image; the goal is to extract the important information from the face data and represent it as a set of new orthogonal variables called principal components. In the final part, a nearest-neighbour classifier is designed for classifying the face images based on the SPCA-KNN algorithm. The algorithm has been tested on 100 different subjects (15 images for each class). The experimental results show that the proposed method has a positive effect on overall face recognition performance and outperforms the other methods examined.
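
    The three-stage structure (SIFT features, PCA projection, k-NN classification) can be sketched with OpenCV and scikit-learn. The snippet below is only an illustration of that pipeline, not the paper's SPCA-KNN algorithm: the graph-based matching step is omitted, and the variable-length sets of SIFT descriptors are simply averaged into one vector per image. All file paths and parameters are assumptions.

    ```python
    # Minimal sketch of a SIFT -> PCA -> k-NN face recognition pipeline.
    # Assumes: pip install opencv-contrib-python scikit-learn numpy
    import cv2
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline

    def sift_feature(image_path, size=(128, 128)):
        """One fixed-length feature vector per face image: the mean SIFT descriptor."""
        img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        img = cv2.resize(img, size)
        sift = cv2.SIFT_create()
        _, descriptors = sift.detectAndCompute(img, None)
        if descriptors is None:                  # no keypoints found
            return np.zeros(128)
        return descriptors.mean(axis=0)

    def train_face_recognizer(image_paths, labels, n_components=50, k=1):
        """PCA projects the SIFT features onto principal components; a
        k-nearest-neighbour classifier then assigns the identity.
        n_components must not exceed the number of training images or 128."""
        X = np.vstack([sift_feature(p) for p in image_paths])
        model = make_pipeline(PCA(n_components=n_components),
                              KNeighborsClassifier(n_neighbors=k))
        model.fit(X, labels)
        return model

    # Hypothetical usage with lists of image paths and subject labels:
    # model = train_face_recognizer(train_paths, train_labels)
    # predictions = model.predict(np.vstack([sift_feature(p) for p in test_paths]))
    ```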

    Qualitative Effects of Knowledge Rules in Probabilistic Data Integration

    One of the problems in data integration is data overlap: the fact that different data sources contain data on the same real-world entities. Much development time in data integration projects is devoted to entity resolution. Advanced similarity measurement techniques are often used to remove semantic duplicates from the integration result or to resolve other semantic conflicts, but it proves impossible to get rid of all semantic problems in data integration. An often-used rule of thumb states that about 90% of the development effort is devoted to solving the remaining 10% of hard cases. In an attempt to significantly decrease human effort at data integration time, we have proposed an approach that stores any remaining semantic uncertainty and conflicts in a probabilistic database, enabling the result to be meaningfully used right away. The main development effort in our approach is devoted to defining and tuning knowledge rules and thresholds. Rules and thresholds directly impact the size and quality of the integration result. We measure integration quality indirectly by measuring the quality of answers to queries on the integrated data set in an information retrieval-like way. The main contribution of this report is an experimental investigation of the effects and sensitivity of rule definition and threshold tuning on the integration quality. It shows that our approach indeed reduces development effort, rather than merely shifting it to rule definition and threshold tuning, because setting rough safe thresholds and defining only a few rules suffices to produce a 'good enough' integration that can be meaningfully used.
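
    The role of thresholds in such an approach can be illustrated with a small sketch: pairs whose similarity clears an upper threshold are merged, pairs below a lower threshold are kept apart, and everything in between is stored as probabilistic alternatives to be resolved at query time. The snippet below is an invented illustration of that idea, not the system described in the report; the similarity measure, thresholds, and records are all assumptions.

    ```python
    # Minimal sketch of threshold-based probabilistic entity resolution:
    # the uncertain "hard cases" are kept as alternatives with probabilities
    # instead of being forcibly resolved at integration time.
    from difflib import SequenceMatcher

    T_LOW, T_HIGH = 0.55, 0.90        # assumed thresholds (the tuning knobs)

    def similarity(a, b):
        """Simple string similarity as a stand-in for a knowledge rule."""
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    def integrate(record_a, record_b):
        """Return (certain_records, uncertain_alternatives_with_probabilities)."""
        s = similarity(record_a, record_b)
        if s >= T_HIGH:                # confident duplicate: merge
            return [record_a], []
        if s <= T_LOW:                 # confident non-duplicate: keep both
            return [record_a, record_b], []
        # Uncertain case: store both possible worlds with a probability
        # derived from the similarity score, to be resolved at query time.
        p_same = (s - T_LOW) / (T_HIGH - T_LOW)
        return [], [([record_a], p_same), ([record_a, record_b], 1.0 - p_same)]

    print(integrate("J. Smith, Enschede", "John Smith, Enschede"))
    print(integrate("J. Smith, Enschede", "A. Jones, Delft"))
    ```

    Widening the uncertain band between the two thresholds trades manual tuning effort for more stored alternatives, which is exactly the sensitivity the report investigates experimentally.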