
    Friction Variability in Planar Pushing Data: Anisotropic Friction and Data-collection Bias

    Friction plays a key role in manipulating objects. Most of what we do with our hands, and most of what robots do with their grippers, is based on the ability to control frictional forces. This paper aims to better understand the variability and predictability of planar friction. In particular, we focus on the analysis of a recent dataset on planar pushing by Yu et al. [1], devised to create a data-driven footprint of planar friction. We show how a significant fraction of the observed unconventional phenomena, e.g., stochasticity and multi-modality, can be explained by combining the effects of material non-homogeneity, anisotropy of friction, and biases due to the data-collection dynamics, suggesting that the variability is explainable but inevitable in practice. We introduce an anisotropic friction model and conduct simulation experiments comparing it with more standard isotropic friction models. The anisotropic friction between the object and the supporting surface causes initial conditions to converge during automated data collection. Numerical results confirm that the anisotropic friction model explains both the bias in the dataset and the apparent stochasticity in the outcome of a push. The fact that the data-collection process itself can introduce biases into the collected datasets, degrading trained models, calls attention to the data-collection dynamics. Comment: 8 pages, 13 figures
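    To make the distinction concrete, the sketch below implements a simplified elliptical anisotropic Coulomb law; it is not the authors' actual model. The friction coefficient varies with the sliding direction via two hypothetical axis coefficients, mu_x and mu_y, and the isotropic case corresponds to mu_x == mu_y.

```python
# Simplified sketch, assuming an elliptical direction-dependent coefficient;
# mu_x and mu_y are illustrative values, not fitted to the Yu et al. dataset.
import numpy as np

def friction_force(v, normal_load, mu_x=0.30, mu_y=0.45, eps=1e-9):
    """Planar Coulomb friction force opposing sliding velocity v."""
    speed = np.linalg.norm(v)
    if speed < eps:                  # no well-defined sliding direction
        return np.zeros(2)
    d = v / speed                    # unit sliding direction
    mu = np.sqrt((mu_x * d[0])**2 + (mu_y * d[1])**2)   # elliptical law
    return -mu * normal_load * d     # force kept anti-parallel to motion

# The same push decelerates differently along x and y, the kind of
# asymmetry that can nudge an automated data-collection loop toward
# particular initial conditions.
print(friction_force(np.array([1.0, 0.0]), normal_load=9.81))
print(friction_force(np.array([0.0, 1.0]), normal_load=9.81))
```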

    Invest to Save: Report and Recommendations of the NSF-DELOS Working Group on Digital Archiving and Preservation

    Digital archiving and preservation are important areas for research and development, but there is no agreed-upon set of priorities or coherent plan for research in this area. Research projects in this area tend to be small and driven by particular institutional problems or concerns. As a consequence, proposed solutions from experimental projects and prototypes tend not to scale to millions of digital objects, nor do the results from disparate projects readily build on each other. It is also unclear whether it is worthwhile to seek general solutions or whether different strategies are needed for different types of digital objects and collections. The lack of coordination in both research and development means that there are some areas where researchers are reinventing the wheel while other areas are neglected. Digital archiving and preservation is an area that will benefit from an exercise in analysis, priority setting, and planning for future research. The working group aims to survey current research activities, identify gaps, and develop a white paper proposing future research directions in the area of digital preservation. Some of the potential areas for research include repository architectures and interoperability among digital archives; automated tools for capture, ingest, and normalization of digital objects; and harmonization of preservation formats and metadata. There are also opportunities for development of commercial products in the areas of mass storage systems, repositories and repository management systems, and data management software and tools.

    Towards rational and minimal change propagation in model evolution

    A critical issue in the evolution of software models is change propagation: given a primary change that is made to a model in order to meet a new or changed requirement, what additional secondary changes are needed to maintain consistency within the model, and between the model and other models in the system? In practice, there are many ways of propagating changes to fix a given inconsistency, and how to justify and automate the selection between such change options remains a critical challenge. In this paper, we propose a number of postulates, inspired by the mature theory of belief revision, that a change propagation process should satisfy in order to be considered rational and minimal. Such postulates enable us to reason about selecting between alternative change options, and consequently to develop machinery that performs this task automatically. We further argue that such a change propagation process can be implemented as a classical state-space search in which each state represents a snapshot of the model during the process. This view naturally reflects the cascading nature of change propagation, where each change can require further changes to be made.
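    The state-space view lends itself to a very small sketch. Assuming hashable model snapshots and repair operations represented as callables (all names below are hypothetical, not from the paper), a breadth-first search returns a consistent model reachable with the fewest secondary changes, one way of operationalising the minimality postulate:

```python
# Minimal sketch: change propagation as classical state-space search.
# Model snapshots are assumed hashable; repair_options enumerates the
# candidate secondary changes applicable to a given snapshot.
from collections import deque

def propagate(model, is_consistent, repair_options, max_depth=10):
    """Return (consistent_model, changes) or None if none is reachable."""
    queue = deque([(model, [])])
    seen = {model}
    while queue:
        state, changes = queue.popleft()
        if is_consistent(state):        # goal test: no inconsistencies left
            return state, changes
        if len(changes) >= max_depth:
            continue
        for change in repair_options(state):
            nxt = change(state)         # apply one secondary change
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, changes + [change]))
    return None
```

    Because breadth-first search explores snapshots in order of how many changes have been applied, the first consistent state found is also minimal under this cost measure.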

    Design automation of microfluidic droplet sorting platforms

    Both basic research and biological design require high-throughput screening to parse the massive numbers of variants generated in experiments. However, the cost and expertise needed to use such technology limit accessibility. Simple and reproducible designs for a sorting platform would lower the barrier to implementing affordable bench-top screening platforms. Droplet microfluidics presents a promising approach for automating biology, reducing reaction volumes to picoliter droplets and allowing deterministic manipulation of samples. Droplet microfluidics has been used extensively for high-throughput screening and directed evolution, yet limitations in fabrication have prevented the characterization needed for a design tool and subsequent widespread adoption. Here, we present a finite element analysis (FEA) model-based design framework for dielectrophoretic droplet microfluidic sorters and its preliminary experimental validation. This framework extends previous work from our group on microfluidic design tools, increasing their usability in the lab.
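    For orientation, the standard textbook expression for the time-averaged dielectrophoretic force on a spherical droplet (a general result, not one from the paper) is F = 2*pi*r^3 * eps_m * Re[K] * grad(|E_rms|^2), with K the Clausius-Mossotti factor. The back-of-envelope check below uses illustrative permittivities and an assumed field gradient only:

```python
# Back-of-envelope DEP force estimate (textbook model; all values assumed).
import math

eps0 = 8.854e-12              # vacuum permittivity, F/m
r = 25e-6                     # droplet radius, m (roughly picoliter scale)
eps_m = 2.0 * eps0            # carrier oil permittivity (assumed)
eps_p = 80.0 * eps0           # aqueous droplet permittivity (assumed)
grad_E2 = 1e13                # gradient of |E_rms|^2, V^2/m^3 (assumed)

K = (eps_p - eps_m) / (eps_p + 2 * eps_m)     # lossless Clausius-Mossotti factor
F = 2 * math.pi * r**3 * eps_m * K * grad_E2  # time-averaged DEP force, N
print(f"K = {K:.3f}, F = {F:.2e} N")          # order of tens of piconewtons
```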

    Evidences of the mismatch between industry and academy on modelling language quality evaluation

    Quality is an implicit property of models and modelling languages, given their status as engineering artifacts. However, this property is affected by the diversity of conceptions surrounding the model-driven paradigm. This document presents a report of quality issues in modelling languages and models. These issues result from an analysis of quality evidence gathered from industrial and academic/scientific contexts. Comment: Technical report

    A Tool for Supporting the Co-Evolution of Enterprise Architecture Meta-models and Models

    Enterprise architecture models capture the concepts and relationships that together describe the essentials of the various enterprise domains. This model of the enterprise is tightly coupled to a domain-specific modelling language that defines the formalisms for creating and updating such models. These languages are described as meta-models in the model-driven engineering field. Results from surveys on enterprise architecture tool analysis showed a lack of support for the co-evolution of enterprise architecture meta-models and models. This paper presents a tool that automates the co-evolution of enterprise architecture models according to a set of meta-model changes. A Portuguese governmental organization used and validated the tool using observational, analytical, and descriptive evaluation methods.
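    The core mechanics of such co-evolution can be pictured as replaying a log of meta-model changes over the model's elements. The structures below are invented and far simpler than an enterprise architecture tool's, but they show the pattern: each meta-model change induces a mechanical migration of the conforming model elements.

```python
# Illustrative sketch (hypothetical structures, not the paper's tool):
# model elements are dicts carrying a meta-class name under 'type'.

def migrate(elements, change_log):
    """Replay meta-model changes, in order, over every model element."""
    for change in change_log:
        for el in elements:
            if el["type"] != change["type"]:
                continue
            if change["op"] == "rename_type":
                el["type"] = change["new_type"]
            elif change["op"] == "delete_attribute":
                el.pop(change["attr"], None)
            elif change["op"] == "add_attribute":   # new attribute gets a default
                el.setdefault(change["attr"], change["default"])
    return elements

model = [{"type": "BusinessActor", "name": "Citizen", "legacyId": 7}]
log = [
    {"op": "delete_attribute", "type": "BusinessActor", "attr": "legacyId"},
    {"op": "rename_type", "type": "BusinessActor", "new_type": "Stakeholder"},
]
print(migrate(model, log))   # [{'type': 'Stakeholder', 'name': 'Citizen'}]
```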

    A framework for the automation of generalised stability theory

    The traditional approach to investigating the stability of a physical system is to linearise the equations about a steady base solution, and to examine the eigenvalues of the linearised operator. Over the past several decades, it has been recognised that this approach only determines the asymptotic stability of the system, and neglects the possibility of transient perturbation growth arising due to the nonnormality of the system. This observation motivated the development of a more powerful generalised stability theory (GST), which focusses instead on the singular value decomposition (SVD) of the linearised propagator of the system. While GST has had significant successes in understanding the stability of phenomena in geophysical fluid dynamics, its more widespread applicability has been hampered by the fact that computing the SVD requires both the tangent linear operator and its adjoint: deriving the tangent linear and adjoint models is usually a considerable challenge, and manually embedding them inside an eigensolver is laborious. In this paper, we present a framework for the automation of generalised stability theory which overcomes these difficulties. Given a compact high-level symbolic representation of a finite element discretisation implemented in the FEniCS system, efficient C++ code is automatically generated to assemble the forward, tangent linear and adjoint models; these models are then used to calculate the optimally growing perturbations to the forward model, and their growth rates. By automating the stability computations, we hope to make these powerful tools a more routine part of computational analysis. The efficiency and generality of the framework are demonstrated with applications drawn from geophysical fluid dynamics, phase separation and quantum mechanics. Comment: Accepted in SIS
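    The computational core of GST needs only the actions of the propagator and its adjoint, not the assembled matrix, so the SVD can be computed matrix-free. The sketch below illustrates this with SciPy, using a toy nonnormal matrix in place of the FEniCS-generated tangent linear and adjoint models:

```python
# Matrix-free leading singular triplet of a (toy) propagator: only the
# tangent linear action (matvec) and adjoint action (rmatvec) are supplied.
import numpy as np
from scipy.sparse.linalg import LinearOperator, svds

n = 50
A = -np.eye(n) + 20.0 * np.eye(n, k=1)   # eigenvalues all stable, yet the
                                         # matrix is strongly nonnormal
L = LinearOperator((n, n), dtype=np.float64,
                   matvec=lambda x: A @ x,      # tangent linear model action
                   rmatvec=lambda y: A.T @ y)   # adjoint model action

u, s, vt = svds(L, k=1)                  # leading singular triplet
print("optimal growth factor:", s[0])    # far exceeds what eigenvalues suggest
```

    The leading right singular vector is the optimally growing perturbation and the singular value its growth factor, which is precisely the quantity the framework computes for the automatically derived models.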

    Scaling Genetic Programming for Source Code Modification

    In Search Based Software Engineering, Genetic Programming has been used for bug fixing, performance improvement, and parallelisation of programs through the modification of source code. Where an evolutionary computation algorithm, such as Genetic Programming, is to be applied to similar code manipulation tasks, the complexity and size of the source code of real-world software pose a scalability problem. To address this, we intend to investigate how the software engineering concepts of modularity, granularity, and localisation of change can be reformulated as additional mechanisms within a Genetic Programming algorithm. Comment: 4 pages. Accepted for Graduate Student Workshop, GECCO 2012. Retracted by Author.
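    As a toy illustration of the localisation-of-change idea (not the authors' system), the sketch below confines a genetic-programming-style mutation to the statements of a single named function rather than letting it touch the whole program; SOURCE and the buggy function are invented for the example:

```python
# Toy sketch: a mutation operator restricted to one function's subtree,
# illustrating localisation of change for source-code-level GP.
import ast
import random

SOURCE = """
def buggy(a, b):
    total = a - b     # candidate site for operator mutation
    return total
"""

class LocalMutator(ast.NodeTransformer):
    def __init__(self, target):
        self.target = target
    def visit_FunctionDef(self, node):
        if node.name == self.target:          # locality: only this function
            binops = [n for n in ast.walk(node) if isinstance(n, ast.BinOp)]
            if binops:
                random.choice(binops).op = ast.Add()  # random operator flip
        return node

tree = LocalMutator("buggy").visit(ast.parse(SOURCE))
print(ast.unparse(tree))   # mutated variant of 'buggy' (requires Python 3.9+)
```

    Restricting mutation to a single function shrinks the search space dramatically, which is the scalability lever the abstract points at.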

    Towards maintainer script modernization in FOSS distributions

    Free and Open Source Software (FOSS) distributions are complex software systems, made of thousands of packages that evolve rapidly, independently, and without centralized coordination. During package upgrades, corner-case failures can be encountered that are hard to deal with, especially when they are due to misbehaving maintainer scripts: executable code snippets used to finalize package configuration. In this paper we report on a software modernization experience, i.e., the process of representing existing legacy systems in terms of models, applied to FOSS distributions. We present a process to define meta-models that enable dealing with upgrade failures and help rolling back from them, taking maintainer scripts into account. The process has been applied to widely used FOSS distributions, and we report on those experiences.
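    One way to picture the rollback capability such meta-models are meant to enable (the structures here are invented, not the paper's) is to model a maintainer script as a sequence of operations paired with inverses, so that a mid-upgrade failure triggers an inverse replay:

```python
# Toy sketch: a maintainer script as reversible steps; on failure, the
# already-applied steps are undone in reverse order.

def run_with_rollback(steps):
    """steps: list of (apply_op, undo_op) callables. True on success."""
    done = []
    try:
        for apply_op, undo_op in steps:
            apply_op()
            done.append(undo_op)      # only record undos for applied steps
    except Exception as err:
        print(f"upgrade failed ({err}); rolling back {len(done)} step(s)")
        for undo_op in reversed(done):
            undo_op()                 # inverse replay restores prior state
        return False
    return True

state = {"conffile": "v1"}
def set_v2(): state.update(conffile="v2")
def restore_v1(): state.update(conffile="v1")
def fail(): raise RuntimeError("postinst error")

run_with_rollback([(set_v2, restore_v1), (fail, lambda: None)])
print(state)   # {'conffile': 'v1'}, i.e. the prior state is restored
```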