
    Flow behavior in liquid molding

    The liquid molding (LM) process for manufacturing polymer composites with structural properties has the potential to significantly lower fabrication costs and increase production rates. LM includes both resin transfer molding and structural reaction injection molding. To achieve this potential, however, the underlying science base must be improved to facilitate effective process optimization and the implementation of on-line process control. The National Institute of Standards and Technology (NIST) has a major program in LM that includes materials characterization, process simulation models, on-line process monitoring and control, and the fabrication of test specimens. The results of this program are applied to real parts through cooperative projects with industry. The key feature of the effort is a comprehensive and integrated approach to the processing science aspects of LM. This paper briefly outlines the NIST program and uses several examples to illustrate the work.

    Testing probability distributions underlying aggregated data

    In this paper, we analyze and study a hybrid model for testing and learning probability distributions. Here, in addition to samples, the testing algorithm is provided with one of two different types of oracle to the unknown distribution $D$ over $[n]$. More precisely, we define both the dual and the cumulative dual access models, in which the algorithm $A$ can both sample from $D$ and, respectively, for any $i \in [n]$, either query the probability mass $D(i)$ (query access) or get the total mass of $\{1,\dots,i\}$, i.e. $\sum_{j=1}^{i} D(j)$ (cumulative access). These two models, by generalizing the previously studied sampling and query oracle models, allow us to bypass the strong lower bounds established for a number of problems in those settings, while capturing several interesting aspects of these problems and providing new insight into the limitations of the models. Finally, we show that while the testing algorithms can in most cases be strictly more efficient, some tasks remain hard even with this additional power.
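
    The two access models are easy to picture as interfaces. Below is a minimal sketch (class and method names are illustrative, not from the paper): a dual oracle offers sampling plus point queries for $D(i)$, and a cumulative dual oracle adds prefix-mass queries $\sum_{j \le i} D(j)$.

```python
import bisect
import itertools
import random

class DualOracle:
    """Dual access to a fixed distribution D over {1, ..., n}:
    i.i.d. samples plus point queries for the mass D(i)."""

    def __init__(self, probs):
        self.probs = probs                            # D(1), ..., D(n)
        self.cum = list(itertools.accumulate(probs))  # prefix sums

    def sample(self):
        """Draw i ~ D (1-indexed) by inverting the CDF."""
        return bisect.bisect_left(self.cum, random.random()) + 1

    def pmf(self, i):
        """Query access: return D(i)."""
        return self.probs[i - 1]

class CumulativeDualOracle(DualOracle):
    def cdf(self, i):
        """Cumulative access: return D(1) + ... + D(i)."""
        return self.cum[i - 1]
```

    With point queries, for instance, testing uniformity becomes nearly trivial: sample a few indices $i$ and check that each reported $D(i)$ is close to $1/n$, something a sample-only tester cannot do with so few draws.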

    Can we avoid high coupling?

    It is considered good software design practice to organize source code into modules and to favour within-module connections (cohesion) over between-module connections (coupling), leading to the oft-repeated maxim "low coupling/high cohesion". Prior research into network theory and its application to software systems has found evidence that many important properties of real software systems, including coupling, exhibit approximately scale-free structure; researchers have claimed that such scale-free structures are ubiquitous. This implies that high coupling must be unavoidable, statistically speaking, apparently contradicting standard ideas about software structure. We present a model that leads to the simple predictions that approximately scale-free structures ought to arise both for between-module connectivity and for overall connectivity, and not as the result of poor design or optimization shortcuts. These predictions are borne out by our large-scale empirical study. Hence we conclude that high coupling is not avoidable, and that this is in fact quite reasonable.
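
    The abstract does not spell out the generative model, but as a loose illustration of why heavy-tailed coupling can emerge from reasonable local behavior rather than poor design, here is a toy preferential-attachment growth of a module dependency graph (entirely hypothetical code, not the authors' model):

```python
import random
from collections import Counter

def grow_dependency_graph(n, m=2, seed=0):
    """Toy growth rule (NOT the paper's model): each new module depends
    on up to m existing modules chosen with probability proportional to
    their current degree -- classic preferential attachment."""
    rng = random.Random(seed)
    edges = [(0, 1)]
    pool = [0, 1]                 # each endpoint appears once per edge
    for new in range(2, n):
        chosen = {rng.choice(pool) for _ in range(m)}
        for old in chosen:
            edges.append((new, old))
            pool += [new, old]    # keeps sampling degree-proportional
    return edges

edges = grow_dependency_graph(10_000)
degree = Counter(v for e in edges for v in e)
tail = sorted(degree.values(), reverse=True)[:5]
print("five most-coupled modules have degrees:", tail)
```

    Even though every step is locally sensible (reuse well-known modules), a few modules end up enormously coupled, which is the statistical inevitability the paper argues for.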

    Sublinear-Time Algorithms for Monomer-Dimer Systems on Bounded Degree Graphs

    For a graph $G$, let $Z(G,\lambda)$ be the partition function of the monomer-dimer system defined by $\sum_k m_k(G)\lambda^k$, where $m_k(G)$ is the number of matchings of size $k$ in $G$. We consider graphs of bounded degree and develop a sublinear-time algorithm for estimating $\log Z(G,\lambda)$ at an arbitrary value $\lambda > 0$ within additive error $\epsilon n$ with high probability. The query complexity of our algorithm does not depend on the size of $G$ and is polynomial in $1/\epsilon$; we also provide a lower bound quadratic in $1/\epsilon$ for this problem. This is the first analysis of a sublinear-time approximation algorithm for a #P-complete problem. Our approach is based on the correlation decay of the Gibbs distribution associated with $Z(G,\lambda)$. We show that our algorithm approximates the probability for a vertex to be covered by a matching, sampled according to this Gibbs distribution, in near-optimal sublinear time. We extend our results to approximate the average size and the entropy of such a matching within an additive error with high probability, where again the query complexity is polynomial in $1/\epsilon$ and the lower bound is quadratic in $1/\epsilon$. Our algorithms are simple to implement and of practical use when dealing with massive datasets. Our results extend to other systems where correlation decay is known to hold, such as the independent set problem up to the critical activity.
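
    To make the quantities concrete, here is a brute-force evaluation of $Z(G,\lambda)$ straight from its definition for a tiny graph. This is purely illustrative of the partition function, not the paper's sublinear-time algorithm (which never enumerates matchings):

```python
import itertools

def matching_counts(edges, nverts):
    """m_k(G): number of matchings of size k, by brute force.
    Only feasible for tiny graphs; shown to make the definition
    of Z(G, lambda) concrete."""
    counts = {0: 1}  # the empty matching
    for k in range(1, nverts // 2 + 1):
        c = 0
        for subset in itertools.combinations(edges, k):
            used = [v for e in subset for v in e]
            if len(used) == len(set(used)):  # edges pairwise disjoint
                c += 1
        if c == 0:
            break
        counts[k] = c
    return counts

def Z(edges, nverts, lam):
    """Partition function sum_k m_k(G) * lambda^k."""
    return sum(m * lam**k for k, m in matching_counts(edges, nverts).items())

# 4-cycle: m_0 = 1, m_1 = 4, m_2 = 2, so Z = 1 + 4*lam + 2*lam^2
cycle4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(Z(cycle4, 4, 1.0))   # -> 7.0
```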

    Advances in the neurophysiology of magnocellular neuroendocrine cells

    Hypothalamic magnocellular neuroendocrine cells have unique electrical properties and a remarkable capacity for morphological and synaptic plasticity. Their large somatic size, their relatively uniform and dense clustering in the supraoptic and paraventricular nuclei, and their large axon terminals in the neurohypophysis make them an attractive target for direct electrophysiological interrogation. Here, we provide a brief review of significant recent findings in the neuroplasticity and neurophysiological properties of these neurones that were presented at the symposium “Electrophysiology of Magnocellular Neurons” during the 13th World Congress on Neurohypophysial Hormones in Ein Gedi, Israel in April 2019. Magnocellular vasopressin (VP) neurones respond directly to hypertonic stimulation with membrane depolarisation, which is triggered by cell shrinkage-induced opening of an N-terminal-truncated variant of transient receptor potential vanilloid type-1 (TRPV1) channels. New findings indicate that this mechanotransduction depends on actin and microtubule cytoskeletal networks, and that direct coupling of the TRPV1 channels to microtubules is responsible for mechanical gating of the channels. Vasopressin neurones also respond to osmostimulation by activation of epithelial Na+ channels (ENaC). It was shown recently that changes in ENaC activity modulate magnocellular neurone basal firing by generating tonic changes in membrane potential. Both oxytocin and VP neurones also undergo robust excitatory synapse plasticity during chronic osmotic stimulation. Recent findings indicate that new glutamate synapses induced during chronic salt loading express highly labile Ca2+-permeable GluA1 receptors requiring continuous dendritic protein synthesis for synapse maintenance. Finally, recordings from the uniquely tractable neurohypophysial terminals recently revealed an unexpected property of activity-dependent neuropeptide release: a significant fraction of the voltage-dependent neurohypophysial neurosecretion was found to be independent of Ca2+ influx through voltage-gated Ca2+ channels. Together, these findings provide a snapshot of significant new advances in the electrophysiological signalling mechanisms and neuroplasticity of the hypothalamic-neurohypophysial system, a system that continues to make important contributions to the field of neurophysiology.

    Assessing Code Authorship: The Case of the Linux Kernel

    Code authorship is key information in large-scale open source systems. Among other things, it allows maintainers to assess the division of work and identify key collaborators. Interestingly, open-source communities lack guidelines on how to manage authorship. This could be mitigated by setting out to build an empirical body of knowledge on how authorship-related measures evolve in successful open-source communities. Towards that direction, we perform a case study on the Linux kernel. Our results show that: (a) only a small portion of developers (26%) makes significant contributions to the code base; (b) the distribution of the number of files per author is highly skewed: a small group of top authors (3%) is responsible for hundreds of files, while most authors (75%) are responsible for at most 11 files; (c) most authors (62%) have a specialist profile; (d) authors with a high number of co-authorship connections tend to collaborate with others with fewer connections.
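
    As a rough way to reproduce the flavor of measure (b) on any repository, one can tally, for each author, the files their commits have touched. This is a deliberate simplification: the paper's notion of authorship is more refined than "touched the file at least once", so treat the sketch below as a starting point only.

```python
import subprocess
from collections import defaultdict

def files_per_author(repo="."):
    """Rough files-per-author tally from git history. Any commit
    touching a file counts toward authorship here, which is cruder
    than the measures used in the paper."""
    out = subprocess.run(
        ["git", "log", "--no-merges", "--format=@%an", "--name-only"],
        cwd=repo, capture_output=True, text=True, check=True,
    ).stdout
    files = defaultdict(set)
    author = None
    for line in out.splitlines():
        if line.startswith("@"):
            author = line[1:]          # commit header: author name
        elif line.strip():
            files[line].add(author)    # file touched by this commit
    per_author = defaultdict(int)
    for authors in files.values():
        for a in authors:
            per_author[a] += 1
    return per_author

tally = files_per_author()
for name, n in sorted(tally.items(), key=lambda kv: -kv[1])[:10]:
    print(f"{n:6d}  {name}")
```

    On a large project, the printed top-ten counts already hint at the skew the study quantifies: a handful of names account for a disproportionate share of the files.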

    Return of the Great Spaghetti Monster: Learnings from a Twelve-Year Adventure in Web Software Development

    The widespread adoption of the World Wide Web has fundamentally changed the landscape of software development. Only ten years ago, very few developers would write software for the Web, let alone consider using JavaScript or other web technologies for writing any serious software applications. In this paper, we reflect upon a twelve-year adventure in web development that began with the development of the Lively Kernel system at Sun Microsystems Labs in 2006. Back then, we also published papers that identified important challenges in web-based software development based on established software engineering principles. We revisit those earlier findings, compare the state of the art in web development today to our earlier learnings, and close with some reflections and suggestions for the road forward.

    A framework for the simulation of structural software evolution

    As functionality is added to an aging piece of software, its original design and structure will tend to erode. This can lead to high coupling, low cohesion and other undesirable effects associated with spaghetti architectures. The underlying forces that cause such degradation have been the subject of much research. However, progress in this field is slow, as its complexity makes it difficult to isolate the causal flows leading to these effects. This is further complicated by the difficulty of generating empirical data in sufficient quantity and of attributing such data to specific points in the causal chain. This article describes a framework for simulating the structural evolution of software. A complete simulation model is built by incrementally adding modules to the framework, each of which contributes an individual evolutionary effect. These effects are then combined to form a multifaceted simulation that evolves a fictitious code base in a manner approximating real-world behavior. We describe the underlying principles and structures of our framework from a theoretical and user perspective; a validation of a simple set of evolutionary parameters is then provided, and three empirical software studies generated from open-source software (OSS) are used to support claims and generated results. The research illustrates how simulation can be used to investigate a complex and under-researched area of the development cycle. It also shows the value of incorporating certain human traits into a simulation: factors that, in real-world system development, can significantly influence evolutionary structures.
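
    The description suggests a plug-in architecture: a simulation is assembled from small, composable evolutionary effects that each mutate a shared model of the code base. A minimal sketch of that idea follows (hypothetical names; the framework's real interfaces are not given in the abstract):

```python
import random

class CodeBase:
    """Toy stand-in for the simulated system: modules plus the
    dependencies between them."""
    def __init__(self):
        self.modules = [0]
        self.deps = []          # (from_module, to_module)

class Effect:
    """One evolutionary force; the simulation composes several."""
    def apply(self, code, rng): ...

class AddModule(Effect):
    """Growth: a new module appears and depends on an existing one."""
    def apply(self, code, rng):
        new = len(code.modules)
        code.modules.append(new)
        code.deps.append((new, rng.choice(code.modules[:-1])))

class PreferentialCoupling(Effect):
    """A 'human trait': developers reach for well-known modules,
    so busy modules attract ever more dependencies."""
    def apply(self, code, rng):
        if code.deps:
            src = rng.choice(code.modules)
            dst = rng.choice(rng.choice(code.deps))  # degree-biased pick
            if src != dst:
                code.deps.append((src, dst))

def simulate(effects, steps=1000, seed=42):
    rng = random.Random(seed)
    code = CodeBase()
    for _ in range(steps):
        for effect in effects:
            effect.apply(code, rng)
    return code

code = simulate([AddModule(), PreferentialCoupling()])
print(len(code.modules), "modules,", len(code.deps), "dependencies")
```

    The design choice mirrors the paper's framing: each effect is validated in isolation, then effects are stacked to approximate the multifaceted erosion seen in real systems.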

    A Hypergraph Dictatorship Test with Perfect Completeness

    The hypergraph dictatorship test was first introduced by Samorodnitsky and Trevisan and serves as a key component in their Unique Games-based PCP construction. Such a test has oracle access to a collection of functions and determines whether all the functions are the same dictatorship, or all their low-degree influences are $o(1)$. Their test makes $q \geq 3$ queries and has amortized query complexity $1 + O(\frac{\log q}{q})$, but has an inherent loss of perfect completeness. In this paper we give an adaptive hypergraph dictatorship test that achieves both perfect completeness and amortized query complexity $1 + O(\frac{\log q}{q})$.
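
    Two standard definitions help unpack the claim (textbook background, not quoted from the paper): a dictatorship is a function that copies a single input coordinate, and amortized query complexity trades query count against soundness error.

```latex
% Background definitions (standard, not from the paper itself).
% A dictatorship copies one fixed input coordinate:
f\colon \{0,1\}^n \to \{0,1\}, \qquad f(x) = x_i \ \text{for some fixed } i \in [n].
% For a test making q queries with soundness error s, the amortized query
% complexity is q / \log_2(1/s); soundness 2^{-q + O(\log q)} then gives
\frac{q}{\log_2(1/s)} \;=\; \frac{q}{q - O(\log q)} \;=\; 1 + O\!\left(\frac{\log q}{q}\right).
```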
