
    Understanding print stability in material extrusion additive manufacturing of thermoset composites

    Over the last several years, rapid progress has been made in 3D printing of thermoset polymer resins. Such materials offer desirable thermal and chemical stability, attractive strength and stiffness, and excellent compatibility with many existing high-performance fibers. Material extrusion additive manufacturing (AM) is an ideal technology for printing thermoset-based composites because fibers align during extrusion through the deposition nozzle, enabling the engineer to design fiber orientation into the printed component. Current efforts to scale thermoset AM to large formats have shown promise, but have also highlighted issues with print stability. To date, very little research has focused on understanding how the rheological properties of the feedstock dictate the mechanical stability of printed objects. This talk will describe our first efforts in this area, in which tall, thin walls are printed to characterize buckling and yielding under self-weight. The talk will begin with an overview of thermoset material extrusion AM, including a brief history and the current state of the art in small- and large-scale printing. It will then describe a simple thin-walled test geometry and an experimental setup that enable quantitative assessment and monitoring of geometric stability during printing using machine vision. Two feedstocks with different rheological properties are investigated, and the heights at which buckling begins and at which full collapse occurs are identified as a function of wall thickness. Complementary rheological characterization shows that collapse of thin printed walls is well predicted by the classical self-weight elastic buckling model, provided the recovery behavior of the feedstock is accounted for. These tests highlight the importance of understanding recovery in material extrusion AM feedstocks; they could inform the design of better resins and fillers and provide guidelines for selecting successful print parameters for both small- and large-scale thermoset AM. The talk will conclude with a brief discussion of next steps and an outlook on the future of material extrusion AM of thermoset materials.
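    The classical self-weight elastic buckling model mentioned above can be illustrated with a short calculation. The sketch below applies the Greenhill-type critical-height formula for a vertical member buckling under its own weight to a thin printed wall; the function name and material values are assumptions for illustration, not data from the talk, and the modulus should be read as the recovered (post-deposition) modulus of the feedstock.

    ```python
    def critical_buckling_height(E, rho, t, g=9.81):
        """Greenhill-type critical height (m) for a thin vertical wall of
        thickness t buckling elastically under its own weight.

        For a wall segment of plan length L: I = L*t**3/12 and A = L*t, so the
        classical result h_c = (7.8373*E*I/(rho*g*A))**(1/3) reduces to a form
        that depends only on the thickness and the material properties.
        """
        return (7.8373 * E * t**2 / (12.0 * rho * g)) ** (1.0 / 3.0)

    # Placeholder values for an uncured, partially recovered thermoset paste (assumed)
    E = 5.0e4      # recovered elastic modulus, Pa
    rho = 1100.0   # density, kg/m^3
    t = 2.0e-3     # printed wall thickness, m

    print(f"critical height ~ {critical_buckling_height(E, rho, t) * 1000:.1f} mm")
    ```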

    Large scale reactive additive manufacturing and what to expect when scaling up

    Additive manufacturing as a whole offers tremendous savings in time and cost for rapid prototyping and tooling. At present, a significant number of thermoplastic printers are available, ranging from small-scale filament-based extrusion to large-scale pellet-based extrusion. Thermosets have seen less growth and have been primarily limited to small-scale research setups. Recently, a large-scale thermoset printer, the Reactive Additive Manufacturing (RAM) printer, was developed (cf. Figure 1). This printer has an overall build volume of 450 ft³ and gantry speeds up to 50 in/s. The RAM system is also equipped with a modular pumping station capable of pumping feedstock material at pressures of 3000 psi from 5- or 55-gallon reservoirs. This work intends to reveal the challenges of working with a large-scale Direct Ink Writing (DIW) process and how to overcome them. Two material chemistries have been scaled up for this system and are presented herein: a peroxide-cured vinyl ester and latent-cured epoxy-anhydrides. Factors such as pumpability, printability, and performance vary significantly between these systems and are discussed using rheological characterization, modeling, printing setup and parameters, and part design.

    Beyond Nyquist: Efficient Sampling of Sparse Bandlimited Signals

    Wideband analog signals push contemporary analog-to-digital conversion systems to their performance limits. In many applications, however, sampling at the Nyquist rate is inefficient because the signals of interest contain only a small number of significant frequencies relative to the bandlimit, although the locations of the frequencies may not be known a priori. For this type of sparse signal, other sampling strategies are possible. This paper describes a new type of data acquisition system, called a random demodulator, that is constructed from robust, readily available components. Let K denote the total number of frequencies in the signal, and let W denote its bandlimit in Hz. Simulations suggest that the random demodulator requires just O(K log(W/K)) samples per second to stably reconstruct the signal. This sampling rate is exponentially lower than the Nyquist rate of W Hz. In contrast with Nyquist sampling, one must use nonlinear methods, such as convex programming, to recover the signal from the samples taken by the random demodulator. This paper provides a detailed theoretical analysis of the system's performance that supports the empirical observations.
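    A minimal numerical sketch of the sampling-and-recovery pipeline is given below. It is not the paper's setup: the paper treats Fourier-sparse signals and recovers them by convex programming, whereas this toy uses a DCT-sparse real signal and ISTA, a simple iterative solver for the closely related l1-regularized least-squares problem; the dimensions, the random seed, and all variable names are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy dimensions: Nyquist-rate length W, sparsity K, measurements R (W/R integer).
    W, K, R = 256, 5, 64

    # Orthonormal DCT-II synthesis matrix Psi: x = Psi @ a, with a K-sparse.
    n = np.arange(W)
    Psi = np.cos(np.pi * (n[:, None] + 0.5) * n[None, :] / W)
    Psi[:, 0] *= np.sqrt(1.0 / W)
    Psi[:, 1:] *= np.sqrt(2.0 / W)

    a_true = np.zeros(W)
    support = rng.choice(W, size=K, replace=False)
    a_true[support] = rng.standard_normal(K)
    x = Psi @ a_true                          # frequency-sparse signal (toy)

    # Random demodulator: chip with a random +/-1 sequence at the Nyquist rate,
    # then integrate-and-dump over blocks of length W // R.
    d = rng.choice([-1.0, 1.0], size=W)
    H = np.kron(np.eye(R), np.ones(W // R))   # block-sum operator (R x W)
    A = H @ np.diag(d) @ Psi                  # effective measurement matrix
    y = H @ (d * x)                           # R low-rate samples

    # Recover a by l1-regularized least squares via ISTA (proximal gradient).
    lam = 1e-3
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    a_hat = np.zeros(W)
    for _ in range(3000):
        z = a_hat - step * (A.T @ (A @ a_hat - y))
        a_hat = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

    print("true support contained in estimate:",
          set(np.flatnonzero(np.abs(a_hat) > 1e-2)) >= set(support))
    print("relative error:", np.linalg.norm(a_hat - a_true) / np.linalg.norm(a_true))
    ```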

    Frenkel and charge transfer excitons in C60

    We have studied the low-energy electronic excitations of C60 using momentum-dependent electron energy-loss spectroscopy in transmission. The momentum-dependent intensity of the gap excitation allows the first direct experimental determination of the energy of the $^1H_g$ excitation, and thus also of the total width of the multiplet resulting from the gap transition. In addition, we could elucidate the nature of the following excitations as either Frenkel or charge-transfer excitons.

    Real-World Goal Setting and Use of Outcome Measures According to the International Classification of Functioning, Disability and Health: A European Survey of Physical Therapy Practice in Multiple Sclerosis

    Goal setting is a core component of physical therapy in multiple sclerosis (MS). It is unknown whether and to what extent goals are set at the different levels of the International Classification of Functioning, Disability and Health (ICF), and whether, and which, standardized outcome measures are used in real life for evaluation at the different ICF levels. Our aim was to describe the real-world use of goal setting and outcome measures in Europe. An online cross-sectional survey, completed by 212 physical therapists (PTs) specialized in MS from 26 European countries, was conducted. Differences between European regions and relationships between goals and assessments were analyzed. PTs regularly set goals, but did not always apply the Specific, Measurable, Achievable, Realistic, Timed (SMART) criteria. Regions did not differ in the range of activities assessed, but did differ in the goals set (e.g., Western and Northern regions set significantly more goals regarding leisure and work) and in the outcome measures used (e.g., the Berg Balance Scale was more frequently used in Northern regions). Quality of life was not routinely assessed, despite being viewed as an important therapy goal. Discrepancies existed both in goal setting and in assessment across European regions. The ICF assists in understanding these discrepancies and in guiding improved health care for the future.

    Sparsity and Incoherence in Compressive Sampling

    We consider the problem of reconstructing a sparse signal $x^0 \in \mathbb{R}^n$ from a limited number of linear measurements. Given $m$ randomly selected samples of $U x^0$, where $U$ is an orthonormal matrix, we show that $\ell_1$ minimization recovers $x^0$ exactly when the number of measurements exceeds $m \geq \mathrm{Const} \cdot \mu^2(U) \cdot S \cdot \log n$, where $S$ is the number of nonzero components in $x^0$, and $\mu$ is the largest entry in $U$ properly normalized: $\mu(U) = \sqrt{n} \cdot \max_{k,j} |U_{k,j}|$. The smaller $\mu$, the fewer samples needed. The result holds for "most" sparse signals $x^0$ supported on a fixed (but arbitrary) set $T$. Given $T$, if the sign of $x^0$ for each nonzero entry on $T$ and the observed values of $U x^0$ are drawn at random, the signal is recovered with overwhelming probability. Moreover, there is a sense in which this is nearly optimal, since any method succeeding with the same probability would require just about this many samples.
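    As an illustration of the bound, consider the standard time-frequency pair (an example consistent with the statement above; the numerical magnitudes below are assumptions for illustration): the signal is sparse in time and is sampled in the Fourier domain, so $U$ is the discrete Fourier matrix and the coherence is minimal.

    ```latex
    % Time--frequency pair: U is the n x n discrete Fourier matrix,
    % so every entry has magnitude 1/\sqrt{n} and the coherence is minimal.
    \mu(U) \;=\; \sqrt{n}\,\max_{k,j}\lvert U_{k,j}\rvert
           \;=\; \sqrt{n}\cdot\tfrac{1}{\sqrt{n}} \;=\; 1,
    \qquad\text{so the bound becomes}\qquad
    m \;\geq\; \mathrm{Const}\cdot S\cdot\log n .
    % Illustrative magnitudes (assumed, not from the paper): n = 10^6, S = 100 gives
    % S \log n \approx 100 \times 13.8 \approx 1.4\times 10^3 samples (up to the constant),
    % versus n = 10^6 measurements for direct acquisition.
    ```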

    Parameters of the Effective Singlet-Triplet Model for Band Structure of High-$T_c$ Cuprates by Different Approaches

    The present paper addresses the problem of parameter determination for high-$T_c$ superconducting copper oxides. Different approaches, ab initio LDA and LDA+U calculations and the Generalized Tight-Binding (GTB) method for strongly correlated electron systems, are used to calculate the hopping and exchange parameters of the effective singlet-triplet model for the CuO$_2$ layer. The resulting parameters are in remarkably good agreement with each other and with parameters extracted from experiment. This set of parameters is proposed for a proper quantitative description of the physics of hole-doped high-$T_c$ cuprates in the framework of effective models.

    Representing addition and subtraction: learning the formal conventions

    The study was designed to test the effects of a structured intervention in teaching children to represent addition and subtraction. In a post-test-only control-group design, 90 five-year-olds experienced the intervention, entitled Bi-directional Translation, whilst 90 control subjects experienced typical teaching. Post-intervention testing showed some significant differences between the two groups, both in being able to carry out the addition and subtraction operations and in being able to determine which operation was appropriate. The results suggest that, contrary to historical practices, children's exploration of real-world situations should precede practice in arithmetical symbol manipulation.

    Superconductivity in the two-dimensional Hubbard Model

    Quasiparticle bands of the two-dimensional Hubbard model are calculated using the Roth two-pole approximation to the one-particle Green's function. Excellent agreement is obtained with recent Monte Carlo calculations, including an anomalous volume of the Fermi surface near half-filling, which can possibly be explained in terms of a breakdown of Fermi liquid theory. The calculated bands are very flat around the $(\pi,0)$ points of the Brillouin zone, in agreement with photoemission measurements of cuprate superconductors. With doping there is a shift in spectral weight from the upper band to the lower band. The Roth method is extended to deal with superconductivity within a four-pole approximation allowing electron-hole mixing. It is shown that triplet p-wave pairing never occurs. Singlet $d_{x^2-y^2}$-wave pairing is strongly favoured, and optimal doping occurs when the van Hove singularity, corresponding to the flat band part, lies at the Fermi level. Nearest-neighbour antiferromagnetic correlations play an important role in flattening the bands near the Fermi level and in favouring superconductivity. However, the mechanism for superconductivity is a local one, in contrast to spin-fluctuation exchange models. For reasonable values of the hopping parameter, the transition temperature $T_c$ is in the range 10–100 K. The optimum doping $\delta_c$ lies between 0.14 and 0.25, depending on the ratio $U/t$. The gap equation has a BCS-like form, and $2\Delta_{\max}/(k_B T_c) \approx 4$.
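    As a rough illustration of the quoted gap ratio (the temperature value below is an assumption taken from the top of the quoted $T_c$ range, not a result of the paper), the ratio translates into a gap magnitude as follows.

    ```latex
    % Illustrative arithmetic (assumed T_c = 100 K, the top of the quoted range):
    \frac{2\Delta_{\max}}{k_B T_c} \approx 4
    \quad\Longrightarrow\quad
    \Delta_{\max} \approx 2\,k_B T_c
    \approx 2 \times \bigl(8.617\times10^{-2}\ \mathrm{meV/K}\bigr) \times 100\ \mathrm{K}
    \approx 17\ \mathrm{meV}.
    % For comparison, the weak-coupling BCS value of the same ratio is
    % 2\Delta/(k_B T_c) \approx 3.53.
    ```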