
    Penetration and cratering experiments of graphite by 0.5-mm diameter steel spheres at various impact velocities

    Cratering experiments have been conducted with 0.5-mm diameter AISI 52100 steel spherical projectiles and 30-mm diameter, 15-mm long graphite targets. The latter were made of EDM3, a commercial grade of polycrystalline, porous graphite whose behavior is known to be macroscopically isotropic. A two-stage light-gas gun launched the steel projectiles at velocities between 1.1 and 4.5 km s⁻¹. In most cases, post-mortem tomography revealed that the projectile, fragmented or not, was trapped inside the target. The apparent crater size and depth increase with impact velocity, as does the crater volume, which appears to follow a power law significantly different from those constructed in previous works for similar impact conditions and materials. Meanwhile, the projectile depth of penetration starts to decrease at velocities beyond 2.2 km s⁻¹, first because of the projectile's plastic deformation and then, beyond 3.2 km s⁻¹, because of its fragmentation. In addition to these three regimes of penetration behavior already described by a few authors, we suggest a fourth regime in which projectile melting plays a significant role at velocities above 4.1 km s⁻¹. A discussion of these four regimes is provided and indicates that each phenomenon may account for the local evolution of the depth of penetration.
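A crater-volume power law of the kind mentioned in this abstract is typically extracted as a straight-line fit in log-log space. Below is a minimal sketch of such a fit on synthetic data; the exponent and prefactor are purely illustrative, not the paper's fitted values.

```python
import numpy as np

# Hypothetical crater-volume data following V = C * v**mu with noise.
# mu_true and C_true are assumed for this sketch, NOT the paper's values.
rng = np.random.default_rng(0)
v = np.linspace(1.1, 4.5, 12)          # impact velocities, km/s
mu_true, C_true = 1.8, 0.05            # illustrative exponent and prefactor
V = C_true * v**mu_true * rng.lognormal(0.0, 0.05, v.size)

# A power law is a straight line in log-log space, so a degree-1 fit suffices.
mu_fit, logC_fit = np.polyfit(np.log(v), np.log(V), 1)
C_fit = float(np.exp(logC_fit))
```

Comparing the fitted exponent against values reported for other target materials is how such laws are judged "significantly different".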

    Dynamic cratering of graphite: experimental results and simulations

    The cratering process in brittle materials under hypervelocity impact (HVI) is of major relevance for debris shielding in spacecraft and for high-power laser applications. Amongst other materials, carbon is of particular interest since it is widely used as an elementary component of composite materials. In this paper we study a porous polycrystalline graphite under HVI and laser impact, both leading to strong debris ejection and cratering. First, we report new experimental data for normal impacts at 4100 and 4200 m s⁻¹ of a 500-μm-diameter steel sphere on a thick sample of graphite. In a second step, dynamic loadings were performed with a high-power nanosecond laser facility. High-resolution X-ray tomographies and scanning electron microscope observations were performed in order to visualize the crater shape and the subsurface cracks. These two post-mortem diagnostics also provide evidence that, in the HVI tests, the fragmented steel sphere was buried in the graphite target below the crater surface. The current study aims to propose an interpretation of the results, including projectile trapping. Despite their efficiency at capturing overall trends in crater size and shape, semi-empirical scaling laws do not usually predict these phenomena. Hence, to offer better insight into the processes leading to this observation, the need for a computational damage model is argued. After discussing energy partitioning in order to identify the dominant physical mechanisms occurring in our experiments, we propose a simple damage model for porous and brittle materials. Compaction and fracture phenomena are included in the model. A failure criterion relying on Weibull theory is used to relate material tensile strength to deformation rate and damage. These constitutive relations have been implemented in an Eulerian hydrocode in order to compute numerical simulations and confront them with experiments. We propose a simple fitting procedure for the unknown Weibull parameters based on HVI results. Good agreement is found with experimental observations of crater shapes and dimensions, as well as debris velocity. The projectile inclusion below the crater is also reproduced by the model, and a mechanism is proposed for the trapping process. At least two sets of Weibull parameters can be used to match the results. Finally, we show that laser experiment simulations may discriminate in favor of one set of parameters.
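The Weibull weakest-link description referred to in this abstract relates failure probability to tensile stress through a scale strength and a Weibull modulus. A minimal sketch of that basic relation follows; the parameter values are purely illustrative, and the paper's fitted Weibull parameters and rate-dependent criterion are not reproduced here.

```python
import math

def weibull_failure_probability(sigma, sigma0, m, volume_ratio=1.0):
    """Weibull weakest-link failure probability under tensile stress sigma.

    sigma0 (scale strength) and m (Weibull modulus) take hypothetical
    illustration values below, not the parameters fitted in the paper.
    """
    return 1.0 - math.exp(-volume_ratio * (sigma / sigma0) ** m)

# Illustrative numbers: a 100 MPa scale strength and modulus m = 10.
p_low = weibull_failure_probability(50e6, 100e6, 10)    # well below sigma0
p_high = weibull_failure_probability(120e6, 100e6, 10)  # above sigma0
```

A large modulus m makes the transition from near-zero to near-certain failure sharp around sigma0, which is why fitting m and sigma0 to a handful of HVI results can leave several parameter sets equally plausible.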

    Diversity of recommendations: application to a blog platform and evaluation

    Recommender systems (RS) aim to automatically propose to the user items related to his or her interests. In the context of document retrieval, the user's interests can be modeled from the content of the documents visited or from the actions performed. To move toward more relevant recommendations, we propose an RS model that builds a recommendation list answering a broad spectrum of potential interests. The originality of our model is that it relies on the notion of diversity, obtained by aggregating different interest measures to build the final recommendation list. We also define a protocol for evaluating the interest of these recommendations. Finally, we present the results obtained by our diversity-based RS in the context of recommending blog posts.
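One simple way to obtain a diversity-driven list from several interest measures is to interleave the per-measure rankings. The sketch below assumes two hypothetical measures and a round-robin merge; it is an illustration of the general idea, not the paper's aggregation scheme.

```python
# Hypothetical per-post scores under two interest measures
# (names and values are illustrative, not the paper's actual measures).
content_score = {"post_a": 0.9, "post_b": 0.4, "post_c": 0.7}
activity_score = {"post_a": 0.2, "post_b": 0.8, "post_c": 0.5}

def diverse_top_k(measures, k):
    """Round-robin over the per-measure rankings so the final list
    covers a broad spectrum of interests instead of a single measure."""
    rankings = [sorted(m, key=m.get, reverse=True) for m in measures]
    out = []
    i = 0
    while len(out) < k:
        for r in rankings:
            if i < len(r) and r[i] not in out:
                out.append(r[i])
                if len(out) == k:
                    break
        i += 1
    return out

recs = diverse_top_k([content_score, activity_score], k=3)
```

With a single measure, post_b (weak in content, strong in activity) would be buried; interleaving surfaces the top item of each measure early in the list.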

    How do the grains slide in fine-grained zirconia polycrystals at high temperature?

    Degradation of the mechanical properties of zirconia polycrystals is rarely discussed in terms of solution-precipitation grain-boundary sliding, owing to experimental controversies over imaging of intergranular amorphous phases at high and room temperatures. Here, the authors applied mechanical spectroscopy and transmission electron microscopy (TEM) to shed light on the amorphization of grain interfaces at high temperature, where the interface reaction determines the behaviour of fine-grained zirconia polycrystals. They present mechanical spectroscopy results, which yield evidence of an intergranular amorphous phase in silica-doped and high-purity zirconia at high temperature. Quenching of zirconia polycrystals reveals an intergranular amorphous phase in TEM images at room temperature.

    ICAR, a tool for Blind Source Separation using Fourth Order Statistics only

    The problem of blind separation of overdetermined mixtures of sources, that is, with fewer sources than (or as many sources as) sensors, is addressed in this paper. A new method, named ICAR (Independent Component Analysis using Redundancies in the quadricovariance), is proposed in order to process complex data. This method, without any whitening operation, exploits only some redundancies of a particular quadricovariance matrix of the data. Computer simulations demonstrate that ICAR offers good results in general and even outperforms classical methods in several situations: ICAR (i) succeeds in separating sources with low signal-to-noise ratios, (ii) does not require sources with different second-order (SO) and/or fourth-order (FO) spectral densities, (iii) is asymptotically unaffected by the presence of Gaussian noise with unknown spatial correlation, and (iv) is not sensitive to an overestimation of the number of sources.
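A quadricovariance matrix of the kind ICAR works on can be estimated directly from fourth-order sample cumulants. The sketch below is a plain estimator for real zero-mean data; the specific redundancy structure ICAR exploits, and its handling of complex data, are not reproduced here.

```python
import numpy as np

def quadricovariance(x):
    """Empirical fourth-order cumulant ("quadricovariance") matrix for
    real data x of shape (n sensors, T samples):
    Q[i*n+j, k*n+l] = cum(x_i, x_j, x_k, x_l).
    Plain estimator sketch only; not the ICAR algorithm itself."""
    n, T = x.shape
    xc = x - x.mean(axis=1, keepdims=True)
    R = xc @ xc.T / T                      # second-order covariance
    Q = np.empty((n * n, n * n))
    for i in range(n):
        for j in range(n):
            for k in range(n):
                for l in range(n):
                    m4 = (xc[i] * xc[j] * xc[k] * xc[l]).mean()
                    # zero-mean fourth-order cumulant formula
                    Q[i * n + j, k * n + l] = (
                        m4 - R[i, j] * R[k, l]
                           - R[i, k] * R[j, l]
                           - R[i, l] * R[j, k])
    return Q

# Fourth-order cumulants of Gaussian data vanish, which is why cumulant-based
# methods are asymptotically insensitive to (even spatially correlated)
# Gaussian noise, as point (iii) above states.
rng = np.random.default_rng(1)
Q_gauss = quadricovariance(rng.standard_normal((2, 400000)))
```

For n sensors the matrix is n² × n², and its algebraic structure under an unknown mixing matrix is what fourth-order-only methods exploit.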

    Parallel Parsing of Context-Free Languages on an Array of Processors

    Kosaraju [Kosaraju 69] and, independently ten years later, Guibas, Kung and Thompson [Guibas 79] devised an algorithm (K-GKT) for solving on an array of processors a class of dynamic programming problems of which general context-free language (CFL) recognition is a member. I introduce an extension to K-GKT which allows parsing as well as recognition. The basic idea of the extension is to add counters to the processors; these act as pointers to other processors. The extended algorithm consists of three phases, which I call the recognition phase, the marking phase and the parse output phase. I first consider the case of unambiguous grammars. I show that in that case the algorithm has O(n² log n) space complexity and linear time complexity. To obtain these results I rely on a counter implementation that allows the execution in constant time of each of the operations: set to zero, test if zero, increment by 1 and decrement by 1. I provide a proof of correctness of this implementation. I introduce the concept of efficient grammars. One factor in the multiplicative constant hidden behind the O(n² log n) space complexity measure for the algorithm is related to the number of non-terminals in the (unambiguous) grammar used. I say that a grammar is k-efficient if it allows the processors to store not more than k pointer pairs, and I call a 1-efficient grammar an efficient grammar. I show that two properties, which I call nt-disjunction and rhs-disjunction, together with unambiguity are sufficient but not necessary conditions for grammar efficiency. I also show that unambiguity itself is not a necessary condition for efficiency. I then consider the case of ambiguous grammars. I present two methods for outputting multiple parses; both output each parse in linear time. One method has O(n³ log n) space complexity while the other has O(n² log n) space complexity. I then address the issue of problem decomposition. I show how part of my extension can be adapted, using a standard technique, to process inputs that would be too large for an array of some fixed size. I then discuss briefly some issues related to implementation and report on an actual implementation on the I.C.L. DAP. Finally, I show how another systolic CFL parsing algorithm, by Chang, Ibarra and Palis [Chang 87], can be generalized to output parses in preorder and inorder.
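The dynamic-programming class K-GKT solves includes CFL recognition, whose best-known sequential instance is the CYK table. A sketch of sequential CYK recognition for a Chomsky-normal-form grammar follows, to show the table that the processor array evaluates in parallel; the systolic scheduling itself, and the counter-based parse extraction, are not shown.

```python
def cyk_recognize(grammar, start, w):
    """Sequential CYK recognition for a CNF grammar: table[i][j] holds
    the non-terminals deriving w[i:j+1].  This is the O(n^3) dynamic
    programme that the processor array pipelines."""
    n = len(w)
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, a in enumerate(w):                       # lexical rules A -> a
        table[i][i] = {A for A, rhs in grammar if rhs == (a,)}
    for span in range(2, n + 1):                    # binary rules A -> B C
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):
                for A, rhs in grammar:
                    if (len(rhs) == 2 and rhs[0] in table[i][k]
                            and rhs[1] in table[k + 1][j]):
                        table[i][j].add(A)
    return start in table[0][n - 1]

# Tiny illustrative CNF grammar for { a^n b^n : n >= 1 }:
# S -> A X | A B ; X -> S B ; A -> a ; B -> b
G = [("S", ("A", "X")), ("S", ("A", "B")), ("X", ("S", "B")),
     ("A", ("a",)), ("B", ("b",))]
ok = cyk_recognize(G, "S", "aabb")
bad = cyk_recognize(G, "S", "aab")
```

Recognition alone only answers membership; the thesis's counter extension is what turns the filled table into actual parses.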