6 research outputs found

    200 NL H2 hydrogen storage tank using MgH2–TiH2–C nanocomposite as H storage material

    MgH2-based hydrogen storage materials are promising candidates for solid-state hydrogen storage, enabling efficient thermal management in energy systems that integrate a metal hydride hydrogen store with a solid oxide fuel cell (SOFC) supplying dissipated heat at temperatures between 400 and 600 °C. Recently, we showed that a graphite-modified composite of TiH2 and MgH2 prepared by high-energy reactive ball milling in hydrogen (HRBM) demonstrates a high reversible gravimetric H storage capacity exceeding 5 wt% H, fast hydrogenation/dehydrogenation kinetics and excellent cycle stability. In the present study, a 0.9 MgH2 + 0.1 TiH2 + 5 wt% C nanocomposite with a maximum hydrogen storage capacity of 6.3 wt% H was prepared by HRBM preceded by a short homogenizing pre-milling in inert gas. 300 g of the composite was loaded into a storage tank accommodating an air-heated stainless steel metal hydride (MH) container equipped with transversal internal (copper) and external (aluminium) fins. Tests of the tank were carried out in a temperature range from 150 °C (H2 absorption) to 370 °C (H2 desorption) and showed its ability to deliver up to 185 NL H2, corresponding to a reversible H storage capacity of the MH material of approximately 5 wt% H. No significant deterioration of the reversible H storage capacity was observed during 20 heating/cooling H2 discharge/charge cycles. It was found that the H2 desorption performance can be tailored by selecting appropriate thermal management conditions, and an optimal operational regime has been proposed.
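    The reported gas volumes can be cross-checked against the gravimetric capacities with a short calculation. The sketch below does that arithmetic, assuming ideal-gas molar volume (22.414 NL/mol) and taking wt% H as mass of hydrogen per mass of the 300 g composite; the function name and the exact capacity definition are illustrative assumptions, not taken from the paper.

```python
# Back-of-the-envelope check of the reported figures, assuming ideal-gas
# behaviour at normal conditions (22.414 NL per mol of H2) and a gravimetric
# capacity defined as mass of hydrogen per mass of hydrogen-free material.
# The 300 g loading and the 185/200 NL volumes come from the abstract;
# the capacity definition itself is an assumption for illustration.

MOLAR_VOLUME_NL = 22.414   # NL per mol of ideal gas at 0 degC, 1 atm
M_H2 = 2.016               # g per mol of H2

def wt_percent_h(volume_nl: float, material_mass_g: float) -> float:
    """Gravimetric H storage capacity (wt% H) from a delivered H2 volume."""
    moles_h2 = volume_nl / MOLAR_VOLUME_NL
    mass_h = moles_h2 * M_H2
    return 100.0 * mass_h / material_mass_g

if __name__ == "__main__":
    # 185 NL delivered from 300 g of the MgH2-TiH2-C composite
    print(f"{wt_percent_h(185, 300):.1f} wt% H")   # ~5.5 wt% H, i.e. roughly the quoted 5 wt%
    # nominal 200 NL rating of the tank
    print(f"{wt_percent_h(200, 300):.1f} wt% H")   # ~6.0 wt% H, below the 6.3 wt% material maximum
```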

    Words from spontaneous conversational speech can be recognized with human-like accuracy by an error-driven learning algorithm that discriminates between meanings straight from smart acoustic features, bypassing the phoneme as recognition unit.

    Sound units play a pivotal role in cognitive models of auditory comprehension. The general consensus is that during perception listeners break down speech into auditory words and subsequently phones. Indeed, cognitive speech recognition is typically taken to be computationally intractable without phones. Here we present a computational model, trained on 20 hours of conversational speech, that recognizes word meanings within the range of human performance (model 25%, native speakers 20-44%) without making use of phone or word form representations. Our model also successfully generates predictions about the speed and accuracy of human auditory comprehension. At the heart of the model is a 'wide' yet sparse two-layer artificial neural network, with some hundred thousand input units representing summaries of changes in acoustic frequency bands and proxies for lexical meanings as output units. We believe that our model holds promise for resolving longstanding theoretical problems surrounding the notion of the phone in linguistic theory.
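    The abstract's description of a wide, sparse, two-layer error-driven network maps naturally onto delta-rule (Widrow-Hoff) learning as used in discriminative learning models. The sketch below is a minimal illustration under that assumption; the class name, toy dimensions, learning rate, and the specific update rule are illustrative choices, not details taken from the paper itself.

```python
# A minimal sketch of a 'wide' sparse two-layer error-driven network of the kind
# the abstract describes: binary acoustic cues as input units, lexical meanings
# as output units, and weights adjusted by the delta rule after each learning event.
import numpy as np

class DiscriminativeLexicon:
    """Wide, sparse two-layer linear network trained with the delta rule."""

    def __init__(self, n_cues: int, n_meanings: int, learning_rate: float = 0.01):
        self.W = np.zeros((n_cues, n_meanings))  # cue-to-meaning association weights
        self.lr = learning_rate

    def update(self, active_cues: np.ndarray, meaning_idx: int) -> None:
        # One error-driven learning event: only the (sparse) active cues are adjusted.
        target = np.zeros(self.W.shape[1])
        target[meaning_idx] = 1.0
        activation = self.W[active_cues].sum(axis=0)        # summed support for every meaning
        self.W[active_cues] += self.lr * (target - activation)

    def recognize(self, active_cues: np.ndarray) -> int:
        # The recognized meaning is the output unit with the highest summed activation.
        return int(np.argmax(self.W[active_cues].sum(axis=0)))

# Toy usage with deliberately small dimensions (the paper's network has some
# hundred thousand input units and meaning proxies as outputs).
rng = np.random.default_rng(0)
model = DiscriminativeLexicon(n_cues=10_000, n_meanings=500)
for _ in range(50):
    cues = rng.choice(10_000, size=40, replace=False)       # ~40 active acoustic cues per token
    model.update(cues, meaning_idx=42)
print(model.recognize(cues))                                 # -> 42 after repeated exposure
```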