
    Effect of Hf additions on microstructure and mechanical properties of a Co-9Al-9W-2Ta alloy at room and high temperatures

    The microstructural evolution and room/high-temperature mechanical properties of a Co-9Al-9W-2Ta alloy with 2, 5, 6, and 9 at.% Hf additions (referred to as the 2Hf, 5Hf, 6Hf, and 9Hf alloys hereafter; W + Hf = 9 at.% for all alloys), prepared by arc-melting, were investigated. It was found that the as-cast 2Hf–6Hf alloys showed a microstructure composed of the Co-based solid-solution γ phase (γ-CoSS) and a eutectic of γ and the intermetallic compound Co23Hf6, while the 9Hf alloy was composed of primary Co23Hf6 and the (γ + Co23Hf6) eutectic. After a 1170°C/8 h solution treatment and 800°C/100 h aging, cubic γ′ precipitates 200 nm–700 nm in size formed homogeneously and coherently in the γ matrix of the 2Hf–6Hf alloys, whereas no γ′ particles were found in the 9Hf alloy. The 2Hf alloy exhibits a yield stress anomaly at temperatures above 600°C, with the anomalous stress peak at about 700°C. However, the other three alloys show no yield stress anomaly.

    A comparison of pitting susceptibility of Q235 and HRB335 carbon steels used for reinforced concrete

    The phase structure and the pitting susceptibility of two carbon steels used for reinforced concrete, Q235 and HRB335, are investigated by phase observation, polarization curve measurements, electrochemical impedance spectroscopy, and Mott-Schottky analysis. It is found that Q235 is ferritic and HRB335 is pearlitic, and that Q235 is more susceptible than HRB335 to pitting induced by chloride ions. The polarization curves show that the breakdown potential of the passive film in saturated Ca(OH)2 solution containing 0.4 M NaCl is 0 V for Q235 and 0.34 V for HRB335. The Mott-Schottky analyses show that the passive films formed on Q235 and HRB335 in saturated Ca(OH)2 solution containing chloride ions behave like n-type semiconductors. The passive film formed on Q235 has a higher donor density, which explains why Q235 is more susceptible to pitting than HRB335.
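    For context, the donor density referred to in this Mott-Schottky analysis is conventionally obtained from the slope of the 1/C² versus potential plot of an n-type film. The relation below is the standard textbook form, not an equation quoted from the paper itself:

        % Mott-Schottky relation for an n-type semiconductor passive film:
        % C    - space-charge capacitance per unit area,  E    - applied potential,
        % E_fb - flat-band potential,                      N_D  - donor density,
        % \varepsilon, \varepsilon_0 - film dielectric constant and vacuum permittivity,
        % e    - elementary charge, k - Boltzmann constant, T - absolute temperature.
        \frac{1}{C^{2}} = \frac{2}{e\,\varepsilon\,\varepsilon_{0}\,N_{D}}
                          \left( E - E_{\mathrm{fb}} - \frac{kT}{e} \right)
        % The donor density follows from the slope of the linear region:
        \quad N_{D} = \frac{2}{e\,\varepsilon\,\varepsilon_{0}}
                      \left( \frac{\mathrm{d}(1/C^{2})}{\mathrm{d}E} \right)^{-1}

    A shallower slope therefore corresponds to a higher donor density, which is the sense in which the film on Q235 is reported to be more defective and hence more prone to pitting.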

    Large Language Models in Finance: A Survey

    Recent advances in large language models (LLMs) have opened new possibilities for artificial intelligence applications in finance. In this paper, we provide a practical survey focused on two key aspects of utilizing LLMs for financial tasks: existing solutions and guidance for adoption. First, we review current approaches employing LLMs in finance, including leveraging pretrained models via zero-shot or few-shot learning, fine-tuning on domain-specific data, and training custom LLMs from scratch. We summarize key models and evaluate their performance improvements on financial natural language processing tasks. Second, we propose a decision framework to guide financial professionals in selecting the appropriate LLM solution based on their use case constraints around data, compute, and performance needs. The framework provides a pathway from lightweight experimentation to heavy investment in customized LLMs. Lastly, we discuss limitations and challenges around leveraging LLMs in financial applications. Overall, this survey aims to synthesize the state-of-the-art and provide a roadmap for responsibly applying LLMs to advance financial AI. (Comment: Accepted by the 4th ACM International Conference on AI in Finance (ICAIF-23), https://ai-finance.or)
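    As a rough illustration of the lightweight end of the adoption pathway discussed in the survey (zero-shot and few-shot use of a pretrained model, before any fine-tuning or custom training), the Python sketch below builds prompts for a financial sentiment task. The generate() stub, the example headlines, and the task itself are placeholders chosen for illustration; they are not taken from the paper.

        # Minimal sketch of zero-shot vs. few-shot prompting for a financial NLP task
        # (sentiment classification of news headlines). The generate() stub stands in
        # for whatever LLM endpoint is actually used; it is NOT part of the surveyed work.

        def generate(prompt: str) -> str:
            """Placeholder for a call to a pretrained LLM; replace with a real client."""
            raise NotImplementedError("wire this to an LLM API or a local model")

        def zero_shot_prompt(headline: str) -> str:
            # Zero-shot: the task is described in plain language, no labelled examples.
            return (
                "Classify the sentiment of the following financial headline as "
                "positive, negative, or neutral.\n"
                f"Headline: {headline}\nSentiment:"
            )

        def few_shot_prompt(headline: str, examples: list[tuple[str, str]]) -> str:
            # Few-shot: prepend a handful of labelled examples so the pretrained model
            # can infer the task format without any parameter updates.
            shots = "\n".join(f"Headline: {h}\nSentiment: {s}" for h, s in examples)
            return (
                "Classify the sentiment of each financial headline as positive, "
                "negative, or neutral.\n"
                f"{shots}\nHeadline: {headline}\nSentiment:"
            )

        if __name__ == "__main__":
            examples = [
                ("Company X beats quarterly earnings expectations", "positive"),
                ("Regulator opens probe into Company Y accounting", "negative"),
            ]
            print(few_shot_prompt("Company Z guidance unchanged for FY2024", examples))

    Moving further along the pathway (fine-tuning or training from scratch) trades this simplicity for higher data and compute requirements, which is the trade-off the proposed decision framework is meant to navigate.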

    Adonis: Practical and Efficient Control Flow Recovery through OS-Level Traces

    Control flow recovery is critical to ensuring software quality, especially for large-scale software in production environments. However, the efficiency of most current control flow recovery techniques is compromised by their runtime overheads along with deployment and development costs. To tackle this problem, we propose a novel solution, Adonis, which harnesses OS-level traces, such as dynamic library calls and system call traces, to efficiently and safely recover control flows in practice. Adonis operates in two steps: it first identifies the call-sites of trace entries, then it executes a pair-wise symbolic execution to recover valid execution paths. This technique has several advantages. First, Adonis does not require the insertion of any probes into existing applications, thereby minimizing runtime cost. Second, given that OS-level traces are hardware-independent, Adonis can be implemented across various hardware configurations without the need for hardware-specific engineering efforts, thus reducing deployment cost. Third, as Adonis is fully automated and does not depend on manually created logs, it avoids additional development cost. We conducted an evaluation of Adonis on representative desktop applications and real-world IoT applications. Adonis faithfully recovers the control flow with 86.8% recall and 81.7% precision. Compared to the state-of-the-art log-based approach, Adonis not only covers all the execution paths that approach recovers, but also recovers 74.9% of the statements it cannot cover. In addition, the runtime cost of Adonis is 18.3× lower than that of the instrument-based approach, and its analysis time and storage cost (indicative of the deployment cost) are 50× and 443× smaller than those of the hardware-based approach, respectively. To facilitate future replication and extension of this work, we have made the code and data publicly available.
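    A toy Python sketch of the two-step idea described above — mapping OS-level trace entries to call-sites in a known control-flow graph and keeping only the static paths consistent with the observed trace — is given below. The tiny hand-written CFG and the brute-force path check are illustrative assumptions; Adonis itself resolves paths with pair-wise symbolic execution rather than enumeration.

        # Toy illustration of control-flow recovery from an OS-level trace.
        # The control-flow graph is modelled as: node -> (syscall emitted at that node
        # or None, list of successor nodes). Candidate paths are enumerated and kept
        # only if the sequence of syscalls they emit matches the observed trace.

        CFG = {
            "entry":       (None,     ["open_branch", "log_branch"]),
            "open_branch": ("openat", ["read_loop"]),
            "log_branch":  ("write",  ["exit"]),
            "read_loop":   ("read",   ["exit"]),
            "exit":        ("close",  []),
        }

        def recover_paths(trace, node="entry", path=None, consumed=0):
            """Return all paths through CFG whose emitted syscalls equal `trace`."""
            path = (path or []) + [node]
            syscall, successors = CFG[node]
            if syscall is not None:
                if consumed >= len(trace) or trace[consumed] != syscall:
                    return []          # this path is inconsistent with the observed trace
                consumed += 1
            if not successors:          # reached an exit node
                return [path] if consumed == len(trace) else []
            results = []
            for succ in successors:
                results.extend(recover_paths(trace, succ, path, consumed))
            return results

        if __name__ == "__main__":
            observed = ["openat", "read", "close"]   # e.g. collected via syscall tracing
            for p in recover_paths(observed):
                print(" -> ".join(p))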

    Multi-task deep neural network acoustic models with model adaptation using discriminative speaker identity for whisper recognition

    This paper presents a study on large vocabulary continuous whispered speech recognition (wLVCSR). wLVCSR provides the ability to use ASR equipment in public places without concern for disturbing others or leaking private information. However, the task of wLVCSR is much more challenging than normal LVCSR due to the absence of pitch, which not only causes the signal-to-noise ratio (SNR) of whispers to be much lower than that of normal speech but also leads to flatness and formant shifts in whisper spectra. Furthermore, the amount of whisper data available for training is much smaller than for normal speech. In this paper, multi-task deep neural network (DNN) acoustic models are deployed to solve these problems. Moreover, model adaptation is performed on the multi-task DNN to normalize speaker and environmental variability in whispers based on discriminative speaker identity information. On a Mandarin whisper dictation task, with 55 hours of whisper data, the proposed SI multi-task DNN model achieves a 56.7% character error rate (CER) improvement over a baseline Gaussian Mixture Model (GMM) discriminatively trained only on the whisper data. In addition, the CER of the proposed model for normal speech reaches 15.2%, which is close to the performance of a state-of-the-art DNN trained with one thousand hours of speech data. From this baseline, the model-adapted DNN gains a further 10.9% CER reduction over the generic model.
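    The multi-task arrangement described above — shared hidden layers feeding separate output layers for whispered and normal speech — can be sketched as follows. The use of PyTorch, the layer sizes, the feature dimension, and the senone counts are assumptions made for this illustration and do not reflect the authors' actual configuration.

        # Minimal sketch of a multi-task acoustic model: a shared feed-forward trunk
        # with two task-specific senone classifiers (one for whispered speech, one
        # for normal speech). All dimensions below are made up for illustration.
        import torch
        import torch.nn as nn

        class MultiTaskAcousticModel(nn.Module):
            def __init__(self, feat_dim=440, hidden=1024,
                         senones_whisper=3000, senones_normal=3000):
                super().__init__()
                self.shared = nn.Sequential(      # shared layers learn from both tasks
                    nn.Linear(feat_dim, hidden), nn.ReLU(),
                    nn.Linear(hidden, hidden), nn.ReLU(),
                )
                self.whisper_head = nn.Linear(hidden, senones_whisper)
                self.normal_head = nn.Linear(hidden, senones_normal)

            def forward(self, feats, task="whisper"):
                h = self.shared(feats)
                return self.whisper_head(h) if task == "whisper" else self.normal_head(h)

        if __name__ == "__main__":
            model = MultiTaskAcousticModel()
            frames = torch.randn(8, 440)            # a batch of spliced acoustic frames
            logits = model(frames, task="whisper")  # senone scores (pre-softmax)
            print(logits.shape)                     # torch.Size([8, 3000])

    Training on both whispered and normal speech through the shared trunk is what lets the scarce whisper data benefit from the much larger normal-speech corpus; the adaptation step then conditions this shared representation on speaker identity information.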

    Referring Image Segmentation via Cross-Modal Progressive Comprehension

    Referring image segmentation aims to segment the foreground masks of the entities that match the description given in a natural language expression. Previous approaches tackle this problem using implicit feature interaction and fusion between visual and linguistic modalities, but usually fail to explore the informative words of the expression to align features from the two modalities well for accurately identifying the referred entity. In this paper, we propose a Cross-Modal Progressive Comprehension (CMPC) module and a Text-Guided Feature Exchange (TGFE) module to effectively address this challenging task. Concretely, the CMPC module first employs entity and attribute words to perceive all the related entities that might be considered by the expression. Then, the relational words are adopted to highlight the correct entity as well as suppress other irrelevant ones by multimodal graph reasoning. In addition to the CMPC module, we further leverage a simple yet effective TGFE module to integrate the reasoned multimodal features from different levels with the guidance of textual information. In this way, features from multiple levels can communicate with each other and be refined based on the textual context. We conduct extensive experiments on four popular referring segmentation benchmarks and achieve new state-of-the-art performance. (Comment: Accepted by CVPR 2020. Code is available at https://github.com/spyflying/CMPC-Refse)
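    As a loose illustration of the kind of text-guided, multi-level fusion performed by the TGFE module, the sketch below gates visual feature maps from several backbone levels with a sentence embedding before summing them. The tensor shapes and the simple sigmoid gating are assumptions made for illustration only; they are not the actual CMPC/TGFE implementation (see the linked repository for that).

        # Toy text-guided gating of multi-level visual features: a sentence embedding
        # is projected to a per-channel gate and applied to each level's feature map
        # before the levels are fused. Only the general idea of letting textual
        # context modulate visual features is shown here.
        import torch
        import torch.nn as nn

        class TextGuidedGate(nn.Module):
            def __init__(self, text_dim=300, vis_channels=256, num_levels=3):
                super().__init__()
                self.gates = nn.ModuleList(
                    [nn.Linear(text_dim, vis_channels) for _ in range(num_levels)]
                )

            def forward(self, visual_feats, text_emb):
                # visual_feats: list of (B, C, H, W) tensors from different levels,
                #               assumed resized to a common spatial resolution
                # text_emb:     (B, text_dim) embedding of the referring expression
                fused = []
                for feat, gate in zip(visual_feats, self.gates):
                    g = torch.sigmoid(gate(text_emb))[:, :, None, None]  # (B, C, 1, 1)
                    fused.append(feat * g)                               # channel-wise gating
                return sum(fused)                                        # simple multi-level fusion

        if __name__ == "__main__":
            levels = [torch.randn(2, 256, 40, 40) for _ in range(3)]
            sentence = torch.randn(2, 300)
            out = TextGuidedGate()(levels, sentence)
            print(out.shape)   # torch.Size([2, 256, 40, 40])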

    A global long-term (1981–2000) land surface temperature product for NOAA AVHRR

    Land surface temperature (LST) plays an important role in research on climate change and various land surface processes. Before 2000, global LST products with relatively high temporal and spatial resolutions were scarce, despite a variety of operational satellite LST products. In this study, a global 0.05°×0.05° historical LST product is generated from NOAA advanced very-high-resolution radiometer (AVHRR) data (1981–2000), which includes three data layers: (1) instantaneous LST, a product generated by integrating several split-window algorithms with a random forest (RF-SWA); (2) orbital-drift-corrected (ODC) LST, a drift-corrected version of RF-SWA LST; and (3) monthly averages of ODC LST. For an assumed maximum uncertainty in emissivity and column water vapor content of 0.04 and 1.0 g cm⁻², respectively, evaluated against the simulation dataset, the RF-SWA method has a mean bias error (MBE) of less than 0.10 K and a standard deviation (SD) of 1.10 K. To compensate for the influence of orbital drift on LST, the retrieved RF-SWA LST was normalized with an improved ODC method. The RF-SWA LST was validated against in situ LST from Surface Radiation Budget (SURFRAD) sites and water temperatures obtained from the National Data Buoy Center (NDBC). Against the in situ LST, the RF-SWA LST has an MBE of 0.03 K with a range of −1.59 to 2.71 K, and an SD of 1.18 K with a range of 0.84 to 2.76 K. Since water temperature changes only slowly, the validation of ODC LST was limited to SURFRAD sites, for which the MBE is 0.54 K with a range of −1.05 to 3.01 K and the SD is 3.57 K with a range of 2.34 to 3.69 K, indicating good product accuracy. As global historical datasets, the new AVHRR LST products are useful for filling the gaps in long-term LST data. Furthermore, the new LST products can be used as input to related land surface models and environmental applications. In support of the scientific research community, the datasets are freely available at https://doi.org/10.5281/zenodo.3934354 for RF-SWA LST (Ma et al., 2020a), https://doi.org/10.5281/zenodo.3936627 for ODC LST (Ma et al., 2020c), and https://doi.org/10.5281/zenodo.3936641 for monthly averaged LST (Ma et al., 2020b).
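    The validation statistics reported above (MBE and SD of retrieved LST against in situ LST) are straightforward to reproduce once matched satellite/station pairs have been assembled; a small sketch follows, in which the file name and column names are placeholders rather than the product's actual layout.

        # Compute mean bias error (MBE) and standard deviation (SD) of retrieved LST
        # against in situ LST, the metrics used in the SURFRAD/NDBC validation above.
        # "matched_lst_pairs.csv" and its column names are hypothetical placeholders.
        import numpy as np
        import pandas as pd

        pairs = pd.read_csv("matched_lst_pairs.csv")        # one row per matchup
        diff = pairs["lst_retrieved_K"] - pairs["lst_insitu_K"]

        mbe = diff.mean()                                    # mean bias error (K)
        sd = diff.std(ddof=1)                                # standard deviation of the error (K)
        rmse = np.sqrt((diff ** 2).mean())                   # often reported alongside MBE/SD

        print(f"MBE = {mbe:.2f} K, SD = {sd:.2f} K, RMSE = {rmse:.2f} K")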