
    Sedimentation rates of the middle Miocene Clarkia Lake deposit, Northern Idaho, USA

    A global warming phase related to the onset of the Columbia River volcanism in the USA is recorded in the middle Miocene Clarkia Lake deposit, which yields abundant, extraordinarily preserved fossil leaves of subtropical to warm-temperate species [1]. These leaf fossils are found in varve-like laminated successions that presumably represent seasonal phases interleaved with volcanic-ash layers [2]. Despite being studied for over four decades, this paleolake deposit remains poorly constrained in its time-scale, and defining its sedimentation rate is pivotal for reconstructing paleoclimatic conditions during the middle Miocene. X-Ray Fluorescence (XRF) scanning of key intervals offered insights into the elemental-ratio distribution in the Clarkia Lake deposit, which might hold the answer to the sedimentation-rate question. Scans at accelerating voltages of 10, 30, and 50 kV detected counts of Mg, Al, Si, P, S, Cl, Ar, K, Ca, Ti, Cr, Mn, Fe, Co, Ni, Cu, Zn, Ga, As, Br, Rb, Sr, Y, Zr, Nb, Mo, and Ba. Plots of ratios using 702 element combinations show a strong, positive correlation between the observed varve-like structures and the S/Rb and Zr/Rb ratios. The former ratio is interpreted as a tracer of fluvial dilution of the presumably constant rate of reduced-sulfur deposition, and the latter denotes variation in the grain-size distribution. For both ratios, low counts represent light-colored, coarse-grained, quartz-rich layers, while high counts correspond to dark-colored, fine-grained, organic-rich layers. Volcanic-ash layers are distinguishable by enhanced signals of Si, Al, Ti, Zn, and Rb as well as low counts of Fe and Mn; ratios of Zn and trace elements clearly delineate the extent of these layers along the profiles. Preliminary statistical treatment of this XRF data, employing spectral analysis, suggests depositional cycles at every 1.
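    The spectral-analysis step described above can be illustrated with a minimal sketch: compute a log-ratio from per-element count series and look for a dominant periodicity in its periodogram. The step size, cycle length, and noise levels below are placeholders injected so the script runs, not values from the study.

```python
import numpy as np
from scipy.signal import detrend, periodogram

# Synthetic stand-in for an XRF scan: counts sampled every 0.2 mm down-core,
# with a 1.5 mm cycle injected so the workflow has something to find
# (step size and cycle length are placeholders, not the study's values).
step_mm = 0.2
depth = np.arange(0, 200, step_mm)
rb = 1000 + 50 * np.random.randn(depth.size)
zr = 800 * (1 + 0.3 * np.sin(2 * np.pi * depth / 1.5)) + 40 * np.random.randn(depth.size)

# Log-ratios damp closed-sum and matrix effects in raw XRF counts.
zr_rb = np.log(zr / rb)

# A dominant peak in the periodogram at frequency f (cycles/mm) implies a
# depositional cycle every 1/f millimetres.
freqs, power = periodogram(detrend(zr_rb), fs=1.0 / step_mm)
peak = np.argmax(power[1:]) + 1          # skip the zero-frequency bin
print(f"dominant cycle thickness ~ {1.0 / freqs[peak]:.2f} mm")
```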

    Context-Aware Telco Outdoor Localization

    Recent years have witnessed fast growth in telecommunication (Telco) techniques from 2G to the upcoming 5G. Precise outdoor localization is important for Telco operators to manage, operate and optimize Telco networks. Differing from GPS, Telco localization is a technique employed by Telco operators to localize outdoor mobile devices using measurement report (MR) data. When given MR samples containing noisy signals (e.g., caused by Telco signal interference and attenuation), Telco localization often suffers from high errors. To this end, the main focus of this paper is how to improve Telco localization accuracy via algorithms that detect and repair outlier positions with high errors. Specifically, we propose a context-aware Telco localization technique, namely RLoc, which consists of three main components: a machine-learning-based localization algorithm, a detection algorithm to find flawed samples, and a repair algorithm to replace outlier localization results with better ones (ideally ground-truth positions). Unlike most existing works, which detect and repair every flawed MR sample independently, we take into account the spatio-temporal locality of MR locations and exploit trajectory context to detect and repair flawed positions. Our experiments on real MR data sets from 2G GSM and 4G LTE Telco networks verify that RLoc can greatly improve Telco localization accuracy. For example, on a large 4G MR data set, RLoc achieves a median error of 32.2 meters, around 17.4 percent better than the state of the art. Peer reviewed
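    A minimal sketch of the trajectory-context idea (not the paper's actual RLoc algorithm): treat the time-ordered sequence of localized positions as a track, flag positions that deviate far from a sliding-median reference, and repair them by interpolating between trusted neighbours. The window size and 150 m threshold are illustrative assumptions.

```python
import numpy as np

def detect_and_repair(track, window=5, thresh=150.0):
    """Flag positions that deviate from a sliding-median trajectory and
    replace them by linear interpolation of their trusted neighbours.

    track : (n, 2) array of estimated (x, y) positions in metres,
            ordered by timestamp (the spatio-temporal context).
    """
    track = np.asarray(track, dtype=float)
    n, half = len(track), window // 2
    # Sliding median as a robust local reference for each position.
    ref = np.array([np.median(track[max(0, i - half):i + half + 1], axis=0)
                    for i in range(n)])
    outlier = np.linalg.norm(track - ref, axis=1) > thresh
    repaired = track.copy()
    good = ~outlier
    idx = np.arange(n)
    for d in range(2):  # interpolate x and y coordinates separately
        repaired[outlier, d] = np.interp(idx[outlier], idx[good], track[good, d])
    return repaired, outlier
```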

    Visualization Challenges of Virtual Reality 3D Images in New Media Environments

    This paper develops a three-dimensional image visualization pipeline: a surface-rendering 3D image reconstruction algorithm is used to obtain a three-dimensional data field, color adjustment based on global color correction and local Poisson fusion optimizes the splicing seams between texture color blocks, and the visualization technology for three-dimensional images is updated accordingly. The digital display design is partitioned, and a virtual reality visualization display is created using 3D modeling in combination with the new media environment. Design steps for visualizing virtual reality three-dimensional images in the new media environment are proposed by combining the key algorithms of three-dimensional image visualization from the preceding section. In the context of new media applications that display 3D images, the concept of artifact shape in reconstructed images is introduced to analyze the quality of 3D image reconstruction, taking the Herman and Shepp-Logan phantoms as research objects. Test experiments examine the visual impact of texture mapping algorithms, and different sampling intervals are set to measure the drawing time of 3D reconstruction. Across the data sizes and image counts of different organizations, the processing time of the surface-rendering 3D image reconstruction algorithm is no more than 2 s. The denser the sampling points, the higher the degree of fitting, the more completely the isosurface information is preserved, the finer the 3D reconstruction, and the higher the image quality.
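    Surface-rendering ("face-drawing") reconstruction is typically implemented with a marching-cubes style isosurface extraction. A minimal sketch using scikit-image is shown below; the random volume, isosurface level, and step sizes are placeholders, with step_size standing in for the sampling interval whose effect on drawing time the paper measures.

```python
import time
import numpy as np
from skimage import measure  # scikit-image

# Hypothetical reconstructed 3D data field (placeholder for a real phantom).
volume = np.random.rand(128, 128, 128).astype(np.float32)

# step_size plays the role of the sampling interval: larger steps render
# faster but preserve less isosurface detail.
for step in (1, 2, 4):
    t0 = time.perf_counter()
    verts, faces, normals, values = measure.marching_cubes(
        volume, level=0.5, step_size=step)
    print(f"step={step}: {len(faces)} faces in {time.perf_counter() - t0:.2f} s")
```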

    High-speed surface-property recognition by 140-GHz frequency

    In the field of integrated sensing and communication, there is a growing need for advanced environmental perception. The terahertz (THz) frequency band, significant for ultra-high-speed data connections, shows promise in environmental sensing, particularly in detecting surface textures crucial for autonomous systems' decision-making. However, traditional numerical methods for parameter estimation in these environments struggle with accuracy, speed, and stability, especially in high-speed scenarios such as vehicle-to-everything communications. This study introduces a deep learning approach for identifying surface roughness using a 140-GHz setup tailored for high-speed conditions. A high-speed data acquisition system was developed to mimic real-world scenarios, and a diverse set of rough surface samples was collected to build realistic high-speed datasets for training the models. The model was trained and validated in three challenging scenarios: random occlusions, sparse data, and narrow-angle observations. The results demonstrate the method's effectiveness in high-speed conditions, suggesting terahertz frequencies' potential in future sensing and communication applications. Comment: Submitted to IEEE Transactions on Terahertz Science and Technology
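    As a rough illustration of the kind of model such a study might use (the paper does not specify its architecture), here is a minimal 1-D convolutional classifier that maps a received-signal trace to a surface-roughness class; the trace length, channel counts, and number of classes are all assumptions.

```python
import torch
import torch.nn as nn

# Minimal 1-D CNN sketch: classify a received 140-GHz signal trace
# (hypothetical length-256 profile) into one of n_classes roughness grades.
class RoughnessNet(nn.Module):
    def __init__(self, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
        )
        self.classifier = nn.Linear(32 * 16, n_classes)

    def forward(self, x):              # x: (batch, 1, 256)
        z = self.features(x)           # -> (batch, 32, 16)
        return self.classifier(z.flatten(1))

logits = RoughnessNet()(torch.randn(8, 1, 256))  # -> (8, n_classes)
```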

    Absolute frequency measurements with a robust, transportable ^{40}Ca^{+} optical clock

    We constructed a transportable ^{40}Ca^{+} optical clock (with an estimated minimum systematic shift uncertainty of 1.3 × 10^{-17} and a stability of 5 × 10^{-15}/√τ) that can operate outside the laboratory. We transported it from the Innovation Academy for Precision Measurement Science and Technology, Chinese Academy of Sciences, Wuhan, to the National Institute of Metrology, Beijing. The absolute frequency of the 729 nm clock transition was measured for up to 35 days by tracing its frequency to the second of the International System of Units. Several improvements were implemented in the measurement process, such as an increased effective up-time of 91.3% for the ^{40}Ca^{+} optical clock over the 35-day period, a reduced statistical uncertainty in the comparison between the optical clock and a hydrogen maser, and longer measurement times to reduce the uncertainty of the frequency traceability link. The absolute frequency measurement of the ^{40}Ca^{+} optical clock yielded a value of 411042129776400.26(13) Hz with a fractional uncertainty of 3.2 × 10^{-16}, a factor of 1.7 lower than our previous result. As a result of the increased operating rate of the optical clock, the accuracy of the 35-day absolute frequency measurement is comparable to the best results of different institutions worldwide based on different optical frequency measurements. Comment: 15 pages, 5 figures
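    The quoted fractional uncertainty follows directly from the measured value: dividing the 0.13 Hz uncertainty by the absolute frequency gives roughly 3.2 × 10^{-16}, as a quick check confirms.

```python
# Sanity check of the quoted fractional uncertainty: 0.13 Hz on the
# measured 729 nm clock-transition frequency.
f = 411_042_129_776_400.26     # Hz
u = 0.13                       # Hz
print(f"fractional uncertainty = {u / f:.1e}")   # ~3.2e-16, as quoted
```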

    Characterization of the Differential Aroma Compounds among 10 Different Kinds of Premium Soy Sauce

    Investigating the aroma differences among different kinds of soy sauce is beneficial for controlling flavor quality and improving processing, from the perspectives of raw materials and brewing techniques. The aroma compounds in ten premium soy sauces (CB, HT1, HT2, LH, LJJ1, LJJ2, QH, XH1, XH2, WZ) were qualitatively and quantitatively analyzed by solid-phase extraction and solid-phase microextraction combined with gas chromatography-mass spectrometry (GC-MS). The contributions of aroma compounds to the aroma characteristics of premium soy sauce were determined by sensory evaluation, calculation of odor activity values (OAV), and partial least squares regression (PLSR) analysis. A total of 86 volatile compounds were identified in the 10 premium soy sauces, 44 of which were detected in all 10. Thirty aroma compounds with OAV ≥ 1 were detected; 5-ethyl-4-hydroxy-2-methyl-3(2H)-furanone showed the highest OAV (373~4698), followed by 4-methoxy-2,5-dimethyl-3(2H)-furanone (0~1473). WZ soy sauce had a strong smoky aroma owing to the greatest variety of phenolic and ketone compounds. The overall aroma profile of CB soy sauce was the weakest, with the lowest concentration of ethanol (25.775 μg/L) but the highest content of pyrazine compounds (182.796 μg/L), of which 2,6-dimethylpyrazine accounted for 66.256 μg/L. XH1 soy sauce had a strong sauce aroma and alcoholic notes, owing to the highest ethanol content (147.257 μg/L) and a higher phenolic content; for example, its concentration of 4-ethyl-2-methoxyphenol (18240.479 μg/L) was the highest. XH2 soy sauce had a strong malty aroma. The contents of 2-methyl-1-propanol (51.223 μg/L) and 2,3-butanediol (57921.798 μg/L) in LH soy sauce were the highest among the samples, as was the content of 1-octen-3-ol (61.219 μg/L) in HT1 soy sauce. The combination of OAV and PLSR analysis confirmed that ethyl acetate, 3-hydroxy-2-butanone, 2,3-butanediol, 3-ethyl-2,5-dimethylpyrazine, 4-methoxy-2,5-dimethyl-3(2H)-furanone, 4-ethylguaiacol, and 4-ethylphenol were the key aroma-active components contributing to the aroma differences among the 10 kinds of premium soy sauce.
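    The OAV used above is simply the ratio of a compound's concentration to its odor threshold; a compound with OAV ≥ 1 is considered likely to contribute to the perceived aroma. A minimal sketch follows, with an illustrative concentration and a hypothetical threshold rather than the paper's measured values.

```python
# Odor activity value (OAV) = concentration / odor threshold; OAV >= 1 means
# the compound is likely to contribute to the perceived aroma. The numbers
# below are illustrative placeholders, not the paper's measurements.
def oav(concentration_ug_l: float, threshold_ug_l: float) -> float:
    return concentration_ug_l / threshold_ug_l

# e.g. a furanone at 94 ug/L with a hypothetical 0.04 ug/L threshold:
print(oav(94.0, 0.04))   # 2350.0 -> a strong aroma contributor
```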

    Applications of advanced metrology for understanding the effects of drying temperature in the lithium-ion battery electrode manufacturing process

    The performance of lithium-ion batteries is determined by the architecture and properties of electrodes formed during manufacturing, particularly in the drying process, when solvent is removed and the electrode structure is formed. Temperature is one of the most dominant parameters influencing the process, and therefore a comparison of temperature effects on both NMC622-based cathodes (PVDF-based binder) and graphite-based anodes (water-based binder) dried at RT, 60, 80, 100 and 120 °C has been undertaken. X-ray computed tomography showed that NMC622 particles concentrated at the surface of the cathode coating except when dried at 60 °C. However, anodes showed similar graphite distributions at all temperatures. The discharge capacities for the cathodes dried at 60, 80, 100 and 120 °C displayed the trend 60 °C < 80 °C < 100 °C < 120 °C as C-rate was increased, which was consistent with the trends found in adhesion testing between 60 and 120 °C. Focused-ion beam scanning electron microscopy and energy-dispersive X-ray spectroscopy suggested that the F-rich binder distribution was largely insensitive to temperature for cathodes. In contrast, conductivity-enhancing fine carbon agglomerated on the upper surface of the active NMC particles in the cathode as temperature increased. The cathode dried at RT had the highest adhesion force of 0.015 N mm−1 and the best electrochemical rate performance. Conversely, drying temperature had no significant effect on the electrochemical performance of the anode, consistent with only a relatively small change in adhesion, related to the use of lower-adhesion water-based binders.

    Insights into surface chemistry down to nanoscale: an accessible colour hyperspectral imaging approach for scanning electron microscopy

    Chemical imaging (CI) is the spatial identification of molecular chemical composition and is critical to characterising the (in-)homogeneity of functional material surfaces. Nanoscale CI on bulk functional material surfaces is a longstanding challenge in materials science and is addressed here. We demonstrate the feasibility of surface-sensitive CI in the scanning electron microscope (SEM) using colour-enriched secondary electron hyperspectral imaging (CSEHI). CSEHI is a new concept in the SEM, where secondary electron emissions in up to three energy ranges are assigned to RGB (red, green, blue) image colour channels. The energy ranges are applied to a hyperspectral image volume that is collected in as little as 50 s, and they can be defined manually or automatically. Manual application requires additional information from the user, as first explained and demonstrated for a lithium metal anode (LMA) material, followed by manual CSEHI for a range of materials from art history to zoology. We introduce automated CSEHI, eliminating the need for additional user information, by finding energy ranges using a non-negative matrix factorization (NNMF) based method. Automated CSEHI is evaluated threefold: (1) benchmarking against manual CSEHI on LMA; (2) tracking controlled changes to LMA surfaces; (3) comparing automated CSEHI with previously published manual CI results to reveal nanostructures in peacock feather and spider silk. Based on this evaluation, CSEHI is well placed to differentiate and track several lithium compounds formed through a solution reaction mechanism on an LMA surface (e.g., lithium carbonate, lithium hydroxide and lithium nitride). CSEHI also provided insights into the chemical distribution on the surfaces of samples from art history (mineral phases) to zoology (di-sulphide bridge localisation) that are hidden from existing surface analysis techniques. Hence, the CSEHI approach has the potential to impact the way materials are analysed for scientific and industrial purposes.
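    A minimal sketch of how an NNMF-based selection of energy ranges might look (the paper's exact procedure is not reproduced here): factorize the hyperspectral volume into three non-negative spectral components and map the per-pixel component weights onto RGB channels. The cube dimensions and scikit-learn settings are assumptions.

```python
import numpy as np
from sklearn.decomposition import NMF

# Hypothetical hyperspectral SE volume: (rows, cols, energy_channels).
cube = np.random.rand(64, 64, 32)
X = cube.reshape(-1, 32)                    # pixels x energy channels

# Factorize into 3 non-negative components, one per RGB colour channel.
model = NMF(n_components=3, init="nndsvd", max_iter=500)
weights = model.fit_transform(X)            # per-pixel component abundances
spectra = model.components_                 # (3, 32) component spectra

# Map each component's abundance onto an image colour channel.
rgb = weights.reshape(64, 64, 3)
rgb /= rgb.max(axis=(0, 1), keepdims=True)  # normalise each channel to [0, 1]
```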

    Autonomous motion and control of lower limb exoskeleton rehabilitation robot

    Introduction: The lower limb exoskeleton rehabilitation robot should perform gait planning based on the patient's motor intention and training status and provide multimodal and robust control schemes in the control strategy to enhance patient participation.

    Methods: This paper proposes an adaptive particle swarm optimization admittance control algorithm (APSOAC), which adaptively optimizes the weights and learning factors of the PSO algorithm to avoid the particle swarm falling into local optima. The proposed improved adaptive particle swarm algorithm adjusts the stiffness and damping parameters of the admittance control online to reduce the interaction force between the patient and the robot and adaptively plans the patient's desired gait profile. In addition, this study proposes a dual RBF neural network adaptive sliding mode controller (DRNNASMC) to track the gait profile, compensate for friction forces and external perturbations generated in the human-robot interaction using the RBF network, compute the required torque for each joint motor based on the lower limb exoskeleton dynamics model, and perform stability analysis based on Lyapunov theory.

    Results and discussion: Finally, the efficiency of the APSOAC and DRNNASMC algorithms is demonstrated by active and passive walking experiments with three healthy subjects, respectively.
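    A minimal sketch of the admittance-control idea that APSOAC tunes (not the authors' implementation): integrate the admittance law one step at a time, and define a cost over a recorded interaction-force profile that a PSO-style search would minimise when choosing the stiffness k and damping b. All signals and parameters below are illustrative.

```python
import numpy as np

def admittance_step(x, dx, f_ext, m, b, k, x_ref=0.0, dt=0.001):
    """One discrete step of the admittance law
    m*ddx + b*dx + k*(x - x_ref) = f_ext.
    k and b are the parameters a PSO-style search would tune online to
    keep the human-robot interaction force small."""
    ddx = (f_ext - b * dx - k * (x - x_ref)) / m
    dx += ddx * dt
    x += dx * dt
    return x, dx

def cost(params, f_log, m=1.0, dt=0.001):
    """Toy cost for a candidate (k, b): mean residual force felt by the
    user over a recorded force profile - the quantity a PSO would minimise."""
    k, b = params
    x = dx = 0.0
    err = 0.0
    for f in f_log:
        x, dx = admittance_step(x, dx, f, m, b, k, dt=dt)
        err += abs(f - k * x - b * dx)
    return err / len(f_log)

# Hypothetical interaction-force recording and one candidate evaluation.
f_log = 5.0 * np.sin(np.linspace(0.0, 2 * np.pi, 1000))
print(cost((200.0, 15.0), f_log))
```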