    Risk premium on crude oil futures prices

    This paper tests for a risk premium in crude oil futures prices in the US market. Historical data on five crude oil spot prices and one crude oil futures price are collected over the period 2011 to 2013. To examine the risk premium on crude oil futures prices in this sample, the paper employs the cost-of-carry model, with OLS and GLS as the main regression methods. The empirical results show that the risk premium on crude oil futures prices is positive. The paper concludes that if the spot price grows, the outlook for profit from investing in crude oil futures is optimistic.
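    As a rough illustration of the cost-of-carry framework the paper applies, the sketch below computes a fair futures price F = S * exp((r + u - y) * T) and reads the ex post risk premium off as the gap between the realized spot price and that futures price. All rates and prices here are hypothetical placeholders, not values from the paper's sample.

```python
import math

def cost_of_carry_futures(spot, r, storage, conv_yield, T):
    """Fair futures price under the cost-of-carry model:
    F = S * exp((r + storage - convenience_yield) * T)."""
    return spot * math.exp((r + storage - conv_yield) * T)

# Hypothetical inputs for illustration only (not the paper's data).
S0 = 95.0   # spot price, USD/barrel
r = 0.02    # annualized risk-free rate
u = 0.01    # annualized storage cost
y = 0.015   # annualized convenience yield
T = 0.25    # three months to maturity, in years

F0 = cost_of_carry_futures(S0, r, u, y, T)

# Ex post risk premium: realized spot at maturity minus the futures
# price agreed today; positive means a long futures position earned a premium.
S_T = 97.0  # hypothetical realized spot at maturity
print(f"futures price: {F0:.2f}, realized risk premium: {S_T - F0:.2f}")
```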

    An Angular Position-Based Two-Stage Friction Modeling and Compensation Method for RV Transmission System

    In an RV transmission system (RVTS), friction is closely related to rotational speed and angular position. However, classical friction models do not consider the influence of angular position on friction, resulting in limited accuracy in describing RVTS frictional behavior. For this reason, this paper proposes an angular position-based two-stage friction model for the RVTS, achieving a more accurate representation of RVTS friction. The proposed model consists of two parts, a pre-sliding model and a sliding model, separated by the maximum elastic deformation recovery angle of the RVTS obtained from loading-unloading tests. The pre-sliding friction behavior is treated as a spring model whose stiffness is determined by the angular position and by the acceleration when the velocity crosses zero, while the sliding friction model is established by an angular-segmented Stribeck function, with the friction parameters of adjacent segments linearly smoothed. Feedforward compensation based on the proposed model was performed on the RVTS, and its control performance was compared with that using the classical Stribeck model. The comparison shows that with the proposed friction model, the low-speed-motion smoothness of the RVTS improves by 14.2% and the maximum zero-crossing speed error is reduced by 37.5%, which verifies the validity of the proposed friction model as well as of the compensation method.
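    The sliding stage of such a model is typically built on the Stribeck curve. The sketch below is a minimal illustration of an angular-segmented Stribeck model with linear smoothing between segments, followed by a feedforward compensation step; the segment boundaries and all parameter values are schematic assumptions, not the paper's identified RVTS parameters.

```python
import numpy as np

def stribeck(v, F_c, F_s, v_s, sigma):
    """Classical Stribeck friction: Coulomb + static (Stribeck) term + viscous term."""
    return (F_c + (F_s - F_c) * np.exp(-(v / v_s) ** 2)) * np.sign(v) + sigma * v

# Angular segments of the output shaft (rad) and per-segment parameters
# (F_c, F_s, v_s, sigma); all values are schematic placeholders.
SEGMENT_EDGES = np.array([0.0, 2.0, 4.0, 2.0 * np.pi])
SEGMENT_PARAMS = np.array([
    [0.80, 1.20, 0.05, 0.30],
    [0.90, 1.35, 0.06, 0.32],
    [0.85, 1.25, 0.05, 0.31],
])

def segmented_friction(theta, v):
    """Sliding friction whose parameters depend on angular position;
    linear interpolation between segment centers smooths the transitions."""
    theta = np.mod(theta, 2.0 * np.pi)
    centers = 0.5 * (SEGMENT_EDGES[:-1] + SEGMENT_EDGES[1:])
    F_c, F_s, v_s, sigma = (np.interp(theta, centers, SEGMENT_PARAMS[:, k])
                            for k in range(4))
    return stribeck(v, F_c, F_s, v_s, sigma)

# Feedforward compensation: add the predicted friction torque to the
# torque command so it cancels the actual friction at (theta, v).
tau_cmd, theta, v = 2.0, 1.3, 0.02
print(f"compensated torque: {tau_cmd + segmented_friction(theta, v):.3f}")
```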

    DifferSketching: How Differently Do People Sketch 3D Objects?

    Multiple sketch datasets have been proposed to understand how people draw 3D objects. However, such datasets are often of small scale and cover a small set of objects or categories. In addition, these datasets contain freehand sketches mostly from expert users, making it difficult to compare the drawings by expert and novice users, while such comparisons are critical in informing more effective sketch-based interfaces for either user group. These observations motivate us to analyze how differently people with and without adequate drawing skills sketch 3D objects. We invited 70 novice users and 38 expert users to sketch 136 3D objects, which were presented as 362 images rendered from multiple views. This leads to a new dataset of 3,620 freehand multi-view sketches, which are registered with their corresponding 3D objects under certain views. Our dataset is an order of magnitude larger than the existing datasets. We analyze the collected data at three levels, i.e., sketch-level, stroke-level, and pixel-level, under both spatial and temporal characteristics, and within and across groups of creators. We found that the drawings by professionals and novices show significant differences at the stroke level, both intrinsically and extrinsically. We demonstrate the usefulness of our dataset in two applications: (i) freehand-style sketch synthesis, and (ii) posing it as a potential benchmark for sketch-based 3D reconstruction. Our dataset and code are available at https://chufengxiao.github.io/DifferSketching/. Comment: SIGGRAPH Asia 2022 (Journal Track).

    A Monte Carlo Study of Erraticity Behavior in Nucleus-Nucleus Collisions at High Energies

    It is demonstrated using Monte Carlo simulation that in different nucleus-nucleus collision samples, the increase of the fluctuation of event factorial moments with decreasing phase-space scale, called erraticity, is still dominated by statistical fluctuations. This result does not depend on the Monte Carlo models, nor on the concrete conditions, e.g. the collision energy, the mass of the colliding nuclei, the cut of phase space, etc. This means that the erraticity method is insensitive to the appearance of novel physics in the central collisions of heavy nuclei. Comment: 9 pages, 4 figures (in eps form).
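    For context, the erraticity analysis rests on event-by-event factorial moments. A minimal sketch, assuming the standard definitions from the factorial-moment literature: split phase space into M bins, compute the horizontal factorial moment F_q per event, normalize to Phi_q = F_q / <F_q>, and track C_{p,q} = <Phi_q^p> as M grows. The toy events below carry purely statistical fluctuations, mirroring the dominance the abstract describes; bin counts, event counts, and moment orders are illustrative.

```python
import numpy as np

def event_factorial_moment(counts, q):
    """Horizontal factorial moment of one event from per-bin multiplicities:
    F_q = M**(q-1) * sum_m n_m(n_m-1)...(n_m-q+1) / (sum_m n_m)**q."""
    M = len(counts)
    falling = np.ones(M)
    for k in range(q):
        falling *= np.clip(counts - k, 0, None)
    n_tot = counts.sum()
    return M ** (q - 1) * falling.sum() / n_tot ** q if n_tot >= q else 0.0

def erraticity_C_pq(events, q=2, p=2):
    """C_{p,q} = <Phi_q**p> with Phi_q = F_q(event) / <F_q>; its growth
    with the number of bins M is the erraticity signal."""
    F = np.array([event_factorial_moment(e, q) for e in events])
    phi = F / F.mean()
    return (phi ** p).mean()

# Toy events with purely statistical fluctuations: 40 particles thrown
# uniformly over M bins, for increasingly fine binning.
rng = np.random.default_rng(0)
for M in (4, 8, 16, 32):
    events = [np.bincount(rng.integers(0, M, size=40), minlength=M)
              for _ in range(2000)]
    print(M, round(erraticity_C_pq(events), 4))
```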

    Extracorporeal Delivery of a Therapeutic Enzyme

    Therapeutic enzymes delivered by intravenous injection or infusion, to remove circulating harmful small biochemicals/substrates that cause or aggravate certain chronic diseases, suffer from short half-lives after delivery and from immune responses after repeated administration over long periods. Accordingly, a novel, generally applicable extracorporeal delivery approach for a therapeutic enzyme is proposed: a conventional hemodialysis device bearing a dialyzer, two pumps, and connecting tubes is refitted to retain the routine extracorporeal blood circuit but with a minimal, closed dialysate circuit that circulates the therapeutic enzyme in the dialysate. A special quantitative index was derived to reflect the pharmacological action, and thus the pharmacodynamics, of the delivered enzyme. Tested with hyperuricemic blood in vitro and in hyperuricemic geese, a native uricase delivered extracorporeally remained active in the dialysate for periods much longer than in vivo after vein injection, and exhibited the expected pharmacodynamics, removing uric acid from hyperuricemic blood in vitro and multiple forms of uric acid in hyperuricemic geese. Therefore, extracorporeal delivery of therapeutic enzymes is effective for removing unwanted circulating small biochemicals/substrates, and is expected to avoid the immunogenicity problems of repeated long-term administration, since the enzyme never contacts macromolecules and cells in the body.

    NeuroSeg-II: A deep learning approach for generalized neuron segmentation in two-photon Ca2+ imaging

    The development of two-photon microscopy and Ca2+ indicators has enabled the recording of multiscale neuronal activities in vivo and thus advanced the understanding of brain functions. However, it is challenging to perform automatic, accurate, and generalized neuron segmentation when processing a large amount of imaging data. Here, we propose a novel deep-learning-based neural network, termed NeuroSeg-II, to conduct automatic neuron segmentation for in vivo two-photon Ca2+ imaging data. The network architecture is based on the Mask region-based convolutional neural network (Mask R-CNN) but adds an attention mechanism and modified feature hierarchy modules. The attention module focuses computation on neuron regions in the imaging data, while the enhanced feature hierarchy extracts feature information at diverse levels. To incorporate both spatial and temporal information in our data processing, we fused the average-projection image with a correlation map that extracts the temporal information of active neurons, and expressed the integrated information as two-dimensional (2D) images. To achieve generalized neuron segmentation, we adopted a hybrid learning strategy, training our model with imaging data from different labs, including multiscale data acquired with different Ca2+ indicators. The results show that our approach achieves promising segmentation performance across imaging scales and Ca2+ indicators, even on the challenging large field-of-view mesoscopic images. Compared against state-of-the-art neuron segmentation methods for two-photon Ca2+ imaging data, our approach achieved the highest accuracy on a publicly available dataset. Thus, NeuroSeg-II offers good segmentation accuracy and a convenient training and testing process.
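    A minimal sketch of the spatial-temporal fusion step described above, assuming the raw data is a (T, H, W) movie. The correlation-map definition used here (mean Pearson correlation of each pixel's trace with its four spatial neighbors) is one common choice in the Ca2+ imaging literature, not necessarily the exact formulation in NeuroSeg-II.

```python
import numpy as np

def average_projection(movie):
    """Mean-intensity image over time (static structure); movie: (T, H, W)."""
    return movie.mean(axis=0)

def correlation_map(movie, eps=1e-8):
    """Per-pixel mean Pearson correlation with the four spatial neighbors;
    pixels whose traces co-fluctuate (active neurons) stand out."""
    z = (movie - movie.mean(axis=0)) / (movie.std(axis=0) + eps)
    T = movie.shape[0]
    corr = np.zeros(movie.shape[1:])
    for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        corr += (z * np.roll(z, shift, axis=(1, 2))).sum(axis=0) / T
    return corr / 4.0

def fuse_channels(movie):
    """Stack the two 2D maps as input channels for a segmentation network."""
    norm = lambda x: (x - x.min()) / (x.max() - x.min() + 1e-8)
    return np.stack([norm(average_projection(movie)),
                     norm(correlation_map(movie))], axis=0)  # (2, H, W)

# Toy movie: 100 frames of 64x64 noise with one co-active "neuron" blob.
rng = np.random.default_rng(1)
movie = rng.normal(size=(100, 64, 64))
movie[:, 20:26, 30:36] += rng.normal(size=100)[:, None, None]
print(fuse_channels(movie).shape)  # (2, 64, 64)
```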

    Identification of heavy-flavour jets with the CMS detector in pp collisions at 13 TeV

    Many measurements and searches for physics beyond the standard model at the LHC rely on the efficient identification of heavy-flavour jets, i.e. jets originating from bottom or charm quarks. In this paper, the discriminating variables and the algorithms used for heavy-flavour jet identification during the first years of operation of the CMS experiment in proton-proton collisions at a centre-of-mass energy of 13 TeV are presented. Heavy-flavour jet identification algorithms have been improved compared to those used previously at centre-of-mass energies of 7 and 8 TeV. For jets with transverse momenta in the range expected in simulated $\mathrm{t}\overline{\mathrm{t}}$ events, these new developments result in an efficiency of 68% for the correct identification of a b jet for a probability of 1% of misidentifying a light-flavour jet. The improvement in relative efficiency at this misidentification probability is about 15%, compared to previous CMS algorithms. In addition, for the first time algorithms have been developed to identify jets containing two b hadrons in Lorentz-boosted event topologies, as well as to tag c jets. The large data sample recorded in 2016 at a centre-of-mass energy of 13 TeV has also allowed the development of new methods to measure the efficiency and misidentification probability of heavy-flavour jet identification algorithms. The heavy-flavour jet identification efficiency is measured with a precision of a few per cent at moderate jet transverse momenta (between 30 and 300 GeV) and of about 5% at the highest jet transverse momenta (between 500 and 1000 GeV).
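    To make the quoted working point concrete (68% b-jet efficiency at a 1% light-jet misidentification probability), the generic sketch below shows how such a point is read off a tagger's discriminator distributions: pick the threshold that yields the target mistag rate on light jets, then measure the b-jet efficiency at that threshold. The score distributions are synthetic stand-ins, not CMS data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic discriminator scores: b jets peak high, light jets peak low.
disc_b = rng.beta(5, 2, size=100_000)
disc_light = rng.beta(2, 8, size=100_000)

def working_point(disc_sig, disc_bkg, target_mistag=0.01):
    """Threshold giving the target misidentification probability on
    background jets, plus the signal efficiency at that threshold."""
    cut = np.quantile(disc_bkg, 1.0 - target_mistag)
    return cut, (disc_sig > cut).mean()

cut, eff_b = working_point(disc_b, disc_light)
print(f"threshold: {cut:.3f}, b-jet efficiency at 1% mistag: {eff_b:.1%}")
```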