629 research outputs found
AN ANALYSIS OF THE DIFFERENCE BETWEEN COCOA FUTURES HEDGING AND ROBUSTA COFFEE FUTURES CROSS HEDGING ON THE JAKARTA FUTURES EXCHANGE (BURSA BERJANGKA JAKARTA), PERIOD 2012-2016
This study analyzes the difference between hedging with cocoa futures and cross hedging with robusta coffee futures in minimizing risk in the physical cocoa market, by comparing the return variances produced by the two futures contracts. The data are daily prices of cocoa spot, cocoa futures, and robusta coffee futures traded on the Jakarta Futures Exchange (Bursa Berjangka Jakarta, BBJ) over the period 2012-2016. The analytical tools are the Pearson correlation test, which examines the relationship between spot and futures prices when hedging or cross hedging; the unit root test, which checks the stationarity of the data before the difference test; simple regression, which estimates the hedge ratio and the cross-hedge ratio; and the independent-samples t-test, which compares the return variances produced by the two futures contracts. The results show no difference between the return variances produced by the two. In other words, managing risk in the physical cocoa market through hedging or through cross hedging carries the same level of risk, so hedging with cocoa futures and cross hedging with robusta coffee futures can both be used to minimize risk in the physical cocoa market.
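As a rough sketch of the estimation described above (the variable names and data layout are assumptions, not the study's code), the minimum-variance hedge ratio can be obtained from a simple regression of spot returns on futures returns, and the hedged return variances can then be compared:

```python
# Sketch of the hedge-ratio estimation described above; the series names
# (cocoa_spot, cocoa_fut, coffee_fut) are hypothetical, not from the study.
import pandas as pd
from scipy import stats

def hedge_ratio(spot: pd.Series, futures: pd.Series) -> float:
    """Minimum-variance hedge ratio: slope of a simple regression of spot
    returns on futures returns (cov(spot, futures) / var(futures))."""
    s = spot.pct_change().dropna()
    f = futures.pct_change().dropna()
    s, f = s.align(f, join="inner")
    slope, _intercept, _r, _p, _se = stats.linregress(f, s)
    return slope

def hedged_returns(spot: pd.Series, futures: pd.Series, h: float) -> pd.Series:
    """Returns of a position that is long spot and short h units of futures."""
    s = spot.pct_change().dropna()
    f = futures.pct_change().dropna()
    s, f = s.align(f, join="inner")
    return s - h * f

# Direct hedge (cocoa futures) vs. cross hedge (robusta coffee futures):
# r_direct = hedged_returns(cocoa_spot, cocoa_fut, hedge_ratio(cocoa_spot, cocoa_fut))
# r_cross  = hedged_returns(cocoa_spot, coffee_fut, hedge_ratio(cocoa_spot, coffee_fut))
# The study compares var(r_direct) with var(r_cross); scipy.stats.levene(r_direct,
# r_cross) is one standard way to test for a difference in variances.
```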
Problems in Learning under Limited Resources and Information
The main theme of this thesis is to investigate how learning problems can be solved in the face of limited resources and with limited information on which to base inferences. We study feature-efficient prediction, where each feature comes with a cost and the goal is to construct a good predictor at training time whose total cost does not exceed a given budget constraint. We also study complexity-theoretic properties of models for recovering social networks when the only available knowledge is how people in the network vote or how information propagates through the network.
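As a generic illustration of the budgeted setting (not the algorithms developed in the thesis), a greedy cost-aware forward selection keeps adding the feature with the best validation-score gain per unit cost until the budget is spent:

```python
# Minimal illustration of feature-efficient learning under a budget: greedily
# add the feature with the best score gain per unit cost until the budget is
# exhausted. Generic sketch only; feature costs are assumed to be positive.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def budgeted_forward_selection(X, y, costs, budget):
    """X: (n, d) feature matrix, costs: per-feature acquisition cost,
    budget: cap on the total cost of the selected features."""
    selected, spent, best_score = [], 0.0, 0.0
    remaining = set(range(X.shape[1]))
    while remaining:
        candidates = []
        for j in remaining:
            if spent + costs[j] > budget:
                continue
            cols = selected + [j]
            score = cross_val_score(LogisticRegression(max_iter=1000),
                                    X[:, cols], y, cv=3).mean()
            candidates.append(((score - best_score) / costs[j], score, j))
        if not candidates:
            break  # no affordable feature left
        _, best_score, j = max(candidates)
        selected.append(j)
        spent += costs[j]
        remaining.remove(j)
    return selected
```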
Data_Sheet_1_Assessment of Language and Literacy Teachers’ Distance Teaching in COVID-19 Lockdown Time.docx
The full text of this article can be freely accessed on the publisher's website.
More results.
Panel A. AIC over time (lower is better) for the baseline model and for the UnIT-augmented models with Poisson and negative binomial (NB) regressions in the first stage, respectively. The NB-based approach has lower AIC on average. A similar conclusion is reached in Panel B, which considers the log-likelihood of the models over time (higher is better). The Poisson approach (red) nevertheless makes slightly better predictions from the two-stage modeling, as shown in the bottom row of the figure. Panel C illustrates that influenza is a good choice of COVID-19-similar disease: it produces larger coefficients for the risk variable than bacterial infections such as Staphylococcus aureus (which performs worse than influenza) or chronic infections such as HIV (which performs worse still). Panel D shows the temporal variation of the regression coefficients for the UnIT risk and the % of urban population. Here we used Poisson regression, leaving out the urban-UnIT risk covariate in the augmented model, to highlight the role of the UnIT risk vs. the % of urban population: except in the shaded periods, the coefficient for the UnIT risk dominates. Panels E and F show the variation of the coefficients for the UnIT risk and the % of urban population against adjusted R2. The LOWESS fit shows that R2 increases and saturates as the coefficient for the UnIT risk increases, whereas it drops rapidly with increasing values of the coefficient for the % of urban population. This suggests that when the covariate for the % of urban population is more important, the explained variance is low. Panel G illustrates the mean absolute forecast errors at different points in the pandemic, highlighting the results obtained with Poisson and NB regressions (see also S1 Fig).
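The first-stage comparison behind Panels A and B can be reproduced in outline with statsmodels; the data frame and covariates below are synthetic stand-ins, not the paper's data:

```python
# Sketch of the Panel A/B comparison: fit Poisson and negative binomial (NB)
# GLMs for weekly case counts and compare AIC / log-likelihood. The data here
# are synthetic; the real covariates include the UnIT risk and census variables.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "unit_risk": rng.gamma(2.0, 1.0, 500),    # hypothetical UnIT risk scores
    "pct_urban": rng.uniform(0.0, 1.0, 500),  # hypothetical % urban population
})
df["cases"] = rng.poisson(np.exp(0.5 + 0.6 * df["unit_risk"] + 0.3 * df["pct_urban"]))

poisson_fit = smf.glm("cases ~ unit_risk + pct_urban", data=df,
                      family=sm.families.Poisson()).fit()
negbin_fit = smf.glm("cases ~ unit_risk + pct_urban", data=df,
                     family=sm.families.NegativeBinomial(alpha=1.0)).fit()

for name, fit in [("Poisson", poisson_fit), ("NB", negbin_fit)]:
    print(f"{name}: AIC = {fit.aic:.1f}, log-likelihood = {fit.llf:.1f}")
# Lower AIC (or higher log-likelihood) indicates the better-fitting first stage.
```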
UnIT risk calculation.
Panel A. Our approach begins with collecting weekly county-wise new case counts for the seasonal flu epidemics spanning Jan. 2003 to Dec. 2012 from a large national database of insurance claims records (Truven MarketScan). We identify weekly influenza diagnoses using ICD codes related to influenza infection (see Materials and methods), and end up with a county-specific integer-valued time series for each US county for each flu season. Panel B. These 471-week-long integer-valued time series are used to compute pairwise similarities between the counties using our new approach of computing intrinsic similarity between stochastic sample paths (see (5)). This similarity matrix induces county clusters C0, C1, C2 and C3, inferred via standard spectral clustering. Panel C. The flu incidence time series allow us to identify counties which register cases in the first couple of weeks of each flu season. Averaged over all the seasons, this gives us a measure of average epidemic initiation risk. Panel D. Using the incidence series for the county cluster with maximal average initiation risk, we compute a specialized HMM model (PFSA, see Materials and methods) G⋆. Panel E. Then, we compute the UnIT risk phenotype of each county as the sequence likelihood divergence (SLD, see (8)) between the observed incidence sequence and the inferred PFSA model G⋆. Panels F and G. Finally, the urban-UnIT risk is computed by scaling up the UnIT risk by the fraction of urban population in each county, as obtained from the US census (Panel F). We show that this risk phenotype is highly predictive of the weekly case count of COVID-19, while depending only on influenza epidemic history.
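The clustering step in Panel B, given a precomputed similarity matrix, corresponds to standard spectral clustering; the matrix below is a random placeholder for the SLD-based similarities, which are defined in the paper itself:

```python
# Sketch of the Panel B clustering step: given a county-by-county similarity
# matrix (here a random stand-in for the SLD-based similarities), spectral
# clustering yields the clusters C0..C3.
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(1)
n_counties = 200
S = rng.uniform(0.0, 1.0, (n_counties, n_counties))
S = (S + S.T) / 2          # symmetrize the similarity matrix
np.fill_diagonal(S, 1.0)   # each county is maximally similar to itself

labels = SpectralClustering(
    n_clusters=4, affinity="precomputed", random_state=0
).fit_predict(S)
# labels[i] assigns county i to one of the clusters C0, C1, C2, C3.
```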
Text with supplementary tables, pseudocode, software usage instructions, and proof of Theorem 1.
Table A: COVID-19 ForecastHub (https://covid19forecasthub.org/community) Community Team Summary. Table B: Coefficients in multivariate regression for the total COVID-19-related death count as of 2021-05-30. Table C: Coefficients inferred in multivariate regression for weekly COVID-19-related death totals. List of Algorithm Pseudocodes. Algorithm A: PFSA Log-likelihood. Algorithm B: Weekly confirmed case forecasting. Algorithm C: Weekly death forecasting. (PDF)
Fig 5 -
Panel A. We compare our forecasts of weekly case counts (1-week-ahead forecasts) with observed confirmed cases for counties in the state of New York. Panel B. We compare the weekly forecasts with the observed counts for the state of California. We note that in both states, for the weeks included in this limited snapshot, the predicted counts match up well with what is ultimately observed. The cartography in this figure is generated from scratch using open-source shapefiles available at https://www.sciencebase.gov/catalog/item/581d051de4b08da350d523cc using GeoPandas [33].
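The cartography step might look roughly like the following GeoPandas sketch; the local shapefile path and the `forecast_df` table of county-level predicted and observed counts are assumptions, not the paper's code:

```python
# Sketch of county choropleths with GeoPandas. Assumes the county shapefile
# from the URL above has been downloaded locally, and that forecast_df is a
# DataFrame keyed by county FIPS with 'predicted_cases' and 'observed_cases'
# columns; both are hypothetical inputs.
import geopandas as gpd
import matplotlib.pyplot as plt

counties = gpd.read_file("tl_2016_us_county.shp")   # hypothetical local path
counties = counties.merge(forecast_df, left_on="GEOID", right_on="fips")

fig, axes = plt.subplots(1, 2, figsize=(12, 5))
counties.plot(column="predicted_cases", ax=axes[0], legend=True)
axes[0].set_title("1-week-ahead forecast")
counties.plot(column="observed_cases", ax=axes[1], legend=True)
axes[1].set_title("Observed confirmed cases")
plt.tight_layout()
plt.show()
```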
Modeling scheme.
We use a national insurance claims database with more than 150 million people tracked over a decade (Truven claims database) to curate geospatial incidence records for past influenza epidemics spanning nearly a decade; these records inform our new UnIT score. This score is then used as an additional fixed effect, along with other putative socio-economic and demographic covariates obtained from the US Census, to infer a Generalized Linear Model (GLM) explaining the weekly county-specific COVID-19 case count. Using this inferred GLM we “correct” the observed weekly case count and use it as the only feature in an ensemble regressor to forecast county-specific count totals. The GLM and the regressor are recomputed weekly, while the UnIT score remains invariant, representing a geospatial phenotype modulating transmission.
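One plausible reading of this two-stage scheme, with hypothetical column names and a residual-style correction standing in for the paper's exact “correction” step, is sketched below:

```python
# Sketch of the two-stage scheme described above (hypothetical column names):
# (1) a GLM relates weekly county case counts to the UnIT score and census
# covariates; (2) its fitted values "correct" the observed counts, which then
# serve as the single feature of an ensemble regressor forecasting next week.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from sklearn.ensemble import GradientBoostingRegressor

def weekly_forecast(df: pd.DataFrame) -> np.ndarray:
    """df: one row per county for the current week, with assumed columns
    'cases', 'next_week_cases', 'unit_risk', 'pct_urban'."""
    # Stage 1: GLM with the (time-invariant) UnIT score as a fixed effect.
    glm = smf.glm("cases ~ unit_risk + pct_urban",
                  data=df, family=sm.families.Poisson()).fit()
    corrected = df["cases"] - glm.fittedvalues   # residual-style correction (assumed)

    # Stage 2: ensemble regressor using only the corrected count as a feature.
    reg = GradientBoostingRegressor(random_state=0)
    reg.fit(corrected.to_frame("corrected_cases"), df["next_week_cases"])
    return reg.predict(corrected.to_frame("corrected_cases"))
```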
S5 Fig -
Panels A-D. Four pre-specified PFSAs used to estimate similarity between stochastic sample paths (see Eq (5) in the main text). An edge connecting state q to q′ is labeled σ if δ(q, σ) = q′ (see Defn. 1). Panel E. Performance and run-time comparisons of the SLD distance and DTW on a synthetic dataset; in Panel E we index the SLD distance by the length of the input sequence and DTW by its window size. The average run time of the SLD distance is 0.042 seconds. Panel F. Run time vs. sequence length comparison between DTW30 and the SLD distance. Panel G. 2D embeddings produced by Algorithm A in S1 Text and by DTW5 on the “FordA” dataset from the UCR time series classification archive [79], with decision boundaries obtained using Support Vector Machines (SVM) and neural networks, respectively, trained on features constructed from the corresponding dissimilarity measures. The SLD approach yields significantly improved separation. (TIF)
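The windowed-DTW baselines referenced here (DTW5, DTW30) are the standard dynamic-programming recursion restricted to a Sakoe-Chiba band; a minimal reference implementation, not tied to the paper's code, is:

```python
# Reference implementation of the windowed DTW baselines (DTW5, DTW30, ...):
# standard DTW restricted to a Sakoe-Chiba band of the given width. This is
# only the comparison baseline, not the SLD measure itself.
import numpy as np

def dtw_distance(x: np.ndarray, y: np.ndarray, window: int) -> float:
    n, m = len(x), len(y)
    w = max(window, abs(n - m))            # band must allow reaching (n, m)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(1, i - w), min(m, i + w) + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# e.g. DTW30 between two incidence series a and b: dtw_distance(a, b, window=30)
```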
