9 research outputs found
Test of subjective methods for data collection and analysis of data errors influence on management suggestions
In forest management planning it is important to collect data on the state of the forest with as
high quality as possible, while keeping the cost of the inventory at a reasonable level. It is
also important to be able to give an economically optimal treatment proposal so that the
landowner obtains the highest possible return from the forest holding, provided that this is the
objective. Today Norrskog, among others, uses a subjective inventory method in which
surveyors estimate the state of the forest with the help of various supporting measurements
and propose treatments based on their assessment.
In this study, two planning methods have been examined with respect to data quality and time
consumption, and the results have been compared against an objective circular sample plot
inventory. In addition, the subjective treatment proposals have been compared with the
proposals generated by analysing the forest state with the Forest Management Planning
Package (Indelningspaketet), using data from both the subjective and the objective inventories.
The study covers twelve thinning and final felling compartments on two privately owned
properties in the county of Jämtland. The subjective inventory was carried out by four
surveyors engaged by Norrskog for multiple-use planning, while the objective inventory was
carried out by the author. The two subjective inventory methods tested are:
Method A - subjective assessment with supporting measurements, and
Method B - calipering of three subjectively located circular sample plots.
The analysis of variance shows that only for the variables number of stems and time
consumption can a significant difference between the two subjective inventory methods not
be ruled out. The plots of the stand variables, however, show a tendency to underestimate
above all volume and basal area in stands with high volumes, which in this case consist of old
spruce forest. For mean diameter, mean height and age, no difference between the methods
can be seen in the results. The average mean difference relative to the objective inventory was
somewhat lower with Method B for all stand variables except age and mean height.
The results show that in some cases large deviations in stand data, above all in volume, can be
tolerated without affecting the treatment proposals in an analysis with the Forest Management
Planning Package (FMPP), while in other cases an over- or underestimation of barely 5% is
enough to produce a deviating treatment proposal and thus an inoptimality loss. The number
of deviating treatment proposals was considerably higher for the subjective assessment than
for an FMPP analysis based on subjective data. Most of the deviating proposals come from
middle-aged, pine-dominated stands where thinning was due in many compartments.
Determining from this study alone which inventory method is the most cost-effective while
also giving the best description of the forest state is not easy. The study does show, however,
that it is considerably more important to minimise the total cost including the decision loss
than to minimise the inventory cost alone. It is essential, though, that the landowner is aware
of how large the loss due to deviating treatment proposals can be. There is thus considerable
economic scope for developing alternative inventory and analysis methods in the future.

When making forest management plans it is very important to have high quality data about
the forest stand, and the costs for the inventory should be reasonable. It is also important to
give the forest owner economically optimal management proposals (if that is the owner's
objective). Today surveyors in Norrskog are estimating forest stand data and giving
management suggestions by using traditional subjective inventory methods.
In this study the quality of stand data and the time used have been studied for two different
subjective inventory methods, and compared with an objective circular sample plot method.
The management alternatives generated from the Forest Management Planning Package
(FMPP), based on analyses of both the subjective and the objective inventory, have been
compared with the surveyors' management proposals.
Twelve thinning and final felling stands situated on two private forest holdings in the county
of Jämtland have been studied. The subjective inventory was made by four surveyors working
for Norrskog, while the objective inventory was made by the author. The methods studied are
Method A, a subjective estimation including measurements at subjectively chosen places, and
Method B, calipering of all trees on three circular plots at subjectively chosen places.
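The scaling step behind Method B can be sketched numerically. A minimal sketch with invented diameters and an assumed 100 m² plot (radius 5.64 m; the abstract does not state the plot size used), showing how calipered trees on three circular plots are scaled up to per-hectare stand values:

```python
import math

# Assumed plot radius (5.64 m gives ~100 m^2); diameters are invented, not the study's data.
PLOT_RADIUS_M = 5.64
plot_area_ha = math.pi * PLOT_RADIUS_M**2 / 10_000  # plot area in hectares

# Calipered diameters at breast height (cm) for all trees on three plots.
plots_dbh_cm = [
    [21.5, 18.0, 25.2, 19.9],
    [23.1, 17.4, 20.0],
    [26.8, 22.3, 19.1, 24.5, 18.8],
]

def basal_area_m2(dbh_cm):
    """Cross-sectional area of one stem at breast height, in m^2."""
    return math.pi * (dbh_cm / 100) ** 2 / 4

# Per-plot basal area scaled to per-hectare, then averaged over the three plots.
per_plot = [sum(basal_area_m2(d) for d in trees) / plot_area_ha
            for trees in plots_dbh_cm]
basal_area_per_ha = sum(per_plot) / len(per_plot)   # m^2/ha
stems_per_ha = sum(len(t) for t in plots_dbh_cm) / (len(plots_dbh_cm) * plot_area_ha)
```

With these invented numbers the three plots yield roughly 14.6 m²/ha of basal area and about 400 stems/ha; volume estimation would additionally require heights and a form factor or volume function.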
The results show that some stand characteristics were measured with higher precision than
others. The variables Mean height, Mean diameter and Stand age were estimated with high
precision. For Basal area and Volume there was a tendency to underestimate with Method A
in old spruce stands with high stand volume. The analysis of variance showed a significant
difference between the two subjective methods for the variables Number of stems and Time
used, but not for other variables. According to the mean difference, Method B was better for
all of the stand variables except for Stand age and Mean height.
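The comparison logic above, mean signed differences against the objective reference plus a paired test between the two subjective methods, can be illustrated with invented per-stand volumes (not the study's data):

```python
import math
import statistics

# Hypothetical stand volumes (m^3/ha): objective reference and the two subjective methods.
objective = [310, 180, 250, 420, 150, 275]
method_a  = [280, 175, 240, 370, 148, 260]
method_b  = [300, 182, 246, 400, 152, 270]

def mean_difference(est, ref):
    """Average signed deviation of an estimate from the reference (bias)."""
    return statistics.mean(e - r for e, r in zip(est, ref))

def paired_t(x, y):
    """t-statistic for paired differences x_i - y_i; |t| would then be
    compared against the t distribution with n-1 degrees of freedom."""
    d = [a - b for a, b in zip(x, y)]
    return statistics.mean(d) / (statistics.stdev(d) / math.sqrt(len(d)))

bias_a = mean_difference(method_a, objective)  # negative: underestimation
bias_b = mean_difference(method_b, objective)
t_ab = paired_t(method_a, method_b)
```

In this toy data, Method B has the smaller absolute mean difference, mirroring the pattern the abstract reports for most stand variables.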
The analysis with FMPP showed that in some cases large divergences in Stand volume had no
effect on the management proposals, while in other cases an underestimation of less than 5 %
could give a different proposal and an economic loss. The subjective inventory gave a larger
number of deviating proposals than the analysis with FMPP using data from the subjective
inventory method. Most of the deviating proposals came from middle-aged pine stands, which
might be thinned.
It is important that an evaluation of forest inventory methods is based on cost-plus-loss
analyses. The total cost of a method is calculated as the sum of inventory cost and expected
losses due to inoptimal decisions. It is also important that the forest owners are conscious
of the sizes of these losses. For that reason there is a large economic motivation for
testing alternative inventory and analysis methods.
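The cost-plus-loss criterion reduces to a one-line formula. A minimal sketch with hypothetical figures (all amounts invented for illustration):

```python
# Cost-plus-loss: the criterion for choosing an inventory method is the
# inventory cost plus the expected loss from decisions made on erroneous data.

def total_cost(inventory_cost, p_wrong_decision, loss_if_wrong):
    """Expected total cost of an inventory method (cost-plus-loss)."""
    return inventory_cost + p_wrong_decision * loss_if_wrong

# A cheap but imprecise method can be more expensive overall than a
# pricier, more precise one once decision losses are counted in.
cheap   = total_cost(inventory_cost=200, p_wrong_decision=0.30, loss_if_wrong=3000)  # 1100.0
precise = total_cost(inventory_cost=600, p_wrong_decision=0.05, loss_if_wrong=3000)  # 750.0
```

Under these invented numbers, the more expensive inventory wins once the expected decision loss is included, which is exactly the argument the abstract makes against minimising inventory cost alone.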
Peer effects at work: The common stock investments of co-workers
Stock market behavior of individual investors is highly correlated with stock market behavior of their co-workers. For example, a ten percentage point increase in the fraction of co-workers that purchase stocks in a given month is associated with a two percentage point increase in the likelihood of individuals making a purchase. The high correlation exists even after controlling for individual socio-demographic characteristics and for time, stock, zip code, and plant fixed effects. Using data on family relations and on residential zip code, we show that the high correlation is not driven by peer effects at the family or zip code level. Moreover, workplace peer effects appear to be strong relative to geographical peer effects.
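The headline magnitude, a slope of 0.2 between the co-worker purchase fraction and the individual purchase likelihood, can be illustrated with a simple OLS slope on simulated data (the paper's actual estimates also include time, stock, zip code, and plant fixed effects, which are omitted in this sketch):

```python
import statistics

# Simulated data: fraction of co-workers buying in a month, and the
# corresponding individual purchase probability (invented, not the paper's).
coworker_frac = [0.10, 0.20, 0.30, 0.40, 0.50, 0.60]
purchase_rate = [0.05, 0.07, 0.09, 0.11, 0.13, 0.15]

def ols_slope(x, y):
    """Bivariate OLS slope: cov(x, y) / var(x)."""
    mx, my = statistics.mean(x), statistics.mean(y)
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

slope = ols_slope(coworker_frac, purchase_rate)
# slope = 0.2: a 10 percentage point rise in the co-worker purchase
# fraction maps to a 2 point rise in individual purchase likelihood.
```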
Non-Standard Errors
In statistics, samples are drawn from a population in a data-generating process (DGP). Standard errors measure the uncertainty in estimates of population parameters. In science, evidence is generated to test hypotheses in an evidence-generating process (EGP). We claim that EGP variation across researchers adds uncertainty: Non-standard errors (NSEs). We study NSEs by letting 164 teams test the same hypotheses on the same data. NSEs turn out to be sizable, but smaller for better reproducible or higher rated research. Adding peer-review stages reduces NSEs. We further find that this type of uncertainty is underestimated by participants.
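The DGP/EGP distinction has a direct computational reading: a standard error reflects sampling noise within one analysis, while a non-standard error is the dispersion of point estimates across teams analysing the same data. A sketch with invented team estimates:

```python
import statistics

# Hypothetical point estimates of the same parameter from different
# research teams, all working on the same sample (values invented).
team_estimates = [0.8, 1.1, 0.9, 1.4, 0.7, 1.0, 1.2, 0.9]

# The non-standard error is the spread across teams: variation that
# would remain even if every team's own standard error were zero.
nse = statistics.stdev(team_estimates)
consensus = statistics.mean(team_estimates)
```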
The Sovereign Debt Crisis: Rebalancing or Freezes?
Using high-frequency data we document that episodes of market turmoil in the European sovereign bond market are on average associated with large decreases in trading volume. The response of trading volume to market stress is conditional on transaction costs. Low transaction cost turmoil episodes are associated with volume increases (investors rebalance), while high transaction cost turmoil periods are associated with abnormally low volume (market freezes). We find suggestive evidence of market freezes in response to shocks to the risk bearing capacity of market makers while investor rebalancing is triggered by wealth shocks. Overall, our results show that the recent sovereign debt crisis was not associated with large-scale investor rebalancing.
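The conditional pattern can be summarised as a toy decision rule (the threshold and labels here are illustrative, not estimated from the paper's data): in turmoil, low transaction costs together with high volume indicate rebalancing, while high transaction costs together with abnormally low volume indicate a freeze.

```python
# Toy classification of market episodes; the 10 bps cutoff is hypothetical.
def classify_episode(is_turmoil, transaction_cost_bps, abnormal_volume):
    """abnormal_volume > 0 means volume above its normal level."""
    if not is_turmoil:
        return "normal"
    if transaction_cost_bps < 10 and abnormal_volume > 0:
        return "rebalancing"   # cheap trading, investors adjust portfolios
    if transaction_cost_bps >= 10 and abnormal_volume < 0:
        return "freeze"        # costly trading, activity dries up
    return "mixed"
```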
Reliable capacity provisioning for distributed cloud/edge/fog computing applications
The REliable CApacity Provisioning and enhanced remediation for distributed cloud applications (RECAP) project aims to advance cloud and edge computing technology, to develop mechanisms for reliable capacity provisioning, and to make application placement, infrastructure management, and capacity provisioning autonomous, predictable and optimized. This paper presents the RECAP vision for an integrated edge-cloud architecture, discusses the scientific foundation of the project, and outlines plans for toolsets for continuous data collection, application performance modeling, application and component auto-scaling and remediation, and deployment optimization. The paper also presents four use cases from complementing fields that will be used to showcase the advancements of RECAP.
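As a minimal sketch of the kind of auto-scaling decision RECAP aims to automate (the function, target band, and limits below are hypothetical, not part of the project's toolset), a reactive rule sizes replica counts so that per-replica load stays near a utilisation target:

```python
import math

def desired_replicas(current_replicas, cpu_utilisation, target=0.6,
                     min_replicas=1, max_replicas=20):
    """Proportional reactive scaling: scale replica count by the ratio of
    observed utilisation to the target, rounded up and clamped to limits."""
    wanted = math.ceil(current_replicas * cpu_utilisation / target)
    return max(min_replicas, min(max_replicas, wanted))
```

Predictive approaches, as envisioned in RECAP, would replace the observed utilisation with a forecast from a performance model instead of reacting after the fact.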
Non-Standard Errors
In statistics, samples are drawn from a population in a data-generating process (DGP). Standard errors measure the uncertainty in sample estimates of population parameters. In science, evidence is generated to test hypotheses in an evidence-generating process (EGP). We claim that EGP variation across researchers adds uncertainty: non-standard errors. To study them, we let 164 teams test six hypotheses on the same sample. We find that non-standard errors are sizeable, on par with standard errors. Their size (i) co-varies only weakly with team merits, reproducibility, or peer rating, (ii) declines significantly after peer-feedback, and (iii) is underestimated by participants.