Tackling bogus self-employment: some lessons from Romania
In recent years, recognition that bogus self-employment is growing rapidly, not least because of the advent of what has been called the "gig", "sharing" or "collaborative" economy, has led governments to search for ways to tackle this form of dependent self-employment, which is widely viewed as diminishing the quality of working conditions. Until now, however, there have been few ex-post evaluations of policy initiatives that seek to tackle this phenomenon. The aim of this paper is therefore to provide one of the first such ex-post evaluations by examining the outcomes of a 2016 legislative initiative in Romania to tackle bogus self-employment. Reporting both descriptive statistics and OLS regression analysis of monthly official data from August 2014 to August 2016, the finding is that while other business types and waged employment rates followed a trend similar to the years before the introduction of the new legislation, the number of self-employed began a negative trend after the new legislation was announced. After controlling for other indicators related to the economy (i.e., GDP) and the labour market (i.e., employees, other companies, vacancy rates), the impact of the new legislation on the self-employed remains negative, offering reasonable grounds for concluding that bogus self-employment was reduced by the new legislation. The paper concludes by discussing the wider implications of these findings.
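The design described above can be sketched as a trend regression with a post-announcement dummy. The series below are synthetic stand-ins, not the Romanian data; the announcement month, effect size and the single GDP control are invented for illustration.

```python
import numpy as np

# A minimal sketch of the paper's approach (synthetic series, NOT the Romanian
# data): regress monthly self-employment counts on a time trend, a dummy for
# the months after the legislation was announced, and a control, then check
# the sign of the dummy's coefficient. Dates and magnitudes are invented.
rng = np.random.default_rng(0)
months = np.arange(25)                            # Aug 2014 .. Aug 2016
post = (months >= 18).astype(float)               # hypothetical announcement
gdp = 100 + 0.5 * months + rng.normal(0, 1, 25)   # stand-in GDP control
self_employed = 50 + 0.2 * months - 3.0 * post + rng.normal(0, 0.5, 25)

X = np.column_stack([np.ones_like(gdp), months, post, gdp])
beta, *_ = np.linalg.lstsq(X, self_employed, rcond=None)
print(f"post-announcement coefficient: {beta[2]:.2f}")
```

Because the simulated effect is built in as negative, the dummy's estimated coefficient comes out negative, mirroring the direction of the paper's finding.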
ASCORE: an up-to-date cardiovascular risk score for hypertensive patients reflecting contemporary clinical practice developed using the (ASCOT-BPLA) trial data.
A number of risk scores already exist to predict cardiovascular (CV) events. However, scores developed with data collected some time ago might not accurately predict the CV risk of contemporary hypertensive patients who benefit from more modern treatments and management. Using data from the randomised clinical trial Anglo-Scandinavian Cardiac Outcomes Trial-BPLA, with 15,955 hypertensive patients without previous CV disease receiving contemporary preventive CV management, we developed a new risk score predicting the 5-year risk of a first CV event (CV death, myocardial infarction or stroke). Cox proportional hazard models were used to develop a risk equation from baseline predictors. The final risk model (ASCORE) included age, sex, smoking, diabetes, previous blood pressure (BP) treatment, systolic BP, total cholesterol, high-density lipoprotein-cholesterol, fasting glucose and creatinine as baseline variables. A simplified model (ASCORE-S) excluding laboratory variables was also derived. Both models showed very good internal validity. User-friendly integer score tables are reported for both models. Applying the latest Framingham risk score to our data significantly overpredicted the observed 5-year risk of the composite CV outcome. We conclude that risk scores derived from older databases (such as Framingham) may overestimate the CV risk of patients receiving current BP treatments; therefore, 'updated' risk scores are needed for current patients.
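A Cox-model risk equation of this kind is typically turned into an absolute risk via the baseline survival function. The sketch below shows the generic mechanics only; the coefficients, covariate means and baseline survival are invented placeholders, not the published ASCORE values.

```python
import numpy as np

# Generic 5-year risk from a Cox model: risk = 1 - S0(5) ** exp(lp), where
# S0(5) is the baseline survival at the covariate means and lp is the linear
# predictor. All numbers below are hypothetical, NOT the ASCORE estimates.
coefs = {"age": 0.07, "male": 0.40, "smoker": 0.60, "sbp10": 0.12}
means = {"age": 63.0, "male": 0.77, "smoker": 0.32, "sbp10": 16.4}
baseline_survival_5y = 0.97  # assumed S0(5) at the covariate means

def five_year_risk(patient):
    lp = sum(coefs[k] * (patient[k] - means[k]) for k in coefs)
    return 1.0 - baseline_survival_5y ** np.exp(lp)

patient = {"age": 70, "male": 1, "smoker": 1, "sbp10": 15.5}  # SBP 155 mmHg
print(f"predicted 5-year CV risk: {five_year_risk(patient):.1%}")
```

A patient whose covariates sit above the sample means gets a linear predictor above zero and hence a risk above the baseline 1 - S0(5).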
Testing for Network and Spatial Autocorrelation
Testing for dependence has been a well-established component of spatial
statistical analyses for decades. In particular, several popular test
statistics have desirable properties for testing for the presence of spatial
autocorrelation in continuous variables. In this paper we propose two
contributions to the literature on tests for autocorrelation. First, we propose
a new test for autocorrelation in categorical variables. While some methods
currently exist for assessing spatial autocorrelation in categorical variables,
the most popular method is unwieldy, somewhat ad hoc, and fails to provide
grounds for a single omnibus test. Second, we discuss the importance of testing
for autocorrelation in data sampled from the nodes of a network, motivated by
social network applications. We demonstrate that our proposed statistic for
categorical variables can be used in both the spatial and network settings.
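The flavour of such a test can be illustrated with a join-count-style permutation test on a small graph: count edges whose endpoints share a category and compare against random relabellings. The graph and labels below are toy examples, and this generic statistic is not the specific omnibus test proposed in the paper.

```python
import numpy as np

# Toy permutation test in the spirit of join-count statistics: count edges
# whose endpoints share a category, then compare with the counts obtained
# under random relabelling of the nodes. Illustrative only; NOT the paper's
# proposed statistic.
rng = np.random.default_rng(1)
edges = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 0), (3, 4), (4, 5), (5, 3)]
labels = np.array(["a", "a", "a", "b", "b", "b"])

def same_label_joins(lab):
    return sum(int(lab[i] == lab[j]) for i, j in edges)

observed = same_label_joins(labels)
null = [same_label_joins(rng.permutation(labels)) for _ in range(2000)]
p_value = float(np.mean([n >= observed for n in null]))
print(f"observed same-label joins: {observed}, permutation p-value: {p_value:.3f}")
```

Because the edge list is just an adjacency structure, the same code applies whether the nodes are spatial units or members of a social network.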
The role of mentorship in protege performance
The role of mentorship in protege performance is a matter of importance to
academic, business, and governmental organizations. While the benefits of
mentorship for proteges, mentors and their organizations are apparent, the
extent to which proteges mimic their mentors' career choices and acquire their
mentorship skills is unclear. Here, we investigate one aspect of mentor
emulation by studying mentorship fecundity---the number of proteges a mentor
trains---with data from the Mathematics Genealogy Project, which tracks the
mentorship record of thousands of mathematicians over several centuries. We
demonstrate that fecundity among academic mathematicians is correlated with
other measures of academic success. We also find that the average fecundity of
mentors remains stable over 60 years of recorded mentorship. We further uncover
three significant correlations in mentorship fecundity. First, mentors with
small mentorship fecundity train proteges that go on to have a 37% larger than
expected mentorship fecundity. Second, in the first third of their career,
mentors with large fecundity train proteges that go on to have a 29% larger
than expected fecundity. Finally, in the last third of their career, mentors
with large fecundity train proteges that go on to have a 31% smaller than
expected fecundity.
Universality, limits and predictability of gold-medal performances at the Olympic Games
Inspired by the Games held in ancient Greece, modern Olympics represent the
world's largest pageant of athletic skill and competitive spirit. Performances
of athletes at the Olympic Games mirror, since 1896, human potentialities in
sports, and thus provide an optimal source of information for studying the
evolution of sport achievements and predicting the limits that athletes can
reach. Unfortunately, the models introduced so far for the description of
athlete performances at the Olympics are either sophisticated or unrealistic,
and more importantly, do not provide a unified theory for sport performances.
Here, we address this issue by showing that relative performance improvements
of medal winners at the Olympics are normally distributed, implying that the
evolution of performance values can be described in good approximation as an
exponential approach to an a priori unknown limiting performance value. This
law holds for all specialties in athletics (including running, jumping, and
throwing) and in swimming. We present a self-consistent method, based on normality
hypothesis testing, able to predict limiting performance values in all
specialties. We further quantify the most likely years in which athletes will
breach challenging performance walls in running, jumping, throwing, and
swimming events, as well as the probability that new world records will be
established at the next edition of the Olympic Games.
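The core normality argument can be sketched on synthetic data: if winning times approach a limiting value exponentially, relative edition-to-edition improvements should look approximately normal, which a standard normality test can check. The event, limiting value and noise level below are assumptions, not fitted values from the paper.

```python
import numpy as np
from scipy import stats

# Synthetic illustration: winning times decaying exponentially towards a
# hypothetical limit, plus noise. Relative improvements between successive
# editions are then tested for normality with Shapiro-Wilk. All parameters
# are invented for illustration.
rng = np.random.default_rng(2)
editions = np.arange(28)                              # successive Games
limit = 9.50                                          # hypothetical limit (s)
times = limit + 1.2 * np.exp(-0.08 * editions) + rng.normal(0, 0.02, 28)

rel_improvement = -np.diff(times) / times[:-1]        # > 0 when times drop
stat, p = stats.shapiro(rel_improvement)
print(f"Shapiro-Wilk p-value for relative improvements: {p:.3f}")
```

In the paper's scheme, candidate limiting values are screened by how compatible the implied improvement series is with normality, which is what the test statistic above quantifies.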
Acute respiratory failure in patients with hematological malignancies: outcomes according to initial ventilation strategy. A groupe de recherche respiratoire en réanimation onco-hématologique (Grrr-OH) study
Comparison of Framingham, PROCAM, SCORE, and Diamond Forrester to predict coronary atherosclerosis and cardiovascular events
Impact of smoking on health-related quality of Life after percutaneous coronary intervention treated with drug-eluting stents: a longitudinal observational study
Is Middle-Upper Arm Circumference "normally" distributed? Secondary data analysis of 852 nutrition surveys
Risk measures for direct real estate investments with non-normal or unknown return distributions
The volatility of returns is probably the most widely used risk measure for real estate. This is rather surprising, since a number of studies have cast doubt on the view that volatility can capture the manifold risks attached to properties and corresponds to the risk attitude of investors. A central issue in this discussion is the statistical properties of real estate returns: in contrast to the assumptions of neoclassical capital market theory, they are mostly non-normal and often unknown, which renders many statistical measures useless. Based on a literature review and an analysis of data from Germany, we provide evidence that volatility alone is inappropriate for measuring the risk of direct real estate.
We use a unique data sample by IPD, which includes the total returns of 939 properties across different usage types (56% office, 20% retail, 8% other and 16% residential properties) from 1996 to 2009, the German IPD Index, and the German Property Index. The analysis of the distributional characteristics shows that German real estate returns in this period were not normally distributed and that a logistic distribution would have been a better fit. This is in line with most of the current literature on this subject and raises the question of which indicators are more appropriate for measuring real estate risks. We suggest that a combination of quantitative and qualitative risk measures more adequately captures real estate risks and conforms better with investor attitudes to risk. Furthermore, we present criteria for the purpose of risk classification.
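The distribution-comparison step can be sketched on simulated returns (not the IPD sample): fit normal and logistic distributions by maximum likelihood and compare their log-likelihoods. The return parameters and sample size below are invented; heavier-tailed data typically favour the logistic fit.

```python
import numpy as np
from scipy import stats

# Illustration of comparing candidate return distributions on simulated data
# (NOT the IPD sample): fit normal and logistic by maximum likelihood and
# compare log-likelihoods. Parameters are invented for illustration.
rng = np.random.default_rng(3)
returns = stats.logistic.rvs(loc=0.04, scale=0.02, size=2000, random_state=rng)

norm_params = stats.norm.fit(returns)
logi_params = stats.logistic.fit(returns)
ll_norm = stats.norm.logpdf(returns, *norm_params).sum()
ll_logi = stats.logistic.logpdf(returns, *logi_params).sum()
print(f"log-likelihood: normal {ll_norm:.1f} vs logistic {ll_logi:.1f}")
```

Since the simulated sample is drawn from a logistic distribution, the logistic fit attains the higher log-likelihood, the same direction of result the abstract reports for German real estate returns.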