
    Combatting Skepticism Towards HR

    [Excerpt] When assessing the essentiality of HR within a firm, one must first ask what is meant by the word “essential” within a business context. The trickiness here, however, is that such a definition is highly contingent on the type and size of a particular firm. If one defines “essential” as “indispensable,” then HR is almost certainly not essential in very small firms. In such instances, the work of HR can be done by other managers and the owners themselves. On the other hand, if one defines “essential” as “adding considerable value,” then innovative human resource policies can create a competitive advantage even in the smallest of firms. Instead of relying on a single definition of essentiality, this essay will focus on the reasons why human resources practices are often called into question in the first place. Furthermore, I will propose recommendations on how to combat skepticism toward HR.

    Hedging Options in a GARCH Environment: Testing the Term Structure of Stochastic Volatility Models

    This paper develops a methodology for testing the term structure of volatility forecasts derived from stochastic volatility models, and implements it to analyze models of S&P 500 index volatility. Volatility models are compared by their ability to hedge options positions sensitive to the term structure of volatility. Overall, the most effective hedge is a Black-Scholes (BS) delta-gamma hedge, while the BS delta-vega hedge is the least effective. The most successful volatility hedge is GARCH components delta-gamma, suggesting that the GARCH components estimate of the term structure of volatility is most accurate. The success of the BS delta-gamma hedge may be due to mispricing in the options market over the sample period.
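
    The hedging comparison above rests on standard Black-Scholes sensitivities. As a rough illustration, the Python sketch below assembles a delta-gamma hedge of a short option position from a second option and the underlying; the index level, strikes, maturities, and volatilities are illustrative assumptions, not values from the paper.

    ```python
    # Minimal sketch of a Black-Scholes delta-gamma hedge; all inputs are illustrative.
    from math import log, sqrt
    from scipy.stats import norm

    def bs_delta_gamma(S, K, T, r, sigma):
        """Black-Scholes delta and gamma of a European call."""
        d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
        return norm.cdf(d1), norm.pdf(d1) / (S * sigma * sqrt(T))

    S, r = 1000.0, 0.03
    # Target position: short one call that is sensitive to the term structure of volatility.
    d_t, g_t = bs_delta_gamma(S, K=1000.0, T=0.5, r=r, sigma=0.20)
    # Hedge instruments: a shorter-dated call (for gamma) plus the underlying (for delta).
    d_h, g_h = bs_delta_gamma(S, K=1000.0, T=0.25, r=r, sigma=0.22)

    n_option = g_t / g_h                 # buy enough of the second call to neutralize gamma
    n_underlying = d_t - n_option * d_h  # then neutralize the residual delta with the index
    print(f"hedge option units: {n_option:.3f}, underlying units: {n_underlying:.3f}")
    ```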

    Nationalize or Localize: Senatorial Incumbent and Challenger Differences in Issue Prioritization

    How do politicians choose which issues to emphasize in an election? Studying campaign behavior is crucial to understanding how political ads target voters and prioritize issues. Senate candidates normally attempt either to nationalize the election or to emphasize state issues in their campaigns. How do Senate incumbent and challenger candidates differ in terms of issue prioritization? I hypothesize that challengers attempt to nationalize the election, while incumbents generally focus their efforts on state issues. Political conventional wisdom indicates that challengers typically try to nationalize the election by criticizing the incumbent for either supporting or voting against the current presidential administration. In contrast, incumbents tend to focus on state issues because they can claim credit for work done in their state and usually know their constituency better than the challenger. However, the current literature is inconclusive, requiring further research. This study is qualitative and uses content analysis to examine political ads from five 2014 Senate elections: Arkansas, Colorado, North Carolina, New Hampshire, and Alaska. The data are compelling as they reflect trends during a midterm election of a second-term presidency in which constituents appear to be turning against the party in power.

    Kū Kia‘i Mauna: Protecting Indigenous Religious Rights

    Courts have historically sided with private interests at the expense of Indigenous religious rights. Continuing this trend, the Hawai‘i State Supreme Court allowed the Thirty Meter Telescope to be built atop Maunakea, a mountain sacred to Native Hawaiians. This decision led to a mass protest organized by Native Hawaiian rights advocates and community members. However, notwithstanding the mountain’s religious and cultural significance, Indigenous plaintiffs could not prevent construction of the telescope on Maunakea. Unlike most First Amendment rights, religious Free Exercise Clause claims are not generally subject to strict constitutional scrutiny. Congress has mandated the application of strict scrutiny to federal government action that imposes a substantial burden on religious activity through the Religious Freedom Restoration Act (RFRA). However, because most courts narrowly interpret “substantial burden,” it has become nearly impossible for Indigenous plaintiffs to succeed on claims involving violations of religious freedom. Moreover, RFRA does not apply to state governments, and most states—including Hawai‘i—have not enacted similar protections for religious rights. This Comment suggests that the Hawai‘i State Legislature should enact a state version of RFRA that would apply strict scrutiny to government actions that impose a substantial burden on religious rights. Further, this Comment urges Congress and state legislatures to enact a more expansive definition of “substantial burden” that respects the First Amendment rights of Indigenous people to practice their beliefs.

    Sharp Thresholds For The Frog Model And Related Systems

    The frog model refers to a system of interacting random walks on a rooted graph. It begins with a single active particle (i.e., frog) at the root and some distribution of inactive particles among the non-root vertices. Active particles perform discrete-time nearest-neighbor random walks on the graph and activate inactive particles upon landing on them. Once activated, the trajectories of distinct particles are independent. In this thesis, we examine the frog model in several different environments and, in each case, work towards identifying conditions under which the model is recurrent, transient, or neither, in terms of the number of distinct frogs that return to the root. We begin by looking at a continuous analog of the model on ℝ in chapter 2, following which I analyze several different models on ℤ in chapters 2 and 3. I then conclude by examining the frog model on trees in chapter 4. The strategy used for analyzing the model on ℝ primarily revolves around looking at a closely related birth-death process. Somewhat similar techniques are then used for the model on ℤ. For the frog model on trees, I exploit some of the self-similarity properties of the model in order to construct an operator that is used to analyze its long-term behaviour, as it relates to questions of recurrence vs. transience.
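
    As a rough illustration of the dynamics described above, the Python sketch below simulates the frog model on a finite window of ℤ, with one sleeping frog per non-root site, and counts the distinct frogs that hit the root; the window size, horizon, and initial configuration are illustrative assumptions rather than the settings analyzed in the thesis.

    ```python
    # Minimal frog-model simulation on a finite window of the integers (illustrative only).
    import random

    def frog_model_root_visits(n_sites=50, n_steps=2000, seed=0):
        """Count distinct frogs that reach the root (site 0) within n_steps."""
        random.seed(seed)
        sleeping = {x for x in range(-n_sites, n_sites + 1) if x != 0}  # one sleeper per non-root site
        positions = [0]        # positions[i] is the location of awake frog i; frog 0 starts at the root
        root_visitors = set()
        for _ in range(n_steps):
            for i in range(len(positions)):      # only frogs awake at the start of this step move now
                positions[i] += random.choice((-1, 1))   # simple symmetric nearest-neighbour step
                p = positions[i]
                if p == 0:
                    root_visitors.add(i)         # frog i has reached the root
                if p in sleeping:                # landing on a sleeping frog wakes it
                    sleeping.discard(p)
                    positions.append(p)
        return len(root_visitors)

    print(frog_model_root_visits())
    ```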

    Asset pricing puzzles: Evidence from options markets

    This paper proposes and implements a consumption-based pricing kernel (stochastic discount factor) testing methodology that focuses on the covariance between the pricing kernel and asset squared excess returns. This covariance has an intuitive economic interpretation as a risk-neutral variance risk premium, i.e. the difference between the risk-neutral return variance and the objective return variance. In the same way that an asset risk-premium puzzle is due to a failure of the pricing kernel to adequately covary with asset excess returns, a risk-neutral variance puzzle is due to a failure of the pricing kernel to adequately covary with asset squared excess returns. This paper tests a consumption-based pricing kernel specification that is compatible with habit formation, consumption durability, and constant relative risk aversion over a range of plausible preference parameter values. The difference between consumption-based and semi-parametric option-based estimates of unconditional risk-neutral S&P 500 return variance is used as a pricing kernel specification test statistic. Evidence is found of a risk-neutral S&P 500 return variance puzzle if constant relative risk aversion is assumed. The puzzle is resolved when the pricing kernel is allowed to exhibit habit formation. The acceptable habit pricing kernels exhibit higher habit levels, higher utility function concavity, and lower rates of time preference than estimates in related papers. When the full history of consumption data is used, the preference parameter estimates are more similar to those of related papers.
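
    The covariance interpretation above can be made concrete in a few lines. The Python sketch below uses simulated consumption growth and returns with an assumed power-utility (CRRA) kernel to compute Cov(m, squared excess return) / E[m], i.e. the implied risk-neutral variance risk premium; all series and parameter values are placeholders, not the paper's data or estimates.

    ```python
    # Sketch of the pricing-kernel covariance behind the variance risk premium (illustrative data).
    import numpy as np

    rng = np.random.default_rng(0)
    T = 10_000
    cons_growth = np.exp(rng.normal(0.005, 0.01, T))                     # gross consumption growth
    excess_ret = rng.normal(0.002, 0.04, T) + 0.5 * (cons_growth - 1.0)  # excess return tied to growth

    gamma, beta = 5.0, 0.99                    # assumed risk aversion and time preference
    m = beta * cons_growth ** (-gamma)         # consumption-based (CRRA) pricing kernel

    x = excess_ret ** 2                        # squared excess return
    # Risk-neutral expectation: E^Q[x] = E[m x] / E[m], so
    # E^Q[x] - E[x] = Cov(m, x) / E[m]  (the risk-neutral variance risk premium).
    vrp = np.cov(m, x)[0, 1] / m.mean()
    print(f"implied risk-neutral variance risk premium: {vrp:.6e}")
    ```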

    Implied volatility functions: a reprise

    Dumas, Fleming, and Whaley (DFW, 1998) find that option models based on deterministic volatility functions (DVF) perform poorly because the estimated volatility function is unstable over time. DFW provide evidence that the DVF changes significantly on a weekly basis. This paper proposes a new class of dynamic implied volatility function (DIVF) models. This class of models separates a time-invariant implied volatility function from the stochastic state variables that drive changes in the individual implied volatilities. The dynamics of the state variables are modeled explicitly. This framework facilitates consistent pricing and hedging with time-variation in the implied volatility function (IVF). In tests conducted using the full history of S&P 500 futures option prices, the DIVF model is found to substantially improve pricing performance compared to static implied volatility function models and benchmark pricing models such as Black and Scholes (1973).
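
    To illustrate the separation the DIVF class relies on, the Python sketch below pairs a fixed implied volatility shape in moneyness and maturity with an AR(1) level state that drives its day-to-day movement; the functional form and dynamics are illustrative assumptions, not the specification estimated in the paper.

    ```python
    # Sketch: time-invariant implied volatility shape driven by a stochastic level state.
    import numpy as np

    def ivf_shape(moneyness, maturity):
        """Fixed smile/term-structure shape (illustrative quadratic-in-moneyness form)."""
        return 1.0 + 0.8 * (moneyness - 1.0) ** 2 - 0.1 * np.sqrt(maturity)

    def simulate_level(n_days, rho=0.98, mu=0.20, sigma_eps=0.01, seed=0):
        """AR(1) dynamics for the volatility-level state variable."""
        rng = np.random.default_rng(seed)
        level = np.empty(n_days)
        level[0] = mu
        for t in range(1, n_days):
            level[t] = mu + rho * (level[t - 1] - mu) + sigma_eps * rng.standard_normal()
        return level

    levels = simulate_level(250)
    # Implied volatility quoted on the last day: stochastic level times the fixed shape.
    iv = levels[-1] * ivf_shape(moneyness=0.95, maturity=0.25)
    print(f"illustrative implied volatility: {iv:.4f}")
    ```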

    Empirical tests of interest rate model pricing kernels

    This paper estimates and tests consumption-based pricing kernels used in common equilibrium interest rate term structure models. In contrast to previous papers that use return orthogonality conditions, estimation in this paper is accomplished using moment conditions from a consumption-based option pricing equation and market prices of interest rate options. This methodology is more sensitive to preference misspecification over states associated with large changes in consumption than previous techniques. In addition, this methodology provides a large set of natural moment conditions to use in estimation and testing, compared to the arbitrary choice of return orthogonality conditions (e.g. instruments selected) used in GMM estimation. Eurodollar futures option prices and an estimated joint model of quarterly aggregate consumption and three-month Eurodollar rates are used to estimate and test pricing kernels based on logarithmic, power, and exponential utility functions. Using the market prices of interest rate options, evidence is found which is consistent with the equity premium puzzle: very high levels of risk aversion are needed to justify the observed premium associated with an investment position positively correlated with aggregate consumption. In addition, evidence is found which is consistent with the risk-free rate puzzle: at high levels of risk aversion for power or exponential utility, negative rates of time preference are needed to fit the observed low riskless interest rates. These results suggest that typical term structure models are misspecified in terms of assumed preferences. This may have deleterious effects on model estimates of the interest rate term structure and of interest rate option prices.
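
    A stylized version of the estimation idea above is sketched below in Python: option prices implied by a consumption-based kernel, E[m × payoff], are matched to synthetic "market" prices by choosing the risk-aversion parameter. The simulated consumption and rate series, the caplet-style payoffs, and all parameter values are illustrative assumptions only.

    ```python
    # Sketch of fitting a power-utility pricing kernel to interest rate option prices (illustrative).
    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(1)
    n = 5_000
    growth = np.exp(rng.normal(0.004, 0.008, n))                    # quarterly consumption growth
    rate = 0.01 + 0.5 * (growth - 1.0) + rng.normal(0.0, 0.002, n)  # short rate tied to growth

    strikes = np.array([0.005, 0.010, 0.015])
    payoffs = np.maximum(rate[:, None] - strikes[None, :], 0.0)     # caplet-style option payoffs
    true_m = 0.99 * growth ** (-10.0)                               # kernel used to build "market" prices
    market_prices = (true_m[:, None] * payoffs).mean(axis=0)        # moment condition E[m * payoff]

    def pricing_error(gamma, beta=0.99):
        """Sum of squared option pricing errors for a power-utility kernel with risk aversion gamma."""
        m = beta * growth ** (-gamma)
        return np.sum(((m[:, None] * payoffs).mean(axis=0) - market_prices) ** 2)

    fit = minimize_scalar(pricing_error, bounds=(0.1, 50.0), method="bounded")
    print(f"estimated relative risk aversion: {fit.x:.2f}")
    ```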

    Keeping the Lid on Confidentiality: Mediation Privilege and Conflict of Laws

    Published in cooperation with the American Bar Association Section of Dispute Resolution.

    Teaching Empathy in Law School

