Patient Risk and Data Standards in Healthcare Supply Chain
Patient safety is one of the most important challenges in health care: roughly 1 in every 10 patients worldwide is affected by healthcare errors. This study focuses on preventable adverse events, i.e. harm caused by errors or system flaws that could have been avoided. Simulation models are developed in Arena to evaluate the impact of GS1 data standards on patient risk in the healthcare supply chain, concentrating on provider (hospital) supply chain operations, where inventory discrepancies and performance deficiencies in recall, return, and outdate management can directly affect patient safety. Models are developed for various systems and scenarios to compare performance measures and analyze the impact of GS1. The results indicate that moving validation points closer to the point of use significantly reduces the number of recalled or outdated products administered to a patient, so checking at the bedside or PAR level is critical. However, validating only at these points can cause problems such as stock-outs; validation at other locations is therefore also needed.
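The effect of validation-point placement described above can be illustrated with a toy Monte Carlo sketch. This is not the study's Arena models; all stage timings, distributions, and the three check locations below are illustrative assumptions.

```python
# Toy sketch: how the location of a recall-validation check changes the number
# of recalled units that reach the patient. Timings are invented assumptions.
import random

def simulate(check_point, n=10_000, seed=42):
    """check_point: 'none', 'warehouse', or 'bedside'."""
    rng = random.Random(seed)
    administered_recalled = 0
    for _ in range(n):
        leaves_warehouse = rng.uniform(0.0, 0.5)            # unit leaves stock
        administered_at = leaves_warehouse + rng.uniform(0.0, 0.5)
        recall_issued = rng.uniform(0.0, 1.0)               # recall notice time
        if recall_issued >= administered_at:
            continue  # recall arrived only after administration: no event
        caught = (
            check_point == "bedside"                         # checked at point of use
            or (check_point == "warehouse" and recall_issued < leaves_warehouse)
        )
        if not caught:
            administered_recalled += 1
    return administered_recalled

none_ct = simulate("none")
wh_ct = simulate("warehouse")
bed_ct = simulate("bedside")
```

Even in this crude model, a bedside check eliminates administrations of recalled units, while a warehouse-only check misses every recall issued after the unit has left stock, matching the abstract's qualitative conclusion.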
Price Variations in a Stock Market With Many Agents
Large variations in stock prices happen with sufficient frequency to raise
doubts about existing models, which all fail to account for non-Gaussian
statistics. We construct simple models of a stock market, and argue that the
large variations may be due to a crowd effect, where agents imitate each
other's behavior. The variations over different time scales can be related to
each other in a systematic way, similar to the Levy stable distribution
proposed by Mandelbrot to describe real market indices. In the simplest, least
realistic case, exact results for the statistics of the variations are derived
by mapping onto a model of diffusing and annihilating particles, which has been
solved by quantum field theory methods. When the agents imitate each other and
respond to recent market volatility, different scaling behavior is obtained. In
this case the statistics of price variations is consistent with empirical
observations. The interplay between "rational" traders, whose behavior is
derived from fundamental analysis of the stock, including dividends, and
"noise traders", whose behavior is governed solely by studying the market
dynamics, is investigated. When the relative number of rational traders is
small, "bubbles" often occur, where the market price moves outside the range
justified by fundamental market analysis. When the number of rational traders
is larger, the market price is generally locked within the price range they
define.

Comment: 39 pages (LaTeX) + 20 figures; Figure 1 missing; submitted to J. Math. Eco
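The crowd effect described in this abstract can be illustrated with a minimal agent-based sketch. This is an assumption-laden toy in the spirit of the model, not the authors' specification: agents hold +1/-1 positions, sometimes copy a random other agent, and the return is taken as the net imbalance.

```python
# Minimal imitation ("crowd") sketch: herding agents produce correlated
# position changes. Parameter values are arbitrary illustrative choices.
import random

def simulate_returns(n_agents=100, steps=500, p_imitate=0.9, seed=1):
    rng = random.Random(seed)
    state = [rng.choice((-1, 1)) for _ in range(n_agents)]
    returns = []
    for _ in range(steps):
        i = rng.randrange(n_agents)
        if rng.random() < p_imitate:
            state[i] = state[rng.randrange(n_agents)]  # herd: copy someone
        else:
            state[i] = rng.choice((-1, 1))             # act independently
        returns.append(sum(state) / n_agents)          # net imbalance as return
    return returns

rets = simulate_returns()
```

With high imitation probability the population drifts toward consensus and large swings in the imbalance occur, which is the mechanism the abstract invokes for non-Gaussian price variations.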
Stock Picking via Nonsymmetrically Pruned Binary Decision Trees
Stock picking is a field of financial analysis of particular interest to many professional investors and researchers. In this study, stock picking is implemented via binary classification trees. Optimal tree size is believed to be the crucial factor in the forecasting performance of the trees. While there exists a standard method of tree pruning, based on the cost-complexity tradeoff and used in the majority of studies employing binary decision trees, this paper introduces a novel methodology of nonsymmetric tree pruning called the Best Node Strategy (BNS). An important property of BNS is proven that provides an easy way to implement the search for the optimal tree size in practice. BNS is compared with the traditional pruning approach by composing two recursive portfolios out of XETRA DAX stocks. Performance forecasts for each of the stocks are provided by the constructed decision trees. It is shown that BNS clearly outperforms the traditional approach according to the backtesting results and the Diebold-Mariano test for statistical significance of the performance difference between the two forecasting methods.

Keywords: decision tree, stock picking, pruning, earnings forecasting, data mining
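The cost-complexity baseline that BNS is contrasted with can be sketched briefly. The snippet below is not BNS (which the paper introduces) but the standard weakest-link criterion: a subtree is collapsed once the penalty parameter alpha reaches the per-leaf error saved by keeping it. The tiny tree and its error counts are invented for illustration.

```python
# Sketch of the standard cost-complexity (weakest-link) pruning criterion.
class Node:
    def __init__(self, error, left=None, right=None):
        self.error = error   # misclassification cost if this node were a leaf
        self.left, self.right = left, right

def leaves_and_error(node):
    """Return (number of leaves, total leaf error) of the subtree."""
    if node.left is None:  # leaf
        return 1, node.error
    ll, le = leaves_and_error(node.left)
    rl, re = leaves_and_error(node.right)
    return ll + rl, le + re

def weakest_link_alpha(node):
    """Smallest alpha at which collapsing `node` is no worse than keeping its
    subtree: alpha = (R(node) - R(subtree)) / (|leaves| - 1)."""
    n_leaves, sub_error = leaves_and_error(node)
    return (node.error - sub_error) / (n_leaves - 1)

# Invented example: collapsing the root costs 10 errors vs 3 + 4 = 7 kept.
tree = Node(error=10, left=Node(error=3), right=Node(error=4))
alpha = weakest_link_alpha(tree)   # (10 - 7) / (2 - 1) = 3.0
```

Pruning symmetrically removes whole weakest-link subtrees at once; the paper's point is that a nonsymmetric rule over individual nodes can do better, which this baseline cannot express.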
On the Dark Side of the Market: Identifying and Analyzing Hidden Order Placements
Trading under limited pre-trade transparency is becoming increasingly popular on financial markets. We provide first evidence on traders' use of (completely) hidden orders, which may be placed even inside the (displayed) bid-ask spread. Employing TotalView-ITCH data on order messages at NASDAQ, we propose a simple method to conduct statistical inference on the location of hidden depth and to test economic hypotheses. Analyzing a wide cross-section of stocks, we show that market conditions reflected by the (visible) bid-ask spread, (visible) depth, recent price movements, and trading signals significantly affect the aggressiveness of 'dark' liquidity supply and thus the 'hidden spread'. Our evidence suggests that traders balance hidden order placements to (i) compete for the provision of (hidden) liquidity and (ii) protect themselves against adverse selection, front-running, and 'hidden order detection strategies' used by high-frequency traders. Accordingly, our results show that hidden liquidity locations are predictable given the observable state of the market.

Keywords: limit order market, hidden liquidity, high-frequency trading, non-display order, iceberg orders
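The core message-level observation behind this kind of inference can be sketched in a few lines: an execution that prints strictly inside the displayed quotes must have hit non-displayed depth. The function and field names below are assumptions for illustration, not the actual TotalView-ITCH message layout or the authors' estimator.

```python
# Simplified sketch: flag executions that reveal hidden liquidity because they
# printed strictly inside the visible bid-ask spread.
def execution_against_hidden(trade_price, best_bid, best_ask):
    """True if the trade printed strictly inside the displayed quotes."""
    return best_bid < trade_price < best_ask

# Visible quotes 10.00 x 10.05; a print at 10.02 reveals hidden depth,
# while a print at the visible ask does not.
inside = execution_against_hidden(10.02, 10.00, 10.05)
at_ask = execution_against_hidden(10.05, 10.00, 10.05)
```

Aggregating such flags across messages gives the raw material on which statistical inference about the location of hidden depth (the 'hidden spread') can be built.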
The supervised hierarchical Dirichlet process
We propose the supervised hierarchical Dirichlet process (sHDP), a
nonparametric generative model for the joint distribution of a group of
observations and a response variable directly associated with that whole group.
We compare the sHDP with another leading method for regression on grouped data,
the supervised latent Dirichlet allocation (sLDA) model. We evaluate our method
on two real-world classification problems and two real-world regression
problems. Bayesian nonparametric regression models based on the Dirichlet
process, such as the Dirichlet process-generalised linear models (DP-GLM) have
previously been explored; these models allow flexibility in modelling nonlinear
relationships. However, until now, Hierarchical Dirichlet Process (HDP)
mixtures have not seen significant use in supervised problems with grouped data
since a straightforward application of the HDP on the grouped data results in
learnt clusters that are not predictive of the responses. The sHDP solves this
problem by allowing for clusters to be learnt jointly from the group structure
and from the label assigned to each group.

Comment: 14 pages
Avoiding Chaos in Wonderland
Wonderland, a compact, integrated economic, demographic and environmental
model is investigated using methods developed for studying critical phenomena.
Simulation results show the parameter space separates into two phases, one of
which contains the property of long-term, sustainable development. By employing
information contained in the phase diagram, an optimal strategy involving
pollution taxes is developed as a means of moving a system initially in an
unsustainable region of the phase diagram into a region of sustainability, while
ensuring minimal regret with respect to long-term economic growth.

Comment: 22 pages, 9 figures. Submitted to Physica
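The idea of mapping a parameter space into sustainable and unsustainable phases can be illustrated with a crude sketch. This is not the Wonderland model; the one-variable pollution dynamic, its parameters, and the sustainability threshold below are all invented for illustration.

```python
# Toy phase-diagram scan: sweep (emission rate, tax rate) and label each point
# sustainable if pollution stays bounded over the horizon. Purely illustrative.
def sustainable(emission_rate, tax_rate, horizon=200, threshold=2.0):
    p = 1.0
    for _ in range(horizon):
        p = p + emission_rate - tax_rate * p   # emit, then abate via the tax
        if p > threshold:
            return False                        # pollution runs away
    return True

grid = [(e / 10, t / 10) for e in range(1, 10) for t in range(1, 10)]
phase = {(e, t): sustainable(e, t) for e, t in grid}
```

The grid splits into two regions separated by a boundary (here roughly where the steady state emission/tax exceeds the threshold), which is the kind of structure an optimal tax strategy can exploit to steer the system across the boundary.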
Making – or Picking – Winners: Evidence of Internal and External Price Effects in Historic Preservation Policies
Much has been written identifying property price effects of historic preservation policies. Little attention has been paid to the possible policy endogeneity in hedonic price models. This paper outlines a general case of land use regulation in the presence of externalities and then demonstrates the usefulness of the model in an instrumental-variables estimation of a hedonic price analysis – with an application to historic preservation in Chicago. The theoretical model casts doubt on previous results concerning price effects of preservation policies. The comparative statics identify some determinants of regulation that seem, on the face of it, most unlikely to also belong in a hedonic price equation. The analysis employs these determinants as instruments for endogenous regulatory treatment in a hedonic price analysis. OLS estimation of the hedonic offers results consistent with much of the previous literature, namely that property values are higher for historic landmarks. In the 2SLS hedonic, robust estimates of the "own" price effect of historic designation are shown to be large and negative (approx. -27%) for homes in landmark districts. Further, significant and substantively important (positive) external price effects of landmark designations are found. The paper concludes with a discussion of the policy implications of these findings for historic preservation.

Keywords: hedonics, built heritage, heritage valuation, real estate economics
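The identification problem the paper addresses, i.e. a regressor correlated with the error term biasing OLS while a valid instrument restores consistency, can be sketched with simulated data. The data-generating numbers below are made up and the single-instrument, just-identified case is used for simplicity (for one regressor and one instrument, the 2SLS estimate reduces to cov(z, y) / cov(z, x)).

```python
# Sketch: OLS vs. instrumental-variables estimation on simulated data with an
# endogenous regressor (think: designation correlated with unobservables).
import random

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)

rng = random.Random(0)
n, true_beta = 5000, 1.0
z = [rng.gauss(0, 1) for _ in range(n)]              # instrument
v = [rng.gauss(0, 1) for _ in range(n)]              # endogenous shock
x = [zi + vi for zi, vi in zip(z, v)]                # regressor, correlated with u
u = [0.8 * vi + 0.5 * rng.gauss(0, 1) for vi in v]   # error correlated with x
y = [true_beta * xi + ui for xi, ui in zip(x, u)]

beta_ols = cov(x, y) / cov(x, x)   # biased upward by cov(x, u) / var(x)
beta_iv = cov(z, y) / cov(z, x)    # consistent IV / 2SLS estimate
```

As in the paper's finding, the sign and size of the estimated effect can change materially once the endogenous treatment is instrumented.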
Evaluating the Effect of Public Subsidies on firm R&D activity: an Application to Italy Using the Community Innovation Survey
WP 09/2008. The aim of the paper is twofold: to test for a full policy failure of public support for private R&D effort in the presence of a potential plurality of public incentives, and to compare the most recent econometric methods used for the analysis of input additionality. Compared to previous studies, our work advances in two directions: adding robustness by comparing results from various econometric techniques, and providing an analysis of the R&D policy effect behind the average results. A by-product of the paper is a taxonomy of the econometric methods used in the literature, according to the structure of the models, the type of dataset, and the available policy information. We exploit the third wave of the Community Innovation Survey for Italy (1998-2000), with a sample of 1,221 supported and 1,319 non-supported firms. Given the type of data used, the article has two main limits: first, we do not know the level of the subsidy, so we can control only for the presence of total crowding-out; second, we can check only the short-run effect of the supporting policy, while an increase in private R&D effort may be more likely in the medium term. Our results suggest that: 1. the main factors influencing the probability of participating in the incentive policy are R&D experience, human skills, liquidity constraints, and also foreign capital ownership; 2. on average, total substitution of private funding by public funding is excluded for Italy as a whole, although some cases of total crowding-out are found: low knowledge-intensive services, very small firms (10-19 employees), and the auto-vehicle industry. On average, we find 885 additional thousand euros of R&D expenditure per firm, with a ratio equal to 4.62: if a generic control unit spends 1 thousand euros on R&D, a matched treated unit spends 4.62 thousand euros. The additionality for R&D intensity is about 0.014, with a ratio of about 2.67.
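The matching logic behind an additionality ratio like the 4.62 reported above can be sketched as follows. The nearest-neighbor rule, the propensity scores, and the R&D figures below are invented for illustration and are not the paper's CIS data or its exact estimator.

```python
# Sketch: nearest-neighbor matching on a score, then compare average R&D
# outlays of supported firms with their matched non-supported counterparts.
def nearest(score, pool):
    """Pick the control whose score is closest to `score`."""
    return min(pool, key=lambda item: abs(item[0] - score))

# (propensity score, R&D spending in thousand euros) -- illustrative only
treated = [(0.8, 1200), (0.6, 900), (0.7, 1000)]
controls = [(0.79, 300), (0.55, 200), (0.72, 250), (0.2, 50)]

matched = [nearest(s, controls) for s, _ in treated]
avg_treated = sum(r for _, r in treated) / len(treated)
avg_matched = sum(r for _, r in matched) / len(matched)
additionality = avg_treated - avg_matched   # extra R&D per supported firm
ratio = avg_treated / avg_matched           # cf. the paper's reported 4.62
```

The reported figures then read directly off such quantities: the average extra R&D spending per supported firm, and the treated-to-matched-control spending ratio.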