EEOC v. AHMC Garfield Medical Center LP dba Garfield Medical Center, Inc., and Does 1-10, Inclusive
The application of performance measures in the UK retail sector: an exploratory analysis
Abstract: An empirical investigation of the use of performance measurement by small and medium-sized online retailers in the UK. The purpose of the study is to investigate the type and range of performance measures applied and the extent to which these measures are likely to affect business performance and strategy development. The key findings are that, whilst a good range of measures is applied, the measures are more likely to be used for checking strategy implementation than for strategy formulation or for informing corrective action to ensure longer-term strategic success. Further work is required to explore the relationships between strategy and business performance.
Qualitative System Identification from Imperfect Data
Experience in the physical sciences suggests that the only realistic means of
understanding complex systems is through the use of mathematical models.
Typically, this has come to mean the identification of quantitative models
expressed as differential equations. Quantitative modelling works best when the
structure of the model (i.e., the form of the equations) is known; and the
primary concern is one of estimating the values of the parameters in the model.
For complex biological systems, the model-structure is rarely known and the
modeler has to deal with both model-identification and parameter-estimation. In
this paper we are concerned with providing automated assistance to the first of
these problems. Specifically, we examine the identification by machine of the
structural relationships between experimentally observed variables. These
relationships will be expressed in the form of qualitative abstractions of a
quantitative model. Such qualitative models may not only provide clues to the
precise quantitative model, but also assist in understanding the essence of
that model. Our position in this paper is that background knowledge
incorporating system modelling principles can be used to constrain effectively
the set of good qualitative models. Utilising the model-identification
framework provided by Inductive Logic Programming (ILP) we present empirical
support for this position using a series of increasingly complex artificial
datasets. The results are obtained with qualitative and quantitative data
subject to varying amounts of noise and different degrees of sparsity. The
results also point to the presence of a set of qualitative states, which we
term kernel subsets, that may be necessary for a qualitative model-learner to
learn correct models. We demonstrate scalability of the method to biological
system modelling by identification of the glycolysis metabolic pathway from
data.
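The notion of qualitative abstraction described above can be illustrated with a minimal sketch. This is not the paper's ILP machinery, just the basic mapping it builds on: a numeric trajectory is reduced to the signs of its changes, yielding the kind of qualitative state sequence a model-learner could reason over.

```python
# Minimal illustration of qualitative abstraction (not the paper's ILP method):
# reduce a numeric trajectory to the signs of its successive changes.

def qualitative(series, eps=1e-9):
    """Map consecutive differences to '+' (increasing), '-' (decreasing),
    or '0' (steady within tolerance eps)."""
    states = []
    for prev, curr in zip(series, series[1:]):
        delta = curr - prev
        states.append('+' if delta > eps else '-' if delta < -eps else '0')
    return states

# A concentration trace abstracts to a coarse rise-plateau-fall pattern.
trace = [0.1, 0.4, 0.9, 0.9, 0.6, 0.2]
print(qualitative(trace))  # ['+', '+', '0', '-', '-']
```

Noise and sparsity, the two complications the abstract studies, show up here as the choice of `eps` and as missing samples in `series`.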
Non-universal Z′ from Fluxed GUTs
We make a first systematic study of non-universal TeV scale neutral gauge
bosons (Z′) arising naturally from a class of F-theory inspired models broken
via flux. The phenomenological models we consider may originate from
semi-local F-theory GUTs arising from a single point of local
enhancement, assuming the minimal monodromy in order to allow for
a renormalisable top quark Yukawa coupling. We classify such non-universal
anomaly-free models requiring a minimal low energy spectrum and also
allowing for a vector-like family. We discuss to what extent such models can
account for the anomalous B-decay ratios R_K and R_{K^*}. Comment: 14 pages
Gauge Coupling Unification in E6 F-Theory GUTs with Matter and Bulk Exotics from Flux Breaking
We consider gauge coupling unification in E6 F-Theory Grand Unified Theories
(GUTs) where E6 is broken to the Standard Model (SM) gauge group using fluxes.
In such models there are two types of exotics that can affect gauge coupling
unification, namely matter exotics from the matter curves in the 27 dimensional
representation of E6 and the bulk exotics from the adjoint 78 dimensional
representation of E6. We explore the conditions required for either the
complete or partial removal of bulk exotics from the low energy spectrum. In
the latter case we shall show that (miraculously) gauge coupling unification
may be possible even if there are bulk exotics at the TeV scale. Indeed in some
cases it is necessary for bulk exotics to survive to the TeV scale in order to
cancel the effects coming from other TeV scale matter exotics which would by
themselves spoil gauge coupling unification. The combination of matter and bulk
exotics in these cases can lead to precise gauge coupling unification which
would not be possible with either type of exotics considered by themselves. The
combination of matter and bulk exotics at the TeV scale represents a unique and
striking signature of E6 F-theory GUTs that can be tested at the LHC. Comment: 21 pages, 5 figures
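The mechanism the abstract describes rests on one-loop renormalisation-group running, where extra TeV-scale matter shifts the beta coefficients b_i. A minimal sketch, using standard SM one-loop coefficients and approximate coupling values at M_Z (the exotic spectra of the E6 models themselves are not reproduced here):

```python
import math

# One-loop running of the inverse gauge couplings,
#   1/alpha_i(mu) = 1/alpha_i(M_Z) - (b_i / 2*pi) * ln(mu / M_Z).
# Beta coefficients are the SM one-loop values (GUT-normalised hypercharge);
# inputs at M_Z are approximate textbook numbers.

M_Z = 91.19                                 # GeV
ALPHA_INV_MZ = [59.0, 29.6, 8.45]           # approx 1/alpha_{1,2,3}(M_Z)
B_SM = [41.0 / 10.0, -19.0 / 6.0, -7.0]     # SM one-loop coefficients

def alpha_inv(mu, b, alpha_inv_mz=ALPHA_INV_MZ, mz=M_Z):
    """Inverse couplings 1/alpha_i at scale mu (GeV) for coefficients b."""
    t = math.log(mu / mz)
    return [a - bi * t / (2.0 * math.pi) for a, bi in zip(alpha_inv_mz, b)]

# In the pure SM the three couplings fail to meet at a common scale.
print(alpha_inv(1e16, B_SM))
```

A complete GUT multiplet at the TeV scale adds the same amount to every b_i, leaving the relative running (and hence unification) intact; split multiplets shift the b_i unequally, which is how combinations of matter and bulk exotics can either spoil or, as the abstract argues, restore precise unification.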
Workplace screening programs for chronic disease prevention: a rapid review
This review examined the effectiveness of workplace screening programs for chronic disease prevention based on evidence retrieved from the main databases of biomedical and health economic literature published to March 2012, supplemented with relevant reports. The review found:
1. Strong evidence of effectiveness of health risk assessments (HRAs), when used in combination with other interventions, in relation to tobacco use, alcohol use, dietary fat intake, blood pressure and cholesterol.
2. Sufficient evidence of effectiveness of worksite programs to control overweight and obesity.
3. Sufficient evidence of effectiveness for workplace HRAs in combination with additional interventions to have a favourable impact on the use of healthcare services (such as reductions in emergency department visits, outpatient visits, and inpatient hospital days over the longer term).
4. Sufficient evidence of effectiveness of benefits-linked financial incentives in increasing HRA and program participation.
5. Sufficient evidence that for every dollar invested in these programs an annual gain of $1.40 to $4.60 can be achieved.
6. Promising evidence that even higher returns on investment can be achieved in programs incorporating newer technologies such as telephone coaching of high-risk individuals and benefits-linked financial incentives.
Muck: A Build Tool for Data Journalists
Veracity and reproducibility are vital qualities for any data journalism project. As computational investigations become more complex and time consuming, the effort required to maintain correctness of code and conclusions increases dramatically. This report presents Muck, a new tool for organizing and reliably reproducing data computations. Muck is a command line program that plays the role of the build system in traditional software development, except that instead of being used to compile code into executable applications, it runs data processing scripts to produce output documents (e.g., data visualizations or tables of statistical results). In essence, it automates the task of executing a series of computational steps to produce an updated product. The system supports a variety of languages, formats, and tools, and draws upon well-established Unix software conventions.
A great deal of data journalism work can be characterized as a process of deriving data from original sources. Muck models such work as a graph of computational steps and uses this model to update results efficiently whenever the inputs or code change. This algorithmic approach relieves programmers from having to constantly worry about the dependency relationships between various parts of a project. At the same time, Muck encourages programmers to organize their code into modular scripts, which can make the code more readable for a collaborating group. The system relies on a naming convention to connect scripts to their outputs, and automatically infers the dependency graph from these implied relationships. Thus, unlike more traditional build systems, Muck requires no configuration files, which makes altering the structure of a project less onerous.
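The inference step described above can be sketched in a few lines. The naming convention used here (a script `X.py` builds any product whose stem is `X`) is hypothetical, chosen only to illustrate the idea of deriving a dependency graph from names instead of a configuration file; it is not a description of Muck's actual rule or API.

```python
# Toy sketch of inferring a build graph from a naming convention
# (illustrative only; not Muck's actual convention or implementation).

def infer_graph(files):
    """Map each product file to the script sharing its stem, if any."""
    stem = lambda f: f.rsplit('.', 1)[0]
    scripts = {stem(f): f for f in files if f.endswith('.py')}
    return {f: scripts[stem(f)] for f in files
            if not f.endswith('.py') and stem(f) in scripts}

def stale(product, graph, mtime):
    """A product needs rebuilding if its script changed after it was written."""
    script = graph.get(product)
    return script is not None and mtime[script] > mtime[product]

files = ['clean.py', 'clean.csv', 'chart.py', 'chart.svg', 'raw.csv']
graph = infer_graph(files)
print(graph)  # {'clean.csv': 'clean.py', 'chart.svg': 'chart.py'}
mtime = {'clean.py': 2, 'clean.csv': 1, 'chart.py': 1, 'chart.svg': 3, 'raw.csv': 0}
print(stale('clean.csv', graph, mtime))  # True: clean.py edited after clean.csv
```

Note that `raw.csv` has no matching script and is left alone, mirroring how original source data sits at the root of the derivation graph.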
Muck's development was motivated by conversations with working data journalists and students. This report describes the rationale for building a new tool, its compelling features, and preliminary experience testing it with several demonstration projects. Muck has proven successful for a variety of use cases, but work remains to be done on documentation, compatibility, and testing. The long-term goal of the project is to provide a simple, language-agnostic tool that allows journalists to better develop and maintain ambitious data projects.