Industrially Applicable System Regression Test Prioritization in Production Automation
When changes are performed on an automated production system (aPS), new
faults can be accidentally introduced in the system, which are called
regressions. A common method for finding these faults is regression testing. In
most cases, this regression testing process is performed under high time
pressure and on-site in a very uncomfortable environment. Until now, there has been
no automated support for finding and prioritizing system test cases for
the fully integrated aPS that are suitable for finding regressions. Thus, the
testing technician has to rely on personal intuition and experience, possibly
choosing an inappropriate order of test cases, finding regressions at a very
late stage of the test run. Using a suitable prioritization, this iterative
process of finding and fixing regressions can be streamlined and a lot of time
can be saved by executing test cases likely to identify new regressions
earlier. Thus, an approach is presented in this paper that uses previously
acquired runtime data from past test executions and performs a change
identification and impact analysis to prioritize test cases that have a high
probability of unveiling regressions caused by side effects of a system change.
The approach was developed in cooperation with reputable industrial partners
active in the field of aPS engineering, ensuring a development in line with
industrial requirements. An industrial case study and an expert evaluation were
performed, showing promising results.
Comment: 13 pages, https://ieeexplore.ieee.org/abstract/document/8320514
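The general idea of change-impact-based prioritization can be sketched in a few lines. This is an illustrative stand-in, not the paper's actual algorithm: the data structures and component names below are hypothetical, while the real approach derives coverage from runtime data recorded during past test executions.

```python
# Sketch of change-impact-based test prioritization (illustrative only).
# Each test case is associated with the system components it exercised in
# past runs; tests touching more changed components are scheduled first.

def prioritize(test_coverage, changed_components):
    """Order test cases by how many changed components they exercise.

    test_coverage: dict mapping test name -> set of components exercised
                   in previous test executions (past runtime data).
    changed_components: set of components flagged by change identification.
    """
    def impact_score(test):
        return len(test_coverage[test] & changed_components)
    # Highest overlap with the change first; ties keep a stable order.
    return sorted(test_coverage, key=impact_score, reverse=True)

coverage = {
    "test_conveyor": {"plc_motor", "conveyor_ctrl"},
    "test_sorting":  {"sorter", "plc_motor", "vision"},
    "test_storage":  {"storage_ctrl"},
}
order = prioritize(coverage, changed_components={"plc_motor", "vision"})
print(order)  # test_sorting first: it exercises two changed components
```

With such an ordering, the regressions most likely to stem from the change surface at the start of the test run rather than the end.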
The Impact of Philanthropy on the Passage of the Affordable Care Act
This report has two aims. First, it seeks to examine the role of philanthropy in the passage of the Patient Protection and Affordable Care Act (ACA) in March 2010; in this regard, it resembles a traditional case study of philanthropic impact. But it also uses that examination to address some of the epistemic and methodological challenges involved in evaluating policy advocacy more generally; in this way, it also seeks to present a metastudy of the narratives of impact that have emerged regarding philanthropy and health care reform and the evidentiary support on which they are grounded.
The challenges in evaluating philanthropy's hand in shaping policy have been well documented; this report has certainly run up against many of them. Yet at least one of these challenges is addressed directly through the retrospective, historical approach that this report takes. If foundations have often found it difficult to evaluate grants aimed at effecting policy change because of the broad time horizon such transformation often requires, looking backwards from the vantage point of such a significant change—the passage of the ACA—provides an outstanding perspective on the question of philanthropic impact. Analysis is staked, in this case, to a particular legislative outcome. For this reason, this report does not engage the role of philanthropy in the implementation of the Affordable Care Act. However, it is important to note that many of the funders discussed below have taken a leading role in supporting that process and appreciate that passage of the legislation represented only an initial step in a lengthier campaign to ensure that all Americans have access to affordable, quality health care.
Assessing the Economic and Flow Regime Outcomes of Alternative Hydropower Operations on the Connecticut River's Mainstem
Hydropower provides a source of reliable and inexpensive energy, producing approximately 20% of the global energy supply, though it comes at a cost to riverine ecosystems. To maximize revenues, major hydropower facilities store and release water with respect to short-term changes in energy price, causing significant sub-daily flow regime alterations that impact downstream ecological communities. In the United States, the Federal Energy Regulatory Commission (FERC) is responsible for hydropower regulation and this is administered, in part, during periodic relicensing of existing facilities. The process of relicensing provides the opportunity to evaluate the goals and concerns of interested parties and evaluate potential operational changes in licensure which may support these goals, often including constraints aimed at supporting ecological improvements.
This paper explores potential changes in reservoir operating rules for a series of five peaking hydropower facilities on the Connecticut River undergoing FERC relicensing that should complete in 2019. This paper evaluates the trade-offs between two primary goals: maximizing revenues from hydroelectric power generation and returning the river to a more natural flow regime. These trade-offs are assessed using the Connecticut River Hydropower Operations Program (CHOP), a linear programming (LP) optimization model applied at an hourly time-step to capture the sub-daily effects to the flow regime. The model objective function is formulated to maximize hydropower revenues with respect to historical regional energy price data and is demonstrated to accurately mimic hydropeaking operating conditions and match historical power generating rates.
A case study compares modeled hydropower operating conditions between current hydropeaking operations and a strict run-of-river condition, where dam inflows must be directly released as outflows at all times. Analysis suggests that the run-of-river condition would result in a total economic loss of 7-9% of average annual revenues at the four mainstem facilities and as much as 17% at the larger, pumped-storage facility. However, an exploration of operating revenue losses at the pumped-storage facility suggests that there is potential for reoperations within the run-of-river operating condition to substantially reduce these losses. The run-of-river operation is demonstrated to improve the Connecticut River’s flow regime on the sub-daily time scale, with significant reductions in rates of change in flows to levels that approach those observed at a nearby unaltered location. The modeled improvements to the flow regime demonstrate the merit of this run-of-river condition as a potential reoperation for the hydropower system.
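The core trade-off the CHOP model captures can be illustrated with a toy hourly LP: maximize revenue from releases priced at hourly energy prices, subject to a water balance and per-hour flow limits. All numbers below are invented for illustration; the actual model adds storage dynamics, ramping constraints, and the pumped-storage facility.

```python
# Toy hourly hydropeaking LP in the spirit of the CHOP model (illustrative;
# the real model adds storage dynamics, ramping limits, and more constraints).
import numpy as np
from scipy.optimize import linprog

prices = np.array([20.0, 35.0, 60.0, 45.0, 25.0, 15.0])  # $/MWh proxy per hour
total_water = 300.0               # total release available over the horizon
min_flow, max_flow = 20.0, 100.0  # per-hour release limits (e.g., license terms)

# Maximize sum(price * release)  ==  minimize -sum(price * release)
res = linprog(
    c=-prices,
    A_eq=[np.ones_like(prices)], b_eq=[total_water],  # water mass balance
    bounds=[(min_flow, max_flow)] * len(prices),
    method="highs",
)
release = res.x
print(release)       # water concentrated in the highest-price hours
print(-res.fun)      # peaking revenue proxy

# Run-of-river analogue: the same water released uniformly every hour
run_of_river = np.full_like(prices, total_water / len(prices))
print(prices @ run_of_river)  # flat-release revenue, lower than peaking
```

The gap between the two printed revenues is the kind of peaking-versus-run-of-river loss the case study quantifies at 7-9% for the mainstem facilities.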
Profiling Methodology and Performance Tuning of the Met Office Unified Model for Weather and Climate Simulations
Global weather and climate modelling is a compute-intensive task that is mission-critical to government departments concerned with meteorology and climate change. The dominant component of these models is a global atmosphere model. One such model, the Met Office Unified Model (MetUM), is widely used in both Europe and Australia for this purpose. This paper describes our experiences in developing an efficient profiling methodology and scalability analysis of the MetUM version 7.5 at both low-scale and high-scale atmosphere grid resolutions. Variability within the execution of the MetUM and variability of the run-time of identical jobs on a highly shared cluster are taken into account. The methodology uses a lightweight profiler internal to the MetUM which we have enhanced to have minimal overhead and which enables accurate profiling with only a relatively modest usage of processor time. At high-scale resolution, the MetUM scaled to core counts of 2048, with load imbalance accounting for a significant fraction of the loss from ideal performance. Recent patches have removed two relatively small sources of inefficiency. Internal segment size parameters gave a modest performance improvement at low-scale resolution (such as is used in climate simulation); this, however, was not significant at higher scales. Near-square process grid configurations tended to give the best performance. Byte-swapping optimizations vastly improved I/O performance, which in turn has a large impact on performance in operational runs.
Technological change, diffusion and output growth
The thesis presents a critical review of both traditional and new growth models
emphasising their main implications and points of controversy. Three main research
directions have been followed, refining hypotheses advanced in the sixties. We first find
models which follow the learning by doing hypothesis and therefore consider knowledge
embodied in physical capital. The second class of models incorporates knowledge within
human capital while the third approach considers knowledge as generated by the research
sector which sells designs to the manufacturing sector producing capital goods. A typical
outcome of such models is the existence of externalities which cause divergence
between market and socially optimal equilibria. Policy intervention aimed at subsidising
either human capital or physical capital is thus justified.
Empirical analysis has received new impetus from the theoretical debate.
However, past empirical tests are mainly based on heterogeneous cross-section data
which take into account mean growth rates over given periods of time, and ignore pure
time series analysis. On empirical grounds, the role of investment in the growth process
has been emphasised. This variable has also been decomposed to consider the impact of
machinery and equipment investment alone.
In this thesis we have underlined six aspects of endogenous growth models,
which in our opinion reflect the main points of controversy:
i) scale effects;
ii) the treatment of knowledge as a production input;
iii) the role of institutions;
iv) the empirical controversy dealing with the robustness of growth regression
estimates and the measurement of the impact of some crucial variables (e.g.,
investment) on growth;
v) the simplified representation of R&D;
vi) the absence of any discussion of diffusion phenomena.
We then propose a new version of an R&D endogenous growth model, which
explicitly incorporates the diffusion of innovations and permits comparison with results
derived from other models which do not consider the diffusion process. In this new
model the interaction between the sector producing final output and the sector producing
capital goods generates the time path of diffusion and hence the growth rate of the
economy.
In this new model there is a clear growth effect of a change in the interest rate.
Such a change, on the one hand, affects the determination of the value of human capital
in research, and, on the other hand, affects the diffusion path of new producer durables.
This is important for policy because policy aimed at stimulating growth may be mainly
concerned with reductions of the interest rate and will thus cause a higher allocation to
human capital in research and a larger supply (and use) of new intermediate goods. In addition, there is another clear growth effect which derives from changes in the
parameter which defines the diffusion path of new capital goods. An increase in the
value of this parameter again causes an increase in human capital devoted to research and
an upward shift of the diffusion path, thus increasing the long-run growth rate. This
result underlines the difference with previous R&D endogenous growth models in that
we now have a clear distinction between the sectors producing and using new capital
goods.
The empirical implications of the theoretical models are then investigated by
testing the causal link between R&D and investment, on the one hand, and output
growth and investment on the other hand. Indeed, a crucial task of any empirical
investigation dealing with endogenous growth theories is to explain the nature of the
links between industrial research, investment and economic growth. There is much room
for study in this framework, as there are still only a few studies analysing these
relationships. Our analysis deals with both aggregate data for the US and UK economies
and an intersectoral analysis for the US manufacturing sector. We have used a test
procedure which allows us to analyse both the short-run and the long-run properties of
the variables using cointegration techniques. We are able to test for any feedback
between these variables, thus giving more detailed and robust evidence on the forces
underlying the growth process.
The results suggest that R&D Granger-causes investment in machinery and
equipment only in the US economy. However, there is evidence of long-run feed-back
implying that investment may also affect R&D. In the UK economy there is no evidence
for R&D causing investment, nor is there strong evidence of long-run feed-back between
the two variables. This suggests that the causal link between R&D and investment may
not be thought of as a stylised fact in industrialised economies.
We have also analysed the relationship between investment and output growth to
test whether investment may be considered as the key factor in the growth process. We
find little support for the hypothesis that investment has a long-run effect on growth. In
addition, causality tests support bi-directional causality between these variables in the US
economy, while in the UK economy output growth causes investment both in the short-run
and in the long-run.
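The Granger-causality logic behind the R&D and investment tests can be illustrated with simulated data: lags of one series x improve the forecast of another series y only if x Granger-causes y, which an F-test on restricted versus unrestricted regressions picks up. This is a sketch only; the thesis works with actual US and UK series and cointegration techniques, and the data below are simulated.

```python
# Illustrative Granger-causality test of the kind used for the R&D ->
# investment link: does adding lags of x improve a forecast of y?
# (Simulated data; the thesis uses actual US/UK series.)
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, lag = 400, 2
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(lag, n):  # y genuinely depends on lagged x by construction
    y[t] = 0.3 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()

def lagged(series, lags, n_obs, start):
    """Matrix whose columns are series lagged by 1..lags."""
    return np.column_stack(
        [series[start - k:start - k + n_obs] for k in range(1, lags + 1)]
    )

n_obs = n - lag
Y = y[lag:]
ones = np.ones((n_obs, 1))
X_r = np.hstack([ones, lagged(y, lag, n_obs, lag)])   # restricted: own lags only
X_u = np.hstack([X_r, lagged(x, lag, n_obs, lag)])    # unrestricted: + lags of x

def rss(X, Y):
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta
    return resid @ resid

rss_r, rss_u = rss(X_r, Y), rss(X_u, Y)
dfd = n_obs - X_u.shape[1]
F = ((rss_r - rss_u) / lag) / (rss_u / dfd)
p = 1 - stats.f.cdf(F, lag, dfd)
print(f"F = {F:.1f}, p = {p:.4f}")  # small p: x Granger-causes y here
```

Running the same machinery in both directions, and on levels versus differences, is what lets the thesis speak to both short-run causality and long-run feedback.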
An Agent Based Simulation Model of the Potential Impact of Second Generation Bioenergy Commodities on the Grain – Livestock Economy of South-Eastern Saskatchewan
Second-generation biofuel technology is in its early stages of development in Canada, and its impact on the Canadian Prairies is currently unclear. The development of policy incentives for second-generation biofuels must be examined carefully to give the correct signals to encourage farmers to shift land into the socially optimal land-use. Traditionally, the policy process has treated Prairie farmers and the landscape as homogeneous. Agricultural policy tends to be formed on a one-size-fits-all notion through the use of aggregated data and the homogeneous stereotype of Prairie farmers. The complex nature of the various soil productivity levels across the landscape, together with farmer characteristics and attitudes, creates impractical representations at the farm level using traditional modelling (typically econometric or general equilibrium analysis).
In this thesis an agent based simulation modelling (ABSM) methodology was used to examine the competitiveness of second-generation biofuel crops with existing crops and beef cows at the farm level and their impact on the farm structure, building on the work of Stolniuk (2008) and Freeman (2005). ABSM is well suited to problems involving large numbers of interacting actors located on a heterogeneous landscape. In assessing alternative policies, ABSM considers interactions between individual farmers in land markets and allows an individual agent (farmer) to make decisions representative of their farm rather than of aggregated regional data, avoiding the aggregation bias found in many regional models.
In addition, three sequential (strategic, tactical and recourse) optimization stages are used in order to better reflect the uncertainty and recourse decisions available to Prairie farmers to determine short-run and long-run production decisions using linear and integer programming techniques. In the first decision stage, a Mixed Integer Programming (MIP) model is used to determine long-run strategic decisions associated with herd size, perennial crops, and machinery used in annual cropping systems along with short-run decisions that optimize annual crop rotations to maximize profits. The second-stage decision is a tactical decision process in the sense that it supports the strategic investment decisions of the farm enterprise by maximizing short-run profits using linear programming (LP). The third stage, also an LP model, handles short-run recourse decisions, using stochastic yields and stochastic prices to balance feed rations for beef cow enterprises at minimum feeding cost. Each farmer agent’s optimal decision is influenced by their own expected prices and yields, variable costs, operating capital/cash flow, and the constraints endowed by the farm agent’s land allocation.
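The third-stage feed-ration decision is a classic least-cost diet LP. A minimal sketch follows, with hypothetical feeds, prices, and nutrient requirements (none of these numbers come from the thesis):

```python
# Sketch of a third-stage recourse LP: a least-cost feed ration for the
# beef-cow enterprise, meeting nutrient floors at minimum cost
# (hypothetical feeds and nutrient values, for illustration only).
from scipy.optimize import linprog

# Feeds: hay, barley
cost    = [0.10, 0.18]   # $/kg (stand-in for stochastic prices)
energy  = [8.0, 12.5]    # MJ/kg
protein = [0.09, 0.12]   # protein fraction
req_energy, req_protein = 100.0, 1.0  # daily requirements per animal

# Minimize cost subject to nutrient floors; a >= constraint becomes <=
# after negating both sides, which is the form linprog expects.
res = linprog(
    c=cost,
    A_ub=[[-e for e in energy], [-p for p in protein]],
    b_ub=[-req_energy, -req_protein],
    bounds=[(0, None), (0, None)],
    method="highs",
)
hay_kg, barley_kg = res.x
print(f"hay {hay_kg:.2f} kg, barley {barley_kg:.2f} kg, cost ${res.fun:.2f}")
```

In the full model this small LP is re-solved per agent per period against that agent's realized (stochastic) prices and yields, which is what makes it a recourse stage.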
The farmer agent profiles are developed using actual census of agriculture and whole farm survey data, with each farmer agent developed differently from the next. The landscape is modelled using the actual soil productivity ratings from Saskatchewan Assessment Management Agency (SAMA) for each 640 acre farmland plot. Due to the importance of transitional and marginal lands, the landscape employed as the case study area is Census Agricultural Region (CAR) 1A of the Assiniboine River Basin of Saskatchewan.
Following Stolniuk (2008), a bootstrapping procedure on historical price and yield data is used to generate 50 different price and yield time paths. The 50 different time paths are used in the model, simulating 30 years into the future to identify the structural change implications from the introduction of energy crops at the farm-level. Three scenarios are simulated including a base case scenario (no energy crops), along with two energy price scenarios (4/GJ) based on the identical 50 price and yield time paths.
Perhaps not surprisingly, the simulation results indicate that energy crops have the potential to change the structure of agriculture in this region. Energy crops emerge in the model in both of the energy price scenarios, while total farm sector equity and total sector net income is improved over the base scenario. Farmers with significant quantities of marginal land would experience the greatest change in their farm structures by adopting energy crops if they chose to go down this path. Marginal land-use has a large effect on the energy crop scenarios, primarily on hay and forage acres. Beef cow farmer agents improve their situation the most over the base scenario due to the introduction of energy crops.
Time-Interval Analysis for Radiation Monitoring
On-line radiation monitoring is essential to the U.S. Department of Energy (DOE) Environmental Management Science Program for assessing the impact of contaminated media at DOE sites. The goal of on-line radiation monitoring is to quickly detect small or abrupt changes in activity levels in the presence of a significant ambient background. The focus of this research is on developing effective statistical algorithms to meet the goal of on-line monitoring based on time-interval (the time difference between two consecutive radiation pulses) data. Compared to the more commonly used count data, which are registered over a fixed count time, time-interval data possess the potential to reduce the sampling time required to obtain statistically sufficient information to detect changes in radiation levels. This dissertation has been formulated into three sections based on three statistical methods: sequential probability ratio test (SPRT), Bayesian statistics, and cumulative sum (CUSUM) control chart. In each section, time-interval analysis based on one of the three statistical methods was investigated and compared to conventional analyses based on count data in terms of average run length (ARL, or average time to detect a change in radiation levels) and detection probability with both experimental and simulated data. The experimental data were acquired with a DGF-4C (XIA, Inc.) system in list mode. Simulated data were obtained by using Monte Carlo techniques to obtain a random sampling of a Poisson process. Statistical algorithms were developed using the statistical software package R and the programming functions built into the data analysis environment IGOR Pro 4.03. Overall, the results showed that the statistical analyses based on time-interval data provided similar or higher detection probabilities relative to those based on count data, and were able to make a quicker detection with fewer pulses at relatively higher radiation levels.
To increase the detection probability and further reduce the time needed to detect a change in radiation levels for time-interval analyses, modifications or adjustments were proposed for each of the three chosen statistical methods. Parameter adjustment to the preset background level in the SPRT test could reduce the average time to detect a source by 50%. Enhanced reset modification and moving prior modification proposed for the Bayesian analysis of time-intervals resulted in a higher detection probability than the Bayesian analysis without modifications, and were independent of the amount of background data registered before a radioactive source was present. The robust CUSUM control chart coupled with a modified runs rule showed the ability to further reduce the ARL to respond to changes in radiation levels, and keep the false positive rate at a required level, e.g., about 40% shorter than the standard time-interval CUSUM control chart at 10.0 cps relative to a background count rate of 2.0 cps. The developed statistical algorithms for time-interval data analyses demonstrate the feasibility and versatility for on-line radiation monitoring. The special properties of time-interval information provide an alternative for low-level radiation monitoring. These findings establish an important base for future on-line monitoring applications when time-interval data are registered.
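The first of the three methods, the SPRT applied to time-interval data, can be sketched as follows. For exponentially distributed inter-arrival times, each new interval updates a log-likelihood ratio between the background rate and an elevated rate until a Wald threshold is crossed. The rates and error levels below are illustrative, not the dissertation's actual parameters.

```python
# Sketch of the SPRT on time-interval data: each new pulse interval t
# updates a log-likelihood ratio between background rate lam0 and an
# elevated rate lam1 until a decision threshold is crossed.
# (Illustrative rates and error levels only.)
import math
import random

def sprt_intervals(intervals, lam0=2.0, lam1=10.0, alpha=0.01, beta=0.01):
    upper = math.log((1 - beta) / alpha)  # cross: accept H1 (source present)
    lower = math.log(beta / (1 - alpha))  # cross: accept H0 (background only)
    llr = 0.0
    for n, t in enumerate(intervals, start=1):
        # log f1(t)/f0(t) for exponential inter-arrival densities
        llr += math.log(lam1 / lam0) - (lam1 - lam0) * t
        if llr >= upper:
            return "source", n
        if llr <= lower:
            return "background", n
    return "undecided", len(intervals)

random.seed(1)
# Pulses arriving at 10 cps (source present on top of background)
source_run = [random.expovariate(10.0) for _ in range(200)]
print(sprt_intervals(source_run))

# Pulses at the 2 cps background rate alone
bkg_run = [random.expovariate(2.0) for _ in range(200)]
print(sprt_intervals(bkg_run))
```

Because a decision can fall after any single pulse rather than at the end of a fixed count time, short intervals at elevated rates drive the statistic across the threshold quickly, which is the sampling-time advantage the dissertation exploits.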