4,023 research outputs found

    Machine Learning for Ad Publishers in Real Time Bidding


    How predictable is technological progress?

    Recently it has become clear that many technologies follow a generalized version of Moore's law, i.e. costs tend to drop exponentially, at different rates that depend on the technology. Here we formulate Moore's law as a correlated geometric random walk with drift, and apply it to historical data on 53 technologies. We derive a closed form expression approximating the distribution of forecast errors as a function of time. Based on hind-casting experiments we show that this works well, making it possible to collapse the forecast errors for many different technologies at different time horizons onto the same universal distribution. This is valuable because it allows us to make forecasts for any given technology with a clear understanding of the quality of the forecasts. As a practical demonstration we make distributional forecasts at different time horizons for solar photovoltaic modules, and show how our method can be used to estimate the probability that a given technology will outperform another technology at a given point in the future.
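The forecasting model described above can be sketched with a small Monte Carlo simulation. This is a minimal illustration, not the paper's code: the parameters below are invented for demonstration (the paper fits them to 53 technologies and derives a closed-form error distribution instead of simulating it).

```python
import random
import math

def simulate_log_cost(y0, mu, sigma, theta, horizon, rng):
    """One sample path of a correlated geometric random walk with drift
    on log cost (an IMA(1,1)-style process, as in the abstract's model):
        y_t = y_{t-1} + mu + e_t + theta * e_{t-1}
    Returns the log cost after `horizon` steps."""
    y, e_prev = y0, 0.0
    for _ in range(horizon):
        e = rng.gauss(0.0, sigma)
        y += mu + e + theta * e_prev
        e_prev = e
    return y

def distributional_forecast(y0, mu, sigma, theta, horizon,
                            n_paths=20000, seed=1):
    """Monte Carlo forecast distribution of future log cost:
    returns (median, 5th percentile, 95th percentile)."""
    rng = random.Random(seed)
    ys = sorted(simulate_log_cost(y0, mu, sigma, theta, horizon, rng)
                for _ in range(n_paths))
    pick = lambda q: ys[int(q * (n_paths - 1))]
    return pick(0.5), pick(0.05), pick(0.95)

# Illustrative parameters only (not fitted to any technology's data):
median, lo, hi = distributional_forecast(y0=math.log(1.0), mu=-0.10,
                                         sigma=0.15, theta=0.3, horizon=10)
print(f"10-step cost forecast: median {math.exp(median):.2f}, "
      f"90% interval [{math.exp(lo):.2f}, {math.exp(hi):.2f}]")
```

Because the noise is autocorrelated (theta > 0), the forecast interval widens faster with horizon than for an uncorrelated random walk, which is why the paper's error distribution depends on both the horizon and the correlation structure.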

    Accuracy in design cost estimating

    The level of achieved accuracy in design cost estimating is generally accepted by researchers as being less than desirable. Low accuracy has been attributed to the nature of historical cost data, the estimating method, and the expertise of the estimator. Previous researchers have suggested that the adoption of resource-based estimating by designers could eliminate data- and method-related problems. The work in this thesis has shown that this will not solve the problem of inaccuracy in estimating. A major problem in assessing accuracy in design cost estimating has been the absence of a generally agreed definition of the 'true cost' of a construction project. Hitherto, studies of accuracy in design cost estimating have relied solely on the assessment of errors using the low bid as a datum. Design cost estimators do not always focus on predicting the low bid. Rather, they may focus on the lowest, second lowest, third lowest or any other bid, the mean/median of bids, or sometimes on just being 'within the collection'. This has resulted in designers and researchers having different views on the level of achieved accuracy in estimating. To resolve this problem, an analysis package, ACCEST (ACCuracy in ESTimating), was developed to facilitate 'fair' assessment of accuracy in design cost estimates. Tests - using cost data from 7 offices, the ACCEST package and the OPEN ACCESS II package on an IBM PS/2 - have shown that error in design cost estimating (averaging 3.6% higher than the predicted parameter) is much lower than portrayed in construction literature (averaging 13% higher than the low bid). Also, false associations between project environment factors (such as geographical location, market conditions, number of bidders, etc.) and the level of achieved accuracy have been developed by researchers through using the low bid as a datum. Previous research has also demonstrated that design estimators do not learn sufficiently from experience on past projects.
A controlled experiment on design cost estimating information selection was designed to explain this occurrence. Failure to learn, and the persistent use of information from one project for estimating, has been shown to result from the method of information storage in design offices, the illusion of validity of inaccurate rules, and over-confidence resulting from inaccurate assessment of individual expertise. A procedure for aiding learning from experience in design cost estimating has been suggested. Finally, the work has shown that by distinguishing between different trades, and selectively applying different estimating strategies based on objective evaluation of the uncertainty associated with cost prediction for each trade, error in design cost estimating could be further reduced. Two formulae for predicting tender prices using data generated from historical cost estimating experience are presented.
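The abstract's central point, that measured accuracy depends on which datum the estimator was actually targeting, can be made concrete with a short sketch. The function name, interface, and bid values below are illustrative assumptions, not part of the ACCEST package:

```python
def estimating_error(estimate, bids, datum="low"):
    """Percentage error of a design cost estimate against a chosen datum.
    The same estimate looks very different depending on whether the
    estimator was targeting the low bid, the second-lowest bid, or the
    mean of bids (a hypothetical helper, not the thesis's ACCEST code)."""
    bids = sorted(bids)
    if datum == "low":
        target = bids[0]
    elif datum == "second":
        target = bids[1]
    elif datum == "mean":
        target = sum(bids) / len(bids)
    else:
        raise ValueError(f"unknown datum: {datum}")
    return 100.0 * (estimate - target) / target

bids = [95.0, 100.0, 104.0, 110.0]   # hypothetical tender bids
estimate = 101.0
for d in ("low", "second", "mean"):
    print(f"{d:>6}: {estimating_error(estimate, bids, d):+.1f}%")
```

An estimate that looks over 6% high against the low bid is within about 1% of the second-lowest bid and slightly below the mean, which is exactly the kind of discrepancy the thesis attributes to using the low bid as the sole datum.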

    An Automated Deep Reinforcement Learning Pipeline for Dynamic Pricing

    A dynamic pricing problem is difficult due to the highly dynamic environment and unknown demand distributions. In this article, we propose a deep reinforcement learning (DRL) framework, which is a pipeline that automatically defines the DRL components for solving a dynamic pricing problem. The automated DRL pipeline is necessary because the DRL framework can be designed in numerous ways, and manually finding optimal configurations is tedious. The levels of automation make nonexperts capable of using DRL for dynamic pricing. Our DRL pipeline contains three steps of DRL design, including Markov decision process modeling, algorithm selection, and hyperparameter optimization. It starts with transforming available information to state representation and defining reward function using a reward shaping approach. Then, the hyperparameters are tuned using a novel hyperparameter optimization method that integrates Bayesian optimization and the selection operator of the genetic algorithm. We employ our DRL pipeline on reserve price optimization problems in online advertising as a case study. We show that using the DRL configuration obtained by our DRL pipeline, a pricing policy is obtained whose revenue is significantly higher than the benchmark methods. The evaluation is performed by developing a simulation for the real-time bidding environment that makes exploration possible for the reinforcement learning agent.
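The hyperparameter-tuning step can be sketched as follows. This is a simplified stand-in, not the paper's method: the genetic selection operator is shown as described, but new candidates are resampled at random here, whereas the paper proposes them via Bayesian optimization; the search space and scoring function are invented for illustration.

```python
import random

def sample_config(rng):
    """Draw one DRL hyperparameter configuration (illustrative search space)."""
    return {"lr": 10 ** rng.uniform(-5, -2),
            "gamma": rng.uniform(0.90, 0.999),
            "batch_size": rng.choice([32, 64, 128])}

def evaluate(config):
    """Stand-in for training a pricing agent and measuring its revenue.
    In the paper this is a full DRL run in a real-time-bidding simulator;
    a synthetic score keeps this sketch runnable."""
    return (-abs(config["lr"] - 1e-3) * 1e3
            - abs(config["gamma"] - 0.99) * 10)

def tune(generations=5, pop_size=12, seed=0):
    """Keep the top half of configurations each generation (the genetic
    selection operator) and refill the population with fresh candidates."""
    rng = random.Random(seed)
    pop = [sample_config(rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=evaluate, reverse=True)
        survivors = pop[: pop_size // 2]          # selection operator
        pop = survivors + [sample_config(rng)
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=evaluate)

best = tune()
print("best configuration found:", best)
```

The point of the hybrid design is that the selection operator discards weak configurations cheaply, while the (omitted here) Bayesian surrogate spends the expensive DRL training runs only on promising regions of the search space.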

    Agent-Based Models and Human Subject Experiments

    This paper considers the relationship between agent-based modeling and economic decision-making experiments with human subjects. Both approaches exploit controlled "laboratory" conditions as a means of isolating the sources of aggregate phenomena. Research findings from laboratory studies of human subject behavior have inspired studies using artificial agents in "computational laboratories" and vice versa. In certain cases, both methods have been used to examine the same phenomenon. The focus of this paper is on the empirical validity of agent-based modeling approaches in terms of explaining data from human subject experiments. We also point out synergies between the two methodologies that have been exploited as well as promising new possibilities.

    Keywords: agent-based models, human subject experiments, zero-intelligence agents, learning, evolutionary algorithms
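One of the keyword topics, zero-intelligence agents, is easy to illustrate: budget-constrained traders who quote random prices in a double auction (in the style of Gode and Sunder's ZI-C traders). The market setup below is a deliberately bare-bones sketch with invented valuations, not any specific paper's code.

```python
import random

def zi_constrained_session(buyer_values, seller_costs, rounds=2000, seed=0):
    """Minimal zero-intelligence-with-constraint market: each round a
    random buyer bids uniformly below her valuation and a random seller
    asks uniformly above his cost (up to an arbitrary price ceiling);
    a trade occurs whenever bid >= ask."""
    rng = random.Random(seed)
    buyers, sellers = list(buyer_values), list(seller_costs)
    prices = []
    for _ in range(rounds):
        if not buyers or not sellers:
            break
        b = rng.randrange(len(buyers))
        s = rng.randrange(len(sellers))
        bid = rng.uniform(0, buyers[b])        # never bid above valuation
        ask = rng.uniform(sellers[s], 200)     # never ask below cost
        if bid >= ask:
            prices.append((bid + ask) / 2)     # settle at the midpoint
            buyers.pop(b)                      # one unit each, then exit
            sellers.pop(s)
    return prices

prices = zi_constrained_session([120, 110, 100, 90], [40, 50, 60, 70])
print(len(prices), "trades; mean price",
      round(sum(prices) / len(prices), 1) if prices else None)
```

Even though the agents have no strategy at all, the budget constraint alone keeps every transaction price between some seller's cost and some buyer's valuation, which is the sense in which such "computational laboratories" isolate how much of market efficiency comes from institutions rather than trader rationality.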

    Strategic Project Portfolio Management by Predicting Project Performance and Estimating Strategic Fit

    Candidate project selections are crucial for infrastructure construction companies. First, they determine how well the planned strategy will be realized during the following years. If the selected projects do not align with the competences of the organization, major losses can occur during the projects’ execution phase. Second, participating in tendering competitions is costly manual labour, and losing a bid directly increases the overhead costs of the organization. Still, contractors rarely utilize statistical methods to select projects that are more likely to be successful. In response to these two issues, a tool for the project portfolio selection phase was developed based on existing literature about strategic fit estimation and project performance prediction. One way to define the strategic fit of a project is to evaluate the alignment between the characteristics of a project and the strategic objectives of an organisation. Project performance, on the other hand, can be measured with various financial, technical, production, risk, or human-resource-related criteria. Depending on which measure is highlighted, the likelihood of succeeding with regard to a performance measure can be predicted with numerous machine learning methods, of which decision trees were used in this study. By combining the strategic fit and likelihood-of-success measures, a two-by-two matrix was formed. The matrix can be used to categorize project opportunities into four categories (ignore, analyse, cash-in, and focus) that can guide candidate project selections. To test and demonstrate the performance of the matrix, the case company’s CRM data was used to estimate strategic fit and the likelihood of succeeding in tendering competitions. First, the projects were plotted on the matrix and their position and accuracy were analysed per quartile. Afterwards, the project selections were simulated and compared against the case company’s real selections during a six-month period.
The first implication after plotting the projects on the matrix was that only a handful of projects were positioned in the focus category, which indicates a discrepancy between the planned strategy and the competences of the case company in tendering competitions. Second, the tendering competition outcomes were easier to predict in the low strategic fit quartiles, as the project selections in them were more accurate than in the high strategic fit categories. Finally, the matrix also quite accurately filtered the worst low strategic fit projects out from the market. The simulation was done in two stages. First, by emphasizing the likelihood-of-success predictions, the matrix increased the hit rate and average strategic fit of the selected project portfolio. When strategic fit values were emphasized, on the other hand, the simulation did not yield useful results. The study contributes to the project portfolio management literature by developing a practice-oriented tool that emphasizes the strategic and statistical perspectives of the candidate project selection phase.
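The two-by-two matrix logic can be sketched as a simple classifier. The 0-1 scales, the 0.5 cut-offs, the quadrant-to-label mapping, and the example projects are all assumptions for illustration; the thesis derives strategic fit from CRM data and win probability from a decision tree.

```python
def categorize(strategic_fit, win_probability, fit_cut=0.5, win_cut=0.5):
    """Place a candidate project in a two-by-two portfolio matrix.
    Assumed mapping: high fit + high win -> focus; high fit + low win
    -> analyse; low fit + high win -> cash-in; low fit + low win -> ignore."""
    if strategic_fit >= fit_cut:
        return "focus" if win_probability >= win_cut else "analyse"
    return "cash-in" if win_probability >= win_cut else "ignore"

# Hypothetical candidate projects: (strategic fit, predicted win probability)
projects = {"Bridge A": (0.8, 0.7), "Road B": (0.3, 0.9),
            "Tunnel C": (0.9, 0.2), "Depot D": (0.2, 0.1)}
for name, (fit, win) in projects.items():
    print(f"{name}: {categorize(fit, win)}")
```

Under this mapping, a portfolio with few projects in the focus quadrant signals exactly the strategy-competence discrepancy described above: the projects the company is likely to win are not the ones its strategy calls for.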