Integrating Distributed Sources of Information for Construction Cost Estimating using Semantic Web and Semantic Web Service technologies
A construction project requires the collaboration of several organizations, such as owner, designer, contractor, and material supplier organizations. These organizations need to exchange information to enhance their teamwork. Understanding the information received from other organizations requires specialized human resources. Construction cost estimating is one of the processes that requires information from several sources, including a building information model (BIM) created by designers, estimating assembly and work item information maintained by contractors, and construction material cost data provided by material suppliers. Currently, it is not easy to integrate the information necessary for cost estimating over the Internet. This paper discusses a new approach to construction cost estimating that uses Semantic Web technology. Semantic Web technology provides an infrastructure and a data modeling format that enables accessing, combining, and sharing information over the Internet in a machine-processable format. The estimating approach presented in this paper relies on BIM, estimating knowledge, and construction material cost data expressed in a web ontology language. The approach makes the various sources of estimating data accessible as SPARQL (SPARQL Protocol and RDF Query Language) endpoints or Semantic Web Services. We present an estimating application that integrates distributed information provided by project designers, contractors, and material suppliers for preparing cost estimates. The purpose of this paper is not to fully automate the estimating process but to streamline it by reducing human involvement in repetitive cost estimating activities.
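The cross-source join at the heart of such an approach can be sketched in plain Python, with three in-memory dictionaries standing in for the distributed endpoints (designer BIM quantities, contractor estimating knowledge, supplier prices). All identifiers and figures below are hypothetical illustrations, not the paper's data model:

```python
# Stand-ins for the three distributed sources the paper exposes as SPARQL
# endpoints or Semantic Web Services (all identifiers and figures hypothetical).
bim_quantities = {"ExteriorWall-01": {"assembly": "cmu_wall", "area_m2": 120.0}}
contractor_items = {"cmu_wall": {"material": "cmu_block", "units_per_m2": 12.5,
                                 "labour_cost_per_m2": 18.0}}
supplier_prices = {"cmu_block": 2.5}  # unit price of the material

def estimate_cost(element_id):
    """Join element quantities (designer), assembly knowledge (contractor)
    and unit prices (supplier) into one line-item cost."""
    elem = bim_quantities[element_id]
    item = contractor_items[elem["assembly"]]
    material = elem["area_m2"] * item["units_per_m2"] * supplier_prices[item["material"]]
    labour = elem["area_m2"] * item["labour_cost_per_m2"]
    return material + labour
```

In the paper these lookups would instead be SPARQL queries against remote endpoints, but the shape of the computation, joining quantities, assembly knowledge, and unit prices into line items, is the same.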
Estimating the Application Rate of Liquid Chloride Products Based on Residual Salt Concentration on Pavement
This technical report summarizes the results of laboratory testing on asphalt and concrete pavement. A known quantity of salt brine was applied as an anti-icer, followed by snow application, traffic simulation, and mechanical snow removal via simulated plowing. Using a sample from this plowed snow, researchers measured the chloride concentration to determine the amount of salt brine (as chloride) that remained on the pavement surface. Under the investigated scenarios, the asphalt samples showed higher concentrations of chloride in the plowed-off snow, and therefore lower concentrations of chloride remaining on the pavement surface. In comparison, the concrete samples had much lower chloride concentrations in the plowed-off snow, and much higher chloride concentrations remaining on the pavement surface. An interesting pattern revealed by the testing was the variation in the percentage of residual chloride on the pavement surface with changes in temperature. When pavement type was not considered, more residual chloride was present at warmer temperatures and less residual chloride was present at colder temperatures. This observation warrants additional testing to determine if the pattern is in fact a statistically valid trend. The findings from the study will help winter maintenance agencies reduce salt usage while meeting the defined Level of Service. In addition, findings will contribute to environmentally sustainable policies and reduce the level of salt usage (from snow- and ice-control products) introduced into the environment.
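The residual-salt calculation behind the testing is a simple mass balance: whatever chloride is not recovered in the plowed-off snow is taken to remain on the pavement. A minimal sketch with hypothetical numbers:

```python
def residual_chloride_pct(applied_g, recovered_in_snow_g):
    """Percentage of the applied chloride remaining on the pavement,
    inferred by mass balance from the chloride recovered in the
    plowed-off snow sample (all masses in grams, values hypothetical)."""
    if not 0.0 <= recovered_in_snow_g <= applied_g:
        raise ValueError("recovered chloride must lie between 0 and the applied mass")
    return 100.0 * (applied_g - recovered_in_snow_g) / applied_g
```

Under this mass balance, the asphalt result (more chloride in the snow) directly implies a lower residual percentage, and the concrete result the opposite.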
Estimating Maximally Probable Constrained Relations by Mathematical Programming
Estimating a constrained relation is a fundamental problem in machine
learning. Special cases are classification (the problem of estimating a map
from a set of to-be-classified elements to a set of labels), clustering (the
problem of estimating an equivalence relation on a set) and ranking (the
problem of estimating a linear order on a set). We contribute a family of
probability measures on the set of all relations between two finite, non-empty
sets, which offers a joint abstraction of multi-label classification,
correlation clustering and ranking by linear ordering. Estimating (learning) a
maximally probable measure, given (a training set of) related and unrelated
pairs, is a convex optimization problem. Estimating (inferring) a maximally
probable relation, given a measure, is a 01-linear program. It is solved in
linear time for maps. It is NP-hard for equivalence relations and linear
orders. Practical solutions for all three cases are shown in experiments with
real data. Finally, estimating a maximally probable measure and relation
jointly is posed as a mixed-integer nonlinear program. This formulation
suggests a mathematical programming approach to semi-supervised learning.
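The linear-time claim for maps is easy to make concrete: the 0-1 linear program decouples across elements, so each element independently takes its most probable label. A minimal sketch of that special case (the probability matrix is a hypothetical illustration, not one of the paper's learned measures):

```python
def most_probable_map(probs):
    """probs[i][j]: estimated probability that element i carries label j.
    For maps the 0-1 linear program decouples across elements, so the
    maximally probable map is a per-element argmax, computed in time
    linear in the number of matrix entries."""
    return [max(range(len(row)), key=row.__getitem__) for row in probs]
```

For equivalence relations and linear orders no such decomposition exists, which is why those cases are NP-hard.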
Estimating Euler equations
In this paper we consider conditions under which the estimation of a log-linearized Euler equation for
consumption yields consistent estimates of preference parameters. When utility is isoelastic and a
sample covering a long time period is available, consistent estimates are obtained from the loglinearized
Euler equation when the innovations to the conditional variance of consumption growth are
uncorrelated with the instruments typically used in estimation.
We perform a Monte Carlo experiment, consisting of solving and simulating a simple life-cycle model
under uncertainty, and show that in most situations the estimates obtained from the log-linearized
equation are not systematically biased. This is true even when we introduce heteroscedasticity into the
process generating income.
The only exception is when discount rates are very high (e.g. 47% per year). This problem arises
because consumers are nearly always close to the maximum borrowing limit: the estimation bias is
unrelated to the linearization and estimates using nonlinear GMM are as bad. Across all our situations,
estimation using a log-linearized Euler equation does better than nonlinear GMM despite the absence
of measurement error.
Finally, we plot life cycle profiles for the variance of consumption growth, which, except when the
discount factor is very high, is remarkably flat. This implies that claims that demographic variables in
log-linearized Euler equations capture changes in the variance of consumption growth are unwarranted.
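The log-linearization at issue can be written out explicitly. With isoelastic (CRRA) utility u(c) = c^(1-γ)/(1-γ), a standard derivation under conditional lognormality (our own sketch, not reproduced from the paper) gives:

```latex
% Euler equation under isoelastic utility:
c_t^{-\gamma} \;=\; \beta\, \mathbb{E}_t\!\left[(1+r_{t+1})\, c_{t+1}^{-\gamma}\right]
% Log-linearizing under conditional lognormality of consumption growth:
\Delta \ln c_{t+1} \;=\; \tfrac{1}{\gamma}\ln\beta
  \;+\; \tfrac{1}{\gamma}\, r_{t+1}
  \;+\; \tfrac{\gamma}{2}\,\sigma_t^{2}
  \;+\; \varepsilon_{t+1}
```

Here σ_t² is the conditional variance of consumption growth and ε_{t+1} the expectation error; IV estimates of 1/γ from this equation are consistent only if innovations to σ_t² are uncorrelated with the instruments, which is the condition examined above.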
Estimating Mutual Information
We present two classes of improved estimators for mutual information M(X,Y), from samples of random
points distributed according to some joint probability density mu(x,y). In contrast to conventional
estimators based on binnings, they are based on entropy estimates from k-nearest neighbour distances.
This means that they are data efficient (with k=1 we resolve structures down to the smallest possible
scales), adaptive (the resolution is higher where data are more numerous), and have minimal bias.
Indeed, the bias of the underlying entropy estimates is mainly due to non-uniformity of the density at
the smallest resolved scale, giving typically systematic errors which scale as functions of k/N for N
points. Numerically, we find that both families become exact for independent distributions, i.e. the
estimator vanishes (up to statistical fluctuations) if mu(x,y) = mu(x)mu(y). This holds for all tested
marginal distributions and for all dimensions of x and y. In addition, we give estimators for
redundancies between more than 2 random variables. We compare our algorithms in detail with existing
algorithms. Finally, we demonstrate the usefulness of our estimators for assessing the actual
independence of components obtained from independent component analysis (ICA), for improving ICA, and
for estimating the reliability of blind source separation.
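The first of these estimators (often called the KSG estimator) can be sketched in pure Python. This is a brute-force illustration of the published formula I(X,Y) = psi(k) + psi(N) - <psi(nx+1) + psi(ny+1)>, not the authors' implementation, and the digamma approximation is our own:

```python
import math
import random

def digamma(x):
    """Digamma psi(x) for x > 0 via recurrence plus an asymptotic expansion
    (our own approximation, accurate to about 1e-7 here; not library code)."""
    r = 0.0
    while x < 6.0:
        r -= 1.0 / x
        x += 1.0
    return r + math.log(x) - 1.0 / (2 * x) - 1.0 / (12 * x ** 2) + 1.0 / (120 * x ** 4)

def ksg_mi(xs, ys, k=3):
    """Brute-force KSG estimator (first variant) for scalar samples.
    nx and ny count neighbours strictly inside the k-th-neighbour
    distance in the joint max-norm."""
    n = len(xs)
    total = 0.0
    for i in range(n):
        d = sorted(max(abs(xs[i] - xs[j]), abs(ys[i] - ys[j]))
                   for j in range(n) if j != i)
        eps = d[k - 1]  # distance to the k-th neighbour in the joint space
        nx = sum(1 for j in range(n) if j != i and abs(xs[i] - xs[j]) < eps)
        ny = sum(1 for j in range(n) if j != i and abs(ys[i] - ys[j]) < eps)
        total += digamma(nx + 1) + digamma(ny + 1)
    return digamma(k) + digamma(n) - total / n
```

On strongly dependent data the estimate tracks the analytic Gaussian value, and on independent samples it fluctuates around zero, which is the exactness property the abstract describes.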
Learning Counterfactual Representations for Estimating Individual Dose-Response Curves
Estimating what would be an individual's potential response to varying levels
of exposure to a treatment is of high practical relevance for several important
fields, such as healthcare, economics and public policy. However, existing
methods for learning to estimate counterfactual outcomes from observational
data are either focused on estimating average dose-response curves, or limited
to settings with only two treatments that do not have an associated dosage
parameter. Here, we present a novel machine-learning approach towards learning
counterfactual representations for estimating individual dose-response curves
for any number of treatments with continuous dosage parameters with neural
networks. Building on the established potential outcomes framework, we
introduce performance metrics, model selection criteria, model architectures,
and open benchmarks for estimating individual dose-response curves. Our
experiments show that the methods developed in this work set a new
state-of-the-art in estimating individual dose-response curves.
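Benchmarks for individualized dose-response estimation are commonly scored by the mean integrated squared error (MISE) between true and predicted curves over a dose grid; whether this paper uses exactly this metric is an assumption here, and the callables below are hypothetical:

```python
def mise(true_response, pred_response, units, doses):
    """Mean integrated squared error between true and predicted
    dose-response curves, averaged over units and approximated on a
    discrete dose grid (all inputs hypothetical illustrations)."""
    per_unit = []
    for u in units:
        errs = [(true_response(u, d) - pred_response(u, d)) ** 2 for d in doses]
        per_unit.append(sum(errs) / len(doses))  # grid approximation of the integral
    return sum(per_unit) / len(units)
```

A metric of this shape rewards getting the whole curve right for each unit, not just the average response across units.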
Estimating macrobenthic secondary production from body weight and biomass: a field test in a non-boreal intertidal habitat
Production (P) and biomass (B) data of different species from 3 stations in the intertidal zone of the Ria Formosa (southern Portugal, 37°N) were analysed. They were compared with equations from the literature to estimate P/B̄ ratios from body weight. A clear distinction must be made between (1) an intraspecific and (2) an interspecific comparison. (1) Results from 3 species supported a body weight exponent of -0.25 for the P/B̄ ratio, as is to be expected from a linear relationship between growth and respiration. (2) In an interspecific comparison, the weight exponent depends on the contribution of age or growth rate to the presence of large specimens in a sample. It is concluded that production in the specific habitat examined cannot be calculated properly from body weight and biomass by 1 simple equation which mixes interspecific and intraspecific effects; rather, both aspects should be separated into 2 different calculation steps. The German-Portuguese research project 'Die Biologie der Ria Formosa' was funded by the Bundesministerium für Forschung und Technologie, Germany (Grant no. 03F0562A).
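The two-step calculation the study argues for can be sketched as follows. The -0.25 weight exponent is the intraspecific result reported above; the coefficient a and the numbers are hypothetical stand-ins:

```python
def pb_ratio(mean_body_weight, a=1.0):
    """Annual P/B ratio from mean individual body weight W, assuming the
    intraspecific scaling P/B = a * W**-0.25 (the -0.25 exponent is the
    one the study supports; the coefficient a is a hypothetical stand-in
    for a species-specific constant)."""
    return a * mean_body_weight ** -0.25

def production(biomass, mean_body_weight, a=1.0):
    # Step 2: production per unit area = biomass * (P/B)
    return biomass * pb_ratio(mean_body_weight, a)
```

Keeping the scaling step and the biomass step separate is exactly the point of the abstract: one mixed interspecific/intraspecific equation conflates the two.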
Estimating, planning and managing Agile Web development projects under a value-based perspective
Context: The processes of estimating, planning and managing are crucial for software development projects,
since the results must be related to several business strategies. The broad expansion of the Internet
and the global and interconnected economy make Web development projects be often characterized by
expressions like delivering as soon as possible, reducing time to market and adapting to undefined
requirements. In this kind of environment, traditional methodologies based on predictive techniques
sometimes do not offer very satisfactory results. The rise of Agile methodologies and practices has
provided some useful tools that, combined with Web Engineering techniques, can help to establish a
framework to estimate, manage and plan Web development projects.
Objective: This paper presents a proposal for estimating, planning and managing Web projects, by
combining some existing Agile techniques with Web Engineering principles, presenting them as an
unified framework which uses the business value to guide the delivery of features.
Method: The proposal is analyzed by means of a case study, including a real-life project, in order to obtain
relevant conclusions.
Results: The results achieved after using the framework in a development project are presented, including
interesting results on project planning and estimation, as well as on team productivity throughout the
project.
Conclusion: It is concluded that the framework can be useful in order to better manage Web-based
projects, through a continuous value-based estimation and management process. Funding: Ministerio de Economía y Competitividad (TIN2013-46928-C3-3-).
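Value-based delivery of the kind the framework proposes is often reduced to ordering the backlog by business value per unit of estimated effort. A minimal sketch (field names hypothetical, not the paper's framework):

```python
def prioritize(backlog):
    """Order backlog features by delivered business value per unit of
    estimated effort (descending), so high-value, low-cost features
    ship first (a common Agile value-based scheduling rule)."""
    return sorted(backlog, key=lambda f: f["value"] / f["effort"], reverse=True)
```

A rule of this shape lets the delivery order track business value continuously as estimates and values are revised during the project.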