Implementing imperfect information in fuzzy databases
Information in real-world applications is often vague, imprecise, and uncertain. Ignoring the inherently imperfect nature of real-world information undoubtedly distorts human perception of the real world and may discard substantial information that could be very useful in many data-intensive applications. In the database context, several fuzzy database models have been proposed, introducing fuzziness at different levels; common to all these proposals is support for fuzziness at the attribute level. This paper first proposes a rich set of data types devoted to modelling the different kinds of imperfect information, and then proposes a formal approach to implementing these data types. The proposed approach was implemented within an object-relational database model, but it is generic enough to be incorporated into other database models.
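As an illustration of the attribute-level fuzziness described above, here is a minimal sketch (not the paper's actual data types) of one common building block of fuzzy database models: an imprecise attribute value represented as a possibility distribution over a crisp domain.

```python
class PossibilisticValue:
    """An attribute value known only imprecisely: each candidate crisp
    value carries a possibility degree in [0, 1]."""

    def __init__(self, distribution):
        # distribution: dict mapping crisp candidate values -> possibility degree
        self.distribution = dict(distribution)

    def possibility(self, value):
        """Degree to which `value` is a possible actual value."""
        return self.distribution.get(value, 0.0)

    def matches(self, predicate):
        """Possibility that the (unknown) actual value satisfies
        `predicate`: the supremum of degrees over satisfying candidates."""
        return max(
            (deg for v, deg in self.distribution.items() if predicate(v)),
            default=0.0,
        )


# "Age is around 30": 30 fully possible, its neighbours less so.
age = PossibilisticValue({29: 0.5, 30: 1.0, 31: 0.5})
print(age.matches(lambda v: v >= 30))  # 1.0
```

A fuzzy query such as "age at least 30" then returns a matching degree rather than a Boolean, which is the behaviour attribute-level fuzzy models aim to support.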
Optimization of fuzzy analogy in software cost estimation using linguistic variables
One of the most important objectives of the software engineering community has been to develop useful models that explain the development life cycle and accurately estimate software cost. Analogy-based estimation is deficient in handling datasets containing categorical variables, even though innumerable methods exist to estimate cost. Owing to the nature of the software engineering domain, project attributes are often measured in terms of linguistic values such as very low, low, high, and very high. The imprecise nature of such values introduces uncertainty and vagueness into their interpretation. However, there is no efficient method that can deal directly with categorical variables and tolerate such imprecision and uncertainty without resorting to classical intervals and numeric-value approaches. In this paper, a new optimization approach based on fuzzy logic, linguistic quantifiers, and analogy-based reasoning is proposed to improve effort estimation for software projects described by either numerical or categorical data. The proposed method is validated empirically on the historical NASA dataset. The results, analyzed using the prediction criterion, indicate that the proposed method produces more explainable results than other machine learning methods.
Comment: 14 pages, 8 figures; Journal of Systems and Software, 2011.
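To make the linguistic-variable idea concrete, here is a hedged sketch of how linguistic attribute values can be mapped to fuzzy sets and compared for analogy-based reasoning. The membership functions, scale, and similarity measure are illustrative assumptions, not the paper's actual method.

```python
def triangular(a, b, c):
    """Triangular membership function on the reals, peaking at b."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return mu


# A linguistic scale for a project attribute, mapped to fuzzy sets on [0, 1].
SCALE = {
    "very low":  triangular(-0.25, 0.0, 0.25),
    "low":       triangular(0.0, 0.25, 0.5),
    "high":      triangular(0.5, 0.75, 1.0),
    "very high": triangular(0.75, 1.0, 1.25),
}


def similarity(label_a, label_b, grid=21):
    """Fuzzy similarity of two linguistic values: peak overlap
    (sup of the min) of their membership functions, sampled on a grid."""
    mu_a, mu_b = SCALE[label_a], SCALE[label_b]
    xs = [i / (grid - 1) for i in range(grid)]
    return max(min(mu_a(x), mu_b(x)) for x in xs)


print(similarity("low", "low"))   # 1.0 (identical terms)
print(similarity("low", "high"))  # 0.0 (disjoint supports)
```

An analogy-based estimator can then rank historical projects by aggregating such per-attribute similarities instead of forcing categorical values into crisp numbers.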
Environmental statistics and optimal regulation
Any organism is embedded in an environment that changes over time. The
timescale for and statistics of environmental change, the precision with which
the organism can detect its environment, and the costs and benefits of
particular protein expression levels all will affect the suitability of
different strategies, such as constitutive expression or a graded response, for
regulating protein levels in response to environmental inputs. We propose a
general framework (here specifically applied to the enzymatic regulation of
metabolism in response to changing concentrations of a basic nutrient) to
predict the optimal regulatory strategy given the statistics of fluctuations in
the environment and measurement apparatus, respectively, and the costs
associated with enzyme production. We use this framework to address three
fundamental questions: (i) when a cell should prefer thresholding to a graded
response; (ii) when there is a fitness advantage to implementing a Bayesian
decision rule; and (iii) when retaining memory of the past provides a selective
advantage. We specifically find that: (i) relative convexity of enzyme
expression cost and benefit influences the fitness of thresholding or graded
responses; (ii) intermediate levels of measurement uncertainty call for a
sophisticated Bayesian decision rule; and (iii) in dynamic contexts,
intermediate levels of uncertainty call for retaining memory of the past.
Statistical properties of the environment, such as variability and correlation
times, set optimal biochemical parameters, such as thresholds and decay rates
in signaling pathways. Our framework provides a theoretical basis for
interpreting molecular signal processing algorithms and a classification scheme
that organizes known regulatory strategies and may help conceptualize
heretofore unknown ones.
Comment: 21 pages, 7 figures
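Question (ii) above can be illustrated with a toy Bayesian decision rule: express an enzyme only when the posterior belief that the nutrient is abundant, given a noisy measurement, makes the expected benefit exceed the production cost. All numbers (prior, noise level, costs) are invented for the example and are not taken from the paper.

```python
import math


def gaussian(x, mu, sigma):
    """Gaussian density, used as the measurement-noise likelihood."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))


def posterior_high(measurement, p_high=0.5, mu_low=0.0, mu_high=1.0, sigma=0.5):
    """P(nutrient is high | noisy measurement) in a two-state environment."""
    like_high = gaussian(measurement, mu_high, sigma) * p_high
    like_low = gaussian(measurement, mu_low, sigma) * (1 - p_high)
    return like_high / (like_high + like_low)


def express_enzyme(measurement, benefit=2.0, cost=1.0):
    """Express iff the expected benefit (earned only when the nutrient is
    actually high) exceeds the fixed production cost."""
    return posterior_high(measurement) * benefit > cost


print(express_enzyme(1.2))   # True: measurement strongly suggests high nutrient
print(express_enzyme(-0.2))  # False
```

Note how the decision threshold emerges from the interplay of measurement noise (sigma), prior statistics, and the cost/benefit ratio, which is the structure the framework formalizes.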
Agentless robust load sharing strategy for utilising heterogeneous resources over wide area network
Resource monitoring and performance prediction services have always been regarded as key to improving the performance of load sharing strategies. However, traditional methodologies usually require specific performance information that can only be collected by installing proprietary agents on all participating resources. This requirement of implementing a single unified monitoring service may not be feasible because of differences in the underlying systems and organisation policies. To address this problem, we define a new load sharing strategy that bases load decisions on a simple performance estimate that can be measured easily at the coordinator node. Our proposed strategy relies on stage-based dynamic task allocation to handle the imprecision of our performance estimate and to correct load distribution on the fly. The simulation results showed that the performance of our strategy is comparable to or better than that of traditional strategies, especially when the performance information from the monitoring service is not accurate.
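The stage-based idea described above can be sketched as follows: the coordinator holds no agent-collected metrics; it sends each node a small batch of tasks, measures turnaround on its own side, and sizes the next stage in proportion to the observed throughput. The function name and the numbers are illustrative, not the paper's actual algorithm.

```python
def next_stage_shares(completion_times, stage_size):
    """Allocate `stage_size` tasks proportionally to the throughput each
    node exhibited in the previous stage (batch size assumed equal, so
    throughput ~ 1 / completion time, as observed at the coordinator)."""
    throughputs = {node: 1.0 / t for node, t in completion_times.items()}
    total = sum(throughputs.values())
    return {node: round(stage_size * tp / total) for node, tp in throughputs.items()}


# Previous stage: node A finished its batch in 2 s, node B in 8 s.
print(next_stage_shares({"A": 2.0, "B": 8.0}, stage_size=100))
# {'A': 80, 'B': 20}
```

Because shares are recomputed every stage from fresh coordinator-side measurements, a node whose load changes mid-run is automatically given fewer (or more) tasks in the next stage, which is how the strategy tolerates imprecise performance estimates.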