Differences and similarities between ecological and economic models for biodiversity conservation
In this paper we investigate an important obstacle that substantially complicates cooperation between ecologists and economists but has received little attention so far: differences between the modelling approaches in economics and ecology. To understand these differences, 60 models addressing issues relevant to biodiversity conservation were selected randomly from eight international economic and ecological journals. The models were compared according to a number of criteria, including the level of generality/universality the models aim at; the mathematical technique employed for formulating and solving the model; the level of complexity; and the way time, space and uncertainty are taken into account. The economic models sampled are formulated and analysed analytically, tend to be relatively simple and are generally used to investigate general questions; furthermore, they often ignore space, dynamics and uncertainty. Although some ecological models have similar properties, there is also a substantial number of ecological models of another type that are relatively complex and analysed by simulation. These models tend to be rather specific and often explicitly consider dynamics, space and uncertainty. The integrated ecological-economic models are observed to lie 'in the middle' between ecological and economic models; an unexpected result is that they are not more complex than ecological and economic models (as one might expect from a simple 'merger' of both modelling attitudes) but have an intermediate complexity.
Keywords: ecological-economic modelling, modelling, biodiversity, conservation
Optimized multi-objective design of herringbone micromixers
This paper was presented at the 2nd Micro and Nano Flows Conference (MNF2009), which was held at Brunel University, West London, UK. The conference was organised by Brunel University and supported by the Institution of Mechanical Engineers, IPEM, the Italian Union of Thermofluid Dynamics, the Process Intensification Network, HEXAG - the Heat Exchange Action Group and the Institute of Mathematics and its Applications.
A design method which systematically integrates Computational Fluid Dynamics (CFD) with an optimization scheme based on the techniques of Design of Experiments (DOE), function approximation (FA) and a Multi-Objective Genetic Algorithm (MOGA) has been applied to the shape optimization of the staggered herringbone micromixer (SHM) at different Reynolds numbers. To quantify the mixing intensity in the mixer, a mixing index is defined on the basis of the intensity of segregation of the mass concentration at the outlet section. Four geometric parameters, i.e., the aspect ratio of the mixing channel, the ratio of groove depth to channel height, the ratio of groove width to groove pitch and the asymmetry factor (offset) of the grooves, are the design variables selected for optimization. The mixing index at the outlet section and the pressure drop in the mixing channel are the performance criteria used as objective functions. The Pareto front with the optimum trade-offs, maximum mixing index with minimum pressure drop, is obtained. Experiments for qualitative and quantitative validation have been carried out.
This study is supported by the Dorothy Hodgkin Postgraduate Award (DHPA) of the Engineering and Physical Sciences Research Council (EPSRC) of the United Kingdom and Ebara Research Co. Ltd. of Japan.
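The two building blocks named in the abstract — an intensity-of-segregation mixing index and a Pareto front over (mixing index, pressure drop) pairs — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the exact mixing-index formula is assumed here to be the common form M = 1 - sqrt(var(c)/var_max), and the brute-force non-dominated filter stands in for the MOGA, which the paper uses to search the design space.

```python
import numpy as np

def mixing_index(c, c_mean=0.5):
    # Intensity-of-segregation based mixing index (assumed common form;
    # the paper's exact definition is not given in the abstract):
    #   M = 1 - sqrt(var(c) / var_max),  var_max = c_mean * (1 - c_mean)
    # M = 1 for a perfectly mixed outlet (uniform concentration),
    # M = 0 for a fully segregated one.
    var_max = c_mean * (1.0 - c_mean)
    return 1.0 - np.sqrt(np.var(c) / var_max)

def pareto_front(points):
    # Indices of non-dominated designs for the two objectives in the paper:
    # maximise mixing index, minimise pressure drop.
    # points: array of shape (n, 2), columns = (mixing_index, pressure_drop).
    front = []
    for i, (m_i, dp_i) in enumerate(points):
        dominated = any(
            (m_j >= m_i and dp_j <= dp_i) and (m_j > m_i or dp_j < dp_i)
            for j, (m_j, dp_j) in enumerate(points) if j != i
        )
        if not dominated:
            front.append(i)
    return front
```

For example, a design with outlet concentration samples all equal to 0.5 scores M = 1, and among the candidate designs (M, Δp) = (0.9, 10), (0.8, 5), (0.7, 8), the third is dominated by the second (worse mixing and higher pressure drop) and is excluded from the front.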
Information systems evaluation: Navigating through the problem domain
Information systems (IS) make it possible to improve organizational efficiency and effectiveness, which can provide competitive advantage. There is, however, a great deal of difficulty reported in the normative literature when it comes to the evaluation of investments in IS, with companies often finding themselves unable to assess the full implications of their IS infrastructure. Although many of the savings resulting from IS are considered suitable for inclusion within traditional accountancy frameworks, it is the intangible and non-financial benefits, together with indirect project costs, that complicate the justification process. In exploring this phenomenon, the paper reviews the normative literature in the area of IS evaluation and then proposes a set of conjectures. These were tested within a case study to analyze the investment justification process of a manufacturing IS investment. The idiosyncrasies of the case study and the problems experienced during its attempts to evaluate, implement, and realize the holistic implications of the IS investment are presented and critically analyzed. The paper concludes by identifying lessons learnt and thus proposes a number of empirical findings for consideration by decision-makers during the investment evaluation process.
Visual and computational analysis of structure-activity relationships in high-throughput screening data
Novel analytical methods are required to assimilate the large volumes of structural and bioassay data generated by combinatorial chemistry and high-throughput screening programmes in the pharmaceutical and agrochemical industries. This paper reviews recent work in visualisation and data mining that can be used to develop structure-activity relationships from such chemical/biological datasets.
The determination of asymptotic and periodic behavior of dynamic systems arising in control system analysis Final report
Asymptotic and periodic behavior prediction for nonlinear control systems with a mathematical model of a rigid-body vehicle.
Using the 2dF galaxy redshift survey to detect gravitationally-lensed quasars
Galaxy redshift surveys can be used to detect gravitationally-lensed quasars if the spectra obtained are searched for the quasars' emission lines. Previous investigations of this possibility have used simple models to show that the 2 degree Field (2dF) redshift survey could yield several tens of new lenses, and that the larger Sloan Digital Sky Survey should contain an order of magnitude more. However, the particular selection effects of the samples were not included in these calculations, limiting the robustness of the predictions; a more detailed simulation of the 2dF survey was therefore undertaken here. The use of an isophotal magnitude limit reduces both the depth of the sample and the expected number of lenses, but more important is the Automatic Plate Measuring survey's star-galaxy separation algorithm, used to generate the 2dF input catalogue. It is found that most quasar lenses are classed as merged stars, with only the few lenses with low-redshift deflectors likely to be classified as galaxies. Explicit inclusion of these selection effects implies that the 2dF survey should contain 10 lenses on average. The largest remaining uncertainty is the lack of knowledge of how easily any underlying quasars can be extracted from the survey spectra.
Comment: MNRAS, in press; 14 pages, 19 figures