Toward improved calibration of hydrologic models: Combining the strengths of manual and automatic methods
Automatic methods for model calibration seek to take advantage of the speed and power of digital computers, while being objective and relatively easy to implement. However, they do not provide parameter estimates and hydrograph simulations that are considered acceptable by the hydrologists responsible for operational forecasting and have therefore not entered into widespread use. In contrast, the manual approach, which has been developed and refined over the years to produce excellent model calibrations, is complicated and highly labor-intensive, and the expertise acquired by one individual with a specific model is not easily transferred to another person (or model). In this paper, we propose a hybrid approach that combines the strengths of each. A multicriteria formulation is used to "model" the evaluation techniques and strategies used in manual calibration, and the resulting optimization problem is solved by means of a computerized algorithm. The new approach provides a stronger test of model performance than methods that use a single overall statistic to aggregate model errors over a large range of hydrologic behaviors. The power of the new approach is illustrated by means of a case study using the Sacramento Soil Moisture Accounting model.
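The contrast the abstract draws — several targeted error statistics instead of one aggregate measure — can be sketched as follows. The split into high- and low-flow periods, the threshold, and the specific metrics are illustrative assumptions, not the paper's actual criteria:

```python
import math

def multicriteria_errors(observed, simulated, threshold):
    """Evaluate a simulated hydrograph with separate error statistics for
    high-flow and low-flow behavior, rather than one aggregate statistic.
    Illustrative sketch: the threshold split and metrics are assumptions."""
    def rmse(pairs):
        return math.sqrt(sum((o - s) ** 2 for o, s in pairs) / len(pairs))

    pairs = list(zip(observed, simulated))
    high = [(o, s) for o, s in pairs if o >= threshold]   # flood peaks
    low = [(o, s) for o, s in pairs if o < threshold]     # recessions / base flow
    bias = sum(s - o for o, s in pairs) / len(pairs)      # overall volume error
    return {"rmse_high": rmse(high), "rmse_low": rmse(low), "bias": bias}

obs = [1.0, 5.0, 12.0, 8.0, 3.0, 1.5]
sim = [1.2, 4.5, 11.0, 8.5, 2.8, 1.4]
print(multicriteria_errors(obs, sim, threshold=5.0))
```

A single aggregate statistic would let a large peak error hide behind many small recession errors; reporting the criteria separately exposes where the model fails.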
Recommended from our members
Toward improved calibration of hydrologic models: Multiple and noncommensurable measures of information
Several contributions to the hydrological literature have brought into question the continued usefulness of the classical paradigm for hydrologic model calibration. With the growing popularity of sophisticated 'physically based' watershed models (e.g., land-surface hydrology and hydrochemical models), the complexity of the calibration problem has been multiplied manyfold. We disagree with the seemingly widespread conviction that the model calibration problem will simply disappear with the availability of more and better field measurements. This paper suggests that the emergence of a new and more powerful model calibration paradigm must include recognition of the inherent multiobjective nature of the problem and must explicitly recognize the role of model error. The results of our preliminary studies are presented. Through an illustrative case study we show that the multiobjective approach is not only practical and relatively simple to implement but can also provide useful information about the limitations of a model.
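In a multiobjective calibration there is generally no single best parameter set; the solution is the set of non-dominated (Pareto-optimal) trade-offs between the objectives. A minimal sketch of that notion, with made-up objective values, assuming minimization:

```python
def pareto_front(points):
    """Return the non-dominated points for a minimization problem.
    A point is dominated if some other point is <= in every objective
    and strictly < in at least one."""
    front = []
    for p in points:
        dominated = any(
            all(q[i] <= p[i] for i in range(len(p)))
            and any(q[i] < p[i] for i in range(len(p)))
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# each tuple = (error on peaks, error on recessions) for one parameter set
candidates = [(0.9, 0.2), (0.5, 0.5), (0.3, 0.8), (0.6, 0.6), (0.4, 0.9)]
print(pareto_front(candidates))
```

The surviving points are the calibration trade-offs a hydrologist would actually have to choose among; the dominated ones can be discarded outright.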
NiftyNet: a deep-learning platform for medical imaging
Medical image analysis and computer-assisted intervention problems are
increasingly being addressed with deep-learning-based solutions. Established
deep-learning platforms are flexible but do not provide specific functionality
for medical image analysis and adapting them for this application requires
substantial implementation effort. Thus, there has been substantial duplication
of effort and incompatible infrastructure developed across many research
groups. This work presents the open-source NiftyNet platform for deep learning
in medical imaging. The ambition of NiftyNet is to accelerate and simplify the
development of these solutions, and to provide a common mechanism for
disseminating research outputs for the community to use, adapt and build upon.
NiftyNet provides a modular deep-learning pipeline for a range of medical
imaging applications including segmentation, regression, image generation and
representation learning applications. Components of the NiftyNet pipeline
including data loading, data augmentation, network architectures, loss
functions and evaluation metrics are tailored to, and take advantage of, the
idiosyncrasies of medical image analysis and computer-assisted intervention.
NiftyNet is built on TensorFlow and supports TensorBoard visualization of 2D
and 3D images and computational graphs by default.
We present three illustrative medical image analysis applications built using
NiftyNet: (1) segmentation of multiple abdominal organs from computed
tomography; (2) image regression to predict computed tomography attenuation
maps from brain magnetic resonance images; and (3) generation of simulated
ultrasound images for specified anatomical poses.
NiftyNet enables researchers to rapidly develop and distribute deep learning
solutions for segmentation, regression, image generation and representation
learning applications, or extend the platform to new applications.
Comment: Wenqi Li and Eli Gibson contributed equally to this work. M. Jorge Cardoso and Tom Vercauteren contributed equally to this work. 26 pages, 6 figures; update includes additional applications, updated author list, and formatting for journal submission.
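Among the loss functions the abstract mentions as tailored to medical imaging, the soft Dice loss is a standard example for segmentation. This standalone sketch illustrates the idea only; it is not NiftyNet's actual implementation:

```python
def soft_dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss, a common segmentation loss in medical imaging:
    1 - (2 * overlap / total mass). pred holds per-voxel foreground
    probabilities, target the binary labels; eps avoids division by zero
    for empty masks. Illustrative sketch, not NiftyNet's own code."""
    inter = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    dice = (2.0 * inter + eps) / (total + eps)
    return 1.0 - dice

# perfect prediction -> loss near 0; one false-positive voxel raises it
print(soft_dice_loss([1.0, 0.0, 0.0, 0.0], [1, 0, 0, 0]))
print(soft_dice_loss([1.0, 1.0, 0.0, 0.0], [1, 0, 0, 0]))
```

Dice-style losses are favored over plain cross-entropy for organ segmentation because they are insensitive to the large background-to-foreground class imbalance typical of CT and MR volumes.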
Toward improved identifiability of hydrologic model parameters: The information content of experimental data
We have developed a sequential optimization methodology, termed the parameter identification method based on localization of information (PIMLI), that increases information retrieval from the data by inferring the location and type of measurements that are most informative for the model parameters. The PIMLI approach merges the strengths of the generalized sensitivity analysis (GSA) method [Spear and Hornberger, 1980], the Bayesian recursive estimation (BaRE) algorithm [Thiemann et al., 2001], and the Metropolis algorithm [Metropolis et al., 1953]. Three case studies of increasing complexity are used to illustrate the usefulness and applicability of the PIMLI methodology. The first two case studies consider the identification of soil hydraulic parameters using soil water retention data and a transient multistep outflow (MSO) experiment, whereas the third study involves the calibration of a conceptual rainfall-runoff model.
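One of the three ingredients PIMLI merges, the Metropolis algorithm [Metropolis et al., 1953], can be sketched in a few lines. The toy standard-normal log-posterior and the step size are assumptions for illustration, not the paper's hydrologic setup:

```python
import math
import random

def metropolis(log_post, x0, step, n, seed=0):
    """Minimal random-walk Metropolis sampler: propose a Gaussian step,
    accept with probability min(1, posterior ratio), otherwise stay put."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n):
        cand = x + rng.gauss(0.0, step)
        lp_cand = log_post(cand)
        if math.log(rng.random()) < lp_cand - lp:   # accept/reject in log space
            x, lp = cand, lp_cand
        samples.append(x)
    return samples

# toy target: standard normal posterior, log density -x^2/2 (up to a constant)
chain = metropolis(lambda x: -0.5 * x * x, x0=0.0, step=1.0, n=5000)
```

In a PIMLI-style use, `log_post` would wrap a run of the hydrologic model against the observations, and the resulting parameter samples feed the sensitivity and information-localization analysis.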
Numerical Fitting-based Likelihood Calculation to Speed up the Particle Filter
The likelihood calculation of a vast number of particles is the computational
bottleneck for the particle filter in applications where the observation
information is rich. To compute particle likelihoods quickly, a
numerical fitting approach is proposed to construct the Likelihood Probability
Density Function (Li-PDF) by using a comparably small number of so-called
fulcrums. Particle likelihoods are then inferred analytically, explicitly
or implicitly, from the Li-PDF rather than computed directly from the
observation, which significantly reduces computation and enables real-time
filtering. The proposed approach preserves estimation
quality when an appropriate fitting function and properly distributed fulcrums
are used. The details for construction of the fitting function and fulcrums are
addressed respectively in detail. In particular, to deal with multivariate
fitting, the nonparametric kernel density estimator is presented which is
flexible and convenient for implicit Li-PDF implementation. Simulation
comparisons with a variety of existing approaches on a benchmark
one-dimensional model and on multi-dimensional robot localization and visual
tracking demonstrate the validity of our approach.
Comment: 42 pages, 17 figures, 4 tables and 1 appendix. This paper is a draft/preprint of a paper submitted to the IEEE Transactions.
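The core idea — pay the expensive likelihood only at a few fulcrums, then read every particle's likelihood off a cheap fitted surrogate — can be sketched with piecewise-linear interpolation standing in for the fitting function. The function names and the one-dimensional linear fit are illustrative assumptions, not the paper's Li-PDF construction:

```python
import bisect

def fitted_likelihoods(particles, expensive_lik, fulcrums):
    """Evaluate expensive_lik only at the fulcrum states, then approximate
    each particle's likelihood from the fitted surrogate (here a
    piecewise-linear fit) instead of calling expensive_lik per particle."""
    xs = sorted(fulcrums)
    ys = [expensive_lik(x) for x in xs]          # the only costly calls

    def surrogate(x):
        if x <= xs[0]:
            return ys[0]
        if x >= xs[-1]:
            return ys[-1]
        i = bisect.bisect_right(xs, x)           # bracket x between fulcrums
        t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
        return ys[i - 1] + t * (ys[i] - ys[i - 1])

    return [surrogate(p) for p in particles]

# toy likelihood x^2, fitted from 3 fulcrums, evaluated for 2 particles
print(fitted_likelihoods([0.5, 1.5], lambda x: x * x, [0.0, 1.0, 2.0]))
```

With N particles and M fulcrums (M << N), the costly likelihood is called M times instead of N, which is the source of the claimed speedup.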
A Diabatic Three-State Representation of Photoisomerization in the Green Fluorescent Protein Chromophore
We give a quantum chemical description of the bridge photoisomerization reaction
of green fluorescent protein (GFP) chromophores using a representation over
three diabatic states. Bridge photoisomerization leads to non-radiative decay,
and competes with fluorescence in these systems. In the protein, this pathway
is suppressed, leading to fluorescence. Understanding the electronic structure
of the photoisomerization is a prerequisite to understanding how the protein
suppresses this pathway and preserves the emitting state of the chromophore. We
present a solution to the state-averaged complete active space problem, which
is spanned at convergence by three fragment-localized orbitals. We generate the
diabatic-state representation by applying a block diagonalization
transformation to the Hamiltonian calculated for the anionic chromophore model
HBDI with multi-reference, multi-state perturbation theory. The diabatic states
that emerge are charge-localized structures with a natural valence-bond
interpretation. At planar geometries, the diabatic picture recaptures the
charge transfer resonance of the anion. The strong S0-S1 excitation at these
geometries is reasonably described within a two-state model, but extension to a
three-state model is necessary to describe decay via two possible pathways
associated with photoisomerization of the (methine) bridge. Parametric
Hamiltonians based on the three-state ansatz can be fit directly to data
generated using the underlying active space. We provide an illustrative example
of such a parametric Hamiltonian.
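The structure of such a parametric Hamiltonian can be illustrated generically: three diabatic (charge-localized) energies on the diagonal, couplings off-diagonal, and diagonalization yielding the adiabatic surfaces. The numerical values below are placeholders, not fitted GFP-chromophore parameters:

```python
import numpy as np

def adiabatic_energies(e, c):
    """Build a symmetric 3x3 diabatic Hamiltonian from diabatic energies
    e = (e1, e2, e3) and couplings c = (c12, c13, c23), then diagonalize
    to obtain the adiabatic energies in ascending order. Illustrative
    model only; values are not the paper's fitted parameters."""
    H = np.array([[e[0], c[0], c[1]],
                  [c[0], e[1], c[2]],
                  [c[1], c[2], e[2]]], dtype=float)
    return np.linalg.eigvalsh(H)   # eigvalsh: symmetric, sorted ascending

# two near-degenerate charge-localized diabats coupled through the bridge
print(adiabatic_energies(e=(0.0, 0.1, 2.0), c=(0.5, 0.0, 0.5)))
```

At planar geometries the lowest two adiabats of such a model reproduce a two-state charge-transfer-resonance picture; the third diabat is what opens the second decay pathway along the bridge torsions.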