
    Evaluating the predictive performance of empirical estimators of natural mortality rate using information on over 200 fish species

    Many methods have been developed in the last 70 years to predict the natural mortality rate, M, of a stock based on empirical evidence from comparative life history studies. These indirect or empirical methods are used in most stock assessments to (i) obtain estimates of M in the absence of direct information, (ii) check the reasonableness of a direct estimate of M, (iii) examine the range of plausible M estimates for the stock under consideration, and (iv) define prior distributions for Bayesian analyses. The two most cited empirical methods have appeared in the literature over 2500 times to date. Despite the importance of these methods, there is no consensus in the literature on how well they work in terms of prediction error or how their performance may be ranked. We evaluate estimators based on various combinations of maximum age (t_max), growth parameters, and water temperature by seeing how well they reproduce >200 independent, direct estimates of M. We use tenfold cross-validation to estimate the prediction error of the estimators and to rank their performance. With updated and carefully reviewed data, we conclude that a t_max-based estimator performs the best among all estimators evaluated. The t_max-based estimators in turn perform better than the Alverson-Carney method based on t_max and the von Bertalanffy K coefficient, Pauly's method based on growth parameters and water temperature, and methods based just on K. It is possible to combine two independent methods by computing a weighted mean, but the improvement over the t_max-based methods is slight. Based on cross-validation prediction error, model residual patterns, model parsimony, and biological considerations, we recommend the use of a t_max-based estimator (M = 4.899·t_max^(-0.916), prediction error = 0.32) when possible and a growth-based method (M = 4.118·K^(0.73)·L_inf^(-0.33), prediction error = 0.6) otherwise.
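    The two recommended estimators are simple power laws and can be sketched as a pair of helper functions; the function names are illustrative, and the coefficients are those quoted in the abstract.

    ```python
    def m_from_tmax(tmax):
        """Recommended maximum-age estimator: M = 4.899 * tmax^-0.916
        (cross-validation prediction error ~0.32)."""
        return 4.899 * tmax ** -0.916

    def m_from_growth(K, L_inf):
        """Fallback growth-based estimator: M = 4.118 * K^0.73 * L_inf^-0.33
        (prediction error ~0.6); K and L_inf are von Bertalanffy parameters."""
        return 4.118 * K ** 0.73 * L_inf ** -0.33

    # Example: a species with a maximum observed age of 10 years
    print(round(m_from_tmax(10), 3))
    ```

    As expected for a mortality-longevity trade-off, the t_max-based estimate decreases as maximum age increases.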

    Hyperparameter Selection


    Cost-effective therapy remission assessment in lymphoma patients using 2-[fluorine-18]fluoro-2-deoxy-D-glucose-positron emission tomography/computed tomography: is an end of treatment exam necessary in all patients?

    Background: The aim of this study was to evaluate the necessity of 2-[fluorine-18]fluoro-2-deoxy-D-glucose-positron emission tomography/computed tomography (FDG-PET/CT) after end of treatment in lymphoma patients who had an interim FDG-PET/CT. Patients and methods: In 38 patients with Hodgkin's disease (HD) and 30 patients with non-Hodgkin's lymphoma (NHL), interim PET/CT (intPET) after two to four cycles of chemotherapy and PET/CT after completion of first-line treatment (endPET) were carried out. The cost reduction was retrospectively calculated for the potentially superfluous endPET examinations. Results: In 31 (82%) HD patients, intPET demonstrated complete remission (CR), which was still present on endPET. The remaining seven HD patients (18%) had partial remission (PR) on intPET. For NHL, 22 (73%) patients had CR on intPET, which was still present on endPET. In the remaining eight NHL patients, intPET revealed PR in seven and stable disease in one. None of the intPET complete responders progressed before the end of therapy. Thus, of the 196 PET/CT examinations carried out in our study population, 53 endPET examinations (27.0%) were carried out in interim complete responders. Conclusion: End-of-treatment PET/CT is unnecessary if intPET shows CR and the clinical course is uncomplicated. An imaging cost reduction of 27% in our study population could have been achieved by omitting end-of-treatment FDG-PET/CT in interim complete responders.
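    The 27% figure follows directly from the patient counts reported in the abstract; a minimal arithmetic check (the variable names are ours):

    ```python
    # Interim complete responders, whose end-of-treatment scans were superfluous
    hd_cr = 31    # Hodgkin's disease patients with CR on interim PET/CT
    nhl_cr = 22   # non-Hodgkin's lymphoma patients with CR on interim PET/CT
    total_scans = 196  # all PET/CT examinations in the study population

    superfluous_endpet = hd_cr + nhl_cr  # 53 potentially avoidable endPET scans
    reduction = superfluous_endpet / total_scans
    print(f"{reduction:.1%}")  # 27.0%
    ```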

    Deep active learning for autonomous navigation.

    Imitation learning refers to an agent's ability to mimic a desired behavior by learning from observations. A major challenge in learning from demonstrations is representing the demonstrations in a manner that is adequate for learning and efficient for real-time decisions. Creating feature representations is especially challenging when they are extracted from high-dimensional visual data. In this paper, we present a method for imitation learning from raw visual data. The proposed method is applied to a popular imitation learning domain that is relevant to a variety of real-life applications, namely navigation. To create a training set, a teacher uses an optimal policy to perform a navigation task, and the actions taken are recorded along with visual footage from the first-person perspective. Features are automatically extracted and used to learn a policy that mimics the teacher via a deep convolutional neural network. A trained agent can then predict an action to perform based on the scene it finds itself in. The method is generic, and the network is trained without knowledge of the task, targets, or environment in which it is acting. Another common challenge in imitation learning is generalizing a policy to situations unseen in the training data. To address this challenge, the learned policy is subsequently improved by employing active learning: while the agent is executing a task, it can query the teacher for the correct action to take in situations where it has low confidence. The active samples are added to the training set and used to update the initial policy. The proposed approach is demonstrated on 4 different tasks in a 3D simulated environment. The experiments show that an agent can effectively perform imitation learning from raw visual data for navigation tasks and that active learning can significantly improve the initial policy using a small number of samples. The simulated test bed facilitates reproduction of these results and comparison with other approaches.
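    The confidence-gated teacher query described in the abstract can be sketched on a toy setting; every class here (the corridor environment, the tabular agent) is an illustrative stand-in, not the paper's CNN-based implementation.

    ```python
    import random

    class ToyEnv:
        """Hypothetical 1-D corridor: reach position 5 by moving right (+1)."""
        def reset(self):
            self.pos = 0
            return self.pos
        def step(self, action):
            self.pos += action
            return self.pos, self.pos >= 5  # (new state, done)

    class Teacher:
        def query(self, state):
            return 1  # the optimal policy: always step toward the goal

    class Agent:
        """Tabular stand-in for the learned policy."""
        def __init__(self):
            self.policy = {}
        def predict(self, state):
            if state in self.policy:
                return self.policy[state], 1.0   # confident on seen states
            return random.choice([-1, 1]), 0.0   # uncertain on unseen states
        def update(self, samples):
            self.policy.update(dict(samples))

    def active_imitation(agent, teacher, env, threshold=0.5, episodes=3):
        """While executing, query the teacher only where confidence is low,
        then retrain on the actively collected samples."""
        samples = []
        for _ in range(episodes):
            state, done = env.reset(), False
            while not done:
                action, confidence = agent.predict(state)
                if confidence < threshold:       # low confidence: ask the teacher
                    action = teacher.query(state)
                    samples.append((state, action))
                state, done = env.step(action)
            agent.update(samples)
        return agent
    ```

    After the first episode the agent has queried the teacher in every corridor state it visited, so later episodes run on its own policy without further queries.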

    Probabilistic Clustering of Time-Evolving Distance Data

    We present a novel probabilistic clustering model for objects that are represented via pairwise distances and observed at different time points. The proposed method utilizes the information given by adjacent time points to find the underlying cluster structure and obtain a smooth cluster evolution. This approach allows the number of objects and clusters to differ at every time point, and no identification of the objects across time points is needed. Further, the model does not require the number of clusters to be specified in advance; it is instead determined automatically using a Dirichlet process prior. We validate our model on synthetic data, showing that the proposed method is more accurate than state-of-the-art clustering methods. Finally, we use our dynamic clustering model to analyze and illustrate the evolution of brain cancer patients over time.
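    The Dirichlet-process behaviour mentioned in the abstract, letting the number of clusters be determined by the data rather than fixed in advance, can be illustrated with its Chinese-restaurant-process construction (a standard textbook sketch, not the paper's model):

    ```python
    import random

    def crp_partition(n, alpha=1.0, seed=0):
        """Draw cluster assignments for n objects from a Chinese restaurant
        process with concentration alpha: object i joins an existing cluster k
        with probability counts[k] / (i + alpha), or opens a new cluster with
        probability alpha / (i + alpha)."""
        rng = random.Random(seed)
        counts, assignments = [], []
        for i in range(n):
            r = rng.uniform(0, i + alpha)
            acc = 0.0
            for k, c in enumerate(counts):
                acc += c
                if r < acc:
                    counts[k] += 1
                    assignments.append(k)
                    break
            else:  # r fell in the alpha-sized slice: open a new cluster
                assignments.append(len(counts))
                counts.append(1)
        return assignments
    ```

    The number of distinct labels in a draw grows slowly with n (on the order of alpha·log n) instead of being set by hand.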

    Kondo physics in carbon nanotubes

    The connection of electrical leads to wire-like molecules is a logical step in the development of molecular electronics, but it also allows studies of fundamental physics. For example, metallic carbon nanotubes are quantum wires that have been found to act as one-dimensional quantum dots, Luttinger liquids, proximity-induced superconductors, and ballistic and diffusive one-dimensional metals. Here we report that electrically contacted single-wall nanotubes can serve as powerful probes of Kondo physics, demonstrating the universality of the Kondo effect. Arising in the prototypical case from the interaction between a localized impurity magnetic moment and delocalized electrons in a metallic host, the Kondo effect has been used to explain enhanced low-temperature scattering from magnetic impurities in metals, and it also occurs in transport through semiconductor quantum dots. The far higher tunability of dots (in our case, nanotubes) compared with atomic impurities renders new classes of Kondo-like effects accessible. Our nanotube devices differ from previous systems in which Kondo effects have been observed in that they are one-dimensional quantum dots with three-dimensional metal (gold) reservoirs. This allows us to observe Kondo resonances for very large electron number (N) in the dot, approaching the unitary limit (where the transmission reaches its maximum possible value). Moreover, we detect a previously unobserved Kondo effect, occurring for even values of N in a magnetic field.
    Comment: 7 pages, PDF only

    Introduction to Khovanov Homologies. I. Unreduced Jones superpolynomial

    An elementary introduction to the Khovanov construction of superpolynomials. Despite its technical complexity, this method remains the only source of a definition of superpolynomials from first principles and is therefore important for the development and testing of alternative approaches. In this first part of the review series we concentrate on the most transparent and unambiguous part of the story: the unreduced Jones superpolynomials in the fundamental representation, taking the 2-strand braids as the main example. Already for the 5_1 knot the unreduced superpolynomial contains more terms than the ordinary Jones polynomial.
    Comment: 33 pages

    About intrinsic transversality of pairs of sets

    The article continues the study of the ‘regular’ arrangement of a collection of sets near a point in their intersection. Such regular intersection, or transversality, properties are crucial for the validity of qualification conditions in optimization, as well as for subdifferential, normal cone and coderivative calculus, and for the convergence analysis of computational algorithms. One of the main motivations for the development of the transversality theory of collections of sets comes from the convergence analysis of alternating projections for solving feasibility problems. This article targets infinite-dimensional extensions of the intrinsic transversality property introduced recently by Drusvyatskiy, Ioffe and Lewis as a sufficient condition for local linear convergence of alternating projections. Several characterizations of this property are established, involving new limiting objects defined for pairs of sets. Special attention is given to the convex case.

    A Regularized Graph Layout Framework for Dynamic Network Visualization

    Many real-world networks, including social and information networks, are dynamic structures that evolve over time. Such dynamic networks are typically visualized using a sequence of static graph layouts. In addition to providing a visual representation of the network structure at each time step, the sequence should preserve the mental map between layouts of consecutive time steps to allow a human to interpret the temporal evolution of the network. In this paper, we propose a framework for dynamic network visualization in the on-line setting where only present and past graph snapshots are available to create the present layout. The proposed framework creates regularized graph layouts by augmenting the cost function of a static graph layout algorithm with a grouping penalty, which discourages nodes from deviating too far from other nodes belonging to the same group, and a temporal penalty, which discourages large node movements between consecutive time steps. The penalties increase the stability of the layout sequence, thus preserving the mental map. We introduce two dynamic layout algorithms within the proposed framework, namely dynamic multidimensional scaling (DMDS) and dynamic graph Laplacian layout (DGLL). We apply these algorithms on several data sets to illustrate the importance of both grouping and temporal regularization for producing interpretable visualizations of dynamic networks.
    Comment: To appear in Data Mining and Knowledge Discovery; supporting material (animations and MATLAB toolbox) available at http://tbayes.eecs.umich.edu/xukevin/visualization_dmkd_201
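    The augmented cost described in the abstract can be written down directly; the sketch below combines a static MDS stress with the grouping and temporal penalties, using illustrative weights `alpha` and `beta` (the paper's actual DMDS/DGLL formulations may differ in detail).

    ```python
    import math

    def regularized_stress(X, D, X_prev, groups, alpha=1.0, beta=1.0):
        """Static MDS stress + grouping penalty + temporal penalty.
        X: current layout (list of 2-D points); D: target graph distances;
        X_prev: layout at the previous time step; groups: node group labels."""
        n = len(X)
        # static stress: layout distances should match the graph distances D
        stress = sum((math.dist(X[i], X[j]) - D[i][j]) ** 2
                     for i in range(n) for j in range(i + 1, n))
        # grouping penalty: keep each node close to its group's centroid
        def centroid(g):
            members = [X[k] for k in range(n) if groups[k] == g]
            return [sum(coord) / len(members) for coord in zip(*members)]
        cents = {g: centroid(g) for g in set(groups)}
        grouping = sum(math.dist(X[i], cents[groups[i]]) ** 2 for i in range(n))
        # temporal penalty: discourage movement away from the previous layout
        temporal = sum(math.dist(X[i], X_prev[i]) ** 2 for i in range(n))
        return stress + alpha * grouping + beta * temporal
    ```

    Minimizing this over X (e.g. by gradient descent) yields a layout that trades off fidelity to the graph distances against group cohesion and frame-to-frame stability.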

    Polycation-π Interactions Are a Driving Force for Molecular Recognition by an Intrinsically Disordered Oncoprotein Family

    Molecular recognition by intrinsically disordered proteins (IDPs) commonly involves specific localized contacts and target-induced disorder-to-order transitions. However, some IDPs remain disordered in the bound state, a phenomenon coined "fuzziness", often characterized by IDP polyvalency, sequence-insensitivity and a dynamic ensemble of disordered bound-state conformations. Beyond these general features, specific biophysical models for fuzzy interactions are mostly lacking. The transcriptional activation domain of the Ewing's Sarcoma oncoprotein family (EAD) is an IDP that exhibits many features of fuzziness, with multiple EAD aromatic side chains driving molecular recognition. Considering the prevalent role of cation-π interactions at various protein-protein interfaces, we hypothesized that EAD-target binding involves polycation-π contacts between a disordered EAD and basic residues on the target. Herein we evaluated the polycation-π hypothesis via functional and theoretical interrogation of EAD variants. The experimental effects of a range of EAD sequence variations, including aromatic number, aromatic density and charge perturbations, all support the cation-π model. Moreover, the activity trends observed are well captured by a coarse-grained EAD chain model and a corresponding analytical model based on interaction between EAD aromatics and surface cations of a generic globular target. EAD-target binding, in the context of pathological Ewing's Sarcoma oncoproteins, is thus seen to be driven by a balance between EAD conformational entropy and favorable EAD-target cation-π contacts. Such a highly versatile mode of molecular recognition offers a general conceptual framework for promiscuous target recognition by polyvalent IDPs. © 2013 Song et al.