483 research outputs found

    Effects of landscape metrics and land-use variables on macroinvertebrate communities and habitat characteristics

    Get PDF
    ABSTRACT: The growing number of studies establishing links between stream biota, environmental factors and river classification has contributed to a better understanding of fluvial ecosystem function. Environmental factors influencing river systems are distributed over hierarchically organised spatial scales. We used a nested hierarchical sampling design across four catchments to assess how benthic macroinvertebrate community composition and habitat descriptors at lower spatial scales were shaped by landscape and land-use patterns. We found that benthic macroinvertebrate community structure and composition varied significantly from catchment to habitat level. We identified fractal metrics of landscape descriptors capable of explaining compositional and functional change in the benthic faunal indicators and compared them with traditional variables describing land use and reach-level habitat within a 1 km radius of each sampling site. We found that fractal landscape metrics were the best predictor variables for benthic macroinvertebrate community composition and function, instream habitat and river corridor characteristics.
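    The abstract does not say which fractal landscape metrics were computed, so the snippet below is only an illustrative sketch of one common candidate: the box-counting fractal dimension of a binary land-cover raster (for example, a single land-cover class inside the 1 km radius around a sampling site). The function name, box sizes and the random toy raster are assumptions, not the authors' method.

```python
import numpy as np

def box_counting_dimension(cover: np.ndarray, box_sizes=(2, 4, 8, 16, 32)) -> float:
    """Estimate the box-counting fractal dimension of a binary land-cover raster."""
    counts = []
    for s in box_sizes:
        # Trim the raster so it tiles evenly into s x s boxes.
        h, w = (cover.shape[0] // s) * s, (cover.shape[1] // s) * s
        boxes = cover[:h, :w].reshape(h // s, s, w // s, s)
        # Count boxes containing at least one occupied cell.
        counts.append(max(int(boxes.any(axis=(1, 3)).sum()), 1))
    # Slope of log(count) versus log(1/s) estimates the fractal dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes)), np.log(counts), 1)
    return float(slope)

# Toy usage: a random binary "land-cover" raster standing in for real data.
rng = np.random.default_rng(0)
cover = rng.random((256, 256)) > 0.6
print(f"Estimated box-counting dimension: {box_counting_dimension(cover):.2f}")
```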

    Evaluation Activities in Pharmacognosy

    Get PDF
    The implementation of the European Higher Education Area (EHEA) requires an educational system rooted in a competency-based learning approach in which, under professorial supervision, students become active agents in order to reach a sufficient level of competence, retain more knowledge, and manage and apply this knowledge more efficiently. It implies modifying not only our teaching practices but also our methods of evaluation, which, as an essential part of the education process, guarantees the acquisition of an ample range of skills and keeps course material up to date while providing students and educators with feedback, reflection and analysis of the whole process. This, in turn, facilitates the correction of deficiencies and the improvement of methodologies. In Pharmacognosy, which is currently taught in the second year of the Pharmacy program and in which ca. 200 students are enrolled, various evaluation strategies coherent with the established learning objectives were introduced to two groups of students. We first administered a questionnaire to ascertain the range of knowledge the students already had in related subjects. Then, two types of test were given: one type emphasizing the acquisition and understanding of knowledge and the other focusing on more generic, interdisciplinary competences. The former type included: online multiple-choice questionnaires, which allow for discernment of information and quick feedback; open-answer tests to determine the students' ability to reason, organize their thoughts and express their ideas; and the resolution of problems to see how the students handle information. The latter type of exercise was given to pairs of students who, upon completing the laboratory component of the class, were given a proposal for solving a problem relating to a crude drug. The students then had to draft a scientific-paper-like document describing the experimental protocol along with their observations, analysis of the results, and how they searched for and selected information. Finally, the students gave an oral presentation of the protocol and their findings, allowing the professor to evaluate their oral communication skills. This work was funded by an educational innovation project of the Finestra Oberta programme (29/FO/8) of the Vicerectorat de Convergència Europea i Qualitat of the Universitat de València.

    Precision Measurement of the Proton and Deuteron Spin Structure Functions g2 and Asymmetries A2

    Get PDF
    We have measured the spin structure functions g2p and g2d and the virtual photon asymmetries A2p and A2d over the kinematic range 0.02 < x < 0.8 and 0.7 < Q^2 < 20 GeV^2 by scattering 29.1 and 32.3 GeV longitudinally polarized electrons from transversely polarized NH3 and 6LiD targets. Our measured g2 approximately follows the twist-2 Wandzura-Wilczek calculation. The twist-3 reduced matrix elements d2p and d2n are less than two standard deviations from zero. The data are inconsistent with the Burkhardt-Cottingham sum rule if there is no pathological behavior as x -> 0. The Efremov-Leader-Teryaev integral is consistent with zero within our measured kinematic range. The absolute value of A2 is significantly smaller than the sqrt[R(1+A1)/2] limit. Comment: 12 pages, 4 figures, 2 tables
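    For reference, the quantities named in this abstract have standard definitions; the expressions below are the textbook forms in LaTeX notation, not equations quoted from the paper:

        g_2^{WW}(x,Q^2) = -g_1(x,Q^2) + \int_x^1 \frac{g_1(y,Q^2)}{y}\,dy            (twist-2 Wandzura-Wilczek term)
        \int_0^1 g_2(x,Q^2)\,dx = 0                                                  (Burkhardt-Cottingham sum rule)
        |A_2(x,Q^2)| \le \sqrt{R(x,Q^2)\,\bigl(1+A_1(x,Q^2)\bigr)/2}                 (positivity limit quoted above)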

    Measurement of the Proton and Deuteron Spin Structure Functions g2 and Asymmetry A2

    Full text link
    We have measured the spin structure functions g2p and g2d and the virtual photon asymmetries A2p and A2d over the kinematic range 0.02 < x < 0.8 and 1.0 < Q^2 < 30 (GeV/c)^2 by scattering 38.8 GeV longitudinally polarized electrons from transversely polarized NH3 and 6LiD targets. The absolute value of A2 is significantly smaller than the sqrt{R} positivity limit over the measured range, while g2 is consistent with the twist-2 Wandzura-Wilczek calculation. We obtain results for the twist-3 reduced matrix elements d2p, d2d and d2n. The Burkhardt-Cottingham sum rule integral, int(g2(x)dx), is reported for the range 0.02 < x < 0.8. Comment: 12 pages, 4 figures, 1 table
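    The twist-3 reduced matrix elements d2 reported here have the standard definition (textbook form, not quoted from the paper), which makes explicit that d2 measures the deviation of g2 from its twist-2 Wandzura-Wilczek part:

        d_2(Q^2) = 3\int_0^1 x^2\,\bigl[g_2(x,Q^2) - g_2^{WW}(x,Q^2)\bigr]\,dx = \int_0^1 x^2\,\bigl[2g_1(x,Q^2) + 3g_2(x,Q^2)\bigr]\,dx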

    Efficient Training of Graph-Regularized Multitask SVMs

    Full text link
    We present an optimization framework for graph-regularized multi-task SVMs based on the primal formulation of the problem. Previous approaches employ a so-called multi-task kernel (MTK) and thus become inapplicable when the number of training examples n is large (they are typically limited to n < 20,000, even for just a few tasks). In this paper, we present a primal optimization criterion, allowing for general loss functions, and derive its dual representation. Building on the work of Hsieh et al. [1,2], we derive an algorithm for optimizing the large-margin objective and prove its convergence. Our computational experiments show a speedup of up to three orders of magnitude over LibSVM and SVMLight for several standard benchmarks as well as challenging data sets from the application domain of computational biology. Combining our optimization methodology with the COFFIN large-scale learning framework [3], we are able to train a multi-task SVM using over 1,000,000 training points stemming from 4 different tasks. An efficient C++ implementation of our algorithm is being made publicly available as a part of the SHOGUN machine learning toolbox [4].
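    As a rough illustration of the kind of objective being optimized, here is a minimal NumPy sketch of one common form of the graph-regularized multi-task SVM primal, trained with plain subgradient descent instead of the dual coordinate-descent method the paper derives from Hsieh et al. [1,2]; the function name, the task-similarity matrix A and the toy data are assumptions, not the paper's implementation.

```python
import numpy as np

def mtsvm_subgradient(Xs, ys, A, C=1.0, lr=1e-3, epochs=200, seed=0):
    """Minimal graph-regularized multi-task linear SVM (primal, subgradient descent).

    Objective:
        J(W) = 1/2 * sum_t ||w_t||^2
             + 1/2 * sum_{s,t} A[s,t] * ||w_s - w_t||^2          (graph coupling)
             + C * sum_t sum_i max(0, 1 - y_ti * <w_t, x_ti>)    (hinge loss)

    Xs, ys: per-task data (n_t x d arrays) and labels in {-1, +1}.
    A: symmetric task-similarity (graph adjacency) matrix of shape (T, T).
    """
    T, d = len(Xs), Xs[0].shape[1]
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.01, size=(T, d))
    for _ in range(epochs):
        grad = W.copy()                          # gradient of 1/2 ||w_t||^2
        for s in range(T):                       # graph-regularization term
            for t in range(T):
                grad[s] += 2.0 * A[s, t] * (W[s] - W[t])
        for t in range(T):                       # hinge-loss subgradient
            margins = ys[t] * (Xs[t] @ W[t])
            active = margins < 1.0
            grad[t] -= C * (ys[t][active, None] * Xs[t][active]).sum(axis=0)
        W -= lr * grad
    return W

# Toy usage: two related tasks sharing a similar decision boundary.
rng = np.random.default_rng(1)
Xs = [rng.normal(size=(100, 5)) for _ in range(2)]
true_w = np.array([1.0, -1.0, 0.5, 0.0, 0.0])
ys = [np.sign(X @ true_w + 0.1 * rng.normal(size=100)) for X in Xs]
A = np.array([[0.0, 1.0], [1.0, 0.0]])           # tasks 0 and 1 are coupled
W = mtsvm_subgradient(Xs, ys, A)
print("Per-task training accuracy:",
      [float((np.sign(X @ w) == y).mean()) for X, y, w in zip(Xs, ys, W)])
```

    With a precomputed task graph A, the same objective could in principle be handed to any primal solver; the point of the paper is an optimization scheme that scales this training to over a million examples.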

    Fast automated cell phenotype image classification

    Get PDF
    BACKGROUND: The genomic revolution has led to rapid growth in sequencing of genes and proteins, and attention is now turning to the function of the encoded proteins. In this respect, microscope imaging of a protein's sub-cellular localisation is proving invaluable, and recent advances in automated fluorescent microscopy allow protein localisations to be imaged in high throughput. Hence there is a need for large-scale automated computational techniques to efficiently quantify, distinguish and classify sub-cellular images. While image statistics have proved highly successful in distinguishing localisation, commonly used measures suffer from being relatively slow to compute, and often require cells to be individually selected from experimental images, thus limiting both throughput and the range of potential applications. Here we introduce threshold adjacency statistics, the essence of which is to threshold the image and to count the number of above-threshold pixels with a given number of above-threshold pixels adjacent. These novel measures are shown to distinguish and classify images of distinct sub-cellular localisation with high speed and accuracy without image cropping. RESULTS: Threshold adjacency statistics are applied to classification of protein sub-cellular localisation images. They are tested on two image sets (available for download), one for which fluorescently tagged proteins are endogenously expressed in 10 sub-cellular locations, and another for which proteins are transfected into 11 locations. For each image set, a support vector machine was trained and tested. Classification accuracies of 94.4% and 86.6% are obtained on the endogenous and transfected sets, respectively. Threshold adjacency statistics are found to provide comparable or higher accuracy than other commonly used statistics while being an order of magnitude faster to calculate. Further, threshold adjacency statistics in combination with Haralick measures give accuracies of 98.2% and 93.2% on the endogenous and transfected sets, respectively. CONCLUSION: Threshold adjacency statistics have the potential to greatly extend the scale and range of applications of image statistics in computational image analysis. They remove the need for cropping of individual cells from images, and are an order of magnitude faster to calculate than other commonly used statistics while providing comparable or better classification accuracy, both essential requirements for application to large-scale approaches.
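    Below is a minimal sketch of the core idea behind threshold adjacency statistics, assuming a simple mean-based threshold and the basic nine-bin variant (the paper's exact thresholding scheme and feature count may differ); it is illustrative rather than the authors' implementation.

```python
import numpy as np
from scipy.ndimage import convolve

def threshold_adjacency_stats(image: np.ndarray, threshold=None) -> np.ndarray:
    """Compute 9 threshold adjacency statistics for a 2-D grayscale image.

    1. Binarize the image at the threshold (here: the image mean, as a default).
    2. For every above-threshold pixel, count how many of its 8 neighbours are
       also above threshold.
    3. Return the normalized histogram of those counts (0..8 neighbours).
    """
    if threshold is None:
        threshold = image.mean()       # assumed default; the paper uses its own scheme
    binary = (image > threshold).astype(np.uint8)

    # 3x3 kernel with a zero centre counts above-threshold 8-neighbours.
    kernel = np.array([[1, 1, 1],
                       [1, 0, 1],
                       [1, 1, 1]], dtype=np.uint8)
    neighbour_counts = convolve(binary, kernel, mode="constant", cval=0)

    counts = neighbour_counts[binary == 1]           # only above-threshold pixels
    hist = np.bincount(counts, minlength=9)[:9].astype(float)
    total = hist.sum()
    return hist / total if total > 0 else hist       # 9-element feature vector

# Toy usage on a random image; real use would feed these features to an SVM.
rng = np.random.default_rng(0)
features = threshold_adjacency_stats(rng.random((128, 128)))
print(features)
```

    In a pipeline like the one described, each image is reduced to such a feature vector (optionally combined with Haralick measures) and passed to a support vector machine for classification.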

    Leptonic and Semileptonic Decays of Charm and Bottom Hadrons

    Get PDF
    We review the experimental measurements and theoretical descriptions of leptonic and semileptonic decays of particles containing a single heavy quark, either charm or bottom. Measurements of bottom semileptonic decays are used to determine the magnitudes of two fundamental parameters of the standard model, the Cabibbo-Kobayashi-Maskawa matrix elements V_{cb} and V_{ub}. These parameters are connected with the physics of quark flavor and mass, and they have important implications for the breakdown of CP symmetry. To extract precise values of |V_{cb}| and |V_{ub}| from measurements, however, requires a good understanding of the decay dynamics. Measurements of both charm and bottom decay distributions provide information on the interactions governing these processes. The underlying weak transition in each case is relatively simple, but the strong interactions that bind the quarks into hadrons introduce complications. We also discuss new theoretical approaches, especially heavy-quark effective theory and lattice QCD, which are providing insights and predictions now being tested by experiment. An international effort at many laboratories will rapidly advance knowledge of this physics during the next decade. Comment: This review article will be published in Reviews of Modern Physics in the fall, 1995. This file contains only the abstract and the table of contents. The full 168-page document including 47 figures is available at http://charm.physics.ucsb.edu/papers/slrevtex.p
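    As a concrete example of how CKM matrix elements enter measurable rates (a standard textbook expression, not a formula quoted from the review), the purely leptonic width of a pseudoscalar meson P with valence quarks q and q' is

        \Gamma(P \to \ell\nu_\ell) = \frac{G_F^2}{8\pi}\, f_P^2\, m_\ell^2\, m_P \left(1 - \frac{m_\ell^2}{m_P^2}\right)^2 |V_{qq'}|^2,

    so the measured rate determines |V_{qq'}| only as precisely as the decay constant f_P is known, which is exactly the kind of hadronic input that lattice QCD and heavy-quark effective theory are meant to supply.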

    Search for direct production of charginos and neutralinos in events with three leptons and missing transverse momentum in √s = 7 TeV pp collisions with the ATLAS detector

    Get PDF
    A search for the direct production of charginos and neutralinos in final states with three electrons or muons and missing transverse momentum is presented. The analysis is based on 4.7 fb^-1 of proton-proton collision data delivered by the Large Hadron Collider and recorded with the ATLAS detector. Observations are consistent with Standard Model expectations in three signal regions that are either depleted or enriched in Z-boson decays. Upper limits at 95% confidence level are set in R-parity-conserving phenomenological minimal supersymmetric models and in simplified models, significantly extending previous results.

    Jet size dependence of single jet suppression in lead-lead collisions at sqrt(s(NN)) = 2.76 TeV with the ATLAS detector at the LHC

    Get PDF
    Measurements of inclusive jet suppression in heavy ion collisions at the LHC provide direct sensitivity to the physics of jet quenching. In a sample of lead-lead collisions at sqrt(s_NN) = 2.76 TeV corresponding to an integrated luminosity of approximately 7 inverse microbarns, ATLAS has measured jets with a calorimeter over the pseudorapidity interval |eta| < 2.1 and over the transverse momentum range 38 < pT < 210 GeV. Jets were reconstructed using the anti-kt algorithm with values of the distance parameter, which determines the nominal jet radius, of R = 0.2, 0.3, 0.4 and 0.5. The centrality dependence of the jet yield is characterized by the jet "central-to-peripheral ratio", Rcp. Jet production is found to be suppressed by approximately a factor of two in the 10% most central collisions relative to peripheral collisions. Rcp varies smoothly with centrality as characterized by the number of participating nucleons. The observed suppression is only weakly dependent on jet radius and transverse momentum. These results provide the first direct measurement of inclusive jet suppression in heavy ion collisions and complement previous measurements of dijet transverse energy imbalance at the LHC. Comment: 15 pages plus author list (30 pages total), 8 figures, 2 tables, submitted to Physics Letters B. All figures including auxiliary figures are available at http://atlas.web.cern.ch/Atlas/GROUPS/PHYSICS/PAPERS/HION-2011-02
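    The central-to-peripheral ratio Rcp used here has the conventional form (standard definition, not quoted from the paper): the per-event jet yield in a central bin divided by the per-event yield in the peripheral reference bin, with each yield scaled by the corresponding mean number of binary nucleon-nucleon collisions:

        R_{CP} = \frac{\langle N_{coll}^{periph}\rangle}{\langle N_{coll}^{cent}\rangle}\;\frac{(1/N_{evt}^{cent})\, dN_{jet}^{cent}/dp_T}{(1/N_{evt}^{periph})\, dN_{jet}^{periph}/dp_T}

    On this definition, suppression "by approximately a factor of two" in the most central collisions corresponds to Rcp of roughly 0.5.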