
    A global analysis of the spatial and temporal variability of usable Landsat observations at the pixel scale

    The Landsat program has the longest collection of moderate-resolution satellite imagery, and the data are free to everyone. With the improvement of standardized image products, the flexibility of cloud computing platforms, and the development of time series approaches, it is now possible to conduct global-scale analyses of Landsat time series spanning multiple decades. Efforts in this regard are limited by the density of usable observations. The availability of usable Landsat Tier 1 observations at the scale of individual pixels, from the perspective of time series analysis for land change monitoring, is remarkably variable in both space (globally) and time (1985–2020), depending most immediately on which sensors were in operation, the technical capabilities of the mission, and the acquisition strategies and objectives of the satellite operators (e.g., USGS, commercial companies) and the international ground receiving stations. Additionally, analysis of data density at the pixel scale allows for the integration of quality control data on clouds, cloud shadows, and snow, as well as other properties returned by the atmospheric correction process. Maps for different time periods show the effect of excluding observations based on the presence of clouds, cloud shadows, snow, sensor saturation, hazy observations (based on atmospheric opacity), and a lack of aerosol optical depth information. Two major findings are: (1) filtering saturated and hazy pixels helps reduce noise in the time series, although the impact varies across continents; and (2) the atmospheric opacity band needs to be used with caution, because excluding observations that lack a value in this band removes many observations that are in fact usable. The results provide guidance on when and where time series analysis is feasible, which will benefit many users of Landsat data.
    Funding: University of Connecticut; National Aeronautics and Space Administration; 80NSSC20K0022 - NASA; 20-DG-11132762-017 - Department of Agriculture/Forest Service; G12PC00070 - Department of the Interior/U.S. Geological Survey. Published version.
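    A minimal sketch of the kind of per-pixel screening described above, assuming NumPy arrays holding one pixel's time series of QA values. The bit positions follow the Landsat Collection 2 QA_PIXEL convention and the 0.001 opacity scale factor and 0.3 threshold are assumptions; verify them against the product documentation rather than treating them as the paper's settings.

        import numpy as np

        # Assumed Collection 2 QA_PIXEL bit positions (verify against the product guide).
        QA_BITS = {"dilated_cloud": 1, "cloud": 3, "cloud_shadow": 4, "snow": 5}

        def usable_mask(qa_pixel, qa_radsat, atmos_opacity=None, opacity_max=0.3):
            """Boolean mask of observations kept for per-pixel time-series analysis."""
            bad = np.zeros(qa_pixel.shape, dtype=bool)
            for bit in QA_BITS.values():                    # clouds, shadows, snow
                bad |= ((qa_pixel >> bit) & 1).astype(bool)
            bad |= qa_radsat > 0                            # saturation in any band
            if atmos_opacity is not None:
                # Hazy observations: scaled opacity above a threshold. Missing values
                # (NaN) compare False and are therefore kept, in line with the paper's
                # caution about discarding observations that lack an opacity value.
                bad |= (atmos_opacity * 0.001) > opacity_max
            return ~bad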

    Sensitivity of Global Pasturelands to Climate Variation

    Pasturelands are globally extensive, sensitive to climate, and support livestock production systems that provide an essential source of food in many parts of the world. In this paper, we integrate information from remote sensing, global climate, and land use databases to improve understanding of the resilience and resistance of this ecologically vulnerable and societally critical land use. To characterize the effect of climate on pastureland productivity at the global scale, we analyze the relationship between satellite-derived enhanced vegetation index data from MODIS and gridded precipitation data from CHIRPS at 3- and 6-month time lags. To account for the effects of different production systems, we stratify our analysis by agroecological zones and by rangeland versus mixed crop-livestock systems. Results show that 14.5% of global pasturelands experienced statistically significant greening or browning trends over the 15-year study period, with the majority of these locations showing greening. In arid ecosystems, precipitation and lagged vegetation index anomalies explain up to 69% of the variation in vegetation productivity in both crop-livestock and rangeland-based production systems. Livestock production systems in Australia are least resistant to contemporaneous and short-term precipitation anomalies, while arid livestock production systems in Latin America are least resilient to short-term vegetation greenness anomalies. Because many arid regions of the world are projected to experience decreased total precipitation and increased precipitation variability in the coming decades, improved understanding of the sensitivity of pasturelands to the joint effects of climate change and livestock production systems is required to support sustainable land management.
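    A minimal sketch of the lagged anomaly analysis implied above, simplified to a single pixel and a single precipitation predictor; the variable names and the use of standardized monthly anomalies are illustrative assumptions, not the paper's exact procedure.

        import numpy as np

        def standardized_anomaly(x, months):
            """Z-score each value against its calendar-month climatology."""
            x = np.asarray(x, dtype=float)
            months = np.asarray(months)
            z = np.full_like(x, np.nan)
            for m in range(1, 13):
                sel = months == m
                z[sel] = (x[sel] - np.nanmean(x[sel])) / np.nanstd(x[sel])
            return z

        def lagged_r2(evi, precip, months, lag=3):
            """Variance in EVI anomalies explained by precipitation anomalies `lag` months earlier."""
            evi_z = standardized_anomaly(evi, months)
            pr_z = standardized_anomaly(precip, months)
            y, x = evi_z[lag:], pr_z[:len(pr_z) - lag]
            ok = ~np.isnan(y) & ~np.isnan(x)
            return np.corrcoef(y[ok], x[ok])[0, 1] ** 2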

    Application of Pulsed Field Gel Electrophoresis to Determine γ-ray-induced Double-strand Breaks in Yeast Chromosomal Molecules

    The frequency of DNA double-strand breaks (DSB) was determined in yeast cells exposed to γ-rays under anoxic conditions. Genomic DNA of treated cells was separated by pulsed field gel electrophoresis, and two different approaches for evaluating the gels were employed: (1) the DNA mass distribution profile obtained by electrophoresis was compared to computed profiles, and the number of DSB per unit length was derived by a fitting procedure; (2) hybridization of selected chromosomes was performed, and a comparison of the hybridization signals in treated and untreated samples was used to derive the frequency of DSB.
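    For the hybridization-based approach, the usual assumption (stated here as background, not taken from the abstract) is that radiation-induced breaks are Poisson distributed along a chromosome, so the hybridization signal remaining at the full-length band position measures the fraction of unbroken molecules. A sketch of that relation:

        % S_t, S_u: hybridization signals of the full-length chromosomal band in
        % treated and untreated samples; \mu: mean number of DSB per molecule.
        \[
          \frac{S_t}{S_u} = P(\text{no break}) = e^{-\mu}
          \qquad\Longrightarrow\qquad
          \mu = -\ln\!\left(\frac{S_t}{S_u}\right).
        \]

    Dividing \mu by the molecular length then gives the DSB frequency per unit length, directly comparable to the profile-fitting estimate.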

    ImageCLEF 2014: Overview and analysis of the results

    This paper presents an overview of the ImageCLEF 2014 evaluation lab. Since its first edition in 2003, ImageCLEF has become one of the key initiatives promoting the benchmark evaluation of algorithms for the annotation and retrieval of images in various domains, ranging from public and personal images to data acquired by mobile robot platforms and medical archives. Over the years, by providing new data collections and challenging tasks to the community of interest, the ImageCLEF lab has achieved a unique position in the image annotation and retrieval research landscape. The 2014 edition consists of four tasks: domain adaptation, scalable concept image annotation, liver CT image annotation, and robot vision. This paper describes the tasks and the 2014 competition, giving a unifying perspective of the present activities of the lab while discussing future challenges and opportunities.
    This work has been partially supported by the tranScriptorium FP7 project under grant #600707 (M. V., R. P.).
    Caputo, B.; Müller, H.; Martinez-Gomez, J.; Villegas Santamaría, M.; Acar, B.; Patricia, N.; Marvasti, N.; ... (2014). ImageCLEF 2014: Overview and analysis of the results. In: Information Access Evaluation. Multilinguality, Multimodality, and Interaction: 5th International Conference of the CLEF Initiative, CLEF 2014, Sheffield, UK, September 15-18, 2014. Proceedings. Springer Verlag (Germany), pp. 192-211. https://doi.org/10.1007/978-3-319-11382-1_18

    Primary decomposition and the fractal nature of knot concordance

    For each sequence of polynomials, P=(p_1(t), p_2(t), ...), we define a characteristic series of groups, called the derived series localized at P. Given a knot K in S^3, such a sequence of polynomials arises naturally as the orders of certain submodules of the sequence of higher-order Alexander modules of K. These group series yield new filtrations of the knot concordance group that refine the (n)-solvable filtration of Cochran-Orr-Teichner. We show that the quotients of successive terms of these refined filtrations have infinite rank. These results also suggest higher-order analogues of the p(t)-primary decomposition of the algebraic concordance group. We use these techniques to give evidence that the set of smooth concordance classes of knots is a fractal set. We also show that no Cochran-Orr-Teichner knot is concordant to any Cochran-Harvey-Leidy knot. Comment: 60 pages, added 4 pages to introduction, minor corrections otherwise; Math. Annalen 201

    Search for a W' boson decaying to a bottom quark and a top quark in pp collisions at sqrt(s) = 7 TeV

    Results are presented from a search for a W' boson using a dataset corresponding to 5.0 inverse femtobarns of integrated luminosity collected during 2011 by the CMS experiment at the LHC in pp collisions at sqrt(s)=7 TeV. The W' boson is modeled as a heavy W boson, but different scenarios for the couplings to fermions are considered, involving both left-handed and right-handed chiral projections of the fermions, as well as an arbitrary mixture of the two. The search is performed in the decay channel W' to t b, leading to a final-state signature with a single lepton (e, mu), missing transverse energy, and jets, at least one of which is tagged as a b-jet. A W' boson that couples to fermions with the same coupling constant as the W, but to the right-handed rather than left-handed chiral projections, is excluded for masses below 1.85 TeV at the 95% confidence level. For the first time using LHC data, constraints have been placed on the W' gauge coupling for a set of left- and right-handed coupling combinations. These results represent a significant improvement over previously published limits. Comment: Submitted to Physics Letters B. Replaced with version published
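    For background, the arbitrary mixture of chiral couplings referred to above is commonly parameterized in W'-to-tb searches with an effective Lagrangian of the following generic form; this is the standard convention in the literature, reproduced here as an assumption rather than text taken from the paper:

        % Generic W' interaction with fermions f, f'; a^L and a^R are the
        % left- and right-handed coupling strengths relative to the SM W.
        \[
          \mathcal{L} = \frac{g_w}{2\sqrt{2}}\, V_{f f'}\, \bar{f}\, \gamma^{\mu}
          \left[ a^{R}_{f f'}\left(1+\gamma^{5}\right) + a^{L}_{f f'}\left(1-\gamma^{5}\right) \right] W'_{\mu}\, f' + \mathrm{h.c.}
        \]

    Setting a^L = 1, a^R = 0 reproduces SM-like left-handed couplings, while a^L = 0, a^R = 1 corresponds to the purely right-handed W' excluded below 1.85 TeV.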

    Search for the standard model Higgs boson decaying into two photons in pp collisions at sqrt(s)=7 TeV

    A search for a Higgs boson decaying into two photons is described. The analysis is performed using a dataset recorded by the CMS experiment at the LHC from pp collisions at a centre-of-mass energy of 7 TeV, which corresponds to an integrated luminosity of 4.8 inverse femtobarns. Limits are set on the cross section of the standard model Higgs boson decaying to two photons. The expected exclusion limit at 95% confidence level is between 1.4 and 2.4 times the standard model cross section in the mass range between 110 and 150 GeV. The analysis of the data excludes, at 95% confidence level, the standard model Higgs boson decaying into two photons in the mass range 128 to 132 GeV. The largest excess of events above the expected standard model background is observed for a Higgs boson mass hypothesis of 124 GeV with a local significance of 3.1 sigma. The global significance of observing an excess with a local significance greater than 3.1 sigma anywhere in the search range 110-150 GeV is estimated to be 1.8 sigma. More data are required to ascertain the origin of this excess. Comment: Submitted to Physics Letters
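    As a rough numerical illustration of the look-elsewhere correction quoted above (an illustrative calculation, not taken from the paper), converting the one-sided Gaussian significances to p-values gives

        \[
          p_{\mathrm{local}} = 1-\Phi(3.1) \approx 9.7\times 10^{-4},
          \qquad
          p_{\mathrm{global}} = 1-\Phi(1.8) \approx 3.6\times 10^{-2},
        \]

    so the effective trials factor over the 110-150 GeV search range is of order p_global / p_local ≈ 37.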

    Measurement of the Lambda(b) cross section and the anti-Lambda(b) to Lambda(b) ratio with Lambda(b) to J/Psi Lambda decays in pp collisions at sqrt(s) = 7 TeV

    The Lambda(b) differential production cross section and the cross section ratio anti-Lambda(b)/Lambda(b) are measured as functions of transverse momentum pt(Lambda(b)) and rapidity abs(y(Lambda(b))) in pp collisions at sqrt(s) = 7 TeV using data collected by the CMS experiment at the LHC. The measurements are based on Lambda(b) decays reconstructed in the exclusive final state J/Psi Lambda, with the subsequent decays J/Psi to an opposite-sign muon pair and Lambda to proton pion, using a data sample corresponding to an integrated luminosity of 1.9 inverse femtobarns. The product of the cross section and the branching ratio for Lambda(b) to J/Psi Lambda versus pt(Lambda(b)) falls faster than that of b mesons. The measured value of the cross section times the branching ratio for pt(Lambda(b)) > 10 GeV and abs(y(Lambda(b))) < 2.0 is 1.06 +/- 0.06 +/- 0.12 nb, and the integrated cross section ratio for anti-Lambda(b)/Lambda(b) is 1.02 +/- 0.07 +/- 0.09, where the uncertainties are statistical and systematic, respectively. Comment: Submitted to Physics Letters

    Search for new physics in events with opposite-sign leptons, jets, and missing transverse energy in pp collisions at sqrt(s) = 7 TeV

    A search is presented for physics beyond the standard model (BSM) in final states with a pair of opposite-sign isolated leptons accompanied by jets and missing transverse energy. The search uses LHC data recorded at a center-of-mass energy sqrt(s) = 7 TeV with the CMS detector, corresponding to an integrated luminosity of approximately 5 inverse femtobarns. Two complementary search strategies are employed. The first probes models with a specific dilepton production mechanism that leads to a characteristic kinematic edge in the dilepton mass distribution. The second strategy probes models of dilepton production with heavy, colored objects that decay to final states including invisible particles, leading to very large hadronic activity and missing transverse energy. No evidence for an event yield in excess of the standard model expectations is found. Upper limits on the BSM contributions to the signal regions are deduced from the results, which are used to exclude a region of the parameter space of the constrained minimal supersymmetric extension of the standard model. Additional information related to detector efficiencies and response is provided to allow testing of specific models of BSM physics not considered in this paper. Comment: Replaced with published version. Added journal reference and DOI
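    For context on the kinematic edge targeted by the first search strategy: in a sequential two-body decay chain such as chi_2^0 -> slepton + lepton -> chi_1^0 + l+ l-, the dilepton invariant mass distribution has a sharp endpoint given by the textbook relation (background material, not reproduced from the paper):

        \[
          \left(m_{\ell\ell}^{\mathrm{edge}}\right)^{2}
          = \frac{\left(m_{\tilde{\chi}^{0}_{2}}^{2}-m_{\tilde{\ell}}^{2}\right)
                  \left(m_{\tilde{\ell}}^{2}-m_{\tilde{\chi}^{0}_{1}}^{2}\right)}
                 {m_{\tilde{\ell}}^{2}}.
        \]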