243,234 research outputs found

    Image time series processing for agriculture monitoring

    Get PDF
    Given strong year-to-year variability, increasing competition for natural resources, and climate change impacts on agriculture, monitoring global crop and natural vegetation conditions is highly relevant, particularly in food-insecure areas. Remote sensing image series at high temporal and low spatial resolution can assist this monitoring, as they provide key information in near-real time over large areas. The SPIRITS software, presented in this paper, is a stand-alone toolbox developed for environmental monitoring, particularly to produce clear and evidence-based information for crop production analysts and decision makers. It includes a large number of tools whose main aim is to extract vegetation indicators from image time series, estimate the potential impact of anomalies on crop production, and share this information with different audiences. SPIRITS offers an integrated and flexible analysis environment with a user-friendly graphical interface, which allows sequential tasking and a high level of automation of processing chains. It is freely distributed for non-commercial use and extensively documented.
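    A common way to turn a vegetation-index time series into the kind of anomaly indicator described above is to standardize the current observation against its multi-year history for the same period of the year. The sketch below is a hypothetical illustration of that idea, not the SPIRITS API; the function name and toy values are assumptions.

```python
# Hypothetical sketch (not the SPIRITS toolbox itself): flag an NDVI
# anomaly by standardizing the current value against the multi-year
# mean and standard deviation for the same period (e.g. same dekad).
from statistics import mean, stdev

def ndvi_anomaly(current: float, history: list[float]) -> float:
    """Standardized anomaly (z-score) of an NDVI observation."""
    mu = mean(history)
    sigma = stdev(history)
    return (current - mu) / sigma

# Toy example: NDVI for one pixel in the same dekad over five past years.
history = [0.62, 0.58, 0.65, 0.60, 0.55]
z = ndvi_anomaly(0.40, history)  # unusually low vegetation activity
print(f"z-score: {z:.2f}")
```

A strongly negative z-score marks a pixel whose vegetation activity is far below its historical norm, which is the kind of signal a crop analyst would investigate further.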

    Rice area mapping in Palakkad district of Kerala using Sentinel-2 data and Geographic information system technique

    Get PDF
    Accurate estimation of the rice cultivation area well before harvest is critical for projecting rice yields and developing policies to ensure food security. This research examines how Remote Sensing (RS) and Geographic Information Systems (GIS) can be used to map rice fields in the Palakkad district of Kerala. The area was delineated using three multi-temporal cloud-free Sentinel-2 images with 10 m spatial resolution, corresponding to the crop's reproductive stage during the mundakan season (September-October to December-January) of 2020-21. To simplify classification, the administrative boundary of the district was overlaid on the mosaicked image. Rice acreage estimation and land use classification of the major rice tract of Palakkad district, comprising five blocks, were carried out using the Iterative Self-Organizing Data Analysis Technique (ISODATA) unsupervised classification in ArcGIS 10.1, employing a False Colour Composite (FCC) of the Blue (B2), Green (B3), Red (B4) and Near-infrared (B8) bands of the Sentinel-2 images. Classification accuracy was determined from a total of 60 validation points located throughout the district, comprising 30 rice and 30 non-rice points. The total estimated area was 24742.76 ha, with an average accuracy of 88.33% and a kappa coefficient of 0.766 in the five blocks of Palakkad district. The information generated will be helpful in assessing the anticipated production as well as the water demand of the rice fields.
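    The accuracy figures above come from a standard confusion-matrix assessment. The sketch below computes overall accuracy and Cohen's kappa; the 2x2 matrix is a hypothetical split of the 60 validation points (30 rice, 30 non-rice) chosen to be consistent with the reported 88.33% and 0.766, not the study's actual data.

```python
# Overall accuracy and Cohen's kappa from a confusion matrix whose rows
# are actual classes and columns are mapped classes. The matrix values
# are an illustrative assumption consistent with the reported figures.
def accuracy_and_kappa(cm: list[list[int]]) -> tuple[float, float]:
    n = sum(sum(row) for row in cm)
    correct = sum(cm[i][i] for i in range(len(cm)))
    po = correct / n                      # observed agreement
    pe = sum(                             # chance agreement from marginals
        sum(cm[i]) * sum(row[i] for row in cm) for i in range(len(cm))
    ) / n ** 2
    return po, (po - pe) / (1 - pe)

cm = [[27, 3],   # actual rice:     27 mapped as rice,  3 as non-rice
      [4, 26]]   # actual non-rice:  4 mapped as rice, 26 as non-rice
acc, kappa = accuracy_and_kappa(cm)
print(f"accuracy = {acc:.2%}, kappa = {kappa:.3f}")  # ~88.33%, ~0.767
```

Kappa discounts the agreement expected by chance from the row/column marginals, which is why it is lower than the raw accuracy.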

    Joint Video and Text Parsing for Understanding Events and Answering Queries

    Full text link
    We propose a framework for parsing video and text jointly to understand events and answer user queries. Our framework produces a parse graph that represents the compositional structures of spatial information (objects and scenes), temporal information (actions and events) and causal information (causalities between events and fluents) in the video and text. The knowledge representation of our framework is based on a spatial-temporal-causal And-Or graph (S/T/C-AOG), which jointly models possible hierarchical compositions of objects, scenes and events as well as their interactions and mutual contexts, and specifies the prior probability distribution of the parse graphs. We present a probabilistic generative model for joint parsing that captures the relations between the input video/text, their corresponding parse graphs and the joint parse graph. Based on the probabilistic model, we propose a joint parsing system consisting of three modules: video parsing, text parsing and joint inference. Video parsing and text parsing produce two parse graphs from the input video and text respectively. The joint inference module produces a joint parse graph by performing matching, deduction and revision on the video and text parse graphs. The proposed framework has the following objectives: first, we aim at deep semantic parsing of video and text that goes beyond traditional bag-of-words approaches; second, we perform parsing and reasoning across the spatial, temporal and causal dimensions based on the joint S/T/C-AOG representation; third, we show that deep joint parsing facilitates subsequent applications such as generating narrative text descriptions and answering queries in the form of who, what, when, where and why. We empirically evaluated our system by comparison against ground truth as well as by accuracy of query answering, and obtained satisfactory results.
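    The "matching" step of joint inference can be pictured as merging nodes that appear in both modality-specific parse graphs so that each modality fills gaps in the other. The toy sketch below is a drastic simplification under assumed data structures (label-to-attributes dictionaries), not the paper's S/T/C-AOG machinery.

```python
# Toy illustration of joint-inference matching: nodes present in both
# the video and text parse graphs are merged (attribute union), and
# nodes seen by only one modality are kept. This is a hypothetical
# simplification, not the paper's probabilistic formulation.
def joint_parse(video_nodes: dict[str, set[str]],
                text_nodes: dict[str, set[str]]) -> dict[str, set[str]]:
    joint = {}
    for label in video_nodes.keys() | text_nodes.keys():
        joint[label] = (video_nodes.get(label, set())
                        | text_nodes.get(label, set()))
    return joint

video = {"person": {"location:kitchen"}, "cup": {"state:held"}}
text = {"person": {"action:drinking"}, "water": {"state:in-cup"}}
print(joint_parse(video, text))
```

In the merged graph, "person" carries attributes observed only in the video together with those mentioned only in the text, which is the intuition behind answering who/what/where queries from the joint parse.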

    Destination image analytics through traveller-generated content

    Get PDF
    The explosion of content generated by users, in parallel with the spectacular growth of social media and the proliferation of mobile devices, is causing a paradigm shift in research. Surveys or interviews are no longer necessary to obtain users' opinions, because researchers can obtain this information freely on social media. In the field of tourism, online travel reviews (OTRs) hosted on travel-related websites stand out. The objective of this article is to demonstrate the usefulness of OTRs for analysing the image of a tourist destination. To this end, a theoretical and methodological framework is defined, along with metrics for measuring different aspects (designative, appraisive and prescriptive) of the tourist image. The model is applied to the region of Attica (Greece) through a random sample of 300,000 TripAdvisor OTRs about attractions, activities, restaurants and hotels written in English between 2013 and 2018. The results show trends, preferences, assessments and opinions from the demand side, which can be useful for destination managers in optimising the distribution of available resources and promoting sustainability.
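    Demand-side metrics of the kind the article derives from OTRs typically start by aggregating ratings and review volume per category. The sketch below is a hedged illustration with invented toy records, not TripAdvisor data or the article's actual metric definitions.

```python
# Hypothetical sketch of a simple demand-side metric from OTRs:
# average rating and review count per category. The records are
# invented toy data, not TripAdvisor content.
from collections import defaultdict

def category_stats(reviews):
    totals = defaultdict(lambda: [0, 0])  # category -> [rating sum, count]
    for category, rating in reviews:
        totals[category][0] += rating
        totals[category][1] += 1
    return {c: (s / n, n) for c, (s, n) in totals.items()}

reviews = [("attraction", 5), ("attraction", 4), ("restaurant", 3),
           ("hotel", 4), ("restaurant", 5)]
print(category_stats(reviews))
```

Splitting such aggregates by year or by reviewer origin is what turns raw review dumps into the trend and preference signals destination managers can act on.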

    A multi-temporal phenology based classification approach for Crop Monitoring in Kenya

    Get PDF
    The SBAM (Satellite Based Agricultural Monitoring) project, funded by the Italian Space Agency, aims at: developing a validated satellite-imagery-based method for estimating and updating the agricultural areas in the region of Central Africa; implementing an automated processing chain capable of providing periodic agricultural land cover maps of the area of interest and, possibly, an estimate of crop yield. The project aims to fill the gap in the availability of high spatial resolution maps of the agricultural areas of Kenya. A high spatial resolution land cover map of Central-Eastern Africa, including Kenya, was compiled in 2000 in the framework of the Africover project, using Landsat images acquired mostly in 1995. We investigated the use of phenological information to support crop classification and monitoring from remotely sensed images, based on Landsat 8 and, in the near future, Sentinel-2 imagery. Phenological information on crop condition was collected using time series of NDVI (Normalized Difference Vegetation Index) derived from Landsat 8 images. The Kenyan countryside is mainly characterized by a large number of fragmented small and medium-size farmlands, which dramatically increases the difficulty of classification; 30 m spatial resolution images are not sufficient for a proper classification of such areas. Therefore, a pan-sharpening FIHS (Fast Intensity Hue Saturation) technique was implemented to increase the image resolution from 30 m to 15 m. Ground test sites were selected by searching for vegetated agricultural areas, from which phenological information was extracted. The classification of agricultural areas is therefore based on crop phenology, on vegetation index behaviour retrieved from a time series of satellite images, and on AEZ (Agro-Ecological Zones) information made available by FAO (FAO, 1996) for the area of interest.
    This paper presents the results of the proposed classification procedure in comparison with land cover maps produced in past years by other projects. The results refer to Nakuru County and were validated using field campaign data. The procedure achieved a satisfactory overall accuracy of 92.66%, a significant improvement over previous land cover maps.
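    The phenology-based idea above can be sketched as: compute an NDVI time series per pixel from red/NIR reflectance, then label a pixel as cropland when its seasonal profile shows a clear green-up and senescence cycle. The threshold values and the amplitude test below are illustrative assumptions, not the project's calibrated rules.

```python
# Sketch of phenology-based crop detection from an NDVI time series.
# The peak/amplitude thresholds are hypothetical illustrations.
def ndvi(red: float, nir: float) -> float:
    """Normalized Difference Vegetation Index from reflectances."""
    return (nir - red) / (nir + red)

def looks_like_crop(series: list[float],
                    amplitude: float = 0.3, peak: float = 0.6) -> bool:
    """A crude seasonal test: high peak plus a large seasonal swing."""
    return max(series) >= peak and (max(series) - min(series)) >= amplitude

# Toy seasonal profile: low at sowing, peak mid-season, drop at harvest.
profile = [ndvi(r, n) for r, n in
           [(0.12, 0.18), (0.08, 0.35), (0.05, 0.55), (0.10, 0.25)]]
print(profile, looks_like_crop(profile))
```

Natural vegetation with a flatter annual profile fails the amplitude test, which is the intuition behind separating cropland from other vegetated cover using phenology rather than a single-date image.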

    Analyzing First-Person Stories Based on Socializing, Eating and Sedentary Patterns

    Full text link
    First-person stories can be analyzed by means of egocentric pictures acquired throughout the whole active day with wearable cameras. This manuscript presents an egocentric dataset with more than 45,000 pictures from four people in different environments such as working or studying. All the images were manually labeled to identify three patterns of interest regarding people's lifestyle: socializing, eating and sedentary. Additionally, two different approaches are proposed to classify egocentric images into one of the 12 target categories defined to characterize these three patterns. The approaches are based on machine learning and deep learning techniques, including traditional classifiers and state-of-the-art convolutional neural networks. The experimental results obtained when applying these methods to the egocentric dataset demonstrated their adequacy for the problem at hand.
    Comment: Accepted at First International Workshop on Social Signal Processing and Beyond, 19th International Conference on Image Analysis and Processing (ICIAP), September 201
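    As a minimal stand-in for the "traditional classifiers" mentioned above, the sketch below trains a nearest-centroid classifier over image feature vectors. The two-dimensional features and the lifestyle labels are toy assumptions, not the paper's actual 12-category feature pipeline.

```python
# Nearest-centroid classification over image feature vectors: average
# the training features per label, then assign a new image to the label
# with the closest centroid. Features and labels here are toy stand-ins.
from math import dist

def fit_centroids(samples):
    by_label = {}
    for features, label in samples:
        by_label.setdefault(label, []).append(features)
    return {label: tuple(sum(col) / len(vecs) for col in zip(*vecs))
            for label, vecs in by_label.items()}

def predict(centroids, features):
    return min(centroids, key=lambda label: dist(centroids[label], features))

train = [((0.9, 0.1), "socializing"), ((0.8, 0.2), "socializing"),
         ((0.1, 0.9), "sedentary"), ((0.2, 0.8), "sedentary")]
centroids = fit_centroids(train)
print(predict(centroids, (0.85, 0.15)))  # → socializing
```

In practice the features would come from a pretrained convolutional network rather than be hand-set, but the decision rule stays the same.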