
    Geoinformatic methodologies and quantitative tools for detecting hotspots and for multicriteria ranking and prioritization: application on biodiversity monitoring and conservation

Those responsible for managing a protected area must not only be aware of its environmental problems but should also have at their disposal up-to-date data and appropriate methodological tools to examine each problem carefully. In effect, the environmental decision-maker has to arrange in advance the steps needed to cope with foreseeable changes in human pressure on protected areas. The main objective of this Thesis is methodological: to compare different multivariate statistical methods for detecting environmental hotspots and for ranking and prioritizing the "environmental objects" under study, and hence for identifying priorities for environmental action. The general environmental goal is the conservation of biodiversity. The identification, through multivariate statistical tools, of the habitats with top ecological priority is only the first basic step towards this goal; integrating the ecological information into the human context is an essential further step for making environmental evaluations and for correctly planning conservation actions. A wide range of data and information was needed to accomplish these environmental management tasks. The ecological data are provided by the Italian Ministry of the Environment and come from the "Carta della Natura" (Map of Italian Nature) Project database; the demographic data come from the Italian Institute of Statistics (ISTAT). The data refer to two Italian areas: the Baganza Valley (Parma) and the Oltrepò Pavese and Ligurian-Emilian Apennine. The analysis was carried out at two spatial levels, ecological-naturalistic (the habitat) and administrative (the Commune), and the main results are, correspondingly:
1. Habitat level: comparing two ranking and prioritization methods, the Ideal Vector and the Salience method, through important ecological metrics such as Ecological Value (E.V.) and Ecological Sensitivity (E.S.), gives results that are not directly comparable. The Ideal Vector method, not being based on ranking the original values, seems preferable for landscapes with high spatial heterogeneity; the Salience method is probably to be preferred in ecological landscapes with a low degree of heterogeneity, in the sense of not-too-large differences in habitat E.V. and E.S. (see the illustrative sketch after this abstract).
2. Commune level: since habitats are only naturalistic partitions of a given territory, management decisions require moving to the corresponding administrative units (the Communes). From this point of view, the introduction of demography is a central and novel element in ecological-environmental analysis. Demographic analysis makes the result of point 1 much more realistic by introducing further dimensions (current human pressure and its trend) that allow environmentally fragile areas to be identified. Furthermore, this approach clearly identifies the environmental responsibility of each administrative body for biodiversity conservation: ranking the Communes according to their environmental and demographic characteristics clarifies the management responsibilities of each of them.
A concrete application of this necessary and useful integration of ecological and demographic data is discussed by designing an Ecological Network (E.N.). The novelty of the resulting network is that it is not "static" but "dynamic": its planning takes the demographic pressure trend into account in order to identify the probable future fragile points and therefore the most critical areas to manage.
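The abstract gives no formulas or code; purely as an illustration of the two families of approaches it contrasts, the minimal Python sketch below prioritizes a handful of habitats from Ecological Value (E.V.) and Ecological Sensitivity (E.S.) scores in two ways: a distance-to-ideal rule that works on the normalised original values (in the spirit of the Ideal Vector method) and a rank-sum rule that uses only the order of the values (in the spirit of the Salience method). The habitat names, the scores and both scoring rules are hypothetical assumptions for the example, not the procedures used in the Thesis.

```python
# Illustrative sketch only: two ways of prioritizing habitats from
# Ecological Value (EV) and Ecological Sensitivity (ES) scores.
# Data and both scoring rules are hypothetical, not the thesis procedures.
import numpy as np

habitats = ["beech forest", "dry grassland", "riparian wood", "scrubland"]
EV = np.array([0.82, 0.55, 0.91, 0.40])   # hypothetical Ecological Value
ES = np.array([0.35, 0.75, 0.88, 0.20])   # hypothetical Ecological Sensitivity
X = np.column_stack([EV, ES])

# --- Distance-to-ideal rule: uses the (normalised) original values --------
X_norm = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
ideal = X_norm.max(axis=0)                    # best observed EV and ES
dist_to_ideal = np.linalg.norm(X_norm - ideal, axis=1)
priority_ideal = np.argsort(dist_to_ideal)    # closest to ideal = top priority

# --- Rank-based rule: uses only the order of the values -------------------
ranks = X.argsort(axis=0).argsort(axis=0) + 1  # 1 = lowest value per metric
rank_score = ranks.sum(axis=1)
priority_rank = np.argsort(-rank_score)        # highest summed rank = top priority

print("Distance-to-ideal order:", [habitats[i] for i in priority_ideal])
print("Rank-sum order:         ", [habitats[i] for i in priority_rank])
```

Because the first rule uses the magnitudes of E.V. and E.S. while the second uses only their order, the two orderings can legitimately disagree, which is consistent with the abstract's observation that the two methods give results that are not directly comparable.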

    Learning Feature Selection and Combination Strategies for Generic Salient Object Detection

For a diverse range of applications in machine vision, from social media searches to robotic home care providers, it is important to replicate the mechanism by which the human brain selects the most important visual information while suppressing the remaining, non-usable information. Many computational methods attempt to model this process by following the traditional model of visual attention, which involves feature extraction, conditioning and combination to capture this behaviour of human visual attention. Consequently, the model has inherent design choices at its various stages: selecting parameters related to the feature computation process, setting a conditioning approach, assigning feature importance and setting a combination approach. Despite rapid research and substantial improvements in benchmark performance, the performance of many models depends on tuning these design choices in an ad hoc fashion. Moreover, these design choices are heuristic in nature and therefore yield good performance only in certain settings; consequently, many such models exhibit low robustness to difficult stimuli and the complexities of real-world imagery. Machine learning and optimisation techniques have long been used to increase the generalisability of a system to unseen data, yet artificial learning techniques have surprisingly not been investigated to their full potential for improving the generalisation of visual attention methods. The proposed thesis is that artificial learning can increase the generalisability of the traditional model of visual attention by effective selection and optimal combination of features. The following new techniques are introduced at various stages of the traditional model of visual attention to improve its generalisation performance, specifically on challenging cases of saliency detection:
1. Joint optimisation of feature-related parameters and feature importance weights is introduced for the first time to improve the generalisation of the traditional model of visual attention. To evaluate the joint-learning hypothesis, a new method, GAOVSM, is introduced for the task of eye fixation prediction. By finding the relationships between feature-related parameters and feature importance, the developed method improves the generalisation performance of baseline methods that employ human-encoded parameters.
2. Spectral-matting-based figure-ground segregation is introduced to overcome the artifacts encountered by region-based salient object detection approaches. By suppressing unwanted background information and assigning saliency to object parts in a uniform manner, the developed FGS approach overcomes the limitations of region-based approaches.
3. Joint optimisation of feature computation parameters and feature importance weights is introduced, for the first time in salient object detection, for the optimal combination of FGS with complementary features. By learning feature-related parameters and their respective importance at multiple segmentation thresholds, and by considering the performance gaps amongst features, the developed FGSopt method improves the object detection performance of the FGS technique and also improves upon several state-of-the-art salient object detection models.
4. The introduction of multiple combination schemes/rules further extends the generalisability of the traditional attention model beyond that of joint-optimisation-based single rules. The introduction of feature-composition-based grouping of images enables the developed IGA method to autonomously identify an appropriate combination strategy for an unseen image. The results of a pairwise rank-sum test confirm that the IGA method is significantly better than the deterministic and classification-based benchmark methods at the 99% confidence level. Extending this line of research, a novel relative encoding approach enables the adapted XCSCA method to group images having similar saliency prediction ability. By keeping track of previous inputs, the introduced action part of the XCSCA approach enables learning of generalised feature importance rules. Through more accurate grouping of images than IGA, generalised learnt rules and appropriate application of feature importance rules, the XCSCA approach improves upon the generalisation performance of the IGA method.
5. The introduced uniform saliency assignment and segmentation quality cues enable label-free evaluation of a feature/saliency map. By accurate ranking and effective clustering, the developed DFS method successfully solves, for the first time in saliency detection, the complex problem of finding appropriate features for combination on an image-by-image basis. The DFS method enables ground-truth-free evaluation of saliency methods and advances the state of the art in data-driven saliency aggregation by detecting and deselecting redundant information.
The final contribution is that the developed methods are formed into a complete system, and analysis shows the effects of their interactions on the system. Based on the trade-off between saliency prediction accuracy and computational time, specialised variants of the proposed methods are presented along with recommendations for further use by other saliency detection systems. This research has shown that artificial learning can increase the generalisation of the traditional model of attention by effective selection and optimal combination of features. Overall, this thesis has shown that it is the ability to autonomously segregate images based on their types, and the subsequent learning of appropriate combinations, that aid generalisation on difficult unseen stimuli. (An illustrative sketch of the core joint-optimisation idea follows this abstract.)
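None of the methods named in this abstract (GAOVSM, FGS, FGSopt, IGA, XCSCA, DFS) are specified here in enough detail to reproduce. Purely as an illustration of the central idea, jointly tuning a feature-related parameter and the feature importance weights used to combine feature maps, the Python sketch below runs a naive random search over a blur scale and two mixing weights, scoring candidate saliency maps against a toy ground-truth map. The toy image, the two feature maps, the correlation fitness and the random search are all assumptions made for the example; they stand in for the evolutionary/learning machinery and benchmarks of the thesis.

```python
# Illustrative sketch only: jointly search a feature parameter (blur sigma)
# and the importance weights used to combine two feature maps into a
# saliency map.  Toy data and a naive random search replace the thesis's
# actual learning algorithms.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# Toy "image": the ground truth marks a bright salient square on noise.
H, W = 64, 64
gt = np.zeros((H, W))
gt[20:40, 25:45] = 1.0
image = gt + 0.3 * rng.standard_normal((H, W))

def feature_maps(img, sigma):
    """Two simple feature maps: normalised intensity and centre-surround contrast."""
    intensity = (img - img.min()) / (np.ptp(img) + 1e-9)
    contrast = np.abs(img - gaussian_filter(img, sigma))
    contrast = (contrast - contrast.min()) / (np.ptp(contrast) + 1e-9)
    return intensity, contrast

def fitness(sigma, w1, w2):
    """Correlation between the combined map and ground truth (toy criterion)."""
    f1, f2 = feature_maps(image, sigma)
    sal = w1 * f1 + w2 * f2
    return np.corrcoef(sal.ravel(), gt.ravel())[0, 1]

# Naive joint random search over the feature parameter and the weights.
best_score, best_params = -np.inf, None
for _ in range(200):
    sigma = rng.uniform(0.5, 8.0)
    w = rng.dirichlet([1.0, 1.0])          # importance weights summing to 1
    score = fitness(sigma, *w)
    if score > best_score:
        best_score, best_params = score, (sigma, *w)

print("best correlation %.3f with sigma=%.2f, weights=(%.2f, %.2f)"
      % (best_score, *best_params))
```

The point of the sketch is only that the best blur scale depends on how much weight the contrast map receives, which is why searching the feature parameters and the combination weights together, rather than fixing either by hand, can improve the combined map.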