
    Benthic algal communities of shallow reefs in the Eastern Cape: availability of abalone habitat

    Marine ranching has been identified as an alternative to traditional aquaculture, in which organisms are reared and grown for consumption. In the Eastern Cape, abalone ranching is a new and experimental industry. The aims of the research were, first, to develop a GIS model to assist management in site selection for abalone seeding and, secondly, to develop and standardize a sampling methodology in order to ground-truth the sites and assist in the monitoring and habitat identification of abalone. The GIS model developed in Chapter 3 was created using an unsupervised classification and fuzzy logic approach. Both vector and raster datasets were utilized to represent seven different layers. Satellite imagery was predominantly used to classify the different substrate groups according to pixel colour signatures. The basic process was to apply a fuzzy rule set (membership function) to each raster, giving a fuzzified output raster; the membership output rasters were then overlaid to create a single model output. Model accuracy increased significantly as more layers were overlaid, because of the high variability within each of the individual layers. Model ground-truthing showed a strong and significant correlation (r² = 0.91; p < 0.001) between the model outputs and actual site suitability based on in situ evaluation. Chapter 4 describes the investigation of optimal sampling methods for abalone ranching habitat assessments. Both destructive sampling and imagery-based methods were considered for data collection. The study also evaluated whether quadrats and transects were suitable methods for assessing sites, and what size or length, respectively, they should be to collect the appropriate data. Transect length showed great variation according to the factor assessed; a transect of 15 metres was found to be optimal. Abalone counts showed no significant (p = 0.1) change in the coefficient of variation (CV) for transect lengths greater than 15 m, with a mean of 0.2 abalone per metre. Functional group richness differed significantly between quadrat sizes of 0.0625 m² and 0.25 m², but not between 0.25 m² and 1 m², for both scrape and photographic quadrats. It was also found that between 5 and 10 replicates (p = 0.08) represent the functional groups appropriately when using quadrats, and that a 0.25 m² quadrat is most suitable for sampling. Chapter 5 describes the benthic community structure of Cape Recife shallow water reefs. Using the standardized methodology previously mentioned, 45 sites were assessed to identify the community structure. These sites fell into five groups, influenced by depth and substrate as well as functional group composition, according to a Ward's classification. The community structure showed that depth and substrate play a significant role (p < 0.05) in the community type. There is also a significant relationship (p < 0.05) between complexity, rugosity, abalone presence and substrate. During this study the basic protocols for site selection and benthic community monitoring were developed to support the abalone ranching initiative in the Cape Recife area. The study has also provided a baseline of the benthic community in the ranching concession area, which will be used as a benchmark for future monitoring efforts. The site selection, sampling, and monitoring methods developed during the course of this work have now been rolled out as Standard Operating Procedures for the ranching programme in this area.
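The fuzzification-and-overlay step described in this abstract can be illustrated with a brief sketch. This is a minimal, hypothetical example, not the thesis's actual rule set: the layer names, membership breakpoints and the minimum-operator overlay are all assumptions. Each input raster is converted to a 0-1 suitability membership, and the membership rasters are then overlaid into a single model output.

```python
import numpy as np

def linear_membership(raster, low, high):
    """Fuzzify a raster to a 0-1 suitability score.

    Values <= low score 0, values >= high score 1, with a linear
    ramp in between (a common 'linear' fuzzy membership function).
    """
    return np.clip((raster - low) / (high - low), 0.0, 1.0)

# Hypothetical input layers for a small 3 x 3 tile (values invented).
depth_m = np.array([[4.0, 6.0, 9.0],
                    [5.0, 7.0, 12.0],
                    [3.0, 8.0, 15.0]])
reef_frac = np.array([[0.90, 0.70, 0.20],
                      [0.80, 0.60, 0.10],
                      [0.95, 0.50, 0.00]])  # fraction of reef substrate per pixel

# Fuzzification: one membership raster per input layer.
m_depth = 1.0 - linear_membership(depth_m, 5.0, 12.0)  # shallower = more suitable
m_reef = linear_membership(reef_frac, 0.2, 0.8)        # more reef = more suitable

# Overlay the membership rasters into a single suitability output.
# The fuzzy AND (cell-wise minimum) is used here; a mean or weighted
# sum would be an alternative overlay operator.
suitability = np.minimum(m_depth, m_reef)
print(np.round(suitability, 2))
```

With seven layers, as in the thesis, the same pattern applies: one membership raster per layer, overlaid into a single output.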

    Practicable methodologies for delivering comprehensive spatial soils information

    This thesis is concerned with practicable methodologies for delivering comprehensive spatial soil information to end-users. There is a need for relevant spatial soil information to complement objective decision-making for addressing current problems associated with soil degradation; for modelling, monitoring and measurement of particular soil services; and for the general management of soil resources. These are real-world situations, which operate at spatial scales ranging from field to global. As such, comprehensive spatial soil information is tailored to meet the spatial scale specifications of the end-user, fully characterises the whole-soil profile with associated prediction uncertainties, and, where possible, has both the predictions and uncertainties independently validated. ‘Practicable’ is an idealistic pursuit but nonetheless a necessary one, because land-holders, private-sector and non-governmental stakeholders, and governmental departments, including soil mapping agencies, need to be equipped with the tools to ensure wide application of the methodologies and so match the demand for relevant spatial soil information. Practicable methodologies are general and computationally efficient; can be applied to a wide range of soil attributes; can handle variable qualities of data; and are effective when working with very large datasets. In this thesis, delivering comprehensive spatial soil information relies on coupling legacy soil information (principally site observations made in the field) with Digital Soil Mapping (DSM), which comprises quantitative, state-of-the-art technologies for soil mapping. After the General Introduction, a review of the literature is given in Chapter 1, which describes the research context of the thesis. The review describes soil mapping first from a historical perspective, covering rudimentary efforts at mapping soils, and then tracks the succession of advances made towards the realisation of populated, digital spatial soil information databases in which measures of prediction certainty are also expressed. From the findings of the review, in order to deliver comprehensive spatial soil information to end-users, new research was required to investigate: 1) a general method for digital soil mapping of the whole-profile (effectively pseudo-3D) distribution of soil properties; 2) a general method for quantifying the total prediction uncertainties of the digital soil maps that describe the whole-profile distribution of soil properties; 3) a method for validating the whole-profile predictions of soil properties and the quantifications of their uncertainties; and 4) a systematic framework for scale manipulations, or upscaling and downscaling techniques, for digital soil mapping as a means of generating soil information products tailored to the needs of soil information users. Chapters 2 to 6 investigate how these might be achieved with a succession of practicable methodologies. Chapter 2 addressed the need for whole-profile mapping of soil property distribution. Equal-area spline depth functions coupled with DSM facilitated continuous mapping of the lateral and vertical distribution of soil properties. The spline function is a useful tool for deriving the continuous variation of soil properties from soil profile and core observations and is suitable for a number of different soil properties.
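As a rough illustration of the depth-harmonisation idea behind spline depth functions, the sketch below converts horizon observations (top, bottom, value) from a single profile into values for standard depth intervals by depth-weighted averaging. This is a deliberately simplified, mass-preserving stand-in rather than the equal-area quadratic spline used in the thesis, and the profile data and standard depths are invented for illustration.

```python
import numpy as np

def harmonise_profile(horizons, std_depths):
    """Depth-weighted average of horizon values over standard intervals.

    horizons   : list of (top_cm, bottom_cm, value) for one soil profile
    std_depths : list of (top_cm, bottom_cm) standard depth intervals
    Returns one value per standard interval (NaN where no horizon
    overlaps), weighting each horizon by its overlap with the interval.
    """
    out = []
    for d_top, d_bot in std_depths:
        weights, values = [], []
        for h_top, h_bot, val in horizons:
            overlap = min(d_bot, h_bot) - max(d_top, h_top)
            if overlap > 0:
                weights.append(overlap)
                values.append(val)
        out.append(np.average(values, weights=weights) if weights else np.nan)
    return np.array(out)

# Hypothetical profile: organic carbon (%) measured by horizon (cm).
profile = [(0, 10, 2.8), (10, 35, 1.6), (35, 80, 0.7)]

# GlobalSoilMap-style standard depth intervals (cm).
standard = [(0, 5), (5, 15), (15, 30), (30, 60), (60, 100)]

print(harmonise_profile(profile, standard))  # [2.8, 2.2, 1.6, 0.85, 0.7]
```

A true equal-area spline would additionally smooth the within-horizon variation while preserving each horizon's mean, which is what makes it preferable for whole-profile mapping.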
Generally, mapping the continuous depth function of soil properties revealed that the accuracy of the models is highest at the soil surface and progressively decreases with increasing soil depth. Chapter 3 complements the investigations made in Chapter 2: an empirical method of quantifying prediction uncertainties from DSM was devised and applied to quantify the uncertainties of whole-profile digital soil maps. Prediction uncertainty with the devised empirical method is expressed as a prediction interval of the underlying model errors. The method is practicable in the sense that it accounts for all sources of uncertainty and is computationally efficient. Furthermore, the method is amenable to situations where complex spatial soil prediction functions, such as regression kriging approaches, are used. Proper evaluation of digital soil maps requires testing both the predictions and the quantification of the prediction uncertainties. Chapter 4 devised two new criteria with which to evaluate digital soil maps when additional soil samples collected by probability sampling are used for validation. The first criterion addresses the accuracy of the predictions in the presence of uncertainties and is the spatial average of the statistical expectation of the Mean Square Error of a simulated random value (MSES). The second criterion addresses the quality of the uncertainties and is estimated as the total proportion of the study area where the (1-α) prediction interval (PI) covers the true value (APCP). Ideally these criteria are coupled with conventional measures of map quality so that objective decisions can be made about the reliability, and subsequent suitability, of a map for a given purpose. It was revealed in Chapter 4 that the quantifications of uncertainty are susceptible to bias as a result of using legacy soil data to construct spatial soil prediction functions; as a consequence, in addition to increasing uncertainty with soil depth, there is increasing misspecification of the prediction uncertainties. Chapters 2, 3 and 4 thus represent a framework for delivering whole-soil-profile predictions of soil properties and their uncertainties, where both have been assessed or validated across mapping domains at a range of spatial scales for addressing field, farm, regional, catchment, national, continental or global soil-related problems. Chapters 5 and 6, however, address issues specifically related to tailoring spatial soil information to the scale specifications of the end-user through scale manipulations on existing digital soil maps. Chapter 5 proposes a scaling framework that takes into account the scaling triplet of digital soil maps (extent, resolution and support) and recommends pedometric methodologies for scale manipulation based on the scale entities of the source and destination maps. Upscaling and downscaling are descriptors for moving to coarser or finer scales respectively, but may be too general for DSM. More specifically, fine-gridding and coarse-gridding are operations where the grid spacing changes but the support remains unchanged; deconvolution and convolution are operations where the support always changes, which may or may not involve changing the grid spacing; and disseveration and conflation are operations where the support and grid size are equal and both are changed equally and simultaneously.
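The second of the Chapter 4 criteria, the proportion of the mapped area whose (1-α) prediction interval covers the true value, can be estimated from validation data roughly as sketched below. The arrays are invented, and an equal-probability validation sample is assumed so that a simple mean approximates the spatial proportion; a design-weighted mean would be needed for unequal inclusion probabilities.

```python
import numpy as np

def coverage_proportion(lower, upper, observed):
    """Proportion of validation observations that fall inside their
    prediction intervals (an estimate of the coverage criterion)."""
    lower, upper, observed = map(np.asarray, (lower, upper, observed))
    return np.mean((observed >= lower) & (observed <= upper))

# Hypothetical 90% prediction intervals and observed values at
# probability-sampled validation sites.
lower = np.array([1.1, 0.4, 2.0, 1.8, 0.9])
upper = np.array([2.3, 1.5, 3.4, 2.9, 2.1])
observed = np.array([1.9, 1.7, 2.8, 2.0, 1.0])

coverage = coverage_proportion(lower, upper, observed)
print(f"coverage = {coverage:.2f} (ideally close to 0.90 for a 90% PI)")
```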
There is an increasing richness of data sources describing the physical distribution of the Earth’s resources, with improved qualities and resolutions. To take advantage of this, Chapter 6 devises a novel procedure for downscaling, involving disseveration. The method attempts to maintain the mass balance between the fine-scaled predictions and the available coarse-scaled information through an iterative algorithm that reconstructs the variation of a property at a prescribed fine scale via an empirical function of environmental or covariate information. One advantage of the devised method is that soil property uncertainties at the coarse scale can be incorporated into the downscaling algorithm. Finally, Chapter 7 presents a synthesis of the investigations made in Chapters 2 to 6 and summarises the pertinent findings. Directly from the investigations carried out during this project there are opportunities for further work, both in terms of addressing shortcomings that were highlighted but not investigated in the thesis, and more generally for advancing digital soil mapping to an operational status and beyond.
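The iterative, mass-preserving character of the downscaling procedure can be pictured with the sketch below. This is a loose paraphrase under simplifying assumptions rather than the thesis's algorithm: a single invented covariate, ordinary least squares as the empirical function, and a multiplicative adjustment that forces the mean of the fine-scale predictions within each coarse cell back to the known coarse value at every iteration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 4 coarse cells, each containing 25 fine pixels.
n_coarse, pixels_per_cell = 4, 25
cell_id = np.repeat(np.arange(n_coarse), pixels_per_cell)  # parent cell of each fine pixel
covariate = rng.normal(size=n_coarse * pixels_per_cell)    # fine-scale environmental covariate
coarse_value = np.array([3.0, 5.0, 2.0, 4.0])              # known coarse-scale property (assumed positive)

# Initialise the fine-scale estimates with the value of the parent coarse cell.
fine = coarse_value[cell_id].astype(float)

for _ in range(20):
    # 1. Fit an empirical function (here ordinary least squares) of the
    #    current fine-scale estimates on the covariate, then predict.
    A = np.column_stack([np.ones_like(covariate), covariate])
    beta, *_ = np.linalg.lstsq(A, fine, rcond=None)
    pred = A @ beta

    # 2. Mass-balance correction: rescale the predictions within each
    #    coarse cell so that their mean reproduces the coarse value.
    for c in range(n_coarse):
        mask = cell_id == c
        pred[mask] *= coarse_value[c] / pred[mask].mean()
    fine = pred

# Each coarse cell's mean is preserved while the fine-scale variation
# follows the covariate.
print(np.round([fine[cell_id == c].mean() for c in range(n_coarse)], 3))
```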

    Urban and regional planning models and GIS


    Mapping three-dimensional geological features from remotely-sensed images and digital elevation models.

    Accurate mapping of geological structures is important in numerous applications, ranging from mineral exploration through to hydrogeological modelling. Remotely sensed data can provide synoptic views of study areas, enabling mapping of geological units within an area. Structural information may be derived from such data using standard manual photo-geologic interpretation techniques, although these are often inaccurate and incomplete. The aim of this thesis is, therefore, to compile a suite of automated and interactive computer-based analysis routines designed to help the user map geological structure. These are examined and integrated in the context of an expert system. The data used in this study include a Digital Elevation Model (DEM) and Airborne Thematic Mapper images, both with a spatial resolution of 5 m, for a 5 x 5 km area surrounding Llyn Cowlyd, Snowdonia, North Wales. The geology of this area comprises folded and faulted Ordovician sediments intruded throughout by dolerite sills, providing a stringent test for the automated and semi-automated procedures. The DEM is used to highlight geomorphological features which may represent surface expressions of the sub-surface geology. The DEM is created from digitized contours, for which kriging is found to provide the best interpolation routine, based on a number of quantitative measures. Lambertian shading and the creation of slope and change-of-slope datasets are shown to provide the most successful enhancement of DEMs in terms of highlighting a range of key geomorphological features. The digital image data are used to identify rock outcrops as well as lithologically controlled features in the land cover. To this end, a series of standard spectral enhancements of the images is examined; the least-correlated three-band composite and a principal component composite are shown to give the best visual discrimination of geological and vegetation cover types. Automatic edge detection (followed by line thinning and extraction) and manual interpretation techniques are used to identify a set of 'geological primitives' (linear or arc features representing lithological boundaries) within these data. Inclusion of the DEM data provides the three-dimensional co-ordinates of these primitives, enabling a least-squares fit to be employed to calculate dip and strike values, based, initially, on the assumption of a simple, linearly dipping structural model (a sketch of such a plane fit follows this abstract). A very large number of scene 'primitives' is identified using these procedures, only some of which have geological significance. Knowledge-based rules are therefore used to identify those that are relevant. For example, rules are developed to identify lake edges, forest boundaries, forest tracks, rock-vegetation boundaries, and areas of geomorphological interest. Confidence in the geological significance of some of the geological primitives is increased where they are found independently in both the DEM and the remotely sensed data. The dip and strike values derived in this way are compared to information taken from the published geological map for this area, as well as measurements taken in the field. Many results are shown to correspond closely to those taken from the map and in the field, with an error of < 1°. These data and rules are incorporated into an expert system which, initially, produces a simple model of the geological structure. The system also provides a graphical user interface for manual control and interpretation, where necessary.
Although the system currently only allows a relatively simple structural model (linearly dipping with faulting), in the future it will be possible to extend the system to model more complex features, such as anticlines, synclines, thrusts, nappes, and igneous intrusions.
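The dip-and-strike calculation mentioned in the abstract can be pictured as fitting a plane to the three-dimensional coordinates extracted along a geological primitive. The sketch below is a generic least-squares plane fit with the dip angle and dip direction recovered from the fitted gradients; the sample points are invented, and the thesis's actual implementation may differ in detail.

```python
import numpy as np

def dip_and_strike(x, y, z):
    """Fit the plane z = a*x + b*y + c by least squares and return
    (dip angle, dip direction, strike) in degrees.

    x and y are easting/northing in metres, z is elevation; the dip
    direction is the azimuth of steepest descent, measured clockwise
    from north (the +y axis), and strike follows the right-hand rule.
    """
    A = np.column_stack([x, y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)

    dip = np.degrees(np.arctan(np.hypot(a, b)))              # steepest slope of the fitted plane
    dip_direction = np.degrees(np.arctan2(-a, -b)) % 360.0   # azimuth of steepest descent
    strike = (dip_direction - 90.0) % 360.0
    return dip, dip_direction, strike

# Hypothetical 3D points sampled along a lithological boundary:
# a plane dipping roughly 30 degrees towards the east, plus noise.
rng = np.random.default_rng(1)
x = rng.uniform(0, 100, 50)
y = rng.uniform(0, 100, 50)
z = 250.0 - np.tan(np.radians(30.0)) * x + rng.normal(0, 0.5, 50)

dip, dip_dir, strike = dip_and_strike(x, y, z)
print(f"dip = {dip:.1f} deg, dip direction = {dip_dir:.1f} deg, strike = {strike:.1f} deg")
```

A fit like this assumes the points lie on a single, linearly dipping surface, which matches the simple structural model adopted initially in the thesis.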