
    Trophic State Monitoring Of Lakes And Reservoirs Using Remote Sensing

    Lakes and reservoirs are important resources that provide water for critical needs such as drinking water, agriculture, recreation, fisheries, and wildlife. However, there is increasing concern that anthropogenic eutrophication threatens the usability of these natural resources. This research therefore investigates these complex hydrologic ecosystems and recommends a methodology for monitoring the trophic state of lakes and reservoirs using remote sensing data. The Mississippi Department of Environmental Quality provided in situ data for seven Mississippi lakes: Arkabutla, Bay Springs, Enid, Grenada, Okatibbee, Ross Barnett, and Sardis. This research explored the relationships between the Secchi depth (SD), chlorophyll-a (CHL), and total phosphorus (TP) in situ data and Moderate Resolution Imaging Spectroradiometer (MODIS) spectral reflectance data. This was accomplished by deriving Carlson Trophic State Index (TSI) values for each in situ measurement and using these TSI(SD), TSI(CHL), and TSI(TP) values to evaluate potential predictive methods. Simple linear regression was performed to quantify the strength of the relationships between the in situ data and MODIS surface reflectance values; however, R-squared values were too low and inconsistent to justify additional analyses. Therefore, machine learning models from the Waikato Environment for Knowledge Analysis (WEKA) software workbench were explored and tested. Optimal predictive models and settings were investigated for two meta-learner classifiers, three Bayesian classifiers, and three decision tree classifiers. The Classification Via Regression meta-learner yielded the best results when using large datasets, the all-but-one iteration setting, MODIS A1 individual bands as predictors, and TSI(SD) as the target. For this model and these settings, the percentages of correctly classified instances ranged from 77.74% to 81.98% and kappa values ranged from 0.41 to 0.48. The percentages of correctly classified results by class for TSI(SD) were 39.80% for hyperturbidity and 85.11% for turbidity. Overall, this research concludes that MODIS satellite imagery can be used to effectively monitor Mississippi lakes and reservoirs. Additionally, machine learning models were determined to be a viable option for predicting water transparency measurements. It is anticipated that water resource managers can adopt these research findings to complement conventional in situ lake monitoring methods.
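
    The Carlson Trophic State Index values used above follow standard published equations (Carlson, 1977), so the derivation step can be illustrated directly. The Python sketch below is a minimal illustration assuming Secchi depth in meters and chlorophyll-a and total phosphorus in micrograms per liter; the function names are illustrative, and this is not the dissertation's own code.

        import math

        def tsi_sd(secchi_m):
            # Carlson (1977): TSI from Secchi depth in meters
            return 60.0 - 14.41 * math.log(secchi_m)

        def tsi_chl(chl_ug_l):
            # Carlson (1977): TSI from chlorophyll-a in micrograms per liter
            return 9.81 * math.log(chl_ug_l) + 30.6

        def tsi_tp(tp_ug_l):
            # Carlson (1977): TSI from total phosphorus in micrograms per liter
            return 14.42 * math.log(tp_ug_l) + 4.15

        # Example: a 1.2 m Secchi reading maps to a TSI(SD) of about 57 (eutrophic)
        print(round(tsi_sd(1.2), 1))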

    The assessment and development of methods in (spatial) sound ecology

    As vital ecosystems across the globe come under unprecedented pressure from climate change and industrial land use, understanding the processes driving ecosystem viability has never been more critical. Nuanced ecosystem understanding comes from well-collected field data and a wealth of associated interpretations. In recent years, the most popular methods of ecosystem monitoring have shifted from often damaging and labour-intensive manual data collection to automated methods of data collection and analysis. Sound ecology describes the school of research that uses information transmitted through sound to infer properties about an area's species, biodiversity, and health. In this thesis, we explore and develop state-of-the-art automated monitoring with sound, specifically relating to data storage practice and spatial acoustic recording and analysis. In the first chapter, we explore the necessity and methods of ecosystem monitoring, focusing on acoustic monitoring, and later examine how and why sound is recorded and the current state of the art in acoustic monitoring. Chapter one concludes by setting out the aims and overall content of the following chapters. We begin the second chapter by exploring methods used to mitigate data storage expense, a widespread issue as automated methods quickly amass vast amounts of data which can be expensive and impractical to manage. Importantly, I explain how these data management practices are often applied without their consequences being known, something I then address. Specifically, I present evidence that the most used data reduction methods (namely compression and temporal subsetting) have a surprisingly small impact on the information content of recorded sound compared to the method of analysis. This work also adds to the increasing evidence that deep learning-based methods of environmental sound quantification are more powerful and robust to experimental variation than more traditional acoustic indices. In the latter chapters, I focus on using multichannel acoustic recording for sound-source localisation. Knowing where a sound originated has a range of ecological uses, including counting individuals, locating threats, and monitoring habitat use. While an exciting application of acoustic technology, spatial acoustics has had minimal uptake owing to the expense, impracticality, and inaccessibility of equipment. In my third chapter, I introduce MAARU (Multichannel Acoustic Autonomous Recording Unit), a low-cost, easy-to-use, and accessible solution to this problem. I explain the software and hardware necessary for spatial recording and show how MAARU can be used to localise the direction of a sound to within ±10°. In the fourth chapter, I explore how MAARU devices deployed in the field can be used for enhanced ecosystem monitoring, spatially clustering individuals by calling direction for more accurate abundance approximations and crude species-specific habitat-usage monitoring. Most literature on spatial acoustics cites the need for many accurately synced recording devices over an area; this chapter provides the first evidence of advances made with just one recorder. Finally, I conclude the thesis by restating my aims and discussing my success in achieving them. Specifically, in the thesis' conclusion, I reiterate the contributions made to the field as a direct result of this work and outline some possible avenues for development.
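
    The abstract does not spell out MAARU's localisation algorithm, but directional accuracy of the kind reported (±10°) is obtainable from time-difference-of-arrival (TDOA) processing on a microphone pair. The Python sketch below is a generic illustration under a far-field assumption, not MAARU's actual implementation; the function name and toy signal are hypothetical.

        import numpy as np

        def tdoa_direction(sig_a, sig_b, fs, mic_spacing_m, c=343.0):
            # Estimate angle of arrival (degrees) for one microphone pair
            # from the peak of the full cross-correlation (TDOA method).
            corr = np.correlate(sig_a, sig_b, mode="full")
            lag = np.argmax(corr) - (len(sig_b) - 1)   # delay in samples
            delay_s = lag / fs
            # Far-field geometry: sin(theta) = c * delay / mic spacing
            sin_theta = np.clip(c * delay_s / mic_spacing_m, -1.0, 1.0)
            return np.degrees(np.arcsin(sin_theta))

        # Toy check: a 1 kHz tone delayed by 10 samples across a 10 cm pair
        fs = 48_000
        t = np.arange(0, 0.05, 1 / fs)
        src = np.sin(2 * np.pi * 1000 * t)
        delayed = np.pad(src, (10, 0))[:len(src)]      # ~0.21 ms delay
        print(tdoa_direction(delayed, src, fs, 0.10))  # about 45.6 degrees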

    A synoptic description of coal basins via image processing

    An existing image processing system is adapted to describe the geologic attributes of a regional coal basin. This scheme handles a map as if it were a matrix, in contrast to more conventional approaches that represent map information in terms of linked polygons. The utility of the image processing approach is demonstrated by a multiattribute analysis of the Herrin No. 6 coal seam in Illinois. Findings include the location of a resource and an estimate of the tonnage corresponding to constraints on seam thickness, overburden, and Btu value, which illustrate the need for new mining technology.
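
    To make the map-as-matrix idea concrete: a constraint query of the kind described reduces to element-wise boolean operations on co-registered attribute grids. The Python/NumPy sketch below uses invented toy values; the cell size and in-place coal density are assumptions for illustration, not figures from the Herrin No. 6 analysis.

        import numpy as np

        # Hypothetical per-cell attribute grids (same shape, one value per map cell)
        thickness_m  = np.array([[1.2, 2.4], [3.1, 0.8]])       # seam thickness
        overburden_m = np.array([[40, 150], [90, 210]])          # depth of cover
        btu_per_lb   = np.array([[10800, 11900], [11200, 9800]])

        # A constraint query is just boolean algebra on the matrices
        minable = (thickness_m >= 1.0) & (overburden_m <= 200) & (btu_per_lb >= 10000)

        # Tonnage estimate: cell area * thickness * density, summed over qualifying cells
        CELL_AREA_M2 = 1.0e6         # assumed 1 km x 1 km grid cells
        DENSITY_T_PER_M3 = 1.32      # assumed in-place bituminous coal density
        tonnes = (CELL_AREA_M2 * thickness_m * DENSITY_T_PER_M3 * minable).sum()
        print(f"{tonnes:.3g} tonnes meet the constraints")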

    Fishery Interaction Modeling of Cetacean Bycatch in the California Drift Gillnet Fishery to Inform a Dynamic Ocean Management Tool

    Understanding the drivers that lead to interactions between target species in a fishery and marine mammals is a critical aspect of efforts to reduce bycatch. In the California drift gillnet fishery, static management approaches and gear changes have reduced bycatch, but neither measure addresses the underlying dynamics causing bycatch events. To avoid further, potentially drastic measures such as hard caps, dynamic management approaches that consider the scales relevant to physical dynamics, animal movement, and human use could be implemented. A key component of this approach is determining the factors that lead to fisheries interactions. Using 25 years (1990-2014) of National Oceanic and Atmospheric Administration fisheries observer data from the California drift gillnet fishery, we model the relative probability of bycatch (presence-absence) of four cetacean species in the California Current System (short-beaked common dolphin Delphinus delphis, northern right whale dolphin Lissodelphis borealis, Risso's dolphin Grampus griseus, and Pacific white-sided dolphin Lagenorhynchus obliquidens). Because protected species bycatch events are rare, each species' dataset contains a large number of absences (zeros). Using a data-assimilative configuration of the Regional Ocean Modeling System, we determined the capability of a flexible machine-learning algorithm to handle these zero-inflated datasets in order to explore the physical drivers of cetacean bycatch in this fishery. Results suggest that cetacean bycatch probability has a complex relationship with the physical environment, with mesoscale variability acting as a strong driver. Through the modeling process, we observed varied responses to the range of sample sizes in the zero-inflated datasets and determined the minimum number of presences needed to build an accurate model. The selection of predictor variables and model evaluation statistics were found to play an important role in assessing the biological significance of our species distribution models. These results highlight the statistical capability (and incapability) of modeling techniques to predict the complex dynamics driving fishery interactions with cetaceans. By determining where fisheries interactions are most likely to occur, we can inform near real-time management approaches that reduce bycatch while still allowing fishermen to meet their catch quotas.
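
    The core statistical difficulty named here is zero-inflated presence-absence data. As a hedged illustration of the general approach (the study's actual algorithm, predictors, and data are not specified in this abstract), the Python sketch below fits a class-weighted random forest to synthetic rare-event labels and scores it with cross-validated AUC; all variable names and data are invented.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)

        # Synthetic stand-in for observer sets: environmental covariates per fishing set
        n_sets = 5000
        X = np.column_stack([
            rng.normal(15, 3, n_sets),    # SST (deg C)
            rng.normal(0, 1, n_sets),     # SSH anomaly (mesoscale proxy)
            rng.uniform(0, 200, n_sets),  # depth of set (m)
        ])

        # Rare-event labels (on the order of 1% presences), loosely tied to covariates
        logit = -5 + 0.3 * X[:, 1] - 0.05 * (X[:, 0] - 15)
        y = (rng.random(n_sets) < 1 / (1 + np.exp(-logit))).astype(int)

        # Class weighting counteracts the flood of zeros in presence-absence data;
        # AUC is rank-based and remains informative despite the imbalance.
        model = RandomForestClassifier(n_estimators=500, class_weight="balanced",
                                       random_state=0)
        print("CV AUC:", cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean())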

    Fast or Accurate? Governing Conflicting Goals in Highly Autonomous Vehicles

    The tremendous excitement around the deployment of autonomous vehicles (AVs) comes from their purported promise. In addition to decreasing accidents, AVs are projected to usher in a new era of equity in human autonomy by providing affordable, accessible, and widespread mobility for disabled, elderly, and low-income populations. However, to realize this promise, it is necessary to ensure that AVs are safe for deployment, and to contend with the risks AV technology poses, which threaten to eclipse its benefits. In this Article, we focus on an aspect of AV engineering currently unexamined in the legal literature, but with critical implications for safety, accountability, liability, and power. Specifically, we explain how understanding the fundamental engineering trade-off between accuracy and speed in AVs is critical for policymakers to regulate the uncertainty and risk inherent in AV systems. We discuss how understanding the trade-off will help create tools that will enable policymakers to assess how the trade-off is being implemented. Such tools will facilitate opportunities for developing concrete, ex ante AV safety standards and conclusive mechanisms for ex post determination of accountability after accidents occur. This will shift the balance of power from manufacturers to the public by facilitating effective regulation, reducing barriers to tort recovery, and ensuring that public values like safety and accountability are appropriately balanced.
    Comment: Vol. 20, pp. 249-27

    Feature Selection on Permissions, Intents and APIs for Android Malware Detection

    Malicious applications pose an enormous security threat to mobile computing devices. Currently, 85% of all smartphones run Android, Google's open-source operating system, making that platform the primary threat vector for malware attacks. Android hosts roughly 99% of known malware to date and is the focus of most research efforts in mobile malware detection due to its open-source nature. One of the main tools used in this effort is supervised machine learning. While a decade of work has made considerable progress in detection accuracy, there is an obstacle that each stream of research is forced to overcome: feature selection, i.e., determining which attributes of Android are most effective as inputs to machine learning models. This dissertation addresses that problem by providing the community with an exhaustive analysis of the three primary types of Android features used by researchers: Permissions, Intents, and API Calls. The intent of the report is not to describe a best-performing feature set or a best-performing machine learning model, nor to explain why certain Permissions, Intents, or API Calls are selected above others, but rather to provide a holistic methodology to help guide feature selection for Android malware detection. The experiments used eleven different feature selection techniques covering filter, wrapper, and embedded methods. Each feature selection technique was applied to seven datasets corresponding to the seven available combinations of Permissions, Intents, and API Calls, each drawn from a base set of 119k Android apps. All of the result sets were then validated against three machine learning models (Random Forest, SVM, and a neural net) to test applicability across algorithm types. The experiments show that using a combination of Permissions, Intents, and API Calls produced higher accuracy than using any of them alone or in any other combination, and that feature selection should be performed on the combined dataset rather than by feature type and then combined. The data also show that, in general, a feature set of 200 or more attributes is required for optimal results. Finally, the feature selection methods Relief, Correlation-based Feature Selection (CFS), and Recursive Feature Elimination (RFE) using a neural net are not satisfactory approaches for Android malware detection work. Based on the proposed methodology and experiments, this research provides insights into feature selection, a significant but often overlooked issue in Android malware detection. We believe the results reported herein are an important step toward effective feature evaluation and selection in assisting malware detection, especially for datasets with a large number of features. The methodology also has the potential to be applied to similar malware detection tasks or even to broader domains such as pattern recognition.
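
    As a small illustration of the workflow the dissertation evaluates at scale (a filter-method selector validated against a downstream classifier), the Python sketch below applies chi-squared selection to a synthetic binary app-feature matrix and cross-validates a Random Forest on the selected features. The data and names are invented, and the choice of k=200 merely echoes the reported ~200-attribute floor.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.feature_selection import SelectKBest, chi2
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import Pipeline

        rng = np.random.default_rng(0)

        # Synthetic stand-in for an app-feature matrix: rows are apps, columns are
        # binary Permission/Intent/API indicators; labels mark malware vs. benign
        n_apps, n_features = 2000, 1500
        X = rng.integers(0, 2, size=(n_apps, n_features))
        y = (X[:, :20].sum(axis=1) + rng.normal(0, 1, n_apps) > 10).astype(int)

        # Filter-method selection (chi-squared) down to 200 features,
        # then validation with a Random Forest classifier
        pipe = Pipeline([
            ("select", SelectKBest(chi2, k=200)),
            ("clf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ])
        print("CV accuracy:", cross_val_score(pipe, X, y, cv=5).mean())

    Wrapping the selector in a Pipeline matters: it is re-fit inside each cross-validation fold, so no test-fold information leaks into the feature ranking.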

    Autoencoder for clinical data analysis and classification: data imputation, dimensional reduction, and pattern recognition

    Over the last decade, research has focused on machine learning and data mining to develop frameworks that can improve data analysis and output performance, and to build accurate decision support systems that benefit from real-life datasets. This has led to the field of clinical data analysis, which has attracted a significant amount of interest in the computing, information systems, and medical fields. To create and develop models with machine learning algorithms, the existing algorithms need a particular kind of data to build an efficient model. Clinical datasets pose several issues that can affect classification: missing values, high dimensionality, and class imbalance. To build a framework for mining the data, it is necessary first to preprocess it: eliminating patients' records that have too many missing values, imputing the remaining missing values, addressing high dimensionality, and classifying the data for decision support. This thesis investigates a real clinical dataset and addresses these challenges. An autoencoder is employed as a tool that can compress the data mining methodology by extracting features and classifying data in one model. The first step in the methodology is to impute missing values, so several imputation methods are analysed and employed. High dimensionality is then addressed by discarding irrelevant and redundant features, in order to improve prediction accuracy and reduce computational complexity. Class imbalance is manipulated to investigate its effect on feature selection and classification algorithms. The first stage of analysis investigates the role of the missing values; results show that imputation techniques based on class separation outperform other techniques in predictive ability. The next stage investigates high dimensionality and class imbalance: a small set of features was found that can improve classification performance, while balancing the classes did not affect performance as much as the class imbalance itself.
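
    The thesis's exact architecture is not given in this abstract, but a minimal dense autoencoder illustrates how one model can serve dimensionality reduction (the bottleneck code) and, with masked-input training, imputation. The PyTorch sketch below is a generic example on synthetic data; the layer sizes and names are assumptions, not the thesis's configuration.

        import torch
        from torch import nn

        class Autoencoder(nn.Module):
            # Encoder compresses records to a small latent code; decoder reconstructs
            def __init__(self, n_features, n_latent=8):
                super().__init__()
                self.encoder = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(),
                                             nn.Linear(32, n_latent))
                self.decoder = nn.Sequential(nn.Linear(n_latent, 32), nn.ReLU(),
                                             nn.Linear(32, n_features))

            def forward(self, x):
                return self.decoder(self.encoder(x))

        # Toy training loop on synthetic "clinical" records with 50 features
        x = torch.randn(512, 50)
        model = Autoencoder(n_features=50)
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        for _ in range(200):
            opt.zero_grad()
            loss = nn.functional.mse_loss(model(x), x)  # reconstruction error
            loss.backward()
            opt.step()

        with torch.no_grad():
            codes = model.encoder(x)   # dimensionality-reduced features
        print(codes.shape)             # torch.Size([512, 8])

    For imputation, one would typically mask or corrupt input values during training and read the imputed values off the reconstruction of the masked positions.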