
    Covering and Separation for Permutations and Graphs

    This is a thesis in two parts, focusing on covering and separation, two major themes in extremal combinatorics and graph theory. These themes concern the existence and properties of collections of combinatorial objects which together either represent all objects (covering) or can be used to distinguish all objects from each other (separation). We consider a range of problems under these headings. The first part focuses on shattering k-sets with permutations. A family of permutations is said to shatter a given k-set if the permutations cover all possible orderings of the k elements. In particular, we investigate the size of permutation families which cover t orders for every possible k-set, and we study the problem of determining the largest number of k-sets that can be shattered by a family of a given size. We provide a construction for a small permutation family which shatters every k-set. We also consider constructions of large families which do not shatter any triple. The second part is concerned with the problem of separating path systems. A separating path system for a graph is a family of paths such that, for any two edges, there is a path containing one edge but not the other. The aim is to find the size of the smallest such family. We study the size of the smallest separating path system for a range of graphs, including complete graphs, complete bipartite graphs, and lattice-type graphs. A key technique we introduce is the use of generator paths, constructed to exploit the symmetric nature of Kn. We continue this symmetric approach for bipartite graphs and study the limitations of the method. We consider lattice-type graphs as an example of the most efficient possible separating systems for any graph.
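
    To make the covering notion concrete, the following small Python sketch (illustrative only, not taken from the thesis) checks whether a family of permutations shatters a given k-set, i.e. realises all k! relative orderings of its elements.

        from math import factorial

        def induced_orderings(perm_family, k_set):
            """Relative orderings of k_set induced by each permutation in the family.

            perm_family: iterable of sequences, each a permutation of the ground set.
            k_set: the k elements whose orderings we want to cover.
            """
            seen = set()
            for perm in perm_family:
                position = {x: i for i, x in enumerate(perm)}
                seen.add(tuple(sorted(k_set, key=position.get)))
            return seen

        def shatters(perm_family, k_set):
            """True if the family realises all k! orderings of k_set."""
            return len(induced_orderings(perm_family, k_set)) == factorial(len(k_set))

        # Example: these 6 permutations of {0,1,2,3} shatter the triple {0,1,2}.
        family = [(0,1,2,3), (0,2,1,3), (1,0,2,3), (1,2,0,3), (2,0,1,3), (2,1,0,3)]
        print(shatters(family, {0, 1, 2}))  # True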

    Connectivity elements and mitigation measures in policy-relevant soil erosion models: A survey across Europe

    The current use of soil erosion models in Europe was investigated through an exploratory survey of 46 model applications covering 18 European countries. The survey revealed novel information on erosion model applications, their parameterisation, their incorporation of landscape elements and mitigation measures with implications for connectivity, and their use in decision-making in Europe. Model predictions were made at national, regional, catchment or field scale. The majority of model applications used the USLE or versions thereof, but a range of semi-empirical, decision-tree and process-based models were also used. The majority of model applications served policy-relevant purposes such as erosion risk assessment or mitigation measure implementation at a range of spatial scales. The analysis identified a clear preference for national or regional data sets and a highly varying parameterisation of model applications. Landscape elements and mitigation measures with effects on connectivity were implemented in most model applications, but not with a focus on modelling connectivity within the landscape. Altogether, the results demonstrate a need to improve connectivity modelling in diverse agricultural landscapes across multiple scales. Models should be chosen depending on their ability to reflect erosion risk at different spatial scales. Nevertheless, harmonisation of data sets, parameterisation procedures and validation approaches is needed for certain modelling scenarios to ensure comparability of soil erosion risk assessments and suitable mitigation practices. Furthermore, we recommend that policy-relevant erosion risk maps be verified against empirical data and that thresholds derived from erosion risk maps be adapted to regional conditions when used for policy guidelines. Hence, comparability, comprehensibility and regional adaptation are essential qualities of policy-relevant erosion maps.
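
    For reference, the USLE mentioned above estimates long-term average annual soil loss as the product of rainfall erosivity (R), soil erodibility (K), slope length and steepness (LS), cover management (C) and support practice (P) factors. The minimal sketch below uses illustrative factor values that are not drawn from the survey.

        def usle_soil_loss(R, K, LS, C, P):
            """Universal Soil Loss Equation: A = R * K * LS * C * P.

            R  - rainfall-runoff erosivity factor
            K  - soil erodibility factor
            LS - slope length and steepness factor
            C  - cover-management factor
            P  - support practice factor
            Returns the estimated long-term average annual soil loss A
            (units follow the factor system used, e.g. t ha^-1 yr^-1).
            """
            return R * K * LS * C * P

        # Illustrative factor values only, not drawn from the survey.
        print(usle_soil_loss(R=600, K=0.03, LS=1.2, C=0.2, P=1.0))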

    The development of liquid crystal lasers for application in fluorescence microscopy

    Lasers can be found in many areas of optical medical imaging, and their properties have enabled the rapid advancement of many imaging techniques and modalities. Their narrow linewidth, relative brightness and coherence are advantageous in obtaining high quality images of biological samples. This is particularly beneficial in fluorescence microscopy. However, commercial imaging systems depend on the combination of multiple independent laser sources or use tuneable sources, both of which are expensive and have large footprints. This thesis demonstrates the use of liquid crystal (LC) laser technology, a compact and portable alternative, as an exciting candidate to provide a tailorable light source for fluorescence microscopy. Firstly, to improve laser performance parameters such that high-power, high-specification lasers could be realised, improvements in device fabrication were presented. Studies exploring the effect of alignment layer rubbing depth and of the device cell gap spacing on laser performance were conducted. The results were the first of their kind and produced advances in fabrication that were critical to repeatedly realising stable, single-mode LC laser outputs with sufficient power to conduct microscopy. These investigations also aided the realisation of laser diode pumping of LC lasers. Secondly, optimum dye concentrations for single- and multi-dye systems were identified and used to optimise the LC laser mixtures for the best performance. These investigations produced novel results relating to the gain media in LC laser systems. Collectively, these advancements yielded lasers of extremely low threshold, comparable to the lowest reported thresholds in the literature. A portable LC laser system was integrated into a microscope and used to perform fluorescence microscopy. Successful two-colour imaging and the multi-wavelength switching ability of LC lasers were exhibited for the first time. The wavelength selectivity of LC lasers was shown to allow lower incident average powers to be used for comparable image quality. Lastly, wavelength selectivity enabled the LC laser fluorescence microscope to achieve high enough sensitivity to conduct quantitative fluorescence measurements. It is hoped that the development of LC lasers and their suitability for fluorescence microscopy demonstrated in this thesis will push the technology towards commercialisation and application.

    Spatial epidemiology of a highly transmissible disease in urban neighbourhoods: Using COVID-19 outbreaks in Toronto as a case study

    The emergence of infectious diseases in an urban area involves a complex interaction between the socioecological processes in the neighbourhood and urbanization. As a result, such an urban environment can be the incubator of new epidemics and spread diseases more rapidly in densely populated areas than elsewhere. Most recently, the coronavirus disease 2019 (COVID-19) pandemic has brought unprecedented challenges around the world. Toronto, the capital city of Ontario, Canada, has been severely impacted by COVID-19. Understanding the spatiotemporal patterns and the key drivers of such patterns is imperative for designing and implementing an effective public health program to control the spread of the pandemic. This dissertation was designed to contribute to the global research effort on the COVID-19 pandemic by conducting spatial epidemiological studies to enhance our understanding of the disease's epidemiology in a spatial context and to guide public health strategies for controlling the disease. Comprising three original research manuscripts, this dissertation focuses on the spatial epidemiology of COVID-19 at a neighbourhood scale in Toronto. Each manuscript makes scientific contributions and enhances our knowledge of how interactions between different socioecological processes in the neighbourhood and urbanization can influence the spatial spread and patterns of COVID-19 in Toronto, applying novel and advanced methodological approaches. The findings of the analyses are intended to contribute to public health policy that informs neighbourhood-based disease intervention initiatives by the public health authorities, local government, and policymakers. The first manuscript analyzes the globally and locally variable socioeconomic drivers of COVID-19 incidence and examines how these relationships vary across different neighbourhoods. In the global model, lower levels of education and the percentage of immigrants were found to have a positive association with increased risk for COVID-19. This study provides the methodological framework for identifying local variations in the association between risk for COVID-19 and socioeconomic factors in an urban environment by applying a local multiscale geographically weighted regression (MGWR) modelling approach. The MGWR model improves on the methods used in earlier studies of COVID-19 in identifying local variations of COVID-19 by incorporating a correction factor for the multiple testing problem in the geographically weighted regression models. The second manuscript quantifies the associations between COVID-19 cases and urban socioeconomic factors and land surface temperature (LST) at the neighbourhood scale in Toronto. Four spatiotemporal Bayesian hierarchical models with spatial, temporal, and varying space-time interaction terms are compared. The results of this study identified the seasonal trends of COVID-19 risk, where the spatiotemporal trends show increasing, decreasing, or stable patterns, and identified area-specific spatial risk for targeted interventions. Educational level and high land surface temperature are shown to have a positive association with the risk for COVID-19. In this study, high spatial and temporal resolution satellite images were used to extract LST, and atmospheric correction methods were applied to these images by adopting a land surface emissivity (LSE) model, which provided a high estimation accuracy. The methodological approach of this work will help researchers understand how to acquire long time-series data of LST at a spatial scale from satellite images, develop methodological approaches for atmospheric correction, and create environmental data with high estimation accuracy for use in disease modelling. In terms of policy, the findings of this study can inform the design and implementation of urban planning strategies and programs to control disease risks. The third manuscript develops a novel approach for visualizing the spread of infectious disease outbreaks by incorporating neighbourhood networks and the time-series data of the disease at the neighbourhood level. The findings of the model provide an understanding of the direction and magnitude of spatial risk for the outbreak and highlight the importance of early intervention in stopping the spread of the outbreak. The manuscript also identifies hotspots using incidence rate and disease persistence, findings which may help public health planners develop priority-based intervention plans in resource-constrained situations.
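
    The MGWR approach used in the first manuscript can be sketched, for illustration only, with the open-source mgwr package (part of PySAL). The file name, column names and covariates below are hypothetical placeholders rather than the dissertation's actual data or pipeline.

        import numpy as np
        import geopandas as gpd
        from mgwr.sel_bw import Sel_BW
        from mgwr.gwr import MGWR

        # Hypothetical neighbourhood-level data: COVID-19 incidence plus
        # socioeconomic covariates (all file and column names are placeholders).
        gdf = gpd.read_file("toronto_neighbourhoods.gpkg")
        coords = np.column_stack([gdf.geometry.centroid.x, gdf.geometry.centroid.y])
        y = gdf[["incidence_rate"]].values
        X = gdf[["pct_low_education", "pct_immigrants", "pct_low_income"]].values

        # Standardise variables, as is conventional for (M)GWR calibration.
        y = (y - y.mean()) / y.std()
        X = (X - X.mean(axis=0)) / X.std(axis=0)

        # One bandwidth per covariate (the "multiscale" part), then fit the model.
        selector = Sel_BW(coords, y, X, multi=True)
        selector.search(multi_bw_min=[2])
        results = MGWR(coords, y, X, selector).fit()
        print(results.params.shape)  # local coefficients: (n_neighbourhoods, n_covariates + intercept)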

    Forest planning utilizing high spatial resolution data

    This thesis presents planning approaches adapted for high spatial resolution data from remote sensing and evaluates whether such approaches can enhance the provision of ecosystem services from forests. The presented methods are compared with conventional, stand-level methods. The main focus lies on the planning concept of dynamic treatment units (DTU), where treatments in small units used for modelling ecosystem processes and forest management are clustered spatiotemporally to form treatment units realistic in practical forestry. The methodological foundation of the thesis is mainly airborne laser scanning data (12.5 × 12.5 m raster cells), different optimization methods and the forest decision support system Heureka. Paper I demonstrates a mixed-integer programming model for DTU planning, and the results highlight the economic advantages of clustering harvests. Papers II and III present an addition to a DTU heuristic from the literature and further evaluate its performance. Results show that directly modelling fixed costs for harvest operations can improve plans and that DTU planning enhances the economic outcome of forestry. The higher spatial resolution of data in the DTU approach enables the planning model to assign management with higher precision than stand-based planning. Paper IV evaluates whether this also holds for ecological values. Here, an approach adapted for cell-level data is compared with a schematic approach based on stand-level data for the purpose of allocating retention patches. The evaluation of economic and ecological values indicates that high spatial resolution data and an adapted planning approach increased the ecological values, while differences in economy were small. In conclusion, the studies in this thesis demonstrate how forest planning can utilize high spatial resolution data from remote sensing, and the results suggest that there is potential to increase the overall provision of ecosystem services if such methods are applied.
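
    To make the idea of directly modelling fixed harvest costs concrete, here is a toy mixed-integer programme written with PuLP. It is illustrative only and not the thesis model: cells may only be harvested if a harvest entry carrying a fixed cost is opened, and all numbers are made up.

        from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary

        # Toy DTU-flavoured model: harvest cells only if a harvest entry with a
        # fixed cost is opened. Cell values and the fixed cost are made-up numbers.
        cell_value = {"c1": 120.0, "c2": 95.0, "c3": -15.0, "c4": 60.0}  # net value per cell
        fixed_entry_cost = 150.0  # incurred once if any cell is harvested

        x = {c: LpVariable(f"harvest_{c}", cat=LpBinary) for c in cell_value}
        open_unit = LpVariable("open_harvest_unit", cat=LpBinary)

        model = LpProblem("dtu_toy", LpMaximize)
        model += lpSum(cell_value[c] * x[c] for c in cell_value) - fixed_entry_cost * open_unit
        for c in cell_value:
            model += x[c] <= open_unit  # a cell can only be cut if the unit is opened

        model.solve()
        print({c: int(x[c].value()) for c in cell_value}, int(open_unit.value()))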

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 251, ITCS 2023, Complete Volume.

    Ecology of methanotrophs in a landfill methane biofilter

    Decomposing landfill waste is a significant anthropogenic source of the potent climate-active gas methane (CH₄). To mitigate fugitive methane emissions, Norfolk County Council are trialling a landfill biofilter designed to harness the methane-oxidizing potential of methanotrophic bacteria. These methanotrophs can convert CH₄ to CO₂ or biomass and act as CH₄ sinks. The most active CH₄-oxidising regions of the Strumpshaw biofilter were identified from in-situ temperature, CH₄, O₂ and CO₂ profiles, while soil CH₄ oxidation potential was estimated and used to confirm methanotroph activity and to determine optimal soil moisture conditions for CH₄ oxidation. Most CH₄ oxidation was observed to occur in the top 60 cm of the biofilter (up to 50% of the CH₄ input) at temperatures around 50 °C, and optimal soil moisture was 10-27.5%. A decrease in in-situ temperature following CH₄ supply interruption suggested that the high biofilter temperatures were driven by CH₄ oxidation. The biofilter soil bacterial community was profiled by 16S rRNA gene analysis, with methanotrophs accounting for ~5-10% of bacteria. Active methanotrophs at a range of incubation temperatures were identified by ¹³CH₄ DNA stable-isotope probing coupled with 16S rRNA gene amplicon and metagenome analysis. These methods identified Methylocella, Methylobacter, Methylocystis and Crenothrix as potential CH₄ oxidisers at the lower temperatures (30 °C/37 °C) observed following system start-up or gas-feed interruption. At higher temperatures typical of established biofilter operation (45 °C/50 °C), Methylocaldum and an unassigned Methylococcaceae species were the dominant active methanotrophs. Finally, the novel methanotrophs Methylococcus capsulatus (Norfolk) and Methylocaldum szegediense (Norfolk) were isolated from biofilter soil enrichments. Based on genome-to-MAG similarity, Methylocaldum szegediense (Norfolk) may be very closely related to, or the same species as, one of the most abundant active methanotrophs in a metagenome from a 50 °C biofilter soil incubation. This isolate was capable of growth over a broad temperature range (37-62 °C), including the higher in-situ biofilter temperatures (>50 °C).
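
    As a minimal illustration of how 16S rRNA gene counts might be summarised into the ~5-10% methanotroph share mentioned above (not the thesis pipeline), the following pandas sketch uses made-up counts and sample names.

        import pandas as pd

        # Hypothetical 16S rRNA amplicon count table: rows = genera, columns = samples.
        counts = pd.DataFrame(
            {"biofilter_30C": [850, 420, 300, 19000],
             "biofilter_50C": [40, 15, 2600, 21000]},
            index=["Methylobacter", "Methylocystis", "Methylocaldum", "other_taxa"],
        )
        methanotroph_genera = ["Methylobacter", "Methylocystis", "Methylocaldum"]

        # Convert counts to relative abundance (% of reads per sample), then sum
        # across the methanotroph genera to get their share of the community.
        rel_abund = counts.div(counts.sum(axis=0), axis=1) * 100
        print(rel_abund.loc[methanotroph_genera].sum(axis=0))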

    Revisiting the capitalization of public transport accessibility into residential land value: an empirical analysis drawing on Open Science

    Background: The delivery and effective operation of public transport is fundamental for a transition to low-carbon transport systems. However, many cities face budgetary challenges in providing and operating this type of infrastructure. Land value capture (LVC) instruments, aimed at recovering all or part of the land value uplifts triggered by actions other than the landowner's, can alleviate some of this pressure. A key element of LVC lies in the increment in land value associated with a particular public action. Urban economic theory supports this idea and considers accessibility to be a core element for determining residential land value. Although the empirical literature assessing the relationship between land value increments and public transport infrastructure is vast, it often assumes homogeneous benefits and, therefore, overlooks relevant elements of accessibility. Advancements in the accessibility concept in the context of Open Science can ease the relaxation of such assumptions. Methods: This thesis draws on the case of Greater Mexico City between 2009 and 2019. It focuses on the effects of the main public transport network (MPTN), which is organised in seven temporal stages according to its expansion phases. The analysis incorporates location-based accessibility measures to employment opportunities in order to assess the benefits of public transport infrastructure. It does so by making extensive use of the open-source software OpenTripPlanner for public transport route modelling (≈ 2.1 billion origin-destination routes). Potential capitalizations are assessed within the hedonic framework. The property value data include individual administrative mortgage records collected by the Federal Mortgage Society (≈ 800,000 records). The hedonic function is estimated using a variety of approaches, i.e. linear models, nonlinear models, multilevel models, and spatial multilevel models. These are estimated by maximum likelihood and Bayesian methods. The study also examines possible spatial aggregation bias using alternative spatial aggregation schemes, following the modifiable areal unit problem (MAUP) literature. Results: The accessibility models across the various temporal stages reveal the spatial heterogeneity shaped by the MPTN in combination with land use and the individual perception of residents. This highlights the need to transition from measures that focus on the characteristics of transport infrastructure to comprehensive accessibility measures which reflect such heterogeneity. The estimated hedonic function suggests a robust, positive, and significant relationship between MPTN accessibility and residential land value in all the modelling frameworks in the presence of a variety of controls. Residential land value increases by between 3.6% and 5.7% for one additional standard deviation of MPTN accessibility to employment in the final set of models. The total willingness to pay (TWTP) is considerable, ranging from 0.7 to 1.5 times the equivalent of the capital costs of the bus rapid transit Line-7 of the Metrobús system. A sensitivity analysis shows that the hedonic model estimation is sensitive to the MAUP. In addition, the use of a post code zoning scheme produces results closest to those of the smallest spatial analytical scheme (a 0.5 km hexagonal grid). Conclusion: The present thesis advances the discussion on the capitalization of public transport accessibility into residential land value by adopting recent contributions from the Open Science framework. Empirically, it fills a knowledge gap, given the scarcity of literature on this topic for this study area. In terms of policy, the findings support LVC as a mechanism of considerable potential. Regarding fee-based LVC instruments, there are fairness issues in relation to the distribution of charges or exactions to households that could be addressed using location-based measures. Furthermore, the approach developed for this analysis serves as valuable guidance for identifying sites with large potential for the implementation of development-based instruments, for instance land readjustment or the sale/lease of additional development rights.
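
    As an illustration of the hedonic framework described above (not the thesis's actual specification), the sketch below fits a log-linear hedonic function with statsmodels. The file name, column names, controls and the use of OLS with robust standard errors are assumptions made for the example.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Hypothetical mortgage-record data; all file and column names are placeholders.
        df = pd.read_csv("mortgage_records.csv")

        # Standardise accessibility so the coefficient reads as the effect of one
        # additional standard deviation of MPTN accessibility to employment.
        df["access_z"] = (df["mptn_access"] - df["mptn_access"].mean()) / df["mptn_access"].std()

        # Log-linear hedonic function with illustrative structural and location controls.
        fit = smf.ols(
            "np.log(land_value) ~ access_z + floor_area + dwelling_age + C(municipality)",
            data=df,
        ).fit(cov_type="HC1")

        # 100*(exp(beta)-1) converts the log-point coefficient into a percentage change
        # in residential land value per standard deviation of accessibility.
        beta = fit.params["access_z"]
        print(f"{100 * (np.exp(beta) - 1):.1f}% per SD of MPTN accessibility")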

    Self-Supervised Pre-training for 3D Point Clouds via View-Specific Point-to-Image Translation

    The past few years have witnessed the great success and prevalence of self-supervised representation learning within the language and 2D vision communities. However, such advancements have not been fully migrated to the field of 3D point cloud learning. Different from existing pre-training paradigms designed for deep point cloud feature extractors that fall into the scope of generative modeling or contrastive learning, this paper proposes a translative pre-training framework, namely PointVST, driven by a novel self-supervised pretext task of cross-modal translation from 3D point clouds to their corresponding diverse forms of 2D rendered images. More specifically, we begin by deducing view-conditioned point-wise embeddings through the insertion of the viewpoint indicator, and then adaptively aggregate a view-specific global codeword, which can be further fed into subsequent 2D convolutional translation heads for image generation. Extensive experimental evaluations on various downstream task scenarios demonstrate that our PointVST shows consistent and prominent performance superiority over current state-of-the-art approaches as well as satisfactory domain transfer capability. Our code will be publicly available at https://github.com/keeganhk/PointVST.
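
    As a rough illustration of the view-conditioned aggregation idea described in the abstract (not the paper's actual architecture), the following PyTorch sketch concatenates a viewpoint indicator to raw point coordinates and max-pools the result into a view-specific codeword; all layer sizes and names are invented.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class ViewConditionedEncoder(nn.Module):
            """Toy sketch: raw points plus a viewpoint indicator -> view-specific codeword."""

            def __init__(self, feat_dim=128, view_dim=3, code_dim=256):
                super().__init__()
                self.point_mlp = nn.Sequential(
                    nn.Linear(3 + view_dim, feat_dim), nn.ReLU(),
                    nn.Linear(feat_dim, code_dim),
                )

            def forward(self, points, view_dir):
                # points: (B, N, 3); view_dir: (B, 3) unit direction of the rendering view.
                view = view_dir.unsqueeze(1).expand(-1, points.shape[1], -1)
                per_point = self.point_mlp(torch.cat([points, view], dim=-1))
                # Max-pool into a global codeword a 2D translation head could consume.
                return per_point.max(dim=1).values  # (B, code_dim)

        points = torch.randn(2, 1024, 3)
        view_dir = F.normalize(torch.randn(2, 3), dim=-1)
        print(ViewConditionedEncoder()(points, view_dir).shape)  # torch.Size([2, 256])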

    Pre-processing training data improves accuracy and generalisability of convolutional neural network based landscape semantic segmentation

    In this paper, we trialled different methods of data preparation for Convolutional Neural Network (CNN) training and semantic segmentation of land use land cover (LULC) features within aerial photography over the Wet Tropics and Atherton Tablelands, Queensland, Australia. This was conducted by trialling and ranking various training patch selection sampling strategies, patch and batch sizes, and data augmentations and scaling. We also compared model accuracy when producing the LULC classification from a single pass of a grid of patches versus averaging multiple grid passes and three rotated versions of each patch. Our results showed: a stratified random sampling approach for producing training patches improved the accuracy of classes with a smaller area while having minimal effect on larger classes; a smaller number of larger patches compared to a larger number of smaller patches improves model accuracy; applying data augmentations and scaling is imperative in creating a generalised model able to accurately classify LULC features in imagery from a different date and sensor; and producing the output classification by averaging multiple grids of patches and three rotated versions of each patch produced a more accurate and aesthetically pleasing result. Combining the findings from the trials, we fully trained five models on the 2018 training image and applied them to the 2015 test image, with the output LULC classifications achieving an average kappa of 0.84, user accuracy of 0.81, and producer accuracy of 0.87. This study has demonstrated the importance of data pre-processing for developing a generalised deep-learning model for LULC classification which can be applied to a different date and sensor. Future research using CNN and earth observation data should implement the findings of this study to increase LULC model accuracy and transferability.
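
    The averaging over grid passes and rotated patches described above is a form of test-time augmentation. A minimal NumPy sketch is given below, where model is a placeholder for the trained CNN returning per-pixel class probabilities; it is illustrative only and not the paper's implementation.

        import numpy as np

        def predict_with_rotations(model, patch):
            """Average class-probability maps over a patch and its three 90-degree rotations.

            model: placeholder callable mapping a (H, W, bands) patch to an
                   (H, W, n_classes) array of class probabilities.
            """
            probs = []
            for k in range(4):  # 0, 90, 180 and 270 degrees
                rotated = np.rot90(patch, k=k, axes=(0, 1))
                pred = model(rotated)
                probs.append(np.rot90(pred, k=-k, axes=(0, 1)))  # rotate prediction back
            return np.mean(probs, axis=0)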