
    Sustainability assessment of wastewater reuse in a Portuguese military airbase

    Funding Information: This research was funded by the School for International Training, World Learning, Vermont, United States. The authors acknowledge the Portuguese Foundation for Science and Technology (FCT) for the support given to CENSE through the strategic project UIDB/04085/2020. The authors would also like to thank the Air Force members, namely the Air Base No. 5 Commander João Vicente and his team, for their availability to support this work. Publisher Copyright: © 2022 Elsevier B.V.

    The current water-scarcity crisis being felt in Europe, particularly in the southern region, has driven the development and implementation of national and regional water-management plans. These policies aim to promote efficient wastewater reuse in the industrial and urban sectors. Stakeholders are therefore seeking strategies to enhance the sustainability of their wastewater-treatment processes. The present work evaluates the wastewater-treatment methods used at an Air Force base located in Portugal. In addition, this study determines how wastewater reuse can be implemented and add value to the environmental-protection mission of the military airbase. An assessment of wastewater-treatment practices was carried out, covering primary and secondary treatments. The chemical, physical, and biological indicators of samples collected over two consecutive years were analyzed to identify trends and fluctuations. The results revealed that the overall effectiveness of nutrient removal is low due to the oversized nature of the treatment plant, the age of the facility, and the composition of the wastewater. The effluent produced meets standards for non-potable reuse and could be used on base for aircraft maintenance and the cleaning of facilities. Nonetheless, the effectiveness of the plant could be improved by implementing a more advanced tertiary wastewater treatment to decrease the concentration of undesired compounds (e.g., total nitrogen), enabling the reuse of water in a broader range of activities.

    Doctor of Philosophy

    Graphics processing units (GPUs) are highly parallel processors that are now commonly used to accelerate a wide range of computationally intensive tasks. GPU programs often suffer from data races and deadlocks, necessitating systematic testing. Conventional GPU debuggers are ineffective at finding and root-causing races, since they detect errors only with respect to a specific platform, input, and thread schedule. Recent tools based on formal and semiformal analysis have considerably improved the situation, but they still have limitations. Our research goal is to apply scalable formal analysis that is free of platform constraints and covers all relevant inputs and thread schedules of GPU programs. To achieve this objective, we create a novel symbolic analyzer, tester, and test-case generator tailored for C++ GPU programs; the framework consists of three stages: GKLEE, GKLEEp, and SESA. This dissertation not only shows that our framework can effectively uncover many concurrency errors in real-world CUDA programs, such as the latest CUDA SDK kernels and the Parboil and LoneStarGPU benchmarks, but also demonstrates that a high degree of test automation is achievable for GPU programs through SMT-based symbolic execution, picking representative executions through thread abstraction, and combined static and dynamic analysis.
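    The schedule and input sensitivity described above is what makes dynamic debugging weak: a race only manifests under particular interleavings. The core conflict check, however, can be stated over an access log. A minimal, purely illustrative sketch (not part of GKLEE/GKLEEp/SESA; all names and the access-log format are hypothetical assumptions): two threads race when they touch the same location, at least one access is a write, and, as this toy model assumes, no synchronization separates them.

```python
from itertools import combinations

def find_races(accesses):
    """Flag pairs of accesses from different threads that touch the same
    address with at least one write. In this toy model there is no
    synchronization, so every such conflicting pair is a data race."""
    races = []
    for x, y in combinations(accesses, 2):
        (t1, op1, addr1), (t2, op2, addr2) = x, y
        if t1 != t2 and addr1 == addr2 and "write" in (op1, op2):
            races.append((x, y))
    return races

# A hypothetical access log from two GPU threads: both touch address 0,
# and one of those accesses is a write, so that pair is racy.
log = [("t0", "write", 0), ("t0", "read", 1), ("t1", "read", 0)]
print(find_races(log))  # -> [(('t0', 'write', 0), ('t1', 'read', 0))]
```

    A symbolic executor generalizes this idea by checking such conflicts over symbolic inputs and representative schedules rather than one concrete log.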

    A rigorous introduction to linear models

    This book provides an introduction to linear models and the theories behind them. Our goal is to give a rigorous introduction for readers with prior exposure to ordinary least squares. In machine learning, the output is usually a nonlinear function of the input. Deep learning even aims to find a nonlinear dependence with many layers, which requires a large amount of computation. However, most of these algorithms build upon simple linear models. We therefore describe linear models from different perspectives and examine the properties and theories behind them. The linear model is the main technique in regression problems, and its primary tool is the least-squares approximation, which minimizes a sum of squared errors. This is a natural choice when we are interested in finding the regression function that minimizes the corresponding expected squared error. This book is primarily a summary of the purpose and significance of the important theories behind linear models, e.g., distribution theory and the minimum-variance estimator. We first describe ordinary least squares from three different points of view, after which we disturb the model with random noise and, in particular, Gaussian noise. Through Gaussian noise, the model gives rise to a likelihood, which lets us introduce the maximum likelihood estimator. The Gaussian disturbance also yields several distribution theories. The distribution theory of least squares helps us answer various questions and motivates related applications. We then prove that least squares is the best unbiased linear estimator in the sense of mean squared error and, most importantly, that it actually attains the theoretical limit. We conclude with linear models under the Bayesian approach and beyond.
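    The least-squares machinery the book builds on can be summarized in one derivation: for the model $y = X\beta + \varepsilon$, minimizing the sum of squared errors yields the normal-equations solution, and under Gaussian noise the same estimator maximizes the likelihood.

```latex
% OLS: minimize the sum of squared errors
\hat{\beta} = \arg\min_{\beta} \, \lVert y - X\beta \rVert_2^2
            = (X^{\top} X)^{-1} X^{\top} y
  \qquad (\text{assuming } X^{\top} X \text{ is invertible}).

% With \varepsilon \sim \mathcal{N}(0, \sigma^2 I), the log-likelihood is
\ell(\beta) = -\tfrac{n}{2}\log(2\pi\sigma^2)
              - \tfrac{1}{2\sigma^2}\lVert y - X\beta \rVert_2^2 ,

% so maximizing \ell(\beta) over \beta minimizes the same squared error:
\hat{\beta}_{\mathrm{MLE}} = \hat{\beta}_{\mathrm{OLS}}.
```

    This is the sense in which the Gaussian disturbance "gives rise to the likelihood": the squared-error objective and the log-likelihood differ only by constants in $\beta$.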

    Accountants' index. Twenty-sixth supplement, January-December 1977, volume 1: A-L


    Machine Learning Techniques To Mitigate Nonlinear Impairments In Optical Fiber System

    The upcoming deployment of 5G/6G networks, online services like 4k/8k HDTV (streaming and online games), the development of the Internet of Things concept connecting billions of active devices, as well as high-speed optical access networks, impose progressively higher requirements on the underlying optical network infrastructure. With current network infrastructures approaching almost unsustainable levels of bandwidth utilization and data-traffic rates, and with the electrical power consumption of communications systems becoming a serious concern in view of global carbon-footprint targets, network operators and system suppliers are now looking for ways to respond to these demands while also maximizing the returns on their investments. The search for a solution to this predicted "capacity crunch" led to a renewed interest in alternative approaches to system design, including the use of high-order modulation formats and high symbol rates enabled by coherent detection, the development of wideband transmission tools, new fiber types (such as multi-mode and multi-core), and, finally, the implementation of advanced digital signal processing (DSP) elements to mitigate optical channel nonlinearities and improve the received SNR. All the aforementioned options are intended to boost the capacity of available optical systems to fulfill the new traffic demands. This thesis focuses on the last of these possible solutions to the "capacity crunch," answering the question: "How can machine learning improve existing optical communications by minimizing quality penalties introduced by transceiver components and fiber-media nonlinearity?" Ultimately, by identifying a proper machine learning solution (or a set of solutions) to act as a nonlinear channel equalizer for optical transmissions, we can improve the system's throughput and even reduce the signal-processing complexity, which means we can transmit more using the already-built optical infrastructure.
    This problem was broken into four parts in this thesis: i) the development of new machine learning architectures to achieve appealing levels of performance; ii) the correct assessment of computational complexity and hardware realization; iii) the application of AI techniques to achieve fast reconfigurable solutions; and iv) the creation of a theoretical foundation, with studies demonstrating the caveats and pitfalls of machine learning methods used for optical channel equalization. Common measures such as bit error rate, quality factor, and mutual information are used to scrutinize the systems studied in this thesis. Based on simulation and experimental results, we conclude that neural-network-based equalization can, in fact, improve the channel quality of transmission while having computational complexity close to that of other classic DSP algorithms.

    Kernels, in a nutshell

    A classical result in algebraic specification states that a total function defined on an initial algebra is a homomorphism if and only if the kernel of that function is a congruence. We expand on the discussion of that result from an earlier paper: extending it from total to partial functions, simplifying the proofs using relational calculus, and generalising the setting to regular categories.
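    An illustrative analogue of the result, in a much humbler setting than the paper's initial algebras and regular categories (the monoid, the functions, and all names below are assumptions for the example): for a function f out of the monoid (Z₆, + mod 6), the kernel relation ker f = {(a, b) : f(a) = f(b)} is a congruence exactly when f respects the operation.

```python
from itertools import product

Z6 = range(6)

def op(a, b):
    """The monoid operation: addition modulo 6."""
    return (a + b) % 6

def is_congruence(f):
    """ker f is a congruence iff pairs related by ker f stay related
    after combining componentwise with op."""
    return all(f(op(a, c)) == f(op(b, d))
               for a, b, c, d in product(Z6, repeat=4)
               if f(a) == f(b) and f(c) == f(d))

hom = lambda x: x % 3        # a monoid homomorphism Z6 -> Z3
not_hom = lambda x: x // 2   # not a homomorphism: 2//2 + 2//2 != 4//2 pattern fails

print(is_congruence(hom), is_congruence(not_hom))  # -> True False
```

    The second function fails because 2 and 3 are kernel-related (both map to 1), yet combining each with 2 gives op(2, 2) = 4 and op(3, 2) = 5... wait, f(4) = 2 and f(5) = 2 agree; the actual counterexample the check finds is 2 ~ 3 combined with itself: f(op(2, 2)) = f(4) = 2 but f(op(3, 3)) = f(0) = 0.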
