
    Economics for environmental management: a practical approach towards circular economy

    Circular Economy has the potential to address current environmental problems by incorporating the reuse and recycling of production materials and/or final products. However, current markets seem unable to fully adopt this type of solution on their own, requiring state and/or governmental intervention. This study gathers some of the most recommended policies from the current literature, ranging from disseminating clear information about Circular Economy and promoting collaborations between firms to implementing stricter laws. Market shifts may also be achieved through financial and fiscal policy, such as increasing indirect taxes on waste and pollution, increasing taxes on unoptimized linear production methods, and granting positive incentives to firms trying to turn their production into a circular system. Nevertheless, to implement these policies, it is crucial to adapt them to each specific point of the production chain, considering local firm properties/diversity and how agents behave in response to such incentives. This dissertation also demonstrates how to use basic economics to attempt a general quantification of how large an incentive or tax should be in order to support a market shift towards Circular Economy. Econometric models using real data show that both incentives and taxes are significant governmental intervention tools that improve development towards sustainability. Nonetheless, Circular Economy also has limits and risks; therefore, assessing the costs of implementing it is crucial for determining whether it is the correct solution for each specific situation.
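The quantification exercise the abstract mentions can be illustrated with the textbook Pigouvian argument: in a linear supply-demand model with a constant marginal external damage, the corrective per-unit tax that restores the socially optimal quantity equals that marginal damage. The sketch below is purely illustrative; the coefficients and functional forms are assumptions, not the dissertation's calibration.

```python
# Hypothetical linear market: demand P = a - b*Q, private supply P = c + d*Q,
# with a constant marginal external damage e per unit (e.g. pollution cost).
def market_equilibrium(a, b, c, d, tax=0.0):
    """Quantity and price where demand meets private supply shifted by a per-unit tax."""
    q = (a - c - tax) / (b + d)
    p = a - b * q
    return q, p

def socially_optimal_quantity(a, b, c, d, e):
    """Quantity where marginal benefit equals marginal social cost (private + external)."""
    return (a - c - e) / (b + d)

# Illustrative parameter values (assumed)
a, b, c, d, e = 100.0, 1.0, 10.0, 1.0, 20.0
q_free, _ = market_equilibrium(a, b, c, d)            # unregulated outcome
q_opt = socially_optimal_quantity(a, b, c, d, e)      # welfare-maximising outcome
q_taxed, _ = market_equilibrium(a, b, c, d, tax=e)    # Pigouvian tax = marginal damage
```

With these numbers the free market overproduces (q_free = 45 versus q_opt = 35), and setting the tax equal to the marginal external damage exactly closes the gap.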

    Wave powered desalination


    Short-Term Multi-Horizon Line Loss Rate Forecasting of a Distribution Network Using Attention-GCN-LSTM

    Accurately predicting line loss rates is vital for effective line loss management in distribution networks, especially over short-term multi-horizons ranging from one hour to one week. In this study, we propose Attention-GCN-LSTM, a novel method that combines Graph Convolutional Networks (GCN), Long Short-Term Memory (LSTM), and a three-level attention mechanism to address this challenge. By capturing spatial and temporal dependencies, our model enables accurate forecasting of line loss rates across multiple horizons. In a comprehensive evaluation using real-world data from 10 kV feeders, our Attention-GCN-LSTM model consistently outperforms existing algorithms, exhibiting superior prediction accuracy and multi-horizon forecasting performance. This model holds significant promise for enhancing line loss management in distribution networks.
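The spatial/temporal decomposition described above can be sketched roughly as follows: one symmetric-normalised graph-convolution step encodes each time step's node features, and softmax attention pools the resulting sequence. This is a toy with random data standing in for the full Attention-GCN-LSTM (the actual model adds LSTM recurrence and three attention levels); the 3-node graph and all dimensions are assumptions.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution step: D^{-1/2}(A+I)D^{-1/2} X W with ReLU."""
    A_hat = A + np.eye(A.shape[0])                  # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)

def temporal_attention(H_seq, q):
    """Softmax attention over time steps; H_seq: (T, features), q: (features,)."""
    scores = H_seq @ q
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ H_seq                                # attention-weighted summary

rng = np.random.default_rng(0)
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)  # toy 3-node feeder graph
T, F_in, F_out = 4, 2, 3
X_t = [rng.normal(size=(3, F_in)) for _ in range(T)]    # random node features per step
W = rng.normal(size=(F_in, F_out))
# Spatial encoding per time step, averaged over nodes, then temporal pooling
H_seq = np.stack([gcn_layer(A, X, W).mean(axis=0) for X in X_t])  # (T, F_out)
summary = temporal_attention(H_seq, rng.normal(size=F_out))
```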

    Experimental and numerical study of wave-induced porous flow in rubble-mound breakwaters


    Model uncertainty and expected return proxies

    Over the last two decades, alternative expected return proxies have been proposed with substantially lower variation than realized returns. This helped to reduce parameter uncertainty and to identify many seemingly robust relations between expected returns and variables of interest, which would have gone unnoticed with the use of realized returns. In this study, I argue that these findings could be spurious because they ignore model uncertainty: since a researcher does not know which of the many proposed proxies is measured with the least error, any inference conditional on only one proxy can lead to overconfident decisions. As a solution, I introduce a Bayesian model averaging (BMA) framework to directly incorporate model uncertainty into the statistical analysis. I apply this approach to three examples from the implied cost of capital (ICC) literature and show that incorporating model uncertainty can severely widen coverage regions, thereby leveling the playing field between realized returns and alternative expected return proxies.
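The mechanics of model averaging can be sketched with a BIC-based approximation (admittedly a simplification of a full Bayesian treatment): each candidate proxy model gets a weight proportional to exp(-BIC/2), and a between-model variance term is what widens the coverage region relative to conditioning on any single proxy. All numbers below are hypothetical.

```python
import math

def bma_weights(bics):
    """Approximate posterior model weights from BICs, proportional to exp(-BIC/2)."""
    best = min(bics)
    w = [math.exp(-(b - best) / 2.0) for b in bics]  # shift by best BIC for stability
    s = sum(w)
    return [x / s for x in w]

def bma_estimate(estimates, bics):
    """Model-averaged point estimate plus the between-model variance component."""
    w = bma_weights(bics)
    mean = sum(wi * e for wi, e in zip(w, estimates))
    # Disagreement across models adds uncertainty that a single-model analysis omits
    var_between = sum(wi * (e - mean) ** 2 for wi, e in zip(w, estimates))
    return mean, var_between

# Hypothetical ICC proxies: point estimates and BICs (illustrative values only)
estimates = [0.045, 0.052, 0.061]
bics = [100.2, 101.0, 103.5]
mean, var_between = bma_estimate(estimates, bics)
```

Adding `var_between` to the weighted average of the within-model variances is what "levels the playing field": the more the proxies disagree, the wider the resulting coverage region.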

    Unevenly Spaced Time Series and Climate Econometrics

    This thesis is a collection of four chapters aimed at exploring and developing methods for the analysis of unevenly spaced and cyclical time series, and at showing how these methods can be used to study environmental and climate variables. Chapter 1 proposes a novel and very robust approach to estimating the mean and the covariance function of a stationary linear time series with unevenly spaced observations. It presents expressions for the sample mean estimator, its variance, and the autocorrelation function, and establishes asymptotic properties including a central limit theorem. Chapter 2 builds on this by introducing a new model-fitting approach for unevenly spaced data. After missing data have been accounted for, the dynamic parameters of an AR time series with unevenly spaced observations can be estimated at a parametric rate, and confidence intervals can be easily constructed. Building on these results, we develop a practical tool for the estimation and forecasting of time series with cyclically varying parameters. More specifically, we introduce an effective estimation technique to tackle periodicity in the parameters of such models and to test for changes in the cycles. To illustrate the robustness and flexibility of the method, Chapter 3 applies the estimation methodology to daily temperature data for Central England. This technique allows us to uncover cyclical patterns in the data without imposing restrictive assumptions. Using the Central England Temperature (CET) time series (1772 - today), we find with a high level of accuracy that intra-year average temperature and persistence increased in the sample 1850 - 2020 compared to 1772 - 1850. Finally, Chapter 4 identifies the effects of climate change on the macroeconomy using an original panel data set for 24 OECD countries over the sample 1990 - 2019 and a multivariate empirical macroeconomic framework for business cycle analysis.
We find evidence of significant macroeconomic effects over the business cycle: physical risks act as negative demand shocks, while transition risks act as downward supply movements. The disruptive effects on the economy are exacerbated for countries without a carbon tax or with high exposure to natural disasters.
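A naive version of the Chapter 1 problem, estimating second moments from unevenly spaced observations, can be sketched by averaging products of demeaned pairs whose time gap falls within a tolerance of the target lag. This binned-pairs estimator is an assumption for illustration only, not the thesis' estimator (which comes with formal asymptotic theory).

```python
def uneven_autocov(times, values, lag, tol=0.25):
    """Empirical autocovariance at a target lag for unevenly spaced observations.

    Pairs (i, j) with i < j whose time gap is within tol of the lag are averaged.
    """
    n = len(values)
    mean = sum(values) / n
    pairs = [(values[i] - mean) * (values[j] - mean)
             for i in range(n) for j in range(i + 1, n)
             if abs((times[j] - times[i]) - lag) <= tol]
    return sum(pairs) / len(pairs) if pairs else float("nan")

# Tiny irregular series (illustrative data)
times = [0.0, 0.9, 2.1, 3.0, 4.2, 5.0]
values = [1.0, 1.2, 0.8, 1.1, 0.9, 1.0]
c1 = uneven_autocov(times, values, lag=1.0)
```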

    Improving the Clinical Use of Magnetic Resonance Spectroscopy for the Analysis of Brain Tumours using Machine Learning and Novel Post-Processing Methods

    Magnetic Resonance Spectroscopy (MRS) provides unique and clinically relevant information for the assessment of several diseases. However, using the currently available tools, MRS processing and analysis is time-consuming and requires profound expert knowledge. For these two reasons, MRS has not yet gained general acceptance as a mainstream diagnostic technique, and the currently available clinical tools have seen little progress in recent years. MRS provides localized chemical information non-invasively, making it a valuable technique for the assessment of various diseases and conditions, namely brain, prostate and breast cancer, and metabolic diseases affecting the brain. In brain cancer, MRS is normally used for: (1) differentiation between tumors and non-cancerous lesions, (2) tumor typing and grading, (3) differentiation between tumor progression and radiation necrosis, and (4) identification of tumor infiltration. Despite the value of MRS for these tasks, susceptibility differences associated with tissue-bone and tissue-air interfaces, as well as with the presence of post-operative paramagnetic particles, affect the quality of brain MR spectra and consequently reduce their clinical value. Therefore, proper quality management of MRS acquisition and processing is essential to achieve unambiguous and reproducible results. In this thesis, special emphasis was placed on this topic. This thesis addresses some of the major problems that limit the use of MRS in brain tumors and focuses on the use of machine learning for the automation of the MRS processing pipeline and for assisting the interpretation of MRS data. Three main topics were investigated: (1) automatic quality control of MRS data, (2) identification of spectroscopic patterns characteristic of different tissue types in brain tumors, and (3) development of a new approach for the detection of tumor-related changes in GBM using MRSI data.
The first topic tackles the problem of MR spectra being frequently affected by signal artifacts that obscure their clinical information content. Manual identification of these artifacts is subjective and is only practically feasible for single-voxel acquisitions and when the user has extensive experience with MRS. Therefore, the automatic distinction between data of good and bad quality is an essential step for the automation of MRS processing and routine reporting. The second topic addresses the difficulties that arise when interpreting MRS results: the interpretation requires expert knowledge, which is not available at every site. Consequently, the development of methods that enable the easy comparison of new spectra with known spectroscopic patterns is of utmost importance for clinical applications of MRS. The third and last topic focuses on the use of MRSI information for the detection of tumor-related effects in the periphery of brain tumors. Several research groups have shown that MRSI information enables the detection of tumor infiltration in regions where structural MRI appears normal. However, many of the approaches described in the literature use only a very limited amount of the total information contained in each MR spectrum. Thus, a better way to exploit MRSI information should enable an improvement in the detection of tumor borders, and consequently improve the treatment of brain tumor patients. The development of the methods described was made possible by a novel software tool for the combined processing of MRS and MRI: SpectrIm. This tool, which is currently distributed as part of the jMRUI software suite (www.jmrui.eu), underlies all of the methods presented and was one of the main outputs of the doctoral work. Overall, this thesis presents different methods that, when combined, enable the full automation of MRS processing and assist the analysis of MRS data in brain tumors.
By allowing clinical users to obtain more information from MRS with less effort, this thesis contributes to the transformation of MRS into an important clinical tool that may be available whenever its information is of relevance for patient management.
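A crude stand-in for automatic spectral quality control, far simpler than the machine-learning classifiers the thesis develops, is to threshold an SNR estimate taken as peak height over the noise level of a signal-free region. The tail region, threshold, and synthetic spectrum below are all assumptions for illustration.

```python
import math
import random

def estimate_snr(spectrum, noise_tail=50):
    """SNR proxy: maximum intensity over the std. dev. of a signal-free tail region."""
    tail = spectrum[-noise_tail:]
    mu = sum(tail) / len(tail)
    sd = math.sqrt(sum((x - mu) ** 2 for x in tail) / len(tail))
    return max(spectrum) / sd if sd > 0 else float("inf")

def passes_quality_control(spectrum, snr_min=4.0):
    """Accept a spectrum only if the SNR proxy clears an assumed threshold."""
    return estimate_snr(spectrum) >= snr_min

# Synthetic example: one strong metabolite-like peak on flat Gaussian noise
random.seed(0)
spectrum = [random.gauss(0.0, 1.0) for _ in range(200)]
spectrum[100] += 50.0
ok = passes_quality_control(spectrum)
```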

    Contributions to k-Means Clustering and Regression via Classification Algorithms

    The dissertation deals with clustering algorithms and with transforming regression problems into classification problems. Its main contributions are twofold: first, to improve (speed up) clustering algorithms, and second, to develop a strict learning environment for solving regression problems as classification tasks using support vector machines (SVMs). An extension to the most popular unsupervised clustering method, the k-means algorithm, is proposed, dubbed the k-means2 (k-means squared) algorithm, applicable to ultra-large datasets. The main idea is to use a small portion of the dataset in the first stage of the clustering. The centers of such a smaller dataset are computed much faster than centers computed on the whole dataset, and these final first-stage centers are naturally much closer to the locations of the final centers, yielding a great reduction in total computational cost. For large datasets, the computational speedup is high and rises with the size of the dataset. The total transient time for the fast stage was found to depend largely on the portion of the dataset selected for that stage. For medium-sized datasets, an 8-10% portion of the data used in the fast stage was shown to be a reasonable choice. The centers computed from the 8-10% sample during the fast stage may oscillate towards the final centers' positions of the fast stage along the centers' movement path. The slow stage starts with the final centers of the fast phase, and the paths of the centers in the second stage are much shorter than those of the classic k-means algorithm. Additionally, the oscillations of the slow-stage centers' trajectories along the path to the final centers' positions are also greatly reduced.
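The two-stage scheme can be sketched in one dimension: run k-means on a ~10% subsample to obtain warm-start centers cheaply (the fast stage), then refine them on the full dataset (the slow stage). The min/max initialisation and the toy Gaussian data are illustrative assumptions, not the dissertation's setup.

```python
import random

def kmeans(points, centers, iters=25):
    """Plain 1-D k-means: assign each point to its nearest center, recompute means."""
    k = len(centers)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: (p - centers[c]) ** 2)
            clusters[nearest].append(p)
        centers = [sum(cl) / len(cl) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers

def kmeans_squared(points, k, sample_frac=0.1, seed=0):
    """Fast stage on a small subsample; slow stage warm-started from its centers."""
    rng = random.Random(seed)
    sample = rng.sample(points, max(k, int(sample_frac * len(points))))
    # Simple deterministic init for k=2 (an assumption); random init otherwise
    init = [min(sample), max(sample)] if k == 2 else rng.sample(sample, k)
    coarse = kmeans(sample, init)       # fast stage: cheap, approximate centers
    return kmeans(points, coarse)       # slow stage: only short center paths remain

random.seed(1)
points = ([random.gauss(0.0, 1.0) for _ in range(500)]
          + [random.gauss(10.0, 1.0) for _ in range(500)])
centers = sorted(kmeans_squared(points, k=2))
```

On this well-separated mixture the recovered centers land near the true means of 0 and 10, with the full-data stage doing only minor refinement.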
In the second part of the dissertation, a novel approach is proposed for posing the solution of regression problems as multiclass classification tasks within the common framework of kernel machines. With this approach, both nonlinear (NL) regression problems and NL multiclass classification tasks are solved as multiclass classification problems using SVMs. The accuracy of an approximating classification (hyper)surface fit to the data points over a given high-dimensional input space by a nonlinear multiclass classifier (averaged over the benchmarking data sets used in this study) is slightly superior to the solution obtained by a regression (hyper)surface. In terms of the CPU time needed for training (i.e., for tuning the hyperparameters of the models), the nonlinear SVM classifier also shows significant advantages. Comparisons were performed between the solutions obtained by an SVM solving a given regression problem as a classic SVM regressor and as an SVM classifier. To transform a regression problem into a classification task, four possible discretizations of a continuous output (target) vector y are introduced and compared. A very strict double (nested) cross-validation technique was used to measure the performance of the regression and multiclass classification SVMs. To ensure fair comparisons, SVMs were used for solving both tasks - regression and multiclass classification. The readily available and most popular benchmarking SVM tool, LibSVM, was used in all experiments. The results on twelve benchmarking regression tasks present SVM regression and classification algorithms as strongly competing models, where each approach shows merits for a specific class of high-dimensional function approximation problems.
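One possible discretization scheme (equal-width binning, an assumed variant for illustration; the dissertation compares four) could look like the sketch below: the continuous target is mapped to class labels, a classifier is trained on those labels, and predictions are mapped back to bin midpoints.

```python
def make_bins(y, n_bins):
    """Equal-width bins over the observed target range; returns edges and midpoints."""
    lo, hi = min(y), max(y)
    width = (hi - lo) / n_bins
    edges = [lo + i * width for i in range(n_bins + 1)]
    mids = [(edges[i] + edges[i + 1]) / 2 for i in range(n_bins)]
    return edges, mids

def to_class(v, edges):
    """Map a continuous value to the index of the bin that contains it."""
    for i in range(len(edges) - 2):
        if v < edges[i + 1]:
            return i
    return len(edges) - 2          # top bin catches the maximum value

# Toy continuous targets (illustrative data)
y = [0.1, 0.4, 0.35, 0.9, 0.75, 0.55]
edges, mids = make_bins(y, n_bins=4)
labels = [to_class(v, edges) for v in y]   # classification targets for the SVM
recovered = [mids[c] for c in labels]      # map predicted classes back to values
```

The maximum reconstruction error of `recovered` is half a bin width, which is the accuracy cost that the classification route trades against its training-time advantage.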