221 research outputs found

    Ordinal prediction using machine learning methodologies: Applications

    Artificial Intelligence is part of our everyday life, not only as consumers but also in most productive sectors, since companies can optimize many of their processes with the tools it provides. Machine learning has been especially useful in this adoption process, as it can be applied to most of the practical problems that appear in real life. Machine learning is the part of artificial intelligence that focuses on developing models that are able to learn a function transforming input data into a desired output. One of the most important components in machine learning is the model, and one of the most successful models in the state of the art is the artificial neural network (ANN). For its first challenge, this thesis will therefore study how to improve ANNs so that they can learn more complex problems without needing computationally costly training algorithms. The next step towards improving a model's performance is to optimize the algorithms used to let it learn how to transform the inputs into the desired outputs; the second challenge of this thesis is to reduce the computational cost of evolutionary algorithms, which are among the best options for training ANNs thanks to their flexibility.
    Ordinal classification (also known as ordinal regression) is an area of machine learning applicable to many real-life problems, since it takes into account the order of the classes, which is important in many of them. In the area of social sciences, we will study which countries are doing the most to help poorer ones, and we will then perform a deeper study to classify the level of globalisation of a country. These studies will be performed by applying the models and algorithms developed in the first stage of the thesis. Continuing with the ordinal classification approaches, we then focus on the area of medicine, where there are many applications of these techniques; for example, any disease that progresses is usually classified into stages according to its severity, from low to high. In our case, this thesis will study how a treatment (liver transplantation) affects different patients (in terms of graft survival time), in order to decide which patient is the most appropriate for that specific treatment.
    The last chapter of the thesis will delve into ordinal classification to achieve ordinal prediction of time series. Time series have usually been processed with classical statistical techniques, since machine learning models focused on time series were too costly. However, with the arrival of powerful computing hardware and the evolution of models such as recurrent neural networks, classical statistical techniques can hardly compete with machine learning. In areas such as economics, social sciences, meteorology or medicine, time series are the main source of information, and they need to be processed correctly to be useful. The most common setting when dealing with time series is to learn from past values in order to predict future ones, and the works in this last chapter will focus on performing ordinal prediction of wind power ramp events (WPREs) in wind farms, creating novel models and methodologies. The thesis will conclude with a work that implements a deep neural network to predict WPREs in multiple wind farms at the same time; this model therefore allows WPREs to be predicted over a wide area rather than at a single geographical point.
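    As a minimal illustration of the ordinal classification setting described above (and not of the models developed in the thesis), the sketch below applies the classic Frank and Hall threshold decomposition, which turns a K-class ordinal problem into K-1 binary "is the label greater than class k" problems; the data, class labels and coefficients are synthetic placeholders, and scikit-learn is assumed to be available.

```python
# Sketch of ordinal classification via the Frank & Hall threshold
# decomposition; synthetic data, not the models developed in the thesis.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
# Synthetic ordinal target with 4 ordered classes (0 < 1 < 2 < 3).
y = np.digitize(X @ np.array([1.0, -0.5, 0.8, 0.3]), bins=[-1.0, 0.0, 1.0])

K = 4
# One binary classifier per threshold, estimating P(y > k | x).
threshold_models = [
    LogisticRegression(max_iter=1000).fit(X, (y > k).astype(int))
    for k in range(K - 1)
]

def predict_ordinal(X_new):
    # P(y > k) for each threshold k = 0..K-2.
    p_gt = np.column_stack([m.predict_proba(X_new)[:, 1] for m in threshold_models])
    # Recompose class probabilities: P(y=0) = 1 - P(y>0),
    # P(y=k) = P(y>k-1) - P(y>k), P(y=K-1) = P(y>K-2).
    probs = np.empty((X_new.shape[0], K))
    probs[:, 0] = 1.0 - p_gt[:, 0]
    for k in range(1, K - 1):
        probs[:, k] = p_gt[:, k - 1] - p_gt[:, k]
    probs[:, K - 1] = p_gt[:, K - 2]
    # Independent binary models can yield slightly negative differences; clip them.
    return probs.clip(min=0).argmax(axis=1)

print(predict_ordinal(X[:5]), y[:5])
```

    Because each binary model respects one threshold of the ordered scale, the recomposed class probabilities exploit the label ordering that a plain nominal classifier would ignore.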

    Performance Enhancement of Power System Operation and Planning through Advanced Advisory Mechanisms

    This research develops decision support mechanisms for power system operation and planning practices. Contemporary industry practices rely on deterministic approaches to approximate system conditions and to handle growing uncertainties from renewable resources. The primary purpose of this research is to identify the soft spots of contemporary industry practices and to propose innovative algorithms, methodologies, and tools that improve economics and reliability in power systems.
    First, this dissertation focuses on transmission thermal constraint relaxation practices. Most system operators employ constraint relaxation practices in their market models, which allow certain constraints to be violated in exchange for penalty prices. A proper selection of penalty prices is imperative because of the influence penalty prices have on generation scheduling and market settlements. However, penalty prices today are decided primarily through stakeholder negotiations or the system operator's judgment; there is little to no methodology or engineered approach behind their determination. This work proposes new methods that determine the penalty prices for thermal constraint relaxations based on the impact that overloading can have on the residual life of the line, and it evaluates the effectiveness of the proposed methods in short-term operational planning and long-term transmission expansion planning studies.
    The second part of this dissertation investigates an advanced methodology to handle the uncertainties associated with high penetration of renewable resources, which poses new challenges to power system reliability and calls for including stochastic modeling within resource scheduling applications. However, incorporating stochastic modeling within mathematical programs has been a challenge due to computational complexity, and market design issues arising from a stochastic market environment make it more challenging still. Given the importance of reliable and affordable electric power, such a challenge to advance existing deterministic resource scheduling applications is critical. This ongoing, joint research attempts to overcome these hurdles by developing a stochastic look-ahead commitment tool, a stand-alone advisory tool. This dissertation contributes the derivation of a mathematical formulation for the extensive-form two-stage stochastic programming model, the use of the Progressive Hedging decomposition algorithm, and an initial implementation of the Progressive Hedging subproblem along with various heuristic strategies to enhance computational performance.
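    To make the role of a penalty price concrete, the toy dispatch below (a sketch only; the two generators, the single relaxable line and all numbers are invented and are not the dissertation's market model) relaxes a transmission limit at a penalty price and shows how the chosen price changes the schedule; it assumes SciPy is available.

```python
# Toy economic dispatch with a relaxable line limit (illustrative only).
# Two generators serve a 150 MW load; g1 is cheap but its output must
# cross a line rated at 80 MW, which may be overloaded by a slack s
# penalised at `penalty` $/MWh.
from scipy.optimize import linprog

def dispatch(penalty):
    # Decision variables: [g1, g2, slack]
    c = [10.0, 30.0, penalty]                 # $/MWh costs, then the penalty price
    A_eq = [[1.0, 1.0, 0.0]]                  # g1 + g2 = load
    b_eq = [150.0]
    A_ub = [[1.0, 0.0, -1.0]]                 # g1 - slack <= 80 (relaxable line limit)
    b_ub = [80.0]
    bounds = [(0, 100), (0, 100), (0, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    g1, g2, slack = res.x
    return g1, g2, slack, res.fun

for penalty in (5.0, 50.0):
    g1, g2, slack, cost = dispatch(penalty)
    print(f"penalty={penalty:>5}: g1={g1:.0f} MW, g2={g2:.0f} MW, "
          f"overload={slack:.0f} MW, cost=${cost:.0f}")
```

    With a cheap penalty the solver overloads the line to keep using the inexpensive generator; with an expensive one it backs that generator down. This sensitivity of scheduling and settlements to the penalty price is exactly what the proposed methods aim to put on an engineered footing.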

    Sensor Signal and Information Processing II

    In the current age of information explosion, newly developed sensors and software are tightly integrated with our everyday lives. Many sensor processing algorithms incorporate some form of computational intelligence as part of their core problem-solving framework. These algorithms have the capacity to generalize, to discover knowledge for themselves, and to learn new information whenever unseen data are captured. The primary aim of sensor processing is to develop techniques to interpret, understand, and act on the information contained in the data. The interest of this book is in developing intelligent signal processing in order to pave the way for smart sensors. This involves the mathematical advancement of nonlinear signal processing theory and its applications, which extend far beyond traditional techniques; it bridges the boundary between theory and application by developing novel, theoretically inspired methodologies that target both long-standing and emergent signal processing applications. The topics range from phishing detection to the integration of terrestrial laser scanning, and from fault diagnosis to bio-inspired filtering. The book will appeal to established practitioners, along with researchers and students in the emerging field of smart sensor processing.
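    As a small, generic example of the nonlinear processing the book is concerned with (not code from any chapter), the sketch below denoises a synthetic sensor signal with a median filter, a simple nonlinear filter that removes the impulsive outliers a linear filter would smear; NumPy and SciPy are assumed to be available.

```python
# Nonlinear sensor denoising with a median filter; synthetic signal,
# illustrative only.
import numpy as np
from scipy.signal import medfilt

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 500)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + rng.normal(0, 0.1, t.size)
noisy[rng.integers(0, t.size, 15)] += 3.0     # impulsive sensor outliers

denoised = medfilt(noisy, kernel_size=7)      # nonlinear: robust to the outliers
print(f"RMSE noisy    = {np.sqrt(np.mean((noisy - clean) ** 2)):.3f}")
print(f"RMSE denoised = {np.sqrt(np.mean((denoised - clean) ** 2)):.3f}")
```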

    Forecasting: theory and practice

    Forecasting has always been at the forefront of decision making and planning. The uncertainty that surrounds the future is both exciting and challenging, with individuals and organisations seeking to minimise risks and maximise utilities. The lack of a free-lunch theorem implies the need for a diverse set of forecasting methods to tackle an array of applications. This unique article provides a non-systematic review of the theory and the practice of forecasting. We offer a wide range of theoretical, state-of-the-art models, methods, principles, and approaches to prepare, produce, organise, and evaluate forecasts. We then demonstrate how such theoretical concepts are applied in a variety of real-life contexts, including operations, economics, finance, energy, environment, and social good. We do not claim that this review is an exhaustive list of methods and applications; the list was compiled based on the expertise and interests of the authors. However, we hope that our encyclopedic presentation will offer a point of reference for the rich work that has been undertaken over the last decades, with some key insights for the future of forecasting theory and practice.
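    As a generic illustration of the produce-and-evaluate workflow such a review covers (not code from the article), the sketch below builds a seasonal naive benchmark forecast for a synthetic monthly series and scores it with MAE and MASE.

```python
# Seasonal naive benchmark and common error metrics on a synthetic monthly
# series; illustrative only, not code from the review.
import numpy as np

rng = np.random.default_rng(1)
m = 12                                        # seasonal period (monthly data)
t = np.arange(10 * m)
series = 100 + 0.5 * t + 10 * np.sin(2 * np.pi * t / m) + rng.normal(0, 3, t.size)

train, test = series[:-m], series[-m:]        # hold out the final year

# Seasonal naive forecast: repeat the last observed seasonal cycle.
forecast = train[-m:]

mae = np.mean(np.abs(test - forecast))
# MASE scales the error by the in-sample seasonal naive error,
# so values below 1 indicate a method that beats that benchmark.
mase = mae / np.mean(np.abs(train[m:] - train[:-m]))

print(f"MAE = {mae:.2f}, MASE = {mase:.2f}")
```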

    Using flight data in Bayesian networks and other methods to quantify airline operational risks.

    The risk assessment methods used in airline operations are usually qualitative rather than quantitative, despite the routine collection of vast amounts of safety data through programmes such as flight data monitoring (FDM). The overall objective of this research is to exploit airborne recorded flight data to provide enhanced operational safety knowledge and quantitative risk assessments. Runway veer-off at landing, which accounts for over 10% of air transport incidents and accidents, is used as an example risk. Literature on FDM, risk assessment and veer-off accidents is reviewed, leading to the identification of three potential areas for further examination: variability in operational parameters as a measure of risk; workload measures derived from flight data as a measure of risk; and Bayesian networks. Methods relating to variability and workload are briefly explored and preliminary results are presented, before the main methods of the thesis, relating to Bayesian networks, are introduced.
    The literature shows that Bayesian networks are a suitable method for quantifying risk, and a causal network for lateral deviation at landing is developed based on accident investigation data. Flight data from over 300,000 flights are used to provide empirical probabilities for the causal factors, and the data for some causal factors are modelled to estimate the probabilities of extreme events. As an alternative to predefining the Bayesian network structure from accident data, a series of networks is learnt from flight data and the performance of different learning algorithms, such as Bayesian Search and Greedy Thick Thinning, is assessed. Finally, a network with parameters and structure learnt from flight data is adapted to incorporate causal knowledge from accident data, and the performance of the resulting “combined” network is assessed.
    All three types of network were able to use flight data to calculate relative probabilities of a lateral deviation event, given different scenarios of causal factors present, and for different airports; however, the “combined” approach is preferred because of the relative ease of running scenarios for different airports and the avoidance of the lengthy process of modelling data for causal factor nodes. The preferred method provides airlines with a practicable way to use their existing flight data to quantify operational risks. The resulting quantitative risk assessments could be used to provide pilots with enhanced pre-flight briefings, and to give airlines up-to-date risk information on operations to different airports as well as enhanced safety oversight.
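    As an illustration of the kind of causal network described above, the sketch below builds a tiny Bayesian network for lateral deviation at landing and queries it under a crosswind scenario. The structure, node names and probabilities are invented for illustration (they are not the networks or parameters learnt from FDM or accident data in the thesis), and the pgmpy library is assumed to be available.

```python
# Hypothetical two-cause Bayesian network for lateral deviation at landing.
# Node names and probabilities are invented; they are not the networks or
# parameters learnt from FDM data in the thesis.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

model = BayesianNetwork([("Crosswind", "LateralDeviation"),
                         ("LateApproachCorrection", "LateralDeviation")])

cpd_wind = TabularCPD("Crosswind", 2, [[0.9], [0.1]])                # P(strong) = 0.1
cpd_corr = TabularCPD("LateApproachCorrection", 2, [[0.95], [0.05]])
cpd_dev = TabularCPD(
    "LateralDeviation", 2,
    # Columns: (Crosswind, LateApproachCorrection) = (0,0), (0,1), (1,0), (1,1)
    [[0.999, 0.97, 0.95, 0.80],    # P(no deviation | parents)
     [0.001, 0.03, 0.05, 0.20]],   # P(deviation    | parents)
    evidence=["Crosswind", "LateApproachCorrection"],
    evidence_card=[2, 2],
)
model.add_cpds(cpd_wind, cpd_corr, cpd_dev)
assert model.check_model()

infer = VariableElimination(model)
# Relative probability of a lateral deviation under a strong-crosswind scenario.
print(infer.query(["LateralDeviation"], evidence={"Crosswind": 1}))
```

    Running the same query with different evidence reproduces, in miniature, the scenario comparisons for different causal factors and airports described above.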