336,095 research outputs found

    Optimisation of the weighting functions of an H<sub>∞</sub> controller using genetic algorithms and structured genetic algorithms

    No full text
    In this paper the optimisation of the weighting functions for an H<sub>∞</sub> controller using genetic algorithms and structured genetic algorithms is considered. The choice of the weighting functions is one of the key steps in the design of an H<sub>∞</sub> controller: the performance of the controller depends on these functions, and poorly chosen weighting functions yield a poor controller. One approach to this problem is to use evolutionary techniques to tune the weighting parameters. The paper presents the improved performance of structured genetic algorithms over conventional genetic algorithms and shows how this technique can assist with the identification of appropriate orders for the weighting functions.
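    As a rough illustration of the evolutionary-tuning idea, the sketch below runs a plain genetic algorithm over a single weighting-function gain. The cost function `controller_cost` is a hypothetical stand-in: the paper evaluates a full H∞ synthesis for each candidate, and the structured-GA variant (which also evolves the weighting functions' orders) is not shown.

```python
import random

def controller_cost(w):
    # Hypothetical surrogate for closed-loop performance as a function of a
    # single weighting-function gain w (stand-in only; in practice each
    # candidate would require a full H-infinity synthesis and evaluation).
    return (w - 3.2) ** 2 + 0.5

def genetic_search(pop_size=20, generations=40, seed=1):
    rng = random.Random(seed)
    # Initial population: random gains in a plausible range.
    pop = [rng.uniform(0.0, 10.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=controller_cost)
        elite = pop[: pop_size // 2]            # selection: keep best half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = 0.5 * (a + b)               # crossover: blend two parents
            child += rng.gauss(0.0, 0.1)        # mutation: small perturbation
            children.append(child)
        pop = elite + children
    return min(pop, key=controller_cost)

best = genetic_search()
```

With elitism the best candidate never degrades, so the search settles near the minimum of the surrogate cost after a few dozen generations.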

    Higher education reform: getting the incentives right

    Get PDF
    This study is a joint effort by the Netherlands Bureau for Economic Policy Analysis (CPB) and the Center for Higher Education Policy Studies. It analyses a number of 'best practices' in the design of financial incentives at the system level of higher education. Chapter 1 presents an overview of some characteristics of the Dutch higher education sector. Chapter 2 is a refresher on the economics of higher education. Chapter 3 covers the Australian Higher Education Contribution Scheme (HECS). Chapter 4 covers tuition fees and admission policies in US universities. Chapter 5 looks at the funding of Danish universities through the so-called taximeter model, which links funding to student performance. Chapter 6 deals with research funding in the UK university system, where research assessment exercises underlie the funding decisions. In Chapter 7 we study the impact of university-industry ties on academic research by examining US policies on increasing knowledge transfer between universities and the private sector. Finally, Chapter 8 presents food for thought for Dutch policymakers: what lessons can be learned from our international comparison?

    Selecting the rank of truncated SVD by Maximum Approximation Capacity

    Full text link
    Truncated Singular Value Decomposition (SVD) calculates the closest rank-k approximation of a given input matrix. Selecting the appropriate rank k is a critical model-order choice in most applications of SVD. To obtain a principled cut-off criterion for the spectrum, we convert the underlying optimization problem into a noisy channel coding problem. The optimal approximation capacity of this channel controls the appropriate strength of regularization to suppress noise. In simulation experiments, this information-theoretic method for determining the optimal rank competes with state-of-the-art model selection techniques.
    Comment: 7 pages, 5 figures; will be presented at the IEEE International Symposium on Information Theory (ISIT) 2011. The conference version has only 5 pages; this version has an extended appendix.
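    The rank-k truncation itself is standard; a minimal NumPy sketch (illustrating the model-order choice, not the paper's channel-coding criterion) shows how the approximation error behaves as k crosses the true rank of a noisy low-rank matrix.

```python
import numpy as np

def truncated_svd(A, k):
    """Closest rank-k approximation of A (Eckart-Young theorem)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] * s[:k] @ Vt[:k, :]

# A noisy low-rank matrix: rank-2 signal plus small Gaussian noise.
rng = np.random.default_rng(0)
signal = rng.normal(size=(50, 2)) @ rng.normal(size=(2, 30))
A = signal + 0.01 * rng.normal(size=(50, 30))

# Frobenius reconstruction error drops sharply once k reaches the true
# rank, then improves only marginally: the classic "elbow" that rank
# selection criteria try to locate in a principled way.
errors = [np.linalg.norm(A - truncated_svd(A, k)) for k in (1, 2, 3)]
```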

    A dynamic organic Rankine cycle using a zeotropic mixture as the working fluid with composition tuning to match changing ambient conditions

    Get PDF
    Air-cooled condensers are widely used for Organic Rankine Cycle (ORC) power plants where cooling water is unavailable or too costly, but they are then vulnerable to changing ambient air temperatures, especially in continental climates, where the air temperature difference between winter and summer can be over 40 °C. A conventional ORC system using a single-component working fluid has to be designed for the maximum air temperature in summer and thus operates far from its optimal design conditions for most of the year, leading to low annual average efficiencies. This research proposes a novel dynamic ORC that uses a binary zeotropic mixture as the working fluid, with mechanisms in place to adjust the mixture composition dynamically during operation in response to changing heat-sink conditions, significantly improving the overall efficiency of the plant. The working principle of the dynamic ORC concept is analysed. The case study results show that the annual average thermal efficiency can be improved by up to 23% over a conventional ORC when the heat source is 100 °C, while the evaluated increase of the capital cost is less than 7%. Dynamic ORC power plants are particularly attractive for low-temperature applications, delivering shorter payback periods compared to conventional ORC systems.

    Development of dry coal feeders

    Get PDF
    Design and fabrication of equipment to feed coal into pressurized environments were investigated. Concepts were selected based on feeder-system performance and economic projections. These systems include two approaches using rotating components, a gas- or steam-driven ejector, and a modified standpipe feeder concept. Results of development testing of critical components, design procedures, and performance-prediction techniques are reviewed.

    Bulk Scheduling with the DIANA Scheduler

    Full text link
    Results from the research and development of a Data Intensive and Network Aware (DIANA) scheduling engine, to be used primarily for data-intensive sciences such as physics analysis, are described. In Grid analyses, tasks can involve thousands of computing, data handling, and network resources. The central problem in the scheduling of these resources is the coordinated management of computation and data at multiple locations, not just data replication or movement. However, this can prove to be a rather costly operation, and efficient scheduling can be a challenge if compute and data resources are mapped without considering network costs. We have implemented an adaptive algorithm within the so-called DIANA Scheduler which takes into account data location and size, network performance, and computation capability in order to enable efficient global scheduling. DIANA is a performance-aware and economy-guided meta-scheduler. It iteratively allocates each job to the site that is most likely to produce the best performance, while also optimizing the global queue for any remaining jobs. It is therefore equally suitable whether a single job is being submitted or bulk scheduling is being performed. Results indicate that considerable performance improvements can be gained by adopting the DIANA scheduling approach.
    Comment: 12 pages, 11 figures. To be published in the IEEE Transactions in Nuclear Science, IEEE Press. 200
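    The core allocation step can be sketched as a cost minimisation over candidate sites. The cost formula, field names, and weights below are illustrative assumptions, not DIANA's actual model; the point is that data locality and network bandwidth enter the decision alongside raw compute capacity and queue state.

```python
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    compute_power: float    # relative CPU capacity
    queue_length: int       # jobs already waiting
    data_local_gb: float    # input data already present at the site
    bandwidth_gbps: float   # bandwidth for fetching remote data

def job_cost(site, cpu_hours, data_gb, alpha=1.0, beta=1.0, gamma=0.1):
    """Hypothetical DIANA-style cost: computation + data transfer + queueing.
    The weights alpha/beta/gamma and the formula itself are assumptions."""
    compute = alpha * cpu_hours / site.compute_power
    transfer = beta * max(data_gb - site.data_local_gb, 0) / site.bandwidth_gbps
    queueing = gamma * site.queue_length
    return compute + transfer + queueing

def schedule(sites, cpu_hours, data_gb):
    # Allocate the job to the site with the lowest combined cost.
    return min(sites, key=lambda s: job_cost(s, cpu_hours, data_gb))

sites = [
    Site("site_a", compute_power=2.0, queue_length=8, data_local_gb=100, bandwidth_gbps=10),
    Site("site_b", compute_power=1.0, queue_length=1, data_local_gb=0,   bandwidth_gbps=1),
]
best = schedule(sites, cpu_hours=4, data_gb=100)
```

Here the site holding the input data wins despite its longer queue, because moving 100 GB over a slow link dominates the cost, which is the kind of trade-off a purely compute-based scheduler would miss.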

    Applying MDL to Learning Best Model Granularity

    Get PDF
    The Minimum Description Length (MDL) principle is solidly based on a provably ideal method of inference using Kolmogorov complexity. We test how the theory behaves in practice on a general problem in model selection: that of learning the best model granularity. The performance of a model depends critically on the granularity, for example the choice of precision of the parameters. Too high a precision generally involves modeling accidental noise, and too low a precision may lead to confusion of models that should be distinguished. This precision is often determined ad hoc. In MDL the best model is the one that most compresses a two-part code of the data set: this embodies "Occam's Razor." In two quite different experimental settings the theoretical value determined using MDL coincides with the best value found experimentally. In the first experiment the task is to recognize isolated handwritten characters in one subject's handwriting, irrespective of size and orientation. Based on a new modification of elastic matching, using multiple prototypes per character, the optimal prediction rate is predicted for the learned parameter (length of sampling interval) considered most likely by MDL, which is shown to coincide with the best value found experimentally. In the second experiment the task is to model a robot arm with two degrees of freedom using a three-layer feed-forward neural network, where we need to determine the number of nodes in the hidden layer giving the best modeling performance. The optimal model (the one that extrapolates best on unseen examples) is predicted for the number of hidden nodes considered most likely by MDL, which again is found to coincide with the best value found experimentally.
    Comment: LaTeX, 32 pages, 5 figures. Artificial Intelligence journal, to appear.
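    The two-part code idea can be illustrated on a toy granularity problem: choosing the number of histogram bins for a sample. This is an assumption-laden sketch, not the paper's handwriting or neural-network experiments; the model cost (encoding the bin counts) grows with the number of bins while the data cost (encoding the points given the histogram) shrinks, and MDL picks the granularity minimising their sum.

```python
import math
import random

def mdl_histogram_bins(data, max_bins=30):
    """Pick a bin count by two-part MDL: total code length =
    bits for the model (bin counts) + bits for the data given the model.
    Illustrative coding scheme; constant quantisation terms are dropped."""
    n = len(data)
    lo, hi = min(data), max(data)
    best_k, best_len = None, float("inf")
    for k in range(1, max_bins + 1):
        width = (hi - lo) / k or 1.0
        counts = [0] * k
        for x in data:
            counts[min(int((x - lo) / width), k - 1)] += 1
        model_bits = k * math.log2(n + 1)   # encode k bin counts
        data_bits = sum(                    # -log2 of histogram density
            -c * math.log2(c / (n * width)) for c in counts if c
        )
        if model_bits + data_bits < best_len:
            best_k, best_len = k, model_bits + data_bits
    return best_k

rng = random.Random(0)
# Bimodal sample: too few bins blur the modes, too many fit noise.
data = [rng.gauss(0, 1) for _ in range(500)] + [rng.gauss(6, 1) for _ in range(500)]
k = mdl_histogram_bins(data)
```

For this clearly bimodal sample the minimiser lands at an intermediate bin count: fine enough to separate the two modes, coarse enough that the per-bin model cost is not wasted on noise.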