Experimental investigation of an interior search method within a simple framework
A steepest gradient method for solving Linear Programming (LP) problems, followed by a procedure for purifying a non-basic solution to an improved extreme point solution, has been embedded within an otherwise simplex-based optimiser. The algorithm is hybrid in nature and exploits many aspects of sparse matrix and revised simplex technology. The interior search step terminates at a boundary point which is usually non-basic. This is then followed by a series of minor pivotal steps which lead to a basic feasible solution with a superior objective function value. It is concluded that the procedures discussed in this paper are likely to have three possible applications:
(i) improving a non-basic feasible solution to a superior extreme point solution,
(ii) an improved starting point for the revised simplex method, and
(iii) an efficient implementation of the multiple price strategy of the revised simplex method.
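The interior-search-then-purify idea can be illustrated on a toy two-variable LP. This is a hypothetical example for intuition only, not the thesis's optimiser: maximise x + y subject to x <= 4, y <= 3, x, y >= 0; start from an interior point, step along the steepest-ascent direction to the boundary, then slide along the active facet to an extreme point.

```python
# Illustrative sketch (hypothetical toy LP, not the paper's algorithm):
# maximise x + y subject to x <= 4, y <= 3, x >= 0, y >= 0.

def interior_then_purify(start):
    x, y = start
    # Interior search step: move along the steepest-ascent direction (1, 1)
    # until the first boundary is hit.
    t = min(4 - x, 3 - y)
    x, y = x + t, y + t          # boundary point, usually non-basic
    # Purification (minor pivotal step): slide along the active facet
    # towards a vertex, further improving the objective.
    if y == 3:                   # facet y = 3 is active; increase x
        x = 4
    else:                        # facet x = 4 is active; increase y
        y = 3
    return x, y                  # basic feasible (extreme point) solution

print(interior_then_purify((1.0, 1.0)))  # -> (4.0, 3.0), objective value 7
```

From (1, 1) the interior step reaches the non-basic boundary point (3, 3) with objective 6, and purification pivots it to the vertex (4, 3) with objective 7.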
Calculation of chemical and phase equilibria
Bibliography: pages 167-169.
The computation of chemical and phase equilibria is an essential aspect of chemical engineering design and development. Important applications range from flash calculations to distillation and pyrometallurgy. Despite the firm theoretical foundations on which the theory of chemical equilibrium is based, there are two major difficulties that prevent the equilibrium state from being accurately determined. The first of these hindrances is the inaccuracy or total absence of pertinent thermodynamic data. The second is the complexity of the required calculation. It is the latter consideration which is the sole concern of this dissertation.
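The computational side of an equilibrium calculation can be sketched on the simplest possible case. The reaction, constant and method below are illustrative assumptions, not taken from the dissertation: a single ideal isomerization A ⇌ B starting from 1 mol of A, where the equilibrium extent ξ satisfies K = n_B / n_A = ξ / (1 − ξ), solved numerically by bisection.

```python
# Minimal numerical sketch (hypothetical single reaction, ideal behaviour):
# solve for the equilibrium extent xi of A <=> B, where K = n_B / n_A,
# starting from 1 mol of pure A, by bisection on the residual.

def equilibrium_extent(K, tol=1e-10):
    def residual(xi):
        return K * (1.0 - xi) - xi   # K * n_A - n_B = 0 at equilibrium
    lo, hi = 0.0, 1.0                # extent is bounded by the mole balance
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0.0:      # residual is decreasing in xi
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(equilibrium_extent(9.0), 6))  # -> 0.9, matching xi = K / (1 + K)
```

Real multiphase, multicomponent problems replace this scalar root-find with a constrained Gibbs energy minimization, which is where the complexity discussed in the dissertation arises.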
Combined optimization algorithms applied to pattern classification
Accurate classification by minimizing the error on test samples is the main goal in pattern classification. Combinatorial optimization is a well-known method for solving minimization problems; however, only a few examples of classifiers are described in the literature where combinatorial optimization is used in pattern classification. Recently, there has been a growing interest in combining classifiers and improving the consensus of results for greater accuracy. In the light of the "No Free Lunch Theorems", we analyse the combination of simulated annealing, a powerful combinatorial optimization method that produces high quality results, with the classical perceptron algorithm. This combination is called the LSA machine. Our analysis aims at finding paradigms for problem-dependent parameter settings that ensure high classification results. Our computational experiments on a large number of benchmark problems lead to results that either outperform or are at least competitive with results published in the literature. Apart from parameter settings, our analysis focuses on a difficult problem in computation theory, namely the network complexity problem. The depth vs size problem of neural networks is one of the hardest problems in theoretical computing, with very little progress over the past decades. In order to investigate this problem, we introduce a new recursive learning method for training hidden layers in constant depth circuits. Our findings make contributions to a) the field of Machine Learning, as the proposed method is applicable in training feedforward neural networks, and to b) the field of circuit complexity, by proposing an upper bound for the number of hidden units sufficient to achieve a high classification rate. One of the major findings of our research is that the size of the network can be bounded by the input size of the problem, with an approximate upper bound of 8 + √2n/n threshold gates being sufficient for a small error rate, where n := log |SL| and SL is the training set.
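The core idea of combining simulated annealing with the perceptron can be sketched as follows. The toy data, cooling schedule and parameters below are illustrative assumptions, not the thesis's problem-dependent settings: annealing searches the weight space globally, and classical perceptron updates (which provably converge on linearly separable data) polish the result.

```python
import math
import random

def errors(w, data):
    """Count misclassifications of a linear threshold unit w = (w0, w1, bias)."""
    return sum(1 for (x0, x1), y in data
               if (1 if w[0] * x0 + w[1] * x1 + w[2] >= 0 else -1) != y)

def lsa_train(data, steps=2000, t0=1.0, seed=0):
    rng = random.Random(seed)
    w = [0.0, 0.0, 0.0]
    best, best_err = w[:], errors(w, data)
    for k in range(steps):                       # simulated annealing phase
        t = t0 / (1 + k)                         # simple cooling schedule
        cand = [wi + rng.gauss(0.0, 0.5) for wi in w]
        delta = errors(cand, data) - errors(w, data)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            w = cand                             # accept (possibly worse) move
        if errors(w, data) < best_err:
            best, best_err = w[:], errors(w, data)
    w = best                                     # classical perceptron phase:
    while errors(w, data) > 0:                   # converges on separable data
        for (x0, x1), y in data:
            if (1 if w[0] * x0 + w[1] * x1 + w[2] >= 0 else -1) != y:
                w = [w[0] + y * x0, w[1] + y * x1, w[2] + y]
    return w, errors(w, data)

# linearly separable toy set (logical AND with +1/-1 labels)
data = [((0, 0), -1), ((1, 0), -1), ((0, 1), -1), ((1, 1), 1)]
w, err = lsa_train(data)
print(err)  # -> 0: a separating hyperplane is found
```

The perceptron phase is guaranteed to terminate here by the perceptron convergence theorem, since the toy data are linearly separable.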
Algorithm engineering : string processing
The string matching problem has attracted a lot of interest throughout the history of computer science, and is crucial to the computing industry. The theoretical community in Computer Science has developed a rich literature in the design and analysis of string matching algorithms. To date, most of this work has been based on the asymptotic analysis of the algorithms. This analysis rarely tells us how an algorithm will perform in practice, and considerable experimentation and fine-tuning is typically required to get the most out of a theoretical idea. In this thesis, promising string matching algorithms discovered by the theoretical community are implemented, tested and refined to the point where they can be usefully applied in practice. We prove that the time complexity of the new algorithms is linear in the average case, and we compare the new algorithms with existing algorithms by experimentation. In the course of this work we present the following new algorithms.
- We implemented the existing one-dimensional string matching algorithms for English texts. From the experimental results we identified the best two algorithms, combined them, and introduced a new algorithm.
- We developed a new two-dimensional string matching algorithm. This algorithm uses the structure of the pattern to reduce the number of comparisons required to search for the pattern.
- We described a method for efficiently storing text. Although this reduces the size of the storage space, it is not a compression method in the usual sense; our aim is to improve both the space and the time taken by a string matching algorithm. Our new algorithm searches for patterns in the efficiently stored text without decompressing it.
- We illustrated that by pre-processing the text we can improve the speed of string matching when searching for a large number of patterns in a given text.
- We proposed a hardware solution for searching in an efficiently stored DNA text.
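As a concrete illustration of the kind of theoretically motivated matcher such a thesis engineers, here is a sketch of the classic Boyer-Moore-Horspool algorithm, which skips ahead using a bad-character shift table and is fast on average for natural-language text. This is a standard textbook algorithm, not one of the thesis's new algorithms.

```python
# Sketch of Boyer-Moore-Horspool: average-case-efficient exact string
# matching via a bad-character shift table (illustrative, not the
# thesis's new algorithms). Returns the first match index, or -1.

def horspool(pattern, text):
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return -1   # design choice: treat an empty pattern as "no match"
    # shift[c] = distance from the last occurrence of c (excluding the
    # final position) to the end of the pattern
    shift = {c: m - i - 1 for i, c in enumerate(pattern[:-1])}
    i = 0
    while i <= n - m:
        if text[i:i + m] == pattern:
            return i
        # jump by the shift of the text character aligned with the
        # pattern's last position (full length m if it never occurs)
        i += shift.get(text[i + m - 1], m)
    return -1

print(horspool("search", "string searching"))  # -> 7
```

On random or English text the expected number of character inspections per window is well below the pattern length, which is what makes the method competitive in practice despite its quadratic worst case.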
Automated retinal layer segmentation and pre-apoptotic monitoring for three-dimensional optical coherence tomography
The aim of this PhD thesis was to develop a segmentation algorithm, adapted and optimized for retinal OCT data, that provides objective 3D layer thickness measurements which might be used to improve the diagnosis and monitoring of retinal pathologies. Additionally, a 3D stack registration method was produced by modifying an existing algorithm. A related project was to develop pre-apoptotic retinal monitoring based on changes in texture parameters of the OCT scans, in order to enable treatment before the changes become irreversible; apoptosis refers to the programmed cell death that can occur in retinal tissue and lead to blindness. These issues can be critical for the examination of tissues within the central nervous system.
A novel statistical model for segmentation has been created and successfully applied to a large data set. The results obtained open a broad range of future research possibilities into advanced pathologies. A separate model has been created for segmentation of the choroid, which is located deep in the retina, as its appearance is very different from that of the top retinal layers. Choroid thickness and structure is an important index of various pathologies (diabetes etc.).
As part of the pre-apoptotic monitoring project, it was shown that an increase in the proportion of apoptotic cells in vitro can be accurately quantified. Moreover, the data obtained indicate a similar increase in neuronal scatter in retinal explants following axotomy (removal of retinas from the eye), suggesting that UHR-OCT can be a novel non-invasive technique for the in vivo assessment of neuronal health.
Additionally, an independent project within the computer science department, in collaboration with the school of psychology, has been successfully carried out, improving the analysis of facial dynamics and behaviour transfer between individuals. Also, important improvements to a general signal processing algorithm, dynamic time warping (DTW), have been made, allowing potential application in a broad range of signal processing fields.
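For reference, the baseline dynamic time warping algorithm that such improvements build on can be sketched in a few lines. This is the textbook DTW recurrence for 1-D series (not the thesis's improved variant): a full O(len(a) * len(b)) cost table with match/insert/delete transitions.

```python
# Classical dynamic time warping (the standard baseline, not the
# thesis's improved version): minimal-cost monotone alignment of two
# 1-D sequences using absolute difference as the local cost.

def dtw(a, b):
    inf = float("inf")
    n, m = len(a), len(b)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # best of: step both (match), stretch a, stretch b
            D[i][j] = cost + min(D[i - 1][j - 1], D[i - 1][j], D[i][j - 1])
    return D[n][m]

print(dtw([0, 1, 2, 1, 0], [0, 1, 1, 2, 1, 0]))  # -> 0.0 (b is a warped copy of a)
```

Typical refinements replace the full table with banded or pruned variants to cut the quadratic cost, which is what makes DTW usable on long signals.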
Train scheduling with application to the UK rail network
Nowadays, transforming the railway industry for better performance and making the best use of the current capacity are key issues in many countries. Operational research methods, and in particular scheduling techniques, have substantial potential to offer algorithmic solutions that improve railway operation and control. This thesis looks at train scheduling and rescheduling problems at a microscopic level with respect to the track topology. All of the timetable components are fixed, and we aim to minimize delay by considering a tardiness objective function and allowing changes only to the order and starting times of trains on blocks. Various operational and safety constraints must be considered. We achieve further developments in the field, including generalizations of existing models, in order to obtain a generic model that includes important additional constraints. We make use of the analogy between the train scheduling problem and the job shop scheduling problem. The model is customized to the UK railway network and signaling system. The solution methods introduced are inspired by the successful results of the shifting bottleneck approach for job shop scheduling problems. Several solution methods, such as mathematical programming and different variants of the shifting bottleneck, are investigated. The proposed methods are implemented on a real-world case study based on the London Bridge area in the South East of the UK, a dense network of interconnected lines with a complicated station and junction structure. Computational experiments show the efficiency and limitations of the mathematical programming model and of one variant of the proposed shifting bottleneck algorithms. This study also addresses train routing and rerouting problems at a mesoscopic level, relaxing some of the detailed constraints. The aim is to make the best use of routing options in the network to minimize delay propagation.
In addition to train routes, train entry times and orders on track segments are defined. Hence, routing and scheduling decisions are combined in the solutions arising from this problem. Train routing and rerouting problems are formulated as modified job shop problems to include the main safety and operational constraints. Novel shifting bottleneck algorithms are provided to solve the problem. Computational results are reported on the same case study based on the London Bridge area, and the results show the efficiency of one variant of the developed shifting bottleneck algorithms in terms of solution quality and runtime.
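The core sequencing decision, which trains in which order on a shared block section so as to minimize tardiness, can be sketched on a toy instance. The train data below are invented for illustration, and brute-force enumeration stands in for the shifting bottleneck machinery, which tackles one bottleneck "machine" (block) at a time on realistic instances.

```python
from itertools import permutations

# Toy sketch of the block-ordering decision (hypothetical data): trains
# share one block section; only their order and start times may change.
# Tardiness = max(0, completion - due time), summed over trains.

def total_tardiness(order, release, duration, due):
    t, tard = 0, 0
    for train in order:
        # a train enters the block when it is released AND the block is free
        t = max(t, release[train]) + duration[train]
        tard += max(0, t - due[train])
    return tard

release  = {"A": 0, "B": 1, "C": 2}   # earliest entry times
duration = {"A": 3, "B": 2, "C": 2}   # block occupation times
due      = {"A": 9, "B": 4, "C": 6}   # due times at block exit

best = min(permutations(release),
           key=lambda o: total_tardiness(o, release, duration, due))
print(best, total_tardiness(best, release, duration, due))  # -> ('B', 'C', 'A') 0
```

Here the timetable order A, B, C incurs tardiness 2, while reordering to B, C, A eliminates all delay; a single-machine tardiness problem like this is NP-hard in general, which motivates the heuristic decomposition.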
Task and contingency planning under uncertainty
Thesis (Sc.D.) -- Massachusetts Institute of Technology, Dept. of Nuclear Engineering, 1995. Includes bibliographical references (leaves 204-213). By Volkan C. Kubali, Sc.D.
Combining SOA and BPM Technologies for Cross-System Process Automation
This paper summarizes the results of an industry case study that introduced a cross-system business process automation solution based on a combination of SOA and BPM standard technologies (i.e., BPMN, BPEL, WSDL). Besides discussing major weaknesses of the existing custom-built solution and comparing them against experiences with the developed prototype, the paper presents a course of action for transforming the current solution into the proposed one. This includes a general approach, consisting of four distinct steps, as well as specific action items to be performed at every step. The discussion also covers language and tool support and the challenges arising from the transformation.