Proceedings of the Third International Workshop on Neural Networks and Fuzzy Logic, volume 2
Papers presented at the Neural Networks and Fuzzy Logic Workshop, sponsored by the National Aeronautics and Space Administration and cosponsored by the University of Houston, Clear Lake, held June 1-3, 1992 at the Lyndon B. Johnson Space Center in Houston, Texas, are included. During the three days approximately 50 papers were presented. Technical topics addressed included adaptive systems; learning algorithms; network architectures; vision; robotics; neurobiological connections; speech recognition and synthesis; fuzzy set theory and application; control and dynamics processing; space applications; fuzzy logic and neural network computers; approximate reasoning; and multiobject decision making.
A case study using LARSYS for analysis of LANDSAT data
Techniques are described for the analysis of LANDSAT multispectral data using the LARSYS data processing system.
Informed selection and use of training examples for knowledge refinement.
Knowledge refinement tools seek to correct faulty rule-based systems by identifying and repairing faults indicated by training examples. This thesis proposes mechanisms that improve the effectiveness and efficiency of refinement tools through the informed selection and use of training examples. The refinement task is sufficiently complex that the space of possible refinements demands a heuristic search. Refinement tools typically use hill-climbing search to identify suitable repairs, but run the risk of getting caught in local optima. A novel contribution of this thesis is solving the local optima problem by converting the hill-climbing search into a best-first search that can backtrack to previous refinement states. The thesis explores how different backtracking heuristics and training example ordering heuristics affect refinement effectiveness and efficiency. Refinement tools rely on a representative set of training examples to identify faults and influence repair choices. In real environments it is often difficult to obtain a large set of training examples, since each problem-solving task must be labelled with the expert's solution. Another novel aspect introduced in this thesis is informed selection of examples for knowledge refinement, where suitable examples are selected from a set of unlabelled examples, so that only this subset needs to be labelled. Conversely, if a large set of labelled examples is available, it still makes sense to have mechanisms that can select a representative subset beneficial for the refinement task, thereby avoiding unnecessary example processing costs. Finally, an experimental evaluation of example utilisation and selection strategies on two artificial domains and one real application is presented.
Informed backtracking effectively deals with local optima by moving the search to more promising areas, while informed ordering of training examples reduces search effort by ensuring that the more pressing faults are dealt with early in the search. Additionally, example selection methods achieve similar refinement accuracy with significantly fewer examples.
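The backtracking idea described in this abstract can be illustrated as a generic best-first search over refinement states. The sketch below is illustrative only, not the thesis's actual implementation: the state representation, `neighbours` generator, and `score` function (e.g. accuracy on the training examples) are all assumed interfaces. Keeping every visited state on a priority queue lets the search return to an earlier, more promising state when the current branch stalls at a local optimum, which plain hill-climbing cannot do.

```python
import heapq

def best_first_refine(initial_state, neighbours, score, max_steps=1000):
    """Best-first search over refinement states (illustrative sketch).

    Unlike hill-climbing, the frontier retains every state seen so far,
    so the search can backtrack to a previously abandoned state when
    the current branch hits a local optimum. Higher score = better.
    States must be hashable.
    """
    # Python's heapq is a min-heap, so scores are negated to pop
    # the best-scoring state first.
    frontier = [(-score(initial_state), initial_state)]
    best = initial_state
    seen = {initial_state}
    for _ in range(max_steps):
        if not frontier:
            break
        neg, state = heapq.heappop(frontier)
        if -neg > score(best):
            best = state
        for nxt in neighbours(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (-score(nxt), nxt))
    return best
```

On a score landscape with a local maximum between the start and the global maximum, hill-climbing from the start would stop at the local peak; the retained frontier lets this search pass through the dip and reach the global one.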
The identification and classification of variability in stellar sources observed with SuperWASP
The purpose of this thesis was to create an automated classifier for periodic stellar objects in the Wide-Angle Search for Planets (SuperWASP) survey and to use the classified stars to investigate three phenomena: differentiation of Beta Lyrae and W UMa eclipsing binary stars using the eclipse-depth ratio; identification of RR Lyrae stars exhibiting the Blazhko effect; and the presence of the Oosterhoff dichotomy in the Milky Way.
During this work, period/amplitude ranges and distribution maps were created for the classified stars in the stellar classes Algol, Beta Lyrae, W UMa, Delta Scuti and RR Lyrae (RRAB), and comparisons were made with published equivalents. SuperWASP objects known in the General Catalogue of Variable Stars (GCVS) were also assessed to identify differences.
The automated system contained three neural networks (NNs) that processed parameters defining the shape of the phase-folded light curve, and they were trained with representative sets of eclipsing binary, pulsating and sinusoidal-like stars. The system, installed at Leicester University, processed 4.3 million object/periods from the SuperWASP database, of which 1.1 million were given prospective classifications. From these, approximately 64 thousand objects consisting of eclipsing binary and pulsating stars were assessed manually to confirm the given periods/classifications, and roughly half were classified correctly. The reasons for the misclassifications were identified and recommendations made on improving the results.
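The phase-folded light curves that feed the classifier come from folding observation times on a candidate period, so the curve repeats over phase [0, 1). A minimal sketch of that folding step, with illustrative names and not the pipeline's actual code, might look like:

```python
import numpy as np

def phase_fold(times, mags, period, epoch=0.0):
    """Fold a light curve on a trial period (illustrative sketch).

    Maps each observation time to a phase in [0, 1) and returns
    phases and magnitudes sorted by phase, which is the shape the
    curve-shape parameters would be derived from.
    """
    phase = ((np.asarray(times, dtype=float) - epoch) / period) % 1.0
    # Stable sort so observations at identical phases keep time order.
    order = np.argsort(phase, kind="stable")
    return phase[order], np.asarray(mags, dtype=float)[order]
```

At the correct period, repeated cycles land on top of each other and the folded curve shows a clean eclipse or pulsation profile; at a wrong trial period the folded points scatter, which is what distinguishes candidate periods.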
The manually confirmed objects consisted of 12,882 Algols, 5,226 Beta Lyrae, 2,875 W UMa, 1,979 Delta Scutis and 8,322 RR Lyraes (RRAB), where significant numbers of each were unknown in SIMBAD or the GCVS. Many objects had periods and/or amplitudes outside published ranges, with the surprising result that the majority of Beta Lyraes had periods shorter than published.
A separation range for eclipse-depth ratio was identified but a cross-over point existed where differentiation was not possible. A number of new RRAB Blazhko stars were identified and the amplitude range between peaks calculated. The presence of the Oosterhoff dichotomy in the Milky Way galaxy was supported, but the causative factors could not be confirmed.
Comparison of the SuperWASP periods with the GCVS resulted in 649 variable stars being added to the GCVS catalogue where the period was unknown in the GCVS, and revision of the variability period of 194 GCVS variable stars was also suggested. For comparison of classification, sub-classes were suggested for 333 unconfirmed objects in the GCVS (e.g. CEP:, EA:, RR, etc.) and re-classification was suggested for 197 GCVS objects with suspected incorrect classes.
Proceedings of the Third International Workshop on Neural Networks and Fuzzy Logic, volume 1
Documented here are papers presented at the Neural Networks and Fuzzy Logic Workshop sponsored by the National Aeronautics and Space Administration and cosponsored by the University of Houston, Clear Lake. The workshop was held June 1-3, 1992 at the Lyndon B. Johnson Space Center in Houston, Texas. During the three days approximately 50 papers were presented. Technical topics addressed included adaptive systems; learning algorithms; network architectures; vision; robotics; neurobiological connections; speech recognition and synthesis; fuzzy set theory and application; control and dynamics processing; space applications; fuzzy logic and neural network computers; approximate reasoning; and multiobject decision making.
Using customised image processing for noise reduction to extract data from early 20th century African newspapers
A research report submitted to the Faculty of Engineering and the Built Environment, University of the Witwatersrand, Johannesburg, in partial fulfilment of the requirements for the degree of Master of Science in Engineering, 2017. The images from the African articles dataset presented challenges to the Optical Character Recognition (OCR) tool. Despite successful binarisation in the Image Processing step of the pipeline, noise remained in the foreground of the images. This noise caused the OCR tool to misinterpret the text from the images and thus needed removal from the foreground. The technique involved the application of the Maximally Stable Extremal Region (MSER) algorithm, borrowed from Scene-Text Detection, and supervised machine learning classifiers. The algorithm creates regions from the foreground elements. Regions can be classified into noise and characters based on the characteristics of their shapes, so classifiers were trained to recognise each. The technique is useful for a researcher wanting to process and analyse the large dataset, who could use it to semi-automate the foreground noise-removal process. This would allow for better quality OCR output for use in the Text Analysis step of the pipeline. Better OCR quality means fewer compromises would be required at the Text Analysis step; such concessions can lead to false results when searching noisy text. Fewer compromises mean simpler, less error-prone analysis and more trustworthy results. The technique was tested against specifically selected images from the dataset which exhibited noise. It involved a number of steps. Training regions were selected and manually classified. After training and running many classifiers, the highest-performing classifier was selected. The classifier categorised regions from all images. New images were created by removing noise regions from the original images.
To discover whether an improvement in the OCR output was achieved, a text comparison was conducted. OCR text was generated from both the original and processed images. The two outputs for each image were compared for similarity against the test text, a manually created version of the expected OCR output per image. The similarity test for both original and processed images produced a score; a change in the similarity score indicated whether the technique had successfully removed noise. The test results showed that blotches in the foreground could be removed and OCR output improved. Bleed-through and page-fold noise were not removable. For images affected by noise blotches, this technique can be applied, and hence fewer concessions will be needed when processing the text generated from those images.
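The shape-based classification of MSER regions described in this report can be sketched as a feature-extraction step: each detected region yields a handful of geometric features, which a supervised classifier (trained on the manually labelled regions) then maps to "noise" or "character". The feature set below is an illustrative assumption, not the report's actual feature set, and `region_features` is a hypothetical helper name.

```python
import numpy as np

def region_features(pixels):
    """Shape features for one MSER region (illustrative sketch).

    `pixels` is an (N, 2) sequence of (row, col) coordinates of the
    region's pixels. The returned features would be fed to a trained
    noise-vs-character classifier.
    """
    pts = np.asarray(pixels)
    rows, cols = pts[:, 0], pts[:, 1]
    h = rows.max() - rows.min() + 1      # bounding-box height
    w = cols.max() - cols.min() + 1      # bounding-box width
    area = len(pts)                      # number of region pixels
    return {
        "aspect_ratio": w / h,           # characters cluster in a narrow band
        "extent": area / (w * h),        # fill fraction of the bounding box
        "height": int(h),                # tiny regions are usually specks
    }
```

Blotch noise tends to produce regions whose size, fill fraction, or aspect ratio falls outside the range typical of printed characters, which is what makes a shape-based classifier workable here.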
A descriptive model for determining optimal human performance in systems. Volume 3 - An approach for determining the optimal role of man and allocation of functions in an aerospace system
Optimal role of man in space, allocation of men and machines in aerospace systems, and a descriptive model for determining optimal human performance.
The 1993 Goddard Conference on Space Applications of Artificial Intelligence
This publication comprises the papers presented at the 1993 Goddard Conference on Space Applications of Artificial Intelligence held at the NASA/Goddard Space Flight Center, Greenbelt, MD, on May 10-13, 1993. The purpose of this annual conference is to provide a forum in which current research and development directed at space applications of artificial intelligence can be presented and discussed.