3,624 research outputs found

    Chromosome classification and speech recognition using inferred Markov networks with empirical landmarks.

    by Law Hon Man. Thesis (M.Phil.)--Chinese University of Hong Kong, 1993. Includes bibliographical references (leaves 67-70).
    Chapter 1 --- Introduction --- p.1
    Chapter 2 --- Automated Chromosome Classification --- p.4
    Chapter 2.1 --- Procedures in Chromosome Classification --- p.6
    Chapter 2.2 --- Sample Preparation --- p.7
    Chapter 2.3 --- Low Level Processing and Measurement --- p.9
    Chapter 2.4 --- Feature Extraction --- p.11
    Chapter 2.5 --- Classification --- p.15
    Chapter 3 --- Inference of Markov Networks by Dynamic Programming --- p.17
    Chapter 3.1 --- Markov Networks --- p.18
    Chapter 3.2 --- String-to-String Correction --- p.19
    Chapter 3.3 --- String-to-Network Alignment --- p.21
    Chapter 3.4 --- Forced Landmarks in String-to-Network Alignment --- p.31
    Chapter 4 --- Landmark Finding in Markov Networks --- p.34
    Chapter 4.1 --- Landmark Finding without a priori Knowledge --- p.34
    Chapter 4.2 --- Chromosome Profile Processing --- p.37
    Chapter 4.3 --- Analysis of Chromosome Networks --- p.39
    Chapter 4.4 --- Classification Results --- p.45
    Chapter 5 --- Speech Recognition using Inferred Markov Networks --- p.48
    Chapter 5.1 --- Linear Predictive Analysis --- p.48
    Chapter 5.2 --- TIMIT Speech Database --- p.50
    Chapter 5.3 --- Feature Extraction --- p.51
    Chapter 5.4 --- Empirical Landmarks in Speech Networks --- p.52
    Chapter 5.5 --- Classification Results --- p.55
    Chapter 6 --- Conclusion --- p.57
    Chapter 6.1 --- Suggested Improvements --- p.57
    Chapter 6.2 --- Concluding remarks --- p.61
    Appendix A --- p.63
    Reference --- p.6
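    The string-to-string correction of Chapter 3.2 is the classic edit-distance computation solved by dynamic programming. The following is an illustrative sketch of that standard technique, not the thesis author's implementation:

    ```python
    def edit_distance(s, t):
        """Minimum number of insertions, deletions and substitutions
        needed to turn string s into string t."""
        m, n = len(s), len(t)
        # dp[i][j] = cost of converting s[:i] into t[:j]
        dp = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(m + 1):
            dp[i][0] = i          # delete all of s[:i]
        for j in range(n + 1):
            dp[0][j] = j          # insert all of t[:j]
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                sub = 0 if s[i - 1] == t[j - 1] else 1
                dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                               dp[i][j - 1] + 1,        # insertion
                               dp[i - 1][j - 1] + sub)  # substitution or match
        return dp[m][n]
    ```

    The same table-filling idea generalizes from aligning two strings to aligning a string against a network, as the thesis develops in Chapter 3.3.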

    Handwritten digit recognition by bio-inspired hierarchical networks

    The human brain processes information, showing learning and prediction abilities, but the underlying neuronal mechanisms remain unknown. Recently, many studies have shown that neuronal networks are capable of both generalization and association of sensory inputs. In this paper, following a set of neurophysiological findings, we propose a learning framework with strong biological plausibility that mimics prominent functions of cortical circuitries. We developed the Inductive Conceptual Network (ICN), a hierarchical bio-inspired network able to learn invariant patterns using Variable-order Markov Models implemented in its nodes. The outputs of the top-most node of the ICN hierarchy, representing the highest generalization of the input, allow for automatic classification of inputs. We found that the ICN clustered MNIST images with an error of 5.73% and USPS images with an error of 12.56%.
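    A Variable-order Markov Model of the kind the ICN nodes are described as implementing can be sketched as follows. The class name, the count-based training, and the longest-context back-off scheme are illustrative assumptions, not the authors' exact algorithm:

    ```python
    from collections import defaultdict

    class VOMM:
        """Variable-order Markov model: counts every context of length
        up to max_order, and predicts by backing off from the longest
        context seen in training to shorter ones."""

        def __init__(self, max_order=3):
            self.max_order = max_order
            self.counts = defaultdict(lambda: defaultdict(int))

        def train(self, seq):
            for i, sym in enumerate(seq):
                # Record sym under every context ending just before it,
                # from length 0 (empty context) up to max_order.
                for k in range(min(i, self.max_order) + 1):
                    ctx = tuple(seq[i - k:i])
                    self.counts[ctx][sym] += 1

        def predict(self, history):
            # Back off from the longest usable context to the empty one.
            for k in range(min(len(history), self.max_order), -1, -1):
                ctx = tuple(history[len(history) - k:])
                if ctx in self.counts:
                    dist = self.counts[ctx]
                    return max(dist, key=dist.get)
            return None  # model was never trained
    ```

    In a hierarchy like the ICN, each node would run a model of this kind over the outputs of its children, so higher nodes see progressively more abstract sequences.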

    Radical Artificial Intelligence: A Postmodern Approach

    The dynamic response of end-clamped monolithic beams and sandwich beams has been measured by loading the beams at mid-span using metal foam projectiles. The AISI 304 stainless-steel sandwich beams comprise two identical face sheets and either prismatic Y-frame or corrugated cores. The resistance to shock loading is quantified by the permanent transverse deflection at mid-span of the beams as a function of projectile momentum. The prismatic cores are aligned either longitudinally along the beam length or transversely. It is found that the sandwich beams with a longitudinal core orientation have a higher shock resistance than the monolithic beams of equal mass. In contrast, the performance of the sandwich beams with a transverse core orientation is very similar to that of the monolithic beams. Three-dimensional finite element (FE) simulations are in good agreement with the measured responses. The FE calculations indicate that strain concentrations in the sandwich beams occur at joints within the cores and between the core and face sheets; the level of maximum strain is similar for the Y-frame and corrugated core beams for a given value of projectile momentum. The experimental and FE results taken together reveal that Y-frame and corrugated core sandwich beams of equal mass have similar dynamic performances in terms of rear-face deflection, degree of core compression and level of strain within the beam.

    A Formal Model of Ambiguity and its Applications in Machine Translation

    Systems that process natural language must cope with and resolve ambiguity. In this dissertation, a model of language processing is advocated in which multiple inputs and multiple analyses of inputs are considered concurrently, and a single analysis is chosen only as a last resort. Compared to conventional models, this approach can be understood as replacing single-element inputs and outputs with weighted sets of inputs and outputs. Although processing components must deal with sets (rather than individual elements), constraints are imposed on the elements of these sets, and the representations from existing models may be reused. However, to deal efficiently with large (or infinite) sets, compact representations that share structure between elements, such as weighted finite-state transducers and synchronous context-free grammars, are necessary. These representations, and algorithms for manipulating them, are discussed in depth. To establish the effectiveness and tractability of the proposed processing model, it is applied to several problems in machine translation. Starting with spoken language translation, it is shown that translating a set of transcription hypotheses yields better translations than a baseline in which a single (1-best) transcription hypothesis is selected and then translated, independent of the translation model formalism used. More subtle forms of ambiguity that arise even in text-only translation (such as decisions conventionally made during system development about how to preprocess text) are then discussed, and it is shown that the ambiguity-preserving paradigm can be employed in these cases as well, again leading to improved translation quality.
    Finally, a model for supervised learning is introduced that learns from training data in which a set (rather than a single element) of correct labels is provided for each training instance; it is used to learn a model of compound word segmentation, which serves as a preprocessing step in machine translation.
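    The gain from translating a hypothesis set rather than the 1-best transcription can be shown with a toy example. The phrase table, weights, and function names below are invented for illustration; real systems of the kind the dissertation describes represent the sets compactly with weighted finite-state transducers rather than explicit lists:

    ```python
    # Toy phrase table and translation-model scores (invented values).
    PHRASES = {"recognize speech": "reconnaitre la parole",
               "wreck a nice beach": "abimer une belle plage"}
    TRANSLATION_SCORE = {"reconnaitre la parole": 0.9,
                         "abimer une belle plage": 0.4}

    def translate_1best(hypotheses):
        """Baseline: commit to the highest-weight transcription, then translate."""
        text = max(hypotheses, key=lambda h: h[1])[0]
        return PHRASES.get(text)

    def translate_set(hypotheses):
        """Ambiguity-preserving: translate every (transcription, weight)
        hypothesis and pick the translation maximizing the joint score
        acoustic_weight * translation_score."""
        best, best_score = None, float("-inf")
        for text, weight in hypotheses:
            out = PHRASES.get(text)
            if out is None:
                continue
            score = weight * TRANSLATION_SCORE[out]
            if score > best_score:
                best, best_score = out, score
        return best
    ```

    With hypotheses [("wreck a nice beach", 0.55), ("recognize speech", 0.45)], the 1-best baseline commits to the slightly higher-weighted mishearing, while the set-based system lets the translation model overrule it, which is the effect the dissertation measures at scale.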