460 research outputs found

    USA v. Torres

    Get PDF
    USDC for the Eastern District of Pennsylvania

    A FRAMEWORK FOR INTELLIGENT VOICE-ENABLED E-EDUCATION SYSTEMS

    Get PDF
    Although the Internet has received significant attention in recent years, voice remains the most convenient and natural way of communicating, whether human to human or human to computer. In voice applications, users may have different needs, which require the system to reason, make decisions, be flexible, and adapt to requests during interaction. These needs have placed new requirements on voice application development, such as the use of advanced models, techniques, and methodologies that take into account the needs of different users and environments. The ability of a system to behave close to human reasoning is often cited as one of the major requirements for the development of voice applications. In this paper, we present a framework for an intelligent voice-enabled e-Education application and an adaptation of the framework for the development of a prototype Course Registration and Examination (CourseRegExamOnline) module. This study is a preliminary report on an ongoing e-Education project containing the following modules: enrollment, course registration and examination, enquiries/information, messaging/collaboration, e-Learning, and library. The CourseRegExamOnline module was developed using VoiceXML for the voice user interface (VUI), PHP for the web user interface (WUI), Apache as the middleware, and a MySQL database as the back-end. The system would offer dual access modes via the VUI and WUI. The framework would serve as a reference model for developing voice-based e-Education applications. The e-Education system, when fully developed, would meet the needs of students who are normal users as well as those with certain forms of disability, such as visual impairment or repetitive strain injury (RSI), that make reading and writing difficult.

    CONTROL OF SECURITIES SELLING

    Get PDF
    President Roosevelt in his inaugural address stated as one of the most important immediate necessities of the country a strict supervision of all banking and credits and investments. This statement is in line with his campaign criticism of the failure of the Republican national administration to check the inordinate inflation of security prices in 1929. There is no doubt that the President's program in this respect received a sympathetic hearing throughout the country. Many state legislatures are now considering changes in state laws regulating securities. It is interesting that some States with rigid blue sky laws seem to be quite as dissatisfied as other States which have so-called anti-fraud laws. President Whitney, of the New York Stock Exchange, has joined in the demand for new legislation. In his address before the Cleveland Chamber of Commerce on February 28th, he urged the adoption of a federal corporation law, or failing that, of uniform state laws, strictly regulating the issuance of securities, requiring full disclosure of corporate finance and severely punishing corporate frauds. When the general public and the experts agree that something is rotten in the state of our security selling, it would seem to be time to consider the present state of regulatory statutes and how they may be modified or improved, always realizing, however, that it is impossible to create honesty by statute, and that the problem of investment is more one of education than of legislation.

    A Compression-Based Toolkit for Modelling and Processing Natural Language Text

    Get PDF
    A novel compression-based toolkit for modelling and processing natural language text is described. The design of the toolkit adopts an encoding perspective—applications are considered to be problems in searching for the best encoding of different transformations of the source text into the target text. This paper describes a two-phase ‘noiseless channel model’ architecture that underpins the toolkit, which models text processing as lossless communication down a noise-free channel. The transformation and encoding performed in the first phase must be both lossless and reversible. The role of the verification and decoding second phase is to verify the correctness of the communication of the target text produced by the application. This paper argues that this encoding approach has several advantages over the decoding approach of the standard noisy channel model. The concepts abstracted by the toolkit’s design are explained together with details of the library calls. Pseudo-code is also given for the algorithms behind the applications that the toolkit implements, including encoding, decoding, classification, training (model building), parallel sentence alignment, word segmentation, and language segmentation. Experimental results, implementation details, memory usage, and execution speeds are also discussed for these applications.

    Societies

    Get PDF

    Daily Eastern News: June 22, 1994

    Get PDF
    https://thekeep.eiu.edu/den_1994_jun/1003/thumbnail.jp

    Domain-Specific Knowledge Acquisition for Conceptual Sentence Analysis

    Get PDF
    The availability of on-line corpora is rapidly changing the field of natural language processing (NLP) from one dominated by theoretical models of often very specific linguistic phenomena to one guided by computational models that simultaneously account for a wide variety of phenomena that occur in real-world text. Thus far, among the best-performing and most robust systems for reading and summarizing large amounts of real-world text are knowledge-based natural language systems. These systems rely heavily on domain-specific, handcrafted knowledge to handle the myriad syntactic, semantic, and pragmatic ambiguities that pervade virtually all aspects of sentence analysis. Not surprisingly, however, generating this knowledge for new domains is time-consuming, difficult, and error-prone, and requires the expertise of computational linguists familiar with the underlying NLP system. This thesis presents Kenmore, a general framework for domain-specific knowledge acquisition for conceptual sentence analysis. To ease the acquisition of knowledge in new domains, Kenmore exploits an on-line corpus using symbolic machine learning techniques and robust sentence analysis while requiring only minimal human intervention. Unlike most approaches to knowledge acquisition for natural language systems, the framework uniformly addresses a range of subproblems in sentence analysis, each of which traditionally had required a separate computational mechanism. The thesis presents the results of using Kenmore with corpora from two real-world domains: (1) to perform part-of-speech tagging, semantic feature tagging, and concept tagging of all open-class words in the corpus; (2) to acquire heuristics for part-of-speech disambiguation, semantic feature disambiguation, and concept activation; and (3) to find the antecedents of relative pronouns.