973 research outputs found

    Performed Being: Word Art as a Human Inheritance

    The study of the oral tradition presently lies at the crossroads of several new lines of research that promise to transform the shape of literary criticism and critical theory forever. The nature of this change may perhaps be indicated by an analogy with the revolution in the study of biology which was wrought by the theory of evolution. --Page 66.

    Frederick Turner (University of Texas, Dallas), former editor of the Kenyon Review, is at home in anthropology and modern science as well as literary studies. He is also a well-published poet, whose book-length epic poem The New World appeared in 1985. His essays range from an examination of reflexivity in Thoreau to a study of space and time in Chinese verse, and on to the collection entitled Natural Classicism (1985).

    Toward Future Muir Biographies: Problems and Prospects


    Toward an Evolutionary Ontology of Beauty

    Symposium: Rules for Art in Oral Tradition. Proceedings from the 1988 Modern Language Association section.

    Differential spectrophotometric analysis of intravenous admixtures containing metaraminol with selected corticosteroids

    Intravenous admixtures, consisting of one or more drug formulations diluted in a large volume of intravenous solution, are used extensively in current medical practice. Development of reliable methods to predict the compatibility of admixture components is essential as a guide for their selection. Because the majority of reports published to date have used a visible change as the criterion for judging compatibility, the purpose of this study was to demonstrate the use of spectrophotometric analysis to detect chemical interactions between two components of an intravenous admixture.
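The differential principle behind an analysis like this can be sketched briefly. Under the Beer-Lambert law, absorbances of non-interacting components are additive, so a measured mixture spectrum that deviates from the sum of the individual component spectra suggests a chemical interaction. The following sketch uses invented spectra and a hypothetical noise threshold, not data from the study:

```python
# Illustrative only: all absorbance values and the noise threshold below
# are made-up numbers, not measurements from this study.

wavelengths_nm = [250, 260, 270, 280, 290]           # sampling points
metaraminol_abs = [0.10, 0.22, 0.35, 0.28, 0.15]     # component A alone
corticosteroid_abs = [0.30, 0.25, 0.18, 0.12, 0.08]  # component B alone
mixture_abs = [0.41, 0.48, 0.52, 0.40, 0.23]         # measured admixture

def difference_spectrum(mixture, *components):
    """Measured mixture absorbance minus the sum of component absorbances."""
    return [m - sum(vals) for m, vals in zip(mixture, zip(*components))]

diff = difference_spectrum(mixture_abs, metaraminol_abs, corticosteroid_abs)

# A difference spectrum near zero at every wavelength indicates additive
# (compatible) behaviour; systematic deviations beyond instrument noise
# suggest an interaction between the components.
threshold = 0.02  # assumed instrument noise level
interacting = any(abs(d) > threshold for d in diff)
```

Here the residuals all fall within the assumed noise band, so the sketch would judge the pair compatible; a real analysis would of course calibrate the threshold against the instrument.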

    Taming the Wilde: Collaborating with Expertise for Faster, Better, Smarter Collection Analysis

    The importance of collection assessment and evaluation has become a hot topic due to increasing budget restrictions and the need to prove worth to stakeholders through evidence-based evaluations. More robust collection analyses, such as comparisons of holdings usage to ILL requests and gap analyses, are increasingly embraced by the library community. Less thought, however, has been given to how best to conduct these analyses to ensure that the cleanest data is used and that the data tells the right story. The data for these types of analyses often reside in complex systems and web environments, which may not be fully understood by collection managers or subject librarians. The University of Houston Libraries embarked on a large-scale gap analysis of the collection by subject area. The key to success was quickly, accurately, and properly mining data sources such as Sierra and the electronic resource management system. Our collection team contends that collaboration with experts in the Resource Discovery Systems Department allowed the team to develop complete and accurate datasets more quickly, and helped to shape the analysis conducted. This paper discusses the challenges of defining project scope, the process of forming the methodology, and the challenges of collecting the data. It also reviews how experts contributed to each step of the process. Finally, it outlines some initial findings of the analysis and how the research was accomplished in a realistic time frame.
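The core comparison described above, holdings usage against ILL requests per subject, can be sketched in a few lines. The subject names, counts, and ratio threshold below are invented placeholders, not University of Houston data:

```python
# Hypothetical gap analysis: flag subjects where ILL demand is high
# relative to local holdings usage. All numbers are invented examples.

holdings_usage = {"History": 420, "Chemistry": 150, "Nursing": 35}
ill_requests   = {"History": 12,  "Chemistry": 95,  "Nursing": 88}

def find_gaps(usage, ill, ratio_threshold=0.5):
    """Return subjects whose ILL-to-usage ratio exceeds the threshold,
    a possible sign the local collection has a gap in that area."""
    gaps = []
    for subject in usage:
        requests = ill.get(subject, 0)
        if requests / max(usage[subject], 1) > ratio_threshold:
            gaps.append(subject)
    return sorted(gaps)

gaps = find_gaps(holdings_usage, ill_requests)  # ["Chemistry", "Nursing"]
```

In practice the hard part, as the paper argues, is producing clean versions of these two datasets from systems like Sierra in the first place; the comparison itself is simple once the data is trustworthy.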

    Malware classification using self organising feature maps and machine activity data

    In this article we use machine activity metrics to automatically distinguish between malicious and trusted portable executable software samples. The motivation stems from the growth of cyber attacks using techniques that have been employed to surreptitiously deploy Advanced Persistent Threats (APTs). APTs are becoming more sophisticated and able to obfuscate much of their identifiable features through encryption, custom code bases and in-memory execution. Our hypothesis is that we can achieve a high degree of accuracy in distinguishing malicious from trusted samples using machine learning with features derived from the inescapable footprint left behind on a computer system during execution. This footprint includes CPU, RAM and swap use, and network traffic at a count level of bytes and packets. These features are continuous and allow us to be more flexible in the classification of samples than discrete features, such as API calls (which can also be obfuscated), that form the main feature of the extant literature. We use these continuous data to develop a novel classification method based on Self-Organising Feature Maps, which reduce overfitting during training by creating unsupervised clusters of similar 'behaviour' that are subsequently used as features for classification in place of the raw data. We compare our method against a set of machine classification methods applied in previous research and demonstrate that, on an unseen dataset, it improves classification accuracy by between 7.24% and 25.68% over those methods.
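The pipeline described above, mapping continuous activity vectors onto an unsupervised SOM grid and then classifying via the learned clusters, can be sketched as a toy example. The feature vectors, grid size, labels, and learning schedule here are all illustrative assumptions, not the paper's data or configuration:

```python
# Toy Self-Organising Feature Map sketch: cluster continuous machine-activity
# vectors (e.g. CPU, RAM, packet and byte counts, normalised to 0..1), then
# classify samples via the label of their best-matching unit (BMU).
# All samples and hyperparameters below are invented for illustration.
import math
import random

random.seed(0)

GRID = 3  # 3x3 map of nodes
DIM = 4   # e.g. [cpu, ram, packets, bytes], each normalised to 0..1

# Synthetic, made-up training samples: (feature vector, label).
data = [([0.9, 0.8, 0.9, 0.9], "malicious"), ([0.8, 0.9, 0.8, 0.8], "malicious"),
        ([0.1, 0.2, 0.1, 0.1], "trusted"),   ([0.2, 0.1, 0.2, 0.2], "trusted")]

# Initialise node weight vectors randomly.
nodes = {(i, j): [random.random() for _ in range(DIM)]
         for i in range(GRID) for j in range(GRID)}

def bmu(vec):
    """Best-matching unit: the node whose weights are closest to the sample."""
    return min(nodes, key=lambda n: sum((w - v) ** 2 for w, v in zip(nodes[n], vec)))

# Training loop with a shrinking neighbourhood radius and learning rate.
for t in range(200):
    vec, _ = random.choice(data)
    lr = 0.5 * (1 - t / 200)
    radius = 2.0 * (1 - t / 200) + 0.5
    bi, bj = bmu(vec)
    for (i, j), w in nodes.items():
        dist = math.hypot(i - bi, j - bj)
        if dist <= radius:
            influence = math.exp(-(dist ** 2) / (2 * radius ** 2))
            for k in range(DIM):
                w[k] += lr * influence * (vec[k] - w[k])

# Use the BMU as an unsupervised cluster feature: label each node by majority
# vote of the training samples that map to it, then classify new samples by
# their BMU's label rather than by the raw data.
node_labels = {}
for vec, label in data:
    node_labels.setdefault(bmu(vec), []).append(label)
node_labels = {n: max(set(ls), key=ls.count) for n, ls in node_labels.items()}

def classify(vec):
    unit = bmu(vec)
    # Fall back to the nearest labelled node if this unit saw no training data.
    if unit not in node_labels:
        unit = min(node_labels, key=lambda n: math.hypot(n[0] - unit[0], n[1] - unit[1]))
    return node_labels[unit]

pred = classify([0.85, 0.85, 0.9, 0.85])  # activity resembling the malicious samples
```

The design point this illustrates is the one the abstract makes: the classifier sees cluster assignments (BMU positions) rather than raw continuous measurements, which acts as a form of dimensionality reduction that can limit overfitting during training.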

    The West and American Ideals

    "All that was buoyant and creative in American life would be lost if we gave up the respect for distinct personality, and variety in genius, and came to the dead level of common standards.
    • 
