
    Plantation Schools: A History of Rural Black One-Room Schools in the Mid-South and the Mississippi Delta from Reconstruction to 1968

    This dissertation addressed rural black one-room schools in the Mid-South and Mississippi Delta, with particular emphasis on Bolivar and Marshall Counties in Mississippi and Fayette County in Tennessee. From Reconstruction to 1968, one-room or one-teacher schools were the predominant model used to educate black students in the Lower South. Influenced by an agrarian economy and white plantation elites, rural black schools provided a minimal eighth-grade education that was disproportionately funded and staffed in comparison to white schools. While African Americans constituted the majority of the population in these regions, black education was never considered a priority among white-controlled school boards, state education administrators, or elected officials. In fact, the substandard education provided to black students was deemed adequate by whites, who realized that under-educating African Americans maintained white political and socio-economic dominance. Due to the lack of economic and occupational diversity that emerged after the Civil War, the educational experiences of blacks in the states of the Upper South differed from those of blacks who lived in the cotton plantation regions of the Lower South. The political, economic, and social limitations imposed upon rural one-room schools affected the quality, duration, and type of education that millions of African Americans received in the Lower South. By comparison, blacks who lived in regions of the Upper South and Border States exercised a degree of autonomy whereby they could control the day-to-day operations of their schools. An understanding of the historical correlation between rural black one-room schools, Jim Crow, cotton tenancy, and migration is crucial because these factors defined and shaped the lives of blacks in both the Upper and Lower South. Although the landmark Brown v. Board of Education decision was a sweet victory, it soured in the Lower South as whites initiated numerous campaigns to maintain separate school systems and perpetuate antebellum ideology. One-room schools survived in many Tennessee and Mississippi counties until court-ordered school desegregation was implemented during the mid-1960s. For nearly one hundred years, African Americans endured an inferior school system that imposed white supremacy and Jim Crow as the foundation for black education.

    Space station attached payload program support

    The USRA is providing management and technical support for the peer review of the Space Station Freedom Attached Payload proposals. USRA is arranging for consultants to evaluate the proposals, arranging meeting facilities for the reviewers in Huntsville, Alabama, and managing the actual review meetings. Assistance in developing an Experiment Requirements Data Base, as well as Engineering/Technical Assessment support for the MSFC Technical Evaluation Team, is also being provided. USRA will coordinate the results of the project into a consistent set of reviews and reports. The strengths-and-weaknesses analysis provided by the peer panel reviewers will be used by NASA personnel in the selection of experiments for implementation on the Space Station Freedom.

    Heat transfer coefficients for condensing vapors on a horizontal tube

    Film coefficients of heat transfer for vapors condensing on a smooth horizontal tube have been experimentally determined by many investigators. The so-called Thermocouple Method and the Wilson Method have been used and generally accepted as the two methods for determining these film coefficients. A number of different organic vapors have been studied on various condensing surfaces. Nusselt has also developed a theoretical equation, $h_N = 0.725\,\sqrt[4]{k_f^3 \rho_f^2 \lambda g / (D_o \mu_f \Delta t_{cf})}$, for evaluating the film coefficients of vapors condensing on horizontal tubes. Earlier investigators have shown that the theoretical values, $h_N$, do not agree with the experimentally determined values, $h_e$, and that many discrepancies exist among different experimentally determined values. Advocates of the Wilson Method have discredited the accuracy of the Thermocouple Method, and vice versa. With these points in mind, the experimental work in this thesis was undertaken. Film coefficients of heat transfer were experimentally determined for single-vapor systems of each of four alcohols condensing on a smooth horizontal tube. Both methods of evaluating the heat transfer coefficients were used. Methyl alcohol, isopropyl alcohol, n-propyl alcohol, and n-butyl alcohol were studied. A single piece of equipment, designed to eliminate problems noted by earlier authors of similar work, was used for the entire investigation. Data were taken for both methods simultaneously and under identical operating conditions. The experimental results of this investigation showed the film coefficients of heat transfer obtained by both accepted methods to be of the same order of magnitude for each particular compound. It can be concluded, therefore, that the discrepancies in earlier data are probably due to other factors and not to the use of either the Wilson Method or the Thermocouple Method. The results also showed definite evidence that $h_e$ for vapors condensing on a horizontal tube decreases with increasing molecular weight within the homologous alcohol series.
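    As a quick illustration of the Nusselt equation above (not material from the thesis itself), the following minimal Python sketch evaluates $h_N$; the property values used are rough, assumed figures for methanol condensing at around atmospheric pressure, chosen only to show the calculation.

```python
# Hedged sketch: evaluating Nusselt's film-condensation equation
#   h_N = 0.725 * (k_f^3 * rho_f^2 * lambda * g / (D_o * mu_f * dT_cf))^(1/4)
# All property values below are illustrative assumptions, not thesis data.

def nusselt_film_coefficient(k_f, rho_f, latent_heat, D_o, mu_f, dT_cf, g=9.81):
    """Mean film coefficient (W/m^2-K) for a vapor condensing on a horizontal tube.

    k_f:         liquid film thermal conductivity (W/m-K)
    rho_f:       liquid film density (kg/m^3)
    latent_heat: latent heat of condensation, lambda (J/kg)
    D_o:         tube outside diameter (m)
    mu_f:        liquid film viscosity (Pa-s)
    dT_cf:       temperature drop across the condensate film (K)
    """
    return 0.725 * ((k_f**3 * rho_f**2 * latent_heat * g)
                    / (D_o * mu_f * dT_cf)) ** 0.25

# Rough, assumed properties for methanol condensing at ~1 atm:
h = nusselt_film_coefficient(k_f=0.19, rho_f=750.0, latent_heat=1.1e6,
                             D_o=0.019, mu_f=3.3e-4, dT_cf=10.0)
print(f"h_N = {h:.0f} W/m^2-K")   # on the order of a few thousand W/m^2-K
```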

    Alien Registration- Brown, Maurice S. (Baldwin, Cumberland County)

    https://digitalmaine.com/alien_docs/32881/thumbnail.jp

    Compact Native Code Generation for Dynamic Languages on Micro-core Architectures

    Micro-core architectures combine many simple, low-memory, low-power CPU cores onto a single chip. Potentially providing significant performance at low power consumption, this technology is of great interest not only for embedded, edge, and IoT uses, but also as accelerators for data-center workloads. Due to the restricted nature of such CPUs, these architectures have traditionally been challenging to program, not least because of the very constrained amounts of memory (often around 32KB) and the idiosyncrasies of the technology. More recently, dynamic languages such as Python have been ported to a number of micro-cores, but these ports are often delivered as interpreters, which carry an associated performance limitation. Targeting the four objectives of performance, unlimited code size, portability between architectures, and maintaining the programmer-productivity benefits of dynamic languages, the limited memory available means that classic techniques employed by dynamic-language compilers, such as just-in-time (JIT) compilation, are simply not feasible. In this paper we describe the construction of a compilation approach for dynamic languages on micro-core architectures which aims to meet these four objectives, and we use Python as a vehicle for exploring its application in replacing the existing micro-core interpreter. Our experiments focus on the metrics of performance, architecture portability, minimum memory size, and programmer productivity, comparing our approach against that of writing native C code. The outcome of this work is the identification of a series of techniques that are not only suitable for compiling Python code, but also applicable to a wide variety of dynamic languages on micro-cores.

    Comment: Preprint of paper accepted to ACM SIGPLAN 2021 International Conference on Compiler Construction (CC 2021).

    Benchmarking micro-core architectures for detecting disasters at the edge

    Leveraging real-time data to detect disasters such as wildfires, extreme weather, earthquakes, tsunamis, human health emergencies, or global diseases is an important opportunity. However, much of this data is generated in the field, and the volumes involved make it impractical to transmit back to a central data-centre for processing. Instead, edge devices are required to generate insights from the sensor data streaming in, and an important question, given the severe performance and power constraints these devices must operate under, is that of the most suitable CPU architecture. One class of device that we believe has a significant role to play here is micro-cores, which combine many simple, low-power cores in a single chip. However, there are many to choose from, and an important question is which is most suited to which situation. This paper presents the Eithne framework, designed to simplify benchmarking of micro-core architectures. Three benchmarks, LINPACK, DFT, and FFT, have been implemented atop this framework, and we use these to explore the key characteristics and concerns of common micro-core designs within the context of operating at the edge for disaster detection. The result of this work is an extensible framework that the community can use to help develop and test these devices in the future.

    Comment: Preprint of paper accepted to IEEE/ACM Second International Workshop on the use of HPC for Urgent Decision Making (UrgentHPC).
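    The abstract above names DFT as one of its benchmark kernels; purely as an illustration (not code from the Eithne framework), here is a minimal sketch of the kind of naive O(N^2) DFT kernel and timing harness such a benchmark might run on a memory-constrained device.

```python
# Illustrative only: a naive O(N^2) DFT kernel of the sort a DFT benchmark
# might time. Not taken from the Eithne framework.
import cmath
import math
import time

def dft(signal):
    """Compute the discrete Fourier transform of a sequence of real samples."""
    n = len(signal)
    return [sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

# Simple timing harness on a small synthetic input, sized with a
# few-kilobyte memory budget in mind.
samples = [float(i % 8) for i in range(256)]
start = time.perf_counter()
spectrum = dft(samples)
elapsed = time.perf_counter() - start
print(f"256-point DFT took {elapsed * 1e3:.2f} ms")
```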

    High level programming abstractions for leveraging hierarchical memories with micro-core architectures

    Micro-core architectures combine many low-memory, low-power computing cores together in a single package. These are attractive for use as accelerators, but due to limited on-chip memory and multiple levels of memory hierarchy, the way in which programmers offload kernels needs to be carefully considered. In this paper we use Python as a vehicle for exploring the semantics and abstractions of higher-level programming languages to support the offloading of computational kernels to these devices. By moving to a pass-by-reference model, along with leveraging memory kinds, we demonstrate the ability to easily and efficiently take advantage of multiple levels in the memory hierarchy, even ones that are not directly accessible to the micro-cores. Using a machine learning benchmark, we perform experiments on both Epiphany-III and MicroBlaze-based micro-cores, demonstrating the ability to compute with data sets of arbitrarily large size. To provide context for our results, we explore the performance and power efficiency of these technologies, demonstrating that whilst these two micro-core technologies are competitive within their own embedded class of hardware, there is still a way to go to reach HPC-class GPUs.

    Comment: Accepted manuscript of paper in Journal of Parallel and Distributed Computing 13
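    The abstract does not spell out its programming interface; purely to illustrate what a pass-by-reference offload model with memory kinds might look like, here is a hedged sketch. The MemKind enum, DeviceBuffer class, and offload decorator are hypothetical names invented for this example, not the paper's actual API.

```python
# Hypothetical sketch of kernel offload with memory kinds; all names here
# (MemKind, DeviceBuffer, offload) are invented for illustration and are
# not the paper's actual interface.
from enum import Enum

class MemKind(Enum):
    CORE_LOCAL = "core_local"  # fast scratchpad on the micro-core (~32KB)
    SHARED = "shared"          # on-chip memory shared between cores
    HOST = "host"              # large host DRAM, not directly addressable

class DeviceBuffer:
    """A reference to data placed at a particular level of the hierarchy.

    Kernels receive this handle (pass by reference) rather than a copy,
    so large data sets can stay in host memory and be streamed in.
    """
    def __init__(self, data, kind):
        self.data = list(data)
        self.kind = kind

def offload(kernel):
    """Stand-in decorator: on real hardware this would compile and launch
    the kernel on the micro-cores; here it simply runs it locally."""
    def launch(*buffers):
        return kernel(*buffers)
    return launch

@offload
def scale(vec: DeviceBuffer, out: DeviceBuffer):
    # The kernel sees references, not copies, of the buffers.
    for i, x in enumerate(vec.data):
        out.data[i] = 2.0 * x

big = DeviceBuffer(range(100_000), MemKind.HOST)    # too big for a core
result = DeviceBuffer([0.0] * 100_000, MemKind.HOST)
scale(big, result)
print(result.data[:5])  # [0.0, 2.0, 4.0, 6.0, 8.0]
```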

    Estimating the correlation between operational risk loss categories over different time horizons

    Operational risk is challenging to quantify because of the broad range of categories (fraud, technological issues, natural disasters) and the heavy-tailed nature of realized losses. Operational risk modeling requires quantifying how these broad loss categories are related. We focus on the issue of loss frequencies having different time scales (e.g., a daily, monthly, or yearly basis), specifically on estimating the statistics of losses over arbitrary time horizons. We present a frequency model to which mathematical techniques can feasibly be applied to analytically calculate the means, variances, and covariances, with accuracy comparable to more time-consuming Monte Carlo simulations. We show that the analytic calculation of cumulative loss statistics in an arbitrary time window is feasible here, where it would otherwise be intractable due to temporal correlations. Our work has potential value because these statistics are crucial for approximating correlations of losses via copulas. We systematically vary all model parameters to demonstrate the accuracy of our methods for calculating all first- and second-order statistics of aggregate loss distributions. Finally, using combined data from a consortium of institutions, we show that different time horizons can lead to a large range of loss statistics, which can significantly affect calculations of capital requirements.

    Comment: 27 pages, 11 figures, 6 tables
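    The paper's model is more elaborate (it handles temporal correlations across loss categories), but as a toy illustration of checking analytic aggregate-loss moments against Monte Carlo, consider an uncorrelated compound Poisson model: over a horizon T with event rate lambda and i.i.d. severities X, the aggregate loss S has E[S] = lambda*T*E[X] and Var[S] = lambda*T*E[X^2]. The sketch below uses arbitrary parameter values and a lognormal severity, none of which come from the paper.

```python
# Toy check, not the paper's model: analytic mean/variance of a compound
# Poisson aggregate loss versus Monte Carlo. Parameters are arbitrary.
import math
import random

lam = 5.0              # expected loss events per year
T = 2.0                # time horizon in years
mu, sigma = 1.0, 0.8   # lognormal severity parameters

# Analytic moments: E[S] = lam*T*E[X], Var[S] = lam*T*E[X^2]
ex = math.exp(mu + sigma**2 / 2)        # E[X] for a lognormal
ex2 = math.exp(2 * mu + 2 * sigma**2)   # E[X^2] for a lognormal
mean_analytic = lam * T * ex
var_analytic = lam * T * ex2

def poisson_sample(rate):
    """Sample Poisson(rate) by counting unit-rate exponential arrivals."""
    count, t = 0, random.expovariate(1.0)
    while t < rate:
        count += 1
        t += random.expovariate(1.0)
    return count

def simulate_aggregate():
    """One Monte Carlo draw of the aggregate loss over the horizon T."""
    n = poisson_sample(lam * T)
    return sum(random.lognormvariate(mu, sigma) for _ in range(n))

samples = [simulate_aggregate() for _ in range(100_000)]
mean_mc = sum(samples) / len(samples)
var_mc = sum((s - mean_mc) ** 2 for s in samples) / (len(samples) - 1)

print(f"mean: analytic {mean_analytic:.2f}  MC {mean_mc:.2f}")
print(f"var:  analytic {var_analytic:.2f}  MC {var_mc:.2f}")
```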