
    A Note on Extreme Sets

    In decomposition theory, extreme sets have been studied extensively due to their connection to perfect matchings in a graph. In this paper, we first define extreme sets with respect to degree-matchings and then investigate some of their properties. In particular, we prove the generalized Decomposition Theorem and give a characterization of the set of all extreme vertices in a graph.
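    For orientation, the classical notion from matching theory (which the degree-matching variant studied in the paper generalizes) can be stated as below. The formulation follows the standard matching-theory definition and is not taken from the paper itself.

```latex
% Classical definition (ordinary matchings); the paper's degree-matching
% version generalizes this notion.
\[
  X \subseteq V(G) \ \text{is \emph{extreme}} \iff
  \operatorname{def}(G - X) = \operatorname{def}(G) + |X|,
\]
% where \operatorname{def}(H) is the number of vertices of H
% left uncovered by a maximum matching.
```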

    On the modeling of tensile index from larger data sets

    The objective of this study is to analyze and anticipate potential outliers in pulp and handsheet properties for larger data sets. The method is divided into two parts: a generalized Extreme Studentized Deviate (ESD) procedure for laboratory data, followed by an analysis of the findings using a multivariable model based on internal variables (i.e., process variables such as consistency and fiber residence time inside the refiner) as predictors. The process data used in this study were obtained from CD-82 refiners, and from a laboratory test-program perspective the test series were extensive. In the procedure, more than 290 samples were analyzed to obtain stable outlier detection. Note that this set was obtained from pulp at one specific operating condition. When comparing such "secured" data sets with process data, it is shown that an extended procedure must be performed to obtain data sets that cover different operating points. Here, 100 pulp samples at different process conditions were analyzed. It is shown that only about 60 percent of all tensile index measurements were accepted in the procedure, which indicates the need to oversample when performing extensive trials to obtain reliable pulp and handsheet properties in TMP and CTMP processes.
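    The outlier-screening step lends itself to a short sketch. The Python outline below follows Rosner's textbook formulation of the generalized ESD test; the function name, significance level, and outlier cap are illustrative and not taken from the study's implementation.

```python
import numpy as np
from scipy import stats

def generalized_esd(x, max_outliers, alpha=0.05):
    """Generalized ESD test (Rosner, 1983). Returns indices of detected outliers."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    work, idx = x.copy(), np.arange(n)
    candidates, r_stats, lambdas = [], [], []

    for i in range(1, max_outliers + 1):
        mean, sd = work.mean(), work.std(ddof=1)
        dev = np.abs(work - mean)
        j = int(np.argmax(dev))
        r_stats.append(dev[j] / sd)          # i-th test statistic R_i
        candidates.append(idx[j])
        work, idx = np.delete(work, j), np.delete(idx, j)

        # Critical value lambda_i at significance level alpha
        p = 1 - alpha / (2 * (n - i + 1))
        t = stats.t.ppf(p, n - i - 1)
        lambdas.append((n - i) * t / np.sqrt((n - i - 1 + t**2) * (n - i + 1)))

    # Number of outliers is the largest i with R_i > lambda_i
    n_out = 0
    for i in range(max_outliers):
        if r_stats[i] > lambdas[i]:
            n_out = i + 1
    return candidates[:n_out]
```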

    Review of Middlemarch: A Study of Provincial Life

    I read Middlemarch for the first time in the Everyman's Library edition of 1930, a trim book in two volumes with a note by Leslie Stephen by way of Introduction. The note was taken from the essay on George Eliot in Hours in a Library, and is less than helpful to the reader. Stephen notices the high moral ideal George Eliot sets before us, but laments the absence of charm, or magic, which he found in her earlier works. The new Middlemarch from Everyman's Library is an elegant book in one volume, convenient in size and moderately priced. There are no notes on the text, but there is a Select Bibliography and a useful Chronology. The Introduction is by E. S. Shaffer, Reader in English and Comparative Literature in the School of Modern Languages, University of East Anglia. The new Introduction differs widely from that of Leslie Stephen, reflecting the changes in George Eliot criticism since Stephen's day. Dr. Shaffer sets the tone in her opening paragraph, where she places George Eliot with the best nineteenth-century European writers of both sexes. There is no seeking after charm or magic; the study of provincial life, in fiction, was a serious and grand theme which spread across Europe in George Eliot's lifetime. Comparisons are made with Balzac's Human Comedy, where the melodrama is more marked and is set against a background of extreme social unrest which did not accompany the political changes in England, except in outbursts here and there.

    Evaluation of machine learning architectures on the quantification of epistemic and aleatoric uncertainties in complex dynamical systems

    Machine learning methods for the construction of data-driven reduced-order models are used in an increasing variety of engineering domains, especially as a supplement to expensive computational fluid dynamics for design problems. An important check on the reliability of surrogate models is Uncertainty Quantification (UQ), a self-assessed estimate of the model error. Accurate UQ allows for cost savings by reducing both the required size of training data sets and the required safety factors, while poor UQ prevents users from confidently relying on model predictions. We examine several machine learning techniques, including both Gaussian processes and a family of UQ-augmented neural networks: ensemble neural networks (ENN), Bayesian neural networks (BNN), dropout neural networks (D-NN), and Gaussian neural networks (G-NN). We evaluate UQ accuracy (distinct from model accuracy) using two metrics: the distribution of normalized residuals on validation data, and the distribution of estimated uncertainties. We apply these metrics to two model data sets, representative of complex dynamical systems: an ocean engineering problem in which a ship traverses irregular wave episodes, and a dispersive wave turbulence system with extreme events, the Majda-McLaughlin-Tabak model. We present conclusions concerning model architecture and hyperparameter tuning. Comment: Submitted for publication to "Computer Methods in Applied Mechanics and Engineering"; 25 pages, 20 figures. arXiv admin note: text overlap with arXiv:1505.05424 by another author.
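    As a minimal illustration of ensemble-style UQ and the normalized-residual metric mentioned above, the sketch below trains a small ensemble on a toy one-dimensional problem. The architecture, data, and ensemble size are invented for the example and have no connection to the paper's ocean-engineering or Majda-McLaughlin-Tabak data sets.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Toy 1-D regression data standing in for a complex dynamical system
x = rng.uniform(-3, 3, size=(400, 1))
y = np.sin(2 * x[:, 0]) + 0.1 * rng.normal(size=400)
x_val = rng.uniform(-3, 3, size=(100, 1))
y_val = np.sin(2 * x_val[:, 0]) + 0.1 * rng.normal(size=100)

# Ensemble of independently initialized networks (ENN-style surrogate)
ensemble = [
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=s).fit(x, y)
    for s in range(5)
]
preds = np.stack([m.predict(x_val) for m in ensemble])   # (members, points)
mu, sigma = preds.mean(axis=0), preds.std(axis=0) + 1e-9

# UQ check: normalized residuals should look roughly standard normal
# if the uncertainty estimates are well calibrated.
z = (y_val - mu) / sigma
print(f"normalized residuals: mean={z.mean():.2f}, std={z.std():.2f}")
```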

    Inequality in South Africa: a possible solution within the labour market

    This study sets out to identify the most effective way in which persistently and unacceptably high levels of inequality can be reduced in South Africa. Three alternative approaches were identified from the literature and their impact explored statistically: the introduction of a 'Social Solidarity Grant'; a decrease in unemployment by 5%; and a narrowing of the skill premium through an expansion of tertiary education. It is important to note that the study makes no attempt at explaining how these outcomes might be implemented or achieved. Rather, it sets out to determine only the effect that such policies may have on measured inequality. It was found that, while the introduction of a new grant had a significant effect on inequality, the effect was once-off. The grant would be financed by individuals in the top decile through tax increases, which would be a complicated endeavour. Both job creation and a narrowing of the skills premium were significantly effective in decreasing inequality. The narrowing of the skills premium showed more promise due to its accelerating effectiveness in decreasing inequality over time and the fact that it directly addresses the problem of wage differentials. It was noted that the extreme levels of poverty and unemployment in South Africa may dampen enthusiasm for policies that narrow the skills premium to reduce inequality. These characteristics make job creation a more popular policy option because of the positive impact on poverty and unemployment as well as on inequality.
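    To make "measured inequality" concrete, the sketch below computes a Gini coefficient on a synthetic income distribution before and after a flat transfer financed by a proportional tax on the top decile. The distribution, grant size, and financing rule are invented for illustration; they are not the study's microdata or methodology.

```python
import numpy as np

def gini(income):
    """Gini coefficient of a non-negative income vector."""
    x = np.sort(np.asarray(income, dtype=float))
    n = len(x)
    cum = np.cumsum(x)
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

rng = np.random.default_rng(1)
income = rng.lognormal(mean=8.0, sigma=1.2, size=10_000)   # synthetic incomes

# Illustrative "social solidarity grant": flat transfer financed by a
# proportional tax on the top decile (all parameters are invented).
grant = 500.0
top = income >= np.quantile(income, 0.9)
tax_rate = grant * income.size / income[top].sum()
post = income.copy()
post[top] *= 1 - tax_rate
post += grant

print(f"Gini before: {gini(income):.3f}, after grant: {gini(post):.3f}")
```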

    Tests of a Semi-Analytical Case 1 and Gelbstoff Case 2 SeaWiFS Algorithm with a Global Data Set

    A semi-analytical algorithm was tested with a total of 733 points of either unpackaged or packaged-pigment data, with corresponding algorithm parameters for each data type. The 'unpackaged' type consisted of data sets that were generally consistent with the Case 1 CZCS algorithm and other well calibrated data sets. The 'packaged' type consisted of data sets apparently containing somewhat more packaged pigments, requiring modification of the absorption parameters of the model consistent with the CalCOFI study area. This resulted in two equally divided data sets. A more thorough scrutiny of these and other data sets using a semi-analytical model requires improved knowledge of the phytoplankton and gelbstoff of the specific environment studied. Since the semi-analytical algorithm depends upon four spectral channels, including the 412 nm channel, whereas most other algorithms do not, a means of testing data sets for consistency was sought. A numerical filter was developed to classify data sets into the above classes. The filter uses reflectance ratios, which can be determined from space. The sensitivity of such numerical filters to measurement errors resulting from atmospheric correction and sensor noise requires further study. The semi-analytical algorithm performed superbly on each of the data sets after classification, resulting in RMS1 errors of 0.107 and 0.121, respectively, for the unpackaged and packaged data-set classes, with little bias and slopes near 1.0. In combination, the RMS1 performance was 0.114. While these numbers appear rather sterling, one must bear in mind what mis-classification does to the results. Using an average or compromise parameterization on the modified global data set yielded an RMS1 error of 0.171, while using the unpackaged parameterization on the global evaluation data set yielded an RMS1 error of 0.284. So, without classification, the algorithm performs better globally using the average parameters than it does using the unpackaged parameters. Finally, the effects of even more extreme pigment packaging must be examined in order to improve algorithm performance at high latitudes. Note, however, that the North Sea and Mississippi River plume studies contributed data to the packaged and unpackaged classes, respectively, with little effect on algorithm performance. This suggests that gelbstoff-rich Case 2 waters do not seriously degrade performance of the semi-analytical algorithm.
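    The classify-then-validate workflow can be sketched as follows. The specific reflectance ratio, threshold, and log-space RMS definition below are assumptions made for illustration; they are not the actual numerical filter or the RMS1 statistic used in the paper.

```python
import numpy as np

def classify_packaging(rrs412, rrs443, threshold=0.95):
    """Toy reflectance-ratio filter (ratio and threshold are illustrative):
    low blue ratios are assigned to the 'packaged' pigment class,
    high ratios to 'unpackaged'."""
    ratio = np.asarray(rrs412) / np.asarray(rrs443)
    return np.where(ratio < threshold, "packaged", "unpackaged")

def rms_log_error(chl_retrieved, chl_in_situ):
    """RMS difference of log10 chlorophyll, one common way of expressing
    retrieval error in ocean-colour validation (assumed here, not the
    paper's exact RMS1 definition)."""
    d = np.log10(chl_retrieved) - np.log10(chl_in_situ)
    return float(np.sqrt(np.mean(d ** 2)))

# Example with made-up remote-sensing reflectances and chlorophyll values
rrs412 = np.array([0.0042, 0.0021, 0.0035])
rrs443 = np.array([0.0040, 0.0028, 0.0033])
print(classify_packaging(rrs412, rrs443))
print(rms_log_error(np.array([0.45, 1.8, 0.9]), np.array([0.5, 1.5, 1.0])))
```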