
    4. The g Beyond Factor Analysis

    The problem of g, essentially, concerns two very fundamental questions: (1) Why are scores on various mental ability tests positively correlated? and (2) Why do people differ in performance on such tests? SOME DEFINITIONS: To ensure that we are talking the same language, we must review a few definitions. Clarity, explicitness, and avoidance of excess meaning or connotative overtones are virtues of a definition. Aside from these properties, a definition per se affords nothing to argue about. It has nothing to do with truth or reality; it is a formality needed for communication. A mental ability test consists of a number of items. An item is a task on which a person's performance can be objectively scored, that is, classified (e.g., right or wrong, 1 or 0), or graded on a scale (e.g., poor, fair, good, excellent, or 0, 1, 2, 3), or counted (e.g., number of digits recalled, number of puzzle pieces fitted together within a time limit), or measured on a ratio scale (e.g., reaction time to a stimulus or the time interval between the presentation of a task and its completion). Objectively scored means that there is a high degree of agreement between observers or scorers or pointer readings in assigning a score to a person's performance on an item. An item measures an ability if performance on the item can be objectively scored such that a higher score represents better performance in the sense of being more accurate, more correct, quicker, more efficient, or in closer conformance to some standard, regardless of any value judgment concerning the aesthetic, moral, social, or practical worth of the optimum performance on the particular task. An item measures a mental (or cognitive) ability if very little or none of the individual-differences variance in task performance is associated with individual differences in physical capacity, such as sensory acuity or muscular strength, and if differences in item difficulty (percent passing) are uncorrelated with differences in physical capacities per se. In order for items to show individual differences in a given group of people, the items must vary in difficulty; that is, items without variance (0% or 100% passing) are obviously nonfunctional in a test intended to show individual differences. A test, like any scientific measurement, requires a standard procedure. This includes the condition that the requirements of the tasks composing the test must be understood by the testee through suitable instructions by the tester, and the fundaments of the task (i.e., the elements that it comprises) must already be familiar to the testee. Also, the testee must be motivated to perform the task. These conditions can usually be assured by the testee's demonstrating satisfactory performance on easy exemplars of the same item types as those in the test proper. Mental ability tests (henceforth called simply tests) that meet all these conditions can be made up in great variety, involving different sensory and response modalities, different media (e.g., words, numbers, symbols, pictures of familiar things, and objects), different types of task requirements (e.g., discrimination, generalization, recall, naming, comparison, decision, inference), and a wide range of task complexity. The variety of possible items and even item types seems limited only by the ingenuity of the inventors of test items.
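
    The standard factor-analytic answer to question (1) summarizes the all-positive intercorrelations with a single general factor. As a rough, hypothetical illustration only (nothing here comes from the chapter; the simulated scores and parameters are made up), the sketch below builds a correlation matrix from simulated test scores that share one common source of variance and estimates g-like loadings from its first principal component.

        # Hypothetical sketch, not from the chapter: simulate test scores that
        # share one common source of variance, check that they are all
        # positively correlated, and estimate general-factor loadings from the
        # first principal component of the correlation matrix.
        import numpy as np

        rng = np.random.default_rng(0)
        n_people, n_tests = 500, 6

        g = rng.normal(size=(n_people, 1))                 # common source of variance
        loadings = rng.uniform(0.5, 0.9, size=(1, n_tests))
        scores = g @ loadings + rng.normal(scale=0.7, size=(n_people, n_tests))

        R = np.corrcoef(scores, rowvar=False)              # test intercorrelations
        eigvals, eigvecs = np.linalg.eigh(R)               # eigenvalues in ascending order
        first = eigvecs[:, -1] * np.sqrt(eigvals[-1])      # first-component loadings
        first *= np.sign(first.sum())                      # make the loadings positive

        print("all intercorrelations positive:",
              bool((R[np.triu_indices(n_tests, 1)] > 0).all()))
        print("estimated g loadings:", np.round(first, 2))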

    The revival of early literature in England and Scotland from Percy to Scott 1765-1802

    The history of the revival of early literature in the eighteenth century is not the record of a list of publications which gradually brought knowledge and appreciation where ignorance and contempt had been the rule. The reasons for the wholesale neglect of early literature were deep and intricate, and the change which finally abolished those reasons was correspondingly complex. The change was not a mere addition to the amount of knowledge, though that enters in; nor can it be explained by the vague mention of that portmanteau word Romanticism, which is usually taken to explain all eighteenth-century literary anomalies. The disparagement of the poetry of earlier centuries was a natural corollary to the philosophy of the age applied to the history of literature. For early literature to come into its own, it was not enough that it be resurrected; the whole general literary attitude had to change, or the newly revealed literature of the past would be still-born. The revival, by one of those accidental but inevitable coincidences so frequent in literary history, was an active ingredient in bringing on the fundamental change that enabled it to survive. The revival of early literature, as in all rediscoveries of literary periods, was a combination of new knowledge and new appreciation. Usually the two phases are so interlocked that it is difficult to determine which gives the first impetus, but in this case, where the early stages of the movement were very slow and long drawn out, it is evident that the works of scholarship came before the appreciation. The progress of the revival throughout the eighteenth century bears the same relation to similar revivals in modern times as a slow-motion picture bears to a picture taken at normal speed. For the literary historian this chapter in English literature is particularly fruitful, for each step can be analysed and the slow progress traced in detail. A whole century elapsed between the publication of the early works bringing new knowledge of older literature and any widespread appreciation outside that of scholars. As a rule, the scholars themselves did not have interest in the literature as an impetus for their work. Very few of the early antiquarians and research men had any respect for the literary quality of the works they revived and annotated. The moving power behind many of the learned works on early literature was the century's love of pure learning for its own sake.

    Should Initial Mastectomy Rates Increase?

    Should the rate of mastectomy increase?

    Isolated bone cell types: functional characterization and PTH-induced in vitro differentiation


    Computer simulation of surface water hydrology and salinity with an application to studies of Colorado River management

    Management of a large river basin requires information regarding the interactions of variables describing the system. A method has been developed to determine these interactions so that the resources management within a given river basin can proceed in an optimal way. The method can be used as a planning tool to display how different management alternatives affect the behavior of the river system. Direct application is made to the Colorado River Basin. The Colorado River has a relatively low and highly variable streamflow. Allocated rights to the consumptive use of the river water exceed the present long-term average flow. The naturally high total dissolved solids concentration of the river water continues to increase due to the activities of man. Current management policies in the basin have been the products of compromises between the seven states and two countries which are traversed by the river or its tributaries. The anticipated use of the scarce supply of water in the extraction and processing of energy resources in the basin underscores the need for planning tools which can illuminate many possible management alternatives and their effects upon water supply, water quality, power production, and the other concerns of the Colorado River water users. A computer simulation model has been developed and used to simulate the effects of various management alternatives upon water conservation, water quality, and power production. The model generates synthetic sequences of streamflows and total dissolved solids (TDS) concentrations. The flows of water and TDS are then routed through the major reservoirs of the system, Lakes Powell and Mead. Characteristics of system behavior are examined from simulations using different streamflow sequences, upstream depletion levels, and reservoir operating policies. Reservoir evaporation, discharge, discharge salinity, and power generating capacity are examined. Simulation outputs show that the probability with which Lake Powell fails to supply a specified target discharge is highly variable. Simulations employing different streamflow sequences result in probabilities of reservoir failure which differ by as much as 0.1. Three levels of Upper Colorado River Basin demands are imposed on the model: 3.8 MAF/yr (4.7 km^3/yr), 4.6 MAF/yr (5.7 km^3/yr), and 5.5 MAF/yr (6.8 km^3/yr). Two levels of water demand are imposed below Lake Mead: 8.25 MAF/yr (10.2 km^3/yr) and 7.0 MAF/yr (8.6 km^3/yr). Although the effects of reservoir operations upon water quality are made uncertain by a lack of knowledge regarding the chemical limnology of Lake Powell, two possible lake chemistry models have been developed, and the predicted impacts of changes in reservoir operation upon water quality are presented. The current criteria for the operations of Lakes Powell and Mead are based upon 75 years of compromises and agreements between the various water interests in the Colorado River Basin. Simulations show that Lake Powell will be unable to conform to these operating constraints at the higher levels of water demand. An alternative form of reservoir operation is defined and compared to the existing policy on the basis of reliability of water supply, conservation of water, impact upon water quality, and the effect upon power generation. Ignoring the current institutional operating constraints, and attempting only to provide a reliable supply of water at the locations of water demand, is shown to be a superior management policy. This alternative policy results in the conservation of as much as 0.25 MAF/yr (0.3 km^3/yr) of water. The impact of the alternative operating policy upon hydroelectric power generation and the potential use of the conserved water for development of energy resources is discussed.
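
    The routing step described above can be illustrated with a toy mass-balance model. The sketch below is a simplified stand-in, not the thesis model: one fully mixed reservoir, lag-1 autocorrelated synthetic inflows, constant inflow salinity, and assumed parameter values (capacity, target discharge, evaporation) chosen only for the example.

        # Toy sketch of reservoir routing with a salinity balance, under the
        # assumptions stated above; it is not the thesis model. It counts how
        # often a fixed target discharge cannot be met and tracks the fully
        # mixed TDS concentration of the releases.
        import numpy as np

        rng = np.random.default_rng(1)
        years = 1000
        capacity = 24.3        # reservoir capacity, MAF (assumed)
        target_release = 8.23  # target discharge, MAF/yr (assumed)
        evap_full = 0.5        # evaporation at full pool, MAF/yr (assumed)
        inflow_mean = 13.5     # mean annual inflow, MAF/yr (assumed)
        tds_in = 500.0         # inflow TDS, mg/L (assumed constant)

        # Lag-1 autocorrelated synthetic streamflow sequence.
        inflow = np.empty(years)
        inflow[0] = inflow_mean
        for t in range(1, years):
            inflow[t] = max(0.0, inflow_mean
                            + 0.3 * (inflow[t - 1] - inflow_mean)
                            + rng.normal(0.0, 4.0))

        storage = capacity / 2
        salt = storage * tds_in          # salt mass in storage (MAF x mg/L units)
        failures, concs = 0, []

        for t in range(years):
            storage += inflow[t]
            salt += inflow[t] * tds_in
            evap = min(evap_full * storage / capacity, storage)
            storage -= evap                          # evaporation removes water, not salt
            conc = salt / storage if storage > 0 else 0.0
            concs.append(conc)
            release = min(target_release, storage)
            if release < target_release:
                failures += 1
            salt -= release * conc
            storage -= release
            if storage > capacity:                   # spill passes excess downstream
                salt -= (storage - capacity) * conc
                storage = capacity

        print(f"P(fail to meet target) ~ {failures / years:.3f}")
        print(f"mean release TDS ~ {np.mean(concs):.0f} mg/L (illustrative only)")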

    Secondary electron emission characteristics of molybdenum-masked, ion-textured OFHC copper

    A method for producing a uniform, highly textured surface on oxygen-free, high conductivity (OFHC) copper by ion bombardment, using sputtered molybdenum as a texture-inducing masking film, was developed and used to provide samples for study. The purpose was to develop a basically OFHC copper surface having very low secondary electron emission characteristics. Surfaces having low secondary electron emission are a requirement for the electrodes of very high efficiency multistage depressed collectors (MDC's). Such MDC's are used in microwave amplifier traveling wave tubes for space communications and other applications. OFHC copper is the material most commonly used for MDC electrodes because it has high thermal conductivity, it is easy to machine, and its fabrication and brazing procedures are well established. However, its untreated surface displays relatively high levels of secondary electron emission. Textured OFHC copper samples were tested for true secondary electron emission and relative reflected primary electron yield at primary electron beam energies from 200 to 2000 eV and at direct (0 deg) to oblique (60 deg) beam impingement angles. The test results for three of the samples, each of which was processed in a slightly different way, are compared with each other and with test results for a machined OFHC copper sample. Although the textured samples are not represented here as having been processed optimally, their measured secondary electron emission characteristics are significantly lower than those of the untreated OFHC copper sample over the range of conditions studied. Notably, the relative reflected primary electron yield of one of the textured samples is conspicuously lower than that of the others. With further development, the molybdenum-masked, ion-textured OFHC copper surface is a promising candidate for high-efficiency MDC electrodes.

    Darwinian Selection and Non-existence of Nash Equilibria

    We study selection acting on phenotype in a collection of agents playing local games that lack Nash equilibria. After each cycle, one of the agents losing the most games is replaced by a new agent with a new random strategy and game partner. The network generated can be considered critical in the sense that the lifetimes of the agents are power-law distributed. The longest-surviving agents are those with the lowest absolute score per time step. The emergent ecology is characterized by a broad range of behaviors. Nevertheless, the agents tend to be similar to their opponents in terms of performance. Comment: 4 pages, 5 figures
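
    A rough sketch of the replacement dynamics, based only on the abstract and not on the authors' model: matching pennies, which has no pure-strategy Nash equilibrium, stands in for the local games; strategies are fixed biases toward one move; every cycle the worst-scoring agent is replaced by a fresh agent with a random strategy and a new random partner, and agent lifetimes are recorded. All parameters are assumptions.

        # Toy version of the selection dynamics described above (assumptions:
        # matching pennies as the local game, bias-toward-heads strategies,
        # one replacement per cycle).
        import random

        random.seed(0)
        N, cycles = 100, 5000

        def new_agent(born, self_index):
            partner = random.choice([j for j in range(N) if j != self_index])
            return {"p_heads": random.random(), "partner": partner,
                    "score": 0, "born": born}

        agents = [new_agent(0, i) for i in range(N)]
        lifetimes = []

        for t in range(1, cycles + 1):
            # Each agent plays one round of matching pennies against its partner.
            for a in agents:
                b = agents[a["partner"]]
                move_a = random.random() < a["p_heads"]
                move_b = random.random() < b["p_heads"]
                a["score"] += 1 if move_a == move_b else -1  # matcher scores +1 when moves match
            # Replace one of the worst-scoring agents with a fresh random agent.
            worst = min(range(N), key=lambda i: agents[i]["score"])
            lifetimes.append(t - agents[worst]["born"])
            agents[worst] = new_agent(t, worst)

        print("mean lifetime:", sum(lifetimes) / len(lifetimes))
        print("longest lifetime:", max(lifetimes))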

    Absorbing boundaries in the conserved Manna model

    The conserved Manna model with a planar absorbing boundary is studied in various space dimensions. We present a heuristic argument that allows one to compute the surface critical exponent in one dimension analytically. Moreover, we discuss the mean-field limit that is expected to be valid in d > 4 space dimensions and demonstrate how the corresponding partial differential equations can be solved. Comment: 8 pages, 4 figures; v1 was changed by replacing the co-author's name "Lübeck" with "Lubeck" (metadata only)
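
    For readers unfamiliar with the model, the bulk dynamics can be sketched in a few lines: sites holding two or more grains are active and topple, sending each of two grains to a randomly chosen nearest neighbour. The sketch below is one dimensional and reads "absorbing boundary" as an open edge where grains hop out of the system and are lost; that reading, the lattice size, and the density are assumptions made for illustration, not statements about the paper.

        # Minimal 1D sketch of conserved-Manna-style dynamics with an open
        # (absorbing) left edge, under the assumptions stated above.
        import random

        random.seed(0)
        L = 100           # lattice size (assumed)
        density = 0.85    # initial grain density (assumed)

        lattice = [0] * L
        for _ in range(int(density * L)):
            lattice[random.randrange(L)] += 1

        steps = 0
        while steps < 200000:
            active = [i for i, z in enumerate(lattice) if z >= 2]
            if not active:
                break                         # absorbing configuration reached
            steps += 1
            i = random.choice(active)
            lattice[i] -= 2                   # active site topples two grains
            for _ in range(2):
                j = i + random.choice([-1, 1])
                if j < 0:
                    continue                  # grain lost at the absorbing boundary
                if j >= L:
                    j = L - 1                 # reflecting right wall (assumed)
                lattice[j] += 1

        print("toppling steps until absorbing state:", steps)
        print("grains remaining:", sum(lattice), "of", int(density * L))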

    Nipple-Sparing Mastectomy in 99 Patients With a Mean Follow-up of 5 Years

    Background. The safety and practicality of nipple-sparing mastectomy (NSM) are controversial. Methods. Review of a large breast center's experience identified 99 women who underwent intended NSM with subareolar biopsy and breast reconstruction for primary breast cancer. Outcome was assessed by biopsy status, postoperative nipple necrosis or removal, cancer recurrence, and cancer-specific death. Results. NSM was attempted for invasive cancer (64 breasts, 24 with positive lymph nodes), noninvasive cancer (35 breasts), and/or contralateral prophylaxis (50 breasts). Twenty-two nipples (14%) were removed because of positive subareolar biopsy results (frozen or permanent section). Seven patients underwent a pre-NSM surgical delay procedure because of increased risk for nipple necrosis. Reconstruction used transverse rectus abdominis myocutaneous flaps (56 breasts), latissimus flaps with expander (35 breasts), or expander alone (58 breasts). Of 127 retained nipples, 8 (6%) became necrotic and 2 others (2%) were removed at patient request. There was no nipple necrosis when NSM was performed after a surgical delay procedure. At a mean follow-up of 60.2 months, all 3 patients with recurrence had biopsy-proven subareolar disease and had undergone nipple removal at the original mastectomy. There were no deaths. Conclusions. The five-year recurrence rate is low when NSM margins (frozen section and permanent) are negative. Nipple necrosis can be minimized by incisions that maximize perfusion of surrounding skin and by avoiding long flaps. A premastectomy surgical delay procedure improves nipple survival in high-risk patients. NSM can be performed safely with all types of breast reconstruction.

    Entangled Economy: an ecosystems approach to modeling systemic level dynamics

    We present a model of an economy inspired by individual-based model approaches in evolutionary ecology. We demonstrate that evolutionary dynamics in a space of companies interconnected through a correlated interaction matrix produce time dependencies of the total size of the economy, the total number of companies, and the distributions of company age and capital that compare well with statistics for the USA. We discuss the relevance of our modeling framework to policy making. Comment: 25 pages, 11 figures
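
    A toy individual-based sketch in the spirit of the abstract, and not the authors' model: company capitals grow or shrink multiplicatively, coupled through a random symmetric interaction matrix; a company whose capital falls below a floor exits and is replaced by a new entrant. Every parameter and functional form below is an assumption made for illustration.

        # Toy individual-based economy: multiplicative capital dynamics coupled
        # through a random interaction matrix, with exit and replacement of
        # companies that drop below a capital floor (all values assumed).
        import numpy as np

        rng = np.random.default_rng(2)
        N, T = 50, 2000
        coupling = 0.02        # interaction strength (assumed)
        floor = 0.1            # bankruptcy threshold (assumed)

        J = rng.normal(0.0, 1.0, size=(N, N))
        J = 0.5 * (J + J.T)                    # symmetrize so i->j and j->i correlate
        np.fill_diagonal(J, 0.0)

        capital = rng.uniform(0.5, 2.0, size=N)
        age = np.zeros(N, dtype=int)
        total_size, exit_ages = [], []

        for t in range(T):
            interaction = coupling * (J @ capital) / (capital.sum() + 1e-9)
            growth = 0.001 + interaction + rng.normal(0.0, 0.05, size=N)
            capital *= np.exp(growth)          # multiplicative growth or decline
            age += 1
            bankrupt = capital < floor
            exit_ages.extend(age[bankrupt].tolist())
            capital[bankrupt] = rng.uniform(0.5, 2.0, size=int(bankrupt.sum()))
            age[bankrupt] = 0
            total_size.append(capital.sum())

        print("final total economy size:", round(float(total_size[-1]), 2))
        print("companies replaced:", len(exit_ages))
        if exit_ages:
            print("mean company age at exit:", round(float(np.mean(exit_ages)), 1))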