2,348 research outputs found

    4. The g Beyond Factor Analysis

    The problem of g, essentially, concerns two very fundamental questions: (1) Why are scores on various mental ability tests positively correlated? and (2) Why do people differ in performance on such tests? SOME DEFINITIONS To ensure that we are talking the same language, we must review a few definitions. Clarity, explicitness, and avoidance of excess meaning or connotative overtones are virtues of a definition. Aside from these properties, a definition per se affords nothing to argue about. It has nothing to do with truth or reality; it is a formality needed for communication. A mental ability test consists of a number of items. An item is a task on which a person's performance can be objectively scored, that is, classified (e.g., right or wrong, 1 or 0), or graded on a scale (e.g., poor, fair, good, excellent, or 0, 1, 2, 3), or counted (e.g., number of digits recalled, number of puzzle pieces fitted together within a time limit), or measured on a ratio scale (e.g., reaction time to a stimulus or the time interval between the presentation of a task and its completion). Objectively scored means that there is a high degree of agreement between observers or scorers or pointer readings in assigning a score to a person's performance on an item. An item measures an ability if performance on the item can be objectively scored such that a higher score represents better performance in the sense of being more accurate, more correct, quicker, more efficient, or in closer conformance to some standard, regardless of any value judgment concerning the aesthetic, moral, social, or practical worth of the optimum performance on the particular task.
An item measures a mental (or cognitive) ability if very little or none of the individual-differences variance in task performance is associated with individual differences in physical capacity, such as sensory acuity or muscular strength, and if differences in item difficulty (percent passing) are uncorrelated with differences in physical capacities per se. In order for items to show individual differences in a given group of people, the items must vary in difficulty; that is, items without variance (0% or 100% passing) are obviously nonfunctional in a test intended to show individual differences. A test, like any scientific measurement, requires a standard procedure. This includes the condition that the requirements of the tasks composing the test must be understood by the testee through suitable instructions by the tester; and the fundaments of the task (i.e., the elements that it comprises) must already be familiar to the testee. Also, the testee must be motivated to perform the task. These conditions can usually be assured by the testee's demonstrating satisfactory performance on easy exemplars of the same item types as those in the test proper. Mental ability tests (henceforth called simply tests) that meet all these conditions can be made up in great variety, involving different sensory and response modalities, different media (e.g., words, numbers, symbols, pictures of familiar things, and objects), different types of task requirements (e.g., discrimination, generalization, recall, naming, comparison, decision, inference), and a wide range of task complexity. The variety of possible items and even item types seems limited only by the ingenuity of the inventors of test items.
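The variance requirement above lends itself to a short illustration. The following sketch is illustrative only (the scoring data and function names are hypothetical, not from the text): it computes item difficulty as the proportion passing and drops items with zero variance.

```python
# Illustrative sketch: item difficulty (percent passing) for
# dichotomously scored items, and a filter that flags items with
# 0% or 100% passing, which cannot show individual differences.

def item_difficulty(responses):
    """Proportion of examinees passing a 0/1-scored item."""
    return sum(responses) / len(responses)

def functional_items(score_matrix):
    """Keep only items whose pass rate is strictly between 0 and 1."""
    n_items = len(score_matrix[0])
    keep = []
    for j in range(n_items):
        p = item_difficulty([row[j] for row in score_matrix])
        if 0.0 < p < 1.0:  # zero-variance items are nonfunctional
            keep.append(j)
    return keep

# Four examinees, three items; item 2 is passed by everyone.
scores = [
    [1, 0, 1],
    [1, 1, 1],
    [0, 0, 1],
    [1, 0, 1],
]
print(functional_items(scores))  # -> [0, 1]
```

Item 2 is dropped because every examinee passes it; it contributes nothing to the between-person variance the test is meant to capture.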

    Computer simulation of surface water hydrology and salinity with an application to studies of Colorado River management

    Management of a large river basin requires information regarding the interactions of variables describing the system. A method has been developed to determine these interactions so that the resources management within a given river basin can proceed in an optimal way. The method can be used as a planning tool to display how different management alternatives affect the behavior of the river system. Direct application is made to the Colorado River Basin. The Colorado River has a relatively low and highly variable streamflow. Allocated rights to the consumptive use of the river water exceed the present long-term average flow. The naturally high total dissolved solids concentration of the river water continues to increase due to the activities of man. Current management policies in the basin have been the products of compromises between the seven states and two countries which are traversed by the river or its tributaries. The anticipated use of the scarce supply of water in the extraction and processing of energy resources in the basin underscores the need for planning tools which can illuminate many possible management alternatives and their effects upon water supply, water quality, power production, and the other concerns of the Colorado River water users. A computer simulation model has been developed and used to simulate the effects of various management alternatives upon water conservation, water quality, and power production. The model generates synthetic sequences of streamflows and total dissolved solids (TDS) concentrations. The flows of water and TDS are then routed through the major reservoirs of the system, Lakes Powell and Mead. Characteristics of system behavior are examined from simulations using different streamflow sequences, upstream depletion levels, and reservoir operating policies. Reservoir evaporation, discharge, discharge salinity, and power generating capacity are examined.
Simulation outputs show that the probability with which Lake Powell fails to supply a specified target discharge is highly variable. Simulations employing different streamflow sequences result in probabilities of reservoir failure which differ by as much as 0.1. Three levels of Upper Colorado River Basin demands are imposed on the model: 3.8 MAF/yr (4.7 km^3/yr), 4.6 MAF/yr (5.7 km^3/yr), and 5.5 MAF/yr (6.8 km^3/yr). Two levels of water demand are imposed below Lake Mead: 8.25 MAF/yr (10.2 km^3/yr) and 7.0 MAF/yr (8.6 km^3/yr). Although the effects of reservoir operations upon water quality are made uncertain by a lack of knowledge regarding the chemical limnology of Lake Powell, two possible lake chemistry models have been developed, and the predicted impacts of changes in reservoir operation upon water quality are presented. The current criteria for the operations of Lakes Powell and Mead are based upon 75 years of compromises and agreements between the various water interests in the Colorado River Basin. Simulations show that Lake Powell will be unable to conform to these operating constraints at the higher levels of water demand. An alternative form of reservoir operation is defined and compared to the existing policy on the basis of reliability of water supply, conservation of water, impact upon water quality, and the effect upon power generation. Ignoring the current institutional operating constraints, and attempting only to provide a reliable supply of water at the locations of water demand, is shown to be a superior management policy. This alternate policy results in the conservation of as much as 0.25 MAF/yr (0.3 km^3/yr) of water. The impact of the alternate operating policy upon hydroelectric power generation and the potential use of the conserved water for development of energy resources is discussed.
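The routing-and-failure-probability logic can be sketched at a toy scale. This is a minimal annual mass-balance model, not the authors' simulation: the capacity, evaporation fraction, target release, and synthetic inflow process are all illustrative assumptions.

```python
# Hedged sketch: route a synthetic annual inflow sequence through a
# single reservoir and estimate the probability of failing to meet a
# target release. All quantities are dimensionless fractions of
# reservoir capacity; none are taken from the study.
import random

def route(inflows, capacity, target, evap_frac=0.03, storage=0.5):
    """Return the fraction of years the target release was not met."""
    failures = 0
    for q in inflows:
        storage += q
        storage -= evap_frac * storage          # crude evaporation loss
        release = min(storage, target)          # draw down toward target
        if release < target:
            failures += 1
        storage -= release
        storage = min(storage, capacity)        # spill anything above capacity
    return failures / len(inflows)

random.seed(1)
# Synthetic annual inflows fluctuating around 0.4 of capacity.
inflows = [max(0.0, random.gauss(0.4, 0.15)) for _ in range(500)]
p_fail = route(inflows, capacity=2.0, target=0.45)
print(round(p_fail, 3))
```

Rerunning with a different random seed changes `p_fail`, which is the abstract's point: failure probability estimates depend on the particular synthetic streamflow sequence used.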

    Accounting Hall of Fame 1998 induction: Arthur Ramer Wyatt

    For Arthur Ramer Wyatt's Induction, there were: Remarks by Donald E. Kieso, Northern Illinois University; Remarks by Jerry J. Weygandt, University of Wisconsin; Citation written by Daniel L. Jensen, The Ohio State University, read by Donald E. Kieso and Jerry J. Weygandt; Response by Arthur R. Wyatt, Arthur Andersen & Co., retired, and University of Illinois.

    Mechanisms of Self-Organization and Finite Size Effects in a Minimal Agent Based Model

    We present a detailed analysis of the self-organization phenomenon in which the stylized facts originate from finite size effects with respect to the number of agents considered and disappear in the limit of an infinite population. By introducing the possibility that agents can enter or leave the market depending on the behavior of the price, it is possible to show that the system self-organizes in a regime with a finite number of agents which corresponds to the stylized facts. The mechanism to enter or leave the market is based on the idea that a too stable market is unappealing for traders, while the presence of price movements attracts agents to enter and speculate on the market. We show that this mechanism is also compatible with the idea that agents are scared by a noisy and risky market at shorter time scales. We also show that the mechanism for self-organization is robust with respect to variations of the exit/entry rules and that the attempt to trigger the system to self-organize in a region without stylized facts leads to an unrealistic dynamics. We study the self-organization in a specific agent based model, but we believe that the basic ideas should be of general validity.
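The entry/exit mechanism can be illustrated with a toy sketch. This is not the paper's model: the thresholds, the 20-step window, and the price process are assumptions chosen only to show the feedback between price activity and population size.

```python
# Illustrative sketch of volatility-driven entry and exit: agents join
# when recent price movement is attractive, and leave when the market
# is either too quiet or too noisy. Thresholds are hypothetical.
import random

def step_population(n_agents, recent_moves, enter_thr=0.01, scare_thr=0.10):
    """Adjust the number of agents from the mean absolute recent move."""
    activity = sum(abs(m) for m in recent_moves) / len(recent_moves)
    if activity < enter_thr:        # too stable: traders drift away
        n_agents = max(1, n_agents - 1)
    elif activity > scare_thr:      # too risky at short scales: exit
        n_agents = max(1, n_agents - 1)
    else:                           # attractive movement: entry
        n_agents += 1
    return n_agents

random.seed(0)
n, price, history = 100, 1.0, []
for _ in range(1000):
    # Toy price impact: fluctuations shrink as the population grows.
    move = random.gauss(0.0, 0.5) / n
    price = max(0.01, price + move)
    history.append(move)
    if len(history) >= 20:
        n = step_population(n, history[-20:])
print(n >= 1)  # the population settles at a finite, positive size
```

Because fluctuations shrink as the population grows, the two rules push the system toward an intermediate, finite number of agents, which is the self-organization effect the abstract describes.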

    Origin of last-glacial loess in the western Yukon-Tanana Upland, central Alaska, USA

    Loess is widespread over Alaska, and its accumulation has traditionally been associated with glacial periods. Surprisingly, loess deposits securely dated to the last glacial period are rare in Alaska, and paleowind reconstructions for this time period are limited to inferences from dune orientations. We report a rare occurrence of loess deposits dating to the last glacial period, ~19 ka to ~12 ka, in the Yukon-Tanana Upland. Loess in this area is very coarse grained (abundant coarse silt), with decreases in particle size moving south of the Yukon River, implying that the drainage basin of this river was the main source. Geochemical data show, however, that the Tanana River valley to the south is also a likely distal source. The occurrence of last-glacial loess with sources to both the south and north is explained by both regional, synoptic-scale winds from the northeast and opposing katabatic winds that could have developed from expanded glaciers in both the Brooks Range to the north and the Alaska Range to the south. Based on a comparison with recent climate modeling for the last glacial period, seasonality of dust transport may also have played a role in bringing about contributions from both northern and southern sources.

    4. The School Develops

    Between 1947 and 1953, when M.P. Catherwood left the deanship to become New York’s industrial commissioner, the ILR School developed into a full-fledged enterprise. These pages attempt to capture some of the excitement of this period of the school’s history, which was characterized by vigor, growth, and innovation. Includes: Alumni Recall Their Lives as Students; The Faculty Were Giants; Alice Cook: Lifelong Scholar, Consummate Teacher; Frances Perkins; Visits and Visitors; Tenth Anniversary: Reflection and Change; The Emergence of Departments at ILR; Development of International Programs and Outreach.

    Three-dimensional sound propagation models using the parabolic-equation approximation and the split-step Fourier method

    Author Posting. © IMACS, 2012. This article is posted here by permission of World Scientific Publishing for personal use, not for redistribution. The definitive version was published in Journal of Computational Acoustics 21 (2013): 1250018, doi:10.1142/S0218396X1250018X. The split-step Fourier method is used in three-dimensional parabolic-equation (PE) models to compute underwater sound propagation in one direction (i.e. forward). The method is implemented in both Cartesian (x, y, z) and cylindrical (r, θ, z) coordinate systems, with forward defined as along x and radial coordinate r, respectively. The Cartesian model has uniform resolution throughout the domain, and has errors that increase with azimuthal angle from the x axis. The cylindrical model has consistent validity in each azimuthal direction, but a fixed cylindrical grid of radials cannot produce uniform resolution. Two different methods to achieve more uniform resolution in the cylindrical PE model are presented. One of the methods is to increase the grid points in azimuth, as a function of r, according to nonaliased sampling theory. The other is to make use of a fixed arc-length grid. In addition, a point-source starter is derived for the three-dimensional Cartesian PE model. Results from idealized seamount and slope calculations are shown to compare and verify the performance of the three methods. This work was sponsored by the Office of Naval Research under the grants N00014-10-1-0040 and N00014-11-1-0701.
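For readers unfamiliar with the split-step Fourier method, a minimal two-dimensional (range-depth) free-space sketch follows. The paper's models are three-dimensional and include environmental terms; the grid, wavenumber, and Gaussian starter here are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch of split-step Fourier marching for a 2-D parabolic
# equation in a homogeneous medium: diffraction is applied in the
# spectral (vertical-wavenumber) domain, refraction in the spatial
# domain. Parameters are illustrative only.
import numpy as np

def ssf_step(psi, dz, dx, k0, n_index):
    """Advance the PE field psi one range step dx via split-step Fourier."""
    kz = 2 * np.pi * np.fft.fftfreq(psi.size, d=dz)   # vertical wavenumbers
    # Diffraction step in the spectral domain.
    psi_hat = np.fft.fft(psi) * np.exp(-1j * kz**2 / (2 * k0) * dx)
    psi = np.fft.ifft(psi_hat)
    # Refraction step in the spatial domain (trivial here, n = 1).
    return psi * np.exp(1j * k0 * (n_index - 1.0) * dx)

# Gaussian starter marched 100 range steps through homogeneous water.
z = np.linspace(-50.0, 50.0, 256)
psi = np.exp(-(z / 5.0) ** 2).astype(complex)
e0 = np.sum(np.abs(psi) ** 2)
for _ in range(100):
    psi = ssf_step(psi, dz=z[1] - z[0], dx=1.0, k0=2 * np.pi / 1.5, n_index=1.0)
print(np.allclose(np.sum(np.abs(psi) ** 2), e0))  # energy conserved -> True
```

Both sub-steps multiply by unit-modulus phase factors, so the march is unitary and the field's energy is conserved, a basic sanity check for any split-step implementation.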

    AAPM Medical Physics Practice Guideline 2.a: Commissioning and quality assurance of X-ray–based image-guided radiotherapy systems

    The American Association of Physicists in Medicine (AAPM) is a nonprofit professional society whose primary purposes are to advance the science, education, and professional practice of medical physics. The AAPM has more than 8,000 members and is the principal organization of medical physicists in the United States. The AAPM will periodically define new practice guidelines for medical physics practice to help advance the science of medical physics and to improve the quality of service to patients throughout the United States. Existing medical physics practice guidelines will be reviewed for the purpose of revision or renewal, as appropriate, on their fifth anniversary or sooner. Each medical physics practice guideline represents a policy statement by the AAPM, has undergone a thorough consensus process in which it has been subjected to extensive review, and requires the approval of the Professional Council. The medical physics practice guidelines recognize that the safe and effective use of diagnostic and therapeutic radiology requires specific training, skills, and techniques, as described in each document. Reproduction or modification of the published practice guidelines and technical standards by those entities not providing these services is not authorized.

    Corporate governance and financial constraints on strategic turnarounds

    The paper extends the Robbins and Pearce (1992) two-stage turnaround response model to include governance factors. In addition to the retrenchment and recovery stages, the paper proposes the addition of a realignment stage, referring specifically to the realignment of expectations of principal and agent groups. The realignment stage imposes a threshold that must be crossed before the retrenchment, and hence the recovery stage, can be entered. Crossing this threshold is problematic to the extent that the interests of governance-stakeholder groups diverge in a crisis situation. The severity of the crisis impacts on the bases of strategy-contingent asset valuation, leading to the fragmentation of stakeholder interests. In some cases the consequence may be that management is prevented from carrying out turnarounds by governance constraints. The paper uses a case study to illustrate these dynamics and, like the Robbins and Pearce study, it focuses on the textile industry. A longitudinal approach is used to show the impact of the removal of governance constraints. The empirical evidence suggests that such financial constraints become less serious to the extent that there is a functioning market for corporate control. Building on governance research and the turnaround literature, the paper also outlines the general-case necessary and sufficient conditions for successful turnarounds.

    Combining visible near-infrared spectroscopy and water vapor sorption for soil specific surface area estimation

    The soil specific surface area (SSA) is a fundamental property governing a range of soil processes relevant to engineering, environmental, and agricultural applications. A method for SSA determination based on a combination of visible near-infrared spectroscopy (vis-NIRS) and vapor sorption isotherm measurements was proposed. Two models for water vapor sorption isotherms (WSIs) were used: the Tuller–Or (TO) and the Guggenheim–Anderson–de Boer (GAB) model. They were parameterized with sorption isotherm measurements and applied for SSA estimation for a wide range of soils (N = 270) from 27 countries. The generated vis-NIRS models were compared with models where the SSA was determined with the ethylene glycol monoethyl ether (EGME) method. Different regression techniques were tested, including partial least squares (PLS), support vector machines (SVM), and artificial neural networks (ANN). The effect of dataset subdivision based on EGME values on model performance was also tested. Successful calibration models for SSA_TO and SSA_GAB were generated and were nearly identical to that for SSA_EGME. The performance of the models was dependent on the range and variation in SSA values. However, the comparison using selected validation samples indicated no significant differences among the estimated SSA_TO, SSA_GAB, and SSA_EGME, with an average standardized RMSE (SRMSE = RMSE/range) of 0.07, 0.06, and 0.07, respectively. Small differences among the regression techniques were found, yet SVM performed best. The results of this study indicate that the combination of vis-NIRS with the WSI as a reference technique for vis-NIRS models provides SSA estimations akin to the EGME method.
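The monolayer-to-surface-area step behind such isotherm methods can be illustrated as follows. This sketch assumes the standard GAB isotherm form and a nominal coverage area of 10.8 Å² per sorbed water molecule; it mirrors the general monolayer-coverage idea, not the paper's exact fitting procedure.

```python
# Hedged sketch: (1) the GAB isotherm relating water content to water
# activity, and (2) conversion of a fitted monolayer water content w_m
# (kg water / kg soil) to specific surface area. The area per water
# molecule is an assumed nominal value.

N_A = 6.022e23        # Avogadro's number, 1/mol
M_W = 0.018015       # molar mass of water, kg/mol
A_WATER = 10.8e-20   # assumed coverage per water molecule, m^2

def gab_water_content(a_w, w_m, c, k):
    """GAB isotherm: gravimetric water content at water activity a_w."""
    return w_m * c * k * a_w / ((1 - k * a_w) * (1 - k * a_w + c * k * a_w))

def ssa_from_monolayer(w_m):
    """Specific surface area (m^2/g soil) from monolayer content (kg/kg)."""
    return w_m / M_W * N_A * A_WATER / 1000.0  # divide by 1000: per gram

print(round(ssa_from_monolayer(0.030), 1))  # -> 108.3 (m^2/g), a clayey soil
```

In practice one fits `w_m`, `c`, and `k` to measured sorption points first; the surface-area conversion is then a single multiplication, which is why the quality of the isotherm fit dominates the SSA estimate.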