7,002 research outputs found
Recommended from our members
California Carbon Offsets and Working Forest Conservation Easements
California’s cap-and-trade system is a vital laboratory for testing the effectiveness of this market-driven approach to meeting greenhouse gas emission reduction goals, and of the use of forestry-based carbon offsets within such systems generally. Based on this experience, this Article explores one of the primary challenges, layering offsets with working forest conservation easements, which currently limits opportunities to use these tools effectively in concert. Ultimately, this market may need to foster and rely on natural linkages with working forest conservation easements to develop these offsets and to better ensure that the critical societal objectives of these projects are being met.
An application of dynamic programming to pattern recognition
A technique which has found only limited usage in the industrial and business management fields that are home to many of the other OR techniques. Its main application has been in problems o
A prototypical approach to machine learning
This paper presents an overview of a research programme on machine learning which is based on the fundamental process of categorization. This work draws upon the psychological theory of prototypical concepts. This theory holds that concepts learnt naturally from interaction with the environment (basic categories) are not structured or defined in logical terms but are clustered in accordance with their similarity to a central prototype, representing the "most typical" member.
The structure of a computer model designed to achieve categorization is outlined, and the knowledge representational forms and developmental learning associated with this approach are discussed.
Rational expectations modelling in O.R.
The conventional OR approach to managing a system is, in outline, firstly to create a model of the existing system; secondly, to investigate changes in the model which improve or control the behaviour of the model; and thirdly, to implement these changes in the system. It is assumed that the model incorporating these changes will be a valid representation of the system after the changes, in so far as the original model was a valid representation of the original system, and can thus be used to assess the benefits and disbenefits arising from the changes.
A.I. approaches in statistics
The role of pattern recognition and knowledge representation methods from artificial intelligence within statistics is considered. Two areas of potential use are identified, and one, data exploration, is used to illustrate the possibilities. A method is presented to identify and separate overlapping groups within cluster analysis, using an A.I. approach. The potential of such 'intelligent' approaches is stressed.
Knowledge representation for basic visual categories
This paper reports work on a model of machine learning which is based on the psychological theory of prototypical concepts. This theory holds that concepts learnt naturally from interaction with the environment (basic categories) are not structured or defined in logical terms but are clustered in accordance with their similarity to a central prototype, representing the "most typical" member.
Causes of the 1980s Slump in Europe
macroeconomics, slump, unemployment, Europe, EEC, macroeconomic theory
The "Teaching to the Test" Family of Fallacies
This article explains the various meanings and ambiguities of the phrase “teaching to the test” (TttT), describes its history and use as a pejorative, and outlines the policy implications of the popular, but fallacious, belief that “high stakes” testing induces TttT which, in turn, produces “test score inflation” or artificial test score gains. The history starts with the infamous “Lake Wobegon Effect” test score scandal in the US in the 1980s. John J. Cannell, a medical doctor, discovered that all US states administering national norm-referenced tests claimed their students’ average scores exceeded the national average, a mathematical impossibility. Cannell blamed educator cheating and lax security for the test score inflation, but education insiders managed to convince many that high stakes were the cause, despite the fact that Cannell’s tests had no stakes. Elevating the “high stakes causes TttT, which causes test score inflation” fallacy to dogma has served to divert attention from the endemic lax security of “internally administered” tests, which should have encouraged policy makers to require more external controls in test administrations. The fallacy is partly responsible for promoting the ruinous practice of test preparation drilling on test format and administering practice tests as a substitute for genuine subject matter preparation.
Finally, promoters of the fallacy have encouraged the practice of “auditing” allegedly untrustworthy high-stakes test score trends with score trends from allegedly trustworthy low-stakes tests, despite an abundance of evidence that low-stakes test scores are far less reliable, largely due to student disinterest.