
    Measuring is more than assigning numbers

    Measurement is fundamental to research-related activities in social science (hence this Handbook). In my own field of education research, perhaps the most discussed element of education lies in test scores. Examination results are measurements, the number of students attaining a particular standard in a test is a measurement; indeed the standard of a test is a measurement. The allocation of places at school, college or university, student:teacher ratios, funding plans, school timetables, staff workloads, adult participation rates, and the stratification of educational outcomes by sex, social class, ethnicity or geography, for example, are all based on measurements. Good and careful work has been done in all of these areas (Nuttall 1987). However, the concept of measurement itself remains under-examined, and is often treated in an uncritical way. In saying this I mean more than the usual lament about the qualitative:quantitative schism or the supposed reluctance of social scientists to engage with numeric analysis (Gorard et al. 2004a). I mean that even where numeric analysis is being conducted, the emphasis is on collecting, collating, analysing, and reporting the kinds of data generated by measurement, with the process of measurement and the rigour of the measurement instrument being somewhat taken for granted by many commentators. Issues that are traditionally considered by social scientists include levels of measurement, reliability, validity, and the creation of complex indices (as illustrated in some of the chapters contained in this volume). But these matters are too often dealt with primarily as technical matters – such as how to assess reliability or which statistical test to use with which combination of levels of measurement. The process of quantification itself is just assumed.

    Teaching and Professional Fellowship Development Report 2006/7 : To design and implement a pre-course primer for Foundation students

    To design and implement a pre-course ‘primer’ for students entering (or considering an application to) Camberwell Foundation (ft/pt); an interconnected online (VLE) and printed journal-forum-directory started at the point an offer is made. Also to implement a ‘stripped down’ model accessible to schools, colleges and other communities to consolidate, enhance and develop existing in/out/reach work (open days, portfolio advice days, teachers’ days, summer schools). The motive is to widen access, raise achievement and support the aspirations and needs of an increasingly diverse cohort.

    Finite size scaling as a cure for supercell approximation errors in calculations of neutral native defects in InP

    The relaxed and unrelaxed formation energies of neutral antisites and interstitial defects in InP are calculated using ab initio density functional theory and simple cubic supercells of up to 512 atoms. The finite size errors in the formation energies of all the neutral defects arising from the supercell approximation are examined and corrected for using finite size scaling methods, which are shown to be a very promising approach to the problem. Elastic errors scale linearly, whilst the errors arising from charge multipole interactions between the defect and its images under the periodic boundary conditions have a linear plus a higher order term, for which a cubic provides the best fit. These latter errors are shown to be significant even for neutral defects. Instances are also presented where even the 512 atom supercell is not sufficiently converged. Instead, physically relevant results can be obtained only by finite size scaling the results of calculations in several supercells, up to and including the 512 atom cell and in extreme cases possibly even including the 1000 atom supercell. Comment: 13 pages, 11 figures. Errata in tables I and III corrected.
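As a rough illustration of the finite size scaling the abstract describes, one can fit formation energies from several supercells to a linear-plus-cubic form in the inverse cell length and read off the infinite-cell limit as the intercept. The energies below are invented placeholder values, not numbers from the paper:

```python
import numpy as np

# Hypothetical formation energies (eV) of one neutral defect computed in
# simple-cubic supercells of N = 64, 216, 512 atoms; the cell edge length L
# is proportional to N^(1/3), so we work with x = 1/L in lattice units.
n_atoms = np.array([64.0, 216.0, 512.0])
inv_L = n_atoms ** (-1.0 / 3.0)

e_form = np.array([3.42, 3.31, 3.27])   # illustrative values only

# Fit E(L) = E_inf + a/L + c/L^3: a linear elastic term plus the cubic term
# describing residual multipole interactions between periodic images.
design = np.column_stack([np.ones_like(inv_L), inv_L, inv_L ** 3])
coeffs, *_ = np.linalg.lstsq(design, e_form, rcond=None)
e_inf, a, c = coeffs

# E_inf is the finite-size-scaled (infinite supercell) formation energy.
print(f"extrapolated E_inf = {e_inf:.3f} eV")
```

With only three cell sizes the three-parameter fit is exact; in practice one would use more supercells and inspect the residuals to decide whether the cubic term is warranted, as the paper does for its defect set.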

    Identification of γ-ray emission from 3C 345 and NRAO 512

    For more than 15 years, since the days of the Energetic Gamma-Ray Experiment Telescope (EGRET) on board the Compton Gamma-Ray Observatory (CGRO; 1991−2000), it has remained an open question why the prominent blazar 3C 345 was not reliably detected at γ-ray energies ≥ 20 MeV. Recently a bright γ-ray source (0FGL J1641.4+3939/1FGL J1642.5+3947), potentially associated with 3C 345, was detected by the Large Area Telescope (LAT) on Fermi. Multiwavelength observations from radio bands to X-rays (mainly GASP-WEBT and Swift) of possible counterparts (3C 345, NRAO 512, B3 1640+396) were combined with 20 months of Fermi-LAT monitoring data (August 2008 − April 2010) to associate and identify the dominant γ-ray emitting counterpart of 1FGL J1642.5+3947. The source 3C 345 is identified as the main contributor to this γ-ray emitting region. However, after November 2009 (month 15), a significant excess of photons from the nearby quasar NRAO 512 started to contribute and thereafter was detected with increasing γ-ray activity, possibly adding flux to 1FGL J1642.5+3947. Over the same period and during the summer of 2010, an increase of radio, optical and X-ray activity of NRAO 512 was observed. No γ-ray emission from B3 1640+396 was detected.

    In vivo laser Doppler holography of the human retina

    The eye offers a unique opportunity for non-invasive exploration of cardiovascular diseases. Optical angiography in the retina requires sensitive measurements, which hinders conventional full-field laser Doppler imaging schemes. To overcome this limitation, we used digital holography to perform laser Doppler perfusion imaging of the human retina in vivo with near-infrared light. Wideband measurements of the beat frequency spectrum of optical interferograms recorded with a 39 kHz CMOS camera are analyzed by short-time Fourier transformation. Power Doppler images and movies drawn from the zeroth moment of the power spectral density reveal blood flows in retinal and choroidal vessels over 512 × 512 pixels covering 2.4 × 2.4 mm² on the retina with a 13 ms temporal resolution. Comment: 5 pages, 5 figures.
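To make the processing chain concrete, here is a minimal single-pixel sketch of the analysis the abstract outlines: a windowed Fourier transform of an interferogram time series sampled at the 39 kHz frame rate, followed by the zeroth moment of the power spectral density over the beat-frequency band. The synthetic signal, the 2 kHz beat frequency, and the 500 Hz cutoff are invented for illustration and are not the authors' parameters:

```python
import numpy as np

fs = 39_000.0                 # camera frame rate in Hz, as in the abstract
n = 512                       # one short-time analysis window of 512 frames
t = np.arange(n) / fs

# Synthetic interferogram time series for one pixel: a 2 kHz Doppler beat
# plus weak noise (both values invented for this sketch).
rng = np.random.default_rng(0)
signal = np.cos(2 * np.pi * 2000.0 * t) + 0.1 * rng.standard_normal(n)

# Windowed FFT -> one-sided power spectral density for this window.
window = np.hanning(n)
psd = np.abs(np.fft.rfft(signal * window)) ** 2
freqs = np.fft.rfftfreq(n, d=1.0 / fs)

# Power Doppler: zeroth moment (integral) of the PSD above a low-frequency
# cutoff that rejects the static, non-moving contribution.
band = freqs > 500.0
power_doppler = psd[band].sum()
peak_freq = freqs[np.argmax(psd)]
print(f"dominant beat frequency ≈ {peak_freq:.0f} Hz")
```

Repeating this per pixel and sliding the window through the frame sequence yields the power Doppler movies described in the abstract, with the window length setting the trade-off between spectral and temporal resolution.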

    Do the ECB and the Fed really need to cooperate? Optimal monetary policy in a two-country world.

    A two-country model with monopolistic competition and price stickiness is employed to investigate the implications for macroeconomic stability and the welfare properties of three international policy arrangements: (a) cooperative, (b) non-cooperative and (c) monetary union. I characterize the conditions under which there is scope for policy cooperation and quantify the costs of non-cooperation and monetary union. The non-cooperative equilibrium may be suboptimal because of beggar-thy-neighbor and beggar-thyself effects, while monetary union may be suboptimal because of the sluggishness of relative prices. Both the costs of policy competition and of a monetary union are sensitive to the values assumed for the intertemporal and international demand elasticities and the degree of openness of the economy. Independently of the calibration scenario adopted, the ECB has little to gain by coordinating with the Fed.

    Code loops in dimension at most 8

    Code loops are certain Moufang 2-loops constructed from doubly even binary codes that play an important role in the construction of local subgroups of sporadic groups. More precisely, code loops are central extensions of the group of order 2 by an elementary abelian 2-group V in the variety of loops such that their squaring map, commutator map and associator map are related by combinatorial polarization and the associator map is a trilinear alternating form. Using existing classifications of trilinear alternating forms over the field of 2 elements, we enumerate code loops of dimension d = dim(V) ≤ 8 (equivalently, of order 2^(d+1) ≤ 512) up to isomorphism. There are 767 code loops of order 128, 80826 of order 256, and 937791557 of order 512.
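For readers unfamiliar with combinatorial polarization, the relations tying the three maps together can be sketched as follows; these are the standard definitions over GF(2), not spelled out in the abstract itself:

```latex
% The commutator map C and associator map A arise as successive
% polarizations of the squaring map P : V -> Z/2Z, with all sums over GF(2):
\[
  C(x,y)   = P(x+y) + P(x) + P(y), \qquad
  A(x,y,z) = C(x+y,z) + C(x,z) + C(y,z).
\]
% For a code loop built on a doubly even binary code, these maps are
% commonly realized via Hamming weights (|x| denotes the weight of x and
% \wedge the coordinatewise AND):
\[
  P(x) = \tfrac{|x|}{4} \bmod 2, \quad
  C(x,y) = \tfrac{|x \wedge y|}{2} \bmod 2, \quad
  A(x,y,z) = |x \wedge y \wedge z| \bmod 2.
\]
```

The doubly even condition is what makes P well defined (every codeword has weight divisible by 4), and the trilinearity and alternation of A are what let the enumeration reduce to classifying trilinear alternating forms, as the abstract states.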