Scholars Forum: A New Model For Scholarly Communication
Scholarly journals have flourished for over 300 years because they successfully address a broad range of authors' needs: to communicate findings to colleagues, to establish precedence of their work, to gain validation through peer review, to establish their reputation, to know the final version of their work is secure, and to know their work will be accessible by future scholars. Eventually, the development of comprehensive paper and then electronic indexes allowed past work to be readily identified and cited. Just as postal service made it possible to share scholarly work regularly and among a broad readership, the Internet now provides a distribution channel with the power to reduce publication time and to expand traditional print formats by supporting multi-media options and threaded discourse.
Despite widespread acceptance of the web by the academic and research community, the incorporation of advanced network technology into a new paradigm for scholarly communication by the publishers of print journals has not materialized. Nor have journal publishers used the lower cost of distribution on the web to make online versions of journals available at lower prices than print versions. It is becoming increasingly clear to the scholarly community that we must envision and develop for ourselves a new, affordable model for disseminating and preserving results, that synthesizes digital technology and the ongoing needs of scholars.
In March 1997, with support from the Engineering Information Foundation, Caltech sponsored a Conference on Scholarly Communication to open a dialogue around key issues and to consider the feasibility of alternative undertakings. A general consensus emerged recognizing that the certification of scholarly articles through peer review could be "decoupled" from the rest of the publishing process, and that the peer review process is already supported by the universities whose faculty serve as editors, members of editorial boards, and referees.
In the meantime, pressure to enact regressive copyright legislation has added another important element. The ease with which electronic files may be copied and forwarded has encouraged publishers and other owners of copyrighted material to seek means of denying access to anything they own in digital form to all but active subscribers or licensees. Furthermore, should publishers retain the only version of a publication in digital form, there is a significant risk that this material may eventually be lost: through culling of little-used or unprofitable back-files, through failure to invest in conversion as technology evolves, through changes in ownership, or through catastrophic physical events. Such a scenario presents an intolerable threat to the future of scholarship.
Two-Dimensional Topology of the 2dF Galaxy Redshift Survey
We study the topology of the publicly available data released by the 2dF Galaxy Redshift Survey (2dFGRS). The 2dFGRS data contain over 100,000 galaxy redshifts with a magnitude limit of b_J=19.45 and constitute the largest such survey to date. The data span a wide range of right ascension (75-degree strips) but only a narrow range of declination (10- and 15-degree strips). This allows measurements of the two-dimensional genus to be made.
The NGP displays a slight meatball-shift topology, whereas the SGP displays a bubble-like topology. The current SGP data also have a slightly higher genus amplitude. In both cases, a slight excess of overdense regions over underdense regions is found. We assess the significance of these features using mock catalogs drawn from the Virgo Consortium's Hubble Volume LCDM z=0 simulation. We find that differences between the NGP and SGP genus curves are significant only at the 1 sigma level. The average genus curve of the 2dFGRS agrees well with that extracted from the LCDM mock catalogs.
We compare the amplitude of the 2dFGRS genus curve to the amplitude of a Gaussian random field with the same power spectrum as the 2dFGRS and find, in contradiction to results for the 3D genus of other samples, that the amplitude of the GRF genus curve is slightly lower than that of the 2dFGRS. This could be due to a feature in the current data set, or the 2D genus may not be as sensitive as the 3D genus to non-linear clustering, owing to the averaging over the thickness of the slice in 2D. (Abridged)
Comment: Submitted to ApJ. A version with Figure 1 in higher resolution can be obtained from http://www.physics.drexel.edu/~hoyle
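For reference, the comparison above is against the standard analytic genus curve of a two-dimensional Gaussian random field, which (writing the amplitude generically as A, since the paper's specific normalisation is not reproduced here) takes the form

\[
g_{2\mathrm{D}}(\nu) = A \, \nu \, e^{-\nu^{2}/2},
\]

where \nu is the density threshold in units of the standard deviation of the smoothed field, and A depends on the second moment of the smoothed power spectrum. Because this curve is antisymmetric in \nu, a Gaussian field treats overdense and underdense regions symmetrically; a measured excess of one over the other, as reported above, is therefore a signature of non-Gaussianity.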
Crash Safety Assurances Strategies for Future Plastic and Composite Intensive Vehicles (PCIVs)
This report addresses outstanding safety issues and research needs for Plastic and Composite Intensive Vehicles (PCIVs) to facilitate their safe deployment by 2020. PCIVs have the potential to revolutionize the automotive sector; however, the use of plastics and composite materials in automotive structures requires an in-depth knowledge of their unique performance characteristics in the crash and safety environment. Included in this report are a proposed definition of the PCIV, a review of potential safety benefits, lessons learned, and progress to date towards crashworthiness of PCIVs, as well as proposed safety performance specifications and research needs.
Does empathy predict altruism in the wild?
Why do people act altruistically? One theory holds that empathy is a driver of morality. Experimental studies of this question are often confined to laboratory settings, which can lack ecological validity. In the present study we investigated whether empathic traits predict whether people will act altruistically in a real-world setting, "in the wild". We staged a situation in public that was designed to elicit helping, and subsequently measured empathic traits in those who either stopped to help or walked past without helping. The results show that a higher number of empathic traits is a significant, positive predictor of altruistic behavior in a real-life situation. This supports the theory that the act of doing good is correlated with empathy. This work was supported by the Autism Research Trust and the Medical Research Council; Pinsent Darwin Trust, Medical Research Council and Cambridge Trust; National Institute for Health Research.
Analysis of caecal mucosal inflammation and immune modulation during Anoplocephala perfoliata infection of horses
Have Anglo-Saxon concepts really influenced the development of European qualifications policy?
This paper considers how far Anglo-Saxon conceptions have influenced European Union vocational education and training (VET) policy, especially given the disparate approaches to VET across Europe. Two dominant approaches can be identified: the dual system (exemplified by Germany) and output-based models (exemplified by the NVQ 'English style'). Within the EU itself, the design philosophy of the English output-based model proved in the first instance influential in attempts to develop tools to establish equivalence between vocational qualifications across Europe, resulting in the learning-outcomes approach of the European Qualifications Framework, the credit-based model of the European VET Credit System, and the task-based construction of occupation profiles exemplified by the European Skills, Competences and Occupations tool. The governance model for the English system is, however, predicated on employer demand for 'skills', and this does not fit well with the social partnership model encompassing knowledge, skills and competences that is dominant in northern Europe. These contrasting approaches have led to continual modifications to the tools, as they sought to harmonise and reconcile national VET requirements with the original design. A tension is evident in particular between national and regional approaches to vocational education and training, on the one hand, and the policy tools adopted to align European vocational education and training better with the demands of the labour market, including at sectoral level, on the other. This paper explores these tensions and considers the prospects for the successful operation of these tools, paying particular attention to the European Qualifications Framework, the European VET Credit System and the European Skills, Competences and Occupations tool and the relationships between them, drawing on studies of the construction and furniture industries.
Comparison of the performance of photonic band-edge liquid crystal lasers using different dyes as the gain medium
The primary concern of this work is to study the emission characteristics of a series of chiral nematic liquid crystal lasers doped with different laser dyes (DCM, pyrromethene 580, and pyrromethene 597) at varying concentrations by weight (0.5-2 wt %) when optically pumped at 532 nm. Long-wavelength photonic band-edge laser emission is characterized in terms of threshold energy and slope efficiency. At every dye concentration investigated, the pyrromethene 597-doped lasers exhibit the highest slope efficiency (ranging from 15% to 32%) and the DCM-doped lasers the lowest (ranging from 5% to 13%). Similarly, the threshold was found to be, in general, higher for the DCM-doped laser samples in comparison to the pyrromethene-doped laser samples. These results are then compared with the spectral properties, quantum efficiencies and, where possible, fluorescence lifetimes of the dyes dispersed in a common nematic host. In accordance with the low thresholds and high slope efficiencies, the results show that the molar extinction coefficients and quantum efficiencies are considerably larger for the pyrromethene dyes in comparison to DCM, when dispersed in the liquid crystal host.
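The threshold energy and slope efficiency quoted above are conventionally extracted from a linear fit of laser output energy versus pump energy on the lasing branch: the fitted slope is the slope efficiency, and the x-intercept is the threshold. A minimal sketch of that extraction, using hypothetical input-output data (the numbers are illustrative, not measurements from the paper):

```python
import numpy as np

# Hypothetical pump energies and measured laser output energies (arbitrary
# energy units); the points above threshold lie on a straight line.
pump = np.array([20.0, 30.0, 40.0, 50.0, 60.0])
output = np.array([0.0, 3.2, 6.4, 9.6, 12.8])

# Fit output = slope * pump + intercept on the lasing branch only.
lasing = output > 0
slope, intercept = np.polyfit(pump[lasing], output[lasing], 1)

threshold = -intercept / slope        # x-intercept of the linear fit
slope_efficiency_pct = slope * 100.0  # slope efficiency in percent

print(f"threshold = {threshold:.1f}, slope efficiency = {slope_efficiency_pct:.0f}%")
# prints "threshold = 20.0, slope efficiency = 32%"
```

In practice the fit would be restricted to points clearly above threshold, since output near threshold can be rounded by spontaneous-emission background.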
Multi-channel whole-head OPM-MEG: Helmet design and a comparison with a conventional system
Magnetoencephalography (MEG) is a powerful technique for functional neuroimaging, offering a non-invasive window on brain electrophysiology. MEG systems have traditionally been based on cryogenic sensors, which detect the small extracranial magnetic fields generated by synchronised current in neuronal assemblies; however, such systems have fundamental limitations. In recent years, non-cryogenic quantum-enabled sensors, called optically-pumped magnetometers (OPMs), in combination with novel techniques for accurate background magnetic field control, have promised to lift those restrictions, offering an adaptable, motion-robust MEG system with improved data quality at reduced cost. However, OPM-MEG remains a nascent technology, and whilst viable systems exist, most employ small numbers of sensors sited above targeted brain regions. Here, building on previous work, we construct a wearable OPM-MEG system with 'whole-head' coverage based upon commercially available OPMs, and test its capabilities to measure alpha, beta and gamma oscillations. We design two methods for OPM mounting: a flexible (EEG-like) cap and a rigid (additively-manufactured) helmet. Whilst both designs allow high-quality data to be collected, we argue that the rigid helmet offers a more robust option, with significant advantages for reconstruction of field data into 3D images of changes in neuronal current. Using repeat measurements in two participants, we show signal detection for our device to be highly robust. Moreover, via application of source-space modelling, we show that, despite having 5 times fewer sensors, our system exhibits comparable performance to an established cryogenic MEG device. While significant challenges still remain, these developments provide further evidence that OPM-MEG is likely to facilitate a step change for functional neuroimaging.