
    Wave Damping by Magnetohydrodynamic Turbulence and Its Effect on Cosmic-Ray Propagation in the Interstellar Medium

    Cosmic rays scatter off magnetic irregularities (Alfvén waves) with which they are resonant, that is, waves of wavelength comparable to their gyroradii. These waves may be generated either by the cosmic rays themselves, if they stream faster than the Alfvén speed, or by sources of MHD turbulence. Waves excited by streaming cosmic rays are ideally shaped for scattering, whereas the scattering efficiency of MHD turbulence is severely diminished by its anisotropy. We show that MHD turbulence has an indirect effect on cosmic-ray propagation by acting as a damping mechanism for cosmic-ray-generated waves. The hot ("coronal") phase of the interstellar medium is the best candidate location for cosmic-ray confinement by scattering from self-generated waves. We relate the streaming velocity of cosmic rays to the rate of turbulent dissipation in this medium for the case in which turbulent damping is the dominant damping mechanism. We conclude that cosmic rays with energies up to 10^2 GeV could not stream much faster than the Alfvén speed, but 10^6 GeV cosmic rays would stream unimpeded by self-generated waves, unless the coronal gas were remarkably turbulence-free.
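    The resonance condition invoked above (wavelength comparable to the gyroradius) has a standard compact form; the following is the textbook statement of gyroresonance, not an equation taken from this paper:

```latex
% Gyroresonance: a particle of momentum p and charge q in a magnetic field B
% scatters off Alfven waves whose parallel wavenumber matches its gyroradius.
\[
  k_\parallel \, r_g \sim 1 , \qquad r_g = \frac{p\,c}{q\,B} ,
\]
% so higher-energy cosmic rays (larger r_g) resonate with longer-wavelength
% waves, consistent with the abstract's energy-dependent streaming behaviour.
```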

    Spoke formation under moving plasma clouds

    Goertz and Morfill (1983) propose that spokes on Saturn's rings form under radially moving plasma clouds produced by meteoroid impacts. We demonstrate that the speed at which a plasma cloud can move relative to the ring material is bounded from above by the difference between the Keplerian and corotation velocities. The radial orientation of new spokes requires radial speeds at least an order of magnitude larger than this upper limit; thus the model advanced by Goertz and Morfill fails to produce radial spokes. (Comment: 15 pages, 2 figures, Icarus in press)
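    As a rough numerical illustration of the bound stated above, the sketch below evaluates the Keplerian and corotation speeds at one ring radius. All numbers (Saturn's gravitational parameter, rotation period, and the chosen radius) are round illustrative values, not figures taken from the paper:

```python
import math

# Upper bound claimed in the abstract: a plasma cloud can move relative to the
# ring material no faster than the difference between the Keplerian orbital
# speed and the (magnetospheric) corotation speed at that radius.

GM_SATURN = 3.793e16    # Saturn's gravitational parameter, m^3/s^2 (assumed)
T_ROT = 10.66 * 3600.0  # Saturn's rotation period, s (approximate)

def keplerian_speed(r):
    """Orbital speed of ring material at radius r (metres)."""
    return math.sqrt(GM_SATURN / r)

def corotation_speed(r):
    """Speed of a plasma cloud rigidly corotating with Saturn at radius r."""
    return 2.0 * math.pi / T_ROT * r

def max_relative_speed(r):
    """The paper's upper bound on cloud speed relative to ring material."""
    return abs(keplerian_speed(r) - corotation_speed(r))

r = 1.0e8  # 100,000 km, roughly the outer B ring (illustrative)
print(f"Keplerian: {keplerian_speed(r)/1e3:.1f} km/s, "
      f"corotation: {corotation_speed(r)/1e3:.1f} km/s, "
      f"bound: {max_relative_speed(r)/1e3:.1f} km/s")
```

    At this radius the two speeds differ by only a few km/s, which is the scale of the upper limit the paper argues is an order of magnitude too small to orient new spokes radially.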

    Interventions for improving upper limb function after stroke

    Background: Improving upper limb function is a core element of stroke rehabilitation needed to maximise patient outcomes and reduce disability. Evidence about effects of individual treatment techniques and modalities is synthesised within many reviews. For selection of effective rehabilitation treatment, the relative effectiveness of interventions must be known. However, a comprehensive overview of systematic reviews in this area is currently lacking. Objectives: To carry out a Cochrane overview by synthesising systematic reviews of interventions provided to improve upper limb function after stroke. Methods: Search methods: We comprehensively searched the Cochrane Database of Systematic Reviews; the Database of Abstracts of Reviews of Effects (DARE); and PROSPERO (an international prospective register of systematic reviews) (June 2013). We also contacted review authors in an effort to identify further relevant reviews. Selection criteria: We included Cochrane and non‐Cochrane reviews of randomised controlled trials (RCTs) of patients with stroke comparing upper limb interventions with no treatment, usual care or alternative treatments. Our primary outcome of interest was upper limb function; secondary outcomes included motor impairment and performance of activities of daily living. When we identified overlapping reviews, we systematically identified the most up‐to‐date and comprehensive review and excluded reviews that overlapped with this. Data collection and analysis: Two overview authors independently applied the selection criteria, excluding reviews that were superseded by more up‐to‐date reviews including the same (or similar) studies. Two overview authors independently assessed the methodological quality of reviews (using a modified version of the AMSTAR tool) and extracted data.
Quality of evidence within each comparison in each review was determined using objective criteria (based on numbers of participants, risk of bias, heterogeneity and review quality) to apply GRADE (Grades of Recommendation, Assessment, Development and Evaluation) levels of evidence. We resolved disagreements through discussion. We systematically tabulated the effects of interventions and used quality of evidence to determine implications for clinical practice and to make recommendations for future research. Main results: Our searches identified 1840 records, from which we included 40 completed reviews (19 Cochrane; 21 non‐Cochrane), covering 18 individual interventions and dose and setting of interventions. The 40 reviews contain 503 studies (18,078 participants). We extracted pooled data from 31 reviews related to 127 comparisons. We judged the quality of evidence to be high for 1/127 comparisons (transcranial direct current stimulation (tDCS) demonstrating no benefit for outcomes of activities of daily living (ADLs)); moderate for 49/127 comparisons (covering seven individual interventions) and low or very low for 77/127 comparisons. Moderate‐quality evidence showed a beneficial effect of constraint‐induced movement therapy (CIMT), mental practice, mirror therapy, interventions for sensory impairment, virtual reality and a relatively high dose of repetitive task practice, suggesting that these may be effective interventions; moderate‐quality evidence also indicated that unilateral arm training may be more effective than bilateral arm training. Information was insufficient to reveal the relative effectiveness of different interventions. 
Moderate‐quality evidence from subgroup analyses comparing greater and lesser doses of mental practice, repetitive task training and virtual reality demonstrates a beneficial effect for the group given the greater dose, although not for the group given the smaller dose; however, tests for subgroup differences do not suggest a statistically significant difference between these groups. Future research related to dose is essential. Specific recommendations for future research are derived from current evidence. These recommendations include but are not limited to adequately powered, high‐quality RCTs to confirm the benefit of CIMT, mental practice, mirror therapy, virtual reality and a relatively high dose of repetitive task practice; high‐quality RCTs to explore the effects of repetitive transcranial magnetic stimulation (rTMS), tDCS, hands‐on therapy, music therapy, pharmacological interventions and interventions for sensory impairment; and up‐to‐date reviews related to biofeedback, Bobath therapy, electrical stimulation, reach‐to‐grasp exercise, repetitive task training, strength training and stretching and positioning. Authors' conclusions: Large numbers of overlapping reviews related to interventions to improve upper limb function following stroke have been identified, and this overview signposts clinicians and policy makers toward relevant systematic reviews, providing one accessible, comprehensive document to support clinical decision making for stroke rehabilitation. No high‐quality evidence can be found for any intervention currently used as part of routine practice, and evidence is insufficient to enable comparison of the relative effectiveness of interventions. Effective collaboration is urgently needed to support large, robust RCTs of interventions currently used routinely within clinical practice.
Evidence related to dose of interventions is particularly needed, as this information has widespread clinical and research implications.

    An algorithm was developed to assign GRADE levels of evidence to comparisons within systematic reviews

    Objectives: One recommended use of the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach is supporting quality assessment of evidence for comparisons included within a Cochrane overview of reviews. Within our overview, reviewers found that current GRADE guidance was insufficient to make reliable and consistent judgments. To support our ratings, we developed an algorithm to grade quality of evidence using concrete rules. Methods: Using a pragmatic, exploratory approach, we examined the challenges of applying GRADE levels of evidence and developed an algorithm to apply them in a consistent and transparent way. Our methods involved application of algorithms and formulas to samples of reviews, expert panel discussion, and iterative refinement and revision. Results: The developed algorithm incorporated four key criteria: number of participants, risk of bias of trials, heterogeneity, and methodological quality of the review. A formula for assigning a GRADE level of evidence from the number of downgrades assigned by the algorithm was agreed. Conclusion: Our algorithm, which assigns GRADE levels of evidence using a set of concrete rules, was successfully applied within our Cochrane overview. We propose that this methodological approach has implications for the assessment of quality of evidence within future evidence syntheses.
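    The downgrade-and-map idea described above can be sketched as follows. The four criteria are the ones named in the abstract, but the participant threshold and the downgrade-to-level mapping are hypothetical placeholders for illustration, not the published rules:

```python
# Hypothetical sketch of a downgrade-counting GRADE algorithm. The criteria
# (participants, risk of bias, heterogeneity, review quality) come from the
# abstract; the threshold of 400 and the mapping are illustrative assumptions.

GRADE_LEVELS = ["high", "moderate", "low", "very low"]

def count_downgrades(n_participants, high_risk_of_bias,
                     substantial_heterogeneity, review_quality_adequate):
    """Count one downgrade for each criterion a comparison fails."""
    downgrades = 0
    if n_participants < 400:        # assumed imprecision threshold
        downgrades += 1
    if high_risk_of_bias:
        downgrades += 1
    if substantial_heterogeneity:
        downgrades += 1
    if not review_quality_adequate:
        downgrades += 1
    return downgrades

def grade_level(downgrades):
    """Map the downgrade count to a GRADE level, capping at 'very low'."""
    return GRADE_LEVELS[min(downgrades, len(GRADE_LEVELS) - 1)]
```

    Under these placeholder rules, a comparison with many participants, low risk of bias, no heterogeneity, and an adequate review keeps "high", and each failed criterion drops the rating by one level.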

    Paralegal Students’ and Paralegal Instructors’ Perceptions of Synchronous and Asynchronous Online Paralegal Course Effectiveness: A Comparative Study

    To improve online learning pedagogy within the field of paralegal education, this study investigated how paralegal students and paralegal instructors perceived the effectiveness of synchronous and asynchronous online paralegal courses. This study intended to inform paralegal instructors and course developers how to better design, deliver, and evaluate effective online course instruction in the field of paralegal studies. Survey results were analyzed using independent-samples t-tests and correlational analysis and indicated that, overall, paralegal students and paralegal instructors positively perceived synchronous and asynchronous online paralegal courses. Paralegal instructors reported statistically significantly higher perceptions than paralegal students: (1) of instructional design and course content in synchronous online paralegal courses; and (2) of technical assistance, communication, and course content in asynchronous online paralegal courses. Instructors also reported higher perceptions of the effectiveness of universal design, online instructional design, and course content in synchronous online paralegal courses than in asynchronous online paralegal courses. Paralegal students reported higher perceptions of asynchronous online paralegal course effectiveness regarding universal design than paralegal instructors. No statistically significant differences existed between paralegal students’ perceptions of the effectiveness of synchronous and asynchronous online paralegal courses. A strong, negative relationship existed between paralegal students’ age and their perceptions of effective synchronous paralegal courses, which was statistically and practically significant. Lastly, this study provided practical applicability and opportunities for future research.

    The Peter Principle Revisited: A Computational Study

    In the late sixties the Canadian psychologist Laurence J. Peter advanced an apparently paradoxical principle, named since then after him, which can be summarized as follows: 'Every new member in a hierarchical organization climbs the hierarchy until he/she reaches his/her level of maximum incompetence.' Despite its apparent unreasonableness, such a principle would realistically act in any organization where the mechanism of promotion rewards the best members and where the competence at their new level in the hierarchical structure does not depend on the competence they had at the previous level, usually because the tasks of the levels are very different from each other. Here we show, by means of agent-based simulations, that if the latter two features actually hold in a given model of an organization with a hierarchical structure, then not only is the Peter principle unavoidable, but it also yields a significant reduction of the global efficiency of the organization. Within a game-theory-like approach, we explore different promotion strategies and find, counterintuitively, that the best ways to avoid this effect and improve the efficiency of a given organization are either to promote each time an agent at random or to promote randomly the best and the worst members in terms of competence. (Comment: final version published in Physica A; 10 pages, 4 figures, 1 table; for online supplementary material see http://www.ct.infn.it/cactus/peter-links.html)
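    A minimal sketch of the agent-based setup described above. The level sizes, the uniform competence distribution, and the weighted-mean efficiency are illustrative assumptions (the paper's model is richer); the one feature taken from the abstract is the Peter hypothesis that a promoted agent's competence is redrawn at the new level, independently of the old one:

```python
import random

# Toy hierarchical organization: each agent is a competence score in [0, 1).
# Assumed parameters, not the paper's: three levels, uniform competence,
# efficiency = weighted mean competence with higher levels weighing more.

LEVEL_SIZES = [10, 5, 2]   # organization layers, bottom to top
WEIGHTS = [1.0, 2.0, 3.0]  # higher levels count more toward efficiency

def new_org():
    return [[random.random() for _ in range(n)] for n in LEVEL_SIZES]

def efficiency(org):
    """Weighted mean competence across levels (0 = worst, 1 = best)."""
    total = sum(w * sum(level) for w, level in zip(WEIGHTS, org))
    norm = sum(w * len(level) for w, level in zip(WEIGHTS, org))
    return total / norm

def step(org, strategy):
    """Retire one top agent, refill each vacancy by promotion from below."""
    org[-1].pop(random.randrange(len(org[-1])))
    for lvl in range(len(org) - 2, -1, -1):
        if strategy == "best":
            pick = max(range(len(org[lvl])), key=org[lvl].__getitem__)
        else:  # "random" promotion
            pick = random.randrange(len(org[lvl]))
        org[lvl].pop(pick)
        org[lvl + 1].append(random.random())  # Peter hypothesis: redraw
    org[0].append(random.random())            # hire fresh at the bottom
    return org

def mean_efficiency(strategy, steps=200, trials=50):
    """Average final efficiency over independent simulation runs."""
    total = 0.0
    for _ in range(trials):
        org = new_org()
        for _ in range(steps):
            step(org, strategy)
        total += efficiency(org)
    return total / trials
```

    Comparing `mean_efficiency("best")` with `mean_efficiency("random")` over many trials illustrates the qualitative point: when competence is redrawn on promotion, promoting the best members drains competence from the lower levels without improving the upper ones.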

    Bayesian naturalness of the CMSSM and CNMSSM

    The recent discovery of the 125.5 GeV Higgs boson at the LHC has fueled interest in the next-to-minimal supersymmetric standard model (NMSSM), as it may require less fine-tuning than the minimal model to accommodate such a heavy Higgs. To this end we present Bayesian naturalness priors to quantify fine-tuning in the (N)MSSM. These priors arise automatically as Occam razors in Bayesian model comparison and generalize the conventional Barbieri-Giudice measure. In this paper we show that the naturalness priors capture features of both the Barbieri-Giudice fine-tuning measure and a simple ratio measure that has been used in the literature. We also show that, according to the naturalness prior, the constrained version of the NMSSM is less tuned than the CMSSM. (Comment: 8 pages and 5 figures)
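    The conventional Barbieri-Giudice measure that these priors generalize is, in its standard form (stated here for context, not quoted from the paper):

```latex
% Barbieri-Giudice fine-tuning measure: the worst-case sensitivity of the
% Z-boson mass to fractional variations of the model's input parameters p_i.
\[
  \Delta_{\mathrm{BG}} = \max_i \left| \frac{\partial \ln m_Z^2}{\partial \ln p_i} \right| ,
\]
% with larger Delta_BG indicating a more finely tuned parameter point.
```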