6,016 research outputs found

    A Review of High School Level Astronomy Student Research Projects over the last two decades

    Since the early 1990s, with the arrival of a variety of new technologies, the capacity for authentic astronomical research at the high school level has skyrocketed. This potential, however, has not realized the bright-eyed hopes and dreams of the early pioneers who expected to revolutionise science education through the use of telescopes and other astronomical instrumentation in the classroom. In this paper, a general history and analysis of these attempts is presented. We define what we classify as an Astronomy Research in the Classroom (ARiC) project and note the major dimensions on which these projects differ, before describing the 22 major student research projects active since the early 1990s. This is followed by a discussion of the major issues identified as affecting the success of these projects, and we provide suggestions for similar attempts in the future.
    Comment: Accepted for publication in PASA. 26 pages

    Algoritmilise mõtlemise oskuste hindamise mudel

    The electronic version of this dissertation does not include the publications. In the modernizing world, computer science is not only a separate discipline for scientists but has an essential role in many fields. There is an increasing interest in developing computational thinking (CT) skills at various education levels, from kindergarten to university. Therefore, at the comprehensive school level, research is needed to understand the dimensions of CT skills and to develop a model for assessing them. CT is described in several articles, but these are not in line with each other, and a common understanding of the dimensions of the skills that should be in focus while developing and assessing CT is missing. In this doctoral study, a systematic literature review gives an overview of the dimensions of CT presented in scientific papers. A model for assessing CT skills in three stages is proposed: i) defining the problem, ii) solving the problem, and iii) analyzing the solution. Those three stages consist of ten CT skills: problem formulation, abstraction, problem reformulation, decomposition, data collection and analysis, algorithmic design, parallelization and iteration, automation, generalization, and evaluation. The systematic development of CT skills needs an instrument for assessing those skills at the basic school level. This doctoral study examines which CT skills can be distinguished in the results of the Bebras (Kobras) international challenge. Two CT skills emerged, characterized as algorithmic thinking and pattern recognition. In addition to basic school, the tasks were also used at the upper secondary level, confirming that, in adapted form, they can be used to assess CT skills there as well, and to set directions for developing CT skills at that level. Eventually, a modified model for assessing CT skills is presented, combining the theoretical and empirical results from the three main studies.
    https://www.ester.ee/record=b543136
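    For readers who want to experiment with the model, the following minimal Python sketch encodes the three stages and ten sub-skills listed above. The abstract does not state which sub-skills belong to which stage; the grouping below is an assumption inferred from the order of the two lists, and the scoring helper is purely illustrative.

```python
# Sketch: one possible encoding of the three-stage CT model.
# The stage/sub-skill grouping is an assumption, not the thesis's own mapping.
CT_MODEL = {
    "defining the problem": [
        "problem formulation", "abstraction",
        "problem reformulation", "decomposition",
    ],
    "solving the problem": [
        "data collection and analysis", "algorithmic design",
        "parallelization and iteration", "automation",
    ],
    "analyzing the solution": ["generalization", "evaluation"],
}

def stage_scores(skill_scores: dict[str, float]) -> dict[str, float]:
    """Average per-skill scores (e.g., from Bebras-style tasks) per stage."""
    return {
        stage: sum(skill_scores.get(s, 0.0) for s in skills) / len(skills)
        for stage, skills in CT_MODEL.items()
    }
```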

    DOC 2015-03 Master of Finance

    Legislative Authority. Constitution of the Academic Senate of the University of Dayton, Article II.B.

    The Counseling Training Environment Scale (CTES) : development of a self-report measure to assess counseling training environment

    Based on Bronfenbrenner's (1979, 1992) ecological framework, the Counseling Training Environment Scale (CTES) was developed as a self-report measure that assesses the learning and training environment of counseling and related mental health training programs as perceived by current students. A two-phase mixed-methods design was used to create and psychometrically evaluate the CTES: (a) item development, and (b) assessment of the outcomes to examine preliminary evidence of validity and reliability. The item development and content validation process yielded 128 items, of which 34 were used for the final intact version of the CTES. A confirmatory factor analysis (CFA) was conducted on four models of the CTES: (a) a 34-item single-factor model, (b) a 34-item five-factor model, (c) a 26-item modified five-factor model, and (d) a 24-item modified single-factor model. Results of the CFA suggest that, despite not conforming to the hypothesized model of Bronfenbrenner's (1979, 1992) ecological theory, the data gathered from the modified 24-item single-factor CTES demonstrated the best fit on the following fit indices: NNFI (.95), CFI (.96), SRMR (.04), and RMSEA (.04). The modified 24-item CTES also demonstrated strong reliability and temporal stability, as shown through Classical Test Theory analyses (α = .92) and test-retest reliability (r = .90, p < .01, two-tailed).
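    As an illustration of the Classical Test Theory analysis behind the reported α = .92, here is a minimal sketch of the Cronbach's alpha computation. The response matrix is simulated, not CTES data; uncorrelated random responses will yield an alpha near zero, whereas the study's real item responses produced .92.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal consistency for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Simulated 5-point responses to 24 items (matching the retained item count).
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(200, 24)).astype(float)
print(round(cronbach_alpha(responses), 2))
```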

    The Effects of Empowerment on Role Competency and Patient Safety Competency for Newly Graduated Nurse Practitioners

    Introduction: Role competence and patient safety (PS) competence among healthcare professionals are rapidly developing issues due to increasing patient acuity and complexity in the healthcare system. Upon graduation, nurse practitioners (NPs) provide autonomous healthcare for populations with complex health needs, so role and PS competence are imperative. In Canada, few studies have examined NP education and role development specific to NP role competence and PS competencies. This study addresses this gap in the research by examining the educational experiences of new NP graduates. Aim: The aim of this study is to test a hypothesized model of the relationships between educational structural empowerment (SE), psychological empowerment (PE), NP role competence, and PS competence among newly practicing NPs. Educational SE, partially mediated by PE, was hypothesized to positively influence the development of NPs' role competence and their competence to safely engage in healthcare work. Methods: The sample was drawn from newly graduated NPs from across Canada, accessed through twenty professional nurse registering bodies and associations. A theoretical model of educational SE, mediated by PE, on NP role competence and PS competence was developed and tested. The study survey included socio-demographic questions, the Conditions of Learning Effectiveness Questionnaire, the Psychological Empowerment Scale, the NP Competence Survey, and the Health Professional Education in PS Survey. The study's comprehensive analytic framework included descriptive statistics, exploratory factor analysis, confirmatory factor analyses, and structural equation modeling. Results: One hundred and ninety Canadian-educated NPs who had completed their studies within the preceding two years responded. The study model tested the effect of educational SE on NP role competence and PS competence, partially mediated by PE. PE partially mediated the positive relationship between educational SE and PS competence, yet no mediation effect occurred for educational SE and NP role competence. Conclusions: Nurse educators need to consider educational SE strategies, as NPs' positive perceptions of role competence have the potential to influence greater levels of PS competence. Further, identifying factors and self-perceptions important for competence in an education program offers insights that can address NP role and PS educational needs before healthcare professionals begin to practice.
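    The study used full structural equation modeling, but the logic of a partial-mediation finding can be illustrated with a simpler regression-based, product-of-coefficients check. The sketch below uses simulated data and hypothetical variable names (SE, PE, ps_comp); it is not the study's analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data; n matches the study's sample size.
rng = np.random.default_rng(1)
n = 190
SE = rng.normal(size=n)                      # educational structural empowerment
PE = 0.5 * SE + rng.normal(size=n)           # mediator: psychological empowerment
ps_comp = 0.3 * SE + 0.4 * PE + rng.normal(size=n)  # patient safety competence
df = pd.DataFrame({"SE": SE, "PE": PE, "ps_comp": ps_comp})

# Path a: predictor -> mediator.
a = smf.ols("PE ~ SE", df).fit().params["SE"]
# Paths b and c': mediator and predictor -> outcome, jointly.
m = smf.ols("ps_comp ~ SE + PE", df).fit()
b, c_prime = m.params["PE"], m.params["SE"]

# Partial mediation: both the indirect effect a*b and the direct
# effect c' remain non-negligible.
print(f"indirect effect a*b = {a * b:.2f}, direct effect c' = {c_prime:.2f}")
```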

    Witness-based validation of verification results with applications to software-model checking

    In the scientific world, formal verification is an established engineering technique to ensure the correctness of hardware and software systems. Because formal verification is an arduous and error-prone endeavor, automated solutions are desirable, and researchers continue to develop new algorithms and optimize existing ones to push the boundaries of what can be verified automatically. These efforts do not go unnoticed by industry. Hardware-circuit designs, flight-control systems, and operating-system drivers are just a few examples of systems where formal verification is already part of the quality-assurance repertoire. Nevertheless, the primary fields of application for formal verification remain those where errors carry a high risk of significant damage, either financial or physical, because the costs of formal verification are considered too high for most other projects, despite the vast advancements the research community has made in the effectiveness and efficiency of formal verification techniques over recent decades. We present and address two potential reasons for this discrepancy that we identified in the field of automated formal software verification. (1) Even for experts in the field, it is often difficult to decide which of the multitude of available techniques is the most suitable solution to recommend for a given verification problem. Moreover, even if a suitable solution is found for a given system, there is no guarantee that the solution is sustainable as the system evolves. Consequently, the cost of finding and maintaining a suitable approach for applying formal software verification to real-world systems is high. (2) Even assuming that a suitable and maintainable solution for applying formal software verification to a given system is found and verification results can be obtained, developers of the system still require further guidance towards making practical use of these results, which often differ significantly from the results of classical quality-assurance techniques they are familiar with, such as testing. To mitigate the first issue, using the open-source software-verification framework CPAchecker, we investigate several popular formal software-verification techniques, such as predicate abstraction, Impact, bounded model checking, k-induction, and PDR, and perform an extensive and rigorous experimental study to identify their strengths and weaknesses regarding their comparative effectiveness and efficiency when applied to a large and established benchmark set, to provide a basis for choosing the best technique for a given problem. To mitigate the second issue, we propose a concrete standard format for the representation and communication of verification results that raises the bar from plain "yes" or "no" answers to verification witnesses, which are valuable artifacts of the verification process that contain detailed information discovered during the analysis. We then use these verification witnesses for several applications: To increase the trust in verification results, we first develop several independent validators based on violation witnesses, i.e., verification witnesses that represent bugs detected by a verifier. We then extend our validators to also verify the verification results obtained from a successful verification, which are represented by correctness witnesses.
    Lastly, we also develop an interactive web service to store and retrieve these verification witnesses, to provide online validation to quickly de-prioritize likely wrong results, and to graphically visualize the witnesses, as an example of how verification can be integrated into a development process. Since the introduction of our proposed standard format for verification witnesses, it has been adopted by over thirty different software verifiers, and our witness-based result-validation tools have become a core component in the scoring process of the International Competition on Software Verification.
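    As a minimal illustration of one of the techniques compared above, the sketch below runs k-induction on a toy counter system using the z3 SMT solver (pip install z3-solver). It is a didactic sketch under simplifying assumptions, not CPAchecker's implementation: the base case is a bounded model check from the initial state, and the step case asks whether k consecutive safe states force a safe successor.

```python
from z3 import Int, Solver, Or, Not, unsat

def init(x):      return x == 0        # initial state: counter starts at 0
def trans(x, xn): return xn == x + 1   # transition relation: increment
def prop(x):      return x >= 0        # safety property to prove

def k_induction(k: int) -> bool:
    # One symbolic variable per state in an unrolling of length k.
    xs = [Int(f"x{i}") for i in range(k + 1)]
    chain = [trans(xs[i], xs[i + 1]) for i in range(k)]

    # Base case: no property violation is reachable within k steps of init.
    base = Solver()
    base.add(init(xs[0]), *chain, Or([Not(prop(x)) for x in xs]))
    if base.check() != unsat:
        return False  # a genuine counterexample exists within the bound

    # Step case: k consecutive states satisfying the property
    # cannot be followed by a state that violates it.
    step = Solver()
    step.add(*chain, *[prop(x) for x in xs[:-1]], Not(prop(xs[-1])))
    return step.check() == unsat

print(k_induction(1))  # True: x >= 0 is 1-inductive for this system
```

    A violation witness for such a system would, in essence, record the trace found in the base case, which is what the validators described above re-check independently.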