
    A Competency-based Approach toward Curricular Guidelines for Information Technology Education

    The Association for Computing Machinery and the IEEE Computer Society have launched a new report titled Curriculum Guidelines for Baccalaureate Degree Programs in Information Technology (IT2017). This paper discusses significant aspects of the IT2017 report and focuses on competency-driven learning rather than delivery of knowledge in information technology (IT) programs. It also highlights an IT curricular framework that meets the growing demands of a changing technological world over the next decade. Specifically, the paper outlines ways in which baccalaureate IT programs might implement the IT curricular framework and prepare students with the knowledge, skills, and dispositions that equip graduates with competencies that matter in the workplace. The paper suggests that a focus on competencies allows academic departments to forge collaborations with employers and engage students in professional practice experiences. It also shows how professionals and educators might use the report in reviewing, updating, and creating baccalaureate IT degree programs worldwide.

    The NYU Survey Service: Promoting Value in Undergraduate Education

    New York University's Data Service Studio has recently launched the NYU Survey Service, whose ultimate aim is to support the development and administration of surveys of all types. For the web-based component, we utilize a product called Qualtrics, which allows university affiliates to develop and administer web-based surveys. This article describes the process by which we at NYU came to offer the service at a time when concerns abound about the ability of libraries to support and expand services while still meeting imperatives such as robust data services. While many considerations went into this evaluation and the ultimate decision to pilot the service, we emphasize those most related to data and information literacy; undergraduate instruction, learning, and research; library collaborations; and application administration and support.
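    As a concrete illustration of the application-administration side of such a service, the sketch below lists an account's surveys through Qualtrics's public v3 REST API. It is a minimal sketch, not the NYU service's actual tooling: the datacenter ID and API token are placeholders, and error handling is deliberately thin.

        import requests

        # Placeholders: the datacenter ID and API token vary by Qualtrics account.
        DATACENTER = "yourdatacenterid"
        API_TOKEN = "your-api-token"

        BASE_URL = f"https://{DATACENTER}.qualtrics.com/API/v3"
        HEADERS = {"X-API-TOKEN": API_TOKEN}

        def list_surveys():
            """Return the surveys visible to this account via the v3 REST API."""
            response = requests.get(f"{BASE_URL}/surveys", headers=HEADERS)
            response.raise_for_status()
            # The v3 API wraps payloads in a "result" object with an "elements" list.
            return response.json()["result"]["elements"]

        for survey in list_surveys():
            print(survey["id"], survey["name"])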

    Tracing the creation and evaluation of accessible Open Educational Resources through learning analytics

    The adoption of Open Educational Resources (OER) has been growing continuously, and with it the need to address the diversity of students' learning needs. Accordingly, OER should meet characteristics such as web accessibility and quality, and teachers, as the creators of OER, need supporting tools and specialized competences. The main contribution of this thesis is a Learning Analytics Model to Trace the Creation and Evaluation of OER (LAMTCE) that considers web accessibility and quality. LAMTCE also includes a user model of the teacher's competences in the creation and evaluation of OER. In addition, we developed ATCE, a learning analytics tool based on the LAMTCE model. Finally, an evaluation was carried out with teachers involving the use of the tool, and we found that the tool genuinely benefited teachers in acquiring competences in the creation and evaluation of accessible, high-quality OER.
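    The abstract does not publish the LAMTCE schema, so the sketch below is a hypothetical illustration of how a learning analytics tool such as ATCE might record creation and evaluation events for an OER and derive a rough per-teacher competence estimate; every field name here is an assumption, not the thesis's actual data model.

        from dataclasses import dataclass, field
        from datetime import datetime, timezone

        @dataclass
        class OERTraceEvent:
            """Hypothetical trace record; field names are illustrative only."""
            teacher_id: str
            oer_id: str
            action: str                 # e.g. "create", "add_alt_text", "run_quality_check"
            accessibility_score: float  # 0.0-1.0, e.g. share of accessibility checks passed
            quality_score: float        # 0.0-1.0, per some quality rubric
            timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

        def competence_estimate(events: list[OERTraceEvent]) -> dict[str, float]:
            """Naive per-teacher competence estimate: mean of each event's scores."""
            per_teacher: dict[str, list[float]] = {}
            for e in events:
                score = (e.accessibility_score + e.quality_score) / 2
                per_teacher.setdefault(e.teacher_id, []).append(score)
            return {t: sum(s) / len(s) for t, s in per_teacher.items()}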

    An Organizational and Governance Model to Support Mass Collaborative Learning Initiatives

    Mass collaboration can bring about major transformative changes in the way people work collectively. This emerging paradigm promises significant economic and social benefits and enhanced efficiency across a range of sectors, including learning and education. Accordingly, this article introduces, demonstrates in use, and evaluates an organizational and governance model designed to provide guidance and execution support for the implementation and operation of mass collaborative learning initiatives. The design science research process is adopted to guide the design and development of the proposed model. The model stands on three streams of work addressing key aspects and elements that support community learning: (i) identifying the positive and negative factors in existing, active examples of mass collaboration; (ii) adopting contributions of collaborative networks in terms of structural and behavioral aspects; and (iii) establishing adequate learning assessment indicators and metrics. The model is applied to a case study in which vocational education and training meet the needs of collaborative education–enterprise approaches. The validation of the model is first verified by the partners and stakeholders of a particular project in the area of education–enterprise relations, to ensure that it is sufficiently appropriate for application in a digital platform developed by such projects. The first three steps of the proposed applicability evaluation (adequacy, feasibility, and effectiveness) are then performed. The positive results gained from the model's validation and applicability evaluation in this project indicate not only that the model is fairly adequate, feasible, and effective for application in the developed digital platform, but also that it has high potential for supporting and directing the creation, implementation, and operation of mass collaborative learning initiatives. Although the validation was carried out in the context of a single project, it was based on a large "focus group" of experts involved in this international initiative, in accordance with the design science research method. This article thus reflects applied research of a socio-technical nature, aiming to find guidelines and practical solutions to the specific issues, problems, and concerns of mass collaborative learning initiatives. Funding: Center of Technology and Systems (CTS-UNINOVA); Fundação para a Ciência e Tecnologia (project UIDB/00066/2020); European Commission ERASMUS+ (grant no. 2020-1-FR01-KA202-080231, ED-EN HUB).
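    The article defines its learning assessment indicators abstractly, so the formulas below are illustrative assumptions rather than the model's actual metrics; they sketch two simple indicators one might compute for a mass-collaborative learning community.

        # Hypothetical community-learning indicators; both formulas are assumptions.

        def participation_rate(active_members: int, registered_members: int) -> float:
            """Share of registered members who contributed during a period."""
            return active_members / registered_members if registered_members else 0.0

        def contribution_balance(contributions: list[int]) -> float:
            """Crude evenness proxy: 1.0 minus the top contributor's share,
            so a community dominated by one member scores near 0.0."""
            total = sum(contributions)
            if total == 0:
                return 0.0
            return 1.0 - max(contributions) / total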

    A preliminary study of the relationship between complexity, motivation, and design quality.

    This collection of work comprises a preliminary study of the relationships between product complexity, design motivation, and design quality. Complexity, as it relates to the design process, is largely undefined, and there exists no generally accepted method of measurement. This study applies an independent data set to a complexity measurement technique and develops complexity measurements at the pre-design and post-design stages. Pre-design is the stage at which design ideas are in formation and customer needs are being addressed; post-design is the stage at which a functional prototype has been realized, manufacturing and assembly processes have been considered, and the product design is considered finalized. Developing complexity measurements for both stages of design is critical to realizing lean design development. Additionally, this study investigates the effects of personal motivation on design quality outcomes. Drawing from the field of sociology, a survey tool is utilized to gauge an individual's motivation toward design as a serious leisure activity, where serious leisure is an activity from which participants glean an internal reward, pleasure, or satisfaction. Utilizing a proposed design quality survey, this study determines quality metrics based on customer needs, manufacturability, serviceability, and product fit and finish, and considers quality to be the ultimate measure of a design. The intersection of complexity, personal motivation, and design quality is of particular interest in this study, as it may provide insight into engineering team dynamics as they relate to design outcomes.
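    The abstract does not give the study's actual complexity measure or survey items, so the sketch below is an assumed illustration only: a toy complexity proxy based on part and interface counts, and an equal-weight average over the four quality dimensions the study names.

        def complexity_score(num_parts: int, num_interfaces: int) -> float:
            """Toy complexity proxy: parts plus part-to-part interfaces.
            (Assumption; the study's actual technique is not given here.)"""
            return float(num_parts + num_interfaces)

        def design_quality(scores: dict[str, float]) -> float:
            """Equal-weight mean over the four quality dimensions named in the study."""
            dims = ("customer_needs", "manufacturability", "serviceability", "fit_and_finish")
            return sum(scores[d] for d in dims) / len(dims)

        # Example: survey ratings on a 1-5 scale for one prototype.
        print(design_quality({"customer_needs": 4.2, "manufacturability": 3.8,
                              "serviceability": 4.0, "fit_and_finish": 3.5}))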

    Metaevaluation of a university teacher education assessment system.

    Metaevaluation is the evaluation of an evaluation or evaluation system (Scriven, 1969). It serves as a mechanism to ensure quality in evaluation approaches and implementation. Operationally, metaevaluation is defined as “the process of delineating, obtaining, and applying descriptive information and judgmental information – about the utility, feasibility, propriety, and accuracy of an evaluation and its systematic nature, competent conduct, integrity/honesty, respectfulness, and social responsibility – to guide the evaluation and/or report its strengths and weaknesses” (Stufflebeam, 2001, p. 185). This study was a metaevaluation of an assessment system designed to meet accreditation requirements and support continuous improvement in teacher education programs at the University of Louisville. It was intended as a formative metaevaluation to identify strengths and weaknesses in the University of Louisville College of Education and Human Development's (CEHD) teacher education assessment system, in order to improve the system and better support continuous improvement of teacher education programs. The study took careful account of accountability and accreditation requirements, as well as evaluation and metaevaluation standards and practices, and utilized Stufflebeam's (2001) structure for metaevaluation, which supports strategic and contextual analysis of the evaluation or evaluation system to address alignment with stakeholders' needs. It employed mixed methods to address four research questions concerning the application of data from the CEHD's assessment system in driving program improvement, and the reliability and validity of the instruments used in the system.

    The first research question focused on identifying the types of assessments that best support program improvement in teacher education. A qualitative case study analysis revealed a lack of explicit connections to data within the CEHD's SLO action plans, in which faculty identify plans for improving programs. Implied connections to data included references to the 10 Unit Key Assessments, Hallmark Assessment Tasks (HATs), and indirect assessment data (QMS student satisfaction survey data). These results indicate that a variety of assessments support program improvement, in alignment with CAEP standards (2013), the American Evaluation Association (2013), and the Joint Committee on Standards for Educational Evaluation (2011), all of which hold that multiple measures are necessary in sound evaluation and evaluation systems. The study resulted in recommendations to modify SLO templates and action plan prompts to ensure more explicit connections of data to the action plans and follow-through on those plans.

    The second research question was intended to identify how assessment data are used to drive continuous improvement in teacher education programs. The qualitative case study review of SLO action plans, and of reflections on the previous year's plans for improvement, identified actions in the areas of curriculum, faculty development, assessments, field and clinical experiences, and candidate performance. These findings demonstrated a real strength of the CEHD's assessment system: it is driving continuous program improvement. One suggested improvement was increased documentation of follow-through on actions within the current assessment system structures.

    The third research question pertained to the reliability of instruments used across programs. The analysis revealed no concerns regarding the reliability of instruments across programs. The CEHD is encouraged to incorporate continued training and collaborative sessions to dissect and practice the application of instruments to ensure reliability over time; this is especially important as programs revise instruments, assessors matriculate, and assessment contexts change. The fourth and final research question reviewed the construct validity of instruments in the CEHD assessment system as aligned with the CEHD's conceptual framework. The study revealed adequate construct validity related to measuring critical thinking, problem solving, and professional leadership, but also revealed potential concerns regarding discriminant validity. To address these findings, it has been recommended that the CEHD transition to 4-point rubrics instead of the 3-point rubrics currently used in the assessment system; the study outlines next steps for making that transition.

    In conclusion, this study identified strengths in the reliability of instrumentation and the strategic application of data. Areas for improvement include revising instruments to better differentiate between performance levels and outcomes, and revising SLO processes and templates to ensure more explicit connections between data and decision making. Ultimately, this metaevaluation identified the most pertinent next steps for CEHD administrators, faculty, and staff in improving the assessment system to drive continuous program improvement in alignment with the Council for the Accreditation of Educator Preparation (CAEP) and the Kentucky Education Professional Standards Board (EPSB) accreditation processes.
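    The abstract does not name the reliability statistic used, so as a stand-in the sketch below computes Cronbach's alpha, a standard internal-consistency measure, for a rubric-style instrument; the sample ratings are invented for illustration.

        import numpy as np

        def cronbach_alpha(item_scores: np.ndarray) -> float:
            """Cronbach's alpha for an (n_respondents x n_items) score matrix:
            alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
            k = item_scores.shape[1]
            item_vars = item_scores.var(axis=0, ddof=1)
            total_var = item_scores.sum(axis=1).var(ddof=1)
            return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

        # Invented example: 5 assessors scoring 4 rubric items on a 3-point scale.
        ratings = np.array([[3, 2, 3, 2],
                            [2, 2, 3, 2],
                            [3, 3, 3, 3],
                            [1, 2, 2, 1],
                            [2, 1, 2, 2]])
        print(round(cronbach_alpha(ratings), 3))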

    Leveraging data for student success

    People providing services to schools, teachers, and students want to know whether these services are effective. With that knowledge, a project director can expand services that work well and adjust implementation of activities that are not working as expected. When finding that an innovative strategy benefits students, a project director might want to share that information with other service providers who could build upon that strategy. Some organizations that fund programs for students will want a report demonstrating the program’s success. Determining whether a program is effective requires expertise in data collection, study design, and analysis. Not all project directors have this expertise—they tend to be primarily focused on working with schools, teachers, and students to undertake program activities. Collecting and obtaining student-level data may not be a routine part of the program. This book provides an overview of the process for evaluating a program. It is not a detailed methodological text but focuses on awareness of the process. What do program directors need to know about data and data analysis to plan an evaluation or to communicate with an evaluator? Examples focus on supporting college and career readiness programs. Readers can apply these processes to other studies that include a data collection component.
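    As a minimal illustration of the kind of analysis the book describes at a conceptual level, the sketch below compares outcomes for served and comparison students with a two-sample test; the scores and group design are invented assumptions, not an example from the book.

        from scipy import stats

        # Invented data: readiness scores for students who received program
        # services versus a comparison group.
        program_group    = [78, 85, 82, 90, 74, 88, 81, 79]
        comparison_group = [72, 80, 75, 83, 70, 77, 74, 76]

        # Welch's two-sample t-test (does not assume equal variances).
        t_stat, p_value = stats.ttest_ind(program_group, comparison_group, equal_var=False)
        print(f"t = {t_stat:.2f}, p = {p_value:.3f}")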