Taxation in the United States.
Thesis (M.B.A.)--Boston University
Describing and understanding the enacted curriculum of selected Grade 10 Life Science teachers in the Western Cape, South Africa
This study was conducted in a school in the Western Cape, South Africa, situated in a community where learners came from difficult social backgrounds. Previous research has alluded to the challenges faced by teachers equipped with inadequate skills and lacking effective modelling or mentoring to implement a formal curriculum that is outcomes-based and learner-centred. The focus of the study was to uncover the enacted curriculum (and the underlying reasons for the enactment) of four Grade 10 Life Sciences teachers. This multiple case study is based on data collection strategies that included video and audio transcripts of the lessons as well as additional relevant documents such as notes from lesson observations and learner notebooks. These data were coded using NUDIST and then further analysed using the Pedagogic Content Knowledge (PCK) evidence-reporting table (PCK ERT). Interviews were conducted before the teaching events to allow content representations (CoRes) to be developed. Overall, the teachers lacked planning and the habit of reflection in and on practice. Hence video-stimulated interviews conducted after the teaching events allowed Pedagogical and Professional experience Repertoires (PaP-eRs) to be developed in order to describe, from the teachers' perspective, what teachers did and why they did it. Teachers had varying backgrounds and experience and displayed highly individualised and different enactments of the curriculum, but all used a consistently didactic approach in their teaching. The absence of teacher efficacy and the lack of integration of the PCK components limited any meaningful transformation of the content and hence resulted in weak PCK. The relevance of the PCK ERT as a descriptive framework for PCK in the context of this research is questioned on epistemic grounds.
Factors identified as constraining the enacted practices of teachers included teachers' beliefs, orientations, poor Subject Matter Knowledge (SMK), school context, and their perceptions of learners.
Error assessment of National Water Model analysis & assimilation and short-range forecasts
Flooding is the costliest natural disaster in the United States and tragically often leads to loss of life. Flood prediction, response and mitigation are therefore critical areas of research and have been for many decades. Hydrologic and hydraulic models are a key component of flood prediction methods and highly detailed models have been implemented in many areas of high risk which often correspond to areas with high population. However, the high cost and complexity of highly detailed models means that many areas of the US are not covered by flood prediction early warning systems. Recent increases in computational power and increased resolution and coverage of remotely sensed data have allowed for the development of a continental scale streamflow prediction system known as the National Water Model which is currently forecasting streamflow values for over 2.7 million stream reaches across the US.
Flood inundation predictions can be derived from the National Water Model using digital elevation data to extract reach-scale rating curves and therefore river stage height. Using the height-above-nearest-drainage (HAND) method, flood inundation maps can be created from the stage height at relatively low computational cost at continental scale.
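As a rough illustration of the HAND idea described above, cells whose height above the nearest drainage falls below the predicted stage height are flagged as inundated. The raster values and stage height below are made up for illustration; they are not National Water Model outputs:

```python
import numpy as np

# Hypothetical HAND raster (metres above nearest drainage) for a tiny tile.
# In practice this would be derived from a DEM via flow-direction analysis.
hand = np.array([
    [0.2, 0.8, 2.5],
    [0.5, 1.4, 3.0],
    [1.1, 2.2, 4.6],
])

def inundation_map(hand_raster, stage_height):
    """Flag cells whose height above nearest drainage is below the
    predicted stage height as inundated."""
    return hand_raster < stage_height

flooded = inundation_map(hand, stage_height=1.5)
print(flooded.sum())  # number of inundated cells
```

The appeal of this approach is that once the HAND raster is precomputed, mapping any new stage prediction is a single thresholding pass, which is what makes continental-scale application feasible.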
The National Water Model currently operates as a deterministic model for short-term predictions and does not include an estimate of the uncertainty in those predictions. The final streamflow values sit at the end of a chain of models that begins with precipitation forecasts and passes through rainfall-runoff and, finally, routing modules. The total uncertainty in the streamflow predictions is therefore a function of the uncertainty in each step.
Uncertainty analysis commonly relies on an assessment of uncertainty in model parameters and boundary conditions, on the use of perturbed inputs, or on comparison of several different models of the same system. Estimated uncertainty from the first model in a chain can then be propagated to the next model and so on until a final estimate is achieved. Unfortunately, the National Water Model is operated on a supercomputer and the details of the model are not available for perturbation analysis.
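A minimal sketch of the perturbed-input approach to propagating uncertainty through a model chain, assuming a toy two-stage chain with made-up coefficients (the actual NWM modules are not available for this kind of analysis, as noted above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy chain: precipitation forecast -> runoff -> routed streamflow.
# Coefficients are illustrative, not taken from the National Water Model.
def runoff(precip):
    return 0.4 * precip          # simple runoff coefficient

def routed_flow(q_in):
    return 0.9 * q_in + 1.0      # crude routing/attenuation step

# Perturb the precipitation input and push each sample through the chain;
# the spread of the outputs reflects the accumulated uncertainty.
precip_samples = rng.normal(loc=20.0, scale=3.0, size=10_000)
flow_samples = routed_flow(runoff(precip_samples))

print(flow_samples.mean(), flow_samples.std())
```

Each stage scales the input uncertainty, so the output spread (here roughly 0.9 × 0.4 × the precipitation spread) summarises how error compounds along the chain.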
One step in the National Water Model hourly cycle is the assimilation of USGS gage data, which allows for corrections to the model state before the forecast simulation is made. This excludes USGS gage data from being used as a verification dataset. Even so, it is still an informative exercise to compare NWM predictions at these sites. There are numerous local and regional gaging stations which are not assimilated into the National Water Model and can be used as an independent check on the model output. Recent flooding in the Llano River basin in central Texas provides an opportunity to compare National Water Model predictions to both USGS and non-USGS gage readings. This thesis presents an assessment of the error in National Water Model predictions in the Llano River basin.
Civil, Architectural, and Environmental Engineering
Code Complexity in Introductory Programming Courses
Instructors of introductory programming courses would benefit from having a metric for evaluating the sophistication of student code. Since introductory programming courses pack a wide spectrum of topics in a short timeframe, student code changes quickly, raising questions of whether existing software complexity metrics effectively reflect student growth as reflected in their code. We investigate code produced by over 800 students in two different Python-based CS1 courses to determine if frequently used code quality and complexity metrics (e.g., cyclomatic and Halstead complexities) or metrics based on length and syntactic complexity are more effective as a heuristic for gauging students' progress through a course. We conclude that the traditional metrics do not correlate well with time passed in the course. In contrast, metrics based on syntactic complexity and solution size correlate strongly with time in the course, suggesting that they may be more appropriate for evaluating how student code evolves in a course context.
Peer reviewed
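As a sketch of what a syntactic-complexity measure might look like (the exact metrics used in the study are not reproduced here), counting AST nodes with Python's standard `ast` module gives a crude size-and-structure score that grows as student programs become more elaborate:

```python
import ast

def ast_node_count(source):
    """Count AST nodes as a crude proxy for syntactic complexity.
    This is an illustrative metric, not the study's actual measure."""
    return sum(1 for _ in ast.walk(ast.parse(source)))

# Typical early-course vs. later-course snippets (made up for illustration).
early = "print(1 + 2)"
later = (
    "def mean(xs):\n"
    "    total = 0\n"
    "    for x in xs:\n"
    "        total += x\n"
    "    return total / len(xs)\n"
)
print(ast_node_count(early), ast_node_count(later))
```

A metric like this captures growth in program structure directly, whereas cyclomatic complexity, for instance, stays flat for straight-line code no matter how much longer it gets.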
Art and Artifacts: Immersive and Interactive Technology in the Preservation and Engagement of Built Cultural Heritage
This session explores a global collaborative project in the visualization of the State of Qatar’s built cultural heritage, which expanded into the integration of interactive and immersive technologies to examine the shifting roles of viewer and participant. This breakout session presents the mechanisms of our collaboration and the demonstrative outcomes of our research, along with strategies used to engage participants in the syncretic territory between art and archaeology.
The Effect of Civic Knowledge and Attitudes on CS Student Work Preferences
We present an investigation into the connection between computing students' civic knowledge, attitudes, or self-efficacy and their willingness to work on civic technologies. Early results indicate that these factors are related to a willingness to accept government work in technology but not non-profit work focused on civic technologies.
Perceived Risk, Product Returns, and Optimal Resource Allocation: Evidence from a Field Experiment
Relatively few retailers include metrics such as product returns in their customer selection and optimal resource allocation algorithms when measuring and maximizing customer value. Even when they do include this metric, increases in product return behavior are usually considered merely an economic cost that must be managed by decreasing the marketing resource allocations toward the customers making the returns. However, recent research has suggested that satisfactory product return experiences can actually benefit firms by lowering the customer’s perceived risk of current and future purchases. To better understand the role of this perceived risk in the firm–customer exchange process, the authors conduct a large-scale customer selection and optimal resource allocation field experiment with 26,000 customers from an online retailer over six months. They find that the firm is able to increase both its short- and long-term profits when accounting for the perceived risk related to product returns in addition to managing product return costs. Furthermore, the authors find that by including this risk, rather than simply implementing traditional customer lifetime value–based models generically, the firm can target more profitable customers.