1,682 research outputs found
A Recommended Program Of Principal-Teacher Supervision For Center Point School, Pittsburg, Texas
Background and development of Modern Supervision
Notwithstanding the fact that we have a different type of supervision today than was practiced when public schools in America were first established, supervision has existed ever since the formation of the first public school system. When the General Court of Boston, directed by the selectmen, was instructed to secure teachers with certain religious training, a type of supervision was being carried on. That custom was followed for nearly fifty years, at which time the teachers came under more formal supervision. The schools, using Boston as an illustration, were supervised by a committee, which was appointed to visit the school, inspect the plant and equipment, and examine the achievement of the pupils. These supervisors did not criticize the teachers, nor did they advise them.
It is noticeable that by 1714 supervision in the United States had passed through three phases. During the first phase, the supervisors selected the teachers. During the second phase, the supervisors selected the teachers and inspected the plant and the equipment. During the third phase, they had the added responsibility of criticizing and advising the teachers. By 1721 supervision was done by the selectmen and any others that they decided to invite.
Event-related potentials reveal the development of stable face representations from natural variability
Natural variability between instances of unfamiliar faces can make it difficult to reconcile two images as the same person. Yet for familiar faces, effortless recognition occurs even with considerable variability between images. To explore how stable face representations develop, we employed incidental learning in the form of a face sorting task. In each trial, multiple images of two facial identities were sorted into two corresponding piles. Following the sort, participants showed evidence of having learnt the faces, performing more accurately on a matching task with seen than unseen identities. Furthermore, ventral temporal event-related potentials were more negative in the N250 time range for previously-seen than previously-unseen identities. These effects appear to demonstrate some degree of abstraction, rather than simple picture learning, as the neurophysiological and behavioural effects were observed with novel images of the previously-seen identities. The results provide evidence of the development of facial representations, allowing a window onto natural mechanisms of face learning
Sentinel node biopsy should be supplemented by axillary sampling in patients with small breast cancers
Axillary clearance provides important prognostic information but is associated with significant morbidity. Sentinel node biopsy can provide staging. 141 patients with node-negative early breast cancers (tumour size less than 1.5 cm measured clinically or by imaging) underwent guided axillary sampling (sentinel lymph node biopsy in combination with axillary sampling). Four-node axillary sampling improved the detection rate of axillary node metastases by 13.6% compared with blue dye sentinel node biopsy alone. Positive sampled nodes strongly indicated the likelihood of further metastatic disease being revealed by axillary dissection (67%). Negative sampled nodes in combination with a positive sentinel node biopsy were associated with a much lower rate of further nodal involvement in the axillary clearance (8%).
Semantic Sentiment Analysis of Twitter Data
Internet and the proliferation of smart mobile devices have changed the way
information is created, shared, and spreads, e.g., microblogs such as Twitter,
weblogs such as LiveJournal, social networks such as Facebook, and instant
messengers such as Skype and WhatsApp are now commonly used to share thoughts
and opinions about anything in the surrounding world. This has resulted in the
proliferation of social media content, thus creating new opportunities to study
public opinion at a scale that was never possible before. Naturally, this
abundance of data has quickly attracted business and research interest from
various fields including marketing, political science, and social studies,
among many others, which are interested in questions like these: Do people like
the new Apple Watch? Do Americans support ObamaCare? How do the Scottish feel
about Brexit? Answering these questions requires studying the sentiment of
opinions people express in social media, which has given rise to the fast
growth of the field of sentiment analysis in social media, with Twitter being
especially popular for research due to its scale, representativeness, variety
of topics discussed, as well as ease of public access to its messages. Here we
present an overview of work on sentiment analysis on Twitter.
Comment: Microblog sentiment analysis; Twitter opinion mining; In the Encyclopedia on Social Network Analysis and Mining (ESNAM), Second edition. 201
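Sentiment analysis of this kind typically reduces to scoring message polarity. As a toy illustration of the idea, not the method of any survey cited here, a minimal lexicon-based scorer for short messages might look like the following; the word lists are illustrative assumptions, not a published lexicon:

```python
# Minimal lexicon-based sentiment scorer for short messages such as tweets.
# The word lists below are illustrative assumptions, not a real lexicon.
POSITIVE = {"like", "love", "great", "support", "good"}
NEGATIVE = {"hate", "bad", "awful", "oppose", "terrible"}

def sentiment(text: str) -> str:
    """Classify a message as positive, negative, or neutral by word counts."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love the new Apple Watch"))  # positive
```

Real systems replace the hand-built lexicon with learned classifiers and handle negation, emoticons, and hashtags, but the scoring loop is the same basic shape.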
Comparison of techniques for handling missing covariate data within prognostic modelling studies: a simulation study
Background: There is no consensus on the most appropriate approach to handle missing covariate data within prognostic modelling studies. Therefore a simulation study was performed to assess the effects of different missing data techniques on the performance of a prognostic model.
Methods: Datasets were generated to resemble the skewed distributions seen in a motivating breast cancer example. Multivariate missing data were imposed on four covariates using four different mechanisms; missing completely at random (MCAR), missing at random (MAR), missing not at random (MNAR) and a combination of all three mechanisms. Five amounts of incomplete cases from 5% to 75% were considered. Complete case analysis (CC), single imputation (SI) and five multiple imputation (MI) techniques available within the R statistical software were investigated: a) data augmentation (DA) approach assuming a multivariate normal distribution, b) DA assuming a general location model, c) regression switching imputation, d) regression switching with predictive mean matching (MICE-PMM) and e) flexible additive imputation models. A Cox proportional hazards model was fitted and appropriate estimates for the regression coefficients and model performance measures were obtained.
Results: Performing a CC analysis produced unbiased regression estimates, but inflated standard errors, which affected the significance of the covariates in the model with 25% or more missingness. Using SI underestimated the variability, resulting in poor coverage even with 10% missingness. Of the MI approaches, applying MICE-PMM produced, in general, the least biased estimates, better coverage for the incomplete covariates and better model performance for all mechanisms. However, this MI approach still produced biased regression coefficient estimates for the incomplete skewed continuous covariates when 50% or more cases had missing data imposed with a MCAR, MAR or combined mechanism. When the missingness depended on the incomplete covariates, i.e. MNAR, estimates were biased with more than 10% incomplete cases for all MI approaches.
Conclusion: The results from this simulation study suggest that MICE-PMM may be the preferred MI approach, provided that less than 50% of the cases have missing data and the missing data are not MNAR.
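The predictive mean matching idea behind MICE-PMM can be sketched for a single incomplete variable: regress the incomplete variable on a complete one, predict for both observed and missing cases, and impute each missing case with the observed value of a "donor" whose prediction is closest. This is a deliberately simplified, deterministic sketch (real MICE cycles over all incomplete variables and draws donors at random from the k nearest); the function name and data are hypothetical:

```python
import statistics

def pmm_impute(x_obs, y_obs, x_mis, k=3):
    """Predictive mean matching for one incomplete variable y given complete x.

    Fits a simple least-squares line on complete cases, predicts y for both
    observed and missing cases, and imputes each missing case with the
    observed y of the donor whose predicted value is closest (real PMM draws
    at random among the k closest donors; we take the closest for determinism).
    """
    mx, my = statistics.fmean(x_obs), statistics.fmean(y_obs)
    slope = (sum((x - mx) * (y - my) for x, y in zip(x_obs, y_obs))
             / sum((x - mx) ** 2 for x in x_obs))
    intercept = my - slope * mx
    pred_obs = [intercept + slope * x for x in x_obs]
    imputed = []
    for x in x_mis:
        p = intercept + slope * x
        # k observed cases with predicted values closest to this prediction
        donors = sorted(range(len(pred_obs)),
                        key=lambda i: abs(pred_obs[i] - p))[:k]
        imputed.append(y_obs[donors[0]])
    return imputed

# A missing case at x = 2.1 borrows the observed y of its nearest donor:
print(pmm_impute([1, 2, 3, 4], [2, 4, 6, 8], [2.1]))  # [4]
```

Because imputed values are always actually-observed values, PMM preserves skewed marginal distributions better than imputing from a normal model, which is consistent with its good performance on the skewed covariates in the study above.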
REFERQUAL: A pilot study of a new service quality assessment instrument in the GP Exercise Referral scheme setting
Background
The development of an instrument accurately assessing service quality in the GP Exercise Referral Scheme (ERS) industry could potentially inform scheme organisers of the factors that affect adherence rates leading to the implementation of strategic interventions aimed at reducing client drop-out.
Methods
A modified version of the SERVQUAL instrument was designed for use in the ERS setting and subsequently piloted amongst 27 ERS clients.
Results
Test-retest correlations were calculated via Pearson's r or Spearman's rho, depending on whether the variables were normally distributed, and showed significant (mean r = 0.957, SD = 0.02, p < 0.05; mean rho = 0.934, SD = 0.03, p < 0.05) relationships for all items within the questionnaire. In addition, satisfactory internal consistency was demonstrated via Cronbach's α. Furthermore, clients responded favourably towards the usability, wording and applicability of the instrument's items.
Conclusion
REFERQUAL shows promise as a suitable tool for future evaluation of service quality within the ERS community. Future research should further assess the validity and reliability of this instrument through confirmatory factor analysis to scrutinise the proposed dimensional structure.
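The two test-retest statistics reported above are standard: Pearson's r correlates the raw scores from the two administrations, while Spearman's rho applies the same formula to their ranks. A minimal self-contained sketch (the toy scores are invented for illustration; Spearman here omits the tie correction):

```python
import math
from statistics import fmean

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    mx, my = fmean(x), fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x)
                    * sum((b - my) ** 2 for b in y))
    return num / den

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson's r on ranks (no tie correction)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = float(rank)
        return r
    return pearson_r(ranks(x), ranks(y))

# Hypothetical item scores from the same respondents on two occasions:
test1 = [4, 5, 3, 4, 2, 5]
test2 = [4, 5, 3, 5, 2, 5]
print(round(pearson_r(test1, test2), 3), round(spearman_rho(test1, test2), 3))
```

The choice between the two in the study follows the usual rule: Pearson for normally distributed item scores, Spearman otherwise.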
WiseEye: next generation expandable and programmable camera trap platform for wildlife research
Funding: The work was supported by the RCUK Digital Economy programme to the dot.rural Digital Economy Hub; award reference: EP/G066051/1. The work of S. Newey and RJI was part funded by the Scottish Government's Rural and Environment Science and Analytical Services (RESAS). Details published as an Open Source Toolkit, PLOS Journals, at: http://dx.doi.org/10.1371/journal.pone.0169758. Peer reviewed.
Is new drug prescribing in primary care specialist induced?
<p>Abstract</p> <p>Background</p> <p>Medical specialists are often seen as the first prescribers of new drugs. However, the extent to which specialists influence new drug prescribing in primary care is largely unknown.</p> <p>Methods</p> <p>This study estimates the influence of medical specialists on new drug prescribing in primary care shortly after market introduction. The influence of medical specialists on prescribing of five new drugs was measured in a cohort of 103 GPs, working in 59 practices, over the period 1999 until 2003. The influence of medical specialists on new drug prescribing in primary care was assessed using three outcome measures. Firstly, the proportion of patients receiving their first prescription for a new or reference drug from a specialist. Secondly, the proportion of GPs prescribing new drugs before any specialist prescribes to their patients. Thirdly, we compared the time until the GP's first own prescribing between GPs who waited for prescriptions from specialists and those who did not.</p> <p>Results</p> <p>The influence of specialists showed considerable differences among the new drugs studied. The proportion of patients receiving their first prescription from a specialist was greatest for the combination salmeterol/fluticasone (60.2%), and lowest for rofecoxib (23.0%). The proportion of GPs prescribing new drugs before waiting for prescriptions from medical specialists ranged from 21.1% in the case of esomeprazole to 32.9% for rofecoxib. Prescribing new drugs by specialists did not shorten the GP's own time to prescribing.</p> <p>Conclusion</p> <p>This study shows that the influence of medical specialists is clearly visible for all new drugs and often greater than for the existing older drugs, but the rapid uptake of new drugs in primary care does not seem specialist induced in all cases. 
GPs are responsible for a substantial amount of all early prescriptions for new drugs, and for a subpopulation of patients specialist endorsement is not a prerequisite for initiating new drug prescribing. This contradicts the idea that the diffusion of newly marketed drugs always follows a two-step model, with medical specialists as the innovators and GPs as the followers.</p>
Genetic Structure Among 50 Species of the Northeastern Pacific Rocky Intertidal Community
Comparing many species' population genetic patterns across the same seascape can identify species with different levels of structure, and suggest hypotheses about the processes that cause such variation for species in the same ecosystem. This comparative approach helps focus on geographic barriers and selective or demographic processes that define genetic connectivity on an ecosystem scale, the understanding of which is particularly important for large-scale management efforts. Moreover, a multispecies dataset has great statistical advantages over single-species studies, lending explanatory power in an effort to uncover the mechanisms driving population structure. Here, we analyze a 50-species dataset of Pacific nearshore invertebrates with the aim of discovering the most influential structuring factors along the Pacific coast of North America. We collected cytochrome c oxidase I (COI) mtDNA data from populations of 34 species of marine invertebrates sampled coarsely at four coastal locations in California, Oregon, and Alaska, and added published data from 16 additional species. All nine species with non-pelagic development have strong genetic structure. For the 41 species with pelagic development, 13 show significant genetic differentiation, nine of which show striking FST levels of 0.1–0.6. Finer scale geographic investigations show unexpected regional patterns of genetic change near Cape Mendocino in northern California for five of the six species tested. The region between Oregon and Alaska is a second focus of intraspecific genetic change, showing differentiation in half the species tested. Across regions, strong genetic subdivision occurs more often than expected in mid-to-high intertidal species, a result that may reflect reduced gene flow due to natural selection along coastal environmental gradients. 
Finally, the results highlight the importance of making primary research accessible to policymakers, as unexpected barriers to marine dispersal break the coast into separate demographic zones that may require their own management plans.
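For context, the FST values cited above measure the fraction of total genetic variance attributable to differences between populations: FST = (HT − HS)/HT, where HS is the mean expected heterozygosity within subpopulations and HT that of the pooled population. A minimal sketch for a single biallelic locus, assuming equal weighting of subpopulations (a simplification of the estimators actually used with COI sequence data):

```python
def fst(p_subpops):
    """Wright's F_ST for one biallelic locus from subpopulation allele
    frequencies, weighting subpopulations equally.

    F_ST = (H_T - H_S) / H_T, where H_S is the mean within-subpopulation
    expected heterozygosity and H_T the pooled expected heterozygosity.
    """
    k = len(p_subpops)
    h_s = sum(2 * p * (1 - p) for p in p_subpops) / k   # mean within-pop H
    p_bar = sum(p_subpops) / k                          # pooled frequency
    h_t = 2 * p_bar * (1 - p_bar)                       # pooled H
    return (h_t - h_s) / h_t

# Two strongly diverged subpopulations give a high F_ST:
print(round(fst([0.9, 0.1]), 3))  # 0.64
```

Values of 0.1–0.6, as reported for several pelagic developers above, therefore indicate substantial restriction of gene flow despite a dispersive larval stage.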