Mouse of the Schoolyard
This thesis consists of two parts that show the diversity and flexibility of the student as a writer.
The original story, drawn from a vast array of childhood memories, was drafted first, but on its own it was too short to fulfill the requirements for a culminating project. The richness of the characters, as well as the story itself, called out for additional attention. An attempt to expand the story into a larger prose work proved an exercise in futility: I felt it could not survive the additions that would have been required, and the result was mired in mindless bulk and boring dialogue. After developing and sketching the key characters, I decided instead to adapt the story into a movie screenplay.
Based predominantly on a series of non-fiction events from my childhood years spent in attendance at Corpus Christi Catholic Elementary School, the story concerns a young boy who suffers at the hands of a group of bullies who plague him for several years. Reaching a near breaking point and in the depths of despair, he is befriended by an old nun who teaches at the adjoining high school. Through a series of twists and turns, the young boy eventually meets his attackers head-on, growing up in the process and learning invaluable life-lessons from his new and unusual teacher along the way.
The adaptation to screenplay provided the opportunity to add some comedic relief to an otherwise dark piece of non-fiction while retaining the original storyline and its message of hope. It also allowed greater depth in the characters to shine through and brought to the forefront characters who, in the story version, had been vague and transparent.
The end result is the submitted screenplay based on the original short story. The inclusion of the story in the appendix will allow comparison of the two, each hopefully entertaining in its own form.
Professional Fees and Other Direct Costs in Chapter 7 Business Liquidations
Here we provide a comprehensive look at direct costs associated with chapter 7 business bankruptcy liquidations by examining cases from five geographically dispersed judicial districts. This Article measures chapter 7 direct costs, which are essentially out-of-pocket administrative costs associated with chapter 7 proceedings. Examples of direct costs include attorneys' fees, filing fees, and other professional fees. Part II describes the procedures we followed in gathering data. Part II also sets forth several assumptions we made when describing direct costs. Each time an assumption needed to be made, we adopted the assumption that produced the lowest bankruptcy costs. Accordingly, the data in this Article is a conservative estimate of chapter 7 costs. Part III describes the characteristics of the debtors and cases in our sample. Part IV discusses the actual cost measurements and quantifies the costs of chapter 7 business bankruptcy. Part V uses statistical analysis to identify determinants of chapter 7 costs. Part VI summarizes our major conclusions.
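Part V's determinants analysis is only named here, not specified; the following is a purely illustrative sketch of what such an analysis could look like, in which the file name, column names, and model form are all assumptions rather than anything taken from the Article.

```python
# Illustrative sketch only: column names and the regression form are assumptions.
import pandas as pd
import statsmodels.formula.api as smf

cases = pd.read_csv("chapter7_cases.csv")  # hypothetical case-level file

# Direct costs are out-of-pocket administrative costs: attorneys' fees,
# filing fees, and other professional fees summed per case.
cases["direct_costs"] = (
    cases["attorney_fees"] + cases["filing_fees"] + cases["other_prof_fees"]
)

# One plausible determinants model: regress direct costs on case
# characteristics such as asset size, case duration, and judicial district.
model = smf.ols(
    "direct_costs ~ total_assets + months_in_chapter7 + C(district)",
    data=cases,
).fit()
print(model.summary())
```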
Project Maths academy: Using Khan Academy's exercise platform as an educational aid in a post-primary mathematics classroom
The focus of this paper is a First Year post-primary mathematics class in which Khan Academy's online exercise platform was used weekly for an academic year. Interviews were conducted with the teacher, and students were surveyed about their opinions of the platform and its usage. Regular classroom observations took place to gain an insight into the context of these opinions. A subsequent survey compared these students' attitudes towards mathematics classes with those of their peers who were not using the platform. Test scores were compared between three classes (one using Khan Academy; two not) to ascertain whether the platform had any effect on student performance. The platform was found to be an invaluable tool for class management: the teacher was able to provide the capable students with enough work while attending to students in need of support. Students enjoyed their time spent on the platform, and the more capable students were able to work at their own pace and tackle more challenging exercises. Test results show that the platform may have had a negative effect on student performance in the areas of integers and probability, but a positive effect in coordinate geometry. We comment on the evidence for statistically significant differences between the general results of those using the platform and those who were not.
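The abstract does not state which statistical tests were used; as a hedged sketch (with placeholder scores, not the study's data), a comparison of per-student test scores across the three classes might look like this:

```python
# Hedged sketch: the study's actual statistical procedure is not given here,
# and these score arrays are placeholders, not the study's data.
import numpy as np
from scipy import stats

khan_class = np.array([62, 71, 55, 80, 68, 74])  # class using Khan Academy
control_a = np.array([58, 66, 70, 61, 73, 65])   # comparison class 1
control_b = np.array([64, 59, 72, 67, 60, 69])   # comparison class 2

# Overall test for any difference among the three classes.
f_stat, p_anova = stats.f_oneway(khan_class, control_a, control_b)

# Pairwise comparison of the Khan Academy class against each other class.
t_a, p_a = stats.ttest_ind(khan_class, control_a, equal_var=False)
t_b, p_b = stats.ttest_ind(khan_class, control_b, equal_var=False)

print(f"ANOVA: F={f_stat:.2f}, p={p_anova:.3f}")
print(f"vs class A: t={t_a:.2f}, p={p_a:.3f}; vs class B: t={t_b:.2f}, p={p_b:.3f}")
```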
A Mixed-Methods Study Assessing Special Education Preservice Candidates' Preparedness for Their First Year of Teaching
This study employed a Likert-type survey, Praxis/Pathwise written observations, and guided and open-ended reflections to assess perceptions of preparedness for the first year of teaching among special education student teaching candidates. Cooperating teachers completed the survey and Praxis/Pathwise observations. University supervisors completed Praxis/Pathwise observations and responded to and analyzed guided and open-ended reflections. The survey instrument was based on the research literature and included responsibilities typically required of special educators (e.g., completing paperwork, planning, assessment). Results indicated general congruence among the three data sources, but also indicated that two cooperating teachers were reluctant to provide negative feedback, signaling to university supervisors a need to provide guidance and assurance of the value of less positive assessments of their student teachers' preparedness. This ongoing research study supports efforts toward accreditation and program improvement. The methods may be generalized to other programs, even when the actual data collection instruments differ.
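As an illustrative sketch only (the rating scale, item names, and agreement statistic below are assumptions, not the study's instruments), congruence between two of the data sources could be checked with a simple rank correlation across shared items:

```python
# Hedged sketch: hypothetical Likert ratings (1-5) on shared items from a
# cooperating teacher and a university supervisor for one candidate.
import numpy as np
from scipy import stats

items = ["paperwork", "planning", "assessment", "behavior_mgmt", "collaboration"]
cooperating_teacher = np.array([5, 4, 5, 4, 5])
university_supervisor = np.array([4, 4, 5, 3, 4])

# Spearman correlation as one rough indicator of congruence between sources.
rho, p = stats.spearmanr(cooperating_teacher, university_supervisor)
print(f"Spearman rho={rho:.2f} (p={p:.3f}) across {len(items)} items")
```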
Introducing NCL-SM: A Fully Annotated Dataset of Images from Human Skeletal Muscle Biopsies
Single cell analysis of skeletal muscle (SM) tissue is a fundamental tool for
understanding many neuromuscular disorders. For this analysis to be reliable
and reproducible, identification of individual fibres within microscopy images
(segmentation) of SM tissue should be precise. There is currently no tool or
pipeline that makes automatic and precise segmentation and curation of images
of SM tissue cross-sections possible. Biomedical scientists in this field rely
on custom tools and general machine learning (ML) models, both followed by
labour intensive and subjective manual interventions to get the segmentation
right. We believe that automated, precise, reproducible segmentation is
possible by training ML models. However, there are currently no good quality,
publicly available annotated imaging datasets for ML model training.
In this paper we release NCL-SM: a high quality bioimaging dataset of 46 human
tissue sections from healthy control subjects and from patients with
genetically diagnosed muscle pathology. These images include 50k manually
segmented muscle fibres (myofibres). In addition we also curated high quality
myofibres and annotated reasons for rejecting low quality myofibres and regions
in SM tissue images, making this data completely ready for downstream analysis.
This, we believe, will pave the way for development of a fully automatic
pipeline that identifies individual myofibres within images of tissue sections
and, in particular, also classifies individual myofibres that are fit for
further analysis.
Comment: Extended Abstract presented at Machine Learning for Health (ML4H) symposium 2023, December 10th, 2023, New Orleans, United States, 9 pages. Full Paper presented at Big Data Analytics for Health and Medicine (BDA4HM) workshop, IEEE BigData 2023, December 15th-18th, 2023, Sorrento, Italy.
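As a hedged sketch of the kind of ML model training the dataset is intended to enable (the file layout, mask encoding, and choice of architecture here are assumptions, not details from the paper), a binary myofibre-versus-background segmentation model could be trained on released image/mask pairs roughly as follows:

```python
# Hedged sketch only: the NCL-SM file layout, mask encoding, and this model
# choice are assumptions for illustration, not details from the paper.
from pathlib import Path
import torch
from torch.utils.data import Dataset, DataLoader
from torchvision.io import read_image
from torchvision.models.segmentation import fcn_resnet50

class FibreSegDataset(Dataset):
    def __init__(self, root):
        self.images = sorted(Path(root, "images").glob("*.png"))  # assumed layout
        self.masks = sorted(Path(root, "masks").glob("*.png"))    # assumed layout

    def __len__(self):
        return len(self.images)

    def __getitem__(self, i):
        img = read_image(str(self.images[i])).float() / 255.0
        if img.shape[0] == 1:                      # expand greyscale to 3 channels
            img = img.expand(3, -1, -1)
        mask = (read_image(str(self.masks[i]))[0] > 0).long()  # fibre vs background
        return img, mask

# batch_size=1 sidesteps differing section sizes; real training would crop or tile.
loader = DataLoader(FibreSegDataset("ncl_sm"), batch_size=1, shuffle=True)
model = fcn_resnet50(weights=None, weights_backbone=None, num_classes=2)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = torch.nn.CrossEntropyLoss()

model.train()
for img, mask in loader:
    optimiser.zero_grad()
    logits = model(img)["out"]     # (B, 2, H, W) class logits
    loss = loss_fn(logits, mask)
    loss.backward()
    optimiser.step()
```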
NCL-SM: A Fully Annotated Dataset of Images from Human Skeletal Muscle Biopsies
Single cell analysis of human skeletal muscle (SM) tissue cross-sections is a
fundamental tool for understanding many neuromuscular disorders. For this
analysis to be reliable and reproducible, identification of individual fibres
within microscopy images (segmentation) of SM tissue should be automatic and
precise. Biomedical scientists in this field currently rely on custom tools and
general machine learning (ML) models, both followed by labour intensive and
subjective manual interventions to fine-tune segmentation. We believe that
fully automated, precise, reproducible segmentation is possible by training ML
models. However, in this important biomedical domain, there are currently no
good quality, publicly available annotated imaging datasets available for ML
model training. In this paper we release NCL-SM: a high quality bioimaging
dataset of 46 human SM tissue cross-sections from both healthy control subjects
and from patients with genetically diagnosed muscle pathology. These images
include 50k manually segmented muscle fibres (myofibres). In addition we
also curated high quality myofibre segmentations, annotating reasons for
rejecting low quality myofibres and low quality regions in SM tissue images,
making these annotations completely ready for downstream analysis. This, we
believe, will pave the way for development of a fully automatic pipeline that
identifies individual myofibres within images of tissue sections and, in
particular, also classifies individual myofibres that are fit for further
analysis.
Comment: Paper accepted at the Big Data Analytics for Health and Medicine (BDA4HM) workshop, IEEE BigData 2023, December 15th-18th, 2023, Sorrento, Italy, 7 pages. arXiv admin note: substantial text overlap with arXiv:2311.1109
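As a hedged illustration of the downstream use described above (the file names, annotation format, and QC columns below are assumptions, not the released NCL-SM schema), one could keep only the fibres curated as acceptable before computing per-fibre measurements:

```python
# Hedged sketch: a labelled instance mask plus a per-fibre QC table are assumed
# to exist in these hypothetical files; the real NCL-SM layout may differ.
import pandas as pd
from skimage import io, measure

labels = io.imread("section_01_myofibre_labels.tif")  # integer-labelled fibres
qc = pd.read_csv("section_01_fibre_qc.csv")           # columns: fibre_id, accepted, reject_reason

accepted_ids = set(qc.loc[qc["accepted"].astype(bool), "fibre_id"])

rows = []
for region in measure.regionprops(labels):
    if region.label not in accepted_ids:
        continue                                       # drop fibres rejected during curation
    rows.append({
        "fibre_id": region.label,
        "area_px": region.area,
        "perimeter_px": region.perimeter,
        "eccentricity": region.eccentricity,
    })

per_fibre = pd.DataFrame(rows)
print(f"{len(per_fibre)} accepted fibres; mean area {per_fibre['area_px'].mean():.0f} px")
```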
Direct and Absolute Quantification of over 1800 Yeast Proteins via Selected Reaction Monitoring
Defining intracellular protein concentration is critical in molecular systems biology. Although strategies for determining relative protein changes are available, defining robust absolute values in copies per cell has proven significantly more challenging. Here we present a reference data set quantifying over 1800 Saccharomyces cerevisiae proteins by direct means using protein-specific stable-isotope labeled internal standards and selected reaction monitoring (SRM) mass spectrometry, far exceeding any previous study. This was achieved by careful design of over 100 QconCAT recombinant proteins as standards, defining 1167 proteins in terms of copies per cell and upper limits on a further 668, with robust CVs routinely less than 20%. The selected reaction monitoring-derived proteome is compared with existing quantitative data sets, highlighting the disparities between methodologies. Coupled with a quantification of the transcriptome by RNA-seq taken from the same cells, these data support revised estimates of several fundamental molecular parameters: a total protein count of ∼100 million molecules per cell, a median of ∼1000 proteins per transcript, and a linear model of protein translation explaining 70% of the variance in translation rate. This work contributes a “gold-standard” reference yeast proteome (including 532 values based on high quality, dual peptide quantification) that can be widely used in systems models and for other comparative studies.

Reliable and accurate quantification of the proteins present in a cell or tissue remains a major challenge for post-genome scientists. Proteins are the primary functional molecules in biological systems, and knowledge of their abundance and dynamics is an important prerequisite to a complete understanding of natural physiological processes, or dysfunction in disease. Accordingly, much effort has been spent in the development of reliable, accurate and sensitive techniques to quantify the cellular proteome, the complement of proteins expressed at a given time under defined conditions (1). Moreover, the ability to model a biological system, and thus characterize it in kinetic terms, requires that protein concentrations be defined in absolute numbers (2, 3).

Given the high demand for accurate quantitative proteome data sets, there has been a continual drive to develop methodology to accomplish this, typically using mass spectrometry (MS) as the analytical platform. Many recent studies have highlighted the capabilities of MS to provide good coverage of the proteome at high sensitivity, often using yeast as a demonstrator system (4–10), suggesting that quantitative proteomics has now “come of age” (1). However, given that MS is not inherently quantitative, most of the approaches produce relative quantitation and do not typically measure the absolute concentrations of individual molecular species by direct means.

For the yeast proteome, epitope tagging studies using green fluorescent protein or tandem affinity purification tags provide an alternative to MS. Here, collections of modified strains are generated that incorporate a detectable, and therefore quantifiable, tag that supports immunoblotting or fluorescence techniques (11, 12). However, such strategies for copies per cell (cpc) quantification rely on genetic manipulation of the host organism and hence do not quantify endogenous, unmodified protein. The tagging can also alter protein levels, in some instances hindering protein expression completely (11).
Even so, epitope tagging methods have been of value to the community, yielding high coverage quantitative data sets for the majority of the yeast proteome (11, 12). MS-based methods do not rely on such nonendogenous labels, and can reach genome-wide levels of coverage. Accurate estimation of absolute concentrations, i.e. protein copy number per cell, also usually necessitates the use of (one or more) external or internal standards from which to derive absolute abundance (4). Examples include a comprehensive quantification of the Leptospira interrogans proteome that used a 19-protein subset quantified by selected reaction monitoring (SRM) to calibrate label-free data (8, 13). It is worth noting that epitope tagging methods, although also absolute, rely on a very limited set of standards for the quantitative western blots and necessitate incorporation of a suitable immunogenic tag (11). Other recent, innovative approaches exploiting total ion signal and internal scaling to estimate protein cellular abundance (10, 14) avoid the use of internal standards, though they do rely on targeted proteomic data to validate their approach.

The use of targeted SRM strategies to derive proteomic calibration standards highlights their advantages over label-free approaches in terms of accuracy, precision, dynamic range and limit of detection, and SRM has gained currency for its reliability and sensitivity (3, 15–17). Indeed, SRM is often referred to as the “gold standard proteomic quantification method,” being particularly well suited when the proteins to be quantified are known and when appropriate surrogate peptides for protein quantification can be selected a priori and matched with stable isotope-labeled (SIL) standards (18–20). In combination with SIL peptide standards that can be generated through a variety of means (3, 15), SRM can be used to quantify low copy number proteins, reaching down to ∼50 cpc in yeast (5). However, although SRM methodology has been used extensively for S. cerevisiae protein quantification by us and others (19, 21, 22), it has not been used for large protein cohorts because of the requirement to generate the large numbers of attendant SIL peptide standards; the largest published data set covers only a few tens of proteins. It therefore remains a challenge to robustly quantify an entire eukaryotic proteome in absolute terms by direct means using targeted MS, and this is the focus of our present study, the Census Of the Proteome of Yeast (CoPY).

We present here direct and absolute quantification of nearly 2000 endogenous proteins from S. cerevisiae grown in steady state in a chemostat culture, using the SRM-based QconCAT approach. Although arguably not quantification of the entire proteome, this represents an accurate and rigorous collection of direct yeast protein quantifications, providing a gold-standard data set of endogenous protein levels for future reference and comparative studies. The highly reproducible SIL-SRM MS data, with robust CVs typically less than 20%, are compared with other extant data sets that were obtained via alternative analytical strategies. We also report a matched high quality transcriptome from the same cells using RNA-seq, which supports additional calculations, including a refined estimate of the total protein content in yeast cells and a simple linear model of translation explaining 70% of the variance between RNA and protein levels in yeast chemostat cultures.
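The exact form of the translation model is not given in this excerpt; under the assumption of a simple linear fit between log-transformed mRNA and protein copy numbers, the "variance explained" figure would correspond to the R² of a regression like the following sketch (all values are placeholders):

```python
# Hedged sketch: fit protein copies per cell against mRNA abundance on log
# scales and report variance explained (R^2). Numbers are illustrative only.
import numpy as np
from scipy import stats

mrna_per_cell = np.array([0.5, 1.2, 3.0, 8.5, 20.0, 55.0])        # transcripts per cell
protein_cpc = np.array([6.0e2, 1.5e3, 2.8e3, 9.0e3, 2.2e4, 6.0e4])  # protein copies per cell

res = stats.linregress(np.log10(mrna_per_cell), np.log10(protein_cpc))
print(f"log10(protein) = {res.slope:.2f} * log10(mRNA) + {res.intercept:.2f}; "
      f"R^2 = {res.rvalue**2:.2f}")
```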
These analyses confirm the validity of our data and approach, which we believe represents a state-of-the-art absolute quantification compendium of a significant proportion of a model eukaryotic proteome.
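To make the SIL-SRM quantification arithmetic concrete, here is a minimal sketch of converting a light/heavy peak-area ratio and a known spike of SIL standard into copies per cell; the single-peptide form and every numeric value are illustrative assumptions, not the study's pipeline:

```python
# Hedged sketch: absolute quantification from one SIL-SRM measurement.
# All numeric values are placeholders, not data from the study.
AVOGADRO = 6.022e23

light_over_heavy = 0.8    # endogenous (light) / SIL standard (heavy) peak-area ratio
heavy_spiked_fmol = 50.0  # amount of SIL peptide standard added to the digest
cells_in_sample = 2.5e7   # number of yeast cells the digest was prepared from

endogenous_fmol = light_over_heavy * heavy_spiked_fmol
molecules = endogenous_fmol * 1e-15 * AVOGADRO   # fmol -> mol -> molecules
copies_per_cell = molecules / cells_in_sample
print(f"~{copies_per_cell:.0f} copies per cell")
```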