An inquiry-based learning approach to teaching information retrieval
The study of information retrieval (IR) has grown in interest and importance with the explosive growth of online information in recent years. Learning about IR within formal courses of study enables users of search engines to use them more knowledgeably and effectively, while providing a starting point for new researchers exploring novel search technologies. Although IR can be taught in a traditional manner of formal classroom instruction, with students led through the details of the subject and expected to reproduce them in assessment, the nature of IR as a topic makes it an ideal subject for inquiry-based learning approaches to teaching. In an inquiry-based learning approach, students are introduced to the principles of a subject and then encouraged to develop their understanding by solving structured or open problems. Working through solutions in subsequent class discussions enables students to appreciate the alternative solutions proposed by their classmates. Following this approach, students not only learn the details of IR techniques but, significantly, naturally learn to apply them in the solution of problems. In doing so they gain an appreciation of alternative solutions to a problem, and also of how to assess their relative strengths and weaknesses. Developing confidence and skills in problem solving enables student assessment to be structured around the solution of problems. Students can thus be assessed on the basis of their understanding and ability to apply techniques, rather than simply their skill at reciting facts. This has the additional benefit of encouraging general problem-solving skills that can be of benefit in other subjects. This approach to teaching IR was successfully implemented in an undergraduate module where students were assessed through a written examination exploring their knowledge and understanding of the principles of IR and their ability to apply them to solving problems, and a written assignment based on developing an individual research proposal
An ion source for molecular effusion studies
An ion source which utilizes a beam of monoenergetic electrons as the ionizing agent was designed and built for use in a 60° sector general utility mass spectrometer. The source was made to be used without a magnetic field to collimate the electron beam, and is interchangeable with a surface ionization source already utilized in the mass spectrometer. Various operating characteristics of the source were obtained, and the source was tested for linearity. If the source were perfectly linear, the recorded ion current would be directly proportional to the atom current effusing into the ionizing region of the source. The source was estimated to be linear to within one percent
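The linearity criterion described above (recorded ion current directly proportional to the effusing atom current, to within one percent) can be illustrated with a short numerical check. The calibration numbers below are invented for illustration and are not from the study:

```python
import numpy as np

# Hypothetical calibration data: atom current effusing into the ionizing
# region (arbitrary units) and the recorded ion current.
atom_current = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
ion_current = np.array([0.995, 2.005, 3.98, 8.03, 15.95])

# Least-squares fit of the proportional model I_ion = k * I_atom
# (a pure proportionality, i.e. no intercept term).
k = np.sum(atom_current * ion_current) / np.sum(atom_current**2)

# Maximum fractional deviation from perfect proportionality.
max_dev = np.max(np.abs(ion_current - k * atom_current) / (k * atom_current))
print(f"k = {k:.4f}, max deviation = {100 * max_dev:.2f}%")
```

With these made-up values, the maximum deviation comes out below one percent, matching the kind of linearity bound quoted in the abstract.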
Thin film dielectric microstrip kinetic inductance detectors
Microwave Kinetic Inductance Detectors, or MKIDs, are a type of low temperature detector that exhibit intrinsic frequency domain multiplexing at microwave frequencies. We present the first theory and measurements on a MKID based on a microstrip transmission line resonator. A complete characterization of the dielectric loss and noise properties of these resonators is performed, and agrees well with the derived theory. A competitive noise equivalent power of 5 W Hz^-1/2 at 1 Hz has been demonstrated. The resonators exhibit the highest quality factors known in a microstrip resonator with a deposited thin film dielectric. Comment: 10 pages, 4 figures, APL accepted
Accuracy of Fitbit Charge 2 Worn At Different Wrist Locations During Exercise
Many newly released activity monitors use heart rate measured at the wrist to estimate exercise intensity; however, where the device is placed on the wrist may affect the accuracy of the measurement. PURPOSE: To determine whether the PurePulse technology on the Fitbit Charge 2 shows different heart rate readings when placed at the recommended exercise position compared to the all-day wear position at various exercise intensities. METHODS: Thirty-five participants (MEAN ± SD; 22.0 ± 2.9 yrs; 23.9 ± 2.6 kg/m2; 18 male) consented to participate in a single visit during which two Fitbit Charge 2 devices were placed on the non-dominant wrist. Fitbit A was placed 2-3 finger-widths above the wrist bone; Fitbit B was placed directly above the wrist bone. The treadmill was set at 3 mph with 0% grade, and participants remained at this speed for 4 minutes. Heart rate measurements were taken during the last 10 seconds of each stage from both Fitbits and a Polar heart rate monitor (chest strap). The same procedure was followed for 5 and 6 mph. Statistical analyses were performed using IBM SPSS 23.0. A two-way (speed x location) repeated measures ANOVA was used to examine mean differences. Pairwise comparisons with Bonferroni correction were used in post-hoc analysis. Pearson correlations and mean bias between the Polar heart rate monitor and the activity monitors were also calculated for each speed. RESULTS: The repeated measures ANOVA found significant differences between speeds (p < 0.01) and locations (p < 0.01), but not for the interaction (p = 0.234). Pairwise comparisons indicated significant differences between each speed (p < 0.01) and between the Polar monitor and Fitbit B (p < 0.05), but not between the Polar monitor and Fitbit A (p = 0.608). Pearson correlations indicated strong correlations between each Fitbit and the Polar monitor (r = .58-.91; all p < 0.01).
Mean bias decreased as speed increased for Fitbit A (mean bias BPM ± SD: -1.1 ± 5.4, -1.9 ± 9.5, -0.4 ± 6.9, -0.3 ± 7.3 for resting, 3 mph, 5 mph, and 6 mph, respectively), while mean bias for Fitbit B increased as speed increased (-2.8 ± 8.8, -3.1 ± 11.1, -3.9 ± 14.6, -6.7 ± 14.3 for resting, 3 mph, 5 mph, and 6 mph, respectively). CONCLUSION: Wrist-worn heart rate monitors appear to provide values adequate for recreational use; however, following the recommended guidelines on wear position may affect heart rate readings
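The mean-bias statistic reported above is the average of (device reading − criterion reading), so a negative value means the wrist device reads low relative to the chest strap. A minimal sketch of that computation, on invented heart-rate readings rather than the study's data:

```python
import numpy as np

# Hypothetical heart-rate readings (bpm) at one treadmill stage:
# Polar chest strap as criterion vs. a wrist-worn device.
polar = np.array([112, 118, 125, 109, 131, 122])
fitbit = np.array([110, 119, 122, 108, 128, 121])

# Mean bias: average of (device - criterion). Negative mean bias
# indicates the wrist device underestimates relative to the strap.
bias = fitbit - polar
mean_bias = bias.mean()
sd_bias = bias.std(ddof=1)

# Pearson correlation between device and criterion readings.
r = np.corrcoef(polar, fitbit)[0, 1]
print(f"mean bias = {mean_bias:.1f} ± {sd_bias:.1f} bpm, r = {r:.2f}")
```

This mirrors the "mean bias BPM ± SD" and Pearson r quantities in the results, computed per speed in the study.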
Multiple Testing and Data Adaptive Regression: An Application to HIV-1 Sequence Data
Analysis of viral strand sequence data and viral replication capacity could potentially lead to biological insights regarding the replication ability of HIV-1. Determining specific target codons on the viral strand would facilitate the manufacture of target-specific antiretrovirals. Various algorithmic and analysis techniques can be applied to this problem. We propose using multiple testing to find codons which have significant univariate associations with the replication capacity of the virus. We also propose using a data-adaptive multiple regression algorithm to obtain predictions of viral replication capacity based on an entire mutant/non-mutant sequence profile. The data set to which these techniques were applied consists of 317 patients, each with 282 sequenced protease and reverse transcriptase codons. Initially, the multiple testing procedure of Pollard and van der Laan (2003) was applied to the individual-specific viral sequence data. A single-step multiple testing procedure was used to control the family-wise error rate (FWER) at the five percent alpha level. Additional augmentation multiple testing procedures were applied to control the generalized family-wise error rate (gFWER) or the tail probability of the proportion of false positives (TPPFP). Finally, the loss-based, cross-validated Deletion/Substitution/Addition regression algorithm (Sinisi and van der Laan, 2004) was applied to the dataset separately. This algorithm builds candidate estimators for the prediction of a univariate outcome by minimizing an empirical risk, and it uses cross-validation to select fine-tuning parameters such as the size of the regression model, the maximum allowed order of interaction among terms in the regression model, and the dimension of the vector of covariates. The algorithm is also used to measure the variable importance of the codons.
Results from these multiple analyses are consistent with known biological findings and could lead to further biological knowledge regarding HIV-1 viral data
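The first step of the analysis above, univariate tests of mutant vs. non-mutant replication capacity per codon with single-step FWER control, can be sketched as follows. This is a simplified Bonferroni-style cutoff on synthetic data, not the Pollard and van der Laan (2003) null-distribution procedure itself; all data, the 20-codon size, and the planted association at codon 3 are invented for illustration:

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(0)

# Synthetic stand-in data: mutant (1) / non-mutant (0) indicators for
# 20 codons across 317 patients, plus a continuous replication-capacity
# outcome. Codon 3 is given a planted association for illustration.
n_patients, n_codons = 317, 20
X = rng.integers(0, 2, size=(n_patients, n_codons))
y = rng.normal(size=n_patients) + 0.8 * X[:, 3]

def two_sample_p(a, b):
    # Welch two-sample statistic with a normal tail approximation
    # (adequate here since each group has ~150 observations);
    # returns a two-sided p-value.
    t = (a.mean() - b.mean()) / sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    return erfc(abs(t) / sqrt(2))

# Univariate test per codon: replication capacity in mutant vs.
# non-mutant groups.
pvals = np.array([
    two_sample_p(y[X[:, j] == 1], y[X[:, j] == 0]) for j in range(n_codons)
])

# Single-step (Bonferroni-style) control of the FWER at alpha = 0.05:
# reject when p < alpha / number of tests.
alpha = 0.05
rejected = np.flatnonzero(pvals < alpha / n_codons)
```

The gFWER and TPPFP augmentation procedures mentioned in the abstract then enlarge such an FWER-controlled rejection set in a controlled way.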
Cis-Lunar Base Camp
Historically, when mounting expeditions into uncharted territories, explorers have established strategically positioned base camps to pre-position required equipment and consumables. These base camps are secure, safe positions from which expeditions can depart when conditions are favorable, at which technology and operations can be tested and validated, and which facilitate timely access to more robust facilities in the event of an emergency. For human exploration missions into deep space, cis-lunar space is well suited to serve as such a base camp. The outer regions of cis-lunar space, such as the Earth-Moon Lagrange points, lie near the edge of Earth's gravity well, allowing equipment and consumables to be aggregated with easy access to deep space and to the lunar surface, as well as to more distant destinations such as near-Earth asteroids (NEAs) and Mars and its moons. Several approaches to utilizing a cis-lunar base camp for sustainable human exploration, as well as some possible future applications, are identified. The primary objective of the analysis presented in this paper is to identify options, show the macro trends, and provide information that can be used as a basis for more detailed mission development. Compared within are the high-level performance and cost of 15 preliminary cis-lunar exploration campaigns that establish the capability to conduct crewed missions of up to one year in duration, and then aggregate mass in cis-lunar space to facilitate an expedition from the Cis-Lunar Base Camp. Launch vehicles, chemical propulsion stages, and electric propulsion stages are discussed, and parametric sizing values are used to create architectures of in-space transportation elements that extend the existing in-space supply chain to cis-lunar space. The transportation options to cis-lunar space assessed vary in efficiency by almost 50%, from 0.16 to 0.68 kg of cargo delivered to cis-lunar space for every kilogram of mass in Low Earth Orbit (LEO).
For the 15 cases, 5-year campaign costs vary by only 15%, from 0.36 to 0.51 on a normalized scale across all campaigns; thus the development and first-flight costs of the assessed transportation options are similar. However, the cost of those options per flight beyond the initial operational capability varies by 70%, from 0.3 to 1.0 on a normalized scale. The 10-year campaigns assessed begin to show the effect of this large range of cost beyond initial operational capability, as they vary by approximately 25%, with values from 0.75 to 1.0 on the normalized campaign scale. Therefore, it is important to understand both the cost of implementation and first use as well as long-term utilization. Finally, minimizing long-term recurring costs is critical to the affordability of future human space exploration missions
Resampling Based Multiple Testing Procedure Controlling Tail Probability of the Proportion of False Positives
Simultaneously testing a collection of null hypotheses about a data generating distribution, based on a sample of independent and identically distributed observations, is a fundamental and important statistical problem with many applications. In this article we propose a new resampling-based multiple testing procedure that asymptotically controls the probability that the proportion of false positives among the set of rejections exceeds q at level alpha, where q and alpha are user-supplied numbers. The procedure involves 1) specifying a conditional distribution for a guessed set of true null hypotheses, given the data, which asymptotically is degenerate at the true set of null hypotheses, and 2) specifying a generally valid null distribution for the vector of test statistics, as proposed in Pollard and van der Laan (2003) and generalized in our subsequent articles Dudoit et al. (2004), van der Laan et al. (2004a), and van der Laan et al. (2004b). We establish the finite sample rationale behind our proposal, and prove that this new multiple testing procedure asymptotically controls the desired tail probability for the proportion of false positives under general data generating distributions. In addition, we provide simulation studies establishing that this method is generally more powerful in finite samples than our previously proposed augmentation multiple testing procedure (van der Laan et al. (2004b)) and competing procedures from the literature. Finally, we illustrate our methodology with a data analysis
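The augmentation procedure that the new method is compared against admits a compact sketch: starting from an FWER-controlled rejection set of size r0, one may reject up to floor(q/(1-q) * r0) additional hypotheses while keeping the proportion of false positives at or below q with probability at least 1 - alpha, since adding j rejections keeps the worst-case proportion at j/(j + r0) <= q. This is a simplified illustration of the augmentation idea (van der Laan et al. (2004b)), not the resampling-based procedure the abstract proposes; the Bonferroni starting set is an assumption of this sketch:

```python
import numpy as np

def tppfp_augmentation(pvals, alpha=0.05, q=0.1):
    """Augmentation-style TPPFP control: P(V/R > q) <= alpha (sketch).

    Start from a single-step Bonferroni FWER-controlling rejection set,
    then augment it with the next most significant hypotheses. With r0
    FWER-controlled rejections, floor(q/(1-q) * r0) extra rejections
    keep the false-positive proportion at or below q, since at worst
    all extras are false and extra/(extra + r0) <= q.
    """
    m = len(pvals)
    order = np.argsort(pvals)                      # most significant first
    r0 = int(np.sum(pvals < alpha / m))            # FWER-controlled rejections
    extra = int(np.floor(q / (1 - q) * r0))        # augmentation allowance
    return order[: min(r0 + extra, m)]
```

For example, with p-values (0.001, 0.002, 0.003, 0.2, 0.5) and alpha = 0.05, the Bonferroni set has three rejections; at q = 0.1 no augmentation is allowed, while at q = 0.5 the allowance equals r0 and the set grows accordingly.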
Application of a Multiple Testing Procedure Controlling the Proportion of False Positives to Protein and Bacterial Data
Simultaneously testing multiple hypotheses is important in high-dimensional biological studies. In these situations, one is often interested in controlling a Type I error rate, such as the tail probability of the proportion of false positives among total rejections (TPPFP), at a specific level alpha. This article presents an application of the E-Bayes/Bootstrap TPPFP procedure of van der Laan et al. (2005), which controls the tail probability of the proportion of false positives, to two biological datasets. The first application is to a mass-spectrometry dataset of two leukemia subtypes, AML and ALL. The protein data measurements include intensity and mass-to-charge (m/z) ratios of bone marrow samples, with two replicates per sample. We apply techniques to preprocess the data, i.e. to correct for baseline shift and to appropriately smooth the intensity profiles over the m/z values. After preprocessing, we apply the TPPFP multiple testing technique (van der Laan et al. (2005)) to test the difference between the two groups of patients (AML/ALL) with respect to their intensity values over various m/z ratios, thus testing proteins of different sizes. Second, we illustrate the E-Bayes/Bootstrap TPPFP procedure on a bacterial dataset. In this application we are interested in finding bacteria whose mean difference over time points is differentially expressed between two U.S. cities. For both data applications, we also show comparisons to the van der Laan et al. (2004b) TPPFP augmentation method, and find that the E-Bayes/Bootstrap TPPFP method is less conservative, rejecting more tests at a specific alpha level