National Transonic Facility: A review of the operational plan
The proposed National Transonic Facility (NTF) operational plan is reviewed. The NTF will provide an aerodynamic test capability significantly exceeding that of other transonic-regime wind tunnels now available. A limited number of academic research programs that might use the NTF are suggested. It is concluded that the NTF operational plan makes effective use of the management, technical, instrumentation, and model-building techniques available in the specialized field of aerodynamic analysis and simulation. It is also suggested that NASA hold an annual conference to discuss wind tunnel research results and to report on developments that will further improve the utilization and cost effectiveness of the NTF and other wind tunnels.
Winter Food Habits and Preferences of Northern Bobwhites in East Texas
During late winter, 1994 and 1995, we investigated food habits and preferences of northern bobwhites (Colinus virginianus; hereafter, bobwhites) collected on forested lands in east Texas. Crops were collected from bobwhites in areas under 3 management regimes, namely intensively managed for bobwhites (QMA) (i.e., tree basal area reduced, annually burned, numerous multi-stage food plots, etc.), extensively managed for timber and wildlife (NBS) (i.e., burned every 3-5 years, scattered 2-stage food plots with corn feeders), and unmanaged for wildlife (i.e., burned every 5-7 years). With years pooled, partridge pea (Cassia fasciculata), Hercules club (Zanthoxylum clava-herculis), and pine (Pinus spp.) seeds, and clover (Trifolium spp.) leaflets comprised 93% by weight of the foods of 79 bobwhites on QMA. On NBS, 81% of 40 bobwhite diets was butterfly pea (Centrosema virginianum), browntop millet, pine, wild bean (Strophostyles spp.), and corn seeds and clover leaflets; millet and corn were from food plots and feeders, respectively. For unmanaged areas, 79% of 19 bobwhite diets was butterfly pea, rush (Juncus spp.), pine, partridge pea, and American beautyberry (Callicarpa americana) seeds, and clover leaflets. Top-ranked food items on QMA were pine, hairy vetch, and Hercules club seeds in 1994 and butterfly pea, partridge pea, and wax myrtle (Myrica cerifera) seeds in 1995 (P < 0.05). On NBS, hawthorn (Crataegus spp.) and beautyberry seeds were top-ranked in 1994, as were kobe lespedeza, wild bean, and butterfly pea seeds in 1995. On unmanaged areas, butterfly pea and partridge pea seeds and clover leaflets were ranked highest in 1995. On forested lands, activities (e.g., disking, burning, establishing food plots) that provide seed-bearing plants, especially legumes, and clover greenery benefit bobwhites.
Electronic Structure of Copper Impurities in ZnO
We have measured the near infrared absorption, Zeeman effect, and electron spin resonance of Cu2+ ions introduced as a substitutional impurity into single-crystal ZnO. From the g values of the lowest Γ6 component of the T2 state (the ground state), g∥=0.74 and g⊥=1.531, and from the g values of the Γ4Γ5 component of the E state, g∥=1.63 and g⊥=0, we have determined the wave functions of Cu2+ in terms of an LCAO MO model in which overlap only with the first nearest-neighbor oxygen ions is considered. These wave functions indicate that the copper 3d (t2) hole spends about 40% of its time in the oxygen orbitals, and that the copper t2 orbitals are expanded radially with respect to the e orbitals. Corroboration for the radial expansion of the t2 orbitals is obtained from an analysis of the hyperfine splitting. It is concluded from our model that the large values of the hyperfine constants, |A|=195×10^-4 cm^-1 and |B|=231×10^-4 cm^-1, are due to the contribution from the orbital motion of the t2 hole.
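As a hedged illustration of the LCAO MO picture summarized above, the t2 hole can be written as a mixture of a copper 3d orbital and a symmetry-adapted combination of nearest-neighbor oxygen orbitals; the symbols α, β, and S below are generic LCAO coefficients and an overlap integral introduced for illustration, not values from the paper:

```latex
% Minimal sketch of the LCAO MO ansatz for the Cu2+ t2 hole, assuming
% admixture with nearest-neighbor oxygen orbitals only (as in the abstract).
% alpha, beta, and the overlap S are illustrative symbols, not paper values.
\[
  \psi_{t_2} = \alpha\,\phi_{3d}(\mathrm{Cu}) + \beta\,\chi_{2p}(\mathrm{O}),
  \qquad
  \alpha^{2} + \beta^{2} + 2\alpha\beta S = 1,
  \qquad
  S = \langle \phi_{3d} \,|\, \chi_{2p} \rangle .
\]
% "The hole spends about 40% of its time in the oxygen orbitals" then
% corresponds to an oxygen weight of roughly
\[
  \beta^{2} + \alpha\beta S \approx 0.4 .
\]
```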
Augmented input: The effect of visuographic supports on the auditory comprehension of people with chronic aphasia
Background: Augmented input (AI), or the use of visuographic images and linguistic supports, is a strategy for facilitating the auditory comprehension of people with chronic aphasia. To date, researchers have not systematically evaluated the effects of various types of AI strategies on auditory comprehension.
Aims: The purpose of the study was to perform an initial evaluation of the changes in auditory comprehension accuracy experienced by people with aphasia when they received one type of AI. Specifically, the authors examined the effect of four types of non-personalized visuographic image conditions on the comprehension of people with aphasia when listening to narratives.
Methods & Procedures: A total of 21 people with chronic aphasia listened to four stories, one in each of four conditions (i.e., no-context photographs, low-context drawings with embedded no-context photographs, high-context photographs, and no visuographic support). Auditory comprehension was measured by assessing participants’ accuracy in responding to 15 multiple-choice sentence completion statements related to each story.
Outcomes & Results: Results showed no significant differences in response accuracy across the four visuographic conditions.
Conclusions: The type of visuographic image provided as AI in this study did not influence participants’ response accuracy for sentence completion comprehension tasks. However, the authors only examined non-personalized visuographic images as a type of AI support. Future researchers should systematically examine the benefits provided to people with aphasia by other types of visuographic and linguistic AI supports.
Iowan Drift Problem, Northeastern Iowa
Document Filtering for Long-tail Entities
Filtering relevant documents with respect to entities is an essential task in the context of knowledge base construction and maintenance. It entails processing a time-ordered stream of documents that might be relevant to an entity in order to select only those that contain vital information. State-of-the-art approaches to document filtering for popular entities are entity-dependent: they rely on, and are also trained on, the specifics of differentiating features for each specific entity. Moreover, these approaches tend to use so-called extrinsic information, such as Wikipedia page views and related entities, which is typically available only for popular head entities. Entity-dependent approaches based on such signals are therefore ill-suited as filtering methods for long-tail entities. In this paper we propose a document filtering method for long-tail entities that is entity-independent and thus also generalizes to unseen or rarely seen entities. It is based on intrinsic features, i.e., features that are derived from the documents in which the entities are mentioned. We propose a set of features that capture informativeness, entity-saliency, and timeliness. In particular, we introduce features based on entity aspect similarities, relation patterns, and temporal expressions, and combine these with standard features for document filtering. Experiments following the TREC KBA 2014 setup on a publicly available dataset show that our model is able to improve the filtering performance for long-tail entities over several baselines. Results of applying the model to unseen entities are promising, indicating that the model is able to learn the general characteristics of a vital document. The overall performance across all entities (i.e., not just long-tail entities) improves upon the state-of-the-art without depending on any entity-specific training data.
Comment: CIKM2016, Proceedings of the 25th ACM International Conference on Information and Knowledge Management, 2016
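To make the entity-independent idea concrete, here is a minimal, hedged Python sketch: the feature functions below are illustrative proxies for the feature families the abstract names (informativeness, entity-saliency, timeliness), not the authors' actual features, and the gradient-boosting classifier is an assumed stand-in for whatever model the paper uses.

```python
# Hedged sketch of entity-independent document filtering with intrinsic
# features, i.e., features computed from the document alone so that one
# model generalizes to unseen (long-tail) entities. All names and feature
# definitions here are illustrative assumptions, not the paper's code.
import re
from dataclasses import dataclass

from sklearn.ensemble import GradientBoostingClassifier


@dataclass
class Doc:
    text: str    # full document text
    entity: str  # surface form of the target entity


def intrinsic_features(doc: Doc) -> list[float]:
    """Intrinsic proxies for informativeness, entity-saliency, timeliness."""
    sentences = re.split(r"(?<=[.!?])\s+", doc.text.strip())
    n_sents = max(len(sentences), 1)
    n_tokens = max(len(doc.text.split()), 1)
    entity = doc.entity.lower()
    mention_sents = sum(1 for s in sentences if entity in s.lower())
    return [
        min(n_tokens / 1000.0, 1.0),                      # informativeness: length proxy
        mention_sents / n_sents,                          # entity-saliency: mention density
        1.0 if entity in sentences[0].lower() else 0.0,   # entity-saliency: lead mention
        len(re.findall(r"\b(?:19|20)\d{2}\b", doc.text)) / n_tokens,  # timeliness: temporal expressions
    ]


def train_filter(docs: list[Doc], vital: list[int]) -> GradientBoostingClassifier:
    """Train one classifier on (document, vital?) pairs pooled across many
    entities; because no feature encodes entity identity, the same model
    can be applied to entities never seen during training."""
    X = [intrinsic_features(d) for d in docs]
    clf = GradientBoostingClassifier()
    clf.fit(X, vital)
    return clf
```

The design point mirrors the abstract's argument: since none of these features encodes entity identity or relies on extrinsic signals such as Wikipedia page views, a model trained this way carries over to long-tail and unseen entities.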