
    The triple task technique for studying writing processes: on which task is attention focused?

    The triple task technique measures the time and cognitive effort devoted to specific writing processes by combining directed retrospection with secondary task reaction time (RT). Writing a text is the primary task; rapidly detecting auditory probes, which indexes cognitive effort, is the secondary task. The third task is retrospecting and categorizing the contents of working memory at the time of each probe. The present paper reviews studies on the reactivity and validity of the technique. Further, one recent criticism of the method's validity is tested here: namely, that the primary task for the experimenter is not the primary task for the writer, thus distorting the time and effort measurements. We found that the time and effort allocated to planning, translating, executing, evaluating, and revising were the same whether instructions encouraged the writer to focus on the speed of responding or on the accuracy of retrospection instead of on the text itself. Because writing requires sustained thought and attention to produce a cumulative product, it is apparently difficult to make text production anything but the primary task. The triple task technique offers a useful alternative to pause analysis and verbal protocols for investigating the functional features of writing.
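    The analysis step the abstract describes, grouping secondary-task reaction times by the retrospected writing process, can be sketched as follows. This is a minimal illustration; the probe records and RT values are invented, not data from the study.

```python
from statistics import mean

# Hypothetical triple-task records: each auditory probe yields a secondary-task
# reaction time (ms) plus the writing process the writer reported retrospectively.
probes = [
    ("planning", 412), ("translating", 388), ("planning", 430),
    ("revising", 505), ("translating", 371), ("evaluating", 468),
]

def effort_by_process(records):
    """Group probe RTs by retrospected category; a longer mean RT is read
    as greater cognitive effort devoted to that writing process."""
    groups = {}
    for process, rt in records:
        groups.setdefault(process, []).append(rt)
    return {process: mean(rts) for process, rts in groups.items()}

print(effort_by_process(probes))
```

    The same grouping would apply to time-on-process estimates, with probe counts per category standing in for time shares.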

    The Study of the Pioneer Anomaly: New Data and Objectives for New Investigation

    Radiometric tracking data from the Pioneer 10 and 11 spacecraft have consistently indicated the presence of a small, anomalous Doppler frequency drift, uniformly changing at a rate of ~6 x 10^{-9} Hz/s; the drift can be interpreted as a constant sunward acceleration of each spacecraft of a_P = (8.74 \pm 1.33) x 10^{-10} m/s^2. This signal is known as the Pioneer anomaly; the nature of this anomaly remains unexplained. We discuss the efforts to retrieve the entire data sets of the Pioneer 10/11 radiometric Doppler data. We also report on the recently recovered telemetry files that may be used to reconstruct the engineering history of both spacecraft using original project documentation and newly developed software tools. We discuss possible ways to further investigate the discovered effect using these telemetry files in conjunction with the analysis of the much-extended Doppler data. We present the main objectives of the upcoming new study of the Pioneer anomaly, namely: i) analysis of the early data, which could yield the direction of the anomaly; ii) analysis of planetary encounters, which should tell more about the onset of the anomaly; iii) analysis of the entire dataset, to better determine the anomaly's temporal behavior; iv) comparative analysis of the individual anomalous accelerations for the two Pioneers; v) a detailed study of on-board systematics; and vi) development of a thermal-electric-dynamical model using on-board telemetry. The outlined strategy may allow for a higher accuracy solution for a_P and, possibly, will lead to an unambiguous determination of the origin of the Pioneer anomaly. Comment: 43 pages, 40 figures, 3 tables, minor changes before publication
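    The two quoted numbers can be checked against each other with a one-line kinematic relation. A sketch, under assumptions not stated in the abstract: a one-way Doppler convention and an S-band reference carrier near 2.29 GHz.

```python
# Order-of-magnitude consistency check between the quoted Doppler drift rate
# and the anomalous acceleration. Assumptions (not from the abstract): a
# one-way Doppler convention and an S-band reference frequency f0 ~ 2.29 GHz.
C = 2.998e8        # speed of light, m/s
F0 = 2.29e9        # assumed S-band reference frequency, Hz
A_P = 8.74e-10     # reported anomalous acceleration, m/s^2

def drift_from_acceleration(a, f0=F0):
    """Frequency drift rate (Hz/s) implied by a constant line-of-sight
    acceleration, via df/dt = f0 * a / c."""
    return f0 * a / C

drift = drift_from_acceleration(A_P)
print(f"{drift:.2e} Hz/s")  # comparable to the ~6e-9 Hz/s quoted above
```

    The result lands in the few-times-10^{-9} Hz/s range, consistent with the drift rate cited in the abstract; the exact value depends on the Doppler convention and carrier frequency assumed.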

    Rate theory for correlated processes: Double-jumps in adatom diffusion

    We study the rate of activated motion over multiple barriers, in particular the correlated double-jump of an adatom diffusing on a missing-row reconstructed platinum (110) surface. We develop a transition path theory, showing that the activation energy is given by the minimum-energy trajectory which succeeds in the double-jump. We explicitly calculate this trajectory within an effective-medium molecular dynamics simulation. A cusp in the acceptance region leads to a sqrt(T) prefactor for the activated rate of double-jumps. Theory and numerical results agree.

    The Character of Z-pole Data Constraints on Standard Model Parameters

    Despite the impressive precision of the Z-pole measurements made at LEP and SLC, the allowed region for the principal Standard Model parameters responsible for radiative corrections (the mass of the Higgs, the mass of the top, and alpha(Mz)) is still large enough to encompass significant non-linearities. The nature of the experimental constraints therefore depends in an interesting way on the "accidental" relationships among the various measurements. In particular, the fact that the Z-pole measurements favor values of the Higgs mass excluded by direct searches leads us to examine the effects of external Higgsstrahlung, a process ignored by the usual precision electroweak calculations. Comment: 9 pages, 6 figures, REVTeX format; added reference in section IV; added paragraph on widths and a few cosmetic changes to correspond to published version

    VETA-1 x ray detection system

    The alignment and X-ray imaging performance of the Advanced X-ray Astrophysics Facility (AXAF) Verification Engineering Test Article-I (VETA-I) were measured by the VETA-I X-Ray Detection System (VXDS). The VXDS was based on the X-ray detection system utilized in the AXAF Technology Mirror Assembly (TMA) program, upgraded to meet the more stringent requirements of the VETA-I test program. The VXDS includes two types of X-ray detectors: (1) a High Resolution Imager (HRI), which provides X-ray imaging capabilities, and (2) sealed and flow proportional counters which, in conjunction with apertures of various types and precision translation stages, provide the most accurate measurement of VETA-I performance. Herein we give an overview of the VXDS hardware, including X-ray detectors, translation stages, apertures, proportional counters, the flow counter gas supply system, and associated electronics. We also describe the installation of the VXDS into the Marshall Space Flight Center (MSFC) X-Ray Calibration Facility (XRCF). We discuss in detail the design and performance of those elements of the VXDS which have not been discussed elsewhere: the translation systems, flow counter gas supply system, apertures, and thermal monitoring system.

    Measurement of the cross-section ratio 3H(d,γ)5He/3H(d,α)n at 100 keV

    The cross-section ratio for 3H(d,γ)5He relative to 3H(d,α)n has been measured at an effective deuteron bombarding energy of 100 keV with a NaI pair spectrometer and a tritiated-titanium target. The ratio was determined to be (1.2±0.3)×10^-4 by comparing the spectra and count rates for 3H(d,γ)5He and 3H(d,α)n with 2H(3He,γ)5Li and 2H(3He,α)1H.
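    The core arithmetic of a count-rate comparison like this is a ratio with Poisson counting errors propagated in quadrature. A minimal sketch; the counts and efficiencies below are hypothetical, and the quoted ±0.3×10^-4 uncertainty also includes systematic contributions this toy ignores.

```python
import math

def branching_ratio(n_gamma, n_alpha, eff_gamma=1.0, eff_alpha=1.0):
    """Cross-section ratio from detected counts, with the Poisson (statistical)
    error propagated in quadrature. The counts and efficiencies passed in here
    are hypothetical illustrations, not the experiment's actual values."""
    ratio = (n_gamma / eff_gamma) / (n_alpha / eff_alpha)
    rel_err = math.sqrt(1.0 / n_gamma + 1.0 / n_alpha)
    return ratio, ratio * rel_err

r, dr = branching_ratio(n_gamma=36, n_alpha=300000)
print(f"ratio = {r:.1e} +- {dr:.1e} (statistical only)")
```

    With a rare gamma branch, the statistical error is dominated by the small gamma count (1/sqrt(n_gamma)), which is why long runs and efficient spectrometry matter for a 10^-4 ratio.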

    The focused ion beam as an integrated circuit restructuring tool

    One of the capabilities of focused ion beam systems is ion milling. The purpose of this work is to explore this capability as a tool for integrated circuit restructuring. Methods for cutting and joining conductors are needed. Two methods for joining conductors are demonstrated. The first consists of spinning nitrocellulose (a self‐developing resist) on the circuit, ion exposing an area, say, 7×7 μm, then milling a smaller via with sloping sidewalls through the first metal layer down to the second, e‐beam evaporating metal, and then dissolving the nitrocellulose to achieve liftoff. The resistance of these links between two metal levels varied from 1 to 7 Ω. The second, simpler method consists of milling a via with vertical sidewalls down to the lower metal layer, then reducing the milling scan to a smaller area in the center of this via, thereby redepositing the metal from the lower layer on the vertical sidewall. The short circuit thus achieved varied from 0.4 to 1.5 Ω for vias of dimensions 3×3 μm to 1×1 μm, respectively. The time to mill a 1×1 μm via with a 68 keV Ga+ beam at a current of 220 pA is 60 s. In a system optimized for this application, this milling time is expected to be reduced by a factor of at least 100. In addition, cuts have been made in 1‐μm‐thick Al films covered by 0.65 μm of SiO2. These cuts have resistances in excess of 20 MΩ. This method of circuit restructuring can work at dimensions a factor of 10 smaller than laser zapping and requires no special sites to be fabricated.
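    The quoted milling time scales simply with via area and beam current if one assumes a fixed removal dose per unit area. A sketch of that scaling, anchored to the 60 s / 1×1 μm / 220 pA figure above; the constant-dose assumption is a simplification, since sputter yield varies with material, beam energy, and redeposition.

```python
def milling_time(area_um2, current_pa, ref_time_s=60.0, ref_area_um2=1.0,
                 ref_current_pa=220.0):
    """Scale the quoted milling time (60 s for a 1x1 um via at 220 pA) to
    other via areas and beam currents, assuming a constant removal dose per
    unit area (a simplifying assumption, not the paper's model)."""
    return ref_time_s * (area_um2 / ref_area_um2) * (ref_current_pa / current_pa)

# A 100x higher beam current would cut the 1x1 um milling time by ~100x,
# matching the at-least-100x speedup the abstract expects from an
# optimized system.
print(milling_time(1.0, 22000.0))  # -> 0.6 s
print(milling_time(9.0, 220.0))    # 3x3 um via at the same current -> 540 s
```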

    Prokaryotic responses to a warm temperature anomaly in northeast subarctic Pacific waters

    Recent studies on marine heat waves describe water temperature anomalies causing changes in food web structure, bloom dynamics, biodiversity loss, and increased plant and animal mortality. However, little information is available on how water temperature anomalies impact prokaryotes (bacteria and archaea) inhabiting ocean waters. This is a nontrivial omission given their integral roles in driving major biogeochemical fluxes that influence ocean productivity and the climate system. Here we present a time-resolved study on the impact of a large-scale warm water surface anomaly in the northeast subarctic Pacific Ocean, colloquially known as the Blob, on prokaryotic community compositions. Multivariate statistical analyses identified significant depth- and season-dependent trends that were accentuated during the Blob. Moreover, network and indicator analyses identified shifts in specific prokaryotic assemblages from typically particle-associated before the Blob to taxa considered free-living and chemoautotrophic during the Blob, with potential implications for primary production and organic carbon conversion and export. Traving et al. use small subunit ribosomal RNA gene sequencing to examine spatial and temporal trends in bacterial and archaeal community structure during a large marine warm water surface anomaly, the Blob. Their findings suggest that community structure shifted during the Blob, with taxa considered free-living and chemoautotrophic prevailing under these unusual conditions.
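    A standard building block of the multivariate community comparisons described above is a pairwise dissimilarity between taxon-abundance profiles, such as Bray-Curtis. A minimal sketch; the taxa and counts are invented for illustration and are not the study's data.

```python
def bray_curtis(sample_a, sample_b):
    """Bray-Curtis dissimilarity between two taxon-abundance profiles
    (dicts mapping taxon -> count): 1 - 2*C / (S_a + S_b), where C is the
    sum of shared (minimum) counts. 0 means identical composition, 1 means
    no taxa in common."""
    taxa = set(sample_a) | set(sample_b)
    shared = sum(min(sample_a.get(t, 0), sample_b.get(t, 0)) for t in taxa)
    total = sum(sample_a.values()) + sum(sample_b.values())
    return 1.0 - 2.0 * shared / total

# Invented profiles echoing the reported shift from particle-associated
# taxa before the anomaly toward free-living/chemoautotrophic taxa during it.
before_blob = {"particle_assoc_A": 60, "particle_assoc_B": 30, "chemoauto_C": 10}
during_blob = {"particle_assoc_A": 20, "chemoauto_C": 50, "free_living_D": 30}
print(bray_curtis(before_blob, during_blob))
```

    Matrices of such pairwise distances feed the ordination, network, and indicator analyses the abstract mentions.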