
    Study of electrode phenomena by the cathode ray oscillograph

    A study of the time of passivation of gold has been made in a series of hydrochloric acid solutions. For a given concentration of hydrochloric acid, the equation (i - i₀)T = K holds for all current densities, provided the solution is vigorously stirred. This equation is similar to that obtained by Shutt and Walton, who worked at lower current densities. A linear relation has been found between i₀, the limiting current density, and the acid concentration; a similar relation holds between the constant K and the hydrochloric acid concentration. The results have been interpreted on the basis of a diffusion theory: the time of passivation is taken to be the time required to set up a diffusion layer at the electrode surface, and the reduction of the chloride concentration at the electrode surface to nearly zero is assumed to be necessary before passivation of the electrode can take place. Two methods are given whereby the thickness of the diffusion layer at the electrode can be determined. The results are in reasonable agreement with each other and are of the order normally encountered in diffusion phenomena. A reason for the non-applicability of Sand's equation has been suggested.
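    The passivation relation (i - i₀)T = K lends itself to a quick numerical illustration. A minimal sketch; the values of i₀ and K below are invented for demonstration, since the abstract reports only that both vary linearly with the HCl concentration:

```python
def passivation_time(i, i0, K):
    """Time T to passivate the electrode at current density i, from (i - i0) * T = K."""
    if i <= i0:
        # Below the limiting current density i0 the relation predicts no passivation.
        raise ValueError("current density must exceed the limiting value i0")
    return K / (i - i0)

# Assumed illustrative values: i0 = 2.0 mA/cm^2, K = 50 mA*s/cm^2.
print(passivation_time(12.0, 2.0, 50.0))  # 5.0 (seconds)
```

    Note how the relation captures the abstract's proviso: as i approaches the limiting current density i₀, the predicted passivation time diverges.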

    A comparative framework: how broadly applicable is a 'rigorous' critical junctures framework?

    The paper tests Hogan and Doyle's (2007, 2008) framework for examining critical junctures. This framework sought to incorporate the concept of ideational change into the understanding of critical junctures. Until its development, the frameworks used to identify critical junctures were subjective, seeking only to identify a crisis and subsequent policy changes, and arguing that one invariably led to the other because both occurred around the same time. Hogan and Doyle (2007, 2008) hypothesized ideational change as an intermediating variable in their framework, determining if, and when, a crisis leads to radical policy change. Here we test this framework on cases similar to, but distinct from, those employed in developing the exemplar. This will enable us to determine whether the framework's relegation of ideational change to a condition of crisis holds, or whether ideational change has more importance than the framework ascribes to it. It will also enable us to determine whether the framework itself is robust and fit for the purpose it was designed to perform: identifying the nature of policy change.

    Bioactivity and structural properties of chimeric analogs of the starfish SALMFamide neuropeptides S1 and S2

    The starfish SALMFamide neuropeptides S1 (GFNSALMFamide) and S2 (SGPYSFNSGLTFamide) are the prototypical members of a family of neuropeptides that act as muscle relaxants in echinoderms. Comparison of the bioactivity of S1 and S2 as muscle relaxants has revealed that S2 is ten times more potent than S1. Here we investigated a structural basis for this difference in potency by comparing the bioactivity and solution conformations (using NMR and CD spectroscopy) of S1 and S2 with three chimeric analogs of these peptides. A peptide comprising S1 with the addition of S2's N-terminal tetrapeptide (Long S1 or LS1; SGPYGFNSALMFamide) was not significantly different from S1 in its bioactivity and did not exhibit the concentration-dependent structuring seen with S2. An analog of S1 with its penultimate residue substituted from S2 (S1(T); GFNSALTFamide) exhibited S1-like bioactivity and structure. However, an analog of S2 with its penultimate residue substituted from S1 (S2(M); SGPYSFNSGLMFamide) exhibited loss of S2-type bioactivity and structural properties. Collectively, our data indicate that the C-terminal regions of S1 and S2 are the key determinants of their differing bioactivity. However, the N-terminal region of S2 may influence its bioactivity by conferring structural stability in solution. Thus, analysis of chimeric SALMFamides has revealed how neuropeptide bioactivity is determined by a complex interplay of sequence and conformation.

    Primary Beam and Dish Surface Characterization at the Allen Telescope Array by Radio Holography

    The Allen Telescope Array (ATA) is a cm-wave interferometer in California comprising 42 antenna elements with 6-m-diameter dishes. We characterize the antennas' optical accuracy using two-antenna interferometry and radio holography. The distortion of each telescope relative to the average is small, with RMS differences of 1 percent of the beam peak value. Holography provides images of the dish illumination pattern, allowing characterization of the as-built mirror surfaces. The ATA dishes can experience mm-scale distortions across ~2-meter lengths due to mounting stresses or solar radiation. Experimental RMS errors are 0.7 mm at night and 3 mm under worst-case solar illumination. For frequencies of 4, 10, and 15 GHz, the nighttime values indicate sensitivity losses of 1, 10, and 20 percent, respectively. The ATA's exceptionally wide bandwidth permits observations over a continuous range of 0.5 to 11.2 GHz, and future retrofits may increase this range to 15 GHz. Beam patterns show a slowly varying focus frequency dependence. We probe the antenna optical gain and beam pattern stability as a function of focus and observation frequency, concluding that the ATA can produce high-fidelity images over a decade of simultaneous observation frequencies. During the day, antenna sensitivity and pointing accuracy are degraded. We find that daytime observations at frequencies greater than 5 GHz will suffer some sensitivity loss, and it may be necessary to make antenna pointing corrections every 1 to 2 hours.
    Comment: 19 pages, 23 figures, 3 tables. Authors indicated by a double dagger (‡) are affiliated with the SETI Institute, Mountain View, CA 95070. Authors indicated by a section sign (§) are affiliated with the Hat Creek Radio Observatory and/or the Radio Astronomy Laboratory, both affiliated with the University of California Berkeley, Berkeley C
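    The quoted sensitivity losses of roughly 1, 10, and 20 percent at 4, 10, and 15 GHz for a 0.7 mm RMS surface error are consistent with the standard Ruze relation for reflector antennas; the abstract does not say which formula the authors used, so Ruze is an assumption in this sketch:

```python
import math

def ruze_loss(rms_error_mm, freq_ghz):
    """Fractional sensitivity loss for an RMS surface error, per the Ruze equation:
    efficiency = exp(-(4 * pi * epsilon / lambda)^2)."""
    wavelength_mm = 299.792458 / freq_ghz  # speed of light in mm*GHz
    return 1.0 - math.exp(-((4 * math.pi * rms_error_mm / wavelength_mm) ** 2))

# Nighttime RMS error of 0.7 mm at the three quoted frequencies.
for f in (4, 10, 15):
    print(f, round(ruze_loss(0.7, f), 3))
```

    Evaluating this at 0.7 mm gives losses of about 1%, 8%, and 18%, close to the rounded figures in the abstract; the 3 mm daytime error drives the loss far higher at the upper frequencies, matching the recommendation to expect degraded daytime sensitivity above 5 GHz.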

    Czech Social Reform/Non-reform: Routes, Actors and Problems

    In this contribution, the author first considers the characteristics of the Czechoslovak communist welfare state and its theoretical alternatives. Throughout the reform process, dependency on both the corporatist and socialist regimes won out, while residualist efforts, promoted in the beginning, were later held back. The author then considers the possible actors involved in social reforms. In this respect, proceeding from a general to a more concrete level, thought should first be devoted to social classes and their ideologies, and second to political parties and their leaders. The author goes on to summarise the particular problems and traps in individual sections of the Czech social system. While no objection to decent standards of social protection and health care could be raised, the poor efficiency with which they are achieved should evoke concern. The author concludes by reflecting on possible specificities of Czech social reform in comparison with other reforming countries and the EU. The current lethargy of the Czech welfare system corresponds to a “frozen edifice”, just as in most Western countries. However, such stagnation is apparently acceptable both to politicians (who mask it in reformist rhetoric) and to the population (which has learned to take advantage of the generous welfare state), and is thus basically sustainable in the long run.
    http://deepblue.lib.umich.edu/bitstream/2027.42/40037/3/wp651.pd

    The Effects of Cryotherapy on the Velocity of a Pitched Baseball

    The purpose of this study was to investigate the feasibility of a cold water application of 35°F to 40°F between innings and its effect on the pitching arm through the course of a designated series of throws. Eight male students from the basic physical education classes at South Dakota State University participated in the study, which was conducted over a period of three weeks. All of the individuals involved were administered each of three selected treatments. The data in this study were analyzed in three ways. A t ratio was used to interpret the changes in the average velocity of the pitched ball in the first two innings as compared with the changes in average velocity for the last two innings of a nine-inning game. The average changes in velocity over the full nine innings were also computed for each group. Each of the three treatments was administered to all groups, and the results were compared to find the average velocity for all innings. If an F ratio was found to be significant, Duncan's Multiple-Range test was used to determine where the significant difference occurred. As a result of the statistical analysis of the data obtained, the investigator found that the cold water treatment between innings caused a significant decrease in the velocity of the pitched baseball, as determined by tests conducted after innings one and two and after innings eight and nine.
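    The paired t ratio described above can be sketched numerically. The velocity values below are invented for illustration and match only the study's sample size of eight subjects:

```python
import math
import statistics

def t_ratio(early, late):
    """Paired t ratio on per-subject velocity changes (early minus late innings)."""
    diffs = [e - l for e, l in zip(early, late)]
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)  # sample standard deviation, n - 1 denominator
    return mean_d / (sd_d / math.sqrt(len(diffs)))

# Hypothetical pitch velocities (mph) for eight subjects.
early = [72, 75, 70, 74, 73, 71, 76, 72]  # average of innings one and two
late  = [69, 73, 68, 71, 72, 69, 74, 70]  # average of innings eight and nine
print(round(t_ratio(early, late), 2))
```

    A large positive t ratio here would indicate a systematic velocity drop from the early to the late innings, the kind of decrease the study reports for the cold-water treatment.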

    Lake-size dependency of wind shear and convection as controls on gas exchange

    High-frequency physical observations from 40 temperate lakes were used to examine the relative contributions of wind shear (u*) and convection (w*) to turbulence in the surface mixed layer. Seasonal patterns of u* and w* were dissimilar; u* was often highest in the spring, while w* increased throughout the summer to a maximum in early fall. Convection was often a larger mixed-layer turbulence source than wind shear (u*/w* < 1), particularly for smaller lakes. Because u* and w* differ in temporal pattern and magnitude across lakes, both convection and wind shear should be considered in future formulations of lake-air gas exchange, especially for small lakes. © 2012 by the American Geophysical Union.
    Jordan S. Read, David P. Hamilton, Ankur R. Desai, Kevin C. Rose, Sally MacIntyre, John D. Lenters, Robyn L. Smyth, Paul C. Hanson, Jonathan J. Cole, Peter A. Staehr, James A. Rusak, Donald C. Pierson, Justin D. Brookes, Alo Laas, and Chin H. W
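    The two turbulence scales compared above can be illustrated with standard bulk formulas; these are assumptions for illustration, not the paper's actual formulations, and the drag coefficient, densities, and sample inputs are invented values:

```python
def u_star_water(wind_speed_10m, cd=1.3e-3, rho_air=1.2, rho_water=1000.0):
    """Water-side friction velocity u* (m/s) from the 10-m wind speed via a bulk
    drag law, scaled to the water side by the air/water density ratio."""
    u_star_air = (cd ** 0.5) * wind_speed_10m
    return u_star_air * (rho_air / rho_water) ** 0.5

def w_star(buoyancy_flux, mixed_layer_depth):
    """Convective velocity scale w* (m/s): w* = (B * z_mix)^(1/3)."""
    return (buoyancy_flux * mixed_layer_depth) ** (1.0 / 3.0)

u = u_star_water(5.0)   # 5 m/s wind
w = w_star(5e-8, 3.0)   # assumed nighttime surface cooling over a 3-m mixed layer
print(u, w, u / w)
```

    Even a moderate 5 m/s wind yields a water-side u* of only a few mm/s, comparable to w* from weak nighttime cooling, which is why both terms matter for the mixed-layer turbulence budget, especially in small, sheltered lakes.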

    Normalizing single-cell RNA sequencing data: challenges and opportunities

    Single-cell transcriptomics is becoming an important component of the molecular biologist's toolkit. A critical step when analyzing data generated using this technology is normalization. However, normalization is typically performed using methods developed for bulk RNA sequencing or even microarray data, and the suitability of these methods for single-cell transcriptomics has not been assessed. Here we discuss commonly used normalization approaches and illustrate how they can produce misleading results. Finally, we present alternative approaches and provide recommendations for single-cell RNA sequencing users.
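    As a concrete instance of the bulk-derived approaches under discussion, here is a minimal sketch of counts-per-million (library-size) normalization on an invented gene-by-cell count matrix:

```python
def cpm_normalize(counts):
    """Scale each cell's counts to counts per million (rows = genes, columns = cells)."""
    totals = [sum(cell) for cell in zip(*counts)]  # total counts per cell
    return [[1e6 * c / totals[j] for j, c in enumerate(row)] for row in counts]

# Toy matrix: two genes measured in two cells with very different library sizes.
counts = [
    [10, 0],   # gene A
    [90, 50],  # gene B
]
print(cpm_normalize(counts))
```

    For single-cell data, such global scaling can mislead when capture efficiency and the prevalence of zero counts vary strongly across cells, which is the kind of failure mode the review illustrates.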