552 research outputs found

    Coasting cosmologies with time dependent cosmological constant

    Get PDF
    The effect of a time dependent cosmological constant is considered in a family of scalar-tensor theories. Friedmann-Robertson-Walker cosmological models for vacuum and perfect fluid matter are found. They have a linear expansion factor, the so-called coasting cosmology; the gravitational "constant" decreases inversely with time, so the model satisfies the Dirac hypothesis. The cosmological "constant" decreases inversely with the square of time, so it can have a very small value at the present time. Comment: 7 pages, latex file (ijmpal macro), accepted for publication in Int. Mod. Phys.
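
    A compact restatement of the scalings described above, written in standard notation rather than quoted from the paper itself:

        a(t) \propto t, \qquad G(t) \propto t^{-1}, \qquad \Lambda(t) \propto t^{-2},

    so both G and Λ decay with cosmic time, Λ fastest, which is why a very small present-day value follows naturally from a linear (coasting) expansion.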

    Hamilton's principle: why is the integrated difference of kinetic and potential energy minimized?

    Full text link
    I present an intuitive answer to an often-asked question: why is the integrated difference K-U between the kinetic and potential energy the quantity to be minimized in Hamilton's principle? Using elementary arguments, I map the problem of finding the path of a moving particle connecting two points to that of finding the minimum potential energy of a static string. The mapping implies that the configuration of a non-stretchable string of variable tension corresponds to the spatial path dictated by the Principle of Least Action; that of a stretchable string in space-time is the one dictated by Hamilton's principle. This correspondence provides the answer to the question above: while a downward force curves the trajectory of a particle in the (x,t) plane downward, an upward force of the same magnitude stretches the string to the same configuration x(t). Comment: 7 pages, 4 figures. Submitted to the American Journal of Physics
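
    In standard textbook notation (not specific to this paper), the quantity minimized in Hamilton's principle is the action

        S[x(t)] = \int_{t_1}^{t_2} \bigl( K - U \bigr)\, dt, \qquad \delta S = 0,

    and the string analogy sketched above maps this variational problem onto finding the minimum-potential-energy configuration x(t) of a static string in the (x,t) plane.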

    Detection of Delaminations Located at Ceramic/Metal Jointed Interface by Scanning Acoustic Microscopy

    Get PDF
    Since ceramic/metal joints now play an important role as structural parts in electrical, electronic, and aerospace applications, techniques must be developed for evaluating the integrity of these joints. Techniques such as collimated X-ray beam radiography [1], indentation fracture, and laser speckle imaging have been developed with limited success. No truly nondestructive technique for evaluating joint strength has been established to date. If a conventional C-scan mode apparatus could be applied directly to detect a defect such as a delamination at the joint interface, it would be an attractive way to visualize the defect as a first step in the evaluation. The standard ceramic/metal joint specimen is essentially a rectangular bar. When the C-scan mode apparatus is used to image the jointed interface, the acoustic wave must be incident from the ceramic side of the specimen. Given the attenuation of an ultrasonic wave in the frequency range from 10 to 100 MHz and the thickness of the ceramic portion of the specimen, the wave may not reach the interface, or the wave reflected from the interface may not be detected. At frequencies below 10 MHz the interface may be imaged, but with limited resolution; moreover, the contrast may be poor because of water diffusing into cracks in the surface of the specimen. When a conventional A-mode apparatus such as a digital oscilloscope is used to obtain quantitative data, reflected waveforms can be collected, but the data may not be detailed enough to analyze a defect such as one produced by a fracturing process. Recent studies have shown that delaminations at a ceramic/metal joint, such as a Si3N4/Cu/steel joint, originate along the periphery of the interface [2]

    Teleology and Realism in Leibniz's Philosophy of Science

    Get PDF
    This paper argues for an interpretation of Leibniz’s claim that physics requires both mechanical and teleological principles as a view regarding the interpretation of physical theories. Granting that Leibniz’s fundamental ontology remains non-physical, or mentalistic, it argues that teleological principles nevertheless ground a realist commitment about mechanical descriptions of phenomena. The empirical results of the new sciences, according to Leibniz, have genuine truth conditions: there is a fact of the matter about the regularities observed in experience. Taking this stance, however, requires bringing non-empirical reasons to bear upon mechanical causal claims. This paper first evaluates extant interpretations of Leibniz’s thesis that there are two realms in physics as describing parallel, self-sufficient sets of laws. It then examines Leibniz’s use of teleological principles to interpret scientific results in the context of his interventions in debates in seventeenth-century kinematic theory, and in the teaching of Copernicanism. Leibniz’s use of the principle of continuity and the principle of simplicity, for instance, reveals an underlying commitment to the truth-aptness, or approximate truth-aptness, of the new natural sciences. The paper concludes with a brief remark on the relation between metaphysics, theology, and physics in Leibniz

    Measuring kindergarteners’ motivational beliefs about writing: a mixed-methods exploration of alternate assessment formats

    Get PDF
    There have been a handful of studies on kindergarteners’ motivational beliefs about writing, yet measuring these beliefs in young children continues to pose a set of challenges. The purpose of this exploratory, mixed-methods study was to examine how kindergarteners understand and respond to different assessment formats designed to capture their motivational beliefs about writing. Across two studies, we administered four assessment formats — a 4-point Likert-type scale survey, a binary choice survey, a challenge preference task, and a semi-structured interview — to a sample of 114 kindergarteners engaged in a larger writing intervention study. Our overall goals were to examine the benefits and challenges of using these assessment formats to capture kindergarteners’ motivational beliefs and to gain insight into future directions for studying these beliefs in this young age group. Many participants had a difficult time responding to the 4-point Likert-type scale survey, due to challenges with the response format and the way the items were worded. However, the more simplified assessment formats, including the binary choice survey and challenge preference task, may not have fully captured the nuances and complexities of participants’ motivational beliefs. The semi-structured interview leveraged participants’ voices and highlighted details that were overlooked in the other assessment formats. Participants’ interview responses were deeply intertwined with their local, everyday experiences and pushed back against common assumptions about what constitutes negatively oriented motivational beliefs about writing. Overall, our results suggest that kindergarteners’ motivational beliefs appear to be multifaceted, contextually grounded, and hard to quantify. Additional research is needed to further understand how motivational beliefs are shaped during kindergarten. We argue that motivational beliefs must be studied in context rather than in a vacuum in order to work toward a fair and meaningful understanding of motivational beliefs about writing that can be applied to school settings

    Wigner Distribution Function Approach to Dissipative Problems in Quantum Mechanics with emphasis on Decoherence and Measurement Theory

    Get PDF
    We first review the usefulness of Wigner distribution functions (WDFs), associated with Lindblad and pre-master equations, for analyzing a host of problems in quantum optics where dissipation plays a major role, an arena where weak-coupling and long-time approximations are valid. However, we also show their limitations for the discussion of decoherence, which is generally a short-time phenomenon with decay times typically much shorter than typical dissipative decay times. We discuss two approaches to the problem, both of which use a quantum Langevin equation (QLE) as a starting point: (a) use of a reduced WDF, but in the context of an exact master equation; and (b) use of a WDF for the complete system, corresponding to entanglement at all times
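
    For reference, the standard textbook forms of the two central objects named above (these expressions are not quoted from the paper) are the Wigner function of a density matrix ρ,

        W(q,p) = \frac{1}{2\pi\hbar} \int dy\, e^{i p y/\hbar}\, \langle q - y/2 \,|\, \rho \,|\, q + y/2 \rangle ,

    and the Lindblad master equation for dissipative evolution,

        \dot{\rho} = -\frac{i}{\hbar}[H,\rho] + \sum_k \Bigl( L_k \rho L_k^{\dagger} - \tfrac{1}{2}\bigl\{ L_k^{\dagger} L_k , \rho \bigr\} \Bigr).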

    Stochastic climate theory and modeling

    Get PDF
    Stochastic methods are a crucial area in contemporary climate research and are increasingly being used in comprehensive weather and climate prediction models as well as reduced order climate models. Stochastic methods are used as subgrid-scale parameterizations (SSPs) as well as for model error representation, uncertainty quantification, data assimilation, and ensemble prediction. The need to use stochastic approaches in weather and climate models arises because we still cannot resolve all necessary processes and scales in comprehensive numerical weather and climate prediction models. In many practical applications one is mainly interested in the largest and potentially predictable scales and not necessarily in the small and fast scales. For instance, reduced order models can simulate and predict large-scale modes. Statistical mechanics and dynamical systems theory suggest that in reduced order models the impact of unresolved degrees of freedom can be represented by suitable combinations of deterministic and stochastic components and non-Markovian (memory) terms. Stochastic approaches in numerical weather and climate prediction models also lead to the reduction of model biases. Hence, there is a clear need for systematic stochastic approaches in weather and climate modeling. In this review, we present evidence for stochastic effects in laboratory experiments. Then we provide an overview of stochastic climate theory from an applied mathematics perspective. We also survey the current use of stochastic methods in comprehensive weather and climate prediction models and show that stochastic parameterizations have the potential to remedy many of the current biases in these comprehensive models
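
    As an illustration of the kind of reduced-order model described above, the sketch below integrates a toy equation with a deterministic drift plus an additive stochastic term standing in for unresolved scales (the non-Markovian memory term mentioned in the review is omitted for brevity). All names and parameter values are illustrative assumptions, not taken from the review; the integration scheme is the standard Euler-Maruyama method.

        # Toy reduced-order model: deterministic forcing/damping plus additive noise
        # representing unresolved subgrid scales, integrated with Euler-Maruyama.
        import numpy as np

        rng = np.random.default_rng(0)

        def drift(x, forcing=2.0, damping=1.0):
            # Deterministic part: large-scale forcing minus linear damping that
            # stands in for the mean effect of the unresolved degrees of freedom.
            return forcing - damping * x

        sigma = 0.5              # amplitude of the stochastic subgrid-scale term
        dt, n_steps = 0.01, 10_000
        x = np.zeros(n_steps)
        for i in range(1, n_steps):
            noise = sigma * np.sqrt(dt) * rng.standard_normal()
            x[i] = x[i - 1] + drift(x[i - 1]) * dt + noise

        # The trajectory fluctuates around forcing/damping = 2.0.
        print("long-time mean:", x[n_steps // 2:].mean())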

    The state of the Martian climate

    Get PDF
    The average annual surface air temperature (SAT) anomaly for 2016 for land stations north of 60°N was +2.0°C, relative to the 1981–2010 average value (Fig. 5.1). This marks a new high for the record starting in 1900, and is a significant increase over the previous highest value of +1.2°C, which was observed in 2007, 2011, and 2015. Average global annual temperatures also showed record values in 2015 and 2016. Currently, the Arctic is warming at more than twice the rate of lower latitudes