201 research outputs found
The Speed of Light and the Hubble Parameter: The Mass-Boom Effect
We prove here that Newton's universal gravitation and momentum conservation
laws together reproduce Weinberg's relation. It is shown that the Hubble
parameter H, or equivalently the age of the Universe t, must be built into
this relation. Using a wave-to-particle interaction technique we then prove that
the speed of light c decreases with cosmological time, and that c is
proportional to the Hubble parameter H. We see the expansion of the Universe as
a local effect due to the LAB value of the speed of light, c_0, taken as constant.
We present a generalized redshift law and find a predicted acceleration for
photons that agrees well with the Pioneer 10/11 anomalous acceleration. We
finally present a cosmological model, coherent with the above
results that we call the Mass-Boom. It has a linear increase of mass m with
time as a result of the speed of light c linear decrease with time, and the
conservation of momentum mc. We obtain a baryonic mass parameter equal to the
curvature parameter, Omega_m = Omega_k, so that the model is of the Einstein
static type: closed, finite, spherical, unlimited, with zero cosmological
constant. This model is the cosmological view as seen by photons, neutrinos,
tachyons, etc., in contrast with the local view, the LAB reference. Neither dark
matter nor dark energy is required by this model. With an initial constant
speed of light during a short time we get inflation (an exponential expansion).
This converts, during the inflation time, Planck's fluctuation length of
10^-33 cm to the present size of the Universe (about 10^28 cm, constant from then
on). Thereafter the Mass-Boom brings the initial mass of the Universe (about
10^15 g) to its present value of about 10^55 g.
Comment: 15 pages, presented at the 9th Symposium on "Frontiers of Fundamental
Physics", 7-9 Jan. 2008, University of Udine, Italy. Changed content
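The arithmetic behind the abstract's central claim can be sketched as follows (our reading of the abstract, with an arbitrary proportionality constant a; not the paper's own derivation):

```latex
% Conservation of momentum with mass growing linearly in time:
p = m(t)\,c(t) = \text{const}, \qquad m(t) = a\,t
  \;\Longrightarrow\; c(t) = \frac{p}{a\,t},
\qquad
\frac{\dot c}{c} = -\frac{1}{t}.
% Identifying the age of the Universe with the inverse Hubble
% parameter, t \sim 1/H, then gives a relative variation of c
% proportional to H, i.e. the claimed proportionality c \propto H.
```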
Quantum Information and Wave function Collapse
Information-theoretic restrictions on the information transferred in the
measurement of an object S by an information system O are studied. It is shown
that such constraints, induced by the Heisenberg commutation relations, result
in the loss of information about the purity of the S state. Consequently, it
becomes impossible for O to discriminate between pure and mixed S states. In
individual events this effect is manifested by the stochastic outcomes of pure
S state measurement, i.e. the collapse of the pure S state.
Comment: 8 pages, talk given at the Symposium 'Frontiers of Fundamental
Physics', Udine, Italy, January 2008, to appear in Proceedings
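The loss of purity information can be illustrated with a standard textbook computation (a toy qubit example, not the paper's formalism): once an observer can access only the populations in the measurement basis, a pure superposition state produces exactly the same statistics as the maximally mixed state, even though the full density matrices have different purity.

```python
import numpy as np

# Pure state |+> = (|0> + |1>)/sqrt(2) as a density matrix |+><+|.
rho_pure = np.array([[0.5, 0.5],
                     [0.5, 0.5]])

# Maximally mixed state: 50/50 classical mixture of |0> and |1>.
rho_mixed = np.array([[0.5, 0.0],
                      [0.0, 0.5]])

# An observer restricted to the diagonal (measurement-basis
# populations) cannot tell the two states apart:
print(np.diag(rho_pure))   # [0.5 0.5]
print(np.diag(rho_mixed))  # [0.5 0.5]

# Yet the purity Tr(rho^2) distinguishes them: 1 (pure) vs 1/2 (mixed).
print(np.trace(rho_pure @ rho_pure))    # 1.0
print(np.trace(rho_mixed @ rho_mixed))  # 0.5
```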
Weak nuclear forces cause the strong nuclear force
We determine the strength of the weak nuclear force which holds the lattices
of the elementary particles together. We also determine the strength of the
strong nuclear force which emanates from the sides of the nuclear lattices. The
strong force is the sum of the unsaturated weak forces at the surface of the
nuclear lattices. The strong force is then about 10^6 times stronger than the
weak force between two lattice points.
Comment: 12 pages, 1 figure
Hidden-variable theory versus Copenhagen quantum mechanics
The main assumptions on which Copenhagen quantum mechanics has been based will
be summarized, and the well-known (not yet settled) controversy between
Einstein and Bohr will be analyzed anew. The given assumptions have been
represented basically by the time-dependent Schroedinger equation, to which
some further assumptions have been added. Critical comments against the given
mathematical model structure were raised by Pauli (1933) and by Susskind and
Glogower (1964). They may be removed if only the Schroedinger equation is
retained and the additional assumptions are abandoned, as shown recently. This
seems to contradict the numerous declarations that the Copenhagen model has
been confirmed by experimental results.
However, in most of these experiments only the agreement with the mere
Schroedinger equation has been tested. All the mentioned assumptions have been
tested practically only in the EPR experiment (measurement of coincidence light
transmission through two polarizers) proposed originally by Einstein (1935).
These experimental results, too, have been interpreted as supporting the
Copenhagen alternative; this interpretation, however, does not hold. In fact,
the microscopic world may be adequately described only with the help of the
hidden-variable theory that is represented by the Schroedinger equation without
the mentioned additional assumptions, with the consequence that the earlier
interpretation gap between the microscopic and macroscopic worlds is removed.
The only difference concerns the existence of discrete states. The
possibilities of human reason in getting to know nature will also be briefly
discussed at the beginning of this contribution.
Comment: 10 pages, 2 figures; v2: local refinements and improvements of the
text
Biological Principles in Self-Organization of Young Brain - Viewed from Kohonen Model
Variants of the Kohonen model are proposed to study biological principles of
self-organization in a model of the young brain. We suggest a function to
measure acquired knowledge and use it to auto-adapt the topology of neuronal
connectivity, yielding substantial organizational improvement relative to the
standard model. In the early phase of organization, with most intense learning,
we observe that neural connectivity is of small-world type, which is very
efficient at organizing neurons in response to stimuli. In analogy to the human
brain, where pruning of neural connectivity (and neuron cell death) occurs in
early life, this feature is also present in our model, where it is found to
stabilize neuronal response to stimuli.
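The baseline learning rule such variants build on is the standard Kohonen update (a minimal sketch of the textbook self-organizing map, not the authors' auto-adaptive variant; map size, learning rate and neighbourhood width are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# A small 1-D Kohonen map: n_units neurons, each with a 2-D weight vector.
n_units, dim = 10, 2
weights = rng.random((n_units, dim))

def som_step(weights, x, eta=0.5, sigma=1.0):
    """One Kohonen update: find the best-matching unit (BMU) and pull
    it, together with its lattice neighbours (Gaussian-weighted by
    lattice distance), toward the stimulus x."""
    dists = np.linalg.norm(weights - x, axis=1)
    bmu = int(np.argmin(dists))
    lattice = np.arange(len(weights))
    h = np.exp(-((lattice - bmu) ** 2) / (2 * sigma ** 2))
    return weights + eta * h[:, None] * (x - weights)

# Repeatedly presenting one stimulus draws the map toward it.
x = np.array([0.2, 0.8])
before = np.linalg.norm(weights - x, axis=1).min()
for _ in range(20):
    weights = som_step(weights, x)
after = np.linalg.norm(weights - x, axis=1).min()
print(after < before)  # True: the best-matching unit has moved closer
```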
A dependent nominal type theory
Nominal abstract syntax is an approach to representing names and binding
pioneered by Gabbay and Pitts. So far nominal techniques have mostly been
studied using classical logic or model theory, not type theory. Nominal
extensions to simple, dependent and ML-like polymorphic languages have been
studied, but decidability and normalization results have only been established
for simple nominal type theories. We present an LF-style dependent type theory
extended with name-abstraction types, prove soundness and decidability of
beta-eta-equivalence checking, discuss adequacy and canonical forms via an
example, and discuss extensions such as dependently-typed recursion and
induction principles.
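The central nominal operation, a name transposition acting uniformly on syntax, can be sketched in a few lines (an illustration of the Gabbay-Pitts idea for plain lambda terms, not the paper's dependent type theory; the term encoding and the fresh-name scheme are ours):

```python
# Lambda terms as tuples: ('var', a), ('lam', a, body), ('app', f, x).

def swap(a, b, t):
    """Apply the transposition (a b) to every name in term t.
    Unlike capture-avoiding substitution, swapping is total and purely
    structural: it acts on bound and free names alike."""
    kind = t[0]
    if kind == 'var':
        n = t[1]
        return ('var', b if n == a else a if n == b else n)
    if kind == 'lam':
        n = t[1]
        return ('lam', b if n == a else a if n == b else n, swap(a, b, t[2]))
    return ('app', swap(a, b, t[1]), swap(a, b, t[2]))

def alpha_eq(t, u, fresh='z0'):
    """Alpha-equivalence via swapping with a fresh name (assumed not
    to occur in either term)."""
    if t[0] != u[0]:
        return False
    if t[0] == 'var':
        return t[1] == u[1]
    if t[0] == 'lam':
        return alpha_eq(swap(t[1], fresh, t[2]),
                        swap(u[1], fresh, u[2]), fresh + "'")
    return alpha_eq(t[1], u[1], fresh) and alpha_eq(t[2], u[2], fresh)

print(alpha_eq(('lam', 'a', ('var', 'a')),
               ('lam', 'b', ('var', 'b'))))  # True: both are the identity
```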
Interaction of a CO molecule with a Pt monoatomic chain: the top geometry
Recent experiments showed that the conductance of Pt nanocontacts and
nanowires is measurably reduced by adsorption of CO. We present DFT
calculations of the electronic structure and ballistic conductance of a Pt
monoatomic chain and a CO molecule adsorbed in an on-top position. We find that
the main electronic molecule-chain interaction occurs via the 5σ and 2π*
orbitals of the molecule, involved in a donation/back-donation
process similar to that of CO on transition-metal surfaces. The ideal ballistic
conductance of the monoatomic chain undergoes a moderate reduction of about 1.0
G_0 (from 4 G_0 to 3.1 G_0) upon adsorption of CO. By repeating all
calculations with and without spin-orbit coupling, we find no substantial
spin-orbit induced change either in the chain-molecule interaction mechanism or
in the conductance.
Comment: 4 pages, 2 figures, in proceedings of Frontiers of Fundamental and
Computational Physics
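The quoted conductances follow the Landauer picture, G = G_0 Σ_n T_n over the open channels. A toy tally consistent with the abstract's numbers (the individual channel transmissions below are purely illustrative, not the paper's computed values):

```python
# Landauer conductance: G = G0 * sum of channel transmissions T_n,
# with G0 = 2e^2/h the conductance quantum. We work in units of G0.

def conductance(transmissions):
    """Total ballistic conductance in units of G0."""
    return sum(transmissions)

# Illustrative channel transmissions (hypothetical values chosen to
# reproduce the abstract's totals, NOT the paper's results):
clean_chain = [1.0, 1.0, 1.0, 1.0]   # four fully open channels -> 4 G0
with_co     = [1.0, 0.9, 0.7, 0.5]   # partially blocked        -> 3.1 G0

print(conductance(clean_chain))        # 4.0
print(round(conductance(with_co), 2))  # 3.1
```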
Kripke Semantics for Martin-Löf's Extensional Type Theory
It is well-known that simple type theory is complete with respect to
non-standard set-valued models. Completeness for standard models only holds
with respect to certain extended classes of models, e.g., the class of
cartesian closed categories. Similarly, dependent type theory is complete for
locally cartesian closed categories. However, it is usually difficult to
establish the coherence of interpretations of dependent type theory, i.e., to
show that the interpretations of equal expressions are indeed equal. Several
classes of models have been used to remedy this problem. We contribute to this
investigation by giving a semantics that is standard, coherent, and
sufficiently general for completeness while remaining relatively easy to
compute with. Our models interpret types of Martin-Löf's extensional
dependent type theory as sets indexed over posets or, equivalently, as
fibrations over posets. This semantics can be seen as a generalization to
dependent type theory of the interpretation of intuitionistic first-order logic
in Kripke models. This yields a simple coherent model theory, with respect to
which simple and dependent type theory are sound and complete.
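The Kripke forcing relation that this semantics generalizes can be sketched for propositional intuitionistic logic (a simplified two-world model of our own; the paper treats dependent type theory over arbitrary posets):

```python
# A Kripke model: a poset of worlds and a monotone valuation of atoms.
worlds = ['w0', 'w1']
leq = {('w0', 'w0'), ('w0', 'w1'), ('w1', 'w1')}  # w0 <= w1
val = {'p': {'w1'}}  # atom p holds only at the later world (monotone)

def forces(w, phi):
    """Kripke forcing for atoms, conjunction, and implication.
    Implication quantifies over all future worlds, which is what
    distinguishes intuitionistic from classical semantics."""
    kind = phi[0]
    if kind == 'atom':
        return w in val.get(phi[1], set())
    if kind == 'and':
        return forces(w, phi[1]) and forces(w, phi[2])
    if kind == 'imp':
        return all(forces(v, phi[2])
                   for v in worlds
                   if (w, v) in leq and forces(v, phi[1]))
    raise ValueError(kind)

p = ('atom', 'p')
print(forces('w0', p))              # False: p is not yet known at w0
print(forces('w0', ('imp', p, p)))  # True: p -> p is forced everywhere
```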
Evidence for a New Light Boson from Cosmological Gamma-Ray Propagation?
An anomalously large transparency of the Universe to gamma rays has recently
been discovered by the Imaging Atmospheric Cherenkov Telescopes (IACTs)
H.E.S.S. and MAGIC. We show that observations can be reconciled with standard
blazar emission models provided photon oscillations into a very light
Axion-Like Particle occur in extragalactic magnetic fields. A quantitative
estimate of this effect is successfully applied to the blazar 3C279. Our
prediction can be tested with the satellite-borne Fermi/LAT detector as well as
with the ground-based IACTs H.E.S.S., MAGIC, CANGAROOIII, VERITAS and the
Extensive Air Shower arrays ARGO-YBJ andMILAGRO. Our result also offers an
important observational test for models of dark energy wherein quintessence is
coupled to the photon through an effective dimension-five operator.Comment: 11 pages, 2 figures, Proceeding of the Conference "Frontiers of
Fundamental and Computational Physics", AIP Conference Proceedings 1018
(2008
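The oscillation mechanism invoked above rests on the textbook two-state photon-ALP conversion probability in a single coherent magnetic-field domain; in the massless-ALP limit and natural units it reduces to P = sin^2(gBL/2). A minimal sketch of this formula (the paper's full analysis also averages over many randomly oriented extragalactic domains):

```python
import math

def conversion_probability(g_BL_over_2):
    """Photon -> ALP conversion probability in one coherent domain,
    massless-ALP limit, natural units: P = sin^2(g * B_T * L / 2).
    The argument is the dimensionless combination g*B_T*L/2, with g
    the ALP-photon coupling, B_T the transverse field, L the domain
    size. Textbook two-state formula, not the paper's full treatment."""
    return math.sin(g_BL_over_2) ** 2

# Weak-mixing limit: P ~ (g B L / 2)^2 for a small argument.
x = 1e-3
print(conversion_probability(x))  # ~1e-6
```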
Imperative Object-based Calculi in (Co)Inductive Type Theories
We discuss the formalization of Abadi and Cardelli's imp-ς, a paradigmatic object-based calculus with types and side effects, in Co-Inductive Type Theories, such as the Calculus of (Co)Inductive Constructions (CC(Co)Ind).
Instead of representing the original system directly "as it is", we reformulate its syntax and semantics bearing in mind the proof-theoretical features provided by the target metalanguage. On the one hand, this methodology allows for a smoother implementation and treatment of the calculus in the metalanguage. On the other, it makes it possible to see the calculus from a new perspective, and thus to suggest original and cleaner presentations.
We hence give a new presentation of imp-ς, exploiting natural deduction semantics, (weak) higher-order abstract syntax, and, for a significant fragment of the calculus, coinductive typing systems. This presentation is easier to use and implement than the original one, and the proofs of key metaproperties, e.g. subject reduction, are much simpler.
Although all proof developments have been carried out in the Coq system, the solutions we have devised in the encoding of and metareasoning on imp-ς can be applied to other imperative calculi and proof environments with similar features.
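The object-based flavour of the calculus (methods closed over self, updatable in place) can be mimicked in a few lines of Python (a loose illustration of the ς-calculus style of Abadi and Cardelli, not the formalized system; the encoding of objects as dictionaries is ours):

```python
# Objects as mutable dictionaries of methods; each method takes the
# whole object itself (the "self" bound by ς) as its argument.

def invoke(obj, label):
    """Method invocation o.l: apply the body to the whole object."""
    return obj[label](obj)

def update(obj, label, method):
    """Imperative method update o.l <= ς(self) b: a side effect."""
    obj[label] = method
    return obj

# A one-field counter in object-based style: the field 'get' is a
# method returning a constant, and 'inc' overwrites it through self.
cell = {
    'get': lambda self: 0,
    'inc': lambda self: update(
        self, 'get',
        (lambda v: lambda self: v + 1)(invoke(self, 'get'))),
}

invoke(cell, 'inc')
invoke(cell, 'inc')
print(invoke(cell, 'get'))  # 2
```

Fields-as-constant-methods and state change by method override are exactly the idioms that make the imperative ς-calculus a minimal model of object-oriented languages.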