1,588 research outputs found
Aftermath Of The Nothing
This article consists of two parts that are complementary yet self-contained.
In the first, we develop some surprising consequences of
introducing a new constant, called Lambda, to represent the object ``nothing"
or ``void" in a standard set theory. On a conceptual level, it allows us to see sets in a new light and gives legitimacy to the empty set. On a technical level, it leads to a relative resolution of the anomaly of the intersection of a family free of sets.
In the second part, we show the interest of introducing an operator of potentiality into a standard set theory. Among other results, this operator makes it possible to prove the existence of a hierarchy of empty sets and to propose a solution to the puzzle of the "ubiquity" of the empty set.
Both theories are presented with equi-consistency results (model and interpretation).
A declaration of intent: in each case, the starting point is a conceptual questioning; the technical tools come second.
Keywords: nothing, void, empty set, null-class, zero-order logic with quantifiers, potential, effective, ubiquity, hierarchy, equality, equality by the bottom, identity, identification
An Empirical Analysis of Income Convergence in the European Union
In this paper, we investigate the convergence process within the European Union (27 countries). More particularly, we study the convergence process of the new entrants from Central and Eastern Europe and of the 15 Western countries between 1990 and 2007. Applying a panel approach to the convergence equation derived by Mankiw et al. (1992) from the Solow model, we highlight the existence of heterogeneity in the European Union and show that new entrants and former members of the European Union can be seen as belonging to significantly different groups of convergence. The existence of heterogeneity in the European Union or the Eurozone might affect their stability, as the recent Greek sovereign debt crisis illustrates.
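The panel approach above rests on the standard beta-convergence regression: average growth is regressed on log initial income, and a negative slope signals convergence (poorer economies grow faster). A minimal cross-section sketch with synthetic data (all numbers here are illustrative, not the paper's estimates):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic sample: 27 hypothetical countries with 1990 baseline income (log scale).
log_y0 = rng.uniform(8.5, 10.5, size=27)           # log initial GDP per capita
beta_true = -0.02                                  # assumed convergence coefficient
growth = 0.25 + beta_true * log_y0 + rng.normal(0, 0.005, size=27)

# Beta-convergence regression: growth_i = a + b * log(y_i0) + e_i.
X = np.column_stack([np.ones_like(log_y0), log_y0])
a, b = np.linalg.lstsq(X, growth, rcond=None)[0]

# A negative slope b signals convergence: initially poorer countries grow faster.
print(f"estimated convergence coefficient b = {b:.4f}")
```

The paper's panel version adds country and period effects and the Solow controls from Mankiw et al. (1992); the sign logic of the slope is the same.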
Consumer rapport to luxury: Analyzing complex and ambivalent attitudes
The very nature of luxury goods, the variety of consumption situations and the everlasting philosophical debate over luxury lead to particularly complex and ambivalent consumer attitudes, as evidenced by a first study based on the content analysis of in-depth interviews. A second study, based on surveys in twenty countries using finite mixture modeling, identifies three types of consumer rapport to luxury.
Keywords: luxury; ambiguity; attitude measurement; consumer behavior
How do consumers overcome ambivalence toward hedonic purchases? A typology of consumer strategies
Purchase decisions for hedonic products and services are often characterized by ambivalence: sensory benefits make them attractive, but consumers may feel guilty about buying them. To overcome this ambivalence, consumers frequently adopt strategies that allow them to enjoy hedonic benefits while limiting their negative feelings. Combining an extensive literature review with an interpretive study, the authors identify 23 consumer strategies and propose a typology in four groups on the basis of strategy antecedents: two groups of objective strategies (obtaining consumption benefits without purchasing, objectively containing purchasing costs) and two groups of subjective strategies (manipulating the mental accounting of costs and benefits, relinquishing responsibility).
Keywords: consumer behavior; hedonic purchase; consumer strategies
Razão(ões) Holística(s) e Semântica
This article-testimony can be seen as an example of a possibly new discipline that could be called "scientific metaphysics", made of thought experiments, definitions, some proofs, some explanations, and some conjectures. Of course, to be called a science, the discipline also needs some possibility of "verification". We will see whether this can be achieved.
A bi-projection method for Bingham type flows
We propose and study a new numerical scheme to compute the isothermal and unsteady flow of an incompressible viscoplastic Bingham medium. The main difficulty, for both theoretical and numerical approaches, is due to the non-differentiability of the plastic part of the stress tensor in regions where the rate-of-strain tensor vanishes. This is handled by reformulating the definition of the plastic stress tensor in terms of a projection. A new time scheme, based on the classical incremental projection method for the Newtonian Navier-Stokes equations, is proposed. The plastic tensor is treated implicitly in the first sub-step of the projection scheme and is computed by a fixed-point procedure. A pseudo-time relaxation is added to the Bingham projection, whose effect is to ensure geometric convergence of the fixed-point algorithm. This is a key feature of the bi-projection scheme, which provides a fast and accurate computation of the plastic tensor. Stability and error analyses of the numerical scheme are provided. The error induced by the pseudo-time relaxation term is controlled by a prescribed numerical parameter, so that a first-order estimate of the time error is derived for the velocity field. A second-order cell-centred finite volume scheme on staggered grids is applied for the spatial discretisation. The scheme is assessed against previously published benchmark results for both Newtonian and Bingham flows in a two-dimensional lid-driven cavity at a Reynolds number of 1,000. Moreover, the proposed numerical scheme is able to reproduce the fundamental property of finite-time cessation of a viscoplastic medium in the absence of any energy source term in the equations. For a fixed value (100) of the Bingham number, various numerical simulations for a range of Reynolds numbers up to 200,000 were performed with the bi-projection scheme on a grid with 1024x1024 mesh points. The effect of this (physical) parameter on the flow behaviour is discussed.
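The core idea of the abstract, expressing the plastic stress as a projection and computing it by a relaxed fixed-point iteration, can be sketched in isolation. Below is a minimal, hypothetical illustration (the function names, the relaxation parameter `theta`, and the choice of radius are ours, not the paper's; the actual scheme couples this with the velocity projection sub-steps):

```python
import numpy as np

def project_ball(tau, radius):
    """Project a tensor (here a 2x2 array) onto the ball |tau| <= radius."""
    norm = np.linalg.norm(tau)
    return tau if norm <= radius else tau * (radius / norm)

def plastic_stress(D, Bn, r=1.0, theta=0.1, tol=1e-12, max_iter=500):
    """Relaxed fixed-point iteration for sigma = P_B(sigma + r*D),
    a toy version of the projection reformulation of the plastic stress.
    theta plays the role of a pseudo-time relaxation parameter."""
    sigma = np.zeros_like(D)
    for _ in range(max_iter):
        target = project_ball(sigma + r * D, np.sqrt(2) * Bn)
        sigma_new = (1 - theta) * sigma + theta * target  # relaxed update
        if np.linalg.norm(sigma_new - sigma) < tol:
            return sigma_new
        sigma = sigma_new
    return sigma

# Example: a simple-shear rate-of-strain tensor with nonzero magnitude.
D = np.array([[0.0, 0.5], [0.5, 0.0]])
sigma = plastic_stress(D, Bn=1.0)
print(np.linalg.norm(sigma))  # saturates at the yield radius sqrt(2)*Bn since D != 0
```

Once the iterate leaves the ball, the projection target is fixed and the relaxed update contracts with factor (1 - theta), which is the geometric convergence the relaxation is meant to guarantee.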
Origins of plateau formation in ion energy spectra under target normal sheath acceleration
Target normal sheath acceleration (TNSA) is a method employed in
laser-matter interaction experiments to accelerate light ions (usually
protons). Laser setups with durations of a few tens of femtoseconds and relatively low
intensity contrasts observe plateau regions in their ion energy spectra when
shooting on thin foil targets with thicknesses of order 10 μm. In
this paper we identify a mechanism which explains this phenomenon using
one-dimensional particle-in-cell simulations. Fast electrons generated by the
laser interaction recirculate back and forth through the target, giving rise to
time-oscillating charge and current densities at the target backside. Periodic
decreases in the electron density lead to transient disruptions of the TNSA
sheath field: peaks in the ion spectra form as a result, which are then spread
in energy by a modified potential driven by further electron recirculation.
The ratio between the laser pulse duration and the recirculation period
(dependent on the target thickness, including the portion of the pre-plasma
which is denser than the critical density) determines whether a plateau forms in the
energy spectra.
Comment: 11 pages, 12 figures
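The plateau criterion above compares the laser pulse duration to the electron recirculation period. For a relativistic electron bouncing through a foil of thickness d at speed close to c, that period is commonly estimated as 2d/c. A back-of-envelope sketch with assumed values (the thickness and pulse duration below are illustrative, not taken from the paper's simulations):

```python
# Back-of-envelope comparison for TNSA recirculation (all values assumed):
# an electron crossing a foil of thickness d at ~c recirculates with period ~ 2*d/c.
c = 3.0e8                 # speed of light, m/s
d = 10e-6                 # assumed foil thickness, 10 micrometres
tau_laser = 40e-15        # assumed pulse duration, 40 fs

T_rec = 2 * d / c         # recirculation period in seconds (~67 fs here)
ratio = tau_laser / T_rec # the quantity the abstract says controls plateau formation

print(f"recirculation period ~ {T_rec * 1e15:.0f} fs, pulse/period ratio ~ {ratio:.2f}")
```

Note that the effective d should include the over-critical part of the pre-plasma, as the abstract specifies, so the real period is somewhat longer than the bare foil estimate.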
Non-isochoric stable granular models taking into account fluidisation by pore gas pressure
In this paper, we study non-isochoric models for mixtures of solid particles,
at high volume concentration, and a gas. One of the motivations of this work
concerns geophysics and more particularly the pyroclastic density currents
which are precisely mixtures of pyroclast and lithic fragments and air. They
are extremely destructive phenomena, capable of devastating urbanised areas,
and are known to propagate over long distances, even over almost flat
topography. Fluidisation of these dense granular flows by pore gas pressure is
one response that could explain this behaviour and must therefore be taken into
account in the models. Starting from a gas-solid mixing model and invoking the
compressibility of the gas through an equation of state, we rewrite the conservation
of mass equation of the gas phase into an equation on the pore gas pressure
whose net effect is to reduce the friction between the particles. The momentum
equation of the solid phase is completed by generic constitutive laws,
specified as in Schaeffer et al (2019, Journal of Fluid Mechanics, 874,
926-951) by a yield function and a dilatancy function. Therefore, the
divergence of the velocity field, which reflects the ability of the granular
flow to expand or compress, depends on the volume fraction, pressure, strain
rate and inertial number. In addition, we require the dilatancy function to
describe the rate of volume change of the granular material near an isochoric
equilibrium state, i.e. at constant volume. This property ensures that the
volume fraction, which is the solution to the conservation of mass equation, is
positive and finite at all times. We also require that the non-isochoric
fluidised model is linearly stable and dissipates energy (over time). In this
theoretical framework, we derive the dilatancy models corresponding to
classical rheologies such as Drucker-Prager and μ(I) (with or without
expansion effects). The main result of this work is to show that it is possible
to obtain non-isochoric and fluidised granular models satisfying all the
properties necessary to correctly account for the physics of granular flows and
being well-posed, at least linearly stable.
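The μ(I) rheology invoked above relates the effective friction coefficient to the inertial number I = γ̇·d/√(p/ρ_s), typically via μ(I) = μ_s + (μ_2 − μ_s)/(1 + I_0/I). A minimal sketch, with coefficient values that are typical literature choices rather than this paper's calibration:

```python
import math

def inertial_number(shear_rate, grain_diameter, pressure, grain_density):
    """Inertial number I = gamma_dot * d / sqrt(p / rho_s) (dimensionless)."""
    return shear_rate * grain_diameter / math.sqrt(pressure / grain_density)

def mu_of_I(I, mu_s=0.38, mu_2=0.64, I0=0.279):
    """mu(I) friction law; mu_s, mu_2, I0 are assumed, commonly quoted values."""
    return mu_s + (mu_2 - mu_s) / (1.0 + I0 / I)

# Example with illustrative glass-bead-like parameters:
# shear rate 10 1/s, grains of 0.5 mm, pressure 100 Pa, solid density 2500 kg/m^3.
I = inertial_number(shear_rate=10.0, grain_diameter=5e-4,
                    pressure=100.0, grain_density=2500.0)
print(f"I = {I:.3f}, mu(I) = {mu_of_I(I):.3f}")
```

The friction coefficient interpolates monotonically between the quasi-static limit μ_s (I → 0) and the dynamic limit μ_2 (I → ∞), which is the family of laws the dilatancy functions in the paper are built around.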