1,579 research outputs found
The Crisis of Really Existing Socialism (La crise du socialisme réellement existant)
The Soviet Union and its East European glacis are experiencing their second major wave of crisis since the death of Stalin. But unlike the "de-Stalinization" crisis of the second half of the 1950s, which was largely political and was overcome through a combination of repression and reform that left the system basically intact, the present crisis cannot be weathered so easily and threatens to usher in a period of political upheaval, as it already has in Poland and, to some extent, in Romania.

On the most obvious level, the crisis manifests itself in the constant decline of the economic growth rate since the late 1950s, which has put an end to the slow but steady rise in living standards, the basis upon which the tacit post-Stalin accord between the bureaucracy and society was founded. The roots of this crisis are deeply structural, but structural reforms, in particular the introduction of the "regulated market mechanism," which appears to be the only viable alternative open to the bureaucracy, will meet with strong opposition from important sectors of that elite, especially the provincial party bosses, and threaten to create a split in its ranks.

At the same time, such a reform is politically unfeasible without important concessions to the working class in the direction of democratization or, at the least, the right to organize into independent trade unions to protect itself against management, whose powers would be greatly enhanced by the reform. But such concessions to the working class, as Poland shows, are perceived by the bureaucracy as a threat to its very existence. Meanwhile, the working class today is potentially a much more formidable political force than at any time since the civil war.

The leadership therefore faces a dilemma. The 1980s are likely to see the explosive combination of a simultaneous crisis "at the top" and "at the bottom".
Communicating Uncertainty in Warning Intelligence
On November 23, 2021, Dr. David Mandel presented Communicating Uncertainty in Warning Intelligence at the 2021 CASIS West Coast Security Conference. The primary concepts of Dr. Mandel's presentation centered on the use of verbal versus numeric probabilities, the variability in how verbal probabilities are understood, and the relationship between confidence levels and event probabilities. Dr. Mandel's presentation was followed by a question-and-answer period directed at a group of panelists, allowing the audience and CASIS Vancouver executives to engage directly with the content of each speaker's presentation.
Artificial General Intelligence, Existential Risk, and Human Risk Perception
Artificial general intelligence (AGI) does not yet exist, but given the pace
of technological development in artificial intelligence, it is projected to
reach human-level intelligence within roughly the next two decades. After that,
many experts expect it to far surpass human intelligence and to do so rapidly.
The prospect of superintelligent AGI poses an existential risk to humans
because there is no reliable method for ensuring that AGI goals stay aligned
with human goals. Drawing on publicly available forecaster and opinion data,
the author examines how experts and non-experts perceive risk from AGI. The
findings indicate that the perceived risk of a world catastrophe or extinction
from AGI is greater than for other existential risks. The increase in perceived
risk over the last year is also steeper for AGI than for other existential
threats (e.g., nuclear war or human-caused climate change). That AGI is a
pressing existential risk is something on which experts and non-experts agree,
but the basis for such agreement currently remains obscure.
Constructing A Flexible Likelihood Function For Spectroscopic Inference
We present a modular, extensible likelihood framework for spectroscopic
inference based on synthetic model spectra. The subtraction of an imperfect
model from a continuously sampled spectrum introduces covariance between
adjacent datapoints (pixels) into the residual spectrum. For the high
signal-to-noise data with large spectral range that is commonly employed in
stellar astrophysics, that covariant structure can lead to dramatically
underestimated parameter uncertainties (and, in some cases, biases). We
construct a likelihood function that accounts for the structure of the
covariance matrix, utilizing the machinery of Gaussian process kernels. This
framework specifically addresses the common problem of mismatches in model
spectral line strengths (with respect to data) due to intrinsic model
imperfections (e.g., in the atomic/molecular databases or opacity
prescriptions) by developing a novel local covariance kernel formalism that
identifies and self-consistently downweights pathological spectral line
"outliers." By fitting many spectra in a hierarchical manner, these local
kernels provide a mechanism to learn about and build data-driven corrections to
synthetic spectral libraries. An open-source software implementation of this
approach is available at http://iancze.github.io/Starfish, including a
sophisticated probabilistic scheme for spectral interpolation when using model
libraries that are sparsely sampled in the stellar parameters. We demonstrate
some salient features of the framework by fitting the high resolution -band
spectrum of WASP-14, an F5 dwarf with a transiting exoplanet, and the moderate
resolution -band spectrum of Gliese 51, an M5 field dwarf.
Comment: Accepted to ApJ. Incorporated referees' comments. New figures 1, 8,
10, 12, and 14. Supplemental website: http://iancze.github.io/Starfish
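The core idea of the abstract above — that correlated residuals demand a full covariance matrix in the likelihood rather than a diagonal one — can be sketched in a few lines. This is a minimal illustration, not the Starfish implementation; the Matérn-3/2 kernel choice and all function names here are assumptions for exposition only:

```python
import numpy as np

def matern32_kernel(wl, amp, ell):
    # Matern-3/2 covariance between pixels at wavelengths wl:
    # nearby pixels are correlated on a length scale ell.
    r = np.abs(wl[:, None] - wl[None, :])
    x = np.sqrt(3.0) * r / ell
    return amp**2 * (1.0 + x) * np.exp(-x)

def log_likelihood(resid, wl, sigma, amp, ell):
    # Multivariate-normal log-likelihood of the residual spectrum
    # (data minus model), with C = diagonal photon noise plus a
    # correlated kernel term capturing model imperfections.
    C = np.diag(sigma**2) + matern32_kernel(wl, amp, ell)
    L = np.linalg.cholesky(C)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, resid))
    logdet = 2.0 * np.sum(np.log(np.diag(L)))
    n = resid.size
    return -0.5 * (resid @ alpha + logdet + n * np.log(2.0 * np.pi))
```

Setting amp to zero recovers the usual independent-pixel (chi-squared) likelihood; with amp > 0, correlated structure in the residuals is absorbed by the kernel instead of artificially tightening the parameter posteriors. The paper's local "outlier" kernels would add further localized terms to C.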
The Treatment Versus Experimentation Dilemma in Dose-finding Studies
Phase I clinical trials are conducted in order to find the maximum tolerated dose (MTD) of a given drug from a finite set of doses. For ethical reasons, these studies are usually sequential, treating patients or groups of patients with the best available dose according to current knowledge. However, it is proved here that such designs, and, more generally, designs that concentrate on one dose from some time on, cannot provide consistent estimators for the MTD unless very strong parametric assumptions hold. We describe a family of sequential designs that treat individuals with one of the two doses closest to the estimated MTD, and prove that such designs, under general conditions, eventually concentrate on the two doses closest to the MTD and estimate the MTD consistently. It is shown that this family contains randomized designs that assign the MTD with probability that approaches 1 as the size of the experiment goes to infinity. We compare several designs by simulation, studying their performance in terms of correct estimation of the MTD and the proportion of individuals treated with the MTD.
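The two-closest-doses idea can be made concrete with a small simulation. This is an illustrative sketch only — the function name, the simple empirical-rate MTD estimator, and the 50/50 randomization between the two candidate doses are assumptions, not the specific designs analyzed in the paper:

```python
import numpy as np

def simulate_two_dose_design(tox_probs, target, n_patients, seed=0):
    # Sequentially assign each patient to one of the two doses
    # nearest the current MTD estimate, then report the final estimate.
    rng = np.random.default_rng(seed)
    K = len(tox_probs)
    n = np.zeros(K)    # patients treated at each dose
    tox = np.zeros(K)  # toxicities observed at each dose
    # Burn-in: one patient per dose so every empirical rate is defined.
    for d in range(K):
        n[d] += 1
        tox[d] += rng.random() < tox_probs[d]
    for _ in range(n_patients - K):
        est = tox / n
        mtd = int(np.argmin(np.abs(est - target)))
        # Second candidate: the adjacent dose on the side of the target.
        neighbor = min(mtd + 1, K - 1) if est[mtd] <= target else max(mtd - 1, 0)
        d = rng.choice([mtd, neighbor])  # randomize between the two
        n[d] += 1
        tox[d] += rng.random() < tox_probs[d]
    est = tox / n
    return int(np.argmin(np.abs(est - target))), n
```

Because both doses bracketing the estimated MTD keep receiving patients, the design continues to gather information on either side of the target toxicity rate, which is what allows consistent estimation, in contrast to designs that eventually settle on a single dose.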
The Intelligentsia and the October Revolution
This article examines the attitude of the "democratic," left-leaning intelligentsia toward the revolutions of 1917. It documents and analyzes the intelligentsia's growing alienation from the popular classes, the workers and peasants, over the course of 1917. That alienation is explained against the background of the deepening class polarization of Russian society, a process that can be traced back to the Revolution of 1905 and even earlier, but which reached its apogee in 1917 in the October Revolution. That revolution is revealed as an exclusively plebeian affair to which the left-leaning intelligentsia was intensely hostile, a situation that deeply worried worker activists.