Decoding Information from noisy, redundant, and intentionally-distorted sources
Advances in information technology reduce barriers to information
propagation, but at the same time they also induce the information overload
problem. When making decisions, merely digesting the relevant information has
become a daunting task due to the massive amount available. This information,
such as that generated by evaluation systems
developed by various web sites, is in general useful but may be noisy and may
also contain biased entries. In this study, we establish a framework to
systematically tackle the challenging problem of information decoding in the
presence of massive and redundant data. When applied to a voting system, our
method simultaneously ranks the raters and the ratees using only the evaluation
data, consisting of an array of scores each of which represents the rating of a
ratee by a rater. Not only is our approach effective in decoding information,
it is also shown to be robust against various hypothetical types of noise as
well as intentional abuses.
Comment: 19 pages, 5 figures, accepted for publication in Physica
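The abstract does not specify the decoding algorithm, but the simultaneous ranking of raters and ratees from a score array can be illustrated with a minimal iterative-refinement sketch. The weighting scheme below (rater reliability taken as the inverse mean-squared deviation from the current consensus) is a hypothetical stand-in, not the paper's actual method:

```python
import numpy as np

def rank_raters_and_ratees(scores, n_iter=50):
    """Jointly estimate ratee quality and rater reliability from a score
    matrix (raters x ratees) by iterative refinement.

    Hypothetical scheme: a ratee's quality is the reliability-weighted
    mean of its scores; a rater's reliability shrinks with the mean
    squared deviation of their scores from the current quality estimates.
    """
    n_raters, n_ratees = scores.shape
    reliability = np.ones(n_raters)
    for _ in range(n_iter):
        w = reliability / reliability.sum()
        quality = w @ scores                       # weighted column means
        err = ((scores - quality) ** 2).mean(axis=1)
        reliability = 1.0 / (err + 1e-9)           # noisy raters downweighted
    return quality, reliability
```

On synthetic data with one erratic rater, the scheme recovers the true quality ordering and assigns the erratic rater the lowest reliability, which is the kind of robustness the abstract claims for its (different) method.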
Selection criteria used when purchasing bulls at the Breeders Performance Tested Bull Sale
The purpose of this study was to determine the selection criteria buyers use for purchasing performance tested bulls at the Breeders Performance Tested Bull Sale. Various criteria were examined including: trait selection, perception of the effectiveness of the performance program, perception of individuals purchasing bulls in relation to the buyer's demographic locations, and buyer characteristics.
To facilitate the purpose of this study, the following specific objectives were developed:
1. To develop an average profile of the individuals who commonly purchase performance tested bulls.
2. To determine the most common selection criteria used by buyers when purchasing bulls and the relationships of those criteria to selected buyer demographic characteristics.
3. To determine buyers' perceptions of the quality of the performance program and its relationship to selected buyer demographic characteristics.
This was a descriptive/correlational study which was Ex Post Facto in nature. Data were collected using a researcher-developed questionnaire. The questionnaire was field tested to determine content validity and reliability, and appropriate adjustments were made prior to mailing to respondents.
Findings
The majority of respondents felt that the test records provided to them on the day of the sale were useful. A large percentage of the respondents indicated their bulls were productive breeders, with only a few respondents experiencing calving problems. When asked why they selected the Breeders Performance Tested Bull Sale, the great majority selected the sale due to its reputation.
Most respondents were satisfied with the bull they purchased and with the performance tested bull program. They responded positively to those questions assessing their satisfaction with the program: they may return to the sale to purchase an additional bull, the bull contributed positively to the genetic improvement of their herd, and they would recommend the sale to others. Given the respondents' positive replies to the various independent variables concerning satisfaction with the sale, it can be concluded that overall satisfaction with the performance program is very positive.
The dependent variables were four computed scale scores (sale factors, general information, descriptive information, and performance information) based upon respondents' perceptions of the importance of various kinds of selection criteria. Scores for each set of factors were arranged on a Likert-type scale ranging from one, very important, to five, very unimportant. The respondent had the opportunity to rate the degree of importance of each selection criterion.
Respondents categorized the perceived importance of the various selection criteria provided to each potential buyer on sale day. The descriptive category received the highest rating while disposition was selected as the most important selection criterion within the category. Performance Information followed very closely. It should be noted that birth weight was selected as the single most important selection criterion (in all groups). The category of General Information followed next with breed ranking as the most important criterion in this group. Sale Factors followed closely behind with reputation of sale ranked as the most important criterion in this category.
There is no reason to conclude that there is a relationship between respondents' satisfaction with the sale, level of education, management practices employed by the buyer upon the bull's arrival at their farm, participation in a bull lease program, farming status, or method of marketing, and their perceived importance of any of the four kinds of selection criteria provided to them about the bull. There was a statistically significant relationship between type of producer and their perceptions of the importance of sale factors and general information.
Implications
The Breeder's Performance Tested Bull Sale has made a tremendous impact on the availability of genetically superior breeding sires in Tennessee. The selection criteria utilized by individuals are the primary strategy for selecting a superior breeding sire. The data compiled in this study reveal those criteria deemed important by the respondents. It is apparent that some of the data provided to potential buyers are unclear in meaning to them. Perhaps potential buyers are unfamiliar with available performance data or unclear about how to utilize a combination of all available data to select a breeding sire.
From phenomenological modelling of anomalous diffusion through continuous-time random walks and fractional calculus to correlation analysis of complex systems
This document contains more than one topic, but they are all connected by either
physical analogy, analytic/numerical resemblance, or because one is a building
block of another. The topics are anomalous diffusion, modelling of stylised
facts based on an empirical random-walker diffusion model, and null-hypothesis
tests in time-series data analysis reusing the same diffusion model. In between,
these topics are interrupted by an introduction of new methods for fast
production of random numbers and matrices of certain types. This interruption
constitutes the entire chapter on random numbers, which is purely algorithmic
and was inspired by the need for fast random numbers of special types. The
sequence of chapters is chronologically meaningful in the sense that fast random
numbers are needed in the first topic, dealing with continuous-time random walks
(CTRWs) and their connection to fractional diffusion. The contents of the last
four chapters were indeed produced in this sequence, but with some temporal
overlap.
While the fast Monte Carlo solution of the time- and space-fractional diffusion
equation is a nice application that sped up hugely with our new method, we were
also interested in CTRWs as a model for certain stylised facts. Without knowing
it, economists [80] reinvented what physicists had subconsciously used for
decades already. It is the so-called stylised fact, for which another word could
be empirical truth. A simple example: the diffusion equation gives the
probability of finding a certain diffusive particle in some position at a
certain time, or indicates the concentration of a dye. It is debatable whether
probability is physical reality. Most importantly, it does not describe the
physical system completely. Instead, the equation describes only a certain
expectation value of interest, where it does not matter whether it is grains,
prices or people that diffuse away. Reality is coded and “averaged” in the
diffusion constant.
Interpreted as an abstract microscopic particle-motion model, a CTRW can solve
the time- and space-fractional diffusion equation. This type of diffusion
equation mimics some types of anomalous diffusion, a name usually given to
effects that cannot be explained by classic stochastic models, in particular not
by the classic diffusion equation. It was recognised only recently, ca. in the
mid 1990s, that the random walk model used here is the abstract particle-based
counterpart of the macroscopic time- and space-fractional diffusion equation,
just like the “classic” random walk with regular jumps ±∆x solves the classic
diffusion equation. Both equations can be solved in a Monte Carlo fashion with
many realisations of walks.
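A minimal Monte Carlo realisation of such a walk can be sketched as follows. Note that the Pareto waiting times and unit jumps below are illustrative stand-ins chosen to keep the sketch short; the exact walk for the time- and space-fractional equation uses Mittag-Leffler waiting times and Lévy-stable jumps:

```python
import numpy as np

def ctrw_positions(t_max, beta=0.7, n_walkers=10_000, seed=1):
    """Monte Carlo sample of walker positions at time t_max for a CTRW
    with heavy-tailed waiting times (Pareto-like, tail exponent beta < 1,
    giving subdiffusion) and unit +/-1 jumps."""
    rng = np.random.default_rng(seed)
    pos = np.zeros(n_walkers)
    clock = np.zeros(n_walkers)
    active = np.ones(n_walkers, dtype=bool)
    while active.any():
        n = active.sum()
        wait = rng.pareto(beta, n) + 1.0        # heavy-tailed waiting time
        clock[active] += wait
        idx = np.flatnonzero(active)
        jump_ok = clock[active] <= t_max        # jump only if still within t_max
        pos[idx[jump_ok]] += rng.choice([-1.0, 1.0], jump_ok.sum())
        active[idx[~jump_ok]] = False           # walker has overshot: freeze it
    return pos
```

Averaging the squared displacement over the ensemble of walks gives the Monte Carlo estimate of the spreading, which for beta < 1 grows slower than the linear-in-time spreading of the classic diffusion equation.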
Interpreted as a time-series model, the CTRW can serve as a possible
null-hypothesis scenario in applications with measurements that behave
similarly. It may be necessary to simulate many null-hypothesis realisations of
the system to give a (probabilistic) answer to what the “outcome” is under the
assumption that the particles, stocks, etc. are not correlated.
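A generic version of this procedure, with the null simulator and test statistic left as parameters, might look like the sketch below; the lag-1 autocorrelation statistic and iid-Gaussian null in the usage are illustrative choices, not taken from the thesis:

```python
import numpy as np

def null_pvalue(observed_stat, statistic, simulate_null, n_sims=2000, seed=2):
    """Monte Carlo null-hypothesis test: the one-sided p-value is the
    fraction of simulated null realisations whose statistic is at least
    as extreme as the observed one."""
    rng = np.random.default_rng(seed)
    null_stats = np.array(
        [statistic(simulate_null(rng)) for _ in range(n_sims)]
    )
    return (null_stats >= observed_stat).mean()

def lag1(x):
    """Sample lag-1 autocorrelation, the example test statistic."""
    x = x - x.mean()
    return (x[:-1] * x[1:]).sum() / (x * x).sum()
```

For a strongly autocorrelated series, essentially no realisation of an uncorrelated null reaches the observed statistic, so the p-value is tiny and the null is rejected.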
Another topic is (random) correlation matrices. These are partly built on the
previously introduced continuous-time random walks and are important in
null-hypothesis testing, data analysis and filtering. The main objects
encountered in dealing with these matrices are eigenvalues and eigenvectors. The
latter are carried over to the following topic of mode analysis and application
in clustering. The presented properties of correlation matrices of correlated
measurements seem to be wasted in contemporary methods of clustering with
(dis-)similarity measures from time series. Most applications of spectral
clustering ignore this information and are not able to distinguish between
certain cases. The suggested procedure is supposed to identify and separate out
clusters by using additional information coded in the eigenvectors. In addition,
random matrix theory can also serve to analyse microarray data for the
extraction of functional genetic groups, and it also suggests an error model.
Finally, the last topic, on synchronisation analysis of electroencephalogram
(EEG) data, resurrects the eigenvalues and eigenvectors as well as the mode
analysis, but this time of matrices made of synchronisation coefficients of
neurological activity.
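As one illustration of how random matrix theory separates genuine correlation structure from noise, the sketch below flags eigenvalues of an empirical correlation matrix that exceed the Marchenko-Pastur upper edge expected for purely uncorrelated series; this is a textbook-style sketch, not the procedure developed in the thesis:

```python
import numpy as np

def significant_modes(data):
    """Eigen-decompose the correlation matrix of `data` (N series x T
    samples) and flag eigenvalues above the Marchenko-Pastur upper edge
    (1 + sqrt(N/T))**2, the largest eigenvalue expected from random,
    uncorrelated series of the same shape."""
    n, t = data.shape
    z = (data - data.mean(axis=1, keepdims=True)) / data.std(axis=1, keepdims=True)
    corr = z @ z.T / t                       # empirical correlation matrix
    evals, evecs = np.linalg.eigh(corr)      # ascending eigenvalues
    edge = (1 + np.sqrt(n / t)) ** 2
    return evals, evecs, evals > edge
```

The eigenvectors of the flagged eigenvalues carry the group structure: for two groups of series driven by two independent common factors, two eigenvalues sit far above the noise edge, and their eigenvectors localise on the respective groups.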
Understanding the Connections between Science, Philosophy, Psychology, Religion, Politics, and Economics -- Articles and Reviews 2006-2019
The first group of articles attempts to give some insight into how we behave that is reasonably free of theoretical delusions, as shown by reviews of books by leading authors in philosophy and psychology, which as I note can be seen as the same discipline in many situations. In the next section I comment on very basic confusions where one might least expect them – in science and mathematics. Next, I turn to confusions where most people do expect them – in religion (i.e., in cooperative groups formed to facilitate reproduction). Finally, I provide some viewpoints on areas where all the issues come together – economics and politics.
The key to everything about us is biology, and it is obliviousness to it that leads
millions of smart educated people like Obama, Chomsky, Clinton and the Pope to espouse suicidal utopian ideals that inexorably lead straight to Hell on Earth. As Wittgenstein noted, it is what is always before our eyes that is the hardest to see. We live in the world of conscious, deliberative linguistic System 2, but it is unconscious, automatic reflexive System 1 that rules. This is the source of the universal blindness described by Searle’s The Phenomenological Illusion (TPI), Pinker’s Blank Slate and Tooby and Cosmides’ Standard Social Science Model.
America and the world are in the process of collapse from excessive population growth. The root cause of collapse is the inability of our innate psychology to adapt to the modern world. This, plus ignorance of basic biology and psychology, leads to the social engineering delusions of the partially educated who control democratic societies. Hence my essay “Suicide by Democracy”. It is also now clear that the seven sociopaths who rule China are winning world war 3, and so my concluding essay on them. The only greater threat is Artificial Intelligence, which I comment on briefly in the last paragraph.
Talking Monkeys: Philosophy, Psychology, Science, Religion and Politics on a Doomed Planet - Articles and Reviews 2006-2019 Michael Starks 3rd Edition
This collection of articles and reviews is about human behavior (as are all articles by anyone about anything), and so about the limitations of having a recent monkey ancestry (8 million years or much less depending on viewpoint) and manifest words and deeds within the framework of our innate psychology as presented in the table of intentionality. As famous evolutionist Richard Leakey says, it is critical to keep in mind not that we evolved from apes, but that in every important way, we are apes. If everyone were given a real understanding of this (i.e., of human ecology and psychology to actually give them some control over themselves), maybe civilization would have a chance. As things are, however, the leaders of society have no more grasp of things than their constituents, and so collapse into anarchy and dictatorship appears inevitable.
Since philosophy proper is essentially the same as the descriptive psychology of higher order thought (behavior), and philosophical problems are the result of our innate psychology, or as Wittgenstein put it, due to the lack of perspicuity of language, they run throughout human discourse and behavior, so there is endless need for philosophical analysis, not only in the ‘human sciences’ of philosophy, sociology, anthropology, political science, psychology, history, literature, religion, etc., but in the ‘hard sciences’ of physics, mathematics, and biology. It is universal to mix the language game questions with the real scientific ones as to what the empirical facts are. Scientism is ever present and the master has laid it before us long ago, i.e., Wittgenstein (hereafter W) beginning with the Blue and Brown Books in the early 1930’s.
Although I separate the book into sections on philosophy and psychology, religion, biology, the ‘hard sciences’ and politics/sociology/economics, all the articles, like all behavior, are intimately connected if one knows how to look at them. As I note, The Phenomenological Illusion (oblivion to our automated System 1) is universal and extends not merely throughout philosophy but throughout life. I am sure that Chomsky, Obama, Zuckerberg and the Pope would be incredulous if told that they suffer from the same problems as Hegel, Husserl and Heidegger, or that they differ only in degree from drug and sex addicts in being motivated by stimulation of their frontal cortices by the delivery of dopamine (and over 100 other chemicals) via the ventral tegmentum and the nucleus accumbens, but it’s clearly true. While the phenomenologists only wasted a lot of people’s time, they are wasting the earth and their descendants’ future.
I hope that these essays will help to separate the philosophical issues of language use from the scientific factual issues, and in some small way hinder the collapse of civilization, or at least make it clear why it is doomed.
Those wishing to read my other writings may see Talking Monkeys 2nd ed (2019), The Logical Structure of Philosophy, Psychology, Mind and Language in Ludwig Wittgenstein and John Searle 2nd ed (2019), Suicide by Democracy 3rd ed (2019), The Logical Structure of Human Behavior (2019) and Suicidal Utopian Delusions in the 21st Century 4th ed (2019).
The Bayes mind: From the St. Petersburg paradox to the New York Stock Exchange.
The thesis is an exposition and defence of Bayesianism as the preferred methodology of reasoning under uncertainty in social contexts. Chapter 1 gives a general outline of the foundations of probabilistic reasoning, as well as a critical exposition of the main non-Bayesian approaches to probability. After a brief discussion of the formal theory of probability, the thesis examines some non-Bayesian interpretations of the probability calculus, and purports to show their insufficiency. Chapter 2 provides an outline of the Bayesian (subjectivist) research programme. The opening sections of the chapter contain a historical overview of Bayesianism, as well as a defence of the assumptions on which it rests. The concluding sections then examine some of the key issues of contention between Bayesians and their critics, such as the nature of empirical confirmation and learning from experience. Since it is the author's contention that any sound methodology should be applicable, if necessary with modifications, across a wide range of contexts, the concluding two chapters make a Bayesian case first in a theoretical, and next in a practical setting. In particular, Chapter 3 discusses the issue of simplicity as a theoretical virtue. It argues that a Bayesian can coherently and successfully account for the structural or formal simplicity of the hypotheses that he entertains, by using his assignments of subjective prior probabilities in the process known as 'Bayesian Conditionalisation'. It also argues that some of the recent criticisms voiced against the Bayesian account of simplicity are inconsistent and/or question-begging. The process known in statistics as 'curve-fitting' provides the material for the discussion. Finally, Chapter 4 presents an extension of the Bayesian methodology to practical decision-making, using the context of investing activity.
It purports to show that the most convincing picture of economic agents' investing behaviour is best explained by assuming that in the course of such behaviour, the agents maximise their expected utility, as is stipulated by the Bayesian decision theory. The argument revolves around the 'Efficient Markets Hypothesis' in the theory of finance, and the conclusion hinges both on the empirical adequacy of the various versions of this hypothesis and on its behavioural underpinnings. The thesis contains two appendices, intended to illustrate certain points made in the main body. The first appendix is a critical appraisal of a popular non-Bayesian account of causal inference in statistical contexts, with a bearing on the discussion in Chapter 3, while the second appendix provides a real-life illustration of some of the issues raised in Chapter 4. The overall structure of the thesis is intended to show how, from highly plausible assumptions, one can derive a powerful theory of reasoning under uncertainty that faithfully and uniformly represents both the theoretical and the practical concerns of the human mind.
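The conditionalisation at the heart of the Bayesian programme is simple to state as code; the fair-vs-biased-coin hypothesis space below is a toy illustration of mine, not an example from the thesis:

```python
from fractions import Fraction

def conditionalise(prior, likelihood, evidence):
    """Bayesian conditionalisation on a discrete hypothesis space:
    posterior(h) is proportional to prior(h) * P(evidence | h)."""
    unnorm = {h: p * likelihood[h][evidence] for h, p in prior.items()}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

# Toy hypothesis space: a fair coin vs. one biased 3:1 toward heads,
# with equal prior probability on each.
half = Fraction(1, 2)
prior = {"fair": half, "biased": half}
likelihood = {
    "fair": {"H": Fraction(1, 2), "T": Fraction(1, 2)},
    "biased": {"H": Fraction(3, 4), "T": Fraction(1, 4)},
}
posterior = conditionalise(prior, likelihood, "H")
```

Observing heads shifts the probability from the even 1/2–1/2 split to 2/5 for the fair coin and 3/5 for the biased one, exactly the reweighting by likelihood that 'Bayesian Conditionalisation' prescribes.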
How Leadership Dynamics Differ in High- and Low-Performing Firms in a Sustainable Innovation Context: A Qualitative Case Study from the Health Tech Sector
Leadership is essential to achieving sustainable innovation, yet research on
innovation to date tends to focus on individual leaders, while innovation
leadership appears to be a collective and dynamic process. While extant
literature has examined collective leadership dynamics, research regarding how
these dynamics play out over time in sustainably innovative firms is nascent.
This thesis bridges this gap in the literature by examining the leadership
dynamics in sustainable innovation companies. Specifically, to explore these
dynamics, we conduct an explorative multi-case qualitative study in the health
tech sector, interviewing leaders in five companies in total: three
high-performing and two low-performing companies. The findings overall reveal
key differences between high- and low-performing firms. First, collective
leadership dynamics vary along two dimensions: changeable roles and fluid
contributions. Second, the dynamics along these two dimensions differ through
three phases: the initial (1), investment (2), and launching (3) phases. While
high- and low-performing companies have similar dynamics in the initial phase
(1), with collective processes and interchangeable roles, differences in
dynamics appear in the investment (2) and launching (3) phases. While an influx
of tension from new individuals affects both the high- and low-performing
companies, differences appear in how they handle such tensions. In the second
and third phases, the high-performing companies manage to utilize tension while
at the same time building a more structured company in which competency and
delegation are critical. Low-performing companies experience the tension as a
negative disturbance, and collective leadership appears to decline as the CEO's
role in the company weakens.
The findings contribute to understanding the relationship between collective
leadership, along two dynamic dimensions, and growth in a sustainable innovation
context.
New Foundations for Imprecise Bayesianism.
My dissertation examines two kinds of statistical tools for taking prior information into account, and investigates what reasons we have for using one or the other in different sorts of inference and decision problems.
Chapter 1 describes a new objective Bayesian method for constructing `precise priors'. Precise prior probability distributions are statistical tools for taking account of your `prior evidence' in an inference or decision problem. `Prior evidence' is the wooly hodgepodge of information that you come to the table with. `Experimental evidence' is the new data that you gather to facilitate inference and decision-making. I leverage this method to provide the seeds of a solution to `the problem of the priors', the problem of providing a compelling epistemic rationale for using some `objective' method or other for constructing priors. You ought to use the proposed method, at least in certain contexts, I argue, because it minimizes your need for epistemic luck in securing accurate `posterior' (post-experiment) beliefs.
Chapter 2 addresses a pressing concern about precise priors. Precise priors, some Bayesians say, fail to adequately summarize certain kinds of evidence. As a class, precise priors capture improper responses to unspecific and equivocal evidence. This motivates the introduction of imprecise priors. We need imprecise priors, or sets of distributions to summarize such evidence. I argue that, despite appearances to the contrary, precise priors are, in fact, flexible enough to capture proper responses to unspecific and equivocal evidence. The proper motivation for introducing imprecise priors, then, is not that they are required to summarize such evidence. We ought to search for new epistemic reasons to introduce imprecise priors.
Chapter 3 explores two new kinds of reasons for employing imprecise priors. We ought to adopt imprecise priors in certain contexts because they put us in an unequivocally better position to secure epistemically valuable posterior beliefs than precise priors do. We ought to adopt imprecise priors in various other contexts because they minimize our need for epistemic luck in securing such posteriors. This points the way toward a new, potentially promising epistemic foundation for imprecise Bayesianism.
Ph.D. Philosophy, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/99960/1/jpkonek_1.pd
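The contrast between a precise prior and an imprecise prior (a set of distributions) can be made concrete with a toy conjugate update; the particular set of Beta priors below is an illustrative assumption of mine, not an example from the dissertation:

```python
def beta_posterior_mean(a, b, heads, tails):
    """Posterior mean of a Beta(a, b) prior after observing the data,
    via the standard Beta-Binomial conjugate update."""
    return (a + heads) / (a + b + heads + tails)

def imprecise_posterior(prior_set, heads, tails):
    """Imprecise-Bayesian update: conditionalise every prior in the set
    and report the interval of posterior means."""
    means = [beta_posterior_mean(a, b, heads, tails) for a, b in prior_set]
    return min(means), max(means)

# A set of three Beta priors representing unspecific prior evidence
# about a coin's bias: pessimistic, neutral, and optimistic.
priors = [(1, 9), (5, 5), (9, 1)]
before = imprecise_posterior(priors, 0, 0)     # wide interval, no data
after = imprecise_posterior(priors, 60, 40)    # narrower after 100 tosses
```

With no data the interval of posterior means spans [0.1, 0.9], reflecting the unspecific evidence; after 100 tosses it contracts sharply, so the imprecision washes out as experimental evidence accumulates.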