Analogy, Mind, and Life
I'll show that the kind of analogy between life and information [argued for by authors such as Davies (2000), Walker and Davies (2013), Dyson (1979), Gleick (2011), Kurzweil (2012), and Ward (2009)] – which seems central to the claim that artificial mind may represent an expected advance in the evolution of life in the Universe – is like the design argument, and that if the design argument is unfounded and invalid, then the argument that artificial mind may represent an expected advance in the evolution of life in the Universe is also unfounded and invalid.
However, if we are prepared to admit this method of reasoning as valid (though we should not), I'll show that the analogy between life and information, to the effect that artificial mind may represent an expected advance in the evolution of life in the Universe, seems to suggest some type of reductionism of life to information; but biology, and likewise chemistry and physics, are not reductionist, contrary to what the analogy between life and information seems to suggest.
Learning Laplacian Matrix in Smooth Graph Signal Representations
The construction of a meaningful graph plays a crucial role in the success of
many graph-based representations and algorithms for handling structured data,
especially in the emerging field of graph signal processing. However, a
meaningful graph is not always readily available from the data, nor easy to
define depending on the application domain. In particular, it is often
desirable in graph signal processing applications that a graph is chosen such
that the data admit certain regularity or smoothness on the graph. In this
paper, we address the problem of learning graph Laplacians, which is equivalent
to learning graph topologies, such that the input data form graph signals with
smooth variations on the resulting topology. To this end, we adopt a factor
analysis model for the graph signals and impose a Gaussian probabilistic prior
on the latent variables that control these signals. We show that the Gaussian
prior leads to an efficient representation that favors the smoothness property
of the graph signals. We then propose an algorithm for learning graphs that
enforces such property and is based on minimizing the variations of the signals
on the learned graph. Experiments on both synthetic and real world data
demonstrate that the proposed graph learning framework can efficiently infer
meaningful graph topologies from signal observations under the smoothness
prior.
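The smoothness prior described above can be made concrete with the standard graph-signal variation measure tr(X^T L X), which such objectives minimize over candidate topologies. The sketch below is an illustration of that measure only, not the paper's full learning algorithm; all variable names and the toy graphs are assumptions for the example.

```python
import numpy as np

# Illustrative sketch (not the paper's algorithm): the smoothness term
# tr(X^T L X) = 0.5 * sum_ij W_ij * ||x_i - x_j||^2 that smoothness-based
# graph learning objectives minimize over candidate topologies.

def laplacian(W):
    """Combinatorial graph Laplacian L = D - W for adjacency matrix W."""
    return np.diag(W.sum(axis=1)) - W

def smoothness(X, W):
    """Total variation tr(X^T L X) of signals X (nodes x signals) on graph W."""
    return np.trace(X.T @ laplacian(W) @ X)

# One signal on 3 nodes, varying smoothly along a path 0-1-2.
X = np.array([[0.0], [1.0], [2.0]])

# Candidate topology A: path edges 0-1 and 1-2 (matches the signal's variation).
W_path = np.array([[0, 1, 0],
                   [1, 0, 1],
                   [0, 1, 0]], dtype=float)

# Candidate topology B: only the "skip" edge 0-2 (ignores the smooth structure).
W_skip = np.array([[0, 0, 1],
                   [0, 0, 0],
                   [1, 0, 0]], dtype=float)

# The path topology yields lower variation for this signal, so a
# smoothness-minimizing learner would prefer it.
print(smoothness(X, W_path))  # 2.0
print(smoothness(X, W_skip))  # 4.0
```

A learner in this framework would search over weighted adjacency (equivalently, Laplacian) matrices to minimize this variation, subject to constraints that keep the graph valid and non-trivial.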