"Revolution? What Revolution?" Successes and limits of computing technologies in philosophy and religion
Computing technologies, like other technological innovations in the modern West, are inevitably introduced with the rhetoric of "revolution". Especially during the 1980s (the PC revolution) and 1990s (the Internet and Web revolutions), enthusiasts insistently celebrated radical changes: changes ostensibly inevitable and certainly as radical as those brought about by the invention of the printing press, if not the discovery of fire.
These enthusiasms now seem very "1990s", in part because the revolution stumbled with the dot-com failures and the devastating impacts of 9/11. Moreover, as I will sketch out below, the patterns of diffusion and impact in philosophy and religion show both tremendous success, as certain revolutionary promises are indeed kept, and (sometimes spectacular) failure. Perhaps we use revolutionary rhetoric less frequently because the revolution has indeed succeeded: computing technologies, and many of the powers and potentials they bring us as scholars and religionists, have become so ubiquitous and normal that they no longer seem "revolutionary" at all. At the same time, many of the early hopes and promises instantiated in specific projects such as Artificial Intelligence and anticipated virtual religious communities have been dashed against the apparently intractable limits of even these most remarkable technologies. While these failures are usually forgotten, they leave in their wake a clearer sense of what these new technologies can, and cannot, do.
Thinking Outside the Box: The Essence and Implications of Quantum Entanglement
Many experiments have shown that quantum entanglement is physically real. In this paper, we will discuss its ontological origin, implications and applications by thinking outside the standard interpretations of quantum mechanics. We argue that quantum entanglement originates from the primordial spin processes in non-spatial and non-temporal pre-spacetime, implies genuine interconnectedness and inseparableness of once-interacting quantum entities, plays vital roles in biology and consciousness and, once better understood and harnessed, has far-reaching consequences and applications in many fields such as medicine and neuroscience. We further argue that quantum computation power also originates from the primordial spin processes in pre-spacetime. Finally, we discuss the roles of quantum entanglement in spin-mediated consciousness theory.
The Profiling Potential of Computer Vision and the Challenge of Computational Empiricism
Computer vision and other biometrics data science applications have commenced
a new project of profiling people. Rather than using 'transaction generated
information', these systems measure the 'real world' and produce an assessment
of the 'world state' - in this case an assessment of some individual trait.
Instead of using proxies or scores to evaluate people, they increasingly deploy
a logic of revealing the truth about reality and the people within it. While
these profiling knowledge claims are sometimes tentative, they increasingly
suggest that only through computation can these excesses of reality be captured
and understood. This article explores the bases of those claims in the systems
of measurement, representation, and classification deployed in computer vision.
It asks if there is something new in this type of knowledge claim, sketches an
account of a new form of computational empiricism being operationalised, and
questions what kind of human subject is being constructed by these
technological systems and practices. Finally, the article explores legal
mechanisms for contesting the emergence of computational empiricism as the
dominant knowledge platform for understanding the world and the people within
it.
Some Thoughts on Hypercomputation
Hypercomputation is a relatively new branch of computer science that emerged
from the idea that the Church--Turing Thesis, which is supposed to describe
what is computable and what is noncomputable, cannot possibly be true. Because
of its apparent validity, the Church--Turing Thesis has been used to
investigate the possible limits of intelligence of any imaginable life form
and, consequently, the limits of information processing, since living beings
are, among other things, information processors. However, in the light of
hypercomputation, which seems to be feasible in our universe, one cannot impose
arbitrary limits on what intelligence can achieve unless there are specific
physical laws that prohibit the realization of something. In addition,
hypercomputation allows us to ponder aspects of communication between
intelligent beings that have not been considered before.
Privacy in the Genomic Era
Genome sequencing technology has advanced at a rapid pace and it is now
possible to generate highly-detailed genotypes inexpensively. The collection
and analysis of such data has the potential to support various applications,
including personalized medical services. While the benefits of the genomics
revolution are trumpeted by the biomedical community, the increased
availability of such data has major implications for personal privacy; notably
because the genome has certain essential features, which include (but are not
limited to) (i) an association with traits and certain diseases, (ii)
identification capability (e.g., forensics), and (iii) revelation of family
relationships. Moreover, direct-to-consumer DNA testing increases the
likelihood that genome data will be made available in less regulated
environments, such as the Internet and for-profit companies. The problem of
genome data privacy thus resides at the crossroads of computer science,
medicine, and public policy. While computer scientists have addressed data
privacy for various data types, less attention has been dedicated to
genomic data. Thus, the goal of this paper is to provide a systematization of
knowledge for the computer science community. In doing so, we address some of
the (sometimes erroneous) beliefs of this field and we report on a survey we
conducted about genome data privacy with biomedical specialists. Then, after
characterizing the genome privacy problem, we review the state-of-the-art
regarding privacy attacks on genomic data and strategies for mitigating such
attacks, as well as contextualizing these attacks from the perspective of
medicine and public policy. This paper concludes with an enumeration of the
challenges for genome data privacy and presents a framework to systematize the
analysis of threats and the design of countermeasures as the field moves
forward.
Rethinking affordance
Critical survey essay retheorising the concept of 'affordance' in a digital media context. Lead article in a special issue on the topic, co-edited by the authors for the journal Media Theory.
The MeSH-gram Neural Network Model: Extending Word Embedding Vectors with MeSH Concepts for UMLS Semantic Similarity and Relatedness in the Biomedical Domain
Eliciting semantic similarity between concepts in the biomedical domain
remains a challenging task. Recent approaches founded on embedding vectors have
gained popularity because they efficiently capture semantic relationships. The
underlying idea is that two words with close meanings occur in similar
contexts. In this study, we propose a new neural network model named MeSH-gram,
which relies on a straightforward approach that extends the skip-gram neural
network model by considering MeSH (Medical Subject Headings) descriptors
instead of words. Trained on the publicly available PubMed MEDLINE corpus,
MeSH-gram is evaluated on reference standards manually annotated for semantic
similarity. MeSH-gram is first compared to skip-gram with vectors of size 300
and several context window sizes. A deeper comparison is performed with twenty
existing models. All the obtained Spearman's rank correlations between human
scores and computed similarities show that MeSH-gram outperforms the skip-gram
model and is comparable to the best methods, which, however, require more
computation and external resources. Comment: 6 pages, 2 tables
Dealing with missing standard deviation and mean values in meta-analysis of continuous outcomes: a systematic review
Background: Rigorous, informative meta-analyses rely on availability of appropriate summary statistics or individual
participant data. For continuous outcomes, especially those with naturally skewed distributions, summary
information on the mean or variability often goes unreported. While full reporting of original trial data is the ideal,
we sought to identify methods for handling unreported mean or variability summary statistics in meta-analysis.
Methods: We undertook two systematic literature reviews to identify methodological approaches used to deal with
missing mean or variability summary statistics. Five electronic databases were searched, in addition to the Cochrane
Colloquium abstract books and the Cochrane Statistics Methods Group mailing list archive. We also conducted cited
reference searching and emailed topic experts to identify recent methodological developments. Details recorded
included the description of the method, the information required to implement the method, any underlying
assumptions and whether the method could be readily applied in standard statistical software. We provided a
summary description of the methods identified, illustrating selected methods in example meta-analysis scenarios.
Results: For missing standard deviations (SDs), following screening of 503 articles, fifteen methods were identified in
addition to those reported in a previous review. These included Bayesian hierarchical modelling at the meta-analysis
level; summary statistic level imputation based on observed SD values from other trials in the meta-analysis; a practical
approximation based on the range; and algebraic estimation of the SD based on other summary statistics. Following
screening of 1124 articles for methods estimating the mean, one approximate Bayesian computation approach and
three papers based on alternative summary statistics were identified. Illustrative meta-analyses showed that when
replacing a missing SD the approximation using the range minimised loss of precision and generally performed better
than omitting trials. When estimating missing means, a formula using the median, lower quartile and upper quartile
performed best in preserving the precision of the meta-analysis findings, although in some scenarios, omitting trials
gave superior results.
Conclusions: Methods based on summary statistics (minimum, maximum, lower quartile, upper quartile, median)
reported in the literature facilitate more comprehensive inclusion of randomised controlled trials with missing mean or
variability summary statistics within meta-analyses.
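Two of the simpler estimators mentioned above, the range-based SD approximation and the quartile-based mean formula, can be sketched as follows. The specific formulas used here (SD ≈ range/4 and mean ≈ (q1 + median + q3)/3) are common approximations from the methodological literature and are assumed for illustration; the review itself evaluates several variants:

```python
def estimate_sd_from_range(minimum, maximum):
    # Rule of thumb: for roughly normal data the range spans about
    # four standard deviations, so SD ~ (max - min) / 4.
    return (maximum - minimum) / 4.0

def estimate_mean_from_quartiles(q1, median, q3):
    # Quartile-based estimator: mean ~ (q1 + median + q3) / 3,
    # usable when a trial reports only the median and IQR.
    return (q1 + median + q3) / 3.0

# Hypothetical trial reporting median 12, IQR 8-18, range 2-30:
sd_hat = estimate_sd_from_range(2, 30)            # 7.0
mean_hat = estimate_mean_from_quartiles(8, 12, 18)  # ~12.67
```

As the review's illustrative meta-analyses suggest, such imputed values let otherwise-excluded trials contribute to the pooled estimate, at the cost of some approximation error.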