Report from the Tri-Agency Cosmological Simulation Task Force
The Tri-Agency Cosmological Simulations (TACS) Task Force was formed when
Program Managers from the Department of Energy (DOE), the National Aeronautics
and Space Administration (NASA), and the National Science Foundation (NSF)
expressed an interest in receiving input on the cosmological simulations
landscape related to the upcoming DOE/NSF Vera Rubin Observatory (Rubin),
NASA/ESA's Euclid, and NASA's Wide Field Infrared Survey Telescope (WFIRST).
The Co-Chairs of TACS, Katrin Heitmann and Alina Kiessling, invited community
scientists from the USA and Europe who are each subject matter experts and are
also members of one or more of the surveys to contribute. The following report
represents the input from TACS that was delivered to the Agencies in December
2018. Comment: 36 pages, 3 figures. Delivered to NASA, NSF, and DOE in Dec 2018.
Physics as Information Processing
I review some recent advances in foundational research at Pavia QUIT group.
The general idea is that there is only Quantum Theory without quantization
rules, and the whole of Physics, including space-time and relativity, is emergent
from quantum-information processing. And since Quantum Theory itself is
axiomatized solely on informational principles, the whole of Physics must be
reformulated in information-theoretical terms: this is the "It from Bit" of J.
A. Wheeler. The review is divided into four parts: a) the informational
axiomatization of Quantum Theory; b) how space-time and relativistic covariance
emerge from quantum computation; c) what is the information-theoretical meaning
of inertial mass and of ℏ, and how the quantum field emerges; d) an
observational consequence of the new quantum field theory: a mass-dependent
refraction index of vacuum. I will conclude with the research lines that will
follow in the immediate future. Comment: Work presented at the conference "Advances in Quantum Theory" held on
14-17 June 2010 at the Linnaeus University, Växjö, Sweden.
External coupling between building energy simulation and computational fluid dynamics
Kinetic Analysis of dynamic MP4A PET Scans of Human Brain using Voxel based Nonlinear Least Squares Fitting
Dynamic PET (Positron Emission Tomography) involving a number of radiotracers is an established technique for in vivo estimation of biochemical parameters in the human brain, such as the overall metabolic rate and certain receptor concentrations or enzyme activities. 11C-labeled methyl-4-piperidyl acetate (MP4A) and -propionate (MP4P) are established radiotracers for measuring the activity of acetylcholinesterase (AChE), which relates to the functionality of the cholinergic system. MP4A kinetic analysis without arterial blood sampling employs a reference-tissue-based "irreversible tracer model". Implementations can be region or voxel based, in the second case providing parametric images of k3, which is an indicator of AChE activity. This work introduces an implementation of voxel-based kinetic analysis using weighted Nonlinear Least Squares fitting (NLS), which is fast enough for standard PCs. The entire workflow leading from reconstructed PET scans to parametric images of k3, including normalization and correction for patient movement, has been automated. Image preprocessing has been redefined, and fixed masks are no longer required. A focus of this work is the error estimation of k3 at the voxel and regional level. A formula is derived for voxel-based estimation of the random error; it is based on residual-weighted squared differences and has been successfully validated against simulated data. The reference curves turned out to be the main source of errors in regional mean values of k3. Major improvements were reached in this area by switching from fixed to adaptive Putamen masks and raising their volume from 5.4 to 12.5 or 16 ml. Also, a method for correcting reference curves obtained from nonideal reference tissues is presented. For the improved implementation, the random error of the mean k3 of a number of cerebral regions has been assessed based on PET studies of 12 human subjects, by splitting them into two independent data sets at the sinogram level.
According to this sample, absolute standard errors of 0.0012 in most cortex regions and 0.0053 in the Hippocampus are induced by noise of the voxel-based activity curves, while errors of approximately 0.0025 and 0.0050 are induced by noise of the reference curves. Different types of systematic as well as noise-induced bias have been investigated by simulations; their combined effect on the computed k3 was found to be below 3 percent. The implementation is available as a module of the VINCI software package and has been used in clinical studies on Parkinson's Disease and Alzheimer's Dementia.
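The weighted voxel-wise NLS fit can be sketched as follows. This is a toy stand-in, not the actual MP4A reference-tissue model: the saturating uptake curve, the parameter values, and the noise model below are all illustrative assumptions, with SciPy's `curve_fit` carrying per-frame noise estimates as weights, as the abstract's error model suggests.

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy kinetic model standing in for the irreversible-tracer model:
# the real MP4A analysis uses a reference-tissue input; here we fit
# a simple saturating uptake curve C(t) = A * (1 - exp(-k3 * t)).
def model(t, A, k3):
    return A * (1.0 - np.exp(-k3 * t))

rng = np.random.default_rng(0)
t = np.linspace(0.5, 60.0, 24)               # frame mid-times in minutes
true_A, true_k3 = 1.8, 0.08                  # hypothetical ground truth
noise_sd = 0.05 * np.sqrt(model(t, true_A, true_k3) + 0.1)  # count-like noise
tac = model(t, true_A, true_k3) + rng.normal(0.0, noise_sd)

# Weighted NLS: sigma carries per-frame noise estimates; the covariance
# matrix then yields a standard-error estimate for k3 at this "voxel".
popt, pcov = curve_fit(model, t, tac, p0=[1.0, 0.05],
                       sigma=noise_sd, absolute_sigma=True)
perr = np.sqrt(np.diag(pcov))                # standard errors of A and k3
```

Running this once per voxel (vectorized or in compiled batches) is what makes a parametric k3 image feasible on a standard PC.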
Enhancing trustability in MMOGs environments
Massively Multiplayer Online Games (MMOGs; e.g., World of Warcraft), virtual worlds
(VWs; e.g., Second Life), and social networks (e.g., Facebook) strongly demand more
autonomous security and trust mechanisms, similar to those humans use in real
life. As is well known, this is a difficult matter, because trust in humans and organizations
depends on the perception and experience of each individual, which is difficult to
quantify or measure. In fact, these societal environments lack trust mechanisms similar
to those involved in human-to-human interactions. Besides, interactions mediated
by computing devices are constantly evolving, requiring trust mechanisms that keep
pace with these developments and assess risk situations.
In VW/MMOGs, it is widely recognized that users develop trust relationships from their
in-world interactions with others. However, these trust relationships end up not being
represented in the data structures (or databases) of such virtual worlds, though they
sometimes appear associated with reputation and recommendation systems. In addition,
as far as we know, the user is not provided with a personal trust tool to sustain his/her
decision making while he/she interacts with other users in the virtual or game world.
In order to solve this problem, as well as those mentioned above, we propose herein a
formal representation of these personal trust relationships, which are based on
avatar-avatar interactions. The leading idea is to provide each avatar-impersonated player
with a personal trust tool that follows a distributed trust model, i.e., the trust data is
distributed over the societal network of a given VW/MMOG.
Representing, manipulating, and inferring trust from the user/player point of view certainly
is a grand challenge. When someone meets an unknown individual, the question
is "Can I trust him/her or not?". It is clear that this requires the user to have access to
a representation of trust about others, but, unless we are using an open-source VW/MMOG,
it is difficult (not to say unfeasible) to get access to such data. Even in an open-source
system, a number of users may refuse to pass on information about their friends, acquaintances,
or others. Putting together their own data and data gathered from
others, the avatar-impersonated player should be able to arrive at a trust result
about their current trustee. For the trust assessment method used in this thesis, we use
subjective logic operators and graph search algorithms to undertake such trust inference
about the trustee. The proposed trust inference system has been validated using
a number of OpenSimulator (opensimulator.org) scenarios, which showed an increase
in accuracy in evaluating the trustability of avatars.
Summing up, our proposal thus aims to introduce a trust theory for virtual worlds, its
trust assessment metrics (e.g., subjective logic) and trust discovery methods (e.g.,
graph search methods), on an individual basis, rather than based on usual centralized
reputation systems. In particular, and unlike other trust discovery methods, our methods
run at interactive rates.
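The subjective-logic machinery the abstract relies on can be sketched in a few lines. The operator definitions below follow Jøsang's standard discounting (trust transitivity along a referral path) and cumulative-fusion formulas; the class and function names are illustrative, not taken from the thesis.

```python
from dataclasses import dataclass

@dataclass
class Opinion:
    belief: float
    disbelief: float
    uncertainty: float  # invariant: belief + disbelief + uncertainty == 1

def discount(trust: Opinion, op: Opinion) -> Opinion:
    """A's trust in B, applied to B's opinion about X (trust transitivity)."""
    b = trust.belief * op.belief
    d = trust.belief * op.disbelief
    return Opinion(b, d, 1.0 - b - d)

def fuse(o1: Opinion, o2: Opinion) -> Opinion:
    """Cumulative fusion of two independent opinions about the same target."""
    k = o1.uncertainty + o2.uncertainty - o1.uncertainty * o2.uncertainty
    b = (o1.belief * o2.uncertainty + o2.belief * o1.uncertainty) / k
    d = (o1.disbelief * o2.uncertainty + o2.disbelief * o1.uncertainty) / k
    return Opinion(b, d, o1.uncertainty * o2.uncertainty / k)

# Two referral paths about the same trustee X, discounted then fused.
a_trusts_b = Opinion(0.8, 0.1, 0.1)
b_about_x  = Opinion(0.7, 0.2, 0.1)
a_trusts_c = Opinion(0.6, 0.2, 0.2)
c_about_x  = Opinion(0.9, 0.0, 0.1)

path1 = discount(a_trusts_b, b_about_x)
path2 = discount(a_trusts_c, c_about_x)
verdict = fuse(path1, path2)   # A's combined opinion about X
```

In the distributed setting described above, each referral path would be found by a graph search over the avatar's social network before the discount-and-fuse step.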
From oscillatory transcranial current stimulation to scalp EEG changes: a biophysical and physiological modeling study.
Both biophysical and neurophysiological aspects need to be considered to assess the impact of the electric fields induced by transcranial current stimulation (tCS) on the cerebral cortex and the subsequent effects occurring on scalp EEG. The objective of this work was to elaborate a global model allowing for the simulation of scalp EEG signals under tCS. In our integrated modeling approach, realistic meshes of the head tissues and of the stimulation electrodes were first built to map the generated electric field distribution on the cortical surface. Secondly, source activities at various cortical macro-regions were generated by means of a computational model of neuronal populations. The model parameters were adjusted so that the populations generated an oscillating activity around 10 Hz resembling typical EEG alpha activity. In order to account for tCS effects, and following current biophysical models, the calculated component of the electric field normal to the cortex was used to locally influence the activity of the neuronal populations. Lastly, EEG under both spontaneous and tACS-stimulated (transcranial sinusoidal tCS from 4 to 16 Hz) brain activity was simulated at the level of scalp electrodes by solving the forward problem in the aforementioned realistic head model. Under the 10 Hz tACS condition, a significant increase in alpha power occurred in simulated scalp EEG signals as compared to the no-stimulation condition. This increase involved most channels bilaterally, was more pronounced on posterior electrodes, and was only significant for tACS frequencies from 8 to 12 Hz. The immediate effects of tACS in the model agreed with the post-tACS results previously reported in real subjects. Moreover, the model also provided additional information at other electrode positions and stimulation frequencies.
This suggests that our modeling approach can be used to compare, interpret, and predict changes occurring in EEG with respect to the parameters used in specific stimulation configurations.
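The core effect, a resonant boost of alpha-band power by a 10 Hz drive, can be illustrated with a toy stand-in for the neural-population model. Everything here is an assumption for illustration: a noise-driven damped linear oscillator tuned to ~10 Hz replaces the neuronal population model, and an additive sinusoid replaces the normal electric-field term; the actual study used a realistic head mesh and forward model.

```python
import numpy as np

def simulate(stim_amp, f0=10.0, fs=1000.0, dur=20.0, seed=1):
    """Noise-driven damped oscillator with an optional 10 Hz 'tACS' term."""
    rng = np.random.default_rng(seed)
    n = int(fs * dur)
    omega = 2 * np.pi * f0
    zeta = 0.05                      # light damping -> alpha-like spectral peak
    dt = 1.0 / fs
    t = np.arange(n) * dt
    drive = stim_amp * np.sin(2 * np.pi * 10.0 * t)
    x, v = 0.0, 0.0
    out = np.empty(n)
    for i in range(n):               # semi-implicit Euler integration
        a = -2 * zeta * omega * v - omega**2 * x + 50.0 * rng.normal() + drive[i]
        v += a * dt
        x += v * dt
        out[i] = x
    return out

def alpha_power(sig, fs=1000.0):
    """Mean power in the 8-12 Hz band from a plain FFT periodogram."""
    freqs = np.fft.rfftfreq(len(sig), 1.0 / fs)
    psd = np.abs(np.fft.rfft(sig)) ** 2
    band = (freqs >= 8.0) & (freqs <= 12.0)
    return psd[band].mean()

baseline = alpha_power(simulate(stim_amp=0.0))
stimmed = alpha_power(simulate(stim_amp=20.0))
```

Comparing `stimmed` against `baseline` reproduces, in miniature, the stimulation-versus-no-stimulation contrast the model study evaluates at each scalp electrode.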
A Foundational View on Integration Problems
The integration of reasoning and computation services across system and
language boundaries is a challenging problem of computer science. In this
paper, we consider the integration scenario in which two systems are integrated
by moving problems and solutions between them. While this scenario is
often approached from an engineering perspective, we take a foundational view.
Based on the generic declarative language MMT, we develop a theoretical
framework for system integration using theories and partial theory morphisms.
Because MMT permits representations of the meta-logical foundations themselves,
this includes integration across logics. We discuss safe and unsafe integration
schemes and devise a general form of safe integration.
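The idea of a partial theory morphism can be sketched with a deliberately simplified model. Here theories are mere symbol-to-arity maps and terms are nested tuples; MMT's actual representation (typed declarations, meta-theories, structured morphisms) is far richer, and all names below are illustrative.

```python
# Two tiny "theories": symbol name -> arity.
nat = {"zero": 0, "succ": 1, "plus": 2}
ring = {"0": 0, "s": 1, "add": 2, "mul": 2}

# A partial morphism from `nat` into `ring`. `mul` has no preimage;
# a source symbol left unmapped would make translation fail (the
# "unsafe" case, where a problem cannot be moved across the boundary).
morphism = {"zero": "0", "succ": "s", "plus": "add"}

def translate(term, morphism):
    """Translate a term (symbol, *args) along the morphism, or raise."""
    head, *args = term
    if head not in morphism:
        raise KeyError(f"no image for symbol {head!r}: morphism is partial")
    return (morphism[head], *(translate(a, morphism) for a in args))

def is_safe(theory, morphism, target):
    """Safe: total on the source theory and arity-preserving in the target."""
    return all(s in morphism and target.get(morphism[s]) == k
               for s, k in theory.items())

two = ("plus", ("succ", ("zero",)), ("succ", ("zero",)))
image = translate(two, morphism)
```

The `is_safe` check corresponds loosely to verifying that every problem expressible in the source theory has a well-formed image in the target, which is the property the safe integration schemes are after.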