Computational science curriculum in Utrecht
In 1993 Utrecht University started a curriculum in Computational Science, beginning at the undergraduate
level and leading to the Dutch `Doctorandus' degree (which is more or less comparable to the Master's degree). The curriculum has been set up as a joint collaboration between the Departments of Mathematics & Computer Science, and Physics. It aims at a complete and self-contained educational program that should fulfil society's growing demand for scientific computing, and it does so by trying to make students familiar with computational models (physics), applied mathematics (with emphasis on numerical analysis), and computer possibilities (computer science).
In our presentation we will discuss the ideas behind this new study, the career perspectives for students, and we will report on our experiences during the first two years of existence of the new curriculum.
Formal Verification of Security Protocol Implementations: A Survey
Automated formal verification of security protocols has been mostly focused on analyzing high-level abstract models which, however, are significantly different from real protocol implementations written in programming languages. Recently, some researchers have started investigating techniques that bring automated formal proofs closer to real implementations. This paper surveys these attempts, focusing on approaches that target the application code that implements protocol logic, rather than the libraries that implement cryptography. According to these approaches, libraries are assumed to correctly implement some models. The aim is to derive formal proofs that, under this assumption, give assurance about the application code that implements the protocol logic. The two main approaches of model extraction and code generation are presented, along with the main techniques adopted for each approach.
Troping the Enemy: Metaphor, Culture, and the Big Data Black Boxes of National Security
This article considers how cultural understanding is being brought into the work of the Intelligence Advanced Research Projects Activity (IARPA), through an analysis of its Metaphor program. It examines the type of social science underwriting this program, unpacks implications of the agency's conception of metaphor for understanding so-called cultures of interest, and compares IARPA's to competing accounts of how metaphor works to create cultural meaning. The article highlights some risks posed by key deficits in the Intelligence Community's (IC) approach to culture, which relies on the cognitive linguistic theories of George Lakoff and colleagues. It also explores the problem of the opacity of these risks for analysts, even as such predictive cultural analytics are becoming a part of intelligence forecasting. This article examines the problem of information secrecy in two ways, by unpacking the opacity of "black box," algorithm-based social science of culture for end users with little appreciation of their potential biases, and by evaluating the IC's nontransparent approach to foreign cultures, as it underwrites national security assessments.
Theory of holey twistsonic media
Rotating two overlapping lattices relative to each other produces the well-known moiré interference patterns and has surprisingly led to strongly correlated superconductivity in twisted bilayer graphene. This seminal effect, which is associated with electrons occupying flat dispersion bands, has stimulated a surge of activities in classical wave physics such as acoustics to explore equivalent scenarios. Here, we mimic twisted bilayer physics by employing a rigorous sound wave expansion technique to conduct band engineering in holey bilayer plates, i.e., twistsonic media. Our numerical findings show how one can flexibly design moiré sound interference characteristics that are controlled solely by the twist angle and the interlayer air separation. More specifically, our numerical approach provides a significant advantage in both computational speed and storage size in comparison with widely used commercial finite-element-method solvers. We foresee that our findings should stimulate further studies in terms of band engineering and exotic topological twisted phases.
J.C. acknowledges the support from the European Research Council (ERC) through the Starting Grant 714577 PHONOMETA. Z.Z. acknowledges the support from the NSFC (12104226), the China National Postdoctoral Program for Innovative Talents (BX20200165) and the China Postdoctoral Science Foundation (2020M681541). D.T. acknowledges the support of MINECO through a Ramón y Cajal grant (Grant No. RYC-2016-21188) and of the Ministry of Science, Innovation and Universities through project number RTI2018-093921-A-C42.
An Introduction to Programming for Bioscientists: A Python-based Primer
Computing has revolutionized the biological sciences over the past several
decades, such that virtually all contemporary research in the biosciences
utilizes computer programs. The computational advances have come on many
fronts, spurred by fundamental developments in hardware, software, and
algorithms. These advances have influenced, and even engendered, a phenomenal
array of bioscience fields, including molecular evolution and bioinformatics;
genome-, proteome-, transcriptome- and metabolome-wide experimental studies;
structural genomics; and atomistic simulations of cellular-scale molecular
assemblies as large as ribosomes and intact viruses. In short, much of
post-genomic biology is increasingly becoming a form of computational biology.
The ability to design and write computer programs is among the most
indispensable skills that a modern researcher can cultivate. Python has become
a popular programming language in the biosciences, largely because (i) its
straightforward semantics and clean syntax make it a readily accessible first
language; (ii) it is expressive and well-suited to object-oriented programming,
as well as other modern paradigms; and (iii) the many available libraries and
third-party toolkits extend the functionality of the core language into
virtually every biological domain (sequence and structure analyses,
phylogenomics, workflow management systems, etc.). This primer offers a basic
introduction to coding, via Python, and it includes concrete examples and
exercises to illustrate the language's usage and capabilities; the main text
culminates with a final project in structural bioinformatics. A suite of
Supplemental Chapters is also provided. Starting with basic concepts, such as
that of a 'variable', the Chapters methodically advance the reader to the point
of writing a graphical user interface to compute the Hamming distance between
two DNA sequences.
Comment: 65 pages total, including 45 pages text, 3 figures, 4 tables, numerous exercises, and 19 pages of Supporting Information; currently in press at PLOS Computational Biology.
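The final project described above, computing the Hamming distance between two DNA sequences, can be sketched in a few lines of Python. This is a minimal command-line version, without the graphical user interface the primer builds toward; the function name is illustrative, not taken from the primer itself:

```python
def hamming_distance(seq1: str, seq2: str) -> int:
    """Count the positions at which two equal-length DNA sequences differ."""
    if len(seq1) != len(seq2):
        raise ValueError("sequences must be the same length")
    # zip pairs up corresponding bases; (a != b) is True (1) at each mismatch.
    return sum(a != b for a, b in zip(seq1, seq2))

# Two short sequences differing at two positions.
print(hamming_distance("GATTACA", "GACTATA"))  # → 2
```

The generator expression avoids building an intermediate list, which matters little for short sequences but is the idiomatic Python form.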
The DLV System for Knowledge Representation and Reasoning
This paper presents the DLV system, which is widely considered the
state-of-the-art implementation of disjunctive logic programming, and addresses
several aspects. As for problem solving, we provide a formal definition of its
kernel language, function-free disjunctive logic programs (also known as
disjunctive datalog), extended by weak constraints, which are a powerful tool
to express optimization problems. We then illustrate the usage of DLV as a tool
for knowledge representation and reasoning, describing a new declarative
programming methodology which allows one to encode complex problems (up to
$\Delta^P_3$-complete problems) in a declarative fashion. On the foundational
side, we provide a detailed analysis of the computational complexity of the
language of DLV, and by deriving new complexity results we chart a complete
picture of the complexity of this language and important fragments thereof.
Furthermore, we illustrate the general architecture of the DLV system which
has been influenced by these results. As for applications, we overview
application front-ends which have been developed on top of DLV to solve
specific knowledge representation tasks, and we briefly describe the main
international projects investigating the potential of the system for industrial
exploitation. Finally, we report about thorough experimentation and
benchmarking, which has been carried out to assess the efficiency of the
system. The experimental results confirm the solidity of DLV and highlight its
potential for emerging application areas like knowledge management and
information integration.
Comment: 56 pages, 9 figures, 6 tables.
Monoidal computer III: A coalgebraic view of computability and complexity
Monoidal computer is a categorical model of intensional computation, where
many different programs correspond to the same input-output behavior. The
upshot of yet another model of computation is that a categorical formalism
should provide a much needed high level language for theory of computation,
flexible enough to allow abstracting away the low level implementation details
when they are irrelevant, or taking them into account when they are genuinely
needed. A salient feature of the approach through monoidal categories is the
formal graphical language of string diagrams, which supports visual reasoning
about programs and computations.
In the present paper, we provide a coalgebraic characterization of monoidal
computer. It turns out that the availability of interpreters and specializers,
that make a monoidal category into a monoidal computer, is equivalent with the
existence of a *universal state space*, that carries a weakly final state
machine for any pair of input and output types. Being able to program state
machines in monoidal computers allows us to represent Turing machines, to
capture their execution, count their steps, as well as, e.g., the memory cells
that they use. The coalgebraic view of monoidal computer thus provides a
convenient diagrammatic language for studying computability and complexity.
Comment: 34 pages, 24 figures; in this version: added the Appendix.
Abstract Program Slicing: an Abstract Interpretation-based approach to Program Slicing
In the present paper we formally define the notion of abstract program
slicing, a general form of program slicing where properties of data are
considered instead of their exact value. This approach is applied to a language
with numeric and reference values, and relies on the notion of abstract
dependencies between program components (statements).
The different forms of (backward) abstract slicing are added to an existing
formal framework where traditional, non-abstract forms of slicing could be
compared. The extended framework allows us to appreciate that abstract slicing
is a generalization of traditional slicing, since traditional slicing (dealing
with syntactic dependencies) is generalized by (semantic) non-abstract forms of
slicing, which are actually equivalent to an abstract form where the identity
abstraction is performed on data.
Sound algorithms for computing abstract dependencies and a systematic
characterization of program slices are provided, which rely on the notion of
agreement between program states.
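The traditional (syntactic) backward slicing that this work generalizes can be illustrated with a toy Python sketch. The program representation below (a list of assignments, each recording its target variable and the variables it uses) and the function name are our own illustrative choices, not the paper's formalism; abstract slicing would additionally track a property of the data rather than exact dependencies:

```python
def backward_slice(program, criterion):
    """Toy backward slicer for a straight-line program.

    program: list of (target_var, set_of_used_vars), one per assignment.
    criterion: the variable of interest at the end of the program.
    Returns the indices of the statements the criterion depends on.
    """
    relevant = {criterion}
    kept = []
    # Walk the program backwards, collecting statements that define
    # a currently relevant variable and propagating their used variables.
    for i in range(len(program) - 1, -1, -1):
        target, used = program[i]
        if target in relevant:
            kept.append(i)
            relevant.discard(target)
            relevant |= used
    return sorted(kept)

# x = 1; y = 2; z = x + 1; w = y + z
prog = [("x", set()), ("y", set()), ("z", {"x"}), ("w", {"y", "z"})]
print(backward_slice(prog, "z"))  # → [0, 2]  (the y and w statements are sliced away)
```

Slicing on `z` keeps only the assignments to `x` and `z`; the statements defining `y` and `w` cannot affect `z` and are dropped, which is exactly the dependency-based pruning that the abstract version performs at the level of data properties.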