A flexible model for dynamic linking in Java and C#
Dynamic linking supports flexible code deployment, allowing partially linked code to link further code on the fly, as needed.
Thus, end-users automatically receive updates, without any explicit action on their side,
such as re-compilation or re-linking. On the down side, two executions of a program may link in different versions of code, which
in some cases causes subtle errors and may mystify end-users.
Dynamic linking in Java and C# is similar: the same linking phases are involved, soundness is based on similar ideas, and
executions which do not throw linking errors give the same result. They are, however, not identical: the linking phases are combined
differently, and take place in a different order. Consequently, linking errors may be detected at different times by the Java and C# runtime
systems.
We develop a non-deterministic model, which describes the behaviour of both Java and C# program executions. The nondeterminism
allows us to describe the design space, to distill the similarities between the two languages, and to use one proof of
soundness for both. We also prove that all execution strategies are equivalent with respect to terminating executions that do not
throw link errors: they give the same results.
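The demand-driven linking the abstract describes can be illustrated with a small, self-contained Java sketch (not the paper's formal model; the class name plugin.Widget is invented for illustration). A toy class loader simulates a dependency that is absent at run time, and the error surfaces only when the class is first requested, not when the program starts:

```java
public class LazyLinkDemo {
    // Toy loader that refuses to find any class itself; normal parent
    // delegation still lets platform classes (e.g. java.lang.String) load.
    static class RefusingLoader extends ClassLoader {
        @Override
        protected Class<?> findClass(String name) throws ClassNotFoundException {
            throw new ClassNotFoundException(name + " (simulated missing code)");
        }
    }

    // Attempts to link the named class on demand; returns the error message
    // on failure, or null on success.
    static String tryLink(String className) {
        try {
            Class.forName(className, true, new RefusingLoader());
            return null;
        } catch (ClassNotFoundException e) {
            return e.getMessage();
        }
    }

    public static void main(String[] args) {
        // The program starts fine; the missing class matters only at first use.
        System.out.println("started; no linking error yet");
        System.out.println("first use of plugin.Widget: " + tryLink("plugin.Widget"));
        System.out.println("java.lang.String links fine: " + (tryLink("java.lang.String") == null));
    }
}
```

Because linking is lazy, two runs of the same program that happen to touch different classes can observe different link errors, which is the source of the subtle behaviour the paper models.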
ShareJIT: JIT Code Cache Sharing across Processes and Its Practical Implementation
Just-in-time (JIT) compilation coupled with code caching is widely used to
improve performance in dynamic programming language implementations. These code
caches, along with the associated profiling data for the hot code, however,
consume significant amounts of memory. Furthermore, they incur extra JIT
compilation time for their creation. On Android, the current standard JIT
compiler and its code caches are not shared among processes---that is, the
runtime system maintains a private code cache, and its associated data, for
each runtime process. However, applications running on the same platform tend
to share multiple libraries in common. Sharing cached code across multiple
applications and multiple processes can lead to a reduction in memory use. It
can directly reduce compile time. It can also reduce the cumulative amount of
time spent interpreting code. All three of these effects can improve actual
runtime performance.
In this paper, we describe ShareJIT, a global code cache for JITs that can
share code across multiple applications and multiple processes. We implemented
ShareJIT in the context of the Android Runtime (ART), a widely used,
state-of-the-art system. To increase sharing, our implementation constrains the
amount of context that the JIT compiler can use to optimize the code. This
exposes a fundamental tradeoff: increased specialization to a single process'
context decreases the extent to which the compiled code can be shared. In
ShareJIT, we limit some optimizations to increase shareability. To evaluate
ShareJIT, we tested 8 popular Android apps in a total of 30 experiments.
ShareJIT improved overall performance by 9% on average, while decreasing memory
consumption by 16% on average and JIT compilation time by 37% on average.
Comment: OOPSLA 201
Machine Learning at Microsoft with ML .NET
Machine Learning is transitioning from an art and science into a technology
available to every developer. In the near future, every application on every
platform will incorporate trained models to encode data-based decisions that
would be impossible for developers to author. This presents a significant
engineering challenge, since currently data science and modeling are largely
decoupled from standard software development processes. This separation makes
incorporating machine learning capabilities inside applications unnecessarily
costly and difficult, and furthermore discourages developers from embracing ML
in the first place. In this paper we present ML .NET, a framework developed at
Microsoft over the last decade in response to the challenge of making it easy
to ship machine learning models in large software applications. We present its
architecture, and illuminate the application demands that shaped it.
Specifically, we introduce DataView, the core data abstraction of ML .NET which
allows it to capture full predictive pipelines efficiently and consistently
across training and inference lifecycles. We close the paper with a
surprisingly favorable performance study of ML .NET compared to more recent
entrants, and a discussion of some lessons learned.
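The DataView idea of capturing a full predictive pipeline so that the identical transformations replay at training and at inference time can be sketched in miniature (this is an illustrative analogy in Java, not the actual ML .NET API; the stage names are invented). A pipeline here is just a lazy, composable function over streams of rows:

```java
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class PipelineSketch {
    // Compose two stream-to-stream stages into one pipeline description.
    static <A, B, C> Function<Stream<A>, Stream<C>> then(
            Function<Stream<A>, Stream<B>> first,
            Function<Stream<B>, Stream<C>> second) {
        return first.andThen(second);
    }

    // Build one pipeline object that serves both training and inference.
    static Function<Stream<String>, Stream<Integer>> buildPipeline() {
        Function<Stream<String>, Stream<String>> normalize =
                rows -> rows.map(String::trim).map(String::toLowerCase);
        Function<Stream<String>, Stream<Integer>> featurize =
                rows -> rows.map(String::length);  // toy "feature": text length
        return then(normalize, featurize);
    }

    public static void main(String[] args) {
        Function<Stream<String>, Stream<Integer>> pipeline = buildPipeline();
        // Replaying the identical pipeline on training and inference inputs
        // keeps preprocessing consistent across both lifecycles.
        List<Integer> features =
                pipeline.apply(Stream.of("  Hello ", "ML")).collect(Collectors.toList());
        System.out.println(features); // prints [5, 2]
    }
}
```

The design point mirrored here is that the pipeline is a value that can be stored and replayed, rather than ad-hoc preprocessing code duplicated between training scripts and serving code.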
An Autonomic Cross-Platform Operating Environment for On-Demand Internet Computing
The Internet has evolved into a global and ubiquitous communication medium interconnecting powerful application servers, diverse desktop computers and mobile notebooks. Along with recent developments in computer technology, such as the convergence of computing and communication devices, the way people use computers and the Internet has changed, altering working habits and leading to new application scenarios. On the one hand, pervasive computing, ubiquitous computing and nomadic computing become more and more important, since different computing devices like PDAs and notebooks may be used concurrently and alternately, e.g. while the user is on the move. On the other hand, the ubiquitous availability and pervasive interconnection of computing systems have fostered various trends towards the dynamic utilization and spontaneous collaboration of available remote computing resources, which are addressed by approaches like utility computing, grid computing, cloud computing and public computing. From a general point of view, the common objective of this development is the use of Internet applications on demand, i.e. applications that are not installed in advance by a platform administrator but are dynamically deployed and run as they are requested by the application user. The heterogeneous and unmanaged nature of the Internet represents a major challenge for the on-demand use of custom Internet applications across heterogeneous hardware platforms, operating systems and network environments. Promising remedies are autonomic computing systems that are supposed to maintain themselves without particular user or application intervention. In this thesis, an Autonomic Cross-Platform Operating Environment (ACOE) is presented that supports On Demand Internet Computing (ODIC), such as dynamic application composition and ad hoc execution migration.
The approach is based on an integration middleware called crossware that does not replace existing middleware but operates as a self-managing mediator between diverse application requirements and heterogeneous platform configurations. A Java implementation of the Crossware Development Kit (XDK) is presented, followed by the description of the On Demand Internet Computing System (ODIX). The feasibility of the approach is shown by the implementation of an Internet Application Workbench, an Internet Application Factory and an Internet Peer Federation. They illustrate the use of ODIX to support local, remote and distributed ODIC, respectively. Finally, the suitability of the approach is discussed with respect to the support of ODIC.
I-JVM: a Java Virtual Machine for Component Isolation in OSGi
The OSGi framework is a Java-based, centralized, component-oriented platform. It is being widely adopted as an execution environment for the development of extensible applications. However, current Java Virtual Machines are unable to isolate components from each other. For instance, a malicious component can freeze the complete platform by allocating too much memory or alter the behavior of other components by modifying shared variables. This paper presents I-JVM, a Java Virtual Machine that provides a lightweight approach to isolation while preserving compatibility with legacy OSGi applications. Our evaluation of I-JVM shows that it solves the 8 known OSGi vulnerabilities that are due to the Java Virtual Machine. Overall, the overhead of I-JVM compared to the JVM on which it is based is below 20%.
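The shared-variable vulnerability mentioned above can be demonstrated with a self-contained Java sketch (the component classes are hypothetical stand-ins for OSGi bundles, not code from the paper). In a stock JVM, classes shared between components expose mutable static state, so one component can silently alter another's behaviour:

```java
public class IsolationGapDemo {
    // A class both "components" link against, as bundles often do in practice.
    static class SharedConfig {
        static int timeoutMillis = 5000;
    }

    static class WellBehavedComponent {
        int readTimeout() { return SharedConfig.timeoutMillis; }
    }

    static class MisbehavingComponent {
        // Mutates state the other component depends on.
        void sabotage() { SharedConfig.timeoutMillis = 0; }
    }

    public static void main(String[] args) {
        WellBehavedComponent victim = new WellBehavedComponent();
        System.out.println("before: " + victim.readTimeout()); // prints 5000
        new MisbehavingComponent().sabotage();
        System.out.println("after: " + victim.readTimeout());  // prints 0
    }
}
```

An isolating runtime in the spirit of I-JVM would give each component its own view of such static state, so the sabotage above would not be visible to the victim.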
PyTorch Geometric Temporal: Spatiotemporal Signal Processing with Neural Machine Learning Models
We present PyTorch Geometric Temporal, a deep learning framework combining
state-of-the-art machine learning algorithms for neural spatiotemporal signal
processing. The main goal of the library is to make temporal geometric deep
learning available for researchers and machine learning practitioners in a
unified, easy-to-use framework. PyTorch Geometric Temporal builds on existing
libraries in the PyTorch ecosystem and adds streamlined neural network layer
definitions, temporal snapshot generators for batching, and integrated
benchmark datasets. These features are illustrated with a
tutorial-like case study. Experiments demonstrate the predictive performance of
the models implemented in the library on real world problems such as
epidemiological forecasting, ridehail demand prediction and web-traffic
management. Our sensitivity analysis of runtime shows that the framework can
potentially operate on web-scale datasets with rich temporal features and
spatial structure.
Comment: Source code at: https://github.com/benedekrozemberczki/pytorch_geometric_tempora