1,156 research outputs found
A Cyclic Distributed Garbage Collector for Network Objects
This paper presents an algorithm for distributed garbage collection and outlines its implementation within the Network Objects system. The algorithm is based on a reference listing scheme, which is augmented by partial tracing in order to collect distributed garbage cycles. Processes may be dynamically organised into groups, according to appropriate heuristics, to reclaim distributed garbage cycles. The algorithm places no overhead on local collectors and suspends local mutators only briefly. Partial tracing of the distributed graph involves only objects thought to be part of a garbage cycle: no collaboration with other processes is required. The algorithm offers considerable flexibility, allowing expediency and fault-tolerance to be traded against completeness.
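The reference-listing half of such a scheme is simple to sketch; the Python below is an illustration only (names such as `Exporter` and `add_client` are invented, not from the paper). Each process records which remote processes hold references to each exported object; an object with an empty client list and no local references is garbage. Cross-process cycles defeat this test, which is what the partial tracing is for.

```python
# Sketch of reference *listing* (not counting): an exporting process
# records WHICH remote processes hold each object, so withdrawing a
# reference is idempotent even under duplicated control messages.

class Exporter:
    def __init__(self):
        self.clients = {}  # object id -> set of remote process ids

    def add_client(self, obj, proc):
        self.clients.setdefault(obj, set()).add(proc)

    def remove_client(self, obj, proc):
        holders = self.clients.get(obj, set())
        holders.discard(proc)
        if not holders:
            # No remote holders left: the local collector may reclaim
            # the object (unless it is still locally reachable).
            self.clients.pop(obj, None)

    def is_remotely_referenced(self, obj):
        return obj in self.clients

e = Exporter()
e.add_client("obj1", "P2")
e.add_client("obj1", "P3")
e.remove_client("obj1", "P2")
print(e.is_remotely_referenced("obj1"))  # True: P3 still holds it
e.remove_client("obj1", "P3")
print(e.is_remotely_referenced("obj1"))  # False: now collectable locally
```

Because the exporter knows exactly who holds each reference, a duplicated `remove_client` message from the same process has no further effect, which is the robustness advantage of listing over plain counting.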
Inversion for reservoir pressure change using overburden strain measurements determined from 4D seismic
When significant pore pressure changes occur because of production from a hydrocarbon reservoir, the rocks both inside and outside of the reservoir deform. This deformation results in traveltime changes between reflection events on time-lapse seismic data, because the distance between reflection events is altered and the seismic velocity changes with the strain. These traveltime differences are referred to as time-lapse time shifts.
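A common formulation of this relation (due to Landrø and Stammeijer, and Hatchell and Bourne; not necessarily the exact one used in this thesis) expresses the fractional time shift through a layer as dt/t = dz/z - dv/v, the vertical strain minus the fractional velocity change:

```python
# Standard small-strain time-shift relation (a common formulation,
# not necessarily this thesis's): dt/t = dz/z - dv/v.
# A layer that stretches (positive strain) and slows down both
# increase the two-way traveltime through it.

def fractional_time_shift(strain_zz, dv_over_v):
    """Return dt/t given vertical strain dz/z and velocity change dv/v."""
    return strain_zz - dv_over_v

# Hypothetical overburden stretching by 0.1% while velocity drops 0.2%:
dt_over_t = fractional_time_shift(1e-3, -2e-3)
print(dt_over_t)  # ~0.003, i.e. about 3 ms shift on a 1 s traveltime
```

The two effects reinforce in a stretched, slowed overburden, which is one reason overburden time shifts are a relatively strong and stable signal.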
In this thesis, time-lapse time shifts observed in the overburden are used as input to a linear inversion for reservoir pressure. Measurements from the overburden are used because, compared to the reservoir-level signal, time shift estimates are generally more stable, the strain deformations can be considered linear, and fluid effects are negligible.
A critical examination of methods currently available to measure time-lapse time shifts is offered. It is found that available methods are most accurate when the time shifts are slowly varying with pressure and changes in the seismic reflectivity are negligible. While both of these conditions are generally met in the overburden, they are rarely met at reservoir level.
Next, a geomechanical model that linearly relates the overburden time-lapse time shifts to reservoir pressure is considered. This model takes a semi-analytical approach by numerical integration of a nucleus of strain in a homogeneous poroelastic halfspace. Although this model has the potentially limiting assumption of a homogeneous medium, it allows for reservoirs of arbitrary geometries and, in contrast to more complex numerical approaches, it is simple to parameterise and computationally efficient.
This model is used to create a linear inversion scheme which is first tested on synthetic data output from a complex finite-element model. Despite the simplifications of the inversion operator, the pressure change is recovered to within ±10% normalised error of the true pressure distribution.
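The linear inversion step can be illustrated in miniature. The sketch below assumes a forward model d = G p, with a made-up 3x2 sensitivity matrix G standing in for the nucleus-of-strain Green's functions, and solves the least-squares normal equations; it is illustrative only, not the thesis's operator.

```python
# Hypothetical miniature of a linear inversion d = G p: overburden
# time shifts d are linear in reservoir pressure changes p through a
# Green's-function matrix G (values invented for illustration).
# We solve the normal equations (G^T G) p = G^T d for a 2-cell model.

def solve2(a, b, c, d, e, f):
    """Solve [[a, b], [c, d]] @ [x, y] = [e, f] by Cramer's rule."""
    det = a * d - b * c
    return ((e * d - b * f) / det, (a * f - e * c) / det)

# Invented sensitivity of 3 time-shift measurements to 2 pressure cells:
G = [[1.0, 0.2],
     [0.5, 0.5],
     [0.2, 1.0]]
p_true = [2.0, -1.0]  # "true" pressure changes (MPa), for the test
d = [sum(G[i][j] * p_true[j] for j in range(2)) for i in range(3)]

gtg = [[sum(G[k][i] * G[k][j] for k in range(3)) for j in range(2)]
       for i in range(2)]
gtd = [sum(G[k][i] * d[k] for k in range(3)) for i in range(2)]
p_est = solve2(gtg[0][0], gtg[0][1], gtg[1][0], gtg[1][1], gtd[0], gtd[1])
print(p_est)  # recovers approximately (2.0, -1.0)
```

With noise-free data and a full-rank G the recovery is exact; the thesis's ±10% figure reflects the mismatch between the simplified homogeneous operator and the finite-element truth, not the algebra above.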
Next, the inversion scheme is applied to two real data cases in different geological settings: first to a sector of the Valhall Field, a compacting chalk reservoir in the Norwegian North Sea, and then to the Genesis Field, a stacked turbidite in the Gulf of Mexico. In both cases the results give good qualitative matches to existing reservoir-simulator estimates of compaction or pressure depletion, and it is possible that these results may assist in updating the simulation model. Further avenues of investigation are proposed to test the robustness of the simplified geomechanical approach in the presence of more complex geomechanical features such as faults and strong material contrasts.
The influence of overburden on quantitative time-lapse seismic interpretation
Time-lapse seismic data quality has improved over the past decade, which makes
dynamic interpretation of the reservoir changes possible. To push the limits of this
technique further, this thesis studies the time-lapse seismic noise generated by overburden
heterogeneities, as well as its influence on quantitative seismic interpretation.
This is done by testing the accuracy of a multi-attribute pressure and saturation inversion method in this context, to gain insight into its performance when seismic acquisitions are not perfectly repeated. Extensive seismic modelling studies are conducted in order to quantify the accumulated error for three different overburden complexities.
Channels in the overburden above the Nelson Field, North Sea, are found to cause
errors in the time-lapse amplitudes. The magnitude of these amplitude errors decreases
with increased repeatability of the monitor survey’s source and receiver
positions. On average, saturation change is estimated to an accuracy of less than
6% when affected by amplitude errors only. However, these mean errors significantly
increase to more than 20% if the residual time shifts caused by the channels
are not removed from the seismic data. Moreover, the maximum saturation change
estimation error can exceed the production induced signal locally. In addition, a
major finding of this study is that the shape of the channel in conjunction with
the acquisition direction has a significant impact on the spatial distribution of the
errors at the reservoir level. It is also shown that the commonly used repeatability
measures of NRMS or ΔSource + ΔReceiver do not correlate well with the spatial distribution of areas with increased saturation change estimation error.
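The NRMS measure mentioned above has a standard definition (Kragh and Christie): NRMS = 200 * rms(monitor - base) / (rms(monitor) + rms(base)), computed trace by trace over a window. A minimal sketch:

```python
# Standard NRMS repeatability measure (Kragh & Christie):
#   NRMS = 200 * rms(m - b) / (rms(m) + rms(b))
# where b and m are base and monitor traces over a time window.

from math import sqrt

def rms(xs):
    return sqrt(sum(x * x for x in xs) / len(xs))

def nrms(base, monitor):
    diff = [m - b for b, m in zip(base, monitor)]
    return 200.0 * rms(diff) / (rms(base) + rms(monitor))

base = [1.0, -2.0, 3.0, -1.0]
print(nrms(base, base))                     # 0.0: perfectly repeated
print(nrms(base, [x * 1.1 for x in base]))  # ~9.5: pure amplitude scaling
```

Note that NRMS is a single bulk number per trace pair; as the study above finds, it says little about where in the reservoir the resulting estimation errors will concentrate.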
Consequently, a layer stripping method is presented which reduces the amplitude errors caused by the overburden channel and the acquisition non-repeatability by a factor of two. Nevertheless, the limits of using post-stack data to invert for time-lapse changes become apparent and, as a result, further research into applying this method to pre-stack seismic data is strongly advised.
Production-induced amplitude changes inside the stacked reservoirs of a deepwater West of Africa field constitute the second overburden complexity studied. These changes imprint on the lower reservoir channel and reduce the time-lapse amplitude change locally by up to 42%. Furthermore, time-lapse amplitude errors are as large as 38% if the velocity change inside the upper reservoir is not included in the monitor migration velocity model. In addition, an important conclusion of this study is that, due to its high-frequency assumption, ray-tracing-based seismic modelling does not perform well for cellular models such as this West of Africa example. The use of finite-difference modelling methods is strongly advised instead.
Finally, the effect of overburden changes above the highly compacting Ekofisk chalk
reservoir, North Sea, is investigated by combining reservoir simulation, geomechanical
and ray-tracing models. The velocity change of the overburden rocks reduces
the time-lapse amplitudes at the top reservoir predominantly in the zone of vertical
displacements greater than six metres. In this zone, the mean time-lapse amplitude
errors in the full and far offset stack data are 9.4% and 4.23%, respectively. These
errors decrease below 2.3% in areas of less than six metres vertical displacement.
Consequently, the full and far offset stack amplitudes are not suited for quantitative
time-lapse interpretation. The time-lapse amplitudes for the near and mid
offset stacks are significantly less affected and the mean errors are smaller than 1.5%
across the entire reservoir. Therefore, these two partial stacks are recommended for
quantitative time-lapse interpretation.
Three different overburden complexities in the North Sea and West of Africa are studied and prove to have a measurable impact on the time-lapse amplitudes. It is shown that these errors affect the ability to estimate the saturation change in a way that is not entirely predictable from inferences using commonly used repeatability measures.
List Processing in Real Time on a Serial Computer
Key Words and Phrases: real-time, compacting, garbage collection, list processing, virtual memory, file or database management, storage management, storage allocation, LISP, CDR-coding, reference counting.
CR Categories: 3.50, 3.60, 3.73, 3.80, 4.13, 4.32, 4.33, 4.35, 4.49
This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C-0522.

A real-time list processing system is one in which the time required by each elementary list operation (CONS, CAR, CDR, RPLACA, RPLACD, EQ, and ATOM in LISP) is bounded by a (small) constant. Classical list processing systems such as LISP do not have this property because a call to CONS may invoke the garbage collector, which requires time proportional to the number of accessible cells to finish. The space requirement of a classical LISP system with N accessible cells under equilibrium conditions is (1.5+μ)N or (1+μ)N, depending upon whether a stack is required for the garbage collector, where μ>0 is typically less than 2.
A list processing system is presented which:
1) is real-time--i.e. T(CONS) is bounded by a constant independent of the number of cells in use;
2) requires space (2+2μ)N, i.e. not more than twice that of a classical system;
3) runs on a serial computer without a time-sharing clock;
4) handles directed cycles in the data structures;
5) is fast--the average time for each operation is about the same as with normal garbage collection;
6) compacts--minimizes the working set;
7) keeps the free pool in one contiguous block--objects of nonuniform size pose no problem;
8) uses one phase incremental collection--no separate mark, sweep, relocate phases;
9) requires no garbage collector stack;
10) requires no "mark bits", per se;
11) is simple--suitable for microcoded implementation.
Extensions of the system to handle a user program stack, compact list representation ("CDR-coding"), arrays of non-uniform size, and hash linking are discussed. CDR-coding is shown to reduce memory requirements for N LISP cells to ≈(1+μ)N. Our system is also compared with another approach to the real-time storage management problem, reference counting, and reference counting is shown to be neither competitive with our system when speed of allocation is critical, nor compatible, in the sense that a system with both forms of garbage collection is worse than our pure one.
Inferring Concise Specifications of APIs
Modern software relies on libraries and uses them via application programming
interfaces (APIs). Correct API usage as well as many software engineering tasks
are enabled when APIs have formal specifications. In this work, we analyze the
implementation of each method in an API to infer a formal postcondition.
Conventional wisdom is that, if one has preconditions, then one can use the
strongest postcondition predicate transformer (SP) to infer postconditions.
However, SP yields postconditions that are exponentially large, which makes
them difficult to use, either by humans or by tools. Our key idea is an
algorithm that converts such exponentially large specifications into a form
that is more concise and thus more usable. This is done by leveraging the
structure of the specifications that result from the use of SP. We applied our
technique to infer postconditions for over 2,300 methods in seven popular Java
libraries. Our technique was able to infer specifications for 75.7% of these
methods, each of which was verified using an Extended Static Checker. We also
found that 84.6% of resulting specifications were less than 1/4 page (20 lines)
in length. Our technique was able to reduce the length of SMT proofs needed for verifying implementations by 76.7% and reduced prover execution time by 26.7%.
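The blow-up that motivates the conversion step can be seen with a toy symbolic model: pushing a condition through a conditional with SP produces one disjunct per path, so n sequential conditionals yield 2^n disjuncts. The sketch below (plain strings, statement bodies omitted) is illustrative only, not the paper's algorithm:

```python
# Toy model of SP's exponential growth: SP over `if g then s1 else s2`
# splits every current disjunct into a g-case and a not-g case, so the
# number of disjuncts doubles at each sequential conditional.
# Statement bodies are omitted; only the case split matters here.

def sp_if(guard, disjuncts):
    """Split each disjunct into the two paths through one conditional."""
    return [f"({d} and {guard})" for d in disjuncts] + \
           [f"({d} and not {guard})" for d in disjuncts]

pre = ["true"]
for i in range(5):          # five sequential if-statements
    pre = sp_if(f"g{i}", pre)
print(len(pre))  # 32 == 2**5 disjuncts in the raw postcondition
```

The paper's contribution is, in effect, the reverse direction: exploiting the regular path structure visible above to fold such disjunctions back into a concise specification.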
Toward precision medicine with nanopore technology
Currently, when patients are diagnosed with cancer, they often receive a treatment based on the type and stage of the tumor. However, different patients may respond to the same treatment differently, due to the variation in their genomic alteration profiles. Thus, it is essential to understand the effect of genomic alterations on cancer drug efficiency and to engineer devices that monitor these changes for therapeutic response prediction. Nanopore-based detection technology features devices containing a nanometer-scale pore embedded in a thin membrane that can be utilized for DNA sequencing, biosensing, and detection of biological or chemical modifications on single molecules. Overall, this project aims to evaluate the capability of the biological nanopore alpha-hemolysin as a biosensor for genetic and epigenetic biomarkers of cancer. Specifically, we utilized the nanopore to (1) study the effect of point mutations on C-kit1 G-quadruplex formation and its response to the CX-5461 cancer drug; (2) evaluate the nanopore's ability to detect cytosine methylation in label-dependent and label-independent manners; and (3) detect circulating-tumor DNA collected from lung cancer patients' plasma for disease detection and treatment response monitoring. Compared to conventional techniques, nanopore assays offer increased flexibility and much shorter processing times.