A new integration algorithm for ordinary differential equations based on continued fraction approximations
A new integration algorithm is presented, and an implementation is compared with other programmed algorithms. The new algorithm is a step-by-step procedure for solving the initial value problem in ordinary differential equations. It is designed to approximate poles of small integer order in the solutions of the differential equations by continued fractions obtained by manipulating the sums of truncated Taylor series expansions. The new method is compared with the Gragg-Bulirsch-Stoer method and the Taylor series method. The Taylor series method and the new method prove superior in speed and accuracy, and the new method is the best of all when the solution is required near a singularity. Finally, the new method is shown to pass automatically through singularities at which all the other methods discussed fail.
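The core idea can be sketched in a few lines: a truncated Taylor expansion of the solution is rewritten as a continued fraction (here via a Viskovatov-style transformation), which can represent a nearby pole that the polynomial itself cannot. This is an illustrative sketch of the general technique, not the authors' implementation; all function names are ours.

```python
# Convert a truncated power series into a continued fraction of the form
#   b0 + t/(b1 + t/(b2 + ...))
# so that simple poles of the underlying solution are captured.

def recip_series(a, n):
    """Coefficients of 1 / (a[0] + a[1] t + ...) up to t^(n-1); needs a[0] != 0."""
    r = [1.0 / a[0]]
    for k in range(1, n):
        s = sum(a[j] * r[k - j] for j in range(1, min(k, len(a) - 1) + 1))
        r.append(-s / a[0])
    return r

def series_to_cfrac(c, depth):
    """Leading coefficients b of  b0 + t/(b1 + t/(b2 + ...))  matching series c."""
    b, cur = [], list(c)
    for _ in range(depth):
        b.append(cur[0])
        tail = cur[1:]
        if not tail or tail[0] == 0:   # normal form breaks down; stop early
            break
        cur = recip_series(tail, len(tail))
    return b

def eval_cfrac(b, t):
    v = b[-1]
    for bi in reversed(b[:-1]):
        v = bi + t / v
    return v

# Example: y' = y^2, y(0) = 1 has solution 1/(1-t), with a simple pole at t = 1.
taylor = [1.0] * 6                  # truncated Taylor coefficients of 1/(1-t)
b = series_to_cfrac(taylor, 3)      # -> 1 + t/(1 + t/(-1)), i.e. exactly 1/(1-t)
near_pole = eval_cfrac(b, 0.9)      # ~10.0; the 6-term Taylor sum gives only ~4.69
```

The example shows the advertised behaviour near a singularity: the polynomial partial sum badly underestimates the solution at t = 0.9, while the continued fraction built from the same six coefficients reproduces the pole structure.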
TBC of the thoracic wall with fistulisation through the breast
A 53-year-old North African woman presented with a longstanding history of ulcerations of the right breast. Physical examination (Fig. 1) showed a 1.5 cm ulcer in the lower outer quadrant, a smaller areolar ulcer and a discharging sinus tract in the inframammary fold. Apart from female genital mutilation, her past medical history was unremarkable. Laboratory work-up was essentially normal, and cultures of the ulcers were taken. Mammography showed infra-areolar skin retraction associated with irregular, high-density distortion of the breast tissue. Ultrasound (Fig. 1) revealed communicating sinus tracts arising from an intercostal mass with central necrosis. Mobile internal echoes were suggestive of abscess formation, and a Tru-Cut biopsy was taken. It showed a marked granulomatous inflammatory pattern with fibrosis. Axillary lymphadenopathy was present.
XML for Domain Viewpoints
Within research institutions like CERN (European Organization for Nuclear
Research) there are often disparate databases (different in format, type and
structure) that users need to access in a domain-specific manner. Users may
want to access a simple unit of information without having to understand details
of the underlying schema or they may want to access the same information from
several different sources. It is neither desirable nor feasible to require
users to have knowledge of these schemas. Instead it would be advantageous if a
user could query these sources using his or her own domain models and
abstractions of the data. This paper describes the basis of an XML (eXtensible
Markup Language) framework that provides this functionality and is currently
being developed at CERN. The goal of the first prototype was to explore the
possibilities of XML for data integration and model management. It shows how
XML can be used to integrate data sources. The framework is not only applicable
to CERN data sources but to other environments too. Comment: 9 pages, 6 figures; conference report from the SCI'2001 Multiconference on Systemics & Informatics, Florida.
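The domain-viewpoint idea described above can be illustrated compactly: two sources with different schemas are exposed as XML, and a per-source mapping lets the user query a single domain concept without knowing either underlying schema. This is a minimal sketch of the general approach, not CERN's actual framework; the element names and mapping table are invented for the example.

```python
import xml.etree.ElementTree as ET

# Two hypothetical source documents with different structures for the same data.
source_a = ET.fromstring("<detectors><det><id>D1</id><v>3.3</v></det></detectors>")
source_b = ET.fromstring("<parts><part name='D2' voltage='3.1'/></parts>")

def devices(src, mapping):
    """Yield domain-level 'device' records from a source via its mapping."""
    for node in src.findall(mapping["path"]):
        yield {"id": mapping["id"](node), "voltage": float(mapping["voltage"](node))}

# A per-source mapping from the user's domain concept to each schema.
MAPPINGS = {
    "a": {"path": "det",  "id": lambda n: n.findtext("id"),
          "voltage": lambda n: n.findtext("v")},
    "b": {"path": "part", "id": lambda n: n.get("name"),
          "voltage": lambda n: n.get("voltage")},
}

# The user sees one uniform list of devices regardless of source schema.
all_devices = list(devices(source_a, MAPPINGS["a"])) + \
              list(devices(source_b, MAPPINGS["b"]))
```

The design point is that only the mapping table knows the source schemas; user code queries the domain abstraction ("device") alone, which is the separation the paper argues for.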
Database independent Migration of Objects into an Object-Relational Database
This paper reports on the CERN-based WISDOM project which is studying the
serialisation and deserialisation of data to/from an object database
(Objectivity/DB) and Oracle 9i. Comment: 26 pages, 18 figures; CMS CERN Conference Report cr02_01.
Mobile Computing in Physics Analysis - An Indicator for eScience
This paper presents the design and implementation of a Grid-enabled physics
analysis environment for handheld and other resource-limited computing devices
as one example of the use of mobile devices in eScience. Handheld devices offer
great potential because they provide ubiquitous access to data and
round-the-clock connectivity over wireless links. Our solution aims to provide
users of handheld devices the capability to launch heavy computational tasks on
computational and data Grids, monitor job status during execution, and
retrieve results after job completion. Users carry their jobs on their handheld
devices in the form of executables (and associated libraries). Users can
transparently view the status of their jobs and get back their outputs without
having to know where they are being executed. In this way, our system is able
to act as a high-throughput computing environment where devices ranging from
powerful desktop machines to small handhelds can employ the power of the Grid.
The results shown in this paper are readily applicable to the wider eScience
community. Comment: 8 pages, 7 figures. Presented at the 3rd Int Conf on Mobile Computing
& Ubiquitous Networking (ICMU'06), London, October 2006.
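The submit/monitor/retrieve cycle described above can be sketched as a small client-broker interaction. The class and method names here are purely illustrative stand-ins for the paper's Grid services; the "broker" runs the job locally only so the sketch is self-contained.

```python
import uuid

class GridJobBroker:
    """Stand-in for the Grid-side service a handheld client would talk to."""
    def __init__(self):
        self._jobs = {}

    def submit(self, executable, args=()):
        """Accept a job and return an opaque id the client can carry around."""
        job_id = str(uuid.uuid4())
        # A real broker would ship the executable to some worker node on the
        # Grid; here we just run the callable and store its result.
        self._jobs[job_id] = {"status": "DONE", "output": executable(*args)}
        return job_id

    def status(self, job_id):
        return self._jobs[job_id]["status"]

    def fetch_output(self, job_id):
        return self._jobs[job_id]["output"]

# The handheld client holds only the job id; it never learns where the job ran,
# which is the transparency property the paper emphasises.
broker = GridJobBroker()
jid = broker.submit(lambda n: sum(range(n)), (10,))
result = broker.fetch_output(jid) if broker.status(jid) == "DONE" else None
```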
Design Patterns for Description-Driven Systems
In data modelling, product information has most often been handled separately
from process information. The integration of product and process models in a
unified data model could provide the means by which information could be shared
across an enterprise throughout the system lifecycle from design through to
production. Recently attempts have been made to integrate these two separate
views of systems through identifying common data models. This paper relates
description-driven systems to multi-layer architectures and reveals where
existing design patterns facilitate the integration of product and process
models and where patterns are missing or where existing patterns require
enrichment for this integration. It reports on the construction of a so-called
description-driven system which integrates Product Data Management (PDM) and
Workflow Management (WfM) data models through a common meta-model. Comment: 14 pages, 13 figures. Presented at the 3rd Enterprise Distributed
Object Computing (EDOC'99) conference, Mannheim, Germany, September 1999.
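The "item-description" pattern at the heart of description-driven systems can be sketched in a few lines: each base-level object holds a reference to a meta-level description of its type, so product (PDM) and process (WfM) data share one structure. This is a generic illustration of the pattern, not the paper's actual meta-model; all names are invented.

```python
class Description:
    """Meta-level object: defines which attributes instances of a type carry."""
    def __init__(self, name, attributes):
        self.name = name
        self.attributes = attributes

class Item:
    """Base-level object typed by a Description rather than a hard-coded class."""
    def __init__(self, description, values):
        missing = set(description.attributes) - set(values)
        if missing:
            raise ValueError(f"missing attributes: {missing}")
        self.description = description
        self.values = values

# The same two classes describe both a product and a process: new types are
# added as data (Descriptions), not as code, which is what makes the system
# description-driven.
part_type = Description("DetectorPart", ["part_id", "material"])
step_type = Description("AssemblyStep", ["step_id", "duration_h"])

part = Item(part_type, {"part_id": "P-7", "material": "Si"})
step = Item(step_type, {"step_id": "S-1", "duration_h": 2})
```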
Pion form factor in the Kroll-Lee-Zumino model
The renormalizable Abelian quantum field theory model of Kroll, Lee, and
Zumino is used to compute the one-loop vertex corrections to the tree-level,
Vector Meson Dominance (VMD) pion form factor. These corrections, together with
the known one-loop vacuum polarization contribution, lead to a substantial
improvement over VMD. The resulting pion form factor in the space-like region
is in excellent agreement with data in the whole range of accessible momentum
transfers. The time-like form factor, known to reproduce the Gounaris-Sakurai
formula at and near the rho-meson peak, is unaffected by the vertex correction
at order g_{\rho\pi\pi}^2. Comment: Revised version corrects a misprint in Eq. (1
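For orientation, the tree-level VMD form factor that these one-loop terms correct is the standard rho-pole expression (a textbook formula quoted here for context, not taken from the paper):

```latex
F_\pi^{\mathrm{VMD}}(q^2) \;=\; \frac{m_\rho^2}{m_\rho^2 - q^2}
```

The vertex and vacuum-polarization corrections computed in the model modify this monopole at order $g_{\rho\pi\pi}^2$.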
Status and perspective of detector databases in the CMS experiment at the LHC
This note gives a high-level conceptual overview of the various databases that capture information about the CMS detector. The detector domain has been split into four partly overlapping parts that cover phases in the detector life cycle: construction, integration, configuration and conditions, plus a geometry part that is common to all phases. The discussion addresses the specific content and usage of each part, together with further requirements, dependencies and interfaces.
Object Serialization and Deserialization Using XML
Interoperability of potentially heterogeneous databases has been an ongoing
research issue for a number of years in the database community. With the trend
towards globalization of data location and data access and the consequent
requirement for the coexistence of new data stores with legacy systems, the
cooperation and data interchange between data repositories has become
increasingly important. The emergence of the eXtensible Markup Language (XML)
as a database independent representation for data offers a suitable mechanism
for transporting data between repositories. This paper describes a research
activity within the CMS group at CERN towards identifying and
implementing database serialization and deserialization methods that can be
used to replicate or migrate objects across the network between CERN and
worldwide centres using XML to serialize the contents of multiple objects
resident in object-oriented databases. Comment: 14 pages, 7 figures.
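A minimal round trip conveys the mechanism: an in-memory object is serialized to database-independent XML, transported, and an equivalent object is rebuilt on the other side. This is an illustrative sketch only; the paper targets objects in object-oriented databases, whereas here a plain dict stands in for the object, and the element names are ours.

```python
import xml.etree.ElementTree as ET

def to_xml(class_name, obj):
    """Serialize an object's fields into a self-describing XML snippet."""
    root = ET.Element("object", {"class": class_name})
    for field, value in obj.items():
        child = ET.SubElement(root, field)
        child.text = str(value)
    return ET.tostring(root, encoding="unicode")

def from_xml(xml_text):
    """Rebuild (class name, fields) from the XML produced by to_xml."""
    root = ET.fromstring(xml_text)
    return root.get("class"), {child.tag: child.text for child in root}

payload = to_xml("CalibrationRecord", {"run": 42, "gain": "1.07"})
cls, fields = from_xml(payload)   # note: field values come back as strings
```

The usage note in the last comment matters in practice: XML text is untyped, so a real serializer must also carry type information (as the schemas discussed in the paper do) rather than recover everything as strings.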
DIANA Scheduling Hierarchies for Optimizing Bulk Job Scheduling
The use of meta-schedulers for resource management in large-scale distributed
systems often leads to a hierarchy of schedulers. In this paper, we discuss why
existing meta-scheduling hierarchies are sometimes not sufficient for Grid
systems due to their inability to re-organise jobs already scheduled locally.
Such a job re-organisation is required to adapt to evolving loads which are
common in heavily used Grid infrastructures. We propose a peer-to-peer
scheduling model and evaluate it using case studies and mathematical modelling.
We detail the DIANA (Data Intensive and Network Aware) scheduling algorithm and
its queue management system for coping with the load distribution and for
supporting bulk job scheduling. We demonstrate that such a system is beneficial
for dynamic, distributed and self-organizing resource management and can assist
in optimizing load or job distribution in complex Grid infrastructures. Comment: 8 pages, 9 figures. Presented at the 2nd IEEE Int Conference on
eScience & Grid Computing, Amsterdam, Netherlands, December 2006.
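The queue re-organisation idea the paper argues for can be shown with a toy priority queue: jobs already waiting at a site are re-ranked when the site's load changes, something a fixed FIFO hierarchy cannot do. The cost model and all names below are invented for illustration and are not DIANA's actual algorithm.

```python
import heapq

def rank(job, site_load):
    # Toy cost: compute demand scaled by current load, plus a data-transfer term.
    return job["cpu_hours"] * (1 + site_load) + job["data_gb"] * 0.5

def build_queue(jobs, site_load):
    """Re-rank the same set of queued jobs for the current site load."""
    q = [(rank(j, site_load), j["name"]) for j in jobs]
    heapq.heapify(q)
    return q

jobs = [
    {"name": "bulk-A",  "cpu_hours": 10, "data_gb": 2},
    {"name": "quick-B", "cpu_hours": 1,  "data_gb": 40},
]

light = build_queue(jobs, site_load=0.0)
heavy = build_queue(jobs, site_load=5.0)
first_light = heapq.heappop(light)[1]   # CPU is cheap on an idle site
first_heavy = heapq.heappop(heavy)[1]   # under load, the bulk job is deferred
```

The point of the example is that the ordering of the *same* queued jobs flips as load evolves, which is exactly the adaptation to evolving loads that a static meta-scheduling hierarchy lacks.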