
    Examples of Reusing Synchronization Code in Aspect-Oriented Programming using Composition Filters

    Applying the object-oriented paradigm to the development of large and complex software systems offers several advantages, of which increased extensibility and reusability are the most prominent. The object-oriented model is also quite suitable for modeling concurrent systems. However, it appears that extensibility and reusability of concurrent applications is far from trivial. The problems that arise, known as inheritance anomalies or crosscutting aspects, have been extensively studied in the literature. As a solution to the synchronization reuse problems, we present the composition-filters approach. Composition filters can express synchronization constraints and operations on objects as modular extensions. In this paper we briefly explain the composition-filters approach, demonstrate its expressive power through a number of examples, and show that composition filters do not suffer from the inheritance anomalies.
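The idea of expressing synchronization constraints as a modular extension, rather than tangling them into the class body, can be sketched in plain Python. This is only an illustrative analogue, not the authors' Composition Filters language: `WaitFilter` and its guard table are invented names, and deferral of value-returning messages is simplified (a deferred `get` replayed later discards its result).

```python
# Illustrative sketch of a composition-filter-style wrapper: incoming
# messages are accepted or deferred based on per-message guard conditions,
# without editing the wrapped class. Names here are hypothetical.
from collections import deque

class BoundedBuffer:
    """Plain object: no synchronization logic in its own methods."""
    def __init__(self):
        self.items = deque()

    def put(self, item):
        self.items.append(item)

    def get(self):
        return self.items.popleft()

class WaitFilter:
    """Modular extension: holds messages whose guard is not satisfied."""
    def __init__(self, inner, conditions):
        self.inner = inner
        self.conditions = conditions   # message name -> guard predicate
        self.pending = deque()         # deferred (name, args) messages

    def send(self, name, *args):
        guard = self.conditions.get(name)
        if guard is None or guard(self.inner):
            result = getattr(self.inner, name)(*args)
            self._retry_pending()
            return result
        self.pending.append((name, args))   # defer until the guard holds
        return None

    def _retry_pending(self):
        # Replay deferred messages whose guards now hold (results discarded
        # in this simplified single-threaded sketch).
        still = deque()
        while self.pending:
            name, args = self.pending.popleft()
            guard = self.conditions.get(name)
            if guard is None or guard(self.inner):
                getattr(self.inner, name)(*args)
            else:
                still.append((name, args))
        self.pending = still

# The synchronization constraint "get only when non-empty" lives entirely
# in the filter, so BoundedBuffer subclasses inherit it unchanged.
buf = WaitFilter(BoundedBuffer(), {"get": lambda b: len(b.items) > 0})
buf.send("put", "job-1")
print(buf.send("get"))   # job-1
```

Because the constraint is attached as a filter rather than written into `get`, a subclass that adds methods does not have to re-implement the guard, which is the reuse property the abstract claims.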

    An Object-Oriented Model for Extensible Concurrent Systems: the Composition-Filters Approach

    Applying the object-oriented paradigm to the development of large and complex software systems offers several advantages, of which increased extensibility and reusability are the most prominent. The object-oriented model is also quite suitable for modeling concurrent systems. However, it appears that extensibility and reusability of concurrent applications is far from trivial. The problems that arise, the so-called inheritance anomalies, are analyzed and presented in this paper, and a set of requirements for extensible concurrent languages is formulated. As a solution to the identified problems, an extension to the object-oriented model is presented: composition filters. Composition filters capture messages and can express certain constraints and operations on these messages, for example buffering. In this paper we explain the composition-filters approach, demonstrate its expressive power through a number of examples, and show that composition filters do not suffer from the inheritance anomalies and fulfill the requirements that were established.

    Beyond XSPEC: Towards Highly Configurable Analysis

    We present a quantitative comparison between software features of the de facto standard X-ray spectral analysis tool, XSPEC, and ISIS, the Interactive Spectral Interpretation System. Our emphasis is on customized analysis, with ISIS offered as a strong example of configurable software. While noting that XSPEC has been of immense value to astronomers, and that its scientific core is moderately extensible (most commonly via the inclusion of user-contributed "local models"), we identify a series of limitations with its use beyond conventional spectral modeling. We argue that from the viewpoint of the astronomical user, the XSPEC internal structure presents a Black Box Problem, with many of its important features hidden from the top-level interface, thus discouraging user customization. Drawing from examples in custom modeling, numerical analysis, parallel computation, visualization, data management, and automated code generation, we show how a numerically scriptable, modular, and extensible analysis platform such as ISIS facilitates many forms of advanced astrophysical inquiry. Comment: Accepted by PASP, for July 2008 (15 pages)
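The kind of user-level extensibility the abstract advocates — model components registered as ordinary functions and composed at the top level rather than hidden behind a fixed interface — can be sketched generically. This is not the actual ISIS or XSPEC API (ISIS is scripted in S-Lang); all names below are illustrative.

```python
# Hypothetical sketch of a "local model" registry: users add spectral model
# components as plain functions and compose them multiplicatively, keeping
# the whole pipeline scriptable. Not the real ISIS/XSPEC interface.
import math

MODELS = {}

def register(name):
    """Decorator that makes a user function available by name."""
    def deco(fn):
        MODELS[name] = fn
        return fn
    return deco

@register("powerlaw")
def powerlaw(energy, norm, index):
    # simple photon power law: norm * E^(-index)
    return norm * energy ** (-index)

@register("edge")
def edge(energy, e_edge, depth):
    # absorption edge: attenuate by exp(-depth) above the edge energy
    return math.exp(-depth) if energy >= e_edge else 1.0

def evaluate(expr, energy, params):
    """Evaluate a multiplicative combination, e.g. ['powerlaw', 'edge']."""
    value = 1.0
    for name in expr:
        value *= MODELS[name](energy, *params[name])
    return value

flux = evaluate(["powerlaw", "edge"], 2.0,
                {"powerlaw": (1.0, 2.0), "edge": (1.5, 0.5)})
```

Because the registry is just a dictionary of functions, nothing about the top-level interface has to change when a user contributes a new component, which is the "configurable software" property being contrasted with a black-box design.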

    Characterisation and Classification of Hidden Conducting Security Threats using Magnetic Polarizability Tensors

    The early detection of terrorist threat objects, such as guns and knives, through improved metal detection has the potential to reduce the number of attacks and improve public safety and security. Walk-through metal detectors (WTMDs) are commonly deployed for security screening purposes in applications where these attacks are of particular concern, such as airports, transport hubs, government buildings, and concerts. However, there is scope to improve the identification of an object's shape and its material properties. Using current techniques, there is often the requirement for any metallic objects to be inspected or scanned separately before a patron may be determined to pose no threat, making the process slow. This can often lead to large queues of unscreened people waiting to be screened, which becomes another security threat in itself. To improve on the current method, there is considerable potential to use the fields applied and measured by a metal detector since, hidden within the field perturbation, is object characterisation information. The magnetic polarizability tensor (MPT) offers an economical characterisation of metallic objects, and its spectral signature provides additional object characterisation information. The MPT spectral signature can be determined from measurements of the induced voltage over a range of frequencies for a hidden object. With classification in mind, it can also be computed in advance for different threat and non-threat objects, producing a dataset of these objects from which a machine learning (ML) classifier can be trained. There is also potential to generate this dataset synthetically, via the application of a method based on finite elements (FE).
    This concept of training an ML classifier on a synthetic dataset of MPT-based characterisations is at the heart of this work. In this thesis, details for the production and use of a first-of-its-kind synthetic dataset of realistic object characterisations are presented. To achieve this, first a review of recent developments in MPT object characterisation is provided, motivating the use of MPT spectral signatures. A problem-specific, H(curl)-based, hp-finite element discretisation is presented, which allows for the development of a reduced order model (ROM), using a projection-based proper orthogonal decomposition (PODP), that benefits from a posteriori error estimates. This allows for the rapid production of MPT spectral signatures whose accuracy is guaranteed. This methodology is then implemented in Python using the NGSolve finite element package, where other problem-specific efficiencies are also included along with a series of additional outputs of interest; this software is then packaged and released as the open-source MPT-Calculator. The methodology and software are then extensively tested by application to a series of illustrative examples. Using this software, MPT spectral signatures are then produced for a series of realistic threat and non-threat objects, creating a first-of-its-kind synthetic dataset, which is also released as the open-source MPT-Library dataset. Lastly, a series of ML classifiers are documented and applied to several supervised classification problems using this new synthetic dataset. A series of challenging numerical examples are included to demonstrate the success of the proposed methodology.
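The final stage of the pipeline — train a classifier on synthetic spectral signatures, then label an unseen object — can be sketched in miniature. The "signatures" below are toy curves, not output of MPT-Calculator or entries from MPT-Library, and the nearest-centroid rule merely stands in for the ML classifiers surveyed in the thesis.

```python
# Hedged sketch of the classification stage only: synthetic spectral
# signatures (toy stand-ins for MPT eigenvalue spectra over frequency)
# are used to train a nearest-centroid classifier.
import math

def toy_signature(scale, n=16):
    # stand-in for an MPT spectral signature sampled at n frequencies
    return [scale * math.tanh(0.3 * k) for k in range(n)]

# synthetic training set: (signature, label) pairs
train = [(toy_signature(1.0), "non-threat"),
         (toy_signature(1.1), "non-threat"),
         (toy_signature(3.0), "threat"),
         (toy_signature(3.2), "threat")]

def centroids(data):
    """Average the signatures of each class, frequency by frequency."""
    grouped = {}
    for sig, label in data:
        grouped.setdefault(label, []).append(sig)
    return {lab: [sum(col) / len(sigs) for col in zip(*sigs)]
            for lab, sigs in grouped.items()}

def classify(sig, cents):
    """Assign the label of the nearest class centroid (Euclidean)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(cents, key=lambda lab: dist(sig, cents[lab]))

cents = centroids(train)
print(classify(toy_signature(2.9), cents))   # threat
```

The appeal of the synthetic route is visible even at this scale: new object characterisations can be added to `train` by computation alone, without physical measurement campaigns.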

    Analyzing and Visualizing Cosmological Simulations with ParaView

    The advent of large cosmological sky surveys, ushering in the era of precision cosmology, has been accompanied by ever larger cosmological simulations. The analysis of these simulations, which currently encompass tens of billions of particles and up to a trillion particles in the near future, is often as daunting as carrying out the simulations in the first place. Therefore, the development of very efficient analysis tools combining qualitative and quantitative capabilities is a matter of some urgency. In this paper we introduce new analysis features implemented within ParaView, a parallel, open-source visualization toolkit, to analyze large N-body simulations. The new features include particle readers and a very efficient halo finder which identifies friends-of-friends halos and determines common halo properties. In combination with many other functionalities already existing within ParaView, such as histogram routines or interfaces to Python, this enhanced version enables fast, interactive, and convenient analyses of large cosmological simulations. In addition, development paths are available for future extensions. Comment: 9 pages, 8 figures
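The friends-of-friends (FoF) grouping rule the halo finder implements is simple to state: particles closer than a linking length belong to the same halo, and membership is transitive. A naive O(n²) union-find sketch makes the rule concrete; ParaView's finder is parallel and vastly more efficient, so this is illustration only.

```python
# Naive friends-of-friends halo finder: link every pair of particles
# within the linking length, then read off connected components via
# union-find. Illustrative only; real finders use spatial decomposition.
import math

def fof_halos(points, linking_length):
    n = len(points)
    parent = list(range(n))

    def find(i):
        # find root with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    # link all pairs within the linking length (transitive by union-find)
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(points[i], points[j]) <= linking_length:
                union(i, j)

    # group particle indices by their root: one list per halo
    halos = {}
    for i in range(n):
        halos.setdefault(find(i), []).append(i)
    return list(halos.values())

particles = [(0, 0, 0), (0.1, 0, 0), (0.2, 0, 0), (5, 5, 5)]
print(fof_halos(particles, 0.2))   # [[0, 1, 2], [3]]
```

Note the transitivity: particles 0 and 2 are 0.2 apart only via particle 1, yet all three land in one halo, which is exactly what "friends of friends" means.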

    Upgrading Cracking Waste to Rubber Precursors via Oxidative Dehydrogenation

    To: Dr. Yasar Demirel, Professor, University of Nebraska-Lincoln. As a result of process changes within an ethylene cracking plant, the amount of a C4 byproduct waste stream has significantly increased (Fabiano, Nedwick 1999). A system of extractive distillation and catalytic oxidative dehydrogenation can be used to add value to this C4 waste stream by producing high-purity 1,3-butadiene, an important rubber precursor. 1,3-Butadiene is a critical component of multiple consumer goods, including automobile tires and synthetic rubber, and has a steadily increasing demand, reaching 10 million metric tons in 2012 (Biddy, Scarlata, Kinchin, 2016). The simulation assumes a feed flow rate of 30,000 lb/hr of mixed low-grade fuel with the composition provided in Table 1, with outputs of 16,900 lb/hr of 99% pure 1,3-butadiene and 10,800 lb/hr of fuel byproduct. The fuel byproduct is mixed and sold under the same low-grade fuel rating the mixed feed was previously sold under.
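A quick arithmetic check on the stream figures quoted above is worth making explicit. The abstract itself does not account for the remainder, so the "unaccounted" figure below is simply what the quoted numbers imply (e.g. losses or streams not listed), not a claim from the source.

```python
# Mass-balance check on the quoted stream figures (all in lb/hr).
feed = 30_000                # mixed low-grade fuel feed
butadiene = 16_900           # 99% pure 1,3-butadiene product
fuel_byproduct = 10_800      # fuel byproduct, resold at the feed's rating

recovered = butadiene + fuel_byproduct
unaccounted = feed - recovered          # implied by the quoted figures only
butadiene_mass_fraction = butadiene / feed

print(recovered)                        # 27700 lb/hr in the two product streams
print(unaccounted)                      # 2300 lb/hr not itemized in the abstract
print(round(butadiene_mass_fraction, 3))  # 0.563 of feed mass as butadiene
```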

    Hfs Plus File System Exposition And Forensics

    The Macintosh Hierarchical File System Plus (HFS+), commonly referred to as Mac OS Extended, was introduced in 1998 with Mac OS 8.1. HFS+ is an update to HFS, the Mac OS Standard format, that offers more efficient use of disk space, implements internationally friendly file names, adds future support for named forks, and facilitates booting on non-Mac OS operating systems through different partition schemes. The HFS+ file system is efficient, yet complex. It makes use of B-trees to implement key data structures for maintaining metadata about folders, files, and data. What happens within HFS+ at volume format, or when folders, files, and data are created, moved, or deleted, is largely a mystery to those who are not programmers. The vast majority of information on this subject is relegated to documentation in books, papers, and online content that directs the reader to C code, libraries, and include files. For readers who cannot interpret the complex C or Perl implementations, this leaves little opportunity to develop a basic understanding of the HFS+ internals and how they work. The basic concepts learned from this research will facilitate a better understanding of the HFS+ file system and journal as changes resulting from adding and deleting files or folders are applied in a controlled, easy-to-follow process. The primary tool used to examine the file system changes is a proprietary command line interface (CLI) tool called fileXray. This tool is actually a custom implementation of the HFS+ file system that has the ability to examine file system, metadata, and data level information that isn't available in other tools. We will also use Apple's command line interface tool, Terminal, the WinHex graphical user interface (GUI) editor, The Sleuth Kit command line tools, and DiffFork 1.1.9 to help document and illustrate the file system changes.
    The processes used to document the pristine and changed versions of the file system in each experiment are kept very similar, so that the output files are identical except for the actual change. Keeping the processes the same enables baseline comparisons using a diff tool like DiffFork. Side-by-side and line-by-line comparisons of the allocation, extents overflow, catalog, and attributes files help identify where the changes occurred. The target device in this experiment is a two-gigabyte Universal Serial Bus (USB) thumb drive formatted with a GUID (Globally Unique Identifier) Partition Table. Where practical, HFS+ special files and data structures will be manually parsed, documented, and illustrated.
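The baseline-comparison workflow described above — dump the same metadata before and after a change, then diff line by line so only the real change surfaces — can be sketched with the standard library. `difflib` stands in for DiffFork here, and the dump lines are invented examples, not real fileXray or catalog-file output.

```python
# Sketch of the pristine-vs-changed comparison workflow: identical dump
# procedures mean the diff isolates exactly the file-system change.
# The catalog-record lines below are hypothetical, for illustration only.
import difflib

pristine = [
    "catalog: node 3  record 12  name=report.txt  size=1024",
    "catalog: node 3  record 13  name=notes.txt   size=2048",
]
changed = [
    "catalog: node 3  record 12  name=report.txt  size=1024",
    "catalog: node 3  record 13  name=notes.txt   size=4096",  # file grew
]

diff = list(difflib.unified_diff(pristine, changed,
                                 fromfile="pristine", tofile="changed",
                                 lineterm=""))
for line in diff:
    print(line)
```

Because both dumps were produced the same way, the unified diff shows a single `-`/`+` pair for the grown file and nothing else, which is precisely why the thesis keeps the documentation process constant between experiments.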