
    Estimating the volatility of property assets

    When an investor allocates assets between equities, bonds and property, the allocation needs to provide a portfolio with an appropriate risk/return trade-off: for instance, a pension scheme may prefer a robust portfolio that holds its aggregate value in a number of different situations. Doing this requires some estimate of the volatility or uncertainty in the property assets, so that it can be used in the allocation in the same way as the volatilities of equities and bonds. However, property assets are valued only monthly or quarterly (and are sold only rarely), whereas equities and bonds are priced continuously and recorded daily. Currently many actuaries assume that the volatility of property assets lies between those of equities and bonds, without quantifying it from real data. The challenge for the Study Group is to produce a model for estimating the volatility or uncertainty in property asset values, for use in portfolio planning. The Study Group examined contexts for the use of volatility estimates, particularly in relation to the solvency calculations required by the Financial Services Authority, fund trustees and corporate boards, and it proposed a number of possible approaches. This report summarises that work and suggests directions for further investigation.
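
    As a rough illustration of one possible approach (not necessarily the Study Group's recommendation), the sketch below annualises the volatility of quarterly appraisal-based returns and optionally applies a simple first-order autoregressive unsmoothing step to correct for appraisal smoothing; the function name, the smoothing model and the example figures are illustrative assumptions.

```python
import numpy as np

def annualised_volatility(valuations, periods_per_year=4, unsmooth=True):
    """Estimate annualised volatility from a series of periodic valuations.

    `valuations` is a sequence of appraisal-based values (e.g. quarterly).
    The optional unsmoothing step applies a first-order autoregressive
    correction, a common way to handle appraisal smoothing; the model
    choice here is an illustrative assumption.
    """
    v = np.asarray(valuations, dtype=float)
    r = np.diff(np.log(v))                # per-period log returns

    if unsmooth and len(r) > 2:
        # Unsmooth: r_true_t = (r_t - a * r_{t-1}) / (1 - a),
        # where a is the lag-1 autocorrelation of the observed returns.
        a = np.corrcoef(r[1:], r[:-1])[0, 1]
        if 0 < a < 1:
            r = (r[1:] - a * r[:-1]) / (1.0 - a)

    # Scale per-period volatility to an annual figure.
    return r.std(ddof=1) * np.sqrt(periods_per_year)

# Example: eight quarterly valuations of a property portfolio (illustrative).
quarterly = [100.0, 101.2, 102.5, 101.8, 103.4, 104.9, 104.1, 106.0]
print(f"annualised volatility ~ {annualised_volatility(quarterly):.1%}")
```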

    Migrating agile methods to standardized development practice

    Situated process and quality frameworks offer a way to resolve the tensions that arise when introducing agile methods into standardized software development. For these to be successful, however, organizations must grasp the opportunity to reintegrate software development management, theory, and practice.

    Process-oriented Iterative Multiple Alignment for Medical Process Mining

    Adapted from biological sequence alignment, trace alignment is a process mining technique used to visualize and analyze workflow data. Any analysis done with this method, however, is affected by the alignment quality. The best existing trace alignment techniques use progressive guide-trees to heuristically approximate the optimal alignment in O(N²L²) time. These algorithms are heavily dependent on the selected guide-tree metric, often return sum-of-pairs-score-reducing errors that interfere with interpretation, and are computationally intensive for large datasets. To alleviate these issues, we propose process-oriented iterative multiple alignment (PIMA), which contains specialized optimizations to better handle workflow data. We demonstrate that PIMA is a flexible framework capable of achieving better sum-of-pairs scores than existing trace alignment algorithms in only O(NL²) time. We applied PIMA to analyzing medical workflow data, showing how iterative alignment can better represent the data and facilitate the extraction of insights from data visualization.
    Comment: accepted at ICDMW 201
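
    For readers unfamiliar with the objective PIMA optimises, the sketch below shows a plain pairwise trace alignment (Needleman-Wunsch over activity sequences, with gaps written as "-") and a sum-of-pairs scorer over an aligned set of traces. The scoring weights and example traces are illustrative assumptions; PIMA's own iterative refinement is not reproduced here.

```python
# Pairwise trace alignment and sum-of-pairs scoring; weights are assumptions.
MATCH, MISMATCH, GAP = 1, -1, -1

def align_pair(s, t):
    """Globally align two activity sequences, returning gapped copies."""
    n, m = len(s), len(t)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * GAP
    for j in range(1, m + 1):
        dp[0][j] = j * GAP
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = MATCH if s[i - 1] == t[j - 1] else MISMATCH
            dp[i][j] = max(dp[i - 1][j - 1] + sub,
                           dp[i - 1][j] + GAP,
                           dp[i][j - 1] + GAP)
    # Trace back to recover the gapped sequences.
    a, b, i, j = [], [], n, m
    while i or j:
        if i and j and dp[i][j] == dp[i - 1][j - 1] + (
                MATCH if s[i - 1] == t[j - 1] else MISMATCH):
            a.append(s[i - 1]); b.append(t[j - 1]); i -= 1; j -= 1
        elif i and dp[i][j] == dp[i - 1][j] + GAP:
            a.append(s[i - 1]); b.append("-"); i -= 1
        else:
            a.append("-"); b.append(t[j - 1]); j -= 1
    return a[::-1], b[::-1]

def sum_of_pairs(alignment):
    """Score every column of an equal-length alignment, summed over all pairs."""
    score = 0
    for col in zip(*alignment):
        for x in range(len(col)):
            for y in range(x + 1, len(col)):
                if "-" in (col[x], col[y]):
                    score += GAP
                else:
                    score += MATCH if col[x] == col[y] else MISMATCH
    return score

# Example workflow traces (each symbol is one logged activity).
t1 = ["register", "triage", "xray", "discharge"]
t2 = ["register", "xray", "lab", "discharge"]
a, b = align_pair(t1, t2)
print(" ".join(a))
print(" ".join(b))
print("sum-of-pairs score:", sum_of_pairs([a, b]))
```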

    The price of payment delay


    CSM Testbed Development and Large-Scale Structural Applications

    A research activity called Computational Structural Mechanics (CSM) conducted at the NASA Langley Research Center is described. This activity is developing advanced structural analysis and computational methods that exploit high-performance computers. Methods are developed in the framework of the CSM Testbed software system and applied to representative complex structural analysis problems from the aerospace industry. An overview of the CSM Testbed methods development environment is presented and some new numerical methods developed on a CRAY-2 are described. Selected application studies performed on the NAS CRAY-2 are also summarized.

    Optimized Surface Code Communication in Superconducting Quantum Computers

    Quantum computing (QC) is at the cusp of a revolution. Machines with 100 quantum bits (qubits) are anticipated to be operational by 2020 [googlemachine,gambetta2015building], and several-hundred-qubit machines are around the corner. Machines of this scale have the capacity to demonstrate quantum supremacy, the tipping point where QC is faster than the fastest classical alternative for a particular problem. Because error correction techniques will be central to QC and will be the most expensive component of quantum computation, choosing the lowest-overhead error correction scheme is critical to overall QC success. This paper evaluates two established quantum error correction codes, planar and double-defect surface codes, using a set of compilation, scheduling and network simulation tools. In considering scalable methods for optimizing both codes, we do so in the context of a full microarchitectural and compiler analysis. Contrary to previous predictions, we find that the simpler planar codes are sometimes more favorable for implementation on superconducting quantum computers, especially under conditions of high communication congestion.
    Comment: 14 pages, 9 figures, The 50th Annual IEEE/ACM International Symposium on Microarchitecture
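
    To illustrate why error-correction overhead dominates, the back-of-envelope sketch below picks a code distance from a target logical error rate using the common heuristic p_L ~ A * (p / p_th)^((d+1)/2) and converts it to a physical-qubit count for a single planar patch. The threshold, prefactor and qubit-count formula are assumptions chosen for illustration, not the paper's simulation results, which rest on full microarchitectural and network simulation.

```python
# Rough overhead estimate for a single planar surface-code logical qubit.
# The heuristic and constants below are illustrative assumptions.

def distance_for_target(p_phys, p_target, p_th=1e-2, prefactor=0.1):
    """Smallest odd code distance d whose estimated logical error rate
    meets the target, under the heuristic p_L = A * (p/p_th)**((d+1)/2)."""
    d = 3
    while prefactor * (p_phys / p_th) ** ((d + 1) / 2) > p_target:
        d += 2
    return d

def planar_qubits(d):
    """Physical qubits per logical qubit for one rotated planar patch:
    d*d data qubits plus d*d - 1 measurement qubits (assumed layout)."""
    return 2 * d * d - 1

if __name__ == "__main__":
    p_phys = 1e-3                      # assumed physical error rate
    for p_target in (1e-6, 1e-9, 1e-12):
        d = distance_for_target(p_phys, p_target)
        print(f"target {p_target:.0e}: distance {d}, "
              f"{planar_qubits(d)} physical qubits per logical qubit")
```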

    The planning process and its formalization in computer models

    "Paper delivered to the Second Congress on the Information Systems Sciences, Hot Springs, Va., Nov. 22-25, 1964. -- Rev. January 1965.