    Performance assessment of urban precinct design: a scoping study

    Executive Summary: Significant advances have been made over the past decade in the development of scientifically and industry accepted tools for the performance assessment of buildings in terms of energy, carbon, water, indoor environment quality, etc. Realising resilient, sustainable, low carbon urban development in the 21st century, however, will require several radical transitions in design performance beyond the scale of individual buildings. One of these involves the creation and application of leading-edge tools, not yet widely available to built environment professions and practitioners, capable of assessing performance across all stages of development at a precinct scale (neighbourhood, community and district) in greenfield, brownfield or greyfield settings. A core aspect here is the development of a new way of modelling precincts, referred to as Precinct Information Modelling (PIM), that provides for transparent sharing and linking of precinct object information across the development life cycle, together with consistent, accurate and reliable access to reference data, including that associated with the urban context of the precinct. Neighbourhoods are the ‘building blocks’ of our cities and represent the scale at which urban design needs to make its contribution to city performance: as productive, liveable, environmentally sustainable and socially inclusive places (COAG 2009). Neighbourhood design constitutes a major area for innovation as part of an urban design protocol established by the federal government (Department of Infrastructure and Transport 2011, see Figure 1). The ability to efficiently and effectively assess urban design performance at a neighbourhood level is in its infancy. This study was undertaken by Swinburne University of Technology, the University of New South Wales, CSIRO and buildingSMART Australasia on behalf of the CRC for Low Carbon Living.

    Scanner Data, Time Aggregation and the Construction of Price Indexes

    The impact of weekly, monthly and quarterly time aggregation on estimates of price change is examined for nineteen supermarket item categories over a fifteen-month period using scanner data. We find that time aggregation choices (the choice of a weekly, monthly or quarterly unit value concept for prices) have a considerable impact on estimates of price change. When chained indexes are used, the difference in price change estimates can be huge, ranging from 0.28% to 29.73% for a superlative (Fisher) index and from an incredible 14.88% to 46,463.71% for a non-superlative (Laspeyres) index. The results suggest that traditional index number theory breaks down when weekly data with severe price bouncing are used, even for superlative indexes. Monthly and, in some cases, even quarterly time aggregation were found to be insufficient to eliminate downward drift in superlative indexes. In order to eliminate chain drift, multilateral index number methods are adapted to provide drift-free measures of price change. Keywords: price indexes, aggregation, scanner data, chain drift, superlative indexes, unit values, multilateral index number methods, rolling window GEKS.
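
    A minimal sketch of the chain-drift effect described above, using synthetic "price bouncing" data rather than the paper's scanner data: chained Laspeyres and Fisher indexes are computed over a window whose final prices and quantities return to their starting values, so any deviation from 1.0 is pure chain drift. All names and numbers below are illustrative assumptions, not the paper's method or data.

        # Synthetic illustration of chain drift (not the paper's data or code).
        import numpy as np

        rng = np.random.default_rng(0)
        T, n = 12, 5                                   # periods, items
        base_p = rng.uniform(1.0, 5.0, n)
        sale = rng.choice([0.6, 1.0], size=(T, n), p=[0.3, 0.7])
        p = base_p * sale                              # periodic deep discounts
        q = 100.0 / p                                  # demand surges when price falls
        p[-1], q[-1] = p[0], q[0]                      # end where we started

        def chained_index(p, q, fisher=True):
            level = 1.0
            for t in range(1, len(p)):
                p0, p1, q0, q1 = p[t - 1], p[t], q[t - 1], q[t]
                lasp = (p1 @ q0) / (p0 @ q0)           # Laspeyres link
                paas = (p1 @ q1) / (p0 @ q1)           # Paasche link
                level *= np.sqrt(lasp * paas) if fisher else lasp
            return level

        # A drift-free index would return to 1.0; any gap below is chain drift.
        print("chained Laspeyres:", chained_index(p, q, fisher=False))
        print("chained Fisher   :", chained_index(p, q, fisher=True))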

    COMPARING THE METHODS OF PREDICTION AND BUSINESS COST ESTIMATION BASED ON INDUSTRY-SPECIFIC, CLUSTERING, AND REGRESSION MODELING

    Background: Being the most common approach, the relative valuation method plays a special role in estimating the value of a business. Many studies consider various applications of multipliers; however, their results are often contradictory. Objective: This article aims to determine the best method for assigning a fair value to the multiplier of the assessee company. Methods: Within the empirical study, the effectiveness of three forecasting methods (industry-specific, cluster, and regression) was compared. Results: Regression modeling is the most accurate approach and outperforms the other methods in terms of MAE and the other model-quality metrics considered. The best multiplier is taken to be the one that achieves the highest values of these quality metrics; the largest share of variance within the data set is explained for the sales-based multiples P/Sales and EV/Sales. The study also determines the best method for identifying groups of peer companies. Conclusion: The proposed cluster approach is superior to the industry-specific approach. In comparing these approaches, the authors also identify the best measure for calculating the typical value of a multiplier within a group of peer companies: the simple average and the median were more accurate than the other calculation methods.
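
    As an illustration of the regression approach to assigning a fair multiplier, the hedged sketch below fits a linear regression of a sales-based multiple (EV/Sales) on two hypothetical fundamentals and compares its MAE with simply applying the peer-group median multiple. The data, features and coefficients are invented for the example and are not from the study.

        # Hypothetical comparison: regression-predicted multiple vs. peer median.
        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.metrics import mean_absolute_error

        rng = np.random.default_rng(1)
        n = 200
        margin = rng.uniform(0.05, 0.35, n)        # profitability
        growth = rng.uniform(-0.05, 0.30, n)       # sales growth
        ev_sales = 1.0 + 6.0 * margin + 4.0 * growth + rng.normal(0, 0.3, n)

        X, y = np.column_stack([margin, growth]), ev_sales
        train, test = slice(0, 150), slice(150, None)

        reg = LinearRegression().fit(X[train], y[train])
        mae_regression = mean_absolute_error(y[test], reg.predict(X[test]))
        mae_peer_median = mean_absolute_error(y[test], np.full(50, np.median(y[train])))

        print(f"MAE, regression : {mae_regression:.3f}")
        print(f"MAE, peer median: {mae_peer_median:.3f}")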

    Towards a Smart World: Hazard Levels for Monitoring of Autonomous Vehicles’ Swarms

    This work explores the creation of quantifiable indices to monitor the safe operation and movement of families of autonomous vehicles (AVs) in restricted, highway-like environments. Specifically, it explores the creation of ad-hoc rules for monitoring the lateral and longitudinal movement of multiple AVs based on behavior that mimics swarm and flock movement (or particle swarm motion). This exploratory work is sponsored by the Emerging Leader Seed grant program of the Mineta Transportation Institute and aims to investigate the feasibility of adapting particle swarm motion to the control of families of autonomous vehicles. In particular, it explores how particle swarm approaches can be augmented by setting safety thresholds and fail-safe mechanisms to avoid collisions in off-nominal situations. This concept integrates the notion of hazard and danger levels (i.e., measures of the “closeness” to a given accident scenario, typically used in robotics) with the concept of safety distance and separation/collision avoidance for ground vehicles. A draft implementation of four hazard-level functions indicates that safety thresholds can be set up to autonomously trigger lateral and longitudinal motion control through three main rules based, respectively, on speed, heading, and braking distance, in order to steer the vehicle and maintain separation/avoid collisions in families of autonomous vehicles. The concepts presented here can be used to set up a high-level framework for developing artificial intelligence algorithms that serve as a back-up to standard machine learning approaches for the control and steering of autonomous vehicles. Although there are no constraints on the concept's implementation, it is expected that this work will be most relevant for highly automated Level 4 and Level 5 vehicles that are capable of communicating with each other and operate in the presence of a ground control center monitoring the operations of the swarm.
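
    The sketch below is a hypothetical example of one such hazard-level rule, the braking-distance one: it maps the gap to the vehicle ahead and the current speed to a value in [0, 1] that could trigger longitudinal control as it approaches 1. The thresholds, deceleration and reaction-time values are assumptions for illustration, not the functions developed in the paper.

        # Hypothetical braking-distance hazard level for one AV following another.
        def braking_hazard(gap_m, speed_mps, decel_mps2=6.0, reaction_s=0.5):
            """Return a hazard level in [0, 1]; 1 means 'act now'."""
            stopping = speed_mps * reaction_s + speed_mps**2 / (2.0 * decel_mps2)
            if gap_m >= 2.0 * stopping:
                return 0.0                                  # ample separation
            if gap_m <= stopping:
                return 1.0                                  # inside stopping distance
            return (2.0 * stopping - gap_m) / stopping      # linear ramp in between

        # Example: 30 m gap at 25 m/s (~90 km/h) -> hazard level 1.0, so the
        # monitoring rule would trigger braking or a lane change.
        print(braking_hazard(gap_m=30.0, speed_mps=25.0))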

    Structural credit risk models and the determinants of credit default swap spreads

    Following financial innovation and the consequences of the 2008-2009 global financial crisis, the interest and resources that researchers and practitioners devote to measuring and modelling credit risk have increased markedly over the last decades. The main objective of this thesis is to explore the determinants of credit spreads, first analysing the performance of theoretical default-risk variables in explaining credit default swap (CDS) spreads and then introducing other firm-specific, macroeconomic, liquidity and credit rating factors. The dataset comprises non-financial European companies over the period 2010 to 2018, and panel data models are used for the econometric analysis. In addition, this study analyses structural credit risk models, namely the Merton (1974) model and some of its limitations and extensions. Our empirical results show that the theoretical determinants are statistically and economically significant and are able to explain 27% of the observed CDS spread levels. After controlling for market liquidity, credit rating, and firm- and market-specific factors, we are able to explain 57% of total CDS spread levels and 21% of spread changes. Moreover, a robustness analysis shows that the investigated determinants perform better in explaining credit spreads when the overall level of credit risk in the market is higher, which is consistent with previous evidence. Lastly, our results suggest that structural credit risk models would benefit from being further developed to account for both theoretical and non-theoretical factors, such as macro-financial variables.
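
    Since the thesis builds on the Merton (1974) structural model, the following sketch shows the textbook version of that model: equity is priced as a call option on firm assets, and the model credit spread follows from the implied value of risky debt. The numerical inputs are hypothetical and are not taken from the thesis.

        # Textbook Merton (1974) model credit spread; inputs are illustrative.
        from math import log, sqrt, exp
        from statistics import NormalDist

        N = NormalDist().cdf

        def merton_spread(V, sigma, D, r, T):
            """Model credit spread (continuous compounding) for debt of face value D due at T."""
            d1 = (log(V / D) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
            d2 = d1 - sigma * sqrt(T)
            debt_value = V * N(-d1) + D * exp(-r * T) * N(d2)   # value of risky debt today
            y = -log(debt_value / D) / T                        # implied yield on the debt
            return y - r                                        # spread over the risk-free rate

        # Firm assets 120, asset volatility 25%, debt face value 100 due in 5 years, r = 2%.
        print(f"{merton_spread(V=120, sigma=0.25, D=100, r=0.02, T=5) * 1e4:.0f} bps")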

    Final report on the evaluation of RRM/CRRM algorithms

    Public deliverable of the EVEREST project. This deliverable provides a definition and a complete evaluation of the RRM/CRRM algorithms selected in D11 and D15 and evolved and refined through an iterative process. The evaluation is carried out by means of simulations using the simulators provided in D07 and D14.

    Analyzing Firm Performance in the Insurance Industry Using Frontier Efficiency Methods

    In this introductory chapter to an upcoming book, the authors discuss the two principal types of efficiency frontier methodologies: the econometric (parametric) approach and the mathematical programming (non-parametric) approach. Frontier efficiency methodologies are useful in a variety of contexts: they can be used for testing economic hypotheses; providing guidance to regulators and policymakers; comparing economic performance across countries; and informing management of the effects of procedures and strategies adopted by the firm. The econometric approach requires the specification of a production, cost, revenue, or profit function as well as assumptions about error terms, and is therefore vulnerable to errors in the specification of the functional form or error term. The mathematical programming (linear programming) approach avoids this type of error and measures any departure from the frontier as relative inefficiency. Because each of these methods has advantages and disadvantages, it is recommended to estimate efficiency using more than one method. An important step in efficiency analysis is the definition of inputs and outputs and their prices. Insurer inputs can be classified into three principal groups: labor, business services and materials, and capital. Three principal approaches have been used to measure outputs in the financial services sector: the asset or intermediation approach, the user-cost approach, and the value-added approach. The asset approach treats firms as pure financial intermediaries and would be inappropriate for insurers because they provide other services. The user-cost method determines whether a financial product is an input or an output based on its net contribution to the revenues of the firm; it requires precise data on products, revenues and opportunity costs, which are difficult to estimate in insurance. The value-added approach is judged the most appropriate method for studying insurance efficiency: it considers all asset and liability categories to have some output characteristics rather than distinguishing inputs from outputs. In order to measure efficiency in the insurance industry, in which outputs are mostly intangible, measurable services must be defined. The three principal services provided by insurance companies are risk pooling and risk-bearing, "real" financial services relating to insured losses, and intermediation. The authors discuss how these services can be measured as outputs in value-added analysis and then summarize the existing efficiency literature.
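
    As a concrete illustration of the mathematical-programming approach, the sketch below solves an input-oriented, constant-returns-to-scale DEA problem for a handful of hypothetical insurers (inputs: labour and capital; output: value added) with a generic LP solver. The data and variable names are invented for the example; this is a generic DEA formulation, not the chapter's own empirical setup.

        # Input-oriented CCR DEA efficiency scores on made-up insurer data.
        import numpy as np
        from scipy.optimize import linprog

        X = np.array([[20., 30.], [40., 25.], [30., 50.], [25., 20.]])  # inputs per firm
        Y = np.array([[100.], [90.], [120.], [95.]])                    # outputs per firm

        def dea_score(o, X, Y):
            n, m = X.shape                              # firms, inputs
            s = Y.shape[1]                              # outputs
            c = np.r_[1.0, np.zeros(n)]                 # minimise theta
            A_in  = np.c_[-X[o], X.T]                   # sum_j lam_j x_ij <= theta * x_io
            A_out = np.c_[np.zeros(s), -Y.T]            # sum_j lam_j y_rj >= y_ro
            res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                          b_ub=np.r_[np.zeros(m), -Y[o]],
                          bounds=[(None, None)] + [(0, None)] * n)
            return res.x[0]                             # theta = 1 means frontier-efficient

        for o in range(len(X)):
            print(f"firm {o}: efficiency = {dea_score(o, X, Y):.3f}")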

    High-equity multi-asset investing versus pure equity investing: a study of risk-adjusted performance

    Abstract: Investors are faced with a daunting number of decisions and options on the path to wealth accumulation over the course of their investing lives, and often find the wealth accumulation process overwhelming. Investors and their advisors also frequently lack the time and means to adequately assess the trade-offs associated with asset allocation decisions, and therefore have to trust processes on a tacit basis. Research shows that Equity, as an asset class in isolation, has provided the largest cumulative return in the South African context over the last century. However, empirical research assessing this against the outcomes of multi-asset portfolios is rare in the South African context, and existing studies have not specifically considered a time horizon appropriate to the period in which Property has explicitly been separated as an asset class in South Africa. The purpose of the study was twofold: firstly, to test whether investors are rewarded by moving from multi-asset high-Equity investing into a pure Equity portfolio; and secondly, to contrast how the risk-adjusted reward presented to an investor changes across the risk spectrum of the efficient frontier as the investor moves from less volatile to more volatile asset classes. The general stylised graphical depiction of efficient frontiers, and the common understanding among retail investors, is that Equity provides the highest rate of return and the largest ending wealth level over time. This stylisation implies that marginal returns for the risk taken by investing in Equity only are still on offer at the far end of the efficient frontier, where the assets with the largest volatilities reside. The perception that Equities may offer the largest cumulative return over time, relative to other single asset classes, is often confused with Equity investing providing the highest ending wealth levels for retail investors. Investors also have no direct means of considering volatility and risk in their allocations to Equity, which could result in Equities receiving a large, or even full, allocation within an investor’s portfolio. Retail investors are not generally equipped with the tools to weigh the incremental return of such decisions against the incremental risk, nor to consider effects such as volatility drag on the assets in their portfolio. In this research, risk-adjusted return metrics were determined for the FTSE/JSE ALSI, SA Listed Property and BEASSA ALBI Indexes. The return series of each index was then used to construct risk-adjusted return metrics for multi-asset portfolios. Three sets of multi-asset investments were created from the asset class proxies’ historical returns to represent the general risk-profiling outcomes offered to investors, namely low-Equity, medium-Equity and high-Equity portfolios. The results were then assessed to gauge whether investors received marginal benefits from increasing their allocation to Equities, with the FTSE/JSE ALSI Index used as the comparison for a pure Equity portfolio. It was found that the multi-asset portfolios generated larger risk-adjusted returns than an Equity-only portfolio, and that their ending wealth levels were larger in the majority of instances over the sample period of the study. The marginal returns relative to the risk taken diminished rapidly for allocations to Equity larger than 50% of total assets. The conclusion was that an investor did not need to accept the additional risk of an Equity-only portfolio in order to achieve a greater ending wealth level over the sample period studied; an investor can potentially achieve a greater ending wealth level by accepting a smoother return profile, as reflected by a lower return standard deviation, while not forgoing returns that contribute towards ending wealth over a cumulative 15-year period within the South African investment context. M.Com. (Investment Management)
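
    The sketch below illustrates, on synthetic monthly returns rather than the FTSE/JSE ALSI, SA Listed Property and BEASSA ALBI series used in the study, how an ending-wealth and Sharpe-ratio comparison between an Equity-only series and a high-Equity blend can be set up, and why volatility drag can let a smoother blend end at a comparable wealth level. All return parameters are assumptions for illustration only.

        # Synthetic comparison of an equity-only series vs. a high-equity blend.
        import numpy as np

        rng = np.random.default_rng(7)
        months = 15 * 12                                    # 15-year horizon
        equity = rng.normal(0.010, 0.050, months)           # volatile asset
        bonds  = rng.normal(0.007, 0.015, months)           # smoother asset
        blend  = 0.7 * equity + 0.3 * bonds                 # high-equity multi-asset mix
        rf_m   = 0.005                                      # monthly risk-free rate

        def summarise(name, r):
            ending_wealth = np.prod(1.0 + r)                # growth of 1 unit invested
            sharpe = (r.mean() - rf_m) / r.std(ddof=1) * np.sqrt(12)
            print(f"{name}: ending wealth x{ending_wealth:.2f}, Sharpe {sharpe:.2f}")

        summarise("equity only      ", equity)
        summarise("70/30 multi-asset", blend)
        # Volatility drag: geometric growth is roughly mean - 0.5*variance per period,
        # so the smoother blend can end near or above the equity-only wealth level
        # despite a lower arithmetic mean return.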