
    Crash risk estimation and assessment tool

    Currently in Australia, there are no decision support tools for traffic and transport engineers to assess the crash risk potential of proposed road projects at the design level. A selection of equivalent tools already exists for traffic performance assessment, e.g. aaSIDRA or VISSIM. The Urban Crash Risk Assessment Tool (UCRAT) was developed for VicRoads by ARRB Group to promote methodical identification of future crash risks arising from proposed road infrastructure, where safety cannot be evaluated from past crash history. The tool will assist practitioners with key design decisions to arrive at the safest and most cost-optimal design options. This paper details the development and application of the UCRAT software. This professional tool may be used to calculate an expected mean number of casualty crashes for an intersection, a road link or a defined road network consisting of a number of such elements. The mean number of crashes provides a measure of the risk associated with the proposed functional design and allows evaluation of alternative options. The tool is based on historical data for existing road infrastructure in metropolitan Melbourne and takes into account the influence of key design features, traffic volumes, road function and the speed environment. Crash prediction modelling and risk assessment approaches were combined to develop its unique algorithms. The tool has application in such projects as road access proposals associated with land use developments, public transport integration projects and new road corridor upgrade proposals.

    Geodesign in Pampulha cultural and heritage urban area: Visualization tools to orchestrate urban growth and dynamic transformations

    This paper discusses the role of visualization in the Geodesign methodology, considering its applications in the case study of the Pampulha region of Belo Horizonte, Minas Gerais, Brazil. In order to consider the opinions of the participants, their efforts were recorded in different steps of the process, at different stages of the Geodesign iterations, and different possibilities of visualization were tested. The Geodesign methodology was applied in different applications and with different tools. The goal was to determine whether the different techniques and tools used in the Geodesign process contributed to an improved understanding of the data and problem context, and to derive guidelines for improved Geodesign techniques and tools.

    Outsourcing and acquisition models comparison related to IT supplier selection decision analysis

    This paper presents a comparison of acquisition models related to decision analysis of IT supplier selection. The main standards are: Capability Maturity Model Integration for Acquisition (CMMI-ACQ), ISO/IEC 12207 Information Technology / Software Life Cycle Processes, IEEE 1062 Recommended Practice for Software Acquisition, the IT Infrastructure Library (ITIL) and the Project Management Body of Knowledge (PMBOK) guide. The objective of this paper is to compare these models and identify their advantages and disadvantages for the future development of a decision model for IT supplier selection.

    From flowers to palms: 40 years of policy for online learning

    This year sees the 40th anniversary of the first policy paper on the use of computers in higher education in the United Kingdom. The publication of this paper marked the beginning of the field of learning technology research and practice in higher education. Over the past 40 years, policy has at various points drawn from different communities and provided the roots for a diverse field of learning technology researchers and practitioners. This paper presents a review of learning technology-related policy over the past 40 years. The purpose of the review is to make sense of the current position in which the field finds itself, and to highlight lessons that can be learned from the implementation of previous policies. Conclusions drawn from the review of 40 years of learning technology policy suggest that few challenges are genuinely new, and point to a potential return to individual innovation.

    Evaluating Innovation

    In their pursuit of the public good, foundations face two competing forces -- the pressure to do something new and the pressure to do something proven. The epigraph to this paper, "Give me something new and prove that it works," is my own summary of what foundations often seek. These pressures come from within the foundations -- their staff or boards demand them, not the public. The aspiration to fund things that work can be traced to the desire to be careful, effective stewards of resources. Foundations' recognition of the growing complexity of our shared challenges drives the increased emphasis on innovation. Issues such as climate change, political corruption, and digital learning and work environments have enticed new players into the social problem-solving sphere and have convinced more funders of the need to find new solutions. The seemingly mutually exclusive desires for doing something new and doing something proven are not new, but as foundations have grown in number and size, the visibility of the paradox has risen accordingly.

    Even as foundations seek to fund innovation, they are also seeking measurements of those investments' success. Many people's first response to the challenge of measuring innovation is to declare the intention oxymoronic. Innovation is by definition amorphous, full of unintended consequences, and a creative, unpredictable process -- much like art. Measurements, assessments, and evaluation are -- also by most definitions -- about quantifying activities and products. There is always the danger of counting what you can count, even if what you can count doesn't matter.

    For all our awareness of the inherent irony of trying to measure something that we intend to be unpredictable, many foundations (and others) continue to try to evaluate their innovation efforts. They are, as Frances Westley, Brenda Zimmerman, and Michael Quinn Patton put it in "Getting to Maybe", grappling with "intentionality and complexity -- (which) meet in tension."

    It is important to see the struggles to measure for what they are -- attempts to evaluate the success of the process of innovation, not necessarily the success of the individual innovations themselves. This is not a semantic difference. What foundations are trying to understand is how to go about funding innovation so that more of it can happen. Examples in this report were chosen because they offer a look at innovation within the broader scope of a foundation's work. This paper is the fifth in a series focused on field building. In this context I am interested in where evaluation fits within an innovation strategy and where these strategies fit within a foundation's broader funding goals. I will present a typology of innovation drawn from the OECD that can be useful in other areas. I lay the decisions about evaluation made by Knight, MacArthur, and the Jewish New Media Innovation Funders against their programmatic goals. Finally, I consider how evaluating innovation may improve our overall use of evaluation methods in philanthropy.