9,582 research outputs found

    A Critical Assessment of the Traditional Residential Real Estate Broker Commission Rate Structure

    While real estate brokers have long set their fee as a straight percentage of a home's sale price, this formula is an anomaly and a primary reason why such fees may be inflated by more than $30 billion annually. Although competitive pressures ordinarily produce a fee structure reflecting costs, real estate broker commissions are strangely unrelated to either the quantity or quality of the service rendered or even to the value provided. Rather, this fee has been based solely on the price of the home. (It is as if divorce lawyers set their fee as a flat percentage of a client's net worth, irrespective of whether the divorce was amicable without kids or involved bitterly contested custody and other issues.) Oddly, not only is there no evidence that it is any more costly to sell higher-priced homes than median-priced properties, but it is possible that the opposite is true. Furthermore, the straight percentage fee formula creates little incentive for real estate agents to provide home buyers or sellers with additional value. The article analyzes five elements of the traditional residential real estate broker rate structure, the most important of which are: 1) setting fees as a percentage of sale price, 2) letting the seller's broker set the fee received by the buyer's broker, and 3) refusing to unbundle the price of a full package of services. After explaining the conditions under which such rate elements would be justified, this article finds that those conditions do not generally exist in the real estate brokerage market. Moreover, it identifies more than a half dozen harms that the rate elements cause to home buyers and sellers. For example, buyers are often not alerted to attractive homes because the rate structure leads traditional agents to intentionally avoid showing them. Meanwhile, many buyers do not even consider negotiating the fee paid to their broker because the rate structure causes them to believe their brokers' services cost them nothing. After this criticism, the article suggests that consumers would benefit most from a fee-for-service approach, combining flat fees, hourly fees, and bonuses, including percentages of extra value created, and it identifies currently available examples of some of these options. After reviewing eight reasons why incumbents are able to protect the current structure, the article suggests four questions that consumer media should teach consumers to ask to help undermine the industry's protectionist practices.
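    As a rough numerical illustration of the article's central claim (all figures below are hypothetical and are not taken from the article), a percentage commission scales with the sale price even when the work performed does not, whereas a fee-for-service charge tracks the work itself:

```python
# Hypothetical illustration: a percentage commission scales with sale price,
# while a fee-for-service charge tracks the work actually performed.

COMMISSION_RATE = 0.05       # assumed 5% traditional commission
FLAT_LISTING_FEE = 4_000     # assumed flat fee for a standard listing package
HOURLY_RATE = 150            # assumed hourly rate for additional services

def traditional_fee(sale_price: float) -> float:
    """Straight percentage-of-sale-price commission."""
    return COMMISSION_RATE * sale_price

def fee_for_service(hours_of_extra_work: float) -> float:
    """Flat fee plus hourly billing for additional services."""
    return FLAT_LISTING_FEE + HOURLY_RATE * hours_of_extra_work

for price in (300_000, 900_000):
    print(f"${price:,} home: commission = ${traditional_fee(price):,.0f}, "
          f"fee-for-service (10 extra hours) = ${fee_for_service(10):,.0f}")
```

    The commission triples from $15,000 to $45,000 as the price triples, while the fee-for-service total stays at $5,500 for the same amount of work, which is the disconnect between fee and cost that the article criticizes.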

    Recreational demand modelling for whitewater kayaking in Ireland.

    The primary objective of the thesis is to study the demand for an outdoor recreational pursuit in Ireland. The thesis uses and extends the different travel cost methods of valuation for non-market goods. The vehicle for the research is whitewater kayaking recreation in Ireland. A new method for dealing with the contentious issue of measuring the opportunity cost of time in recreational demand modelling is developed, and a number of approaches are adopted to investigate the heterogeneity of tastes and preferences in the Irish kayaking community. Approaches to collecting travel cost data using the internet are also discussed. The first part of the thesis (Chapter 2) describes some of the main use and non-use values associated with whitewater river systems. It also reviews the development of the sport of whitewater kayaking in Ireland. Chapter 3 examines the numerous valuation methodologies (and their applications) used in the field of non-market valuation. Following this, Chapter 4 reviews the single-site study on the Roughty river, where the non-market benefits accruing from the preservation of "natural" conditions on one Irish river are estimated. This chapter focuses on a single river and the development threat posed by investments in new hydroelectric plants on Irish rivers. In Chapter 5 the design and development of the main survey instrument are described. This chapter also gives details on survey administration, procedures, database structure, and an analysis of the responses to the survey. Chapter 6 then investigates the valuation of time in recreation demand models. It uses a RUM model to analyze site choices made by Irish kayaking participants, with emphasis placed on constructing estimates of individuals' opportunity cost of time using secondary data. The idea is motivated by a standard two-constraint model in which people can smoothly trade time for money at the market wage rate. Chapters 7 and 8 make use of the multi-attribute kayaking data to investigate the heterogeneity of tastes in the kayaking community. Chapter 7 develops an exogenous approach to incorporating preference heterogeneity using a "clustered" RUM model of whitewater kayaking site choice. In Chapter 8 two empirical models are used to endogenously take account of individual heterogeneity in analyzing whitewater kayaking site choice decisions: the random parameter logit model and the latent class model (LCM).
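    To make the RUM site-choice idea concrete, the following minimal sketch estimates a conditional logit model in which the only attribute is travel cost. The data are simulated and the single-attribute specification is an assumption for illustration; it is not the thesis's actual model or dataset.

```python
# Minimal sketch of a random utility (conditional logit) site-choice model with a
# single travel-cost attribute; the data are simulated, not from the thesis.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_trips, n_sites = 500, 4
travel_cost = rng.uniform(5, 60, size=(n_trips, n_sites))  # cost of reaching each site

true_beta = -0.08                                           # disutility of travel cost
utility = true_beta * travel_cost + rng.gumbel(size=(n_trips, n_sites))
choice = utility.argmax(axis=1)                             # chosen site for each trip

def neg_log_likelihood(params):
    v = params[0] * travel_cost                             # systematic utility
    v -= v.max(axis=1, keepdims=True)                       # numerical stability
    log_p = v - np.log(np.exp(v).sum(axis=1, keepdims=True))
    return -log_p[np.arange(n_trips), choice].sum()

result = minimize(neg_log_likelihood, x0=[0.0], method="BFGS")
print("Estimated travel-cost coefficient:", result.x[0])
```

    A valuation of the opportunity cost of time could enter the same framework by adding the time spent travelling, priced at some fraction of the wage, to the travel-cost attribute.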

    Capturing time in space : Dynamic analysis of accessibility and mobility to support spatial planning with open data and tools

    Understanding the spatial patterns of accessibility and mobility is key to comprehending the functioning of our societies. Hence, their analysis has become increasingly important for both scientific research and spatial planning. Spatial accessibility and mobility are closely related concepts: accessibility describes the potential for movement, whereas spatial mobility describes the realized movements of individuals. While both spatial accessibility and mobility have been widely studied, the understanding of how time and temporal change affect accessibility and mobility has been rather limited thus far. In the era of 'big data', the wealth of temporally sensitive spatial data has made it possible, better than ever, to capture and understand the temporal realities of spatial accessibility and mobility, and hence to better understand the dynamics of our societies and complex living environment. In this thesis, I aim to develop novel approaches and methods to study the spatio-temporal realities of our living environments via the concepts of accessibility and mobility: how people can access places, how they actually move, and how they use space. I inspect these dynamics at several temporal granularities, covering hourly, daily, monthly, and yearly observations and analyses. With novel big data sources, methodological development and careful assessment of the information extracted from them are extremely important, as these data are increasingly used to guide decision-making. Hence, I investigate the opportunities and pitfalls of different data sources and methodological approaches in this work. Contextually, I aim to reveal the role of time and the mode of transportation in relation to spatial accessibility and mobility, in both urban and rural environments, and discuss their role in spatial planning. I base my findings on five scientific articles on studies carried out in Peruvian Amazonia; national parks of South Africa and Finland; Tallinn, Estonia; and the Helsinki metropolitan area, Finland. I use and combine data from various sources to extract knowledge from them, including GPS devices, transportation schedules, mobile phones, social media, statistics, land-use data, and surveys. My results demonstrate that spatial accessibility and mobility are highly dependent on time, with clear diurnal and seasonal changes. Hence, it is important to consider temporality when analyzing accessibility, as people, transport, and activities all fluctuate as a function of time, which affects, for example, the spatial equality of reaching services. In addition, different transport modes should be considered, as there are clear differences between them. Furthermore, I show that, in addition to the observed spatial population dynamics, nature's own dynamism also affects accessibility and mobility on a regional level due to seasonal variation in river levels. The visitation patterns in national parks also vary significantly over time, as can be observed from social media. Methodologically, this work demonstrates that with a sophisticated fusion of methods and data, it is possible to assess, enrich, harmonize, and increase the spatial and temporal accuracy of data that can be used to better inform spatial planning and decision-making. Finally, I wish to emphasize the importance of bringing scientific knowledge and tools into practice. Hence, all the tools, analytical workflows, and data are openly available whenever possible. This approach has helped to bring the knowledge and tools into practice with relevant stakeholders involved in spatial planning.
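    A common way to express the time dependence of accessibility described above is a cumulative-opportunities measure computed separately for each hour of the day. The sketch below uses a synthetic travel-time matrix with assumed peak-hour congestion; the data, thresholds, and peak factors are placeholders, not results from the thesis.

```python
# Sketch of an hourly cumulative-opportunities accessibility measure: for each
# origin, count the opportunities (e.g. jobs) reachable within a time threshold,
# separately for each hour of day. All data here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(1)
n_origins, n_destinations, hours = 50, 30, 24
base_time = rng.uniform(5, 90, size=(n_origins, n_destinations))  # minutes, free flow

# Assume congestion inflates travel times around the morning and evening peaks.
hour_idx = np.arange(hours)
peak_factor = (1 + 0.5 * np.exp(-((hour_idx - 8) ** 2) / 8)
                 + 0.5 * np.exp(-((hour_idx - 17) ** 2) / 8))
travel_time = base_time[None, :, :] * peak_factor[:, None, None]  # (hour, origin, dest)

jobs = rng.integers(100, 5000, size=n_destinations)               # opportunities per dest
THRESHOLD = 30                                                     # minutes

reachable = travel_time <= THRESHOLD
accessibility = (reachable * jobs[None, None, :]).sum(axis=2)      # jobs reachable
print("Mean jobs reachable within 30 min, by hour of day:")
print(accessibility.mean(axis=1).round(0))
```

    The hourly profile that this produces is exactly the kind of diurnal fluctuation the thesis argues should be taken into account when assessing the spatial equality of reaching services.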

    Guidelines for Using StreetLight Data for Planning Tasks

    The Virginia Department of Transportation (VDOT) has purchased a subscription to the StreetLight (SL) Data products, which mainly offer origin-destination (OD) related metrics derived from crowdsourced data. Users can work with such a data source to quickly estimate origin-destination trip tables. Nonetheless, the SL metrics rely heavily on data points sampled from smartphone applications and global positioning system (GPS) devices, which may be subject to potential bias and coverage issues. In particular, the quality of the SL metrics in relation to meeting the needs of various VDOT work tasks is not clear. Guidelines on the use of the SL metrics are therefore of interest to VDOT. This study aimed to help VDOT understand the performance of the SL metrics in different application contexts. Specifically, existing studies that examined the potential of SL metrics were reviewed and summarized. In addition, the experiences, comments, and concerns of existing and potential users were collected through online surveys. The surveys were primarily distributed to VDOT engineers and planners as well as other professionals in planning organizations and consultancies in Virginia. Their typical applications of the SL metrics were identified, and their feedback was used to guide and inform the design of the guidelines. To support the development of the guidelines, the quality of the SL metrics was independently evaluated with six testing scenarios covering annual average daily traffic (AADT), origin-destination trips, traffic flow on road links, turning movements at intersections, and truck traffic. The research team sought ground-truth data from different sources such as continuous count stations, toll transaction data, and VDOT's internal traffic estimations. Several methods were used to compare the benchmark data with the corresponding SL metrics. The evaluation results were mixed. The latest SL AADT estimates showed relatively small absolute percentage errors, whereas using the SL metrics to estimate OD trips, traffic counts on roadway segments and at intersections, and truck traffic did not yield a relatively low and stable error rate. Large percentage errors were often associated with lower volume levels estimated from the SL metrics. In addition, using the SL metrics from individual periods as the input for estimating these traffic measures resulted in larger errors, whereas aggregating data from multiple periods helped reduce the errors, especially for low-volume conditions. Depending on project purposes, the aggregation can be based on metrics for multiple days, weeks, or months. The results from the literature review, surveys, and independent evaluations were synthesized to develop the guidelines for using SL data products. The guidelines focus on five main aspects: (1) a summary of using SL data for typical planning work tasks; (2) general guidance for data extraction and preparation; (3) using the SL metrics in typical application scenarios; (4) quality issues and calibration of the SL metrics; and (5) techniques and tools for working with the SL metrics. The guidelines are accompanied by illustrative examples that allow users to work through the given use cases. Based on the results, the study recommends that VDOT's Transportation and Mobility Planning Division (TMPD) should encourage and support the use of the guidelines in projects involving SL data, and that TMPD should adopt a checklist (table) for reporting performance, calibration efforts, and benchmark data involved in projects that use the SL metrics.
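    The kind of comparison described above, absolute percentage error of estimated volumes against benchmark counts, computed per period and again after aggregation, can be sketched as follows. The column names, the monthly periodization, and the data are hypothetical, not the study's actual benchmarks or SL extracts.

```python
# Sketch: absolute percentage error (APE) of volume estimates against benchmark
# counts, per individual period and for the multi-period aggregate. The DataFrame
# columns ("benchmark", "estimate") and all data are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
periods = pd.date_range("2023-01-01", periods=12, freq="MS")     # monthly periods
benchmark = rng.integers(200, 5000, size=len(periods))           # ground-truth counts
estimate = benchmark * rng.normal(1.0, 0.25, size=len(periods))  # noisy estimates

df = pd.DataFrame({"period": periods, "benchmark": benchmark, "estimate": estimate})

# Per-period absolute percentage error
df["ape"] = (df["estimate"] - df["benchmark"]).abs() / df["benchmark"] * 100
print("Mean APE across individual periods: %.1f%%" % df["ape"].mean())

# Aggregating several periods before comparing tends to damp sampling noise
agg_ape = abs(df["estimate"].sum() - df["benchmark"].sum()) / df["benchmark"].sum() * 100
print("APE of the 12-month aggregate:       %.1f%%" % agg_ape)
```

    Under these assumptions the aggregate error is typically much smaller than the average per-period error, which mirrors the study's finding that multi-period aggregation reduces errors, especially for low-volume conditions.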

    Storing and querying evolving knowledge graphs on the web


    Data-driven methodologies for evaluation and recommendation of energy efficiency measures in buildings. Applications in a big data environment

    [Thesis by compendium of publications] In order to reach the goal set in the Paris Agreement of limiting the rise in global average temperature to well below 2 ºC compared to pre-industrial levels, massive efforts to reduce global greenhouse gas emissions are required. The building sector is currently responsible for about 28% of total global CO2 emissions, meaning that substantial savings potential lies in the correct energy management of buildings and the implementation of renovation strategies. Digital tools and data-driven techniques are rapidly gaining momentum as approaches that are able to harness the large amount of data gathered in the building sector and provide solutions able to reduce the carbon footprint of the built environment. The objective of this doctoral thesis is to investigate the potential of data-driven techniques in different applications aimed at improving energy efficiency in buildings. More specifically, different novel approaches to verify energy savings, characterize consumption patterns, and recommend energy retrofitting strategies are described. The presented methodologies prove to be powerful tools that can produce valuable, actionable insights for energy managers and other stakeholders. Initially, a comprehensive and detailed overview is provided of different state-of-the-art methodologies to quantify energy efficiency savings and to predict the impact of retrofitting strategies in buildings. Strengths and weaknesses of the analyzed approaches are discussed, and guidance is provided to identify the best-performing methodology depending on the case under analysis and the data available. The reviewed approaches include statistical and machine learning models, Bayesian methods, deterministic approaches, and hybrid techniques combining deterministic and data-driven models. Subsequently, a novel data-driven methodology is proposed to perform measurement and verification calculations, with the main focus on non-residential buildings and facilities. The approach is based on the extraction of frequent consumption profile patterns and on a novel technique able to evaluate the building's weather dependence. This information is used to design a model that can accurately estimate achieved energy savings at daily scale. The method was tested on two use cases, one using synthetic data generated with a building energy simulation software and one using monitoring data from three existing buildings in Catalonia. The results obtained with the proposed methodology are compared with those provided by a state-of-the-art model, showing improved accuracy and increased robustness to missing data. The second data-driven tool developed in this research work is a Bayesian linear regression methodology to calculate hourly energy baseline predictions in non-residential buildings and characterize their consumption patterns. The approach was tested on 1578 non-residential buildings that are part of a large open dataset of building energy consumption. The results show that the Bayesian methodology is able to provide accurate baseline estimations with an explainable and intuitive model. Special focus is also given to uncertainty estimations, which are inherently provided by Bayesian techniques and are of great importance in risk assessments for energy efficiency projects. Finally, a concept methodology that can be used to recommend and prioritize energy efficiency projects in buildings and facilities is presented.
This data-driven approach is based on the comparison of groups of similar buildings and on an algorithm that can map the savings obtained with energy renovation strategies to the characteristics of the buildings where they were implemented. Recommendations for the implementation of such a methodology in big data building energy management platforms are provided.
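    As a minimal sketch of a Bayesian hourly baseline of the kind described, the example below fits scikit-learn's BayesianRidge to synthetic hourly consumption explained by outdoor temperature and hour-of-day dummies, and reports predictive uncertainty. The model choice, features, and data are stand-in assumptions, not the thesis's exact formulation or dataset.

```python
# Sketch of a Bayesian hourly energy baseline with uncertainty, using
# scikit-learn's BayesianRidge as a stand-in model. Features (temperature,
# hour-of-day dummies) and the consumption series are synthetic.
import numpy as np
from sklearn.linear_model import BayesianRidge

rng = np.random.default_rng(3)
n_hours = 24 * 90                                   # ~3 months of hourly data
hour = np.arange(n_hours) % 24
temperature = (12 + 8 * np.sin(2 * np.pi * np.arange(n_hours) / (24 * 365))
               + rng.normal(0, 2, n_hours))

# Synthetic consumption: occupancy-driven daytime load plus cooling demand
consumption = (50 + 30 * ((hour >= 8) & (hour <= 18))
               + 2.0 * np.clip(temperature - 18, 0, None)
               + rng.normal(0, 5, n_hours))

hour_dummies = np.eye(24)[hour]                     # one-hot encoding of hour of day
X = np.column_stack([temperature, hour_dummies])

model = BayesianRidge()
model.fit(X, consumption)

y_pred, y_std = model.predict(X[:24], return_std=True)   # baseline + uncertainty
print("Predicted first-day baseline (kWh):", y_pred.round(1))
print("Posterior predictive std dev:      ", y_std.round(2))
```

    The predictive standard deviation illustrates the point made above: Bayesian baselines come with uncertainty estimates that can feed directly into risk assessments of energy efficiency projects.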

    Optimal integration of wind energy with a renewable based microgrid for industrial applications.

    Wind energy in urban environments is a rapidly developing technology influenced by the terrain, local wind characteristics, and features of the urban environment such as building architecture. Urban terrain is more complex than open terrain and has a critical influence on wind flow at the studied site. This work proposes integrating the surrounding buildings into the model of the studied site and then simulating the wind flow, considering both simple and advanced turbulence models, in order to quantify the wind flow fields in an urban environment and evaluate the potential wind energy. These simulations are conducted with an accessible computational fluid dynamics tool (Windsim) implementing available commercial wind turbines, and are performed on a case study at Agder county in southern Norway for an industrial facility specialized in food production. Several simulations were considered and repeated to achieve convergence after adding the buildings to the domain, mainly simulating the wind flow patterns, power density, and annual energy production. These simulations are compared with previous results, which applied different manipulation techniques to the same site, where the elevation and roughness data were adjusted to mimic the actual conditions in the studied urban site. The current approach (adding the buildings) showed a reduction in average wind speed and annual energy production at certain levels, with increased turbulence intensity around the buildings. Moreover, a feasibility study is conducted to analyze the techno-economics of the facility's hybrid system, including the planned installation of a wind energy system, using commercial software (HOMER). The simulation results indicated that HOMER is conservative in estimating the annual energy production of both wind and solar power systems. Nevertheless, the analysis showed that integrating a 600 kW wind turbine would significantly reduce dependence on the grid and transform the facility into a prosumer, with more than 1.6 GWh traded with the grid annually. However, the proposed system's net present cost would be 1.43 M USD based on installation, maintenance, and trading with the grid, without including self-consumption, which accounts for approximately 1.5 GWh annually. Moreover, the proposed system has a low levelized cost of energy of $0.039 per kWh, which is slightly above the levelized cost of wind energy but 2 to 4 times less than that of the installed solar panels.
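    For readers unfamiliar with the two techno-economic quantities quoted above, the sketch below shows how a net present cost and a levelized cost of energy can be computed from discounted cash flows and discounted energy production. All inputs are illustrative placeholders, not the values used in the study or by HOMER.

```python
# Worked sketch of net present cost (NPC) and levelized cost of energy (LCOE)
# from discounted costs and discounted energy. All inputs are placeholders.
import numpy as np

CAPEX = 900_000            # assumed turbine installation cost, USD
OPEX = 25_000              # assumed annual operation & maintenance, USD/year
ANNUAL_ENERGY = 1_600_000  # assumed annual energy production, kWh/year
DISCOUNT_RATE = 0.06
LIFETIME = 20              # years

years = np.arange(1, LIFETIME + 1)
discount = 1 / (1 + DISCOUNT_RATE) ** years

net_present_cost = CAPEX + (OPEX * discount).sum()
discounted_energy = (ANNUAL_ENERGY * discount).sum()
lcoe = net_present_cost / discounted_energy

print(f"Net present cost: {net_present_cost:,.0f} USD")
print(f"LCOE: {lcoe:.3f} USD/kWh")
```

    The study's figures (1.43 M USD NPC and $0.039 per kWh) come out of the same kind of calculation, but with HOMER's own cash-flow model, grid-trading revenues, and project-specific inputs.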

    Load-shifting inside production lines

    This thesis aims to investigate possibilities for optimizing the schedules of a factory's production lines in order to minimize the cost of the electrical energy used. During this research, the different operational constraints have to be identified and taken into account during optimization. Finally, based on the identified limitations and other necessary assumptions, models of the factory processes are devised and optimized with suitable algorithms.
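    One simple way to formalize this load-shifting idea is as a linear program that allocates production across hours so that the electricity bill is minimized while a daily production target and an hourly line capacity are respected. The hourly prices, energy-per-unit figure, and capacity limit below are invented for illustration; a real factory model would add the operational constraints the thesis identifies.

```python
# Minimal load-shifting sketch as a linear program: choose how many units to
# produce in each hour so that total electricity cost is minimized, subject to a
# daily production target and an hourly capacity limit. All numbers are invented.
import numpy as np
from scipy.optimize import linprog

hours = 24
price = np.array([0.08] * 7 + [0.20] * 12 + [0.12] * 5)   # assumed EUR/kWh per hour
energy_per_unit = 3.0                                      # assumed kWh per unit produced
daily_target = 400                                         # units that must be produced
hourly_capacity = 40                                       # max units per hour

# Decision variables: units produced in each hour.
c = price * energy_per_unit                                # cost per unit, by hour
A_eq = np.ones((1, hours))                                 # sum of production = target
b_eq = [daily_target]
bounds = [(0, hourly_capacity)] * hours

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print("Optimal hourly schedule (units):", res.x.round(1))
print("Minimum daily energy cost (EUR): %.2f" % res.fun)
```

    The solver naturally pushes production into the cheap night-time hours up to the capacity limit, which is the cost-saving behaviour the thesis sets out to exploit.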

    The Negotiations Process and Structures

    [Excerpt] This chapter examines the process by which unions and employers negotiate collective agreements and the structures they use for those negotiations, continuing the analysis of the middle (functional) level of labor relations activity. It explains the dynamics of negotiations and the factors that lead to strikes, and then goes on to discuss the different bargaining structures used in negotiations.

    The High-Pressure U.S. Labor Market of the 1990s

    Keywords: macroeconomics, high pressure, U.S. labor market, labor market, 1990s