
    Multiple linear regression analysis of factors affecting the consumption

    Econometrics provides researchers with methods, theoretical foundations, and procedures that allow the formulation and estimation of economic models that explain the study variable over a reference time period, as well as predictions of the behavior of the studied reality based on the explanatory variables. The entire process, analyzed from an econometric standpoint once the model has been formulated and estimated, leads to a very important phase: statistical validation, which helps the researcher ensure that the model satisfactorily passes a series of tests. These tests allow the model to be used not just to explain the behavior of the dependent variable under study, but also to make predictions based on scenarios of occurrence of the explanatory variables included in the model, offering theoretical and practical support for formulating policies related to the studied phenomenon. This research aims to generate the first elements needed to understand private consumption behavior in India in the period from 2012 to 201
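    The estimate-then-validate workflow described above can be illustrated with a small sketch in Python using statsmodels. The variable names (consumption, income, interest_rate) and all figures are hypothetical placeholders, not the study's data; the point is only the sequence of estimation, diagnostic testing, and scenario-based prediction.

```python
# Minimal sketch of the estimate-then-validate workflow described above.
# Variables (consumption, income, interest_rate) and values are hypothetical.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson, jarque_bera

# Hypothetical annual data for illustration only
data = pd.DataFrame({
    "consumption":   [820, 865, 910, 948, 1001, 1060, 1115],
    "income":        [1000, 1050, 1105, 1150, 1210, 1280, 1345],
    "interest_rate": [7.5, 7.2, 6.9, 6.5, 6.3, 6.0, 5.8],
})

X = sm.add_constant(data[["income", "interest_rate"]])  # explanatory variables
y = data["consumption"]                                 # dependent variable

model = sm.OLS(y, X).fit()

# Statistical validation: coefficient t-tests, overall F-test, R^2,
# and residual diagnostics before using the model for prediction.
print(model.summary())
print("Durbin-Watson:", durbin_watson(model.resid))   # residual autocorrelation
print("Jarque-Bera:", jarque_bera(model.resid))       # residual normality

# Prediction under a hypothetical scenario of the explanatory variables
scenario = pd.DataFrame({"const": [1.0], "income": [1400.0], "interest_rate": [5.5]})
print("predicted consumption:", model.predict(scenario).iloc[0])
```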

    The permafrost carbon inventory on the Tibetan Plateau : a new evaluation using deep sediment cores

    Acknowledgements: We are grateful to Dr. Jens Strauss and the two other anonymous reviewers for their insightful comments on an earlier version of this manuscript, and we thank the members of the IBCAS Sampling Campaign Teams for their assistance in the field investigation. This work was supported by the National Basic Research Program of China on Global Change (2014CB954001 and 2015CB954201), the National Natural Science Foundation of China (31322011 and 41371213), and the Thousand Young Talents Program. Peer reviewed. Postprint.

    Valuing map validation: the need for rigorous land cover map accuracy assessment in economic valuations of ecosystem services

    Valuations of ecosystem services often use data on the areal extent of land cover classes. Area estimates from land cover maps may be biased by misclassification error, resulting in flawed assessments and inaccurate valuations. Adjustment for misclassification error is possible for maps subjected to a rigorous validation program that includes an accuracy assessment. Unfortunately, validation is rare and/or poorly undertaken, as it is often not regarded as a high priority. The benefit of map validation, and hence its value, is illustrated with two maps. The International Geosphere-Biosphere Programme's DISCover map was used to estimate wetland value globally; the latter changed from US$1.92 trillion yr-1 to US$2.79 trillion yr-1 when adjusted for misclassification bias. For the conterminous USA, the ecosystem services value based on six land cover classes from the National Land Cover Database (2006) changed from US$1118 billion yr-1 to US$600 billion yr-1 after adjustment for misclassification bias. The effect of error-adjustment on the valuations indicates the value of map validation to rigorous evidence-based science and policy work in relation to aspects of natural capital. The benefit arising from validation was orders of magnitude larger than mapping costs, and it is argued that validation should be a high priority in mapping programs and inform valuations.
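    The misclassification-bias adjustment referred to above can be sketched from an accuracy-assessment error matrix. The numbers below are illustrative only (a two-class wetland/non-wetland example with an assumed per-hectare value), not the figures from the cited maps; the estimator shown is a standard stratified, area-weighted adjustment of mapped class areas.

```python
# A minimal sketch of misclassification-bias adjustment of class areas
# from a map-validation error matrix (illustrative numbers only).
import numpy as np

# Error (confusion) matrix from an accuracy-assessment sample:
# rows = map class, columns = reference class, values = sample counts.
errors = np.array([[180,  20],    # mapped wetland
                   [ 30, 770]])   # mapped non-wetland

mapped_area = np.array([1.2e6, 8.8e6])   # mapped area per class (ha), assumed

# Convert counts to estimated area proportions, weighting each map class
# by its mapped area share (stratified estimator).
W = mapped_area / mapped_area.sum()                    # map-class weights
p = (errors / errors.sum(axis=1, keepdims=True)) * W[:, None]

# Error-adjusted area estimate per reference class.
adjusted_area = p.sum(axis=0) * mapped_area.sum()

unit_value = 25_000   # hypothetical ecosystem-service value (US$ per ha per yr)
print("mapped wetland value:  ", mapped_area[0] * unit_value)
print("adjusted wetland value:", adjusted_area[0] * unit_value)
```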

    Landslide susceptibility mapping at VAZ watershed (Iran) using an artificial neural network model: a comparison between multilayer perceptron (MLP) and radial basic function (RBF) algorithms

    Landslide susceptibility and hazard assessments are the most important steps in landslide risk mapping. The main objective of this study was to investigate and compare the results of two artificial neural network (ANN) algorithms, i.e., multilayer perceptron (MLP) and radial basis function (RBF), for spatial prediction of landslide susceptibility in the Vaz Watershed, Iran. First, landslide locations were identified from aerial photographs and field surveys, and a total of 136 landslide locations were compiled from various sources. The landslide inventory map was then randomly split into a training dataset of 70 % (95 landslide locations) for training the ANN model, with the remaining 30 % (41 landslide locations) used for validation. Nine landslide conditioning factors, namely slope, slope aspect, altitude, land use, lithology, distance from rivers, distance from roads, distance from faults, and rainfall, were constructed in a geographic information system. Both the MLP and RBF algorithms were used in the artificial neural network model. The results showed that the MLP with the Broyden–Fletcher–Goldfarb–Shanno learning algorithm is more efficient than the RBF in landslide susceptibility mapping for the study area. Finally, the landslide susceptibility maps were validated against the validation data (i.e., the 30 % of landslide locations not used during model construction) using the area under the curve (AUC) method. The success rate curve showed that the area under the curve for the RBF and MLP was 0.9085 (90.85 %) and 0.9193 (91.93 %), respectively. Similarly, the validation result showed that the area under the curve for the MLP and RBF models was 0.881 (88.1 %) and 0.8724 (87.24 %), respectively. The results of this study show that landslide susceptibility mapping in the Vaz Watershed of Iran using the ANN approach is viable and can be used for land use planning.
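    A minimal sketch of this train/validate workflow is given below using scikit-learn, with synthetic data standing in for the nine conditioning factors and the inventory locations. The 'lbfgs' solver is used as a stand-in for a BFGS-type learning algorithm; an RBF network is not included, as scikit-learn does not provide one directly.

```python
# Sketch of an MLP susceptibility model with a 70/30 split and AUC validation.
# Synthetic data; feature columns stand in for the nine conditioning factors.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(136, 9))        # nine conditioning factors per location
y = rng.integers(0, 2, size=136)     # 1 = landslide, 0 = non-landslide

# 70 % training / 30 % validation split, as in the study design.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y)

mlp = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(10,), solver="lbfgs",
                  max_iter=2000, random_state=42))
mlp.fit(X_train, y_train)

# Area under the ROC curve on the held-out 30 % (prediction rate).
auc = roc_auc_score(y_test, mlp.predict_proba(X_test)[:, 1])
print(f"validation AUC: {auc:.3f}")
```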

    An investigation of the design and use of feed forward artificial neural networks in the classification of remotely sensed images

    SIGLE. Available from British Library Document Supply Centre - DSC:DXN046297 / BLDSC - British Library Document Supply Centre. GB, United Kingdom.

    SPATIALLY AWARE LANDSLIDE SUSCEPTIBILITY PREDICTION USING A GEOGRAPHICAL RANDOM FOREST APPROACH

    Landslide susceptibility prediction has become increasingly reliant on non-geographically-oriented (i.e., aspatial) machine learning algorithms. While these approaches have seen increasing success, they have often been criticised for their limited consideration of spatial autocorrelation and local variation across geographical space, thereby neglecting the concept of spatial non-stationarity. To address this research gap, this work applies a geographical random forest (GRF) approach, contrasting it with the conventional random forest (RF) algorithm. To this end, the study area, encompassing the Lake Sapanca Basin and its surroundings, was subdivided into 4,452 slope-based mapping units. The effectiveness of both predictive models was then measured using overall accuracy (OA) and area under the curve (AUC). The results revealed that the GRF (OA = 80.82% and AUC = 85.22%) outperformed the RF algorithm (OA = 75.34% and AUC = 82.50%) by approximately 5% in OA and demonstrated a 3% improvement in AUC score. The Wilcoxon signed-rank test confirmed significant differences (at the 95% level) between the predictions of the two models. The slope parameter emerged as the globally most influential factor, but local interpretations disclosed notable variations in the importance of causative factors contingent upon location. For instance, the curvature parameter was the most important geospatial covariate in around one-third (34.23%) of the slope units, mostly concentrated in the northernmost zones of the study area. On the other hand, elevation was the most important factor for 14.67% of the slope units, primarily located in the southern region.
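    The global-versus-local contrast at the heart of this study can be sketched as follows. The code trains a conventional random forest and a naive locally fitted variant (one forest per test unit's spatial neighbourhood) on synthetic mapping units, then compares OA and AUC and runs a paired Wilcoxon test on the predicted probabilities. This is a simplification under stated assumptions, not the published GRF algorithm, which involves additional local weighting not reproduced here.

```python
# Simplified sketch: conventional RF vs. a naive locally fitted variant,
# compared with OA, AUC, and a paired Wilcoxon signed-rank test.
# Synthetic mapping units; factor values are placeholders, not the study data.
import numpy as np
from scipy.stats import wilcoxon
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
n_units = 1000
coords = rng.uniform(0, 50, size=(n_units, 2))   # unit centroids (km)
X = rng.normal(size=(n_units, 6))                # causative factors
# Susceptibility depends on the first factor with a spatially varying weight,
# mimicking spatial non-stationarity.
w = 1.0 + coords[:, 1] / 50.0
y = ((w * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, n_units)) > 0).astype(int)

idx_train, idx_test = train_test_split(
    np.arange(n_units), test_size=0.3, random_state=0, stratify=y)

# Conventional (global) random forest.
rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(X[idx_train], y[idx_train])
p_rf = rf.predict_proba(X[idx_test])[:, 1]

# Naive local variant: for each test unit, train a forest
# on its spatially nearest training units only.
nn = NearestNeighbors(n_neighbors=150).fit(coords[idx_train])
p_local = np.empty(len(idx_test))
for i, t in enumerate(idx_test):
    _, nbr = nn.kneighbors(coords[[t]])
    local = idx_train[nbr[0]]
    local_rf = RandomForestClassifier(n_estimators=100, random_state=0)
    local_rf.fit(X[local], y[local])
    p_local[i] = local_rf.predict_proba(X[[t]])[0, 1]

for name, p in [("global RF", p_rf), ("local RF", p_local)]:
    print(name,
          "OA:", accuracy_score(y[idx_test], (p > 0.5).astype(int)),
          "AUC:", roc_auc_score(y[idx_test], p))

# Paired test on the two models' predicted probabilities, as in the study.
print(wilcoxon(p_rf, p_local))
```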