
    Microfinance Banks and Entrepreneurship Development in Nigeria: A Case of Ogun State

    The purpose of this study was to determine how microfinance affects entrepreneurship development in Nigeria, with special reference to Ogun State. A survey research design was adopted and data were collected through questionnaires. The impact of microfinance on entrepreneurship development in Nigeria was analyzed using the ordinary least squares (OLS) regression method. The study revealed a positive relationship between microfinance and entrepreneurship development in Nigeria. It further revealed that microfinance contributes to entrepreneurial activities that can lead to sustainable development in Nigeria. The findings show that microfinance institutions go a long way in determining the level of entrepreneurial productivity and development in the Nigerian economy. Because entrepreneurship is a potent instrument for activating economic growth, job creation, wealth creation, poverty eradication, innovation, and the related welfare effects of microfinance in developing countries, the study recommended that, to enable the beneficiaries of microfinance schemes to fully appreciate the utility of the facility, the monetary authority (the CBN) continue to appraise the credit delivery channels and formulate policies that would facilitate the delivery of these facilities to rural communities. Microfinance institutions need to put more effort into financing entrepreneurial activities that can promote economic growth accessible to the poor; the legal system should be reformed to offer more protection to investors and creditors; and a credit information exchange mechanism should be established to track all borrowings and repayments in the banking system, regardless of size. Key words: entrepreneur, entrepreneurship, microfinance bank, microfinance, loan
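
    The abstract reports an OLS regression of entrepreneurship development on microfinance variables but does not give the exact specification. Below is a minimal, hedged sketch of that kind of analysis; the variable names (entrep_score, loan_amount, loan_access) and the synthetic data are illustrative assumptions, not the study's model or data.

```python
# Minimal sketch of an OLS analysis of the kind described in the abstract.
# Variable names and the synthetic data are hypothetical; the study's actual
# model specification is not given.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200                                    # e.g. 200 questionnaire respondents
loan_amount = rng.gamma(2.0, 50_000.0, n)  # microfinance loan size (illustrative)
loan_access = rng.integers(0, 2, n)        # 1 = respondent accessed a microfinance loan
entrep_score = 0.4 * np.log(loan_amount) + 1.5 * loan_access + rng.normal(0.0, 1.0, n)

X = sm.add_constant(np.column_stack([np.log(loan_amount), loan_access]))
result = sm.OLS(entrep_score, X).fit()
print(result.summary())   # positive, significant coefficients would mirror the reported finding
```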

    Capital Ratios As Predictors of Distress: A Case Study of the Nigerian Banking System

    We examine the relationship between capital ratios and bank distress and compare the efficiency of three capital ratios (the risk-weighted, leverage, and gross revenue ratios) in the prediction of bank distress. The objective is motivated by the recent global failure of banks, which points to the fact that Early Warning System (EWS) models, intended to identify weaknesses and vulnerabilities among financial institutions, have either failed or been wrongly applied. In addition, some studies show that the risk-weighted capital ratio used in bank distress prediction may become obsolete and ineffective within a short time and that it may give rise to economic problems. Other studies show that capital ratios may in fact not be related to bank distress and should not be used to monitor it. Data on bank distress in Nigeria from 1991 to 2004 are used, and OLS regression, autoregression, and the Granger causality test are used to analyse the data. The study shows that the three capital ratios predicted bank distress significantly and that there is no significant difference in the level of efficiency of the three capital ratios in distress prediction. The continued use of capital ratios in the prediction of bank distress is suggested. The leverage capital ratio and the gross revenue capital ratio may be used to replace the risk-weighted capital ratio, since they are simpler and may not be influenced by the ever-changing risk pattern of the bank.
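
    A hedged sketch of the Granger-causality step described above (the OLS regression and autoregression would be set up analogously). The annual series below are synthetic placeholders standing in for a capital ratio and a bank-distress indicator over 1991-2004; they are not the study's data.

```python
# Hedged sketch: does a lagged capital ratio help predict a bank-distress
# indicator? Synthetic annual data, 1991-2004.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(1)
n_years = 14                                       # annual observations, 1991-2004
leverage_ratio = rng.normal(0.08, 0.02, n_years)   # capital / total assets
distress_index = 0.5 - 2.0 * np.roll(leverage_ratio, 1) + rng.normal(0.0, 0.05, n_years)

data = np.column_stack([distress_index, leverage_ratio])
# Tests whether the second column (the capital ratio) Granger-causes the first.
results = grangercausalitytests(data, maxlag=2)
```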

    A comparison of plume rise algorithms to stack plume measurements in the Athabasca oil sands

    Plume rise parameterizations calculate the rise of pollutant plumes due to effluent buoyancy and exit momentum. Some form of these parameterizations is used by most air quality models. In this paper, the performance of the commonly used Briggs plume rise algorithm was extensively evaluated, through a comparison of the algorithm's results when driven by meteorological observations with direct observations of plume heights in the Athabasca oil sands region. The observations were carried out as part of the Canada-Alberta Joint Oil Sands Monitoring Plan in August and September of 2013. Wind and temperature data used to drive the algorithm were measured in the region of emissions from various platforms, including two meteorological towers, a radio-acoustic profiler, and a research aircraft. Other meteorological variables used to drive the algorithm include friction velocity, boundary-layer height, and the Obukhov length. Stack emissions and flow parameter information reported by continuous emissions monitoring systems (CEMSs) were used to drive the plume rise algorithm. The calculated plume heights were then compared to interpolated aircraft SO2 measurements, in order to evaluate the algorithm's prediction of plume rise. We demonstrate that the Briggs algorithm, when driven by ambient observations, significantly underestimated plume rise for these sources, with more than 50 % of the predicted plume heights falling below half the observed values in this analysis. With the inclusion of the effects of effluent momentum, the choice of different forms of the parameterization, and the use of different stability classification systems, this essential finding remains unchanged. In all cases, approximately 50 % or more of the predicted plume heights fall below half the observed values. These results are in contrast to numerous plume rise measurement studies published between 1968 and 1993. We note that the observations used to drive the algorithms imply the potential presence of significant spatial heterogeneity in meteorological conditions; we examine the potential impact of this heterogeneity in our companion paper (Akingunola et al., 2018). It is suggested that further study using long-term in situ measurements with currently available technologies is warranted to investigate this discrepancy, and that wherever possible, meteorological input variables be observed in the immediate vicinity of the emitting stacks.
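
    For reference, a minimal sketch of the textbook Briggs buoyancy-flux and final-rise formulas that this class of algorithm is built on; the exact formulation and constants used in the study's implementation may differ.

```python
# Sketch of the textbook Briggs buoyancy-flux and final-rise formulas; this is
# the standard form from dispersion-modelling texts, not necessarily the exact
# implementation evaluated in the study.
import numpy as np

G = 9.81  # gravitational acceleration, m s^-2

def buoyancy_flux(v_s, d_s, t_stack, t_amb):
    """Buoyancy flux F_b (m^4 s^-3) from exit velocity v_s (m/s), stack
    diameter d_s (m), and stack/ambient temperatures (K)."""
    return G * v_s * d_s**2 * (t_stack - t_amb) / (4.0 * t_stack)

def briggs_final_rise(f_b, u, stable=False, dtheta_dz=0.005, t_amb=288.0):
    """Final plume rise above the stack top (m) for mean wind speed u (m/s)."""
    if stable:
        s = (G / t_amb) * dtheta_dz                  # stability parameter, s^-2
        return 2.6 * (f_b / (u * s)) ** (1.0 / 3.0)
    if f_b < 55.0:                                   # neutral/unstable branches
        return 21.425 * f_b**0.75 / u
    return 38.71 * f_b**0.6 / u

# Example: a hot, 6 m diameter stack in a 5 m/s wind (illustrative values)
f_b = buoyancy_flux(v_s=15.0, d_s=6.0, t_stack=450.0, t_amb=290.0)
print(briggs_final_rise(f_b, u=5.0))               # neutral/unstable rise, m
print(briggs_final_rise(f_b, u=5.0, stable=True))  # stable rise, m
```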

    Improving air quality model predictions of organic species using measurement-derived organic gaseous and particle emissions in a petrochemical-dominated region

    This study assesses the impact of revised volatile organic compound (VOC) and organic aerosol (OA) emissions estimates in the GEM-MACH (Global Environmental Multiscale–Modelling Air Quality and CHemistry) chemical transport model (CTM) on air quality model predictions of organic species for the Athabasca oil sands (OS) region in Northern Alberta, Canada. The first emissions data set that was evaluated (base-case run) makes use of regulatory-reported VOC and particulate matter emissions data for the large oil sands mining facilities. The second emissions data set (sensitivity run) uses total facility emissions and speciation profiles derived from box-flight aircraft observations around specific facilities. Large increases in some VOC and OA emissions in the revised-emissions data set for four large oil sands mining facilities and decreases for others were found to improve the modeled VOC and OA concentration maxima in facility plumes, as shown with the 99th percentile statistic and illustrated by case studies. The results show that the VOC emission speciation profile from each oil sands facility is unique and different from the standard petrochemical-refinery emission speciation profiles used for other regions in North America. A significant increase in the correlation coefficient is reported for the long-chain alkane predictions against observations when using the revised emissions based on aircraft observations. For some facilities, larger long-chain alkane emissions resulted in higher secondary organic aerosol (SOA) production, which improved OA predictions in those plumes. Overall, the use of the revised-emissions data resulted in an improvement of the model mean OA bias; however, a decrease in the OA correlation coefficient and a remaining negative bias suggest the need for further improvements to model OA emissions and formation processes. The weight of evidence suggests that the top-down emission estimation technique helps to better constrain the fugitive organic emissions in the oil sands region, which are a challenge to estimate given the size and complexity of the oil sands operations and the number of steps in the process chain from bitumen extraction to refined oil product. This work shows that the top-down emissions estimation technique may help to constrain bottom-up emission inventories in other industrial regions of the world with large sources of VOCs and OA.
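
    A small, hedged sketch of the evaluation statistics named above (99th percentile, correlation coefficient, mean bias) applied to a base-case and a revised-emissions model run; the arrays are synthetic and the function name is illustrative, not part of the study's code.

```python
# Hedged sketch of the evaluation statistics named in the abstract, comparing a
# base-case run and a revised-emissions run against observations. Synthetic data.
import numpy as np

def eval_stats(model, obs):
    return {
        "p99_model": float(np.percentile(model, 99)),
        "p99_obs": float(np.percentile(obs, 99)),
        "r": float(np.corrcoef(model, obs)[0, 1]),
        "mean_bias": float(np.mean(model - obs)),
    }

rng = np.random.default_rng(5)
obs = rng.lognormal(1.0, 0.8, 1000)                        # e.g. observed OA concentrations
base_run = 0.5 * obs * rng.lognormal(0.0, 0.5, 1000)       # low-biased base-case emissions
revised_run = 0.9 * obs * rng.lognormal(0.0, 0.3, 1000)    # revised (aircraft-derived) emissions
print("base-case:", eval_stats(base_run, obs))
print("revised:  ", eval_stats(revised_run, obs))
```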

    INVESTIGATING FOR POZZOLANIC ACTIVITY IN THE BLEND OF GROUND GLASS WASTE WITH CEMENT FOR SUSTAINABLE CONCRETE

    This research work investigates the tensile and flexural strengths of concrete containing ground glass as a partial replacement for cement. Sustainability is to be attained by reducing the volume and cost of disposing of glass waste that would otherwise be destined for landfills, by reducing the cost of the glass-cement blend (the glass is unwanted waste that needs less heat to process as a pozzolan for use in concrete), and by reducing greenhouse gas emissions through the smaller amount of cement needed per unit of concrete as a result of the replacement. The ground glass was obtained from waste louver blades, pulverized, and passed through a 100 μm sieve. The physical properties of the ground glass (GG), such as moisture content, bulk density, and specific gravity, were determined. Sixty (60) cylinders of 150 mm diameter and 300 mm height were cast, three (3) samples for each of the percentage replacement groups of 0%, 10%, 20%, 30%, and 40% at curing ages of 3, 7, 28, and 56 days for each test. Thirty-six (36) beams of 100 x 100 x 500 mm were also cast for the flexural strength test, three (3) samples for each of the percentage replacement groups of 0%, 20%, and 40% at curing ages of 3, 7, 28, and 56 days. The specific gravity of the GG was found to be 3.67 and the bulk density 1275 kg/m3. The results show that as the partial replacement of cement with ground glass increases from 10% to 40%, the tensile strength of the specimens decreases for all curing periods from 3 to 56 days. The flexural strength tests show a similar pattern of reducing flexural strength across all curing ages as the percentage replacement increases. However, concrete with 10% replacement of cement with ground glass at the 100 μm fineness level showed slightly improved tensile strength relative to the control (0% replacement with the GG) at the later curing periods, indicating the pozzolanic activity of the ground glass.
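
    The abstract does not state the strength formulas used; the sketch below shows the standard split-cylinder tensile and flexural (modulus of rupture) expressions consistent with the specimen sizes described. The centre-point loading form and the 400 mm span are assumptions; third-point loading would use f_b = PL/(bd^2).

```python
# Sketch of the standard strength formulas consistent with the specimen sizes
# described (150 mm x 300 mm split cylinders; 100 x 100 x 500 mm beams). The
# loading arrangement and span are assumptions, not stated in the abstract.
import math

def split_tensile_strength(p_newton, d_mm=150.0, l_mm=300.0):
    """Split-cylinder tensile strength in MPa: f_t = 2P / (pi * D * L)."""
    return 2.0 * p_newton / (math.pi * d_mm * l_mm)

def flexural_strength_centre_point(p_newton, span_mm=400.0, b_mm=100.0, d_mm=100.0):
    """Modulus of rupture in MPa for centre-point loading: f_b = 3PL / (2 b d^2)."""
    return 3.0 * p_newton * span_mm / (2.0 * b_mm * d_mm**2)

# Illustrative failure loads (not the study's data)
print(split_tensile_strength(200_000))           # ~2.8 MPa
print(flexural_strength_centre_point(7_500))     # ~4.5 MPa
```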

    A chemical transport model study of plume-rise and particle size distribution for the Athabasca oil sands

    We evaluate four high-resolution model simulations of pollutant emissions, chemical transformation, and downwind transport for the Athabasca oil sands using the Global Environmental Multiscale – Modelling Air-quality and Chemistry (GEM-MACH) model, and compare model results with surface monitoring network and aircraft observations of multiple pollutants, for simulations spanning a time period corresponding to an aircraft measurement campaign in the summer of 2013. We have focussed here on the impact of different representations of the model's aerosol size distribution and plume-rise parameterization on model results. The use of a more finely resolved representation of the aerosol size distribution was found to have a significant impact on model performance, reducing the magnitude of the original surface PM2.5 negative bias by 32 %, from −2.62 to −1.72 µg m−3. We compared model predictions of SO2, NO2, and speciated particulate matter concentrations from simulations employing the commonly used Briggs (1984) plume-rise algorithms to redistribute emissions from large stacks, with stack plume observations. As in our companion paper (Gordon et al., 2017), we found that Briggs algorithms based on estimates of atmospheric stability at the stack height resulted in under-predictions of plume rise, with 116 out of 176 test cases falling below the model : observation 1 : 2 line, 59 cases falling within a factor of 2 of the observed plume heights, and an average model plume height of 289 m compared to an average observed plume height of 822 m. We used a high-resolution meteorological model to confirm the presence of significant horizontal heterogeneity in the local meteorological conditions driving plume rise. Using these simulated meteorological conditions at the stack locations, we found that a layered buoyancy approach for estimating plume rise in stable to neutral atmospheres, coupled with the assumption of free rise in convectively unstable atmospheres, resulted in much better model performance relative to observations (124 out of 176 cases falling within a factor of 2 of the observed plume height, with 69 of these cases above and 55 of these cases below the 1 : 1 line and within a factor of 2 of observed values). This is in contrast to our companion paper, wherein this layered approach (driven by meteorological observations not co-located with the stacks) showed a relatively modest impact on predicted plume heights. Persistent issues with over-fumigation of plumes in the model were linked to a more rapid decrease in simulated temperature with increasing height than was observed. This in turn may have led to overestimates of near-surface diffusivity, resulting in excessive fumigation.
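
    A hedged sketch of the factor-of-2 plume-height comparison used repeatedly in this evaluation (e.g. "124 out of 176 cases within a factor of 2 of the observed plume height"); the heights below are synthetic, not the campaign data.

```python
# Hedged sketch of the factor-of-2 comparison of predicted vs. observed plume
# heights. Synthetic heights only.
import numpy as np

def fac2_summary(predicted, observed):
    """Counts of cases within a factor of 2 of observations, split about the 1:1 line."""
    ratio = np.asarray(predicted) / np.asarray(observed)
    within = (ratio >= 0.5) & (ratio <= 2.0)
    return {
        "n": int(ratio.size),
        "within_factor_2": int(within.sum()),
        "above_1to1_within": int((within & (ratio > 1.0)).sum()),
        "below_1to1_within": int((within & (ratio <= 1.0)).sum()),
        "below_half_of_observed": int((ratio < 0.5).sum()),   # severe under-prediction
    }

rng = np.random.default_rng(2)
observed = rng.uniform(300.0, 1200.0, 176)             # observed plume heights, m
predicted = observed * rng.lognormal(-0.3, 0.6, 176)   # a low-biased set of model heights
print(fac2_summary(predicted, observed))
```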

    An evaluation of the efficacy of very high resolution air-quality modelling over the Athabasca oil sands region, Alberta, Canada

    We examine the potential benefits of very high resolution for air-quality forecast simulations using a nested system of the Global Environmental Multiscale-Modelling Air-quality and Chemistry chemical transport model. We focus on simulations at 1 and 2.5 km grid-cell spacing for the same time period and domain (the industrial emissions region of the Athabasca oil sands). Standard grid cell to observation station pair analyses show no benefit to the higher-resolution simulation (and a degradation of performance for most metrics using this standard form of evaluation). However, when the evaluation methodology is modified to include a search over equivalent representative regions surrounding the observation locations for the closest fit to the observations, the model simulation with the smaller grid-cell size had the better performance.
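
    A hedged sketch of the modified, neighbourhood-style evaluation described above: for each station, the surrounding grid cells within a representative region are searched for the model value closest to the observation, instead of pairing the station only with its co-located cell. The search radius and the concentration field are illustrative assumptions.

```python
# Hedged sketch of a neighbourhood search for the closest model fit around a
# station location, contrasted with the standard co-located pairing. Synthetic field.
import numpy as np

def closest_fit_in_neighbourhood(model_field, i, j, obs_value, radius_cells=2):
    """Model value within +/- radius_cells of cell (i, j) that is closest to obs_value."""
    ny, nx = model_field.shape
    window = model_field[max(0, i - radius_cells):min(ny, i + radius_cells + 1),
                         max(0, j - radius_cells):min(nx, j + radius_cells + 1)]
    return float(window.flat[np.argmin(np.abs(window - obs_value))])

rng = np.random.default_rng(3)
field_1km = rng.gamma(2.0, 5.0, (100, 100))   # e.g. an hourly SO2 field on a 1 km grid
obs_value = 12.0                              # observed concentration at a station
print(closest_fit_in_neighbourhood(field_1km, 50, 50, obs_value))  # best match nearby
print(float(field_1km[50, 50]))                                    # standard co-located pairing
```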

    The use of hierarchical clustering for the design of optimized monitoring networks

    Associativity analysis is a powerful tool to deal with large-scale datasets by clustering the data on the basis of (dis)similarity and can be used to assess the efficacy and design of air quality monitoring networks. We describe here our use of Kolmogorov–Zurbenko filtering and hierarchical clustering of NO2 and SO2 passive and continuous monitoring data to analyse and optimize air quality networks for these species in the province of Alberta, Canada. The methodology applied in this study assesses dissimilarity between monitoring station time series based on two metrics: 1 − R, R being the Pearson correlation coefficient, and the Euclidean distance; we find that both should be used in evaluating monitoring site similarity. We have combined the analytic power of hierarchical clustering with the spatial information provided by deterministic air quality model results, using the gridded time series of model output as potential station locations, as a proxy for assessing monitoring network design and for network optimization. We demonstrate that clustering results depend on the air contaminant analysed, reflecting the difference in the respective emission sources of SO2 and NO2 in the region under study. Our work shows that much of the signal identifying the sources of NO2 and SO2 emissions resides in shorter timescales (hourly to daily) due to short-term variation of concentrations and that longer-term averages in data collection may lose the information needed to identify local sources. However, the methodology identifies stations mainly influenced by seasonality, if larger timescales (weekly to monthly) are considered. We have performed the first dissimilarity analysis based on gridded air quality model output and have shown that the methodology is capable of generating maps of subregions within which a single station will represent the entire subregion, to a given level of dissimilarity. We have also shown that our approach is capable of identifying different sampling methodologies as well as outliers (stations' time series which are markedly different from all others in a given dataset).
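
    A hedged sketch of the workflow described: Kolmogorov–Zurbenko (KZ) filtering of station time series, followed by hierarchical clustering on the 1 − R dissimilarity (R being the Pearson correlation coefficient). The station series are synthetic placeholders, and the KZ window and iteration counts here are illustrative, not the study's settings.

```python
# Hedged sketch: KZ filtering of station time series, then hierarchical
# clustering on a 1 - R (Pearson) dissimilarity matrix. Synthetic stations.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def kz_filter(x, window, iterations):
    """KZ(window, iterations): an iterated centred moving average."""
    kernel = np.ones(window) / window
    for _ in range(iterations):
        x = np.convolve(x, kernel, mode="same")
    return x

rng = np.random.default_rng(4)
hours = np.arange(24 * 365)                                    # one year of hourly data
stations = np.array([np.sin(2.0 * np.pi * hours / 24.0 + p) + rng.normal(0.0, 0.5, hours.size)
                     for p in (0.0, 0.1, 2.0, 2.1)])           # two pairs of similar stations
filtered = np.array([kz_filter(s, window=3, iterations=3) for s in stations])

dissimilarity = 1.0 - np.corrcoef(filtered)                    # the 1 - R metric
np.fill_diagonal(dissimilarity, 0.0)
z = linkage(squareform(dissimilarity, checks=False), method="average")
print(fcluster(z, t=0.5, criterion="distance"))                # cluster labels for the four stations
```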