
    Gender Inequality and Trade Liberalization: A Case Study of Pakistan

    The main focus of this study is to explore the impact of trade liberalization on gender inequality in Pakistan. Overall gender inequality, measured along three dimensions (labour market, education, and health facilities), is analyzed in this paper using data from 1973 to 2005. The ratio of exports and imports to GDP, per capita GDP, and the ratio of girls' schools to boys' schools are identified as important determinants of both overall gender inequality in Pakistan and gender inequality in Pakistan's labour market. Further, gender inequality in educational attainment is explained by per capita GDP, the ratio of girls' schools to boys' schools, and the number of female teachers per school.
    Keywords: gender inequality; discrimination; trade liberalization; determinants; Pakistan
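    As a rough illustration of the kind of determinants analysis this abstract describes, the sketch below regresses a gender-inequality index on the three stated determinants using OLS. The data, variable names, and functional form are hypothetical placeholders; the paper's actual estimation method and dataset are not reproduced here.

```python
# Minimal sketch of a determinants regression, assuming a simple OLS setup.
# All values below are synthetic stand-ins, not the paper's data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 33  # annual observations, 1973-2005
X = np.column_stack([
    rng.uniform(0.2, 0.4, n),   # (exports + imports) / GDP
    rng.uniform(300, 900, n),   # per capita GDP
    rng.uniform(0.3, 0.8, n),   # girls' schools / boys' schools ratio
])
# placeholder gender-inequality index (higher = more unequal)
y = 0.5 - 0.2 * X[:, 2] + rng.normal(0, 0.02, n)

model = sm.OLS(y, sm.add_constant(X)).fit()
print(model.summary())
```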

    Pakistan’s Ranking in Social Development: Have We Always Been Backward?

    Consensus is emerging among development thinkers and practitioners that social progress is a necessary pre-condition for sustained economic growth. Social development leads to higher levels of literacy, better health standards, and overall improvement in the society's living conditions. In fact, empirical evidence suggests that there is a two-way relationship between economic growth and social development [Ghaus-Pasha et al. (1998)]. Economic growth leads to higher government revenues and higher per capita income, encouraging both public and private spending on human development. Improvements in social indicators feed back into higher economic growth through the enhanced productivity of labour and capital. In other words, well-developed human capital makes a significant contribution to economic growth which, in turn, offers improved welfare and better living conditions. However, if there is a breakdown in this chain and economic development is not translated into social development, then the pace of economic development eventually suffers. Pakistan is an example of a country where this chain has broken. Despite moderate economic growth of about 5 percent during the last decade or so, the state of social indicators leaves a lot to be desired. Currently, the female literacy rate is 33 percent, against 56 percent for males; primary school enrolment is 55 percent for females and 78 percent for males; and the infant mortality rate is 105 per 1,000 live births. Today, Pakistan is ranked 138th out of 174 countries in the UNDP (1999) Human Development Index. The purpose of this paper is to examine the state of social development in Pakistan in the international context.

    Modeling the dynamics of large conditional heteroskedastic covariance matrices

    Many economic and financial time series exhibit time-varying volatility. GARCH models are tools for forecasting and analyzing the dynamics of this volatility. The co-movements of financial markets and financial assets around the globe have recently become a main area of interest for financial econometricians; hence, multivariate GARCH models have been introduced to capture these co-movements. A large variety of multivariate GARCH models exists, each with its own advantages and limitations. An important goal in constructing multivariate GARCH models is to make them parsimonious without compromising their adequacy in real-world applications. Another requirement is to ensure that the conditional covariance matrix is positive definite. Motivated by the idea that volatility in financial markets is driven by a few latent variables, a new parameterization in the multivariate context is proposed in this thesis. The factors in the proposed model are obtained through a recursive use of the singular value decomposition (SVD). This recursion enables us to sequentially extract the volatility clustering from the data set; accordingly, the model is called Sequential Volatility Extraction (SVX for short). Logarithmically transformed singular values and the components of their corresponding singular vectors are modeled using the ARMA approach; in terms of its basic idea and modeling approach, the model thus resembles a stochastic volatility model. Empirical analysis and comparison with existing multivariate GARCH models show that the proposed model is parsimonious, requiring fewer parameters than the two alternative models considered (DCC and GO-GARCH), while the resulting covariance matrices remain positive (semi-)definite. Hence, the model fulfills the basic requirements of a multivariate GARCH model. Based on these findings, it is concluded that the SVX model can be applied to financial data of dimensions ranging from low to high.
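    The sketch below illustrates only the sequential SVD deflation idea named in the abstract: extract the leading singular factor, remove it, and repeat, then fit a low-order ARMA to a log-volatility proxy of a factor. It is not the authors' exact SVX parameterization (which models log singular values and singular-vector components directly), and the returns matrix is a synthetic placeholder.

```python
# Hedged sketch of sequential SVD-based factor extraction, assuming a T x N
# return matrix. Synthetic data; a simplified stand-in for the SVX recursion.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
T, N, K = 500, 10, 3              # observations, assets, factors to extract
R = rng.standard_normal((T, N))   # placeholder returns (replace with real data)

residual = R - R.mean(axis=0)
factors = []
for k in range(K):
    # leading singular triplet of the current residual is the next factor
    U, s, Vt = np.linalg.svd(residual, full_matrices=False)
    factor = U[:, 0] * s[0]       # time series of the k-th extracted factor
    factors.append(factor)
    # deflate: remove the extracted component (the "sequential" step)
    residual = residual - np.outer(factor, Vt[0])

# model a crude log-volatility proxy of the first factor with a low-order ARMA
vol_proxy = np.log(np.abs(factors[0]) + 1e-8)
fit = ARIMA(vol_proxy, order=(1, 0, 1)).fit()
print(fit.summary())
```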

    Measurement errors in recall food consumption data

    Recall food consumption data, which is the basis of a great deal of empirical work, is believed to suffer from considerable measurement error. Diary records are believed to be very accurate. We study a unique data set that collects recall and diary data from the same households. Measurement errors in recall food consumption data appear to be substantial, and they do not have the properties of classical measurement error. We also find evidence that the diary measures are themselves imperfect. We consider the implications of our findings for modelling demand, measuring inequality, and estimating inter-temporal preference parameters.
    Keywords: expenditure, consumption, measurement error, survey data

    Measurement Errors in Recall Food Expenditure Data

    Household expenditure data is an important input into the study of consumption and savings behaviour and of living standards and inequality. Because it is collected in many surveys, food expenditure data has formed the basis of much work in these areas. Recently, there has been considerable interest in the properties of different ways of collecting expenditure information. It has also been suggested that measurement error in expenditure data seriously affects empirical work based on such data. The Canadian Food Expenditure Survey asks respondents to first estimate their household's food expenditures and then record food expenditures in a diary for two weeks. This unique experiment allows us to compare recall- and diary-based expenditure data collected from the same individuals. Under the assumption that the diary measures are "true" food consumption, this allows us to observe errors in measures of recall food consumption directly, and to study the properties of those errors. Under this assumption, measurement errors in recall food consumption data appear to be substantial, and they do not have many of the properties of classical measurement error. In particular, they are neither uncorrelated with true consumption nor conditionally homoscedastic. In addition, they are not well approximated by either a normal or log-normal distribution. We also show evidence that diary measures are themselves imperfect, suffering, for example, from "diary exhaustion". This suggests alternative interpretations for the differences between recall and diary consumption measures. Finally, we compare estimates of income and household size elasticities of per capita food consumption based on the two kinds of expenditure data and, in contrast to some previous work, find little difference between the two.
    Keywords: expenditure, consumption, surveys
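    To make the "classical measurement error" properties discussed above concrete, the sketch below constructs a hypothetical recall/diary pair, treats the diary as truth, and checks the two classical assumptions: errors uncorrelated with the true value, and homoscedastic. The data-generating process is invented for illustration only.

```python
# Illustrative check of classical measurement-error assumptions, assuming the
# diary measure is "true" consumption. Synthetic data, not the survey's.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
diary = np.exp(rng.normal(4.0, 0.5, size=1000))   # stand-in "true" consumption
# build an error that is deliberately mean-reverting (non-classical)
error = 0.3 * (diary - diary.mean()) + rng.normal(0, 10, size=1000)
recall = diary + error                            # stand-in recall reports

err = recall - diary
# classical assumption 1: error uncorrelated with the true value
r, p = stats.pearsonr(diary, err)
print(f"corr(error, true) = {r:.2f} (p = {p:.3f})")
# classical assumption 2: homoscedasticity -- compare error variance across
# the low- and high-consumption halves of the sample
lo, hi = err[diary < np.median(diary)], err[diary >= np.median(diary)]
print(f"var(low half) = {lo.var():.1f}, var(high half) = {hi.var():.1f}")
```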

    Data mining techniques for complex application domains

    The emergence of advanced communication techniques has increased the availability of large collections of data in electronic form in a number of application domains, including healthcare, e-business, and e-learning. Every day, a large number of records are stored electronically. However, finding useful information in such large data collections is a challenging issue. Data mining technology aims to automatically extract hidden knowledge from large data repositories by exploiting sophisticated algorithms. The hidden knowledge in electronic data can potentially be used to improve the procedures, productivity, and reliability of several application domains. This PhD activity has focused on novel and effective data mining approaches to tackle the complex data coming from two main application domains: healthcare data analysis and textual data analysis. In the context of healthcare data, the research activity addressed the application of different data mining techniques to discover valuable knowledge from real exam-log data of patients. In particular, efforts have been devoted to the extraction of medical pathways, which can be exploited to analyze the actual treatments followed by patients. The derived knowledge not only provides useful information about treatment procedures but may also play an important role in future predictions of potential patient risks associated with medical treatments. The research effort in textual data analysis is twofold. On the one hand, a novel approach to the discovery of succinct summaries of large document collections has been proposed. On the other hand, the suitability of an established descriptive data mining technique to support domain experts in making decisions has been investigated. Both research activities focus on adapting widely used exploratory data mining techniques to textual data analysis, which requires overcoming the intrinsic limitations of traditional algorithms in handling textual documents efficiently and effectively.
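    As a toy illustration of the pathway-extraction task mentioned above, the sketch below counts frequent ordered exam-to-exam transitions in hypothetical patient exam logs. It is a drastically simplified stand-in, assuming invented exam names and a naive support threshold, not the thesis's actual mining algorithm.

```python
# Toy frequent-subsequence count over hypothetical exam logs; a simplified
# stand-in for medical-pathway extraction.
from collections import Counter
from itertools import combinations

logs = [
    ["blood_test", "x_ray", "ct_scan"],
    ["blood_test", "ct_scan", "biopsy"],
    ["blood_test", "x_ray", "biopsy"],
]

# count all ordered exam pairs (length-2 subsequences) across patients;
# combinations() preserves the order in which exams appear in each log
pair_counts = Counter()
for exams in logs:
    pair_counts.update(combinations(exams, 2))

min_support = 2
pathways = {p: c for p, c in pair_counts.items() if c >= min_support}
print(pathways)  # frequent exam-to-exam transitions
```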

    Computational methods for the analysis of functional 4D-CT chest images.

    Medical imaging is an important emerging technology that has been intensively used in the last few decades for disease diagnosis and monitoring as well as for the assessment of treatment effectiveness. Medical images provide a very large amount of valuable information, too vast to be fully exploited by radiologists and physicians. Therefore, the design of computer-aided diagnostic (CAD) systems, which can serve as assistive tools for the medical community, is of great importance. This dissertation deals with the development of a complete CAD system for patients with lung cancer, which remains the leading cause of cancer-related death in the USA: in 2014, there were approximately 224,210 new cases of lung cancer and 159,260 related deaths. The process begins with the detection of lung cancer through the diagnosis of lung nodules (a manifestation of lung cancer). These nodules are approximately spherical regions of primarily high-density tissue that are visible in computed tomography (CT) images of the lung. The treatment of lung cancer is complex; nearly 70% of lung cancer patients require radiation therapy as part of their treatment. Radiation-induced lung injury is a limiting toxicity that may decrease cure rates and increase treatment-related morbidity and mortality. Finding ways to accurately detect lung injury at an early stage, and hence prevent it, would have significant positive consequences for lung cancer patients. The ultimate goal of this dissertation is to develop a clinically usable CAD system that can improve the sensitivity and specificity of early detection of radiation-induced lung injury, based on the hypothesis that irradiated lung tissues may be affected and suffer a decrease in functionality as a side effect of radiation therapy. This hypothesis has been validated by demonstrating that automatic segmentation of the lung regions and registration of consecutive respiratory phases yield elasticity, ventilation, and texture features that provide discriminatory descriptors for early detection of radiation-induced lung injury. The proposed methodologies lead to novel indexes for distinguishing normal/healthy and injured lung tissues in clinical decision-making. To achieve this goal, a CAD system for accurate detection of radiation-induced lung injury has been developed around three basic components: lung field segmentation, lung registration, and feature extraction with tissue classification. The dissertation starts with an exploration of the available medical imaging modalities to present the importance of medical imaging in today's clinical applications. Secondly, the methodologies, challenges, and limitations of recent CAD systems for lung cancer detection are covered. This is followed by an accurate segmentation methodology for the lung parenchyma, with a focus on pathological lungs, to extract the volume of interest (VOI) to be analyzed for the potential existence of lung injuries stemming from the radiation therapy. After the segmentation of the VOI, a lung registration framework is introduced to perform a crucial step that ensures the co-alignment of the intra-patient scans. This step eliminates the effects of orientation differences, motion, breathing, heart beats, and differences in scanning parameters, so that the functionality features of the lung fields can be extracted accurately.
The developed registration framework also helps in the evaluation and gated control of the radiotherapy through motion estimation analysis before and after the therapy dose. Finally, the radiation-induced lung injury detection framework is introduced, which combines the previous two medical image processing and analysis steps with a feature estimation and classification step. This framework estimates and combines both texture and functional features. The texture features are modeled using a novel 7th-order Markov-Gibbs random field (MGRF) model that can accurately model the texture of healthy and injured lung tissues by simultaneously accounting for both vertical and horizontal relative dependencies between voxel-wise signals. The functionality features are calculated from the deformation fields, obtained from the 4D-CT lung registration, that map lung voxels between successive CT scans in the respiratory cycle. These features describe the ventilation (air flow rate) of the lung tissues using the Jacobian of the deformation field, and the tissues' elasticity using the strain components calculated from the gradient of the deformation field. Finally, these features are combined in the classification model to detect the injured parts of the lung at an early stage and enable earlier intervention.
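    The Jacobian-based ventilation feature described above has a compact numerical form: for a transform x + u(x), the determinant of the Jacobian I + du/dx measures local volume change between respiratory phases. The sketch below computes it on a small synthetic displacement field; it is a minimal illustration, not the dissertation's registration pipeline.

```python
# Minimal sketch of Jacobian-determinant ventilation estimation, assuming a
# dense 3-D displacement field is already available. Synthetic field.
import numpy as np

shape = (16, 16, 16)
# hypothetical displacement field u(x) on a voxel grid, shape (z, y, x, 3)
u = np.random.default_rng(2).normal(0, 0.05, size=shape + (3,))

# spatial gradients of each displacement component along each axis
grads = [np.gradient(u[..., i]) for i in range(3)]  # grads[i][j] = du_i/dx_j

# Jacobian of the transform x + u(x): J = I + du/dx, per voxel
J = np.zeros(shape + (3, 3))
for i in range(3):
    for j in range(3):
        J[..., i, j] = (i == j) + grads[i][j]

jac_det = np.linalg.det(J)   # > 1: local expansion (inhale), < 1: compression
print(jac_det.mean(), jac_det.min(), jac_det.max())
```

    The strain-based elasticity features mentioned in the abstract come from the same gradient tensor: the symmetric part of du/dx gives the strain components, so this computation is a shared first step for both feature families.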

    Traditional Methods to Measure Volatility: Case Study of Selective Developed and Emerging Markets

    The importance of volatility in developed as well as emerging markets can never be underestimated. Volatility is traditionally measured by statistics such as the standard deviation. This study measures volatility and examines relative volatility during 1997-2009, using global stock market indexes of countries categorized as emerging and developed capital markets. All of the selected stock returns show non-normality; emerging market indexes show greater non-normality, and their higher kurtosis values indicate the high peakedness of their return distributions. Evidence from this period highlights that volatility is not a phenomenon exclusive to emerging capital markets: some developed capital markets are more volatile than emerging ones in the selected sample.
    Keywords: volatility; standard deviation; emerging markets; international diversification
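    The traditional measures named in this abstract reduce to a few lines of code. The sketch below computes annualized standard deviation, excess kurtosis (the peakedness measure), and a Jarque-Bera normality test on a hypothetical fat-tailed return series; the data and the 252-trading-day annualization convention are assumptions for illustration.

```python
# Sketch of traditional volatility and non-normality measures on a synthetic
# daily return series standing in for a market index.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
daily_returns = rng.standard_t(df=4, size=2500) * 0.01  # fat-tailed stand-in

annualized_vol = daily_returns.std(ddof=1) * np.sqrt(252)
excess_kurt = stats.kurtosis(daily_returns)        # 0 for a normal distribution
jb_stat, jb_p = stats.jarque_bera(daily_returns)   # normality test

print(f"annualized volatility: {annualized_vol:.2%}")
print(f"excess kurtosis: {excess_kurt:.2f}, Jarque-Bera p = {jb_p:.4f}")
```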