Impact of Corporate Governance Practices on Firm Capital Structure and Profitability: A Study of Selected Hotels and Restaurant Companies in Sri Lanka.
Corporate governance issues have been a growing area of management research, especially among large and listed firms. Good corporate governance practices are regarded as important in reducing risk for investors, attracting investment capital and improving the performance of companies. Companies need financial resources and better earnings to promote their objectives; therefore, factors that may affect the capital structure and profitability of companies should be considered carefully. The purpose of the present study is to investigate whether there is any relationship among selected corporate governance characteristics, capital structure and profitability of listed Hotels & Restaurant companies on the Colombo Stock Exchange (CSE). To do so, 18 companies listed on the CSE during 2007-2012 were selected. 'Board Composition (BC)', 'Board Size (BS)' and 'CEO Duality (CEOD)' were considered as independent variables, whereas 'Debt Ratio (DR)', 'Debt-to-Equity Ratio (DER)', 'Return on Equity (ROE)' and 'Return on Assets (ROA)' were treated as dependent variables. The results indicate a positive relationship of BS, BC and CEOD with ROE, ROA and DER, a negative relationship of BS and BID with DR, and a positive relationship of CEOD with DR. However, none of the variables has a significant relationship with capital structure and profitability. Key words: Corporate Governance; Capital Structure; Profitability
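To make the methodology concrete, the sketch below shows a pooled OLS regression of one profitability measure on the governance variables, assuming a pandas DataFrame with illustrative column names (BC, BS, CEOD, ROA) and toy data rather than the authors' actual sample.

```python
# Hypothetical sketch of the kind of pooled OLS regression described above.
# Column names and figures are illustrative, not the study's dataset.
import pandas as pd
import statsmodels.api as sm

def governance_regression(df):
    X = sm.add_constant(df[["BC", "BS", "CEOD"]])  # governance variables + intercept
    y = df["ROA"]                                  # one of the dependent variables
    return sm.OLS(y, X).fit()

# Toy example:
df = pd.DataFrame({
    "BC":   [0.4, 0.5, 0.6, 0.3, 0.7, 0.5],
    "BS":   [7, 8, 9, 6, 10, 8],
    "CEOD": [1, 0, 0, 1, 0, 1],
    "ROA":  [0.05, 0.07, 0.06, 0.04, 0.08, 0.06],
})
print(governance_regression(df).summary())
```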
A Nexus Between Liquidity & Profitability: A Study Of Trading Companies In Sri Lanka.
This study investigated the relationship between liquidity and profitability of trading companies in Sri Lanka. The main objective was to examine the nature and extent of the nexus between liquidity and profitability in profit-oriented quoted trading companies and to determine whether any relationship exists between the two performance measures. The analysis was based on data extracted from the annual reports and accounts of the companies for the relevant period. Correlation and regression analysis were employed to examine the nature and extent of the relationship between the variables and to determine whether there is any cause-and-effect relationship between them. The study covered 8 listed trading companies in Sri Lanka over the five-year period from 2008 to 2012. Correlation and regression analysis and descriptive statistics were used, and the findings suggest that a significant relationship exists between liquidity and profitability among the listed trading companies in Sri Lanka. However, the findings of this paper are based on a study of the selected companies; hence, the results are not generalizable to non-quoted companies. Secondly, the sample comprises only trading companies, so the results are valid only for this sector. Key Words: Liquidity, Profitability, Trading Companies
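A minimal sketch of the correlation side of such an analysis, assuming two aligned series of firm-year observations with an illustrative liquidity proxy (current ratio) and profitability proxy (ROA); the figures are invented for demonstration only.

```python
# Pearson correlation between a liquidity proxy and a profitability proxy.
# Values below are made up for illustration.
from scipy.stats import pearsonr

current_ratio = [1.8, 2.1, 1.5, 2.4, 1.9, 2.2, 1.7, 2.0]          # liquidity proxy
roa           = [0.06, 0.08, 0.04, 0.09, 0.07, 0.08, 0.05, 0.07]  # profitability proxy

r, p_value = pearsonr(current_ratio, roa)
print(f"Pearson r = {r:.3f}, p-value = {p_value:.4f}")  # judge significance at, e.g., the 5% level
```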
Corporate Governance and Banking Performance: a Comparative Study between Private and State Banking Sector in Sri Lanka.
The main objectives of this study are to find out the relationship between corporate governance and banking performance and to determine the impact of corporate governance on banking performance. The study focused on four aspects of corporate governance: Board Size (BS), Board Diversity (BD), Outside Directors Percentage (OSDP) and Board Meeting Frequency (BMF). Banking performance was measured through Return on Equity (ROE) and Return on Assets (ROA). The results revealed that all corporate governance variables are positively correlated with ROE in state banks, whereas in private banks all variables except BD and BMF have a strong negative relationship with ROE, significant at the 5 percent level. Similarly, in state banks all variables except BMF have a negative relationship with ROA, and private banks show the same pattern except for BD. BD has a strong negative relationship with ROA in state banks, significant at the 5 percent level, whereas in private banks BD shows a positive but non-significant relationship with ROA. Further, corporate governance has a moderate impact on the performance of both private and state banks. Keywords: Corporate Governance, Banking Sector, Banking Performance
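The state-versus-private comparison can be sketched as per-group Pearson correlations with a 5 percent significance threshold; the DataFrame layout, column names and helper below are assumptions for illustration, not the study's actual dataset or code.

```python
# Hypothetical per-sector correlation analysis between governance variables
# and a performance measure, flagging significance at a chosen level.
from scipy.stats import pearsonr

def group_correlations(df, governance_vars, performance_var, alpha=0.05):
    """Assumes a pandas DataFrame with a 'sector' column ('state'/'private'),
    governance columns (e.g. BS, BD, OSDP, BMF) and a performance column."""
    results = {}
    for sector, group in df.groupby("sector"):
        for var in governance_vars:
            r, p = pearsonr(group[var], group[performance_var])
            results[(sector, var)] = (r, p, p < alpha)  # correlation, p-value, significant?
    return results
```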
Riemannian walk for incremental learning: Understanding forgetting and intransigence
Incremental learning (IL) has received a lot of attention recently; however, the literature lacks a precise problem definition, proper evaluation settings, and metrics tailored specifically for the IL problem. One of the main objectives of this work is to fill these gaps so as to provide a common ground for a better understanding of IL. The main challenge for an IL algorithm is to update the classifier whilst preserving existing knowledge. We observe that, in addition to forgetting, a known issue while preserving knowledge, IL also suffers from a problem we call intransigence, its inability to update knowledge. We introduce two metrics to quantify forgetting and intransigence that allow us to understand, analyse, and gain better insights into the behaviour of IL algorithms. Furthermore, we present RWalk, a generalization of EWC++ (our efficient version of EWC [6]) and Path Integral [25] with a theoretically grounded KL-divergence based perspective. We provide a thorough analysis of various IL algorithms on MNIST and CIFAR-100 datasets. In these experiments, RWalk obtains superior results in terms of accuracy, and also provides a better trade-off between forgetting and intransigence.
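The two metrics can be illustrated with a simplified sketch that assumes an accuracy matrix acc[i][j] (accuracy on task j after training on tasks 0..i) and reference accuracies from a jointly trained model; this follows the general idea described above and may differ from the paper's exact definitions.

```python
# Simplified forgetting/intransigence-style metrics over an accuracy matrix.
import numpy as np

def average_forgetting(acc):
    """Mean drop from the best past accuracy to the final accuracy on old tasks."""
    k = acc.shape[0] - 1                                   # index of the last training step
    drops = [acc[:k, j].max() - acc[k, j] for j in range(k)]
    return float(np.mean(drops))

def average_intransigence(acc, ref):
    """Mean gap between a jointly trained reference and the incremental learner,
    measured on each task right after it is learned."""
    gaps = [ref[j] - acc[j, j] for j in range(acc.shape[0])]
    return float(np.mean(gaps))
```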
Efficient linear programming for dense CRFs
The fully connected conditional random field (CRF) with Gaussian pairwise potentials has proven popular and effective for multi-class semantic segmentation. While the energy of a dense CRF can be minimized accurately using a linear programming (LP) relaxation, the state-of-the-art algorithm is too slow to be useful in practice. To alleviate this deficiency, we introduce an efficient LP minimization algorithm for dense CRFs. To this end, we develop a proximal minimization framework, where the dual of each proximal problem is optimized via block coordinate descent. We show that each block of variables can be efficiently optimized. Specifically, for one block, the problem decomposes into significantly smaller subproblems, each of which is defined over a single pixel. For the other block, the problem is optimized via conditional gradient descent. This has two advantages: 1) the conditional gradient can be computed in a time linear in the number of pixels and labels; and 2) the optimal step size can be computed analytically. Our experiments on standard datasets provide compelling evidence that our approach outperforms all existing baselines, including the previous LP-based approach for dense CRFs.
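The conditional gradient component can be illustrated by a generic Frank-Wolfe sketch on the probability simplex with an analytic step size for a quadratic objective; this is a didactic stand-in, not the paper's dense-CRF solver.

```python
# Generic conditional gradient (Frank-Wolfe) on the probability simplex for a
# quadratic objective, where the optimal step size has a closed form.
import numpy as np

def frank_wolfe_quadratic(Q, b, n_iters=100):
    """Minimize 0.5 * x^T Q x + b^T x over {x >= 0, sum(x) = 1}, Q symmetric."""
    n = b.shape[0]
    x = np.full(n, 1.0 / n)                      # start at the simplex centre
    for _ in range(n_iters):
        grad = Q @ x + b
        s = np.zeros(n)
        s[np.argmin(grad)] = 1.0                 # linear minimization oracle: best vertex
        d = s - x
        denom = d @ Q @ d
        gamma = 1.0 if denom <= 0 else np.clip(-(grad @ d) / denom, 0.0, 1.0)
        x = x + gamma * d                        # convex combination stays feasible
    return x
```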
Learning to adapt for stereo
Real-world applications of stereo depth estimation require models that are robust to dynamic variations in the environment. Even though deep learning based stereo methods are successful, they often fail to generalize to unseen variations in the environment, making them less suitable for practical applications such as autonomous driving. In this work, we introduce a "learning-to-adapt" framework that enables deep stereo methods to continuously adapt to new target domains in an unsupervised manner. Specifically, our approach incorporates the adaptation procedure into the learning objective to obtain a base set of parameters that are better suited for unsupervised online adaptation. To further improve the quality of the adaptation, we learn a confidence measure that effectively masks the errors introduced during the unsupervised adaptation. We evaluate our method on synthetic and real-world stereo datasets, and our experiments evidence that learning-to-adapt is indeed beneficial for online adaptation on vastly different domains.
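A toy sketch of the learning-to-adapt idea, reduced to a MAML-style inner/outer loop on a linear regressor with random data; the model, losses and data splits are placeholders, not the paper's stereo networks or confidence-weighted adaptation.

```python
# Toy meta-learning loop: the inner step adapts the parameters, the outer step
# updates the base parameters so that adaptation works better next time.
import torch

w = torch.zeros(5, requires_grad=True)          # base parameters of a toy linear model
meta_opt = torch.optim.SGD([w], lr=1e-2)
inner_lr = 0.1

def predict(params, X):
    return X @ params

for _ in range(100):                            # meta-training iterations
    X_adapt, y_adapt = torch.randn(16, 5), torch.randn(16)   # adaptation split
    X_eval,  y_eval  = torch.randn(16, 5), torch.randn(16)   # evaluation split

    # Inner (adaptation) step, kept differentiable with create_graph=True.
    inner_loss = ((predict(w, X_adapt) - y_adapt) ** 2).mean()
    (grad,) = torch.autograd.grad(inner_loss, w, create_graph=True)
    w_adapted = w - inner_lr * grad

    # Outer (meta) step: the post-adaptation loss updates the base parameters.
    outer_loss = ((predict(w_adapted, X_eval) - y_eval) ** 2).mean()
    meta_opt.zero_grad()
    outer_loss.backward()
    meta_opt.step()
```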
A conditional deep generative model of people in natural images
We propose a deep generative model of humans in natural images which keeps 2D pose separated from other latent factors of variation, such as background scene and clothing. In contrast to methods that learn generative models of low-dimensional representations, e.g., segmentation masks and 2D skeletons, our single-stage end-to-end conditional VAE-GAN learns directly on the image space. The flexibility of this approach allows the sampling of people with independent variations of pose and appearance. Moreover, it enables the reconstruction of images conditioned on a given posture, allowing, for instance, pose transfer from one person to another. We validate our method on the Human3.6M dataset and achieve state-of-the-art results on the ChictopiaPlus benchmark. Our model, named Conditional-DGPose, outperforms the closest related work in the literature. It generates more realistic and accurate images regarding both body posture and image quality, learning the underlying factors of pose and appearance variation.
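A minimal conditional-VAE sketch (an MLP on flattened images with a plain VAE loss) illustrates how generation can be conditioned on a pose vector; the architecture, sizes and loss are simplifications of the convolutional conditional VAE-GAN described above.

```python
# Minimal conditional VAE: the encoder sees (image, pose), the decoder sees (z, pose),
# so sampling z with a fixed pose varies appearance while keeping the posture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalVAE(nn.Module):
    def __init__(self, x_dim=64 * 64 * 3, pose_dim=34, z_dim=64, h_dim=512):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim + pose_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim + pose_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim), nn.Sigmoid())

    def forward(self, x, pose):
        h = self.enc(torch.cat([x, pose], dim=1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization
        x_hat = self.dec(torch.cat([z, pose], dim=1))
        return x_hat, mu, logvar

def cvae_loss(x, x_hat, mu, logvar):
    # Assumes pixel values normalized to [0, 1].
    recon = F.binary_cross_entropy(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```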
Continual learning with tiny episodic memories
In continual learning (CL), an agent learns from a stream of tasks, leveraging prior experience to transfer knowledge to future tasks. It is an ideal framework to decrease the amount of supervision in existing learning algorithms. But for a successful knowledge transfer, the learner needs to remember how to perform previous tasks. One way to endow the learner with the ability to perform tasks seen in the past is to store a small memory, dubbed episodic memory, that stores a few examples from previous tasks and then to replay these examples when training for future tasks. In this work, we empirically analyze the effectiveness of a very small episodic memory in a CL setup where each training example is only seen once. Surprisingly, across four rather different supervised learning benchmarks adapted to CL, a very simple baseline that jointly trains on both examples from the current task and examples stored in the episodic memory significantly outperforms specifically designed CL approaches with and without episodic memory. Interestingly, we find that repetitive training on even tiny memories of past tasks does not harm generalization; on the contrary, it improves it, with gains between 7% and 17% when the memory is populated with a single example per class.
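The baseline described above can be sketched as a tiny episodic memory, filled here by reservoir sampling, whose samples are mixed into each training batch; the model and training step are placeholders.

```python
# Tiny episodic memory with reservoir sampling for experience replay.
import random

class EpisodicMemory:
    def __init__(self, capacity):
        self.capacity, self.buffer, self.seen = capacity, [], 0

    def add(self, example):
        """Reservoir sampling keeps each seen example with equal probability."""
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = example

    def sample(self, batch_size):
        if not self.buffer:
            return []
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

# Single-pass training loop sketch (model and train_step are placeholders):
# memory = EpisodicMemory(capacity=100)
# for x, y in stream:
#     batch = [(x, y)] + memory.sample(batch_size=10)   # joint batch: current + replay
#     train_step(model, batch)
#     memory.add((x, y))
```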
A semi-supervised deep generative model for human body analysis
Deep generative modelling for human body analysis is an emerging problem with many interesting applications. However, the latent space learned by such models is typically not interpretable, resulting in less flexible models. In this work, we adopt a structured semi-supervised approach and present a deep generative model for human body analysis where the body pose and the visual appearance are disentangled in the latent space. Such a disentanglement allows independent manipulation of pose and appearance, and hence enables applications such as pose transfer without being explicitly trained for such a task. In addition, our setting allows for semi-supervised pose estimation, relaxing the need for labelled data. We demonstrate the capabilities of our generative model on the Human3.6M and DeepFashion datasets.
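A high-level sketch of a semi-supervised objective in this spirit: labelled images contribute a supervised pose term alongside the generative loss, while unlabelled images contribute only the generative term with the pose inferred by the model. All model methods below are hypothetical placeholders, not the paper's implementation.

```python
# Hypothetical semi-supervised objective combining labelled and unlabelled batches.
def semi_supervised_loss(model, labelled_batch, unlabelled_batch, lam=1.0):
    x_l, pose_l = labelled_batch
    x_u = unlabelled_batch

    # Labelled data: generative loss with the known pose, plus a pose-prediction loss.
    gen_l = model.elbo(x_l, pose=pose_l)                          # placeholder method
    sup = model.pose_loss(model.predict_pose(x_l), pose_l)        # placeholder methods

    # Unlabelled data: the pose is treated as latent and inferred by the model.
    gen_u = model.elbo(x_u, pose=model.predict_pose(x_u))

    return gen_l + gen_u + lam * sup
```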