
    Environmental Disclosures and Size of Selected Indian Firms

    Business responsibility is an easily stated but hard to implement construct in the sustainability literature. Of the nine principles of Business Responsibility Reporting (BRR), the sixth addresses the environmental concerns of businesses. The objective of this study is to explain how corporate entities respond to Environmental Concerns (EC). The environmental concern of an organization has been gauged through the environmental disclosures these firms make under the sixth principle of BRR. A general lack of emphasis on environmental disclosures remains a key obstacle to encouraging Indian corporate houses to develop and adopt clean technologies, energy efficiency, and renewable energy initiatives. Clean technologies and environmental technologies play a pivotal role in ensuring adequate environmental disclosures. The moot point, however, is whether firms of a certain size disclose more on EC. There is ample literature establishing a relationship between size and environmental disclosure, but an organization cannot become green merely by appearing green through disclosures; it becomes green through its clean technology and energy initiatives. There has been a major shift in the sustainability literature towards prevention rather than damage followed by cure, and clean energy initiatives are the first steps towards preventing or minimizing environmental damage. The next important question, therefore, is what explains the variation in clean energy initiatives within an organization: is it the size of the firm or regulation that leads to disclosure of environmental concern (EC)? The relationship between firm size and environmental disclosures related to EC was found to be significant by applying a t-test to the selected sample of 40 companies, while the variation in clean technology initiatives in the same sample was analysed using binary logistic regression. Of the two independent variables, size and environmental concern, it is established that it is regulation rather than size that significantly pushes companies towards clean technologies and energy initiatives.
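
    The analysis described combines a two-sample t-test on disclosure scores with a binary logistic regression for clean technology initiatives. The following is a minimal sketch of that kind of analysis, assuming hypothetical column names, a hypothetical input file, and generic statistical libraries (the abstract does not name any software); it is an illustration, not the authors' code.

```python
# Illustrative sketch only: hypothetical column names and input file,
# not the authors' actual dataset or code.
import pandas as pd
from scipy import stats
import statsmodels.api as sm

# Hypothetical dataframe: one row per company (n = 40 in the study).
# 'size' = firm size proxy (e.g. log of total assets), 'ec_score' = environmental
# disclosure score under BRR principle 6, 'regulated' = 1 if subject to mandatory
# disclosure, 'clean_tech' = 1 if the firm reports clean technology/energy initiatives.
df = pd.read_csv("brr_sample.csv")  # hypothetical file

# t-test: do larger and smaller firms differ in environmental disclosure?
large = df[df["size"] >= df["size"].median()]["ec_score"]
small = df[df["size"] < df["size"].median()]["ec_score"]
t_stat, p_val = stats.ttest_ind(large, small, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_val:.3f}")

# Binary logistic regression: what explains clean technology initiatives?
X = sm.add_constant(df[["size", "regulated"]])
model = sm.Logit(df["clean_tech"], X).fit()
print(model.summary())
```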

    A memory-based method to select the number of relevant components in Principal Component Analysis

    We propose a new data-driven method to select the optimal number of relevant components in Principal Component Analysis (PCA). This new method applies to correlation matrices whose time autocorrelation function decays more slowly than an exponential, giving rise to long memory effects. In comparison with other methods available in the literature, our procedure does not rely on subjective evaluations and is computationally inexpensive. The underlying basic idea is to use a suitable factor model to analyse the residual memory after sequentially removing more and more components, and stopping the process when the maximum amount of memory has been accounted for by the retained components. We validate our methodology on both synthetic and real financial data, and find in all cases a clear and computationally superior answer entirely compatible with available heuristic criteria, such as cumulative variance and cross-validation.
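
    A rough illustration of the idea follows: strip principal components one at a time and track how much memory remains in the residuals. The memory proxy and stopping rule below are simplified stand-ins, not the paper's exact factor-model-based estimator, and the data are placeholders.

```python
# Rough illustration only: sequentially remove principal components and track how
# much memory remains in the residuals. The memory proxy and stopping rule are
# simplified stand-ins for the paper's estimator.
import numpy as np

def residual_memory(X, k, max_lag=50):
    """Remove the first k principal components of X (T x N) and return a simple
    memory proxy: the average absolute autocorrelation of the residuals."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    _, eigvec = np.linalg.eigh(cov)
    top = eigvec[:, ::-1][:, :k]            # leading k eigenvectors
    resid = Xc - Xc @ top @ top.T           # project out the first k components
    mem = 0.0
    for col in resid.T:
        c = col - col.mean()
        acf = [np.corrcoef(c[:-lag], c[lag:])[0, 1] for lag in range(1, max_lag + 1)]
        mem += np.mean(np.abs(acf))
    return mem / resid.shape[1]

X = np.random.randn(1000, 20)               # placeholder for real (e.g. financial) data
memories = [residual_memory(X, k) for k in range(0, 11)]
# Keep the smallest k after which removing further components no longer
# reduces the residual memory appreciably.
print(memories)
```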

    A cluster driven log-volatility factor model: a deepening on the source of the volatility clustering

    We introduce a new factor model for log volatilities that accounts for contributions, and performs dimensionality reduction, at a global level through the market and at a local level through clusters and their interactions. We do not assume a priori the number of clusters in the data; instead, we use the Directed Bubble Hierarchical Tree (DBHT) algorithm to fix the number of factors. We use the factor model to study how log volatility contributes to volatility clustering, quantifying the strength of the volatility clustering with a new non-parametric integrated proxy. Linking volatility to volatility clustering in this way, a global analysis reveals that only the market contributes to the volatility clustering, while a local analysis reveals that for some clusters the cluster itself contributes statistically to the volatility clustering effect. This is a significant advantage over other factor models, since it offers a way of selecting factors statistically while also keeping economically relevant factors. Finally, we show that the log-volatility factor model explains a similar amount of memory to a Principal Component Analysis (PCA) factor model and an exploratory factor model.
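
    A simplified sketch of a market-plus-cluster decomposition of log volatilities is given below. It is only an illustration of the structure described above: the paper fixes the number of clusters with DBHT, whereas here an ordinary hierarchical clustering with a hand-chosen cluster count stands in, and the data panel is a placeholder.

```python
# Simplified sketch of a market + cluster factor decomposition for log volatilities.
# DBHT is replaced here, for illustration only, by ordinary hierarchical clustering
# with an arbitrary cluster count; data are placeholders.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

log_vol = np.random.randn(1000, 30)           # placeholder T x N log-volatility panel

# Global (market) factor: cross-sectional average log volatility.
market = log_vol.mean(axis=1, keepdims=True)
resid = log_vol - market                      # strip the market contribution

# Local (cluster) factors: cluster the residual correlation structure.
corr = np.corrcoef(resid, rowvar=False)
dist = np.sqrt(2.0 * (1.0 - corr))            # correlation distance
Z = linkage(dist[np.triu_indices_from(dist, k=1)], method="average")
labels = fcluster(Z, t=4, criterion="maxclust")   # 4 clusters chosen arbitrarily here

cluster_factors = {c: resid[:, labels == c].mean(axis=1) for c in np.unique(labels)}
# Each series can then be modelled as market factor + its cluster factor + noise,
# and the memory explained by each component compared, as in the paper.
```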

    Estimating Time to Clear Pendency of Cases in High Courts in India using Linear Regression

    The Indian judiciary is burdened with millions of cases lying pending in its courts at all levels. The High Court National Judicial Data Grid (HC-NJDG) indexes all the cases pending in the high courts and publishes the data publicly. In this paper, we analyze data that we collected from the HC-NJDG portal on 229 randomly chosen days between August 31, 2017 and March 22, 2020, including these dates; the data analyzed in the paper thus spans a period of more than two and a half years. We show that: 1) the number of pending cases in most of the high courts is increasing linearly with time; 2) the case load on judges in various high courts is very unevenly distributed, making judges of some high courts a hundred times more loaded than others; 3) for some high courts it may take even a hundred years to clear the pending cases if proper measures are not taken. We also suggest some policy changes that may help clear the pendency within a fixed time of either five or fifteen years. Finally, we find that the rate of institution of cases in high courts can be easily handled by the current sanctioned strength; extra judges are needed only to clear earlier backlogs. Comment: 12 pages, 9 figures, JURISIN 2022. arXiv admin note: text overlap with arXiv:2307.1061
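
    A minimal sketch of the extrapolation idea follows, with made-up numbers: fit pending cases against time with ordinary least squares, then estimate roughly how long clearing the backlog would take under an assumed extra disposal rate. None of the figures below come from the paper.

```python
# Minimal sketch with hypothetical data: linear fit of pending cases vs. time,
# then a rough time-to-clear estimate under an assumed extra disposal rate.
import numpy as np

days = np.array([0, 100, 200, 300, 400, 500])                          # days since first observation
pending = np.array([4.10e6, 4.14e6, 4.19e6, 4.23e6, 4.28e6, 4.33e6])   # pending cases (made up)

slope, intercept = np.polyfit(days, pending, deg=1)                    # linear growth model
print(f"net growth: {slope:.0f} cases/day")

# If disposal capacity were raised by `extra_rate` cases/day (e.g. by appointing
# additional judges), the time to clear the current backlog would be roughly:
extra_rate = 3000.0                                                     # hypothetical extra disposals/day
current_backlog = pending[-1]
if extra_rate > slope:
    years_to_clear = current_backlog / ((extra_rate - slope) * 365.0)
    print(f"approx. {years_to_clear:.1f} years to clear the backlog")
```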

    Efficacy of Intense Pulse Light with Triple Combination Cream Versus Triple Combination Cream alone in the Treatment of Melasma

    Introduction: Various studies have explored the use of intense pulsed light (IPL) therapy in treating melasma, but only a few randomized clinical trials have evaluated the combination of triple combination cream (TCC) with IPL so far. Objective: This study compared the efficacy and safety of the combination of IPL and TCC with TCC alone in treating melasma. Material and Methods: Sixty patients with melasma were enrolled in this assessor-blinded, parallel-group randomized controlled study. Thirty patients were treated with IPL (15 J/cm2, two sessions at 2-week intervals) and TCC (hydroquinone 2%, tretinoin 0.025%, fluocinolone acetonide 0.01%) at night plus broad-spectrum sunscreen during the day, whereas the other group received only TCC and broad-spectrum sunscreen. The median percentage reduction in the Melasma Area and Severity Index (MASI) and the physician's global assessment scale were assessed at 12 weeks to determine the efficacy of the treatment. The incidence of adverse effects at each follow-up and relapse at 16 weeks were also noted during the study period as secondary outcome measures. Results: The median reduction in MASI achieved at 12 weeks from baseline was 48% in the IPL+TCC group and 13.1% in the TCC group. Relapse was seen in 7.14% and 13.04% of patients in the IPL+TCC and TCC-alone groups respectively at 16 weeks; however, this difference was not statistically significant (p > 0.05). Conclusion: Our study supports that IPL and TCC are more effective than TCC therapy alone in treating melasma.
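
    As a hedged illustration of how such group comparisons are typically computed (the abstract does not name the tests used), the sketch below compares median MASI reductions with a Mann-Whitney U test and relapse proportions with Fisher's exact test. The patient-level arrays are placeholders, and the relapse denominators are inferred from the quoted percentages.

```python
# Hedged illustration only: the tests below are assumptions, not the trial's
# stated methods, and the patient-level data are placeholders.
import numpy as np
from scipy import stats

# Percentage reduction in MASI at 12 weeks per patient (placeholder arrays).
ipl_tcc = np.random.uniform(30, 65, size=28)   # hypothetical IPL + TCC group
tcc_only = np.random.uniform(0, 30, size=23)   # hypothetical TCC-only group

# Medians compared with a non-parametric test (Mann-Whitney U).
u_stat, p_masi = stats.mannwhitneyu(ipl_tcc, tcc_only, alternative="two-sided")
print(f"median reduction: {np.median(ipl_tcc):.1f}% vs {np.median(tcc_only):.1f}%, p = {p_masi:.3f}")

# Relapse proportions at 16 weeks compared with Fisher's exact test.
# 2/28 = 7.14% and 3/23 = 13.04% match the quoted rates; denominators are inferred.
table = [[2, 28 - 2], [3, 23 - 3]]
odds_ratio, p_relapse = stats.fisher_exact(table)
print(f"relapse comparison: p = {p_relapse:.3f}")
```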

    Enhancing healthcare recommendation: transfer learning in deep convolutional neural networks for Alzheimer disease detection

    Neurodegenerative disorders such as Alzheimer’s Disease (AD) and Mild Cognitive Impairment (MCI) significantly impact brain function and cognition. Advanced neuroimaging techniques, particularly Magnetic Resonance Imaging (MRI), play a crucial role in diagnosing these conditions by detecting structural abnormalities. This study leverages the ADNI and OASIS datasets, renowned for their extensive MRI data, to develop effective models for detecting AD and MCI. The research conducted three sets of tests, one multi-class classification (AD vs. Cognitively Normal (CN) vs. MCI) and two binary classifications (AD vs. CN, and MCI vs. CN), to evaluate the performance of models trained on the ADNI and OASIS datasets. Key preprocessing techniques such as Gaussian filtering, contrast enhancement, and resizing were applied to both datasets. Additionally, skull stripping using U-Net was utilized to extract features by removing the skull. Several prominent deep learning architectures, including DenseNet-201, EfficientNet-B0, ResNet-50, ResNet-101, and ResNet-152, were investigated to identify subtle patterns associated with AD and MCI. Transfer learning techniques were employed to enhance model performance, leveraging pre-trained models for improved AD and MCI detection. ResNet-101 exhibited superior performance compared to other models, achieving 98.21% accuracy on the ADNI dataset and 97.45% accuracy on the OASIS dataset in multi-class classification tasks encompassing AD, CN, and MCI. It also performed well in binary classification tasks distinguishing AD from CN. ResNet-152 excelled particularly in binary classification between MCI and CN on the OASIS dataset. These findings underscore the utility of deep learning models in accurately identifying and distinguishing neurodegenerative diseases, showcasing their potential for enhancing clinical diagnosis and treatment monitoring.
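
    A minimal transfer-learning sketch in PyTorch is shown below, assuming 2D MRI slices resized to 224x224 and three classes (AD / CN / MCI). The preprocessing pipeline (Gaussian filtering, contrast enhancement, U-Net skull stripping) and the study's exact training setup are not reproduced; this is only an illustration of the general technique.

```python
# Minimal transfer-learning sketch: ImageNet-pretrained ResNet-101 with a new
# three-class head. Illustrative only; not the study's training code.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 3                                   # AD vs CN vs MCI
model = models.resnet101(weights=models.ResNet101_Weights.IMAGENET1K_V1)

# Optionally freeze the pretrained backbone and train only the new head.
for param in model.parameters():
    param.requires_grad = False

model.fc = nn.Linear(model.fc.in_features, num_classes)   # new classification head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch (replace with an MRI DataLoader).
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_classes, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```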

    Measurement of the tt̄bb̄ production cross section in the all-jet final state in pp collisions at √s = 13 TeV

    A measurement of the production cross section of top quark pairs in association with two b jets (tt̄bb̄) is presented using data collected in proton-proton collisions at √s = 13 TeV by the CMS detector at the LHC, corresponding to an integrated luminosity of 35.9 fb⁻¹. The cross section is measured in the all-jet decay channel of the top quark pair by selecting events containing at least eight jets, of which at least two are identified as originating from the hadronization of b quarks. A combination of multivariate analysis techniques is used to reduce the large background from multijet events not containing a top quark pair, and to help discriminate between jets originating from top quark decays and other additional jets. The cross section is determined for the total phase space to be 5.5 ± 0.3 (stat) +1.6/-1.3 (syst) pb and is also measured for two fiducial tt̄bb̄ definitions. The measured cross sections are found to be larger than theoretical predictions by a factor of 1.5-2.4, corresponding to 1-2 standard deviations.
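
    The two analysis ingredients mentioned above, an all-jet selection (at least eight jets with at least two b-tagged) and a multivariate discriminant against the multijet background, are sketched below in toy form. This is not CMS software; the event format, feature names, and data are all hypothetical, and a generic gradient-boosted classifier stands in for the analysis' multivariate techniques.

```python
# Toy illustration (not CMS software): all-jet event selection plus a generic
# multivariate discriminant against the multijet background. Data are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def passes_selection(event):
    """Event is a dict with per-jet pT and b-tag decisions (hypothetical format)."""
    jets = [pt for pt in event["jet_pt"] if pt > 30.0]        # jet pT threshold in GeV (assumed)
    n_btag = sum(event["jet_btag"])                           # b-tagged jet count
    return len(jets) >= 8 and n_btag >= 2

# Hypothetical per-event features for the discriminant
# (e.g. jet multiplicities, b-tag scores, angular/topological variables).
X = np.random.randn(5000, 10)
y = np.random.randint(0, 2, size=5000)        # 1 = ttbb-like, 0 = multijet-like

clf = GradientBoostingClassifier(n_estimators=200, max_depth=3)
clf.fit(X, y)
scores = clf.predict_proba(X)[:, 1]           # discriminant used to suppress the multijet background
```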

    Search for top squark pair production using dilepton final states in pp collision data collected at √s = 13 TeV

    A search is presented for supersymmetric partners of the top quark (top squarks) in final states with two oppositely charged leptons (electrons or muons), jets identified as originating from b quarks, and missing transverse momentum. The search uses data from proton-proton collisions at √s = 13 TeV collected with the CMS detector, corresponding to an integrated luminosity of 137 fb⁻¹. Hypothetical signal events are efficiently separated from the dominant top quark pair production background with requirements on the significance of the missing transverse momentum and on transverse mass variables. No significant deviation is observed from the expected background. Exclusion limits are set in the context of simplified supersymmetric models with pair-produced lightest top squarks. For top squarks decaying exclusively to a top quark and a lightest neutralino, lower limits are placed at 95% confidence level on the masses of the top squark and the neutralino up to 925 and 450 GeV, respectively. If the decay proceeds via an intermediate chargino, the corresponding lower limits on the mass of the lightest top squark are set up to 850 GeV for neutralino masses below 420 GeV. For top squarks undergoing a cascade decay through charginos and sleptons, the mass limits reach up to 1.4 TeV and 900 GeV, respectively, for the top squark and the lightest neutralino.
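
    As an illustration of the transverse mass variables mentioned above, the helper below computes the standard single-lepton transverse mass mT = sqrt(2 pT(lep) MET (1 - cos Δφ)). It is not the CMS analysis code; the full search relies on more involved quantities (e.g. mT2 and the missing transverse momentum significance), which require dedicated tools not sketched here.

```python
# Illustrative helper (not CMS analysis code) for a standard transverse mass variable:
# mT = sqrt(2 * pT_lep * MET * (1 - cos(dphi))).
import numpy as np

def transverse_mass(lep_pt, lep_phi, met, met_phi):
    """Transverse mass of a lepton + missing-momentum system (GeV)."""
    dphi = np.arctan2(np.sin(lep_phi - met_phi), np.cos(lep_phi - met_phi))
    return np.sqrt(2.0 * lep_pt * met * (1.0 - np.cos(dphi)))

# Example: a 60 GeV lepton back-to-back with 150 GeV of missing transverse momentum.
print(transverse_mass(60.0, 0.0, 150.0, np.pi))   # ~190 GeV
```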

    Development and validation of HERWIG 7 tunes from CMS underlying-event measurements

    This paper presents new sets of parameters (“tunes”) for the underlying-event model of the HERWIG 7 event generator. These parameters control the description of multiple-parton interactions (MPI) and colour reconnection in HERWIG 7, and are obtained from a fit to minimum-bias data collected by the CMS experiment at √s = 0.9, 7, and 13 TeV. The tunes are based on the NNPDF 3.1 next-to-next-to-leading-order parton distribution function (PDF) set for the parton shower, and either a leading-order or next-to-next-to-leading-order PDF set for the simulation of MPI and the beam remnants. Predictions utilizing the tunes are produced for event shape observables in electron-positron collisions, and for minimum-bias, inclusive jet, top quark pair, and Z and W boson events in proton-proton collisions, and are compared with data. Each of the new tunes describes the data at a reasonable level, and the tunes using a leading-order PDF for the simulation of MPI provide the best description of the data.
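
    In spirit, a tune is the set of generator parameters that minimizes a chi-square between predicted and measured observable bins. The toy sketch below illustrates only that idea: the surrogate prediction function, parameter names, and numbers are all hypothetical, and real tunes are derived with dedicated tooling and full HERWIG 7 simulation rather than a closed-form surrogate.

```python
# Toy illustration of the tuning idea: minimize a chi-square between predicted and
# measured bins. The surrogate prediction, parameter names, and numbers are hypothetical.
import numpy as np
from scipy.optimize import minimize

data = np.array([1.00, 0.85, 0.70, 0.55])         # measured bins (hypothetical)
data_err = np.array([0.05, 0.05, 0.04, 0.04])     # uncertainties (hypothetical)

def prediction(params):
    """Hypothetical surrogate for the generator response in each observable bin,
    as a function of two MPI-related tune parameters."""
    p_t0, reco_prob = params
    bins = np.arange(1, 5)
    return 1.1 * np.exp(-0.1 * p_t0 * bins) * (1.0 - 0.2 * reco_prob * bins / 4.0)

def chi2(params):
    return np.sum(((prediction(params) - data) / data_err) ** 2)

result = minimize(chi2, x0=[2.0, 0.5], method="Nelder-Mead")
print("best-fit tune parameters:", result.x, "chi2:", result.fun)
```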