
    The use of the Altman model in evaluation of economic performance of a corporation in the crisis period in the building sector in the Czech Republic

    The article is focused on verifying the presumption of poor financial management in companies operating in the building sector. Many authors have written about the financial situation of enterprises in the building sector, especially after the economic crisis of 2008; some of them claim, and their results confirm, that the main reason for the bankruptcy of these companies was not the economic crisis itself but mainly poor financial management. Our results, which were obtained primarily by the method of financial analysis and further by mathematical and statistical methods, support this statement. Within the mathematical and statistical methods, return on equity was used as an explanatory variable, mainly because all variants of the Altman Z-Score are based on the calculation of ratio indicators, which do not include this type of return. Based on the conducted tests, it is possible to state that it is highly desirable for the monitored enterprises in the building industry to reach positive values of return on equity.
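
    As a point of reference, the sketch below shows how the classic (1968) Altman Z-Score is computed from ratio indicators, and how return on equity, the explanatory variable used in the article, sits outside those inputs. The coefficients are the standard published ones; the balance-sheet figures in the example are hypothetical and not taken from the study.

```python
# Minimal sketch of the classic (1968) Altman Z-Score and, for contrast,
# return on equity (ROE), which is not among the Z-Score ratio indicators.
# The balance-sheet figures below are hypothetical and for illustration only.

def altman_z_score(working_capital, retained_earnings, ebit,
                   market_value_equity, sales, total_assets, total_liabilities):
    x1 = working_capital / total_assets           # liquidity
    x2 = retained_earnings / total_assets         # accumulated profitability
    x3 = ebit / total_assets                      # operating profitability
    x4 = market_value_equity / total_liabilities  # leverage
    x5 = sales / total_assets                     # asset turnover
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

def return_on_equity(net_income, shareholders_equity):
    return net_income / shareholders_equity

if __name__ == "__main__":
    z = altman_z_score(working_capital=2.0, retained_earnings=3.5, ebit=1.1,
                       market_value_equity=6.0, sales=9.0,
                       total_assets=12.0, total_liabilities=5.0)
    roe = return_on_equity(net_income=0.8, shareholders_equity=7.0)
    # Classic interpretation bands: Z > 2.99 "safe", Z < 1.81 "distress".
    print(f"Z-Score: {z:.2f}, ROE: {roe:.2%}")
```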

    Parametric methods outperformed non-parametric methods in comparisons of discrete numerical variables

    Background: The number of events per individual is a widely reported variable in medical research papers. Such variables are the most common representation of the general variable type called discrete numerical. There is currently no consensus on how to compare and present such variables, and recommendations are lacking. The objective of this paper is to present recommendations for the analysis and presentation of results for discrete numerical variables. Methods: Two simulation studies were used to investigate the performance of hypothesis tests and confidence interval methods for variables with outcomes {0, 1, 2}, {0, 1, 2, 3}, {0, 1, 2, 3, 4}, and {0, 1, 2, 3, 4, 5}, using the difference between the means as an effect measure. Results: The Welch U test (the T test with adjustment for unequal variances) and its associated confidence interval performed well in almost all situations considered. The Brunner-Munzel test also performed well, except for small sample sizes (10 in each group). The ordinary T test, the Wilcoxon-Mann-Whitney test, the percentile bootstrap interval, and the bootstrap-t interval did not perform satisfactorily. Conclusions: The difference between the means is an appropriate effect measure for comparing two independent discrete numerical variables that have both lower and upper bounds. To analyze this problem, we encourage more frequent use of parametric hypothesis tests and confidence intervals.
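
    A toy simulation in the spirit of the abstract (not the authors' actual simulation design): two groups with discrete outcomes in {0, ..., 5} are compared with the Welch T test and the Wilcoxon-Mann-Whitney test from SciPy, and the type I error rate is estimated under the null. The group probabilities, sample sizes, and repetition count are illustrative assumptions.

```python
# Toy simulation: compare the Welch T test and the Wilcoxon-Mann-Whitney test
# on discrete numerical outcomes {0,...,5}. Distributions, sample sizes, and
# repetition count are illustrative assumptions, not the paper's design.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
outcomes = np.arange(6)                                 # possible values 0..5
p_a = np.array([0.30, 0.25, 0.20, 0.15, 0.07, 0.03])    # group A distribution
p_b = p_a.copy()                                        # identical -> null is true
n, reps, alpha = 25, 2000, 0.05

welch_rejects = wmw_rejects = 0
for _ in range(reps):
    a = rng.choice(outcomes, size=n, p=p_a)
    b = rng.choice(outcomes, size=n, p=p_b)
    welch_p = stats.ttest_ind(a, b, equal_var=False).pvalue       # Welch adjustment
    wmw_p = stats.mannwhitneyu(a, b, alternative="two-sided").pvalue
    welch_rejects += welch_p < alpha
    wmw_rejects += wmw_p < alpha

# Under the null, the empirical rejection rate should stay close to alpha.
print(f"Welch type I error: {welch_rejects / reps:.3f}")
print(f"WMW type I error:   {wmw_rejects / reps:.3f}")
```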

    Application of infrared thermography in computer aided diagnosis

    The invention of thermography in the 1950s posed a formidable problem to the research community: what is the relationship between disease and the heat radiation captured with infrared (IR) cameras? The research community responded with a continuous effort to find this crucial relationship. This effort was aided by advances in processing techniques and by the improved sensitivity and spatial resolution of thermal sensors. However, despite this progress, fundamental issues with this imaging modality still remain. The main problem is that the link between disease and heat radiation is complex and in many cases even non-linear. Furthermore, the changes in heat radiation and in radiation pattern that indicate disease are minute. On a technical level, this places high demands on image capture and processing. On a more abstract level, these problems lead to inter-observer variability, and on an even more abstract level they lead to a lack of trust in this imaging modality. In this review, we adopt the position that these problems can only be solved through a strict application of scientific principles and objective performance assessment. Computing machinery is inherently objective; this helps us to apply scientific principles in a transparent way and to assess the resulting performance. As a consequence, we aim to promote thermography-based Computer-Aided Diagnosis (CAD) systems. Another benefit of CAD systems comes from the fact that their diagnostic accuracy is linked to the capability of the computing machinery and, in general, computers become ever more potent. We predict that a pervasive application of computers and networking technology in medicine will help us to overcome the shortcomings of any single imaging modality and will pave the way for integrated health care systems that maximize the quality of patient care.

    Identifying Activated T Cells in Reconstituted RAG Deficient Mice Using Retrovirally Transduced Pax5 Deficient Pro-B Cells

    Various methods have been used to identify activated T cells, such as binding of MHC tetramers and expression of cell surface markers, in addition to cytokine-based assays. In contrast to these published methods, we here describe a strategy to identify T cells that respond to any antigen and to track the fate of these activated T cells. We constructed a retroviral double-reporter construct with enhanced green fluorescent protein (EGFP) and a far-red fluorescent protein from Heteractis crispa (HcRed). LTR-driven EGFP expression was used to enrich and identify transduced cells, while HcRed expression is driven by the CD40 ligand (CD40L) promoter, which is inducible and enables the identification and cell-fate tracing of T cells that have responded to infection/inflammation. Pax5 deficient pro-B cells, which can give rise to different hematopoietic cells including T cells, were retrovirally transduced with this double-reporter cassette and were used to reconstitute the T cell pool in RAG1 deficient mice that lack T and B cells. Using flow cytometry and histology, we identified activated T cells that had developed from Pax5 deficient pro-B cells and responded to infection with the bacterial pathogen Listeria monocytogenes. Microscopic examination of organ sections allowed visual identification of HcRed-expressing cells. To further characterize the immune response to a given stimulus, this strategy can be easily adapted to identify other cells of the hematopoietic system that respond to infection/inflammation. This can be achieved by using an inducible reporter, choosing the appropriate promoter, and reconstituting mice lacking the cells of interest by injecting gene-modified Pax5 deficient pro-B cells.
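
    A minimal sketch of the kind of two-colour gating the flow cytometry analysis of this reporter implies: transduced cells are first selected on EGFP, and activated cells are then identified by HcRed. The fluorescence thresholds, column names, and simulated intensities here are hypothetical placeholders, not the authors' gating strategy or data.

```python
# Hypothetical two-colour gating sketch: select transduced (EGFP+) events,
# then identify responding T cells by CD40L-promoter-driven HcRed expression.
# Thresholds and the simulated intensities are placeholders, not real gates.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
events = pd.DataFrame({
    "EGFP":  rng.lognormal(mean=2.0, sigma=1.0, size=10_000),
    "HcRed": rng.lognormal(mean=1.0, sigma=1.2, size=10_000),
})

EGFP_GATE, HCRED_GATE = 50.0, 30.0          # hypothetical intensity cut-offs

transduced = events[events["EGFP"] > EGFP_GATE]            # reporter-positive cells
activated = transduced[transduced["HcRed"] > HCRED_GATE]   # CD40L promoter active

print(f"Transduced (EGFP+): {len(transduced)} of {len(events)} events")
print(f"Activated (EGFP+ HcRed+): {len(activated)} "
      f"({len(activated) / max(len(transduced), 1):.1%} of transduced)")
```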

    Superconductivity close to the Mott state: From condensed-matter systems to superfluidity in optical lattices

    Since the discovery of high-temperature superconductivity in 1986 by Bednorz and Mueller, great efforts have been devoted to finding out how and why it works. From the d-wave symmetry of the order parameter, the importance of antiferromagnetic fluctuations, and the presence of a mysterious pseudogap phase close to the Mott state, one can conclude that high-Tc superconductors are clearly distinguishable from the well-understood BCS superconductors. The d-wave superconducting state can be understood through a Gutzwiller-type projected BCS wave-function. In this review article, we revisit the Hubbard model at half-filling and focus on the emergence of exotic superconductivity with d-wave symmetry in the vicinity of the Mott state, starting from ladder systems and then studying the dimensional crossovers to higher dimensions. This allows us to confirm that short-range antiferromagnetic fluctuations can mediate superconductivity with d-wave symmetry. Ladders are also nice prototype systems that allow us to demonstrate the truncation of the Fermi surface and the emergence of a Resonating Valence Bond (RVB) state with preformed pairs in the vicinity of the Mott state. In two dimensions, a similar scenario emerges from renormalization group arguments. We also discuss theoretical predictions for the d-wave superconducting phase as well as the pseudogap phase, and address the crossover to the overdoped regime. Finally, cold atomic systems with tunable parameters also provide a complementary insight into this outstanding problem.
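
    For orientation, the standard textbook form of the Hubbard Hamiltonian at the centre of the review, together with the d-wave gap function it is argued to produce near half-filling, is written out below in conventional notation; this is not reproduced from the article itself, and sign and normalisation conventions vary between authors.

```latex
% Hubbard model: nearest-neighbour hopping t, on-site repulsion U.
H = -t \sum_{\langle i,j \rangle, \sigma}
      \left( c^{\dagger}_{i\sigma} c^{\phantom{\dagger}}_{j\sigma} + \text{h.c.} \right)
    + U \sum_{i} n_{i\uparrow} n_{i\downarrow}

% d_{x^2-y^2}-wave gap function on the square lattice (one common convention):
\Delta(\mathbf{k}) = \frac{\Delta_0}{2} \left( \cos k_x - \cos k_y \right)
```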

    The Absolute of Advaita and the Spirit of Hegel: Situating Vedānta on the Horizons of British Idealisms

    Purpose: A significant volume of philosophical literature produced by Indian academic philosophers in the first half of the twentieth century can be placed under the rubric of ‘Śaṁkara and X’, where X is Hegel, or a German or a British philosopher who had commented on, elaborated or critiqued the Hegelian system. We will explore in this essay the philosophical significance of Hegel-influenced systems as an intellectual conduit for these Indo-European conceptual encounters, and highlight how for some Indian philosophers the British variations on Hegelian systems were both a point of entry into debates over ‘idealism’ and ‘realism’ in contemporary European philosophy and an occasion for defending Advaita against the charge of propounding a doctrine of world illusionism. Methodology: Our study of the philosophical enquiries of A.C. Mukerji, P.T. Raju, and S.N.L. Shrivastava indicates that they developed distinctive styles of engaging with Hegelian idealisms as they reconfigured certain aspects of the classical Advaita of Śaṁkara through contemporary vocabulary. Result and Conclusion: These appropriations of Hegelian idioms can be placed under three overlapping styles: (a) Mukerji was partly involved in locating Advaita in an intermediate conceptual space between, on the one hand, Kantian agnosticism and, on the other hand, Hegelian absolutism; (b) Raju and Shrivastava presented Advaitic thought as the fulfilment of certain insights of Hegel and F.H. Bradley; and (c) the interrogations of Hegel’s ‘idealism’ provided several Indian academic philosophers with a hermeneutic opportunity to revisit the vexed question of whether the ‘idealism’ of Śaṁkara reduces the phenomenal world, structured by māyā, to a bundle of ideas.

    Prediction of cardiovascular risk using Framingham, ASSIGN and QRISK2: how well do they predict individual rather than population risk?

    BACKGROUND: The objective of this study was to evaluate the performance of risk scores (Framingham, ASSIGN and QRISK2) in predicting high cardiovascular disease (CVD) risk in individuals rather than populations. METHODS AND FINDINGS: This study included 1.8 million persons without CVD and without prior statin prescribing, using the Clinical Practice Research Datalink, which contains electronic medical records of the general population registered with UK general practices. Individual CVD risks were estimated using competing risk regression models. Individual differences between the 10-year CVD risks predicted by the risk scores and by the competing risk models were estimated; the population was divided into 20 subgroups based on predicted risk. CVD outcomes occurred in 69,870 persons. In the subgroup with the lowest risks, risk predictions by QRISK2 were similar to the individual risks predicted using our competing risk model (99.9% of people had differences of less than 2%); in the subgroup with the highest risks, risk predictions varied greatly (only 13.3% of people had differences of less than 2%). Larger deviations between QRISK2 and our individual predicted risks occurred with calendar year, different ethnicities, diabetes mellitus, and the number of records for medical events in the electronic health records in the year before the index date. A QRISK2 estimate of low 10-year CVD risk (<15%) was confirmed by Framingham, ASSIGN and our individual predicted risks in 89.8% of people, while an estimate of high 10-year CVD risk (≥20%) was confirmed in only 48.6% of people. The majority of cases occurred in people who had a predicted 10-year CVD risk of less than 20%. CONCLUSIONS: Application of existing CVD risk scores may result in considerable misclassification of high-risk status. The current practice of using a constant intervention threshold for all patients, together with the use of different scoring methods, may inadvertently create an arbitrary classification of high CVD risk.
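
    A minimal sketch of the kind of threshold comparison the abstract describes: individuals are classified as high risk at a fixed 10-year risk cut-off under two different scores, and the proportion of classifications that agree is measured. The simulated risk values and the score names are hypothetical placeholders; only the 20% cut-off mirrors the high-risk threshold mentioned in the abstract.

```python
# Toy illustration of threshold-based risk classification agreement between
# two 10-year CVD risk scores. Risk values are simulated placeholders, not
# CPRD data; the 20% cut-off mirrors the high-risk threshold in the abstract.
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Hypothetical predicted 10-year risks from two scores that are correlated
# but not identical (e.g. a pooled score vs an individualised model).
baseline = rng.beta(a=2.0, b=12.0, size=n)             # spread of population risk
score_a = np.clip(baseline + rng.normal(0, 0.02, n), 0, 1)
score_b = np.clip(baseline + rng.normal(0, 0.05, n), 0, 1)

THRESHOLD = 0.20                                        # "high risk" cut-off
high_a = score_a >= THRESHOLD
high_b = score_b >= THRESHOLD

confirmed = (high_a & high_b).sum() / max(high_a.sum(), 1)
print(f"High-risk by score A: {high_a.mean():.1%} of people")
print(f"Of those, also high-risk by score B: {confirmed:.1%}")
```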