504 research outputs found

    Analyzing Factors Accountable for Success in Completing University Studies

    The analysis of success in completing university studies, and implicitly of student dropout rates, has recently become a topic of great interest for researchers and policy-makers. This may be explained by the fact that completion rates play an important role in assessing the quality of teaching, which in turn influences prospective students in choosing a higher education institution. Many publications investigating success and dropout rates focus on institutional policies, measuring the effectiveness of specific institutional measures to improve graduation success and reduce student dropout. In this paper we analyze several factors accountable for success in higher education. We perform an exploratory factor analysis on a dataset consisting of 61 items, using principal-component extraction followed by Varimax rotation. The results enable us to identify several relevant factors accountable for success in higher education and to assess their impact on reducing the dropout rate.
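The extraction-plus-rotation step can be sketched in a few lines. The snippet below uses synthetic data as a stand-in for the 61-item survey (the actual items and responses are not reproduced here), computes principal-component loadings from the correlation matrix, and applies Kaiser's Varimax rotation:

```python
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Varimax rotation of a factor-loading matrix (Kaiser's algorithm)."""
    p, k = loadings.shape
    R = np.eye(k)
    var = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        # SVD of the Varimax criterion gradient gives the next rotation.
        u, s, vt = np.linalg.svd(
            loadings.T @ (L**3 - (gamma / p) * L @ np.diag(np.sum(L**2, axis=0)))
        )
        R = u @ vt
        new_var = s.sum()
        if new_var < var * (1 + tol):
            break
        var = new_var
    return loadings @ R

rng = np.random.default_rng(0)
# Synthetic stand-in for the 61-item dataset (NOT the authors' data).
n_items, n_factors = 61, 5
X = rng.normal(size=(500, n_items))
corr = np.corrcoef(X, rowvar=False)
eigval, eigvec = np.linalg.eigh(corr)
order = np.argsort(eigval)[::-1][:n_factors]
# Principal-component loadings: eigenvectors scaled by sqrt(eigenvalues).
loadings = eigvec[:, order] * np.sqrt(eigval[order])
rotated = varimax(loadings)
print(rotated.shape)  # (61, 5)
```

Because the rotation is orthogonal, the communalities (row sums of squared loadings) are unchanged; only the distribution of loading mass across factors is simplified.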

    Determinants of Success in Graduation: An Empirical Study

    One of the most concerning problems that higher education institutions have faced recently is the high dropout rate of students enrolled at all levels of study. Consequently, the dropout phenomenon and the determinants of success in completing university studies are increasingly attracting the attention of both researchers and decision makers. This paper revisits the problem of identifying the factors accountable for success in higher education and complements the aspects addressed in Agapie et al. (2020) by suggesting a basic set of measures based on Agglomerative Hierarchical Clustering (AHC).
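As a rough illustration of the AHC step (not the authors' actual data or feature set), the sketch below clusters synthetic student records with Ward linkage via SciPy; the column meanings are hypothetical:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(1)
# Synthetic student records (illustrative stand-ins): columns could represent
# admission score, first-year average, and attendance rate.
X = np.vstack([
    rng.normal(loc=[8.5, 9.0, 0.9], scale=0.3, size=(40, 3)),  # likely graduates
    rng.normal(loc=[6.0, 5.5, 0.5], scale=0.3, size=(40, 3)),  # at-risk students
])

# Ward linkage merges, at each step, the pair of clusters whose union yields the
# smallest increase in within-cluster variance, building the AHC dendrogram.
Z = linkage(X, method="ward")

# Cut the dendrogram into two groups.
labels = fcluster(Z, t=2, criterion="maxclust")
```

Cutting the dendrogram at different heights yields coarser or finer student segments, which is what makes AHC convenient for deriving tiered intervention measures.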

    Multidifferential study of identified charged hadron distributions in Z-tagged jets in proton-proton collisions at \sqrt{s} = 13 TeV

    Jet fragmentation functions are measured for the first time in proton-proton collisions for charged pions, kaons, and protons within jets recoiling against a Z boson. The charged-hadron distributions are studied longitudinally and transversely to the jet direction for jets with transverse momentum 20 < p_{T} < 100 GeV and in the pseudorapidity range 2.5 < \eta < 4. The data sample was collected with the LHCb experiment at a center-of-mass energy of 13 TeV, corresponding to an integrated luminosity of 1.64 fb^{-1}. Triple differential distributions as a function of the hadron longitudinal momentum fraction, hadron transverse momentum, and jet transverse momentum are also measured for the first time. This helps constrain transverse-momentum-dependent fragmentation functions. Differences in the shapes and magnitudes of the measured distributions for the different hadron species provide insights into the hadronization process for jets predominantly initiated by light quarks.
    Comment: All figures and tables, along with machine-readable versions and any supplementary material and additional information, are available at https://cern.ch/lhcbproject/Publications/p/LHCb-PAPER-2022-013.html (LHCb public pages).

    Study of the B^{-} \to \Lambda_{c}^{+} \bar{\Lambda}_{c}^{-} K^{-} decay

    The decay B^{-} \to \Lambda_{c}^{+} \bar{\Lambda}_{c}^{-} K^{-} is studied in proton-proton collisions at a center-of-mass energy of \sqrt{s} = 13 TeV using data corresponding to an integrated luminosity of 5 fb^{-1} collected by the LHCb experiment. In the \Lambda_{c}^{+} K^{-} system, the \Xi_{c}(2930)^{0} state observed at the BaBar and Belle experiments is resolved into two narrower states, \Xi_{c}(2923)^{0} and \Xi_{c}(2939)^{0}, whose masses and widths are measured to be

        m(\Xi_{c}(2923)^{0}) = 2924.5 \pm 0.4 \pm 1.1 MeV,
        m(\Xi_{c}(2939)^{0}) = 2938.5 \pm 0.9 \pm 2.3 MeV,
        \Gamma(\Xi_{c}(2923)^{0}) = 4.8 \pm 0.9 \pm 1.5 MeV,
        \Gamma(\Xi_{c}(2939)^{0}) = 11.0 \pm 1.9 \pm 7.5 MeV,

    where the first uncertainties are statistical and the second systematic. The results are consistent with a previous LHCb measurement using a prompt \Lambda_{c}^{+} K^{-} sample. Evidence of a new \Xi_{c}(2880)^{0} state is found with a local significance of 3.8\sigma; its mass and width are measured to be 2881.8 \pm 3.1 \pm 8.5 MeV and 12.4 \pm 5.3 \pm 5.8 MeV, respectively. In addition, evidence of a new decay mode \Xi_{c}(2790)^{0} \to \Lambda_{c}^{+} K^{-} is found with a significance of 3.7\sigma. The relative branching fraction of B^{-} \to \Lambda_{c}^{+} \bar{\Lambda}_{c}^{-} K^{-} with respect to the B^{-} \to D^{+} D^{-} K^{-} decay is measured to be 2.36 \pm 0.11 \pm 0.22 \pm 0.25, where the first uncertainty is statistical, the second systematic, and the third originates from the branching fractions of charm hadron decays.
    Comment: All figures and tables, along with any supplementary material and additional information, are available at https://cern.ch/lhcbproject/Publications/p/LHCb-PAPER-2022-028.html (LHCb public pages).

    Measurement of the ratios of branching fractions \mathcal{R}(D^{*}) and \mathcal{R}(D^{0})

    The ratios of branching fractions \mathcal{R}(D^{*}) \equiv \mathcal{B}(\bar{B} \to D^{*} \tau^{-} \bar{\nu}_{\tau}) / \mathcal{B}(\bar{B} \to D^{*} \mu^{-} \bar{\nu}_{\mu}) and \mathcal{R}(D^{0}) \equiv \mathcal{B}(B^{-} \to D^{0} \tau^{-} \bar{\nu}_{\tau}) / \mathcal{B}(B^{-} \to D^{0} \mu^{-} \bar{\nu}_{\mu}) are measured, assuming isospin symmetry, using a sample of proton-proton collision data corresponding to 3.0 fb^{-1} of integrated luminosity recorded by the LHCb experiment during 2011 and 2012. The tau lepton is identified in the decay mode \tau^{-} \to \mu^{-} \nu_{\tau} \bar{\nu}_{\mu}. The measured values are \mathcal{R}(D^{*}) = 0.281 \pm 0.018 \pm 0.024 and \mathcal{R}(D^{0}) = 0.441 \pm 0.060 \pm 0.066, where the first uncertainty is statistical and the second is systematic. The correlation between these measurements is \rho = -0.43. The results are consistent with the current average of these quantities and are at a combined 1.9 standard deviations from the predictions based on lepton flavor universality in the Standard Model.
    Comment: All figures and tables, along with any supplementary material and additional information, are available at https://cern.ch/lhcbproject/Publications/p/LHCb-PAPER-2022-039.html (LHCb public pages).

    Volatility Dynamics of Non-Linear Volatile Time Series and Analysis of Information Flow: Evidence from Cryptocurrency Data

    This paper aims to empirically examine long memory and bi-directional information flow between the estimated volatilities of highly volatile time series of five cryptocurrencies. We propose employing the Garman-Klass (GK), Parkinson, Rogers-Satchell (RS), and Garman-Klass-Yang-Zhang (GK-YZ) Open-High-Low-Close (OHLC) volatility estimators to estimate the cryptocurrencies' volatilities. The study applies methods such as mutual information, transfer entropy (TE), effective transfer entropy (ETE), and Rényi transfer entropy (RTE) to quantify the information flow between the estimated volatilities. Additionally, Hurst exponent computations examine the existence of long memory in log returns and OHLC volatilities based on the simple R/S, corrected R/S, empirical, corrected empirical, and theoretical methods. Our results confirm the long-run dependence and non-linear behavior of all cryptocurrencies' log returns and volatilities. In our analysis, TE and ETE estimates are statistically significant for all OHLC estimates. We report the highest information flow from BTC to LTC volatility (RS). Similarly, BNB and XRP share the most prominent information flow between volatilities estimated by GK, Parkinson, and GK-YZ. The study presents a practicable addition of OHLC volatility estimators for quantifying information flow and provides an additional choice to compare with other volatility estimators, such as stochastic volatility models.
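The range-based estimators named above have closed forms on OHLC bars. Below is a minimal sketch (with made-up prices, not the cryptocurrency data analyzed in the paper) of the Parkinson, Garman-Klass, and Rogers-Satchell per-bar variance estimates:

```python
import numpy as np

def parkinson(high, low):
    """Parkinson per-bar variance estimate from the high-low range."""
    return np.log(high / low) ** 2 / (4.0 * np.log(2.0))

def garman_klass(open_, high, low, close):
    """Garman-Klass per-bar variance estimate from full OHLC prices."""
    return (0.5 * np.log(high / low) ** 2
            - (2.0 * np.log(2.0) - 1.0) * np.log(close / open_) ** 2)

def rogers_satchell(open_, high, low, close):
    """Rogers-Satchell per-bar variance estimate; robust to nonzero drift."""
    return (np.log(high / close) * np.log(high / open_)
            + np.log(low / close) * np.log(low / open_))

# Toy OHLC bars (illustrative numbers only).
o = np.array([100.0, 101.0, 99.5])
h = np.array([102.0, 103.0, 101.0])
l = np.array([ 99.0, 100.0,  98.0])
c = np.array([101.0,  99.5, 100.5])

# Averaging the per-bar estimates over a window and taking the square root
# gives the per-bar volatility fed into the information-flow analysis.
vol_parkinson = np.sqrt(parkinson(h, l).mean())
```

These estimators use the intraperiod range, so they are considerably more efficient than close-to-close variance on the same number of bars, which is what makes them attractive for volatile cryptocurrency series.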

    Using Probabilistic Models for Data Compression

    Our research objective is to improve the efficiency of Huffman coding by adjusting the data using a Poisson distribution, which also avoids undefined entropies. The scientific contribution of our paper consists in minimizing the average code-word length, which is greater when the Poisson adjustment is not applied. Huffman coding is an error-free compression method designed to remove coding redundancy by yielding the smallest number of code symbols per source symbol, which in practice can be represented by the intensity of an image or the output of a mapping operation. We use images from the PASCAL Visual Object Classes (VOC) datasets to evaluate our methods. In our work we use 10,102 randomly chosen images, half of them for training and the other half for testing. The VOC datasets display significant variability in object size, orientation, pose, illumination, position, and occlusion. They comprise 20 object classes: aeroplane, bicycle, bird, boat, bottle, bus, car, motorbike, train, sofa, table, chair, tv/monitor, potted plant, person, cat, cow, dog, horse, and sheep. The descriptors of different objects can be compared to give a measurement of their similarity. Image similarity is an important concept in many applications. This paper focuses on measures of similarity in computer science, more specifically in information retrieval and data mining. Our approach uses 64 descriptors for each image in the training and test sets, so the number of symbols is 64. Our information source differs from a finite-memory (Markov) source, whose output depends on a finite number of previous outputs. When dealing with large volumes of data, an effective approach to increasing information retrieval speed is to use neural networks as an artificial intelligence technique.
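Huffman coding itself can be sketched compactly. The toy example below (a plain frequency-based coder on a short string, not the paper's Poisson-adjusted variant or the 64-symbol VOC descriptors) builds the code table with a heap and verifies losslessness by decoding:

```python
import heapq
from collections import Counter

def huffman_codes(symbols):
    """Build a Huffman code table (symbol -> bit string) from a sequence."""
    freq = Counter(symbols)
    if len(freq) == 1:  # degenerate case: a single symbol gets code "0"
        return {next(iter(freq)): "0"}
    # Heap entries are (frequency, tie-breaker, node); a node is either a
    # leaf symbol or a (left, right) pair of subtrees.
    heap = [(f, i, s) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        # Repeatedly merge the two least frequent subtrees.
        f1, _, n1 = heapq.heappop(heap)
        f2, _, n2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, counter, (n1, n2)))
        counter += 1
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            codes[node] = prefix
    walk(heap[0][2], "")
    return codes

msg = "abracadabra"
codes = huffman_codes(msg)
encoded = "".join(codes[ch] for ch in msg)
print(len(encoded))  # 23 bits vs 88 bits for fixed 8-bit characters
```

Frequent symbols receive short codes, so the total length (23 bits here) attains the minimum achievable by any prefix code for these frequencies; the paper's Poisson adjustment reshapes the symbol frequencies before this step.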

    An Integrative Approach to Assess Subjective Well-Being. A Case Study on Romanian University Students

    Subjective well-being (SWB) has long been of interest to researchers, and the recent focus on the economic approach to SWB has increased awareness of the topic. Despite the significant number of studies, conceptualizing and assessing SWB, along with finding predictors of SWB, need further empirical exploration. Following this rationale, using statistical and econometric methods (correlation analysis, Principal Component Analysis (PCA), and Multinomial Logistic Regression (MLR)) applied to data collected via a survey of students from the Bucharest University of Economic Studies (363 respondents), this study provides insights that support a better understanding of how to define and measure SWB. Additionally, the study offers valuable information on the main determinants of SWB for a particular group, in this case Romanian business students. Based on our findings, we argue that: (1) when assessing perceptions of life satisfaction and happiness, Romanian students tend to make slight distinctions between these two concepts; (2) the question-order effect is not significant, whereas negative sentiments (such as pessimism) affect self-assessments of happiness but not of life satisfaction; (3) the main predictors of SWB are satisfaction with current activities, level of optimism/pessimism, health, and safety of the neighborhood. This paper proposes a new approach to modeling SWB by MLR, which expresses the dependent variable in terms of the principal factors obtained by PCA.
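The modeling pipeline described, PCA for factor extraction followed by multinomial logistic regression on the principal factors, can be sketched as follows. The data here are synthetic stand-ins: the items, loadings, and outcome construction below are illustrative assumptions, not the study's survey:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)

# Two latent dimensions stand in for underlying SWB components; twelve observed
# survey items (hypothetical stand-ins for satisfaction, optimism, health, and
# neighborhood-safety questions) load on them with noise.
latent = rng.normal(size=(363, 2))
loadings = np.zeros((2, 12))
loadings[0, :6] = 1.0   # items 1-6 driven by the first component
loadings[1, 6:] = 1.0   # items 7-12 driven by the second
X = latent @ loadings + 0.5 * rng.normal(size=(363, 12))

# A three-level SWB outcome (low / medium / high) derived from the latents.
score = latent[:, 0] - 0.5 * latent[:, 1]
y = np.digitize(score, np.quantile(score, [1 / 3, 2 / 3]))

# PCA extracts the principal factors; the multinomial logistic regression then
# expresses the categorical SWB outcome in terms of those factors.
model = make_pipeline(StandardScaler(), PCA(n_components=2),
                      LogisticRegression(max_iter=1000))
model.fit(X, y)
```

Regressing on a handful of principal factors instead of the raw correlated items avoids multicollinearity in the MLR step and keeps the fitted coefficients interpretable as effects of the extracted components.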