Psychosocial risk and protective factors associated with perpetration of gender-based violence in a community sample of men in rural KwaZulu-Natal, South Africa
Background. Rates of gender-based violence (GBV) in South Africa (SA) are among the highest in the world. In societies where social ideals of masculinity encourage male dominance and control over women, gender power imbalances contribute to male perpetration and women’s vulnerability. The drivers that cause men to perpetrate GBV and those that lead to HIV overlap and interact in multiple and complex ways. Multiple risk and protective factors for GBV perpetration by males operate interdependently at a number of levels; at the individual level, these include chronic anxiety and depression, which have been shown to lead to risky sexual behaviours.

Objectives. (i) To examine psychosocial risk factors (symptoms of anxiety and depression) as well as protective factors (social support and self-esteem) as self-reported by a cohort of males in rural KwaZulu-Natal (KZN) Province, SA; and (ii) to determine whether there are differences in anxiety, depression, social support and self-esteem between perpetrators and non-perpetrators.

Methods. A cross-sectional study using quasi-probability cluster sampling of 13 of 28 wards in Harry Gwala District, KZN. Participants were then randomly chosen from each ward proportionate to size.

Results. The participants were relatively young (median age 22 years); over half were schoolgoers, and 91.3% had never married. Over 43% of the sample reported clinical levels of anxiety and depressive symptoms on the Brief Symptom Inventory. Rates of GBV perpetration were 60.9%, 23.6% and 10.0% for psychological abuse, non-sexual physical violence and sexual violence, respectively. GBV perpetration was associated with higher depression, higher anxiety, lower self-esteem and lower social support.

Conclusions. Interventions to address GBV need to take modifiable individual-level factors into account.
A critical look at studies applying over-sampling on the TPEHGDB dataset
Preterm birth is the leading cause of death among young children and is highly prevalent globally. Machine learning models based on features extracted from clinical sources, such as electronic patient files, yield promising results. In this study, we review studies that constructed predictive models based on a publicly available dataset, the Term-Preterm EHG Database (TPEHGDB), which contains electrohysterogram signals on top of clinical data. These studies often report near-perfect prediction results by applying over-sampling as a means of data augmentation. We reconstruct these results to show that they can only be achieved when data augmentation is applied to the entire dataset prior to partitioning into training and testing sets. This means that (i) artificial samples that are highly correlated with data points from the test set are added to the training set, and (ii) artificial samples that are highly correlated with points from the training set are added to the test set. Many previously reported results therefore carry little meaning in terms of the actual effectiveness of the model in making predictions on unseen data in a real-world setting. After highlighting the danger of applying over-sampling strategies before data partitioning, we present a realistic baseline for the TPEHGDB dataset and show how predictive performance and clinical utility can be improved by incorporating features from electrohysterogram sensors and by applying over-sampling on the training set only.
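The leakage mechanism described above can be sketched with plain NumPy: random over-sampling applied before the train/test split plants duplicates of minority-class samples on both sides of the split, while the correct order leaves the test set untouched. The dataset, sampler and sizes below are illustrative stand-ins, not the TPEHGDB data or the models from the reviewed studies.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy imbalanced dataset: 90 majority-class (0) vs 10 minority-class (1) samples.
X = rng.normal(size=(100, 5))
y = np.array([0] * 90 + [1] * 10)

def random_oversample(X, y, rng):
    """Duplicate minority-class samples until both classes are balanced."""
    minority = np.flatnonzero(y == 1)
    extra = rng.choice(minority, size=(y == 0).sum() - minority.size, replace=True)
    idx = np.concatenate([np.arange(y.size), extra])
    return X[idx], y[idx]

# WRONG: over-sample first, then split. Exact duplicates of the same minority
# sample can land in both partitions, inflating test-set scores.
Xo, yo = random_oversample(X, y, rng)
perm = rng.permutation(yo.size)
train, test = perm[:140], perm[140:]
# Count test samples whose exact duplicate also appears in the training set.
leaked = sum(any(np.array_equal(Xo[i], Xo[j]) for j in train) for i in test)
print("leaked duplicates:", leaked)

# RIGHT: split first, then over-sample only the training partition; the test
# set then contains only original, untouched samples.
perm = rng.permutation(y.size)
train, test = perm[:80], perm[80:]
Xtr, ytr = random_oversample(X[train], y[train], rng)
```

With the wrong ordering, a nominally held-out sample can be byte-identical to a training sample, which is exactly why near-perfect scores on such splits say little about performance on genuinely unseen data.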
A brief version of the Scale of Emotional Development – Short
Background
The Scale of Emotional Development – Short (SED-S) captures the level of emotional development in persons with a disorder of intellectual development (DID) with 200 items on five developmental levels. The study aims to develop a brief version of the SED-S.
Methods
Based on item analysis (proportions, χ2-test, Spearman's ρ and corrected item–total correlation), a brief version of the SED-S was developed in a sample of 224 adults with a DID (n1) and validated in a second independent matched sample (n2 = 223).
Results
Reliability per item set ranged from Cronbach's α = 0.835 to 0.924. Weighted kappa was κw = 0.743 (P < 0.001, 95% confidence interval 0.690–0.802). Overall agreement of the brief version with the original SED-S was PO = 0.7. The brief version of the SED-S showed weaknesses in distinguishing level 2 from the adjacent levels.
Conclusions
The brief version of the SED-S showed good reliability and moderate to good validity. Items of phase 2 and, to some degree, of phase 5 should be revised to further improve the psychometric properties of the scale.
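As a rough illustration of the corrected item–total correlation used in the item analysis above: each item's scores are correlated with the total score computed *without* that item, so the item does not correlate with itself. The scores below are invented toy data, not SED-S responses, and `corrected_item_total` is a hypothetical helper name.

```python
import numpy as np

def corrected_item_total(items: np.ndarray) -> np.ndarray:
    """Correlation of each item with the sum of all other items.

    items: shape (n_persons, n_items), e.g. 0/1 item scores.
    """
    total = items.sum(axis=1)
    r = np.empty(items.shape[1])
    for j in range(items.shape[1]):
        rest = total - items[:, j]   # total score with item j removed
        r[j] = np.corrcoef(items[:, j], rest)[0, 1]
    return r

# Hypothetical 0/1 scores for 6 persons on 4 items.
scores = np.array([
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
    [1, 1, 0, 1],
])
r = corrected_item_total(scores)
print(np.round(r, 2))
```

Items with low corrected item–total correlations contribute little to the scale's total score and are natural candidates to drop when shortening an instrument.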
Determining the stellar masses of submillimetre galaxies: the critical importance of star formation histories
Submillimetre (submm) galaxies are among the most rapidly star-forming and most massive high-redshift galaxies; thus, their properties provide important constraints on galaxy evolution models. However, there is still a debate about their stellar masses and their nature in the context of the general galaxy population. To test the reliability of their stellar mass determinations, we used a sample of simulated submm galaxies for which we derived stellar masses via spectral energy distribution (SED) modelling (with Grasil, Magphys, Hyperz and LePhare), adopting various star formation histories (SFHs). We found that the assumption of SFHs with two independent components leads to the most accurate stellar masses. Exponentially declining SFHs (tau) lead to lower masses (albeit still consistent with the true values), while the assumption of single-burst SFHs results in a significant mass underestimation. Thus, we conclude that studies based on the higher masses inferred from fitting the SEDs of real submm galaxies with double SFHs are most likely to be correct, implying that submm galaxies lie on the high-mass end of the main sequence of star-forming galaxies. This conclusion appears robust to assumptions of whether or not submm galaxies are driven by major mergers, since the suite of simulated galaxies modelled here contains examples of both merging and isolated galaxies. We identified discrepancies between the true and inferred stellar ages (rather than the dust attenuation) as the primary determinant of the success/failure of the mass recovery. Regardless of the choice of SFH, the SED-derived stellar masses exhibit a factor of ~2 scatter around the true value; this scatter is an inherent limitation of the SED modelling due to simplified assumptions. Finally, we found that the contribution of active galactic nuclei does not have any significant impact on the derived stellar masses.

Comment: Accepted to A&A. 11 pages, 9 figures, 1 table. V2 main changes: 1) discussion of the stellar age as the main parameter influencing the success of an SED model (Fig. 4, 5, 7); 2) discussion of the age–dust degeneracy (Fig. 9); 3) the comparison of real and simulated submm galaxies (Fig. 1).
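The role of the assumed SFH can be illustrated with a toy calculation: stellar mass is the time integral of the SFR, and in a two-component SFH a recent burst can dominate the light while the old component holds most of the mass, which is the outshining effect behind single-burst underestimates. All normalisations, timescales, and the ~1000x young-to-old luminosity contrast below are illustrative assumptions, not values from the paper or from Grasil/Magphys/Hyperz/LePhare.

```python
import numpy as np

# Hypothetical two-component SFH: an old exponentially declining ("tau")
# component plus a recent Gaussian starburst. Units: Gyr and Msun/yr.
t = np.linspace(0.0, 1.0, 10_000)   # time since star formation began, Gyr
dt_yr = (t[1] - t[0]) * 1e9         # time step in years

old = 80.0 * np.exp(-t / 0.5)                            # tau-model component
burst = 300.0 * np.exp(-0.5 * ((t - 0.95) / 0.02) ** 2)  # recent starburst
sfr = old + burst

# Stellar mass formed is the time integral of the SFR (Riemann sum here).
mass_old = (old * dt_yr).sum()
mass_total = (sfr * dt_yr).sum()

# Crude outshining argument: per unit mass, stars younger than 0.1 Gyr are
# taken to be ~1000x more luminous (assumed contrast, order of magnitude).
age = t[-1] - t                     # stellar age at the time of observation
lum = np.where(age < 0.1, 1000.0, 1.0) * sfr * dt_yr
light_frac_young = lum[age < 0.1].sum() / lum.sum()

print(f"old component holds {mass_old / mass_total:.0%} of the mass")
print(f"young stars emit    {light_frac_young:.0%} of the light")
```

Under these toy numbers the old component carries most of the stellar mass while the young burst emits nearly all the light, so a fit that models only the burst (a single-burst SFH) has little handle on the hidden old mass.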
A Cyber-Support System for Distributed Infrastructures
The Internet is now heavily relied upon by Critical Infrastructures (CI), which exposes interconnected security systems to a range of security threats. Understanding the complexity of critical infrastructure interdependency, and how to take advantage of it to minimise the cascading problem, enables potential problems to be predicted before they happen. Our proposed system, detailed in this paper, is able to detect cyber-attacks and share that knowledge with interconnected partners to create an immune-system-like network. To demonstrate our approach, a realistic simulation is used to construct data and evaluate the proposed system. This paper summarises the work to date on the development of the Critical Infrastructure Auto-Immune Response System (CIAIRS). It gives an overview of the main CIAIRS segments that comprise the framework and illustrates how the system functions.
Motion Deblurring in the Wild
The task of image deblurring is a very ill-posed problem, as both the image and the blur are unknown. Moreover, when pictures are taken in the wild, the task becomes even more challenging due to the blur varying spatially and the occlusions between objects. Due to the complexity of the general image model, we propose a novel convolutional network architecture which directly generates the sharp image. This network is built in three stages and exploits the benefits of the pyramid schemes often used in blind deconvolution. One of the main difficulties in training such a network is designing a suitable dataset. While useful data can be obtained by synthetically blurring a collection of images, more realistic data must be collected in the wild. To obtain such data, we use a high-frame-rate video camera, keeping one frame as the sharp image and the frame average as the corresponding blurred image. We show that this realistic dataset is key to achieving state-of-the-art performance and dealing with occlusions.
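The frame-averaging recipe for building sharp/blurred training pairs can be sketched as follows; the moving scene here is a synthetic stand-in (a randomly generated image shifted one pixel per frame), not footage from the authors' high-frame-rate camera.

```python
import numpy as np

rng = np.random.default_rng(0)
base = rng.random((64, 64, 3)).astype(np.float32)

# Simulate camera motion: shift the scene one pixel per frame for 7 frames.
frames = np.stack([np.roll(base, shift=s, axis=1) for s in range(7)])

# Keep the middle frame as the sharp ground truth; average the burst to
# synthesise the corresponding motion-blurred input.
sharp = frames[len(frames) // 2]
blurred = frames.mean(axis=0)

# Averaging horizontally shifted copies acts as a horizontal box filter,
# so horizontal gradients are weaker in the blurred image than in the sharp one.
```

In the real setting the exposure-time integral of a moving scene is what causes blur, and averaging a dense burst of short-exposure frames approximates that integral, which is why the pair above is a realistic training example.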