The practice of risk management by cost consultants in Northern Ireland
This research explored the practice of risk management by cost consultants in Northern Ireland. It investigated the cost consultants' appreciation of risk management practices and then appraised their understanding and usage of the theories and techniques available to manage risk under the risk management framework. A case study based approach involving five consultancy practices was adopted, with a series of semi-structured interviews (one per case study) carried out. The data collected were analysed using the Delphi technique. The practice of risk management in each organisation was documented through an analysis and evaluation of project documentation, substantiated with interviews. The research indicated that consultants have a broad awareness of risk management, but opinions diverge on whether it constitutes a core service. All consultants were unequivocal in identifying the need for an improved risk management framework, and there was an evident lack of knowledge of the array of risk identification and analysis techniques available. The research established that there is a pressing need to bridge the gap between the theories and techniques used to manage risk and those implemented in practice, to train consultants in the practice of risk management, and to educate clients in the benefits of enforcing risk management practices as an integral part of project delivery.
Zappa-Szép products of Garside monoids
A monoid M is the internal Zappa-Szép product of two submonoids if every element of M admits a unique factorisation as the product of one element of each of the submonoids in a given order. This definition yields actions of the submonoids on each other, which we show to be structure preserving. We prove that M is a Garside monoid if and only if both of the submonoids are Garside monoids. In this case, these factors are parabolic submonoids of M, and the Garside structure of M can be described in terms of the Garside structures of the factors. We give explicit isomorphisms between the lattice structure of M and the product of the lattice structures of the factors that respect the Garside normal forms. In particular, we obtain explicit natural bijections between the normal form language of M and the product of the normal form languages of its factors. Comment: Published version
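The unique-factorisation definition above can be written compactly; as a sketch in the standard Zappa-Szép notation from the literature (the symbols M, A, B and the action arrows are assumptions, not taken from this abstract):

```latex
% M = A \bowtie B: every m \in M factors uniquely as m = ab, a \in A, b \in B.
% The mutual actions b \triangleright a' (of B on A) and b \triangleleft a'
% (of A on B) are defined by rewriting b a' = (b \triangleright a')(b \triangleleft a'),
% so the multiplication of factored elements is
(a_1 b_1)(a_2 b_2) = a_1 \,(b_1 \triangleright a_2)\,(b_1 \triangleleft a_2)\, b_2 .
```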
The Small and Large Time Implied Volatilities in the Minimal Market Model
This paper derives explicit formulas for both the small and large time limits
of the implied volatility in the minimal market model. It is shown that
interest rates do impact the implied volatility in the long run even though
they are negligible in the short time limit. Comment: 50 pages, 4 figures, typo on page 18 corrected
Radio Galaxy Zoo: Knowledge Transfer Using Rotationally Invariant Self-Organising Maps
With the advent of large-scale surveys, the manual analysis and classification
of individual radio source morphologies is rendered impossible, as existing
approaches do not scale. The analysis of complex morphological features in the
spatial domain is a particularly important task. Here we discuss the challenges
of transferring crowdsourced labels obtained from the Radio Galaxy Zoo project
and introduce a proper transfer mechanism via quantile random forest
regression. By using parallelized rotation- and flipping-invariant Kohonen maps,
image cubes of Radio Galaxy Zoo-selected galaxies, formed from the FIRST radio
continuum and WISE infrared all-sky surveys, are first projected down to a
two-dimensional embedding in an unsupervised way. This embedding can be seen as
a discretised space of shapes with the coordinates reflecting morphological
features as expressed by the automatically derived prototypes. We find that
these prototypes have reconstructed physically meaningful processes across two
channel images at radio and infrared wavelengths in an unsupervised manner. In
the second step, images are compared with those prototypes to create a
heat-map, which is the morphological fingerprint of each object and the basis
for transferring the user-generated labels. These heat-maps reduce the
feature space by a factor of 248 and can be used as the basis for
subsequent machine-learning methods. Using an ensemble of decision trees we achieve upwards
of 85.7% and 80.7% accuracy when predicting the number of components and peaks
in an image, respectively, using these heat-maps. We also question the
currently used discrete classification schema and introduce a continuous scale
that better reflects the uncertainty in the transition between two classes,
caused by sensitivity and resolution limits.
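The rotation- and flipping-invariant matching at the heart of this pipeline can be sketched in a few lines. The toy example below (random data, a 3x3 prototype grid, and winner-take-all updates without the neighbourhood function of a full Kohonen map) is an illustrative assumption, not the paper's parallelized implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def variants(img):
    """All 8 rotations/flips (the dihedral group) of a square image."""
    out = []
    for k in range(4):
        r = np.rot90(img, k)
        out.append(r)
        out.append(np.fliplr(r))
    return out

def invariant_distance(img, prototype):
    """Distance invariant to rotation and flipping: minimum over variants."""
    return min(np.linalg.norm(v - prototype) for v in variants(img))

# Toy data: 100 random 8x8 "images" and a 3x3 grid of SOM prototypes
images = rng.normal(size=(100, 8, 8))
protos = rng.normal(size=(9, 8, 8))

# Simplified training: move only the winning prototype (no neighbourhood)
for epoch in range(5):
    lr = 0.5 * (1 - epoch / 5)           # decaying learning rate
    for img in images:
        d = [invariant_distance(img, p) for p in protos]
        best = int(np.argmin(d))
        # update toward the image variant closest to the winning prototype
        v = min(variants(img), key=lambda x: np.linalg.norm(x - protos[best]))
        protos[best] += lr * (v - protos[best])

# "Heat-map" fingerprint of one image: its distance to every prototype
heatmap = np.array([invariant_distance(images[0], p) for p in protos])
print(heatmap.shape)
```

The heat-map vector, rather than the raw pixels, is what a downstream classifier such as a decision-tree ensemble would consume.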
The Spectral Energy Distribution of Powerful Starburst Galaxies I: Modelling the Radio Continuum
We have acquired radio continuum data between 70 MHz and 48 GHz for a sample of 19 southern starburst galaxies at moderate redshifts, with the aim of separating the synchrotron and free-free emission components. Using a Bayesian framework we find the radio continuum is rarely well characterised by a single power law, instead often exhibiting low-frequency turnovers below 500 MHz, a steepening at mid-to-high frequencies, and a flattening at high frequencies where free-free emission begins to dominate over the synchrotron emission. These higher-order curvature components may be attributed to free-free absorption across multiple regions of star formation with varying optical depths. The decomposed synchrotron and free-free emission components in our sample of galaxies form strong correlations with the total-infrared bolometric luminosities. Finally, we find that without accounting for free-free absorption with turnovers between 90 and 500 MHz, the low-frequency radio continuum could be overestimated by upwards of a factor of twelve if a simple power-law extrapolation is used from higher frequencies. The mean synchrotron spectral index of our sample is constrained to be steeper than the canonical value for normal galaxies. We suggest this may be caused by an intrinsically steeper cosmic-ray distribution.
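A minimal sketch of the kind of curved spectrum described here, assuming one common parameterisation (a synchrotron power law behind a free-free absorbing screen with optical depth proportional to frequency to the -2.1 power, plus optically thin free-free emission with the canonical -0.1 index); all parameter values below are invented, not the paper's Bayesian fits:

```python
import numpy as np

def model_flux(nu_ghz, a_sync=10.0, alpha=-0.8, b_ff=1.0, nu_t=0.3):
    """Toy curved radio-continuum spectrum (illustrative values only)."""
    tau = (nu_ghz / nu_t) ** -2.1      # free-free optical depth; turnover near nu_t
    sync = a_sync * nu_ghz ** alpha    # synchrotron power law
    ff = b_ff * nu_ghz ** -0.1         # optically thin free-free emission
    return np.exp(-tau) * sync + ff

nu = np.array([0.07, 0.2, 1.4, 5.0, 48.0])   # GHz, spanning 70 MHz to 48 GHz
flux = model_flux(nu)

# A naive power-law extrapolation from 1.4 GHz ignores the turnover and
# therefore overestimates the true low-frequency flux.
naive = model_flux(1.4) * (nu / 1.4) ** -0.8
print(flux[0] < naive[0])
```

Comparing the two arrays at 70 MHz makes the abstract's point concrete: below the turnover the absorbed model falls far under the straight power-law extrapolation.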
Investigating the status of disaster management within a world-wide context: a case study analysis
Disasters can be described as spontaneous occurrences, in that they can happen at any minute, at any time. There are two classifications of disaster. Natural disasters cannot be predicted and occur continually throughout society. Man-made disasters, by contrast, are caused not by natural phenomena but by man's or society's actions, involuntary or voluntary, sudden or slow, with grave consequences for the population and the environment (Hays, 2008). Both types of disaster can be controlled to a certain extent through appropriate disaster management plans, which, if managed efficiently, have the potential to reduce the likelihood of overwhelming loss of life and property. The disaster management cycle is split into four elements, response, recovery, mitigation and preparedness, which contribute to a nation's emergency protocols when disaster strikes. Nations should therefore incorporate them in their development plans and ensure efficient follow-up measures at community, national and international levels. This paper investigates worldwide disasters in order to examine how they were managed and to identify the lessons learned. It provides an analysis of five worldwide case studies of recent disasters (the tsunami in Sri Lanka, Hurricane Katrina in New Orleans, the earthquake in Pakistan, the summer floods in the UK and the flooding of the West-Link in Northern Ireland), mapping each to the four-stage disaster management cycle. The paper analyses in detail the strategies adopted at each stage of the cycle, comparing the strengths and weaknesses of each case. It concludes that there has been satisfactory progress in both the response and recovery phases, but that more attention is needed for disaster mitigation and preparedness.
Enhancing Parent-Child Communication and Parental Self-Esteem With a Video-Feedback Intervention: Outcomes With Prelingual Deaf and Hard-of-Hearing Children
Evidence on best practice for optimizing communication with prelingual deaf and hard-of-hearing (DHH) children is lacking. This study examined the effect of a family-focused psychosocial video intervention program on parent–child communication in the context of childhood hearing loss. Fourteen hearing parents with a prelingual DHH child (Mage = 2 years 8 months) completed three sessions of video interaction guidance intervention. Families were assessed in spontaneous free play interactions at pre and postintervention using the Emotional Availability (EA) Scales. The Rosenberg Self-esteem Scale was also used to assess parental report of self-esteem. Compared with nontreatment baselines, increases were shown in the EA subscales: parental sensitivity, parental structuring, parental nonhostility, child responsiveness, and child involvement, and in reported self-esteem at postintervention. Video-feedback enhances communication in families with prelingual DHH children and encourages more connected parent–child interaction. The results raise implications regarding the focus of early intervention strategies for prelingual DHH children
Dance training shapes action perception and its neural implementation within the young and older adult brain
How we perceive others in action is shaped by our prior experience. Many factors influence brain responses when observing others in action, including training in a particular physical skill, such as sport or dance, and also general development and aging processes. Here, we investigate how learning a complex motor skill shapes neural and behavioural responses among a dance-naïve sample of 20 young and 19 older adults. Across four days, participants physically rehearsed one set of dance sequences, observed a second set, and a third set remained untrained. Functional MRI was obtained prior to and immediately following training. Participants’ behavioural performance on motor and visual tasks improved across the training period, with younger adults showing steeper performance gains than older adults. At the brain level, both age groups demonstrated decreased sensorimotor cortical engagement after physical training, with younger adults showing more pronounced decreases in inferior parietal activity compared to older adults. Neural decoding results demonstrate that among both age groups, visual and motor regions contain experience-specific representations of new motor learning. By combining behavioural measures of performance with univariate and multivariate measures of brain activity, we can start to build a more complete picture of age-related changes in experience-dependent plasticity
Flux density measurements for 32 pulsars in the 20 cm band
Flux density measurements provide fundamental observational parameters that
describe a pulsar. In the current pulsar catalogue, 27% of radio pulsars have
no flux density measurement in the 20 cm observing band. Here, we present the
first measurements of the flux densities in this band for 32 pulsars observed
using the Parkes radio telescope and provide updated pulse profiles for these
pulsars. We have used both archival and new observations to make these
measurements. Various schemes exist for measuring flux densities. We show how
the measured flux densities vary between these methods and how the presence of
radio-frequency interference biases flux density measurements. Comment: Accepted by RA
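One common measurement scheme, the mean flux density of a baseline-subtracted pulse profile, can be sketched as follows. The profile shape, on-pulse mask and calibration factor below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def mean_flux_density(profile, on_mask, cal_mjy_per_count=1.0):
    """Mean flux density from a calibrated pulse profile (simplified sketch).
    profile : 1-D intensity per phase bin (arbitrary counts)
    on_mask : boolean array, True where the pulse is "on"
    """
    baseline = profile[~on_mask].mean()   # off-pulse baseline level
    # sum baseline-subtracted on-pulse bins, average over the full period
    return cal_mjy_per_count * (profile[on_mask] - baseline).sum() / profile.size

# Toy profile: a Gaussian pulse sitting on a noisy baseline
rng = np.random.default_rng(1)
phase = np.linspace(0, 1, 256, endpoint=False)
profile = 5.0 * np.exp(-0.5 * ((phase - 0.5) / 0.02) ** 2) + rng.normal(0, 0.1, 256)
on = np.abs(phase - 0.5) < 0.1

s = mean_flux_density(profile, on)
print(round(s, 2))
```

Methods differ mainly in how the on-pulse window and baseline are chosen, and narrowband interference in the off-pulse region shifts the baseline estimate, which is one way RFI biases the result.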
Artificial neural networks for selection of pulsar candidates from the radio continuum surveys
Pulsar searching with time-domain observations is computationally very expensive,
and data volumes will be enormous with next-generation telescopes such as
the Square Kilometre Array. We apply artificial neural networks (ANNs), a
machine learning method, for efficient selection of pulsar candidates from
radio continuum surveys, which are much cheaper than time-domain observations.
With observed quantities such as radio fluxes, sky position and compactness as
inputs, our ANNs output a "score" that indicates the degree of likeliness of
an object to be a pulsar. We demonstrate ANNs using existing survey data from
the TIFR GMRT Sky Survey (TGSS) and the NRAO VLA Sky Survey (NVSS) and test
their performance. Precision, the ratio of the number of pulsars correctly
classified as pulsars to the number of all objects classified as pulsars,
is about 96%. Finally, we apply the trained ANNs to unidentified radio
sources, and our fiducial ANN with five inputs (the galactic longitude and
latitude, the TGSS and NVSS fluxes, and compactness) generates 2,436 pulsar
candidates from 456,866 unidentified radio sources. Whether these candidates
are truly pulsars still needs to be confirmed by time-domain observations. More
information, such as polarization, will narrow the candidates down further. Comment: 11 pages, 13 figures, 3 tables, accepted for publication in MNRAS
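A five-input scoring network of this kind can be sketched with a small multilayer perceptron. The synthetic data, the labelling rule and the network shape below are invented for illustration; this is not the paper's trained ANN:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score

rng = np.random.default_rng(0)
n = 2000
# Five synthetic inputs mirroring those named in the abstract:
# galactic longitude/latitude, "TGSS" and "NVSS" fluxes, compactness.
X = np.column_stack([
    rng.uniform(0, 360, n),        # galactic longitude [deg]
    rng.uniform(-90, 90, n),       # galactic latitude [deg]
    rng.lognormal(1.0, 1.0, n),    # "TGSS" flux [mJy]
    rng.lognormal(0.5, 1.0, n),    # "NVSS" flux [mJy]
    rng.uniform(0, 2, n),          # compactness
])
# Invented labelling rule: steep-spectrum (bright at low frequency)
# and compact sources are tagged "pulsar" to give the toy network a target.
y = ((X[:, 2] > 2 * X[:, 3]) & (X[:, 4] < 1)).astype(int)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0),
)
clf.fit(Xtr, ytr)

# The "score": the network's probability that an object is a pulsar
scores = clf.predict_proba(Xte)[:, 1]
prec = precision_score(yte, scores > 0.5)
print(round(prec, 2))
```

Ranking unidentified sources by the score and thresholding it is what turns such a classifier into a candidate list for time-domain follow-up.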