Assessing Organizational Capacity in Housing Provision: a Survey of Public Housing Agencies in Ogun State, Nigeria
Organizational capacity is essential for the effective implementation of policies and programmes. Consequently, assessing organizational capacity helps organizations identify their strengths and weaknesses in order to make informed decisions about how best to address the challenges they face. The goal of this study was to assess the status of the organizational capacity of public housing agencies in housing provision in Ogun State, Southwest Nigeria. It was motivated by a gap in the literature on the specific areas that contribute most to the organizational capacity of public agencies in housing provision in Nigeria. Using a questionnaire as the principal data collection instrument, primary data were collected from 90 randomly selected staff members involved in the design, planning, implementation and management of public housing projects in four public housing agencies in the study area. The data were analysed using descriptive statistics, and the results showed that most respondents felt that the overall organizational capacity of the agencies in housing provision was adequate. Management capacity was found to be slightly higher than resource capacity, with the agencies showing the most strength in leadership style and the greatest weakness in the methods of administering funds for housing projects. A substantial need for capacity building was found in critical areas such as funding, staff motivation and methods of disbursing funds for housing projects. The paper suggests that partnerships with private-sector organizations, robust staff welfare schemes and retraining of staff can enhance the organizational capacity of public agencies in public housing provision in Nigeria and other developing countries.
On the adequacy of current empirical evaluations of formal models of categorization
Categorization is one of the fundamental building blocks of cognition, and the study of categorization is notable for the extent to which formal modeling has been a central and influential component of research. However, the field has seen a proliferation of noncomplementary models with little consensus on the relative adequacy of these accounts. Progress in assessing the relative adequacy of formal categorization models has, to date, been limited because (a) formal model comparisons are narrow in the number of models and phenomena considered and (b) models do not often clearly define their explanatory scope. Progress is further hampered by the practice of fitting models with arbitrarily variable parameters to each data set independently. Reviewing examples of good practice in the literature, we conclude that model comparisons are most fruitful when relative adequacy is assessed by comparing well-defined models on the basis of the number and proportion of irreversible, ordinal, penetrable successes, guided by principles of minimal flexibility, breadth, good-enough precision, maximal simplicity, and psychological focus.
Architectural Adequacy and Evolutionary Adequacy as Characteristics of a Candidate Informational Money
For money-like informational commodities, the notions of architectural adequacy and evolutionary adequacy are proposed as the first two stages of a moneyness maturity hierarchy. Three classes of informational commodities are then distinguished: exclusively informational commodities, strictly informational commodities, and ownable informational commodities. For each class, both money-like instances of that commodity class and monies of that class may exist.

With the help of these classifications, and making use of previous assessments of Bitcoin, it is argued that at this stage Bitcoin is unlikely ever to evolve into a money. Assessing the evolutionary adequacy of Bitcoin is framed as a search through its design hull for superior design alternatives. An extensive comparison is made between the search for superior design alternatives to Bitcoin and the search for design alternatives to a specific and unconventional view of the definition of fractions.
Bayesian belief network model for the safety assessment of nuclear computer-based systems
The formalism of Bayesian Belief Networks (BBNs) is being increasingly applied to probabilistic modelling and decision problems in a widening variety of fields. This method provides the advantages of a formal probabilistic model, presented in an easily assimilated visual form, together with the ready availability of efficient computational methods and tools for exploring model consequences. Here we formulate one BBN model of a part of the safety assessment task for computer- and software-based nuclear systems important to safety. Our model is developed from the perspective of an independent safety assessor who is presented with the task of evaluating evidence from disparate sources: the requirement specification and verification documentation of the system licensee and of the system manufacturer; the previous reputation of the various participants in the design process; knowledge of commercial pressures; information about tools and resources used; and many other sources. Based on these multiple sources of evidence, the independent assessor is ultimately obliged to make a decision as to whether or not the system should be licensed for operation within a particular nuclear plant environment. Our BBN model is a contribution towards a formal model of this decision problem. We restrict attention to a part of this problem: the safety analysis of the Computer System Specification documentation. As with other BBN applications, we see this modelling activity as having several potential benefits. It employs a rigorous formalism as a focus for examination, discussion, and criticism of arguments about safety. It obliges the modeller to be very explicit about assumptions concerning probabilistic dependencies, correlations, and causal relationships. It allows sensitivity analyses to be carried out.
Ultimately we envisage this BBN, or some later development of it, forming part of a larger model, which might well take the form of a larger BBN model, covering all sources of evidence about pre-operational life-cycle stages. This could provide an integrated model of all aspects of the task of the independent assessor, leading up to the final judgement about system safety in a particular context. We expect to offer some results of this further work later in the DeVa project
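The core inference step the assessor performs can be illustrated with a toy network. The sketch below is not the DeVa model: it assumes a made-up two-level structure (a hidden "system quality" node with two observable evidence children) and illustrative probabilities, and uses exact inference by enumeration rather than a BBN toolkit.

```python
# Toy BBN-style safety-assessment inference (hypothetical structure and
# numbers, for illustration only). Hidden node: system_ok. Observable
# children: spec_quality (good/poor) and verification (pass/fail).

# Prior belief that the system is of acceptable quality
P_system_ok = 0.7

# Conditional probabilities of favourable evidence given system quality
P_spec_good = {True: 0.9, False: 0.3}    # P(spec=good | system_ok)
P_verif_pass = {True: 0.95, False: 0.4}  # P(verification=pass | system_ok)

def posterior_system_ok(spec_good: bool, verif_pass: bool) -> float:
    """P(system_ok | observed evidence), computed by Bayes' rule."""
    def likelihood(ok: bool) -> float:
        ps = P_spec_good[ok] if spec_good else 1 - P_spec_good[ok]
        pv = P_verif_pass[ok] if verif_pass else 1 - P_verif_pass[ok]
        return ps * pv

    prior = {True: P_system_ok, False: 1 - P_system_ok}
    joint = {ok: prior[ok] * likelihood(ok) for ok in (True, False)}
    return joint[True] / (joint[True] + joint[False])

print(round(posterior_system_ok(True, True), 3))    # favourable evidence
print(round(posterior_system_ok(False, False), 3))  # unfavourable evidence
```

A realistic model of the licensing decision would, as the abstract notes, combine many more evidence sources and dependencies; a library such as pgmpy would then be used rather than hand-rolled enumeration.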
Bayesian inferencing for wind resource characterisation
The growing role of wind power in power systems has motivated R&D on methodologies to characterise the wind resource at sites for which no wind speed data are available. Applications such as feasibility assessment of prospective installations and system integration analysis of future scenarios, amongst others, can greatly benefit from such methodologies. This paper focuses on the inference of wind speeds for such potential sites using a Bayesian approach to characterise the spatial distribution of the resource. To test the approach, one year of wind speed data from four weather stations was modelled and used to derive inferences for a fifth site. The methodology is described together with the model employed, and simulation results are presented and compared to the data available for the fifth site. The results obtained indicate that Bayesian inference can be a useful tool in the spatial characterisation of the wind resource.
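The flavour of such an inference can be sketched with a deliberately simplified stand-in for the paper's model: a conjugate normal-normal update of the unknown mean wind speed at the unmonitored site, where the prior encodes regional climatology and the four neighbouring stations supply observations. All parameter values here are assumptions for illustration, not the paper's.

```python
# Conjugate normal-normal sketch of Bayesian wind-resource inference.
# NOT the paper's model: prior, noise level, and station values are
# illustrative assumptions.

def posterior_mean_wind(obs, mu0=6.0, tau0=2.0, sigma=1.5):
    """Posterior mean and sd of the site's mean wind speed (m/s).

    obs   : wind speeds transferred from neighbouring stations
    mu0   : prior mean from regional climatology (assumed)
    tau0  : prior standard deviation (assumed)
    sigma : assumed noise per transferred observation
    """
    n = len(obs)
    prior_prec = 1.0 / tau0 ** 2          # precision of the prior
    data_prec = n / sigma ** 2            # combined precision of the data
    post_var = 1.0 / (prior_prec + data_prec)
    post_mean = post_var * (mu0 * prior_prec + sum(obs) / sigma ** 2)
    return post_mean, post_var ** 0.5

# Annual mean speeds at four neighbouring stations (made-up values)
stations = [5.2, 6.1, 5.8, 6.4]
m, s = posterior_mean_wind(stations)
print(f"posterior mean = {m:.2f} m/s, sd = {s:.2f} m/s")
```

The posterior standard deviation quantifies how much uncertainty remains about the unmonitored site, which is exactly the quantity a feasibility assessment would want alongside the point estimate; a spatial model like the paper's would additionally weight stations by their distance or correlation with the target site.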
Improving the accuracy and realism of Bayesian phylogenetic analyses
Central to the study of life is knowledge both of the underlying relationships among living things and of the processes that have molded them into their diverse forms. Phylogenetics provides a powerful toolkit for investigating both aspects. Bayesian phylogenetics has gained much popularity, due to its readily interpretable notion of probability. However, the posterior probability of a phylogeny, as well as any dependent biological inferences, is conditioned on the assumed model of evolution and its priors, necessitating care in model formulation. In Chapter 1, I outline the Bayesian perspective on phylogenetic inference and provide my view on its most outstanding questions. I then present results from three studies that aim to (i) improve the accuracy of Bayesian phylogenetic inference and (ii) assess when the model assumed in a Bayesian analysis is insufficient to produce an accurate phylogenetic estimate.

As phylogenetic data sets increase in size, they must also accommodate a greater diversity of underlying evolutionary processes. Partitioned models represent one way of accounting for this heterogeneity. In Chapter 2, I describe a simulation study to investigate whether support for partitioning of empirical data sets represents a real signal of heterogeneity or merely a statistical artifact. The results suggest that empirical data are extremely heterogeneous, and that incorporating this heterogeneity into inferential models is important for accurate phylogenetic inference.

Bayesian phylogenetic estimates of branch lengths are often wildly unreasonable, yet branch lengths are important input for many other analyses. In Chapter 3, I study the occurrence of this phenomenon, identify the data sets most likely to be affected, demonstrate the causes of the bias, and suggest several solutions to avoid inaccurate inferences.

Phylogeneticists rarely assess the absolute fit between an assumed model of evolution and the data being analyzed. While an approach to assessing fit in a Bayesian framework has been proposed, it sometimes performs quite poorly in predicting a model's phylogenetic utility. In Chapter 4, I propose and evaluate new test statistics for assessing phylogenetic model adequacy that directly evaluate a model's phylogenetic performance.
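The general recipe behind such model-adequacy tests is the posterior predictive check: simulate replicate data sets from the posterior, compute a test statistic on each, and ask how extreme the observed data look. The sketch below uses a deliberately simple stand-in (iid coin flips with the longest run as the statistic) rather than the phylogeny-aware statistics the chapter proposes.

```python
# Generic posterior predictive check, illustrated on a toy iid model.
# The data and posterior draws are made up; the statistic ("longest run")
# is a stand-in for the chapter's phylogenetic test statistics.
import random

def longest_run(seq):
    """Length of the longest run of identical symbols (the test statistic)."""
    best = cur = 1
    for a, b in zip(seq, seq[1:]):
        cur = cur + 1 if a == b else 1
        best = max(best, cur)
    return best

def pp_pvalue(data, posterior_thetas, rng):
    """Posterior predictive p-value: fraction of replicates simulated from
    the posterior whose statistic is at least as extreme as the observed."""
    obs = longest_run(data)
    n = len(data)
    hits = 0
    for theta in posterior_thetas:
        replicate = [1 if rng.random() < theta else 0 for _ in range(n)]
        hits += longest_run(replicate) >= obs
    return hits / len(posterior_thetas)

rng = random.Random(42)
data = [1] * 10 + [0] * 10        # strongly clustered observations
thetas = [0.5] * 200              # pretend posterior draws under an iid model
p = pp_pvalue(data, thetas, rng)
print(p)                          # small p-value: the iid model fits poorly
```

The chapter's contribution is in choosing statistics whose extremeness predicts poor *phylogenetic* performance specifically, rather than poor fit in general.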
Synapse: automatic behaviour inference and implementation comparison for Erlang
In the open environment of the world wide web, it is natural that there will be multiple providers of services, and that these service provisions — both specifications and implementations — will evolve. This multiplicity gives the user of these services a set of questions about how to choose between different providers, as well as how these choices work in an evolving environment.
The challenge, therefore, is to concisely represent to the user the behaviour of a particular implementation, and the differences between this implementation and alternative versions. Inferred models of software behaviour – and automatically derived and graphically presented comparisons between them – serve to support effective decision making in situations where there are competing implementations of requirements.
In this paper we use state machine models as the abstract representation of the behaviour of an implementation, and with these we build a tool through which one can visualise, in an intuitive manner, both the initial implementation and the differences between alternative versions. Our tool, Synapse, implements this functionality by means of our grammar-inference tool StateChum and a model-differencing algorithm. We describe the main functionality of Synapse and demonstrate its usage by comparing different implementations of an example program from the existing literature.
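The model-differencing idea can be illustrated in miniature. The sketch below assumes machines are abstracted to sets of labelled transitions and takes plain set differences; the actual tool works on StateChum-inferred machines with a more sophisticated structural diff, and the example protocol here is invented.

```python
# Minimal illustration of differencing two state-machine models,
# each given as a set of (state, event, next_state) transitions.
# This is a simplification of what Synapse/StateChum actually do.

def fsm_diff(fsm_a, fsm_b):
    """Partition the transitions of two FSMs into shared and version-specific."""
    return {
        "common":    fsm_a & fsm_b,
        "only_in_a": fsm_a - fsm_b,
        "only_in_b": fsm_b - fsm_a,
    }

# Two versions of a hypothetical locker service; v2 adds an unlock path
v1 = {("idle", "lock", "locked"),
      ("locked", "write", "locked")}
v2 = {("idle", "lock", "locked"),
      ("locked", "write", "locked"),
      ("locked", "unlock", "idle")}

diff = fsm_diff(v1, v2)
print(sorted(diff["only_in_b"]))  # behaviour added in the new version
```

Presenting `only_in_a` and `only_in_b` graphically, overlaid on the common core, is what lets a user of competing service implementations see at a glance how the behaviours diverge.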
What Can Artificial Intelligence Do for Scientific Realism?
The paper proposes a synthesis between human scientists and artificial representation-learning models as a way of augmenting the epistemic warrants of realist theories against various anti-realist attempts. Towards this end, the paper fleshes out unconceived alternatives not as a critique of scientific realism but rather as a reinforcement, since it rejects the retrospective interpretations of scientific progress that brought about the problem of alternatives in the first place. By utilising adversarial machine learning, the synthesis explores possibility spaces of available evidence for unconceived alternatives, providing modal knowledge of what is possible therein. As a result, the epistemic warrant of synthesised realist theories should emerge bolstered as underdetermination by the available evidence is reduced. While shifting the realist commitment away from theoretical artefacts towards modalities of the possibility spaces, the synthesis comes out as a kind of perspectival modelling.