Democratic Development and the Role of Citizenship Education in Sub-Saharan Africa, with a Case Focus on Zambia
In addressing problems of democratisation in Africa, this paper relates the issue to the need for citizenship education and the role it can play in social development. Citizenship should be central to the formation of viable civil societies that claim a tangible stake in national public spaces in post-Cold War Africa. These and related topics are discussed relative to new possibilities that could lead to the full realisation of the concept, as well as the practice, of enfranchised citizenship and inclusive social development in aspiring democracies in the Sub-Saharan African context. The complexity of the development ‘problematique’ facing Sub-Saharan Africa is unique in that it is multi-dimensional but, above all else, politically located. It is therefore central to our discussion that, to correct the continent’s current patterns of underdevelopment, pragmatic schemes of governance must be achieved. To do that, we suggest, new possibilities of citizenship education should be formulated for the African scene in general, and for a democratising but institutionally and economically weakened Zambia in particular.
Lift & Project Systems Performing on the Partial-Vertex-Cover Polytope
We study integrality gap (IG) lower bounds on strong LP and SDP relaxations derived by the Sherali-Adams (SA), Lovasz-Schrijver-SDP (LS+), and Sherali-Adams-SDP (SA+) lift-and-project (L&P) systems for the t-Partial-Vertex-Cover (t-PVC) problem, a variation of the classic Vertex-Cover problem in which only t edges need to be covered. t-PVC admits a 2-approximation using various algorithmic techniques, all relying on a natural LP relaxation. Starting from this LP relaxation, our main results assert that for every epsilon > 0, level-Theta(n) LPs or SDPs derived by all known L&P systems that have been used for positive algorithmic results (except the Lasserre hierarchy) have IGs of at least (1-epsilon)n/t, where n is the number of vertices of the input graph. Our lower bounds are nearly tight.
Our results show that restricted yet powerful models of computation derived by many L&P systems fail to witness c-approximate solutions to t-PVC for any constant c and for t = O(n). This is one of the very few known examples of an intractable combinatorial optimization problem for which LP-based algorithms yield a constant approximation ratio, yet lift-and-project LP and SDP tightenings of the same LP have unbounded IGs.
We also show that the SDP that has given the best known algorithm for t-PVC has integrality gap n/t on instances that can be solved by the level-1 LP relaxation derived by the LS system. This constitutes another rare phenomenon where (even on specific instances) a static LP outperforms an SDP that has been used for the best approximation guarantee for the problem at hand. Finally, one of our main contributions is that we make explicit a new and simple methodology for constructing solutions to LP relaxations that almost trivially satisfy constraints derived by all SDP L&P systems known to be useful for positive algorithmic results (except the Lasserre system).
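For reference, one standard form of the natural LP relaxation referred to above (a sketch in our own notation, not quoted from the paper): variables x_v select vertices and y_e mark edges as covered.

    \min \sum_{v \in V} x_v
    \quad \text{s.t.} \quad x_u + x_v \ge y_e \quad \forall e = \{u, v\} \in E,
    \qquad \sum_{e \in E} y_e \ge t,
    \qquad 0 \le x_v, y_e \le 1 \quad \forall v \in V,\ e \in E.

An integral solution picks a vertex set whose incident edges number at least t, matching the t-PVC objective.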
A Metric for Linear Temporal Logic
We propose a measure and a metric on the sets of infinite traces generated by a set of atomic propositions. To compute these quantities, we first map properties to subsets of the real numbers and then take the Lebesgue measure of the resulting sets. We analyze how this measure is computed for Linear Temporal Logic (LTL) formulas. An implementation for computing the measure of bounded LTL properties is provided and explained. This implementation leverages SAT model counting and performs independence checks on subexpressions to compute the measure and metric compositionally.
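As an illustrative worked example (our notation and reading, not quoted from the paper): for a property constraining only the first k steps over atomic propositions AP, each of the 2^{k|AP|} truth assignments to those steps is equally likely under the uniform product measure, so the measure reduces to a model count:

    \mu(\varphi) = \frac{\#\{\text{satisfying assignments of } \varphi \text{ over } k \text{ steps}\}}{2^{k\,|AP|}}.

For instance, with AP = {p} and k = 2, the formula p ∧ X p is satisfied by exactly one of the four assignments, giving measure 1/4.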
The Case for the Precision Timed (PRET) Machine
It is time for a new era of processors whose temporal behavior is as easily controlled as their logical function. We call them precision timed (PRET) machines. Our basic argument is that real-time systems, in which temporal behavior is as important as logical function, are an important and growing application domain; processor architecture needs to follow suit.
Economics of Managing Invasive Species in Tropical and Sub-Tropical Areas of the U.S.A.: Case Study Development
Resource/Energy Economics and Policy
The asymptotic relative efficiency of mixed statistical tests
Mixed statistical tests are described. It is shown that these tests have a much higher efficiency than conventionally used statistics such as the sign test and polarity coincidence correlation, without the high operational complexity of the Wilcoxon, Mann-Whitney, Kendall tau, or Fisher-Yates-Terry-Hoeffding tests.
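For context, the standard (Pitman) notion of asymptotic relative efficiency, stated here in our own words rather than quoted from the paper: if tests T_1 and T_2 require sample sizes n_1 and n_2 to reach the same power at the same significance level against the same sequence of local alternatives, then

    \mathrm{ARE}(T_1, T_2) = \lim_{n \to \infty} \frac{n_2}{n_1},

so an ARE of 2 means T_1 needs half as many samples as T_2 to do the same job.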
Efficient Parallel Reinforcement Learning Framework using the Reactor Model
Parallel Reinforcement Learning (RL) frameworks are essential for mapping RL workloads to multiple computational resources, allowing for faster generation of samples, estimation of values, and policy improvement. These computational paradigms require a seamless integration of training, serving, and simulation workloads. Existing frameworks, such as Ray, do not manage this orchestration efficiently, especially in RL tasks that demand intensive input/output and synchronization between actors on a single node. In this study, we propose a solution implementing the reactor model, which constrains a set of actors to a fixed communication pattern. This allows the scheduler to eliminate the work needed for synchronization, such as acquiring and releasing locks for each actor or sending and processing coordination-related messages. Our framework, Lingua Franca (LF), a coordination language based on the reactor model, also supports true parallelism in Python and provides a unified interface that allows users to automatically generate dataflow graphs for RL tasks. Compared to Ray on a single-node multi-core compute platform, LF achieves 1.21x and 11.62x higher simulation throughput in OpenAI Gym and Atari environments, reduces the average training time of synchronized parallel Q-learning by 31.2%, and accelerates multi-agent RL inference by 5.12x.
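As a rough illustration of the reactor idea described above, here is a hypothetical Python sketch (ours, not Lingua Franca's actual API): because each reactor declares its connections up front, the dataflow graph is static and the scheduler can dispatch reactions deterministically, with no per-actor locks or coordination messages.

    # Hypothetical sketch of the reactor model (not Lingua Franca's API):
    # the communication pattern is fixed at wiring time, so the scheduler
    # below needs no locks or coordination messages between "actors".
    class Reactor:
        def __init__(self, name):
            self.name = name
            self.downstream = []        # fixed after wiring, never mutated later

        def connect(self, other):
            self.downstream.append(other)

        def react(self, msg):           # override: a pure function of the input
            raise NotImplementedError

    class Scale(Reactor):
        def react(self, msg):
            return 2 * msg              # toy stand-in for a training step

    def schedule(sources, entry):
        # Deterministic dispatch over the static graph.
        for msg in sources:
            frontier = [(entry, msg)]
            while frontier:
                node, m = frontier.pop()
                out = node.react(m)
                if node.downstream:
                    frontier.extend((d, out) for d in node.downstream)
                else:
                    print(node.name, "->", out)

    a, b = Scale("a"), Scale("b")
    a.connect(b)
    schedule([1, 2, 3], a)              # prints b -> 4, b -> 8, b -> 12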
Predictive Models for Maximum Recommended Therapeutic Dose of Antiretroviral Drugs
A novel method for predicting maximum recommended therapeutic dose (MRTD) is presented using quantitative structure-property relationships (QSPRs) and artificial neural networks (ANNs). MRTD data for 31 structurally diverse antiretroviral drugs (ARVs) were collected from the FDA MRTD Database or from package inserts. Molecular property descriptors for each compound, namely molecular mass, aqueous solubility, lipophilicity, biotransformation half-life, oxidation half-life, and biodegradation probability, were calculated from their SMILES codes. A training set (n = 23) was used to construct multiple linear regression and back-propagation neural network models. The models were validated using an external test set (n = 8), which demonstrated that MRTD values may be predicted with reasonable accuracy. Model predictability was described by root mean squared errors (RMSEs), Kendall's correlation coefficients (tau), P-values, and Bland-Altman plots for method comparison. MRTD was predicted by a 6-3-1 neural network model (RMSE = 13.67, tau = 0.643, P = 0.035) more accurately than by the multiple linear regression model (RMSE = 27.27, tau = 0.714, P = 0.019). Both models illustrated a moderate correlation between the aqueous solubility of antiretroviral drugs and maximum therapeutic dose. MRTD prediction may assist in the design of safer, more effective treatments for HIV infection.
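To make the 6-3-1 architecture concrete, here is a hedged Python sketch using scikit-learn's MLPRegressor (the descriptor and target values below are random placeholders, not the paper's data; the authors' exact training setup is not specified here):

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    X = rng.normal(size=(23, 6))   # placeholder: 23 training compounds x 6 descriptors
    y = rng.normal(size=23)        # placeholder MRTD targets

    X_scaled = StandardScaler().fit_transform(X)

    # 6 inputs -> 3 hidden units -> 1 output, with gradients computed
    # by back-propagation
    model = MLPRegressor(hidden_layer_sizes=(3,), solver="lbfgs",
                         max_iter=5000, random_state=0)
    model.fit(X_scaled, y)
    print(model.predict(X_scaled[:2]))  # predictions for two compounds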
Quantifying the energetic contributions of desolvation and π-electron density during translesion DNA synthesis
This report examines the molecular mechanism by which high-fidelity DNA polymerases select nucleotides during the replication of an abasic site, a non-instructional DNA lesion. This was accomplished by synthesizing several unique 5-substituted indolyl 2′-deoxyribose triphosphates and defining their kinetic parameters for incorporation opposite an abasic site, in order to interrogate the contributions of π-electron density and solvation energies. In general, the Kd,app values for hydrophobic non-natural nucleotides are ∼10-fold lower than those measured for isosteric hydrophilic analogs. In addition, kpol values for nucleotides with lower π-electron density are slower than those of isosteric analogs possessing higher π-electron density. The differences in kinetic parameters were used to quantify the energetic contributions of desolvation and π-electron density to nucleotide binding and to the polymerization rate constant. We demonstrate that analogs lacking hydrogen-bonding capabilities act as chain terminators of translesion DNA replication, while analogs with hydrogen-bonding functional groups are extended when paired opposite an abasic site. Collectively, the data indicate that the efficiency of nucleotide incorporation opposite an abasic site is controlled by energies associated with nucleobase desolvation and π-electron stacking interactions, whereas elongation beyond the lesion is achieved through a combination of base-stacking and hydrogen-bonding interactions.
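One plausible form of the conversion from kinetic ratios to energetic contributions (standard transition-state analysis in our notation; the paper's exact treatment may differ):

    \Delta\Delta G_{\mathrm{bind}} = -RT \ln\left( \frac{K_{d,\mathrm{app}}^{(1)}}{K_{d,\mathrm{app}}^{(2)}} \right),
    \qquad
    \Delta\Delta G^{\ddagger} = -RT \ln\left( \frac{k_{\mathrm{pol}}^{(1)}}{k_{\mathrm{pol}}^{(2)}} \right).

For example, the ∼10-fold lower Kd,app of the hydrophobic analogs corresponds to roughly RT ln 10 ≈ 1.4 kcal/mol of binding energy at 25 °C.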