
    A comparative analysis of pregnancy outcomes for women with and without disabilities

    In 2010 in the US, there were 4.7 million women of childbearing age (15-44 years) with disabilities (WWD), defined as being limited in any way in any activities because of physical, mental, or emotional problems. Although their proportion and pregnancy rates are growing, there is little empirical evidence about their health, healthcare needs, pregnancy experiences, and outcomes. We examined differences and predictors of pregnancy outcomes for women with and without disabilities, using 2009 Pregnancy Risk Assessment Monitoring System (PRAMS) data from 15,585 Massachusetts and Rhode Island women. We conducted χ2- and t-tests of pregnancy outcome differences between WWD and women without disabilities. Applying an economic health production framework, we conducted multivariate and partial correlation analyses to determine the significance of disability in predicting pregnancy outcomes. We found no significant differences in delivery types, the mother's hospital stay, or the likelihood of birth defects. However, relative to infants born to women without disabilities, those born to WWD had higher likelihoods of preterm birth, mortality, need for intensive care, low gestational age, and low birth weight. Health behavior, health capital stock, and access to prenatal care were strong predictors of pregnancy outcomes, but disability was not. Therefore, having a disability does not preclude positive pregnancy outcomes. Improved health behavior, health capital stock, and access to prenatal care can improve pregnancy outcomes for WWD. A better understanding of the interactions between disability and pregnancy, and between disability and other pregnancy outcome predictors, could aid the identification of effective methods for improving outcomes for WWD.
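    As an illustration of the kind of group comparison the abstract describes, the sketch below runs a Pearson χ2 test on a 2×2 contingency table (disability status × preterm birth). All counts are synthetic, invented purely for illustration; they are not from the PRAMS data.

```python
# Pearson chi-square test for a 2x2 contingency table, stdlib only.
# The counts below are synthetic, NOT from the PRAMS dataset.

def chi_square_2x2(table):
    """Return the chi-square statistic for a 2x2 table of counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

# Rows: women with / without disabilities; columns: preterm / term births.
synthetic = [[30, 170],    # hypothetical WWD counts
             [80, 1720]]   # hypothetical counts for women without disabilities

stat = chi_square_2x2(synthetic)
# With 1 degree of freedom, the 5% critical value is 3.841.
print(f"chi-square = {stat:.3f}, significant at 5%: {stat > 3.841}")
```

    The same table fed to a library routine (e.g. a statistics package's chi-square contingency test) should give the same statistic, up to a continuity correction if one is applied.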

    A Practical Hierarchical Model of Parallel Computation: The Model

    We introduce a model of parallel computation that retains the ideal properties of the PRAM by using it as a sub-model, while simultaneously being more reflective of realistic parallel architectures by accounting for and providing abstract control over communication and synchronization costs. The Hierarchical PRAM (H-PRAM) model controls conceptual complexity in the face of asynchrony in two ways. First, it provides the simplifying assumption of synchronization to the design of individual algorithms, while allowing the algorithms to work asynchronously with each other, and it organizes this controlled asynchrony via an implicit hierarchy relation. Second, it allows the restriction of communication asynchrony in order to obtain determinate algorithms (thus greatly simplifying proofs of correctness). It is shown that the model is reflective of a variety of existing and proposed parallel architectures, particularly ones that can support massive parallelism. Relationships to programming languages are discussed. Since the PRAM is a sub-model, we can use PRAM algorithms as sub-algorithms in algorithms for the H-PRAM; thus results that have been established with respect to the PRAM are potentially transferable to this new model. The H-PRAM can be used as a flexible tool to investigate general degrees of locality ("neighborhoods of activity") in problems, considering communication and synchronization simultaneously. This gives the potential of obtaining algorithms that map more efficiently to architectures, and of increasing the number of processors that can efficiently be used on a problem (in comparison to a PRAM that charges for communication and synchronization). The model presents a framework in which to study the extent to which general locality can be exploited in parallel computing. A companion paper demonstrates the usage of the H-PRAM via the design and analysis of various algorithms for computing the complete binary tree and the FFT/butterfly graph.
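    The H-PRAM's central idea, synchronous computation inside a sub-PRAM but asynchrony between sub-PRAMs, can be mimicked with per-group barriers. The toy sketch below is my own illustration, not from the paper: "processors" (threads) are partitioned into groups, each group synchronizes internally with a group-local barrier, and the groups never synchronize with each other.

```python
import threading

GROUPS, GROUP_SIZE = 2, 4          # 2 independent sub-PRAMs of 4 "processors"
data = list(range(GROUPS * GROUP_SIZE))
partial = [0] * (GROUPS * GROUP_SIZE)
group_sums = [0] * GROUPS
# One barrier per sub-PRAM: synchronization is local to each group.
barriers = [threading.Barrier(GROUP_SIZE) for _ in range(GROUPS)]

def processor(group, local_id):
    idx = group * GROUP_SIZE + local_id
    partial[idx] = data[idx] * data[idx]   # step 1: local work (square the input)
    barriers[group].wait()                 # synchronize ONLY within this group
    if local_id == 0:                      # step 2: group leader reduces its slice
        group_sums[group] = sum(partial[group * GROUP_SIZE:(group + 1) * GROUP_SIZE])

threads = [threading.Thread(target=processor, args=(g, i))
           for g in range(GROUPS) for i in range(GROUP_SIZE)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(group_sums)  # → [14, 126]  (sums of squares of each group's slice)
```

    Each group's result is deterministic because the barrier guarantees all squares in a slice are written before the leader reads them, even though the two groups interleave arbitrarily.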

    A lower bound for linear approximate compaction

    The λ-approximate compaction problem is: given an input array of n values, each either 0 or 1, place each value in an output array so that all the 1's are in the first (1+λ)k array locations, where k is the number of 1's in the input and λ is an accuracy parameter. This problem is of fundamental importance in parallel computation because of its applications to processor allocation and approximate counting. When λ is a constant, the problem is called Linear Approximate Compaction (LAC). On the CRCW PRAM model, there is an algorithm that solves approximate compaction in O((log log n)^3) time for λ = 1/(log log n), using n/(log log n)^3 processors. Our main result shows that this is close to the best possible. Specifically, we prove that LAC requires Omega(log log n) time using O(n) processors. We also give a tradeoff between λ and the processing time. For ε < 1 and λ = n^ε, the time required is Omega(log(1/ε)).
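    To make the problem statement concrete, here is a sequential sketch (the paper's setting is the CRCW PRAM, so this is only a specification aid, not a parallel algorithm): a trivial exact compaction, plus a checker for the (1+λ)k condition that any approximate-compaction output must satisfy.

```python
import math

def is_valid_compaction(output, k, lam):
    """Check the lambda-approximate compaction condition:
    all k ones must lie in the first ceil((1+lam)*k) output slots."""
    bound = math.ceil((1 + lam) * k)
    return sum(output[:bound]) == k and sum(output) == k

def exact_compaction(bits):
    """Trivial sequential compaction (lambda = 0): move all 1's to the front."""
    k = sum(bits)
    return [1] * k + [0] * (len(bits) - k)

bits = [0, 1, 0, 0, 1, 1, 0, 1]
out = exact_compaction(bits)
print(out, is_valid_compaction(out, sum(bits), lam=0.5))  # → [1, 1, 1, 1, 0, 0, 0, 0] True
```

    An approximate algorithm has slack: with λ = 0.5 and k = 4 ones, any placement of the ones within the first 6 slots is acceptable, which is what makes sublinear-time parallel solutions possible.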

    Quantum Certificate Complexity

    Given a Boolean function f, we study two natural generalizations of the certificate complexity C(f): the randomized certificate complexity RC(f) and the quantum certificate complexity QC(f). Using Ambainis' adversary method, we exactly characterize QC(f) as the square root of RC(f). We then use this result to prove the new relation R0(f) = O(Q2(f)^2 Q0(f) log n) for total f, where R0, Q2, and Q0 are zero-error randomized, bounded-error quantum, and zero-error quantum query complexities respectively. Finally, we give asymptotic gaps between the measures, including a total f for which C(f) is superquadratic in QC(f), and a symmetric partial f for which QC(f) = O(1) yet Q2(f) = Omega(n/log n). Comment: 9 pages
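    For intuition, the deterministic certificate complexity C(f) can be computed by brute force for tiny functions (my own illustrative code, not from the paper): a certificate for input x is a set of positions S such that every y agreeing with x on S has f(y) = f(x), and C(f) is the worst-case minimum |S| over inputs x.

```python
from itertools import combinations, product

def certificate_complexity(f, n):
    """C(f) = max over inputs x of the size of the smallest certificate for x."""
    inputs = list(product([0, 1], repeat=n))

    def min_cert(x):
        for size in range(n + 1):
            for S in combinations(range(n), size):
                # S certifies x if every y agreeing with x on S has f(y) == f(x)
                if all(f(y) == f(x) for y in inputs
                       if all(y[i] == x[i] for i in S)):
                    return size
        return n

    return max(min_cert(x) for x in inputs)

OR3 = lambda x: int(any(x))
print(certificate_complexity(OR3, 3))  # → 3: the all-zeros input forces a full certificate
```

    The exponential enumeration here is only feasible for a handful of bits; the point is the definition, not an efficient algorithm.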

    Strong Scaling of Matrix Multiplication Algorithms and Memory-Independent Communication Lower Bounds

    A parallel algorithm has perfect strong scaling if its running time on P processors is linear in 1/P, including all communication costs. Distributed-memory parallel algorithms for matrix multiplication with perfect strong scaling have only recently been found. One is based on classical matrix multiplication (Solomonik and Demmel, 2011), and one is based on Strassen's fast matrix multiplication (Ballard, Demmel, Holtz, Lipshitz, and Schwartz, 2012). Both algorithms scale perfectly, but only up to some number of processors where the inter-processor communication no longer scales. We obtain a memory-independent communication cost lower bound on classical and Strassen-based distributed-memory matrix multiplication algorithms. These bounds imply that no classical or Strassen-based parallel matrix multiplication algorithm can strongly scale perfectly beyond the ranges already attained by the two parallel algorithms mentioned above. The memory-independent bounds and the strong scaling bounds generalize to other algorithms. Comment: 4 pages, 1 figure
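    The shape of such bounds can be sketched numerically. Assuming memory-independent lower bounds of the form n^2 / P^(2/3) words for classical and n^2 / P^(2/ω0) for Strassen-based algorithms (ω0 = log2 7), as I recall them from this line of work, the toy calculator below shows that the per-processor communication floor decays slower than 1/P, which is what rules out perfect strong scaling beyond a point.

```python
import math

OMEGA0 = math.log2(7)  # exponent of Strassen's algorithm, ~2.807

def classical_bound(n, p):
    """Assumed memory-independent classical bound: n^2 / p^(2/3) words moved."""
    return n * n / p ** (2 / 3)

def strassen_bound(n, p):
    """Assumed memory-independent Strassen bound: n^2 / p^(2/omega0) words moved."""
    return n * n / p ** (2 / OMEGA0)

n = 4096
for p in (1, 8, 64, 512):
    # Perfect strong scaling would need communication to fall like 1/p;
    # these floors fall like p^(-2/3) and p^(-2/omega0) instead.
    print(p, round(classical_bound(n, p)), round(strassen_bound(n, p)))
```

    Doubling P thus cannot keep halving total time indefinitely: once the actual communication of an algorithm reaches this floor, adding processors stops paying off proportionally.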

    A Novel Identity Based Blind Signature Scheme using DLP for E-Commerce

    Blind signatures are used in many applications where confidentiality and authenticity are the main concerns. In a blind signature scheme, a requester asks the signer to sign a blinded message without seeing its content. Many identity-based blind signature schemes have been proposed using bilinear pairings and elliptic curves, but the relative computation costs of the pairing operation and of mapping an identity onto an elliptic curve are high. To save running time and reduce signature size, this paper proposes an identity-based blind signature scheme based on the Discrete Logarithm Problem (DLP); since the DLP is computationally hard, the proposed scheme achieves all essential and secondary security properties. Using the proposed scheme, this paper implements an e-commerce system in a secure way. E-commerce, which involves selling and buying products or services over the internet and open networks, is one of the main applications of identity-based blind signature schemes and is used extensively in today's competitive business environment. The proposed scheme can also be used in e-business, e-voting, and e-cash applications without restriction. DOI: 10.17762/ijritcc2321-8169.15060
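    The paper's exact identity-based construction is not reproduced here, but the core mechanics of a DLP-based blind signature can be sketched with a classic Schnorr-style blind protocol over a toy group (p = 23, q = 11, far too small for real security; every name and parameter below is illustrative, not the paper's scheme):

```python
import hashlib
import random

# Toy DLP parameters: p prime, q = 11 divides p - 1 = 22, g has order q mod p.
p, q, g = 23, 11, 4

def H(R, m):
    """Hash a commitment and message to a challenge in Z_q."""
    h = hashlib.sha256(f"{R}|{m}".encode()).hexdigest()
    return int(h, 16) % q

# --- Key generation (signer) ---
x = random.randrange(1, q)      # secret key
y = pow(g, x, p)                # public key

# --- Blind signing protocol ---
k = random.randrange(1, q)      # signer's one-time nonce
R = pow(g, k, p)                # signer -> user: commitment

m = "pay 10 coins to Alice"     # message the signer never sees in the clear
alpha = random.randrange(1, q)  # user's blinding factors
beta = random.randrange(1, q)
R_blind = (R * pow(g, alpha, p) * pow(y, beta, p)) % p
c_blind = H(R_blind, m)
c = (c_blind - beta) % q        # user -> signer: blinded challenge

s = (k - c * x) % q             # signer -> user: response on the blinded challenge
s_final = (s + alpha) % q       # user unblinds; signature is (c_blind, s_final)

# --- Verification: recompute the commitment and check the hash ---
R_check = (pow(g, s_final, p) * pow(y, c_blind, p)) % p
assert H(R_check, m) == c_blind
print("signature verifies")
```

    The unblinding step works because g^(s+alpha) * y^c_blind = R * g^alpha * y^beta = R_blind, so the verifier reconstructs exactly the commitment the user hashed, while the signer only ever saw the unlinkable values (R, c, s).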

    On the parity complexity measures of Boolean functions

    Get PDF
    The parity decision tree model extends the decision tree model by allowing the computation of a parity function in one step. We prove that the deterministic parity decision tree complexity of any Boolean function is polynomially related to the non-deterministic complexity of the function or its complement. We also show that they are polynomially related to an analogue of block sensitivity. We further study parity decision trees in their relation to an intermediate variant of decision trees, as well as to communication complexity. Comment: submitted to TCS on 16-MAR-200
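    A tiny example of the model's extra power (my illustration, not from the paper): the n-bit parity function needs n ordinary single-bit queries, but a parity decision tree answers it with one query, since it may ask for the XOR of any subset of bits in a single step.

```python
def ordinary_queries_for_parity(x):
    """Ordinary decision tree: must read every bit to know the parity."""
    acc, queries = 0, 0
    for bit in x:            # one query per bit
        acc ^= bit
        queries += 1
    return acc, queries

def parity_tree_for_parity(x):
    """Parity decision tree: one query asking the XOR of ALL positions."""
    answer = 0
    for i in range(len(x)):  # this whole loop models a single parity query
        answer ^= x[i]
    return answer, 1

x = [1, 0, 1, 1, 0, 1]
print(ordinary_queries_for_parity(x))  # → (0, 6)
print(parity_tree_for_parity(x))       # → (0, 1)
```

    The gap here is n versus 1; the paper's results bound how large such gaps can be relative to other complexity measures.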