
    Conceptual and practical challenges for implementing the communities of practice model on a national scale - a Canadian cancer control initiative

    Background: Cancer program delivery, like the rest of health care in Canada, faces two ongoing challenges: to coordinate a pan-Canadian approach across complex provincial jurisdictions, and to facilitate the rapid translation of knowledge into clinical practice. Communities of practice, or CoPs, which have been described by Etienne Wenger as a collaborative learning platform, represent a promising solution to these challenges because they rely on bottom-up rather than top-down social structures for integrating knowledge and practice across regions and agencies. The communities of practice model has been realized in the corporate (e.g., Royal Dutch Shell, Xerox, IBM) and development (e.g., World Bank) sectors, but its application to health care is relatively new. The Canadian Partnership Against Cancer (CPAC) is exploring the potential of Wenger's concept in the Canadian health care context. This paper provides an in-depth analysis of Wenger's concept with a focus on its applicability to the health care sector. Discussion: Empirical studies and social science theory are used to examine the utility of Wenger's concept. Its value lies in emphasizing learning from peers and through practice in settings where innovation is valued. Yet the communities of practice concept lacks conceptual clarity, because Wenger defines it so broadly and sidelines issues of decision making within CoPs. We consider the implications of his broad definition for establishing an informed nomenclature around this specific type of collaborative group. The CoP Project under CPAC and communities of practice in Canadian health care are discussed. Summary: The use of communities of practice in Canadian health care has been shown in some instances to facilitate quality improvements, encourage buy-in among participants, and generate high levels of satisfaction with clinical leadership and knowledge translation among participating physicians. Despite these individual success stories, more information is required on how group decisions are made and applied to the practice world in order to leverage the potential of Wenger's concept more fully and advance the science of knowledge translation within an accountability framework.

    Hard Mathematical Problems in Cryptography and Coding Theory

    In this thesis, we are concerned with certain interesting computationally hard problems and the complexities of their associated algorithms. All of these problems share a common feature in that they all arise from, or have applications to, cryptography or the theory of error-correcting codes. Each chapter in the thesis is based on a stand-alone paper which attacks a particular hard problem. The problems and the techniques employed in attacking them are described in detail. The first problem concerns integer factorization: given a positive integer N, the problem is to find the unique prime factors of N. This problem, which was historically of only academic interest to number theorists, has in recent decades assumed a central importance in public-key cryptography. We propose a method for factorizing a given integer using a graph-theoretic algorithm employing Binary Decision Diagrams (BDDs). The second problem that we consider is related to the classification of certain naturally arising classes of error-correcting codes, called self-dual additive codes over the finite field of four elements, GF(4). We address the problem of classifying self-dual additive codes, determining their weight enumerators, and computing their minimum distance. There is a natural relation between self-dual additive codes over GF(4) and graphs via isotropic systems. Utilizing the properties of the corresponding graphs, and again employing Binary Decision Diagrams to compute the weight enumerators, we obtain a theoretical speed-up of the previously developed algorithm for the classification of these codes. The third problem that we investigate deals with one of the central issues in cryptography, with historical origins in the geometry of numbers, namely the shortest vector problem in lattices. One method used both in theory and in practice to solve the shortest vector problem is enumeration. Lattice enumeration is an exhaustive search whose goal is to find the shortest vector given a lattice basis as input. In our work, we focus on speeding up the lattice enumeration algorithm, and we propose two new ideas to this end. The shortest vector in a lattice can be written as s = v_1 b_1 + v_2 b_2 + ... + v_n b_n, where the v_i are integer coefficients and the b_i are the lattice basis vectors. We propose an enumeration algorithm, called hybrid enumeration, which is a greedy approach for computing a short interval of possible integer values for the coefficients v_i of a shortest lattice vector. Second, we provide an algorithm for estimating the signs (+ or -) of the coefficients v_1, v_2, ..., v_n of a shortest vector s = sum_{i=1}^{n} v_i b_i. Both of these algorithms result in a reduction in the number of nodes in the search tree. Finally, the fourth problem that we deal with arises in the arithmetic of the class groups of imaginary quadratic fields. We follow the results of Soleng and Gillibert pertaining to the class numbers of some sequences of imaginary quadratic fields arising in the arithmetic of elliptic and hyperelliptic curves, and compute a bound on the effective estimates for the orders of class groups of a family of imaginary quadratic number fields. That is, suppose f(n) is a sequence of positive numbers tending to infinity. Given any positive real number L, an effective estimate is to find the smallest positive integer N = N(L), depending on L, such that f(n) > L for all n > N. In other words, given a constant M > 0, we find a value N such that the order of the ideal class I_n in the ring R_n (provided by the homomorphism in Soleng's paper) is greater than M for any n > N. In summary, in this thesis we attack some hard problems in computer science arising from arithmetic, the geometry of numbers, and coding theory, which have applications in the mathematical foundations of cryptography and error-correcting codes.
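
    To make the enumeration idea concrete, here is a minimal, illustrative brute-force sketch in Python that searches for a short nonzero vector s = v_1 b_1 + ... + v_n b_n by trying every integer coefficient in a small box. It is not the hybrid enumeration or sign-estimation algorithm proposed in the thesis; the function name, the coefficient bound, and the example basis are hypothetical.

    # Naive sketch: exhaustive search over coefficients v_i in [-bound, bound].
    from itertools import product

    def shortest_vector_in_box(basis, bound=3):
        """basis: list of n integer vectors; returns (norm^2, coefficients, vector)."""
        n, dim = len(basis), len(basis[0])
        best = None
        for coeffs in product(range(-bound, bound + 1), repeat=n):
            if all(c == 0 for c in coeffs):
                continue  # skip the zero vector
            vec = [sum(c * b[j] for c, b in zip(coeffs, basis)) for j in range(dim)]
            norm_sq = sum(x * x for x in vec)
            if best is None or norm_sq < best[0]:
                best = (norm_sq, coeffs, vec)
        return best

    # Hypothetical 2-dimensional example basis.
    print(shortest_vector_in_box([[201, 37], [1648, 297]]))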

    A response to “Likelihood ratio as weight of evidence: a closer look” by Lund and Iyer

    Recently, Lund and Iyer (L&I) raised an argument regarding the use of likelihood ratios in court. In our view, their argument is based on a lack of understanding of the paradigm. L&I argue that the decision maker should not accept the expert's likelihood ratio without further consideration. This is agreed by all parties: in normal practice, there is often considerable and proper exploration in court of the basis for any probabilistic statement. We conclude that L&I argue against a practice that does not exist and which no one advocates. Further, we conclude that the most informative summary of evidential weight is the likelihood ratio. This is the summary that should be presented to a court in every scientific assessment of evidential weight, with supporting information about how it was constructed and on what it was based.
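
    For reference, the standard forensic-statistics formulation of the likelihood ratio (a textbook definition, not taken from the abstract) compares the probability of the evidence E under the prosecution and defence propositions and scales prior odds into posterior odds:

    \mathrm{LR} = \frac{\Pr(E \mid H_p)}{\Pr(E \mid H_d)}, \qquad \frac{\Pr(H_p \mid E)}{\Pr(H_d \mid E)} = \mathrm{LR} \times \frac{\Pr(H_p)}{\Pr(H_d)}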

    Decision by sampling

    We present a theory of decision by sampling (DbS) in which, in contrast with traditional models, there are no underlying psychoeconomic scales. Instead, we assume that an attribute’s subjective value is constructed from a series of binary, ordinal comparisons to a sample of attribute values drawn from memory and is its rank within the sample. We assume that the sample reflects both the immediate distribution of attribute values from the current decision’s context and also the background, real-world distribution of attribute values. DbS accounts for concave utility functions; losses looming larger than gains; hyperbolic temporal discounting; and the overestimation of small probabilities and the underestimation of large probabilities
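
    A minimal sketch of the rank-within-sample computation described above, assuming an even mix of context and background values drawn from memory (the sampling weights, function name, and example numbers are illustrative assumptions, not parameters from the paper):

    import random

    def dbs_subjective_value(target, context_values, background_values,
                             sample_size=20, p_context=0.5, rng=random):
        """Return the target's relative rank (0..1) within a memory sample."""
        sample = [rng.choice(context_values if rng.random() < p_context
                             else background_values)
                  for _ in range(sample_size)]
        # Series of binary, ordinal comparisons: count the sampled values the
        # target beats (ties counted as half a win).
        wins = sum(1.0 if target > s else 0.5 if target == s else 0.0
                   for s in sample)
        return wins / sample_size

    # Illustrative example: how a gain of 300 ranks against everyday background
    # amounts and the amounts offered in the current choice context.
    print(dbs_subjective_value(300, context_values=[100, 300, 900],
                               background_values=[5, 10, 20, 50, 80, 1500]))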

    From Comparative Risk to Decision Analysis: Ranking Solutions to Multiple-Value Environmental Problems

    While recognizing that the making of environmental policy is sufficiently complex that no one method can serve all conditions, Dr. Kadvany urges that more attention be given to multiattribute utility and decision analysis. He suggests this can help, e.g., to illuminate stakeholder values and generate alternative approaches

    The influence of CEO characteristics on corporate environmental performance of SMEs: Evidence from Vietnamese SMEs

    Drawing on upper echelon theory, this study investigates the impact of CEOs' (chief executive officers) demographic characteristics on corporate environmental performance (CEP) in small and medium-sized enterprises (SMEs). We hypothesized that CEO characteristics, including gender, age, basic educational level, professional educational level, political connection, and ethnicity, affect SMEs' environmental performance. Using cross-sectional data on 810 Vietnamese SMEs, this study provides evidence that female CEOs and CEOs' educational level (both basic and professional) are positively related to the probability of CEP. We also find that, reflecting the role of the institutional environment in CEP, political connections have a negative effect on CEP in the context of Vietnam. Another finding is that SMEs with chief executives from ethnic minority groups show a higher probability of corporate environmental performance than companies run by Kinh chief executives. Since CEP is an essential dimension of corporate social responsibility and a strategic decision for SMEs, it is crucial for companies to select appropriate CEOs based on their demographic characteristics.

    Carving out new business models in a small company through contextual ambidexterity: the case of a sustainable company

    Business model innovation (BMI) and organizational ambidexterity have been pointed out as mechanisms for companies to achieve sustainability. However, especially considering small and medium enterprises (SMEs), there is a lack of studies demonstrating how to combine these mechanisms. Tackling this gap, this study seeks to understand how SMEs can ambidextrously manage BMI. Our aim is to provide a practical artifact, accessible to SMEs, to operationalize BMI through organizational ambidexterity. To this end, we conducted our study under the design science research approach to, first, build an artifact for operationalizing contextual ambidexterity for business model innovation. We then used an in-depth case study of a small vegan fashion e-commerce company to evaluate the practical outcomes of the artifact. Our findings show that the company improves its business model while, at the same time, designing a new business model and monetizing it. Thus, our approach takes the first steps toward operationalizing contextual ambidexterity for business model innovation in small and medium enterprises, democratizing the concept. We contribute to theory by connecting different literature strands, and to practice by creating an artifact to assist management.
