
    Function and secret sharing extensions for Blakley and Asmuth-Bloom secret sharing schemes

    Ankara: The Department of Computer Engineering and the Institute of Engineering and Science of Bilkent University, 2009. Thesis (Master's) -- Bilkent University, 2009. Includes bibliographical references (leaves 65-69).

    Threshold cryptography deals with situations where the authority to initiate or perform cryptographic operations is distributed amongst a group of individuals. Usually in these situations a secret sharing scheme is used to distribute shares of a highly sensitive secret, such as the private key of a bank, to the involved individuals, so that only a sufficiently large coalition can reconstruct the secret while smaller coalitions cannot. The secret sharing problem was introduced independently by Blakley and Shamir in 1979, who proposed two different solutions. Both secret sharing schemes (SSS) are examples of linear secret sharing. Many extensions and solutions based on these schemes have appeared in the literature, most of them using the Shamir SSS. In this thesis, we apply these ideas to the Blakley secret sharing scheme.

    Many of the standard operations of single-user cryptography have counterparts in threshold cryptography. Function sharing deals with the problem of distributing the computation of a function (such as decryption or signature) among several parties; the values needed for the computation are distributed to the participants using a secret sharing scheme. Several function sharing schemes have been proposed in the literature, most of them using Shamir secret sharing as the underlying SSS. In this work, we investigate how function sharing can be achieved using linear secret sharing schemes in general, and give solutions for threshold RSA signature, threshold Paillier decryption and threshold DSS signature operations. The threshold RSA scheme we propose is a generalization of Shoup's Shamir-based scheme; it is similarly robust and provably secure under the static adversary model.

    In threshold cryptography the authorization of a group is decided simply according to its size. There are also general access structures, in which any group can be designated as authorized. Multipartite access structures are an example of general access structures in which the members of a subset are equivalent to each other and can be interchanged; since all access structures are multipartite, they can be used to represent any access structure. To investigate secret sharing schemes using these access structures, we used the Mignotte and Asmuth-Bloom secret sharing schemes, which are based on the Chinese remainder theorem (CRT). The question we tried to answer was whether one can find a Mignotte or Asmuth-Bloom sequence for an arbitrary access structure. For this purpose, we adapted an algorithm from the literature to generate these sequences. We also proposed a new SSS which solves the mentioned problem by generating more than one sequence.

    Bozkurt, İlker Nadi. M.S.
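    The CRT-based schemes discussed above admit a compact illustration. Below is a minimal sketch, in Python, of Asmuth-Bloom (t, n) threshold sharing with hand-picked toy moduli; the moduli, parameter names and helper functions are illustrative assumptions, not material from the thesis. The secret is blinded by a random multiple of m0, and each share is a residue modulo one of the pairwise coprime m_i, so any t shares determine the blinded value via the Chinese remainder theorem while fewer leave it undetermined.

        # Minimal sketch of Asmuth-Bloom (t-of-n) secret sharing via the CRT.
        # Toy parameters, chosen by hand for illustration only; a real scheme
        # needs moduli satisfying m_1*...*m_t > m_0 * (product of the t-1
        # largest m_i), and much larger numbers.
        import random
        from math import prod

        def crt(residues, moduli):
            """Solve x = r_i (mod m_i) for pairwise coprime moduli."""
            M = prod(moduli)
            x = 0
            for r, m in zip(residues, moduli):
                Mi = M // m
                x += r * Mi * pow(Mi, -1, m)  # modular inverse of Mi mod m
            return x % M

        def share(secret, t, m0, moduli):
            """Split secret (< m0) into shares; any t of them suffice."""
            M = prod(sorted(moduli)[:t])      # product of the t smallest moduli
            alpha = random.randrange((M - secret) // m0)
            y = secret + alpha * m0           # blinded secret, still below M
            return [(y % m, m) for m in moduli]

        def reconstruct(shares, m0):
            """Recover the secret from any t (residue, modulus) pairs."""
            residues, moduli = zip(*shares)
            return crt(residues, moduli) % m0

        # (2, 3) example: m0 = 7, moduli 11, 13, 17.
        # Condition check: 11*13 = 143 > 7*17 = 119, so the sequence is valid.
        shares = share(5, 2, 7, [11, 13, 17])
        print(reconstruct(shares[:2], 7))     # any 2 of the 3 shares give 5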

    Multi-stakeholder partnerships under the Rajasthan education initiative: if not for profit, then for what?

    This thesis explores the development of a multi-stakeholder partnership model using a multiple case study research design. Specifically, it examines the rationale for the launch of the Rajasthan Education Initiative (REI), its development and its impact on educational development, and reaches conclusions about the scalability and sustainability of multi-stakeholder partnerships (MSPs) in the context of Rajasthan.

    The literature review shows that there is insufficient independent research evidence to support the widespread claims that public private partnerships (PPPs), of which the MSP is a new 'avatar', are able to deliver results in terms of developmental gains and added value. This paucity of evidence and profusion of claims is partly explained by the fact that much of the commissioned research is not independent, its conclusions shaped by the vested interests of those promoting the organisations they claim to evaluate. In particular, organisations associated with the World Economic Forum (WEF) have been projecting PPPs and programmes of corporate responsibility as a way to engage for-profit organisations and enhance the effectiveness of external support for the delivery of services to basic education. Alongside this, not-for-profit PPPs are seldom scrutinised in terms of public accountability, value for money, scalability or sustainability, partly due to the voluntary nature of such inputs to the public system. I believe my research makes a new and unique contribution to the independent evaluation of state-enabled, not-for-profit MSPs in action.

    Eight formal partnerships were selected for case study using a matrix of organisational characteristics and the scale and scope of interventions. The case studies are organised into four thematic groups: school adoption, ICT-based interventions, teachers' training, and the universalisation of elementary education in underserved urban localities. Each case study is examined using a framework which highlights three dimensions: i) the design of the partnership, ii) stakeholder involvement and intra-agent dynamics, and iii) the governance of the partnership. A cross-case analysis of the eight partnerships is used to arrive at conclusions about MSPs in Rajasthan. This uses the concept of the double contingency of power (Sayer 2004), specifically causal powers and causal susceptibilities, together with Stake's (2006) multiple case analysis, to discuss the commonalities, differences and emerging themes across the partnerships. I have engaged in interpretivist inquiry and sought to understand the workings of an MSP which involves businesses and CSR groups alongside NGOs and government agencies, with the aim of placing Rajasthan on a fast development track. Rather than looking for an ideal-type MSP, I problematise the MSPs in Rajasthan as I explain the workings of an MSP model in action. Given this methodological perspective, I have used semi-structured interviews, observations of the partnership programmes in action, and document analysis as methods to collect and corroborate data for this study.

    The study concludes that the existing MSP arrangements in the REI are neither scalable nor sustainable and have very limited impact. Moreover, the MSPs are unstable and reflect fluid inter-organisational evolution, as well as ambiguous public accountability. There was no purposeful financial management at the REI management level. In addition, the exit routes for partners supporting interventions were not planned, resulting in the fading away of even those interventions that showed promise in delivering learning gains for children, schools and teachers. Non-scalability and lack of sustainability can be inferred from the fact that the partners do not take a long-term view of interventions, lack sustained commitment to resource input, and implement interventions with a temporary workforce. The instability of the partnerships can be explained by the absence of involvement of government teachers and communities; economic and political power also dominated the fate of the programmes. In this MSP it was clear that corporate social responsibility (CSR) was a driving force for establishing the partnership, but it was not backed by continued and meaningful engagement. The 'win-win' situation of greater resources, efficiency and effectiveness, which formed the basic premise for launching the REI, was not evident in reality.

    MSPs are gaining currency globally. This research points to the fact that much more intentional action needs to be taken to ensure that partnerships such as these have a sustained impact on development. The problems and issues of education are historically, politically and socially embedded, and any action that does not take this into account, and which is blind to the interests of different stakeholders in MSPs, will surely fall short of achieving what it set out to do. Further independent research examining the ambitions and realities of other MSPs is needed to inform policy development and implementation. This is essential for achieving the goals of education for all before investing further in what appears to be a flawed modality to improve access, equity and outcomes in education.

    Multi-stakeholder design of forest governance and accountability arrangements in Equator province, Democratic Republic of Congo

    Good forest governance is an increasingly important topic for stakeholders in many different settings around the world. Two of the best-known international initiatives to improve forest governance are the regional Forest Law Enforcement and Governance (FLEG) ministerial processes supported by the World Bank, and the European Union's Forest Law Enforcement, Governance and Trade (FLEGT) Action Plan. Designed to support and complement such initiatives, the IUCN project "Strengthening Voices for Better Choices" (SVBC) is piloting improved forest governance arrangements in six countries in Africa, Asia and South America. In the Democratic Republic of Congo (DRC), one of three project countries in Africa, SVBC has created multi-stakeholder platforms at local, territorial and provincial levels for this purpose.

    Quantifying randomness from Bell nonlocality

    The twentieth century was marked by two scientific revolutions. On the one hand, quantum mechanics questioned our understanding of nature and physics; on the other hand came the realisation that information could be treated as a mathematical quantity. Together they brought forward the age of information. A conceptual leap took place in the 1980s: information could be treated in a quantum way as well. The idea that the intuitive notion of information could be governed by the counter-intuitive laws of quantum mechanics proved extremely fruitful, from both fundamental and applied points of view. The notion of randomness plays a central role in that respect. Indeed, the laws of quantum physics are probabilistic: that contrasts with thousands of years of physical theories that aimed to derive deterministic laws of nature. This, in turn, provides us with sources of random numbers, a crucial resource for information protocols. The fact that quantum theory only describes probabilistic behaviours was for some time regarded as a form of incompleteness. But nonlocality, in the sense of Bell, showed that this is not the case: the laws of quantum physics are inherently random, i.e., the randomness they imply cannot be traced back to a lack of knowledge. This observation has practical consequences: the outputs of a nonlocal physical process are necessarily unpredictable. Moreover, the random character of these outputs does not depend on the physical system, but only on its nonlocal character. For that reason, nonlocality-based randomness is certified in a device-independent manner.

    In this thesis, we quantify nonlocality-based randomness in four scenarios. In the first, we quantify randomness without relying on the quantum formalism: we consider a nonlocal process, assume that it has a specific causal structure due only to how it evolves with time, and provide trade-offs between nonlocality and randomness for the various causal structures considered. Nonlocality-based randomness is usually defined in a theoretical framework; in the second scenario, we take a practical approach and ask how much randomness can be certified in a practical situation, where only partial information can be gained from an experiment, and we describe a method to optimise the amount of randomness certified in such a situation. Trade-offs between nonlocality and randomness are usually studied in the bipartite case, since two agents are the minimum required to define nonlocality; in the third scenario, we quantify how much randomness can be certified for a tripartite process. Finally, though nonlocality-based randomness is device-independent, the process from which randomness is certified is actually realised with a physical state. In the fourth scenario, we ask what physical requirements should be imposed on that state for maximal randomness to be certified, and more specifically how entangled the underlying state should be. We show that maximal randomness can be certified from any level of entanglement.
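    To make the trade-off between Bell violation and certified randomness concrete, here is a small illustrative computation of a well-known device-independent bound from the literature (Pironio et al., Nature 2010), which relates the CHSH value S to the min-entropy certified per round. It illustrates the kind of quantity the thesis studies; it is not the specific bounds derived there.

        # Certified randomness from a CHSH violation S (2 < S <= 2*sqrt(2)),
        # using the bound H_min >= 1 - log2(1 + sqrt(2 - S^2/4)) of
        # Pironio et al. (2010). Illustrative, not the thesis's own bound.
        import math

        def certified_min_entropy(S):
            """Min-entropy certified per round by CHSH value S."""
            if S <= 2:
                return 0.0  # no Bell violation, no certified randomness
            return 1 - math.log2(1 + math.sqrt(2 - S ** 2 / 4))

        for S in (2.0, 2.5, 2 * math.sqrt(2)):
            print(f"S = {S:.3f} -> {certified_min_entropy(S):.3f} bits")

    At the classical bound S = 2 nothing is certified, while the maximal quantum violation S = 2*sqrt(2) certifies a full random bit per round.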

    Reticulate Evolution: Symbiogenesis, Lateral Gene Transfer, Hybridization and Infectious Heredity


    Bioprospecting for fungal-based biocontrol agents

    The research objective of this project was to investigate how a virus infecting the entomopathogenic fungus Beauveria bassiana can improve the fungus's effectiveness as a pesticide. This mycovirus-induced hypervirulence was investigated using microbiological and genomic techniques to characterise the molecular interactions between the virus and its fungal host, so that an improved mycopesticide can be deployed commercially. Prior to this work, it was reported that members of a newly proposed virus family, the Polymycoviridae, can confer mild hypervirulence on their fungal hosts. In this work, attempts to cure a B. bassiana isolate (ATHUM 4946) which harbours Beauveria bassiana polymycovirus 3 (BbPmV-3) were successful, and I established and confirmed two isogenic lines, one virus-free and one virus-infected. BbPmV-3 has six genomic dsRNA segments, and its complete sequence is reported here. Phylogenetic analysis of RNA-dependent RNA polymerase (RdRP) protein sequences revealed that BbPmV-3 is closely related to BbPmV-2 but not to BbPmV-1. Examining the effects of BbPmV-3 and BbPmV-1 on their respective hosts revealed similar phenotypic effects, including increased pigmentation, sporulation and radial growth. However, this polymycovirus-mediated effect on growth depends on the carbon and nitrogen sources available to the host fungus. When sucrose is replaced by lactose, trehalose, glucose or glycerol, both BbPmV-3 and BbPmV-1 increase the growth of ATHUM 4946 and EABb 92/11-Dm respectively, whereas these effects are reversed on maltose and fructose. Similarly, both viruses decrease the growth of ATHUM 4946 and EABb 92/11-Dm when sodium nitrate is replaced by sodium nitrite, potassium nitrate or ammonium nitrate. This hypervirulent effect was tested on Tenebrio molitor, where the virus-infected EABb 92/11-Dm line showed an increased mortality rate compared to the commercial strains B. bassiana ATCC 74040 and B. bassiana GHA, and to the virus-free isogenic line. Furthermore, gene expression data from five timepoints in the two isogenic lines were used in a candidate pathway approach, investigating key pathways known to affect resistance to stresses and carbon uptake. Secretion of organic acids during growth can change the pH of the growth medium, creating a toxic environment that causes stress and death. Consistent with this, genes involved in stress tolerance, such as those for heat shock proteins, trehalose and mannitol biosynthesis, and calcium homeostasis, were upregulated in the virus-infected isolate, as were the carbon-uptake transporter genes BbAGT1 and BbJen1. Equally, we demonstrate that BbPmV-1 drives the up-regulation of the nirA gene, which is linked to nitrate uptake and/or assimilation, and of secondary metabolites such as tenellin, beauvericin and bassianolide. These results reveal a symbiotic relationship between BbPmV-1 and its fungal host. To conclude, these data present a crucial first step in characterising how mycopesticides can be improved to deliver better and safer pest management.

    Analysis of host and herpesvirus interactions using bioinformatics.

    Bioinformatics methods have become central to analysing and organising the sequence data continually produced by new and existing sequencing projects. The field covers both the static aspects of organising and presenting these raw data, by compiling existing knowledge into accessible databases, ontologies and libraries, and the more dynamic aspects of knowledge discovery informatics for interpreting and mining existing data. The aim of this thesis is to use such methods to analyse the herpesvirus-host relationship. In Chapter 2, comparative host and herpesvirus genome analysis is used to compare the sequences of all currently sequenced herpesvirus open reading frames to the conceptually translated human genome, with the aim of identifying herpesvirus-human (host) sequence homologues. Collating all currently known host homologues in one search provides the first complete assessment of herpesvirus-host homologues: the search confirmed 55 previously identified herpesvirus-host homologues and uncovered 4 previously unknown ones. The work in Chapter 2 highlighted the need for consistent annotation of genomes and gene products to enable greater comparative genomics, since it is not feasible to manually curate large numbers of genes whose relationships to each other are not immediately clear. Chapters 3 and 4 therefore focus on the Gene Ontology, a publicly available resource for annotating gene products with a unified vocabulary organised as a structured directed acyclic graph. The Gene Ontology was extended to support host-pathogen interaction annotation by a) adding 187 new terms relating specifically to virus function and structure (Chapter 3), and b) using both the new and existing terms to annotate the entire Human Herpesvirus 1 genome using references from the available literature (Chapter 4). Finally, Chapter 5 examines the utility of the Gene Ontology for analysing the kind of large-scale host and herpesvirus gene expression datasets produced experimentally by DNA microarray studies. Using these automated annotation methods, a cluster of 12 proteins was identified that increases mitochondrial function in HUVEC cells 24 hours post HCMV infection, and a cluster of nine proteins functioning in the MAPK pathway was identified that provides evidence for HCMV inhibition of the MAPK pathway.
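    Because the Gene Ontology is a directed acyclic graph in which annotating a gene product with a term implies all of that term's ancestors (the "true path rule"), annotation tools propagate labels up the graph. The sketch below shows this propagation on a toy DAG; the term IDs, edges and gene name are invented for illustration and are not taken from the thesis.

        # Toy annotation propagation over a Gene Ontology-style DAG.
        # All IDs below are made up for the example.
        from collections import defaultdict

        # child -> set of parents ("is_a" edges); GO is a DAG, not a tree,
        # so a term may have several parents.
        PARENTS = {
            "GO:0000003": {"GO:0000002"},
            "GO:0000004": {"GO:0000002", "GO:0000003"},
            "GO:0000002": {"GO:0000001"},  # GO:0000001 plays the root here
        }

        def ancestors(term):
            """All terms reachable upward from term, excluding term itself."""
            seen, stack = set(), list(PARENTS.get(term, ()))
            while stack:
                t = stack.pop()
                if t not in seen:
                    seen.add(t)
                    stack.extend(PARENTS.get(t, ()))
            return seen

        def propagate(annotations):
            """Extend gene -> {terms} so each gene also carries all ancestors."""
            full = defaultdict(set)
            for gene, terms in annotations.items():
                for t in terms:
                    full[gene] |= {t} | ancestors(t)
            return dict(full)

        # A direct annotation to GO:0000004 implies GO:0000003, 2 and 1.
        print(propagate({"geneA": {"GO:0000004"}}))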