
    Photovoltaic performance of injection solar cells and other applications of nanocrystalline oxide layers

    The direct conversion of sunlight to electricity via photoelectrochemical solar cells is an attractive option that has been pursued for nearly two decades in several laboratories. In this paper, we review the principles and performance features of the very efficient solar cells being developed in our laboratories. These are based on the concept of dye sensitization of wide-bandgap semiconductors used in the form of mesoporous nanocrystalline membrane-type films. The key feature is charge injection from the excited state of an anchored dye into the conduction band of an oxide semiconductor such as TiO2. The use of the semiconductor as a high-surface-area, highly porous film offers several unique advantages: monomeric distribution of a large quantity of the dye in a compact (few micron thick) film, efficient charge collection, and drastic inhibition of charge recombination ('capture of charge carriers by the oxidized dye'). Near-quantitative charge-collection efficiency under monochromatic excitation gives rise to sunlight conversion efficiencies in the range of 8-10%. This has led to fruitful collaboration with several industrial partners. Possible applications and commercialization of these solar cells, as well as other practical applications of nanosized films, are briefly outlined.
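
    The near-quantitative charge collection and the 8-10% overall efficiency quoted above can be related through the standard dye-sensitized-cell figures of merit; the decomposition below is the textbook relation, not a formula taken from this abstract, with LHE the light-harvesting efficiency, $\phi_{\mathrm{inj}}$ the injection quantum yield, $\eta_{\mathrm{coll}}$ the charge-collection efficiency, $J_{\mathrm{sc}}$ the short-circuit current density, $V_{\mathrm{oc}}$ the open-circuit voltage, FF the fill factor, and $P_{\mathrm{in}}$ the incident solar power.

```latex
% Standard DSSC figures of merit (textbook relations, given here for context):
% monochromatic photon-to-current efficiency and overall power conversion efficiency.
\[
  \mathrm{IPCE}(\lambda) \;=\; \mathrm{LHE}(\lambda)\,\phi_{\mathrm{inj}}\,\eta_{\mathrm{coll}},
  \qquad
  \eta \;=\; \frac{J_{\mathrm{sc}}\,V_{\mathrm{oc}}\,\mathrm{FF}}{P_{\mathrm{in}}}.
\]
```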

    Depth-Independent Lower bounds on the Communication Complexity of Read-Once Boolean Formulas

    We show lower bounds of $\Omega(\sqrt{n})$ and $\Omega(n^{1/4})$ on the randomized and quantum communication complexity, respectively, of all $n$-variable read-once Boolean formulas. Our results complement the recent lower bounds of $\Omega(n/8^d)$ by Leonardos and Saks and $\Omega(n/2^{\Omega(d\log d)})$ by Jayram, Kopparty and Raghavendra for the randomized communication complexity of read-once Boolean formulas of depth $d$. We obtain our result by "embedding" either the Disjointness problem or its complement in any given read-once Boolean formula. Comment: 5 pages.
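
    In symbols (our notation, restating the bounds above), writing $R(f)$ and $Q(f)$ for the randomized and quantum communication complexity of the two-party problem induced by an $n$-variable read-once formula $f$ of depth $d$:

```latex
\[
  R(f) = \Omega\!\left(\sqrt{n}\right), \qquad Q(f) = \Omega\!\left(n^{1/4}\right),
\]
\[
  \text{compared with } R(f) = \Omega\!\left(\frac{n}{8^{d}}\right) \text{ (Leonardos and Saks)}
  \quad\text{and}\quad
  R(f) = \Omega\!\left(\frac{n}{2^{\Omega(d\log d)}}\right) \text{ (Jayram, Kopparty and Raghavendra)}.
\]
```

    The point of the comparison is that the new bounds do not degrade with the formula depth $d$.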

    Toxicity of diatomaceous earth on seed weevil, Sitophilus oryzae L. and its effect on agro-morphological characters of maize seeds

    Sitophilus oryzae L. (Curculionidae; Coleoptera) is considered a serious internal feeder of stored cereals. The use of synthetic insecticides leads to the development of resistance among the pests and to residues in the produce. Diatomaceous earth (DE) comes from a natural source, is environment-friendly, is safe to humans and natural enemies, is highly effective against a wide range of stored-product pest species, and leaves no toxic residues on treated seeds. The application of DE as a physical control measure is therefore a promising alternative to synthetic insecticides in storage pest management. With this background, the present study aimed to determine the efficacy of DE against the rice weevil, S. oryzae L., and its effect on the agro-morphological characters of maize (Zea mays L.) seeds. Contact toxicity bioassays were carried out with different concentrations of DE against S. oryzae. The bioassays gave an LD50 of 1.27 mg DE per 100 g of maize seeds, and 100 per cent mortality was achieved at a dose of 15 mg/100 g of maize seeds within six days of exposure. The germination test showed that DE did not reduce, and at the LD95 dose slightly increased, the germinability of maize seeds (LD50 dose = 94%, LD95 dose = 98%, control = 96%). DE at the LD95 concentration had a beneficial effect on seedling parameters, notably germination (98%) and seedling length (53.02 cm). The present study concluded that DE could be effectively utilised as an alternative to chemical insecticides in the management of the rice weevil under storage conditions.
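
    For readers unfamiliar with how an LD50/LD95 is extracted from such a bioassay, the sketch below fits a two-parameter log-logistic dose-response curve to mortality data; the doses and mortality fractions are hypothetical placeholders, not the paper's data, and the paper's actual statistical procedure is not specified here.

```python
# Minimal sketch of estimating LD50/LD95 from a contact-toxicity bioassay.
# The dose-mortality data below are hypothetical, not taken from the paper.
import numpy as np
from scipy.optimize import curve_fit

def log_logistic(dose, ld50, slope):
    """Expected mortality fraction at a given dose (two-parameter log-logistic)."""
    return 1.0 / (1.0 + (ld50 / dose) ** slope)

# Hypothetical doses (mg DE per 100 g maize seed) and observed mortality fractions.
doses = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 15.0])
mortality = np.array([0.10, 0.40, 0.65, 0.85, 0.95, 1.00])

(ld50, slope), _ = curve_fit(log_logistic, doses, mortality, p0=[1.0, 2.0])

# Invert the fitted curve to get the dose expected to give 95% mortality (LD95).
ld95 = ld50 * (0.95 / (1.0 - 0.95)) ** (1.0 / slope)
print(f"LD50 ~ {ld50:.2f} mg/100 g, LD95 ~ {ld95:.2f} mg/100 g")
```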

    Distributed Minimum Cut Approximation

    We study the problem of computing approximate minimum edge cuts by distributed algorithms. We use a standard synchronous message passing model where in each round, $O(\log n)$ bits can be transmitted over each edge (a.k.a. the CONGEST model). We present a distributed algorithm that, for any weighted graph and any $\epsilon \in (0, 1)$, with high probability finds a cut of size at most $O(\epsilon^{-1}\lambda)$ in $O(D) + \tilde{O}(n^{1/2+\epsilon})$ rounds, where $\lambda$ is the size of the minimum cut. This algorithm is based on a simple approach for analyzing random edge sampling, which we call the random layering technique. In addition, we also present another distributed algorithm, which is based on a centralized algorithm due to Matula [SODA '93], that with high probability computes a cut of size at most $(2+\epsilon)\lambda$ in $\tilde{O}((D+\sqrt{n})/\epsilon^5)$ rounds for any $\epsilon > 0$. The time complexities of both of these algorithms almost match the $\tilde{\Omega}(D + \sqrt{n})$ lower bound of Das Sarma et al. [STOC '11], thus leading to an answer to an open question raised by Elkin [SIGACT-News '04] and Das Sarma et al. [STOC '11]. Furthermore, we also strengthen the lower bound of Das Sarma et al. by extending it to unweighted graphs. We show that the same lower bound also holds for unweighted multigraphs (or equivalently for weighted graphs in which $O(w\log n)$ bits can be transmitted in each round over an edge of weight $w$), even if the diameter is $D = O(\log n)$. For unweighted simple graphs, we show that even for networks of diameter $\tilde{O}(\frac{1}{\lambda}\cdot\sqrt{\frac{n}{\alpha\lambda}})$, finding an $\alpha$-approximate minimum cut in networks of edge connectivity $\lambda$ or computing an $\alpha$-approximation of the edge connectivity requires $\tilde{\Omega}(D + \sqrt{\frac{n}{\alpha\lambda}})$ rounds.
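
    As a purely illustrative, centralized toy (not the paper's distributed algorithm), the random edge-sampling intuition behind the random layering technique can be seen as follows: if every edge is kept independently with probability $p$, the sampled subgraph tends to stay connected only once $p$ is large relative to $1/\lambda$ (up to logarithmic factors), so the smallest $p$ at which connectivity typically survives gives a rough handle on the edge connectivity $\lambda$.

```python
# Centralized toy illustrating the random edge-sampling intuition behind the
# random layering technique (illustrative only; not the distributed algorithm).
import random

def connected_after_sampling(n, edges, p, rng):
    """Sample each edge independently with probability p; return True if the
    sampled subgraph is connected (checked with a simple union-find)."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    components = n
    for u, v in edges:
        if rng.random() < p:
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                components -= 1
    return components == 1

# Example: a 12-cycle with each edge in 4 parallel copies (minimum cut = 8).
n = 12
edges = [(i, (i + 1) % n) for i in range(n)] * 4

rng = random.Random(0)
for p in (0.1, 0.3, 0.5, 0.8):
    kept = sum(connected_after_sampling(n, edges, p, rng) for _ in range(200))
    print(f"p = {p:.1f}: sampled graph connected in {kept}/200 trials")
```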

    Lateral drill holes decrease strength of the femur: An observational study using finite element and experimental analyses

    Background: Internal fixation of femoral fractures requires drilling holes through the cortical bone of the shaft of the femur. Intramedullary suction reduces the fat emboli produced by reaming and nailing femoral fractures but requires four suction port

    On Packet Scheduling with Adversarial Jamming and Speedup

    In Packet Scheduling with Adversarial Jamming, packets of arbitrary sizes arrive over time to be transmitted over a channel in which instantaneous jamming errors occur at times chosen by the adversary and not known to the algorithm. The transmission taking place at the time of jamming is corrupted, and the algorithm learns this fact immediately. An online algorithm maximizes the total size of packets it successfully transmits, and the goal is to develop an algorithm with the lowest possible asymptotic competitive ratio, where the additive constant may depend on the packet sizes. Our main contribution is a universal algorithm that works for any speedup and any packet sizes and, unlike previous algorithms for the problem, does not need to know these properties in advance. We show that this algorithm guarantees 1-competitiveness with speedup 4, making it the first known algorithm to maintain 1-competitiveness with a moderate speedup in the general setting of arbitrary packet sizes. We also prove a lower bound of $\phi + 1 \approx 2.618$ on the speedup of any 1-competitive deterministic algorithm, showing that our algorithm is close to optimal. Additionally, we formulate a general framework for analyzing our algorithm locally and use it to show upper bounds on its competitive ratio for speedups in $[1,4)$ and for several special cases, recovering some previously known results, each of which had a dedicated proof. In particular, our algorithm is 3-competitive without speedup, matching both the (worst-case) performance of the algorithm by Jurdzinski et al. and the lower bound by Anta et al. Comment: Appeared in Proc. of the 15th Workshop on Approximation and Online Algorithms (WAOA 2017).
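
    One common way to formalize the guarantee discussed above (our notation, not quoted from the paper) is the following: an algorithm ALG run with speedup $s \ge 1$ is $R$-competitive if, on every instance $I$,

```latex
\[
  |\mathrm{ALG}_{s}(I)| \;\ge\; \frac{1}{R}\,|\mathrm{OPT}_{1}(I)| \;-\; c,
\]
```

    where $|\cdot|$ denotes the total size of successfully transmitted packets, $\mathrm{OPT}_{1}$ is an optimal offline schedule at speed 1, and the additive constant $c$ may depend on the packet sizes but not on the instance. In these terms, the results above give $R = 1$ at $s = 4$, $R = 3$ at $s = 1$, and a lower bound of $s \ge \phi + 1$ for any deterministic algorithm achieving $R = 1$.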

    Ultrasonic Examination of Thin Walled Stainless Steel Tubes by Synthetic Aperture Focusing Technique

    The objective of Nondestructive Testing (NDT) is to detect flaws in components and to characterize them by size, shape, orientation, etc., so that decisions on the fitness for service of the components can be made. In the case of thin-walled tubes, ultrasonic or eddy current examination is generally performed for the detection of defects. During the inspection of thin-walled stainless steel tubes used in a nuclear application, defect indications were obtained by eddy current examination in two of the tubes. From the eddy current results, the size and orientation of these defects could not be determined accurately, so a complementary inspection method was required for better characterisation of the defects in the two tubes. Although ultrasonic testing is one of the most promising techniques for the detection and characterization of defects, the interpretation of results from an A-scan presentation relies heavily on the skill and experience of the operator performing the test, which comes only with extensive training [1]. This problem is further complicated in the case of thin-walled tubes, since the achievable resolution is poor owing to the small wall thickness and diameter, and the signal-to-noise ratio obtainable from fine defects is also poor.
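
    The core of SAFT is a delay-and-sum reconstruction: for every image point, the A-scans recorded at different transducer positions are summed at the sample index corresponding to the two-way travel time to that point. The sketch below illustrates this on synthetic data; the scan geometry, velocity, and sampling rate are illustrative values, not the inspection parameters of the tubes described above.

```python
# Minimal delay-and-sum SAFT reconstruction sketch (illustrative parameters;
# not the inspection setup of the tubes described above).
import numpy as np

def saft_image(ascans, x_elems, z_grid, x_grid, c, fs):
    """ascans: (n_positions, n_samples) pulse-echo A-scans recorded along x_elems.
    Returns a (len(z_grid), len(x_grid)) image by summing, for each image point,
    the samples whose two-way travel time matches that point."""
    n_pos, n_samp = ascans.shape
    image = np.zeros((len(z_grid), len(x_grid)))
    for iz, z in enumerate(z_grid):
        for ix, x in enumerate(x_grid):
            # Two-way travel time from each transducer position to the point (x, z).
            dist = np.hypot(x_elems - x, z)
            samples = np.rint(2.0 * dist / c * fs).astype(int)
            valid = samples < n_samp
            image[iz, ix] = ascans[np.arange(n_pos)[valid], samples[valid]].sum()
    return image

# Synthetic example: one point reflector at (x, z) = (0 mm, 3 mm) in steel.
c, fs = 5900.0, 100e6                      # longitudinal velocity (m/s), sampling rate (Hz)
x_elems = np.linspace(-5e-3, 5e-3, 21)     # scan positions (m)
n_samp = 400
ascans = np.zeros((len(x_elems), n_samp))
tof = np.rint(2.0 * np.hypot(x_elems - 0.0, 3e-3) / c * fs).astype(int)
ascans[np.arange(len(x_elems)), tof] = 1.0  # idealized echoes

img = saft_image(ascans, x_elems,
                 z_grid=np.linspace(1e-3, 5e-3, 40),
                 x_grid=np.linspace(-4e-3, 4e-3, 40), c=c, fs=fs)
print("focus found near grid index:", np.unravel_index(img.argmax(), img.shape))
```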

    Development of a new integration algorithm for parallel implementation of the finite element elasto-plastic analysis

    The accurate integration of stress-strain relations is an important factor in the finite element analysis of elasto-plastic problems. The conventional method for this task is the Euler algorithm, which divides the whole integration process into a number of smaller substeps of equal size; it is difficult to control the errors in such an integration scheme. In this paper, we present a new algorithm for integrating stress-strain relations, based on third- and fourth-order Runge-Kutta methods. This substepping scheme controls the errors in the integration process by adjusting the substep size automatically. In order to implement the substepping scheme on parallel systems, a parallel preconditioned conjugate gradient method is developed. The resulting algorithms have been implemented on a parallel environment defined by a cluster of workstations, and their performance is presented.
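
    The idea of an error-controlled substepping scheme can be sketched generically as below: an embedded third/fourth-order Runge-Kutta pair integrates an abstract rate relation $dy/dt = f(t, y)$ (standing in for the stress-strain rate law), the difference between the two estimates serves as the local error, and the substep size is adjusted accordingly. The tolerance and step-adaptation rule are illustrative choices, not the paper's specific scheme, and the parallel preconditioned conjugate gradient solver is not shown.

```python
# Generic sketch of error-controlled substepping with a third/fourth-order
# Runge-Kutta pair for a rate relation dy/dt = f(t, y); the tolerance and the
# step-adaptation rule are illustrative, not the paper's scheme.
import numpy as np

def rk34_adaptive(f, t0, t1, y0, tol=1e-6, h0=None):
    t, y, h = t0, np.asarray(y0, dtype=float), h0 or (t1 - t0) / 10.0
    while t < t1:
        h = min(h, t1 - t)
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y4 = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)   # classic 4th-order estimate
        k3b = f(t + h, y - h * k1 + 2 * h * k2)
        y3 = y + h / 6 * (k1 + 4 * k2 + k3b)           # Kutta's 3rd-order estimate
        err = np.linalg.norm(y4 - y3) / max(np.linalg.norm(y4), 1.0)
        if err <= tol:
            t, y = t + h, y4                           # accept the substep
        # Grow or shrink the next substep based on the local error estimate.
        h *= min(2.0, max(0.1, 0.9 * (tol / max(err, 1e-16)) ** 0.25))
    return y

# Toy usage on dy/dt = -y with y(0) = 1 (exact value exp(-1) at t = 1).
print(rk34_adaptive(lambda t, y: -y, 0.0, 1.0, [1.0]), np.exp(-1.0))
```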

    Neuroprotective effect of secretin in chronic hypoxia induced neurodegeneration in rats

    Background: Hypoxia is a condition that can arise at any stage in the delivery of oxygen to cells, including decreased partial pressure of oxygen, reduced diffusion of oxygen in the lungs, insufficient hemoglobin, inefficient blood flow to the end tissue, and disturbed breathing rhythm. Secretin is a peptide hormone that contributes to the proper functioning of the gastrointestinal system. Methods: The current study was conducted to evaluate the effect of exogenously administered secretin on chronic hypoxic damage of the brain in a rat model. The experimental design consisted of control animals; control animals + secretin; hypoxia-exposed animals; and hypoxia-exposed animals + secretin (20 ng/kg body weight). Results: The results of this study point to a possible role of secretin as a neuroprotectant. Conclusions: Further research on secretin needs to be conducted in order to confirm the deductions made by this study.

    Approximating k-Forest with Resource Augmentation: A Primal-Dual Approach

    In this paper, we study the $k$-forest problem in the model of resource augmentation. In the $k$-forest problem, given an edge-weighted graph $G(V,E)$, a parameter $k$, and a set of $m$ demand pairs $\subseteq V \times V$, the objective is to construct a minimum-cost subgraph that connects at least $k$ demands. The problem is hard to approximate: the best-known approximation ratio is $O(\min\{\sqrt{n}, \sqrt{k}\})$. Furthermore, $k$-forest is as hard to approximate as the notoriously hard densest $k$-subgraph problem. While the $k$-forest problem is hard to approximate in the worst case, we show that with the use of resource augmentation, we can efficiently approximate it up to a constant factor. First, we restate the problem in terms of the number of demands that are not connected. In particular, the objective of the $k$-forest problem can be viewed as removing at most $m-k$ demands and finding a minimum-cost subgraph that connects the remaining demands. We use this perspective of the problem to explain the performance of our algorithm (in terms of the augmentation) in a more intuitive way. Specifically, we present a polynomial-time algorithm for the $k$-forest problem that, for every $\epsilon > 0$, removes at most $m-k$ demands and has cost no more than $O(1/\epsilon^{2})$ times the cost of an optimal algorithm that removes at most $(1-\epsilon)(m-k)$ demands.
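
    In symbols (our notation), writing $\mathrm{OPT}_{q}$ for the minimum cost of a subgraph that leaves at most $q$ demands unconnected, the resource-augmentation guarantee stated above reads:

```latex
\[
  \mathrm{cost}(\mathrm{ALG}) \;\le\; O\!\left(\frac{1}{\epsilon^{2}}\right)\cdot \mathrm{OPT}_{(1-\epsilon)(m-k)},
  \qquad \text{with ALG leaving at most } m-k \text{ demands unconnected.}
\]
```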