117 research outputs found

    Decreasing Cost of Intermediation

    This paper attempts to explain how the cost of intermediation can be reduced. One solution we postulate is subsidizing the cost of intermediation. The model uses ex-ante identical, spatially separated agents in an overlapping-generations framework. Agents receive a relocation shock when they become old. We conclude that the government cannot subsidize the cost of intermediation completely, but it can reduce the cost partially.

    Lipid shell modified with combination of lipid and phospholipids in solid lipid nanoparticles for engineered specificity of paclitaxel in tumor bearing mice.

    Paclitaxel (PTX) is an anticancer drug belonging to the taxane class. It is active against various types of carcinomas. The marketed formulation of paclitaxel is associated with deleterious effects and a lack of specificity to the tumor. Solid lipid nanoparticles (SLN) are colloidal carriers extensively studied and developed for their potential uses, especially for controlled release and site specificity. The present study was designed to develop a formulation of PTX in the form of SLN to be administered via the IV route with improved tumor specificity, in which the lipid shell was modified by using a combination of lipid and phospholipids. A total of eight formulations were prepared and characterized by various in vitro and in vivo parameters. The microemulsification method was used for the preparation of the SLN. The production yield of the resulting process was high for all SLN. The average particle size ranged between 209 nm and 385 nm. The developed PTX-SLN showed high entrapment efficiency. The zeta potential values indicated good stability of the SLN. The in vitro dissolution study showed that drug release was retarded and depended on the concentration of lipids employed. An in vitro cytotoxicity study was performed on the MCF-7 cancer cell line, which showed that formulation G2 had the strongest effect on the cancer cells. Tissue targeting and tumor growth inhibition studies were performed in mice, where the PTX-loaded SLN from batch G2 showed the most promising outcomes. The results of this study strongly indicate that the developed SLN have potential as an efficient drug delivery system for paclitaxel.

    Experimental investigation of hardness of FSW and TIG joints of Aluminium alloys of AA7075 and AA6061

    This paper reports hardness testing conducted on butt joints welded by the FSW and TIG welding processes on similar and dissimilar aluminium alloys. FSW joints were produced for similar alloys of AA7075-T651 and dissimilar alloys of AA7075-T651 and AA6061-T6. The friction stir welds of AA7075 and AA6061 aluminium alloy were produced at tool rotational speeds of 650, 700, 800, 900, and 1000 rpm and traverse speeds of 30, 35, and 40 mm/min. TIG welding was conducted along the rolling direction of similar and dissimilar aluminium plates. The Brinell hardness testing technique was employed; the tests were conducted on the welds to ascertain joint integrity before characterization and to gauge the quality of the welds.

    A Review on: Efficient Method for Mining Frequent Itemsets on Temporal Data

    Temporal data carries time-stamped information that affects the results of data mining. Conventional methods for finding frequent itemsets assume that datasets are static and that the induced rules are relevant over the whole dataset. However, this is not the case when data is temporal. This work aims to improve the efficiency of mining frequent itemsets on temporal data, where patterns may hold in either all or only some of the time intervals. It proposes a new time-interval-aware method called frequent itemset mining with time cubes. The focus is on developing an efficient algorithm for this mining problem by extending the well-known Apriori algorithm. The notion of time cubes is proposed to handle different time hierarchies; this is how patterns that occur periodically, during a time interval, or both, are recognized. A new density threshold is also proposed to address the problem of overestimating time periods and to ensure that discovered patterns are valid.
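    The interval-aware extension of Apriori described in this abstract can be sketched in plain Python: mine each time slice separately so a pattern can be frequent inside one interval without being frequent over the whole dataset. The transactions, interval boundaries, and support threshold below are hypothetical, and a full "time cubes" method would additionally roll supports up across time hierarchies.

```python
from itertools import combinations
from collections import Counter

def apriori(transactions, min_support):
    """Classic Apriori: return frequent itemsets with their support counts."""
    frequent = {}
    # Level 1: frequent single items.
    counts = Counter(item for t in transactions for item in t)
    current = {frozenset([i]): c for i, c in counts.items() if c >= min_support}
    k = 1
    while current:
        frequent.update(current)
        k += 1
        # Candidate generation: all k-subsets of items seen in frequent sets.
        items = set().union(*current)
        candidates = {frozenset(c) for c in combinations(sorted(items), k)}
        counts = Counter()
        for t in transactions:
            ts = set(t)
            for cand in candidates:
                if cand <= ts:
                    counts[cand] += 1
        current = {c: n for c, n in counts.items() if n >= min_support}
    return frequent

def temporal_frequent_itemsets(stamped_transactions, intervals, min_support):
    """Mine frequent itemsets separately inside each time interval."""
    result = {}
    for (start, end) in intervals:
        slice_ = [items for (ts, items) in stamped_transactions if start <= ts < end]
        result[(start, end)] = apriori(slice_, min_support)
    return result

# Hypothetical time-stamped transactions (timestamp, basket).
data = [
    (1, ["milk", "bread"]), (2, ["milk", "bread"]), (3, ["milk"]),
    (11, ["beer", "chips"]), (12, ["beer", "chips"]), (13, ["beer"]),
]
by_interval = temporal_frequent_itemsets(data, [(0, 10), (10, 20)], min_support=2)
```

    Here {beer, chips} is frequent only in the second interval; a whole-dataset Apriori with a proportionally scaled threshold would miss such interval-local patterns.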

    Prescribing patterns of antihypertensive drugs in tertiary care teaching hospital

    Background: Prescribing patterns reflect the prescribing behavior of prescribers. Hypertension is the most common cardiac complication among the middle-aged and elderly population; studying prescribing trends therefore helps in selecting appropriate drugs for the treatment of hypertension. The aim of this study was to analyze the patterns of antihypertensive drugs prescribed to patients diagnosed with hypertension. Methods: This was a cross-sectional, observational study. A questionnaire was specifically designed to capture patients' demographic profile, disease diagnosis, and drug regimen. Results: A total of 100 patients were analyzed for antihypertensive prescribing patterns; the largest number of patients belonged to the age group of 61-80 years. The proportion of male patients (62%) was higher than that of female patients (38%). A total of 246 drugs were prescribed across 100 prescriptions, an average of 2.46 drugs per prescription. Among the 246 drugs, 97% were antihypertensives and 3% were other concomitant drugs. Single-drug therapy was prescribed in 65% of cases, two-drug therapy in 25%, and three-drug therapy in 10%. The most commonly prescribed single antihypertensive was telmisartan; the most common two-drug combination was telmisartan + hydrochlorothiazide; and the most common three-drug combination was telmisartan + amlodipine + hydrochlorothiazide. Among the 246 drugs prescribed, angiotensin receptor blockers (ARB) accounted for 28%, calcium channel blockers (CCB) 17%, angiotensin converting enzyme inhibitors (ACEI) 12%, and beta blockers 5%, followed by ARB + diuretics 13%, ACEI + diuretics 8%, beta blocker + CCB 4%, ARB + CCB + diuretics 7%, ARB + beta blocker + diuretics 3%, and other drugs 3%. No fixed-dose combinations and no injectables were prescribed in the present study. Conclusions: The most commonly prescribed single-drug therapy was an ARB. The drug class most commonly used in combination therapy was diuretics.
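    The arithmetic behind the reported figures (246 drugs over 100 prescriptions, i.e. 2.46 drugs per prescription, with class shares as percentages of total drugs) can be sketched as follows; the helper function and the raw class counts are hypothetical, back-calculated from the percentages in the abstract for illustration.

```python
def prescription_stats(total_drugs, total_prescriptions, class_counts):
    """Average drugs per prescription and per-class shares in percent."""
    avg = total_drugs / total_prescriptions
    shares = {cls: round(100 * n / total_drugs) for cls, n in class_counts.items()}
    return avg, shares

# 246 drugs over 100 prescriptions, as reported in the study;
# class counts are illustrative, inferred from the reported percentages.
avg, shares = prescription_stats(246, 100, {"ARB": 69, "CCB": 42, "ACEI": 30})
```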

    Public Opinion Analysis Using Hadoop

    Recent technological advances in devices, computing, and social networking have revolutionized the world but have also increased the amount of data produced by humans on a large scale. If this data were stored on disks, they could fill an entire football field. According to studies, 2.5 billion gigabytes of new data are generated every day and 2.5 petabytes of data are collected every hour, and this rate is still growing enormously. Though all this information is meaningful and can be useful when processed, it often gets neglected. Social media has gained massive popularity. Twitter makes it easy for users to express, share, and discuss the latest hot topics, but these public expressions and views are hard to analyze due to the sheer size of the data Twitter creates. Performing analysis and prediction over society's trending topics therefore requires modern technologies, and the most popular solution is Hadoop. Hadoop is an open-source framework for developing and executing distributed applications that process very large amounts of data; it stores and processes big data in a distributed fashion on large clusters of commodity hardware. The risk, of course, in running on commodity machines is how to handle failure. Hadoop is built with the assumption that hardware will fail and, as such, it can easily handle most failures, providing a suitable environment for processing huge datasets. Our job is to extract and store data in its file system and query the data according to the desired output. We propose to perform analysis of public opinion expressed on Twitter regarding trending topics of society by using the Apache Hadoop framework along with its services Apache Flume and Apache Hive.
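    The counting step that a Hive query would ultimately perform over the collected tweets can be sketched as a toy MapReduce-style job in pure Python; the sample tweets and the hashtag-based topic extraction are hypothetical stand-ins for data that Flume would ingest into HDFS.

```python
from collections import Counter
from itertools import chain

def mapper(tweet):
    """Map phase: emit (hashtag, 1) pairs for each hashtag in one tweet."""
    return [(word.lower(), 1) for word in tweet.split() if word.startswith("#")]

def reducer(pairs):
    """Reduce phase: sum the emitted counts per hashtag."""
    totals = Counter()
    for key, n in pairs:
        totals[key] += n
    return totals

# Hypothetical tweets standing in for data ingested via Apache Flume.
tweets = [
    "Big win for the city team #sports",
    "New phone launch today #tech #gadgets",
    "Stadium packed tonight #sports",
]
trending = reducer(chain.from_iterable(mapper(t) for t in tweets))
```

    In the proposed pipeline the same aggregation would be expressed declaratively in Hive (a GROUP BY over a table of extracted hashtags) and executed as distributed MapReduce jobs over the cluster.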

    EFFECT OF CANNA INDICA L. EXTRACT AGAINST CAFFEINE-NICOTINE CO-ADMINISTRATION-INDUCED EXAGGERATION IN TYPE 2 DIABETIC RATS

    Objective: This study was designed to evaluate the protective effect of Canna indica L. extract against caffeine-nicotine co-administration-induced type 2 diabetes exaggeration in rats. Methods: A three-week study was conducted in four rat groups (n=6): a type 2 diabetic control group; a caffeine-nicotine diabetic control group (20 mg/kg and 0.4 mg/kg, i.p., twice daily); a Canna indica L. extract plus caffeine-nicotine treatment group; and a standard-drug-treated caffeine-nicotine diabetic group (glibenclamide, 5 mg/kg, once daily). Type 2 diabetes was induced in all groups by two weeks of a high-fat diet and a single dose of streptozotocin (50 mg/kg, i.p.) on the 1st day of the study. Blood and urine samples were collected every week for serum biochemical analysis. Results: The results of extract treatment and standard drug treatment were compared with the untreated caffeine-nicotine co-administration group. The difference in each relevant serum parameter was analyzed by ANOVA and Dunnett's t test. The extract-treated caffeine-nicotine diabetic group showed an approximately 150-200 mg/dL (p<0.001) reduction in serum glucose compared with the untreated caffeine-nicotine diabetic control group. Extract treatment reduced serum glucose by 10-15 mg/dL more than glibenclamide treatment, with high significance (p<0.001). Extract treatment showed better results than the standard drug in liver and kidney function tests and exhibited better potential in controlling diabetic complications. Extract treatment increased HDL-C and reduced triglycerides, LDL-C, VLDL-C, and TC much better and with higher significance than the standard drug. Extract treatment reduced TC by at least 60-80 mg/dL (p<0.01) in comparison with the caffeine-nicotine diabetic control group, and by 10-15 mg/dL more than the standard drug. Conclusion: Caffeine-nicotine co-administration-induced exaggeration of type 2 diabetes was better treated by CI extract than by the standard drug glibenclamide.
Keywords: Type 2 diabetes, Streptozotocin, Caffeine, Nicotine, Diabetic complication, Rat