Differential modulation of performance in insight and divergent thinking tasks with tDCS
While both insight and divergent thinking tasks are used to study creativity, there are reasons to believe that the two may call upon very different mechanisms. To explore this hypothesis, we administered a verbal insight task (riddles) and a divergent thinking task (verbal fluency) to 16 native English speakers and 16 non-native English speakers after they underwent transcranial direct current stimulation (tDCS) of the left middle temporal gyrus and right temporo-parietal junction. We found that, in the insight task, depolarization of the right temporo-parietal junction and hyperpolarization of the left middle temporal gyrus increased performance relative to both the control condition and the reverse stimulation condition in both groups (non-native > native speakers). In the divergent thinking task, however, the same pattern of stimulation decreased performance, compared to the reverse stimulation condition, in the non-native speakers. We explain this dissociation in terms of the differing task demands of divergent thinking and insight tasks, and speculate that the greater sensitivity of non-native speakers to tDCS may be a function of less entrenched neural networks for non-native languages.
Optimal tie-breaking rules
We consider two-player contests with the possibility of ties and study the effect of different tie-breaking rules on effort. For ratio-form and difference-form contests that admit a pure-strategy Nash equilibrium, we find that the effort of both players is monotone decreasing in the probability that ties are broken in favor of the stronger player. Thus, the effort-maximizing tie-breaking rule commits to breaking ties in favor of the weaker agent. With symmetric agents, we find that the equilibrium is generally symmetric and independent of the tie-breaking rule. We also study the design of random tie-breaking rules that are ex-ante fair and identify sufficient conditions under which breaking ties before the contest leads to greater expected effort than the more commonly observed practice of breaking ties after the contest.
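The abstract does not reproduce the contest success function, but the setting can be made concrete with a standard ratio-form (Tullock-style) specification; the tie probability τ, tie-breaking probability β, valuations v_i and marginal costs c_i below are illustrative notation, not the paper's. If a tie occurs with probability τ(x_1, x_2) and is broken in favor of player 1 with probability β, win probabilities and payoffs take the form

\[
p_1(x_1, x_2) = (1-\tau)\,\frac{x_1}{x_1 + x_2} + \tau\beta, \qquad p_2 = 1 - p_1,
\]
\[
\pi_i(x_1, x_2) = p_i(x_1, x_2)\,v_i - c_i x_i .
\]

In this notation, with player 1 the stronger agent, the paper's monotonicity result says that both equilibrium efforts fall as β rises.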
The Importance of Modeling Data Missingness in Algorithmic Fairness: A Causal Perspective
Training datasets for machine learning often have some form of missingness. For example, to learn a model for deciding whom to give a loan, the available training data includes individuals who were given a loan in the past, but not those who were not. This missingness, if ignored, nullifies any fairness guarantee of the training procedure when the model is deployed. Using causal graphs, we characterize the missingness mechanisms in different real-world scenarios. We show conditions under which various distributions, used in popular fairness algorithms, can or cannot be recovered from the training data. Our theoretical results imply that many of these algorithms cannot guarantee fairness in practice. Modeling missingness also helps to identify correct design principles for fair algorithms. For example, in multi-stage settings where decisions are made in multiple screening rounds, we use our framework to derive the minimal distributions required to design a fair algorithm. Our proposed algorithm decentralizes the decision-making process and still achieves performance similar to the optimal algorithm that requires centralization and non-recoverable distributions.
An Implementation of Cardiovascular Disease Prediction in Ultrasonography Images using AWMYOLOv4 Deep Learning Model
Cardiovascular diseases have been among the leading causes of death worldwide for the past 25 years, and every country invests heavily in healthcare research aimed at improving disease prediction. Cardiac conditions are difficult even for physicians to predict; it is a very challenging task requiring considerable knowledge and expertise, which motivates machine learning models that can efficiently predict cardiovascular disease at its earliest stage. In this work, we recommend an AWMF filter to pre-process the input image, after which the image is passed to a YOLOv4 neural network for classification and segmentation of the affected regions of the heart in ultrasonic images. The proposed algorithm uses ultrasonic image classification and segmentation to detect cardiovascular disease earlier. The model achieves 96% accuracy on training data and 98% on testing data, showing better results than existing methods.
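The abstract does not spell out the AWMF formulation or the network configuration, so the sketch below only approximates the pipeline: a plain median blur stands in for the AWMF despeckling stage, and OpenCV's Darknet loader runs a YOLOv4 forward pass. The file names yolov4-heart.cfg, yolov4-heart.weights and echo_frame.png are hypothetical stand-ins.

    import cv2

    def preprocess(frame, ksize=5):
        # Despeckle the ultrasound frame; approximates the paper's AWMF stage.
        return cv2.medianBlur(frame, ksize)

    # Hypothetical trained config/weights; not provided by the paper.
    net = cv2.dnn.readNetFromDarknet("yolov4-heart.cfg", "yolov4-heart.weights")

    frame = cv2.imread("echo_frame.png")            # hypothetical input frame
    blob = cv2.dnn.blobFromImage(preprocess(frame), 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())

    # Each detection row is [cx, cy, w, h, objectness, class scores...].
    for out in outputs:
        for det in out:
            if det[4] > 0.5:                        # objectness threshold
                print("candidate affected region:", det[:4])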
Non-muscle-invasive clear cell carcinoma of the urinary bladder: Is cystectomy necessary?
We report the clinical presentation, histological findings and management of a 49-year-old female patient with non-muscle-invasive clear cell carcinoma of the urinary bladder. In the literature, there are only seven such case reports. We feel that transurethral resection of the bladder tumour followed by close cystoscopic surveillance is a suitable management strategy for non-muscle-invasive clear cell carcinoma of the urinary bladder.
Blank optimization in sheet metal forming using finite element simulation
The present study aims to determine the optimum blank shape design for the deep drawing of arbitrarily shaped cups with a uniform trimming allowance at the flange, i.e. cups without ears. This earing defect is caused by planar anisotropy in the sheet and by friction between the blank and the punch/die. In this research, a new method for optimum blank shape design using finite element analysis is proposed. The explicit non-linear finite element (FE) code LS-DYNA is used to simulate the deep drawing process. FE models are constructed incorporating the exact physical conditions of the process, such as tooling geometry (die profile radius, punch corner radius, etc.), material, coefficient of friction, punch speed and blank holder force. The material used for the analysis is mild steel. A quantitative error metric, called shape error, is defined to measure the amount of earing and to compare the deformed shape against the target shape at each stage of the analysis. This metric is then used to decide whether the blank needs to be modified. The cycle is repeated until converged results are achieved, and this iterative design process leads to the optimal blank shape. To verify the proposed method, square-cup and cylindrical-cup examples were investigated; in every case, converged results were achieved after a few iterations. The proposed systematic method of optimal blank design is thus found to be very effective for the deep drawing process and can be further applied to other stamping applications.
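The abstract describes the outer loop (simulate, measure shape error, modify the blank, repeat) without giving its update rule. Below is a minimal sketch of that loop, assuming an RMS radial error metric and a simple proportional push-back of each contour point; run_fe_simulation is a hypothetical wrapper around the FE solver (LS-DYNA in the paper), and the tolerance is illustrative.

    import numpy as np

    def shape_error(deformed_r, target_r):
        # RMS radial deviation between deformed flange contour and target contour.
        return np.sqrt(np.mean((deformed_r - target_r) ** 2))

    def optimize_blank(blank_r, target_r, run_fe_simulation, tol=0.1, max_iter=10):
        # blank_r, target_r: contour radii sampled at fixed angular positions.
        # run_fe_simulation: maps a blank contour to the deformed flange contour.
        for it in range(max_iter):
            deformed_r = run_fe_simulation(blank_r)
            err = shape_error(deformed_r, target_r)
            print(f"iteration {it}: shape error = {err:.3f} mm")
            if err < tol:
                break
            # Push each contour point opposite to its local overshoot.
            blank_r = blank_r - (deformed_r - target_r)
        return blank_r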
Isolation, identification and antibiotic sensitivity pattern of aerobic bacteria from burn wound patients admitted to a tertiary care hospital
Background: Significant burn injuries induce a state of immunosuppression that predisposes patients to infectious complications, so the rate of nosocomial infection is high. Rapidly emerging multidrug resistance among the various isolates in admitted burn patients, which increases with length of stay, is becoming a serious threat to therapeutic management. The objective of this study was to determine the aetiological factors, prevalence, antimicrobial susceptibility patterns and emerging nosocomial pathogens.

Methods: A prospective study was carried out in the burn ward of K.L.E.'s Dr. Prabhakar Kore Hospital and Medical Research Centre, Belgaum, over a period of 1 year. Pairs of wound swabs were collected on the 3rd day of stay from patients with burns covering more than 30% of body surface area (rule of nines). Samples were collected aseptically from 30 patients, processed by conventional culture and biochemical identification procedures, and tested against commonly used antibiotics.

Results: 30 patients who met the inclusion criteria were enrolled in the study. The total burn surface area (TBSA) ranged from 30-82%. The female-to-male ratio of patients suffering burn wounds was 1.5:1. The aetiology of the burns was mostly heat (moist/dry). By degree of burn, most patients suffered 2nd degree (superficial to deep) injury. From the 30 swab cultures, 42 isolates were identified, of which 66.66% were mixed and one was a fungus. The most commonly isolated organism was Pseudomonas aeruginosa (45.24%), followed by Klebsiella pneumoniae (19.04%), Acinetobacter spp. (14.28%) and Staphylococcus aureus (11.90%). Gram-positive isolates were found to be most resistant to erythromycin (100%) and co-trimoxazole (100%) and most sensitive to vancomycin (71.42%). Gram-negative isolates were found to be most resistant to gentamicin (91.65%), ciprofloxacin (82.35%) and ceftazidime (82.35%) and most sensitive to meropenem (52.95%), piperacillin (35.30%) and carbenicillin (29.41%).

Conclusions: Pseudomonas aeruginosa was found to be the most common isolate. The nature of microbial wound colonization, and the change in flora over time, should be taken into consideration in empirical antimicrobial therapy.
An experimental study of cost cognizant test case prioritization
Test case prioritization techniques schedule test cases for regression testing in an order that increases their ability to meet some performance goal. One performance goal, rate of fault detection, measures how quickly faults are detected within the testing process. The APFD metric has been proposed for measuring the rate of fault detection. This metric applies, however, only in cases in which test costs and fault costs are uniform. In practice, fault costs and test costs are not uniform. For example, some faults which lead to system failures might be more costly than faults which lead to minor errors. Similarly, a test case that runs for several hours is much more costly than a test case that runs for a few seconds. Previous work has thus provided a second metric, APFD_c, for measuring rate of fault detection, which incorporates test costs and fault costs. However, studies of this metric thus far have been limited to abstract distribution models of costs; these distribution models did not represent actual fault costs and test costs for software systems. In this thesis, we describe some practical ways to estimate real fault costs and test costs for software systems, based on operational profiles and test execution timings. Further, we define new cost-cognizant prioritization techniques which focus on the APFD_c metric. We report results of an empirical study investigating the rate of "units-of-fault-cost-detected-per-unit-test-cost" across various cost-cognizant prioritization techniques, and the tradeoffs between them. The results of our empirical study indicate that cost-cognizant test case prioritization techniques can substantially improve the rate of fault detection of test suites. The results also provide insights into the tradeoffs among the techniques. For example: (1) techniques incorporating feedback information (information from previous tests) outperformed those without any feedback information; (2) technique effectiveness differed most when faults were relatively difficult to detect; (3) in most cases, technique performance was similar at function and statement level; (4) surprisingly, techniques considering change location did not perform as well as expected. The study also reveals several practical issues that might arise in applying test case prioritization, as well as opportunities for future work.
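The APFD_c metric referenced above comes from prior work by Elbaum, Malishevsky and Rothermel: each fault i is weighted by its cost f_i and credited with the test cost remaining in the suite when the first detecting test TF_i runs. A small sketch of the computation; the toy costs in the example are invented.

    def apfd_c(test_costs, fault_costs, first_detecting_test):
        # test_costs: cost t_j of each test, in prioritized order (index 0 runs first).
        # fault_costs: cost/severity f_i of each fault.
        # first_detecting_test: for each fault i, the 0-based position of the
        #     first test in the prioritized order that detects it.
        total_test_cost = sum(test_costs)
        total_fault_cost = sum(fault_costs)
        numerator = 0.0
        for f, tf in zip(fault_costs, first_detecting_test):
            remaining = sum(test_costs[tf:])          # sum of t_j for j >= TF_i
            numerator += f * (remaining - 0.5 * test_costs[tf])
        return numerator / (total_test_cost * total_fault_cost)

    # Toy example: three tests, two faults.
    print(apfd_c(test_costs=[4.0, 1.0, 2.0],
                 fault_costs=[3.0, 1.0],
                 first_detecting_test=[0, 2]))        # about 0.571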
Multi-processor Scheduling to Minimize Flow Time with epsilon Resource Augmentation
We investigate the problem of online scheduling of jobs to minimize flow time and stretch on m identical machines. We consider the case where the algorithm is given either (1 + ε)m machines or m machines of speed (1 + ε), for arbitrarily small ε > 0. We show that simple randomized and deterministic load balancing algorithms, coupled with simple single-machine scheduling strategies such as SRPT (shortest remaining processing time) and SJF (shortest job first), are O(poly(1/ε))-competitive for both flow time and stretch. These are the first results which prove constant-factor competitive ratios for flow time or stretch with arbitrarily small resource augmentation. Both the randomized and the deterministic load balancing algorithms are non-migratory and do immediate dispatch of jobs.

The randomized algorithm just allocates each incoming job to a random machine. Hence this algorithm is non-clairvoyant, and, coupled with SETF (shortest elapsed time first), yields the first non-clairvoyant algorithm which is constant-competitive for minimizing flow time with arbitrarily small resource augmentation.

The deterministic algorithm that we analyze is due to Avrahami and Azar. For this algorithm, we show O(1/ε)-competitiveness for total flow time and stretch, and also for their L_p norms, for any fixed p ≥ 1.
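The randomized policy is simple enough to simulate directly: immediately dispatch each arriving job to a uniformly random machine, and let each machine run SRPT on its own queue. A minimal unit-time sketch with invented job data; it measures total flow time only, not the competitive ratio.

    import heapq
    import random

    def simulate(jobs, m, seed=0):
        # jobs: list of (arrival_time, processing_time) with integer times.
        # Returns total flow time (completion minus arrival, summed over jobs).
        rng = random.Random(seed)
        queues = [[] for _ in range(m)]        # per-machine min-heaps
        pending = sorted(jobs)                  # sorted by arrival time
        t, i, done, flow = 0, 0, 0, 0
        while done < len(jobs):
            # Immediate dispatch: each arrival goes to a uniformly random machine.
            while i < len(pending) and pending[i][0] <= t:
                arr, p = pending[i]
                heapq.heappush(queues[rng.randrange(m)], [p, arr])
                i += 1
            for q in queues:
                if q:
                    q[0][0] -= 1                # SRPT: one unit on the smallest
                    if q[0][0] == 0:            # remaining job; root stays minimal
                        _, arr = heapq.heappop(q)
                        flow += (t + 1) - arr
                        done += 1
            t += 1
        return flow

    print(simulate([(0, 3), (0, 1), (1, 2), (2, 2)], m=2))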