77 research outputs found

    Focus on Coronary Atherosclerosis

    Get PDF
    Atherosclerosis is a vascular disorder in which the arteries thicken and lose their elasticity. As a result, the arteries become narrowed and hardened owing to an excessive buildup of plaque within the artery wall. The disease disrupts the flow of blood around the body, posing serious cardiovascular complications. Arteries are lined by the endothelium, a thin layer of cells that keeps the artery smooth and allows blood to flow easily. Endothelial damage is the first step of atherosclerosis. Low-density lipoprotein (LDL) cholesterol then accumulates in the artery wall. This accumulation triggers an inflammatory process, and macrophages reach the endothelium to clear the cholesterol, but some of them become trapped in the affected part of the artery wall. Over time, a plaque consisting of cholesterol and macrophage white blood cells builds up. The plaque clogs the artery and disrupts the flow of blood, potentially causing blood clots that can result in life-threatening conditions such as heart attack and other cardiovascular diseases. Atherosclerosis can occur in any artery of the body and is the most common cause of death in Western countries. Risk factors include age, sex, familial predisposition, hyperlipidemia, hypertension, diabetes mellitus, smoking, obesity, and insufficient physical activity. Whatever the underlying cause or risk factor, once atherosclerosis has formed, several life-threatening cardiovascular disorders can follow, so it has to be revealed

    Compressive Sensing Using Iterative Hard Thresholding with Low Precision Data Representation: Theory and Applications

    Full text link
    Modern scientific instruments produce vast amounts of data, which can overwhelm the processing ability of computer systems. Lossy compression of the data is an intriguing solution, but it comes with its own drawbacks, such as potential signal loss and the need for careful optimization of the compression ratio. In this work, we focus on a setting where this problem is especially acute: compressive sensing frameworks for interferometry and medical imaging. We ask the following question: can the precision of the data representation be lowered for all inputs, with recovery guarantees and practical performance? Our first contribution is a theoretical analysis of the normalized Iterative Hard Thresholding (IHT) algorithm when all input data, meaning both the measurement matrix and the observation vector, are quantized aggressively. We present a variant of low-precision normalized IHT that, under mild conditions, can still provide recovery guarantees. The second contribution is the application of our quantization framework to radio astronomy and magnetic resonance imaging. We show that lowering the precision of the data can significantly accelerate image recovery. We evaluate our approach on telescope data and samples of brain images using CPU and FPGA implementations, achieving up to a 9x speed-up with negligible loss of recovery quality. Comment: 19 pages, 5 figures, 1 table; in IEEE Transactions on Signal Processing
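
    As a rough illustration of the low-precision recovery idea, the sketch below runs plain normalized IHT after aggressively quantizing both the measurement matrix and the observation vector. The helper names (quantize, hard_threshold, normalized_iht), the 4-bit setting, and the toy problem sizes are illustrative assumptions, not the paper's algorithm variant or its FPGA implementation.

        import numpy as np

        def quantize(x, bits):
            # Uniform symmetric quantization to the given bit width (illustrative).
            scale = np.max(np.abs(x)) / (2 ** (bits - 1) - 1)
            return np.round(x / scale) * scale

        def hard_threshold(x, k):
            # Keep the k largest-magnitude entries, zero out the rest.
            out = np.zeros_like(x)
            idx = np.argsort(np.abs(x))[-k:]
            out[idx] = x[idx]
            return out

        def normalized_iht(A, y, k, iters=100):
            # Normalized IHT: adaptive step size computed on the current support.
            x = np.zeros(A.shape[1])
            for _ in range(iters):
                g = A.T @ (y - A @ x)
                S = np.nonzero(x)[0] if np.any(x) else np.argsort(np.abs(g))[-k:]
                mu = g[S] @ g[S] / (np.linalg.norm(A[:, S] @ g[S]) ** 2 + 1e-12)
                x = hard_threshold(x + mu * g, k)
            return x

        # Quantize A and y once (hypothetical 4-bit setting), then recover as usual.
        rng = np.random.default_rng(0)
        n, d, k, bits = 80, 200, 5, 4
        A = rng.standard_normal((n, d)) / np.sqrt(n)
        x_true = np.zeros(d)
        x_true[rng.choice(d, k, replace=False)] = rng.standard_normal(k)
        y = A @ x_true
        x_hat = normalized_iht(quantize(A, bits), quantize(y, bits), k)
        print("relative recovery error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))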

    Layerwise Systematic Scan: Deep Boltzmann Machines and Beyond

    Get PDF
    For Markov chain Monte Carlo methods, one of the greatest discrepancies between theory and practice is the scan order: while most theoretical development on mixing time analysis deals with random updates, real-world systems are implemented with systematic scans. We bridge this gap for models that exhibit a bipartite structure, including, most notably, the Restricted/Deep Boltzmann Machine. The de facto implementation for these models scans variables in a layerwise fashion. We show that the Gibbs sampler with a layerwise alternating scan order has a relaxation time (in terms of epochs) no larger than that of a random-update Gibbs sampler (in terms of variable updates). We also construct examples to show that this bound is asymptotically tight. Through standard inequalities, our result also implies a comparison of the mixing times. Comment: v2: typo fixes and improved presentation
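
    To make the scan-order comparison concrete, the sketch below contrasts a layerwise alternating (block) Gibbs scan with a random-update Gibbs sampler on a small binary Restricted Boltzmann Machine. The function names and toy dimensions are assumptions chosen for illustration; the code is not taken from the paper and does not measure relaxation or mixing times.

        import numpy as np

        rng = np.random.default_rng(0)

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        def layerwise_gibbs(W, b, c, epochs=1000):
            # Layerwise alternating scan: each epoch resamples the whole hidden
            # layer given the visible layer, then the whole visible layer given
            # the hidden layer (block Gibbs).
            n_v, n_h = W.shape
            v = rng.integers(0, 2, n_v)
            h = rng.integers(0, 2, n_h)
            for _ in range(epochs):
                h = (rng.random(n_h) < sigmoid(v @ W + c)).astype(int)  # hidden | visible
                v = (rng.random(n_v) < sigmoid(W @ h + b)).astype(int)  # visible | hidden
            return v, h

        def random_update_gibbs(W, b, c, steps=1000):
            # Random updates: each step resamples one uniformly chosen variable.
            n_v, n_h = W.shape
            v = rng.integers(0, 2, n_v)
            h = rng.integers(0, 2, n_h)
            for _ in range(steps):
                i = rng.integers(0, n_v + n_h)
                if i < n_v:
                    v[i] = int(rng.random() < sigmoid(W[i] @ h + b[i]))
                else:
                    j = i - n_v
                    h[j] = int(rng.random() < sigmoid(v @ W[:, j] + c[j]))
            return v, h

        n_v, n_h = 6, 4
        W = 0.5 * rng.standard_normal((n_v, n_h))
        b, c = np.zeros(n_v), np.zeros(n_h)
        print(layerwise_gibbs(W, b, c)[0], random_update_gibbs(W, b, c)[0])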

    Accelerating Generalized Linear Models with MLWeaving: A One-Size-Fits-All System for Any-precision Learning (Technical Report)

    Full text link
    Learning from the data stored in a database is an important function increasingly available in relational engines. Methods using lower-precision input data are of special interest given their overall higher efficiency, but, in databases, these methods have a hidden cost: quantizing real values into a smaller number of bits is an expensive step. To address this issue, we present MLWeaving, a data structure and hardware acceleration technique intended to speed up the learning of generalized linear models in databases. MLWeaving provides a compact, in-memory representation that enables the retrieval of data at any level of precision. MLWeaving also takes advantage of the increasing availability of FPGA-based accelerators to provide a highly efficient implementation of stochastic gradient descent. The solution adopted in MLWeaving is more efficient than existing designs in terms of space (since it can process any resolution on the same design) and resources (via the use of bit-serial multipliers). MLWeaving also enables the runtime tuning of precision, instead of fixing the precision level for the whole training run; we illustrate this using a simple, dynamic precision schedule. Experimental results show that MLWeaving achieves up to a 16x performance improvement over low-precision CPU implementations of first-order methods. Comment: 18 pages
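
    The core data-structure idea can be sketched as follows: store the dataset as bit planes, most significant bits first, so that reading only the first p planes yields a p-bit approximation of every feature without touching the remaining memory. The weave/read helpers and the 8-bit base precision below are assumptions made for illustration; they are not MLWeaving's actual memory layout, bit-serial arithmetic, or FPGA pipeline.

        import numpy as np

        BITS = 8  # full stored precision of each fixed-point feature (assumed)

        def weave(X):
            # Store the data as bit planes: plane k holds bit k (MSB first) of
            # every value, so the first p planes alone give a p-bit approximation.
            Xq = np.clip((X * (2 ** BITS - 1)).astype(np.uint32), 0, 2 ** BITS - 1)
            return np.stack([(Xq >> (BITS - 1 - k)) & 1 for k in range(BITS)])

        def read(planes, precision):
            # Reconstruct the dataset from only the first `precision` bit planes.
            acc = np.zeros(planes.shape[1:])
            for k in range(precision):
                acc += planes[k] * 2.0 ** (BITS - 1 - k)
            return acc / (2 ** BITS - 1)

        rng = np.random.default_rng(0)
        X = rng.random((4, 3))            # features already scaled to [0, 1]
        planes = weave(X)
        for p in (1, 2, 4, 8):            # same stored data, different read precision
            print(p, "bits, max abs error:", np.max(np.abs(read(planes, p) - X)))

    Reading fewer planes trades accuracy for memory traffic, which is the property a runtime precision schedule can exploit.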

    PMLR Press

    Get PDF
    Recently there has been significant interest in training machine-learning models at low precision: by reducing precision, one can reduce computation and communication by an order of magnitude. We examine training at reduced precision, both from a theoretical and a practical perspective, and ask: is it possible to train models at end-to-end low precision with provable guarantees? Can this lead to consistent order-of-magnitude speedups? We mainly focus on linear models, for which the answer is yes. We develop a simple framework called ZipML, based on one simple but novel strategy called double sampling. Our ZipML framework is able to execute training at low precision with no bias, guaranteeing convergence, whereas naive quantization would introduce significant bias. We validate our framework across a range of applications, and show that it enables an FPGA prototype that is up to 6.5x faster than an implementation using full 32-bit precision. We further develop a variance-optimal stochastic quantization strategy and show that it can make a significant difference in a variety of settings. When applied to linear models together with double sampling, we save up to another 1.7x in data movement compared with uniform quantization. When training deep networks with quantized models, we achieve higher accuracy than the state-of-the-art XNOR-Net.
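
    A minimal sketch of the double-sampling idea, assuming a least-squares objective: each SGD step draws two independent stochastic (unbiased) quantizations of the same sample, so that the quadratic term in the gradient remains unbiased, whereas reusing one quantized copy twice would bias it. The function names, the 2-bit (four-level) grid, and the toy regression are illustrative assumptions rather than the ZipML implementation.

        import numpy as np

        rng = np.random.default_rng(0)

        def stoch_quantize(x, levels=4, scale=1.0):
            # Unbiased stochastic quantization onto a uniform grid in [-scale, scale]:
            # round up with probability equal to the fractional position, so E[Q(x)] = x.
            step = 2.0 * scale / (levels - 1)
            z = (x + scale) / step
            lo = np.floor(z)
            q = lo + (rng.random(x.shape) < (z - lo))
            return np.clip(q, 0, levels - 1) * step - scale

        def sgd_step(a, b, x, lr, levels=4):
            # Least-squares SGD step with double sampling: two independent
            # quantizations of the sample keep E[q1 * (q2 @ x)] = a * (a @ x),
            # i.e. the gradient estimate is unbiased; reusing q1 twice would not be.
            q1 = stoch_quantize(a, levels)
            q2 = stoch_quantize(a, levels)
            return x - lr * q1 * (q2 @ x - b)

        # Toy linear regression with 2-bit (four-level) quantized samples.
        d = 10
        x_true = rng.standard_normal(d)
        x = np.zeros(d)
        for _ in range(20000):
            a = np.clip(rng.standard_normal(d), -1.0, 1.0)
            x = sgd_step(a, a @ x_true, x, lr=0.01)
        print("parameter error:", np.linalg.norm(x - x_true))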

    Nonodontogenic mandibular lesions: differentiation based on CT attenuation

    Get PDF
    Mandibular lesions are classified as odontogenic or nonodontogenic based on the cell of origin. Odontogenic lesions are frequently encountered in head and neck imaging. However, several nonodontogenic pathologies may also involve the mandible and present a diagnostic dilemma. Awareness of the imaging features of nonodontogenic lesions is crucial in order to guide clinicians in proper patient management. Computed tomography (CT) may provide key information to narrow the diagnostic considerations. Nonodontogenic mandibular lesions may have lytic, sclerotic, ground-glass, or mixed lytic and sclerotic appearances on CT. In this article, our aim is to present various nonodontogenic lesions of the mandible by categorizing them according to their attenuation on CT.

    COVID-19 in pediatric nephrology centers in Turkey

    Get PDF
    Background/aim: There are limited data on COVID-19 in children with kidney disease. We aimed to investigate the characteristics and prognosis of COVID-19 in pediatric nephrology patients in Turkey. Materials and methods: This was a national, multicenter, retrospective cohort study based on an online survey evaluating data collected between 11th March 2020 and 11th March 2021, as an initial step of a detailed pediatric nephrology COVID-19 registry. Results: Two hundred and three patients (89 girls and 114 boys) were diagnosed with COVID-19. About one-third of these patients (36.9%) were between 10 and 15 years old. Half of the patients were on kidney replacement therapy: kidney transplant (KTx) recipients (n = 56, 27.5%), patients receiving chronic hemodialysis (HD) (n = 33, 16.3%), and those on peritoneal dialysis (PD) (n = 18, 8.9%). Fifty-four (26.6%) children were asymptomatic. Eighty-two (40.3%) patients were hospitalized, and 23 (28%) of these needed intensive care unit admission. Fifty-five percent of the patients received no treatment, while the remainder were given favipiravir (20.7%), steroids (16.3%), or hydroxychloroquine (11.3%). Acute kidney injury developed in 19.5% of hospitalized patients. Five (2.4%) had MIS-C. Eighty-three percent of the patients were discharged without any apparent sequelae, while 7 (3.4%) died. One hundred and eight health care staff were infected during the study period. Conclusion: COVID-19 was most commonly seen in patients who had undergone KTx or were receiving HD. Combined immunosuppressive therapy and frequent exposure to the hospital setting may increase these patients' susceptibility. Staff infections before the vaccination era were alarming; various precautions should be taken for infection control, particularly optimal vaccination coverage.

    Second Language Processing Shows Increased Native-Like Neural Responses after Months of No Exposure

    Get PDF
    Although learning a second language (L2) as an adult is notoriously difficult, research has shown that adults can indeed attain native language-like brain processing and high proficiency levels. However, it is important to then retain what has been attained, even in the absence of continued exposure to the L2, particularly since periods of minimal or no L2 exposure are common. This event-related potential (ERP) study of an artificial language tested performance and neural processing following a substantial period of no exposure. Adults learned to speak and comprehend the artificial language to high proficiency with either explicit, classroom-like, or implicit, immersion-like training, and then underwent several months of no exposure to the language. Surprisingly, proficiency did not decrease during this delay. Instead, it remained unchanged, and there was an increase in native-like neural processing of syntax, as evidenced by several ERP changes, including earlier, more reliable, and more left-lateralized anterior negativities, and more robust P600s, in response to word-order violations. Moreover, both the explicitly and implicitly trained groups showed increased native-like ERP patterns over the delay, indicating that such changes can hold independently of L2 training type. The results demonstrate that substantial periods with no L2 exposure are not necessarily detrimental. Rather, benefits may ensue from such periods even when there is no L2 exposure. Interestingly, both before and after the delay, the implicitly trained group showed more native-like processing than the explicitly trained group, indicating that the type of training also affects the attainment of native-like processing in the brain. Overall, the findings may be largely explained by a combination of forgetting and consolidation in declarative and procedural memory, on which L2 grammar learning appears to depend. The study has a range of implications and suggests a research program with potentially important consequences for second language acquisition and related fields.