62 research outputs found

    Structure Characterization of Some Snake Venom Proteins as Targeted Therapeutics

    Get PDF
    Introduction: Snake venom (SV) is a rich source of proteins, many of which are used for their toxicity in the treatment of diseases such as cancer. On the other hand, toxic agents such as immunotoxins have been investigated as a possible targeted therapy for cancer. Immunotoxins are conjugated proteins comprising a toxin, such as a Ribosome Inactivating Protein (RIP), along with an antibody or cytokine that binds specifically to target cells. In our earlier study, we suggested using toxins derived from snake venom as the toxic moiety in immunotoxins. Methods and Results: In our earlier report, we structurally compared snake venom proteins (SVPs) with RIPs and suggested SVPs as anticancer agents in immunotoxin therapy. In this study, we selected LAAO, SVMP, disintegrin, PLA2, CVF and CRISP and compared these proteins with each other. We used the UniProt and PDB databases to obtain their sequence and function data. Their structures were constructed with the Phyre2 server and then compared to other similar peptides. We demonstrated that most of these proteins have low molecular weight, and all of them contain several cysteines and are able to form disulfide bonds. Conclusions: Novel therapeutics are essential to treat cancer. SVPs appear to be among the best candidates due to their toxic characteristics; some SVPs, such as PLA2 and CRISP, are smaller than the others and have the most disulfide bonds.
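The cysteine/disulfide observation above can be illustrated with a minimal sketch: counting cysteine residues in an amino-acid sequence gives an upper bound on the disulfide bonds a protein can form (each bond pairs two cysteines). The example sequence below is a made-up placeholder, not a real snake venom protein.

```python
# Hypothetical sketch: count cysteine residues ("C") in a protein sequence
# and the maximum number of disulfide bonds they could form.
def cysteine_stats(seq: str):
    n_cys = seq.upper().count("C")
    max_disulfide_bonds = n_cys // 2  # each bond pairs two cysteines
    return n_cys, max_disulfide_bonds

example_seq = "MKTCCALLVCAFACGQCDDRNPLEECFR"  # illustrative placeholder sequence
print(cysteine_stats(example_seq))  # (6, 3)
```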

    Computing the Similarity Estimate Using Approximate Memory

    Get PDF
    In many computing applications there is a need to compute the similarity of sets of elements. When the sets have many elements or the comparison involves many sets, computing the similarity requires significant computational effort and storage capacity. Since in most cases a reasonably accurate estimate is sufficient, many algorithms for similarity estimation have been proposed over the last decades. Those algorithms compute signatures for the sets and use them to estimate similarity. However, as the number of sets that need to be compared grows, even these similarity estimation algorithms require significant memory, with its associated power dissipation. This article considers, for the first time, the use of approximate memories for similarity estimation. A theoretical analysis and simulation results are provided; initially it is shown that similarity sketches can tolerate large bit error rates and thus can benefit from approximate memories without substantially compromising the accuracy of the similarity estimate. An understanding of the effect of errors in the stored signatures on the similarity estimate is pursued. A scheme to mitigate the impact of errors is presented; the proposed scheme tolerates even larger bit error rates and does not need additional memory. For example, bit error rates of up to 10^-4 have less than a 1% impact on the accuracy of the estimate when the memory is unprotected, and larger bit error rates can be tolerated if the memory is parity protected. These findings can be used for voltage supply scaling and for increasing the refresh time in SRAMs and DRAMs. Based on those initial results, an enhanced implementation is further proposed for unprotected memories that further extends the range of tolerated BERs and enables power savings of up to 61.31% for SRAMs.
    In conclusion, this article shows that the use of approximate memories in sketches for similarity estimation provides significant benefits with a negligible impact on accuracy. This work was supported by ACHILLES project PID2019-104207RB-I00 and Go2Edge network RED2018-102585-T funded by the Spanish Agencia Estatal de Investigación (AEI) 10.13039/501100011033 and by the Madrid Community research project TAPIR-CM under Grant P2018/TCS-4496. The research of S. Liu and F. Lombardi was supported by NSF under Grants CCF-1953961 and 1812467.
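The kind of signature-based similarity estimation the abstract refers to can be sketched with minwise hashing: each set is reduced to a signature of per-hash minima, and the fraction of matching signature components estimates the Jaccard similarity. The hash family, signature length, and bit-error model below are illustrative assumptions, not the paper's exact setup.

```python
import random

# Sketch of similarity estimation with minwise-hashing signatures, plus a
# simple model of bit errors in the memory storing the signatures.
def minhash_signature(items, num_hashes=128, seed=0):
    rng = random.Random(seed)
    p = (1 << 31) - 1  # modulus for random affine hash functions h(x) = (a*x + b) mod p
    funcs = [(rng.randrange(1, p), rng.randrange(p)) for _ in range(num_hashes)]
    return [min((a * x + b) % p for x in items) for a, b in funcs]

def estimate_jaccard(sig1, sig2):
    # fraction of matching components estimates |A ∩ B| / |A ∪ B|
    return sum(s1 == s2 for s1, s2 in zip(sig1, sig2)) / len(sig1)

def inject_bit_errors(sig, ber, bits=32, seed=1):
    # flip each stored bit independently with probability ber
    rng = random.Random(seed)
    noisy = []
    for v in sig:
        for b in range(bits):
            if rng.random() < ber:
                v ^= 1 << b
        noisy.append(v)
    return noisy

sigA = minhash_signature(set(range(1000)))
sigB = minhash_signature(set(range(500, 1500)))
print(estimate_jaccard(sigA, sigB))  # close to the true Jaccard of 1/3
```

A flipped bit turns a matching component into a mismatch, so errors bias the estimate downward; this is the effect the article analyzes and mitigates.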

    Concurrent Classifier Error Detection (CCED) in Large Scale Machine Learning Systems

    Full text link
    The complexity of Machine Learning (ML) systems increases each year, with current implementations of large language models or text-to-image generators having billions of parameters and requiring billions of arithmetic operations. As these systems are widely utilized, ensuring their reliable operation is becoming a design requirement. Traditional error detection mechanisms introduce circuit or time redundancy that significantly impacts system performance. An alternative is the use of Concurrent Error Detection (CED) schemes that operate in parallel with the system and exploit its properties to detect errors. CED is attractive for large ML systems because it can potentially reduce the cost of error detection. In this paper, we introduce Concurrent Classifier Error Detection (CCED), a scheme to implement CED in ML systems using a concurrent ML classifier to detect errors. CCED identifies a set of check signals in the main ML system and feeds them to the concurrent ML classifier, which is trained to detect errors. The proposed CCED scheme has been implemented and evaluated on two widely used large-scale ML models: Contrastive Language Image Pretraining (CLIP), used for image classification, and Bidirectional Encoder Representations from Transformers (BERT), used for natural language applications. The results show that more than 95 percent of the errors are detected when using a simple Random Forest classifier that is orders of magnitude simpler than CLIP or BERT. These results illustrate the potential of CCED to implement error detection in large-scale ML models.
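The CCED idea can be illustrated with a toy sketch: a cheap concurrent classifier watches a few "check signals" derived from the main model's computation and flags inferences whose signals look anomalous. The choice of signals (mean and peak activation) and the threshold-based detector standing in for the paper's Random Forest are illustrative assumptions.

```python
# Conceptual sketch of Concurrent Classifier Error Detection (CCED): a tiny
# detector monitors summary statistics of the main model's activations.
def check_signals(activations):
    mean = sum(activations) / len(activations)
    peak = max(abs(a) for a in activations)
    return mean, peak

def concurrent_detector(signals, mean_bound=10.0, peak_bound=100.0):
    # stand-in for a trained classifier: flag an error when any check signal
    # leaves the range observed during error-free operation
    mean, peak = signals
    return abs(mean) > mean_bound or peak > peak_bound

healthy = [0.2, -0.5, 1.3, 0.7]
corrupted = [0.2, -0.5, 1e6, 0.7]  # e.g. a bit flip in an exponent field
print(concurrent_detector(check_signals(healthy)))    # False
print(concurrent_detector(check_signals(corrupted)))  # True
```

The point of the scheme is that the detector's cost is negligible next to the main model, so error detection adds little overhead.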

    Stochastic dividers for low latency neural networks

    Get PDF
    Due to the low complexity of its arithmetic unit designs, stochastic computing (SC) has attracted considerable interest for implementing Artificial Neural Networks (ANNs) in resource-limited applications, because ANNs must usually perform a large number of arithmetic operations. To attain a high computation accuracy in an SC-based ANN, extended stochastic logic is utilized together with standard SC units, and thus a stochastic divider is required to perform the conversion between these logic representations. However, the conventional divider incurs a large computation latency, which limits SC implementations of ANNs in applications needing high performance. Therefore, there is a need to design fast stochastic dividers for SC-based ANNs. Recent works (e.g., a binary searching and triple modular redundancy (BS-TMR) based stochastic divider) target a reduction in computation latency while keeping the same accuracy as the traditional design. However, this divider still requires N iterations to deal with 2^N-bit stochastic sequences, and thus the latency increases in proportion to the sequence length. In this paper, a decimal searching and TMR (DS-TMR) based stochastic divider is initially proposed to further reduce the computation latency; it requires only two iterations to calculate the quotient, regardless of the sequence length. Moreover, a trade-off design between accuracy and hardware is also presented. An SC-based Multi-Layer Perceptron (MLP) is then considered to show the effectiveness of the proposed dividers over current designs. Results show that when utilizing the proposed dividers, the MLP achieves the lowest computation latency while keeping the same classification accuracy; although they incur an area increase, the overhead due to the proposed dividers is low over the entire MLP.
    When the product of implementation area, latency, power and number of clock cycles is used as a combined metric for both hardware design and computation complexity, the proposed designs are also shown to be superior to SC-based MLPs (at the same level of accuracy) employing other dividers found in the technical literature, as well as to the commonly used 32-bit floating-point implementation. The work of Shanshan Liu, Farzad Niknia, and Fabrizio Lombardi was supported by the NSF Grant CCF-1953961 and Grant 1812467. The work of Pedro Reviriego was supported in part by the Spanish Ministry of Science and Innovation under project ACHILLES (Grant PID2019-104207RB-I00) and the Go2Edge Network (Grant RED2018-102585-T), and in part by the Madrid Community Research Agency under Grant TAPIR-CM P2018/TCS-4496. The work of Weiqiang Liu was supported by the NSFC under Grant 62022041 and Grant 61871216. The work of Ahmed Louri was supported by the NSF Grant CCF-1812495 and Grant 1953980.
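The search-based division principle behind the BS-TMR divider can be sketched in software: a unipolar stochastic stream encodes a value as its fraction of 1s, multiplication of independent streams is a bitwise AND, and division A/B is realized by searching for the q whose product with B matches A. This is a behavioral illustration of binary-search stochastic division, not the paper's DS-TMR circuit, and the stream length and seeds are arbitrary assumptions.

```python
import random

# Behavioral sketch of search-based stochastic division with unipolar streams.
def encode(p, length=4096, seed=0):
    # a stream of `length` bits whose fraction of 1s approximates p
    rng = random.Random(seed)
    return [1 if rng.random() < p else 0 for _ in range(length)]

def value(stream):
    return sum(stream) / len(stream)

def stochastic_divide(a, b, length=4096, iters=16):
    sa = value(encode(a, length, seed=1))
    lo, hi = 0.0, 1.0
    for _ in range(iters):  # binary search on the quotient estimate
        q = (lo + hi) / 2
        # AND of independent streams multiplies their encoded values: q * b
        prod = [x & y for x, y in zip(encode(q, length, seed=2),
                                      encode(b, length, seed=3))]
        if value(prod) < sa:
            lo = q
        else:
            hi = q
    return (lo + hi) / 2

result = stochastic_divide(0.3, 0.6)  # approximately 0.3 / 0.6 = 0.5
```

Each binary-search step above is one iteration; the paper's contribution is reducing this iteration count to two, independent of the sequence length.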

    Investigation of the effects of B16F10 derived exosomes in induction of immunosuppressive phenotype in the hematopoietic stem cells

    Get PDF
    Objective: This study aimed to elucidate the effects of melanoma-derived exosomes on modulating the differentiation of hematopoietic stem cells (HSCs) towards immunosuppressive myeloid-derived suppressor cells (MDSCs). Materials and Methods: Exosomes were isolated via ultracentrifugation from conditioned media of the B16F10 murine melanoma cell line after adaptation to exosome-free culture conditions. HSCs were extracted from the bone marrow of adult C57BL/6 mice through density gradient separation and MACS column isolation of CD133+ and CD34+ populations. HSCs were cultured with or without B16F10 exosomes for 24 hours. Flow cytometry analyzed the expression of the canonical MDSC surface markers CD11b, Ly6G, and Ly6C. Levels of the immunosuppressive cytokines interleukin-10 (IL-10) and transforming growth factor beta (TGF-β) in HSC culture supernatants were quantified by ELISA. Results: Compared to untreated controls, HSCs treated with B16F10 exosomes displayed significantly increased percentages of CD11b+Ly6G+ granulocytic MDSCs and CD11b+Ly6C+ monocytic MDSCs, with a notable predominance of the Ly6G+ granulocytic subtype. Additionally, exosome-treated HSCs secreted markedly higher levels of the cytokines IL-10 and TGF-β, which are involved in MDSC-mediated immunosuppression. Conclusions: Our findings demonstrate that melanoma-derived exosomes can orchestrate the differentiation of HSCs into MDSCs with an immunosuppressive phenotype, as evidenced by the upregulation of MDSC surface markers and secreted cytokines. This supports a role for tumor-derived exosomes in driving the systemic expansion and accumulation of immunosuppressive MDSCs through the reprogramming of HSC fate. Elucidating the exosome contents and HSC signaling pathways involved could reveal therapeutic strategies to block this pathway and enhance anti-tumor immunity.

    Mapping and Valuation Iran MARC (Machine Readable Cataloging) Data Elements with FRBR Entities and User Tasks

    No full text
    This study aimed to map IRANMARC fields and data elements (extracted from RASA, the National Library of Iran's software) onto FRBR entities and user tasks, and to explore users' views in assessing the value of data elements in relation to user tasks. A mixed-methods approach was used. The IRANMARC fields (0XX–9XX) and data elements were mapped onto the FRBR entities and the four user tasks (Find, Identify, Select, Obtain), based on similar studies conducted by the Library of Congress on MARC 21, the IFLA report, and user evaluations in research. According to the findings, the first rank in the mapping of IRANMARC (in RASA) fields and data elements onto FRBR user tasks, with 49.17 percent, belonged to the "Identify" task, and the last, with 18.42 percent, to the "Find" task. The results also show that the maximum compatibility of existing IRANMARC data elements was with the first group of FRBR entities and the lowest with the third group: in total, 44.74 percent of the IRANMARC data elements mapped onto the first group of FRBR entities, 9.95 percent matched the second group, and 3.15 percent the third group. The valuation of MARC data elements supporting the user tasks of first-group entities yielded high values: 434 data elements (27.83%) for "Identify", 425 (27.28%) for "Obtain", 147 (9.44%) for "Select", and 95 (6.10%) for "Find", respectively. The results show partial consistency with the valuation based on this research report.

    Tolerance of Siamese Networks (SNs) to Memory Errors: Analysis and Design

    No full text
    This paper considers memory errors in a Siamese Network (SN) through an extensive analysis and proposes two schemes (using a weight filter and a code) that provide efficient hardware solutions for error tolerance. Initially, the impact of memory errors on the weights of the SN (stored as floating-point (FP) numbers) is analyzed; this shows that the degradation is mostly caused by outliers in the weights. Two schemes are subsequently proposed. An analysis is pursued to establish the filter's bounds, selected from the maximum/minimum values of the weight distributions, by which outliers can be removed from the operation of the SN. A code scheme for protecting the sign and exponent bits of each weight in an FP number is also proposed; this code incurs no memory overhead by utilizing the 4 least significant bits (LSBs) to store parity bits. Simulation shows that the filter performs better for multi-bit error correction (a reduction of 95.288% in changed predictions), while the code achieves superior results for single-bit error correction (a reduction of 99.775% in changed predictions). A combined method that uses the two proposed schemes retains their advantages, making it adaptive to all scenarios. ASIC-based FP designs of the SN using serial and hybrid implementations are also presented; these pipelined designs utilize a novel multi-layer perceptron (MLP) (as the branch networks of the SN) that operates at a frequency of 681.2 MHz (at a 32nm technology node), significantly higher than existing designs found in the technical literature. The proposed error-tolerant approaches also show advantages in overhead compared with, for example, a traditional error correction code (ECC). These error-tolerant MLP-based designs are well suited to hardware/power-constrained platforms.
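The two protection ideas can be sketched in a few lines: (1) a weight filter that zeroes values falling outside bounds derived from the error-free weight distribution, and (2) a parity bit over the sign and exponent bits of an FP32 word, stored in a mantissa LSB so that no extra memory is needed. The bound values, the single parity bit (the paper uses 4 LSBs), and zeroing as the filter action are illustrative assumptions.

```python
import struct

# Sketch of the weight filter and the zero-overhead parity idea for FP32 weights.
def weight_filter(weights, lo=-1.0, hi=1.0):
    # memory errors in exponent bits create huge outliers; drop them
    return [w if lo <= w <= hi else 0.0 for w in weights]

def fp32_bits(x):
    return struct.unpack("<I", struct.pack("<f", x))[0]

def sign_exponent_parity(x):
    # parity over bit 31 (sign) and bits 23..30 (exponent) of the FP32 word
    top9 = fp32_bits(x) >> 23
    return bin(top9).count("1") & 1

def encode_weight(x):
    # overwrite the mantissa LSB with the parity of the protected bits;
    # this perturbs the weight by at most one unit in the last place
    bits = (fp32_bits(x) & ~1) | sign_exponent_parity(x)
    return struct.unpack("<f", struct.pack("<I", bits))[0]

def check_weight(x):
    # True if the stored parity still matches the sign/exponent bits
    return (fp32_bits(x) & 1) == sign_exponent_parity(x)
```

For example, flipping any single sign or exponent bit of an encoded weight makes `check_weight` return False, so the error can be detected without any additional storage.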

    Adaptive Resolution Inference (ARI): Energy Efficient Machine Learning for the Internet of Things

    Full text link
    The implementation of Machine Learning (ML) in Internet of Things (IoT) devices poses significant operational challenges due to limited energy and computation resources. In recent years, significant efforts have been made to implement simplified ML models that can achieve reasonable performance while reducing computation and energy, for example by pruning weights in neural networks or using reduced precision for the parameters and arithmetic operations. However, this type of approach is limited by the performance of the ML implementation, i.e., by the loss, for example in accuracy, due to the model simplification. In this paper, we present Adaptive Resolution Inference (ARI), a novel approach that enables the evaluation of new trade-offs between energy dissipation and model performance in ML implementations. The main principle of the proposed approach is to run inferences with reduced precision (quantization) and use the margin over the decision threshold to determine whether the result is reliable or the inference must be run with the full model. The rationale is that quantization only introduces small deviations in the inference scores, such that if the scores have a sufficient margin over the decision threshold, it is very unlikely that the full model would produce a different result. Therefore, we can run the quantized model first and run the full model only when the scores do not have a sufficient margin. This enables most inferences to run with the reduced-precision model, with only a small fraction requiring the full model, significantly reducing computation and energy while not affecting model performance. The proposed ARI approach is presented, analyzed in detail, and evaluated using different datasets for both floating-point and stochastic computing implementations. The results show that ARI can significantly reduce the energy for inference in different configurations, with savings between 40% and 85%.
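The decision rule described above can be sketched directly: quantize the score, accept the cheap result when its margin over the threshold is large enough, and otherwise fall back to the full-precision model. The quantization step, threshold, and margin values below are placeholder assumptions, and a simple identity function stands in for a real model's score.

```python
# Sketch of the Adaptive Resolution Inference (ARI) decision rule for a
# binary classifier with decision threshold 0.5.
def quantize(x, step=0.25):
    # stand-in for a reduced-precision (cheap) evaluation of the score
    return round(x / step) * step

def ari_inference(score_fn, x, threshold=0.5, margin=0.1):
    q_score = quantize(score_fn(x))           # cheap, reduced-precision pass
    if abs(q_score - threshold) >= margin:
        return q_score >= threshold, "quantized"  # confident: keep cheap result
    return score_fn(x) >= threshold, "full"       # close call: run full model

score = lambda x: x  # placeholder for a model's decision score
print(ari_inference(score, 0.93))  # far from threshold -> (True, 'quantized')
print(ari_inference(score, 0.52))  # near threshold -> (True, 'full')
```

The energy saving comes from the first branch: most inputs produce scores far from the threshold, so the full model runs only for the small fraction of borderline cases.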