
    A comprehensive review on the novel approaches using nanomaterials for the remediation of soil and water pollution

    While urbanisation has numerous advantages, it poses greater risks to the environment and human health because of the release of heavy metals, various organic and inorganic contaminants, personal care products, and pharmaceuticals. Though several actions are taken daily to lessen the release of harmful substances, there is still an immediate need for suitable solutions to protect the environment. Nanotechnology has multifaceted applications, and there is extensive evidence of the emerging applications of nanoremediation, especially for soil and water pollution. Iron nanoparticles, for example, showed outstanding removal efficiency (100%) towards hexavalent chromium. Likewise, several publications on soil and water remediation employ nanomaterials based on metals, carbon, and polymers. However, most previous works present the key nanoremediation results without weighing each nanomaterial's advantages and disadvantages. Hence, this work critically reviews the pros and cons of each nanomaterial, with a special focus on novel approaches using green-synthesised nanomaterials, which are completely eco-friendly and hence preferred for the removal of various contaminants without producing harmful effects. However, some bottlenecks remain in fully implementing green nanoparticles for nanoremediation. Thus, the review discusses the limitations of green nanomaterials that need to be addressed to maintain environmental sustainability. Finally, this review presents opportunities for future work in assessing the eco-safety of each nanomaterial, which would boost the further utilisation of nanotechnology in the sustainable remediation of contaminated soil and water.

    Algorithmic Fairness and Bias in Machine Learning Systems

    In recent years, research into and concern over algorithmic fairness and bias in machine learning systems have grown significantly. Because machine learning algorithms are becoming more and more prevalent in decision-making processes across a variety of disciplines, it is vital to ensure that these systems are fair, impartial, and do not reinforce discrimination or social injustices. This abstract gives a general explanation of the idea of algorithmic fairness, the difficulties posed by bias in machine learning systems, and different solutions to these problems. Algorithmic bias and fairness in machine learning systems are crucial issues that demand the attention of academics, practitioners, and policymakers. Building fair and unbiased machine learning systems that uphold equality and prevent discrimination requires addressing biases in training data, creating fairness-aware algorithms, encouraging transparency and interpretability, and promoting diversity and inclusivity.
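
    As a concrete illustration of the fairness-aware evaluation this abstract alludes to, the following is a minimal sketch (not from the paper) of computing the demographic parity difference and disparate impact ratio of a binary classifier across two groups; the predictions, group labels, and 0.8 rule of thumb are illustrative assumptions.

```python
import numpy as np

def demographic_parity(y_pred, group):
    """Compare positive-prediction rates between two groups (coded 0 and 1)."""
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_g0 = y_pred[group == 0].mean()  # P(y_hat = 1 | group 0)
    rate_g1 = y_pred[group == 1].mean()  # P(y_hat = 1 | group 1)
    return {
        "parity_difference": rate_g1 - rate_g0,  # 0 means perfect parity
        "disparate_impact": rate_g1 / rate_g0,   # common rule of thumb: >= 0.8
    }

# Illustrative usage with made-up predictions and group labels
print(demographic_parity([1, 0, 1, 1, 0, 0, 1, 0], [0, 0, 0, 0, 1, 1, 1, 1]))
```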

    PTEN in triple-negative breast carcinoma: protein expression and genomic alteration in pretreatment and posttreatment specimens

    Background: Recent advances have been made in targeting the phosphoinositide 3-kinase pathway in breast cancer. Phosphatase and tensin homolog (PTEN) is a key component of that pathway. Objective: To understand the changes in PTEN expression over the course of the disease in patients with triple-negative breast cancer (TNBC) and whether PTEN copy number variation (CNV) by next-generation sequencing (NGS) can serve as an alternative to immunohistochemistry (IHC) to identify PTEN loss. Methods: We compared PTEN expression by IHC between pretreatment tumors and residual tumors in the breast and lymph nodes after neoadjuvant chemotherapy in 96 patients enrolled in a TNBC clinical trial. A correlative analysis between PTEN protein expression and PTEN CNV by NGS was also performed. Results: With a stringent cutoff for PTEN IHC scoring, PTEN expression was discordant between pretreatment and posttreatment primary tumors in 5% of patients (n = 96) and between posttreatment primary tumors and lymph node metastases in 9% (n = 33). A less stringent cutoff yielded similar discordance rates. Intratumoral heterogeneity for PTEN loss was observed in 7% of the patients. Among pretreatment tumors, PTEN copy numbers by whole exome sequencing (n = 72) were significantly higher in the PTEN-positive tumors by IHC compared with the IHC PTEN-loss tumors (p < 0.0001). However, PTEN-positive and PTEN-loss tumors by IHC overlapped in copy numbers: 14 of 60 PTEN-positive samples showed decreased copy numbers in the range of those of the PTEN-loss tumors. Conclusion: Testing various specimens by IHC may generate different PTEN results in a small proportion of patients with TNBC; therefore, the decision of testing one versus multiple specimens in a clinical trial should be defined in the patient inclusion criteria. Although a distinct cutoff by which CNV differentiated PTEN-positive tumors from those with PTEN loss was not identified, higher copy number of PTEN may indicate positive PTEN, whereas lower copy number of PTEN would necessitate additional testing by IHC to assess PTEN loss. Trial registration: NCT02276443.
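
    The correlative IHC-versus-CNV analysis described above amounts to comparing PTEN copy numbers between IHC-defined groups. The sketch below is a hypothetical illustration of such a comparison using a rank-sum test; the copy-number values are made up and the study does not state which statistical test was used.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical data: PTEN copy number from sequencing, grouped by IHC call
pten_ihc_positive = np.array([2.0, 1.9, 2.1, 1.8, 2.2, 1.4, 2.0])  # IHC PTEN-positive
pten_ihc_loss = np.array([0.9, 1.1, 0.8, 1.3, 1.0])                # IHC PTEN-loss

# Non-parametric comparison of copy numbers between the two IHC groups
stat, p_value = mannwhitneyu(pten_ihc_positive, pten_ihc_loss, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p_value:.4f}")
```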

    Protecting Children from Harmful Audio Content: Automated Profanity Detection From English Audio in Songs and Social-Media

    This work presents a novel approach for the automated detection of profanity in English audio songs using machine learning techniques. One of the primary drawbacks of existing systems is that they are confined to textual data. The proposed method utilizes a combination of feature extraction techniques and machine learning algorithms to identify profanity in audio songs. Specifically, the approach employs the popular feature extraction techniques of term frequency-inverse document frequency (TF-IDF), Bidirectional Encoder Representations from Transformers (BERT), and Doc2Vec to extract relevant features from the audio songs. TF-IDF is used to capture the frequency and importance of each word in the song, while BERT is utilized to extract contextualized representations of words that can capture more nuanced meanings. To capture the semantic meaning of words in audio songs, the study also explored the Doc2Vec model, a neural network-based approach for learning document-level representations. The study utilizes Open Whisper, an open-source speech recognition model, to transcribe the audio and implement the approach. A dataset of English audio songs was used to evaluate the performance of the proposed method. The results showed that both the TF-IDF and BERT models outperformed the Doc2Vec model in terms of accuracy in identifying profanity in English audio songs. The proposed approach has potential applications in identifying profanity in various forms of audio content, including songs, audio clips, social media, reels, and shorts.
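
    A minimal sketch of the transcribe-then-classify pipeline the abstract describes, using the openai-whisper package for transcription and a TF-IDF plus logistic-regression classifier; the audio file name, training lyrics, and labels are placeholder assumptions, and the original work may differ in model choice and preprocessing.

```python
import whisper
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# 1) Transcribe the song to text (placeholder file name)
asr_model = whisper.load_model("base")
lyrics = asr_model.transcribe("song.mp3")["text"]

# 2) Train a TF-IDF + logistic regression profanity classifier
#    on a small, made-up labelled corpus (1 = contains profanity)
train_texts = ["clean family friendly lyrics", "explicit offensive words here"]
train_labels = [0, 1]
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(train_texts, train_labels)

# 3) Flag the transcribed song
print("profane" if clf.predict([lyrics])[0] == 1 else "clean")
```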

    Deconfined quantum critical points: a review

    Continuous phase transitions in equilibrium statistical mechanics were successfully described 50 years ago with the development of the renormalization group framework. This framework was initially developed in the context of phase transitions whose universal properties are captured by the long wavelength (and long time) fluctuations of a Landau order parameter field. Subsequent developments include a straightforward generalization to a class of T = 0 phase transitions driven by quantum fluctuations. In the last two decades it has become clear that there is a vast landscape of quantum phase transitions where the physics is not always usefully formulated (or sometimes cannot be formulated) in terms of fluctuations of a Landau order parameter field. A wide class of such phase transitions - dubbed deconfined quantum critical points - involves the emergence of fractionalized degrees of freedom coupled to emergent gauge fields. Here I review some salient aspects of these deconfined critical points. Comment: 32 pages, 3 figures; to be published in "50 years of the renormalization group", dedicated to the memory of Michael E. Fisher, edited by Amnon Aharony, Ora Entin-Wohlman, David Huse, and Leo Radzihovsky, World Scientific.
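
    As one concrete example of "fractionalized degrees of freedom coupled to emergent gauge fields", the Neel-to-valence-bond-solid transition is commonly described by a noncompact CP^1 field theory; the sketch below is the standard textbook form of that action, not a formula quoted from this review.

```latex
% Noncompact CP^1 (NCCP^1) theory often used for the Neel-VBS deconfined QCP:
% bosonic spinons z_a (a = 1, 2) coupled to an emergent U(1) gauge field a_mu.
\begin{equation}
  \mathcal{L} \;=\; \sum_{a=1}^{2} \left| (\partial_\mu - i a_\mu)\, z_a \right|^2
  \;+\; s\, |z|^2 \;+\; u \left( |z|^2 \right)^2
  \;+\; \frac{1}{2e^2} \left( \epsilon_{\mu\nu\lambda} \partial_\nu a_\lambda \right)^2
\end{equation}
```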

    Global variations in diabetes mellitus based on fasting glucose and haemoglobin A1c

    Fasting plasma glucose (FPG) and haemoglobin A1c (HbA1c) are both used to diagnose diabetes, but may identify different people as having diabetes. We used data from 117 population-based studies and quantified, in different world regions, the prevalence of diagnosed diabetes, and whether those who were previously undiagnosed and detected as having diabetes in survey screening had elevated FPG, HbA1c, or both. We developed prediction equations for estimating the probability that a person without previously diagnosed diabetes, and at a specific level of FPG, had elevated HbA1c, and vice versa. The age-standardised proportion of diabetes that was previously undiagnosed, and detected in survey screening, ranged from 30% in the high-income western region to 66% in south Asia. Among those with screen-detected diabetes with either test, the age-standardised proportion who had elevated levels of both FPG and HbA1c was 29-39% across regions; the remainder had discordant elevation of FPG or HbA1c. In most low- and middle-income regions, isolated elevated HbA1c was more common than isolated elevated FPG. In these regions, the use of FPG alone may delay diabetes diagnosis and underestimate diabetes prevalence. Our prediction equations help allocate finite resources for measuring HbA1c to reduce the global gap in diabetes diagnosis and surveillance.
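
    The "prediction equations" mentioned above are, in spirit, models of the probability of an elevated HbA1c at a given FPG level. The following is a minimal illustrative sketch using logistic regression with made-up data; it is not the published equations, which included additional predictors and were fitted to the pooled survey data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Made-up training data: FPG (mmol/L) and whether HbA1c was elevated (>= 6.5%)
fpg = np.array([[5.2], [5.8], [6.3], [6.9], [7.4], [8.1], [9.0], [10.2]])
elevated_hba1c = np.array([0, 0, 0, 1, 0, 1, 1, 1])

model = LogisticRegression().fit(fpg, elevated_hba1c)

# Estimated probability that a person with FPG = 7.0 mmol/L has elevated HbA1c
print(model.predict_proba([[7.0]])[0, 1])
```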

    Text Data Transfer Using Li-Fi Communication

    A new and innovative technology called Light Fidelity (Li-Fi) has been developed in the past few years as an alternative to wireless fidelity (Wi-Fi). Li-Fi uses a light source as the transmitting medium for data transmission. The data is transmitted by flickering the light source (i.e. switching it on and off) at a speed that is not noticeable to the eye. In this proposed article, wireless data transfer between two systems is established with the support of Li-Fi technology. Here, data is transmitted from the Tx PC via a laser module driven by an Arduino UNO. On the receiver side, data is received on the Rx PC using a photodiode (solar panel), which is also connected to an Arduino UNO to extract the original data from the light. Li-Fi provides better bandwidth, efficiency, connectivity, and security than Wi-Fi, and has achieved speeds greater than 1 Gbps under laboratory conditions. The experimental report shows that the bit rate is enhanced using the proposed system, reaching up to 147 bps with 100% accuracy over a 50 cm distance. The setup is also simpler than comparable systems, and the overall cost is reduced.
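
    A minimal software sketch of the on-off-keying idea behind such a system: text is serialised into the bit levels that would drive the laser on and off at a fixed bit period, and the receiver reassembles the bytes. Pin control and timing on the actual Arduino hardware are outside this illustration, and the bit period shown is an assumption derived from the reported bit rate.

```python
BIT_PERIOD_S = 1 / 147  # ~147 bps, matching the reported bit rate (assumption)

def text_to_bits(text: str) -> list[int]:
    """Serialise UTF-8 text into the 0/1 levels that would key the laser (MSB first)."""
    return [(byte >> i) & 1 for byte in text.encode("utf-8") for i in range(7, -1, -1)]

def bits_to_text(bits: list[int]) -> str:
    """Reassemble received light levels (0 = off, 1 = on) back into text."""
    data = bytearray()
    for i in range(0, len(bits) - len(bits) % 8, 8):
        byte = 0
        for bit in bits[i:i + 8]:
            byte = (byte << 1) | bit
        data.append(byte)
    return data.decode("utf-8", errors="replace")

# Loopback check: what the Tx PC sends is what the Rx PC recovers
message = "Hello Li-Fi"
assert bits_to_text(text_to_bits(message)) == message
print(f"{len(text_to_bits(message))} bits at {BIT_PERIOD_S:.4f} s per bit")
```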

    Large-scale data-driven financial risk management & analysis using machine learning strategies

    In recent years, the wave of financial crises that has shaken the economy and financial world has caused severe bank losses. Some researchers have focused on examining such catastrophes to develop early warning systems for handling financial risks. Financial experts and academics are increasingly interested in developing big data financial risk prevention and control capabilities based on cutting-edge technologies like big data, machine learning (ML), and neural networks (NN), as well as accelerating the implementation of intelligent risk prevention and control platforms. This research analyzed and processed large-scale datasets before training and evaluated three models - cluster-based K-nearest neighbour (KNN), cluster-based logistic regression (LR), and cluster-based XGBoost - for their ability to predict loan defaults and their likelihood of occurrence. The investor's wealth proportion measure of the proposed model ranges from 0.02 to 0.09. Applying the value-at-risk strategy, the optimal consumption stability did not exceed 5% of the total investment wealth. The simulation results show that the proposed model handles large-scale data-driven financial risks better than state-of-the-art methods. In this article, XGBoost and KNN machine learning models are proposed for financial risk management with IoT deployment.
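
    A minimal sketch of the "cluster-based" modelling idea: borrowers are first grouped with k-means and a separate classifier is then fitted per cluster. The synthetic data, number of clusters, and choice of a KNN base classifier are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 5))                  # synthetic borrower features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic default labels

# 1) Partition borrowers into clusters
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# 2) Fit one classifier per cluster (cluster-based KNN)
models = {}
for c in range(3):
    mask = kmeans.labels_ == c
    models[c] = KNeighborsClassifier(n_neighbors=5).fit(X[mask], y[mask])

# 3) Predict a new applicant's default risk via its cluster's model
x_new = rng.normal(size=(1, 5))
cluster = kmeans.predict(x_new)[0]
print("predicted default:", models[cluster].predict(x_new)[0])
```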

    Gelatin/Nanofibrin bioactive scaffold prepared with enhanced biocompatibility for skin tissue regeneration

    This research study aimed to develop a nano bioactive scaffold (NBAS) using gelatin (Gel), nanofibrin (Nano-FB), and glycerol (Gly) by the film casting method for potential use in skin tissue engineering. The developed NBAS were characterized for molecular interaction (Fourier transform infrared spectroscopy, FTIR), microstructure (scanning electron microscopy, SEM), and mechanical strength (tensile strength (MPa), elongation at break (%), and flexibility (%)); cell viability (MTT assay, 3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyltetrazolium bromide) was assessed in the biocompatibility study. Mechanically, the scaffold possessed a tensile strength of 5.22 ± 0.07 MPa, elongation at break of 5.88 ± 0.04%, and flexibility of 9.18 ± 0.09%. The in vitro study using a human keratinocyte (HaCaT) cell line demonstrated 100% biocompatibility of the NBAS. Based on the outcome of this study, the scaffold showed suitable mechanical properties and exhibited biocompatibility for skin tissue engineering. The study has devised a process for using slaughterhouse and fish waste in the production of valuable medical products such as wound dressing materials.