26 research outputs found

    Modeling and experimentation of drying of adipose derived adult stem cells

    Get PDF
    Development of protocols for storing desiccated cells at ambient temperature offers tremendous economic and practical advantages over traditional storage procedures such as cryopreservation and freeze-drying. The present work adopts an integrated framework of experimentation and modeling to develop procedures for ambient-temperature storage of adult stem cells. As a first step, we measured the post-rehydration membrane integrity of two passages, Passage-0 (P0) and Passage-1 (P1), of human adipose-derived stem cells (hASCs). hASCs were dried on a convective stage at three different drying rates (slow, moderate, and rapid) in D-PBS with trehalose (50 mM) and glycerol (384 mM). After drying, the hASCs were stored for 48 hr under three different conditions: (i) exposed at ambient temperature, (ii) in plastic bags at ambient temperature, and (iii) in vacuum-sealed plastic bags at ambient temperature. Post-rehydration membrane integrity was assessed after incubating the rehydrated hASCs. To extend understanding of the drying procedure, we also theoretically developed a novel ultrasound resonant sensor capable of quantifying the rate of water loss in real time. The sensor is modeled as a conservative beam system and, when excited by ultrasound, accounts for the effect of the changing mass during drying on the fundamental frequency.
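    The sensing principle the abstract describes can be sketched with a lumped spring-mass approximation of the beam: the fundamental frequency rises as water evaporates and the effective mass falls. The stiffness and mass values below are illustrative assumptions, not parameters from the paper.

```python
import math

def fundamental_frequency_hz(stiffness_n_per_m, effective_mass_kg):
    """Fundamental frequency of a lumped spring-mass model of the beam."""
    return math.sqrt(stiffness_n_per_m / effective_mass_kg) / (2 * math.pi)

# Illustrative values (assumed, not from the paper): beam stiffness and
# a dry effective mass plus the water mass remaining at sample times.
K = 50.0          # N/m, assumed beam stiffness
M_DRY = 2.0e-6    # kg, assumed dry sample + beam effective mass
water = [1.0e-6, 0.5e-6, 0.1e-6, 0.0]   # kg, water remaining while drying

freqs = [fundamental_frequency_hz(K, M_DRY + w) for w in water]
# As water evaporates, the effective mass falls and the resonant frequency
# rises, so tracking the frequency shift gives the water-loss rate in
# real time.
```

    Inverting the same relation, a measured frequency yields the instantaneous mass, which is how a resonant sensor would report water loss continuously.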

    Quantification of channel performance and development and characterization of small magnetic field probes

    Get PDF
    This thesis presents a new approach to quantifying channel performance using the transmitter waveform and dispersion penalty (TWDP) with frequency-domain S-parameter data. TWDP was initially defined to characterize the performance of a transmitter in optical links. More recently its use has been extended to include the quantification of channel performance, especially in high-speed copper links. This project focused mainly on channel characterization. Instead of using the time-domain oscilloscope measurements involved in the original approach, it proposes a new method that relies on frequency-domain S-parameter data obtained either from measurements or from simulations. It included a parametric study of TWDP with respect to factors such as bit rate, number of samples, and rise/fall time; this thesis discusses the parameters and the results of that study. This thesis also describes a means to obtain a flat frequency response from the first-order-derivative behavior of an electrically small loop and an electrically short electric-field probe by using both in combination with active oscilloscope probes. Several magnetic field (H-field) probes based on flex-circuit technology were designed to operate at up to about 5 GHz. The H-field probe terminals were connected to the differential amplifier of the active oscilloscope probe, which functioned as an integrator to achieve a flat frequency response. The integrator behavior compensated for the first-order-derivative response of the flex-circuit probes. Another H-field probe was designed as a new approach to ensure high sensitivity without compromising spatial resolution. This thesis describes full-wave simulations of the 1-mil probe and analyses the results. --Abstract, page iii
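    The compensation idea above can be illustrated numerically: an electrically small loop outputs a voltage proportional to jωM (a first-order-derivative response), while an ideal integrator has gain 1/(jωRC), so their cascade has a frequency-independent magnitude M/RC. The component values below are illustrative assumptions, not figures from the thesis.

```python
import math

M_MUTUAL = 1e-9   # H, assumed probe mutual inductance (illustrative)
RC = 1e-9         # s, assumed integrator time constant (illustrative)

def cascade_gain(freq_hz):
    """|jωM · 1/(jωRC)|: loop derivative response through an ideal integrator."""
    w = 2 * math.pi * freq_hz
    probe = complex(0, w * M_MUTUAL)      # loop output: jωM
    integrator = 1 / complex(0, w * RC)   # ideal integrator: 1/(jωRC)
    return abs(probe * integrator)

gains = [cascade_gain(f) for f in (1e8, 1e9, 5e9)]
# All gains equal M/RC: the integrator's -20 dB/decade slope cancels the
# probe's +20 dB/decade slope, giving a flat response.
```

    In practice the flat band is bounded by amplifier bandwidth and by the probe leaving the electrically small regime, which is why the designs above target operation only up to about 5 GHz.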

    Using TWDP to Quantify Channel Performance with Frequency-Domain S-Parameter Data

    Get PDF
    This paper presents an approach to quantifying channel performance using TWDP (transmitter waveform and dispersion penalty) with frequency-domain S-parameter data. TWDP was initially defined to characterize the performance of a transmitter in optical links. The same concept has since been extended to quantify channel performance as well, especially in high-speed copper links. This paper focuses on channel characterization. Instead of using time-domain oscilloscope measurements as defined in the original approach, a new method is proposed that uses frequency-domain S-parameter data obtained either from measurements or from simulations. A parametric study of TWDP with respect to bit rate, number of samples per bit, rise/fall time, etc., is also presented and discussed.
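    The key step in the proposed method is converting sampled S-parameter data into a time-domain response before the penalty computation. The sketch below shows only that frequency-to-time conversion as a naive inverse DFT on illustrative data; a real TWDP implementation additionally enforces conjugate symmetry, windowing, pulse shaping, and the reference MMSE-DFE receiver defined in IEEE 802.3.

```python
import cmath
import math

def impulse_response(s21):
    """Naive inverse DFT of a sampled transfer function S21(f).

    A minimal stand-in for the frequency-to-time conversion step; it is
    not the full TWDP computation.
    """
    n = len(s21)
    return [
        sum(s21[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)) / n
        for t in range(n)
    ]

# Illustrative lossless channel: S21 = 1 at every frequency point yields
# an ideal impulse, with all energy in the first time sample.
h = impulse_response([1.0 + 0j] * 8)
```

    A lossy, dispersive channel would instead yield a spread-out impulse response, and the energy smeared into later samples is what the dispersion-penalty metric captures.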

    Acute necrotising pancreatitis as the first and sole presentation of undiagnosed primary hyperparathyroidism

    Get PDF
    Primary hyperparathyroidism is a recognized but rare cause of acute pancreatitis. The pathophysiology of hypercalcemia-induced acute pancreatitis is not well understood, but when this combination occurs, pancreatitis is likely to be severe, and the degree of hypercalcemia may play an important role in the association. The cause of hypercalcemia should therefore be identified early. Surgical resection of the parathyroid adenoma is the definitive therapy. We report two cases of severe acute necrotizing pancreatitis associated with hypercalcemia. The cause of hyperparathyroidism was a benign parathyroid adenoma. We highlight the drawbacks of delaying the diagnosis of primary hyperparathyroidism in patients presenting with acute pancreatitis as the sole clinical manifestation.

    Are Face Detection Models Biased?

    Full text link
    The presence of bias in deep models leads to unfair outcomes for certain demographic subgroups. Research on bias has focused primarily on facial recognition and attribute prediction, with little emphasis on face detection. Existing studies treat face detection as binary classification into 'face' and 'non-face' classes. In this work, we investigate possible bias in face detection through facial region localization, which is currently unexplored. Since facial region localization is an essential step in all face recognition pipelines, it is imperative to analyze the presence of such bias in popular deep models. Most existing face detection datasets lack suitable annotations for such analysis. Therefore, we web-curate the Fair Face Localization with Attributes (F2LA) dataset and manually annotate more than 10 attributes per face, including facial localization information. Utilizing the extensive annotations from F2LA, an experimental setup is designed to study the performance of four pre-trained face detectors. We observe (i) a high disparity in detection accuracies across gender and skin tone, and (ii) an interplay of confounding factors beyond demography. The F2LA data and associated annotations can be accessed at http://iab-rubric.org/index.php/F2LA. Comment: Accepted in FG 202
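    The disparity the study reports can be made concrete with a toy computation: per-subgroup detection rate and the max-min gap across groups. The group labels and counts below are invented for illustration and are not from the F2LA experiments.

```python
# (faces detected, faces annotated) per subgroup -- made-up numbers.
detections = {
    "lighter-skin": (480, 500),
    "darker-skin": (410, 500),
}

# Per-group detection rate, then the largest gap between any two groups.
rates = {group: found / total for group, (found, total) in detections.items()}
disparity = max(rates.values()) - min(rates.values())
# A nonzero disparity means the detector localizes faces of one subgroup
# more reliably than another -- the kind of gap the study quantifies.
```

    On real benchmarks the same computation would be repeated across gender, skin tone, and the confounding attributes annotated in F2LA to separate demographic effects from, e.g., pose or illumination.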

    DeePhy: On Deepfake Phylogeny

    Full text link
    Deepfakes are tailored, synthetically generated videos that are now prevalent and spreading at scale, threatening the trustworthiness of information available online. While existing datasets contain different kinds of deepfakes that vary in their generation technique, they do not consider the progression of deepfakes in a "phylogenetic" manner. It is possible that an existing deepfake face is swapped with another face. This face swapping can be performed multiple times, and the resulting deepfake can evolve to confuse deepfake detection algorithms. Further, many databases do not provide the employed generative model as a target label. Model attribution enhances the explainability of detection results by providing information on the generative model employed. To enable the research community to address these questions, this paper proposes DeePhy, a novel deepfake phylogeny dataset consisting of 5040 deepfake videos generated using three different generation techniques: 840 videos of once-swapped deepfakes, 2520 videos of twice-swapped deepfakes, and 1680 videos of thrice-swapped deepfakes. At over 30 GB in size, the database was prepared over 1100 hours using 18 GPUs with 1,352 GB of cumulative memory. We also present a benchmark on the DeePhy dataset using six deepfake detection algorithms. The results highlight the need to advance research on model attribution of deepfakes and to generalize the process over a variety of deepfake generation techniques. The database is available at http://iab-rubric.org/deephy-database. Comment: Accepted at the 2022 International Joint Conference on Biometrics (IJCB 2022).

    On Responsible Machine Learning Datasets with Fairness, Privacy, and Regulatory Norms

    Full text link
    Artificial Intelligence (AI) has made its way into various scientific fields, providing astonishing improvements over existing algorithms for a wide variety of tasks. In recent years, there have been severe concerns over the trustworthiness of AI technologies, and the scientific community has focused on developing trustworthy AI algorithms. However, the machine and deep learning algorithms popular in the AI community today depend heavily on the data used during their development. These learning algorithms identify patterns in the data and learn the behavioral objective, so any flaws in the data have the potential to translate directly into the algorithms. In this study, we discuss the importance of Responsible Machine Learning Datasets and propose a framework to evaluate datasets through a responsible rubric. While existing work focuses on post-hoc evaluation of algorithms for their trustworthiness, we provide a framework that considers the data component separately to understand its role in the algorithm. We discuss responsible datasets through the lens of fairness, privacy, and regulatory compliance, and provide recommendations for constructing future datasets. After surveying over 100 datasets, we use 60 datasets for analysis and demonstrate that none of them is immune to issues of fairness, privacy preservation, and regulatory compliance. We provide modifications to the "datasheets for datasets" with important additions for improved dataset documentation. With governments around the world enacting data protection laws, the method for creating datasets in the scientific community requires revision. We believe this study is timely and relevant in today's era of AI.

    DEVELOPMENT AND EVALUATION OF FREEZING RESISTANT INTRAVENOUS FLUID

    Get PDF
    Objectives: Hemorrhagic or hypovolemic shock accounts for a large portion of civilian and military trauma deaths due to life-threatening blood loss, which requires intravenous fluid infusion to prevent critical fluid deficits. However, at low temperatures (-15 °C), fluid bottles freeze and cannot be used in an emergency. The objective of the present work was therefore to develop a freezing-resistant intravenous formulation (FRIV) and evaluate its in vivo safety and efficacy. Methods: FRIV formulations were developed using a standardized Ringer's lactate (RL) formulation protocol, in which varying concentrations of ethanol and glycerol were added to induce the desired physicochemical properties. Efficacy of FRIV was evaluated as the survival percentage in a hemorrhagic animal model (Swiss albino mice). Acute toxicity studies were carried out through infusion at dose levels of 0, 20, and 40 ml/kg body weight. Results: In vitro data showed that the optimized FRIV (F-10) took longer to freeze (360 ± 21 min) and less time to thaw (50 ± 4.50 min) than the control, which froze in 110 ± 15 min and thawed in 80 ± 7.25 min. Formulations were stable and sterile for up to six months. In vivo efficacy data showed ≥ 75% survival in animals infused with FRIV compared to the control group in the hemorrhagic model, and no treatment-related toxic effects of the optimized formulation in hematological, serum biochemistry, or histopathological analyses. Conclusion: The pre-clinical safety and efficacy data of the present study indicate that the developed FRIV formulation could be used for fluid resuscitation during hemorrhagic shock in combat scenarios.
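    The role of the ethanol and glycerol additives can be illustrated with an ideal-solution, back-of-envelope estimate of colligative freezing-point depression. The abstract does not give the FRIV composition, so the solute amounts below are assumptions for illustration only, and real formulations deviate from ideal-solution behavior.

```python
KF_WATER = 1.86   # K·kg/mol, cryoscopic constant of water
MOLAR_MASS = {"ethanol": 46.07, "glycerol": 92.09}   # g/mol

def freezing_point_c(solutes_g_per_kg_water):
    """Ideal-solution estimate: Tf = 0 °C - Kf * total solute molality."""
    molality = sum(
        grams / MOLAR_MASS[name] for name, grams in solutes_g_per_kg_water.items()
    )
    return -KF_WATER * molality

# Assumed mix: 60 g ethanol and 100 g glycerol per kg of water.
tf = freezing_point_c({"ethanol": 60.0, "glycerol": 100.0})
# Roughly -4.4 °C for this assumed mix -- still well above harsh
# cold-environment temperatures, which is why substantial additive
# concentrations are needed and why their toxicity must be evaluated.
```

    This is why the study pairs the freezing/thawing measurements with acute toxicity testing: pushing the freezing point down requires enough solute that safety can no longer be assumed.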

    Management of Secondary Glaucoma, a Rising Challenge

    Get PDF
    Secondary glaucoma has increased exponentially in recent times. This is partly due to the increase in complex eye surgeries such as corneal transplantation and vitreoretinal surgery, and partly due to the increase in lifestyle-related diseases such as diabetes, which has raised the prevalence of neovascular glaucoma. The other leading causes of secondary glaucoma are post-trauma, post-cataract surgery, and lens-induced glaucoma. Secondary glaucoma is an important cause of visual morbidity. The management of these complex glaucomas is difficult, as they are mostly intractable and do not respond to anti-glaucoma medications. Many patients not controlled by medical management may require surgical intervention along with vigilant control of their primary pathology. This chapter addresses the stepwise approach to managing these glaucomas and the tips and tricks to tackle the nuances during management, specifically covering neovascular glaucoma, post-PK glaucoma, lens-induced glaucoma, traumatic glaucoma, and uveitic glaucoma.