
    VISUAL OUTCOME AND POST OPERATIVE COMPLICATIONS AFTER SILICONE OIL REMOVAL IN PSEUDOPHAKIC VITRECTOMIZED PATIENTS

    OBJECTIVE: To assess visual outcome and post-operative complications after silicone oil removal in pseudophakic vitrectomized patients. METHODS: This interventional case series was conducted at the Department of Ophthalmology, Medical Teaching Institution Lady Reading Hospital, Peshawar, Pakistan, from February 2019 to January 2020. A total of 32 eyes of 32 patients were enrolled after fulfilling the inclusion and exclusion criteria, using a non-random consecutive sampling technique. All patients were pseudophakic and had undergone pars plana vitrectomy with silicone oil six months earlier. Silicone oil removal was carried out in all patients, and visual outcome and surgical complications were assessed on the 1st and 14th post-operative days. The final examination was done after six months. Statistical analysis was performed using the Statistical Package for Social Sciences (version 21), applying the paired sample t-test. RESULTS: Amongst the 32 patients, 20 (62.5%) were male and 12 (37.5%) were female. Age ranged from 16 to 60 years, with a mean of 35±13.97 years. Preoperatively, mean best corrected visual acuity (BCVA) was 1.45±0.52 logMAR. At the last post-operative follow-up after six months, mean BCVA was 1.21±0.55 logMAR. Visual acuity improved in 24 (75%) cases, remained stable in 3 (9.4%), and worsened in 5 (15.6%). The visual improvement was statistically significant (p = 0.001, paired t-test). The most common complications were retinal detachment (n=4; 12.5%), secondary glaucoma (n=4; 12.5%), and epi-retinal membrane (n=2; 6.3%). CONCLUSION: Vision improves in the majority of pseudophakic patients after silicone oil removal.

    Susceptibility of Continual Learning Against Adversarial Attacks

    Recent continual learning approaches have primarily focused on mitigating catastrophic forgetting. Nevertheless, two critical areas have remained relatively unexplored: 1) evaluating the robustness of proposed methods and 2) ensuring the security of learned tasks. This paper investigates the susceptibility of continually learned tasks, both current and previously acquired, to adversarial attacks. Specifically, we observe that any class belonging to any task can be easily targeted and misclassified as the desired target class of any other task. Such vulnerability of learned tasks to adversarial attacks raises profound concerns regarding data integrity and privacy. To assess the robustness of continual learning approaches, we consider all three scenarios, i.e., task-incremental, domain-incremental, and class-incremental learning. In this regard, we explore the robustness of three regularization-based methods, three replay-based approaches, and one hybrid technique that combines replay and exemplar approaches. We empirically demonstrate that in any continual learning setting, any class, whether belonging to the current or previously learned tasks, is susceptible to misclassification. Our observations identify potential limitations of continual learning approaches against adversarial attacks and highlight that current continual learning algorithms may not be suitable for deployment in real-world settings. Comment: 18 pages, 13 figures
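    The targeted misclassification the abstract describes can be illustrated with a minimal targeted FGSM-style step (a standard attack technique, not necessarily the paper's exact procedure) on a toy linear softmax classifier; the model, weights, and step size below are all illustrative:

```python
import numpy as np

def targeted_fgsm(x, W, b, target, eps):
    """One targeted FGSM step on a linear softmax classifier.

    Moves input x toward being classified as `target` by stepping
    against the sign of the gradient of the target-class loss.
    """
    logits = W @ x + b
    p = np.exp(logits - logits.max())
    p /= p.sum()
    # gradient of the cross-entropy loss w.r.t. x for the target label
    onehot = np.zeros_like(p)
    onehot[target] = 1.0
    grad_x = W.T @ (p - onehot)
    # targeted attack: descend the target-class loss
    return x - eps * np.sign(grad_x)

# toy 2-class model where the clean input is predicted as class 0
W = np.array([[2.0, 0.0], [0.0, 2.0]])
b = np.zeros(2)
x = np.array([1.0, -1.0])          # clean input, classified as class 0
x_adv = x
for _ in range(20):
    x_adv = targeted_fgsm(x_adv, W, b, target=1, eps=0.2)
pred = int(np.argmax(W @ x_adv + b))
print(pred)  # the perturbed input is now predicted as the target class (1)
```

    Iterating the step (here 20 times) is the usual way such targeted attacks succeed even when a single step would not flip the prediction.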

    CNN-XGBoost fusion-based affective state recognition using EEG spectrogram image analysis

    Recognising the emotional state of a human from brain signals is an active research domain with several open challenges. In this research, we propose a spectrogram-image-based CNN-XGBoost fusion method for recognising three dimensions of emotion, namely arousal (calm or excitement), valence (positive or negative feeling) and dominance (without control or empowered). We used a benchmark dataset called DREAMER, where the EEG signals were collected from multiple stimuli along with self-evaluation ratings. In our proposed method, we first calculate the Short-Time Fourier Transform (STFT) of the EEG signals and convert them into RGB images to obtain the spectrograms. Then we use a two-dimensional Convolutional Neural Network (CNN) to train the model on the spectrogram images and retrieve the features from the trained layer of the CNN using a dense layer of the neural network. We apply an Extreme Gradient Boosting (XGBoost) classifier to the extracted CNN features to classify the signals into the arousal, valence and dominance dimensions of human emotion. We compare our results with feature fusion-based state-of-the-art approaches to emotion recognition. To do this, we applied various feature extraction techniques to the signals, including the Fast Fourier Transform, the Discrete Cosine Transform, Poincaré, Power Spectral Density, Hjorth parameters and some statistical features. Additionally, we use Chi-square and Recursive Feature Elimination techniques to select the discriminative features. We form the feature vectors by applying feature-level fusion, and apply Support Vector Machine (SVM) and XGBoost classifiers on the fused features to classify different emotion levels. The performance study shows that the proposed spectrogram-image-based CNN-XGBoost fusion method outperforms the feature fusion-based SVM and XGBoost methods. The proposed method obtained an accuracy of 99.712% for arousal, 99.770% for valence and 99.770% for dominance in human emotion detection.
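    The first stage of such a pipeline, the STFT that turns a raw signal into the spectrogram later rendered as an image, can be sketched as follows; the window/hop sizes and the synthetic test signal are illustrative, not the paper's parameters:

```python
import numpy as np

def stft_spectrogram(signal, win=64, hop=32):
    """Magnitude spectrogram via a sliding-window FFT (a minimal STFT).

    Each column is the spectrum of one Hann-windowed frame; in a
    pipeline like the paper's, these frames would then be rendered
    as an RGB image and fed to a 2-D CNN.
    """
    window = np.hanning(win)
    frames = [signal[s:s + win] * window
              for s in range(0, len(signal) - win + 1, hop)]
    # keep only the non-negative frequency bins of each frame
    return np.abs(np.fft.rfft(np.array(frames), axis=1)).T

# toy "EEG" trace: a 10 Hz sine sampled at 128 Hz for 2 seconds
fs, f0 = 128, 10
t = np.arange(2 * fs) / fs
spec = stft_spectrogram(np.sin(2 * np.pi * f0 * t))
peak_bin = int(spec.mean(axis=1).argmax())
print(peak_bin * fs / 64)  # dominant frequency recovered: 10.0 Hz
```

    With a 64-sample window at 128 Hz the frequency resolution is 2 Hz per bin, so the 10 Hz component lands in bin 5, which is exactly where the spectrogram peaks.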

    Affective social anthropomorphic intelligent system

    Human conversational styles are characterised by sense of humor, personality, and tone of voice. These characteristics have become essential for conversational intelligent virtual assistants. However, most state-of-the-art intelligent virtual assistants (IVAs) fail to interpret the affective semantics of human voices. This research proposes an anthropomorphic intelligent system that can hold a proper human-like conversation with emotion and personality. A voice style transfer method is also proposed to map the attributes of a specific emotion. Initially, the frequency-domain data (Mel-spectrogram) is created by converting the temporal audio wave data, which comprises discrete patterns for audio features such as notes, pitch, rhythm, and melody. A collateral CNN-Transformer-Encoder is used to predict seven different affective states from voice. The voice is also fed in parallel to deep-speech, an RNN model that generates the text transcription from the spectrogram. The transcribed text is then passed to the multi-domain conversation agent, which uses blended skill talk, a transformer-based retrieve-and-generate strategy, and beam-search decoding to produce an appropriate textual response. The system learns an invertible mapping of data to a latent space that can be manipulated, and generates each Mel-spectrogram frame based on previous frames for voice synthesis and style transfer. Finally, the waveform is generated from the spectrogram using WaveGlow. The outcomes of the studies we conducted on the individual models were auspicious. Furthermore, users who interacted with the system provided positive feedback, demonstrating the system's effectiveness. Comment: Multimedia Tools and Applications (2023)
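    Beam-search decoding, the last stage of the response-generation pipeline described above, can be sketched in isolation; the toy token table, repetition penalty, and probabilities below are hypothetical stand-ins for a real language model's next-token distribution:

```python
import math

def beam_search(step_logprobs, beam_width=2, length=3):
    """Minimal beam-search decoder over a next-token scoring function.

    `step_logprobs(prefix)` returns {token: logprob}; this mirrors how
    a response generator ranks candidate continuations, keeping the
    `beam_width` best partial sequences at every step.
    """
    beams = [((), 0.0)]
    for _ in range(length):
        candidates = []
        for prefix, score in beams:
            for tok, lp in step_logprobs(prefix).items():
                candidates.append((prefix + (tok,), score + lp))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return beams[0][0]

# hypothetical unigram probabilities over three tokens
table = {"a": 0.5, "b": 0.3, "c": 0.2}

def step(prefix):
    # penalise immediate repetition so the beam has to plan ahead
    last = prefix[-1] if prefix else None
    return {t: math.log(p * (0.1 if t == last else 1.0))
            for t, p in table.items()}

print(beam_search(step))  # best 3-token sequence under the toy model
```

    Because repetition is penalised, the greedy choice "a a a" scores poorly and the beam instead settles on an alternating sequence, which is the kind of look-ahead behaviour that motivates beam search over greedy decoding.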

    Vision transformer and explainable transfer learning models for auto detection of kidney cyst, stone and tumor from CT-radiography

    Renal failure, a public health concern, and the scarcity of nephrologists around the globe have necessitated the development of an AI-based system to auto-diagnose kidney diseases. This research deals with three major categories of renal disease: kidney stones, cysts, and tumors. A total of 12,446 CT whole-abdomen and urogram images were gathered and annotated in order to construct an AI-based kidney disease diagnostic system and contribute to the AI community’s research scope, e.g., modeling a digital twin of renal functions. The collected images were subjected to exploratory data analysis, which revealed that the images from all of the classes had the same type of mean color distribution. Furthermore, six machine learning models were built: three based on state-of-the-art Vision transformer variants (EANet, CCT, and Swin transformers), and three based on the well-known deep learning models ResNet, VGG16, and Inception v3, adjusted in their last layers. While the VGG16 and CCT models performed admirably, the Swin transformer outperformed all of them in terms of accuracy, with an accuracy of 99.30 percent. The comparison of F1 score, precision, and recall reveals that the Swin transformer outperforms all other models and is the quickest to train. The study also opened the black box of the VGG16, ResNet50, and Inception models, demonstrating that VGG16 is superior to ResNet50 and Inception v3 in terms of monitoring the necessary anatomical abnormalities. We believe that the superior accuracy of both our Swin transformer-based model and the VGG16-based model can be useful in diagnosing kidney tumors, cysts, and stones.

    Salivary Composition of Oral Squamous Cell Carcinoma Patients

    OBJECTIVES: The purpose of the study was to determine the salivary composition of oral squamous cell carcinoma patients. METHODOLOGY: A retrospective study was conducted over 6 months on data from 60 oral squamous cell carcinoma patients obtained from the patient records of the Institute of Radiotherapy and Nuclear Medicine, Peshawar. Salivary pH, sodium, potassium, and total proteins were recorded. RESULTS: The sodium, potassium, and total protein concentrations in the saliva of oral squamous cell carcinoma patients were 23.5 mM/L, 96.7 mM/L, and 234.6 mM/L, respectively. These values were significantly higher than normal salivary concentrations. CONCLUSION: It was concluded that the saliva of oral squamous cell carcinoma patients contains higher concentrations of sodium, potassium, and total proteins.

    Spectral characterization, analgesic, and anti-inflammatory effects of ethanolic extract of Calotropis procera leaf and dry latex from Jazan, Kingdom of Saudi Arabia

    Traditional healers have used the shrub Calotropis procera (CP) for many years in various therapies. The present study investigated the bioactive constituents of ethanolic extracts of CP leaf and dried latex using gas chromatography-mass spectrometry and Fourier-transform infrared (FT-IR) spectroscopy. The identification and characterization of the compounds were confirmed by examining the constituents' mass-spectrum fragmentations and FT-IR spectra and comparing the results with those in the literature. The tail-flick method was used to investigate the analgesic properties of the extract, and its anti-inflammatory activity was assessed using a rat model of formalin-induced oedema. Acute oral toxicity in rats was studied per OECD recommendations. Twenty male rats were divided into four groups; groups 1, 2, and 3 received the ethanolic extract of the leaves and dried milky sap of CP (200 mg/kg), while group 4 rats were administered aspirin 50 mg/kg as a positive control. The CP dried latex extract had the highest content of lupeol and its acetate derivative compared to the leaf extract. The CP dried latex extract inhibited inflammation more significantly than the ethanolic leaf extract and the drug indomethacin at the higher dosage (200 mg/kg). The ethanolic extracts showed analgesia comparable to aspirin. This suggests that fatty acids and their esters, particularly ethyl linoleate (8.96%), ethyl palmitate (7.99%), ethyl linoleate (6.98%), and palmitic acid (5.18%), may be valuable biomarkers for characterizing leaf and latex samples and describing the medicinal potential of CP.

    Automatic seat belt

    Motor vehicle accidents have grown to be a major cause of death and injuries. We developed an occupant safety feature with the intention of reducing accidental injuries to occupants. After studying a number of design plans and research papers, we decided to design and develop a seat belt safety mechanism using a spring-and-rope mechanism. In accidents, passengers' lives can be saved to a great extent by the use of seat belts and airbags in automobiles. The safety implications of these systems and the stringent safety regulations around the world have brought a growing market for these products. The purpose of the project is to design an alternative seat belt safety mechanism without changing the available space in the car, and also to provide safety to occupants in those cars in which airbags and other safety systems could not be implemented due to cost. The actuating system design includes a three-point seat belt, a spring, a wire rope, a solenoid, and a locking mechanism.

    Spectral analysis and bioactive profiling of hot methanolic extracts from Phoenix dactylifera seeds: Antibacterial efficacy and in vitro cytotoxicity insights

    Phoenix dactylifera, commonly called the date palm, has great importance as a fruiting plant. The hot methanolic extract of date seeds (HMEDSE) was fractionated into three fractions (F1, F2, and F3) through column chromatography. The three fractions were composed of various bioactive constituents, which were analysed through GC-MS and FT-IR analysis. The results revealed remarkable antibacterial properties of crude HMEDSE against various pathogenic microorganisms affecting humans. Ranked by efficacy, the activity of HMEDSE against human pathogenic bacteria followed the sequence Escherichia coli (17.6 ± 2.5 mm), Klebsiella pneumoniae (16.3 ± 2.5 mm), Staphylococcus aureus (16.3 ± 1.5 mm), Streptococcus pyogenes (15 ± 2.6 mm), Pseudomonas aeruginosa (15 ± 2 mm), and lastly Bacillus subtilis (14.3 ± 2 mm). Furthermore, HMEDSE exhibited cytotoxicity, with an IC50 of 73.5 ± 0.5 µg/mL against MCF-7 ATCC breast cancer cells, leading to gradual apoptosis.