24 research outputs found
Photostability of J-aggregates adsorbed on TiO2 nanoparticles and AFM imaging of J-aggregates on a glass surface
Abstract. The spectral properties and photostability of the 5,5'-6,6'-tetrachloro-1,1'-dioctyl-3,3'-bis-(3-carboxypropyl)-benzimidacarbocyanine (Dye 1) J-aggregate were investigated in solution and upon adsorption on TiO2 nanoparticles. Dye 1 was found to photodegrade on the surface of TiO2. Additionally, the self-assembly of Dye 1 on a glass surface was studied by non-contact atomic force microscopy (NC-AFM). The dye molecules form well-defined fiber-like structures that extend for tens of micrometers. The internal structure of the fibers was clearly resolved, showing a number of small tubes wrapped around each other to form a helical structure.
Flexible Bench-Scale Recirculating Flow CPC Photoreactor for Solar Photocatalytic Degradation of Methylene Blue Using Removable TiO2
TiO2 immobilized on a polyethylene terephthalate (PET) nonwoven sheet was used in the solar photocatalytic degradation of methylene blue (MB). TiO2 Evonik Aeroxide P25 was used in this study. The amount of TiO2 loaded on the PET was approximately 24%. Immobilization of TiO2 on PET was carried out by a dip-coating process followed by exposure to mild heat and pressure. The TiO2/PET sheets were wrapped around removable Teflon rods inside a home-made bench-scale recirculating-flow Compound Parabolic Concentrator (CPC) photoreactor prototype (platform dimensions 0.7 × 0.2 × 0.4 m). The CPC photoreactor consists of seven low-iron borosilicate glass tubes connected in series, with CPC reflectors made of 304 stainless steel. The prototype was mounted on a platform tilted at 30°, the local latitude of Cairo. A centrifugal pump circulated water containing the MB dye through the glass tubes. Efficient photocatalytic degradation of MB using TiO2/PET was achieved upon exposure to direct sunlight. Chemical oxygen demand (COD) analyses reveal complete mineralization of MB. The durability of the TiO2/PET composite was also tested under sunlight irradiation: results indicate only a 6% reduction in the amount of TiO2 after seven cycles, and no significant change was observed in the physicochemical characteristics of TiO2/PET after the successive irradiation cycles.
J-Aggregates of Amphiphilic Cyanine Dyes for Dye-Sensitized Solar Cells: A Combination between Computational Chemistry and Experimental Device Physics
We report on the design and structural principles of 5,5′-6,6′-tetrachloro-1,1′-dioctyl-3,3′-bis-(3-carboxypropyl)-benzimidacarbocyanine (Dye 1). Such metal-free amphiphilic cyanine dyes have many applications in dye-sensitized solar cells. AFM surface topographic investigation of amphiphilic Dye 1 molecules adsorbed on a TiO2 anode reveals their ability to spontaneously self-organize into highly ordered aggregates with a fiber-like structure. These aggregates are known to exhibit the outstanding optical properties of J-aggregates, namely efficient exciton coupling and fast exciton energy migration, which are essential for building artificial light harvesting into the photovoltaic device. The light-to-electricity conversion efficiency of the DSSC based on the metal-free amphiphilic Dye 1 is η = 3.75%, which is about 50% of that based on the metal-based N719 Ru dye (di-tetrabutylammonium cis-bis(isothiocyanato)bis(2,2′-bipyridyl-4,4′-dicarboxylato)ruthenium(II)). DFT and TD-DFT studies show that a large intramolecular charge transfer takes place from the HOMO to the LUMO. The HOMO is localized on one part of the molecule, with almost no contribution from the carboxylic moiety. This clearly indicates that the anchoring carboxylic group plays a minor role.
Multimodal Biometrics Recognition from Facial Video via Deep Learning
Biometric identification using multiple modalities has attracted the attention of many researchers, as it produces more robust and trustworthy results than single-modality biometrics. In this paper, we present a novel multimodal recognition system that trains a deep learning network to automatically learn features after extracting multiple biometric modalities from a single data source, i.e., facial video clips. Utilizing the different modalities present in the facial video clips, i.e., left ear, left profile face, frontal face, right profile face, and right ear, we train supervised denoising auto-encoders to automatically extract robust and non-redundant features. The automatically learned features are then used to train modality-specific sparse classifiers to perform the multimodal recognition. Experiments conducted on the constrained facial video dataset (WVU) and the unconstrained facial video dataset (HONDA/UCSD) resulted in 99.17% and 97.14% rank-1 recognition rates, respectively. The multimodal recognition accuracy demonstrates the superiority and robustness of the proposed approach irrespective of the illumination, non-planar movement, and pose variations present in the video clips.
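The feature-learning step described above can be sketched in miniature. The following is a hypothetical toy illustration of a single-layer denoising auto-encoder, assuming masking corruption, tied weights, and squared-error training; the paper's actual architecture, layer sizes, corruption level, and training schedule are not specified here.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_dae(X, n_hidden=8, noise=0.3, lr=0.3, epochs=200):
    """Learn to reconstruct clean inputs X from corrupted copies."""
    n, d = X.shape
    W = rng.normal(0.0, 0.1, (d, n_hidden))  # tied weights: decoder uses W.T
    b_h = np.zeros(n_hidden)
    b_o = np.zeros(d)
    for _ in range(epochs):
        X_noisy = X * (rng.random(X.shape) > noise)  # masking corruption
        H = sigmoid(X_noisy @ W + b_h)               # encode
        X_hat = sigmoid(H @ W.T + b_o)               # decode
        # backpropagate the squared reconstruction error
        d_o = (X_hat - X) * X_hat * (1.0 - X_hat)
        d_h = (d_o @ W) * H * (1.0 - H)
        W -= lr * (X_noisy.T @ d_h + (H.T @ d_o).T) / n
        b_o -= lr * d_o.mean(axis=0)
        b_h -= lr * d_h.mean(axis=0)
    return W, b_h, b_o

# Toy data with low-dimensional structure standing in for face/ear features.
codes = rng.random((64, 4))
X = sigmoid(codes @ rng.normal(0.0, 1.0, (4, 16)))

W, b_h, b_o = train_dae(X)
H = sigmoid(X @ W + b_h)               # learned features for a downstream classifier
err = float(np.mean((sigmoid(H @ W.T + b_o) - X) ** 2))
print(f"reconstruction MSE after training: {err:.4f}")
```

In the full system, the hidden activations `H` would be passed to the modality-specific sparse classifiers rather than used for reconstruction.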
Multimodal Biometrics Recognition from Facial Video with Missing Modalities Using Deep Learning
Biometric identification using multiple modalities has attracted the attention of many researchers, as it produces more robust and trustworthy results than single-modality biometrics. In this paper, we present a novel multimodal recognition system that trains a deep learning network to automatically learn features after extracting multiple biometric modalities from a single data source, i.e., facial video clips. Utilizing the different modalities present in the facial video clips, i.e., left ear, left profile face, frontal face, right profile face, and right ear, we train supervised denoising auto-encoders to automatically extract robust and non-redundant features. The automatically learned features are then used to train modality-specific sparse classifiers to perform the multimodal recognition. Moreover, the proposed technique has proven robust when some of the above modalities were missing during testing. The proposed system has three main components: detection, which consists of modality-specific detectors that automatically detect images of the different modalities present in facial video clips; feature selection, which uses a supervised denoising sparse auto-encoder network to capture discriminative representations that are robust to illumination and pose variations; and classification, which consists of a set of modality-specific sparse representation classifiers for unimodal recognition, followed by score-level fusion of the recognition results of the available modalities. Experiments conducted on the constrained facial video dataset (WVU) and the unconstrained facial video dataset (HONDA/UCSD) resulted in 99.17% and 97.14% rank-1 recognition rates, respectively. The multimodal recognition accuracy demonstrates the superiority and robustness of the proposed approach irrespective of the illumination, non-planar movement, and pose variations present in the video clips, even in the situation of missing modalities.
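The missing-modality fusion step admits a compact sketch. The following assumes the simplest rule consistent with the abstract: each available modality-specific classifier yields a score per gallery identity, an absent modality is skipped, and the remaining score vectors are averaged before picking the best match. The modality names and score values are illustrative, not taken from the paper.

```python
import numpy as np

def fuse_scores(modality_scores):
    """modality_scores: dict of modality name -> score vector over the
    gallery identities, or None when that modality is missing."""
    available = [np.asarray(s, dtype=float)
                 for s in modality_scores.values() if s is not None]
    if not available:
        raise ValueError("no modality available for fusion")
    fused = np.mean(available, axis=0)   # average the available score vectors
    return int(np.argmax(fused)), fused  # best-matching gallery index

scores = {
    "frontal_face": [0.20, 0.70, 0.10],
    "left_ear":     [0.30, 0.60, 0.10],
    "right_ear":    None,  # missing during testing
}
identity, fused = fuse_scores(scores)
print(identity)  # → 1 (the second gallery identity wins)
```

Because the average is taken only over modalities that are present, the decision rule degrades gracefully as modalities drop out, which is the robustness property the abstract claims.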
Transition Metal Complexes of Mixed Bioligands: Synthesis, Characterization, DFT Modeling, and Applications
Divalent transition metal complexes [MGlu-Arg (H2O)]H2O and [MGlu-Arg (H2O)]H2O, where M = Co, Ni, Cu, and Zn, Glu = glutamic acid, and Arg = L-arginine, are prepared and characterized using different techniques. DFT and TD-DFT modeling validated and interpreted some of the experimental results. The weight-loss technique reveals the efficient corrosion-inhibition action of these complexes toward aluminum metal at different temperatures. Our results point to corrosion inhibition through chemical adsorption on the aluminum surface. Additionally, facile calcination of the Co and Cu complexes at 550°C yields nanosized oxides of the Co3O4, CoO, and CuO crystalline phases. The complexes show remarkable biological activities toward pathogenic bacteria and fungi. Moreover, in vitro anticancer activity of these complexes is evaluated against hepatocellular carcinoma (HepG-2). The results are correlated with molecular descriptors such as chemical potential and hardness obtained from the frontier orbitals.
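The frontier-orbital descriptors mentioned at the end of the abstract are simple functions of the HOMO and LUMO energies. A minimal sketch in the common Koopmans-type finite-difference approximation, μ ≈ (E_HOMO + E_LUMO)/2 and η ≈ (E_LUMO − E_HOMO)/2, is shown below; the orbital energies used are illustrative placeholders, not values from the paper.

```python
def reactivity_descriptors(e_homo, e_lumo):
    """Return (chemical potential, chemical hardness) in eV from the
    frontier-orbital energies, Koopmans-type approximation."""
    mu = (e_homo + e_lumo) / 2.0   # chemical potential (negative of electronegativity)
    eta = (e_lumo - e_homo) / 2.0  # chemical hardness (half the HOMO-LUMO gap)
    return mu, eta

# Hypothetical orbital energies in eV, for illustration only.
mu, eta = reactivity_descriptors(-5.8, -2.4)
print(f"mu = {mu:.2f} eV, eta = {eta:.2f} eV")
```

A smaller η (softer molecule) is generally associated with higher reactivity, which is the kind of correlation the abstract draws with the biological and anticancer results.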
Multimodal Low Resolution Face and Frontal Gait Recognition from Surveillance Video
Biometric identification using surveillance video has attracted the attention of many researchers, as it is applicable not only to robust identification but also to personalized activity monitoring. In this paper, we present a novel multimodal recognition system that extracts frontal gait and low-resolution face images from frontal walking surveillance video clips to perform efficient biometric recognition. The proposed study addresses two important issues in surveillance video that did not receive appropriate attention in the past. First, it consolidates the model-free and model-based gait feature extraction approaches to perform robust gait recognition using only the frontal view. Second, it uses a low-resolution face recognition approach that can be trained and tested using only low-resolution face information. This eliminates the need to obtain high-resolution face images to create the gallery, which is required by the majority of low-resolution face recognition techniques. Moreover, the classification accuracy on high-resolution face images is considerably higher. Previous studies on frontal gait recognition incorporate assumptions to approximate the average gait cycle; in contrast, we quantify the gait cycle precisely for each subject using only the frontal gait information. The approaches available in the literature use high-resolution images obtained in a controlled environment to train the recognition system, whereas our proposed system trains the recognition algorithm using low-resolution face images captured in an unconstrained environment. The proposed system has two components, one responsible for frontal gait recognition and the other for low-resolution face recognition. Score-level fusion is then performed to combine the results of the frontal gait recognition and the low-resolution face recognition.
Experiments conducted on the Face and Ocular Challenge Series (FOCS) dataset resulted in 93.5% rank-1 recognition for frontal gait and 82.92% rank-1 recognition for low-resolution face recognition. Score-level multimodal fusion resulted in 95.9% rank-1 recognition, which demonstrates the superiority and robustness of the proposed approach.
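The fusion of the two matchers can be sketched as follows. Raw gait and face matcher scores typically live on different scales, so this hedged sketch min-max normalizes each score vector before taking a weighted sum; the weight and score values are illustrative assumptions, not values from the paper.

```python
import numpy as np

def minmax(scores):
    """Rescale a score vector to [0, 1] so the two matchers are comparable."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

def fuse(gait_scores, face_scores, w_gait=0.6):
    """Weighted sum of normalized scores; returns the best gallery index."""
    fused = w_gait * minmax(gait_scores) + (1.0 - w_gait) * minmax(face_scores)
    return int(np.argmax(fused))

gait = [12.0, 30.0, 18.0]   # hypothetical raw gait matcher scores
face = [0.40, 0.55, 0.90]   # hypothetical face matcher scores
print(fuse(gait, face))  # → 1
```

Setting `w_gait` near 0.6 reflects the abstract's accuracy figures, where the gait matcher (93.5%) outperforms the face matcher (82.92%); the actual fusion rule and weighting used in the paper are not given here.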