Transmission losses cost allocation in a restructured electricity market environment
In recent decades, restructuring of the electricity market has taken place around the world. Owing to this restructuring (deregulation), the electrical power system has been divided into three separate categories according to function: generation companies (GENCOs), transmission companies (TRANSCOs), and distribution companies (DISCOs). The competitive environment is overseen by a non-profit entity, the independent system operator (ISO), which is responsible for system security and must ensure that the power system continues to operate in a stable and economical manner. However, restructuring affects energy transmission, and one of the resulting issues concerns power losses. To compensate for the losses, generators must generate more power. The question in a deregulated system is how to allocate transmission losses to users and charge them fairly; in the pool trading model, for instance, it is hard to trace each user's power contribution and losses on a transmission line. Moreover, users are reluctant to pay for the losses, which would leave the ISO responsible for them, and it would be unfair to place that responsibility on the ISO alone. Therefore, in this project, two methods for allocating transmission losses and their cost, the pro-rata method and the proportional sharing method, are investigated. The methods are compared in order to identify which reflects an efficient and fair way to distribute the cost of transmission losses to users. The chosen methods are tested on an IEEE bus system.
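The pro-rata method lends itself to a compact illustration. The sketch below is a minimal Python example, assuming the common convention of splitting total losses equally between generators and loads and then allocating each half in proportion to injected or withdrawn power; the bus names, megawatt figures, and the 50/50 split factor are illustrative assumptions, not values from the project itself.

```python
# Minimal sketch of pro-rata transmission-loss allocation.
# Assumption: total losses are split 50/50 between generators and
# loads, then shared within each group in proportion to power.

def pro_rata_allocation(generation, demand, total_losses, gen_share=0.5):
    """Return per-participant loss allocations in MW."""
    total_gen = sum(generation.values())
    total_dem = sum(demand.values())
    alloc = {}
    for bus, p in generation.items():
        alloc[f"gen@{bus}"] = gen_share * total_losses * p / total_gen
    for bus, p in demand.items():
        alloc[f"load@{bus}"] = (1 - gen_share) * total_losses * p / total_dem
    return alloc

# Illustrative numbers only (not from the thesis):
gens = {"bus1": 120.0, "bus2": 80.0}    # MW generated
loads = {"bus3": 110.0, "bus4": 85.0}   # MW consumed
print(pro_rata_allocation(gens, loads, total_losses=5.0))
```

Proportional sharing, by contrast, traces flows through the solved network, so it requires the power-flow solution and topology rather than just the aggregate injections used above.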
A Computational and Experimental Investigation of Lignin Metabolism in Arabidopsis thaliana.
Predominantly localized in plant secondary cell walls, lignin is a highly crosslinked, aromatic polymer that imparts structural support to plant vasculature and renders biomass recalcitrant to pretreatment techniques, impeding the economical production of biofuels. Lignin is synthesized via the phenylpropanoid pathway, in which the primary precursor phenylalanine (Phe) undergoes a series of functional modifications catalyzed by 11 enzyme families to produce p-coumaryl, coniferyl, and sinapyl alcohol, which undergo random polymerization into lignin. Several metabolic engineering efforts have aimed to alter lignin content and composition and to make biofuel feedstocks more amenable to pretreatment techniques. Despite significant advances, several questions pertaining to carbon flux distribution in the phenylpropanoid network remain unanswered. Furthermore, the complexity of the metabolic pathway and a lack of sensitive analytical tools add to the challenges of mechanistically understanding lignin synthesis. In this work, I describe improvements in analytical techniques used to characterize phenylpropanoid metabolism that have been applied to obtain a comprehensive quantitative mass balance of the phenylpropanoid pathway. Finally, machine learning and artificial intelligence were utilized to make predictions about the optimal lignin amount and composition for improving saccharification. In summary, the overarching goal of this thesis was to further the understanding of lignin metabolism in the model system Arabidopsis thaliana, employing a combination of experimental and computational strategies. First, we developed comprehensive and sensitive analytical methods based on liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS) to quantify intermediates of the phenylpropanoid pathway. Compared to existing targeted profiling techniques, the methods were capable of quantifying a wider range of phenylpropanoid intermediates, at lower concentrations, with minimal sample preparation. The technique was used to generate flux maps for wild type and mutant Arabidopsis stems that were exogenously fed 13C6-Phe. Flux maps computed in this work (i) suggest the presence of a hitherto uncharacterized alternative route to caffeic acid and lignin synthesis, (ii) shed light on flux splits at key branch points of the network, and (iii) indicate the presence of inactive pools for a number of metabolites. Finally, we present a machine-learning-based model that captures the non-linear relationship between lignin content and composition, and saccharification efficiency. A support vector machine (SVM) based regression technique was developed to predict saccharification efficiency and biomass yields as a function of lignin content and the composition of the monomers that make up lignin, namely p-coumaryl (H), coniferyl (G), and sinapyl (S) alcohol derived lignin. The model was trained on data obtained from the literature and validated on Arabidopsis mutants that were excluded from the training data set. Functional forms obtained from SVM regression were further optimized using genetic algorithms (GA) to maximize total sugar yields. Our efforts resulted in two optimal solutions with lower lignin content and, interestingly, varying H:G:S compositions that were conducive to saccharide extractability.
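The SVR-plus-GA pipeline described above can be sketched generically. The Python example below is an illustrative stand-in for the thesis workflow, not its actual code: synthetic data replaces the literature training set, the placeholder yield function and all hyperparameters are assumptions, and the genetic step is reduced to select-and-mutate for brevity.

```python
# Sketch of an SVR surrogate optimized by a simple genetic algorithm.
# Illustrative only: synthetic data stands in for the literature
# training set, and hyperparameters are unvalidated assumptions.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Features: [lignin content, H fraction, G fraction, S fraction]
X = rng.random((60, 4))
X[:, 1:] /= X[:, 1:].sum(axis=1, keepdims=True)    # enforce H+G+S = 1
y = 1.0 - X[:, 0] + 0.1 * rng.standard_normal(60)  # placeholder sugar yield

model = SVR(kernel="rbf", C=10.0).fit(X, y)

# Tiny GA: evolve candidate (content, H, G, S) vectors to maximize
# the sugar yield predicted by the fitted SVR surrogate.
pop = rng.random((100, 4))
for _ in range(50):
    pop[:, 1:] /= pop[:, 1:].sum(axis=1, keepdims=True)
    fitness = model.predict(pop)
    parents = pop[np.argsort(fitness)[-20:]]          # keep the top 20
    children = (parents[rng.integers(0, 20, 100)] +
                0.05 * rng.standard_normal((100, 4)))  # mutation only
    pop = np.clip(children, 1e-6, 1.0)
pop[:, 1:] /= pop[:, 1:].sum(axis=1, keepdims=True)
best = pop[np.argmax(model.predict(pop))]
print("predicted-optimal [content, H, G, S]:", best)
```

The surrogate-then-optimize structure is the relevant design point: the SVR supplies a cheap, differentiable-free fitness landscape that the GA can search without running further experiments.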
Methods for functional characterization of transcription factor binding sites in bacteria
Thesis (Ph.D.)--Boston University
Understanding gene regulation is necessary to gain insight into and model important cellular processes, including disease. Our current inability to combat many diseases is partly due to an incomplete understanding of gene circuitry. The regulatory mechanisms of Mycobacterium tuberculosis, the causative agent of tuberculosis, are not well understood. A transcriptional regulatory network (TRN), a network comprising transcription factors (TFs) and their target genes, provides a powerful framework for analyzing the complete regulatory system. Chromatin immunoprecipitation followed by next-generation sequencing (ChIP-Seq) is becoming the method of choice for identifying genome-wide transcription factor binding sites (TFBS). Therefore, we use ChIP-Seq on known transcription factors to reconstruct the TRN of Mycobacterium tuberculosis (Mtb) and other bacteria. ChIP-Seq reveals transcription factor binding sites but does not provide any information on how each TF regulates its target genes. Techniques to gain more insight into these mechanisms include microarrays, knockout studies, and qPCR, but they provide a static view of the network. They also report at the RNA level and mask regulation happening at the protein level.
Therefore, in order to understand the mechanism of regulation at the protein level and to capture the network dynamics, we built a synthetic gene circuit in Mycobacterium smegmatis and defined input-output relationships between key TFs and their target promoters. We validated this system on kstR, a TF that is a known repressor. KstR regulates genes involved in cholesterol degradation and has been shown to de-repress itself and its regulon genes in the presence of cholesterol as well as in hypoxia, where there are no exogenous lipids. We explored the possibility that other by-products may be responsible for the de-repression of kstR and its regulon. The data suggest that propionyl-CoA, a by-product of the degradation of cholesterol, odd-numbered fatty acids, and branched-chain amino acids, causes the de-repression of kstR and its regulon.
ChIP-Seq data on transcription factors in Mtb as well as E. coli show that many TFBS are located immediately upstream of open reading frame start sites, consistent with our understanding of prokaryotic gene regulation. However, the data also show that many TFBS are located inside and downstream of open reading frames. One of our hypotheses is that these novel TFBS might be indirect binding sites that mediate chromatin looping. Therefore, we developed a 3C (Chromosome Conformation Capture) method to understand regulation in the third dimension by analyzing chromosomal interactions. We optimized the protocol in E. coli and validated it using a known interaction mediated by the repressor GalR. We then identified two regions, 20 kbp apart, containing TFBS of StpA, a nucleoid-associated protein, that are not directly involved in regulating their downstream genes. Data from a 3C experiment on an E. coli strain with inducible StpA suggest that these two regions interact by an unknown mechanism. However, the interaction was not lost when a similar experiment was done in an StpA knockout strain, suggesting that StpA may not be the sole TF responsible for this interaction. Lastly, we developed a Hi-C method on E. coli genomic DNA to identify long-range interactions in a genome-wide and unbiased manner.
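The upstream/intragenic/downstream distinction drawn from the ChIP-Seq data can be made concrete with a small sketch. The Python snippet below classifies peak centers against the coordinates of a single forward-strand ORF; the 300 bp "immediately upstream" window, the coordinates, and the example peaks are assumptions for illustration, not values from the thesis.

```python
# Classify TFBS peak centers relative to an ORF on the forward strand.
# The 300 bp "immediately upstream" window is an illustrative assumption.
UPSTREAM_WINDOW = 300

def classify_tfbs(peak, orf_start, orf_end):
    """Return the location class of one peak center for one ORF."""
    if orf_start - UPSTREAM_WINDOW <= peak < orf_start:
        return "upstream"      # candidate promoter-proximal site
    if orf_start <= peak <= orf_end:
        return "intragenic"    # inside the open reading frame
    if peak > orf_end:
        return "downstream"
    return "distal"

# Hypothetical peak centers against an ORF spanning 1000-2500 bp:
for p in (850, 1700, 2600, 400):
    print(p, classify_tfbs(p, 1000, 2500))
```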
Improving Mathematics Teachers' Competence in Applying the PAKEM Model through Clinical Supervision
This school action research aims to determine the change in mathematics teachers' competence in applying the PAKEM model after receiving clinical supervision. The data were analyzed qualitatively, drawing on descriptive data. The study produced two findings: (1) overall, mathematics teachers' competence in applying the PAKEM model in the learning process can be improved; (2) clinical supervision of teachers can improve mathematics teachers' competence in applying the PAKEM model in the learning process.
Interannual Variability in American Lobster Settlement: Correlations with Sea Surface Temperature, Wind Stress and River Discharge
Recruitment to benthic marine populations is fundamentally a biophysical problem. The American Lobster Settlement Index is an annual diver-based survey of the young-of-year American lobsters (Homarus americanus) found in inshore nurseries in New England, USA and Atlantic Canada at the end of the postlarval settlement season. The considerable interannual variability in the settlement index suggests that environmental factors play an important role in regulating planktonic larval supply and transport. In this study, I focused on the longest settlement time series from three oceanographically contrasting regions: Midcoast Maine, coastal Rhode Island, and the lower Bay of Fundy. Sampling in these regions was initiated in 1989, 1990, and 1991, respectively. I evaluated the correlation of inshore lobster settlement with sea surface temperature time series from satellites; wind data from buoys and land stations; and river discharge data from inland gauge stations. Correlations were computed between the annual lobster settlement indices and monthly environmental metrics at time lags of up to three months prior to the month of settlement sampling, reaching back to just before larvae hatch into the water column. Interannual variability in lobster settlement correlated strongly with satellite-derived sea surface temperature anomalies (SSTa) and wind stress, but exhibited a weak association with river discharge. Statistically significant correlations were restricted to the two-month window when larvae and postlarvae are in the water column. Correlations of the settlement index with monthly SSTa mapped to recognizable features on the sea surface. For example, the Rhode Island lobster settlement index correlated positively with SSTa over Georges Bank up to two months prior to settlement sampling. The Rhode Island settlement index also correlated with the alongshore component of wind stress over Georges Bank for the month of settlement sampling. Midcoast Maine lobster settlement correlated weakly with SSTa, but a strong positive correlation was found with alongshore wind stress during the month prior to settlement sampling. Only Midcoast Maine lobster settlement showed a negative association with local monthly river discharge. Bay of Fundy lobster settlement was positively correlated with SSTa and with cross-shore wind stress at two of the closest wind stations one month prior to settlement sampling. In short, SSTa and wind stress proved to be strong environmental correlates of lobster settlement in this analysis. All significant relationships consistently fell within two months of settlement sampling, a time when larvae and postlarvae occupy the water column. These results suggest that satellite SSTa data and wind data from multiple stations may be useful in predicting interannual fluctuations in lobster settlement, and may therefore lead to a better understanding of the mechanisms influencing recruitment variability.
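The lagged-correlation screen described here is simple enough to sketch. The Python example below computes Pearson correlations between an annual settlement index and one monthly environmental series at lags of zero to three months before the sampling month; the random data and the assumed September sampling month are placeholders, not values from the study.

```python
# Sketch of the lagged-correlation screen: annual settlement indices
# vs. a monthly environmental series at lags of 0-3 months before the
# settlement-sampling month. Data and the September sampling month
# are illustrative assumptions.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
years = np.arange(1989, 2011)
settlement = rng.random(len(years))        # one annual index per year
# monthly_sst[year_idx, month_idx]: monthly SSTa, Jan=0 .. Dec=11
monthly_sst = rng.standard_normal((len(years), 12))

SAMPLING_MONTH = 8  # September (0-based); assumed for illustration
for lag in range(4):
    m = SAMPLING_MONTH - lag
    r, p = pearsonr(settlement, monthly_sst[:, m])
    print(f"lag {lag} mo (month {m + 1}): r={r:+.2f}, p={p:.2f}")
```

In the study itself, the same screen is repeated per region and per environmental metric, with significance judged against the larval and postlarval window noted above.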
THINK HOSTEL MAINTENANCE SYSTEM (THMS)
This study deals with the complete development of a building maintenance system equipped with several features relevant to producing an enterprise problem-reporting system. The objective of this project is to easily record and identify which areas of the V5 buildings need attention. Currently, maintenance of the new V5 buildings is handled traditionally, with problem reporting carried out through a messy paper-based system. The scope of the study is the implementation of dynamic graphics in problem notification and reporting. Dynamic graphics are widely used in other web-based systems, especially weather and natural-disaster prediction systems. Here, the implementation of dynamic graphics is discussed in detail, together with the underlying theory and technical requirements. The author has chosen the System Development Life Cycle (SDLC) with rapid prototyping as the methodology for the project management framework. The author analyzed user preferences and, based on the results of that analysis, derived a set of features to be embedded in the system, including dynamic graphic notifications and a text-based reporting process. The results and discussion cover the system modules, the system's user interface, and testing. In conclusion, the extensive use of mapping and dynamic graphic notifications could definitely improve the problem-reporting service in the proposed system.
Text-to-Image Diffusion Models are Zero-Shot Classifiers
The excellent generative capabilities of text-to-image diffusion models
suggest they learn informative representations of image-text data. However,
what knowledge their representations capture is not fully understood, and they
have not been thoroughly explored on downstream tasks. We investigate diffusion
models by proposing a method for evaluating them as zero-shot classifiers. The
key idea is using a diffusion model's ability to denoise a noised image given a
text description of a label as a proxy for that label's likelihood. We apply
our method to Imagen, using it to probe fine-grained aspects of Imagen's
knowledge and comparing it with CLIP's zero-shot abilities. Imagen performs
competitively with CLIP on a wide range of zero-shot image classification
datasets. Additionally, it achieves state-of-the-art results on shape/texture
bias tests and can successfully perform attribute binding while CLIP cannot.
Although generative pre-training is prevalent in NLP, visual foundation models
often use other methods such as contrastive learning. Based on our findings, we
argue that generative pre-training should be explored as a compelling
alternative for vision and vision-language problems.
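The scoring rule the abstract describes, using text-conditioned denoising error as a label-likelihood proxy, can be sketched compactly. In the Python example below, `denoise_fn` is a hypothetical stand-in for a text-conditioned diffusion model such as Imagen (which is not publicly available), the linear noising schedule is a simplification of a real forward process, and the prompt template and trial count are assumptions.

```python
# Minimal sketch of diffusion-based zero-shot classification:
# score each label by how well text-conditioned denoising recovers
# the injected noise. `denoise_fn` is a hypothetical stand-in for a
# text-conditioned noise-prediction model such as Imagen.
import torch

def zero_shot_classify(image, labels, denoise_fn, n_trials=8):
    """Return the label whose conditioning gives the lowest denoising error."""
    errors = {label: 0.0 for label in labels}
    for _ in range(n_trials):
        t = torch.rand(1)                      # random noise level in [0, 1)
        noise = torch.randn_like(image)
        noised = (1 - t) * image + t * noise   # simplified forward process
        for label in labels:
            pred = denoise_fn(noised, t, prompt=f"a photo of a {label}")
            errors[label] += torch.mean((pred - noise) ** 2).item()
    return min(errors, key=errors.get)
```

Averaging the error over several noise draws, as done here, trades compute for a less noisy likelihood proxy, which mirrors the cost-accuracy tension the paper's method faces in practice.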