The Revolting Monster - A Consideration of Existentialist Themes in Mary Shelley's Frankenstein Through a Comparison to Albert Camus' The Stranger
This Master's thesis analyzes key themes and ideas in Mary Shelley's Frankenstein through an existentialist lens, made possible by a comparison to themes and ideas in Albert Camus' The Stranger. I aim to contribute to my field by completing a comparison that has been gestured at since the late 1960s, when conversations about British Romanticism and Existentialism were still common. The purpose of my first chapter is to elucidate a new argument about the relationship between these two novels. There is a discernible element of Camusian Revolt exhibited by the Creature in some of the most riveting passages of Frankenstein; this element becomes all the clearer when placed in conversation with the actions of Meursault, the protagonist of The Stranger. Through more specific examples, and a strong reliance on the historical context of both novels, I draw connections that go beyond thematic similarities and show the relevance of these ideas to readers in our time. The second chapter provides historical context for understanding the reception of Frankenstein and the consequences of the novel for ruling bodies interested in maintaining a permanent underclass within the population. The third chapter examines the species of Revolt within Frankenstein by comparing it to The Stranger in order to reach conclusions about the significance of these themes today. The final chapter is an observation about the behavior of revolt modeled by the authors discussed in this thesis. It proposes that the act of writing and creating art is in itself an act of revolt, which is the true message the authors intended to convey. It also argues that the medium of the novel is the most effective vehicle for revolt because it taps into human experience in a way no other kind of artwork can.
Multi-Class Classification for Identifying JPEG Steganography Embedding Methods
Over 725 steganography tools are available on the Internet, each providing a method for covert transmission of secret messages. This research presents four steganalysis advancements that result in an algorithm identifying the steganography tool used to embed a secret message in a JPEG image file. The algorithm includes feature generation, feature preprocessing, multi-class classification, and classifier fusion. The first contribution is a new feature generation method based on the decomposition of the discrete cosine transform (DCT) coefficients used in the JPEG image encoder. The generated features are better suited to identifying discrepancies in each area of the decomposed DCT coefficients. Second, classification accuracy is further improved by a feature ranking technique, developed in the preprocessing stage, that operates in the kernel space of the kernel Fisher's discriminant (KFD) and support vector machine (SVM) classifiers during training. Third, for the KFD and SVM two-class classifiers, a classification tree is designed from the kernel space to provide a multi-class classification solution for both methods. Fourth, by analyzing a set of classifiers, signature detectors, and multi-class classification methods, a classifier fusion system is developed that increases the accuracy of identifying the embedding method used to generate a stego image. Tested on stego images created with research and commercial JPEG steganography techniques (the F5, JP Hide, JSteg, Model-based, Model-based Version 1.2, OutGuess, Steganos, StegHide, and UTSA embedding methods), the system shows a statistically significant increase in classification accuracy of 5%. In addition, this system provides a solution for identifying steganographic fingerprints as well as the ability to incorporate future multi-class classification tools.
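The multi-class stage described above, a tree of two-class decisions, can be sketched in a few lines. This is an illustrative toy under loud assumptions: nearest-centroid rules stand in for the KFD/SVM binary classifiers, synthetic Gaussian clusters stand in for the DCT-decomposition features, and the method names are just a subset of the embedding methods listed in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
methods = ["F5", "JSteg", "OutGuess", "StegHide"]  # illustrative subset

# Stand-in features: one synthetic cluster per embedding method,
# in place of real DCT-decomposition features of stego images.
centers = rng.normal(scale=5.0, size=(4, 8))
X = np.vstack([c + rng.normal(size=(50, 8)) for c in centers])
y = np.repeat(np.arange(4), 50)

def build_tree(classes):
    # Recursively split the label set; each internal node holds a
    # two-class decision (here a nearest-centroid rule between the
    # two halves, standing in for a trained KFD/SVM classifier).
    if len(classes) == 1:
        return classes[0]
    left, right = classes[: len(classes) // 2], classes[len(classes) // 2:]
    c_left = X[np.isin(y, left)].mean(axis=0)
    c_right = X[np.isin(y, right)].mean(axis=0)
    return (c_left, c_right, build_tree(left), build_tree(right))

def classify(node, x):
    # Walk the tree of binary decisions down to a single class label.
    if isinstance(node, (int, np.integer)):
        return node
    c_left, c_right, lt, rt = node
    nearer_left = np.linalg.norm(x - c_left) <= np.linalg.norm(x - c_right)
    return classify(lt if nearer_left else rt, x)

tree = build_tree(list(range(len(methods))))
preds = np.array([classify(tree, x) for x in X])
print("training accuracy:", (preds == y).mean())
```

The point of the tree layout is that any two-class method (KFD or SVM in the paper) becomes a multi-class method by routing each sample through log2(K) binary decisions.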
Pathologies of Neural Models Make Interpretations Difficult
One way to interpret neural model predictions is to highlight the most
important input features---for example, a heatmap visualization over the words
in an input sentence. In existing interpretation methods for NLP, a word's
importance is determined by either input perturbation---measuring the decrease
in model confidence when that word is removed---or by the gradient with respect
to that word. To understand the limitations of these methods, we use input
reduction, which iteratively removes the least important word from the input.
This exposes pathological behaviors of neural models: the remaining words
appear nonsensical to humans and are not the ones determined as important by
interpretation methods. As we confirm with human experiments, the reduced
examples lack information to support the prediction of any label, but models
still make the same predictions with high confidence. To explain these
counterintuitive results, we draw connections to adversarial examples and
confidence calibration: pathological behaviors reveal difficulties in
interpreting neural models trained with maximum likelihood. To mitigate their
deficiencies, we fine-tune the models by encouraging high entropy outputs on
reduced examples. Fine-tuned models become more interpretable under input
reduction without accuracy loss on regular examples.
Comment: EMNLP 2018 camera ready
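The input-reduction loop the abstract describes (repeatedly drop the word whose removal least hurts model confidence) can be sketched compactly. This toy uses a hand-built bag-of-words sentiment scorer in place of a neural model; the vocabulary, weights, and confidence threshold are invented for illustration.

```python
import math

# Toy "model": a bag-of-words score for the positive label (invented weights).
weights = {"great": 2.0, "movie": 0.1, "plot": 0.0, "really": 0.3, "the": 0.0}

def confidence(words):
    # Sigmoid of the summed word scores, as a stand-in for model confidence.
    score = sum(weights.get(w, 0.0) for w in words)
    return 1.0 / (1.0 + math.exp(-score))

def input_reduction(words, threshold=0.9):
    # Iteratively remove the word whose removal keeps confidence highest,
    # stopping once no removal keeps the model confident.
    words = list(words)
    while len(words) > 1:
        candidates = [(confidence(words[:i] + words[i + 1:]), i)
                      for i in range(len(words))]
        best_conf, best_i = max(candidates)
        if best_conf < threshold:
            break
        words.pop(best_i)
    return words

print(input_reduction(["the", "movie", "had", "a", "really", "great", "plot"]))
```

The reduced input that survives is typically a short fragment that still scores confidently, mirroring the paper's observation that such fragments can look nonsensical to humans.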
Oncogenic mutation profiling in new lung cancer and mesothelioma cell lines
Elucidating Conserved Transcriptional Networks Underlying Pesticide Exposure and Parkinson's Disease: A Focus on Chemicals of Epidemiological Relevance
While a number of genetic mutations are associated with Parkinson's disease (PD), it is also widely acknowledged that the environment plays a significant role in the etiology of neurodegenerative diseases. Epidemiological evidence suggests that occupational exposure to pesticides (e.g., dieldrin, paraquat, rotenone, maneb, and ziram) is associated with a higher risk of developing PD in susceptible populations. Within dopaminergic neurons, environmental chemicals can have an array of adverse effects resulting in cell death, such as aberrant redox cycling and oxidative damage, mitochondrial dysfunction, unfolded protein response, ubiquitin-proteasome system dysfunction, neuroinflammation, and metabolic disruption. More recently, our understanding of how pesticides affect cells of the central nervous system has been strengthened by computational biology. New insight has been gained into the transcriptional and proteomic networks, and the metabolic pathways, perturbed by pesticides. These networks and cell signaling pathways constitute potential therapeutic targets for intervention to slow or mitigate neurodegenerative diseases. Here we review the epidemiological evidence supporting a role for specific pesticides in the etiology of PD and identify molecular profiles among these pesticides that may contribute to the disease. Using the Comparative Toxicogenomics Database, we compared the transcripts regulated by these pesticides to those regulated by the PD-associated neurotoxicant MPTP (1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine). While many of these transcripts are already established as PD-related (alpha-synuclein, caspases, leucine-rich repeat kinase 2, and parkin), lesser-studied targets have emerged as "pesticide/PD-associated transcripts" [e.g., phosphatidylinositol glycan anchor biosynthesis class C (Pigc), allograft inflammatory factor 1 (Aif1), TIMP metallopeptidase inhibitor 3, and DNA damage inducible transcript 4].
We also compared pesticide-regulated genes to a recent meta-analysis of genome-wide association studies in PD that revealed new genetic risk alleles; the pesticides under review regulated the expression of many of these genes (e.g., ELOVL fatty acid elongase 7, ATPase H+ transporting V0 subunit a1, and bridging integrator 3). The significance is that these proteins may contribute to pesticide-related increases in PD risk. This review collates information on transcriptome responses to PD-associated pesticides to develop a mechanistic framework for quantifying PD risk with exposures.
Wide field CO J = 3→2 mapping of the Serpens Cloud Core
Context. Outflows provide indirect means to gain insight into diverse phenomena associated with star formation. On the scale of individual protostellar cores, outflows combined with intrinsic core properties can be used to study the mass accretion/ejection process of heavily embedded protostellar sources.
Methods. An area of 460"x230" of the Serpens cloud core has been mapped in 12CO J = 3→2 with the HARP-B heterodyne array at the James Clerk Maxwell Telescope. J = 3→2 observations are more sensitive tracers of hot outflow gas than lower-J CO transitions; combined with the high sensitivity of the HARP-B receptors, outflows are sharply outlined, enabling their association with individual protostellar cores.
Results. Most of the ~20 observed outflows are found to be associated with known protostellar sources in bipolar or unipolar configurations. All but two outflow/core pairs in our sample have a projected orientation spanning roughly NW-SE. The overall momentum driven by outflows in Serpens lies between 3.2 and 5.1 x 10^(-1) M☉ km s^(-1), the kinetic energy between 4.3 and 6.7 x 10^(43) erg, and the momentum flux between 2.8 and 4.4 x 10^(-4) M☉ km s^(-1) yr^(-1). Bolometric luminosities of protostellar cores based on Spitzer photometry are found to be up to an order of magnitude lower than previous estimates derived from IRAS/ISO data.
Conclusions. We confirm the validity of the existing correlations between momentum flux and bolometric luminosity of Class I sources for the homogeneous Serpens sample, though we suggest that they should be revised by a shift to lower luminosities. All protostars classified as Class 0 sources stand well above the known Class I correlations, indicating a decline in momentum flux between the two classes.
Comment: 15 pages, 10 figures, accepted for publication in A&A
Robust methods for purification of histones from cultured mammalian cells with the preservation of their native modifications
Post-translational modifications (PTMs) of histones play a role in modifying chromatin structure for DNA-templated processes in the eukaryotic nucleus, such as transcription, replication, recombination and repair; thus, histone PTMs are considered major players in the epigenetic control of these processes. Linking specific histone PTMs to gene expression is an arduous task requiring large amounts of highly purified and natively modified histones to be analyzed by various techniques. We have developed robust and complementary procedures, which use strong protein denaturing conditions and yield highly purified core and linker histones from unsynchronized proliferating, M-phase arrested and butyrate-treated cells, fully preserving their native PTMs without using enzyme inhibitors. Cell hypotonic swelling and lysis, nuclei isolation/washing and chromatin solubilization under mild conditions are bypassed to avoid compromising the integrity of histone native PTMs. As controls for our procedures, we tested the most widely used conventional methodologies and demonstrated that they indeed lead to drastic histone dephosphorylation. Additionally, we have developed methods for preserving acid-labile histone modifications by performing non-acid extractions to obtain highly purified H3 and H4. Importantly, isolation of histones H3, H4 and H2A/H2B is achieved without the use of HPLC. Functional supercoiling assays reveal that both hyper- and hypo-phosphorylated histones can be efficiently assembled into polynucleosomes. Notably, the preservation of fully phosphorylated mitotic histones and their assembly into polynucleosomes should open new avenues to investigate an important but overlooked question: the impact of mitotic phosphorylation in chromatin structure and function.
Characterization of subsurface media from locations up- and down-gradient of a uranium-contaminated aquifer.
Processing sediment to accurately characterize spatially resolved depth profiles of geophysical and geochemical properties, along with signatures of microbial density and activity, remains a challenge, especially in complex contaminated areas. This study processed cores from two sediment boreholes, one background and one contaminated, together with the surrounding groundwater. Fresh core sediments were compared by depth to capture changes in sediment structure, sediment minerals, biomass, and pore water geochemistry in terms of major and trace elements, including pollutants, cations, anions, and organic acids. Soil porewater samples were matched to groundwater level, flow rate, and preferential flows, and compared to homogenized groundwater-only samples from neighboring monitoring wells. Groundwater analysis of nearby wells revealed only high sulfate and nitrate concentrations, whereas the same analysis of depth-resolved sediment pore water could suggest zones rich in sulfate- and nitrate-reducing bacteria, based on decreased sulfate and nitrate concentrations and the production of reduced by-products that could not be seen in the groundwater samples. Positive correlations among porewater content, total organic carbon, trace metals, and clay minerals revealed a more complicated relationship among contaminants, sediment texture, groundwater table, and biomass. The fluctuating capillary interface had high concentrations of Fe- and Mn-oxides combined with trace elements including U, Th, Sr, Ba, Cu, and Co. This suggests that the mobility of potentially hazardous elements, sediment structure, and biogeochemical factors are linked in their impact on microbial communities, emphasizing that solid interfaces play an important role in determining the abundance of bacteria in the sediments.