Metacognition as a Predictor of Conceptual Change
Metacognitive ability - proficiency in analyzing one's own thought processes - is related to the ability to correctly gauge one's mastery of a task (Kruger, 1999; Dunning, 2003). It may also be tied to the ability to make radical conceptual changes when learning new information incongruous with prior beliefs. We hypothesized that performance on an expanded version of the Cognitive Reflection Test (Frederick, 2005), a battery of questions designed to measure metacognitive ability, would predict the extent to which undergraduate college students (N = 103) improved their understanding of evolution after a semester of college-level biology, particularly of the Darwinian principles behind natural selection such as inheritance, variation, and superfecundity. The benefits should be most pronounced in subjects displaying greater metacognitive ability whose prior knowledge incorporated flawed beliefs such as Lamarckianism or needs-based evolution. If metacognitive ability is indeed predictive of learning, it would suggest that conceptual change is facilitated by a disposition to think about one's own concepts. It would also suggest that the quality of education in fields such as biology may be improved by fostering and encouraging more reflective thinking.
Data Security Predicament in Cloud
Cloud computing is a tremendous technology that has become part of almost everyone's life. It is used in homes, in business organizations, in the banking industry, and elsewhere. Today nearly everyone uses the cloud, whether to post pictures on social networking sites or to store crucial information. Although the cloud is used in many different areas, everyone who uses cloud services faces certain associated challenges. This study enlists some of the challenges of using the cloud. It also describes some security requirements to limit threats, as well as some cloud standards.
Specifying Weight Priors in Bayesian Deep Neural Networks with Empirical Bayes
Stochastic variational inference for Bayesian deep neural networks (DNNs)
requires specifying priors and approximate posterior distributions over neural
network weights. Specifying meaningful weight priors is a challenging problem,
particularly when scaling variational inference to deeper architectures
involving high-dimensional weight spaces. We propose the MOdel Priors with
Empirical Bayes using DNN (MOPED) method to choose informed weight priors in
Bayesian neural networks. We formulate a two-stage hierarchical model: we
first find the maximum likelihood estimates of the weights with a
deterministic DNN, and then set the weight priors using an empirical Bayes
approach to infer the posterior with variational
inference. We empirically evaluate the proposed approach on real-world tasks
including image classification, video activity recognition and audio
classification with varying complex neural network architectures. We also
evaluate our proposed approach on diabetic retinopathy diagnosis task and
benchmark against state-of-the-art Bayesian deep learning techniques. We
demonstrate that the MOPED method enables scalable variational inference and
provides reliable uncertainty quantification.

Comment: To be published at the AAAI 2020 conference.
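The two-stage recipe above can be sketched as follows. This is a minimal illustration of the idea only, assuming a simple per-weight Gaussian prior whose scale is tied to the magnitude of the maximum-likelihood weights; the function name and the `delta` hyperparameter are hypothetical, not the paper's exact formulation:

```python
import numpy as np

def moped_prior(w_mle, delta=0.1):
    """Sketch of the MOPED two-stage idea: given maximum-likelihood
    weight estimates from a deterministic DNN (stage one), build an
    informed per-weight Gaussian prior N(w_mle, sigma^2), with the
    prior scale proportional to the weight magnitude. `delta` is a
    hypothetical scale hyperparameter for this illustration."""
    mu = np.asarray(w_mle, dtype=float)
    sigma = delta * np.abs(mu)   # per-weight prior standard deviation
    return mu, sigma

# Hypothetical MLE weights produced by stage one (normally obtained
# by ordinary DNN training):
mu, sigma = moped_prior([0.5, -1.2, 0.03])
```

Variational inference would then be run with this informed prior (and a posterior initialized near `mu`) instead of an uninformed zero-mean prior.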
Vascular Expansion Microscopy (VascExM): A new method for high-resolution optical imaging of the microvasculature
Purpose: The resolution of optical imaging systems is restricted by the diffraction limit. In the past
two decades, super-resolution microscopy techniques have been developed to circumvent this
limitation. However, these techniques depend on state-of-the-art technological advancements. In
2015, the Boyden Lab at MIT developed a technique called expansion microscopy (ExM) which
allows nanoscale resolution imaging of tissue samples using conventional diffraction-limited
microscopes by physically expanding the specimen. Samples are embedded within polyelectrolyte
gels that become deprotonated in a basic environment; this causes the gel to swell in solutions such as water.
The hydrogel expands, expanding the sample along with it and increasing the distance between closely
spaced structures, thereby resolving them. This thesis aims to develop a tissue processing protocol,
based on ExM, for high-resolution 3D optical imaging of the vasculature in preclinical models and
to optimize this protocol across various vascular labels.
Methods: 10-100 μm sections of the brain, liver, lung, heart, and leg muscle of C57 BALB/c mice
were individually labeled with Tomato Lectin Tx-Red, Anti-Laminin Cy3 and a BriteVu and
Galbumin-Rhodamine polymer complex to compare the vasculature pre-and post-expansion.
Morphological parameters such as mean vessel diameter, area, and volume were obtained by
vascular segmentation using IMARIS® to quantify the expansion process.
Results: Similar trends were observed post-expansion in the mean vessel diameter across the
different organs. A magnification of ~2.5x was observed in the mouse brain, leg, and liver
vasculature, while a ~1.6x magnification was observed in lung vasculature. However, due to
sampling error, expansion was not observed in the vasculature of mouse heart tissue samples.
Conclusion: We developed a new expansion protocol, VascExM, to obtain high-resolution 3D images
of the Tomato Lectin Tx-Red labelled mouse vasculature using diffraction-limited microscopes.
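The expansion-factor quantification described above can be illustrated with a minimal sketch: the linear magnification is estimated as the ratio of mean vessel diameter after versus before gel expansion. The diameters below are hypothetical values, not measurements from this thesis:

```python
def expansion_factor(pre_diameters, post_diameters):
    """Estimate the linear expansion (magnification) factor as the
    ratio of mean vessel diameter post- vs. pre-expansion."""
    pre = sum(pre_diameters) / len(pre_diameters)
    post = sum(post_diameters) / len(post_diameters)
    return post / pre

# Hypothetical segmented vessel diameters (um), pre- and post-expansion:
factor = expansion_factor([4.0, 5.0, 6.0], [10.0, 12.5, 15.0])
# factor -> 2.5, i.e. a ~2.5x magnification
```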
FRE: A Fast Method For Anomaly Detection And Segmentation
This paper presents a fast and principled approach for solving the visual
anomaly detection and segmentation problem. In this setup, we have access to
only anomaly-free training data and want to detect and identify anomalies of an
arbitrary nature on test data. We propose the application of linear statistical
dimensionality reduction techniques on the intermediate features produced by a
pretrained DNN on the training data, in order to capture the low-dimensional
subspace truly spanned by said features. We show that the \emph{feature
reconstruction error} (FRE), which is the $\ell_2$-norm of the difference
between the original feature in the high-dimensional space and the pre-image of
its low-dimensional reduced embedding, is extremely effective for anomaly
detection. Further, using the same feature reconstruction error concept on
intermediate convolutional layers, we derive FRE maps that provide pixel-level
spatial localization of the anomalies in the image (i.e. segmentation).
Experiments using standard anomaly detection datasets and DNN architectures
demonstrate that our method matches or exceeds best-in-class quality
performance, but at a fraction of the computational and memory cost required by
the state of the art. It can be trained and run very efficiently, even on a
traditional CPU.

Comment: arXiv admin note: text overlap with arXiv:2203.1042
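The FRE scoring described above can be sketched with PCA standing in for the linear dimensionality reduction. The synthetic arrays below are a stand-in for intermediate features from a pretrained DNN, and all names are illustrative, not the paper's implementation:

```python
import numpy as np

def fit_subspace(features, k):
    """Fit a k-dimensional linear subspace to anomaly-free features
    via PCA. features: (n, d) array of training features."""
    mean = features.mean(axis=0)
    _, _, vt = np.linalg.svd(features - mean, full_matrices=False)
    return mean, vt[:k]  # subspace origin and top-k directions

def fre(x, mean, components):
    """Feature reconstruction error: the norm of the difference
    between a feature vector and the pre-image of its
    low-dimensional reduced embedding."""
    z = (x - mean) @ components.T      # reduced embedding
    x_hat = z @ components + mean      # pre-image in feature space
    return float(np.linalg.norm(x - x_hat))

rng = np.random.default_rng(0)
# Hypothetical "normal" features: 200 samples that lie, by
# construction, in a 3-dimensional subspace of R^8.
normal = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 8))
mean, comps = fit_subspace(normal, k=3)

score_normal = fre(normal[0], mean, comps)             # ~0: in-subspace
score_anomaly = fre(rng.normal(size=8) * 5.0, mean, comps)  # large
```

A test point is flagged as anomalous when its FRE exceeds a threshold calibrated on the anomaly-free training scores; applying the same score per spatial location of a convolutional feature map yields the segmentation maps described above.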