Some aspects of reconstruction using a scalar field in f(T) Gravity
General relativity characterizes gravity as a geometric property imparted to spacetime by massive objects, while teleparallel gravity achieves the same results, at the level of the equations, by taking a torsional perspective of gravity. Like f(R) theory, teleparallel gravity can be generalized to f(T), but the resulting field equations are inherently distinct: the f(T) equations are second order, whereas the f(R) equations turn out to be fourth order. In the present work, a minimally coupled scalar field is investigated in the f(T) gravity context for several forms of the scalar field potential. A number of new f(T) solutions are found for these potentials, and their respective state parameters are also examined.

Comment: 22 pages, 19 figures, to appear in EPJ
All Quiet on the Domestic Front The Household Exemption, Private and Public Spheres, and Social Media: The Third Theater of the Privacy Wars
In 1995, the European Union adopted the Data Protection Directive, the principal statute governing data privacy within the E.U. In its so-called Household Exemption, the Directive excludes “natural persons [acting] in the course of a purely personal or household activity” from any legal obligation to abide by data protection laws in the E.U., an inconsequential exemption in 1995 that has since become a key cog in the debate over individual privacy. Technological innovation over the past twenty years has radically expanded the private individual’s capacity for processing personal data, affording natural persons many of the powers previously restricted to professionals and corporations. Problems have arisen from the misinformed view that those new powers of the individual should fall under the Household Exemption. The common thread is a misconception of what constitutes the sphere of private life that the Exemption is meant to protect. At the crux of the matter is a lack of definition as to what constitutes a purely personal or household activity in this age of increased individual processing power. In this paper, I shall take a deep dive into the history of the Household Exemption’s formation, ultimately arguing that the Exemption’s sole focus is the protection of the individual’s private life. With that insight in mind, I shall examine the ways in which the Exemption has come to be misinterpreted, finishing with a suggested modification of the Household Exemption intended to remove all interpretive doubt. While not propounded to be a decisive, flawless resolution of the issue, I hope that my proposal and the underlying work, at a minimum, add an original and unique historical perspective to the discourse.
Computational Methods for Parameter Estimation in Climate Models
Intensive computational methods have been used by Earth scientists in a wide range of problems in data inversion and uncertainty quantification, such as earthquake epicenter location and climate projections. To quantify the uncertainties resulting from a range of plausible model configurations it is necessary to estimate a multidimensional probability distribution. The computational cost of estimating these distributions for geoscience applications is impractical using traditional methods such as Metropolis/Gibbs algorithms, as simulation costs limit the number of experiments that can be obtained reasonably. Several alternate sampling strategies have been proposed that could improve on the sampling efficiency, including Multiple Very Fast Simulated Annealing (MVFSA) and Adaptive Metropolis algorithms. The performance of these proposed sampling strategies is evaluated with a surrogate climate model that is able to approximate the noise and response behavior of a realistic atmospheric general circulation model (AGCM). The surrogate model is fast enough that its evaluation can be embedded in these Monte Carlo algorithms. We show that adaptive methods can be superior to MVFSA in approximating the known posterior distribution with fewer forward evaluations. However, the adaptive methods can also be limited by inadequate sample mixing. The Single Component and Delayed Rejection Adaptive Metropolis algorithms were found to resolve these limitations, although challenges remain in approximating multi-modal distributions. The results show that these advanced methods of statistical inference can provide practical solutions to the climate model calibration problem and to challenges in quantifying climate projection uncertainties. The computational methods would also be useful for problems outside climate prediction, particularly those where sampling is limited by the availability of computational resources.

Funding: National Science Foundation OCE-0415251; CONACyT-Mexico 159764; Institute for Geophysics
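As a concrete illustration of the adaptive idea (not the dissertation's actual code), here is a minimal Adaptive Metropolis sampler in the spirit of Haario et al. (2001). The Gaussian `log_post` target, the `adapt_start` warm-up length, and the 2.4²/d proposal scaling are illustrative assumptions standing in for the far more expensive surrogate climate model:

```python
import numpy as np

def log_post(x):
    # Hypothetical target: a standard 2-D Gaussian posterior standing in
    # for the surrogate climate model's likelihood surface.
    x = np.asarray(x, dtype=float)
    return -0.5 * np.dot(x, x)

def adaptive_metropolis(log_post, x0, n_steps, adapt_start=500, eps=1e-6, seed=0):
    # Adaptive Metropolis: after a warm-up period, the proposal covariance
    # is re-estimated from the chain's own history, scaled by 2.4^2 / d.
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    d = x.size
    sd = 2.4 ** 2 / d
    cov = np.eye(d)
    lp = log_post(x)
    samples = np.empty((n_steps, d))
    for i in range(n_steps):
        if i >= adapt_start:
            cov = sd * (np.cov(samples[:i].T) + eps * np.eye(d))
        prop = rng.multivariate_normal(x, cov)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
            x, lp = prop, lp_prop
        samples[i] = x
    return samples
```

Because the proposal shape is learned from the chain itself, the sampler needs no hand-tuned step size, which is the property that makes it attractive when each forward evaluation is expensive.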
Automatic Update of Airport GIS by Remote Sensing Image Analysis
This project investigates ways to automatically update Geographic Information Systems (GIS) for airports by analysis of Very High Resolution (VHR) remote sensing images. These GIS databases map the physical layout of an airport by representing a broad range of features (such as runways, taxiways and roads) as georeferenced vector objects. Updating such systems therefore involves both automatic detection of relevant objects from remotely sensed images and comparison of these objects between bi-temporal images. The size of the VHR images and the diversity of the object types to be captured in the GIS databases make this a very large and complex problem. Therefore we split it into smaller parts which can be framed as instances of image processing problems. The aim of this project is to apply a range of methodologies to these problems and compare their results, providing quantitative data where possible. In this report, we devote a chapter to each sub-problem that was focussed on.
Chapter 1 begins by introducing the background and motivation of the project, and describes the problem in more detail.
Chapter 2 presents a method for detecting and segmenting runways, by detecting their distinctive markings and feeding them into a modified Hough transform. The algorithm was tested on a dataset of six bi-temporal remote sensing image pairs and validated against manually generated ground-truth GIS data, provided by Jeppesen.
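A bare-bones accumulator-style Hough transform, a deliberately simplified stand-in for the modified transform described above (the array shapes and (rho, theta) parameterisation here are generic assumptions, not the report's implementation), can be sketched as:

```python
import numpy as np

def hough_lines(edges, n_theta=180):
    # edges: binary 2-D array marking candidate runway-marking pixels.
    # Returns the (rho, theta) vote accumulator; strong straight lines
    # show up as high-count cells.
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))            # max possible |rho|
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag, n_theta), dtype=int)
    ys, xs = np.nonzero(edges)
    for j, theta in enumerate(thetas):
        # rho = x*cos(theta) + y*sin(theta), shifted so indices are >= 0
        rhos = np.round(xs * np.cos(theta) + ys * np.sin(theta)).astype(int) + diag
        np.add.at(acc, (rhos, j), 1)
    return acc, thetas, diag
```

Each edge pixel votes for every line that could pass through it; collinear marking pixels reinforce a single accumulator cell, which is why the transform is robust to gaps and noise in the detected markings.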
Chapter 3 investigates co-registration of bi-temporal images, as a necessary precursor to most direct change detection algorithms. Chapter 4 then tests a range of bi-temporal change detection algorithms (some standard, some novel) on co-registered images of airports, with the aim of producing a change heat-map which may assist a human operator in rapidly focussing attention on areas that have changed significantly.
Chapter 5 explores a number of approaches to detecting curvilinear AMDB features such as taxilines and stopbars, by means of enhancing such features and suppressing others, prior to thresholding. Finally, in Chapter 6 we develop a method for distinguishing between AMDB lines and other curvilinear structures that may occur in an image, by analysing the connectivity between such features and the runways.
Functions of cell surface galectin-glycoprotein lattices
Programmed remodeling of cell surface glycans by the sequential action of specific glycosyltransferases can control biological processes by generating or masking ligands for endogenous lectins. Galectins, a family of animal lectins with affinity for beta-galactosides, can form multivalent complexes with cell surface glycoconjugates and deliver a variety of intracellular signals to modulate cell activation, differentiation, and survival. Recent efforts involving genetic or biochemical manipulation of O-glycosylation and N-glycosylation pathways, as well as blockade of the synthesis of endogenous galectins, have illuminated essential roles for galectin-glycoprotein lattices in the control of biological processes including receptor turnover and endocytosis, host-pathogen interactions, and immune cell activation and homeostasis.

Fil: Rabinovich, Gabriel Adrián. Consejo Nacional de Investigaciones Científicas y Técnicas. Instituto de Biología y Medicina Experimental. Fundación de Instituto de Biología y Medicina Experimental. Instituto de Biología y Medicina Experimental; Argentina. Consejo Nacional de Investigaciones Científicas y Técnicas. Oficina de Coordinación Administrativa Ciudad Universitaria. Instituto de Química Biológica de la Facultad de Ciencias Exactas y Naturales. Universidad de Buenos Aires. Facultad de Ciencias Exactas y Naturales. Instituto de Química Biológica de la Facultad de Ciencias Exactas y Naturales; Argentina
Fil: Toscano, Marta Alicia. Consejo Nacional de Investigaciones Científicas y Técnicas. Instituto de Biología y Medicina Experimental. Fundación de Instituto de Biología y Medicina Experimental. Instituto de Biología y Medicina Experimental; Argentina
Fil: Jackson, Shawn S.. University of Maryland; Estados Unidos
Fil: Vasta, Gerardo R.. University of Maryland; Estados Unidos
Cooperativity and Stability in a Langevin Model of Protein Folding
We present two simplified models of protein dynamics based on Langevin's
equation of motion in a viscous medium. We explore the effect of the potential
energy function's symmetry on the kinetics and thermodynamics of simulated
folding. We find that an isotropic potential energy function produces, at best,
a modest degree of cooperativity. In contrast, a suitable anisotropic potential
energy function delivers strong cooperativity.

Comment: 45 pages, 16 figures, 2 tables. LaTeX. Submitted to the Journal of Chemical Physics
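The class of dynamics underlying such models can be illustrated with a one-dimensional overdamped Langevin integrator (a generic Euler-Maruyama sketch, not the authors' model; the harmonic potential and all parameter values are illustrative assumptions):

```python
import numpy as np

def langevin_1d(grad_U, x0, n_steps, dt=1e-3, gamma=1.0, kT=1.0, seed=0):
    # Overdamped Langevin dynamics in a viscous medium, integrated with
    # the Euler-Maruyama scheme:
    #   dx = -(1/gamma) * dU/dx * dt + sqrt(2*kT*dt/gamma) * N(0, 1)
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps + 1)
    x[0] = x0
    noise_amp = np.sqrt(2.0 * kT * dt / gamma)
    for i in range(n_steps):
        x[i + 1] = x[i] - grad_U(x[i]) / gamma * dt + noise_amp * rng.standard_normal()
    return x

# Illustrative isotropic (harmonic) potential U(x) = x^2, so dU/dx = 2x.
traj = langevin_1d(lambda x: 2.0 * x, x0=3.0, n_steps=50000)
```

For this harmonic case the stationary distribution is Gaussian with variance kT/k; replacing `grad_U` with an anisotropic, multi-minimum potential is what gives folding-like two-state behavior.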
Machine Learning Advances for Practical Problems in Computer Vision
Convolutional neural networks (CNNs) have become the de facto standard for computer vision tasks, due to their unparalleled performance and versatility. Although deep learning removes the need for extensive hand-engineered features for every task, real-world applications of CNNs still often require considerable engineering effort to produce usable results. In this thesis, we explore solutions to problems that arise in practical applications of CNNs.
We address a rarely acknowledged weakness of CNN object detectors: the tendency to emit many excess detection boxes per object, which must be pruned by non-maximum suppression (NMS). This practice relies on the assumption that highly overlapping boxes are excess, which is problematic when objects are occluding one another and overlapping detections are actually required. We therefore propose a novel loss function that incentivises a CNN to emit exactly one detection per object, making NMS unnecessary.
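For reference, the conventional NMS pruning step that the proposed loss aims to make unnecessary can be sketched as follows (a generic greedy IoU-based implementation, not code from the thesis):

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    # Greedy non-maximum suppression: repeatedly keep the highest-scoring
    # box and discard remaining boxes that overlap it above iou_thresh.
    # boxes: (N, 4) array of [x1, y1, x2, y2]; returns kept indices.
    boxes = np.asarray(boxes, dtype=float)
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou < iou_thresh]
    return keep
```

The `iou < iou_thresh` line is exactly the assumption the thesis questions: two genuinely distinct but occluding objects can produce boxes whose IoU exceeds the threshold, and greedy NMS will wrongly discard one of them.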
Another common problem when deploying a CNN in the real world is domain shift: CNNs can be surprisingly vulnerable to sometimes quite subtle differences between the images they encounter at deployment and those they were trained on. We investigate the role that texture plays in domain shift, and propose a novel data augmentation technique using style transfer to train CNNs that are more robust against shifts in texture. We demonstrate that this technique results in better domain transfer on several datasets, without requiring any domain-specific knowledge.
In collaboration with AstraZeneca, we develop an embedding space for cellular images collected in a high throughput imaging screen as part of a drug discovery project. This uses a combination of techniques to embed the images in 2D space such that similar images are nearby, for the purpose of visualization and data exploration. The images are also clustered automatically, splitting the large dataset into a smaller number of clusters that display a common phenotype. This allows biologists to quickly triage the high throughput screen, selecting a small subset of promising phenotypes for further investigation.
Finally, we investigate an unusual form of domain bias that manifested in a real-world visual binary classification project for counterfeit detection. We confirm that CNNs are able to ``cheat'' the task by exploiting a strong correlation between the class label and the specific camera that acquired the image, and show that this reliably occurs when the correlation is present. We also investigate the question of how exactly the CNN is able to infer the camera type from image pixels, given that the cue is invisible to the human eye.
The contributions in this thesis are of practical value to deep learning practitioners working on a variety of problems in the field of computer vision.
Effective Difference of Research Projects on Secondary Mathematics Preservice Teachers' Sense of Efficacy
The purpose of this quantitative study was to investigate the difference in teacher efficacy measures of two groups of preservice teachers who were given modified research projects and were enrolled in a secondary mathematics methods course. The participants were divided into two groups doing modified research projects related to the field of mathematics education. The modification of the research projects was grounded in one of Bandura’s (1997) sources of self-efficacy: vicarious experience. Two possible vicarious experiences that inform preservice teachers’ sense of teacher efficacy are reading professional literature and watching others teach, followed by discussing the results. These two contexts are the basis of the research project modifications. Data revealed that there were statistically significant differences between the two groups’ teacher efficacy measures. Those who did the research project involving observations and discussion of mathematics teaching had significantly higher measures of teacher efficacy than those who did their research purely through professional literature.