THE ECONOMIC PHILOSOPHY OF THE UNITED STATES: IS THE ORGANIZATION MAN DESTROYING OUR TRADITIONAL INDIVIDUALISM?
Public Economics
Analyzing Citation-Distance Networks for Evaluating Publication Impact
Studying citation patterns of scholarly articles has been of interest to many researchers from various disciplines. While the relationship between citations and scientific impact has been widely studied in the literature, in this paper we develop the idea of analyzing the semantic distance of scholarly articles in a citation network (a citation-distance network) to uncover patterns that reflect scientific impact. More specifically, we compare two types of publications, seminal publications and literature reviews, in terms of their citation-distance patterns, focusing on their referencing patterns as well as on the publications which cite them. We show that seminal publications are associated with a larger semantic distance, measured using the content of the articles, between their references and the citing publications, while literature reviews tend to cite publications from a wider range of topics. Our motivation is to understand and utilize this information to create new research evaluation metrics which would better reflect scientific impact.
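The abstract leaves the distance measure unspecified; the sketch below is one plausible reading, assuming TF-IDF vectors and cosine distance over article texts to quantify the reference-to-citer semantic distance it describes. The function name and inputs are illustrative, not the paper's implementation.

```python
# Sketch: semantic distance between the references of a publication and the
# papers that cite it, using TF-IDF + cosine distance as one possible measure.
# The lists of raw article texts are placeholders; the paper's actual distance
# measure may differ.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def citation_distance(reference_texts, citing_texts):
    """Mean pairwise semantic distance between references and citing papers."""
    corpus = reference_texts + citing_texts
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(corpus)
    refs = tfidf[: len(reference_texts)]
    cites = tfidf[len(reference_texts):]
    # Cosine distance = 1 - cosine similarity, averaged over all ref/citer pairs.
    return float((1.0 - cosine_similarity(refs, cites)).mean())
```

Under this reading, a seminal publication would yield a larger mean distance than a literature review drawing on and cited by topically similar work.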
Research Collaboration Analysis Using Text and Graph Features
Patterns of scientific collaboration and their effect on scientific production have been the subject of many studies. In this paper we analyze the nature of ties between co-authors and study collaboration patterns in science from the perspective of the semantic similarity of the authors who wrote a paper together and the strength of the ties between these authors (i.e. how much they have previously collaborated). These two views of scientific collaboration are used to analyze publications in the TrueImpactDataset [11], a new dataset containing two types of publications: publications regarded as seminal and publications regarded as literature reviews by field experts. We show that there are distinct differences between seminal publications and literature reviews in terms of author similarity and the strength of ties between their authors. In particular, we find that seminal publications tend to be written by authors who have previously worked on dissimilar problems (i.e. authors from different fields or even disciplines), and by authors who are not frequent collaborators. On the other hand, literature reviews in our dataset tend to be the result of an established collaboration within a discipline. This demonstrates that our method provides meaningful information about the potential future impact of a publication without requiring citation information.
Text and Graph Based Approach for Analyzing Patterns of Research Collaboration: An analysis of the TrueImpactDataset
Patterns of scientific collaboration and their effect on scientific production have been the subject of many studies. In this paper, we analyze the nature of ties between co-authors and study collaboration patterns in science from the perspective of the semantic similarity of the authors who wrote a paper together and the strength of the ties between these authors (i.e. how frequently they have previously collaborated). These two views of scientific collaboration are used to analyze publications in the TrueImpactDataset (Herrmannova et al., 2017), a new dataset containing two types of publications: publications regarded as seminal and publications regarded as literature reviews by field experts. We show that there are distinct differences between seminal publications and literature reviews in terms of author similarity and the strength of ties between their authors. In particular, we find that seminal publications tend to be written by authors who have previously worked on dissimilar problems (i.e. authors from different fields or even disciplines), and by authors who are not frequent collaborators. On the other hand, literature reviews in our dataset tend to be the result of an established collaboration within a discipline. This demonstrates that our method provides meaningful information about the potential future impact of a publication without requiring citation information.
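Neither this record nor the nearly identical one above fixes exact definitions of the two collaboration features; below is a minimal sketch, assuming TF-IDF profiles of each author's previous papers for author similarity and co-authorship counts for tie strength. All names and inputs are illustrative assumptions.

```python
# Sketch of the two collaboration features described above (assumed
# formulations, not necessarily the authors' exact definitions):
#   - author similarity: cosine similarity of TF-IDF profiles built from the
#     texts of each author's previous publications
#   - tie strength: number of papers the two authors previously co-wrote
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def author_similarity(prev_papers_a, prev_papers_b):
    """Cosine similarity of two authors' concatenated publication texts."""
    profiles = [" ".join(prev_papers_a), " ".join(prev_papers_b)]
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(profiles)
    return float(cosine_similarity(tfidf[0], tfidf[1])[0, 0])


def tie_strength(author_a, author_b, past_author_lists):
    """Number of earlier papers on which both authors appear together."""
    return sum(1 for authors in past_author_lists
               if author_a in authors and author_b in authors)
```

In this reading, the reported result is that seminal papers tend to pair low author similarity with low tie strength, while literature reviews pair high similarity with high tie strength.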
Neutrinos from beta processes in a presupernova: probing the isotopic evolution of a massive star
We present a new calculation of the neutrino flux received at Earth from a massive star in the hours of evolution prior to its explosion as a supernova (presupernova). Using the stellar evolution code MESA, the neutrino emissivity in each flavor is calculated at many radial zones and time steps. In addition to thermal processes, neutrino production via beta processes is modeled in detail, using a network of 204 isotopes. We find that the total produced flux has a high-energy spectrum tail, at MeV energies, which is mostly due to beta decay and electron capture on isotopes in the network. Within a tentative window of observability in energy and in time before collapse, the contribution of beta processes to the flux is non-negligible. For a star at a kiloparsec-scale distance, a 17 kt liquid scintillator detector would typically observe several tens of events from a presupernova, a fraction of which are due to beta processes. These processes dominate the signal at a liquid argon detector, thus greatly enhancing its sensitivity to a presupernova.
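For orientation, the beta processes referred to here are, generically, electron capture and beta decay on the nuclei of the isotopic network. Written schematically (these are the standard reaction channels, not a reproduction of the paper's rate calculation):

```latex
% Generic beta-process channels producing (anti)neutrinos in the isotopic
% network (standard definitions, not the paper's emissivity formulas):
\begin{align}
  \text{electron capture:} \quad & e^- + {}^{A}_{Z}\mathrm{X} \;\to\; {}^{A}_{Z-1}\mathrm{Y} + \nu_e \\
  \beta^- \text{ decay:}   \quad & {}^{A}_{Z}\mathrm{X} \;\to\; {}^{A}_{Z+1}\mathrm{Y} + e^- + \bar{\nu}_e \\
  \beta^+ \text{ decay:}   \quad & {}^{A}_{Z}\mathrm{X} \;\to\; {}^{A}_{Z-1}\mathrm{Y} + e^+ + \nu_e
\end{align}
```

Electron capture and $\beta^{+}$ decay feed the $\nu_e$ flux, while $\beta^{-}$ decay feeds the $\bar{\nu}_e$ flux.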
Visuomotor Learning Enhanced by Augmenting Instantaneous Trajectory Error Feedback during Reaching
We studied reach adaptation to a 30° visuomotor rotation to determine whether augmented error feedback can promote faster and more complete motor learning. Four groups of healthy adults reached with their unseen arm to visual targets surrounding a central starting point. A manipulandum tracked hand motion and projected a cursor onto a display immediately above the horizontal plane of movement. For one group, deviations from the ideal movement were amplified with a gain of 2, whereas another group experienced a gain of 3.1. The third group experienced an offset equal to the average error seen in the initial perturbations, while a fourth group served as controls. Learning in the gain-2 and offset groups was nearly twice as fast as in controls. Moreover, the offset group averaged a greater reduction in error. Such error augmentation techniques may be useful for training novel visuomotor transformations, as required of robotic teleoperators, or in movement rehabilitation of the neurologically impaired.
Optimization of Direct-Write 3D Two Photon Photolithography in Poly (methyl methacrylate)
Direct-write multiphoton photolithography (DWMP) is a technique which exploits the localization of multi-photon processes occurring at the tight focus of a femtosecond laser to write 3D patterns in a photosensitive polymer. In conventional photolithography, devices are fabricated by using masks to tailor light exposure onto a photosensitive material and then developing the photoresist, and the exposure chemistry is driven by single photons. DWMP differs in that the combined energy of at least two photons is required to drive the exposure chemistry. This means that whereas traditional photolithography polymerizes material throughout the volume of the beam, DWMP polymerizes material only where the probability of two or more photons interacting with a molecule simultaneously is high, i.e. at the focus. This is the essential idea behind DWMP, which allows arbitrary 3D shapes to be created, in contrast to traditional photolithography, where devices are produced in layers with strict limitations on complexity.
DWMP also allows for the creation of very small, high-resolution features. This is possible because the tight laser focus produces "voxels" (volume pixels) of polymerized material. To zeroth order, the dimensions of a voxel can be estimated from a Gaussian laser's diffraction limit. In the DWMP case, however, because two or more photons must interact at the beam waist to induce polymerization, the exposure rate follows a higher power of the cross-sectional intensity, and the effective exposed volume shrinks with the number of photons required for the interaction, allowing for voxels smaller than the diffraction limit.
The majority of DWMP work to date has used negative photoresists, in which exposed material is made less soluble. This results in solidified material wherever the focus was scanned and is useful for creating high-resolution freeform structures. Here we explore and attempt to optimize DWMP for the positive resist poly(methyl methacrylate) (PMMA) using short-wavelength (~387 nm) light. Not only is PMMA a widely used and durable material in the biological community, but because it is a positive resist, it is the exposed rather than the unexposed material which is removed upon development. This property, combined with the structural complexity allowed by DWMP, should make it possible to create wells and intricate channels embedded on all sides within a block of PMMA. Such a technique would prove useful in the creation of arbitrary microfluidic devices, as are often needed for biological research. We find that the technique is indeed viable, outline a general method, and define future work to optimize the resolution of the process.
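The voxel-scaling argument above can be made concrete with a standard back-of-the-envelope estimate: for an n-photon process the exposure rate follows the n-th power of the intensity, so a Gaussian transverse profile of waist w is effectively narrowed to w/sqrt(n). This is a simple Gaussian estimate, not a full focal-field calculation:

```latex
% Standard estimate of multiphoton voxel narrowing for a Gaussian focus:
% the n-photon exposure rate goes as I^n, which shrinks the effective waist.
\begin{align}
  I(r) &= I_0 \, e^{-2r^2/w^2}, \\
  R_n(r) &\propto I(r)^n = I_0^{\,n}\, e^{-2nr^2/w^2}
          = I_0^{\,n}\, e^{-2r^2/w_{\mathrm{eff}}^2},
  \qquad w_{\mathrm{eff}} = \frac{w}{\sqrt{n}} .
\end{align}
```

For two-photon exposure (n = 2) this corresponds to roughly a factor-of-sqrt(2) narrowing of the transverse exposure profile relative to the single-photon case.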
Visual Error Augmentation for Enhancing Motor Learning and Rehabilitative Relearning
We developed a real-time controller for a 2 degree-of-freedom robotic system using xPC Target. This system was used to investigate how different methods of performance error feedback can lead to faster and more complete motor learning in individuals asked to compensate for a novel visuomotor transformation (a 30 degree rotation). Four groups of healthy human subjects were asked to reach with their unseen arm to visual targets surrounding a central starting location. A cursor tracking hand motion was provided during each reach. For one group of subjects, deviations from the ideal compensatory hand movement (i.e. trajectory errors) were amplified with a gain of 2, whereas another group was provided visual feedback with a gain of 3.1. Yet another group was provided cursor feedback wherein the cursor was rotated by an additional (constant) offset angle. We compared the rates at which the hand paths converged to the steady-state trajectories. Our results demonstrate that error augmentation can improve the rate and extent of motor learning of visuomotor rotations in healthy subjects. We also tested this method on straightening the movements of stroke subjects, and our early results suggest that error amplification can facilitate neurorehabilitation strategies after brain injuries such as stroke.
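Both of the abstracts above describe the same error-augmentation scheme; the sketch below shows one way the displayed cursor could be computed under the gain and constant-offset conditions. The variable names and planar geometry are illustrative assumptions, not the authors' controller code.

```python
import numpy as np

def augmented_cursor(hand, ideal, gain=2.0, offset=None):
    """Displayed cursor position under visual error augmentation (illustrative).

    hand, ideal : length-2 arrays, current hand position and the corresponding
                  point on the ideal (straight-line) trajectory
    gain        : amplification applied to the instantaneous trajectory error
                  (e.g. 2.0 or 3.1, as in the groups described above)
    offset      : optional constant error vector shown instead of the
                  amplified instantaneous error
    """
    hand, ideal = np.asarray(hand, float), np.asarray(ideal, float)
    if offset is not None:
        # constant-offset condition: hand position plus a fixed error vector
        return hand + np.asarray(offset, float)
    # gain condition: push the cursor further along the current deviation
    error = hand - ideal
    return ideal + gain * error
```

With gain = 1 and no offset this reduces to veridical cursor feedback, i.e. the control condition.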
IMPACTS OF TRADES IN AN ERROR-CORRECTION MODEL OF QUOTE PRICES
In this paper we analyze and interpret the quote price dynamics of 100 NYSE stocks with varying average trade frequencies. We specify an error-correction model for the log difference of the bid and the ask price, with the spread acting as the error-correction term, and include as regressors the characteristics of the trades occurring between quote observations, if any. We find that short-duration and medium-volume trades have the largest impacts on quote prices for all one hundred stocks, and that buyer-initiated trades primarily move the ask price while seller-initiated trades primarily move the bid price. Trades have a greater impact on quotes, in both the short and the long run, for the infrequently traded stocks than for the more actively traded stocks. Finally, we find strong evidence that the spread is mean reverting.
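The model is described only verbally above; a generic error-correction sketch consistent with that description, with the lagged spread as the error-correction term and trade characteristics x_t as regressors, would look as follows (an illustrative form, not necessarily the authors' exact specification):

```latex
% Generic error-correction sketch for log ask (a) and bid (b) quotes with the
% lagged spread as the error-correction term and trade characteristics x_t as
% regressors (illustrative, not the paper's exact specification):
\begin{align}
  \Delta \ln a_t &= \alpha_a \left( \ln a_{t-1} - \ln b_{t-1} \right)
                   + \gamma_a' x_t + \varepsilon_{a,t}, \\
  \Delta \ln b_t &= \alpha_b \left( \ln a_{t-1} - \ln b_{t-1} \right)
                   + \gamma_b' x_t + \varepsilon_{b,t}.
\end{align}
```

In this parameterization, mean reversion of the spread corresponds to alpha_a < 0 and alpha_b > 0: a wide spread pulls the ask down and the bid up.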
