
    On the compositum of all degree d extensions of a number field

    Let k be a number field, and denote by k^[d] the compositum of all degree d extensions of k in a fixed algebraic closure. We first consider the question of whether all algebraic extensions of k of degree less than d lie in k^[d]. We show that this occurs if and only if d < 5. Secondly, we consider the question of whether there exists a constant c such that, if K/k is a finite subextension of k^[d], then K is generated over k by elements of degree at most c. This was previously considered by Checcoli. We show that such a constant exists if and only if d < 3. This question becomes more interesting when one restricts attention to Galois extensions K/k. In this setting, we derive certain divisibility conditions on d under which such a constant does not exist. If d is prime, we prove that all finite Galois subextensions of k^[d] are generated over k by elements of degree at most d. Comment: 14 pages, 2 figures
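The first question can be illustrated with a standard example (not drawn from the paper itself): the compositum of two conjugate cubic fields over Q already contains a quadratic field, so a degree-2 extension can lie in Q^[3], consistent with the theorem since 2 < 3 < 5.

```latex
% With \alpha = \sqrt[3]{2} and \omega = e^{2\pi i/3}, both
% \mathbb{Q}(\alpha) and \mathbb{Q}(\omega\alpha) are degree-3
% extensions of \mathbb{Q}, and their compositum contains
\[
  \frac{\omega\alpha}{\alpha} = \omega, \qquad
  \mathbb{Q}(\omega) = \mathbb{Q}(\sqrt{-3}),
\]
% so the quadratic field \mathbb{Q}(\sqrt{-3}) lies in \mathbb{Q}^{[3]}.
```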

    Abandon Statistical Significance

    We discuss problems the null hypothesis significance testing (NHST) paradigm poses for replication and more broadly in the biomedical and social sciences as well as how these problems remain unresolved by proposals involving modified p-value thresholds, confidence intervals, and Bayes factors. We then discuss our own proposal, which is to abandon statistical significance. We recommend dropping the NHST paradigm--and the p-value thresholds intrinsic to it--as the default statistical paradigm for research, publication, and discovery in the biomedical and social sciences. Specifically, we propose that the p-value be demoted from its threshold screening role and instead, treated continuously, be considered along with currently subordinate factors (e.g., related prior evidence, plausibility of mechanism, study design and data quality, real world costs and benefits, novelty of finding, and other factors that vary by research domain) as just one among many pieces of evidence. We have no desire to "ban" p-values or other purely statistical measures. Rather, we believe that such measures should not be thresholded and that, thresholded or not, they should not take priority over the currently subordinate factors. We also argue that it seldom makes sense to calibrate evidence as a function of p-values or other purely statistical measures. We offer recommendations for how our proposal can be implemented in the scientific publication process as well as in statistical decision making more broadly
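As a hedged sketch of what "demoting the p-value from its threshold screening role" could look like in practice (this is an illustrative toy, not the authors' procedure), a results summary can report the p-value continuously alongside the estimate and standard error, with no significant/non-significant verdict attached:

```python
import math

def summarize(estimate, se):
    """Report a continuous p-value alongside the point estimate rather
    than a binary significant/non-significant verdict (two-sided z-test
    under a normal approximation; a toy sketch only)."""
    z = estimate / se
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability
    return {
        "estimate": estimate,
        "se": se,
        "p_value": p,  # one continuous piece of evidence among many
        # deliberately no "significant: p < 0.05" field
    }

report = summarize(estimate=1.2, se=0.6)
print(f"estimate={report['estimate']}, p={report['p_value']:.3f}")
```

The point of the sketch is the omission: downstream judgment weighs the continuous p-value together with prior evidence, mechanism, design, and costs, rather than gating on a threshold.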

    Real-time Detection and Rapid Multiwavelength Follow-up Observations of a Highly Subluminous Type II-P Supernova from the Palomar Transient Factory Survey

    The Palomar Transient Factory (PTF) is an optical wide-field variability survey carried out using a camera with a 7.8 deg^2 field of view mounted on the 48 inch Oschin Schmidt telescope at Palomar Observatory. One of the key goals of this survey is to conduct high-cadence monitoring of the sky in order to detect optical transient sources shortly after they occur. Here, we describe the real-time capabilities of the PTF and our related rapid multiwavelength follow-up programs, extending from the radio to the γ-ray bands. We present as a case study observations of the optical transient PTF10vdl (SN 2010id), revealed to be a very young core-collapse (Type II-P) supernova having a remarkably low luminosity. Our results demonstrate that the PTF now provides for optical transients the real-time discovery and rapid-response follow-up capabilities previously reserved only for high-energy transients like gamma-ray bursts

    Use of anticoagulants in elderly patients: practical recommendations

    Elderly people represent a patient population at high thromboembolic risk, but also at high hemorrhagic risk. There is a general tendency among physicians to underuse anticoagulants in the elderly, probably both because of underestimation of thromboembolic risk and overestimation of bleeding risk. The main indications for anticoagulation are venous thromboembolism (VTE) prophylaxis in medical and surgical settings, VTE treatment, atrial fibrillation (AF) and valvular heart disease. Available anticoagulants for VTE prophylaxis and initial treatment of VTE are low molecular weight heparins (LMWH), unfractionated heparin (UFH) or synthetic anti-factor Xa pentasaccharide fondaparinux. For long-term anticoagulation vitamin K antagonists (VKA) are the first choice and only available oral anticoagulants nowadays. Assessing the benefit-risk ratio of anticoagulation is one of the most challenging issues in the individual elderly patient, patients at highest hemorrhagic risk often being those who would have the greatest benefit from anticoagulants. Some specific considerations are of utmost importance when using anticoagulants in the elderly to maximize safety of these treatments, including decreased renal function, co-morbidities and risk of falls, altered pharmacodynamics of anticoagulants especially VKAs, association with antiplatelet agents, patient education. Newer anticoagulants that are currently under study could simplify the management and increase the safety of anticoagulation in the future

    Pseudorandom Self-Reductions for NP-Complete Problems

    A language L is random-self-reducible if deciding membership in L can be reduced (in polynomial time) to deciding membership in L for uniformly random instances. It is known that several "number theoretic" languages (such as computing the permanent of a matrix) admit random self-reductions. Feigenbaum and Fortnow showed that NP-complete languages are not non-adaptively random-self-reducible unless the polynomial-time hierarchy collapses, giving suggestive evidence that NP may not admit random self-reductions. Hirahara and Santhanam introduced a weakening of random self-reductions that they called pseudorandom self-reductions, in which a language L is reduced to a distribution that is computationally indistinguishable from the uniform distribution. They then showed that the Minimum Circuit Size Problem (MCSP) admits a non-adaptive pseudorandom self-reduction, and suggested that this gave further evidence distinguishing MCSP from standard NP-complete problems. We show that, in fact, the Clique problem admits a non-adaptive pseudorandom self-reduction, assuming the planted clique conjecture. More generally, we show the following. Call a property Π of graphs hereditary if G ∈ Π implies H ∈ Π for every induced subgraph H of G. We show that for any infinite hereditary property Π, the problem of finding a maximum induced subgraph H ∈ Π of a given graph G admits a non-adaptive pseudorandom self-reduction
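A classic toy illustration of a random self-reduction (not drawn from this paper) is self-correction of linear functions over Z_p: since f(x) = a·x mod p satisfies f(x) = f(x + r) − f(r), a query at any fixed x reduces to two queries at uniformly distributed points, so the query distribution reveals nothing about x:

```python
import random

P = 101  # small prime modulus (illustrative)
A = 37   # hidden coefficient defining f(x) = A*x mod P

def f(x):
    """Oracle for the linear function f(x) = A*x (mod P)."""
    return (A * x) % P

def self_reduced_f(x, oracle=f):
    """Evaluate f at an arbitrary point x using only oracle queries at
    uniformly random points, via f(x) = f(x + r) - f(r) (mod P)."""
    r = random.randrange(P)  # uniform randomizer
    # Both queried points, (x + r) mod P and r, are uniform over Z_P.
    return (oracle((x + r) % P) - oracle(r)) % P

assert all(self_reduced_f(x) == f(x) for x in range(P))
```

A pseudorandom self-reduction relaxes this: the queried instances need only be computationally indistinguishable from uniform, which is exactly the slack the planted clique conjecture provides for Clique.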

    Revisiting Automated Prompting: Are We Actually Doing Better?

    Current literature demonstrates that Large Language Models (LLMs) are great few-shot learners, and prompting significantly increases their performance on a range of downstream tasks in a few-shot learning setting. An attempt to automate human-led prompting followed, with some progress achieved. In particular, subsequent work demonstrates automation can outperform fine-tuning in certain K-shot learning scenarios. In this paper, we revisit techniques for automated prompting on six different downstream tasks and a larger range of K-shot learning settings. We find that automated prompting does not consistently outperform simple manual prompts. Our work suggests that, in addition to fine-tuning, manual prompts should be used as a baseline in this line of research
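For concreteness, a manual K-shot prompt baseline of the kind the paper argues for can be assembled mechanically from an instruction, K worked demonstrations, and the test input (the template and example task below are hypothetical, not taken from the paper):

```python
def build_kshot_prompt(instruction, examples, query, k):
    """Assemble a manual K-shot prompt: an instruction, k worked
    input/label demonstrations, then the unlabeled test input."""
    lines = [instruction]
    for text, label in examples[:k]:
        lines.append(f"Input: {text}\nLabel: {label}")
    lines.append(f"Input: {query}\nLabel:")  # model completes the label
    return "\n\n".join(lines)

demos = [("great movie", "positive"), ("waste of time", "negative")]
prompt = build_kshot_prompt(
    "Classify the sentiment of each input.", demos, "loved it", k=2)
print(prompt)
```

The paper's finding is that automated prompt search does not consistently beat baselines this simple, which is why such manual prompts belong in the comparison.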

    On Low-End Obfuscation and Learning

    Most recent works on cryptographic obfuscation focus on the high-end regime of obfuscating general circuits while guaranteeing computational indistinguishability between functionally equivalent circuits. Motivated by the goals of simplicity and efficiency, we initiate a systematic study of "low-end" obfuscation, focusing on simpler representation models and information-theoretic notions of security. We obtain the following results.
    - Positive results via "white-box" learning. We present a general technique for obtaining perfect indistinguishability obfuscation from exact learning algorithms that are given restricted access to the representation of the input function. We demonstrate the usefulness of this approach by obtaining simple obfuscation for decision trees and multilinear read-k arithmetic formulas.
    - Negative results via PAC learning. A proper obfuscation scheme obfuscates programs from a class C by programs from the same class. Assuming the existence of one-way functions, we show that there is no proper indistinguishability obfuscation scheme for k-CNF formulas for any constant k ≥ 3; in fact, even obfuscating 3-CNF by k-CNF is impossible. This result applies even to computationally secure obfuscation, and makes an unexpected use of PAC learning in the context of negative results for obfuscation.
    - Separations. We study the relations between different information-theoretic notions of indistinguishability obfuscation, giving cryptographic evidence for separations between them
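The core idea behind perfect indistinguishability obfuscation via learning can be seen in a toy canonicalization example (illustrative only; this is not the paper's construction, and mapping to a truth table is neither proper nor efficient): if every representation of a function is mapped to the same canonical object, the output reveals nothing beyond the function itself.

```python
from itertools import product

# A decision tree over n Boolean variables is either a leaf ('leaf', b)
# or an internal node ('node', var_index, low_subtree, high_subtree).

def evaluate(tree, assignment):
    """Evaluate a decision tree on a 0/1 assignment tuple."""
    if tree[0] == 'leaf':
        return tree[1]
    _, var, low, high = tree
    return evaluate(high if assignment[var] else low, assignment)

def canonical_obfuscate(tree, n):
    """Map the tree to its full truth table: any two functionally
    equivalent trees yield the identical output, giving perfect
    indistinguishability at the cost of 2^n size."""
    return tuple(evaluate(tree, a) for a in product((0, 1), repeat=n))

# Two syntactically different trees, both computing x0 AND x1:
t1 = ('node', 0, ('leaf', 0), ('node', 1, ('leaf', 0), ('leaf', 1)))
t2 = ('node', 1, ('leaf', 0), ('node', 0, ('leaf', 0), ('leaf', 1)))
assert canonical_obfuscate(t1, 2) == canonical_obfuscate(t2, 2)
```

The paper's "white-box" learning technique can be read as a way to compute such a canonical form efficiently for classes like decision trees, using an exact learner with restricted access to the input representation.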

    Australian Digital Commerce: A commentary on the retail sector

    In this market study we analysed the digital presences of 89 Australian retailers using a catalogue of 63 single items. We find that while Australian retailers have achieved reasonable levels of maturity in the informational and transactional dimensions, and also ventured into the social media space, they are lacking in implementing the relational components of digital commerce. Termed the 'relational gap', this finding points to missed opportunities in building loyalty and lasting relationships with their customers as the basis for repeat purchases and cross selling

    The Properties of Radio Galaxies and the Effect of Environment in Large Scale Structures at z ∼ 1

    In this study we investigate 89 radio galaxies that are spectroscopically confirmed to be members of five large scale structures in the redshift range 0.65 ≤ z ≤ 0.96. Based on a two-stage classification scheme, the radio galaxies are classified into three sub-classes: active galactic nucleus (AGN), hybrid, and star-forming galaxy (SFG). We study the properties of the three radio sub-classes and their global and local environmental preferences. We find AGN hosts are the most massive population and exhibit quiescence in their star-formation activity. The SFG population has a stellar mass comparable to that of the radio AGN hosts but is unequivocally powered by star formation. Hybrids, though selected as an intermediate population in our classification scheme, were found in almost all analyses to be a unique type of radio galaxy rather than a mixture of AGN and SFGs. They are dominated by a high-excitation radio galaxy (HERG) population. We discuss environmental effects and scenarios for each sub-class. AGN tend to be preferentially located in locally dense environments and in the cores of clusters/groups, with these preferences persisting when comparing to galaxies of similar colour and stellar mass, suggesting that their activity may be ignited in the cluster/group virialized core regions. Conversely, SFGs exhibit a strong preference for intermediate-density global environments, suggesting that dusty starbursting activity in LSSs is largely driven by galaxy-galaxy interactions and merging. Comment: 28 pages, 10 figures, accepted to MNRAS