Performance of Keck adaptive optics with a sodium laser guide star
The Keck telescope adaptive optics system is designed to optimize performance in the 1 to 3 micron observation band (the J, H, and K astronomical bands). The system uses a 249-degree-of-freedom deformable mirror, so that the inter-actuator spacing is 56 cm as mapped onto the 10 meter aperture. 56 cm is roughly equal to r0 at 1.4 microns, which implies a wavefront fitting error of 0.52 (λ/2π)(d/r0)^(5/6) = 118 nm rms. This is sufficient to produce a system Strehl of 0.74 at 1.4 microns if all other sources of error are negligible, as would be the case with a bright natural guidestar and very high control bandwidth. Other errors associated with the adaptive optics will, however, degrade the Strehl ratio: servo bandwidth error, due to the inability to reject all temporal frequencies of the aberrated wavefront; wavefront measurement error, due to finite signal-to-noise ratio in the wavefront sensor; and, in the case of a laser guidestar, the so-called cone effect, in which rays from the guidestar beacon fail to sample some of the upper-atmosphere turbulence. Cone effect is mitigated considerably by the use of the very high altitude sodium laser guidestar (90 km altitude), as opposed to Rayleigh beacons at 20 km. However, given the Keck telescope's large aperture, cone effect is still the dominant wavefront error contributor in the current adaptive optics system design.
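The error-budget arithmetic in the abstract can be sketched in a few lines. This is an illustrative check, not the authors' code; the function names are invented, the 0.52 coefficient (≈ √0.28, the standard fitting-error constant) and the Maréchal Strehl approximation are standard adaptive-optics relations, and the rounded inputs reproduce the quoted 118 nm / 0.74 figures only approximately.

```python
import math

# Wavefront fitting error (nm RMS) for actuator pitch d and Fried
# parameter r0, per sigma = 0.52 (lambda/2pi)(d/r0)^(5/6).
def fitting_error_rms(wavelength_nm, d_m, r0_m):
    return 0.52 * (wavelength_nm / (2 * math.pi)) * (d_m / r0_m) ** (5 / 6)

# Marechal approximation: S = exp(-(2*pi*sigma/lambda)^2).
def strehl(sigma_nm, wavelength_nm):
    return math.exp(-(2 * math.pi * sigma_nm / wavelength_nm) ** 2)

sigma = fitting_error_rms(1400, 0.56, 0.56)  # d ~ r0 = 56 cm at 1.4 microns
print(round(sigma), round(strehl(sigma, 1400), 2))
```

With these rounded constants the sketch gives roughly 116 nm and a Strehl near 0.76, in line with the abstract's 118 nm and 0.74.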
Fluorinated dibenzo[a,c]-phenazine-based green to red thermally activated delayed fluorescent OLED emitters
Purely organic thermally activated delayed fluorescence (TADF) emitting materials for organic light-emitting diodes (OLEDs) enable a facile method to modulate the emission color through judicious choice of donor and acceptor units. Amongst purely organic TADF emitters, the development of TADF molecules that emit at longer wavelengths and produce high-efficiency devices with low efficiency roll-off remains a challenge. We report a modular synthesis route that delivers three structurally related fluorinated dibenzo[a,c]phenazine-based TADF molecules, each bearing two donor moieties with different electron-donating strengths, namely 3,6-bis(3,6-di-tert-butyl-9H-carbazol-9-yl)-10-fluorodibenzo[a,c]phenazine (2DTCz-BP-F), 3,6-bis(9,9-dimethylacridin-10(9H)-yl)-10-fluorodibenzo[a,c]phenazine (2DMAC-BP-F) and 10,10'-(10-fluorodibenzo[a,c]phenazine-3,6-diyl)bis(10H-phenoxazine) (2PXZ-BP-F). They exhibit donor-strength-controlled color tuning over a wide range from green to deep red, with photoluminescence maxima, λ(PL), of 505 nm, 589 nm, and 674 nm in toluene solution. OLED devices using these TADF materials showed excellent to moderate performance, with an EQE(max) of 21.8% for 2DMAC-BP-F, 12.4% for 2PXZ-BP-F and 2.1% for 2DTCz-BP-F, and associated electroluminescence (EL) emission maxima, λ(EL), of 585 nm, 605 nm and 518 nm in an mCBP host, respectively.
High-Resolution Wavefront Control of High-Power Laser Systems
Nearly every new large-scale laser system application at LLNL has beam control requirements that exceed the current level of available technology. For applications such as inertial confinement fusion, laser isotope separation, and laser machining, the ability to transport significant power to a target while maintaining good beam quality is critical. There are many ways in which laser wavefront quality can be degraded. Thermal effects due to the interaction of high-power laser or pump light with the internal optical components or with the ambient gas are common causes of wavefront degradation. For many years, adaptive optics based on thin deformable glass mirrors with piezoelectric or electrostrictive actuators have been used to remove low-order wavefront errors from high-power laser systems. These adaptive optics systems have successfully improved laser beam quality, but they have also generally revealed additional high-spatial-frequency errors, both because the low-order errors have been reduced and because deformable mirrors have often introduced some high-spatial-frequency components due to manufacturing errors. Many current and emerging laser applications fall into the high-resolution category, where there is an increased need to correct high-spatial-frequency aberrations, which requires correctors with thousands of degrees of freedom. The largest deformable mirrors currently available have fewer than one thousand degrees of freedom at a cost of approximately $1M, so a deformable mirror capable of meeting these high-spatial-resolution requirements would be cost prohibitive. Therefore a new approach using a different wavefront control technology is needed. One such approach is the use of liquid-crystal (LC) spatial light modulator (SLM) technology for controlling the phase of linearly polarized light.
Current LC SLM technology provides high-spatial-resolution wavefront control with hundreds of thousands of degrees of freedom, more than two orders of magnitude greater than the best deformable mirrors currently made. Even with the increased spatial resolution, the cost of these devices is nearly two orders of magnitude less than that of the largest deformable mirrors.
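A key difference from a deformable mirror is that a phase-only LC SLM modulates phase modulo 2π at each pixel, so a continuous correction must be wrapped before it is mapped to device commands. The following is a minimal sketch of that step under stated assumptions: a hypothetical 512×512 device, an invented toy aberration, and a generic 8-bit gray-level mapping that a real device's calibration would replace.

```python
import numpy as np

wavelength_nm = 1053.0                       # e.g. an Nd:glass laser line
x = np.linspace(-1, 1, 512)
X, Y = np.meshgrid(x, x)
opd_nm = 300.0 * (2 * X**2 + Y**2)           # toy low-order aberration (nm of OPD)

phase = 2 * np.pi * opd_nm / wavelength_nm   # OPD -> phase (radians)
correction = np.mod(-phase, 2 * np.pi)       # conjugate phase, wrapped to [0, 2*pi)
gray = np.round(correction / (2 * np.pi) * 255).astype(np.uint8)  # 8-bit command

print(gray.shape, gray.min(), gray.max())
```

The wrapping is what lets a thin LC layer with only ~2π of stroke correct many waves of aberration, at the cost of chromatic and diffraction effects a continuous mirror surface avoids.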
Recommended from our members
Middleware for Astronomical Data Analysis Pipelines
In this paper the authors describe their approach to researching, developing, and evaluating prototype middleware tools and architectures. The tools can be used by scientists to easily compose astronomical data analysis pipelines. The authors use the SuperMacho data pipelines as example applications to test the framework, and describe their experience scheduling and running these analysis pipelines on massively parallel processing machines. They use MCR, a Linux cluster with 1,152 nodes and the Lustre parallel file system, as the hardware test-bed to test and enhance the scalability of the tools.
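The compose-then-schedule pattern described above can be sketched generically. This is not the authors' middleware: the stage names are invented placeholders, and Python's standard thread pool stands in for cluster scheduling purely to illustrate composing a pipeline once and fanning it out over many inputs.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical pipeline stages; real stages would calibrate images,
# subtract templates, detect sources, etc.
def calibrate(img):
    return f"cal({img})"

def detect(img):
    return f"det({img})"

pipeline = [calibrate, detect]               # composed once, declaratively

def run(img):
    for stage in pipeline:                   # stages applied in order
        img = stage(img)
    return img

images = ["img1.fits", "img2.fits", "img3.fits"]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run, images))    # fan out over inputs
print(results)
```

On a cluster, the per-image fan-out is what scales across nodes, while a shared parallel file system serves the image data to all workers.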
From reading numbers to seeing ratios: a benefit of icons for risk comprehension
Promoting a better understanding of statistical data is becoming increasingly important for improving risk comprehension and decision-making. In this regard, previous studies on Bayesian problem solving have shown that iconic representations help infer frequencies in sets and subsets. Nevertheless, the mechanisms by which icons enhance performance remain unclear. Here, we tested the hypothesis that the benefit offered by icon arrays lies in a better alignment between presented and requested relationships, which should facilitate the comprehension of the requested ratio beyond the represented quantities. To this end, we analyzed individual risk estimates based on data presented either in standard verbal presentations (percentage and natural frequency formats) or as icon arrays. Compared to the other formats, icons led to estimates that were more accurate and, importantly, promoted the use of equivalent expressions for the requested probability. Furthermore, whereas the accuracy of the estimates based on verbal formats depended on their alignment with the text, all the estimates based on icons were equally accurate. Therefore, these results support the proposal that icons enhance the comprehension of the ratio and its mapping onto the requested probability, and point to relational misalignment as a potential source of interference for text-based Bayesian reasoning. The present findings also argue against an intrinsic difficulty with understanding single-event probabilities.
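The "alignment" idea can be made concrete with arithmetic. With an icon array, the requested ratio is literally visible as a count within a count; the numbers below are illustrative only (not from the study), chosen as a typical screening-style problem.

```python
# Illustrative icon array: 100 icons, 10 with the condition, of whom
# 8 test positive; 9 of the 90 without the condition also test positive.
positives_with_condition = 8
positives_without_condition = 9
total_positives = positives_with_condition + positives_without_condition

# The requested probability P(condition | positive test) is just the
# ratio of two visible icon subsets -- the alignment the icon format
# makes explicit, with no reference-class bookkeeping required.
ppv = positives_with_condition / total_positives
print(round(ppv, 3))   # 8 of 17 positive icons
```

In a verbal percentage format, reaching the same 8/17 requires reconstructing both subset sizes from conditional statements, which is where the relational misalignment described above can interfere.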
Image Content Engine (ICE): A System for Fast Image Database Searches
The Image Content Engine (ICE) is being developed to provide cueing assistance to human image analysts faced with increasingly large and intractable amounts of image data. The ICE architecture includes user-configurable feature extraction pipelines which produce intermediate feature vector and match surface files that can then be accessed by interactive relational queries. Application of the feature extraction algorithms to large collections of images may be extremely time consuming and is launched as a batch job on a Linux cluster. The query interface accesses only the intermediate files and returns candidate hits nearly instantaneously. Queries may be posed for individual objects or collections. The query interface prompts the user for feedback, and applies relevance feedback algorithms to revise the feature vector weighting and focus on relevant search results. Examples of feature extraction and both model-based and search-by-example queries are presented.
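The abstract does not specify ICE's relevance-feedback algorithm, so the following is a sketch of one common scheme (variance-based feature reweighting with a weighted-distance re-rank), with invented function names and random stand-in data, purely to illustrate how analyst feedback can revise feature-vector weighting.

```python
import numpy as np

def reweight(relevant_vectors):
    """Weight each feature inversely to its spread among the images the
    analyst marked relevant: consistent features get emphasized."""
    v = np.asarray(relevant_vectors, dtype=float)
    return 1.0 / (v.std(axis=0) + 1e-6)      # small epsilon avoids div-by-zero

def rank(query, candidates, weights):
    """Order candidates by weighted Euclidean distance to the query."""
    d = np.sqrt(((candidates - query) ** 2 * weights).sum(axis=1))
    return np.argsort(d)                     # best match first

rng = np.random.default_rng(0)
candidates = rng.random((100, 8))            # 100 images, 8 precomputed features
relevant = candidates[:3]                    # hits the analyst marked relevant
w = reweight(relevant)
order = rank(relevant.mean(axis=0), candidates, w)
print(order[:5])                             # top cueing candidates
```

Because the feature vectors are precomputed in the batch extraction stage, each feedback round only re-ranks the intermediate files, which is consistent with the near-instantaneous query response described above.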
Spartalizumab or placebo in combination with dabrafenib and trametinib in patients with V600-mutant melanoma: exploratory biomarker analyses from a randomized phase 3 trial (COMBI-i)
Background: The randomized phase 3 COMBI-i trial did not meet its primary endpoint of improved progression-free survival (PFS) with spartalizumab plus dabrafenib and trametinib (sparta-DabTram) versus placebo plus dabrafenib and trametinib (placebo-DabTram) in the overall population of patients with unresectable/metastatic V600-mutant melanoma. This prespecified exploratory biomarker analysis was performed to identify subgroups that may derive greater treatment benefit from sparta-DabTram.

Methods: In COMBI-i (ClinicalTrials.gov, NCT02967692), 532 patients received spartalizumab 400 mg intravenously every 4 weeks plus dabrafenib 150 mg orally twice daily and trametinib 2 mg orally once daily, or placebo-DabTram. Baseline and on-treatment pharmacodynamic markers were assessed via flow cytometry-based immunophenotyping and plasma cytokine profiling. Baseline programmed death ligand 1 (PD-L1) status and T-cell phenotype were assessed via immunohistochemistry; V600 mutation type, tumor mutational burden (TMB), and circulating tumor DNA (ctDNA) via DNA sequencing; gene expression signatures via RNA sequencing; and CD4/CD8 T-cell ratio via immunophenotyping.

Results: Extensive biomarker analyses were possible in approximately 64% to 90% of the intention-to-treat population, depending on sample availability and assay. Subgroups based on PD-L1 status/TMB or T-cell inflammation did not show significant differences in PFS benefit with sparta-DabTram versus placebo-DabTram, although T-cell inflammation was prognostic across treatment arms. Subgroups defined by V600K mutation (HR 0.45 (95% CI 0.21 to 0.99)), detectable ctDNA shedding (HR 0.75 (95% CI 0.58 to 0.96)), or CD4/CD8 ratio above the median (HR 0.58 (95% CI 0.40 to 0.84)) derived greater PFS benefit with sparta-DabTram versus placebo-DabTram. In a multivariate analysis, ctDNA emerged as strongly prognostic (p=0.007), while its predictive trend did not reach significance; in contrast, the CD4/CD8 ratio was strongly predictive (interaction p=0.0131).

Conclusions: These results support the feasibility of large-scale comprehensive biomarker analyses in the context of a global phase 3 study. T-cell inflammation was prognostic but not predictive of sparta-DabTram benefit, as patients with high T-cell inflammation already benefit from targeted therapy alone. Baseline ctDNA shedding also emerged as a strong independent prognostic variable, with predictive trends consistent with established measures of disease burden such as lactate dehydrogenase levels. The CD4/CD8 T-cell ratio was significantly predictive of PFS benefit with sparta-DabTram but requires further validation as a biomarker in melanoma. Taken together with previous observations, these findings suggest that further study of checkpoint inhibitor plus targeted therapy combinations in patients with higher disease burden may be warranted.
Structural mapping in statistical word problems: A relational reasoning approach to Bayesian inference
Presenting natural frequencies facilitates Bayesian inferences relative to using percentages. Nevertheless, many people, including highly educated and skilled reasoners, still fail to provide Bayesian responses to these computationally simple problems. We show that the complexity of relational reasoning (e.g., the structural mapping between the presented and requested relations) can help explain the remaining difficulties. With a non-Bayesian inference that required identical arithmetic but afforded a more direct structural mapping, performance was universally high. Furthermore, reducing the relational demands of the task through questions that directed reasoners to use the presented statistics, as compared with questions that prompted the representation of a second, similar sample, also significantly improved reasoning. Distinct error patterns were also observed between these presented- and similar-sample scenarios, which suggested differences in relational-reasoning strategies. On the other hand, while higher numeracy was associated with better Bayesian reasoning, higher-numerate reasoners were not immune to the relational complexity of the task. Together, these findings validate the relational-reasoning view of Bayesian problem solving and highlight the importance of considering not only the presented task structure, but also the complexity of the structural alignment between the presented and requested relations.
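The claim that the two formats demand "identical arithmetic" can be shown directly: the same posterior is one application of Bayes' rule in the percentage format and a simple subset ratio in the natural-frequency format. The numbers below are illustrative (a classic screening-style setup, not taken from the paper).

```python
# Illustrative parameters: base rate 1%, hit rate 80%, false-alarm rate 9.6%.

# Percentage format: Bayes' rule over normalized probabilities.
p = (0.01 * 0.80) / (0.01 * 0.80 + 0.99 * 0.096)

# Natural-frequency format: the same quantity as a subset ratio in a
# sample of 1000 people -- 8 true positives, ~95 false positives.
f = 8 / (8 + 95)

print(round(p, 3), round(f, 3))   # the two formats agree
```

The arithmetic is equivalent, so the performance gap between formats must come from how easily the presented relations map onto the requested ratio, which is the structural-mapping account argued above.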