
    YF-17/ADEN system study

    The YF-17 aircraft was evaluated as a candidate nonaxisymmetric nozzle flight demonstrator. Configuration design modifications, control system design, flight performance assessment, and program plan and cost are summarized. Two aircraft configurations were studied. The first was modified as required to install only the augmented deflector exhaust nozzle (ADEN). The second added a canard installation to take advantage of the full (up to 20 deg) nozzle vectoring capability. Results indicate that: (1) the program is feasible and can be accomplished at reasonable cost and low risk; (2) installation of ADEN increases the aircraft weight by 600 kg (1325 lb); (3) the control system can be modified to provide direct lift, pointing capability, variable static margin, and deceleration modes of operation; (4) unvectored thrust-minus-drag is similar to the baseline YF-17; and (5) vectoring does not improve maneuvering performance, although some potential benefits in direct lift, aircraft pointing, handling at low dynamic pressure, and takeoff/landing ground roll are available. A 27 month program with 12 months of flight test is envisioned, with the cost estimated to be $15.9 million for the canard-equipped aircraft and $13.2 million for the version without the canard. The feasibility of adding a thrust reverser to the YF-17/ADEN was also investigated.

    Jet Substructure at the Tevatron and LHC: New results, new tools, new benchmarks

    In this report we review recent theoretical progress and the latest experimental results in jet substructure from the Tevatron and the LHC. We review the status of and outlook for calculation and simulation tools for studying jet substructure. Following up on the report of the Boost 2010 workshop, we present a new set of benchmark comparisons of substructure techniques, focusing on the set of variables and grooming methods that are collectively known as "top taggers". To facilitate further exploration, we have attempted to collect, harmonise, and publish software implementations of these techniques. Comment: 53 pages, 17 figures. L. Asquith, S. Rappoccio, C. K. Vermilion, editors; v2: minor edits from journal revision
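
    As a self-contained illustration of the kind of substructure variable and grooming step discussed above (a toy example, not one of the benchmarked taggers or the software implementations collected by the report), the following Python sketch computes a jet's invariant mass from made-up constituent four-momenta before and after discarding soft constituents, a crude stand-in for trimming-style grooming.

```python
import numpy as np

rng = np.random.default_rng(3)

# Made-up constituents of a single jet: pT, azimuth and pseudorapidity
# drawn around the jet axis, treated as massless particles.
n = 40
pt = rng.exponential(5.0, n)                   # GeV
phi = rng.normal(0.0, 0.3, n)
eta = rng.normal(0.0, 0.3, n)
px, py, pz = pt * np.cos(phi), pt * np.sin(phi), pt * np.sinh(eta)
E = np.sqrt(px**2 + py**2 + pz**2)             # massless constituents
p4 = np.stack([E, px, py, pz], axis=1)         # (n, 4) four-momenta

def jet_mass(p4):
    """Invariant mass of the summed four-momentum."""
    E, px, py, pz = p4.sum(axis=0)
    return np.sqrt(max(E**2 - px**2 - py**2 - pz**2, 0.0))

jet_pt = np.hypot(px.sum(), py.sum())
keep = pt > 0.03 * jet_pt                      # drop constituents below 3% of the jet pT
print(f"ungroomed mass: {jet_mass(p4):6.1f} GeV")
print(f"'groomed' mass: {jet_mass(p4[keep]):6.1f} GeV "
      f"({keep.sum()} of {n} constituents kept)")
```

    Removing the soft constituents lowers the jet mass, which is the qualitative effect the grooming methods compared in the report exploit to sharpen mass peaks for top tagging.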

    The NASA Spitzer Space Telescope

    The National Aeronautics and Space Administration's Spitzer Space Telescope (formerly the Space Infrared Telescope Facility) is the fourth and final facility in the Great Observatories Program, joining the Hubble Space Telescope (1990), the Compton Gamma-Ray Observatory (1991–2000), and the Chandra X-Ray Observatory (1999). Spitzer, with a sensitivity that is almost three orders of magnitude greater than that of any previous ground-based or space-based infrared observatory, is expected to revolutionize our understanding of the creation of the universe, the formation and evolution of primitive galaxies, the origin of stars and planets, and the chemical evolution of the universe. This review presents a brief overview of the scientific objectives and history of infrared astronomy. We discuss Spitzer's expected role in infrared astronomy for the new millennium. We describe pertinent details of the design, construction, launch, in-orbit checkout, and operations of the observatory and summarize some science highlights from the first two and a half years of Spitzer operations. More information about Spitzer can be found at http://spitzer.caltech.edu/

    Expansion microscopy of C. elegans.

    Funders: John Doerr, The Open Philanthropy Project, Lisa Yang. We recently developed expansion microscopy (ExM), which achieves nanoscale-precise imaging of specimens at ~70 nm resolution (with ~4.5x linear expansion) by isotropic swelling of chemically processed, hydrogel-embedded tissue. ExM of C. elegans is challenged by its cuticle, which is stiff and impermeable to antibodies. Here we present a strategy, expansion of C. elegans (ExCel), to expand fixed, intact C. elegans. ExCel enables simultaneous readout of fluorescent proteins, RNA, DNA location, and anatomical structures at resolutions of ~65-75 nm (3.3-3.8x linear expansion). We also developed epitope-preserving ExCel, which enables imaging of endogenous proteins stained by antibodies, and iterative ExCel, which enables imaging of fluorescent proteins after 20x linear expansion. We demonstrate the utility of the ExCel toolbox for mapping synaptic proteins, for identifying previously unreported proteins at cell junctions, and for gene expression analysis in multiple individual neurons of the same animal.

    Probabilistic modeling for single-photon lidar

    Lidar is an increasingly prevalent technology for depth sensing, with applications including scientific measurement and autonomous navigation systems. While conventional systems require hundreds or thousands of photon detections per pixel to form accurate depth and reflectivity images, recent results for single-photon lidar (SPL) systems using single-photon avalanche diode (SPAD) detectors have shown accurate images formed from as few as one photon detection per pixel, even when half of those detections are due to uninformative ambient light. The keys to such photon-efficient image formation are twofold: (i) a precise model of the probability distribution of photon detection times, and (ii) prior beliefs about the structure of natural scenes. Reducing the number of photons needed for accurate image formation enables faster, farther, and safer acquisition. Still, such photon-efficient systems are often limited to laboratory conditions more favorable than the real-world settings in which they would be deployed. This thesis focuses on expanding the photon detection time models to address challenging imaging scenarios and the effects of non-ideal acquisition equipment. The processing derived from these enhanced models, sometimes modified jointly with the acquisition hardware, surpasses the performance of state-of-the-art photon counting systems. We first address the problem of high levels of ambient light, which causes traditional depth and reflectivity estimators to fail. We achieve robustness to strong ambient light through a rigorously derived window-based censoring method that separates signal and background light detections. Spatial correlations both within and between depth and reflectivity images are encoded in superpixel constructions, which fill in holes caused by the censoring. Accurate depth and reflectivity images can then be formed with an average of 2 signal photons and 50 background photons per pixel, outperforming methods previously demonstrated at a signal-to-background ratio of 1. We next approach the problem of coarse temporal resolution for photon detection time measurements, which limits the precision of depth estimates. To achieve sub-bin depth precision, we propose a subtractively-dithered lidar implementation, which uses changing synchronization delays to shift the time-quantization bin edges. We examine the generic noise model resulting from dithering Gaussian-distributed signals and introduce a generalized Gaussian approximation to the noise distribution and simple order statistics-based depth estimators that take advantage of this model. Additional analysis of the generalized Gaussian approximation yields rules of thumb for determining when and how to apply dither to quantized measurements. We implement a dithered SPL system and propose a modification for non-Gaussian pulse shapes that outperforms the Gaussian assumption in practical experiments. The resulting dithered-lidar architecture could be used to design SPAD array detectors that can form precise depth estimates despite relaxed temporal quantization constraints. Finally, SPAD dead time effects have been considered a major limitation for fast data acquisition in SPL, since a commonly adopted approach for dead time mitigation is to operate in the low-flux regime where dead time effects can be ignored.
We show that the empirical distribution of detection times converges to the stationary distribution of a Markov chain and demonstrate improvements in depth estimation and histogram correction using our Markov chain model. An example simulation shows that correctly compensating for dead times in a high-flux measurement can yield a 20-fold speedup in data acquisition. The resulting accuracy at high photon flux could enable real-time applications such as autonomous navigation.
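
    The window-based censoring idea summarized above can be sketched in a few lines. The following toy simulation uses assumed pulse and timing parameters (not values from the thesis) and the photon-count regime quoted in the abstract, roughly 2 signal and 50 background detections per pixel, to pick the densest fixed-width window of detection times and form a depth estimate from it; it illustrates the principle only, not the thesis's estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy parameters for a single pixel: Gaussian laser pulse of RMS
# width sigma, laser repetition period T.
T = 100e-9          # repetition period [s]
sigma = 0.5e-9      # pulse RMS width [s]
c = 3e8             # speed of light [m/s]
true_depth = 10.0   # [m]
t_true = 2 * true_depth / c          # true time of flight

# Average counts per pixel chosen to match the regime quoted in the abstract:
# ~2 signal photons clustered around t_true, ~50 uniform background photons.
t_sig = rng.normal(t_true, sigma, rng.poisson(2))
t_bkg = rng.uniform(0.0, T, rng.poisson(50))
detections = np.sort(np.concatenate([t_sig, t_bkg]))

# Window-based censoring: keep the fixed-width window containing the most
# detections, on the premise that signal photons cluster in time while
# background photons do not. With only ~2 signal photons this can fail on
# unlucky draws, which is why the thesis adds spatial (superpixel) priors.
win = 4 * sigma
counts = np.searchsorted(detections, detections + win) - np.arange(detections.size)
start = detections[np.argmax(counts)]
kept = detections[(detections >= start) & (detections < start + win)]

depth_est = c * np.median(kept) / 2.0
print(f"true depth {true_depth:.2f} m, estimated depth {depth_est:.2f} m")
```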

    The investigation of a method to generate conformal lattice structures for additive manufacturing

    Additive manufacturing (AM) allows a geometric complexity in products not seen in conventional manufacturing. This geometric freedom facilitates the design and fabrication of conformal hierarchical structures. Entire parts or regions of a part can be populated with lattice structure, designed to exhibit properties that differ from the solid material used in fabrication. Current computer aided design (CAD) software used to design products is not suitable for the generation of lattice structure models. Although conceptually simple, the memory requirements to store a virtual CAD model of a lattice structure are prohibitively high. Conventional CAD software defines geometry through boundary representation (B-rep); shapes are described by the connectivity of faces, edges and vertices. While useful for representing accurate models of complex shape, the sheer quantity of individual surfaces required to represent each of the relatively simple individual struts that comprise a lattice structure ensures that memory limits are quickly reached. Additionally, the conventional data flow from CAD to manufactured part is arduous, involving several conversions between file formats. Besides lengthening the process, each conversion risks introducing geometric errors that must be fixed before manufacture. A method was developed specifically to generate large arrays of lattice structures, based on a general voxel modelling method identified in the literature review. The method is much less sensitive to geometric complexity than conventional methods and thus facilitates the design of considerably more complex structures. The ability to grade structure designs across regions of a part (termed functional grading) was also investigated, as well as a method to retain connectivity between boundary struts of a conformal structure. In addition, the method streamlines the data flow from design to manufacture: earlier steps of the data conversion process are bypassed entirely. The effect of the modelling method on the surface roughness of parts produced was investigated, as voxel models define boundaries with discrete, stepped blocks. It was concluded that the effect of this stepping on surface roughness was minimal. This thesis concludes with suggestions for further work to improve the efficiency, capability and usability of the conformal structure method developed in this work.
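
    To make the voxel-based idea concrete, the following short Python sketch (an illustration under assumed parameters, not the method implemented in the thesis) fills a cubic design volume with a simple strut lattice by switching on voxels near the edges of a repeating unit cell; the whole structure is held as a single boolean array rather than a boundary representation.

```python
import numpy as np

# Assumed toy parameters: a cubic design volume discretised into voxels,
# filled with a cubic strut lattice. A voxel is solid when it lies within a
# strut radius of a unit-cell edge.
res = 128        # voxels per side of the design volume
cell = 16        # unit-cell size in voxels
strut_r = 1.5    # strut radius in voxels

idx = np.arange(res)
x, y, z = np.meshgrid(idx, idx, idx, indexing="ij")

# per-axis distance of each voxel to the nearest unit-cell boundary plane
dx = np.minimum(x % cell, cell - x % cell)
dy = np.minimum(y % cell, cell - y % cell)
dz = np.minimum(z % cell, cell - z % cell)

# a voxel is near a cell edge when it is close to two of the three
# boundary-plane families at once
near = np.stack([dx, dy, dz]) <= strut_r
lattice = near.sum(axis=0) >= 2      # boolean voxel model of the strut lattice

print(f"solid voxels: {lattice.sum():,} of {lattice.size:,} "
      f"({100 * lattice.mean():.1f}% volume fraction)")
```

    A 128x128x128 boolean array occupies about 2 MB however many struts it encodes, which echoes the memory argument above: a B-rep of the same lattice would need many faces, edges and vertices per strut.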

    Advanced Concept Studies for Supersonic Commercial Transports Entering Service in the 2018-2020 Period Phase 2

    Lockheed Martin Aeronautics Company (LM), working in conjunction with General Electric Global Research (GE GR) and Stanford University, executed a 19 month program responsive to the NASA sponsored "N+2 Supersonic Validation: Advanced Concept Studies for Supersonic Commercial Transports Entering Service in the 2018-2020 Period" contract. The key technical objective of this effort was to validate integrated airframe and propulsion technologies and design methodologies necessary to realize a supersonic vehicle capable of meeting the N+2 environmental and performance goals. The N+2 program is aligned with NASA's Supersonic Project and is focused on providing system level solutions capable of overcoming the efficiency, environmental, and performance barriers to practical supersonic flight. The N+2 environmental and performance goals are outlined in the technical paper AIAA-2014-2138 (Ref. 1) along with the validated N+2 Phase 2 results. Our Phase 2 efforts built upon our Phase 1 studies (Ref. 2) and successfully demonstrated the ability to design and test realistic configurations capable of shaped sonic booms over the width of the sonic boom carpet. Developing a shaped boom configuration capable of meeting the N+2 shaped boom targets is a key goal for the N+2 program. During the Phase 1 effort, LM successfully designed and tested a shaped boom trijet configuration (1021) capable of achieving 85 PLdB under track (forward and aft shock) and up to 28 deg off-track at Mach 1.6. In Phase 2 we developed a refined configuration (1044-2) that extended the under-85 PLdB sonic boom level over the entire carpet, out to 52 deg off-track, at a cruise Mach number of 1.7. Further, the loudness of the configuration across operational conditions averages 79 PLdB. These calculations rely on propagation employing the Burgers (sBOOM) rounding methodology, and there are indications that the configuration's average loudness would actually be 75 PLdB. We also added significant fidelity to the design of the configuration in this phase by performing a low speed wind tunnel test at our LTWT facility in Palmdale, by more complete modeling of propulsion effects in our sonic boom analysis, and by refining our configuration packaging and performance assessments. Working with General Electric, LM performed an assessment of the impact of inlet and nozzle effects on the sonic boom signature of the LM N+2 configurations. Our results indicate that inlet/exhaust streamtube boundary conditions are adequate for conceptual design studies, but realistic propulsion modeling at similar streamtube conditions does have a small but measurable impact on the sonic boom signature. Previous supersonic transport studies have identified aeroelastic effects as one of the major challenges associated with the long, slender vehicles particularly common with shaped boom aircraft (Ref. 3). Under the Phase 2 effort, we have developed a detailed structural analysis model to evaluate the impact of flexibility and structural considerations on the feasibility of future quiet supersonic transports. We looked in particular at dynamic structural modes and flutter as failure modes that must be avoided. We found that for our N+2 design in particular, adequate flutter margin existed. Our flutter margin is large enough to cover uncertainties such as large increases in engine weight, and the margin is relatively easy to increase with additional stiffening mass.
The lack of major aeroelastic problems probably derives in part from an early design bias. While shaped boom aircraft require long length, they are not required to be thin. We intentionally developed our structural depths to avoid major flexibility problems. So at the end of Phase 2, we have validated that aeroelastic problems are not necessarily endemic to shaped boom designs. Experimental validation of sonic boom design and analysis techniques was the primary objective of the N+2 Supersonic Validations contract, and in this phase LM participated in four high speed wind tunnel tests. The first, the so-called Parametric Test in the Ames 9x7 tunnel, took an exhaustive look at the effects of varying parameters such as humidity, total pressure, sample time, spatial averaging distance, and the number of measurement locations. From the results we learned to obtain data faster and more accurately, and test condition tolerances were made easy to meet (eliminating the 60 percent of test time previously wasted when condition tolerances could not be held). The next two tests used different tunnels. The Ames 11 ft tunnel was used to test lower Mach numbers of 1.2 and 1.4. There were several difficulties in using this tunnel for sonic boom testing for the first time, including having to shift the measurement Mach numbers to 1.15 and 1.3 to avoid flow problems. It is believed that the 11 ft could be used successfully to measure sonic boom, but there are likely to be a number of test condition restrictions. The Glenn 8x6 ft tunnel was used next; it has a number of desirable features for sonic boom measurement. While the Ames 9x7 can only test Mach 1.55 to 2.55 and the 11 ft can only test Mach 1.3 and lower, the Glenn 8x6 can test continuously from Mach 0.3 to 2.0. Unfortunately, test measurement accuracy was compromised by a reference pressure drift. Post-test analysis revealed that the drift occurred when the Mach number drifted slightly. Test measurements indicated that if Mach number drift were eliminated, results from the 8x6 would be more accurate, especially at longer distances, than results from the 9x7. The fourth test in the 9x7, called LM4, used everything we learned to comprehensively and accurately measure our new 1044-02 configuration with a full-carpet shaped signature design. Productivity was 8 times greater than in our Phase 1 LM3 test. Measurement accuracy and repeatability were excellent out to 42 in. However, measurements at greater distances require the rail in the aft position and become substantially less accurate. Further signature processing or measurement improvements are needed for signature validation beyond the near field.

    Analysis and Measurements of Vehicle Door Structural Dynamic Response

    In order to reduce lead time and cost in the product development of vehicles, more of the development will be carried out virtually. However, the predictive capability of simulation models is questioned, so the simulation models need to be correlated against hardware measurements and the modeling techniques improved. As part of the process of improving vehicle system model capability, the main objective of this project is to improve the structural dynamic response prediction capability of a vehicle door simulation model in a free-free configuration under steady-state conditions. The work breaks down into the following steps: simulate eigenmodes and frequency response, perform hardware measurements, correlate simulations against measurements using the modal assurance criterion, the frequency response assurance criterion and sum-blocks, and update the simulation model. These actions are performed for four successively more complex door structures, starting from a door in white and ending at a trimmed door. The correlation status of the original model was only reasonably good for the door in white configuration; all other configurations displayed serious correlation mismatch. By replacing the existing antiflutter models (connecting the side impact rail to the outer panel) in the door in white configuration with simple spring elements, the correlation for that configuration was improved. With the window and seals attached, the correlation problems were solved by giving the springs that act as seals stiffness in the plane of the window, the idea being to take friction into account. Also, by adjusting the spring stiffness of the seals, fair correlation could be achieved. The most important issue is to relate these results to component properties known before building simulation models. The two remaining configurations need more attention for better correlation. By using more detailed models the correlation could be improved, which shows the obvious trade-off between accuracy and computational effort. However, improving the model detail level falls outside the scope of this project.
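
    Since the correlation work above leans on the modal assurance criterion, a minimal implementation may help fix ideas. The sketch below (with made-up mode shapes, not door measurement data) computes the standard MAC matrix between an analytical and an experimental mode-shape set; values near 1 flag well-correlated mode pairs.

```python
import numpy as np

def mac(phi_a: np.ndarray, phi_e: np.ndarray) -> np.ndarray:
    """Modal Assurance Criterion between two mode-shape sets.

    phi_a, phi_e: (n_dof, n_modes) arrays of analytical / experimental mode
    shapes sampled at the same measurement points. Returns an
    (n_modes_a, n_modes_e) matrix with entries in [0, 1].
    """
    num = np.abs(phi_a.conj().T @ phi_e) ** 2
    den = np.outer(np.sum(np.abs(phi_a) ** 2, axis=0),
                   np.sum(np.abs(phi_e) ** 2, axis=0))
    return num / den

# Toy check with made-up shapes: "measured" modes are the simulated ones
# plus noise, so the MAC matrix should be near-identity.
rng = np.random.default_rng(1)
phi_sim = np.linalg.qr(rng.standard_normal((30, 4)))[0]
phi_test = phi_sim + 0.05 * rng.standard_normal((30, 4))
print(np.round(mac(phi_sim, phi_test), 2))
```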

    Phase Combination and its Application to the Solution of Macromolecular Structures: Developing ALIXE and SHREDDER

    Phasing X-ray data within the frame of the ARCIMBOLDO programs requires very accurate models and a sophisticated evaluation of the possible hypotheses. ARCIMBOLDO uses small fragments, which are placed with the maximum likelihood molecular replacement program Phaser and are subjected to density modification and autotracing with the program SHELXE. The software takes its name from the Italian painter Giuseppe Arcimboldo, who composed portraits out of common objects such as vegetables or flowers. Most possible arrangements of such objects produce only a still life; just a few truly form a portrait. In a similar way, of all possible placements of small protein fragments, only a few will be correct and will allow the full “portrait” of the protein to be obtained. The work presented in this thesis has explored new ways to exploit partial information and increase the signal in the process of phasing with fragments. This has been achieved through two main pieces of software, ALIXE and SHREDDER. With the spherical mode in ARCIMBOLDO_SHREDDER, the aim is to derive compact fragments starting from a distant homolog of the unknown protein of interest. Locations for these fragments are then searched with Phaser, using strategies that refine the fragments against the experimental data and give them more degrees of freedom. With ALIXE, the aim is to combine information in reciprocal space from partial solutions, such as those produced by SHREDDER, and to use the coherence between them to guide their merging and increase the information content, so that the step of density modification and autotracing starts from a more complete solution. Even if partial solutions contain both correct and incorrect information, combining solutions that share some similarity yields a better approximation to the correct structure. Both ARCIMBOLDO_SHREDDER and ALIXE have been used on test data for development and optimisation, but also on datasets from previously unknown structures, which have been solved thanks to these programs. The programs are distributed through the group's website and through software suites of general use in the crystallographic community, such as CCP4 and SBGrid.
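
    As a loose illustration of combining phase information in reciprocal space (a toy of the general idea only, not the algorithm implemented in ALIXE), the sketch below compares two partial phase sets reflection by reflection via their mean phase difference and, if they are coherent enough, merges them by a figure-of-merit-weighted sum of unit phasors.

```python
import numpy as np

rng = np.random.default_rng(7)

# Made-up phase sets for two partial solutions over the same reflections,
# with assumed per-reflection figures of merit (FOM).
n_refl = 1000
phi_a = rng.uniform(-np.pi, np.pi, n_refl)        # phases of solution A [rad]
phi_b = phi_a + rng.normal(0.0, 0.6, n_refl)      # solution B, partly coherent with A
fom_a = rng.uniform(0.2, 0.6, n_refl)
fom_b = rng.uniform(0.2, 0.6, n_refl)

# Mean phase difference, taking the 2*pi wrap-around into account.
diff = np.angle(np.exp(1j * (phi_a - phi_b)))
mpd = np.degrees(np.mean(np.abs(diff)))
print(f"mean phase difference: {mpd:.1f} deg")

if mpd < 60.0:  # coherence threshold, arbitrary for this toy example
    # FOM-weighted vector sum of unit phasors gives the merged phase and a
    # crude combined weight per reflection.
    combined = fom_a * np.exp(1j * phi_a) + fom_b * np.exp(1j * phi_b)
    phi_comb = np.angle(combined)
    fom_comb = np.abs(combined) / (fom_a + fom_b)
    print(f"merged {n_refl} reflections; mean combined weight {fom_comb.mean():.2f}")
```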

    Modeling, Simulation and Control of Very Flexible Unmanned Aerial Vehicle

    This dissertation presents research on modeling, simulation and control of very flexible aircraft. The work includes theoretical and numerical developments, as well as experimental validations. On the theoretical front, new kinematic equations for modeling sensors are derived. This formulation uses geometrically nonlinear strain-based finite elements developed as part of the University of Michigan Nonlinear Aeroelastic Simulation Toolbox (UM/NAST). Numerical linearizations of both the flexible vehicle and the sensor measurements are developed, allowing a linear time-invariant model to be extracted for control analysis and design. Two different algorithms for fusing data from different sensor sources to extract elastic deformation are investigated. The nonlinear least squares method uses geometry and nonlinear beam strain-displacement kinematics to reconstruct the wing shape; detailed information such as material properties or loading conditions is not required. The second method is the Kalman filter, implemented in a multi-rate form. This method requires a dynamical system representation to be available, but it is more robust to noise corruption in the sensor measurements. To control maneuver loads, Model Predictive Control is applied to maneuver load alleviation of a representative very flexible aircraft (X-HALE). Numerical studies are performed in UM/NAST for pitch-up and roll maneuvers. Both control and state constraints are successfully enforced, while reference commands are still tracked. MPC execution is also timed, and the current implementation is capable of near-real-time operation. On the experimental front, two aeroelastic testbed vehicles (ATV-6B and RRV-6B) are instrumented with sensors. On ATV-6B, an extensive set of sensors measuring structural, flight dynamic, and aerodynamic information is integrated on board. A novel stereo-vision measurement system mounted on the body center and looking towards the wing tip measures wing deformation. High-brightness LEDs are used as target markers for easy detection and to allow each view to be captured with a fast camera shutter speed. Experimental benchmarks are conducted to verify the accuracy of this methodology. RRV-6B flight test results are presented. System identification is applied to the experimental data to generate a SISO description of the flexible aircraft. The system identification results indicate that the UM/NAST X-HALE model requires some tuning to match the observed dynamics; however, the general trends predicted by the numerical model are in agreement with the flight test results. Finally, using this identified plant, a stability augmentation autopilot is designed and flight tested. This autopilot utilizes a cascaded two-loop proportional-integral control design, with the inner loop regulating angular rates and the outer loop regulating attitude. Each of the three axes is assumed to be decoupled and designed using SISO methodology. This stabilization system demonstrates significant improvements in the RRV-6B handling qualities. The dissertation ends with a summary of the results and conclusions and its main contributions to the field. Suggestions for future work are also presented.
    PHD, Aerospace Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    https://deepblue.lib.umich.edu/bitstream/2027.42/144019/1/pziyang_1.pd
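
    The cascaded two-loop proportional-integral idea described for the stability augmentation autopilot can be sketched on a single axis. The toy simulation below uses an assumed rigid pitch model and illustrative gains (none of them from the RRV-6B design): the outer loop turns an attitude error into a pitch-rate command, and the inner loop turns the rate error into a control moment.

```python
import numpy as np

# Assumed single-axis (pitch) toy model and gains, for illustration only.
dt, t_end = 0.01, 10.0
I_y = 0.8                       # pitch inertia [kg m^2]
kp_att, ki_att = 2.0, 0.2       # outer loop: attitude error -> rate command
kp_rate, ki_rate = 1.5, 0.5     # inner loop: rate error -> control moment

theta, q = 0.0, 0.0             # pitch attitude [rad], pitch rate [rad/s]
theta_ref = np.deg2rad(5.0)     # step attitude command
i_att = i_rate = 0.0            # integrator states

for _ in range(int(t_end / dt)):
    # outer loop: attitude error -> pitch-rate command
    e_att = theta_ref - theta
    i_att += e_att * dt
    q_cmd = kp_att * e_att + ki_att * i_att

    # inner loop: rate error -> control moment (e.g. elevator)
    e_rate = q_cmd - q
    i_rate += e_rate * dt
    M = kp_rate * e_rate + ki_rate * i_rate

    # crude rigid-body pitch dynamics with a little damping, Euler-integrated
    q += (M - 0.3 * q) / I_y * dt
    theta += q * dt

print(f"pitch after {t_end:.0f} s: {np.degrees(theta):.2f} deg (command 5.00 deg)")
```

    Treating each axis as decoupled, as the abstract describes, means three such SISO loops run side by side; the inner rate loop provides damping while the outer attitude loop and its integrator remove steady-state error.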