
    Coordinated design of coding and modulation systems

    The joint optimization of the coding and modulation systems employed in telemetry systems was investigated. Emphasis was placed on formulating the inner and outer coding standards used by the Goddard Space Flight Center. Convolutional codes were found that are nearly optimum for use with Viterbi decoding in the inner coding of concatenated coding systems. A new convolutional code, the unit-memory code, was discovered; its byte-oriented structure makes it well suited for use in the inner coding system. Simulations of sequential decoding on the deep-space channel were carried out to compare directly the various convolutional codes proposed for use in deep-space systems.
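
    As a concrete illustration of the decoding algorithm named above, here is a minimal hard-decision Viterbi decoder for a toy rate-1/2, constraint-length-3 convolutional code with generators (7, 5) in octal. The code and its parameters are chosen for brevity; this is a sketch, not the NASA-standard or unit-memory codes studied in this work.

    G = [0b111, 0b101]  # toy generator polynomials (7, 5 in octal)
    K = 3               # constraint length; 2**(K-1) = 4 trellis states

    def parity(x):
        return bin(x).count("1") & 1

    def encode(bits):
        state, out = 0, []
        for b in bits:
            reg = (b << (K - 1)) | state          # newest bit enters at the top
            out += [parity(reg & g) for g in G]   # one output bit per generator
            state = reg >> 1                      # advance the shift register
        return out

    def viterbi_decode(received):
        n_states = 1 << (K - 1)
        INF = float("inf")
        metrics = [0] + [INF] * (n_states - 1)    # encoder starts in state 0
        paths = [[] for _ in range(n_states)]
        for i in range(0, len(received), 2):
            r = received[i:i + 2]
            new_metrics = [INF] * n_states
            new_paths = [None] * n_states
            for s in range(n_states):
                if metrics[s] == INF:
                    continue
                for b in (0, 1):
                    reg = (b << (K - 1)) | s
                    branch = [parity(reg & g) for g in G]
                    nxt = reg >> 1
                    m = metrics[s] + sum(e != x for e, x in zip(branch, r))
                    if m < new_metrics[nxt]:      # keep the survivor path
                        new_metrics[nxt] = m
                        new_paths[nxt] = paths[s] + [b]
            metrics, paths = new_metrics, new_paths
        return paths[min(range(n_states), key=metrics.__getitem__)]

    msg = [1, 0, 1, 1, 0, 0, 1]
    noisy = encode(msg)
    noisy[3] ^= 1                                 # flip one channel bit
    assert viterbi_decode(noisy) == msg           # the single error is corrected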

    Intrinsic and extrinsic geometries of a tidally deformed black hole

    A description of the event horizon of a perturbed Schwarzschild black hole is provided in terms of the intrinsic and extrinsic geometries of this null hypersurface. The description relies on a Gauss-Codazzi theory of null hypersurfaces embedded in spacetime, which extends the standard theory of spacelike and timelike hypersurfaces involving the first and second fundamental forms. We show that the intrinsic geometry of the event horizon is invariant under a reparameterization of the null generators, while the extrinsic geometry depends on this choice of gauge. We apply the formalism to solutions of the vacuum field equations that describe a tidally deformed black hole. In a first instance we consider a slowly varying, quadrupolar tidal field imposed on the black hole, and in a second instance we examine the tide raised during a close parabolic encounter between the black hole and a small orbiting body. Comment: 27 pages, 4 figures
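
    The gauge behaviour described in this abstract can be stated compactly. The LaTeX fragment below is a schematic summary in our own notation (not necessarily the paper's conventions), writing gamma_AB for the induced metric on horizon cross-sections and B_AB for the transverse extrinsic-curvature tensor.

    \documentclass{article}
    \usepackage{amsmath}
    \begin{document}
    % Schematic summary in our own notation; conventions may differ from the paper.
    Let $\lambda$ be a parameter on the null generators, with tangent
    $k^\alpha = dx^\alpha/d\lambda$. The intrinsic and extrinsic data are,
    schematically,
    \[
      \gamma_{AB} = g_{\alpha\beta}\, e^{\alpha}_{A} e^{\beta}_{B},
      \qquad
      B_{AB} = \tfrac{1}{2}\, \partial_\lambda \gamma_{AB} .
    \]
    Under a reparameterization of each generator, $k^\alpha \to e^{-\beta} k^\alpha$
    for some function $\beta$ on the horizon, and
    \[
      \gamma_{AB} \to \gamma_{AB},
      \qquad
      B_{AB} \to e^{-\beta} B_{AB} ,
    \]
    so the intrinsic geometry is gauge invariant while the extrinsic geometry
    rescales with the choice of parameterization.
    \end{document}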

    Controls on Sediment Bed Erodibility in a Muddy, Partially-Mixed Tidal Estuary

    The objectives of this study are to better understand controls on bed erodibility in muddy estuaries, including the roles of both sediment properties and recent hydrodynamic history. An extensive data set of erodibility measurements, sediment properties, and hydrodynamic information was utilized to create statistical models to predict the erodibility of the sediment bed. This data set includes more than 160 eroded-mass versus applied-stress profiles collected over 15 years along the York River estuary, a system characterized by “depth-limited erosion,” such that the critical stress for erosion increases rapidly with depth into the bed. For this study, erodibility was quantified in two ways: the mass of sediment eroded at 0.2 Pa (a stress commonly produced by tides in the York), and the normalized shape of the eroded-mass profile for stresses between 0 and 0.56 Pa. In models with eroded mass as the response variable, the explanatory variables with the strongest influence were (in descending order) tidal range squared averaged over the previous 8 days (a proxy for recent bottom stress), salinity or past river discharge, sediment organic content, recent water level anomalies, percent sand, percent clay, and bed layering. Results support the roles of (1) recent deposition and bed disturbance in increasing erodibility and (2) cohesion/consolidation and erosion/winnowing of fines in decreasing erodibility. The most important variable influencing the shape of the eroded-mass profile was eroded mass at 0.2 Pa, such that more (vs. less) erodible cases exhibited straighter (vs. more strongly curved) profiles. Overall, hydrodynamic variables were the best predictors of eroded mass at 0.2 Pa, which, in turn, was the best predictor of profile shape. This suggests that calculations of past bed stress and the position of the salt intrusion can serve as useful empirical proxies for muddy bed consolidation state and the resulting erodibility of the uppermost seabed in estuarine numerical models. Observed water content averaged over the top 1 cm was a poor predictor of erodibility, likely because typical tidal stresses suspend less than 1 mm of bed sediment. Future field sampling would benefit from higher-resolution observations of water content within the bed’s top few millimeters.
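
    A minimal sketch of the statistical-modeling step described above: a multiple linear regression of eroded mass at 0.2 Pa on hydrodynamic and sediment predictors. The file name and column names are hypothetical placeholders, not the variable names in the actual dataset.

    import pandas as pd
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    df = pd.read_csv("york_erodibility.csv")     # hypothetical file name
    predictors = [
        "tidal_range_sq_8day",   # proxy for recent bottom stress
        "salinity",              # or past river discharge
        "organic_content",
        "water_level_anomaly",
        "pct_sand",
        "pct_clay",
    ]
    X, y = df[predictors], df["eroded_mass_at_0p2Pa"]

    model = LinearRegression().fit(X, y)
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(dict(zip(predictors, model.coef_.round(3))))
    print("cross-validated R^2:", scores.mean().round(3))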

    Scalable Mining of Common Routes in Mobile Communication Network Traffic Data

    A probabilistic method for inferring common routes from mobile communication network traffic data is presented. Besides providing mobility information, valuable in a multitude of application areas, the method has the dual purpose of enabling efficient coarse-graining as well as anonymisation by mapping individual sequences onto common routes. The approach is to represent spatial trajectories by Cell ID sequences that are grouped into routes using locality-sensitive hashing and graph clustering. The method is demonstrated to be scalable, and to accurately group sequences, using an evaluation set of GPS-tagged data.
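
    A toy sketch of the grouping idea, using MinHash-style locality-sensitive hashing over bigrams of consecutive Cell IDs. The paper's exact hash family, banding parameters, and graph-clustering stage are not reproduced here; this only illustrates how similar sequences come to collide in shared buckets.

    import hashlib
    from collections import defaultdict

    def shingles(seq, k=2):
        """Set of k-grams of consecutive cell IDs in a trajectory."""
        return {tuple(seq[i:i + k]) for i in range(len(seq) - k + 1)}

    def minhash(shingle_set, n_hashes=32):
        """MinHash signature: per seed, the smallest hash over the set."""
        return tuple(
            min(int(hashlib.md5(f"{seed}:{s}".encode()).hexdigest(), 16)
                for s in shingle_set)
            for seed in range(n_hashes))

    def lsh_buckets(sequences, bands=16, rows=2):
        """Band the signatures; similar sequences collide in some bucket."""
        buckets = defaultdict(list)
        for idx, seq in enumerate(sequences):
            sig = minhash(shingles(seq), n_hashes=bands * rows)
            for b in range(bands):
                buckets[(b, sig[b * rows:(b + 1) * rows])].append(idx)
        return buckets

    trips = [
        [101, 102, 103, 104],    # two near-identical trips, one distinct
        [101, 102, 103, 105],
        [201, 202, 203, 204],
    ]
    shared = {k: v for k, v in lsh_buckets(trips).items() if len(v) > 1}
    # with high probability trips 0 and 1 collide in a band; trip 2 stays apart
    print(shared)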

    ‘It’s not what it looks like. I’m Santa’: connecting community through film

    The lived experiences of young people are becoming increasingly marginalised within the narrowly defined curricula of neoliberal contexts. Many young people are also cast within the media according to deficit discourses of youth, which contributes to the fragmentation of communities and the limitation of interaction between generations. This article describes a film project in which young people living in an ex-mining community in the Midlands of England worked in and with their community to create a representation of where they live. As part of the process, the young filmmakers did more than connect to other people’s memories as repositories of information; both as process and as product, their film can be seen to connect shared narratives of people and place, across time and space. We argue that this project offers a timely opportunity to reflect upon the ways in which we understand learning in and out of English classrooms.

    Electrode level Monte Carlo model of radiation damage effects on astronomical CCDs

    Current optical space telescopes rely upon silicon Charge Coupled Devices (CCDs) to detect and image the incoming photons. The performance of a CCD detector depends on its ability to transfer electrons through the silicon efficiently, so that the signal from every pixel may be read out through a single amplifier. This process of electron transfer is highly susceptible to the effects of solar proton damage (or non-ionizing radiation damage): charged particles passing through the CCD displace silicon atoms, introducing energy levels into the semiconductor bandgap that act as localized electron traps. The resulting reduction in Charge Transfer Efficiency (CTE) leads to signal loss and image smearing. The European Space Agency's astrometric Gaia mission will make extensive use of CCDs to create the most complete and accurate stereoscopic map to date of the Milky Way. In the context of the Gaia mission, CTE is referred to by the complementary quantity Charge Transfer Inefficiency (CTI = 1 - CTE). CTI is an extremely important issue that threatens Gaia's performance. We present here a detailed Monte Carlo model, developed to simulate the operation of a damaged CCD at the pixel electrode level. This model implements a new approach to both the charge density distribution within a pixel and the charge capture and release probabilities, which allows the reproduction of CTI effects on a variety of measurements over a large signal-level range, in particular for signals of the order of a few electrons. A running version of the model, as well as a brief documentation and a few examples, is readily available at http://www.strw.leidenuniv.nl/~prodhomme/cemga.php as part of the CEMGA Java package (CTI Effects Models for Gaia). Comment: accepted by MNRAS on 13 February 2011; 15 pages, 7 figures and 5 tables
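
    To make the capture-and-release mechanism concrete, here is a toy per-pixel Monte Carlo of a single charge packet clocked through a damaged column. Trap counts, dwell times and time constants are invented for illustration; this is not the CEMGA model itself, which also treats the intra-pixel charge density distribution.

    import math
    import random

    N_PIXELS = 50
    TRAPS_PER_PIXEL = 2        # invented trap density
    T_DWELL = 1e-3             # dwell time per transfer, s (assumed)
    TAU_C, TAU_E = 5e-4, 2e-3  # capture/emission time constants (assumed)
    P_CAPTURE = 1 - math.exp(-T_DWELL / TAU_C)
    P_RELEASE = 1 - math.exp(-T_DWELL / TAU_E)

    def transfer(signal_electrons, seed=42):
        """Clock one packet down the column; return electrons surviving in
        the packet and electrons released behind it (the CTI trail)."""
        rng = random.Random(seed)
        trapped = [0] * N_PIXELS
        packet, trail = signal_electrons, 0
        for px in range(N_PIXELS):
            # capture: each empty trap may grab an electron from the packet
            for _ in range(TRAPS_PER_PIXEL - trapped[px]):
                if packet > 0 and rng.random() < P_CAPTURE:
                    trapped[px] += 1
                    packet -= 1
            # emission: each filled trap may release into trailing pixels
            for _ in range(trapped[px]):
                if rng.random() < P_RELEASE:
                    trapped[px] -= 1
                    trail += 1
        return packet, trail

    out, trail = transfer(100)
    print(f"in: 100  out: {out}  released into trail: {trail}")
    print(f"fractional loss per transfer: {(100 - out) / (100 * N_PIXELS):.2e}")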

    Supporting Data: Controls on Sediment Bed Erodibility in a Muddy, Partially-Mixed Tidal Estuary, York River, Virginia

    The dataset consists of all sampling cruises with data that were analyzed and used in the statistical modeling associated with Wright (2021) and Wright et al. (2022). Each cruise folder includes erodibility data analyzed using a Gust microcosm, along with sediment and water column characteristics.

    The impact of galaxy colour gradients on cosmic shear measurement

    Cosmic shear has been identified as the method with the most potential to constrain dark energy. To capitalize on this potential, it is necessary to measure galaxy shapes with great accuracy, which in turn requires a detailed model for the image blurring by the telescope and atmosphere, the point spread function (PSF). In general, the PSF varies with wavelength and therefore the PSF integrated over an observing filter depends on the spectrum of the object. For a typical galaxy the spectrum varies across the galaxy image; thus the PSF depends on the position within the image. We estimate the bias on the shear due to such colour gradients by modelling galaxies using two co-centred, co-elliptical Sérsic profiles, each with a different spectrum. We estimate the effect of ignoring colour gradients and find that the shear bias from a single galaxy can be very large, depending on the properties of the galaxy. We find that halving the filter width reduces the shear bias by a factor of about 5. We show that, to first order, tomographic cosmic shear two-point statistics depend on the mean shear bias over the galaxy population at a given redshift. For a single broad filter, and averaging over a small galaxy catalogue from Simard et al., we find a mean shear bias which is subdominant to the predicted statistical errors for future cosmic shear surveys. However, the true mean shear bias may exceed the statistical errors, depending on how accurately the catalogue represents the observed distribution of galaxies in the cosmic shear survey. We then investigate the bias on the shear for two-filter imaging and find that the bias is reduced by at least an order of magnitude. Lastly, we find that it is possible to calibrate galaxies for which colour gradients were ignored using two-filter imaging of a fair sample of noisy galaxies, if the galaxy model is known. For a signal-to-noise ratio of 25 the number of galaxies required in each tomographic redshift bin is of the order of 10
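
    A 1-D numerical toy of the mechanism: a red "bulge" and a blue "disc", blurred by a PSF whose width grows with wavelength. The bias appears when one effective PSF, weighted by the galaxy's total spectrum, is assumed to apply across the whole image. All numbers below are invented for illustration, and the profiles are Gaussian rather than Sérsic for simplicity; the printed bias is illustrative only.

    import numpy as np

    x = np.linspace(-8.0, 8.0, 1601)
    dx = x[1] - x[0]

    SIGMA_BULGE, SIGMA_DISC = 0.3, 1.0       # intrinsic sizes (arbitrary units)
    SPEC_BULGE = {"blue": 0.2, "red": 0.8}   # red bulge: flux per band
    SPEC_DISC = {"blue": 0.8, "red": 0.2}    # blue disc
    SIGMA_PSF = {"blue": 0.50, "red": 0.65}  # PSF widens with wavelength

    def gauss(sig):
        g = np.exp(-0.5 * (x / sig) ** 2)
        return g / (g.sum() * dx)            # unit flux

    def blur(img, psf):
        return np.convolve(img, psf, mode="same") * dx

    # true observed image: each component sees the PSF of each band, weighted
    # by that component's own flux in that band
    true_img = sum(
        spec[band] * blur(gauss(sig), gauss(SIGMA_PSF[band]))
        for band in ("blue", "red")
        for sig, spec in ((SIGMA_BULGE, SPEC_BULGE), (SIGMA_DISC, SPEC_DISC)))

    # naive image: one effective PSF built from the *total* galaxy spectrum,
    # applied uniformly to the whole galaxy profile
    tot = {b: SPEC_BULGE[b] + SPEC_DISC[b] for b in ("blue", "red")}
    eff_psf = sum(tot[b] * gauss(SIGMA_PSF[b]) for b in tot) / sum(tot.values())
    gal = (gauss(SIGMA_BULGE) + gauss(SIGMA_DISC)) / 2.0
    naive_img = 2.0 * blur(gal, eff_psf)     # match total flux of true_img

    def weighted_size2(img, w_sig=1.5):
        """Gaussian-weighted second moment, as in practical shape measurement;
        the weighting is what makes the estimate sensitive to colour gradients
        (unweighted second moments add under convolution and show no bias)."""
        w = np.exp(-0.5 * (x / w_sig) ** 2)
        return float(np.sum(w * img * x**2) / np.sum(w * img))

    t, n = weighted_size2(true_img), weighted_size2(naive_img)
    print(f"weighted size^2  true: {t:.5f}  naive: {n:.5f}")
    print(f"fractional size bias: {(n - t) / t:+.2%}")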

    Spitzer SAGE-SMC Infrared Photometry of Massive Stars in the Small Magellanic Cloud

    We present a catalog of 5324 massive stars in the Small Magellanic Cloud (SMC), with accurate spectral types compiled from the literature, and a photometric catalog for a subset of 3654 of these stars, with the goal of exploring their infrared properties. The photometric catalog consists of stars with infrared counterparts in the Spitzer SAGE-SMC survey database, for which we present uniform photometry from 0.3-24 um in the UBVIJHKs+IRAC+MIPS24 bands. We compare the color-magnitude diagrams and color-color diagrams to those of the Large Magellanic Cloud (LMC), finding that the brightest infrared sources in the SMC are also the red supergiants, supergiant B[e] (sgB[e]) stars, luminous blue variables, and Wolf-Rayet stars, with the latter exhibiting less infrared excess, the red supergiants being less dusty, and the sgB[e] stars being on average less luminous than their LMC counterparts. Among the objects detected at 24 um are a few very luminous hypergiants, 4 B-type stars with peculiar, flat spectral energy distributions, and all 3 known luminous blue variables. We detect a distinct Be star sequence, displaced to the red, and suggest a novel method of confirming Be star candidates photometrically. We find a higher fraction of Oe and Be stars among O and early-B stars in the SMC, respectively, when compared to the LMC, and that the SMC Be stars occur at higher luminosities. We estimate mass-loss rates for the red supergiants, confirming the correlation with luminosity even at the low metallicity of the SMC. Finally, we confirm the new class of stars displaying composite A & F type spectra and the sgB[e] nature of 2dFS1804, and find the F0 supergiant 2dFS3528 to be a candidate luminous blue variable with cold dust. Comment: 23 pages, 17 figures, 5 tables; accepted for publication in the Astronomical Journal
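
    As a generic illustration of flagging candidates photometrically, one could cut on a near- to mid-infrared colour excess relative to the ordinary early-B sequence. The file, column names, and 3-sigma threshold below are invented; this is not the specific selection method proposed in the paper.

    import pandas as pd

    cat = pd.read_csv("smc_massive_stars.csv")     # hypothetical catalog file
    cat["j_m36"] = cat["Jmag"] - cat["IRAC36mag"]  # J - [3.6] colour

    # reference colour distribution of the spectroscopic B-type stars
    is_b = cat["spec_type"].str.startswith("B", na=False)
    seq = cat.loc[is_b, "j_m36"]
    cut = seq.median() + 3.0 * seq.std()           # arbitrary 3-sigma offset

    candidates = cat[cat["j_m36"] > cut]           # displaced to the red
    print(f"{len(candidates)} photometric Be candidates (colour > {cut:.2f})")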

    VIMS 2019 Potomac River Estuary Data in Support of: Improved Penetrometer Performance in Stratified Sediment for Cost-Effective Characterization, Monitoring and Management of Submerged Munitions Sites (SERDP project: MR18-1233)

    This work complements the efforts by the Virginia Tech Department of Civil & Environmental Engineering in SERDP MR18-1233, as described in the project’s final report (Stark et al., 2020) and in the Master’s thesis by Dennis Kiptoo (Kiptoo, 2020). Previous work on this project, conducted in the York River during 2018-2019, improved calibration of the Bluedrop free fall penetrometer (FFP) with high-resolution sampling of a variety of sediment types (Massey et al., 2020a). The calibration methods developed (Kiptoo, 2020) were used to rapidly identify different sediment types from a grid of 59 Bluedrop FFP stations sampled on the morning of August 5, 2019 on the Potomac River in Wades Bay, just downriver from Mallows Bay Park in Nanjemoy, MD. The Bluedrop FFP was deployed numerous times at each station, and the data were retained and processed by Virginia Tech.

    The Bluedrop stations were arranged in a grid of 8 transects (A-H) perpendicular to shore, spaced ~200 meters apart. Each transect had 5 to 10 stations, depending on the distance of the first station from the shoreline, also spaced ~200 meters apart, with stations identified as A1, A2, etc. along the transect. Exact locations for these stations, along with the water depth and temperature at each station, were collected with the GPS onboard the R/V Pintail and are described in the attached VIMS data report CHSD-2020-02 (Massey et al., 2020b). Detailed methodologies, including data processing, station maps, and figures from the processed data, can also be found in the report.

    Four distinct sediment types were identified from the Bluedrop FFP results, and the identified regions of these sediment types are indicated on the station map in the data report. A sediment sampling station was selected within each of the regions identified. One sediment station (corresponding to Bluedrop stations C1, G3, G6, and D6) was sampled each day over a period of 4 days from August 5-8, 2019, respectively. At each site, a GOMEX box core was used to collect several sediment grab samples, from which sub-cores were collected to minimize edge effects that would disturb the sediment/water interface. At each site, the top ten centimeters, if possible, from two 4” diameter sub-cores were sliced in 1 cm increments and combined for later analysis in the lab for grain size (sand, silt, and clay) distribution (data stored under Grainsize) as well as percent moisture and percent volatile content by loss on ignition at 550 °C (data stored under Moisture). Two additional 4” diameter cores were analyzed for sediment erodibility using a Gust microcosm (data stored under GUST), and two rectangular cores were imaged by digital X-ray analysis (data stored under Xray). Salinity and temperature profiles were collected at each site with a SonTek CastAway CTD (data stored under CTD).

    At each muddy sediment station (C1, G3, and G6), several gravity core samples were collected. One core was imaged by digital X-ray analysis (data stored under Xray) and sliced and analyzed for grain size (sand, silt, and clay) distribution (data stored under Gravity Core). The other gravity core samples were retained by Virginia Tech personnel for analyses done in their lab (Kiptoo, 2020). The gravity core would not penetrate sufficiently into sandy sediment; therefore there are no gravity cores for D6. Digital X-ray images were taken of a core from each site after it was sliced lengthwise (data stored under Xrays). The cores were then subsampled in 5 cm intervals and analyzed for grain size (sand, silt, and clay) distribution (data stored under grain size). Additional gravity cores were retained by Virginia Tech personnel from each site for later analysis at their lab. At D6, samples were also collected for Virginia Tech personnel using the GOMEX grab.

    Acoustic Doppler Current Profiler (ADCP) transect data were collected on August 6. One ADCP transect was collected along each Bluedrop transect perpendicular to the river flow (A-H). ADCP data can be used to examine the general velocity flow field around the sediment sampling stations as well as to provide an approximate measure of the bathymetry along the transects. Chirp transects were collected on the same day as the ADCP transects along the numbered transects (1-10), and the data were retained by Virginia Tech personnel.