Performance improvements to the 802.11 wireless network medium access control sub-layer : a thesis presented in partial fulfilment of the requirements for the degree of Master of Engineering in Computer Systems Engineering at Massey University
This thesis presents the outcome of research into, and development of, improvements to the 802.11 wireless networking medium access control (MAC) sub-layer. The main products of the research are three types of improvement that increase the efficiency and throughput of the 802.11 protocol. Beginning with an overview of the original 802.11 physical layer and MAC sub-layer standard, the introductory chapters then cover the many supplements to the original standard (including a brief look at the future 802.11n supplement). The current state of the 802.11 MAC sub-layer is presented along with an assessment of the realistic performance available from 802.11. Lastly, the motivations for improving the MAC sub-layer are explained along with a summary of existing research in this area. The main improvement presented within the thesis is packet aggregation. The operation of aggregation is explained in detail, along with the reasons it offers a significant throughput increase to 802.11. Aggregation is then developed further to produce even higher throughput and to be a more robust mechanism. Additionally, aggregation is formally described in the form of an update to the existing 802.11 standard. Following this, two more improvements are shown that can be used either with or without the aggregation mechanism: stored frame headers are designed to reduce repetition of control data, and combined acknowledgements are an expansion of the block acknowledgement system introduced in the 802.11e supplement. This is followed by a description of the simulation environment used to test the three improvements, including the settings used and the metrics created. The results of the simulations of the improvements are presented and discussed, and the developments to the basic improvements are simulated and discussed in the same way. Finally, conclusions are drawn about the improvements detailed and the results shown in the simulations.
The thesis closes with possible future directions for research into the improvements, as well as the aspects and issues of implementing aggregation on a personal-computer-based platform.
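Aggregation raises throughput because the fixed per-transmission overhead (preamble, contention, inter-frame spacing, acknowledgement exchange) is shared by many payloads instead of being paid once per packet. A minimal sketch of this effect, using illustrative timing constants rather than figures from the thesis:

```python
# Illustrative throughput model for 802.11 frame aggregation.
# The timing constants below are assumptions loosely modelled on
# 802.11a/g-style parameters, not values taken from the thesis.

def throughput_mbps(payload_bytes, n_aggregated, data_rate_mbps=54.0,
                    overhead_us=100.0, per_frame_header_bytes=28):
    """Effective throughput when n_aggregated payloads share one fixed
    overhead (preamble, contention, SIFS, ACK exchange)."""
    payload_bits = payload_bytes * 8 * n_aggregated
    header_bits = per_frame_header_bytes * 8 * n_aggregated
    airtime_us = overhead_us + (payload_bits + header_bits) / data_rate_mbps
    return payload_bits / airtime_us  # bits per microsecond == Mbit/s

single = throughput_mbps(1500, 1)       # one payload per overhead
aggregated = throughput_mbps(1500, 10)  # ten payloads per overhead
```

With these assumed constants, aggregating ten 1500-byte payloads recovers most of the raw data rate that per-packet overhead would otherwise waste.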
An investigation into the relationship between small intestinal fluid secretion and systemic arterial blood pressure in the anesthetized rat
In the absence of an ability to absorb fluid by cellular uptake mechanisms, fluid movement in vivo from the perfused rat intestine is absorptive when the diastolic blood pressure is normal or very low, but is secretory when blood pressure falls below normal. This pattern of fluid movement is consistent with changes in capillary pressure within the villus: whether fluid moves into or out of the intestine is determined by changes in the Starling forces across intestinal capillaries. These observations indicate that secretion caused by some bacterial enterotoxins may arise solely from actions on the vasculature of the small intestine. This contradicts a major current theory of secretion, which requires the source of the fluid to be the epithelial cell. The significance of this work is that the intestinal arterioles, rather than the epithelial cells, may determine secretion. If substantiated, this may allow the development of effective anti-secretory drugs, which have not been forthcoming from development strategies based on the enterocyte model of deranged intestinal secretion.
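The Starling balance referred to above is conventionally written J_v = K_f[(P_c − P_i) − σ(π_c − π_i)], where P and π are hydrostatic and oncotic pressures on the capillary (c) and interstitial (i) sides. A minimal sketch, with purely illustrative parameter values rather than measurements from this study:

```python
# Sketch of the Starling relation for transcapillary fluid flux.
# Parameter values are illustrative assumptions, not data from the study.

def starling_flux(Kf, Pc, Pi, sigma, pi_c, pi_i):
    """Net fluid flux J_v = K_f * ((P_c - P_i) - sigma * (pi_c - pi_i)).
    Positive values mean filtration out of the capillary; negative
    values mean absorption into it."""
    return Kf * ((Pc - Pi) - sigma * (pi_c - pi_i))

# Raising capillary hydrostatic pressure Pc (with everything else held
# fixed) tips the balance from absorption toward filtration.
absorptive = starling_flux(Kf=1.0, Pc=15.0, Pi=1.0, sigma=0.9, pi_c=25.0, pi_i=5.0)
secretory = starling_flux(Kf=1.0, Pc=30.0, Pi=1.0, sigma=0.9, pi_c=25.0, pi_i=5.0)
```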
Animal models of Zika virus infection, pathogenesis, and immunity
Zika virus (ZIKV) is an emerging mosquito-transmitted flavivirus that now causes epidemics affecting millions of people on multiple continents. The virus has received global attention because of some of its unusual epidemiological and clinical features, including persistent infection in the male reproductive tract and sexual transmission, an ability to cross the placenta during pregnancy and infect the developing fetus to cause congenital malformations, and its association with Guillain-Barré syndrome in adults. This past year has witnessed an intensive effort by the global scientific community to understand the biology of ZIKV and to develop pathogenesis models for the rapid testing of possible countermeasures. Here, we review recent advances in newly developed mouse and nonhuman primate models of ZIKV infection and pathogenesis, along with their utility and limitations.
Moduli of Tropical Plane Curves
We study the moduli space of metric graphs that arise from tropical plane curves. There are far fewer such graphs than tropicalizations of classical plane curves. For fixed genus g, our moduli space is a stacky fan whose cones are indexed by regular unimodular triangulations of Newton polygons with g interior lattice points. It has dimension 2g + 1 unless g ≤ 3 or g = 7. We compute these spaces explicitly for g ≤ 5.
Comment: 31 pages, 25 figures
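The correspondence between genus and interior lattice points can be made concrete with a brute-force count over a convex Newton polygon. This helper is an illustration written for this listing, not code from the paper:

```python
# Brute-force count of interior lattice points of a convex lattice
# polygon. For a smooth plane curve, the genus equals this count.

def _strictly_inside(p, vertices):
    """True if p lies strictly inside a convex polygon whose vertices
    are given in counterclockwise order (an assumption of this sketch)."""
    n = len(vertices)
    for i in range(n):
        ax, ay = vertices[i]
        bx, by = vertices[(i + 1) % n]
        # Cross product of the edge vector with the vector to p:
        # must be strictly positive on every edge for a strict interior point.
        if (bx - ax) * (p[1] - ay) - (by - ay) * (p[0] - ax) <= 0:
            return False
    return True

def interior_lattice_points(vertices):
    """Count lattice points strictly inside the polygon."""
    xs = [v[0] for v in vertices]
    ys = [v[1] for v in vertices]
    return sum(
        1
        for x in range(min(xs) + 1, max(xs))
        for y in range(min(ys) + 1, max(ys))
        if _strictly_inside((x, y), vertices)
    )

# Newton polygon of a smooth plane quartic: the triangle (0,0), (4,0),
# (0,4). Its 3 interior lattice points match the classical genus
# formula (d-1)(d-2)/2 = 3 for degree d = 4.
quartic_genus = interior_lattice_points([(0, 0), (4, 0), (0, 4)])
```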
Man and machine thinking about the smooth 4-dimensional Poincaré conjecture
While topologists have had possession of possible counterexamples to the smooth 4-dimensional Poincaré conjecture (SPC4) for over 30 years, until recently no invariant has existed which could potentially distinguish these examples from the standard 4-sphere. Rasmussen's s-invariant, a slice obstruction within the general framework of Khovanov homology, changes this state of affairs. We studied a class of knots K for which nonzero s(K) would yield a counterexample to SPC4. Computations are extremely costly, and we had only completed two tests for those K, with the computations showing that s was 0, when a landmark posting of Akbulut (arXiv:0907.0136) altered the terrain. His posting, appearing only six days after our initial posting, proved that the family of "Cappell-Shaneson" homotopy spheres that we had geared up to study were in fact all standard. The method we describe remains viable but will have to be applied to other examples. Akbulut's work makes SPC4 seem more plausible, and in another section of this paper we explain that SPC4 is equivalent to an appropriate generalization of Property R ("in S^3, only an unknot can yield S^1 x S^2 under surgery"). We hope that this observation, and the rich relations between Property R and ideas such as taut foliations, contact geometry, and Heegaard Floer homology, will encourage 3-manifold topologists to look at SPC4.
Comment: 37 pages; changes reflecting that the integer family of Cappell-Shaneson spheres are now known to be standard (arXiv:0907.0136)
A dust scattered halo in starburst galaxy M82?
The source of the halo around M82 has been under discussion for several years. One explanation is the dust model of Solinger, Morrison and Markert, in which they propose a diffuse cloud of dust throughout the M81 group, with M82 traveling through the group holding a denser cloud of dust around it. The feasibility of the dust theory is examined in the X-ray range, using the halo in the X-ray image of M82 taken by the Einstein Observatory. To this end, the X-ray scattering cross section for dust is presented, along with the singly scattered image of an X-ray source surrounded by a dust cloud; multiply scattered images were simulated with a Monte Carlo program, and profiles of the halo along the major and minor axes of M82 are presented. Also presented is an explanation, using the dust model, of line spectra of M82 that show unusual splitting. The final model proposed for the X-ray image requires dust grains of radius 50 to 300 Å, with a number density on the order of 10^-7 per cubic centimeter, out to a distance of about 9 kpc in some regions.
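A toy version of the Monte Carlo approach described above traces photons through a uniform dust slab, applying a small random deflection at each scattering so that multiply scattered photons spread into a halo. All parameters here are illustrative assumptions, not fitted values from the study:

```python
# Toy Monte Carlo: photons crossing a uniform dust cloud of optical
# depth tau, deflected by a small Gaussian angle at each scattering.
# Parameters are illustrative assumptions, not values from the study.
import math
import random

random.seed(0)  # deterministic run for this sketch

def halo_offset(depth, tau, sigma_theta=1e-3):
    """Trace one photon; return its transverse offset at the far side
    (0.0 for photons that cross without scattering)."""
    x, theta, z = 0.0, 0.0, 0.0
    mean_free_path = depth / tau
    while True:
        step = -math.log(random.random()) * mean_free_path
        if z + step >= depth:          # escapes without another scattering
            x += theta * (depth - z)
            return abs(x)
        x += theta * step
        z += step
        theta += random.gauss(0.0, sigma_theta)  # small-angle deflection

offsets = [halo_offset(depth=1.0, tau=0.5) for _ in range(2000)]
# Photons scattered at least once are spread away from the source,
# building up the halo; the rest arrive on-axis.
scattered_fraction = sum(1 for r in offsets if r > 0.0) / len(offsets)
```

For optical depth tau = 0.5, the scattered fraction should land near 1 − e^(−0.5) ≈ 0.39.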
Ensuring cost-effective heat exchanger network design for non-continuous processes
The variation in stream conditions over time inevitably adds significant complexity to the task of integrating non-continuous processes. The Time Averaging Method (TAM), where stream conditions are simply averaged across the entire time cycle, leads to unrealistic energy targets for direct heat recovery and consequently to Heat Exchanger Network (HEN) designs that are in fact suboptimal. This realisation led to the development of the Time Slice Method (TSM), which instead considers each time interval separately and can be used to reach accurate targets and to design the appropriate HEN to maximise heat recovery. However, in practice the HENs often require excessive exchanger surface area, which renders them unfeasible when capital costs are taken into account. An extension of the TSM that reduces the required overall exchanger surface area and systematically distributes it across the stream matches is proposed. The methodology is summarised with the help of a simple case study, and further improvement opportunities are discussed.
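Why the TAM overstates direct heat recovery can be shown with a toy example: averaging hides the fact that hot and cold loads may not coincide in time. The stream loads below are invented for illustration, not taken from the paper:

```python
# Toy comparison of Time Averaging Method (TAM) vs Time Slice Method
# (TSM) targets for direct heat recovery. Loads (kW) are illustrative.

# Heat available from hot streams and required by cold streams in each
# of three equal-length time slices of the cycle.
hot = [100.0, 0.0, 50.0]
cold = [0.0, 80.0, 50.0]

# TAM: average both profiles over the whole cycle, then overlap the
# averages -- as if all heat were available whenever it is needed.
tam_recovery = min(sum(hot) / len(hot), sum(cold) / len(cold))

# TSM: direct recovery is only possible when hot and cold loads
# coincide in time, so overlap each slice separately.
tsm_recovery = sum(min(h, c) for h, c in zip(hot, cold)) / len(hot)

# tam_recovery exceeds tsm_recovery: the TAM target cannot actually be
# met by direct heat exchange, which is why TSM targets are realistic.
```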
Carbon Emissions Pinch Analysis (CEPA) for emissions reduction in the New Zealand electricity sector
Carbon Emissions Pinch Analysis (CEPA) is a recent extension of traditional thermal and mass pinch analysis to the area of emissions targeting and planning on a macro-scale (i.e. economy-wide). This paper presents an extension to the current methodology that accounts for increased demand, together with a carbon pinch analysis of the New Zealand electricity industry that illustrates some of the issues with realising meaningful emissions reductions. The current large proportion of renewable generation (67% in 2007) complicates extensive reduction of carbon emissions from electricity generation. The largest growth in renewable generation is expected to come from geothermal generation, followed by wind and hydro. A fourfold increase in geothermal generation capacity is needed, in addition to large amounts of new wind generation, to reduce emissions to around 1990 levels while also meeting projected demand. The expected expansion of geothermal generation in New Zealand raises the issue of GHG emissions from the geothermal fields: emissions factors can vary between fields by almost two orders of magnitude, making predictions of total emissions highly site-specific.
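The macro-scale bookkeeping behind this kind of analysis reduces to weighting each generation source by its emissions factor to obtain a grid-average factor. The sketch below uses invented figures, not New Zealand data:

```python
# Grid-average emissions factor from a generation mix -- the basic
# accounting behind emissions targeting. All figures are invented
# for illustration; they are not New Zealand statistics.

# source: (generation in TWh, emissions factor in kt CO2 per TWh)
mix = {
    "hydro":      (23.0, 5.0),
    "geothermal": (4.0, 100.0),   # field-dependent; can vary ~100x
    "wind":       (1.0, 3.0),
    "gas":        (10.0, 400.0),
    "coal":       (4.0, 900.0),
}

total_twh = sum(gen for gen, _ in mix.values())
total_kt = sum(gen * ef for gen, ef in mix.values())
grid_factor = total_kt / total_twh  # kt CO2 per TWh, generation-weighted
```

Replacing high-factor sources (coal, gas) with low-factor ones shifts the weighted average, which is exactly the lever a carbon pinch analysis targets.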
Multiple Detector Optimization for Hidden Radiation Source Detection
The purpose of this research was twofold: first, to validate the two-dimensional point-attenuation kernel against the MCNP model for optimal placement of multiple NaI detectors; second, to develop a model that deduces an employment/emplacement strategy for optimal detector placement based on the number of devices available. The two-dimensional model is able to replicate the MCNP results in a fraction of the time. Additionally, the point-attenuation kernel can predict optimal detector locations with the same proficiency as the MCNP model.
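A minimal sketch of a two-dimensional point-attenuation kernel of the kind validated here: uncollided flux from an isotropic point source falls off as exp(−μr)/(4πr²), and candidate detector positions can be ranked by predicted flux. The constants and helper names are illustrative assumptions, not the study's actual model:

```python
# Point-attenuation kernel sketch: uncollided flux from an isotropic
# point source through a uniformly attenuating medium. The attenuation
# coefficient below is an illustrative assumption.
import math

def point_kernel_flux(source_strength, distance_m, mu_per_m=0.01):
    """Uncollided flux = S * exp(-mu * r) / (4 * pi * r**2)."""
    return (source_strength * math.exp(-mu_per_m * distance_m)
            / (4.0 * math.pi * distance_m ** 2))

def best_detector_position(candidates, source_pos, mu_per_m=0.01):
    """Rank candidate (x, y) detector positions by predicted flux and
    return the best -- a stand-in for a fuller placement optimisation."""
    def flux_at(pos):
        r = math.hypot(pos[0] - source_pos[0], pos[1] - source_pos[1])
        return point_kernel_flux(1.0, r, mu_per_m)
    return max(candidates, key=flux_at)
```

Because the kernel is a closed-form expression, evaluating it over a grid of candidate positions is far cheaper than a full MCNP transport run, which is the speed advantage the abstract describes.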