
    Measuring And Improving Internet Video Quality Of Experience

    Streaming multimedia content over the IP network is poised to be the dominant Internet traffic for the coming decade, predicted to account for more than 91% of all consumer traffic in the coming years. Streaming multimedia content ranges from Internet television (IPTV), video on demand (VoD), and peer-to-peer streaming to 3D television over IP, to name a few. Widespread acceptance, growth, and subscriber retention are contingent upon network providers assuring superior Quality of Experience (QoE) on top of today's Internet. This work presents the first empirical understanding of the Internet's video-QoE capabilities, and tools and protocols to efficiently infer and improve them. To infer video-QoE at arbitrary nodes in the Internet, we design and implement MintMOS: a lightweight, real-time, no-reference framework for capturing perceptual quality. We demonstrate that MintMOS's projections closely match subjective surveys in assessing perceptual quality. We use MintMOS to characterize Internet video-QoE both at the link level and at the end-to-end path level. As an input to our study, we use extensive measurements from a large number of Internet paths obtained from various measurement overlays deployed using PlanetLab. Link-level degradations of intra-ISP and inter-ISP Internet links are studied to create an empirical understanding of their shortcomings and ways to overcome them. Our studies show that intra-ISP links are often poorly engineered compared to peering links, and that degradations are induced by transient network load imbalance within an ISP. Initial results also indicate that overlay networks could be a promising way to avoid such ISPs in times of degradation. A large number of end-to-end Internet paths are probed, and we measure delay, jitter, and loss rates.
The measurement data is analyzed offline to identify ways to enable a source to select alternate paths in an overlay network to improve video-QoE, without the need for background monitoring or a priori knowledge of path characteristics. We establish that for any unstructured overlay of N nodes, it is sufficient to reroute key frames using a random subset of k nodes in the overlay, where k is bounded by O(ln N). We analyze various properties of such random subsets to derive a simple, scalable, and efficient path selection strategy that results in a k-fold increase in path options for any source-destination pair; options that consistently outperform Internet path selection. Finally, we design a prototype called source-initiated frame restoration (SIFR) that employs random subsets to derive alternate paths, and demonstrate its effectiveness in improving Internet video-QoE.
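
The O(ln N) relay-subset idea above can be sketched as follows. This is a minimal illustration, not the thesis's actual SIFR implementation: the function names and the loss-rate path metric are assumptions introduced here.

```python
import math
import random

def random_relay_subset(overlay_nodes, seed=None):
    """Pick a random subset of k = ceil(ln N) relay nodes from an
    unstructured overlay of N nodes, per the O(ln N) bound."""
    rng = random.Random(seed)
    n = len(overlay_nodes)
    k = max(1, math.ceil(math.log(n)))
    return rng.sample(overlay_nodes, k)

def best_alternate_path(direct_loss, relay_losses):
    """Reroute key frames via the relay with the lowest loss rate
    (an assumed metric; delay or jitter could be used instead),
    falling back to the direct Internet path when no relay beats it."""
    best_relay = min(relay_losses, key=relay_losses.get)
    if relay_losses[best_relay] < direct_loss:
        return best_relay
    return "direct"

nodes = [f"node{i}" for i in range(100)]
relays = random_relay_subset(nodes, seed=42)
print(len(relays))  # ceil(ln 100) = 5 relays
```

Because k grows only logarithmically in N, probing the relay subset per source-destination pair stays cheap even for large overlays.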

    Proactive software rejuvenation solution for web environments on virtualized platforms

    The availability of Information Technologies for everything, from everywhere, at all times is a growing requirement. We use Information Technologies for everything from common and social tasks to critical tasks like managing nuclear power plants or even the International Space Station (ISS). However, the availability of IT infrastructures is still a huge challenge nowadays. A quick look at the news turns up reports of corporate outages affecting millions of users and impacting the revenue and image of the companies involved. It is well known that computer system outages are currently more often due to software faults than to hardware faults. Several studies have reported that one of the causes of unplanned software outages is the software aging phenomenon. This term refers to the accumulation of errors, usually causing resource contention, during long-running application executions, like web applications, which normally causes applications or systems to hang or crash. Gradual performance degradation can also accompany software aging. Software aging is often related to memory bloating/leaks, unterminated threads, data corruption, unreleased file locks, or overruns, and several examples can be found in industry. The work presented in this thesis aims to offer a proactive and predictive software rejuvenation solution for Internet services against software aging caused by resource exhaustion. To this end, we first present a threshold-based proactive rejuvenation to avoid the consequences of software aging. This first approach has some limitations, the most important being the need to know a priori the resource or resources involved in the crash and their critical values. Moreover, some expertise is needed to fix the threshold value that triggers the rejuvenation action.
Due to these limitations, we have evaluated the use of Machine Learning to overcome the weaknesses of our first approach and obtain a proactive and predictive solution. Finally, the current and increasing tendency to use virtualization technologies to improve resource utilization has made traditional data centers turn into virtualized data centers or platforms. We have used a Mathematical Programming approach to virtual machine allocation and migration to optimize resources, accepting as many services as possible on the platform while at the same time guaranteeing the availability (via our software rejuvenation proposal) of the deployed services against software aging. The thesis is supported by an exhaustive experimental evaluation that proves the effectiveness and feasibility of our proposals for current systems.
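
The threshold-based first approach can be sketched as a simple monitoring check. This is an illustrative sketch, not the thesis's implementation: the memory-fraction metric and the three-sample debounce are assumptions introduced here.

```python
def should_rejuvenate(samples, threshold, consecutive=3):
    """Return True when the monitored resource metric (e.g. the used
    memory fraction of a web server) has stayed above the a-priori
    threshold for the last `consecutive` samples, guarding against
    transient spikes. Choosing the metric and the threshold requires
    expert knowledge, which is exactly the limitation the Machine
    Learning approach is meant to overcome."""
    if len(samples) < consecutive:
        return False
    return all(s > threshold for s in samples[-consecutive:])

# A slowly leaking service crosses the 0.9 threshold and stays there.
usage = [0.52, 0.61, 0.74, 0.88, 0.91, 0.93, 0.95]
print(should_rejuvenate(usage, threshold=0.9))  # True
```

When the check fires, the rejuvenation action (e.g. a controlled restart of the aged service) is triggered before the resource is fully exhausted.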

    QoS Routing Solutions for Mobile Ad Hoc Network


    Great Bay Estuary Water Quality Monitoring Program: Quality Assurance Project Plan 2019 - 2023


    Modifying the frequency and characteristics of involuntary autobiographical memories

    Recent studies have shown that involuntary autobiographical memories (IAMs) can be elicited in the laboratory. Here we assessed whether the specific instructions given to participants can change the nature of the IAMs reported, in terms of both their frequency and their characteristics. People either were or were not made aware that the aim of the study was to examine IAMs, and they reported mental contents either whenever they became aware of them or following a predetermined schedule. Both making people aware of the aim of the study and following a fixed schedule of interruptions significantly increased the number of IAMs reported. When aware of the aim of the study, participants reported more specific memories that had been retrieved and rehearsed more often in the past. These findings demonstrate that the number and characteristics of reported memories depend on the procedure used. Explanations of these effects and their implications for research on IAMs are discussed.

    Subsea fluid sampling to maximise production asset in offshore field development

    The acquisition of representative subsea fluid samples from an offshore field development asset is crucial for the correct evaluation of oil reserves and for the design of subsea production facilities. Due to rising operational expenditures, operators and manufacturers have been working hard to provide cost-effective subsea fluid sampling solutions. To achieve this, any system has to collect sufficient sample volumes to ensure statistically valid characterisation of the sampled fluids. In executing the research project, various subsea sampling methods used in the offshore industry were examined and ranked using multi-criteria decision making; a solution using a remotely operated vehicle was selected as the preferred method, to complement the subsea multiphase flowmeter capability used to provide well diagnostics by measuring the individual phases: oil, gas, and water. A mechanistic (compositional fluid tracking) model is employed, using fluid properties equivalent to the production flow stream being measured, to predict reliable reservoir fluid characteristics on the subsea production system. This is applicable even under conditions where significant variations in the reservoir fluid composition occur in transient production operations. The model also adds value in the decision to employ subsea processing in managing water breakthrough as the field matures. This can be achieved through efficient processing of the fluid, with separation and boosting delivered to the topside facilities or water re-injection to the reservoir. The combination of the multiphase flowmeter, remotely-operated-vehicle-deployed fluid sampling, and the mechanistic model provides a balanced approach to reservoir performance monitoring.
Therefore, regular and systematic field-tailored application of subsea fluid sampling should provide a detailed understanding of the formation fluid and a basis for accurate prediction of reservoir fluid characteristics, to maximize well production in offshore field development.
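
A common multi-criteria decision-making scheme is a weighted sum over normalised criterion scores; a minimal sketch follows. The criteria, weights, scores, and method names below are hypothetical placeholders, not the study's actual decision matrix.

```python
def rank_methods(scores, weights):
    """Weighted-sum ranking of candidate sampling methods.
    `scores` maps method -> {criterion: normalised score in [0, 1]}
    (higher is better); `weights` maps criterion -> importance.
    Returns the method names sorted best-first."""
    totals = {
        method: sum(weights[c] * s for c, s in crit.items())
        for method, crit in scores.items()
    }
    return sorted(totals, key=totals.get, reverse=True)

# Hypothetical decision matrix, not the study's data.
scores = {
    "ROV-deployed": {"cost": 0.8, "sample_volume": 0.7, "risk": 0.9},
    "downline":     {"cost": 0.5, "sample_volume": 0.9, "risk": 0.4},
}
weights = {"cost": 0.4, "sample_volume": 0.3, "risk": 0.3}
print(rank_methods(scores, weights))  # ['ROV-deployed', 'downline']
```

More elaborate MCDM variants (e.g. AHP-derived weights) follow the same pattern of scoring each alternative against weighted criteria.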

    Combustion Feature Characterization using Computer Vision Diagnostics within Rotating Detonation Combustors

    In recent years, the possibilities of higher thermodynamic efficiency and power output have led to increasing interest in the field of pressure gain combustion (PGC). Currently, a majority of PGC research is concerned with rotating detonation engines (RDEs), devices which may theoretically achieve pressure gain across the combustor. Within the RDE, detonation waves propagate continuously around a cylindrical annulus, consuming fresh fuel mixtures supplied from the base of the RDE annulus. Through constant-volume heat addition, pressure gain combustion devices theoretically achieve lower entropy generation compared to Brayton cycle combustors. RDEs are being studied for future implementation in gas turbines, where they would offer efficiency gains in both propulsion and power generation turbines. Much diagnostic work has been done to investigate the detonative behaviors within RDEs, including point measurements, optical diagnostics, thrust stands, and other methods. However, to date, these analysis methods have been limited either in diagnostic sophistication or to post-processing, due to the computationally expensive treatment of large data volumes. This is a result of the substantial data acquisition rates needed to study behavior on the incredibly short time scale of detonation interactions and propagation. As laboratory RDE operations become more reliable, industrial applications become more plausible. Real-time monitoring of combustion behavior within the RDE is a crucial step towards actively controlled RDE operation in the laboratory environment and eventual turbine integration. For these reasons, this study seeks to advance the efficiency of RDE diagnostic techniques from conventional post-processing efforts to lab-deployed real-time methods, achieving highly efficient detonation characterization through the application of convolutional neural networks (CNNs) to experimental RDE data.
This goal is accomplished through the training of various CNNs for image classification, object detection, and time series classification. Specifically, image classification aims to classify the number and direction of waves using a single image; object detection detects and classifies each detonation wave according to location and direction within individual images; and time series classification determines wave number and direction using a short window of sensor data. Each of these network outputs is used to develop a unique RDE diagnostic, which is evaluated alongside conventional techniques with respect to real-time capabilities. The real-time-capable diagnostics are deployed and evaluated in the laboratory environment, using an altered experimental setup, via a live data acquisition environment. Completion of the research tasks results in overarching diagnostic capability developments for conventional methods, image classification, object detection, and time series classification applied to experimental RDE data. Each diagnostic offers different strengths with respect to feasibility, long-term application, and performance, all of which are surveyed and compared extensively. Conventional methods (specifically detonation surface matrices) and object detection are found to offer diagnostic feedback rates of 0.017 and 9.50 Hz, respectively, limited to post-processing. Image classification using high-speed chemiluminescence images, and time series classification using high-speed flame ionization and pressure measurements, achieve classification speeds enabling real-time diagnostic capabilities, averaging diagnostic feedback rates of 4 and 5 Hz, respectively, when deployed in the laboratory environment.
Among the CNN-based methods, object detection, while limited to post-processing usage, achieves the most refined diagnostic time-step resolution of 20 µs, compared to real-time-capable image and time series classification, which require the additional correlation of a sensor data window, extending their time-step resolutions to 80 ms. Through the application of machine learning to RDE data, the methods and results presented advance diagnostic techniques from post-processing to real-time speeds. These methods are developed for various RDE data types commonly used in the PGC community and are successfully deployed in an altered laboratory environment. The feedback rates reported are expected to be satisfactory for the future development of an RDE active-control framework. This portfolio of diagnostics offers valuable insight and direction throughout RDE technological maturation, as a collective early application of machine learning to PGC technology.
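
A simplified, non-CNN illustration of the time-series idea: n co-rotating waves passing a fixed probe produce a dominant tone at n times the lap frequency, so the wave count can be read off a sensor window's spectrum. This sketch assumes the lap frequency is known from annulus geometry and detonation speed; it is a rough baseline heuristic, not the trained classifier developed in the thesis.

```python
import math

def dominant_frequency(window, sample_rate):
    """Estimate the dominant oscillation frequency of a sensor window
    via a brute-force discrete Fourier transform (pure Python, for
    illustration only; an FFT would be used in practice)."""
    n = len(window)
    mean = sum(window) / n
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):
        re = sum((window[i] - mean) * math.cos(2 * math.pi * k * i / n)
                 for i in range(n))
        im = sum((window[i] - mean) * math.sin(2 * math.pi * k * i / n)
                 for i in range(n))
        mag = re * re + im * im
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * sample_rate / n

def wave_count(window, sample_rate, lap_frequency):
    """Infer the number of co-rotating detonation waves from a probe
    signal: n waves produce n * lap_frequency pressure peaks per
    second at a fixed sensor location."""
    return round(dominant_frequency(window, sample_rate) / lap_frequency)
```

The need to accumulate a full sensor window before classifying is what stretches the time-step resolution of window-based methods relative to per-image object detection.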

    Advanced Flame Monitoring and Emission Prediction through Digital Imaging and Spectrometry

    This thesis describes the design, implementation, and experimental evaluation of a prototype instrumentation system for burner condition monitoring and NOx emissions prediction on fossil-fuel-fired furnaces. A review of methodologies and technologies for burner condition monitoring and NOx emissions prediction is given, together with a discussion of existing problems and technical requirements in their applications. A technical strategy incorporating digital imaging, UV-visible spectrum analysis, and soft computing techniques is proposed. Based on these techniques, a prototype flame imaging system is developed. The system consists mainly of an optical fibre probe protected by a water-air cooling jacket, a digital camera, a miniature spectrometer, and a mini-motherboard with associated application software. Detailed system design, implementation, calibration, and evaluation are reported. A number of flame characteristic parameters are extracted from the flame images and spectral signals. Luminous and geometric parameters, temperature, and oscillation frequency are obtained through imaging, while flame radical information is collected by the spectrometer. These parameters are then used to construct a neural network model for burner condition monitoring and NOx emission prediction. Extensive experimental work was conducted on a 120 MWth gas-fired heat recovery boiler to evaluate the performance of the prototype system and the developed algorithms. Further tests were carried out on a 40 MWth coal-fired combustion test facility to investigate the production of NOx emissions and the burner performance. The results obtained demonstrate that an artificial neural network using the above inputs produces relative errors of around 3%, and maximum relative errors of 8%, under real industrial conditions, even when predicting flame data from test conditions not disclosed to the network during training.
This demonstrates that off-the-shelf hardware combined with machine learning can be used as an online prediction method for NOx.
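
The relative-error figures quoted above can be computed as shown below. The NOx prediction and measurement values are hypothetical examples, not data from the boiler trials.

```python
def relative_errors(predicted, measured):
    """Per-sample relative error |pred - meas| / |meas| of the
    network's NOx predictions against measured emissions."""
    return [abs(p - m) / abs(m) for p, m in zip(predicted, measured)]

# Hypothetical NOx values in ppm, not data from the thesis.
pred = [102.0, 95.0, 110.0]
meas = [100.0, 100.0, 105.0]
errs = relative_errors(pred, meas)
print(f"mean {sum(errs)/len(errs):.1%}, max {max(errs):.1%}")
# mean 3.9%, max 5.0%
```

Evaluating both the mean and the maximum relative error, as the thesis does, distinguishes typical accuracy from worst-case behaviour under unseen test conditions.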