15 research outputs found

    Price-Based Controller for Utility-Aware HTTP Adaptive Streaming

    Get PDF
    HTTP Adaptive Streaming (HAS) enables efficient video delivery to multiple heterogeneous users in a fully distributed way. This might, however, lead to unfair bandwidth utilization among HAS users. Therefore, network-assisted HAS systems have been proposed in which network elements operate alongside the clients' adaptation logic to improve user satisfaction. However, current solutions rely on the assumption that network elements have full knowledge of the network status, which is not always realistic. In this work, we instead propose a practical network-assisted HAS system in which the network elements infer link congestion from measurements collected at the client endpoints; the congestion-level signal is then used by the clients to optimize their video data requests. Our novel controller maximizes overall user satisfaction, and the clients share the available bandwidth fairly from a utility perspective, as demonstrated by simulation results obtained with a network simulator.
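The price-driven bitrate selection described above can be sketched as follows; the bitrate ladder, the logarithmic utility model, and the price values are illustrative assumptions, not the paper's actual controller.

```python
import math

REPRESENTATIONS = [0.5, 1.0, 2.0, 4.0, 8.0]  # Mbps ladder (hypothetical)

def utility(bitrate_mbps):
    """Concave (logarithmic) satisfaction model, a common assumption."""
    return math.log(1.0 + bitrate_mbps)

def select_bitrate(price):
    """Pick the representation maximising utility minus congestion cost."""
    return max(REPRESENTATIONS, key=lambda b: utility(b) - price * b)

# As the signalled congestion price rises, clients back off to lower ladders:
for price in (0.05, 0.2, 0.5):
    print(price, select_bitrate(price))
```

Because every client optimises the same concave utility against a shared price, the resulting allocation is utility-fair in the sense the abstract describes.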

    Water level prediction from social media images with a multi-task ranking approach

    Full text link
    Floods are among the most frequent and catastrophic natural disasters and affect millions of people worldwide. It is important to create accurate flood maps to plan (offline) and conduct (real-time) flood mitigation and flood rescue operations. Arguably, images collected from social media can provide useful information for that task which would otherwise be unavailable. We introduce a computer vision system that estimates water depth from social media images taken during flooding events, in order to build flood maps in (near) real-time. We propose a multi-task (deep) learning approach in which a model is trained using both a regression loss and a pairwise ranking loss. Our approach is motivated by the observation that the main bottleneck for image-based flood level estimation is training data: annotating uncontrolled images with the correct water depth is difficult and requires a lot of effort. We demonstrate how to efficiently learn a predictor from a small set of annotated water levels and a larger set of weaker annotations that only indicate in which of two images the water level is higher, and which are much easier to obtain. Moreover, we provide a new dataset, named DeepFlood, with 8145 annotated ground-level images, and show that the proposed multi-task approach can predict the water level from a single, crowd-sourced image with ~11 cm root mean square error. (Comment: Accepted in ISPRS Journal 202)
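The combined objective can be sketched in plain Python; the function names, the hinge form of the ranking loss, and the toy numbers are illustrative assumptions, not the authors' code.

```python
# Multi-task objective: squared-error regression on the few images with an
# annotated depth, plus a margin-based pairwise ranking loss on image pairs
# where only the ordering ("water is higher in A than in B") is known.

def regression_loss(pred, target):
    """Squared error for an image with an annotated water depth (metres)."""
    return (pred - target) ** 2

def ranking_loss(pred_hi, pred_lo, margin=0.0):
    """Hinge loss: penalise pairs where the 'higher water' image is not
    predicted at least `margin` above the other one."""
    return max(0.0, margin - (pred_hi - pred_lo))

def total_loss(labelled, pairs, weight=1.0):
    """Sum both tasks; `weight` trades off the cheap weak labels."""
    reg = sum(regression_loss(p, t) for p, t in labelled)
    rank = sum(ranking_loss(hi, lo) for hi, lo in pairs)
    return reg + weight * rank

# toy example: one labelled image, one correctly ordered pair, one violation
print(total_loss(labelled=[(0.5, 0.6)], pairs=[(0.8, 0.3), (0.2, 0.4)]))
```

The weakly labelled pairs only constrain the ordering of predictions, which is why they are much cheaper to annotate than exact depths.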

    Metal ion release from fine particulate matter sampled in the Po Valley to an aqueous solution mimicking fog water: Kinetics and solubility

    Get PDF
    Metals are among the key aerosol components exerting adverse health effects. Their toxic properties may vary depending on their chemical form and solubility, which can be affected by aqueous processing during the aerosol's atmospheric lifetime. In this work, fine particulate matter (PM2.5) was collected in the city centre of Padua, in the Po Valley (Italy), during a winter campaign. Part of the sampling filters were used to measure the kinetics by which metal ions and other elements leach from PM2.5 into an aqueous solution mimicking winter fog water in temperate climate regions (pH 4.7, 5 °C). The leaching process was interpreted with first-order kinetics, and fitting the experimental data yielded the leaching rate constants and the equilibrium concentrations (i.e., at infinite time) for all elements. The remaining filter parts were mineralised through two subsequent extraction steps, and the extracts were analysed by ICP-MS to obtain the total elemental content of the PM for a large number of elements. We found that elements leach from PM with half-times generally between 10 and 40 minutes, a timescale compatible with atmospheric aqueous processing during fog events. For instance, aluminium(III) in PM2.5 dissolved with an average k = 0.0185 min⁻¹ and t1/2 = 37.5 min. Nevertheless, a fraction of each element was solubilised immediately after contact with the extraction solution, suggesting that metal ion solubilisation may have already started during the particles' lifetime in the atmosphere.
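The reported rate constant and half-time are consistent with the standard first-order relation t1/2 = ln 2 / k; a minimal sketch using only the constants quoted above:

```python
import math

# First-order leaching: C(t) = C_inf * (1 - exp(-k t)), so t_1/2 = ln(2) / k.
# k for Al(III) is the average value quoted in the abstract.
def leached_fraction(t_min, k_per_min):
    """Fraction of the equilibrium concentration released after t_min minutes."""
    return 1.0 - math.exp(-k_per_min * t_min)

k_al = 0.0185                                  # min^-1, Al(III) in PM2.5
t_half = math.log(2) / k_al
print(f"Al(III) half-time: {t_half:.1f} min")  # 37.5 min, as reported
print(f"leached after 40 min: {leached_fraction(40, k_al):.0%}")
```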

    Improved Utility-Based Congestion Control for Delay-Constrained Communication

    Get PDF
    Due to the presence of buffers in the inner network nodes, each congestion event leads to buffer queueing and thus to an increasing end-to-end delay. For delay-sensitive applications, a large delay might not be acceptable, and a solution that properly manages congestion events while maintaining a low end-to-end delay is required. Delay-based congestion control algorithms are a viable solution, as they aim to limit the experienced end-to-end delay. Unfortunately, they do not perform well when sharing the bandwidth with congestion control algorithms not regulated by delay constraints (e.g., loss-based algorithms). Our target is to fill this gap by proposing a novel congestion control algorithm for delay-constrained communication over best-effort packet-switched networks. The proposed algorithm maintains a bounded queueing delay when competing with other delay-based flows and avoids starvation when competing with loss-based flows. We adopt the well-known price-based distributed mechanism for congestion control, but: 1) we introduce a novel non-linear mapping between the experienced delay and the price function, and 2) we combine both delay and loss information into a single price term based on packet inter-arrival measurements. We then provide a stability analysis for the novel algorithm and evaluate its performance through simulations carried out in the NS-3 framework. Simulation results demonstrate that the proposed algorithm achieves good intra-protocol fairness properties, efficiently controls the end-to-end delay, and protects the flow from starvation when other flows cause the queueing delay to grow excessively.
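A toy version of a price-based rate update with a non-linear delay-to-price mapping, assuming logarithmic utility; the exponent, weights, and step size are illustrative choices, not the paper's controller.

```python
# Toy price-based congestion control: queueing delay maps non-linearly to a
# price, a loss term is folded into the same price, and the sender takes
# gradient steps on utility minus price.
def delay_price(queue_delay, target_delay, alpha=4.0):
    """Non-linear mapping: negligible below the delay target, steep above."""
    return (queue_delay / target_delay) ** alpha

def combined_price(queue_delay, loss_rate, target_delay, loss_weight=10.0):
    """Single price term combining delay and loss information."""
    return delay_price(queue_delay, target_delay) + loss_weight * loss_rate

def rate_update(rate, price, step=0.05):
    """Gradient step for log utility U(x) = log x, so dU/dx = 1/x."""
    return max(0.01, rate + step * (1.0 / rate - price))

rate = 1.0
price = combined_price(queue_delay=0.06, loss_rate=0.0, target_delay=0.05)
for _ in range(200):
    rate = rate_update(rate, price)
print(round(rate, 2))  # settles where 1/rate == price
```

The steep exponent keeps the price (and hence the back-off) negligible until the queueing delay approaches the target, which is what bounds the delay without starving the flow.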

    I nonluoghi in letteratura

    No full text
    The volume surveys and analyses the new spatial dimension of world novelistic literature in the age of globalisation, from the 1990s onwards, through what the French anthropologist Augé has effectively termed "non-places": places without memory, devoid of ethnic identity, territories of transit such as Las Vegas, hypermarkets, metro stations, leisure venues, and airports.

    Preclinical three-dimensional colorectal cancer model: The next generation of in vitro drug efficacy evaluation

    No full text
    Colorectal cancer (CRC), the third most common cancer diagnosed in both men and women in the United States, still lacks effective therapeutic management. In recent years, neither substantial improvements nor new therapeutic approaches have been offered to patients. Performing the early lead-discovery phases of new cancer drugs in cellular models that resemble the real in vivo tumor environment as closely as possible may be more effective in predicting their future success in the later clinical phases. In this review, we critically describe the most representative bioengineered models for anticancer drug screening in CRC, from the conventional two-dimensional models to the new-generation, three-dimensional scaffold-based ones. The scaffold aims to replace the extracellular matrix, thus influencing the biomechanical, biochemical, and biological properties of cells and tissues. In this scenario, we believe that reconstituting the tumor condition is mandatory for alternative in vitro methods to study cancer development and therapeutic strategies.

    Field-assisted paper spray mass spectrometry for the quantitative evaluation of imatinib levels in plasma

    No full text
    Drug levels in patients' bloodstreams vary among individuals, so therapeutic drug monitoring (TDM) is fundamental to keeping drugs within their effective therapeutic range. For TDM purposes, different analytical approaches have been used, mainly based on immunoassay, liquid chromatography-ultraviolet, liquid chromatography-mass spectrometry and liquid chromatography-tandem mass spectrometry (LC-MS/MS) methods. More recently, a matrix-assisted laser desorption/ionisation method has been proposed for determining irinotecan levels in the plasma of subjects under therapy, and this method has been cross-validated against data obtained by LC-MS/MS. However, to achieve effective point-of-care monitoring of plasma drug concentrations, a TDM platform technology for fast, accurate, low-cost assays is required. In this frame, the use of paper spray mass spectrometry, which is becoming a popular and widely employed MS method, has recently been proposed. In this paper we report the results obtained in developing a paper spray-based method for the quantitative analysis in plasma samples of imatinib, a new-generation anticancer drug. Preliminary experiments showed that the "classical" paper spray set-up gave poor sensitivity, reproducibility and linearity of response. To achieve better results, we considered it of interest to operate in the presence of a higher and more homogeneous electrical field. To this aim, a stainless steel needle connected to the high-voltage power supply was mounted below the paper triangle. Furthermore, in order to obtain valid quantitative data, we analysed the role of the different equilibria participating in the phenomena occurring in paper spray experiments, which depend both on instrumental parameters and on the chemical nature of the analyte and solvents.
    A calibration curve was obtained by spiking plasma samples containing different amounts of imatinib (1) with known amounts of deuterated imatinib (1d3) as internal standard, with molar ratios [1]/[1d3] in the range 0.00-2.00. Quite good linearity was obtained (R2 = 0.975), and experiments performed on plasma samples spiked with known amounts of 1 confirmed the validity of the method.
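The calibration step can be sketched as an ordinary least-squares fit of signal ratio versus molar ratio; the data points below are hypothetical, not the paper's measurements.

```python
# OLS fit of measured MS intensity ratio vs molar ratio [1]/[1d3], plus R^2,
# mirroring the isotope-dilution calibration described above (toy data).
def linear_fit(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return slope, intercept, 1.0 - ss_res / ss_tot

molar_ratio  = [0.0, 0.5, 1.0, 1.5, 2.0]       # [1]/[1d3], hypothetical
signal_ratio = [0.02, 0.55, 0.98, 1.60, 2.05]  # measured ratios, hypothetical
slope, intercept, r2 = linear_fit(molar_ratio, signal_ratio)
print(round(slope, 2), round(intercept, 3), round(r2, 3))
```

An unknown sample is then quantified by reading its measured signal ratio back through the fitted line: concentration ratio = (signal ratio − intercept) / slope.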

    Experimental Evidence of the Presence of Bimolecular Caffeine/Catechin Complexes in Green Tea Extracts

    No full text
    A hypothesis on the peculiar pharmacological behavior of biologically active natural compounds is based on the occurrence of molecular interactions originating from the high complexity of the natural matrix, following the rules of supramolecular chemistry. In this context, several investigations were performed to establish unequivocally the presence of caffeine/catechin complexes in green tea extracts (GTEs). 1H NMR spectroscopy was used to compare profiles from GTEs with caffeine/catechin mixtures at different molar ratios, showing that peaks related to caffeine in GTEs are generally shifted upfield compared with those of free caffeine. In addition, ESI-MS precursor ion scan and neutral loss scan experiments performed on GTEs unequivocally proved the presence of caffeine/catechin complexes. Further investigations were performed with an LC-MS method operating under high-resolution conditions. Reconstructed ion chromatograms of the exact-mass ions corresponding to caffeine/catechin species showed the presence of complexes of caffeine with gallate-type catechins. Furthermore, this last approach revealed that the same complex can occur with different structures, consequently exhibiting different retention times. Both MSE and product ion MS/MS methods confirmed the caffeine/catechin nature of the detected ions, showing the formation of protonated caffeine.
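As a sketch of how an exact-mass target for a reconstructed ion chromatogram can be computed: assuming the gallate-type catechin is EGCG (C22H18O11), the protonated complex m/z follows from monoisotopic masses. This is an illustration, not the paper's actual target list.

```python
# Monoisotopic atomic masses (u) and the proton mass, standard values.
MONO = {"C": 12.0, "H": 1.0078250319, "N": 14.0030740052, "O": 15.9949146221}
PROTON = 1.00727646

def mono_mass(formula):
    """Monoisotopic mass from a {element: count} formula."""
    return sum(MONO[el] * n for el, n in formula.items())

caffeine = {"C": 8, "H": 10, "N": 4, "O": 2}   # caffeine, C8H10N4O2
egcg     = {"C": 22, "H": 18, "O": 11}         # EGCG, a gallate-type catechin

# [caffeine + EGCG + H]+ : the m/z a reconstructed ion chromatogram would target
mz = mono_mass(caffeine) + mono_mass(egcg) + PROTON
print(round(mz, 4))
```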

    Analytical aspects of sunitinib and its geometric isomerism towards therapeutic drug monitoring in clinical routine

    No full text
    Sunitinib malate, an oral multi-targeted tyrosine kinase inhibitor approved for the treatment of metastatic renal cell carcinoma, gastrointestinal stromal tumor, and well-differentiated pancreatic neuroendocrine tumors, has been identified as a potential candidate for a therapeutic drug monitoring approach. Nevertheless, the development of an analytical assay suitable for clinical quantification of the plasma concentrations of sunitinib and its active metabolite, N-desethyl sunitinib, is hampered by their Z/E isomerization when exposed to light. Several published LC–MS/MS methods require protection from light during all sample handling procedures to avoid the formation of the E-isomer, which makes them unsuitable for clinical practice. In order to obtain a simple and fast procedure to reconvert the E-isomer formed during sample collection and treatment without light protection, and thus to have only the Z-isomer peak to quantify, we studied the Z/E photodegradation with special attention to the conditions allowing the reverse reaction in the plasma matrix. After 30 min of light exposure, the maximum E-isomer percentage was reached for both analytes (44% for E-sunitinib and 20% for E-N-desethyl sunitinib; these percentages were calculated with respect to the sum E + Z). Moreover, the formation of the E-isomer increased up to 20% when the pH of the solution was lowered. Since the reverse reaction takes place when the pre-exposed solution is placed in the dark, we followed the E- to Z-isomer kinetics in the autosampler. The conversion rate was very slow when the autosampler was set at 4 °C (after 4 h the mean percentages of E-isomer were 50% for sunitinib and 22% for N-desethyl sunitinib). The reconversion rate accelerated considerably with increasing temperature: incubating the analytical solution in a heated water bath for 5 min at 70 °C gave quantitative (99%) reconversion of the E- to the Z-isomer.
    No effect of concentration was observed, while the presence of acids inhibited the reconversion. Based on these results, a simple and fast procedure was set up to quantitatively reconvert the E-isomer formed during sample collection and processing without light protection into its Z-form, thus leading to a single peak to quantify. This additional step allows the development of an LC–MS/MS method suitable for clinical practice thanks to its practicality and speed.
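Treating the E-to-Z reconversion as first order, the rate constant implied by the reported 99% conversion in 5 min at 70 °C can be checked with a quick back-of-the-envelope calculation (a consistency check, not fitted data):

```python
import math

# If E -> Z reconversion is first order, a conversion fraction f after t
# minutes implies k = -ln(1 - f) / t.
def first_order_k(converted_fraction, t_min):
    return -math.log(1.0 - converted_fraction) / t_min

k70 = first_order_k(0.99, 5.0)                     # 99% in 5 min at 70 degC
print(f"k(70 degC) ~ {k70:.2f} min^-1")            # ~0.92 min^-1
print(f"half-life ~ {math.log(2) / k70:.2f} min")  # ~0.75 min
```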