Quantitative assessment of HCC wash-out on CT is a predictor of early complete response to TACE
OBJECTIVES: To investigate the predictive value of four-phase contrast-enhanced CT (CECT) for early complete response (CR) to drug-eluting-bead transarterial chemoembolization (DEB-TACE), with a particular focus on quantitatively assessed wash-in and wash-out. METHODS: A retrospective analysis of preprocedural CECTs was performed for 129 HCC nodules consecutively subjected to DEB-TACE as first-line therapy. Lesion size, location, and margins were recorded. For the quantitative analysis, the following parameters were computed: contrast enhancement ratio (CER) and lesion-to-liver contrast ratio (LLC) as estimates of wash-in; absolute and relative wash-out (WOabs and WOrel) and delayed percentage attenuation ratio (DPAR) as estimates of wash-out. The early radiological response of each lesion was assessed by the mRECIST criteria and dichotomized into CR versus others (partial response, stable disease, and progressive disease). RESULTS: All quantitatively assessed wash-out variables were significantly higher for CR lesions (WOabs p = 0.01, WOrel p = 0.01, and DPAR p = 0.00002). However, only DPAR demonstrated an acceptable discriminating ability, with AUC = 0.80 (95% CI 0.73–0.88). In particular, nodules with DPAR ≥ 120 showed an odds ratio of 3.3 (95% CI 1.5–7.2) for CR (p = 0.0026). When accompanied by smooth lesion margins, DPAR ≥ 120 lesions showed a 78% CR rate at first follow-up imaging. No significant association with CR was found for the quantitative wash-in estimates (CER and LLC). CONCLUSIONS: Based on preprocedural CECT, the quantitative assessment of HCC wash-out is useful in predicting early CR after DEB-TACE. Among the different formulas for wash-out quantification, DPAR has the best discriminating ability. In combination, DPAR ≥ 120 and smooth lesion margins are associated with relatively high CR rates.
KEY POINTS:
• A high wash-out rate, quantitatively assessed on preprocedural four-phase contrast-enhanced CT (CECT), is a favorable predictor of early radiological complete response of HCC to drug-eluting-bead chemoembolization (DEB-TACE).
• The arterial phase of CECT shows great dispersion of attenuation values among different lesions, even when a standardized protocol is used, limiting its usefulness for quantitative analyses.
• Among the different formulas used to quantify the wash-out rate (absolute wash-out, relative wash-out, and delayed percentage attenuation ratio), the latter (DPAR), which is based only on the delayed phase, is the most predictive (AUC = 0.80), showing a significant association with complete response for values above 120.
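The abstract names the five CECT parameters without giving their formulas, so the sketch below uses common radiology definitions to show how they could be computed from Hounsfield-unit (HU) measurements; the DPAR formulation in particular is an assumption, as are all function and variable names.

```python
# Hypothetical sketch of the quantitative CECT parameters named in the
# abstract. The paper's exact formulas are not reproduced here; the
# definitions below are common conventions and should be treated as
# assumptions (especially DPAR).

def wash_in_wash_out(hu_lesion_pre, hu_lesion_art, hu_lesion_del,
                     hu_liver_art, hu_liver_del):
    """Compute wash-in/wash-out estimates from lesion and liver HU values
    measured on the unenhanced, arterial, and delayed phases."""
    cer = (hu_lesion_art - hu_lesion_pre) / hu_lesion_pre * 100  # wash-in
    llc = hu_lesion_art / hu_liver_art                           # wash-in
    wo_abs = hu_lesion_art - hu_lesion_del                       # wash-out
    wo_rel = (hu_lesion_art - hu_lesion_del) / hu_lesion_art * 100
    # ASSUMPTION: DPAR as a liver-to-lesion attenuation ratio computed on
    # the delayed phase only, so washed-out (hypodense) lesions score > 100.
    dpar = hu_liver_del / hu_lesion_del * 100
    return cer, llc, wo_abs, wo_rel, dpar

# Example: arterial hyperenhancement followed by delayed hypoattenuation;
# this lesion would score DPAR ~143, above the 120 threshold.
print(wash_in_wash_out(50, 110, 70, 90, 100))
```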
A multi-element psychosocial intervention for early psychosis (GET UP PIANO TRIAL) conducted in a catchment area of 10 million inhabitants: study protocol for a pragmatic cluster randomized controlled trial
Multi-element interventions for first-episode psychosis (FEP) are promising, but have mostly been conducted in non-epidemiologically representative samples, thereby raising the risk of underestimating the complexities involved in treating FEP in 'real-world' services.
Overcoming the memory limits of network devices in SDN-enabled data centers
In extremely connected and dynamic environments, such as data centers, SDN network devices can be exploited to simplify the management of network provisioning. However, they rely on TCAMs to implement their flow tables, i.e., on size-limited memories that can quickly fill up when fine-grained traffic control is required, eventually preventing the installation of new forwarding rules. In this work, we demonstrate how this issue can be mitigated by means of a novel flow rule swapping mechanism. Specifically, we first show the negative effects of a full TCAM on a video streaming service provided by an SDN-enabled data center. Then, we show that our swapping mechanism restores proper access to media content available in the data center by temporarily moving the least matched flow rules from the TCAM to a larger memory outside the SDN device.
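As a rough illustration of the swapping idea (not the paper's actual implementation; the class and its data structures are invented for this sketch), the code below evicts the least-matched rule from a capacity-limited TCAM table into a larger software store and swaps it back in on demand.

```python
# Minimal sketch of least-matched flow rule swapping, under the assumption
# that per-rule match counters are available (as they are in OpenFlow).

class SwappingFlowTable:
    def __init__(self, tcam_capacity):
        self.capacity = tcam_capacity
        self.tcam = {}      # match -> (action, hit_count): hardware table
        self.software = {}  # larger memory outside the SDN device

    def install(self, match, action):
        if len(self.tcam) >= self.capacity:
            # TCAM full: move the least-matched rule to the software store
            victim = min(self.tcam, key=lambda m: self.tcam[m][1])
            self.software[victim] = self.tcam.pop(victim)
        self.tcam[match] = (action, 0)

    def lookup(self, match):
        if match in self.tcam:
            action, hits = self.tcam[match]
            self.tcam[match] = (action, hits + 1)
            return action
        if match in self.software:
            # Swap the rule back into the TCAM, preserving its hit count
            action, hits = self.software.pop(match)
            self.install(match, action)
            self.tcam[match] = (action, hits + 1)
            return action
        return None  # genuine table miss: punt to the controller
```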
TinyKey: A light-weight architecture for wireless sensor networks securing real-world applications
While sharing some commonalities with a canonical computer network, a Wireless Sensor Network (WSN) presents many aspects which are unique. Security mechanisms in a WSN are mainly devoted to protecting both the resources from attacks and misbehavior of nodes, and the information transferred throughout the network itself. While the vast majority of works on security for WSNs in the literature focus on novel mechanisms or on performance evaluation in “protected” environments like simulators or dedicated WSN testbeds, to the best of our knowledge no existing works describe the performance of security mechanisms in operational WSNs dealing with real-world applications. In this paper, we present TinyKey, a security architecture for WSNs that takes into account the pragmatic concerns of a real-world deployment. For instance, most approaches in the literature have neglected mechanisms related to key management; TinyKey comes with an integrated key management system that can be used in any deployment. We have developed TinyKey to satisfy the security requirements of two projects funded by the local government of the Trento province in Italy that aim at developing and deploying real-world applications based on WSNs: one project aims at improving the safety of the road tunnels around the city of Trento, while the second focuses on improving the quality of life of elderly people. As a result, we have been able to measure the performance of TinyKey in real deployments.
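The abstract does not spell out TinyKey's key management scheme, so the following is only a generic sketch of per-node key derivation from a network master key, a common pattern in WSN security architectures; every name, key, and parameter here is an assumption, not TinyKey's actual design.

```python
# Illustrative only: a generic WSN key-management pattern, NOT TinyKey's
# published scheme. Master key, key sizes, and MAC truncation are assumed.
import hmac
import hashlib

MASTER_KEY = bytes.fromhex("00112233445566778899aabbccddeeff")  # assumed

def node_key(node_id: int) -> bytes:
    """Derive a per-node key so that compromising one node does not
    expose the link keys of the others."""
    return hmac.new(MASTER_KEY, node_id.to_bytes(2, "big"),
                    hashlib.sha256).digest()[:16]  # 128-bit key for motes

def mac_packet(node_id: int, payload: bytes) -> bytes:
    """Append a truncated MAC, a common trade-off on resource-constrained
    motes where every byte on the air costs energy."""
    tag = hmac.new(node_key(node_id), payload, hashlib.sha256).digest()[:4]
    return payload + tag

print(mac_packet(42, b"temp=21.5").hex())
```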
Towards an automated framework to instantiate virtual networks in OpenFlow-based infrastructures
The explosion of cloud applications and data-centre technologies has recently renewed the interest of both the academic and industrial research communities in network virtualisation techniques. The increasing availability of programmable network devices and the pervasive adoption of a software-defined networking approach in all segments of the network are paving the way toward novel virtualisation approaches that aim at improving network management operations while guaranteeing a better utilisation of resources, especially when compared to the traditional overlay-based techniques widely used in data-centre settings. In this paper, the experience gained by the authors in the development and deployment of an innovative network virtualisation framework on an OpenFlow-based programmable experimental facility is presented and discussed in detail. Despite its specific application to a Future Internet testbed scenario, the proposed architecture is a first step toward completely automatic management of virtual networks in OpenFlow-based software-defined networks.
AiroLAB: A framework toward effective virtualisation of multi-hop wireless networks
In this work, we introduce AiroLAB, a novel network virtualisation framework specifically tailored to multi-hop wireless networks. AiroLAB departs from conventional network virtualisation approaches by focusing on embedded, resource-constrained devices and by aiming at providing Wireless Internet Service Providers with an effective virtualisation mechanism in which network resources are shared between production traffic and a variable number of experimental slices, allowing novel solutions and services to be tested in a controlled yet realistic environment. In the paper, the design choices at the heart of AiroLAB are presented, together with an early-stage prototype implementation and experimental results obtained in a small-scale wireless network testbed.
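A minimal sketch of the resource-sharing goal stated above, assuming a simple static split in which production traffic keeps a guaranteed share of each wireless link and experimental slices divide the remainder; the capacity figures, share, and slice names are all invented for illustration.

```python
# Illustrative capacity split between production traffic and experimental
# slices on one multi-hop wireless link. All numbers are assumptions.

LINK_CAPACITY_KBPS = 6000   # nominal capacity of the wireless link
PRODUCTION_SHARE = 0.7      # guaranteed to the WISP's production traffic

def allocate(experimental_slices):
    """Split the leftover capacity evenly among experimental slices,
    so experiments never starve production traffic."""
    leftover = LINK_CAPACITY_KBPS * (1 - PRODUCTION_SHARE)
    per_slice = leftover / max(len(experimental_slices), 1)
    return {name: per_slice for name in experimental_slices}

print(allocate(["routing-exp", "codec-exp"]))  # 900 kbps each
```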
VeRTIGO: Network virtualization and beyond
In this paper we present VeRTIGO (Virtual Topologies Generalization in OpenFlow networks), a software-defined networking platform designed for network virtualization. Built on FlowVisor, the original OpenFlow network slicing system, VeRTIGO aims at covering all flavors of network virtualization: in particular, it is able to expose a simple abstract node at one extreme, and to deliver a logically fully connected network at the very opposite end. In this work, we first introduce the VeRTIGO system architecture and its design choices, then report on a prototype implementation deployed over an OpenFlow-enabled testbed. Experimental results show that VeRTIGO can deliver flexible and reliable network virtualization services to a wide range of use cases in spite of failures and/or congestion in the underlying physical network.
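To make the "all flavors of virtualization" idea concrete, the sketch below shows the core embedding step any such platform must perform: mapping a virtual link onto a path in the physical topology. The topology, switch names, and BFS mapping are illustrative assumptions, not VeRTIGO's internals.

```python
# Sketch of generalized slicing: one virtual link is embedded as a path in
# the physical OpenFlow network (topology and names are illustrative).
from collections import deque

PHYSICAL = {                 # adjacency list of the physical network
    "s1": ["s2", "s3"],
    "s2": ["s1", "s4"],
    "s3": ["s1", "s4"],
    "s4": ["s2", "s3"],
}

def map_virtual_link(src, dst):
    """Embed one virtual link as a shortest physical path (BFS); the
    returned switches are the ones that must carry this slice's rules."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in PHYSICAL[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no physical path: the virtual link cannot be embedded

# A "single abstract node" slice hides this path entirely; a logically
# fully connected virtual topology maps every virtual link this way.
print(map_virtual_link("s1", "s4"))  # ['s1', 's2', 's4']
```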