Using the event calculus for tracking the normative state of contracts
In this work, we have been principally concerned with the representation of contracts so that their normative state may be tracked in an automated fashion over their deployment lifetime. The normative state of a contract, at a particular time, is the aggregation of instances of normative relations that hold between the contract parties at that time, plus the current values of contract variables. The effects of contract events on the normative state of a contract are specified using an XML formalisation of the Event Calculus, called ecXML. We use an example mail service agreement from the domain of web services to ground the discussion of our work. We give a characterisation of the agreement according to the normative concepts of obligation, power and permission, and show how the ecXML representation may be used to track the state of the agreement according to a narrative of contract events. We also describe a state tracking architecture and a contract deployment tool, both of which have been implemented in the course of our work.
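To make the Event Calculus machinery concrete, the following is a minimal sketch of tracking which normative relations hold over a narrative of events, using the standard initiates/terminates/holds-at pattern. It is illustrative only: the fluent and event names (e.g. obligation(provider, deliver_mail)) are hypothetical, and the paper's actual representation is the ecXML formalisation rather than Python code.

```python
# Minimal Event Calculus sketch (illustrative; not the authors' ecXML engine).
# Fluents such as "obligation(provider, deliver_mail)" are hypothetical examples.

from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    name: str
    time: int

# Hypothetical domain rules: which events initiate / terminate which fluents.
INITIATES = {
    "accept_order": ["obligation(provider, deliver_mail)"],
    "miss_deadline": ["power(client, claim_penalty)"],
}
TERMINATES = {
    "deliver_mail": ["obligation(provider, deliver_mail)"],
    "waive_penalty": ["power(client, claim_penalty)"],
}

def holds_at(fluent, t, narrative):
    """A fluent holds at t if some earlier event initiated it and no
    intervening event terminated it (the usual inertia assumption)."""
    holds = False
    for ev in sorted(narrative, key=lambda e: e.time):
        if ev.time >= t:
            break
        if fluent in INITIATES.get(ev.name, []):
            holds = True
        if fluent in TERMINATES.get(ev.name, []):
            holds = False
    return holds

narrative = [Event("accept_order", 1), Event("miss_deadline", 5)]
print(holds_at("obligation(provider, deliver_mail)", 3, narrative))  # True
print(holds_at("power(client, claim_penalty)", 6, narrative))        # True
```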
Energy-Efficient Selective Activation in Femtocell Networks
Provisioning the capacity of wireless networks is difficult when peak load is significantly higher than average load, for example, in public spaces like airports or train stations. Service providers can use femtocells and small cells to increase local capacity, but serving peak loads requires deploying a large number of femtocells that remain idle most of the time, wasting a significant amount of power.
To reduce the energy consumption of over-provisioned femtocell networks, we formulate a femtocell selective activation problem, which we formalize as an integer nonlinear optimization problem. Then we introduce GREENFEMTO, a distributed femtocell selective activation algorithm that deactivates idle femtocells to save power and activates them on-the-fly as the number of users increases. We prove that GREENFEMTO converges to a locally Pareto optimal solution and demonstrate its performance using extensive simulations of an LTE wireless system. Overall, we find that GREENFEMTO requires up to 55% fewer femtocells to serve a given user load, relative to an existing femtocell power-saving procedure, and comes within 15% of a globally optimal solution.
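As a rough illustration of selective activation, the sketch below greedily switches on the fewest femtocells whose combined capacity covers the offered load. This is a simplified, centralized stand-in with a hypothetical uniform per-cell capacity; GREENFEMTO itself is distributed and solves a richer integer nonlinear model that also accounts for coverage and user association.

```python
# Simplified, centralized sketch of selective femtocell activation
# (illustrative only; the capacity model is a hypothetical placeholder).

def select_active_femtocells(user_demand, femtocells, capacity_per_cell):
    """Greedily activate the fewest cells whose combined capacity covers demand.

    user_demand: total offered load (e.g., required resource blocks)
    femtocells: list of (cell_id, power_cost) tuples
    capacity_per_cell: load each active femtocell can serve
    """
    active = []
    served = 0.0
    # Prefer cells that are cheapest to keep on.
    for cell_id, power_cost in sorted(femtocells, key=lambda c: c[1]):
        if served >= user_demand:
            break
        active.append(cell_id)
        served += capacity_per_cell
    return active

# Example: 3 units of demand, 1 unit of capacity per cell -> 3 cells activated.
cells = [("f1", 5.0), ("f2", 4.0), ("f3", 6.0), ("f4", 5.5)]
print(select_active_femtocells(3.0, cells, capacity_per_cell=1.0))  # ['f2', 'f1', 'f4']
```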
The Incorporation of Sustainability in Higher Education: A Research Synthesis
In the past decade, sustainability efforts in higher education have become more prevalent across the nation and the world. However, some institutions lack the resources and information on how to successfully create a campus-wide culture that is dedicated to environmentally friendly practices. This study was designed to help direct institutions on the actions they can take to create a sustainably focused campus. Six participants from the offices of sustainability at Babson College and Bryant University were interviewed for this qualitative study. Findings included the significance of incorporating sustainability into the foundation of the institution through signing specific documents that outline the steps to creating campus-wide sustainable behavior. Participants also explained the importance of fostering behavior change to create campus-wide collaboration and how programming can create activism focused on sustainability. Based on the findings, recommendations for this study include establishing sustainability goals through institutional commitment, creating administrative positions dedicated to sustainability, advocating for student interest and participation in sustainability efforts, implementing sustainability training and education through cross-campus collaboration, producing sustainable infrastructure, and continuing assessment and monitoring of sustainability on campus
Combining intersemiotic and interlingual translation in training programmes: A functional approach to museum audio description
This paper seeks to put forward a didactic proposal focused on museum audio description (AD) to be implemented with post-graduate students attending a translation studies course within a Languages and Communication programme. The aim is to raise students’ awareness of translation and accessibility practices in the cultural and creative industries and train specialised translators and describers. The proposal includes two different but complementary levels. On a more theoretical side, museum AD is introduced, both as a form of intersemiotic translation and as an interpretative tool in the museum’s wider communication framework. From a practical point of view, we draw on Mazur (2020), who exploited the functional model proposed by Nord (2018 [1997]) with her translation-oriented text analysis in the context of screen AD training. We suggest that it may also be adapted to serve as a guiding methodology for prospective museum translators and describers. In doing so, intersemiotic translation is combined with interlingual translation to train students to (1) audio describe specific artworks/artefacts in their first language (L1) and (2) translate the produced ADs into their second language (L2)
The home-to-work travel plan: a nationwide survey
Nationwide survey for OMNITEL - VODAFONE: the questionnaire responses of employees at all the national sites (CATANIA, PADOVA, PISA, ROMA, MILANO and CESANO BOSCONE (MI)) were analysed. All the sites considered, with the exception of Padova, are located at the municipal boundaries of their cities and thus turn out to be the "frontier" areas that characterize the relationship between the main municipalities and the neighbouring ones. It is precisely these functions that today emerge as the places of the new centralities.
Anomaly Detection and Anticipation in High Performance Computing Systems
In their quest toward exascale, High Performance Computing (HPC) systems are rapidly becoming larger and more complex, and so are the issues concerning their maintenance. Fortunately, many current HPC systems are equipped with data monitoring infrastructures that characterize the system state and whose data can be used to train Deep Learning (DL) anomaly detection models, a very active research area. However, the lack of labels describing the state of the system is a widespread issue: annotating data is a costly task that generally falls on human system administrators and thus does not scale toward exascale. In this article we investigate the possibility of extracting labels from a service monitoring tool (Nagios) currently used by HPC system administrators to flag the nodes that undergo maintenance operations. This makes it possible to automatically annotate data collected by a fine-grained monitoring infrastructure; the labelled data is then used to train and validate a DL model for anomaly detection. We conduct the experimental evaluation on a tier-0 production supercomputer hosted at CINECA, Bologna, Italy. The results reveal that the DL model can accurately detect real failures and, moreover, can predict the onset of anomalies by systematically anticipating the actual labels (i.e., the moment when system administrators realize that an anomalous event has happened); the average advance time computed on historical traces is around 45 minutes. The proposed technology can be easily scaled toward exascale systems to ease their maintenance.
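To illustrate the label-extraction idea, the sketch below turns per-node maintenance/downtime windows (as a Nagios-style tool might export them) into a binary anomaly label attached to fine-grained monitoring samples. The column names and window format are assumptions made for the example, not the paper's actual schema.

```python
# Hedged sketch: derive per-node anomaly labels from maintenance windows and
# attach them to monitoring samples. Schema below is assumed for illustration.

import pandas as pd

def label_samples(samples: pd.DataFrame, downtimes: pd.DataFrame) -> pd.DataFrame:
    """samples: columns ['node', 'timestamp', ...metrics...]
    downtimes: columns ['node', 'start', 'end'] (one row per maintenance window)
    Returns a copy of samples with a binary 'anomaly' column."""
    samples = samples.copy()
    samples["anomaly"] = 0
    for _, w in downtimes.iterrows():
        mask = (
            (samples["node"] == w["node"])
            & (samples["timestamp"] >= w["start"])
            & (samples["timestamp"] <= w["end"])
        )
        samples.loc[mask, "anomaly"] = 1
    return samples

samples = pd.DataFrame({
    "node": ["n1"] * 4,
    "timestamp": pd.to_datetime(["2022-01-01 10:00", "2022-01-01 10:05",
                                 "2022-01-01 10:10", "2022-01-01 10:15"]),
    "load": [0.4, 0.9, 0.95, 0.3],
})
downtimes = pd.DataFrame({
    "node": ["n1"],
    "start": pd.to_datetime(["2022-01-01 10:04"]),
    "end": pd.to_datetime(["2022-01-01 10:12"]),
})
print(label_samples(samples, downtimes))
```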
Multi-Pulse Laser Wakefield Acceleration: A New Route to Efficient, High-Repetition-Rate Plasma Accelerators and High Flux Radiation Sources
Laser-driven plasma accelerators can generate accelerating gradients three orders of magnitude larger than radio-frequency accelerators and have achieved beam energies above 1 GeV in centimetre-long stages. However, the pulse repetition rate and wall-plug efficiency of plasma accelerators are limited by the driving laser to less than approximately 1 Hz and 0.1%, respectively. Here we investigate the prospects for exciting the plasma wave with trains of low-energy laser pulses rather than a single high-energy pulse. Resonantly exciting the wakefield in this way would enable the use of different technologies, such as fibre or thin-disc lasers, which are able to operate at multi-kilohertz pulse repetition rates and with wall-plug efficiencies two orders of magnitude higher than those of current laser systems. We outline the parameters of efficient, GeV-scale, 10-kHz plasma accelerators and show that they could drive compact X-ray sources with average photon fluxes comparable to those of third-generation light sources but with significantly improved temporal resolution. Likewise, FEL operation could be driven with comparable peak power but at significantly higher repetition rates than extant FELs.
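As a back-of-the-envelope illustration of the resonance condition behind multi-pulse excitation (not a result taken from the paper), the pulses in the train should be spaced by roughly the plasma period T_p = 2*pi/omega_p, with omega_p = sqrt(n_e e^2 / (eps0 m_e)). The sketch below evaluates this for two example electron densities.

```python
# Illustrative calculation of the plasma period, i.e. the approximate pulse
# spacing for resonant multi-pulse wakefield excitation. Example densities
# are arbitrary choices, not parameters quoted in the paper.

import math

E_CHARGE = 1.602176634e-19      # C
EPS0 = 8.8541878128e-12         # F/m
M_E = 9.1093837015e-31          # kg
C = 2.99792458e8                # m/s

def plasma_period(n_e_cm3):
    """Plasma period (s) for an electron density given in cm^-3."""
    n_e = n_e_cm3 * 1e6                                   # convert to m^-3
    omega_p = math.sqrt(n_e * E_CHARGE**2 / (EPS0 * M_E))
    return 2 * math.pi / omega_p

for n_e in (1e17, 1e18):
    T_p = plasma_period(n_e)
    print(f"n_e = {n_e:.0e} cm^-3: T_p ~ {T_p * 1e15:.0f} fs, "
          f"plasma wavelength ~ {C * T_p * 1e6:.0f} um")
```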
Time resolved optical Kerr effect analysis of urea–water system
The nuclear dynamics of aqueous urea solutions was analyzed by the time resolved optical Kerr effect (OKE). The data analysis was carried out in both the time and frequency domains. Three relaxation times characterize the decay of the OKE signal at high mole fractions of urea, while only two relaxation times characterize this decay at low mole fractions. The observed slowest relaxation time increases with increasing urea mole fraction. The comparison between this relaxation time and those determined by Raman and nuclear magnetic resonance spectroscopies suggests that the slow relaxation time is related to the reorientation of an axis lying in the plane of the urea molecule. At high mole fractions, the power spectra derived from the Fourier transform of the OKE signal are characterized by one broad peak at around 70 cm−1 and by a shoulder at around 160 cm−1 on the high-frequency side of that peak. This shoulder is related to the hydrogen bond interactions that involve urea molecules. Molecular dynamics simulation results on the urea/water system suggest that the power spectra derived from OKE data can be interpreted in terms of translational motions (caging effect) and rotational motions (libration) of urea molecules.
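As a generic illustration of how a power spectrum is obtained from a time-domain response (not the paper's actual data or fitting procedure), the sketch below Fourier-transforms a synthetic damped oscillation standing in for the intermolecular band near 70 cm−1; in practice the slow relaxational components are typically fitted and subtracted before transforming.

```python
# Hedged sketch: power spectrum of a synthetic time-domain response via FFT.
# The damped oscillation below is a stand-in, not the measured OKE signal.

import numpy as np

dt = 10e-15                               # 10 fs sampling step
t = np.arange(0.0, 5e-12, dt)             # 5 ps window
c_cm = 2.998e10                           # speed of light in cm/s

# Synthetic intermolecular band near 70 cm^-1, damped on a ~0.5 ps timescale.
omega = 2 * np.pi * c_cm * 70.0           # 70 cm^-1 expressed in rad/s
signal = np.exp(-t / 0.5e-12) * np.sin(omega * t)

spectrum = np.abs(np.fft.rfft(signal)) ** 2
wavenumber = np.fft.rfftfreq(len(t), d=dt) / c_cm   # Hz -> cm^-1

peak = wavenumber[np.argmax(spectrum)]
print(f"synthetic intermolecular band peaks near {peak:.0f} cm^-1")
```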
An Explainable Model for Fault Detection in HPC Systems
Large supercomputers are composed of numerous components that can break down or behave in unwanted ways. Identifying broken components is a daunting task for system administrators, so an automated tool would be a boon for system resiliency. The wealth of data available in a supercomputer can be used for this task. In this work we propose an approach that combines holistic data centre monitoring, node status labeling by system administrators, and an explainable model for fault detection in supercomputing nodes. The proposed model aims at classifying the different states of the computing nodes thanks to labeled data describing the supercomputer's behaviour, data which is typically collected by system administrators but not integrated into the holistic monitoring infrastructure used for data centre automation. Compared to other methods, the one proposed here is robust and provides explainable predictions. The model has been trained and validated on data gathered from a tier-0 supercomputer in production.
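As a toy illustration of an explainable classifier over labeled node states, the sketch below trains a decision tree on synthetic data and prints the learned rules, which serve directly as the explanation. The tree, the features and the data are assumptions made for the example; the paper's actual model and feature set may differ.

```python
# Hedged sketch: an interpretable classifier over labeled node states.
# A decision tree is used purely as an example of an explainable model.

import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Hypothetical features per node sample: [cpu_load, mem_used, core_temp].
X_healthy = rng.normal([0.4, 0.5, 55.0], [0.1, 0.1, 3.0], size=(200, 3))
X_faulty = rng.normal([0.05, 0.9, 80.0], [0.05, 0.05, 5.0], size=(50, 3))
X = np.vstack([X_healthy, X_faulty])
y = np.array([0] * 200 + [1] * 50)   # 0 = healthy, 1 = faulty (admin label)

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The learned rules themselves are the explanation of the predictions.
print(export_text(clf, feature_names=["cpu_load", "mem_used", "core_temp"]))
```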