
    Sustainability of intensive groundwater development: experience in Spain

    Intensive aquifer development is common in arid and semiarid countries. The associated economic and social benefits are great, but management is needed and sustainability has to be analysed in the framework of a sound hydrogeological background, in which recharge is a key term. Recharge under natural conditions may differ greatly from the actual value under groundwater exploitation conditions, when the aquifer is connected to surface water bodies or evaporation conditions are modified. Actual recharge is not an aquifer property but varies with groundwater abstraction and its pattern, with changes in surface water-groundwater relationships, and with other circumstances such as irrigation return flows, leakages, and activities to modify it artificially. Groundwater plays an important role in nature as it sustains spring flow, river base flow, wetlands, and crypto-wetlands, and the related provision of ecological services to mankind. Therefore, assessing developable groundwater resources and their sustainability has to take into account competing demands, the net benefit of capturing groundwater at a given moment rather than under other circumstances, and the exchange of groundwater-related nature services for the human use of groundwater. The often large storage of aquifers relative to their annual flow implies that aquifer development produces effects that may last decades and even affect upcoming human generations. This new dimension, which has economic and sustainability aspects, is not as important for other water resources. Critical flow thresholds have to be considered for groundwater-dependent ecosystems. These issues are considered here from the point of view of water quantity, which is the dominant aspect under arid and semiarid conditions. Water quality may be as important as, or more important than, quantity for humans and for nature services, but it needs a separate treatment. The hydrogeological and socio-economic aspects of aquifer behaviour are presented taking into account the experience drawn from some intensively exploited and economically and socially important aquifers, mostly those in La Mancha, in central Spain, but also other intensively exploited Spanish aquifers. Top-down administrative decisions aimed at achieving a given sustainability target have resulted in partial failures, but better outcomes could be achieved if action is agreed among stakeholders. Mixed solutions seem the best approach.
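    The remark that aquifer storage is often large relative to annual flow can be made concrete with a rough turnover-time estimate: dividing storage by mean annual recharge gives the characteristic time over which development effects play out. The sketch below is a minimal illustration of that arithmetic with hypothetical figures; the function name and the storage and recharge values are assumptions, not data from the paper.

```python
# Back-of-the-envelope sketch (hypothetical values, not from the paper):
# the storage-to-recharge ratio gives a characteristic turnover time, which
# hints at why the effects of aquifer development can persist for decades.

def turnover_time_years(storage_hm3: float, annual_recharge_hm3: float) -> float:
    """Characteristic renewal time of the water stored in an aquifer, in years."""
    return storage_hm3 / annual_recharge_hm3

# Hypothetical aquifer: 10,000 hm3 in storage, 300 hm3/year of mean recharge.
print(turnover_time_years(10_000, 300))  # ~33 years: responses span decades
```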

    A Critical Update of the Classification of Chiari and Chiari-like Malformations

    Arnold-Chiari malformation; Classification; Rare diseases. Chiari malformations are a group of craniovertebral junction anomalies characterized by the herniation of cerebellar tonsils below the foramen magnum, often accompanied by brainstem descent. The existing classification systems for Chiari malformations have expanded from the original four categories to nine, leading to debates about the need for a more descriptive and etiopathogenic terminology. This review aims to examine the various classification approaches employed and proposes a simplified scheme to differentiate between different types of tonsillar herniations. Furthermore, it explores the most appropriate terminology for acquired herniation of cerebellar tonsils and other secondary Chiari-like malformations. Recent advances in magnetic resonance imaging (MRI) have revealed a higher prevalence and incidence of Chiari malformation Type 1 (CM1) and identified similar cerebellar herniations in individuals unrelated to the classic phenotypes described by Chiari. As we reassess the existing classifications, it becomes crucial to establish a terminology that accurately reflects the diverse presentations and underlying causes of these conditions. This paper contributes to the ongoing discussion by offering insights into the evolving understanding of Chiari malformations and proposing a simplified classification and terminology system to enhance diagnosis and management. This research was partially supported by grant FIS PI22/01082, which was co-financed by the European Regional Development Fund (ERDF), awarded to M.A. Poca, and by grant 2021SGR/00810 from the Agència de Gestió d’Ajuts Universitaris i de Recerca (AGAUR), Departament de Recerca i Universitats de la Generalitat de Catalunya, Spain. ASM is the recipient of a predoctoral fellowship from grant 2021SGR/00810 from the Agència de Gestió d’Ajuts Universitaris i de Recerca (AGAUR). The following nongovernmental associations have generously donated funding to support this research: 1. Asociación Nacional de Amigos de Arnold-Chiari (ANAC, http://www.arnoldchiari.es (accessed on 7 June 2023)); 2. Asociación Chiari y Siringomielia del Principado de Asturias (CHySPA, https://chyspa.org (accessed on 7 June 2023)); 3. Federación Española de Malformación de Chiari y Patologías Asociadas (FEMACPA); and 4. Mariana Dañobeitia (https://references.neurotrauma.com/chiari (accessed on 7 June 2023)).
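    The review's own simplified scheme is not reproduced in the abstract, but the kind of threshold-based labelling it discusses can be illustrated with a toy function. The sketch below is a generic illustration only: the 5 mm cut-off is the conventional radiological criterion often quoted for CM1 and is used here as an assumed parameter, and the labels are placeholders rather than the terminology proposed by the authors.

```python
# Generic, simplified illustration of threshold-based labelling of tonsillar
# descent measured on MRI. NOT the classification proposed in this paper;
# the 5 mm cut-off and the labels are assumptions for illustration only.

def label_tonsillar_descent(descent_mm: float, cm1_threshold_mm: float = 5.0) -> str:
    """Map tonsillar descent below the foramen magnum (mm) to a coarse label."""
    if descent_mm <= 0:
        return "no descent below the foramen magnum"
    if descent_mm < cm1_threshold_mm:
        return "borderline tonsillar ectopia"
    return "herniation consistent with CM1 (requires clinical correlation)"

print(label_tonsillar_descent(7.2))
```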

    On microarchitectural mechanisms for cache wearout reduction

    Hot carrier injection (HCI) and bias temperature instability (BTI) are two of the main deleterious effects that increase a transistor's threshold voltage over the lifetime of a microprocessor. This voltage degradation causes slower transistor switching and eventually can result in faulty operation. HCI manifests itself when transistors switch from logic '0' to '1' and vice versa, whereas BTI is the result of a transistor maintaining the same logic value for an extended period of time. These failure mechanisms are especially harmful in the transistors used to implement the SRAM cells of first-level (L1) caches, which are frequently accessed, so they are critical to performance, and they are continuously aging. This paper focuses on microarchitectural solutions to reduce transistor aging effects induced by both HCI and BTI in the data array of L1 data caches. First, we show that the majority of cell flips are concentrated in a small number of specific bits within each data word. In addition, we build upon previous studies showing that logic '0' is the most frequently written value in a cache by identifying which cells hold a given logic value for a significant amount of time. Based on these observations, this paper introduces a number of architectural techniques that spread the flips evenly across memory cells and reduce the amount of time that logic '0' values are stored in the cells. This work was supported in part by the Spanish Ministerio de Economía y Competitividad within the Plan E Funds under Grant TIN2015-66972-C5-1-R, in part by the HiPEAC Collaboration Grant funded by the FP7 HiPEAC Network of Excellence under Grant 287759, and in part by the Engineering and Physical Sciences Research Council under Grant EP/K026399/1 and Grant EP/J016284/1.
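    The first observation, that flips concentrate in a few bit positions of each word, can be reproduced with a simple offline analysis of the values successively written to a cache word. The sketch below is a minimal illustration of that measurement, not the paper's methodology; the write stream is hypothetical.

```python
# Minimal sketch (hypothetical data): count, per bit position, how many times
# the stored bit changes across successive writes to one 32-bit cache word.
# Counter-like values mostly toggle their low-order bits, so flip activity
# concentrates in a few cells.

def flips_per_bit(values, width=32):
    counts = [0] * width
    for old, new in zip(values, values[1:]):
        changed = old ^ new                       # bits that flipped on this write
        for bit in range(width):
            counts[bit] += (changed >> bit) & 1
    return counts

writes = list(range(9))                           # hypothetical write stream (a counter)
print(flips_per_bit(writes))                      # low-order bits accumulate most flips
```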

    Enhancing the L1 Data Cache Design to Mitigate HCI

    Over the lifetime of a microprocessor, the Hot Carrier Injection (HCI) phenomenon degrades the threshold voltage, which causes slower transistor switching and eventually results in timing violations and faulty operation. This effect appears when the memory cell contents flip from logic ‘0’ to ‘1’ and vice versa. In caches, the majority of cell flips are concentrated into only a few of the total memory cells that make up each data word. In addition, other researchers have noted that zero is the most commonly-stored data value in a cache, and have taken advantage of this behavior to propose data compression and power reduction techniques. Contrary to these works, we use this information to extend the lifetime of the caches by introducing two microarchitectural techniques that spread and reduce the number of flips across the first-level (L1) data cache cells. Experimental results show that, compared to the conventional approach, the proposed mechanisms reduce the highest cell flip peak up to 65.8%, whereas the threshold voltage degradation savings range from 32.0% to 79.9% depending on the application. This work has been supported by the Spanish Ministerio de Economía y Competitividad (MINECO), by FEDER funds through Grant TIN2012-38341-C04-01, by the Intel Early Career Faculty Honor Program Award, by a HiPEAC Collaboration Grant funded by the FP7 HiPEAC Network of Excellence under grant agreement 287759, and by the Engineering and Physical Sciences Research Council (EPSRC) through Grants EP/K026399/1 and EP/J016284/1. This is the author accepted manuscript. The final version is available from IEEE at http://dx.doi.org/10.1109/LCA.2015.2460736. The dataset associated with this article can be found on the repository at https://www.repository.cam.ac.uk/handle/1810/249006.
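    One generic way to spread flips more evenly, in the spirit of the abstract but not necessarily either of the paper's two mechanisms, is to periodically rotate which physical cell stores each logical bit of a word. The sketch below is an assumed wear-spreading illustration; the rotation scheme and the epoch parameter are inventions for the example.

```python
# Assumed wear-spreading illustration (not the paper's mechanisms): rotate the
# logical bits of each stored word by an epoch-dependent amount, so heavily
# toggled logical bits are not always mapped onto the same physical cells.
WIDTH = 32
MASK = (1 << WIDTH) - 1

def store(word: int, epoch: int) -> int:
    """Rotate the logical bits left by `epoch` positions before writing the array."""
    r = epoch % WIDTH
    return ((word << r) | (word >> (WIDTH - r))) & MASK

def load(cell_bits: int, epoch: int) -> int:
    """Undo the rotation on a read so software still sees the original value."""
    r = epoch % WIDTH
    return ((cell_bits >> r) | (cell_bits << (WIDTH - r))) & MASK

value = 0x0000002A
assert load(store(value, epoch=5), epoch=5) == value   # data round-trips correctly
```

    A real design would track the epoch with a small per-line or per-set counter and apply the remapping lazily on refills; here the epoch is simply a function argument.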

    Federating distributed clinical data for the prediction of adverse hypotensive events

    The ability to predict adverse hypotensive events, where a patient's arterial blood pressure drops to abnormally low (and dangerous) levels, would be of major benefit to the fields of primary and secondary health care, and especially to the traumatic brain injury domain. A wealth of data exist in health care systems providing information on the major health indicators of patients in hospitals (blood pressure, temperature, heart rate, etc.). It is believed that if enough of these data could be drawn together and analysed in a systematic way, then a system could be built that will trigger an alarm predicting the onset of a hypotensive event over a useful time scale, e.g. half an hour in advance. In such circumstances, avoidance measures can be taken to prevent such events arising. This is the basis for the Avert-IT project (http://www.avert-it.org), a collaborative EU-funded project involving the construction of a hypotension alarm system exploiting Bayesian neural networks using techniques of data federation to bring together the relevant information for study and system development
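    Avert-IT's predictor is a Bayesian neural network trained over federated clinical data; as a far simpler stand-in, the sketch below shows the general shape of a sliding-window early-warning rule that projects the recent blood-pressure trend half an hour ahead. The threshold, window, and linear extrapolation are illustrative assumptions, not the project's model.

```python
# Toy early-warning rule (illustrative assumptions only; the Avert-IT system
# itself uses a Bayesian neural network over federated data). Extrapolate the
# recent mean arterial pressure (MAP) trend 30 minutes ahead and raise an
# alarm if it is projected to fall below a hypotension threshold.

def hypotension_alarm(map_samples, minutes_per_sample=1.0,
                      horizon_min=30.0, threshold_mmhg=60.0):
    if len(map_samples) < 2:
        return False
    # Simple linear trend over the window: average change per sample.
    slope = (map_samples[-1] - map_samples[0]) / (len(map_samples) - 1)
    projected = map_samples[-1] + slope * (horizon_min / minutes_per_sample)
    return projected < threshold_mmhg

window = [78, 77, 75, 74, 72, 71, 70, 68]   # hypothetical MAP readings (mmHg)
print(hypotension_alarm(window))            # True: the falling trend crosses 60 mmHg
```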

    Analyzing web server performance under dynamic user workloads

    The increasing popularity of web applications has introduced a new paradigm where users are no longer passive web consumers but become active contributors to the web, especially in the contexts of social networking, blogs, wikis or e-commerce. In this new paradigm, contents and services are even more dynamic, which consequently increases the level of dynamism in users' behavior. Moreover, this trend is expected to rise in the future web. This dynamism is a major obstacle to defining and modeling representative web workloads; in fact, this characteristic is not fully represented in most current web workload generators. This work proves that web users' dynamic behavior is a crucial point that must be addressed in web performance studies in order to accurately estimate system performance indexes. In this paper, we analyze the effect of using a more realistic dynamic workload on web performance metrics. To this end, we evaluate a typical e-commerce scenario and compare the results obtained using different levels of dynamic workload instead of traditional workloads. Experimental results show that, when a more dynamic and interactive workload is taken into account, performance indexes can differ widely and noticeably shift the stress borderline of the server. For instance, processor usage can increase by 30% due to dynamism, negatively affecting the average response time perceived by users, which can also result in unwanted effects on marketing and customer loyalty policies. © 2012 Elsevier B.V. All rights reserved. This work has been partially supported by the Spanish Ministry of Science and Innovation under Grant TIN-2009-08201. Peña Ortiz, R.; Gil Salinas, JA.; Sahuquillo Borrás, J.; Pont Sanjuan, A. (2013). Analyzing web server performance under dynamic user workloads. Computer Communications. 36(4):386-395. https://doi.org/10.1016/j.comcom.2012.11.005
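    A dynamic workload of the kind the paper argues for can be modelled by letting each simulated user walk a state machine rather than replay a fixed request trace. The sketch below is a hypothetical example of such a generator; the navigation states and transition probabilities are assumptions, not the workload actually evaluated in the paper.

```python
# Sketch of a state-based dynamic user workload for an e-commerce site.
# States and transition probabilities are hypothetical; this is not the
# generator evaluated in the paper.
import random

TRANSITIONS = {
    "home":        [("browse", 0.7), ("search", 0.2), ("exit", 0.1)],
    "browse":      [("product", 0.6), ("search", 0.2), ("exit", 0.2)],
    "search":      [("product", 0.7), ("browse", 0.2), ("exit", 0.1)],
    "product":     [("add_to_cart", 0.3), ("browse", 0.5), ("exit", 0.2)],
    "add_to_cart": [("checkout", 0.6), ("browse", 0.3), ("exit", 0.1)],
    "checkout":    [("exit", 1.0)],
}

def generate_session(start="home", max_requests=50):
    state, session = start, []
    while state != "exit" and len(session) < max_requests:
        session.append(state)                       # issue a request for this state
        states, weights = zip(*TRANSITIONS[state])
        state = random.choices(states, weights=weights)[0]
    return session

print(generate_session())   # e.g. ['home', 'browse', 'product', 'add_to_cart', ...]
```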

    Referrer Graph: A cost-effective algorithm and pruning method for predicting web accesses

    This paper presents the Referrer Graph (RG) web prediction algorithm and a pruning method for the associated graph as a low-cost solution to predict web users' next accesses. RG is aimed at being used in a real web system with prefetching capabilities without degrading its performance. The algorithm learns from users' accesses and builds a Markov model. These kinds of algorithms use the sequence of user accesses to make predictions. Unlike previous Markov model based proposals, the RG algorithm differentiates dependencies among objects of the same page from dependencies among objects of different pages by using the object URI and the referrer in each request. Although its design permits us to build a simple data structure that is easier to handle and, consequently, needs lower computational cost in comparison with other algorithms, a pruning mechanism has been devised to avoid the continuous growth of this data structure. Results show that, compared with the best prediction algorithms proposed in the open literature, the RG algorithm achieves similar precision values and page latency savings while requiring much less computational and memory resources. Furthermore, when the graph pruning mechanism is applied, additional and notable resource consumption savings can be achieved without degrading the original performance or the latency savings. © 2013 Elsevier B.V. All rights reserved. This work has been partially supported by the Spanish Ministry of Science and Innovation under Grant TIN2009-08201. The authors would also like to thank the technical staff of the School of Computer Science at the Polytechnic University of Valencia for providing us with recent and customized trace files logged by their web server. De La Ossa Perez, BA.; Gil Salinas, JA.; Sahuquillo Borrás, J.; Pont Sanjuan, A. (2013). Referrer Graph: A cost-effective algorithm and pruning method for predicting web accesses. Computer Communications. 36(8):881-894. https://doi.org/10.1016/j.comcom.2013.02.005
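    The core idea, keying predictions on the referrer of each request, can be sketched with a small in-memory graph. The example below is an assumed simplification in the spirit of the abstract, not the exact RG algorithm: the edge weighting, the page/object distinction, and the pruning policy are all placeholders.

```python
# Simplified referrer-based next-access predictor (assumed design, not the
# exact Referrer Graph algorithm). Each request carries its URI and the
# referrer that triggered it; edges referrer -> URI count observed transitions.
from collections import defaultdict

class ReferrerGraph:
    def __init__(self):
        self.edges = defaultdict(lambda: defaultdict(int))   # referrer -> {uri: count}

    def learn(self, referrer: str, uri: str) -> None:
        self.edges[referrer][uri] += 1

    def predict(self, current_page: str, top_n: int = 2):
        """Return the top_n URIs most often requested with current_page as referrer."""
        candidates = self.edges.get(current_page, {})
        return sorted(candidates, key=candidates.get, reverse=True)[:top_n]

rg = ReferrerGraph()
rg.learn("/index.html", "/style.css")
rg.learn("/index.html", "/news.html")
rg.learn("/index.html", "/news.html")
print(rg.predict("/index.html"))   # ['/news.html', '/style.css']
```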

    Osteocalcin in serum, saliva and gingival crevicular fluid: their relation with periodontal treatment outcome in postmenopausal women

    Background. Osteocalcin levels have been proposed as a marker of the inhibition of bone formation. The purpose of this work is to determine osteocalcin concentrations in plasma, saliva and crevicular fluid and to correlate them with the outcome of periodontal treatment in postmenopausal women. Patients and methods. The study was carried out in thirty-nine postmenopausal women (57.8 ± 8.5 years of age). The periodontal examination included plaque control, bleeding on probing, probing depth (PD) and clinical attachment loss (CAL). Osteocalcin levels were determined in serum, saliva and gingival crevicular fluid. Periodontal treatment was then performed. Six months after the first visit, a second periodontal examination was carried out. Results. Mean PD and CAL decreased significantly at the second periodontal examination in the group of women with serum osteocalcin < 10 ng/ml (15.8 ± 15.8% and 15.3 ± 21.2%, respectively; p < 0.05). Mean PD decreased significantly at the second examination in the groups with salivary osteocalcin concentrations < 3 ng/ml (17.1 ± 15.9%; p < 0.05) and 3-7 ng/ml (16.2 ± 18.1%; p < 0.05). Conclusions. Low serum osteocalcin levels are significantly associated with a greater percentage reduction in PD and CAL after periodontal treatment in postmenopausal women. Low salivary osteocalcin concentrations were significantly associated with a greater percentage reduction in PD.

    Antimony speciation in spirits stored in PET bottles: identification of a novel antimony complex

    Total antimony and its +V and +III oxidation state species were determined in twelve spirit samples (Greek raki and tsipouro) stored in polyethylene terephthalate (PET) bottles. Reliable and reproducible results were obtained following direct analysis by ICP-MS, giving total Sb concentrations between 0.4 and 4 µg L-1. Antimony speciation analysis by LC-ICP-MS was also performed, showing the presence of both inorganic Sb species along with an unknown Sb complex, which was the predominant species in all samples analysed. The structure of this complex was investigated by liquid chromatography with high-resolution tandem mass spectrometry. The analysis gave evidence for an acetaldehyde-bisulphite pyruvate Sb complex with the formula C7H14O12S2Sb. The proposed ligands are organic substances expected to be present in the raki matrix. In addition, the influence of high-temperature storage conditions and extended exposure times of up to two weeks on Sb migration from PET bottles into raki samples was investigated. Total Sb and Sb species content was determined by ICP-MS and LC-ICP-MS, respectively. The concentrations determined were in the range of 5.6 to 28 µg Sb per L of spirit after a week of storage at 60 °C. In that case, inorganic Sb(V) and Sb(III) became the predominant species compared to the 'novel' organic Sb complex.
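    Confirming a proposed elemental composition by high-resolution MS hinges on matching the calculated monoisotopic mass of that formula to the measured accurate mass. The sketch below simply computes the neutral monoisotopic mass of C7H14O12S2Sb from standard atomic masses; it is a generic check, not the paper's data-analysis workflow, and ionisation/adduct handling is omitted.

```python
# Compute the neutral monoisotopic mass of the proposed complex C7H14O12S2Sb
# from standard monoisotopic atomic masses (most abundant isotope of each
# element, 121Sb for antimony). A simplified check, not the paper's workflow.
MONOISOTOPIC_MASS = {"C": 12.0, "H": 1.007825, "O": 15.994915,
                     "S": 31.972071, "Sb": 120.903810}
COMPOSITION = {"C": 7, "H": 14, "O": 12, "S": 2, "Sb": 1}

mass = sum(MONOISOTOPIC_MASS[element] * n for element, n in COMPOSITION.items())
print(f"Monoisotopic mass of C7H14O12S2Sb: {mass:.4f} Da")
```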