148 research outputs found

    Assessment of the Nexus between Groundwater Extraction and Greenhouse Gas Emissions Employing Aquifer Modelling

    One of the main sources of greenhouse gas (GHG) emissions is electricity consumption, and water pumping, especially from deep groundwater resources, consumes a great deal of energy. In arid and semi-arid areas, where groundwater is the only source of water, it is pumped for agricultural, industrial and urban uses. The Kerman plain, located in the south-east of Iran, is one such arid and semi-arid area. Groundwater reliance and aquifer decline are the most prominent challenges this area has faced in recent years. These challenges increase the demand for electricity to pump water from the aquifer, so CO2 emissions rise as well. A large percentage of the water extracted from the aquifer is used for agricultural purposes. In this paper, by modelling the Kerman plain aquifer with MODFLOW using a Geographical Information System (GIS) database, and by studying the height of the groundwater table from 1999 to 2012, the electricity consumed by groundwater extraction for agricultural, industrial and urban water supply is calculated, and the resulting trend in CO2 emissions is evaluated. The model results are then examined under a business-as-usual (BAU) scenario of changes in water resources, yielding the amount of CO2 emitted through groundwater abstraction by the three sectors over the specified time horizon. Finally, some suggestions are presented for reducing greenhouse gas emissions over that horizon.
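The energy-to-emissions chain this abstract describes can be sketched in a few lines. The lift height, pump efficiency and grid emission factor below are illustrative assumptions, not figures from the paper:

```python
# Minimal sketch: CO2 emissions from the electricity used for groundwater
# pumping. All numeric inputs (volume, lift, efficiency, emission factor)
# are hypothetical placeholders, not values from the study.

RHO = 1000.0  # water density, kg/m^3
G = 9.81      # gravitational acceleration, m/s^2

def pumping_energy_kwh(volume_m3, lift_m, pump_efficiency=0.6):
    """Electrical energy (kWh) needed to lift a volume of water by lift_m metres."""
    energy_j = RHO * G * volume_m3 * lift_m / pump_efficiency
    return energy_j / 3.6e6  # joules -> kWh

def co2_emissions_t(volume_m3, lift_m, emission_factor_kg_per_kwh=0.6):
    """CO2 in tonnes from the electricity consumed by pumping."""
    kwh = pumping_energy_kwh(volume_m3, lift_m)
    return kwh * emission_factor_kg_per_kwh / 1000.0

# Example: one million cubic metres pumped from a 100 m deep water table.
print(round(co2_emissions_t(1e6, 100.0), 1))  # -> 272.5 (tonnes CO2)
```

As the water table declines, `lift_m` grows, so emissions per cubic metre rise, which is the feedback the paper quantifies for the BAU scenario.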

    Asymptotic bounds for the sizes of constant dimension codes and an improved lower bound

    We study asymptotic lower and upper bounds for the sizes of constant dimension codes with respect to the subspace or injection distance, which is used in random linear network coding. In this context we review known upper bounds and show relations between them. A slightly improved version of the so-called linkage construction is presented; it is used, e.g., to construct constant dimension codes with subspace distance d = 4 and codeword dimension k = 3, for all field sizes q and sufficiently large dimensions v of the ambient space, that exceed the MRD bound, for codes containing a lifted MRD code, by Etzion and Silberstein. Comment: 30 pages, 3 tables.
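The subspace distance the abstract refers to is d_S(U, W) = dim(U) + dim(W) - 2 dim(U ∩ W), equivalently 2 dim(U + W) - dim(U) - dim(W). A minimal sketch over GF(2), with subspaces given by generator rows encoded as integer bit masks (this encoding is an illustrative choice, not from the paper):

```python
# Sketch: subspace distance over GF(2). Each generator row is an int bit mask.

def rank_gf2(rows):
    """Rank over GF(2) via an XOR basis indexed by leading bit position."""
    basis = {}  # leading bit -> reduced basis vector
    for r in rows:
        while r:
            h = r.bit_length() - 1
            if h not in basis:
                basis[h] = r
                break
            r ^= basis[h]  # cancel the leading bit and keep reducing
    return len(basis)

def subspace_distance(U, W):
    """d_S(U, W) = 2*dim(U+W) - dim(U) - dim(W) for row-space generators."""
    return 2 * rank_gf2(U + W) - rank_gf2(U) - rank_gf2(W)

# Two 2-dimensional subspaces of GF(2)^4 sharing a 1-dimensional intersection:
print(subspace_distance([0b1000, 0b0100], [0b1000, 0b0010]))  # -> 2
```

Identical subspaces give distance 0; subspaces with trivial intersection give dim(U) + dim(W), the maximum for fixed dimensions.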

    Composition of Constraint, Hypothesis and Error Models to improve interaction in Human-Machine Interfaces

    We use Weighted Finite-State Transducers (WFSTs) to represent the different sources of information available: the initial hypotheses, the possible errors, the constraints imposed by the task (the interaction language) and the user input. The fusion of these models to find the most probable output string can be performed efficiently by carefully selected transducer operations. The proposed system initially suggests an output based on the set of hypotheses, possible errors and constraint models. Then, if human intervention is needed, a multimodal approach, in which the user input is combined with the aforementioned models, is applied to produce the desired output with minimum user effort. This approach offers the practical advantages of a decoupled model (e.g. input system + parameterized rules + post-processor), while keeping the error-recovery power of an integrated approach, in which all steps of the process are performed in the same formal machine (as in a typical HMM in speech recognition) so that an error at a given step does not become unrecoverable in subsequent steps. After a presentation of the theoretical basis of the proposed multi-source information system, its application to two real-world problems is addressed as an example of the possibilities of this architecture. The experimental results obtained demonstrate that significant user effort can be saved when using the proposed procedure. A simple demonstration, to better understand and evaluate the proposed system, is available on the web at https://demos.iti.upv.es/hi/. (C) 2015 Elsevier B.V. All rights reserved.
    Navarro Cerdan, J.R.; Llobet Azpitarte, R.; Arlandis, J.; Perez-Cortes, J. (2016). Composition of Constraint, Hypothesis and Error Models to improve interaction in Human-Machine Interfaces. Information Fusion, 29:1-13. doi:10.1016/j.inffus.2015.09.001
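The core operation the paper relies on is WFST composition: chaining a hypothesis model into an error (or constraint) model so that one transducer's output symbols feed the next one's input symbols, with path weights combined in the tropical semiring (negative log-probabilities added along a path). A minimal sketch, with epsilon transitions deliberately omitted and a simple dict-of-arcs representation that is an illustrative choice, not the paper's data structure:

```python
# Sketch: composition of two weighted transducers in the tropical semiring.
# A transducer is a dict: state -> list of (input, output, next_state, weight).
# Epsilon handling and weight pushing are omitted for brevity.
from collections import deque

def compose(t1, start1, finals1, t2, start2, finals2):
    """Compose t1 with t2: the result maps t1's input symbols to t2's output
    symbols wherever t1's output matches t2's input; weights are added."""
    start = (start1, start2)
    arcs, finals, seen = {}, set(), {start}
    queue = deque([start])
    while queue:
        s1, s2 = queue.popleft()
        if s1 in finals1 and s2 in finals2:
            finals.add((s1, s2))
        for (i1, o1, n1, w1) in t1.get(s1, []):
            for (i2, o2, n2, w2) in t2.get(s2, []):
                if o1 == i2:  # symbols must agree at the interface
                    arcs.setdefault((s1, s2), []).append(
                        (i1, o2, (n1, n2), w1 + w2))
                    if (n1, n2) not in seen:
                        seen.add((n1, n2))
                        queue.append((n1, n2))
    return arcs, start, finals

# Toy hypothesis model ('x' is recognized as 'a') composed with a toy error
# model that may keep 'a' (weight 0) or correct it to 'b' (penalty 2).
hyp = {0: [('x', 'a', 1, 0.0)]}
err = {0: [('a', 'a', 1, 0.0), ('a', 'b', 1, 2.0)]}
arcs, start, finals = compose(hyp, 0, {1}, err, 0, {1})
print(arcs[start])  # two composed arcs, one per error-model alternative
```

Searching the composed machine for the cheapest path from the start to a final state then yields the most probable output string, which is the decoding step the abstract describes.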

    A comparative study of turbulence models in a transient channel flow

    Open Access funded by the Engineering and Physical Sciences Research Council, under a Creative Commons license. The authors would like to acknowledge the financial support provided by the Engineering and Physical Sciences Research Council (EPSRC) through Grant No. EP/G068925/1. Peer reviewed. Publisher PDF.

    Quality of life in patients with breast cancer before and after diagnosis: an eighteen months follow-up study

    Background: Measuring quality of life in breast cancer patients is important in assessing treatment outcomes. This study examined the impact of breast cancer diagnosis and its treatment on the quality of life of women with breast cancer.
    Methods: This was a prospective study of quality of life in breast cancer patients. Quality of life was measured using the European Organization for Research and Treatment of Cancer Quality of Life Questionnaire (EORTC QLQ-C30) and its breast cancer supplementary measure (QLQ-BR23) at three points in time: baseline (pre-diagnosis), three months after initial treatment, and one year after completion of treatment (18 months of follow-up in all). At baseline the questionnaires were administered to all identified suspected patients while both patients and the interviewer were blind to the final diagnosis. Socio-demographic and clinical data included age, education, marital status, disease stage and initial treatment. Repeated-measures analysis was performed to compare quality-of-life differences over time.
    Results: In all, 167 patients were diagnosed with breast cancer. The mean age of the breast cancer patients was 47.2 (SD = 13.5) years, and the vast majority (82.6%) underwent mastectomy. At the eighteen-month follow-up, data for 99 patients were available for analysis. The results showed significant differences in patients' functioning and global quality of life across the three points in time (P < 0.001). Although patients' scores for body image and sexual functioning deteriorated, there were significant improvements in breast symptoms, systemic therapy side effects and patients' future perspective (P < 0.05).
    Conclusion: The findings suggest that, overall, breast cancer patients perceived benefit from their cancer treatment in the long term. However, patients reported problems with global quality of life, pain, arm symptoms and body image even 18 months after their treatment. In addition, most of the functional scores did not improve.

    Microarray analysis revealed different gene expression patterns in HepG2 cells treated with low and high concentrations of the extracts of Anacardium occidentale shoots

    In this study, the effects of low and high concentrations of Anacardium occidentale shoot extracts on gene expression in liver HepG2 cells were investigated. From MTT assays, the concentration of the shoot extracts that maintained 50% cell viability (IC50) was 1.7 mg/ml. Cell viability remained above 90% at both 0.4 mg/ml and 0.6 mg/ml of the extracts. These three concentrations were subsequently used for gene expression analysis with Affymetrix Human Genome 1.0 ST arrays, and the microarray data were validated using real-time qRT-PCR. A total of 246, 696 and 4503 genes were significantly regulated (P < 0.01) by at least 1.5-fold in response to 0.4, 0.6 and 1.7 mg/ml of the extracts, respectively. Genes regulated at all three concentrations included CDKN3, LOC100289612, DHFR, VRK1, CDC6, AURKB and GABRE. Genes such as CYP24A1, BRCA1, AURKA, CDC2, CDK2, CDK4 and INSR were significantly regulated at 0.6 mg/ml and 1.7 mg/ml but not at 0.4 mg/ml. However, genes including LGR5, IGFBP3, RB1, IDE, LDLR, MTTP, APOB, MTIX, SOD2 and SOD3 were regulated exclusively at the IC50 concentration. In conclusion, low concentrations of the extracts were able to significantly regulate a sizable number of genes, and the genes that were expressed depended strongly on the concentration of the extracts used.
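The selection rule the abstract applies, P < 0.01 combined with at least a 1.5-fold change, can be sketched as follows; the gene names and expression values below are made-up placeholders, not data from the study:

```python
# Sketch of a differential-expression filter: keep genes with p < 0.01 and
# an absolute fold change of at least 1.5 (i.e. |log2 FC| >= log2(1.5)).
# The toy data are illustrative only.
import math

def significant(genes, p_cut=0.01, fold_cut=1.5):
    """genes: dict of name -> (log2_fold_change, p_value)."""
    keep = []
    for name, (log2fc, p) in genes.items():
        if p < p_cut and abs(log2fc) >= math.log2(fold_cut):
            keep.append(name)
    return sorted(keep)

toy = {
    "CDKN3": (1.2, 0.001),   # up-regulated and significant
    "DHFR":  (-0.9, 0.004),  # down-regulated and significant
    "GAPDH": (0.1, 0.500),   # effectively unchanged
}
print(significant(toy))  # -> ['CDKN3', 'DHFR']
```

Running the same filter at each extract concentration yields per-concentration gene lists, whose intersections give the mutually regulated genes the abstract reports.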

    Big Data Fusion Model for Heterogeneous Financial Market Data (FinDF)

    The dawn of big data has seen the volume, variety and velocity of data sources increase dramatically. Enormous amounts of structured, semi-structured and unstructured heterogeneous data can be garnered at a rapid rate, making analysis of such big data a herculean task. This has never been truer than for data relating to financial stock markets, the biggest challenge being the 7 Vs of big data, which concern the collection, pre-processing, storage and real-time processing of such huge quantities of disparate data sources. Data fusion techniques have been adopted in a wide range of fields to cope with vast amounts of heterogeneous data from multiple sources, fusing them together to produce a more comprehensive view of the data and its underlying relationships. Research into fusing heterogeneous financial data is scant within the literature, with existing work considering only the fusion of text-based financial documents. The lack of integration between financial stock market data, social media comments, financial discussion board posts and broker agencies means that the benefits of data fusion are not being realised to their full potential. This paper proposes a novel data fusion model, inspired by the data fusion model introduced by the Joint Directors of Laboratories, for fusing disparate data sources relating to financial stocks. Data with diverse sets of features from different sources will supplement each other to produce a Smart Data Layer, which will assist in scenarios such as irregularity detection and the prediction of stock prices.