140 research outputs found

    Are Noachian-age ridged plains (Nplr) actually Early Hesperian in age?

    Whether the Nplr units in Memnonia and Argyre truly represent ridged-plains volcanism of Noachian age, or are instead areas of younger (Early Hesperian) volcanism that failed to bury older craters and therefore carry a greater total crater age than actually applies to the ridged-plains portion of those terrains, is examined. The Neukum and Hiller technique is used to determine the number of preserved crater retention surfaces in the Memnonia and Argyre regions, where Scott and Tanaka show Nplr units to be common. The results for cratered terrain (Npl) in Memnonia are summarized along with those for ridged plains (Nplr) in both Memnonia and Argyre, and they are compared with similar results obtained for Tempe Terra and Lunae Planum.
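    Crater-count techniques like the one named above start from a cumulative size-frequency distribution; the sketch below (with invented diameters and counting area, not data from the study) shows how N(>D) is tabulated:

```python
# Cumulative crater size-frequency distribution: N(>D), craters per unit
# area larger than diameter D. Crater-retention analyses such as Neukum &
# Hiller's begin from curves like this; all numbers here are illustrative.

def cumulative_sfd(diameters_km, area_km2, bins_km):
    """Return [(D, N(>D) per km^2)] for each threshold diameter in bins_km."""
    return [(d, sum(1 for c in diameters_km if c > d) / area_km2)
            for d in sorted(bins_km)]

counts = cumulative_sfd(
    diameters_km=[0.6, 1.2, 2.5, 3.1, 5.0, 8.4],  # invented crater sizes
    area_km2=1.0e4,                                # invented counting area
    bins_km=[1, 2, 4, 8],
)
```

    A kink in the resulting curve, plotted on log-log axes, is what signals a buried or resurfaced crater population.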

    Business and Information Technology Alignment Measurement -- a recent Literature Review

    Ever since technology became part of the business context, Business and Information Technology Alignment (BITA) has been one of the main concerns of IT and business executives and directors, owing to its importance to overall company performance, especially today in the age of digital transformation. Several models and frameworks have been developed for implementing BITA and for measuring its level of success, each with a different approach to this desired state. BITA measurement is one of the main decision-making tools in the strategic domain of companies. In general, classical-internal alignment is the most measured domain and external environment evolution alignment is the least measured. This literature review aims to characterize and analyze current research on BITA measurement, with a comprehensive view of the works published over the last 15 years, to identify potential gaps and future areas of research in the field. Comment: 12 pages, preprint version, BIS 2018 International Workshops, Berlin, Germany, July 18 to 20, 2018, revised paper.

    How to measure influence in social networks?

    Today, social networks are a valued resource of social data that can be used to understand the interactions among people and communities. People can influence or be influenced through interactions, shared opinions and emotions. However, one of the main problems in social network analysis is finding the most influential people. This work reports the results of a literature review whose goal was to identify and analyse the metrics, algorithms and models used to measure user influence on social networks. The search was carried out in three databases: Scopus, IEEE Xplore, and ScienceDirect. We restricted the search to articles published between 2014 and 2020, in English, and used the following keywords: social network analysis, influence, metrics, measurements, and algorithms. A backward process was applied to complement the search, considering inclusion and exclusion criteria. As a result of this process, we obtained 25 articles: 12 in the initial search and 13 in the backward process. The literature review resulted in the collection of 21 influence metrics, 4 influence algorithms, and 8 models of influence analysis. We start by defining influence and presenting its properties and applications. We then describe, analyse and categorize all the metrics, algorithms, and models found for measuring influence in social networks. Finally, we present a discussion of these metrics, algorithms, and models. This work helps researchers quickly gain a broad perspective on metrics, algorithms, and models for influence in social networks and their relative potentialities and limitations. This work has been supported by IViSSEM: POCI-01-0145-FEDER-28284, COMPETE: POCI-01-0145-FEDER-007043 and FCT – Fundação para a Ciência e Tecnologia within the R&D Units Project Scope: UIDB/00319/2020.
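    For orientation, one of the simplest influence metrics that such surveys catalogue is degree centrality; a minimal stdlib-only sketch on a toy graph (the names and edges are illustrative, not drawn from the reviewed papers):

```python
# Degree centrality: the fraction of other nodes a user is directly
# connected to. A simple baseline influence metric on an undirected graph.

def degree_centrality(edges):
    """edges: iterable of (u, v) pairs; returns {node: centrality}."""
    neighbors = {}
    for u, v in edges:
        neighbors.setdefault(u, set()).add(v)
        neighbors.setdefault(v, set()).add(u)
    n = len(neighbors)
    return {node: len(nbrs) / (n - 1) for node, nbrs in neighbors.items()}

# Toy network: "ana" is connected to everyone else.
edges = [("ana", "bo"), ("ana", "cy"), ("ana", "dee"), ("bo", "cy")]
scores = degree_centrality(edges)
print(max(scores, key=scores.get))  # prints "ana"
```

    More elaborate metrics in this family (closeness, betweenness, PageRank) refine the same idea of ranking nodes by structural position.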

    Multiple publications: The main reason for the retraction of papers in computer science

    This paper reviews the reasons for paper retraction over the last decade. It particularly aims at reviewing these reasons with reference to the computer science field, to assist authors in understanding accepted writing practice. To do so, a total of thirty-six retracted papers found on the Web of Science between January 2007 and July 2017 are explored. Based on retraction notices citing ten common reasons, this paper classifies retractions into two main categories, namely random and nonrandom. Retraction due to duplication of publications accounted for the highest proportion of all the reasons reviewed.

    Large publishing consortia produce higher citation impact research but co-author contributions are hard to evaluate

    This paper introduces a simple agglomerative clustering method to identify large publishing consortia with at least 20 authors and 80% shared authorship between articles. Based on Scopus journal articles 1996-2018, under these criteria, nearly all (88%) of the large consortia published research with citation impact above the world average, with the exceptions being mainly newer consortia for which average citation counts are unreliable. On average, consortium research had almost double (1.95) the world average citation impact on the log scale used (Mean Normalised Log Citation Score). At least partial alphabetical author ordering was the norm in most consortia. The 250 largest consortia were for nuclear physics and astronomy, organized around expensive equipment, and for predominantly health-related issues in genomics, medicine, public health, microbiology and neuropsychology. For the health-related issues, except for the first and last few authors, authorship seems primarily to indicate contributions to the shared project infrastructure necessary to gather the raw data. It is impossible for research evaluators to identify the contributions of individual authors in the huge alphabetical consortia of physics and astronomy, and problematic for the middle and end authors of health-related consortia. For small-scale evaluations, authorship contribution statements could be used, when available.
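    The shared-authorship grouping described above can be sketched in a few lines. Note that the exact overlap definition used here (the fraction of the smaller author list that is shared) and the single-linkage merge are assumptions for illustration, not the paper's published criterion:

```python
# Group articles into "consortia": merge articles that share at least
# `threshold` of their authors (single-linkage), then keep groups whose
# pooled author list has at least `min_authors` members.

def shared_fraction(a, b):
    """Fraction of the smaller author list shared between two articles."""
    a, b = set(a), set(b)
    return len(a & b) / min(len(a), len(b))

def group_consortia(articles, threshold=0.8, min_authors=20):
    groups = []  # each group is a list of article indices
    for i, authors in enumerate(articles):
        for g in groups:
            if any(shared_fraction(authors, articles[j]) >= threshold for j in g):
                g.append(i)
                break
        else:
            groups.append([i])
    return [g for g in groups
            if len(set().union(*(articles[j] for j in g))) >= min_authors]

# Toy data: two articles by the same 25 authors, plus an unrelated 5-author paper.
articles = [list(range(25)), list(range(25)), list(range(100, 105))]
consortia = group_consortia(articles)  # only the 25-author pair survives
```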

    Selective methylation of histone H3 variant H3.1 regulates heterochromatin replication

    Histone variants have been proposed to act as determinants for posttranslational modifications with widespread regulatory functions. We identify a histone-modifying enzyme that selectively methylates the replication-dependent histone H3 variant H3.1. The crystal structure of the SET domain of the histone H3 lysine-27 (H3K27) methyltransferase ARABIDOPSIS TRITHORAX-RELATED PROTEIN 5 (ATXR5) in complex with a H3.1 peptide shows that ATXR5 contains a bipartite catalytic domain that specifically "reads" alanine-31 of H3.1. Variation at position 31 between H3.1 and replication-independent H3.3 is conserved in plants and animals, and threonine-31 in H3.3 is responsible for inhibiting the activity of ATXR5 and its paralog, ATXR6. Our results suggest a simple model for the mitotic inheritance of the heterochromatic mark H3K27me1 and the protection of H3.3-enriched genes against heterochromatization during DNA replication.

    Extracellular volume quantification in isolated hypertension - changes at the detectable limits?

    The funding source (British Heart Foundation and UK National Institute for Health Research) provided salaries for research training (FZ, TT, DS, SW), but had no role in study design, collection, analysis, interpretation, writing, or decisions with regard to publication. This work was undertaken at University College London Hospital, which received a proportion of funding from the UK Department of Health National Institute for Health Research Biomedical Research Centres funding scheme. We are grateful to King’s College London Laboratories for processing the collagen biomarker panel.

    Coverage of highly-cited documents in Google Scholar, Web of Science, and Scopus: a multidisciplinary comparison

    This study explores the extent to which bibliometric indicators based on counts of highly-cited documents could be affected by the choice of data source. The initial hypothesis is that databases that rely on journal selection criteria for their document coverage may not necessarily provide an accurate representation of highly-cited documents across all subject areas, while inclusive databases, which give each document the chance to stand on its own merits, might be better suited to identify highly-cited documents. To test this hypothesis, an analysis of 2,515 highly-cited documents published in 2006 that Google Scholar displays in its Classic Papers product is carried out at the level of broad subject categories, checking whether these documents are also covered in Web of Science and Scopus, and whether the citation counts offered by the different sources are similar. The results show that a large fraction of highly-cited documents in the Social Sciences and Humanities (8.6%-28.2%) are invisible to Web of Science and Scopus. In the Natural, Life, and Health Sciences the proportion of missing highly-cited documents in Web of Science and Scopus is much lower. Furthermore, in all areas, Spearman correlation coefficients of citation counts in Google Scholar, as compared to Web of Science and Scopus citation counts, are remarkably strong (.83-.99). The main conclusion is that the data about highly-cited documents available in the inclusive database Google Scholar does indeed reveal significant coverage deficiencies in Web of Science and Scopus in several areas of research. Therefore, using these selective databases to compute bibliometric indicators based on counts of highly-cited documents might produce biased assessments in poorly covered areas. Alberto Martín-Martín enjoys a four-year doctoral fellowship (FPU2013/05863) granted by the Ministerio de Educación, Cultura, y Deportes (Spain).
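    The Spearman coefficients reported above are rank correlations of citation counts across sources; a self-contained, stdlib-only sketch (the citation counts below are made up, not the study's data):

```python
# Spearman's rank correlation: the Pearson correlation of the ranks,
# with tied values receiving averaged ranks.

def ranks(xs):
    """Return 1-based ranks of xs, averaging ranks over tied runs."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank for the tied run
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Citation counts for the same five documents in two hypothetical sources:
gs = [120, 95, 300, 40, 210]
wos = [100, 80, 260, 35, 190]
print(round(spearman(gs, wos), 2))  # same rank ordering, so this prints 1.0
```

    Because Spearman compares only rank orderings, two sources can report very different absolute counts and still correlate near 1, as seen in the study's .83-.99 range.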

    The co-development of a linguistic and culturally tailored tele-retinopathy screening intervention for immigrants living with diabetes from China and African-Caribbean countries in Ottawa, Canada

    Background: Diabetic retinopathy is a sight-threatening ocular complication of diabetes. Screening is an effective way to reduce severe complications, but screening attendance rates are often low, particularly for newcomers and immigrants to Canada and people from cultural and linguistic minority groups. Building on previous work, in partnership with patient and health system stakeholders, we co-developed a linguistically and culturally tailored tele-retinopathy screening intervention for people living with diabetes who recently immigrated to Canada from either China or African-Caribbean countries. Methods: Following an environmental scan of diabetes eye care pathways in Ottawa, we conducted co-development workshops using a nominal group technique to create and prioritize personas of individuals requiring screening and to identify barriers to screening that each persona may face. Next, we used the Theoretical Domains Framework (TDF) to categorize the barriers/enablers and then mapped these categories to potential evidence-informed behaviour change techniques. Finally, with these techniques in mind, participants prioritized strategies and channels of delivery, developed intervention content, and clarified actions required by different actors to overcome anticipated intervention delivery barriers. Results: We carried out iterative co-development workshops with Mandarin- and French-speaking individuals living with diabetes (i.e., patients in the community) who immigrated to Canada from China and African-Caribbean countries (n = 13), patient partners (n = 7), and health system partners (n = 6) recruited from community health centres in Ottawa. Workshops with patients in the community were conducted in Mandarin or French.
    Together, we prioritized five barriers to attending diabetic retinopathy screening: language (TDF domains: skills, social influences), familiarity with retinopathy (knowledge, beliefs about consequences), physician barriers regarding communication about screening (social influences), lack of publicity about screening (knowledge, environmental context and resources), and fitting screening around other activities (environmental context and resources). The resulting intervention included the following behaviour change techniques to address the prioritized local barriers: information about health consequences, instructions on how to attend screening, prompts/cues, adding objects to the environment, social support, and restructuring the social environment. Operationalized delivery channels incorporated language support, pre-booking screening and sending reminders, social support via social media and community champions, and flyers and videos. Conclusion: Working with intervention users and stakeholders, we co-developed a culturally and linguistically relevant tele-retinopathy intervention to address barriers to attending diabetic retinopathy screening and to increase uptake among two under-served groups.

    Green process innovation: Where we are and where we are going

    Environmental pollution has worsened in the past few decades, and firms face increasing pressure from regulatory bodies, customer groups, NGOs and media outlets to adopt green process innovations (GPcIs), which include clean technologies and end-of-pipe solutions. Although a considerable number of studies have been published on GPcI, the literature is disjointed, and as such, a comprehensive understanding of the issues, challenges and gaps is lacking. A systematic literature review (SLR) involving 80 relevant studies was conducted to extract seven themes: strategic response, organisational learning, institutional pressures, structural issues, outcomes, barriers and methodological choices. The review thus highlights the various gaps in the GPcI literature and illuminates pathways for future research by proposing a series of potential research questions. This study is of vital importance to business strategy, as it provides a comprehensive framework to help firms understand the various contours of GPcI. Likewise, policymakers can use the findings of this study to close the loopholes in existing regulations that firms exploit to circumvent taxes and other penalties by relocating their operations to emerging economies with less stringent environmental regulations.