80 research outputs found
Effects of cultivation years on effective constituent content of Fritillaria pallidiflora Schrenk
Fritillaria pallidiflora Schrenk has been treasured in traditional classic medicine as an antitussive, antiasthmatic and expectorant for hundreds of years. With wild F. pallidiflora resources gradually decreasing, the herb can no longer satisfy demand. Artificial cultivation is one of the most effective ways to resolve the imbalance between supply and demand in the medicinal-material market. During the growth of rhizomatous medicinal plants, root biomass and active-ingredient content vary dynamically with increasing cultivation years. Until now, hardly any attempts have been made to investigate the relationship between the quality of F. pallidiflora and its cultivation years. In this paper, we therefore determined the optimum harvesting time by comparing the biomass and biological characteristics of F. pallidiflora at different cultivation times. High-performance liquid chromatography with evaporative light-scattering detection and phenol-sulfuric acid visible spectrophotometry were used to determine the imperialine and polysaccharide content of F. pallidiflora bulbs. From year 1 to year 6 of cultivation, we observed an upward trend in plant height, diameter and dry weight of F. pallidiflora, while water content decreased. Plant height and dry weight increased remarkably during the fourth year of cultivation. The imperialine and polysaccharide content of the bulbs, in contrast, trended upward from year 1 to year 3 and then decreased from year 3 to year 6. By comparing plant growth, biomass development and the accumulation of imperialine and polysaccharide, the best harvesting time of F. pallidiflora was determined to be after 4 years of cultivation. Our results show that it is possible to establish a safe, effective, stable and controllable production process, which could play an important role in achieving sustainable utilization of F. pallidiflora resources.
Pan-cancer analysis of whole genomes
Cancer is driven by genetic change, and the advent of massively parallel sequencing has enabled systematic documentation of this variation at the whole-genome scale(1-3). Here we report the integrative analysis of 2,658 whole-cancer genomes and their matching normal tissues across 38 tumour types from the Pan-Cancer Analysis of Whole Genomes (PCAWG) Consortium of the International Cancer Genome Consortium (ICGC) and The Cancer Genome Atlas (TCGA). We describe the generation of the PCAWG resource, facilitated by international data sharing using compute clouds. On average, cancer genomes contained 4-5 driver mutations when combining coding and non-coding genomic elements; however, in around 5% of cases no drivers were identified, suggesting that cancer driver discovery is not yet complete. Chromothripsis, in which many clustered structural variants arise in a single catastrophic event, is frequently an early event in tumour evolution; in acral melanoma, for example, these events precede most somatic point mutations and affect several cancer-associated genes simultaneously. Cancers with abnormal telomere maintenance often originate from tissues with low replicative activity and show several mechanisms of preventing telomere attrition to critical levels. Common and rare germline variants affect patterns of somatic mutation, including point mutations, structural variants and somatic retrotransposition. A collection of papers from the PCAWG Consortium describes non-coding mutations that drive cancer beyond those in the TERT promoter(4); identifies new signatures of mutational processes that cause base substitutions, small insertions and deletions and structural variation(5,6); analyses timings and patterns of tumour evolution(7); describes the diverse transcriptional consequences of somatic mutation on splicing, expression levels, fusion genes and promoter activity(8,9); and evaluates a range of more-specialized features of cancer genomes(8,10-18).
Peer reviewed
Dynamic Capabilities for Information Sharing: XBRL Enabling Business-to-Government Information Exchange
Recent scandals have stressed the need for information sharing among companies and governments. Sharing information is not easy: companies want to keep their administrative burden low, whereas governments need high information quality. These drivers have resulted in the initiation of programs for developing information-sharing infrastructures. In these programs, public and private organizations work together to create infrastructures satisfying the needs of both companies and governments. The creation of business-to-government information sharing is complex and meets many organizational and technical challenges. Information sharing requires that existing information assets are used and combined, and that information-sharing and processing capabilities are employed; this needs to be done repeatedly and rapidly in different sectors. This study investigates the dynamic capabilities necessary to realize such information sharing. Specifically, the capabilities for developing the infrastructure and for governing it are investigated. Our analysis shows that companies and public organizations need to create different sets of capabilities to enable information sharing. The creation of information sharing requires extensive knowledge about the existing landscape. The infrastructure should be flexible enough to support different situations, and governance is necessary to ensure that information-sharing arrangements are customized for the situation at hand and to make decisions concerning further development.
Engineering, Systems and Services
Technology, Policy and Management
Digital Trade Infrastructures and Big Data Analytics: The concept of Value as a Linking Pin
This paper is largely empirically driven and reflects research and insights gained in the last 13 years of involvement in a series of innovation projects (ITAIDE, CASSANDRA, CORE, PROFILE) related to scaling up digital infrastructures in the area of international trade. Most of the project efforts and related research have focused on the processes of upscaling digital infrastructures from initial R&D and proofs of concept towards large-scale piloting. As part of the CORE project, piloting also took place with the IBM- and MAERSK-led blockchain-enabled digital trade infrastructure, which is now commercialized at a global scale under the name TradeLens. In the context of these projects, authorities at the border were major stakeholders involved in the innovation process. The goal was that, by accessing commercial and supply chain data available via the digital trade infrastructures, authorities could perform better risk analysis and provide trade facilitation to companies in return. At the end of the CORE project, a massive amount of data became available to the authorities for piloting. To be useful for risk-analysis processes, however, this external data needed to be combined with internal data from the authorities' systems and analyzed further. As a result, the PROFILE project was initiated in 2018, focusing on the use of data analytics on internal and external data sources to improve risk assessment. From a theoretical perspective, the literature on big data and analytics and the literature on upscaling digital trade infrastructures appear to focus on quite different issues. However, based on the flow of the projects in which we have been involved, they appear closely inter-related. The main question that we aim to explore is: what is a potential conceptual ground to link research on upscaling digital trade infrastructures with research on big data analytics?
In this paper we propose to use the concept of Value as a linking pin. Understanding this link is essential, as it can reinforce both data analytics research and research on upscaling digital trade infrastructures. While this paper is predominantly focused on the trade domain, the insights may be useful in a broader setting for exploring conceptual links between information infrastructure research and big data analytics research.
Information and Communication Technology
The cultivation of information infrastructures for international trade: Stakeholder challenges and engagement reasons
The development of information infrastructures for international trade to improve supply chain visibility and security has gained momentum due to technological advances. An information infrastructure is a shared, open, and evolving assemblage of interlinked information systems providing distinct information technology capabilities. Examples of information infrastructures are the internet, electronic marketplaces and music platforms. Information infrastructures can be highly beneficial, as the aforementioned examples show, yet they often fail to deliver the expected benefits. This research focuses on the cultivation of information infrastructures, which refers to a softer, less disruptive design approach compared to traditional approaches in which systems are defined through specified functional requirements within strict boundaries. Drawing on different stakeholder views within a European Union project for international trade, this research provides a taxonomy of twelve cultivation challenges and four engagement reasons one can expect in the design phase of information infrastructures. Organizational theory is used to discuss the underlying explanations. The paper concludes that the cultivation of an information infrastructure for international trade can be highly rewarding, yet is a challenging and long-lasting endeavor that requires multi-disciplinary expertise. Practitioners can use the insights provided by this research to increase their understanding of information infrastructure cultivation and ultimately increase adoption rates.
Information and Communication Technology
Enabling Supply Chain Visibility and Compliance Through Voluntary Information Sharing with Customs: A Case Study of the Global Quality Traceability System in China Customs
The promotion of digital customs and data analytics has led customs administrations to seek to improve their analytics capabilities and exploit data from the trade community. Despite the increased data-analysis capabilities of Customs, the data available to them are still limited to the currently mandated declarations. If businesses are willing to share additional commercial information with Customs, it will enable Customs to make more accurate risk assessments and ensure compliance. As a new form of Customs-business partnership, voluntary information sharing can be a supplementary data source to the mandated declaration and enable Customs to exploit additional commercial information. This study presents an exploratory case study of the Global Quality Traceability System (GQTS) initiated by China Customs to investigate how voluntary information sharing can be achieved and to explore the benefits for the participants. The study demonstrates that voluntary information sharing with Customs, implemented through a data pipeline, enhanced supply chain visibility and ensured compliance. The private companies that shared information contributed to supply chain visibility and in return benefited from it.
Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project, https://www.openaccess.nl/en/you-share-we-take-care. Otherwise, as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.
Information and Communication Technology
The collaborative realization of public values and business goals: Governance and infrastructure of public-private information platforms
The scale of society's evolving challenges gradually surpasses the capacity of the public sector to address them. Coping with these challenges requires budget-short governments to look for innovative ways to transform and improve their operations and service-provisioning models. While in many cases transformation starts from the inside out (based on policy goals) and focuses on reorganization through ICTs, we notice a different class of initiatives in which governments capitalize on external ICT developments to transform from the outside in. One category of ICT innovations that is especially promising for such a transformation is that of information platforms (henceforth platforms), which can be used to connect different stakeholders, public and private. Platforms are not new. Yet there is not much research on using public–private platforms as part of a transformation effort, on the (policy) instruments that are involved, or on dealing with the cascading multi-level challenges that transformation through platforms brings. This paper addresses these knowledge gaps by drawing on empirical research embedded in two long-term endeavors: (1) standard business reporting between businesses and government agencies and (2) international trade information platforms. In both cases, platforms are being collaboratively developed and used by a collective of public and private organizations. These initiatives reveal that government agencies can steer and shape the development of public–private platforms in a way that enables businesses to pursue their own interests whilst transforming business–government interactions and, more generally, serving collective interests and public value.
Our findings indicate that once a public–private governance structure is accepted by stakeholders and adapted to fit the technical dimensions of the information infrastructure, even platforms that are driven by the private sector can start to evolve in a way that enables extensive transformation of the operations of government.
Engineering, Systems and Services
Technology, Policy and Management
An Agent Based Inter-organizational Collaboration Framework: OperA+ (extended abstract)
Infrastructures, Systems and Services
Technology, Policy and Management
SANOM results for OAEI 2018
Simulated annealing-based ontology matching (SANOM) participates for the second time in the Ontology Alignment Evaluation Initiative (OAEI) 2018. This paper describes the configuration of SANOM and its results on the anatomy and conference tracks. In comparison to the OAEI 2017, SANOM has improved significantly, and its results are competitive with state-of-the-art systems. In particular, SANOM has the highest recall among the participating systems in the conference track and is competitive with AML, the best-performing system, in terms of F-measure. SANOM is also competitive on the anatomy track with LogMap, the best-performing system in this track that uses no particular biomedical background knowledge. SANOM has been adapted to the HOBBIT platform and is now available to registered users.
Information and Communication Technology
Web Information Systems
A Comparative Study of Ontology Matching Systems via Inferential Statistics
Comparisons of ontology matching systems are typically performed by averaging their performance over multiple datasets. This paper instead examines alignment systems using statistical inference, since averaging is statistically unsafe and inappropriate. Statistical tests for the comparison of two or multiple alignment systems are reviewed theoretically and empirically. For the comparison of two systems, the Wilcoxon signed-rank test and McNemar's mid-p and asymptotic tests are recommended due to their robustness and statistical safety in different circumstances. The Friedman and Quade tests, with their corresponding post-hoc procedures, are studied for the comparison of multiple systems, and their advantages and disadvantages are discussed. The statistical methods are then applied to the benchmark and multifarm tracks from the Ontology Alignment Evaluation Initiative (OAEI) 2015, and the results are reported and visualized with critical difference diagrams.
Information and Communication Technology
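The recommended two-system comparison can be illustrated with a small, self-contained sketch. The exact Wilcoxon signed-rank test below is a minimal pure-Python implementation, and the paired F-measure values for the two hypothetical matchers are invented for illustration, not taken from the OAEI results:

```python
from itertools import product

def wilcoxon_signed_rank(x, y):
    """Exact two-sided Wilcoxon signed-rank test for paired samples.

    Zero differences are dropped; tied |differences| receive average
    ranks. The p-value comes from the exact null distribution obtained
    by enumerating all 2^n sign patterns, so keep n small (n <= 15).
    """
    d = [a - b for a, b in zip(x, y) if a != b]
    n = len(d)
    order = sorted(range(n), key=lambda i: abs(d[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:                              # assign average ranks to ties
        j = i
        while j + 1 < n and abs(d[order[j + 1]]) == abs(d[order[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1            # mean of ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    w_plus = sum(r for r, di in zip(ranks, d) if di > 0)
    mean_w = sum(ranks) / 2                   # E[W+] under the null
    # Under H0 every sign pattern is equally likely: enumerate them all.
    extreme = 0
    for signs in product((False, True), repeat=n):
        w = sum(r for r, s in zip(ranks, signs) if s)
        if abs(w - mean_w) >= abs(w_plus - mean_w):
            extreme += 1
    return w_plus, extreme / 2 ** n

# Hypothetical F-measures of two matchers on eight datasets.
f_a = [0.84, 0.79, 0.91, 0.72, 0.88, 0.81, 0.77, 0.90]
f_b = [0.80, 0.75, 0.90, 0.70, 0.85, 0.78, 0.74, 0.86]

w, p = wilcoxon_signed_rank(f_a, f_b)
print(w, p)  # 36.0 0.0078125 — matcher A wins on every dataset
```

Because matcher A is ahead on all eight datasets here, W+ equals the full rank sum 8·9/2 = 36 and the exact two-sided p-value is 2/256 ≈ 0.008, so the difference would be declared significant at the usual 0.05 level; a production analysis would normally use a vetted implementation such as `scipy.stats.wilcoxon` instead.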