The BioPAX community standard for pathway data sharing
Biological Pathway Exchange (BioPAX) is a standard language to represent biological pathways at the molecular and cellular level and to facilitate the exchange of pathway data. The rapid growth of the volume of pathway data has spurred the development of databases and computational tools to aid interpretation; however, use of these data is hampered by the current fragmentation of pathway information across many databases with incompatible formats. BioPAX, which was created through a community process, solves this problem by making pathway data substantially easier to collect, index, interpret and share. BioPAX can represent metabolic and signaling pathways, molecular and genetic interactions and gene regulation networks. Using BioPAX, millions of interactions, organized into thousands of pathways, from many organisms are available from a growing number of databases. This large amount of pathway data in a computable form will support visualization, analysis and biological discovery.
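The abstract's point about pathway data "in a computable form" can be illustrated with a small sketch. The classes below are illustrative Python stand-ins, not the actual BioPAX OWL ontology (which defines its own class hierarchy); the pathway and reaction in the example are textbook glycolysis steps used purely for demonstration.

```python
from dataclasses import dataclass, field

# Illustrative, BioPAX-like data structures (not the real OWL classes).

@dataclass
class Molecule:
    name: str

@dataclass
class Reaction:
    left: list   # substrates consumed
    right: list  # products produced

@dataclass
class Pathway:
    name: str
    steps: list = field(default_factory=list)

glucose = Molecule("glucose")
g6p = Molecule("glucose-6-phosphate")
hexokinase_step = Reaction(left=[glucose], right=[g6p])

glycolysis = Pathway("glycolysis")
glycolysis.steps.append(hexokinase_step)

# Once pathways are machine-readable, simple queries become possible,
# e.g. "which steps consume glucose?"
consumers = [s for s in glycolysis.steps
             if any(m.name == "glucose" for m in s.left)]
print(len(consumers))  # 1
```

The value of a shared format like BioPAX is that queries of this kind work uniformly across data exported from many databases, rather than requiring a parser per source.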
Does Trade Facilitation Matter in Bilateral Trade?
This paper estimates an augmented gravity model incorporating different aspects of Trade Facilitation in developed and developing countries. Trade Facilitation is defined as the set of measures that aim to make international trade easier by eliminating administrative delays, simplifying commercial procedures, and increasing transparency, security and the use of new technologies in trade. This paper provides new theoretical and empirical enhancements. On the one hand, the model is based on theoretical foundations related to monopolistic competition and border effects. The originality of this paper is that Trade Facilitation facets are included in the model. On the other hand, the empirical contribution of the paper is that it uses different databases, allowing us to take into account many features of Trade Facilitation. We use several databases coming from different sources: Doing Business (World Bank) and Institutional Profiles (CEPII). Our main findings show that transaction time for imports and the number of documents for exports have a negative impact on trade. Our sample is split into sub-samples in order to take into account the impact of development level. It turns out that Trade Facilitation aspects do not have the same impact on developed and developing countries. Finally, we conclude that some perishable (food and beverages), seasonal (wearing apparel) and high value-added products are more sensitive to import time than other products. Heavy industries are rather sensitive to export documents.

Keywords: trade facilitation, gravity models, border effects.
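A stylized augmented gravity specification of the kind the abstract describes might look as follows; the symbols and the choice of Trade Facilitation covariates here are illustrative, and the paper's exact specification may differ:

```latex
\ln X_{ij} = \beta_0 + \beta_1 \ln Y_i + \beta_2 \ln Y_j + \beta_3 \ln D_{ij}
           + \beta_4\, \mathrm{Time}^{imp}_{j} + \beta_5\, \mathrm{Docs}^{exp}_{i}
           + \varepsilon_{ij}
```

where $X_{ij}$ denotes exports from country $i$ to country $j$, $Y_i$ and $Y_j$ are the partners' GDPs, $D_{ij}$ is bilateral distance, and the two facilitation terms capture import transaction time and the number of export documents; the abstract's finding corresponds to negative estimates of $\beta_4$ and $\beta_5$.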
Prescription Drug Monitoring Programs: Evidence-based Practices to Optimize Prescriber Use
As the opioid crisis continues to ravage communities across the United States, policymakers and public health officials are increasingly using new tools such as prescription drug monitoring programs (PDMPs), state-based electronic databases that track the dispensing of certain controlled substances, to stem the misuse of prescription opioids and reduce overdose deaths. PDMPs can be used to monitor patient use of these drugs and inform prescribing decisions. However, the number of prescribers actually using these databases in clinical care remains low. A new report from The Pew Charitable Trusts and the Institute for Behavioral Health, Heller School for Social Policy and Management at Brandeis University finds that states can increase prescriber use of PDMPs by adopting one or more of eight evidence-based practices:

- Prescriber use mandates: state laws and regulations that require prescribers to view a patient's PDMP data under certain circumstances. Mandates can rapidly increase PDMP utilization and immediately affect prescriber behavior, which can help prevent "doctor shopping", when patients seek the same or similar drugs from multiple prescribers and pharmacies in a short period.
- Delegate access: allows prescribers to authorize someone on staff, such as a nurse or other member of the health care team, to access PDMP data on their behalf. The majority of states allow delegate access; evidence suggests such access addresses workflow barriers and increases PDMP use.
- Unsolicited reports: prescribers are proactively notified about patients who may be at risk for harm based on their controlled substance prescription history. These alerts can increase prescriber use in two ways: by motivating prescribers to review patient data and by informing unenrolled prescribers that the PDMP exists.
- Improving data timeliness: increasing the frequency at which data are uploaded into PDMP databases. Many states now require dispensers to upload new data daily, which increases the timeliness of information and encourages PDMP use.
- Streamlining enrollment: making it easier for prescribers and delegates to register with their state PDMPs. Enrollment is required before clinicians can check PDMP data, so making this process faster and easier can increase use.
- Educational and promotional initiatives: helping prescribers understand how PDMPs work and encouraging their use. Such activities can spur enrolled prescribers and delegates to check PDMP data and inform unenrolled clinicians about the value of these databases.
- Integrating PDMP data with health information technology: helping prescribers seamlessly access PDMPs through electronic health records or other IT systems. Pilot projects across the country found that prescribers reported PDMP data were easier to access when the system was integrated into daily workflows.
- Enhancing PDMP user interfaces: redesigning how data are presented so that prescribers can more quickly analyze prescribing information and make better-informed decisions.

Of the eight practices, mandates are the single most effective way to increase prescriber use. But a mandate alone does not mean that prescribers will use the PDMP effectively in clinical decision-making. Therefore, state officials should explore the other seven strategies and adopt a combination of practices that works best for their program. PDMPs can play a critical role in curbing prescription opioid misuse, but only if states take steps to ensure that the data are easy to access and understand.
Design and Development of a Website-Based Information System for Searching and Renting Boarding Houses in Palangka Raya
Information systems are currently growing rapidly, which allows all jobs in life to be assisted by information technology. This technology, which can help make things easier, faster, safer, and more effective, is clearly very useful for anyone who uses it. This includes helping micro, small and medium enterprises (MSMEs), especially boarding house owners, to promote their boarding houses, and making it easier for someone who is looking for a boarding house to live in. This boarding house search and rental information system is a website-based information system that serves boarding house seekers by providing boarding house information, and serves boarding house managers by letting them promote the boarding houses they rent and manage everything related to them on the website quickly and easily. The methodology used in building this information system is the waterfall method, with the following stages: Requirements Definition, depicted with flowcharts; Requirements Analysis and Definition, described with DFDs (Data Flow Diagrams), ERDs (Entity Relationship Diagrams) and database design; Implementation and Unit Testing, using the programming languages and technologies HTML, PHP, CSS, Bootstrap, JavaScript and MySQL; and Integration and System Testing, using the black-box method.
Explain to me like I am five -- Sentence Simplification Using Transformers
Sentence simplification aims at making the structure of text easier to read
and understand while maintaining its original meaning. This can be helpful for
people with disabilities, new language learners, or those with low literacy.
Simplification often involves removing difficult words and rephrasing the
sentence. Previous research has focused on tackling this task by either using
external linguistic databases for simplification or by using control tokens for
desired fine-tuning of sentences. However, in this paper we purely use
pre-trained transformer models. We experiment with a combination of GPT-2 and
BERT models, achieving the best SARI score of 46.80 on the Mechanical Turk
dataset, which is significantly better than previous state-of-the-art results.
The code can be found at https://github.com/amanbasu/sentence-simplification
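The SARI metric mentioned above scores a simplification by how well its add, keep and delete operations match reference simplifications. The sketch below is a heavily simplified, word-level illustration of that idea, not the actual n-gram SARI implementation used in the paper; the example sentences are invented.

```python
# Word-level sketch of the add/keep/delete operations that SARI scores.
# Real SARI works over n-grams and averages precision/recall (F-scores)
# across multiple references; this is only the core idea.

def edit_operations(source: str, output: str, reference: str) -> dict:
    src, out, ref = set(source.split()), set(output.split()), set(reference.split())
    kept = out & src      # source words the system retained
    added = out - src     # words the system newly introduced
    deleted = src - out   # source words the system removed
    # A good simplification keeps, adds and deletes what the reference does.
    return {
        "keep_correct": kept & ref,
        "add_correct": added & ref,
        "del_correct": deleted - ref,
    }

ops = edit_operations(
    source="the committee commenced deliberations",
    output="the committee started talks",
    reference="the committee started talks",
)
print(sorted(ops["add_correct"]))  # ['started', 'talks']
```

Here the system correctly replaced the difficult words "commenced deliberations" with "started talks", so both the additions and the deletions agree with the reference; SARI rewards exactly this behavior.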
Storage Solutions for Big Data Systems: A Qualitative Study and Comparison
Big data systems development is full of challenges in view of the variety of
application areas and domains that this technology promises to serve.
Typically, fundamental design decisions involved in big data systems design
include choosing appropriate storage and computing infrastructures. In this age
of heterogeneous systems that integrate different technologies into an optimized
solution for a specific real-world problem, big data systems are no exception.
As far as the storage aspect of any big data system is concerned, the primary
facet is the storage infrastructure, and NoSQL appears to be the technology
that best fulfills its requirements. However,
every big data application has variable data characteristics and thus, the
corresponding data fits into a different data model. This paper presents
feature and use case analysis and comparison of the four main data models
namely document oriented, key value, graph and wide column. Moreover, a feature
analysis of 80 NoSQL solutions has been provided, elaborating on the criteria
and points that a developer must consider while making a possible choice.
Typically, big data storage needs to communicate with the execution engine and
other processing and visualization technologies to create a comprehensive
solution. This brings the second facet of big data storage, big data file
formats, into the picture. The second half of the paper compares the
advantages, shortcomings and possible use cases of available big data file
formats for Hadoop, which is the foundation for most big data computing
technologies. Decentralized storage and blockchain are seen as the next
generation of big data storage, and their challenges and future prospects are
also discussed.
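The four data models the paper compares can be sketched by casting the same record into each shape. The Python structures below are illustrative stand-ins (no particular database's API is used); they show why the choice of model changes which queries are natural.

```python
# The same user record in the four main NoSQL data models.

# Document oriented (MongoDB-style): one nested, self-describing document.
document = {"_id": "u1", "name": "Ada", "orders": [{"sku": "A1", "qty": 2}]}

# Key-value (Redis-style): an opaque value behind a single key; the store
# cannot query inside the value.
key_value = {"user:u1": '{"name": "Ada"}'}

# Wide column (Cassandra-style): row key -> column families -> columns.
wide_column = {"u1": {"profile": {"name": "Ada"}, "orders": {"A1": 2}}}

# Graph (Neo4j-style): explicit nodes and typed, directed edges.
graph = {
    "nodes": {"u1": {"name": "Ada"}, "A1": {"type": "product"}},
    "edges": [("u1", "ORDERED", "A1")],
}

# Relationship traversals are trivial in the graph form but awkward in the
# key-value form, where the value is opaque to the store.
ordered = [dst for src, rel, dst in graph["edges"]
           if src == "u1" and rel == "ORDERED"]
print(ordered)  # ['A1']
```

This is the sense in which "the corresponding data fits into a different data model": the application's dominant access pattern, not the data alone, determines which of the four shapes is appropriate.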
An Open Context for Near Eastern Archaeology
The common use by archaeologists of ubiquitous technologies such as computers and digital cameras means that archaeological research projects now produce huge amounts of diverse, digital documentation. However, while the technology is available to collect this documentation, we still largely lack community-accepted dissemination channels appropriate for such torrents of data. Open Context aims to help fill this gap by providing open access data publication services for archaeology. Open Context has a flexible and generalized technical architecture that can accommodate most archaeological datasets, despite the lack of common recording systems or other documentation standards. It includes a variety of tools to make data dissemination easier and more worthwhile. Authorship is clearly identified through citation tools, including web-based publication systems that enable individuals to upload their own data for review, and collaboration is facilitated through easy download and "tagging" features. Near Eastern archaeologists will benefit from Open Context's flexibility to share a variety of content from diverse projects, no matter how large or small. This article was originally published in Near Eastern Archaeology (ISSN 1094-2076), Volume 70, Number 4, December 2007
Topic Maps as a Virtual Observatory tool
One major component of the VO will be catalogs measuring gigabytes and
terabytes, if not more. Some mechanism like XML will be used for structuring
the information. However, such mechanisms are not good for information
retrieval on their own. For retrieval we use queries. Topic Maps that have
started becoming popular recently are excellent for segregating information
that results from a query. A Topic Map is a structured network of hyperlinks
above an information pool. Different Topic Maps can form different layers above
the same information pool and provide us with different views of it. This
makes it possible to ask exact questions, aiding us in looking for gold
needles in the proverbial haystack. Here we discuss the specifics of what Topic
Maps are and how they can be implemented within the VO framework.
URL: http://www.astro.caltech.edu/~aam/science/topicmaps/
Comment: 11 pages, 5 eps figures, to appear in SPIE Annual Meeting 2001
proceedings (Astronomical Data Analysis), uses spie.st
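The idea of a Topic Map as "a structured network of hyperlinks above an information pool" can be sketched in a few lines. This is an illustrative Python mock-up, not the ISO/XTM Topic Maps data model; the documents and topics are invented.

```python
# An information pool: raw resources, e.g. query results from a VO catalog.
information_pool = {
    "doc1": "Catalog of quasars ...",
    "doc2": "Survey of nearby galaxies ...",
    "doc3": "Quasar variability study ...",
}

# Two topic maps layered above the SAME pool, each giving a different view:
# one organized by astronomical object, one by observation method.
by_object = {"quasar": ["doc1", "doc3"], "galaxy": ["doc2"]}
by_method = {"catalog": ["doc1"], "survey": ["doc2"]}

def retrieve(topic_map: dict, topic: str) -> list:
    """Resolve a topic to its occurrences in the underlying pool."""
    return [information_pool[doc_id] for doc_id in topic_map.get(topic, [])]

print(len(retrieve(by_object, "quasar")))  # 2
```

Because both maps point into the same pool, a query can be phrased against whichever layer matches the question being asked, which is the "exact questions" benefit the abstract describes.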
Conversations with chemists: information seeking behavior of chemistry faculty in the electronic age.
This manuscript is a final draft of the article as submitted to the Haworth journal Science and Technology Libraries in December 2002. Due to editorial error, Haworth published an earlier draft of this paper instead of the final draft. They declined to rectify this error in the online version of the journal. The reader is advised that the author considers this version to be the definitive final draft that should have been published but was not. Scholars wishing to cite this work should preferably cite this final preprint, rather than the published article.

Six faculty members in the Department of Chemistry and Biochemistry at the University of Texas at Austin were interviewed one-on-one to gather information about their information-seeking behavior, favored resources, and opinions about the transition from a print to an electronic information environment. In most cases, these chemistry faculty members have eagerly embraced the enhanced access to chemical information made possible by the steady addition of electronic journals and networked database systems. The most-cited benefits include significant time savings and convenience as well as access to more journals than ever. As a result, use of the physical library and its printed collections by faculty is declining. The chemistry faculty interviewed expressed strong self-reliance in their information-seeking skills, and showed sophistication in their choice of tools. UT Libraries