
    Integrating personalized medical test contents with XML and XSL-FO

    Background: In 2004 the adoption of a modular curriculum at the medical faculty in Muenster led to the introduction of centralized examinations based on multiple-choice questions (MCQs). We report on how the organizational challenges of realizing faculty-wide personalized tests were addressed by implementing a specialized software module that automatically generates test sheets from individual test registrations and MCQ contents. Methods: The key steps of the presented method for preparing personalized test sheets are (1) the compilation of relevant item contents and graphical media from a relational database with database queries, (2) the creation of Extensible Markup Language (XML) intermediates, and (3) the transformation into paginated documents. Results: Using an open source print formatter, the software module consistently produced high-quality test sheets, while the blending of vectorized textual contents and pixel graphics resulted in efficient output file sizes. Concomitantly, the module permitted individual randomization of item sequences to prevent illicit collusion. Conclusions: The automatic generation of personalized MCQ test sheets is feasible using freely available open source software libraries and can be deployed efficiently on a faculty-wide scale.
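    The three numbered steps can be sketched in a few lines of Python. This is a minimal illustration, not the Muenster module: the item pool, field names and XML element names are invented, and the real system pulls items from a relational database and hands the XML intermediate to an XSL-FO formatter for pagination.

```python
import random
import xml.etree.ElementTree as ET

# Hypothetical item pool standing in for the database query results (step 1);
# the fields are illustrative, not the actual schema.
items = [
    {"id": "Q1", "stem": "Which organ produces insulin?"},
    {"id": "Q2", "stem": "Which bone is the longest in the human body?"},
    {"id": "Q3", "stem": "Which vitamin is synthesized in the skin?"},
]

def build_test_sheet(candidate_id: str, seed: int) -> ET.Element:
    """Build an XML intermediate (step 2) with a per-candidate item order."""
    order = items[:]
    # Individual randomization of the item sequence, seeded per candidate
    random.Random(seed).shuffle(order)
    sheet = ET.Element("testsheet", candidate=candidate_id)
    for pos, item in enumerate(order, start=1):
        q = ET.SubElement(sheet, "item", id=item["id"], position=str(pos))
        q.text = item["stem"]
    return sheet

sheet = build_test_sheet("cand-007", seed=42)
# Step 3 would feed this serialized intermediate to an XSL-FO print formatter.
xml_bytes = ET.tostring(sheet)
```

    Seeding the generator from something stable per candidate (for example the registration number) keeps each sheet's item order reproducible, so a printed sheet can later be matched against its answer key.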

    FuGEFlow: data model and markup language for flow cytometry

    Background: Flow cytometry technology is widely used in both health care and research. The rapid expansion of flow cytometry applications has outpaced the development of data storage and analysis tools. Collaborative efforts to close this gap include building common vocabularies and ontologies, designing generic data models, and defining data exchange formats. The Minimum Information about a Flow Cytometry Experiment (MIFlowCyt) standard was recently adopted by the International Society for Advancement of Cytometry. This standard guides researchers on the information that should be included in peer-reviewed publications, but it is insufficient for data exchange and integration between computational systems. The Functional Genomics Experiment (FuGE) formalizes common aspects of comprehensive and high-throughput experiments across different biological technologies. We have extended the FuGE object model to accommodate flow cytometry data and metadata. Methods: We used the MagicDraw modelling tool to design a UML model (Flow-OM) according to the FuGE extension guidelines and the AndroMDA toolkit to transform the model to a markup language (Flow-ML). We mapped each MIFlowCyt term to either an existing FuGE class or to a new FuGEFlow class. The development environment was validated by comparing the official FuGE XSD to the schema we generated from the FuGE object model using our configuration. After the Flow-OM model was completed, the final version of the Flow-ML was generated and validated against an example MIFlowCyt-compliant experiment description. Results: The extension of FuGE for flow cytometry has resulted in a generic FuGE-compliant data model (FuGEFlow), which accommodates and links together all information required by MIFlowCyt. The FuGEFlow model can be used to build software and databases using FuGE software toolkits to facilitate automated exchange and manipulation of potentially large flow cytometry experimental data sets. Additional project documentation, including reusable design patterns and a guide for setting up a development environment, was contributed back to the FuGE project. Conclusion: We have shown that an extension of FuGE can be used to transform minimum information requirements in natural language to a markup language in XML. Extending FuGE required significant effort, but in our experience the benefits outweighed the costs. FuGEFlow is expected to play a central role in describing flow cytometry experiments and ultimately in facilitating data exchange, including with the public flow cytometry repositories currently under development.

    SEMANTICALLY INTEGRATED E-LEARNING INTEROPERABILITY AGENT

    Educational collaboration through e-learning is one of the fields that has been worked on since the emergence of e-learning in educational systems. E-learning standards (e.g. the learning object metadata standard) and e-learning system architectures or frameworks, which support interoperation of correlated e-learning systems, are the technologies proposed to support this collaboration. However, these technologies have not succeeded in creating boundless educational collaboration through e-learning. In particular, they come with their own requirements and limitations, and they demand considerable effort to apply to existing e-learning systems. Thus, a simpler technology improves the chances of forging such collaboration. This thesis explores a suite of techniques for creating an interoperability tool model in the e-learning domain that can be applied on diverse e-learning platforms. The proposed model is called the e-learning Interoperability Agent, or eiA. The scope of eiA focuses on two aspects of e-learning: Learning Objects (LOs) and the users of e-learning itself. Learning objects that are accessible over the Web are valuable assets for sharing knowledge in teaching, training, problem solving and decision support. Meanwhile, there is still tacit knowledge that is not documented in LOs but embedded in the form of users' expertise and experience. Therefore, educational collaboration can be formed by the users of e-learning with a common interest in a specific problem domain. The eiA is a loosely coupled model designed as an extension of various e-learning system platforms. The eiA utilizes XML (eXtensible Markup Language) technology, which has been accepted as a knowledge representation syntax, to bridge the heterogeneous platforms. Ultimately, the use of eiA as a facilitator mediating intercommunication between e-learning systems is intended to enable the creation of a semantically Federated e-learning Community (FeC). Eventually, the maturity of the FeC is driven by users' willingness to grow the community, by increasing the number of e-learning systems that use eiA and by adding new functionality to eiA.

    The evolution, current status, and future direction of XML

    The Extensible Markup Language (XML) is now established as a multifaceted, open-ended markup language and continues to grow in popularity. The major players that have shaped its development include the United States government, several key corporate entities, and the World Wide Web Consortium (W3C). This paper examines these influences on XML and addresses the emergence, current status, and future direction of the language. In addition, it reviews best practices and research that have contributed to the continued development and advancement of XML.

    The tissue micro-array data exchange specification: a web-based experience browsing imported data

    BACKGROUND: The AIDS and Cancer Specimen Resource (ACSR) is an HIV/AIDS tissue bank consortium sponsored by the National Cancer Institute (NCI) Division of Cancer Treatment and Diagnosis (DCTD). The ACSR offers approved researchers HIV-infected biologic samples and uninfected control tissues, including tissue cores in micro-arrays (TMA), accompanied by de-identified clinical data. Researchers interested in the type and quality of TMA tissue cores and the associated clinical data need an efficient method for viewing available TMA materials. Because each of the tissue samples within a TMA has separate data, including a core tissue digital image and clinical data, an organized, standard approach to producing, navigating and publishing such data is necessary. The Association for Pathology Informatics (API) Extensible Markup Language (XML) TMA data exchange specification (TMA DES), proposed in April 2003, provides a common format for TMA data. Exporting TMA data into the proposed format offers an opportunity to implement the API TMA DES. Using our public BrowseTMA tool, we created a web site that organizes and cross-references TMA lists, digital "virtual slide" images, TMA DES export data, linked legends and clinical details for researchers. Microsoft Excel® and Microsoft Word® are used to convert tabular clinical data and produce an XML file in the TMA DES format. The BrowseTMA tool contains Extensible Stylesheet Language Transformation (XSLT) scripts that convert XML data into Hypertext Markup Language (HTML) web pages, with hyperlinks added automatically to allow rapid navigation. RESULTS: Block lists, virtual slide images, legends, clinical details and exports have been placed on the ACSR web site for 14 blocks with 1623 cores of 2.0, 1.0 and 0.6 mm sizes. Our virtual microscope can be used to view and annotate these TMA images. Researchers can readily navigate from TMA block lists to TMA legends and to clinical details for a selected tissue core.
    Exports for 11 blocks with 3812 cores from three other institutions were processed with the BrowseTMA tool. Fifty common data elements (CDEs) from the TMA DES were used, and 42 more were created for site-specific data. Researchers can download TMA clinical data in the TMA DES format. CONCLUSION: Virtual TMAs with clinical data can be viewed on the Internet by interested researchers using the BrowseTMA tool. We have organized our approach to producing, sorting, navigating and publishing TMA information to facilitate such review. We have converted Excel TMA data into TMA DES XML and imported it, together with TMA DES XML from another institution, into BrowseTMA to produce web pages that allow us to browse through the merged data. As a result of this experience with exported data from several institutions, we proposed and implemented enhancements to the API TMA DES. A document type definition (DTD) was written for the API TMA DES that optionally includes the proposed enhancements, and independent validators can be used to check exports against the DTD, with or without those enhancements. Linking tissue core images to readily navigable clinical data greatly improves the value of the TMA.
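    The transformation step described above, turning TMA DES XML into navigable HTML with automatically added hyperlinks, can be sketched as follows. This is an illustrative stand-in written in Python rather than the BrowseTMA XSLT scripts, and the element and attribute names below are invented, not the exact API TMA DES vocabulary.

```python
import xml.etree.ElementTree as ET
from html import escape

# Minimal stand-in for one block of a TMA DES export (names are illustrative).
tma_xml = """<tma_block id="B14">
  <core id="C1" tissue="lymph node" diameter_mm="0.6"/>
  <core id="C2" tissue="spleen" diameter_mm="1.0"/>
</tma_block>"""

def cores_to_html(xml_text: str) -> str:
    """Render one block's cores as an HTML list, hyperlinking each core
    to a per-core clinical-detail page (the step the XSLT scripts automate)."""
    block = ET.fromstring(xml_text)
    rows = []
    for core in block.findall("core"):
        cid = core.get("id")
        rows.append(
            '<li><a href="{0}.html">{1}</a> ({2}, {3} mm)</li>'.format(
                cid, escape(cid), escape(core.get("tissue")), core.get("diameter_mm")
            )
        )
    return "<ul>\n" + "\n".join(rows) + "\n</ul>"

html_page = cores_to_html(tma_xml)
```

    In the real pipeline the same export is run through XSLT stylesheets, so the hyperlink pattern lives in the stylesheet rather than in code, and a new export only needs to be re-transformed.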

    Adaptive XML-based multimedia services (Mukautuvat XML-pohjaiset multimediapalvelut)

    The emergence of mobile computing requires new kinds of technologies for building services. HTML has traditionally been used to describe documents on the Internet, but it can no longer fulfil these new demands. New mobile devices are compact and limited in terms of processing power, screen size and navigation. Content has to be separated from layout for services to be accessible from various computing environments and devices. Web standards are already moving towards XML technology, and the flexibility of these new XML-related standards makes it possible to create new kinds of platform-independent services. However, some of the standards are relatively new and have not really been tested in practice, not to mention how well they work together. An interactive multimedia service was built to demonstrate some of the new standards; this demonstration service features the XML, XSL, ECMAScript and XForms standards. Evaluation of the service showed that all the standards used work smoothly together, and adaptive multimedia services can be created using these technologies. The downside at the moment is the lack of proper tools: XSL-FO and XForms in particular are very complicated and require a lot of study. Until powerful and easy-to-use tools are available, developing such services can be quite troublesome.

    Transcoding multilingual and non-standard web content to VoiceXML

    Includes abstract. Includes bibliographical references (leaves 112-119). Transcoding systems redesign and reformat already existing web interfaces into other formats so that they can be available to other audiences, for example by changing them into audio, sign language or another medium. The benefit of such systems is less work in meeting the needs of different audiences. This thesis describes the design and implementation details of a transcoding system called Dinaco. Dinaco is targeted at converting HTML web pages which are created using Extensible Markup Language (XML) technologies into speech interfaces. The differentiating feature of Dinaco is that it uses separated annotations during its transcoding process, while previous transcoding systems use HTML-dependent annotations. These separated annotations enable Dinaco to pre-normalize non-standard words and to generate VoiceXML interfaces which carry the semantics of content. The semantics help Text-to-Speech (TTS) tools to read multilingual text and to perform text normalization. The results from experiments indicate that pre-normalizing non-standard words and appending semantics enable Dinaco to generate VoiceXML interfaces which are more usable than those generated by transcoding systems that use HTML-dependent annotations. The thesis uses the design of Dinaco to demonstrate how separating annotations makes it possible to write descriptions of content which cannot be written using external HTML-dependent annotations, and how separating annotations makes it easy to write, maintain, re-use and share annotations.
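    The core transcoding idea, wrapping extracted page text in a VoiceXML document that a voice browser can read aloud, can be sketched as below. This is a generic illustration, not Dinaco's pipeline: it omits the separated annotations, pre-normalization and per-span language semantics that the thesis is actually about.

```python
import xml.etree.ElementTree as ET

def transcode_to_voicexml(paragraphs, lang="en"):
    """Wrap extracted text in a minimal VoiceXML 2.0 document.
    A full transcoder would also normalize non-standard words and
    mark language switches so a TTS engine can pronounce them."""
    vxml = ET.Element("vxml", {
        "version": "2.0",
        "xml:lang": lang,
        "xmlns": "http://www.w3.org/2001/vxml",
    })
    form = ET.SubElement(vxml, "form")
    block = ET.SubElement(form, "block")
    for text in paragraphs:
        prompt = ET.SubElement(block, "prompt")  # spoken by the voice browser
        prompt.text = text
    return vxml

doc = transcode_to_voicexml(["Welcome to the library.", "Opening hours: 9 to 17."])
serialized = ET.tostring(doc, encoding="unicode")
```

    A sentence like "9 to 17" is exactly where pre-normalization matters: without it, a TTS engine may misread numerals, abbreviations and other non-standard words.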

    The Application of XML as a Means of Exchanging Discharge Summaries between Hospital Information Systems.

    Achieving interoperability between two or more disparate systems has long been both a strong desire and a difficult challenge for information professionals. To make matters more problematic, the attributes of an interoperability solution for one situation might not be sufficient for another. However, with the use of XML technologies, interoperability between systems is becoming an attainable goal. In this paper, using the health care system (specifically, discharge summaries) as a backdrop, I explore the issues surrounding an XML-based interoperability solution. The proposed solution creates a connection between a Microsoft Access 2002 database and an Oracle 9i database using XML as the intermediate data format. This paper explores the ramifications of exchanging health data, the current XML application offerings of the two databases in question, and the specific problems that must be addressed when creating an XML-based interoperability solution. The last section explains the decision rationales and presents a general framework of steps for reproducing this solution.
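    The shape of such an exchange, exporting rows to an XML intermediate and importing that XML into the target, can be sketched as follows. As an assumption for the sake of a runnable example, SQLite stands in for both endpoints (the paper's actual setup connects Microsoft Access 2002 to Oracle 9i), and the table and element names are invented.

```python
import sqlite3
import xml.etree.ElementTree as ET

# SQLite stands in for both endpoints; names below are illustrative.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE discharge (patient_id TEXT, summary TEXT)")
src.execute("INSERT INTO discharge VALUES ('P1', 'Discharged in stable condition')")

def export_to_xml(conn) -> bytes:
    """Serialize source rows into the intermediate XML format."""
    root = ET.Element("discharge_summaries")
    for pid, summary in conn.execute("SELECT patient_id, summary FROM discharge"):
        rec = ET.SubElement(root, "record", patient_id=pid)
        rec.text = summary
    return ET.tostring(root)

def import_from_xml(conn, xml_bytes: bytes) -> int:
    """Load the intermediate XML into the target database."""
    conn.execute("CREATE TABLE discharge (patient_id TEXT, summary TEXT)")
    root = ET.fromstring(xml_bytes)
    rows = [(r.get("patient_id"), r.text) for r in root.findall("record")]
    conn.executemany("INSERT INTO discharge VALUES (?, ?)", rows)
    return len(rows)

dst = sqlite3.connect(":memory:")
n = import_from_xml(dst, export_to_xml(src))
```

    Because the XML intermediate is self-describing, neither endpoint needs a driver for the other; each side only has to read or write the agreed format.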

    Creating a domain specific document type definition for XML: thoughts on content markup for the humanities

    This paper focuses on the development of an XML (eXtensible Markup Language) DTD (Document Type Definition) for South Asian Studies (SAS.dtd). The goal was to develop a DTD that could be used to mark up content, rather than form. The assumption was made that this DTD would be used in conjunction with another DTD that focused on form, such as the TEI (Text Encoding Initiative) DTD. The first part of the paper describes the process involved in writing the DTD, relying heavily on the use of examples. The second part discusses the implications of the DTD design, specifically the problems of attempting to mark up document content; this part deals with philosophical issues and questions of responsibility.
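    A content-oriented DTD of the kind described marks up what a passage is about rather than how it looks. The toy internal subset below is invented for illustration (the actual SAS.dtd is not reproduced in this abstract), and the parse merely checks well-formedness, since enforcing the DTD would require a validating parser.

```python
import xml.etree.ElementTree as ET

# Toy content-markup DTD in the spirit of SAS.dtd; all element names invented.
doc = """<!DOCTYPE passage [
  <!ELEMENT passage (#PCDATA | placename | deity | ritual)*>
  <!ELEMENT placename (#PCDATA)>
  <!ELEMENT deity (#PCDATA)>
  <!ELEMENT ritual (#PCDATA)>
]>
<passage>The temple of <deity>Shiva</deity> at <placename>Chidambaram</placename>
hosts the <ritual>Natyanjali</ritual> festival.</passage>"""

# ElementTree parses (but does not validate against) the internal DTD subset.
root = ET.fromstring(doc)
content_tags = [child.tag for child in root]
```

    The division of labour is the point: a content DTD like this names places, deities and rituals, while a form-oriented DTD such as TEI's handles the document's structural presentation.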