
    Situational Enterprise Services

    The ability to rapidly find potential business partners and rapidly set up a collaborative business process is desirable in the face of market turbulence. Collaborative business processes are increasingly dependent on the integration of business information systems, and the traditional linking of business processes has a largely ad hoc character. Implementing situational enterprise services in an appropriate way gives the business more flexibility, adaptability and agility. Service-oriented architecture (SOA) is rapidly becoming the dominant computing paradigm and is now being embraced by organizations everywhere as the key to business agility. Web 2.0 technologies such as AJAX, on the other hand, provide good user interaction for successful service discovery, selection, adaptation, invocation and service construction. They also balance automatic integration of services with human interaction, disconnecting content from presentation in the delivery of the service. Another Web technology, the semantic Web, makes automatic service discovery, mediation and composition possible. Integrating SOA, Web 2.0 technologies and the semantic Web into a service-oriented virtual enterprise connects business processes in a much more horizontal fashion. To be able to run these services consistently across the enterprise, an enterprise infrastructure that provides an enterprise architecture and security foundation is necessary. The world is constantly changing, and so is the business environment. An agile enterprise needs to be able to quickly and cost-effectively change how it does business and who it does business with. Recognizing and adapting to different situations is an important aspect of today's business environment. Changes in an operating environment can happen implicitly or explicitly; they can be caused by different factors in the application domain, by the need to organize information in a better way, or by users' needs, such as incorporating additional functionality. Handling and managing the different situations of service-oriented enterprises is therefore an important concern. In this chapter, we investigate how to apply new Web technologies to develop, deploy and execute enterprise services.
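
    To make the dynamic-partnering idea concrete, the following is a minimal sketch of run-time service discovery and invocation. The registry URL, query parameter and descriptor fields are hypothetical placeholders, not part of any system named in the abstract.

    # Minimal sketch of dynamic service discovery and invocation, assuming a
    # hypothetical JSON registry at REGISTRY_URL; endpoint names and fields
    # are illustrative only.
    import json
    from urllib import request

    REGISTRY_URL = "http://registry.example.com/services"  # hypothetical

    def discover(capability):
        """Query the registry for services advertising a given capability."""
        with request.urlopen(f"{REGISTRY_URL}?capability={capability}") as resp:
            return json.load(resp)  # expected: a list of service descriptors

    def invoke(service, payload):
        """POST a JSON payload to the endpoint named in the descriptor."""
        req = request.Request(
            service["endpoint"],
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with request.urlopen(req) as resp:
            return json.load(resp)

    # A client binds to whichever partner service is available at run time:
    # candidates = discover("order-fulfilment")
    # result = invoke(candidates[0], {"orderId": "A-1001"})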

    Preparing Laboratory and Real-World EEG Data for Large-Scale Analysis: A Containerized Approach.

    Large-scale analysis of EEG and other physiological measures promises new insights into brain processes and more accurate and robust brain-computer interface models. However, the absence of standardized vocabularies for annotating events in a machine-understandable manner, the welter of collection-specific data organizations, the difficulty of moving data across processing platforms, and the unavailability of agreed-upon standards for preprocessing have prevented large-scale analyses of EEG. Here we describe a "containerized" approach and freely available tools we have developed to facilitate the process of annotating, packaging, and preprocessing EEG data collections to enable data sharing, archiving, large-scale machine learning/data mining and (meta-)analysis. The EEG Study Schema (ESS) comprises three data "Levels," each with its own XML-document schema and file/folder convention, plus a standardized (PREP) pipeline to move raw (Data Level 1) data to a basic preprocessed state (Data Level 2) suitable for application of a large class of EEG analysis methods. Researchers can ship a study as a single unit and operate on its data using a standardized interface. ESS does not require a central database and provides all the metadata necessary to execute a wide variety of EEG processing pipelines. The primary focus of ESS is automated in-depth analysis and meta-analysis of EEG studies. However, ESS can also encapsulate meta-information for other modalities, such as eye tracking, that are increasingly used in both laboratory and real-world neuroimaging. The ESS schema and tools are freely available at www.eegstudy.org, and a central catalog of over 850 GB of existing data in ESS format is available at studycatalog.org. These tools and resources are part of a larger effort to enable data sharing at sufficient scale for researchers to engage in truly large-scale EEG analysis and data mining (BigEEG.org).
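
    As an illustration of the "ship a study as a single unit" idea, the sketch below walks a study-level XML container and collects recording references, so a pipeline such as PREP could be applied uniformly. The file name and element names are assumptions for illustration; the actual ESS vocabulary is defined by the schemas at www.eegstudy.org.

    # Illustrative sketch of reading an ESS-style study description with the
    # standard library; element names below are assumed, not taken from the
    # published ESS schemas.
    import xml.etree.ElementTree as ET

    tree = ET.parse("study_description.xml")  # Data Level 1 study container
    root = tree.getroot()

    # Walk the sessions/recordings hierarchy and collect file references so
    # that a preprocessing pipeline can be applied across the whole study.
    recordings = []
    for session in root.iter("session"):
        for recording in session.iter("recording"):
            recordings.append({
                "session": session.findtext("number"),
                "file": recording.findtext("filename"),
                "sampling_rate": recording.findtext("samplingRate"),
            })

    print(f"{len(recordings)} recordings found in the study container")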

    Model-based Semantic Conflict Analysis for Software- and Data-Integration Scenarios

    Semantic conflict analysis, the focus of this technical report, is an approach to automating various design-time verification activities that can be applied during software- or data-integration processes. Specifically, we investigate the semantic matching of business processes and the underlying IT infrastructure, as well as technical aspects of composite heterogeneous systems. The report is part of the BIZYCLE project, which examines the applicability of model-based methods, technologies and tools to large-scale industrial software and data integration scenarios. Semantic conflict analysis is thus part of the overall BIZYCLE conflict analysis process, comprising semantic, structural, communication, behavior and property analysis, and aims at facilitating and improving standard integration practice. The project framework is therefore introduced first, followed by detailed descriptions of semantic annotation and conflict analysis, backed up with a motivating and illustrative semantic conflict analysis scenario.
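
    A toy illustration of the idea behind semantic conflict detection follows: two interfaces are annotated with concepts from a shared ontology, and a required concept with no provided or equivalent counterpart is reported as a conflict. The concept names and mapping table are invented; the actual BIZYCLE analysis operates on full metamodels, not flat sets.

    # Toy semantic-mismatch check between two annotated interfaces.
    PROVIDED = {"Customer", "OrderID", "NetPrice"}      # exporting system
    REQUIRED = {"Customer", "OrderID", "GrossPrice"}    # importing system

    # Assumed ontology mappings; here NetPrice is explicitly NOT equivalent
    # to GrossPrice, so the mismatch cannot be resolved automatically.
    EQUIVALENCES = {("NetPrice", "GrossPrice"): False}

    def conflicts(provided, required, equivalences):
        """Return required concepts with no provided or equivalent match."""
        unresolved = []
        for concept in required - provided:
            if not any(equivalences.get((p, concept), False) for p in provided):
                unresolved.append(concept)
        return unresolved

    print(conflicts(PROVIDED, REQUIRED, EQUIVALENCES))  # ['GrossPrice']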

    Metamodels and Transformations for Software and Data Integration

    Metamodels define a foundation for describing software system interfaces which can be used during software or data integration processes. The report is part of the BIZYCLE project, which examines the applicability of model-based methods, technologies and tools to large-scale industrial software and data integration scenarios. The developed metamodels are thus part of the overall BIZYCLE process, comprising semantic, structural, communication, behavior and property analysis, and aim at facilitating and improving standard integration practice. The project framework is therefore introduced first, followed by detailed metamodel and transformation descriptions as well as motivating and illustrative scenarios.
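
    The general pattern can be sketched as follows: platform-specific interface descriptions are instances of a metamodel and are transformed into one common representation that later analyses can work on. The classes and the transformation rule here are illustrative stand-ins, not BIZYCLE's metamodels.

    # Sketch of a platform-specific metamodel element being mapped onto a
    # common (pivot) representation via a simple transformation rule.
    from dataclasses import dataclass

    @dataclass
    class DbColumn:            # element of a hypothetical database metamodel
        name: str
        sql_type: str

    @dataclass
    class CommonElement:       # element of the common/pivot metamodel
        name: str
        abstract_type: str

    TYPE_MAP = {"VARCHAR": "string", "INTEGER": "number", "DATE": "date"}

    def transform(column: DbColumn) -> CommonElement:
        """Map a platform-specific column onto the common representation."""
        return CommonElement(column.name, TYPE_MAP.get(column.sql_type, "unknown"))

    print(transform(DbColumn("order_id", "INTEGER")))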

    Towards structured sharing of raw and derived neuroimaging data across existing resources

    Data sharing efforts increasingly contribute to the acceleration of scientific discovery. Neuroimaging data is accumulating in distributed domain-specific databases, and there is currently neither an integrated access mechanism nor an accepted format for the critically important meta-data that is necessary for making use of the combined, available neuroimaging data. In this manuscript, we present work from the Derived Data Working Group, an open-access group sponsored by the Biomedical Informatics Research Network (BIRN) and the International Neuroinformatics Coordinating Facility (INCF) focused on practical tools for distributed access to neuroimaging data. The working group develops models and tools facilitating the structured interchange of neuroimaging meta-data and is making progress towards a unified set of tools for such data and meta-data exchange. We report on the key components required for integrated access to raw and derived neuroimaging data, as well as associated meta-data and provenance, across neuroimaging resources. The components include (1) a structured terminology that provides semantic context to data, (2) a formal data model for neuroimaging with robust tracking of data provenance, (3) a web service-based application programming interface (API) that provides a consistent mechanism to access and query the data model, and (4) a provenance library that can be used for the extraction of provenance data by image analysts and imaging software developers. We believe that the framework and set of tools outlined in this manuscript have great potential for solving many of the issues the neuroimaging community faces when sharing raw and derived neuroimaging data across the various existing database systems for the purpose of accelerating scientific discovery.
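
    A hedged sketch of component (3), the web-service API for querying the data model, is shown below. The base URL, resource path and response fields are hypothetical placeholders; the working group's concrete API is not reproduced here.

    # Hypothetical client for a REST-style neuroimaging query service that
    # returns derived-image records carrying provenance references.
    import json
    from urllib import request

    API = "https://example.org/niquery"  # hypothetical endpoint

    def derived_images(subject_id):
        """Fetch derived-image records for one subject."""
        url = f"{API}/subjects/{subject_id}/derived"
        with request.urlopen(url) as resp:
            return json.load(resp)

    # Each record would link a result back to its raw inputs and the
    # software steps that produced it, e.g.:
    # for img in derived_images("sub-001"):
    #     print(img["path"], "<-", img["provenance"]["software"])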

    Benefits and swot analysis of iknow estudent services system

    The implementation of new, robust and complex overall systems in any area is at the very least demanding, complicated, extensive, particularized and delicate, especially when they are designed for almost the entire higher education system of a country. Inevitably, at the beginning, the stakeholders in the existing processes and resources will be reluctant to accept a radical change such as the iKnow system implementation; setbacks can be experienced in mentality shifts, workflow adjustments and adaptation, but also because different institutions start from different points in such implementations. And this is only before the system is first used. As with any big, ERP-like software solution, the first period of implementation may be the hardest, until everyone gets on board. Then the impressions from the intuitive interface, the completion of tasks from a distance, the overview of many aspects perhaps never considered before, and the usefulness of the reports will kick in. That is the point from which the added value of the iKnow eStudent Services System will start to accumulate improvements in many directions and depths. This paper can serve as an introduction to the benefits, strengths and opportunities that can be expected from iKnow, and as food for thought for the parties involved in the realization of the project regarding its weaknesses and threats. By observing the requirements for the system on one side, and the technical documentation and the software itself on the other, we can conclude that what was asked for has been delivered in terms of construction, and time will show that the objectives will be at least partly, if not completely, reachable with timely implementation and proper usage.

    Designing Traceability into Big Data Systems

    Providing an appropriate level of accessibility and traceability to data or process elements (so-called Items) in large volumes of data, often Cloud-resident, is an essential requirement in the Big Data era. Enterprise-wide data systems need to be designed from the outset to support usage of such Items across the spectrum of business use rather than from any specific application view. The design philosophy advocated in this paper is to drive the design process using a so-called description-driven approach, which enriches models with meta-data and description and focuses the design process on Item re-use, thereby promoting traceability. Details are given of the description-driven design of big data systems at CERN, in health informatics and in business process management. Evidence is presented that the approach leads to design simplicity and consequent ease of management, thanks to loose typing and the adoption of a unified approach to Item management and usage.
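
    A minimal sketch of the design philosophy follows: the payload is opaque to the store, while uniform description and provenance metadata travel with every Item and make it traceable across applications. This illustrates the idea of a loosely typed, description-driven Item; it is not the CERN implementation, and all field names are assumptions.

    # Loosely typed Item carrying description meta-data and provenance links.
    import uuid
    from datetime import datetime, timezone

    class Item:
        def __init__(self, payload, description, derived_from=None):
            self.item_id = str(uuid.uuid4())        # stable identity for re-use
            self.payload = payload                  # loosely typed content
            self.description = description          # machine-readable meta-data
            self.derived_from = derived_from or []  # provenance: parent Item ids
            self.created = datetime.now(timezone.utc).isoformat()

    raw = Item({"run": 4211, "events": 1_000_000}, {"kind": "raw-dataset"})
    summary = Item({"mean_pt": 12.7}, {"kind": "analysis-result"},
                   derived_from=[raw.item_id])      # traceable back to raw data
    print(summary.derived_from[0] == raw.item_id)   # True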

    Service-Oriented Process Models in Telecommunication Business

    The thesis evaluates challenges in business process management and the need for service-oriented process models in the telecommunication business to reduce integration effort and the total cost of ownership. The business aspect concentrates on operations and business support systems which are tailored for communication service providers. Business processes should be designed in conformance with the TeleManagement Forum's integrated business architecture framework. The thesis argues for the need to transform organizations and their way of working from vertical silos to horizontal layers, and to understand the transformational efforts needed to adopt a new strategy. Furthermore, the thesis introduces service characterizations and goes deeper into the technical requirements that a service-compliant middleware system needs to support. At the end of the thesis, Nokia Siemens Networks' proprietary approach, the Process Automation Enabling Suite, is introduced, and finally two case studies are presented. The first is a Nokia Siemens Networks proprietary survey which highlights the importance of customer experience management, and the second is an overall research study whose results have been derived from other public surveys covering application integration efforts.

    Model-Based System Integration for Manufacturing Execution Systems

    Application integration becomes more complex as software becomes more advanced. This thesis investigates the applicability of model-driven application integration methods to the software integration of manufacturing execution systems (MES). The goal was to create a code generator that uses models to generate a working program that transfers data from a MES to another information system. The focus of the implementation was on generality. First, past research on MES was reviewed, the means to integrate it with other information systems were investigated, and the international standard ISA-95 and B2MML, as well as model-driven engineering (MDE), were surveyed. Next, requirements were defined for the system, divided into user and developer requirements. A suitable design for a code generator was introduced and, after that, implemented and evaluated experimentally. The experiment was conducted by reading production data from the database of the MES-like Delfoi Planner and then transforming that data into B2MML-style XML. The experiment verified that the code generator functioned as intended. However, compared to a manually created program, the generated code was longer and less efficient. It should also be considered that adopting MDE methods takes time. Therefore, for MDE to be better than traditional programming, the code generator has to be used multiple times in order to achieve the benefits, and the systems cannot be too time-critical either. Based on the findings, it can be said that model-driven application integration methods can be used to integrate MESs, but there are restrictions.
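
    As an illustrative counterpart to the generated program, the sketch below reads production rows from a database and serializes them as B2MML-flavoured XML. The table, column and element names are assumptions for the sketch; real B2MML documents follow the full ISA-95/B2MML schemas.

    # Read production orders from a stand-in MES database and emit a
    # B2MML-style ProductionSchedule document.
    import sqlite3
    import xml.etree.ElementTree as ET

    conn = sqlite3.connect("mes.db")   # stand-in for the MES database
    rows = conn.execute(
        "SELECT order_id, product, quantity FROM production_orders")

    root = ET.Element("ProductionSchedule")          # B2MML-style root element
    for order_id, product, quantity in rows:
        req = ET.SubElement(root, "ProductionRequest")
        ET.SubElement(req, "ID").text = str(order_id)
        ET.SubElement(req, "ProductID").text = product
        ET.SubElement(req, "Quantity").text = str(quantity)

    ET.ElementTree(root).write("schedule.xml", encoding="utf-8",
                               xml_declaration=True)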