
    Situational Enterprise Services

    The ability to rapidly find potential business partners and to rapidly set up a collaborative business process is desirable in the face of market turbulence. Collaborative business processes are increasingly dependent on the integration of business information systems. Traditional linking of business processes has a largely ad hoc character. Implementing situational enterprise services in an appropriate way gives the business more flexibility, adaptability and agility. Service-oriented architectures (SOA) are rapidly becoming the dominant computing paradigm and are now being embraced by organizations everywhere as the key to business agility. Web 2.0 technologies such as AJAX, on the other hand, provide rich user interaction for successful service discovery, selection, adaptation, invocation and construction. They also balance automatic service integration with human interaction, decoupling content from presentation in service delivery. The Semantic Web, in turn, makes automatic service discovery, mediation and composition possible. Integrating SOA, Web 2.0 technologies and the Semantic Web into a service-oriented virtual enterprise connects business processes in a much more horizontal fashion. To run these services consistently across the enterprise, an infrastructure that provides an enterprise architecture and security foundation is necessary. The world is constantly changing, and so is the business environment. An agile enterprise needs to be able to change quickly and cost-effectively how it does business and who it does business with. Recognizing and adapting to different situations is an important aspect of today's business environment. Changes in an operating environment can happen implicitly or explicitly; they can be caused by different factors in the application domain, made to organize information in a better way, or driven by users' needs, such as incorporating additional functionality. Handling and managing the different situations of a service-oriented enterprise is therefore an important aspect of the business environment. In this chapter, we investigate how to apply new Web technologies to develop, deploy and execute enterprise services.
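    The discovery-then-invocation pattern mentioned in this abstract can be illustrated in a few lines. The following Python sketch is purely illustrative and is not the chapter's implementation: the registry endpoint, its query parameters and the response format are assumptions made for the example.

    # Minimal sketch: look up a service in a (hypothetical) enterprise registry
    # and invoke it over HTTP. The registry URL, query parameters and response
    # shape are illustrative assumptions, not part of the chapter.
    import requests

    REGISTRY_URL = "https://registry.example.com/services"  # hypothetical endpoint

    def discover_service(capability: str) -> str:
        """Return the endpoint of the first service advertising a capability."""
        resp = requests.get(REGISTRY_URL, params={"capability": capability}, timeout=10)
        resp.raise_for_status()
        candidates = resp.json()  # assumed: list of {"name": ..., "endpoint": ...}
        if not candidates:
            raise LookupError(f"no service offers capability '{capability}'")
        return candidates[0]["endpoint"]

    def invoke(endpoint: str, payload: dict) -> dict:
        """Call the discovered service with a JSON payload and return its JSON reply."""
        resp = requests.post(endpoint, json=payload, timeout=10)
        resp.raise_for_status()
        return resp.json()

    if __name__ == "__main__":
        quote_endpoint = discover_service("shipping-quote")
        print(invoke(quote_endpoint, {"origin": "NL", "destination": "PT", "weight_kg": 12}))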

    A geostatistical simulation algorithm for the homogenisation of climatic time series: a contribution to the homogenisation of monthly precipitation series

    A thesis submitted in partial fulfillment of the requirements for the degree of Doctor in Information Management, specialization in Geographic Information Systems. As defined by the Intergovernmental Panel on Climate Change (IPCC), climate change refers to a change in the state of the climate that can be identified by changes in the statistical characteristics of its properties and that persists for an extended period, typically decades or longer. In order to assess climate change and to develop impact studies, it is imperative that climate signals are free from any external factors. However, non-natural irregularities are an inevitable part of long climate records. They are introduced during the process of measuring and collecting data from weather stations. Accordingly, it is essential to detect and correct those irregularities a priori, through a process called homogenisation. This process has become a major research topic in recent decades and many researchers have focused on developing efficient methods. Still, some climatic variables lack homogenisation procedures due to their high variability and temporal resolution (e.g., monthly precipitation). We propose the gsimcli (Geostatistical SIMulation for the homogenisation of CLImate data) homogenisation method, which is based on a geostatistical simulation method, namely direct sequential simulation. The proposed approach considers simulated values of the candidate station's neighbouring area, defined by the local radius parameter, aiming to account for local characteristics of its climatic zone. gsimcli has other modelling parameters, such as the candidates' order in the homogenisation process, the detection parameter, and the correction parameter (also used to fill in missing data). A semi-automatic version of gsimcli is also proposed, in which the homogenisation adjustments can be estimated from a comparison series. The efficiency of the gsimcli method is evaluated in the homogenisation of precipitation data. Several homogenisation exercises are presented in a sensitivity analysis of the parameters for two different data sets: real and artificial precipitation data. The assessment of the detection part of gsimcli is based on the comparison with other detection techniques using real data, and extends a previous study for the south of Portugal. Artificial monthly and annual data from a benchmark data set of the HOME project (ACTION COST-ES0601) are used to assess the performance of gsimcli. These results allow the comparison between gsimcli and state-of-the-art methods through the calculation of performance metrics. This research identified the gsimcli parameters that have the greatest influence on the homogenisation results: the correction parameter, the grid cell size and the local radius parameter. The set of parameters providing the best values of the performance metrics is recommended as the most suitable for the homogenisation of monthly precipitation data. The results show gsimcli to be a favourable homogenisation method for monthly precipitation data, outperforming a few well-established procedures. Its ability to fill in missing data is an advantage compared with other methods. Taking advantage of its capability to filter irregularities and to provide comparison series, gsimcli can also be used as a pre-homogenisation tool, followed by a traditional homogenisation method (semi-automatic approach).
As future work, we recommend assessing the performance of the gsimcli method with denser monitoring networks and including a multivariate geostatistical simulation algorithm in the homogenisation procedure.
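    The detection-and-correction idea behind gsimcli can be caricatured as follows. This is not the gsimcli algorithm: direct sequential simulation is replaced here by a plain empirical distribution built from neighbouring stations within a given radius, and the parameter names (local_radius, the detection probability p) are illustrative assumptions only.

    # Simplified sketch of gsimcli-style detection/correction (NOT the real algorithm):
    # for each month of a candidate station, build a local distribution from the
    # neighbouring stations within local_radius and flag/replace outlying values.
    import numpy as np

    def homogenise_candidate(candidate, neighbours, distances, local_radius=50.0, p=0.95):
        """candidate: 1D array of monthly values (NaN = missing).
        neighbours: 2D array (n_stations x n_months) of reference series.
        distances: 1D array of station distances (km) to the candidate."""
        nearby = neighbours[distances <= local_radius]   # local neighbourhood
        corrected = candidate.copy()
        lo, hi = (1.0 - p) / 2.0, 1.0 - (1.0 - p) / 2.0  # two-sided probability interval
        for t in range(candidate.size):
            local = nearby[:, t]
            local = local[~np.isnan(local)]
            if local.size < 3:                           # not enough local information
                continue
            low, high = np.quantile(local, [lo, hi])
            value = corrected[t]
            if np.isnan(value) or value < low or value > high:
                # correction (and infilling of missing data): use the local median,
                # where gsimcli would use the simulated local distribution instead
                corrected[t] = np.median(local)
        return corrected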

    EzWeb/FAST: Reporting on a Successful Mashup-based Solution for Developing and Deploying Composite Applications in the Upcoming Web of Services

    Service-oriented architectures (SOAs) based on Web Services have attracted great interest and IT investment in recent years, principally in the context of business-to-business integration within corporate intranets. However, they are now evolving to break through enterprise boundaries, in a revolutionary attempt to make the approach pervasive, leading to what we call a user-centric SOA, i.e. a SOA conceived as a Web of Services made up of compositional resources that empower end-users to exploit these resources ubiquitously by collaboratively remixing them. In this paper we explore the architectural basis, technologies, frameworks and tools considered necessary to face this novel vision of SOA. We also present the rationale behind EzWeb/FAST, an ongoing EU-funded project whose first outcomes could serve as a preliminary proof of concept.

    Conceptual development of custom, domain-specific mashup platforms

    Despite the common claim by mashup platforms that they enable end-users to develop their own software, in practice end-users still don't develop their own mashups, as the highly technical or nonexistent user bases of today's mashup platforms testify. The key shortcoming of current platforms is their general-purpose nature, which privileges expressive power over intuitiveness. In our prior work, we have demonstrated that a domain-specific mashup approach, which privileges intuitiveness over expressive power, has much more potential to enable end-user development (EUD). The problem is that developing mashup platforms - domain-specific or not - is complex and time consuming. In addition, domain-specific mashup platforms by their very nature target only a small user base, that is, the experts of the target domain, which makes their development unsustainable unless it is adequately supported and automated. With this article, we aim to make the development of custom, domain-specific mashup platforms cost-effective. We describe a mashup tool development kit (MDK) that is able to automatically generate a mashup platform (comprising custom mashup and component description languages as well as design-time and runtime environments) from a conceptual design and to provision it as a service. We equip the kit with a dedicated development methodology and demonstrate the applicability and viability of the approach with the help of two case studies. © 2014 ACM
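    A toy sketch of the generation step described above follows. It is not the MDK's actual component description language: the conceptual model structure and the JSON output format are assumptions made purely for illustration.

    # Illustrative sketch only: derive a component description (here, plain JSON)
    # for a domain-specific mashup platform from a tiny conceptual domain model.
    # The model structure and output format are assumptions, not the MDK's own languages.
    import json

    domain_model = {
        "domain": "home-energy",
        "concepts": [
            {"name": "Meter",  "attributes": ["id", "kwh", "timestamp"]},
            {"name": "Tariff", "attributes": ["id", "price_per_kwh"]},
        ],
        "operations": [
            {"name": "estimate_cost", "inputs": ["Meter", "Tariff"], "output": "CostReport"},
        ],
    }

    def generate_component_descriptions(model):
        """Produce one mashup-component description per domain operation."""
        components = []
        for op in model["operations"]:
            components.append({
                "component": op["name"],
                "domain": model["domain"],
                "inputs": [{"concept": c} for c in op["inputs"]],
                "output": {"concept": op["output"]},
                "ui": {"widget": "form", "label": op["name"].replace("_", " ").title()},
            })
        return components

    print(json.dumps(generate_component_descriptions(domain_model), indent=2))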

    Developing front-end Web 2.0 technologies to access services, content and things in the future Internet

    The future Internet is expected to be composed of a mesh of interoperable web services accessible from all over the web. This approach has not yet caught on, since global user-service interaction is still an open issue. This paper presents one vision of next-generation front-end Web 2.0 technology that will enable integrated access to services, content and things in the future Internet. In this paper, we illustrate how front-ends that wrap traditional services and resources can be tailored to the needs of end users, converting end users into prosumers (creators and consumers of service-based applications). To do this, we propose an architecture that end users without programming skills can use to create front-ends, consult catalogues of resources tailored to their needs, easily integrate and coordinate front-ends, and create composite applications to orchestrate services in their back-end. The paper includes a case study illustrating that current user-centred web development tools are at a very early stage of evolution. We provide statistical data on how the proposed architecture improves on these tools. This paper is based on research conducted by the Service Front End (SFE) Open Alliance initiative.
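    The wrap-and-compose idea can be sketched as follows. The endpoints, payload shapes and class names are hypothetical; this is a minimal illustration of how wrapped front-end components might be chained into a composite application, not the SFE architecture itself.

    # Illustrative sketch: wrap two (hypothetical) back-end services as simple
    # front-end components and chain them into a composite application.
    # Endpoints and payload shapes are assumptions made for the example.
    import requests

    class ServiceFrontEnd:
        """Minimal wrapper exposing a back-end service as a composable component."""
        def __init__(self, endpoint: str):
            self.endpoint = endpoint

        def run(self, data: dict) -> dict:
            resp = requests.post(self.endpoint, json=data, timeout=10)
            resp.raise_for_status()
            return resp.json()

    def compose(*components):
        """Pipe the output of each component into the next one."""
        def composite(data: dict) -> dict:
            for component in components:
                data = component.run(data)
            return data
        return composite

    if __name__ == "__main__":
        geocode = ServiceFrontEnd("https://api.example.com/geocode")  # hypothetical
        weather = ServiceFrontEnd("https://api.example.com/weather")  # hypothetical
        weather_at_address = compose(geocode, weather)
        print(weather_at_address({"address": "Campus de Campolide, Lisboa"}))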

    Aeronautical engineering: A continuing bibliography, supplement 122

    This bibliography lists 303 reports, articles, and other documents introduced into the NASA scientific and technical information system in April 1980.

    Knowledge-based support in Non-Destructive Testing for health monitoring of aircraft structures

    Maintenance manuals describe general methods and procedures for industrial maintenance and explain the principles behind them. In particular, Non-Destructive Testing (NDT) methods are important for detecting aeronautical defects and can be applied to various kinds of material and in different environments. Conventional non-destructive evaluation inspections are performed during periodic maintenance checks. Usually, the list of tools used in a maintenance programme simply appears in the introduction of the manuals, with no details about their characteristics beyond a short description of the manufacturer and the tasks in which they are employed. Better identification concepts for maintenance tools are needed to manage the set of equipment and establish a system of equivalence: a consistent maintenance conceptualization is required, flexible enough to cover all current equipment as well as equipment likely to be added or used in the future. Our contribution concerns the formal specification of this system of functional equivalences, which supports maintenance activities by determining whether one tool can be substituted for another based on the key parameters of their identified characteristics. The reasoning mechanisms of conceptual graphs constitute the baseline for measuring the fit, or lack of fit, between an equipment model and a maintenance activity model. Graph operations are used to process answers to a query, and this graph-based search approach is in line with the logical view of information retrieval. The methodology supports the formalization and capitalization of the knowledge of experienced NDT practitioners. As a result, it enables the selection of an NDT technique and outlines its capabilities together with acceptable alternatives.
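    A minimal sketch of the substitution test follows. It is only an illustration of the idea: conceptual-graph projection is replaced by a plain comparison of key parameters, and the parameter names and values are invented.

    # Illustrative sketch: decide whether one NDT tool can substitute for another in a
    # given maintenance activity by checking key parameters against the activity's
    # requirements. Parameter names and values are invented; real conceptual-graph
    # projection is replaced here by a plain dictionary comparison.
    ultrasonic_a = {"method": "ultrasonic", "min_detectable_flaw_mm": 0.5, "max_depth_mm": 40}
    ultrasonic_b = {"method": "ultrasonic", "min_detectable_flaw_mm": 0.8, "max_depth_mm": 60}

    wing_spar_inspection = {             # requirements of the maintenance activity
        "method": "ultrasonic",
        "min_detectable_flaw_mm": 1.0,   # flaws of this size or larger must be found
        "max_depth_mm": 30,              # inspection depth that must be reachable
    }

    def satisfies(tool: dict, activity: dict) -> bool:
        """A tool fits an activity if it uses the required method, detects flaws at
        least as small as required, and reaches at least the required depth."""
        return (tool["method"] == activity["method"]
                and tool["min_detectable_flaw_mm"] <= activity["min_detectable_flaw_mm"]
                and tool["max_depth_mm"] >= activity["max_depth_mm"])

    def can_substitute(candidate: dict, original: dict, activity: dict) -> bool:
        """The candidate may replace the original only if it also satisfies the activity."""
        return satisfies(original, activity) and satisfies(candidate, activity)

    print(can_substitute(ultrasonic_b, ultrasonic_a, wing_spar_inspection))  # True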

    Efficiency of time series homogenization: method comparison with 12 monthly temperature test datasets

    The aim of time series homogenization is to remove nonclimatic effects, such as changes in station location, instrumentation, observation practices, and so on, from observed data. Statistical homogenization usually reduces the nonclimatic effects but does not remove them completely. In the Spanish "MULTITEST" project, the efficiencies of automatic homogenization methods were tested on large benchmark datasets with a wide range of statistical properties. In this study, test results for nine versions, based on five homogenization methods (the adapted Caussinus-Mestre algorithm for the homogenization of networks of climatic time series, ACMANT; Climatol; multiple analysis of series for homogenization, MASH; the pairwise homogenization algorithm, PHA; and RHtests), are presented and evaluated. The tests were executed with 12 synthetic/surrogate monthly temperature test datasets containing 100–500 networks with 5–40 time series in each. Residual centered root-mean-square errors and residual trend biases were calculated both for individual station series and for network mean series. The results show that a larger fraction of the nonclimatic biases can be removed from station series than from network-mean series. The largest error reduction is found for the long-term linear trends of individual time series in datasets with a high signal-to-noise ratio (SNR), where the mean residual error is only 14%–36% of the raw data error. When the SNR is low, most of the results still indicate error reductions, although with smaller ratios than for high SNR. In general, ACMANT gave the most accurate homogenization results. In the accuracy of individual time series, ACMANT is closely followed by Climatol, and for the accurate calculation of mean climatic trends over large geographical regions both PHA and ACMANT are recommended. This research was funded by the Spanish MULTITEST project (Ministry of Economics and Competitiveness, CGL2014-52901-P).
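    A small sketch of the two evaluation measures named above (centered root-mean-square error and trend bias) follows, computed for a homogenized series against the known "true" series of a synthetic benchmark. The function names and data are mine; the paper's exact formulations may differ in detail.

    # Illustrative sketch: the two skill measures named in the abstract, computed for a
    # homogenized series against the known "true" series of a synthetic benchmark.
    # Function names are mine; the paper's exact formulations may differ in detail.
    import numpy as np

    def centered_rmse(homogenized, truth):
        """Root-mean-square error after removing each series' own mean,
        so that a constant offset between the series is not penalised."""
        h = homogenized - np.mean(homogenized)
        t = truth - np.mean(truth)
        return np.sqrt(np.mean((h - t) ** 2))

    def trend_bias(homogenized, truth, years):
        """Difference between the linear trends (per year) of the two series."""
        slope_h = np.polyfit(years, homogenized, 1)[0]
        slope_t = np.polyfit(years, truth, 1)[0]
        return slope_h - slope_t

    years = np.arange(1950, 2010)
    truth = 10.0 + 0.02 * (years - years[0]) + np.random.default_rng(0).normal(0, 0.3, years.size)
    homog = truth + 0.1                      # e.g. a residual constant bias after homogenization
    print(centered_rmse(homog, truth))       # ~0: constant offsets are ignored
    print(trend_bias(homog, truth, years))   # ~0: the trends are identical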