7 research outputs found

    Fast Prototyping of the Internet of Things solutions with IBM Bluemix

    Fast prototyping for IoT projects has gained traction in many industries. Today's IT market requires new, faster techniques to gain business advantages in different industries, from energy consumption and retail to manufacturing, services, and agriculture. Combining sensors and actuators, embedded systems, and networks with cloud computing platforms and cognitive services in one project is a very promising approach to addressing industry needs. System developers therefore have to be familiar with many design technologies and best practices. At the same time, this approach requires a profound change in the way the major IoT market actors interact: suppliers and consumers of cloud platforms and services, teams of developers, and universities. In this paper, we analyze how to effectively build interaction among the major IoT market actors and discuss a platform for such collaboration. We present a collaborative framework for fast prototyping of IoT solutions with different stakeholders participating. The paper demonstrates the results of this approach in the case of interaction between a vendor, a university, and industry. We consider a number of technological and practical aspects of this collaborative framework using the IBM Bluemix cloud platform and IoT templates. We tested this approach in an IoT hackathon with the participation of a vendor, local business partners, and industry representatives. Projects developed during this hackathon are used to illustrate the results achieved by applying the introduced concept to IoT solution prototyping.
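    As an illustration of the kind of building block such prototypes rely on, the sketch below shows a device publishing a sensor event to the IBM Watson IoT Platform (part of Bluemix at the time) over MQTT. The organization ID, device type, device ID, and token are hypothetical placeholders; the client-ID and topic strings follow the platform's documented d:{org}:{type}:{device} and iot-2/evt/... conventions.

```python
# Minimal sketch: publish a sensor reading to IBM Watson IoT Platform (Bluemix)
# over MQTT. ORG, DEVICE_TYPE, DEVICE_ID, and TOKEN are hypothetical placeholders.
import json
import ssl

import paho.mqtt.client as mqtt

ORG = "abc123"            # hypothetical organization ID
DEVICE_TYPE = "sensor"    # hypothetical device type
DEVICE_ID = "dev01"       # hypothetical device ID
TOKEN = "device-token"    # hypothetical auth token

client = mqtt.Client(client_id=f"d:{ORG}:{DEVICE_TYPE}:{DEVICE_ID}")
client.username_pw_set("use-token-auth", TOKEN)
client.tls_set_context(ssl.create_default_context())
client.connect(f"{ORG}.messaging.internetofthings.ibmcloud.com", 8883)

# Publish a JSON temperature event; the platform routes it to subscribed apps.
payload = json.dumps({"d": {"temperature": 21.5}})
client.publish("iot-2/evt/status/fmt/json", payload, qos=1)
client.disconnect()
```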

    Cost optimization for data placement strategies in an analytical cloud service

    Analyzing large amounts of business-relevant data in near-real time to assist decision making has become a crucial requirement for many businesses in recent years. All major database system vendors therefore offer solutions that address this requirement with systems specially tuned for accelerating analytical workloads. Before deciding to buy such a large and expensive solution, customers are interested in a detailed workload analysis in order to estimate the potential benefits. A more agile solution with lower barriers to entry is therefore desirable, one that allows customers to assess analytical solutions for their workloads and lets data scientists experiment with the available data on test systems before rolling out valuable analytical reports on a production system. In such a scenario, where separate systems are deployed for handling the transactional workloads of the customers' daily business and for conducting business analytics on either a cloud service or a dedicated accelerator appliance, data management and placement strategies are of high importance. Multiple approaches exist for keeping the data set in sync and guaranteeing data coherence, each with unique characteristics regarding important metrics that impact query performance, such as the latency with which data is propagated, the achievable throughput for larger data volumes, or the amount of CPU required to detect and deploy data changes. The important heuristics are therefore analyzed and evolved into a general model for data placement and maintenance strategies. Based on this theoretical model, a prototype that predicts these metrics is also implemented.
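    The metrics named in the abstract suggest a simple illustrative model: for each maintenance strategy, predict propagation latency and CPU cost from the volume of changed data, then pick the cheapest strategy that meets a latency bound. The strategy names and cost formulas below are hypothetical stand-ins, not the thesis's actual model.

```python
# Hypothetical sketch of a cost model for data placement/maintenance strategies.
from dataclasses import dataclass

@dataclass
class Strategy:
    name: str
    fixed_latency_s: float      # delay before changes start propagating
    throughput_mb_s: float      # sustained propagation throughput
    cpu_per_mb: float           # CPU-seconds spent per MB of changes

def metrics(s: Strategy, changed_mb: float) -> dict:
    """Predict latency and CPU cost for propagating `changed_mb` of changes."""
    return {
        "latency_s": s.fixed_latency_s + changed_mb / s.throughput_mb_s,
        "cpu_s": s.cpu_per_mb * changed_mb,
    }

def cheapest(strategies, changed_mb, max_latency_s):
    """Pick the lowest-CPU strategy whose predicted latency meets the bound."""
    scored = [(metrics(s, changed_mb), s) for s in strategies]
    feasible = [(m, s) for m, s in scored if m["latency_s"] <= max_latency_s]
    return min(feasible, key=lambda ms: ms[0]["cpu_s"], default=None)

if __name__ == "__main__":
    options = [
        Strategy("incremental-replication", fixed_latency_s=1,
                 throughput_mb_s=50, cpu_per_mb=0.20),
        Strategy("bulk-reload", fixed_latency_s=300,
                 throughput_mb_s=400, cpu_per_mb=0.02),
    ]
    # With 2 GB of changes and a 10-minute bound, bulk reload wins on CPU.
    print(cheapest(options, changed_mb=2000, max_latency_s=600))
```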

    Data Warehousing in the Cloud

    A data warehouse, more than a concept, is a system designed to store the information related to the activities of an organization in a consolidated form, serving as a single point of truth for any report or analysis that may be carried out. It enables the analysis of large volumes of information that typically originate in the organization's transactional (OLTP) systems. The concept arose from the need to integrate corporate data spread across the multiple application servers an organization might have, so that the data becomes accessible to all users who need to consume information and make decisions based on it. As ever more data appears, so does the need to analyze it; however, today's data warehouse systems lack the capacity to handle the huge amount of data that is currently produced and needs to be processed and analyzed. This is where the concept of cloud computing comes in.
Cloud computing is a model that enables ubiquitous, on-demand access over the Internet to a pool of shared or dedicated computing resources (such as networks, servers, or storage) that can be rapidly provisioned or released with a simple request and without human intervention. In this model, resources are practically unlimited and, working together, deliver very high computing power that can and should be used for the most varied purposes. From the combination of these two concepts emerges the cloud data warehouse, which elevates the way traditional data warehouse systems are defined by allowing their sources to be located anywhere, as long as they are accessible through the Internet, while also taking advantage of the great computational power of a cloud infrastructure. Despite the recognized advantages, some challenges remain; two of the most prominent are security and the way data is transferred to the cloud. In this dissertation, a comparative study of several cloud data warehouse solutions was carried out with the aim of recommending the best solution among those studied and tested. An initial assessment was made based on Gartner criteria and a survey on the subject. From this first evaluation emerged the two solutions that were the subject of a finer comparison and of the tests whose evaluation dictated the recommendation.
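    A minimal sketch of the kind of weighted-criteria shortlisting the dissertation describes. The criteria names, weights, and scores below are hypothetical placeholders; the actual Gartner criteria and survey results are not reproduced here.

```python
# Hypothetical sketch: rank cloud data warehouse candidates by a weighted
# score over evaluation criteria, then shortlist the top two for finer testing.
weights = {"security": 0.3, "performance": 0.3, "cost": 0.2, "ease_of_use": 0.2}

# Hypothetical scores (0-10) per candidate; not the dissertation's actual data.
candidates = {
    "solution_a": {"security": 8, "performance": 7, "cost": 6, "ease_of_use": 9},
    "solution_b": {"security": 9, "performance": 8, "cost": 5, "ease_of_use": 7},
    "solution_c": {"security": 6, "performance": 9, "cost": 8, "ease_of_use": 6},
}

def weighted_score(scores: dict) -> float:
    return sum(weights[c] * scores[c] for c in weights)

ranking = sorted(candidates, key=lambda name: weighted_score(candidates[name]),
                 reverse=True)
shortlist = ranking[:2]   # the two finalists that proceed to hands-on tests
print(shortlist)
```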

    Scalable Geospatial Analytics with IBM DB2 and R

    Application of spatial analysis to data from New York taxi companies, combining R with IBM DB2.
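    A minimal sketch of the kind of query such an analysis pushes down into the database. The paper pairs R with DB2; Python's ibm_db driver is used here instead, for consistency with the other sketches. Table and column names are hypothetical; db2gse.ST_Point and db2gse.ST_Distance are DB2 Spatial Extender functions.

```python
# Hypothetical sketch: find taxi pickups within 1 km of a point using DB2
# Spatial Extender from Python (ibm_db). Table/column names are placeholders.
import ibm_db

conn = ibm_db.connect(
    "DATABASE=taxidb;HOSTNAME=localhost;PORT=50000;UID=db2inst1;PWD=secret;",
    "", "")

SRS_ID = 1003  # WGS84 srs_id in DB2 Spatial Extender; verify on your instance

sql = """
SELECT trip_id, fare_amount
FROM taxi_trips
WHERE db2gse.ST_Distance(pickup_point,
                         db2gse.ST_Point(?, ?, ?),
                         'KILOMETER') < 1.0
"""
stmt = ibm_db.prepare(conn, sql)
ibm_db.execute(stmt, (-73.9857, 40.7484, SRS_ID))  # lon/lat near Midtown NYC

row = ibm_db.fetch_assoc(stmt)
while row:
    print(row["TRIP_ID"], row["FARE_AMOUNT"])
    row = ibm_db.fetch_assoc(stmt)
ibm_db.close(conn)
```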

    Aspects of event-driven cloud-native application development

    Managing and configuring servers has long been a challenging burden. Serverless computing was therefore introduced to overcome the complexities of managing servers, remove operational overhead, and handle concerns such as scalability and high availability. Code or binaries in serverless computing are executed upon direct invocation or in response to events, in a highly scalable manner, which makes serverless computing most appropriate for building event-driven applications. An example of such a serverless computing provider is OpenWhisk, the open-source project by IBM. To receive events and react to them by invoking OpenWhisk actions (code or Docker containers), OpenWhisk provides an ecosystem of service packages that ease the use of these services and the subscription to their events. While OpenWhisk provides powerful means and tools for interacting with events, its ecosystem lacks packages for a number of important services (event sources) that are necessary to subscribe to their events and use their functionality. This thesis enriches the OpenWhisk ecosystem by integrating and enabling more services. A use-case-based approach is chosen to select the services to be integrated and enabled. The proposed use case is an Early Warning System (EWS) used to warn the public of disasters and possible incidents and to help rescue survivors. In this approach, diversity of the integrated services in terms of domains and vendors is guaranteed, to avoid vendor lock-in and provide flexibility in the available services. The integrated services were then organized into taxonomic categories by service domain to ease finding and organizing packages.
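    For context, an OpenWhisk action is just a function with a well-known entry point: the platform invokes main() with the event payload as a dict and expects a dict back. The action below sketches how an event from a sensor feed might be handled in the EWS scenario; the payload fields and the alert threshold are hypothetical.

```python
# Minimal OpenWhisk Python action. Field names and threshold are hypothetical.
def main(params):
    reading = params.get("water_level_cm", 0)
    station = params.get("station", "unknown")
    # Hypothetical early-warning rule: flag readings above a flood threshold.
    if reading > 250:
        return {"alert": True, "station": station,
                "message": f"Flood risk at {station}: {reading} cm"}
    return {"alert": False, "station": station}
```

    Such an action would typically be deployed with `wsk action create flood-check flood_check.py` and connected, via a rule, to a trigger fed by the event-source package.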

    Business Intelligence for Student Assessments in the Informatics Engineering Degree (Licenciatura em Engenharia Informática)

    In an educational institution, as in any organization, it is important to monitor the performance of the different business processes, especially with continuous improvement in mind. An educational institution has "common" business processes, but the learning processes are of the greatest importance. It is crucial to analyze the performance of these processes, studying the specific behavior of course units and/or subgroups of students, and to act accordingly to improve learning. On the other hand, monitoring is also important to demonstrate the proper functioning of these processes to evaluation and accreditation bodies, such as the Ordem dos Engenheiros, ENAEE, or A3ES. These entities request a set of statistical information on various aspects of teaching activity in order to analyze its performance and proper functioning, classify it and, where appropriate, recognize it with excellence awards.
Currently, the Director of the Informatics Engineering Degree monitors his students' curricular information through files in spreadsheet format, extracted from the institution's portal. Given the complexity of analysis and the proneness to error this format can induce, the need arose to create a Business Intelligence system that allows the loading, storage, and maintenance of data in a simpler and more automated way. Once the goals for such a system were defined, the next step was a survey of the state of the art: defining some of the concepts most important to understanding the dissertation, studying possible architectures and some of the tools used to develop the solution (ETL tools and tools that could be used for the future presentation of OLAP data), and concluding with an analysis of the current market, presenting some solutions created for the school context. For the context of student assessments, a possible solution designed to solve the problem is presented, based on a system capable of storing large volumes of data and maintaining their history. Finally, tests were carried out to prove the credibility of the system, not only by loading the available data sources, but also by creating the set of analyses identified as those most used by the Director of the Degree.
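    A minimal sketch of the kind of ETL step the dissertation automates: reading the spreadsheet exports, normalizing them, and loading them into a warehouse table. The file name, column names, and SQLite target are hypothetical stand-ins for the institution's actual sources and database.

```python
# Hypothetical ETL sketch: load student-assessment spreadsheets into a
# warehouse fact table. File/column names and SQLite target are placeholders.
import sqlite3

import pandas as pd

# Extract: spreadsheet exported from the institution's portal (hypothetical).
df = pd.read_excel("assessments_2023.xlsx")

# Transform: normalize column names and types, drop incomplete rows.
df = df.rename(columns={"Aluno": "student_id", "UC": "course_unit",
                        "Nota": "grade", "Epoca": "exam_period"})
df = df.dropna(subset=["student_id", "grade"])
df["grade"] = df["grade"].astype(float)

# Load: append into a fact table, keeping history across academic years.
with sqlite3.connect("warehouse.db") as conn:
    df.to_sql("fact_assessment", conn, if_exists="append", index=False)
```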