67 research outputs found

    CLOUD COMPUTING: A REVIEW OF PAAS, IAAS, SAAS SERVICES AND PROVIDERS

    Cloud computing has become an important factor for businesses, developers, and workers because it provides tools and Web applications that allow information to be stored on external servers. Cloud computing also offers advantages such as cost reduction and access to information from anywhere, to mention but a few. Nowadays, there are several Cloud computing providers, such as Google Apps, Zoho, AppEngine, and Amazon EC2, which offer Software, Infrastructure, or Platform as a Service. Taking this into account, this paper presents a general review of Cloud computing providers so that users, enterprises, and developers can select the one that meets their needs.
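    As a rough illustration of the service-model taxonomy the review covers, the sketch below groups the providers named above by their primary service model; the mapping is a simplified assumption for illustration, since several of these providers span more than one model.

        # Simplified, assumed mapping of the providers named in the review
        # to their primary cloud service model (many span several models).
        providers = {
            "Google Apps": "SaaS",        # hosted web applications
            "Zoho": "SaaS",               # hosted productivity suite
            "Google App Engine": "PaaS",  # managed application platform
            "Amazon EC2": "IaaS",         # rentable virtual servers
        }

        # Group providers by service model for a side-by-side comparison.
        by_model = {}
        for provider, model in providers.items():
            by_model.setdefault(model, []).append(provider)

        for model, names in sorted(by_model.items()):
            print(f"{model}: {', '.join(names)}")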

    Adaptive Big Data Pipeline

    Over the past three decades, data has evolved exponentially from a simple software by-product into one of a company's most important assets, used to understand customers and foresee trends. Deep learning has demonstrated that big volumes of clean data generally provide more flexibility and accuracy when modeling a phenomenon. However, handling ever-increasing data volumes entails new challenges: the lack of expertise to select the appropriate big data tools for processing pipelines, as well as the speed at which engineers can take such pipelines into production reliably by leveraging the cloud. We introduce a system called Adaptive Big Data Pipelines: a platform to automate data pipeline creation. It provides an interface to capture the data sources, transformations, destinations, and execution schedule. The system builds up the cloud infrastructure, schedules and fine-tunes the transformations, and creates the data lineage graph. The system has been tested on data sets of 50 gigabytes, processing them in just a few minutes without user intervention. (ITESO, A. C.)
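    To make that interface concrete, here is a minimal sketch of what such a declarative pipeline specification might look like; the PipelineSpec class and its field names are hypothetical illustrations, not the platform's actual API.

        from dataclasses import dataclass

        # Hypothetical, simplified pipeline specification mirroring the four
        # inputs named in the abstract: sources, transformations, destinations,
        # and an execution schedule. Not the platform's actual API.
        @dataclass
        class PipelineSpec:
            sources: list          # e.g. raw-data locations to read from
            transformations: list  # ordered list of transformation names
            destinations: list     # tables or buckets to write to
            schedule: str          # cron expression for execution

        spec = PipelineSpec(
            sources=["s3://bucket/raw/events/"],
            transformations=["deduplicate", "normalize_timestamps"],
            destinations=["warehouse.events_clean"],
            schedule="0 * * * *",  # hourly
        )
        print(spec)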

    An analysis of the cloud computing platform

    Thesis (S.M.) -- Massachusetts Institute of Technology, System Design and Management Program, 2009. Includes bibliographical references.
    A slew of articles have been written about the fact that computing will eventually go in the direction of electricity. Just as most software users these days also own the hardware that runs the software, electricity users in the days of yore used to generate their own power. Over time, however, with standardization in the voltage and frequency of generated power and better distribution mechanisms, the generation of electricity was consolidated among fewer utility providers. The same is being forecast for computing infrastructure: it is being touted that more and more users will rent computing infrastructure from a utility or "cloud" provider instead of maintaining their own hardware. This phenomenon is referred to as Cloud Computing or Utility Computing. Cloud computing has existed in some form or another since the beginning of computing. However, the advent of vastly improved software, hardware, and communication technologies has given special meaning to the term and opened up a world of possibilities. It is possible today to start an e-commerce or related company without investing in datacenters. This has proved very beneficial to startups and smaller companies that want to test the efficacy of their idea before making any investment in expensive hardware. Corporations like Amazon, SalesForce.com, Google, IBM, Sun Microsystems, and many more are offering or planning to offer these infrastructure services in one form or another. An ecosystem has already been created, and going by the investment and enthusiasm in this space, the ecosystem is bound to grow. This thesis tries to define and explain the fundamentals of cloud computing. It looks at the technical aspects of this industry and the kinds of applications where the cloud can be used. It also looks at the economic value created by the platform, the network externalities, and the platform's effect on traditional software companies and their reaction to this technology. The thesis also applies the principles of multi-homing, coring, and tipping to the cloud-computing platform and explains the results. The hurdles for both users and providers of this service are also examined.
    by Ratnadeep Bhattacharjee. S.M.
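    To illustrate the "renting computing from a utility" idea in practice, here is a minimal sketch that rents a single virtual server using the AWS SDK for Python (boto3); the region, machine image ID, and instance type are placeholder assumptions.

        import boto3

        # Minimal sketch: rent one virtual server from a cloud "utility"
        # instead of buying hardware. The AMI ID and instance type are
        # placeholder assumptions, not recommendations.
        ec2 = boto3.client("ec2", region_name="us-east-1")

        response = ec2.run_instances(
            ImageId="ami-0123456789abcdef0",  # placeholder machine image
            InstanceType="t3.micro",          # small, inexpensive instance
            MinCount=1,
            MaxCount=1,
        )
        instance_id = response["Instances"][0]["InstanceId"]
        print(f"Launched {instance_id}; terminate it to stop paying.")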

    Sistemas interativos e distribuídos para telemedicina (Interactive and distributed systems for telemedicine)

    Doctoral thesis in Computer Science. During the last decades, healthcare organizations have continually adopted information technologies to improve their services. Recently, partly due to the financial crisis, reforms in the health sector have encouraged the emergence of new telemedicine solutions to optimize the use of human resources and equipment. Technologies such as cloud computing, mobile computing, and Web systems have been important to the success of these new telemedicine applications: emerging distributed-computing features facilitate the connection of medical communities and promote telemedicine services and real-time collaboration, while mobile devices make remote work possible anytime and anywhere. Moreover, many features that have become commonplace in social networks, such as data sharing, message exchange, discussion forums, and videoconferencing, have the potential to foster collaboration in the health sector. The main objective of this research was to investigate more agile computational solutions that promote the sharing of clinical data and facilitate the creation of collaborative workflows in radiology. By exploring current Web and mobile computing technologies, we designed a ubiquitous solution for medical image visualization and developed a collaborative system for radiology based on cloud computing technology. Along the way, we investigated methodologies for text mining, semantic representation, and content-based image retrieval. Finally, to ensure patient privacy and streamline data sharing in collaborative environments, we propose a machine learning methodology to anonymize medical images.
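    As a simplified illustration of the anonymization step (the thesis proposes a machine learning approach; the sketch below is a plain rule-based stand-in using the pydicom library, with hypothetical file paths):

        import pydicom

        # Simplified rule-based stand-in for the ML-based anonymization the
        # thesis proposes: blank out directly identifying DICOM header tags.
        # The file paths are hypothetical.
        IDENTIFYING_TAGS = ["PatientName", "PatientID", "PatientBirthDate"]

        ds = pydicom.dcmread("study/image.dcm")
        for tag in IDENTIFYING_TAGS:
            if tag in ds:
                setattr(ds, tag, "")  # blank the identifying value
        ds.save_as("study/image_anon.dcm")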

    Enhancing cyber assets visibility for effective attack surface management : Cyber Asset Attack Surface Management based on Knowledge Graph

    The contemporary digital landscape is filled with challenges, chief among them the management and security of cyber assets, including the ever-growing shadow IT. The evolving technology landscape has produced an expansive array of solutions, making it challenging to select and deploy compatible ones in a structured manner. This thesis explores the critical role of Cyber Asset Attack Surface Management (CAASM) technologies in managing cyber attack surfaces, focusing on the open-source CAASM tool Starbase, by JupiterOne. It starts by underlining the importance of comprehending the cyber assets that need defending, and acknowledges the Cyber Defense Matrix as a methodical and flexible approach to understanding and addressing cyber security challenges. A comprehensive analysis of market trends and business needs validated the necessity of asset security management tools as fundamental components of firms' security journeys. CAASM was selected as a promising solution among various tools due to its capabilities, ease of use, and seamless integration with cloud environments using APIs, addressing shadow IT challenges. A practical use case integrating Starbase with GitHub was developed to demonstrate CAASM's usability and flexibility in managing cyber assets in organizations of varying sizes. The use case enhances the knowledge graph's aesthetics and usability using Neo4j Desktop and Neo4j Bloom, making it accessible and insightful even for non-technical users. The thesis concludes with practical guidelines, in the appendices and on GitHub, for reproducing the use case.
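    For a flavor of how such an asset knowledge graph can be queried, here is a minimal sketch using the official Neo4j Python driver; the node labels, relationship type, and connection credentials are hypothetical and may differ from Starbase's actual schema.

        from neo4j import GraphDatabase

        # Minimal sketch of querying a cyber-asset knowledge graph:
        # list users by how many repositories they own. Labels,
        # relationship type, and credentials are hypothetical.
        driver = GraphDatabase.driver("bolt://localhost:7687",
                                      auth=("neo4j", "password"))

        QUERY = """
        MATCH (u:User)-[:OWNS]->(r:Repository)
        RETURN u.name AS owner, count(r) AS repos
        ORDER BY repos DESC
        """

        with driver.session() as session:
            for record in session.run(QUERY):
                print(record["owner"], record["repos"])
        driver.close()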

    Gestión de métricas de seguridad sobre proveedores de servicios Cloud (Management of security metrics for Cloud service providers)

    Information and Communication Technologies (ICT) are in constant evolution. One of the current trends is the use of Cloud Computing services, also known as the "Cloud". These services provide on-demand access to resources such as software tools, servers, and storage systems, representing a significant reduction of infrastructure costs for customers. However, there is some distrust about the risks of contracting Cloud services; the concerns stem from issues such as data confidentiality and its management, especially with sensitive information, or data loss. Security is therefore a key aspect. The main objective of this Final Degree Project is precisely to develop a tool that processes the available information about the security metrics of Cloud Computing service providers and makes it accessible through a Web service. In this way, users can assess the services offered by different providers and compare them to find the one that best suits their needs. In particular, this project focuses on developing a system that allows other applications to access security-metric-based metadata about providers through an API. This metadata is obtained by processing the Cloud service assessment (CAIQ) documents provided by the Cloud Security Alliance (CSA).
    Degree in Telecommunication Technologies Engineering
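    A minimal sketch of what such a metrics API might look like, using Flask; the endpoint path, input file, and metric fields are hypothetical illustrations, not the project's actual interface.

        import csv
        from flask import Flask, jsonify

        app = Flask(__name__)

        # Hypothetical pre-processed CAIQ results, one row per provider:
        # provider,answered,yes_answers (not the project's actual format).
        def load_metrics(path="caiq_metrics.csv"):
            with open(path, newline="") as f:
                return {row["provider"]: {"answered": int(row["answered"]),
                                          "yes_answers": int(row["yes_answers"])}
                        for row in csv.DictReader(f)}

        @app.route("/providers/<name>/metrics")
        def provider_metrics(name):
            metrics = load_metrics()
            if name not in metrics:
                return jsonify({"error": "unknown provider"}), 404
            return jsonify(metrics[name])

        if __name__ == "__main__":
            app.run()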

    Data-Driven Intelligent Scheduling For Long Running Workloads In Large-Scale Datacenters

    Cloud computing is becoming a fundamental facility of today's society. Large-scale public and private cloud datacenters, each spanning millions of servers as a warehouse-scale computer, support most of the business of Fortune 500 companies and serve billions of users around the world. Unfortunately, the industry-wide average datacenter utilization today is as low as 6% to 12%. Low utilization not only hurts the operational and capital components of cost efficiency, but also becomes a scaling bottleneck due to the limits of the electricity delivered by the nearby utility. Improving multi-resource efficiency in global datacenters is therefore both critical and challenging. Additionally, with the great commercial success of diverse big data analytics services, enterprise datacenters are evolving to host heterogeneous computation workloads, including online web services, batch processing, machine learning, streaming computing, interactive query, and graph computation, on shared clusters. Most of these are long-running workloads that leverage long-lived containers to execute tasks. We survey datacenter resource scheduling work from the last 15 years. Most previous approaches are designed to maximize cluster efficiency for short-lived tasks in batch processing systems like Hadoop; they are not suitable for modern long-running workloads in systems such as microservices, Spark, Flink, Pregel, Storm, or TensorFlow. It is urgent to develop new, effective scheduling and resource allocation approaches to improve efficiency in large-scale enterprise datacenters. In this dissertation, we are the first to define and identify the problems, challenges, and scenarios of scheduling and resource management for diverse long-running workloads in modern datacenters. Such workloads rely on predictive scheduling techniques to perform reservation, auto-scaling, migration, or rescheduling, which pushes us to pursue more intelligent scheduling techniques backed by adequate predictive knowledge. We specify what intelligent scheduling is, which abilities are necessary for it, and how to leverage it to turn NP-hard online scheduling problems into tractable offline scheduling problems. We designed and implemented an intelligent cloud datacenter scheduler that automatically performs resource-to-performance modeling, predictive optimal reservation estimation, and QoS (interference)-aware predictive scheduling to maximize resource efficiency across multiple dimensions (CPU, memory, network, disk I/O) while strictly guaranteeing service level agreements (SLAs) for long-running workloads. Finally, we introduce large-scale co-location techniques for executing long-running and other workloads on the shared global datacenter infrastructure of Alibaba Group, which effectively improve cluster utilization from 10% to an average of 50%. This goes far beyond scheduling, involving technique evolutions in IDC, networking, physical datacenter topology, storage, server hardware, operating systems, and containerization. We demonstrate its effectiveness by analyzing the newest Alibaba public cluster trace from 2017, and we are the first to reveal a global view of the scenarios, challenges, and status of Alibaba's large-scale global datacenters through data, including big promotion events like Double 11.
    Data-driven intelligent scheduling methodologies and effective infrastructure co-location techniques are critical and necessary to pursue maximized multi-resource efficiency in modern large-scale datacenters, especially for long-running workloads.
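    As a toy illustration of the predictive reservation estimation mentioned above (the percentile-plus-headroom rule below is a simplified assumption, not the dissertation's actual model):

        import numpy as np

        # Toy predictive reservation estimator: reserve the p-th percentile
        # of recently observed CPU usage plus a safety headroom, so a
        # long-running container rarely exceeds its reservation. The
        # percentile and headroom values are illustrative assumptions.
        def estimate_reservation(usage_history, percentile=99, headroom=0.10):
            base = np.percentile(usage_history, percentile)
            return base * (1.0 + headroom)

        # Example: one day of synthetic per-minute CPU usage (in cores).
        rng = np.random.default_rng(0)
        usage = rng.gamma(shape=2.0, scale=0.5, size=1440)
        print(f"reserve {estimate_reservation(usage):.2f} cores")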

    Technologies and Applications for Big Data Value

    This open access book explores cutting-edge solutions and best practices for big data and data-driven AI applications in the data-driven economy. It gives the reader a basis for understanding how technical issues can be overcome to offer real-world solutions to major industrial areas. The book starts with an introductory chapter that provides an overview by positioning the following chapters in terms of their contributions to the technology frameworks that are key elements of the Big Data Value Public-Private Partnership and the upcoming Partnership on AI, Data and Robotics. The remainder of the book is arranged in two parts. The first part, "Technologies and Methods", contains horizontal contributions of technologies and methods that enable data value chains to be applied in any sector. The second part, "Processes and Applications", details experience reports and lessons from using big data and data-driven approaches in processes and applications. Its chapters are co-authored with industry experts and cover domains including health, law, finance, retail, manufacturing, mobility, and smart cities. Contributions emanate from the Big Data Value Public-Private Partnership and the Big Data Value Association, which have acted as the nucleus of the European data community, bringing businesses together with leading researchers to harness the value of data for the benefit of society, business, science, and industry. The book is of interest to two primary audiences: first, undergraduate and postgraduate students and researchers in fields including big data, data science, data engineering, and machine learning and AI; and second, practitioners and industry experts engaged in data-driven systems and software design and deployment projects who are interested in employing these advanced methods to address real-world problems.