538 research outputs found

    Estimating the market uncertainty of desktop cloud solutions

    Cloud computing is a new way to consume and deliver IT solutions as services, transforming high CAPEX into more manageable OPEX. Customers are able to buy IT services on demand from a provider hosting an infrastructure that is automated, standardized, and virtualized, with great flexibility and economies of scale. Alternatively, companies can buy the infrastructure components themselves and build private cloud solutions, delivering the same benefits to their users. Cloud computing can deliver these capabilities, but what is the view among practitioners in different industries? What are the observed benefits, beliefs, and thoughts about cloud computing, and how will it help them in their business? This thesis looks at the concept of cloud computing in general, but its main focus is desktop cloud solutions. The thesis was completed as a multiple case study within two industries in Sweden: the mining and manufacturing industry, and the security services and solutions industry. The objective of the thesis is to determine the market uncertainty around desktop cloud solutions, examine the possible management structures, and identify what would trigger customers to implement a desktop cloud solution. To formulate the research questions and create a conceptual framework, the thesis relies on the theories presented by Mark Gaynor. Data collection was done as a qualitative multiple case study: data was collected by interviewing IT practitioners from the case industries, then analyzed and used to answer the research questions. The key findings of the thesis show that there are clear market opportunities for desktop cloud solutions.
Companies are facing challenges with their current desktop infrastructure, and these problems could, at least partly, be solved by a desktop cloud solution. There are still both technical and business concerns that providers have to resolve before customers will implement desktop cloud solutions. For companies accustomed to IT outsourcing that already have an outsourcing strategy, outsourcing their desktops is a possible next area of IT outsourcing; those companies will most probably choose a solution with a centralized management structure. Companies are not yet ready to move business-critical functions into the "cloud", but cloud solutions are considered when new solutions are implemented. So, at the moment, non-critical business functions are starting to become ready to be bought as a service and "moved into the cloud".

    Methods and Applications of Synthetic Data Generation

    The advent of data mining and machine learning has highlighted the value of large and varied sources of data, while increasing the demand for synthetic data that captures the structural and statistical characteristics of the original data without revealing personal or proprietary information contained in the original dataset. In this dissertation, we use examples from original research to show that, using appropriate models and input parameters, synthetic data that mimics the characteristics of real data can be generated with sufficient rate and quality to address the volume, structural complexity, and statistical variation requirements of research and development of digital information processing systems. First, we present a progression of research studies using a variety of tools to generate synthetic network traffic patterns, enabling us to observe relationships between network latency and communication pattern benchmarks at all levels of the network stack. We then present a scalable framework for extracting and synthesizing large-scale IoT data with complex structural characteristics, and demonstrate the use of the generated data in benchmarking IoT middleware. Finally, we detail research on synthetic image generation for deep learning models using 3D modeling. We find that synthetic images can be an effective technique for augmenting limited sets of real training data, and in use cases that benefit from incremental training or model specialization, we find that pretraining on synthetic images provides a usable base model for transfer learning.
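The core idea of the dissertation, fitting a model to real data and sampling synthetic records that preserve its statistical characteristics, can be sketched in a minimal form. The Gaussian model, the latency figures, and the function names below are illustrative assumptions, not the dissertation's actual tooling:

```python
import random
import statistics

def fit_gaussian(values):
    # Estimate the mean and standard deviation of the real data.
    return statistics.mean(values), statistics.stdev(values)

def synthesize(mean, stdev, n, seed=42):
    # Sample synthetic values that mimic the fitted distribution.
    # Seeding keeps the synthetic dataset reproducible.
    rng = random.Random(seed)
    return [rng.gauss(mean, stdev) for _ in range(n)]

# Hypothetical "real" network latency measurements (milliseconds).
real = [10.2, 11.1, 9.8, 10.5, 10.9, 9.6, 10.4, 11.3]
mu, sigma = fit_gaussian(real)

# Generate far more synthetic samples than real ones were available,
# addressing the volume requirement while preserving the statistics.
synthetic = synthesize(mu, sigma, 1000)
print(round(statistics.mean(synthetic), 1))
```

A real generator would fit richer models (traffic patterns, structural schemas, 3D scenes), but the fit-then-sample structure is the same.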

    Cloud-computing strategies for sustainable ICT utilization : a decision-making framework for non-expert Smart Building managers

    Virtualization of processing power, storage, and networking applications via cloud computing allows Smart Buildings to operate heavy-demand computing resources off-premises. While this approach reduces in-house costs and energy use, recent case studies have highlighted complexities in the decision-making processes associated with implementing cloud computing. This complexity is due to the rapid evolution of these technologies without a standardized approach among the organizations offering cloud computing as a commercial concern. This study defines the term Smart Building as an ICT environment where a degree of system integration is accomplished. Non-expert managers are highlighted as key users of the outcomes of this project, given the diverse nature of Smart Buildings’ operational objectives. This research evaluates different ICT management methods to effectively support decisions made by non-expert clients to deploy different models of cloud computing services in their Smart Buildings’ ICT environments. The objective of this study is to reduce the need for costly third-party ICT consultancy providers, so that non-experts can focus more on their Smart Buildings’ core competencies rather than the complex, expensive, and energy-consuming processes of ICT management. The gap identified by this research leaves non-expert managers vulnerable when making decisions regarding cloud computing cost estimation, deployment assessment, associated power consumption, and management flexibility in their Smart Buildings’ ICT environments. The project analyses cloud computing decision-making concepts with reference to different Smart Building ICT attributes. In particular, it focuses on a structured programme of data collection achieved through semi-structured interviews, cost simulations, and risk-analysis surveys.
The main output is a theoretical management framework for non-expert decision-makers across variously operated Smart Buildings. Furthermore, a decision-support tool is designed to enable non-expert managers to identify the extent of virtualization potential by evaluating different implementation options. This is presented in relation to contract limitations, security challenges, system integration levels, sustainability, and long-term costs. These requirements are explored against cloud demand changes observed across specified periods. Dependencies were found to vary greatly depending on numerous organizational aspects such as performance, size, and workload. The study argues that constructing long-term, sustainable, and cost-efficient strategies for any cloud deployment depends on the thorough identification of the services required off- and on-premises. It points out that most of today’s heavily burdened Smart Buildings outsource these services to costly independent suppliers, which causes unnecessary management complexity, additional cost, and system incompatibility. The main conclusions argue that cloud-computing costs can differ depending on a Smart Building’s attributes and ICT requirements, and although in most cases cloud services are more convenient and cost-effective at the early stages of the deployment and migration process, they can become costly in the future if not planned carefully using cost-estimation service patterns. The results of the study can be exploited to enhance core competencies within Smart Buildings in order to maximize growth and attract new business opportunities.
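The cost-estimation side of such a decision-support tool can be illustrated with a deliberately simple total-cost-of-ownership comparison. All figures and function names are hypothetical; a real tool would also model the contract limitations, demand changes, and power consumption discussed above:

```python
def cumulative_cost(upfront, monthly, months):
    # Total cost of ownership after each month of operation.
    return [upfront + monthly * m for m in range(1, months + 1)]

def breakeven_month(on_prem, cloud):
    # First month at which the cloud option becomes more expensive
    # than on-premises, or None if it stays cheaper over the horizon.
    for m, (p, c) in enumerate(zip(on_prem, cloud), start=1):
        if c > p:
            return m
    return None

# Hypothetical figures: on-premises has high upfront CAPEX but low
# monthly OPEX; the cloud option has no CAPEX but a higher monthly fee.
on_prem = cumulative_cost(upfront=60_000, monthly=1_000, months=60)
cloud   = cumulative_cost(upfront=0,      monthly=2_500, months=60)

print(breakeven_month(on_prem, cloud))  # → 41
```

This mirrors the study's conclusion that cloud services are often cheaper early in a deployment but can become costly later without careful cost-estimation planning.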

    Knowledge Discovery in Databases for Competitive Advantage


    Technologies and Applications for Big Data Value

    This open access book explores cutting-edge solutions and best practices for big data and data-driven AI applications in the data-driven economy. It provides the reader with a basis for understanding how technical issues can be overcome to offer real-world solutions to major industrial areas. The book starts with an introductory chapter that provides an overview of the book by positioning the following chapters in terms of their contributions to technology frameworks which are key elements of the Big Data Value Public-Private Partnership and the upcoming Partnership on AI, Data and Robotics. The remainder of the book is arranged in two parts. The first part, “Technologies and Methods”, contains horizontal contributions of technologies and methods that enable data value chains to be applied in any sector. The second part, “Processes and Applications”, details experience reports and lessons from using big data and data-driven approaches in processes and applications. Its chapters are co-authored with industry experts and cover domains including health, law, finance, retail, manufacturing, mobility, and smart cities. Contributions emanate from the Big Data Value Public-Private Partnership and the Big Data Value Association, which have acted as the European data community's nucleus to bring together businesses with leading researchers to harness the value of data to benefit society, business, science, and industry. The book is of interest to two primary audiences: first, undergraduate and postgraduate students and researchers in various fields, including big data, data science, data engineering, and machine learning and AI; second, practitioners and industry experts engaged in data-driven systems and software design and deployment projects who are interested in employing these advanced methods to address real-world problems.

    Experiences of Public Agency Managers When Making Outsourcing Decisions

    Managers in state transportation agencies in the United States must frequently choose between using the talents and abilities of in-house staff or outsourcing road and bridge design projects. Budgetary crises have strongly affected funding for transportation infrastructure. Facing budgetary pressure to suppress costs, managers must frequently choose between outsourcing a project and performing it in-house. Yet decision-making models for these decisions are inadequate. The purpose of this phenomenological study was to explore and describe the lived experiences of public agency managers when making decisions to outsource core government functions such as road and bridge design projects. The research question was: What are the lived experiences of managers at the public agency when making decisions about whether to outsource core government functions such as road and bridge design projects? Participants were interviewed about their lived experiences at a state Department of Transportation with 'make or buy' decisions. Purposeful sampling was used to select 19 participants for the interviews, and the collected data were coded and analyzed using a van Kaam approach. Five themes emerged as findings: acceptance of outsourcing, benefits versus problems, outsourcing propelled by staff limits, loss of control when a project is outsourced, and political pressure for and against outsourcing. These findings may be relevant for management personnel at U.S. public agencies. The implications for positive social change include reduced costs, more efficient use of the time and talent of management personnel in state transportation agencies, and cost benefits for both management and the public.

    A patient agent controlled customized blockchain based framework for internet of things

    Although Blockchain implementations have emerged as revolutionary technologies for various industrial applications, including cryptocurrencies, they have not been widely deployed to store data streaming from sensors to remote servers in architectures known as the Internet of Things. New Blockchain-for-the-Internet-of-Things models promise secure solutions for eHealth, smart cities, and other applications. These models pave the way for continuous monitoring of patients’ physiological signs with wearable sensors to augment traditional medical practice without recourse to storing data with a trusted authority. However, existing Blockchain algorithms cannot accommodate the huge volume, security, and privacy requirements of health data. In this thesis, our first contribution is an end-to-end secure eHealth architecture that introduces an intelligent Patient Centric Agent. The Patient Centric Agent, executing on dedicated hardware, manages the storage and access of streams of sensor-generated health data in a customized Blockchain and other, less secure repositories. As IoT devices cannot host Blockchain technology due to their limited memory, power, and computational resources, the Patient Centric Agent coordinates and communicates with a private customized Blockchain on behalf of the wearable devices. While the adoption of a Patient Centric Agent offers solutions for continuous monitoring of patients’ health and for dealing with storage, data privacy, and network security issues, the architecture is vulnerable to Denial of Service (DoS) and single-point-of-failure attacks. To address this issue, we advance a second contribution: a decentralised eHealth system in which the Patient Centric Agent is replicated at three levels: the Sensing Layer, the NEAR Processing Layer, and the FAR Processing Layer. The functionalities of the Patient Centric Agent are customized to manage the tasks of the three levels. Simulations confirm protection of the architecture against DoS attacks.
Few patients require all their health data to be stored in Blockchain repositories; instead, they need to select an appropriate storage medium for each chunk of data by matching their personal needs and preferences with the features of candidate storage media. Motivated by this context, we advance a third contribution: a recommendation model for health data storage that can accommodate patient preferences and make storage decisions rapidly, in real time, even with streamed data. The mapping between health data features and the characteristics of each repository is learned using machine learning. The Blockchain’s capacity to make transactions and store records without central oversight enables its application to IoT networks outside health, such as underwater IoT networks, where the unattended nature of the nodes threatens their security and privacy. However, underwater IoT differs from ground IoT, as acoustic signals are the communication medium, leading to high propagation delays and high error rates exacerbated by turbulent water currents. Our fourth contribution is a customized Blockchain-leveraged framework, with the Patient Centric Agent model renamed the Smart Agent, for securely monitoring underwater IoT. Finally, the Smart Agent has been investigated for developing an IoT smart home and city monitoring framework. The key algorithms underpinning each contribution have been implemented and analysed using simulators.
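The third contribution's recommendation idea, matching each chunk of data and the patient's preferences to a candidate storage medium, can be sketched as a linear scoring rule. The media, attribute scores, and weights below are invented for illustration; the thesis learns this mapping with machine learning rather than hand-set scores:

```python
# Candidate storage media with illustrative (made-up) attribute
# scores on a 0-1 scale: higher is better on that attribute.
MEDIA = {
    "blockchain": {"security": 0.9, "cost": 0.2, "latency": 0.3},
    "cloud":      {"security": 0.6, "cost": 0.6, "latency": 0.7},
    "local_edge": {"security": 0.5, "cost": 0.9, "latency": 0.9},
}

def recommend(preferences):
    # Score each medium as the preference-weighted sum of its
    # attributes and pick the best; fast enough for streamed data.
    def score(attrs):
        return sum(preferences.get(k, 0.0) * v for k, v in attrs.items())
    return max(MEDIA, key=lambda m: score(MEDIA[m]))

# A patient who weights security far above cost or latency.
print(recommend({"security": 1.0, "cost": 0.1, "latency": 0.1}))  # → blockchain
```

The same interface could sit behind a learned model: only the scoring function changes, not the per-chunk decision loop.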

    The impact of Cloud Computing adoption on IT Service Accounting approaches – A Customer Perspective on IaaS Pricing Models

    Cloud computing has recently been a trending topic beyond the technological field, owing to its implementation and expansion driven by the internet revolution. Although it has reached end-users’ hands only during the past two years, the technology has been used for a longer period in the business world. In a scenario where cost-cutting strategies and start-up companies seem to have an increasing importance in the global economy, cloud computing has been one of the pillars of many businesses’ success in recent times. Companies like Netflix, Instagram, or Spotify are recent examples of how an enterprise can grow spectacularly quickly and become a market leader by basing its business activity on cloud technology. This master's thesis explains how companies should behave when acquiring a cloud service. Given the breadth of the cloud market, the specific focus of the work is Infrastructure as a Service, and the pay-as-you-go model was chosen for the study due to the novelty it introduces in the information technology market. Apart from technical details, the economic perspective on cloud computing has also been researched, as not only do providers care about how their service has to be priced; companies also want to predict the expenditure to be made on their brand-new information service. The work presents the predecessors of cloud computing as well as the theories that emerged to explain its cost and accounting aspects, and finally explains how cloud computing changed the role of IT service accounting. After a brief introduction to cloud computing and its different service models, a market analysis of different providers is performed to extract the patterns and peculiarities of the current cloud market. Transferring the knowledge obtained in the market analysis, an accounting model is developed, based on cost categories, cost factors, and a metering framework.
Finally, a case study is performed applying the model to the market situation extracted from the market analysis.
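The metering side of such a pay-as-you-go accounting model can be sketched as a simple itemized bill: metered consumption per cost category multiplied by a unit price. The price list and usage figures below are invented for illustration and are not any provider's actual rates:

```python
# Illustrative pay-as-you-go price list (hypothetical rates, in EUR):
# compute per VM-hour, storage per GB-month, network egress per GB.
PRICES = {"vm_hour": 0.12, "gb_storage_month": 0.05, "gb_egress": 0.09}

def monthly_bill(usage):
    # Itemize the bill per cost category: metered quantity times
    # unit price, rounded to cents for accounting purposes.
    return {item: round(qty * PRICES[item], 2) for item, qty in usage.items()}

# One month of metered usage for a small deployment: one VM running
# continuously (~720 hours), 200 GB stored, 50 GB transferred out.
usage = {"vm_hour": 720, "gb_storage_month": 200, "gb_egress": 50}
bill = monthly_bill(usage)
print(bill, round(sum(bill.values()), 2))  # total → 100.9
```

Forecasting expenditure, the customer-side concern raised above, then reduces to running the same function over projected rather than metered usage.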
