
    A decentralized framework for cross administrative domain data sharing

    Federation of messaging and storage platforms located in remote datacenters is an essential functionality for sharing data among geographically distributed platforms. When systems are administered by the same owner, data replication reduces data access latency by bringing data closer to applications and enables fault tolerance for disaster recovery of an entire location. When storage platforms are administered by different owners, data replication across different administrative domains is essential for enterprise application data integration. Contents and services managed by different software platforms need to be integrated to provide richer contents and services. Clients may need to share subsets of data in order to enable collaborative analysis and service integration. Platforms usually include proprietary federation functionalities and specific APIs to let external software and platforms access their internal data. These different techniques may not be applicable to all environments and networks due to security and technological restrictions. Moreover, the federation of dispersed nodes under a decentralized administration scheme is still a research issue. This thesis is a contribution along this research direction, as it introduces and describes a framework, called “WideGroups”, directed towards the creation and management of an automatic federation and integration of widely dispersed platform nodes. It is based on groups used to exchange messages among distributed applications located in different remote datacenters. Groups are created and managed using client-side programmatic configuration, without touching servers. WideGroups enables the extension of software platform services to nodes belonging to different administrative domains in a wide area network environment. It lets different nodes form ad-hoc overlay networks on the fly, depending on message destinations located in distinct administrative domains. It supports multiple dynamic overlay networks based on message groups, dynamic discovery of nodes, and automatic setup of overlay networks among nodes with no server-side configuration. I designed and implemented platform connectors to integrate the framework as the federation module of Message Oriented Middleware and Key Value Store platforms, which are among the most widespread paradigms supporting data sharing in distributed systems.
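
    As a rough illustration of the client-side programmatic configuration described above, the Python sketch below shows how a hypothetical client might declare a message group spanning administrative domains and publish to it. The class and method names (WideGroupsClient, join_group, publish) and the domain names are illustrative assumptions, not the framework's actual API.

        # Hypothetical sketch of client-side group configuration (all names are
        # assumptions, not the actual WideGroups API). No server-side setup is
        # involved: the client declares the group, and the overlay toward remote
        # domains would be built on demand.

        class WideGroupsClient:
            def __init__(self, local_node):
                self.local_node = local_node
                self.groups = {}

            def join_group(self, group, remote_domains):
                # Joining a group triggers discovery of nodes in the listed
                # administrative domains; the overlay is set up lazily.
                self.groups[group] = {"domains": list(remote_domains)}

            def publish(self, group, message):
                # Messages are routed over the per-group overlay network.
                if group not in self.groups:
                    raise KeyError(f"not a member of group {group!r}")
                print(f"[{self.local_node}] -> {group}: {message!r}")

        # Usage: federate a node in datacenter A with nodes in another domain.
        client = WideGroupsClient("broker.dc-a.example.org")
        client.join_group("shared-orders", ["dc-b.example.net"])
        client.publish("shared-orders", b'{"order_id": 42, "status": "shipped"}')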

    Deep Learning in the Automotive Industry: Applications and Tools

    Deep Learning refers to a set of machine learning techniques that utilize neural networks with many hidden layers for tasks such as image classification, speech recognition, and language understanding. Deep learning has proven to be very effective in these domains and is pervasively used by many Internet services. In this paper, we describe different automotive use cases for deep learning, in particular in the domain of computer vision. We survey the current state of the art in libraries, tools, and infrastructures (e.g. GPUs and clouds) for implementing, training, and deploying deep neural networks. We particularly focus on convolutional neural networks and computer vision use cases, such as the visual inspection process in manufacturing plants and the analysis of social media data. To train neural networks, curated and labeled datasets are essential; however, both the availability and scope of such datasets are typically very limited. A main contribution of this paper is the creation of an automotive dataset that allows us to learn and automatically recognize different vehicle properties. We describe an end-to-end deep learning application utilizing a mobile app for data collection and process support, and an Amazon-based cloud backend for storage and training. For training, we evaluate the use of cloud and on-premises infrastructures (including multiple GPUs) in conjunction with different neural network architectures and frameworks. We assess both the training times and the accuracy of the classifier. Finally, we demonstrate the effectiveness of the trained classifier in a real-world setting during the manufacturing process.
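
    A minimal PyTorch sketch of the kind of convolutional classifier such a pipeline could train is given below; the architecture, input size, and number of vehicle-property classes are assumptions for illustration, not the models or dataset evaluated in the paper.

        # Minimal CNN sketch for recognizing a vehicle property from an image
        # (illustrative assumptions only; not the paper's architecture or dataset).
        import torch
        import torch.nn as nn

        class VehiclePropertyCNN(nn.Module):
            def __init__(self, num_classes=3):  # number of property classes is assumed
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                )
                self.classifier = nn.Linear(32 * 56 * 56, num_classes)  # assumes 224x224 input

            def forward(self, x):
                x = self.features(x)
                return self.classifier(x.flatten(start_dim=1))

        model = VehiclePropertyCNN()
        logits = model(torch.randn(1, 3, 224, 224))  # one dummy RGB image
        print(logits.shape)                          # torch.Size([1, 3])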

    A Data-based Guiding Framework for Digital Transformation

    This paper presents a framework for guiding organizations in initiating and sustaining digital transformation initiatives. Digital transformation is a long-term journey that an organization embarks on when it decides to question its practices in light of management, operation, and technology challenges. The guiding framework stresses the importance of data in any digital transformation initiative by suggesting four stages, referred to as collection, processing, storage, and dissemination. Because digital transformation could impact different areas of an organization, for instance business processes and business models, each stage suggests techniques to expose data. Two case studies are used in the paper to illustrate how the guiding framework is put into action.
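
    To make the four stages concrete, the toy Python sketch below walks a few records through collection, processing, storage, and dissemination; the functions and data layout are illustrative assumptions, not part of the framework itself.

        # Toy walk-through of the four data stages (collection, processing,
        # storage, dissemination); functions and data layout are assumptions.
        def collect(sources):
            return [record for source in sources for record in source]

        def process(records):
            return [{**r, "valid": r.get("value") is not None} for r in records]

        def store(records, database):
            database.extend(records)
            return database

        def disseminate(database):
            valid = sum(1 for r in database if r["valid"])
            return {"records": len(database), "valid": valid}

        database = []
        raw = collect([[{"value": 10}], [{"value": None}]])
        print(disseminate(store(process(raw), database)))  # {'records': 2, 'valid': 1}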

    Towards Secure Collaboration in Federated Cloud Environments

    Public administrations across Europe have been actively following and adopting cloud paradigms to varying degrees. By establishing modern data centers and consolidating their infrastructures, many organizations already benefit from a range of cloud advantages. However, there is a growing need to further support the consolidation and sharing of resources across different public entities. The ever-increasing volume of processed data and diversity of organizational interactions stress this need even further, calling for integration at the levels of infrastructure, data, and services. This is currently hindered by strict requirements in the field of data security and privacy. In this paper, we present ongoing work aimed at enabling secure private cloud federations for public administrations, performed in the scope of the SUNFISH H2020 project. We focus on architectural components and processes that establish cross-organizational enforcement of data security policies in mixed and heterogeneous environments. Our proposal introduces proactive restriction of data flows in federated environments by integrating real-time security policy enforcement with its post-execution conformance verification. The goal of this framework is to enable secure service integration and data exchange in cross-entity contexts by inspecting data flows and assuring their conformance with security policies at both the organizational and the federation level.
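
    The Python sketch below illustrates the general idea of proactive data-flow restriction in a federation: a flow is allowed only if both an organizational policy and a federation-level policy permit it. The class, policy structure, and organization names are assumptions for illustration, not the SUNFISH framework's actual components.

        # Illustrative sketch of proactive data-flow restriction in a federation;
        # names and policy structure are assumptions, not SUNFISH components.
        from dataclasses import dataclass

        @dataclass(frozen=True)
        class DataFlow:
            source_org: str
            target_org: str
            classification: str  # e.g. "public", "internal", "personal"

        def is_allowed(flow, org_policies, federation_policy):
            # Organizational policy: which classifications the source may release.
            org_ok = flow.classification in org_policies.get(flow.source_org, set())
            # Federation policy: which (source, target, classification) routes exist.
            fed_ok = (flow.source_org, flow.target_org, flow.classification) in federation_policy
            return org_ok and fed_ok

        org_policies = {"tax-agency": {"public", "internal"}}
        federation_policy = {("tax-agency", "statistics-office", "internal")}

        print(is_allowed(DataFlow("tax-agency", "statistics-office", "internal"),
                         org_policies, federation_policy))   # True
        print(is_allowed(DataFlow("tax-agency", "statistics-office", "personal"),
                         org_policies, federation_policy))   # False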

    A conceptual framework of a cloud-based customer analytics tool for retail SMEs

    Since customers are seen as a strategic element in a company’s downstream supply chain, many retail organizations have been employing a customer-centric business strategy and have started investing in technologies and solutions, known as customer analytics, that are capable of processing huge amounts of customer data for enhanced decision making. Customer analytics has been of significant importance in most developed economies around the world, particularly for large organizations. The off-the-shelf analytics solutions provided by vendors are perceived to be unmanageable, risky, and unaffordable, especially for Small and Medium Enterprises (SMEs) operating in the retail sector. This becomes more vital for SMEs in developing countries, especially in the Eastern part of Europe, where they constitute a noteworthy part of the economy. The majority of SMEs in these countries lack the facilities, infrastructure, and abilities to run such analytical applications. Not being able to extract strategic knowledge from customer data is a missing component for them in remaining competitive and sustainable in the market from a relationship marketing point of view. The aim of this paper is to propose a conceptual model that addresses this problem by providing retail SMEs with a cloud-based open platform for customer data analytics and knowledge extraction. The platform will be able to connect with numerous apps already employed at the retail SMEs, acquire customer data, and then perform customer analytics in order to produce a rich set of reports and knowledge.
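
    As a small example of the kind of customer analytics such a platform could run once it has acquired transaction data from a retailer's apps, the Python sketch below computes a simple RFM-style (recency, frequency, monetary) summary per customer. The data layout is an assumption for illustration, not the paper's proposed platform.

        # Minimal RFM-style customer summary over assumed transaction records.
        from collections import defaultdict
        from datetime import date

        transactions = [
            {"customer": "C1", "date": date(2024, 3, 1), "amount": 25.0},
            {"customer": "C1", "date": date(2024, 4, 10), "amount": 40.0},
            {"customer": "C2", "date": date(2024, 1, 5), "amount": 15.0},
        ]

        def rfm_summary(transactions, today=date(2024, 5, 1)):
            summary = defaultdict(lambda: {"recency_days": None, "frequency": 0, "monetary": 0.0})
            for t in transactions:
                s = summary[t["customer"]]
                s["frequency"] += 1
                s["monetary"] += t["amount"]
                days = (today - t["date"]).days
                if s["recency_days"] is None or days < s["recency_days"]:
                    s["recency_days"] = days
            return dict(summary)

        print(rfm_summary(transactions))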

    The Practice of Basic Informatics 2020

    Version 2020/04/02. Kyoto University provides courses on 'The Practice of Basic Informatics' as part of its Liberal Arts and Sciences Program. The course is taught at many schools and departments, and course contents vary to meet the requirements of these schools and departments. This textbook is made open to the students of all schools that teach these courses. As stated in Chapter 1, this book is written with the aim of building ICT skills for study at university, that is, ICT skills for academic activities. Some topics may not be taught in class; however, the book is written for self-study by students. We include many exercises in this textbook so that instructors can select some of them for their classes, to accompany their teaching plans. The courses are given in the computer laboratories of the university, and the contents of this textbook assume that Windows 10 and Microsoft Office 2016 are available in these laboratories. In Chapter 13, we include an introduction to computer programming; we chose Python as the programming language because, on the one hand, it is easy for beginners to learn, and on the other, it is widely used in academic research. To check the progress of students' self-study, we have attached the assessment criteria (a 'rubric') of this course as an Appendix. Current ICT is a product of the endeavors of many people. The "Great Idea" columns are included to show appreciation for such work. Dr. Yumi Kitamura and Dr. Hirohisa Hioki wrote Chapters 4 and 13, respectively. The remaining chapters were written by Dr. Hajime Kita. In the revisions for the 2018 edition and after, Dr. Hiroyuki Sakai has participated in the author group, and Dr. Donghui Lin also joined for the English edition of 2019. The authors hope that this textbook helps you to improve your academic ICT skill set. The content included in this book was selected based on the reference course plan discussed by the course development team for informatics at the Institute for Liberal Arts and Sciences. In writing this textbook, we obtained advice and suggestions from staff of the Network Section, Information Infrastructure Division, Department of Planning and Information Management Department, Kyoto University on Chapters 2 and 3, from Mr. Sosuke Suzuki of NTT Communications Corporation, also on Chapter 3, and from Rumi Haratake, Machiko Sakurai, and Taku Sakamoto of the User Support Division, Kyoto University Library, on Chapter 4. Dr. Masako Okamoto of the Center for the Promotion of Excellence in Higher Education, Kyoto University, helped us in the revision of the 2018 Japanese Edition. The authors would like to express their sincere gratitude to the people who supported them.

    Cloud technology options towards Free Flow of Data

    This whitepaper collects the technology solutions that the projects in the Data Protection, Security and Privacy Cluster propose to address the challenges raised by the working areas of the Free Flow of Data initiative. The document describes the technologies, methodologies, models, and tools researched and developed by the clustered projects, mapped to the ten areas of work of the Free Flow of Data initiative. The aim is to facilitate the identification of the state of the art of technology options for solving the data security and privacy challenges posed by the Free Flow of Data initiative in Europe. The document gives references to the Cluster, the individual projects, and the technologies produced by them.

    The Skeleton in the Hard Drive: Encryption and the Fifth Amendment

    In Teva Pharmaceuticals USA, Inc. v. Sandoz, Inc., the Supreme Court addressed an oft-discussed jurisprudential disconnect between itself and the U.S. Court of Appeals for the Federal Circuit: whether patent claim construction was “legal” or “factual” in nature, and how much deference is due to district court decision-making in this area. This Article closely examines the Teva opinion and situates it within modern claim construction jurisprudence. The thesis is that the Teva holding is likely to have only very modest effects on the incidence of deference to district court claim construction, but that for unexpected reasons the case is far more important—and potentially beneficial—than it appears. This Article argues that Teva is likely to have a substantial impact on the methodology of patent claim construction. There are at least two reasons for this. First, the players involved in district court patent litigation now have an increased incentive to introduce extrinsic evidence concerning claim meaning and to argue that such evidence is critical to the outcome of claim construction. Second, the Teva opinion itself contemplates a two-step process of evidentiary analysis in claim construction: first an analysis of extrinsic evidence (fact), then an analysis of the weight and direction of such evidence in the patent (law). The post-Teva mode of claim construction in district courts is therefore likely to be far more focused on objective, factual information concerning the ordinary meaning of claim terms, or the ways that skilled artisans would understand claim terms generally. This Article further argues that these changes to the methodology of patent claim construction are generally positive. By anchoring claim meaning in objective evidence and following an established process for evaluating claim terms, this methodology should result in more predictability in litigation-driven claim construction, better drafted patent claims in the longer term, and ultimately, a patent law that more finely tunes the system of incentives it is supposed to regulate—all changes that, if realized, should be welcomed by the patent system, most of its participants, and the public.