197 research outputs found
Exploring Energy Consumption Issues for Video Streaming in Mobile Devices: A Review
The proliferation of high-end mobile devices such as smartphones and tablets has made multimedia streaming popular among users. Various studies and surveys have projected that by the end of 2020 the number of mobile devices would increase drastically and that mobile video streaming would grow faster than overall mobile traffic. Streaming applications on smartphones depend heavily on the wireless network, with substantial amounts of data transferred from server to client. Because of the high energy cost of transmitting data over the wireless interface, video streaming is considered one of the most energy-consuming applications. To optimise the battery usage of a mobile device during video streaming, it is therefore essential to understand the various video streaming techniques and their energy consumption in different environments. In this paper we explore the energy consumed by mobile devices during video streaming and examine the solutions proposed in the research literature to reduce it. We classify the investigations by the layer of the Internet protocol stack they operate on, compare them, and point out which energy-saving mechanisms already exist in modern smartphones.
Power module Data Management System (DMS) study
Computer trades and analyses of selected Power Module Data Management Subsystem issues to support the concurrent in-house MSFC Power Study are provided. The charts which summarize and describe the results are presented. Software requirements and definitions are included.
Towards a human-centric data economy
Spurred by widespread adoption of artificial intelligence and machine learning, “data” is becoming
a key production factor, comparable in importance to capital, land, or labour in an increasingly
digital economy. In spite of an ever-growing demand for third-party data in the B2B
market, firms are generally reluctant to share their information. This is due to the unique characteristics
of “data” as an economic good (a freely replicable, non-depletable asset holding a highly
combinatorial and context-specific value), which move digital companies to hoard and protect
their “valuable” data assets, and to integrate across the whole value chain seeking to monopolise
the provision of innovative services built upon them. As a result, most of those valuable assets
still remain unexploited in corporate silos nowadays.
This situation is shaping the so-called data economy around a number of champions, and it is
hampering the benefits of a global data exchange on a large scale. Some analysts have estimated
the potential value of the data economy at US$2.5 trillion globally by 2025. Not surprisingly, unlocking
the value of data has become a central policy of the European Union, which also estimated
the size of the data economy at €827 billion for the EU27 in the same period. Within the scope of
the European Data Strategy, the European Commission is also steering initiatives aimed
at identifying cross-industry use cases involving different verticals, and at enabling sovereign
data exchanges to realise them.
Among individuals, the massive collection and exploitation of personal data by digital firms
in exchange for services, often with little or no consent, has raised a general concern about privacy
and data protection. Apart from spurring recent legislative developments in this direction,
this concern has prompted warnings about the unsustainability of the current digital
economy (a few digital champions, a potential negative impact on employment, growing inequality),
and some propose that people be paid for their data in a sort of worldwide data labour
market as a potential solution to this dilemma [114, 115, 155].
From a technical perspective, we are far from having the required technology and algorithms
that would enable such a human-centric data economy. Even its scope is still blurry, and the question
of the value of data remains, at the least, controversial. Research works from different disciplines have
studied the data value chain, different approaches to the value of data, how to price data assets,
and novel data marketplace designs. At the same time, complex legal and ethical issues with
respect to the data economy have arisen around privacy, data protection, and ethical AI practices.

In this dissertation, we start by exploring the data value chain and how entities trade data assets
over the Internet. We carry out what is, to the best of our knowledge, the most thorough survey
of commercial data marketplaces. In this work, we have catalogued and characterised ten different
business models, including those of personal information management systems, companies born
in the wake of recent data protection regulations and aiming at empowering end users to take
control of their data. We have also identified the challenges faced by different types of entities,
and what kind of solutions and technology they are using to provide their services.
Then we present a first-of-its-kind measurement study that sheds light on the prices of data
in the market using a novel methodology. We study how ten commercial data marketplaces categorise
and classify data assets, and which categories of data command higher prices. We also
develop classifiers for comparing data products across different marketplaces, and we study the
characteristics of the most valuable data assets and the features that specific vendors use to set
the price of their data products. Based on this information and adding data products offered by
33 other data providers, we develop a regression analysis revealing the features that correlate with
prices of data products. As a result, we also implement the basic building blocks of a novel data
pricing tool capable of providing a hint of the market price of a new data product using just its
metadata as input. Such a tool would bring more transparency to the prices of data products in
the market, which would help in pricing data assets and in avoiding the inherent price fluctuations of
nascent markets.
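As a rough illustration of how such a price-hint tool could work (a minimal sketch under assumptions: the catalogue file, column names, and model choice below are all hypothetical, not the dissertation's actual features or code), one can regress logged prices on product metadata:

```python
# Minimal sketch of a metadata-based price-hint model; the input file
# and feature columns are hypothetical placeholders.
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestRegressor
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

catalogue = pd.read_csv("data_products.csv")  # hypothetical catalogue export
features = ["category", "update_frequency", "geo_coverage", "n_records"]
X, y = catalogue[features], np.log1p(catalogue["price_usd"])  # log-prices

model = Pipeline([
    ("encode", ColumnTransformer(
        [("cat", OneHotEncoder(handle_unknown="ignore"),
          ["category", "update_frequency", "geo_coverage"])],
        remainder="passthrough")),      # numeric n_records passes through
    ("reg", RandomForestRegressor(n_estimators=300, random_state=0)),
])
model.fit(X, y)

# Price hint for a new, unseen data product described only by metadata.
new_product = pd.DataFrame([{"category": "mobility",
                             "update_frequency": "daily",
                             "geo_coverage": "EU",
                             "n_records": 2_000_000}])
print(f"price hint: ${np.expm1(model.predict(new_product))[0]:,.0f}")
```

Regressing on log-transformed prices is a common choice in such settings because data product prices typically span several orders of magnitude.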
Next we turn to topics related to data marketplace design. Particularly, we study how buyers
can select and purchase suitable data for their tasks without requiring a priori access to such
data in order to make a purchase decision, and how marketplaces can distribute payoffs for a
data transaction combining data of different sources among the corresponding providers, be they
individuals or firms. The difficulty of both problems is further exacerbated in a human-centric
data economy where buyers have to choose among data of thousands of individuals, and where
marketplaces have to distribute payoffs to thousands of people contributing personal data to a
specific transaction.
Regarding the selection process, we compare different purchase strategies depending on the
level of information available to data buyers at the time of making decisions. A first methodological
contribution of our work is proposing a data evaluation stage prior to datasets being selected
and purchased by buyers in a marketplace. We show that buyers can significantly improve the
performance of the purchasing process just by being provided with a measurement of the performance
of their models when trained by the marketplace with individual eligible datasets. We
design purchase strategies that exploit this functionality, and we call the resulting algorithm Try
Before You Buy. Our work demonstrates over synthetic and real datasets that it can lead to
near-optimal data purchasing with only O(N) evaluations instead of the exponential O(2^N)
needed to compute the optimal purchase; a minimal sketch of the idea follows.
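The sketch below is our own simplification, not the dissertation's exact algorithm; eval_gain stands in for the marketplace-side evaluation of the buyer's model on candidate data:

```python
# Sketch of a Try-Before-You-Buy-style greedy purchase loop.
# eval_gain is a placeholder for the marketplace training/evaluating the
# buyer's model on candidate data and reporting measured performance.
from typing import Callable, List, Sequence

def try_before_you_buy(datasets: Sequence[str],
                       eval_gain: Callable[[List[str]], float],
                       budget: int) -> List[str]:
    """Greedy purchase in O(N) evaluations: rank datasets by individual
    measured utility, then keep each one only if it improves the model."""
    ranked = sorted(datasets, key=lambda d: eval_gain([d]), reverse=True)
    bought: List[str] = []
    best = eval_gain(bought)                 # baseline without purchases
    for d in ranked[:budget]:
        score = eval_gain(bought + [d])
        if score > best:                     # keep d only if it helps
            bought.append(d)
            best = score
    return bought
```

This uses roughly 2N marketplace evaluations, still linear in the number of eligible datasets, in contrast to the 2^N evaluations an exhaustive search over subsets would need.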
With regard to the payoff distribution problem, we focus on computing the relative value
of spatio-temporal datasets combined in marketplaces for predicting transportation demand and
travel time in metropolitan areas. Using large datasets of taxi rides from Chicago, Porto and
New York, we show that the value of data is different for each individual and cannot be approximated
by its volume. Our results reveal that even more sophisticated approaches based on the
“leave-one-out” value are inaccurate. Instead, well-established notions of value
from economics and game theory, such as the Shapley value, need to be employed if one wishes
to capture the complex effects of mixing different datasets on the accuracy of forecasting algorithms.
However, the Shapley value entails serious computational challenges. Its exact calculation
requires repetitively training and evaluating every combination of data sources and hence O(N!)
or O(2^N) computational time, which is infeasible for complex models or thousands of individuals.
Moreover, our work paves the way to new methods of measuring the value of spatio-temporal
data. We identify heuristics, such as entropy or similarity to the average, that show a significant
correlation with the Shapley value and can therefore be used to overcome the substantial computational
challenges posed by Shapley approximation algorithms in this specific context.
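For context, the permutation-sampling Monte Carlo estimator that such approximation algorithms typically build on can be sketched as follows (our own illustrative code; utility is a placeholder for training and evaluating a forecasting model on a set of data sources):

```python
# Monte Carlo (permutation sampling) approximation of Shapley values.
# utility is a placeholder for "train and evaluate the model on these
# data sources"; caching of repeated coalitions is omitted for brevity.
import random
from typing import Callable, Dict, FrozenSet, Sequence

def shapley_monte_carlo(sources: Sequence[str],
                        utility: Callable[[FrozenSet[str]], float],
                        n_permutations: int = 200) -> Dict[str, float]:
    """Estimate each source's Shapley value by averaging its marginal
    contribution over random orderings: O(n_permutations * N) utility
    calls instead of the exact O(2^N)."""
    phi = {s: 0.0 for s in sources}
    order = list(sources)
    for _ in range(n_permutations):
        random.shuffle(order)
        coalition: FrozenSet[str] = frozenset()
        prev = utility(coalition)
        for s in order:
            coalition = coalition | {s}
            cur = utility(coalition)
            phi[s] += cur - prev             # marginal contribution of s
            prev = cur
    return {s: v / n_permutations for s, v in phi.items()}
```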
We conclude with a number of open issues and propose further research directions that leverage
the contributions and findings of this dissertation. These include monitoring data transactions
to better measure data markets, and complementing market data with actual transaction prices
to build a more accurate data pricing tool. A human-centric data economy would also require
that the contributions of thousands of individuals to machine learning tasks are calculated daily.
For that to be feasible, we need to further optimise the efficiency of data purchasing and payoff
calculation processes in data marketplaces. In that direction, we also point to some alternatives
to repetitively training and evaluating a model when selecting data with Try Before You Buy and
when approximating the Shapley value. Finally, we discuss the challenges and potential technologies that
help with building a federation of standardised data marketplaces.
The data economy will develop fast in the upcoming years, and researchers from different
disciplines will work together to unlock the value of data and make the most out of it. Maybe
the proposal of getting paid for our data and our contribution to the data economy finally takes off,
or maybe it is other proposals such as the robot tax that are finally used to balance the power
between individuals and tech firms in the digital economy. Still, we hope our work sheds light on
the value of data, and contributes to making the price of data more transparent and, eventually, to
moving towards a human-centric data economy.
Business strategy and information systems alignment: a study of the use of enterprise architectures in Australian Government
This thesis investigates the use of Enterprise Architectures ("the logical structuring and
classification of descriptive representations of an enterprise") as enablers of alignment
between business strategy and information systems in public sector agencies. The scope
of this study has been shaped by Australian government policies that have set firm
directions for the delivery of community products and services in the electronic domain.
Foundation management and information systems theories, empirical studies and public
management literature have been used extensively in grounding this research study. A
substantial body of literature has been reviewed, and this study positioned in the context
of these prior works. In particular, the principal alignment theories have been
adopted and the research model developed from the published works of eminent
management and information systems researchers.
The primary research question asks whether Enterprise Architectures are enablers of
business strategy and information systems alignment, and if so, what are the associated
alignment enabling processes? The study's four research themes are: (i) Enterprise
Architecture frameworks and methods; (ii) architectural completeness; (iii) the social
aspects of alignment (management support, business planning style, business plan
communications); and (iv) the formal high level alignment mechanisms used by public
agencies.
The study has used an exploratory qualitative case study research method that includes
semi-structured and unstructured interviews, archival research and document discovery,
public announcement and presentation information, organisational observations, and
system demonstrations for the collection and triangulation of data.

The case studies at four government agencies are presented as metastories of how Enterprise Architectures
and other alignment mechanisms are used within the contextual frame of each public
organisation.
The research shows that Enterprise Architectures can be enablers of alignment within a
public organisation environment. Architectures possess the ability to define and describe
the states of the agency business and technology domains, and the intimate domain
relationships and processes that inform the agency's state of alignment. Changes in the
agencies or their operating environments are reflected in the architecture and its
subsequent evolutionary changes (such as new business requiring new supporting
information systems and technology).
Enterprise Architectures were considered important enablers of alignment, with each
agency dedicating specialist corporate resources for architecture development and
maintenance. The case studies showed that the origin (either internally developed or
commercially acquired) of the agency Enterprise Architecture was not necessarily
important for the enabling of alignment. However, organisations would do well to
concentrate their resources on developing and implementing architectures that accurately
represent and integrate the agency business and technology domains.
The research used an architectural requirements framework, adapted from an
International Standard (ISO 15704), to gauge architecture completeness. The study found
that substantially complete architectures integrated the business and information systems
entities, included the necessary components (such as the governance frameworks) to
achieve strategic alignment, and offered opportunities for agency alignment.
Architectures that were deficient in their business, technology or managerial orientations could display reduced clarity of the business and technology states, placing the
organisations at risk of misalignment.
The case research allowed the comparison of centralised and decentralised agency
business structures and information systems, allowing explanations to be developed for
the longer architecture implementation periods, and reduced architecture completeness at
the decentralised agencies. In particular, the research findings point to the non-uniform
application of decentralised resources, and the reduced corporate visibility of
decentralised systems, as reasons for long architecture implementation periods, reduced
completeness, and impaired alignment.
The case studies identified that architectures develop and evolve over time and possess
specific characteristics that assist the alignment process. Architectures acted as focal
points for business entities and processes that are enabled by the supporting information
systems. Architectures provided a mechanism for information systems and technology
governance that jointly support business and information systems requirements.
Architectures enabled agency information structuring and sharing for the support of
business operations. Architectures supported the reuse of systems and technologies for
the delivery of business strategies and plans. Other characteristics, such as using
architecture as a corporate philosophy, were agency-specific and reflected the agency's
culture, people, business capabilities, and corporate history.
The detailed examination of management support, business planning styles and business
plan communications, showed that the social aspects of alignment were important. In
particular, the study showed that executive managers must support business and technical
directions through demonstrable understanding of the important business and information systems issues, and cohesive decision-making that is built on sound relationships between
business and technically oriented executives. The case studies also showed that business
plans that are horizontally and vertically integrated, and are well communicated and
understood by stakeholders, assisted the enabling of alignment.
Finally, the study uncovered several formal alignment mechanisms (such as corporate
boards, agency plans, balanced scorecards) that were consistent with alignment and
governance theory and government management literature. The findings of the case
research placed this study of alignment in a process or system frame, while empirically
demonstrating that alignment is a continuous and dynamic process that combines several
enabling mechanisms. The study showed that any research or conceptual analysis of
alignment should consider the alignment mechanisms to operate in combination with
each other. Future directions for alignment and architecture research were also described.
Network control for a multi-user transputer-based system.
A dissertation submitted to the Faculty of Engineering, University of the
Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of
Master of Science in Engineering

The MC2/64 system is a configurable multi-user transputer-based system which was
designed using a modular approach. The MC2/64 consists of MC2 Clusters which are
connected using a modified Clos network. The MC2 Clusters were designed and
realised as completely configurable modules using and extending an algorithm based on
Eulerian cycles through a requested graph. This dissertation discusses the configuration
algorithm and the extensions made to the algorithm for the MC2 Clusters.
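As a rough sketch of the classic graph routine underlying such a configuration pass (Hierholzer's Eulerian-cycle construction; the extensions the dissertation makes for MC2 Clusters are not reproduced here, and the function below is our own illustration):

```python
# Hierholzer's algorithm: find an Eulerian cycle through a requested
# multigraph of link connections, or report that none exists.
from collections import defaultdict

def eulerian_cycle(edges):
    """edges: list of (u, v) pairs. Returns a vertex sequence traversing
    every edge exactly once, or None if no Eulerian cycle exists."""
    if not edges:
        return None
    adj = defaultdict(list)              # vertex -> [(neighbour, edge id)]
    for i, (u, v) in enumerate(edges):
        adj[u].append((v, i))
        adj[v].append((u, i))
    if any(len(nbrs) % 2 for nbrs in adj.values()):
        return None                      # all degrees must be even
    used = [False] * len(edges)
    stack, cycle = [edges[0][0]], []
    while stack:
        v = stack[-1]
        while adj[v] and used[adj[v][-1][1]]:
            adj[v].pop()                 # lazily drop already-used edges
        if adj[v]:
            w, eid = adj[v].pop()
            used[eid] = True
            stack.append(w)
        else:
            cycle.append(stack.pop())
    if len(cycle) != len(edges) + 1:
        return None                      # edge set was disconnected
    return cycle
```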
The total MC2/64 system is not completely configurable, as an MC2 Cluster releases only
a limited number of links for inter-cluster connections. This dissertation analyses the
configurability of the MC2/64 and also presents algorithms which enhance the usability of
the system from the user's point of view.
The design and the implementation of the network control software are also submitted
as topics in this dissertation. The network control software must allow multiple users to
use the system without influencing each other's transputer domains.
This dissertation therefore seeks to give an overview of network control problems and
the solutions implemented in current MC2/64 systems. The results of the research
done for this dissertation will hopefully aid in the design of future MC2 systems which
will provide South Africa with much needed, low cost, high performance computing
power.
The way from Lean Product Development (LPD) to Smart Product Development (SPD)
Lean Product Development (LPD) is the application of lean principles to product development, aiming to develop new or improved products that are successful in the market. LPD covers the complete process from gathering and generating ideas, through assessing potential success, to developing concepts, evaluating them to select a best concept, detailing the product, testing and refining it, and handing over to manufacture. With the beginning of the fourth industrial revolution (Industry 4.0) and the rising efforts to realize a smart factory environment, product development, too, has to undergo a substantial transformation. This paper first describes the concept of Lean Product Development as well as the new requirements for an intelligent and Smart Product Development (SPD) introduced by modern Industry 4.0 technologies. Based on the Axiomatic Design methodology, a set of guidelines for the design of Lean Product Development processes is presented. These guidelines are linked with concepts from Industry 4.0 in engineering, showing how a lean and smart product development process can be achieved through the use of advanced and modern technologies and instruments.
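Since the guidelines build on Axiomatic Design, a small worked illustration of its Independence Axiom may help (our own sketch, not taken from the paper): the mapping from functional requirements (FRs) to design parameters (DPs) is captured by a design matrix, whose structure classifies a design as uncoupled (ideal), decoupled (order-dependent), or coupled (violating the axiom).

```python
# Sketch: classify a square Axiomatic Design design matrix (FRs x DPs)
# as uncoupled, decoupled, or coupled; the example values are made up.
import numpy as np

def classify_design(A: np.ndarray) -> str:
    """Diagonal -> uncoupled; triangular -> decoupled; else coupled."""
    B = A != 0
    off_diag = B & ~np.eye(len(B), dtype=bool)
    if not off_diag.any():
        return "uncoupled"
    # Triangular: all off-diagonal entries lie on one side of the diagonal.
    if not np.triu(off_diag, 1).any() or not np.tril(off_diag, -1).any():
        return "decoupled"
    return "coupled"

# Two FRs mapped to two DPs; DP1 affects both FRs, so the design is
# decoupled and the DPs must be fixed in a specific order.
print(classify_design(np.array([[1, 0],
                                [1, 1]])))   # -> "decoupled"
```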
Evolutionary space platform concept study. Volume 1: Executive summary
The Evolutionary Space Platform Concept Study encompassed a 10-month effort to define, evaluate and compare approaches and concepts for evolving unmanned and manned capability platforms beyond the current Space Platform concepts, towards the evolutionary goal of establishing a permanent manned presence in space. Areas addressed included: special emphasis trade studies on the current unmanned concept, assessment of manned platform concepts, and utility analysis of a manned platform for defense-related missions.
Earth Observatory Satellite system definition study. Report no. 3: Design/cost tradeoff studies. Appendix D: EOS configuration design data. Part 2: Data management system configuration
The Earth Observatory Satellite (EOS) data management system (DMS) is discussed. The DMS is composed of several subsystems or system elements, each with a basic purpose, connected so that the DMS can support the EOS program by providing the following: (1) payload data acquisition and recording, (2) data processing and product generation, (3) spacecraft and processing management and control, and (4) data user services. The configuration and purposes of the primary or high-data-rate system and the secondary or local user system are explained. Diagrams of the systems are provided to support the systems analysis.
Space shuttle low cost/risk avionics study
All work breakdown structure elements containing any avionics-related effort were examined for pricing the life cycle costs. The analytical, testing, and integration efforts are included for the basic onboard avionics and electrical power systems. The design and procurement of special test equipment and maintenance and repair equipment are considered. Program management associated with these efforts is described. Flight test spares, and the labor and materials associated with the operations and maintenance of the avionics systems throughout the horizontal flight test, are examined. It was determined that cost savings can be achieved by using existing hardware, maximizing orbiter-booster commonality, specifying new equipment to MIL quality standards, basing redundancy on cost-effectiveness analysis, minimizing software complexity, reducing cross-strapping and computer-managed functions, utilizing compilers and floating point computers, and evolving the design as dictated by the horizontal flight test schedules.
FY 1974 scientific and technical reports, articles, papers, and presentations
Formal NASA technical reports, papers published in technical journals, and presentations by MSFC personnel during FY 1974 are presented. Papers from MSFC contractors are also included.
- …