adPerf: Characterizing the Performance of Third-party Ads
Monetizing websites and web apps through online advertising is widespread in
the web ecosystem. The online advertising ecosystem today forces publishers
to integrate ads from third-party domains. On the one hand, this raises
several privacy and security concerns that have been actively studied in recent
years. On the other hand, given the ability of today's browsers to load dynamic
web pages with complex animations and JavaScript, online advertising has also
transformed and can have a significant impact on webpage performance. The
performance cost of online ads is critical since it eventually impacts user
satisfaction as well as their Internet bill and device energy consumption.
In this paper, we conduct an in-depth, first-of-its-kind performance
evaluation of web ads. Unlike prior efforts that rely primarily on adblockers,
we perform a fine-grained analysis on the web browser's page loading process to
demystify the performance cost of web ads. We aim to characterize the cost by
every component of an ad, so the publisher, ad syndicate, and advertiser can
improve the ad's performance with detailed guidance. For this purpose, we
develop an infrastructure, adPerf, for the Chrome browser that classifies page
loading workloads into ad-related and main-content at the granularity of
browser activities (such as JavaScript and Layout). Our evaluations show that
online advertising accounts for more than 15% of the browser's page loading
workload, and approximately 88% of that is spent on JavaScript. We also track
the sources and delivery chain of web ads and analyze performance by the origin
of the ad contents. We observe that two well-known third-party ad domains
contribute 35% of the ads' performance cost and, surprisingly, top news
websites implicitly include unknown third-party ads, which in some cases
account for more than 37% of the ads' performance cost.
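The per-activity attribution idea can be sketched in a few lines. This is a minimal illustration, not adPerf's implementation: the event format, the ad-domain list, and attribution by resource URL are all assumptions made for the example.

```python
# Minimal sketch: attribute browser trace events to ads vs. main content
# by matching each event's resource URL against known ad domains.
# The domain list and event format are illustrative assumptions.
from urllib.parse import urlparse

AD_DOMAINS = {"doubleclick.net", "googlesyndication.com"}

def is_ad_url(url):
    host = urlparse(url).netloc
    return any(host == d or host.endswith("." + d) for d in AD_DOMAINS)

def attribute(events):
    """Sum event durations (ms) into 'ad' and 'main' buckets per activity type."""
    cost = {"ad": {}, "main": {}}
    for ev in events:  # each event: {"name", "dur_ms", "url"}
        bucket = "ad" if is_ad_url(ev["url"]) else "main"
        cost[bucket][ev["name"]] = cost[bucket].get(ev["name"], 0.0) + ev["dur_ms"]
    return cost

events = [
    {"name": "JavaScript", "dur_ms": 120.0, "url": "https://ads.doubleclick.net/tag.js"},
    {"name": "Layout", "dur_ms": 30.0, "url": "https://news.example.com/index.html"},
]
print(attribute(events))
```

In practice such attribution also has to follow indirect costs (scripts injected by other scripts, cross-frame work), which is what makes a browser-internal classifier necessary.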
Energy Efficiency Analysis And Optimization For Mobile Platforms
The introduction of mobile devices changed the landscape of computing. Gradually, these devices are replacing traditional personal computers (PCs) to become the devices of choice for entertainment, connectivity, and productivity. There are currently at least 45.5 million people in the United States who own a mobile device, and that number is expected to increase to 1.5 billion by 2015.
Users of mobile devices expect and demand that their mobile devices deliver maximum performance while consuming the minimum possible power. However, due to battery size constraints, the amount of energy stored in these devices is limited and is growing by only 5% annually. As a result, this dissertation focuses on energy efficiency analysis and optimization for mobile platforms. We specifically developed SoftPowerMon, a tool that can power-profile Android platforms in order to expose the power consumption behavior of the CPU. We also performed an extensive set of case studies to determine energy inefficiencies of mobile applications. Through these case studies, we were able to propose optimization techniques to increase the energy efficiency of mobile devices, and we proposed guidelines for energy-efficient application development. In addition, we developed BatteryExtender, an adaptive user-guided tool for power management of mobile devices. The tool enables users to extend battery life on demand for a specific duration until a particular task is completed. Moreover, we examined the power consumption of Systems-on-Chip (SoCs) and observed the impact on energy efficiency of offloading tasks from the CPU to specialized custom engines. Based on our case studies, we demonstrated that current software-based power profiling techniques for SoCs can have an error rate close to 12%, which needs to be addressed in order to optimize the energy consumption of the SoC. Finally, we summarize our contributions and outline possible directions for future research in this field.
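The basic power accounting behind such profiling can be illustrated with a short sketch. This is not SoftPowerMon's code; sampling battery voltage and current and integrating over a fixed interval are assumptions made for the illustration.

```python
# Illustrative power accounting from (micro-volt, micro-amp) samples,
# as a battery fuel gauge might report them; not SoftPowerMon's method.
def power_watts(voltage_uv, current_ua):
    """Instantaneous power from microvolt/microamp readings."""
    return (voltage_uv * 1e-6) * (current_ua * 1e-6)

def energy_joules(samples, interval_s):
    """Approximate energy by summing power samples taken at a fixed interval."""
    return sum(power_watts(v, i) * interval_s for v, i in samples)

samples = [(3_800_000, 250_000), (3_790_000, 300_000)]  # (uV, uA) pairs
print(round(energy_joules(samples, 1.0), 3))
```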
Energy-Aware Development and Labeling for Mobile Applications
Today, mobile devices such as smart phones and tablets have become ubiquitous and are used everywhere. Millions of software applications can be purchased and installed on these devices, customizing them to personal interests and needs. However, the frequent use of mobile devices has made a new problem omnipresent: their limited operation time, due to their limited energy capacity.
Although energy consumption can be considered a hardware problem, the amount of energy required by today's mobile devices highly depends on their current workloads and is thus strongly influenced by the software running on them. Although only hardware modules consume energy, operating systems, middleware services, and mobile applications strongly influence the energy consumption of mobile devices, depending on how efficiently they use and control hardware modules. Nevertheless, most of today's mobile applications totally ignore their influence on the devices' energy consumption, leading to energy waste, shorter operation times, and thus frustrated application users. A major reason for this energy-unawareness is the lack of appropriate tooling for the development of energy-aware mobile applications.
As many of today's mobile applications behave in an energy-unaware manner, and as various mobile applications providing similar services exist, users aim to optimize their devices by installing applications known to be energy-saving or energy-aware, meaning that they consume less energy while providing the same services as their competitors. However, scarce information on applications' energy usage is available, and thus users are forced to install and try many applications manually before finding those that fulfill their personal functional, non-functional, and energy requirements.
This thesis addresses the lack of tooling for the development of energy-aware mobile applications and the lack of comparability of mobile applications in terms of energy-awareness with the following two contributions. First, it proposes JouleUnit, an energy profiling and testing framework that uses unit tests to execute application workloads while profiling their energy consumption in parallel. By extending a well-known testing concept and providing tooling integrated into the Eclipse development environment, JouleUnit has a low learning curve for integration into existing development and testing processes. Second, for the comparability of mobile applications in terms of energy efficiency, this thesis proposes an energy benchmarking and labeling service. Mobile applications belonging to the same usage domain are energy-profiled while executing a usage-domain-specific benchmark in parallel. Thus, their energy consumption for specific use cases can be evaluated and compared afterwards. To abstract and summarize the profiling results, energy labels are derived that summarize an application's energy consumption over all evaluated use cases as a simple energy grade, ranging from A to G. In addition, users can decide how to weight specific use cases for the computation of energy grades, as it is likely that different users use the same applications differently. The energy labeling service has been implemented for Android applications and evaluated for three different usage domains (web browsers, email clients, and live wallpapers), showing that different mobile applications indeed differ in their energy consumption for the same services and, thus, that their comparison is both possible and sensible. To the best of my knowledge, this is the first approach that provides mobile application users comparable energy consumption information on mobile applications without their having to install and test them on their own devices.
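The grade derivation can be illustrated with a toy sketch. The thesis does not give its scoring formula; the weighted mean, normalisation against a worst case, and seven equal-width bands below are assumptions made for the example.

```python
# Toy energy-grade mapping: weighted mean energy per use case, normalised
# against a worst case, mapped onto grades A-G. Thresholds are illustrative.
GRADES = "ABCDEFG"

def energy_grade(use_case_joules, weights, worst_case_joules):
    """Weighted mean energy, scaled to [0, 1], mapped to seven grade bands."""
    total_w = sum(weights.values())
    score = sum(use_case_joules[u] * w for u, w in weights.items()) / total_w
    ratio = min(score / worst_case_joules, 1.0)
    return GRADES[min(int(ratio * len(GRADES)), len(GRADES) - 1)]

consumption = {"browse": 12.0, "video": 40.0}  # joules per benchmark run
print(energy_grade(consumption, {"browse": 0.7, "video": 0.3}, worst_case_joules=100.0))
```

Per-user weighting, as described above, only changes the `weights` argument, so two users can derive different grades from the same measurements.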
GAUMLESS: Modelling the Capitalization of Human Action on the Internet
The focus of this thesis is on a field of study related to information design, namely visual modelling, and the application of its concepts and frameworks to a case study on the use of Internet cookies. It represents an opportunity to enhance information design’s relevancy as an adaptive discipline; i.e., borrowing and learning from various knowledge domains in representing phenomena for the purposes of decision-making and action-generation.
As a critical design project, the thesis endeavors to inform Internet users and other audiences of the exploitation inherent in the data-mining processes employed by websites for generating cookies and to expose the risks to users. This focus was motivated by a concern with the ignorance, or at least the casual awareness, of many Internet users of the implications of giving their consent to the use of cookies. The thesis employs a qualitative research methodology that consolidates information design principles, conventions and processes; a distillation of relevant modelling frameworks; and pan-disciplinary philosophical perspectives (i.e., cybernetics, systems theory, and social system theory) into a visual model that represents the cookie system.
The significance of this study's contribution to design theory lies in the manner in which boundaries to its research methodology (based on the study's purpose, goals and targeted audience) were determined, and in the singular visual modelling process developed in consideration of the myriad relevant knowledge domains, extensive data sources and esoteric technical aspects of the system under study. Whereas simplification in a visual model is a key factor for knowledge-creation and establishing usability, the model's effectiveness in informing and inspiring is also measured by its accuracy and comprehensiveness.
In concentrating on human behaviour and decision-making contexts and applications, information design has the capacity to help meet personal and social needs and consequently can be a societal force for innovation and progress. The thesis' visual model is an example of this potential in its intention to represent the cookie process and to raise awareness of its personal and social implications. The study validates the responsibility of the information designer not to prescribe actions or solutions but rather to impart knowledge, support decision-making, and inspire critical reflection.
To boardrooms and sustainability: the changing nature of segmentation
Market segmentation is the process by which customers in markets with some heterogeneity
are grouped into smaller, more homogeneous segments of 'similar' customers. A market
segment is a group of individuals, groups or organisations sharing characteristics that
cause them to have relatively similar needs and buying behaviour.
Segmentation is not a new concept: for six decades marketers have, in various guises, sought to
break down a market into sub-groups of users, each sharing common needs, buying behaviour
and marketing requirements. However, this approach to target market strategy development
has been rejuvenated in the past few years. Various reasons account for this upsurge in the
usage of segmentation, examination of which forms the focus of this white paper.
Ready access to data enables faster creation of a segmentation and the testing of propositions to
take to market. ‘Big data’ has made the re-thinking of target market segments and value
propositions inevitable, desirable, faster and more flexible. The resulting information has
presented companies with more topical and consumer-generated insights than ever before.
However, many marketers, analytics directors and leadership teams feel overwhelmed by the
sheer quantity and immediacy of such data.
Analytical prowess in consultants and inside client organisations has benefited from a step-change,
using new heuristics and faster computing power, more topical data and stronger
market insights. The approach to segmentation today is much smarter and has stretched well
away from the days of limited data explored only with cluster analysis. The coverage and wealth
of the solutions are unimaginable when compared to the practices of a few years ago. Then,
typically only six to ten segments were forced into segmentation solutions, so that an
organisation could cater for these macro segments operationally as well as understand them
intellectually. Now there is the advent of what is commonly recognised as micro segmentation,
where the complexity of business operations and customer management requires highly
granular thinking. In support of this development, traditional agency/consultancy roles have
transitioned into in-house business teams led by data, campaign and business change planners.
The challenge has shifted from developing a granular segmentation solution that describes all
customers and prospects, into one of enabling an organisation to react to the granularity of the
solution, deploying its resources to permit controlled and consistent one-to-one interaction
within segments. So whilst the cost of delivering and maintaining the solution has reduced with
technology advances, a new set of systems, costs and skills in channel and execution
management is required to deliver on this promise. These new capabilities range from rich
feature creative and content management solutions, tailored copy design and deployment tools,
through to instant messaging middleware solutions that initiate multi-streams of activity in a
variety of analytical engines and operational systems.
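The classic cluster-analysis starting point contrasted above can be sketched briefly. The customer data and the two features here are invented for illustration; real segmentations draw on far richer variables.

```python
# Tiny k-means on two toy customer features (spend, visits per month).
# All data is made up; this only illustrates the classic clustering step.
import random

def kmeans(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to the nearest centroid (squared distance)
            nearest = min(range(k), key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[nearest].append(p)
        # recompute centroids as cluster means (keep old centroid if empty)
        centroids = [
            tuple(sum(dim) / len(c) for dim in zip(*c)) if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters

customers = [(5, 1), (6, 2), (7, 1), (90, 10), (95, 12), (88, 11), (6, 1), (92, 10)]
centroids, clusters = kmeans(customers, k=2)
print(sorted(len(c) for c in clusters))  # two segments: low spend and high spend
```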
Companies have recruited analytics and insight teams, often headed by senior personnel, such as
an Insight Manager or Analytics Director. Indeed, the situations-vacant adverts for such
personnel outweigh posts for brand and marketing managers. Far more companies possess the
in-house expertise necessary to help with segmentation analysis. Some organisations are also
seeking to monetise one of the most regularly under-used latent business assets… data.
Developing the capability and culture to bring data together from all corners of a business, the open market, commercial sources and business partners, is a step-change, often requiring a
Chief Data Officer. This emerging role has also driven the professionalism of data exploration,
using more varied and sophisticated statistical techniques.
CEOs, CFOs and COOs, rather than CMOs, increasingly are the sponsors of segmentation projects as well as the users
of the resulting outputs. CEOs because recession has forced re-engineering of
value propositions and the need to look after core customers; CFOs because segmentation leads
to better and more prudent allocation of resources – especially NPD and marketing – around the
most important sub-sets of a market; COOs because they need to better look after key
customers and improve their satisfaction in service delivery. More and more it is recognised that
with a new segmentation comes organisational realignment and change, so most business
functions now have an interest in a segmentation project, not only the marketers.
Largely as a result of the digital era and the growth of analytics, directors and company
leadership teams are becoming used to receiving more extensive market intelligence and
quickly updated customer insight, so leading to faster responses to market changes, customer
issues, competitor moves and their own performance. This refreshing of insight and a leadership
team’s reaction to this intelligence often result in there being more frequent modification of a
target market strategy and segmentation decisions.
Thus, many projects set up to consider multi-channel strategy and offerings; digital marketing;
customer relationship management; brand strategies; new product and service development;
the re-thinking of value propositions, and so forth, now routinely commence with a
segmentation piece in order to frame the ongoing work. Most organisations have deployed
CRM systems and harnessed associated customer data. CRM first requires clarity in segment
priorities. The insights from a CRM system help inform the segmentation agenda and steer how
organisations engage with their important customers or prospects. The growth of CRM and its ensuing
data have assisted the ongoing deployment of segmentation.
One of the biggest changes for segmentation is the extent to which it is now deployed by
practitioners in the public and not-for-profit sectors, who are harnessing what is termed social
marketing, in order to develop and to execute more shrewdly their targeting, campaigns and
messaging. For Marketing per se, the interest in the marketing toolkit from non-profit
organisations has been big news in recent years. At the very heart of the concept of social
marketing is the market segmentation process.
The extreme rise in the threat to security from global unrest, terrorism and crime has focused
the minds of governments, security chiefs and their advisors. As a result, significant resources,
intellectual capability, computing and data management have been brought to bear on the
problem. The core of this work is the importance of identifying and profiling threats and so
mitigating risk. In practice, much of this security and surveillance work harnesses the tools
developed for market segmentation and the profiling of different consumer behaviours.
This white paper presents the findings from interviews with leading exponents of segmentation
and also the insights from a recent study of marketing practitioners relating to their current
imperatives and foci. More extensive views of some of these ‘leading lights’ have been sought
and are included here in order to showcase the latest developments and to help explain both
the ongoing surge of segmentation and the issues underpinning its practice. The principal
trends and developments are thereby presented and discussed in this paper.
Energy-efficient mobile Web computing
Next-generation Web services will be primarily accessed through mobile devices. However, mobile devices are low-performance and stringently energy-constrained. In my dissertation, I propose the design of a high-performance and energy-efficient mobile Web computing substrate. It is a hardware/software co-designed system that delivers a satisfactory user quality-of-service (QoS) experience on a mobile energy budget. The key insight is that the traditional interfaces between different Web stacks need to be enhanced with new abstractions that express user QoS experience and that expose architectural-level complexities. On the basis of the enhanced interfaces, I propose synergistic cross-layer optimizations across the processor architecture, Web runtime, programming language, and application layers to maximize whole-system efficiency. The contributions made in this dissertation will likely have a long-term impact because the target application domain, the Web, is becoming a universal mobile development platform, and because our solutions target the fundamental computation layers of the Web domain.
Personality representation: predicting behaviour for personalised learning support
The need for personalised support systems comes from the growing number of students being supported within institutions with shrinking resources. Over the last decade the use of computers and the Internet within education has become more predominant. This opens up a range of possibilities with regard to spreading that resource further and more effectively. Previous attempts to create automated systems, such as intelligent tutoring systems and learning companions, have been criticised for being pedagogically ineffective and for relying on large knowledge sources which restrict their domain of application. More recent work on adaptive hypermedia has resolved some of these issues but has been criticised for its limited scope of support, focusing on learning paths and alternative content presentation. The student model used within these systems is also of limited scope and is often based on learning history or learning styles.
This research examines the potential of using a personality theory as the basis for a personalisation mechanism within an educational support system. The automated support system is designed to utilise a personality-based profile to predict student behaviour. This prediction is then used to select the most appropriate feedback from a selection of reflective hints for students performing lab-based programming activities. The rationale for the use of personality is simply that this is the concept psychologists use for identifying individual differences and similarities which are expressed in everyday behaviour. The research has therefore investigated how these characteristics can be modelled in order to provide a fundamental understanding of the student user and thus be able to provide tailored support.
As personality is used to describe individuals across many situations and behaviours, placing it at the core of a personalisation mechanism may overcome the issues of scope experienced by previous methods. This research poses the following question: can a representation of personality be used to predict behaviour within a software system, in such a way as to be able to personalise support? It puts forward the central claim that it is feasible to capture and represent personality within a software system for the purpose of personalising services. The research uses a mixed methods approach, combining quantitative and qualitative methods for both investigation and determining the feasibility of this approach.
The main contribution of the thesis has been the development of a set of profiling models from psychological theories, which account for both individual differences and group similarities, as a means of personalising services. These are then applied to the development of a prototype system which utilises a personality-based profile. The evidence from the evaluation of the developed prototype system has demonstrated an ability to predict student behaviour, with limited success, and to personalise support. The limitations of the evaluation study and implementation difficulties suggest that the approach taken in this research is not feasible. Further research and exploration is required, particularly in the application to a subject area outside that of programming.
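The prediction-and-selection step can be sketched abstractly. The thesis does not disclose its model; the trait name, threshold, and hint texts below are invented for illustration.

```python
# Hypothetical sketch: map a trait-based profile to a predicted working
# style, then select a matching reflective hint. All names are invented.
HINTS = {
    "methodical": "Re-run your tests after each small change.",
    "explorer": "Step back and re-read the task brief before trying another fix.",
}

def predict_style(profile):
    """Classify working style from a conscientiousness-like score in [0, 1]."""
    return "methodical" if profile.get("conscientiousness", 0.5) >= 0.5 else "explorer"

def select_hint(profile):
    return HINTS[predict_style(profile)]

print(select_hint({"conscientiousness": 0.8}))
```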