Matching Contextual Ads and Web Page Contents through Computational Advertising: Getting the Best Match
The technological transformation and automation of digital content delivery have revolutionized the media industry, and the Internet is rapidly becoming an advertising channel in its own right. In the United States alone, Internet advertising revenues reached $7.3 billion in the first quarter of 2011, a 23 percent increase over the same period in 2010 (iab.net, 2011). Search engines such as Google, Yahoo, and MSN are the main beneficiaries of this investment and growth. The Malaysian advertising landscape is also gradually shifting from traditional media towards Internet advertising, although it remains at a budding stage; this leaves much room for growth as the industry fuels the digitization of content on Web applications.
This project discusses two types of Internet advertising, Contextual Ads and Sponsored Search Ads, with the major scope on Contextual Advertising. Both share the central challenge of finding the "best match" between a given context and a suitable advertisement through principled computational methods; hence the field is also referred to as Computational Advertising. The study also discusses the four main players in the Internet advertising ecosystem: Users, Advertisers, Ad Exchanges, and Publishers.
To address this central challenge, the study pursues two objectives: to select the Contextual Ads that best match Web page contents using the concepts of Computational Advertising, and to ensure that there is a valuable connection between the Web pages and the Contextual Ads.
The scope of the study therefore covers the theory of Computational Advertising itself, Contextual Ads, the matching of Contextual Ads to Web pages, and the most feasible way of creating a valuable connection between the two. At the end of each subtopic, insights on Internet advertising in the Malaysian context are discussed where relevant.
The study employs two main methods to address the research questions raised: extensive research and analysis of previous literature and journals, and in-depth surveys to collect related data and information in real-life situations. The gathered data and findings are then analyzed accordingly, and all discussions, conclusions, and future recommendations are presented in their respective sections. Finally, to demonstrate the working mechanism of matching Contextual Ads to Web pages using the Computational Advertising approach, Web pages together with an ad-matching system will be developed during the FYP-II timeline as the final product of the study.
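The core matching idea described above can be sketched with a simple bag-of-words cosine similarity between a page's text and each ad's copy. This is a minimal illustration of the concept, not the system developed in the project; the page and ad texts below are invented examples.

```python
import math
from collections import Counter

def term_vector(text):
    """Tokenize to lowercase words and count term frequencies."""
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    """Cosine similarity between two term-frequency vectors."""
    common = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in common)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def best_ad(page_text, ads):
    """Return the ad whose copy is most similar to the page content."""
    page_vec = term_vector(page_text)
    return max(ads, key=lambda ad: cosine_similarity(page_vec, term_vector(ad)))

page = "review of the latest mirrorless camera lenses and photography tips"
ads = [
    "cheap flights to kuala lumpur book now",
    "camera lenses on sale free shipping for photography gear",
    "learn python programming online course",
]
print(best_ad(page, ads))  # selects the camera-lens ad
```

Real contextual-ad systems weight terms (e.g. TF-IDF), use taxonomies, and factor in bids, but the "best match" objective reduces to a similarity maximization of this general shape.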
Data-driven Job Search Engine Using Skills and Company Attribute Filters
According to a report online, more than 200 million unique users search for
jobs online every month. This incredibly large and fast growing demand has
enticed software giants such as Google and Facebook to enter this space, which
was previously dominated by companies such as LinkedIn, Indeed and
CareerBuilder. Recently, Google released their "AI-powered Jobs Search Engine",
"Google For Jobs" while Facebook released "Facebook Jobs" within their
platform. These current job search engines and platforms allow users to search
for jobs based on general narrow filters such as job title, date posted,
experience level, company and salary. However, they have severely limited
filters relating to skill sets such as C++, Python, and Java and company
related attributes such as employee size, revenue, technographics and
micro-industries. These specialized filters can help applicants and companies
connect at a very personalized, relevant and deeper level. In this paper we
present a framework that provides an end-to-end "Data-driven Jobs Search
Engine". In addition, users can also receive potential contacts of recruiters
and senior positions for connection and networking opportunities. The high
level implementation of the framework is described as follows: 1) Collect job
postings data in the United States, 2) Extract meaningful tokens from the
postings data using ETL pipelines, 3) Normalize the data set to link company
names to their specific company websites, 4) Extract and rank the skill
sets, 5) Link the company names and websites to their respective company level
attributes with the EVERSTRING Company API, 6) Run user-specific search queries
on the database to identify relevant job postings and 7) Rank the job search
results. This framework offers a highly customizable and highly targeted search
experience for end users.
Comment: 8 pages, 10 figures, ICDM 201
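The filter-and-rank steps of the framework (steps 4, 6, and 7) can be sketched in miniature. The postings and attributes below are hypothetical stand-ins; the paper collects real US job postings and obtains company attributes from the EVERSTRING Company API, whose actual interface is not reproduced here.

```python
from collections import Counter

# Hypothetical postings; real data would come from the ETL pipeline
# and company attributes from an external API.
POSTINGS = [
    {"title": "Backend Engineer", "company": "Acme", "skills": ["python", "c++"],
     "attrs": {"employee_size": 5000, "revenue": "1B+"}},
    {"title": "Data Scientist", "company": "Globex", "skills": ["python", "java"],
     "attrs": {"employee_size": 200, "revenue": "10M-50M"}},
]

def rank_skills(postings):
    """Step 4: rank skills by frequency across all postings."""
    counts = Counter(s for p in postings for s in p["skills"])
    return [skill for skill, _ in counts.most_common()]

def search(postings, skill=None, min_employees=0):
    """Steps 6-7: filter by skill and company attributes, then rank
    the hits (here, simply by company size)."""
    hits = [p for p in postings
            if (skill is None or skill in p["skills"])
            and p["attrs"]["employee_size"] >= min_employees]
    return sorted(hits, key=lambda p: p["attrs"]["employee_size"], reverse=True)

print(rank_skills(POSTINGS))  # "python" ranks first (appears in both postings)
print([p["title"] for p in search(POSTINGS, skill="python", min_employees=1000)])
```

The point of the framework is precisely that these skill and company-attribute filters are first-class query parameters, unlike the coarse title/date/salary filters of existing engines.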
CHORUS Deliverable 2.1: State of the Art on Multimedia Search Engines
Based on information provided by European projects and national initiatives related to multimedia search, as well as by domain experts who participated in the CHORUS Think-Tank meetings and workshops, this document reports on the state of the art in multimedia content search from a technical and a socio-economic perspective.
The technical perspective includes an up-to-date view of content-based indexing and retrieval technologies, multimedia search in the context of mobile devices and peer-to-peer networks, and an overview of current evaluation and benchmark initiatives that measure the performance of multimedia search engines.
From a socio-economic perspective, we take stock of the impact and legal consequences of these technical advances and point out future directions of research.
Towards the disintermediation of creative music search: Analysing queries to determine important facets
Purpose: Creative professionals search for music to accompany moving images in films, advertising, and television. Some larger music rights holders (record companies and music publishers) organise their catalogues for online searching. These digital libraries are organised by various subjective musical facets as well as by artist and title metadata. This paper presents an analysis of written queries relating to creative music search, contextualised and discussed within the findings of text analyses from a larger research project whose aim is to investigate meaning making in this search process.
Method: A facet analysis of a collection of written music queries is discussed in relation to the organisation of the music in a selection of bespoke search engines.
Results: Subjective facets, in particular Mood, are found to be highly important in query formation. Unusually, detailed Music Structural aspects are also key.
Conclusions: These findings are discussed in relation to the disintermediation of this process. It is suggested that there are barriers to this, both in terms of classification and of commercial/legal factors.
Targeted Advertising using Location
Advertising is a key revenue driver for social sites. The explosive growth of social networks has made a wealth of data on customer tastes and preferences available, and this data can be exploited to serve customers better and to offer them more relevant advertisements. To deliver relevant advertisements, it is also important to consider the consumer's location: consumers are far more receptive to offers that are easily accessible in nearby areas. We propose a model combining social and spatial data to provide targeted advertisements. Social data is acquired through the user's Facebook profile, while the user's location is found with the help of beacons, supplemented by GPS (Global Positioning System). GPS allows the service to operate globally, serving multiple users, and it works independently of any Internet reception, though connectivity can enhance the usefulness of the positioning information.
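The spatial side of such a model reduces to filtering offers by distance from the user's GPS fix. A minimal sketch, assuming plain (latitude, longitude) coordinates and invented offers; beacon handling and the Facebook social signal are out of scope here:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def nearby_ads(user_pos, ads, max_km=5.0):
    """Keep only offers whose store location is within max_km of the user."""
    return [ad for ad in ads
            if haversine_km(user_pos[0], user_pos[1],
                            ad["lat"], ad["lon"]) <= max_km]

user = (3.1390, 101.6869)  # example fix: Kuala Lumpur city centre
ads = [
    {"offer": "coffee discount", "lat": 3.1420, "lon": 101.6900},  # ~0.5 km away
    {"offer": "spa voucher", "lat": 3.0738, "lon": 101.5183},      # ~20 km away
]
print([ad["offer"] for ad in nearby_ads(user, ads)])  # only the nearby offer
```

In a full system the surviving candidates would then be re-ranked by the social-profile match before being shown to the user.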
CHORUS Deliverable 2.2: Second report - identification of multi-disciplinary key issues for gap analysis toward EU multimedia search engines roadmap
After addressing the state-of-the-art during the first year of Chorus and establishing the existing landscape in
multimedia search engines, we have identified and analyzed gaps within European research effort during our second year.
In this period we focused on three directions, notably technological issues, user-centred issues and use-cases and socio-
economic and legal aspects. These were assessed by two central studies: firstly, a concerted vision of functional breakdown
of generic multimedia search engine, and secondly, a representative use-cases descriptions with the related discussion on
requirement for technological challenges. Both studies have been carried out in cooperation and consultation with the
community at large through EC concertation meetings (multimedia search engines cluster), several meetings with our
Think-Tank, presentations in international conferences, and surveys addressed to EU projects coordinators as well as
National initiatives coordinators. Based on the obtained feedback we identified two types of gaps, namely core
technological gaps that involve research challenges, and "enablers", which are not necessarily technical research
challenges, but have impact on innovation progress. New socio-economic trends are presented as well as emerging legal
challenges.
FilteredWeb: A Framework for the Automated Search-Based Discovery of Blocked URLs
Various methods have been proposed for creating and maintaining lists of
potentially filtered URLs to allow for measurement of ongoing internet
censorship around the world. Whilst testing a known resource for evidence of
filtering can be relatively simple, given appropriate vantage points,
discovering previously unknown filtered web resources remains an open
challenge.
We present a new framework for automating the process of discovering filtered
resources through the use of adaptive queries to well-known search engines. Our
system applies information retrieval algorithms to isolate characteristic
linguistic patterns in known filtered web pages; these are then used as the
basis for web search queries. The results of these queries are then checked for
evidence of filtering, and newly discovered filtered resources are fed back
into the system to detect further filtered content.
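The query-generation step can be illustrated with a simple TF-IDF-style scoring that surfaces terms frequent in known-filtered pages but rare in a background corpus; the paper's actual information retrieval algorithms may differ, and the documents below are invented toy data.

```python
import math
from collections import Counter

def characteristic_terms(filtered_docs, background_docs, k=3):
    """Score terms that are frequent in known-filtered pages but rare in
    a background corpus; the top-k terms seed new search-engine queries."""
    tf = Counter(w for d in filtered_docs for w in d.lower().split())
    df = Counter()  # document frequency in the background corpus
    for d in background_docs:
        df.update(set(d.lower().split()))
    n = len(background_docs)
    scores = {w: c * math.log((n + 1) / (df[w] + 1)) for w, c in tf.items()}
    return [w for w, _ in sorted(scores.items(), key=lambda x: -x[1])[:k]]

filtered = ["proxy circumvention mirror site",
            "circumvention tools and proxy lists"]
background = ["weather forecast today",
              "football match results",
              "cooking recipes"]
print(characteristic_terms(filtered, background))
```

Each returned term becomes a search query; pages in the results are then probed for filtering, and any newly confirmed filtered pages are fed back in, closing the loop described above.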
Our implementation of this framework, applied to China as a case study, shows
that this approach is demonstrably effective at detecting significant numbers
of previously unknown filtered web pages, making a significant contribution to
the ongoing detection of internet filtering as it develops.
Our tool is currently deployed and has been used to discover 1355 domains
that are poisoned within China as of Feb 2017 - 30 times more than are
contained in the most widely-used public filter list. Of these, 759 are outside
of the Alexa Top 1000 domains list, demonstrating the capability of this
framework to find more obscure filtered content. Further, our initial analysis
of filtered URLs, and the search terms that were used to discover them, gives
further insight into the nature of the content currently being blocked in
China.
Comment: To appear in "Network Traffic Measurement and Analysis Conference 2017" (TMA2017)