    Scraping the Social? Issues in live social research

    What makes scraping methodologically interesting for social and cultural research? This paper seeks to contribute to debates about digital social research by exploring how a ‘medium-specific’ technique for online data capture may be rendered analytically productive for social research. As a device that is currently being imported into social research, scraping has the capacity to re-structure social research in at least two ways. Firstly, as a technique that is not native to social research, scraping risks introducing ‘alien’ methodological assumptions into social research (such as a preoccupation with freshness). Secondly, to scrape is to risk importing into our inquiry categories that are prevalent in the social practices enabled by the media: scraping makes already formatted data available for social research. Scraped data, and online social data more generally, tend to come with ‘external’ analytics already built in. This circumstance is often approached as a ‘problem’ with online data capture, but we propose it may be turned into a virtue, insofar as data formats that have currency in the areas under scrutiny may serve as a source of social data themselves. Scraping, we propose, makes it possible to render traffic between the object and process of social research analytically productive. It enables a form of ‘real-time’ social research, in which the formats and life cycles of online data may lend structure to the analytic objects and findings of social research. By way of a conclusion, we demonstrate this point in an exercise of online issue profiling, and more particularly, by relying on Twitter to profile the issue of ‘austerity’. Here we distinguish between two forms of real-time research: those dedicated to monitoring live content (which terms are current?) and those concerned with analysing the liveliness of issues (which topics are happening?).
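The issue-profiling exercise described above (which terms are current in which time window?) can be sketched minimally. The tweet data and hashtag counting below are illustrative assumptions, not the paper's actual pipeline:

```python
from collections import Counter
from datetime import datetime

# Hypothetical sample: (timestamp, tweet text) pairs mentioning 'austerity'
tweets = [
    ("2012-05-01T09:00", "austerity cuts hit public services #austerity"),
    ("2012-05-01T09:30", "protests against the austerity budget #cuts"),
    ("2012-05-02T10:00", "new austerity measures announced #austerity #cuts"),
]

def terms_per_day(tweets):
    """Group hashtags by day, showing which terms are current in each window."""
    windows = {}
    for ts, text in tweets:
        day = datetime.fromisoformat(ts).date().isoformat()
        tags = [w for w in text.split() if w.startswith("#")]
        windows.setdefault(day, Counter()).update(tags)
    return windows

profile = terms_per_day(tweets)
```

Comparing the per-day counters over time is one simple way to move from monitoring live content to tracking the liveliness of an issue.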

    PDF Text Searching System

    This project develops a simple PDF text-searching system capable of searching and processing the information in text files on a user's PC and in local networks. The main purpose of the project is to assist users in finding PDF documents and files within their local drives, where the appropriate documents can be found by entering the desired search terms (keywords) into the PDF Text Searching System. Two objectives were set for this project. The first is to study and better understand the software used to develop the PDF text-searching system; the second is to develop a PDF text-searching system capable of searching and processing the information in text files on a user's PC and in local networks. For the methodology, the Rapid Application Development (RAD) approach was employed. It was chosen because it is effective and suitable for short-duration projects, and because it is designed for developers and users to work together intensively toward their goal. By using the RAD methodology, the project was completed within the time allocated. The results and discussion section covers all the outcomes obtained from the project, based on the surveys and questionnaires conducted. The findings gained there determine whether the proposed system is acceptable and meets users' needs. To provide better service, some suggestions are put forward for future enhancement, which can make the current system more efficient and effective.
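The core of such a system (walk local drives, match files against all entered keywords) can be sketched with the standard library. This is an illustrative assumption about the design, not the project's actual code; a real PDF search would first extract text with a PDF library, whereas this sketch reads files as plain text:

```python
import os

def search_files(root, keywords, exts=(".txt", ".pdf")):
    """Walk a directory tree and return files whose text contains every keyword."""
    keywords = [k.lower() for k in keywords]
    matches = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if not name.lower().endswith(exts):
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as f:
                    text = f.read().lower()
            except OSError:
                continue  # unreadable file: skip rather than abort the search
            if all(k in text for k in keywords):
                matches.append(path)
    return matches
```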

    Search engine optimization and advertising in social media : case: Total Sec

    Competition in modern search engine rankings is ruthless. Page-one visibility in search engines has become a challenging task which requires knowledge of the anatomy of search engines. The purpose of this study is to provide a company with a better understanding of search engines and social media marketing, and to provide it with the tools for improvement. The study is done to enhance the company’s own ranking in the search engines. It concentrates on the Google search engine, which is the largest search engine provider at the moment and the one most used by the case company. The study's main weight is on the theory of search engines, based on multiple up-to-date literature publications and electronic sources. The study was carried out by a business student with the aim of making the theory and practice understandable to business-oriented people with little background in information technology. It is important to understand modern search engine algorithms in order to compete in a constantly changing market environment. The study looks at the basic fundamentals and tools for gaining better rankings in the Google search engine. Total Sec Oy serves as the case company for the thesis. The study aims to improve the Total Sec website's ranking compared to its competitors without sacrificing the user-friendliness of the site. As the main result, the study revealed deficiencies in the website’s keyword meta tag references, which have an impact on the findability of a page. In addition, it exposed incoherence in the linking structure of the site. (A Finnish version of this abstract has been omitted as a duplicate.)
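The keyword meta tag finding can be illustrated with a minimal page-head sketch. The tag names are standard HTML; the content values are hypothetical, and it is worth noting that Google has publicly stated it largely ignores the keywords meta tag for ranking, so an audit today would weight the title and description far more heavily:

```html
<head>
  <title>Total Sec - IT security services</title>
  <!-- Typically shown as the snippet in search results -->
  <meta name="description" content="Hypothetical description of the company's services.">
  <!-- The tag type the study audited; largely ignored by Google for ranking -->
  <meta name="keywords" content="it security, total sec">
</head>
```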

    Security Analysis of Web and Embedded Applications

    As we put more trust in the computer systems we use, the need for security is increasing. And while security features like HTTPS are becoming commonplace on the web, securing applications remains difficult. This thesis focuses on analyzing different computer ecosystems to detect vulnerabilities and develop countermeasures. This includes web browsers, web applications, and cyber-physical systems such as Android Automotive. For web browsers, we analyze how new security features might solve a problem but introduce new ones. We show this by performing a systematic analysis of the new Content Security Policy (CSP) directive navigate-to. In our research, we find that it does introduce new vulnerabilities, to which we recommend countermeasures. We also create AutoNav, a tool capable of automatically suggesting navigation policies for this directive. To improve the security of web applications, we develop a novel black-box method by combining the strengths of different black-box methods. We implement this in our scanner Black Widow, which we compare with other leading web application scanners. Black Widow both improves the coverage of the web application and finds more vulnerabilities, including ones in Prestashop, WordPress, and HotCRP. For embedded systems, we analyze the new attack vectors introduced by combining a phone OS with vehicle APIs and find new attacks pertaining to safety, privacy, and availability. Furthermore, we create AutoTame, which is designed to analyze third-party apps for vehicles for the vulnerabilities we found.
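For context, a navigation policy of the kind AutoNav suggests would be delivered like any other CSP directive, in a response header. The sketch below follows the CSP Level 3 draft syntax; the directive was a draft proposal and, to our knowledge, has not shipped in mainstream browsers, so both the syntax and the host are assumptions:

```
Content-Security-Policy: navigate-to 'self' https://trusted.example
```

The intent of such a policy is that the page may only initiate navigations to itself or to the listed origin, which is precisely the kind of restriction whose edge cases the thesis analyzes.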

    Do Social Bots Dream of Electric Sheep? A Categorisation of Social Media Bot Accounts

    So-called 'social bots' have garnered a lot of attention lately. Previous research showed that they attempted to influence political events such as the Brexit referendum and the US presidential elections. It remains, however, somewhat unclear what exactly can be understood by the term 'social bot'. This paper addresses the need to better understand the intentions of bots on social media and to develop a shared understanding of how 'social' bots differ from other types of bots. We thus describe a systematic review of publications that researched bot accounts on social media. Based on the results of this literature review, we propose a scheme for categorising bot accounts on social media sites. Our scheme groups bot accounts by two dimensions: imitation of human behaviour and intent.
    Comment: Accepted for publication in the Proceedings of the Australasian Conference on Information Systems, 201
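The two-dimensional scheme can be sketched as a small data structure. The specific levels on each dimension and the example accounts below are illustrative assumptions; the abstract names only the dimensions themselves:

```python
from dataclasses import dataclass
from enum import Enum

class Imitation(Enum):
    """How strongly the account imitates human behaviour (levels assumed)."""
    LOW = "little or no imitation of human behaviour"
    HIGH = "imitates human behaviour"

class Intent(Enum):
    """The account operator's intent (levels assumed)."""
    BENIGN = "benign"
    NEUTRAL = "neutral"
    MALICIOUS = "malicious"

@dataclass
class BotAccount:
    name: str
    imitation: Imitation
    intent: Intent

# Hypothetical examples placed in the two-dimensional grid
spambot = BotAccount("promo_bot_42", Imitation.HIGH, Intent.MALICIOUS)
newsbot = BotAccount("weather_alerts", Imitation.LOW, Intent.BENIGN)
```

A 'social bot' in the paper's sense would then occupy the high-imitation region of this grid, regardless of intent.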

    Crawler 2.0: A search tool to assist law enforcement with investigations

    Over the past few years, the internet has been evolving rapidly and a new paradigm in web development has taken shape. Often referred to as Web 2.0, it is a shift in web development which focuses on sharing information and allowing user interaction. The sharing of information by users has resulted in a new location for law enforcement to discover evidence. However, the process of locating this evidence is often a tedious one. Crawler 2.0 is a tool designed with law enforcement's needs in mind: a web crawler and parser built for Web 2.0 sites. Given a Web 2.0 page as a starting point, it will interpret known content types and provide a basis for keyword searches. Crawler 2.0 is intended to be expandable, allowing the addition of new, updated, or custom sites and technologies.
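The crawl-and-keyword-search core of such a tool can be sketched with the standard library. The structure below (breadth-first crawl plus an anchor-tag parser) is an illustrative assumption, not Crawler 2.0's actual design; `fetch` is injected so the sketch works against an in-memory site as well as a real HTTP fetcher:

```python
from collections import deque
from html.parser import HTMLParser

class LinkParser(HTMLParser):
    """Collect href targets from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(fetch, start, keyword, limit=100):
    """Breadth-first crawl from `start`; return pages containing `keyword`.
    `fetch(url)` returns the page's HTML, or None if unavailable."""
    seen, hits = {start}, []
    queue = deque([start])
    while queue and len(seen) <= limit:
        url = queue.popleft()
        html = fetch(url)
        if html is None:
            continue
        if keyword.lower() in html.lower():
            hits.append(url)
        parser = LinkParser()
        parser.feed(html)
        for link in parser.links:
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return hits
```

A production crawler would add per-site parsers for known Web 2.0 content types, which is where the tool's expandability comes in.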

    Discovering location based services: A unified approach for heterogeneous indoor localization systems

    The technological solutions and communication capabilities offered by the Internet of Things paradigm, in terms of the rising availability of wearable devices, ubiquitous internet connectivity, and the presence on the market of service-oriented solutions, have enabled a wide range of Location Based Services (LBS). In the near future, we foresee that companies and service providers will have developed reliable solutions for indoor positioning, as a basis for useful location based services. These solutions will differ from each other and will adopt different hardware and processing techniques. This paper proposes a unified approach for Indoor Localization Systems that enables cooperation between heterogeneous solutions and their functional modules. To this end, we designed an integrated architecture that, by abstracting its main components, allows seamless interaction among them. Finally, we present a working prototype of this architecture, based on the popular Telegram application for Android, as an integration demonstrator. The integration of the three main phases (namely the discovery phase, the User Agent self-configuration, and the indoor map retrieval/rendering) demonstrates the feasibility of the proposed integrated architecture.
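The "abstracting its main components" idea can be sketched as a common interface that heterogeneous positioning solutions would implement. The method names, phases, and the fake BLE system below are illustrative assumptions, not the paper's actual API:

```python
from abc import ABC, abstractmethod

class IndoorLocalizationSystem(ABC):
    """Interface a unified architecture could expose so heterogeneous
    solutions (Wi-Fi, BLE, UWB, ...) become interchangeable."""

    @abstractmethod
    def discover(self) -> bool:
        """Discovery phase: is this system available at the current venue?"""

    @abstractmethod
    def configure(self, venue_id: str) -> None:
        """User Agent self-configuration for the discovered venue."""

    @abstractmethod
    def position(self) -> tuple:
        """Return an (x, y, floor) estimate in the venue's indoor map frame."""

class FakeBleSystem(IndoorLocalizationSystem):
    """Stand-in implementation used only to demonstrate the interface."""
    def discover(self):
        return True
    def configure(self, venue_id):
        self.venue = venue_id
    def position(self):
        return (1.0, 2.0, 0)
```

A client (such as the Telegram-based prototype) would then drive any concrete system through the same three-phase sequence: discover, configure, position.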

    Just Google It: Keywords, Digital Marketing, and the Professional Writer

    A modern world is a digital one. People now search as much as they socialize on the Internet, and every day millions of people are asking Google questions. Subsequently, Google promptly provides myriad answers. My year-long Honors in the Discipline research project analyzes a staple of the digital era: the Google search engine. My research combines my Data Analytics and Mathematics minors with my English-Professional Writing major to bridge the gaps between my humanities and mathematical interests. From its origin to its current state and all the cookies in between, I uncover the Google Search Engine and the power it has over the current technological climate. One seemingly simple algorithm has significantly changed the way the world retrieves and perceives information. SEO. Google Analytics. Keywords. Content. Social Media. Blogs. This modern terminology makes up most of the job descriptions professional writing students will encounter in their searches for post-graduation employment. Therefore, my project serves as an independent exploration into an emerging professional field. My research describes the Google Analytics certification process, Google’s PageRank algorithm, and hands-on exploration of the two through a fall internship experience. Thus, this research project serves as my exploration into this shifting industry. Through a combination of my linguistic and mathematical interests with my professional goals, I hope to creatively contribute to both the humanities and the sciences, while inspiring other students and professionals to do so as well.
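The "seemingly simple algorithm" mentioned above, PageRank, can be sketched as a short power iteration. The toy three-page graph is an illustrative assumption; the update rule (each page shares its damped rank across its outgoing links, with a uniform teleport term) is the standard textbook formulation:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Power-iteration PageRank over a dict {page: [outgoing links]}."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1.0 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:
                share = damping * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:  # dangling page: spread its rank uniformly
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

# Toy web: a and b link to each other, b also links to c, c links back to a
graph = {"a": ["b"], "b": ["a", "c"], "c": ["a"]}
ranks = pagerank(graph)
```

Page "a" ends up highest because it receives rank from both "b" and "c", which is the intuition behind links acting as votes.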