
    AltURI: a thin middleware for simulated robot vision applications

    Fast software performance is often the focus when developing real-time vision-based control applications for robot simulators. In this paper we present a thin, high-performance middleware for USARSim and other simulators, designed for real-time vision-based control applications. It includes a fast image server providing images in OpenCV, Matlab or web formats, and a simple command/sensor processor. The interface has been tested in USARSim with an Unmanned Aerial Vehicle using two control applications: landing using a reinforcement learning algorithm, and altitude control using elementary motion detection. The middleware has been found to be fast enough to control the flying robot, as well as very easy to set up and use.
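    The abstract describes an image server that vision-control clients poll for frames. As a rough illustration only, the sketch below shows what a minimal Python client for such a server might look like; the endpoint URL, port, and JPEG-over-HTTP transport are assumptions for illustration, not details taken from the paper.

```python
# Hypothetical client for an AltURI-style image server. Assumes frames are
# served as JPEG over HTTP on localhost:8080; the real interface may differ.
import urllib.request

import cv2
import numpy as np

IMAGE_URL = "http://localhost:8080/image"  # hypothetical endpoint

def fetch_frame(url: str = IMAGE_URL) -> np.ndarray:
    """Fetch one camera frame and decode it into an OpenCV BGR image."""
    raw = urllib.request.urlopen(url, timeout=1.0).read()
    buf = np.frombuffer(raw, dtype=np.uint8)
    frame = cv2.imdecode(buf, cv2.IMREAD_COLOR)
    if frame is None:
        raise RuntimeError("server did not return a decodable image")
    return frame

if __name__ == "__main__":
    frame = fetch_frame()
    print("received frame of shape", frame.shape)
```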

    Oblivion: Mitigating Privacy Leaks by Controlling the Discoverability of Online Information

    Search engines are the prevalently used tools to collect information about individuals on the Internet. Search results typically comprise a variety of sources that contain personal information -- either intentionally released by the person herself, or unintentionally leaked or published by third parties, often with detrimental effects on the individual's privacy. To grant individuals the ability to regain control over their disseminated personal information, the European Court of Justice recently ruled that EU citizens have a right to be forgotten, in the sense that indexing systems must offer them technical means to request removal of links from search results that point to sources violating their data protection rights. As of now, these technical means consist of a web form that requires a user to manually identify all relevant links upfront and to insert them into the web form, followed by a manual evaluation by employees of the indexing system to assess whether the request is eligible and lawful. We propose Oblivion, a universal framework to support the automation of the right to be forgotten in a scalable, provable and privacy-preserving manner. First, Oblivion enables a user to automatically find and tag her disseminated personal information using natural language processing and image recognition techniques, and to file a request in a privacy-preserving manner. Second, Oblivion provides indexing systems with an automated and provable eligibility mechanism, asserting that the author of a request is indeed affected by an online resource. The automated eligibility proof ensures censorship-resistance, so that only legitimately affected individuals can request the removal of corresponding links from search results. We have conducted comprehensive evaluations, showing that Oblivion is capable of handling 278 removal requests per second and is hence suitable for large-scale deployment.
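    The abstract does not spell out the eligibility protocol, but its basic shape, a request that the indexing system can verify was filed by the affected person, can be sketched with ordinary digital signatures. The Python sketch below is only an illustration of that shape: the function names are invented here, and the real Oblivion mechanism additionally binds the request to verified identity attributes found in the offending resource.

```python
# Loose sketch of a verifiable removal request, in the spirit of an
# Oblivion-style eligibility check. All names are illustrative, not the
# paper's API; the actual protocol is more elaborate.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def file_removal_request(private_key: Ed25519PrivateKey, url: str):
    """User side: sign the URL whose search-result link should be removed."""
    message = url.encode("utf-8")
    return message, private_key.sign(message)

def verify_removal_request(public_key, message: bytes, signature: bytes) -> bool:
    """Indexing-system side: accept only requests bearing a valid signature."""
    try:
        public_key.verify(signature, message)
        return True
    except InvalidSignature:
        return False

if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    msg, sig = file_removal_request(key, "https://example.com/offending-page")
    print("request accepted:", verify_removal_request(key.public_key(), msg, sig))
```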

    From the invalidity of a General Classification Theory to a new organization of knowledge for the millennium to come

    Proceedings of the 10th Conference of the German Section of the International Society for Knowledge Organization (Internationale Gesellschaft für Wissensorganisation), Vienna, 3-5 July 2006. The idea of organizing knowledge and the determinism in classification structures implicitly involve certain limits which are translated into a General Theory on the Classification of Knowledge, given that classification responds to specific parameters and structures more than to a theoretical concept. The classification of things is a reflection of their classification by man, and this is what determines classification structures. The classification and organization of knowledge are presented to us as an artificial construct, a useful fiction elaborated by man. Positivist knowledge reached its peak in the 20th century, when science classifications, and the classification systems implemented on their basis, were gestated and consolidated. Pragmatism was to serve as the epistemological and theoretical basis for science and its classification. If the classification of the sciences has given rise to classification systems, the organisation and representation of knowledge must now, in the context of the globalisation of electronic information, give rise to the hypertextual organisational form of electronic information where, if in information the medium was the message, in organisation the medium is the structure. The virtual reality of electronic information delves even deeper into it; the process is completed as the subject attempts to look for information. This information market needs international standards for documents and data. This body of information organization will be characterized by its dynamic nature. If formal and material structures change our concept of knowledge and the way it is structured, then this organization will undergo dynamic change along with the material and formal structures of the real world. The semantic web is a qualitative leap which can be glimpsed on the new knowledge horizon; the latter would be shaped with the full integration of contents and data, the language itself including data and its rules of reasoning or representation system. The new organisation of knowledge points to a totally new conception; post-modern epistemology has yet to be articulated. In the 21st century, the organization of electronic information is presenting a novel hypertextual, non-linear architecture that will lead to a new change in the paradigm for the organization of knowledge for the millennium to come.

    A novel defense mechanism against web crawler intrusion

    Web robots, also known as crawlers or spiders, are used by search engines, hackers and spammers to gather information about web pages. Timely detection and prevention of unwanted crawlers increases the privacy and security of websites. In this research, a novel method to identify web crawlers is proposed to prevent unwanted crawlers from accessing websites. The proposed method suggests a five-factor identification process to detect unwanted crawlers. This study provides the pretest and posttest results, along with a systematic evaluation of web pages with the proposed identification technique versus web pages without it. An experiment was performed with repeated measures for two groups, each containing ninety web pages. The outputs of the logistic regression analysis of the treatment and control groups confirm the novel five-factor identification process as an effective mechanism to prevent unwanted web crawlers. This study concludes that the proposed five-factor identification process is a very effective technique, as demonstrated by the successful outcome.
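    The abstract mentions logistic regression over a five-factor identification process without enumerating the factors. The sketch below shows, purely as an illustration, how such a classifier could be trained on per-visitor features; the five features used here (request rate, robots.txt access, empty referrer, HEAD/GET ratio, night-time activity) are hypothetical stand-ins, not the paper's factors.

```python
# Illustrative crawler-vs-human classifier using logistic regression.
# The feature set and toy data are invented for this sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [req_per_min, fetched_robots_txt, empty_referrer,
#            head_get_ratio, night_activity]; label 1 = crawler, 0 = human.
X = np.array([
    [120, 1, 1, 0.8, 0.9],   # aggressive bot
    [90,  1, 1, 0.5, 0.7],   # polite bot
    [3,   0, 0, 0.0, 0.1],   # human browsing
    [5,   0, 1, 0.0, 0.2],   # human whose browser strips the referrer
])
y = np.array([1, 1, 0, 0])

model = LogisticRegression().fit(X, y)
visitor = np.array([[60, 1, 1, 0.4, 0.6]])
print("crawler probability:", model.predict_proba(visitor)[0, 1])
```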

    Ono: an open platform for social robotics

    In recent times, the focal point of research in robotics has shifted from industrial robots toward robots that interact with humans in an intuitive and safe manner. This evolution has resulted in the subfield of social robotics, which pertains to robots that function in a human environment and that can communicate with humans in an intuitive way, e.g. with facial expressions. Social robots have the potential to impact many different aspects of our lives, but one particularly promising application is the use of robots in therapy, such as the treatment of children with autism. Unfortunately, many of the existing social robots are suited neither for practical use in therapy nor for large-scale studies, mainly because they are expensive, one-of-a-kind robots that are hard to modify to suit a specific need. We created Ono, a social robotics platform, to tackle these issues. Ono is composed entirely of off-the-shelf components and cheap materials, and can be built at a local FabLab at a fraction of the cost of other robots. Ono is also entirely open source, and its modular design further encourages modification and reuse of parts of the platform.

    Let the Crawlers Crawl: On Virtual Gatekeepers and the Right to Exclude Indexing

    Symposium: Copyright's Balance in an Internet World.