
    PRONOM-ROAR: Adding Format Profiles to a Repository Registry to Inform Preservation Services

    To date, many institutional repository (IR) software suppliers have pushed the IR as a digital preservation solution. We argue that the digital preservation of objects in IRs may be better achieved through the use of lightweight, add-on services. We present such a service – PRONOM-ROAR – that generates file format profiles for IRs. This demonstrates the potential of using third-party services to provide preservation expertise to IR managers by making use of existing machine interfaces to IRs.
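
    As a rough sketch of how such a lightweight service could exploit a repository's existing machine interface, the Python fragment below harvests records over OAI-PMH and tallies dc:format values into a simple format profile. The endpoint URL is hypothetical, and PRONOM-ROAR's actual harvesting and format-identification pipeline is not described in the abstract; this only illustrates the general shape of the approach, assuming the repository exposes Dublin Core records with format fields.

        # Sketch: build a file-format profile from a repository's OAI-PMH
        # interface. The endpoint is a placeholder, not a real repository.
        # (A full harvester would also follow resumptionToken pagination.)
        from collections import Counter
        import urllib.request
        import xml.etree.ElementTree as ET

        OAI_ENDPOINT = "https://repository.example.org/oai"  # hypothetical
        DC = "{http://purl.org/dc/elements/1.1/}"

        def format_profile(endpoint):
            """Tally dc:format values (e.g. MIME types) across harvested records."""
            url = endpoint + "?verb=ListRecords&metadataPrefix=oai_dc"
            with urllib.request.urlopen(url) as resp:
                tree = ET.parse(resp)
            return Counter(el.text.strip() for el in tree.iter(DC + "format") if el.text)

        if __name__ == "__main__":
            for fmt, count in format_profile(OAI_ENDPOINT).most_common():
                print(f"{count:6d}  {fmt}")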

    RGG: A general GUI Framework for R scripts

    Background: R is the leading open source statistics software, with a vast number of biostatistical and bioinformatics analysis packages. To exploit the advantages of R, however, extensive scripting/programming skills are required.

    Results: We have developed a software tool called R GUI Generator (RGG) which enables the easy generation of Graphical User Interfaces (GUIs) for the programming language R by adding a few Extensible Markup Language (XML) tags. RGG consists of an XML-based GUI definition language and a Java-based GUI engine. GUIs are generated at runtime from GUI tags embedded in the R script; user input collected through the GUI is returned to the R code and replaces the XML tags. RGG files can be developed using any text editor. The current version of RGG is available as stand-alone software (RGGRunner) and as a plug-in for JGR.

    Conclusion: RGG is a general GUI framework for R that has the potential to introduce R statistics (R packages, built-in functions and scripts) to users with limited programming skills and helps to bridge the gap between R developers and GUI-dependent users. RGG aims to abstract GUI development from individual GUI toolkits by using an XML-based GUI definition language, so it can be easily integrated into any software. The RGG project further includes the development of a web-based repository for RGG GUIs. RGG is an open source project licensed under the Lesser General Public License (LGPL) and can be downloaded freely at http://rgg.r-forge.r-project.org
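
    The core mechanism described above, GUI tags embedded in an R script whose values are substituted back into the code at runtime, can be sketched generically. The tag syntax and console prompts below are invented stand-ins, not RGG's real XML schema or GUI engine; the Python sketch only illustrates the substitution step.

        # Illustrative only -- not RGG's real syntax. Find XML-style GUI
        # tags embedded in an R script, ask the user for each value, and
        # splice the answers back into the code in place of the tags,
        # mirroring what a GUI engine does with widget input at runtime.
        import re

        R_SCRIPT = '''
        data <- read.csv(<filechooser label="Input CSV"/>)
        k <- <slider label="Clusters" min="2" max="10"/>
        print(kmeans(data, centers = k))
        '''

        TAG = re.compile(r"<(\w+)([^/>]*)/>")

        def instantiate(script):
            def prompt(match):
                widget, attrs = match.group(1), match.group(2)
                # A real engine would render a widget here; a console
                # prompt keeps this example self-contained.
                return input(f"[{widget} {attrs.strip()}] value: ")
            return TAG.sub(prompt, script)

        if __name__ == "__main__":
            print(instantiate(R_SCRIPT))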

    Forensic Box for Quick Network-Based Security Assessments

    Network security assessments are seen as important, yet cumbersome and time-consuming tasks, mostly due to the use of different and manually operated tools. These are often very specialized tools that need to be mastered and combined, and they sometimes require that a testing environment be set up. Nonetheless, in many cases it would be useful to obtain an audit swiftly and on demand, even if in less detail. Such audits could serve as an initial step for a more detailed evaluation of the network security, complement other audits, or help prevent major data leaks and system failures due to common configuration, management or implementation issues. This dissertation describes the work towards the design and development of a portable system for quick network security assessments, and the research on the automation of many of the tasks (and associated tools) composing that process. An embodiment of such a system was built using a Raspberry Pi 2; several well-known open source tools, whose functions range from network discovery, service identification, Operating System (OS) fingerprinting and network sniffing to vulnerability discovery; and custom scripts and programs for connecting all the different parts that comprise the system. The tools are integrated seamlessly with the system, allowing deployment in wired or wireless network environments, where the device carries out a mostly automated and thorough analysis. The device is near plug-and-play and produces a structured report at the end of the assessment. Several simple functions, such as re-scanning the network or performing Address Resolution Protocol (ARP) poisoning on the network, are readily available through a small LCD display mounted on top of the device. It offers a web-based interface, also developed within the scope of this work, for finer configuration of the several tools and for viewing the report. Other specific outputs, such as PCAP files with collected traffic, are available for further analysis. The system was operated in controlled and real networks to verify the quality of its assessments, and the obtained results were compared with the results of manually auditing the same networks. The device was able to detect many of the issues that the human auditor detected, but showed some shortcomings with some specific vulnerabilities, mainly Structured Query Language (SQL) injections. The image of the OS with the pre-configured tools, automation scripts and programs is available for download from [Ber16b]. It comprises one of the main outputs of this work.
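
    A hedged illustration of the kind of glue scripting the dissertation describes: the Python sketch below drives one of the well-known tools mentioned (nmap) and reduces its XML output to a structured report of open services per host. The flags, target network, and report shape are illustrative choices; the actual device chains several tools, from discovery and OS fingerprinting to sniffing and vulnerability scanning, and renders a much fuller report.

        # Glue-code sketch: run an nmap service scan and reduce the XML
        # output to {host: [(port, service), ...]}. Requires nmap on PATH;
        # flags and report shape are illustrative, not the device's own.
        import json
        import subprocess
        import xml.etree.ElementTree as ET

        def scan(network="192.168.1.0/24"):
            xml_out = subprocess.run(
                ["nmap", "-sV", "-oX", "-", network],  # "-oX -": XML to stdout
                capture_output=True, text=True, check=True,
            ).stdout
            report = {}
            for host in ET.fromstring(xml_out).iter("host"):
                addr = host.find("address").get("addr")
                report[addr] = [
                    (port.get("portid"), port.find("service").get("name", "unknown"))
                    for port in host.iter("port")
                    if port.find("state").get("state") == "open"
                    and port.find("service") is not None
                ]
            return report

        if __name__ == "__main__":
            print(json.dumps(scan(), indent=2))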

    Design of Theoretical Framework: Global and Local Parameters Requirements for Libraries

    The library is an important aspect of the modern reading environment, and a theoretical framework is indispensable for every library in the field of automated and digital library systems. In this original research paper, all parameters have been selected on the basis of global recommendations and local requirements for libraries, organized into six theoretical sections: (i) a theoretical framework for the integrated library system cluster; (ii) a theoretical framework for community communication and interaction; (iii) a theoretical framework for the digital media archiving cluster; (iv) a theoretical framework for the content management system; (v) a theoretical framework for the learning content management system; and (vi) a theoretical framework for the federated search system. In the integrated library system (ILS) cluster, two things are most important: the development of the ILS and open source ILS software. This cluster also requires parameter selection, which can be developed in three ways: basic parameter settings, a theoretical framework for housekeeping operations, and a theoretical framework for the information retrieval system. Software selection and parameter selection are likewise pivotal tasks in the theoretical framework for community communication and interaction. The theoretical framework for the digital media archiving cluster can be developed in three sections: selection of software, selection of standards, and selection of metadata for all libraries. The content management system framework can be developed in three ways: the workflow of the content management system, software selection in the CMS cluster, and parameter selection in the CMS cluster. The theoretical framework for the learning content management system for libraries is developed in three sections: components of the learning content management system, software selection in the LCMS cluster, and parameter selection in the LCMS cluster. Software selection and parameter selection are also important components of the federated search system theoretical framework for the development of a single-window interface.

    Master of Science

    As the visualization field matures, an increasing number of general toolkits are developed to cover a broad range of applications. However, no general tool can incorporate the latest capabilities for all possible applications, nor can the user interfaces and workflows be easily adjusted to accommodate all user communities. As a result, users will often choose either substandard solutions presented in familiar, customized tools or assemble a patchwork of individual applications glued together through ad-hoc scripts and extensive manual intervention. Instead, we need the ability to easily and rapidly assemble the best-in-task tools into custom interfaces and workflows to optimally serve any given application community. Unfortunately, creating such meta-applications at the API or SDK level is difficult, time-consuming, and often infeasible due to the sheer variety of data models, design philosophies, limits in functionality, and the use of closed commercial systems. In this thesis, we present the ManyVis framework, which enables custom solutions to be built both rapidly and simply by allowing coordination and communication across existing unrelated applications. ManyVis allows users to combine software tools with complementary characteristics into one virtual application driven by a single, custom-designed interface.
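
    The coordination idea can be illustrated at a toy scale. The Python sketch below links two invented tool adapters through a broker so that an event raised in one (say, a selection) is mirrored in the other. This captures the spirit, though certainly not the mechanism, of driving unrelated applications from a single virtual interface; ManyVis itself coordinates existing, even closed, binaries, which this sketch does not attempt.

        # Toy coordination broker: adapters wrap two (imaginary) tools
        # behind one facade, and every event one tool raises is mirrored
        # into the others, so a single front end can drive both.
        class ToolAdapter:
            def __init__(self, name):
                self.name = name
                self.handlers = []

            def on_event(self, handler):
                self.handlers.append(handler)

            def emit(self, event):
                # e.g. the user made a selection inside this tool
                for handler in self.handlers:
                    handler(self.name, event)

            def apply(self, source, event):
                # mirror an event that originated in another tool
                print(f"{self.name}: applying {event!r} from {source}")

        def link(tools):
            """Relay each tool's events to all the other linked tools."""
            for tool in tools:
                others = [t for t in tools if t is not tool]
                tool.on_event(lambda src, ev, others=others:
                              [o.apply(src, ev) for o in others])

        if __name__ == "__main__":
            viewer, plotter = ToolAdapter("viewer"), ToolAdapter("plotter")
            link([viewer, plotter])
            viewer.emit({"select": "region_7"})  # the plotter mirrors it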

    Dynamic Data Extraction and Data Visualization with Application to the Kentucky Mesonet

    There is a need to integrate large-scale databases, high-performance computing engines and geographical information system technologies into a user-friendly web interface as a platform for data visualization and customized statistical analysis. We present concepts and design ideas for dynamic data storage and extraction using open-source computing and mapping technologies, and we applied our methods to the Kentucky Mesonet automated weather mapping workflow. The main components of the workflow include a web-based interface and a robust database and computing infrastructure designed both for general users and for power users such as modelers and researchers.
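
    A minimal sketch of the kind of dynamic-extraction endpoint such a platform needs: a web route that pulls recent observations for one station from a database and returns JSON that a mapping front end can consume. The table, column names, and the Flask/SQLite stack are invented for illustration; the Mesonet's actual schema and infrastructure are not described in the abstract.

        # Hypothetical extraction endpoint: recent observations for one
        # station as JSON, ready for a mapping front end. Schema, database
        # file, and stack (Flask + SQLite) are invented for illustration.
        import sqlite3
        from flask import Flask, jsonify

        app = Flask(__name__)
        DB = "mesonet.db"  # placeholder for the real database

        @app.route("/api/observations/<station_id>")
        def observations(station_id):
            con = sqlite3.connect(DB)
            con.row_factory = sqlite3.Row
            rows = con.execute(
                "SELECT obs_time, air_temp_c, wind_speed_ms "
                "FROM observations WHERE station = ? "
                "ORDER BY obs_time DESC LIMIT 24",
                (station_id,),
            ).fetchall()
            con.close()
            return jsonify([dict(row) for row in rows])

        if __name__ == "__main__":
            app.run(debug=True)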

    Fingerprinting an Organization Using Metadata of Public Documents

    Many companies and organizations use the Internet for their business activities and to make information about their products and services more available to customers. These organizations often share electronic documents on their websites, such as manuals, whitepapers, guidelines, templates, and other documents considered important to share. Documents uploaded to an organization's website can contain extra information in the form of metadata. Metadata is defined as data which describes other data: metadata associated with a document can include the names of its authors and creators, the document's general properties, the name of the server, or the path where the document was modified. Metadata is added to documents mainly by an automated process when the document is created, and if it is not properly removed before sharing, it can contain sensitive information. People are usually unaware that metadata exists in documents and may unwillingly leak information about their organization or about themselves; this information can be used as a basis for fingerprinting or for conducting cyber attacks. In this thesis, the metadata of electronic documents shared on the websites of Estonian governmental organizations was analyzed. More specifically, the metadata of three institutions' public documents was examined in order to identify metadata vulnerabilities that can be used for fingerprinting purposes. To achieve this, a fingerprinting method was developed and applied to the observed websites. The thesis is divided into two stages: the first stage describes the developed fingerprinting method, and the second presents the outcomes of the metadata analysis performed with it. The results showed that almost all of the analyzed documents contained information which could be used for fingerprinting purposes. We processed 2643 documents, of which only 12 had their metadata properly removed. All other documents contained pieces of information describing the environment where the document was created, and additionally exposed information that could be used for conducting cyber attacks. The thesis is written in English and comprises 77 pages of text, 6 chapters, 41 figures, and 26 tables.
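
    A hedged sketch of the harvesting step such an analysis rests on: the Python fragment below reads the classic PDF document-information dictionary for each file in a folder and keeps the fields that leak author names, software versions, or timestamps. Real audits cover more formats and fields (Office documents, XMP packets, embedded paths); the directory name is a placeholder, and pypdf is only one common way to read these fields.

        # Sketch of the harvesting step: read each PDF's document
        # information dictionary and keep the identifying fields.
        # The directory name is a placeholder, not from the thesis.
        from pathlib import Path
        from pypdf import PdfReader

        FIELDS = ("/Author", "/Creator", "/Producer", "/CreationDate", "/ModDate")

        def fingerprint(pdf_dir):
            """Map each PDF to whatever identifying metadata it still carries."""
            leaks = {}
            for path in Path(pdf_dir).glob("*.pdf"):
                info = PdfReader(path).metadata or {}
                found = {f: str(info[f]) for f in FIELDS if info.get(f)}
                if found:
                    leaks[path.name] = found
            return leaks

        if __name__ == "__main__":
            for doc, fields in fingerprint("downloaded_documents").items():
                print(doc, fields)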