
    SPEIR: Scottish Portals for Education, Information and Research. Final Project Report: Elements and Future Development Requirements of a Common Information Environment for Scotland

    The SPEIR (Scottish Portals for Education, Information and Research) project was funded by the Scottish Library and Information Council (SLIC). It ran from February 2003 to September 2004, slightly longer than the 18 months originally scheduled, and was managed by the Centre for Digital Library Research (CDLR). With SLIC's agreement, community stakeholders were represented in the project by the Confederation of Scottish Mini-Cooperatives (CoSMiC), an organisation whose members include SLIC, the National Library of Scotland (NLS), the Scottish Further Education Unit (SFEU), the Scottish Confederation of University and Research Libraries (SCURL), regional cooperatives such as the Ayrshire Libraries Forum (ALF), and representatives from the Museums and Archives communities in Scotland.
    Aims: A Common Information Environment for Scotland. The aims of the project were to:
    - Conduct basic research into the distributed information infrastructure requirements of the Scottish Cultural Portal pilot and the public library CAIRNS integration proposal;
    - Develop associated pilot facilities by enhancing existing facilities or developing new ones;
    - Ensure that both infrastructure proposals and pilot facilities were sufficiently generic to be utilised in support of other portals developed by the Scottish information community;
    - Ensure the interoperability of infrastructural elements beyond Scotland through adherence to established or developing national and international standards.
    Since the Scottish information landscape is taken by CoSMiC members to encompass relevant activities in Archives, Libraries, Museums, and related domains, the project was, in essence, concerned with identifying, researching, and developing the elements of an internationally interoperable common information environment for Scotland, and with determining the best path for future progress.

    Studying Interaction Methodologies in Video Retrieval

    So far, several approaches have been studied to bridge the Semantic Gap, the bottleneck in image and video retrieval. However, no approach has been successful enough to improve retrieval performance significantly. One reason is a lack of understanding of the user's interests, a major precondition for adapting results to a user. This is partly due to the lack of appropriate interfaces and of knowledge about how to interpret users' actions with these interfaces. In this paper, we propose to study the importance of various implicit indicators of relevance. Furthermore, we propose to investigate how this implicit feedback can be combined with static user profiles towards an adaptive video retrieval model.
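    To make the proposal concrete, here is a minimal sketch of how several implicit indicators might be folded into one relevance score and blended with a static profile score. The indicator set, the weights, and the blending rule are illustrative assumptions, not the model the paper proposes.

```python
# Minimal sketch: combining implicit feedback with a static user profile.
# All indicator names, weights, and the blending rule are illustrative
# assumptions, not the authors' model.

from dataclasses import dataclass

@dataclass
class ImplicitFeedback:
    clicked: bool          # user opened the video
    play_fraction: float   # fraction of the clip actually watched (0..1)
    bookmarked: bool       # user saved the clip

def implicit_relevance(fb: ImplicitFeedback) -> float:
    """Fold several implicit indicators into one score in [0, 1]."""
    score = 0.0
    score += 0.2 if fb.clicked else 0.0
    score += 0.6 * fb.play_fraction
    score += 0.2 if fb.bookmarked else 0.0
    return min(score, 1.0)

def adaptive_score(profile_score: float, fb: ImplicitFeedback,
                   alpha: float = 0.7) -> float:
    """Blend a static profile match with observed implicit feedback.

    alpha controls how much weight the static profile keeps; the
    remainder comes from the implicit indicators.
    """
    return alpha * profile_score + (1 - alpha) * implicit_relevance(fb)

# Example: a video that matches the profile only moderately but was
# watched almost to the end gets promoted.
fb = ImplicitFeedback(clicked=True, play_fraction=0.9, bookmarked=False)
print(adaptive_score(profile_score=0.5, fb=fb))  # 0.572
```

    In a fully adaptive retrieval loop, the profile itself would also be updated from this feedback over time; the sketch only covers the per-result scoring step.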

    Cheese: Food Perception and Food Choice

    In light of the increasing interest in the economic and socio-political impact of the ‘traditional food’ trend, it is essential to understand the determinant factors that lead to traditional consumer choices. The standardization of sensory quality evaluation methods marks the pressing need for food product certification, particularly for foods with specific sensory characteristics, such as those with a Protected Designation of Origin (PDO). Consumer perception of particular foods, especially foods that are culturally and socially contingent, such as cheese, must be understood as both a psychophysical reflex and a learned social practice. Consumers form their own perceptions based on the overall intrinsic and extrinsic characteristics of a cheese, mainly sensory characteristics that also reflect its other attributes. These characteristics are normally linked to the specific cheese manufacturing process. Some patents propose the use of adapted cheesemaking equipment (EP1982582A2) suitable for the manufacture of small-scale cheeses, such as some PDO cheeses. Thus, sensory evaluation of any kind of cheese is based, in an initial phase, on knowledge of the sensory methods for cheese evaluation and, in a second phase, on familiarity with the cheese's characteristics and the verbalization of its desirable and undesirable attributes. This paper presents a case study of the traditional food product Évora cheese, a PDO cheese whose sensory and physicochemical quality attributes are essential to obtaining this designation and ensuring the genuine properties that characterize it, as well as ascertaining how it is perceived and accepted by the consumer.

    Losing the War Against Dirty Money: Rethinking Global Standards on Preventing Money Laundering and Terrorism Financing

    Following a brief overview in Part I.A of the overall system to prevent money laundering, Part I.B describes the role of the private sector, which is to identify customers, create a profile of their legitimate activities, keep detailed records of clients and their transactions, monitor their transactions to see if they conform to their profile, examine further any unusual transactions, and report to the government any suspicious transactions. Part I.C continues the description of the preventive measures system by describing the government's role, which is to assist the private sector in identifying suspicious transactions, ensure compliance with the preventive measures requirements, and analyze suspicious transaction reports to determine those that should be investigated. Parts I.D and I.E examine the effectiveness of this system. Part I.D discusses successes and failures in the private sector's role. Borrowing from theory concerning the effectiveness of private sector unfunded mandates, this Part reviews why many aspects of the system are failing, focusing on the subjectivity of the mandate, the disincentives to comply, and the lack of comprehensive data on client identification and transactions. It notes that the system includes an inherent contradiction: the public sector is tasked with informing the private sector how best to detect launderers and terrorists, but doing so could provide a road map for avoiding detection should such information fall into the wrong hands. Part I.D discusses how financial institutions do not and cannot use scientifically tested statistical means to determine whether a particular client or set of transactions is more likely than others to indicate criminal activity. Part I.D then turns to a discussion of a few issues regarding the system's impact that are unrelated to effectiveness, followed by a summary and analysis of how flaws might be addressed. Part I.E continues by discussing the successes and failures in the public sector's role. It reviews why the system is failing, focusing on the lack of assistance to the private sector and the lack of necessary data on client identification and transactions. It also discusses how financial intelligence units, like financial institutions, do not and cannot use scientifically tested statistical means to determine probabilities of criminal activity. Part I concludes with a summary and analysis tying both private and public roles together. Part II then turns to a review of certain current techniques for selecting income tax returns for audit. After an overview of the system, Part II first discusses the limited role of the private sector in providing tax administrators with information, comparing this to the far greater role the private sector plays in implementing preventive measures. Next, this Part considers how tax administrators, particularly the U.S. Internal Revenue Service, select taxpayers for audit, comparing this to the role of both the private and public sectors in implementing preventive measures. It focuses on how some tax administrations use scientifically tested statistical means to determine probabilities of tax evasion. Part II then suggests how flaws in both the private and public roles of implementing money laundering and terrorism financing preventive measures might be addressed, at least in theory, by borrowing from the experience of tax administration. Part II concludes with a short summary and analysis that relates these conclusions to the preventive measures system.
    Referring to the analyses in Parts I and II, Part III suggests changes to the current preventive measures standard. It suggests that financial intelligence units should be uniquely tasked with analyzing and selecting clients and transactions for further investigation for money laundering and terrorism financing. The private sector's role should be restricted to identifying customers, creating an initial profile of their legitimate activities, and reporting such information and all client transactions to financial intelligence units.
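    The monitoring duty described in Part I.B, checking whether transactions conform to a client's legitimate-activity profile, amounts in practice to a simple anomaly test. Below is a minimal sketch under assumed conditions; the profile fields, the z-score test, and the threshold are hypothetical, not any institution's actual rules.

```python
# Minimal sketch of profile-based transaction monitoring as described in
# Part I.B: flag transactions that do not conform to a client's profile
# of legitimate activity. The profile fields, the z-score test, and the
# threshold are illustrative assumptions; the article argues such rules
# are rarely derived from scientifically tested statistics.

from dataclasses import dataclass

@dataclass
class ClientProfile:
    mean_amount: float   # expected transaction size from past activity
    std_amount: float    # observed spread of transaction sizes

def is_suspicious(amount: float, profile: ClientProfile,
                  threshold: float = 3.0) -> bool:
    """Flag a transaction whose size deviates strongly from the profile."""
    if profile.std_amount <= 0:
        return amount != profile.mean_amount
    z = abs(amount - profile.mean_amount) / profile.std_amount
    return z > threshold

profile = ClientProfile(mean_amount=1_200.0, std_amount=400.0)
print(is_suspicious(1_500.0, profile))   # False: within the normal range
print(is_suspicious(25_000.0, profile))  # True: far outside the profile
```

    The article's point is precisely that thresholds like the one above are subjective rather than statistically validated, which is what motivates the comparison with statistically driven tax-audit selection in Part II.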

    ETL for data science?: A case study

    Big data has driven data science development and research over recent years. However, there is a problem: most data science projects don't make it to production. This can happen because many data scientists don't use a reference data science methodology. Another aggravating factor is the data itself: its quality and its processing. The problem can be mitigated through research, progress, and the documentation of case studies on the topic, fostering knowledge dissemination and reuse. In particular, data mining can benefit from the knowledge of other mature fields that explore similar matters, such as data warehousing. To address the problem, this dissertation performs a case study of the project “IA-SI - Artificial Intelligence in Incentives Management”, which aims to improve the management of European grant funds through data mining. The key contributions of this study, to academia and to the project's development and success, are: (1) a combined process model of the most widely used data mining process models and their tasks, extended with the ETL subsystems and other selected data warehousing best practices; (2) the application of this combined process model to the project and all its documentation; (3) a contribution to the project's prototype implementation, regarding the data understanding and data preparation tasks. This study concludes that CRISP-DM is still a reference, as it includes all the other data mining process models' tasks and detailed descriptions, and that its combination with data warehousing best practices is useful to the IA-SI project and potentially to other data mining projects.
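    As a rough illustration of the kind of combination the dissertation studies, the sketch below frames a small extract-transform-load pipeline as the cleansing step of CRISP-DM's data-preparation phase. The column names, file paths, and cleaning rules are hypothetical, not taken from the IA-SI project.

```python
# Minimal sketch of an ETL "data cleansing" subsystem serving the
# CRISP-DM data-preparation phase, in the spirit of the combined process
# model proposed above. All column names and rules are hypothetical.

import pandas as pd

def extract(path: str) -> pd.DataFrame:
    """Extract: read raw grant-application records from a CSV source."""
    return pd.read_csv(path)

def transform(df: pd.DataFrame) -> pd.DataFrame:
    """Transform: cleansing rules applied during data preparation."""
    df = df.drop_duplicates()
    df = df.dropna(subset=["application_id"])       # keys must be present
    df["requested_amount"] = df["requested_amount"].clip(lower=0)
    df["region"] = df["region"].str.strip().str.upper()
    return df

def load(df: pd.DataFrame, path: str) -> None:
    """Load: persist the conformed table for the modelling phase."""
    df.to_parquet(path, index=False)

# load(transform(extract("applications.csv")), "applications_clean.parquet")
```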

    A survey of outlier detection methodologies

    Outlier detection has been used for centuries to detect and, where appropriate, remove anomalous observations from data. Outliers arise through mechanical faults, changes in system behaviour, fraudulent behaviour, human error, instrument error, or simply natural deviations in populations. Their detection can identify system faults and fraud before they escalate with potentially catastrophic consequences. It can identify errors and remove their contaminating effect on the data set, thereby purifying the data for processing. The original outlier detection methods were arbitrary, but principled and systematic techniques are now used, drawn from the full gamut of computer science and statistics. In this paper, we present a survey of contemporary techniques for outlier detection. We identify their respective motivations and distinguish their advantages and disadvantages in a comparative review.
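    For the flavour of the statistical family such surveys cover, here is one classic technique: Tukey's interquartile-range fences. The data and the 1.5 multiplier are illustrative placeholders; the survey itself compares far more sophisticated methods.

```python
# Sketch of one classic statistical outlier test: flag points outside
# Tukey's interquartile-range fences. Data and multiplier are toy values.

import statistics

def iqr_outliers(values: list[float], k: float = 1.5) -> list[float]:
    """Return values outside [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, _, q3 = statistics.quantiles(values, n=4)  # quartile cut points
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if v < lo or v > hi]

readings = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 42.0]  # one instrument fault
print(iqr_outliers(readings))  # [42.0]
```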

    Drawing Elena Ferrante's Profile. Workshop Proceedings, Padova, 7 September 2017

    Elena Ferrante is an internationally acclaimed Italian novelist whose real identity has been kept secret by E/O publishing house for more than 25 years. Owing to her popularity, major Italian and foreign newspapers have long tried to discover her real identity. However, only a few attempts have been made to foster a scientific debate on her work. In 2016, Arjuna Tuzzi and Michele Cortelazzo led an Italian research team that conducted a preliminary study and collected a well-founded, large corpus of Italian novels comprising 150 works published in the last 30 years by 40 different authors. Moreover, they shared their data with a select group of international experts on authorship attribution, profiling, and analysis of textual data: Maciej Eder and Jan Rybicki (Poland), Patrick Juola (United States), Vittorio Loreto and his research team, Margherita Lalli and Francesca Tria (Italy), George Mikros (Greece), Pierre Ratinaud (France), and Jacques Savoy (Switzerland). The chapters of this volume report the results of this endeavour, first presented during the international workshop Drawing Elena Ferrante's Profile in Padua on 7 September 2017 as part of the 3rd IQLA-GIAT Summer School in Quantitative Analysis of Textual Data. The fascinating research findings suggest that Elena Ferrante's work definitely deserves “many hands” as well as an extensive effort to understand her distinct writing style and the reasons for her worldwide success.
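    A standard distance in the authorship-attribution field these experts represent is Burrows's Delta over function-word frequencies. The sketch below is a generic illustration of that technique, not code from the workshop; the Italian word list and any texts passed in are toy placeholders, not the 150-novel corpus described above.

```python
# Generic sketch of Burrows's Delta, a standard authorship-attribution
# distance over function-word frequencies. Word list and texts are toys.

import re
from statistics import mean, stdev

FUNCTION_WORDS = ["e", "di", "che", "la", "il", "non"]  # frequent Italian words

def relative_freqs(text: str) -> list[float]:
    """Relative frequency of each function word in a text."""
    words = re.findall(r"\w+", text.lower())
    total = max(len(words), 1)
    return [words.count(w) / total for w in FUNCTION_WORDS]

def burrows_delta(anon: str, candidates: dict[str, str]) -> dict[str, float]:
    """Mean absolute z-scored frequency difference; lower = closer style."""
    profiles = {name: relative_freqs(t) for name, t in candidates.items()}
    anon_f = relative_freqs(anon)
    deltas = {}
    for name, prof in profiles.items():
        diffs = []
        for i in range(len(FUNCTION_WORDS)):
            spread = stdev(p[i] for p in profiles.values()) or 1e-9  # zero guard
            diffs.append(abs(anon_f[i] - prof[i]) / spread)
        deltas[name] = mean(diffs)
    return deltas

# Usage: burrows_delta(unknown_novel, {"Author A": text_a, "Author B": text_b})
# returns e.g. {"Author A": 0.8, "Author B": 2.3} -> Author A is the closer match.
```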