
    Status Report of the DPHEP Study Group: Towards a Global Effort for Sustainable Data Preservation in High Energy Physics

    Data from high-energy physics (HEP) experiments are collected with significant financial and human effort and are mostly unique. An inter-experimental study group on HEP data preservation and long-term analysis was convened as a panel of the International Committee for Future Accelerators (ICFA). The group was formed by large collider-based experiments and investigated the technical and organisational aspects of HEP data preservation. An intermediate report was released in November 2009 addressing the general issues of data preservation in HEP. This paper includes and extends the intermediate report. It provides an analysis of the research case for data preservation and a detailed description of the various projects at experiment, laboratory and international levels. In addition, the paper provides a concrete proposal for an international organisation in charge of data management and policies in high-energy physics.

    CHORUS Deliverable 2.1: State of the Art on Multimedia Search Engines

    Based on the information provided by European projects and national initiatives related to multimedia search, as well as by domain experts who participated in the CHORUS Think-Tanks and workshops, this document reports on the state of the art in multimedia content search from a technical and socio-economic perspective. The technical perspective includes an up-to-date view on content-based indexing and retrieval technologies, multimedia search in the context of mobile devices and peer-to-peer networks, and an overview of current evaluation and benchmark initiatives to measure the performance of multimedia search engines. From a socio-economic perspective, we inventory the impact and legal consequences of these technical advances and point out future directions of research.

    Master/worker parallel discrete event simulation

    The execution of parallel discrete event simulation across metacomputing infrastructures is examined. A master/worker architecture for parallel discrete event simulation is proposed that provides robust execution under a dynamic set of services, with system-level support for fault tolerance, semi-automated client-directed load balancing, portability across heterogeneous machines, and the ability to run codes on idle or time-sharing clients without significant user interaction. Research questions and challenges associated with the work distribution paradigm, the targeted computational domain, performance metrics, and the intended class of applications are analyzed and discussed. A portable web-services approach to master/worker parallel discrete event simulation is proposed and evaluated, with subsequent optimizations that increase the efficiency of large-scale simulation execution through distributed master service design and intrinsic overhead reduction. New techniques are proposed and examined for addressing the challenges that optimistic parallel discrete event simulation faces across metacomputing infrastructures, such as rollbacks and message unsending, using an inherently different computation paradigm based on master services and time windows. Results indicate that a master/worker approach utilizing loosely coupled resources is a viable means for high-throughput parallel discrete event simulation, enhancing existing computational capacity or providing alternate execution capability for less time-critical codes.
    Ph.D. Committee Chair: Fujimoto, Richard; Committee Members: Bader, David; Perumalla, Kalyan; Riley, George; Vuduc, Richard
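    To make the work distribution paradigm concrete, below is a minimal sketch of a master/worker decomposition over time windows. It is illustrative only, not code from the dissertation: local threads stand in for the web-service workers, and all class and method names are assumptions.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.Callable;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Future;

        // Hypothetical master/worker sketch: the master cuts simulated time
        // into windows and hands each window's events to an idle worker.
        public class MasterWorkerSketch {

            record Event(double timestamp, String payload) {}

            // Worker job: process every event falling inside one time window.
            static Callable<Integer> window(List<Event> events, double lo, double hi) {
                return () -> {
                    int handled = 0;
                    for (Event e : events) {
                        if (e.timestamp() >= lo && e.timestamp() < hi) {
                            handled++; // real event handling would happen here
                        }
                    }
                    return handled;
                };
            }

            public static void main(String[] args) throws Exception {
                List<Event> events = List.of(
                        new Event(0.4, "arrive"), new Event(1.3, "depart"), new Event(2.6, "arrive"));
                ExecutorService workers = Executors.newFixedThreadPool(2);
                List<Future<Integer>> pending = new ArrayList<>();
                for (double t = 0.0; t < 3.0; t += 1.0) {           // master loop: one task per window
                    pending.add(workers.submit(window(events, t, t + 1.0)));
                }
                int total = 0;
                for (Future<Integer> f : pending) total += f.get(); // gather worker results
                workers.shutdown();
                System.out.println("events handled: " + total);
            }
        }

    Time windows of this kind bound how far execution can stray from global simulated time, which is one way to limit the rollbacks and message unsending that optimistic execution requires; the dissertation's actual mechanism may differ in detail.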

    AXMEDIS 2008

    The AXMEDIS International Conference series aims to explore all subjects and topics related to cross-media and digital-media content production, processing, management, standards, representation, sharing, protection and rights management, and to address the latest developments and future trends of the technologies and their applications, impacts and exploitation. The AXMEDIS events offer venues for exchanging concepts, requirements, prototypes, research ideas and findings that could contribute to academic research and also benefit business and industrial communities. In the Internet and digital era, cross-media production and distribution represent key developments and innovations, fostered by emergent technologies to ensure better value for money while optimising productivity and market coverage.

    Grey Literature in Library and Information Studies

    The continued rise of electronic publishing has changed the scale and diversity of grey literature facing librarians and other information practitioners. This compiled work brings together research and authorship from the past decade dealing with both the supply and demand sides of grey literature. While this book is written with students and instructors of Colleges and Schools of Library and Information Science in mind, it likewise serves as a reader for information professionals working in similar knowledge-based communities.

    Development of an historical landscape photography database to support landscape change analysis in the Northeast of Portugal

    Repeat photography is an efficient, effective and useful method for identifying trends of change in landscapes, and it has long been used to illustrate long-term landscape change. In the Northeast of Portugal, landscape change is currently driven mostly by agricultural abandonment, which has led to the afforestation of former farmland, and by agriculture and energy policy, including hydro and wind power development. There is therefore a need to monitor change in the region using a multitemporal and multiscale approach. This project aimed to establish an online repository of oblique digital photography of the region, used to register the condition of the landscape as recorded in historical and contemporary photography and to support qualitative and quantitative assessment of landscape change using repeat photography techniques and methods. It involved the development of a relational database and a series of web-based services written in PHP (Hypertext Preprocessor) and JavaScript, together with a Joomla-based interface through which users upload and download pictures. The repository makes it possible to upload, store, display and download pictures of Northeastern Portugal and to search them by location, theme or date. The service is intended to help researchers quickly obtain, through the built-in search engine, the photographs needed for repeat photography studies. It can be accessed at: http://esa.ipb.pt/digitalandscape/.
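    As a rough illustration of the relational structure and search such a repository implies, here is a hypothetical sketch (in Java, with an in-memory H2 database standing in for the project's actual PHP-based stack; all table and column names are assumptions, not the project's schema):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.Statement;

        // Hypothetical photo table and search; requires the H2 driver on the classpath.
        public class PhotoArchiveSketch {
            public static void main(String[] args) throws Exception {
                try (Connection db = DriverManager.getConnection("jdbc:h2:mem:photos")) {
                    try (Statement st = db.createStatement()) {
                        st.execute("CREATE TABLE photo (" +
                                "id INT PRIMARY KEY, " +
                                "taken_on DATE, " +                     // historical or contemporary date
                                "latitude DOUBLE, longitude DOUBLE, " + // camera location
                                "theme VARCHAR(64), " +                 // e.g. 'agriculture'
                                "file_path VARCHAR(255))");             // stored image
                        st.execute("INSERT INTO photo VALUES " +
                                "(1, DATE '1958-06-01', 41.80, -6.76, 'agriculture', 'img/1958_terraces.jpg')");
                    }
                    // Search by theme and date range, as the web interface would.
                    try (PreparedStatement ps = db.prepareStatement(
                            "SELECT file_path FROM photo WHERE theme = ? AND taken_on >= ?")) {
                        ps.setString(1, "agriculture");
                        ps.setDate(2, java.sql.Date.valueOf("1950-01-01"));
                        try (ResultSet rs = ps.executeQuery()) {
                            while (rs.next()) System.out.println(rs.getString(1));
                        }
                    }
                }
            }
        }

    A search by location would extend the WHERE clause with latitude/longitude bounds in the same fashion.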

    An evaluation of the ‘open source internet research tool’: a user-centred and participatory design approach with UK law enforcement

    As part of their routine investigations, law enforcement conducts open source research; that is, investigating and researching using publicly available information online. Historically, the notion of collecting open sources of information is as ingrained as the concept of intelligence itself. However, open source research is a relatively new concept in UK law enforcement, not generally, or practically, considered until after the civil unrest seen in the UK’s major cities in the summer of 2011. While open source research rests on information being ‘publicly available’, there are legal, ethical and procedural issues that law enforcement must consider. This raises the following main research question: what constraints does law enforcement face when conducting open source research? From a legal perspective, law enforcement officials must ensure their actions are necessary and proportionate, more so where an individual’s privacy is concerned under human rights legislation and data protection laws such as the General Data Protection Regulation. Privacy issues appear, though, when considering the boom and usage of social media, where the line between what is public and what is private is easily blurred. Guidance from the Association of Chief Police Officers (ACPO) and, now, the National Police Chiefs’ Council (NPCC) tends to be non-committal in tone, but nods towards obtaining legal authorisation under the Regulation of Investigatory Powers Act (RIPA) 2000 when conducting what may be ‘directed surveillance’. RIPA, however, pre-dates the modern era of social media by several years, so its applicability as the de facto piece of legislation for conducting higher levels of open source research is called into question. Twenty-two semi-structured interviews with law enforcement officials were conducted and revealed a grey area surrounding legal authorities when conducting open source research. On the technical and procedural side, officers used a variety of software tools that varied in both price and quality, with no standard toolset, as evidenced by 20 questionnaire responses from 12 police forces within the UK. In an attempt to bring about standardisation, the College of Policing’s Research, Identifying and Tracing the Electronic Suspect (RITES) course recommended several capturing and productivity tools. Trainers on the RITES course, however, soon discovered the cognitive overload this placed on the cohort, who would often spend more time learning to use the tools than learning open source research techniques. This problem prompted the creation of the Open Source Internet Research Tool (OSIRT): an all-in-one browser for conducting open source research. OSIRT’s creation followed the user-centred design (UCD) method, with two phases of development using the software engineering methodologies ‘throwaway prototyping’ for the prototype version and ‘incremental and iterative development’ for the release version. OSIRT has since been integrated into the RITES course, which trains over 100 officers a year and provides a feedback outlet for OSIRT. System Usability Scale questionnaires administered on RITES courses have shown OSIRT to be usable, with positive feedback. Beyond the RITES course, surveys, interviews and observations show that OSIRT makes an impact on everyday policing and has reduced the burden officers faced when conducting open source research. OSIRT’s impact now reaches beyond the UK and sees usage across the globe. OSIRT contributes to law enforcement output in countries such as the USA, Canada, Australia and Israel, demonstrating that its usefulness and necessity are not limited to UK law enforcement. This thesis makes several contributions, both academic and practical, to law enforcement. The main contributions are:
    • Discussion and analysis of the constraints law enforcement within the UK face when conducting open source research from a legal, ethical and procedural perspective.
    • Discussion, analysis and reflective discourse surrounding the development of a software tool for law enforcement and the challenges faced in what is a unique development.
    • An approach to collaborating with those who are in ‘closed’ environments, such as law enforcement, to create bespoke software; additionally, this approach offers a method of measuring the value and usefulness of OSIRT with UK law enforcement.
    • The creation and integration of OSIRT into law enforcement and law enforcement training packages.

    Programming and parallelising applications for distributed infrastructures

    The last decade has witnessed unprecedented changes in parallel and distributed infrastructures. Due to the diminished gains in processor performance from increasing clock frequency, manufacturers have moved from uniprocessor architectures to multicores; as a result, clusters of computers have incorporated such new CPU designs. Furthermore, the ever-growing need of scientific applications for computing and storage capabilities has motivated the appearance of grids: geographically-distributed, multi-domain infrastructures based on sharing of resources to accomplish large and complex tasks. More recently, clouds have emerged by combining virtualisation technologies, service-orientation and business models to deliver IT resources on demand over the Internet. The size and complexity of these new infrastructures pose a challenge for programmers to exploit them. On the one hand, some of the difficulties are inherent to concurrent and distributed programming itself, e.g. dealing with thread creation and synchronisation, messaging, data partitioning and transfer, etc. On the other hand, other issues are related to the singularities of each scenario, like the heterogeneity of Grid middleware and resources or the risk of vendor lock-in when writing an application for a particular Cloud provider. In the face of such a challenge, programming productivity - understood as a tradeoff between programmability and performance - has become crucial for software developers. There is a strong need for high-productivity programming models and languages, which should provide simple means for writing parallel and distributed applications that can run on current infrastructures without sacrificing performance. In that sense, this thesis contributes Java StarSs, a programming model and runtime system for developing and parallelising Java applications on distributed infrastructures. The model has two key features: first, the user programs in a fully-sequential standard-Java fashion - no parallel construct, API call or pragma must be included in the application code; second, it is completely infrastructure-unaware, i.e. programs do not contain any details about deployment or resource management, so that the same application can run on different infrastructures with no changes. The only requirement for the user is to select the application tasks, which are the model's unit of parallelism. Tasks can be either regular Java methods or web service operations, and they can handle any data type supported by the Java language, namely files, objects, arrays and primitives. For the sake of simplicity of the model, Java StarSs shifts the burden of parallelisation from the programmer to the runtime system. The runtime is responsible for modifying the original application to make it create asynchronous tasks and synchronise data accesses from the main program. Moreover, the implicit inter-task concurrency is automatically discovered as the application executes, thanks to a data dependency detection mechanism that integrates all the Java data types. This thesis provides a fairly comprehensive evaluation of Java StarSs on three different distributed scenarios: Grid, Cluster and Cloud. For each of them, a runtime system was designed and implemented to exploit their particular characteristics as well as to address their issues, while keeping the infrastructure unawareness of the programming model.
    The evaluation compares Java StarSs against state-of-the-art solutions, both in terms of programmability and performance, and demonstrates how the model can bring remarkable productivity to programmers of parallel distributed applications.
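    The programming model described above can be illustrated with a short, hypothetical sketch (not code from the thesis; class, method and file names are invented). The main program is plain sequential Java with no parallel constructs; the runtime is described as intercepting such calls, spawning them as asynchronous tasks and synchronising data accesses, so the dependencies below would be detected through the files the methods share.

        // Hypothetical Java StarSs-style application: fully sequential Java.
        public class WordCountApp {

            // A candidate task: a regular Java method operating on files.
            static void countWords(String inputFile, String partialFile) {
                // ... read inputFile and write its word counts to partialFile ...
            }

            // Another candidate task: folds a partial result into the total.
            static void merge(String totalFile, String partialFile) {
                // ... combine partialFile into totalFile ...
            }

            public static void main(String[] args) {
                String total = "counts.txt";
                for (String part : new String[]{"part0.txt", "part1.txt", "part2.txt"}) {
                    String partial = part + ".counts";
                    countWords(part, partial); // would run as an asynchronous task
                    merge(total, partial);     // file dependency found at run time
                }
            }
        }

    Under the model as described, the countWords calls are mutually independent and could run in parallel, while the merge chain is serialised on the total file; none of this is expressed by the programmer.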

    Technologies and Applications for Big Data Value

    This open access book explores cutting-edge solutions and best practices for big data and data-driven AI applications in the data-driven economy. It provides the reader with a basis for understanding how technical issues can be overcome to offer real-world solutions to major industrial areas. The book starts with an introductory chapter that provides an overview of the book by positioning the following chapters in terms of their contributions to technology frameworks which are key elements of the Big Data Value Public-Private Partnership and the upcoming Partnership on AI, Data and Robotics. The remainder of the book is then arranged in two parts. The first part, “Technologies and Methods”, contains horizontal contributions of technologies and methods that enable data value chains to be applied in any sector. The second part, “Processes and Applications”, details experience reports and lessons from using big data and data-driven approaches in processes and applications. Its chapters are co-authored with industry experts and cover domains including health, law, finance, retail, manufacturing, mobility, and smart cities. Contributions emanate from the Big Data Value Public-Private Partnership and the Big Data Value Association, which have acted as the European data community's nucleus to bring together businesses with leading researchers to harness the value of data to benefit society, business, science, and industry. The book is of interest to two primary audiences: first, undergraduate and postgraduate students and researchers in various fields, including big data, data science, data engineering, and machine learning and AI; second, practitioners and industry experts engaged in data-driven systems, software design and deployment projects who are interested in employing these advanced methods to address real-world problems.