51 research outputs found
CumuloNimbo: parallel-distributed transactional processing
CumuloNimbo aims at solving the lack of scalability of transactional applications, which represent a large fraction of existing applications. CumuloNimbo aims at conceiving, architecting and developing a transactional, coherent, elastic and ultra-scalable Platform as a Service. Its goals are: to be ultra-scalable and dependable, able to scale from a few users to many millions of users while providing continuous availability; to support transparent migration of multi-tier applications (e.g. Java EE applications, relational DB applications, etc.) to the cloud with automatic scalability and elasticity, avoiding the reprogramming of applications and non-transparent scalability techniques such as sharding; and to support transactions for new data stores such as cloud data stores, graph databases, etc. The main challenges are: update ultra-scalability (a million update transactions per second and as many read-only transactions as needed), strong transactional consistency, non-intrusive elasticity, inexpensive high availability, and low latency. CumuloNimbo goes beyond the state of the art by transparently scaling transactional applications to very high transaction rates without sharding, the current practice in today's cloud. In this paper we describe the CumuloNimbo architecture and its performance.
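The transparency claim is easiest to see in code: the application writes plain ACID transactions and the platform, not the programmer, handles placement and scaling. The following Python fragment is a minimal sketch of that programming model under our own assumptions; the Platform class and its transaction() context manager are illustrative stand-ins, not the actual CumuloNimbo API.

    import contextlib

    class Platform:
        """Illustrative stand-in for a transactional PaaS client (not the real API)."""
        def __init__(self, url):
            self.url = url
            self.store = {}                  # toy stand-in for the cloud data store

        @contextlib.contextmanager
        def transaction(self):
            snapshot = dict(self.store)      # toy snapshot: copy, mutate, then install
            try:
                yield snapshot
                self.store = snapshot        # commit: install the new version atomically
            except Exception:
                pass                         # abort: the snapshot is simply discarded

    db = Platform("cumulonimbo://example")   # hypothetical endpoint
    with db.transaction() as txn:            # a plain ACID transaction
        txn["account:42"] = txn.get("account:42", 0) + 100

The point of the sketch is what is absent: no shard keys, no routing logic, and no consistency configuration appear in application code.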
The VINEYARD Approach: Versatile, Integrated, Accelerator-Based, Heterogeneous Data Centres.
Emerging web applications like cloud computing, Big Data and social networks have created the need for powerful data centres hosting hundreds of thousands of servers. Currently, data centres are based on general-purpose processors that provide high flexibility but lack the energy efficiency of customized accelerators. VINEYARD aims to develop an integrated platform for energy-efficient data centres based on new servers with novel coarse-grain and fine-grain programmable hardware accelerators. It will also build a high-level programming framework that allows end-users to seamlessly utilize these accelerators in heterogeneous computing systems through typical data-centre programming frameworks (e.g. MapReduce, Storm, Spark, etc.). This programming framework will further allow hardware accelerators to be swapped in and out of the heterogeneous infrastructure so as to offer high flexibility and energy efficiency. VINEYARD will foster the expansion of the soft-IP core industry, currently limited to embedded systems, into the data-centre market. VINEYARD plans to demonstrate the advantages of its approach in three real use cases: (a) a bio-informatics application for high-accuracy brain modeling, (b) two critical financial applications, and (c) a big-data analysis application.
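To make the swapping idea concrete, here is a small Python sketch under our own assumptions; the registry and function names are hypothetical, not VINEYARD's framework. An operator runs on a registered accelerator-backed kernel when one is present and transparently falls back to software otherwise.

    ACCELERATORS = {}        # registry: operator name -> accelerator-backed callable

    def register_accelerator(op_name, kernel):
        ACCELERATORS[op_name] = kernel       # "swap in" an FPGA/ASIC-backed kernel

    def run(op_name, software_impl, data):
        impl = ACCELERATORS.get(op_name, software_impl)    # transparent selection
        return [impl(x) for x in data]

    def word_count_sw(line):                 # software baseline
        return len(line.split())

    # In practice the registered kernel would drive an FPGA; here it is the same
    # function, which is exactly what makes the swap invisible to the end-user.
    register_accelerator("word_count", word_count_sw)
    print(run("word_count", word_count_sw, ["a b c", "d e"]))   # -> [3, 2]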
Cancer stem cells from human glioblastoma resemble but do not mimic original tumors after in vitro passaging in serum-free media
Human gliomas harbour cancer stem cells (CSCs) that evolve along the course of the disease, forming highly heterogeneous subpopulations within the tumour mass. These cells possess self-renewal properties and appear to contribute to tumour initiation, metastasis and resistance to therapy. CSC cultures isolated from surgical samples are considered the best preclinical in vitro model for primary human gliomas. However, it is not yet well characterized to what extent their biological and functional properties change during in vitro passaging in serum-free culture conditions. Here, we demonstrate that our CSC-enriched cultures harboured from one to several CSC clones from the human glioma sample. When xenotransplanted into mouse brain, these cells generated tumours that reproduced at least three different dissemination patterns found in the original tumours. Over the passages in culture, CSCs displayed increased expression of stem cell markers, different ratios of chromosomal instability events, and a varied response to drug treatment. Our findings highlight the need for better characterization of CSC-enriched cultures in the context of their evolution in vitro, in order to uncover their full potential as preclinical models in studies aimed at identifying molecular biomarkers and developing new therapeutic approaches for human gliomas.
Resilient Computing Curriculum
This Deliverable presents the MSc Curriculum in Resilient Computing suggested by ReSIST. It includes the description of the syllabi for all the courses in the two semesters of the first year, those for the common courses in semester 3 of the second year, together with an exemplification of possible application tracks with the related courses. This MSc curriculum has been updated and completed taking advantage of a large open discussion inside and outside ReSIST. The MSc Curriculum is on-line on the official ReSIST web site, where all information is available together with all the support material generated by ReSIST and all other relevant freely available support material. Funded by the European Commission through NoE IST-4-026764-NOE (ReSIST).
Resilient Computing Courseware
This Deliverable describes the courseware in support of teaching Resilient Computing in a curriculum for an MSc track following the scheme of the Bologna process. The development of the supporting material for such a curriculum has required a rather intensive activity that involved not only the partners in ReSIST but also a much larger worldwide community, with the aim of identifying available up-to-date support material that can be used to build a progressive and methodical line of teaching, accompanying students and interested persons in a profitable learning process. All this material is on-line on the official ReSIST web site http://www.resistnoe.org/, can be viewed and downloaded for use in a class, and constitutes, to our knowledge, the first almost comprehensive attempt to build a database of support material related to Dependable and Resilient Computing. Funded by the European Commission through NoE IST-4-026764-NOE (ReSIST).
Adding Breadth to CS1 and CS2 Courses Through Visual and Interactive Programming Projects
The aim of programming projects in CS1/CS2 is to put into practice concepts and techniques learnt during lectures. Programming projects serve a dual purpose: first, the students get to practice the programming concepts taught in class, and second, they are introduced to an array of topics that they will cover later in their computer science education. In this work, we present programming projects we have successfully used in CS1/CS2. These topics have added breadth to CS1/CS2 as well as whetted our students' appetite by exposing them to concurrent programming, event-driven programming, graphics management and human-computer interfaces, data compression, image processing and genetic algorithms. We also include the background material, such as tools and libraries, that we have provided our students to render the more difficult projects amenable to our introductory computer science classes.
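As a flavour of one listed topic, the following is an illustrative CS1/CS2-sized genetic algorithm in Python. It is our own sketch, not one of the paper's actual projects: it evolves a random string toward a fixed target by selection and mutation.

    import random

    TARGET = "HELLO"
    ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

    def fitness(s):
        return sum(a == b for a, b in zip(s, TARGET))        # matching characters

    def mutate(s):
        i = random.randrange(len(s))                         # pick a position
        return s[:i] + random.choice(ALPHABET) + s[i + 1:]   # replace one letter

    population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(50)]
    for generation in range(1000):
        population.sort(key=fitness, reverse=True)           # selection: fittest first
        if population[0] == TARGET:
            break
        parents = population[:10]                            # keep the fittest
        population = parents + [mutate(random.choice(parents)) for _ in range(40)]
    print(generation, population[0])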
Distributed Database Systems: The Case for NewSQL
Until a decade ago, the database world was all SQL: distributed, sometimes replicated, and fully consistent. Then, web and cloud applications emerged that need to deal with complex big data, and NoSQL came in to address their requirements, trading consistency for scalability and availability. NewSQL is the latest technology in the big data management landscape, combining the scalability and availability of NoSQL with the consistency and usability of SQL. By blending capabilities previously available only in different kinds of database systems, such as fast data ingestion and SQL queries, and by providing online analytics over operational data, NewSQL opens up new opportunities in many application domains where real-time decisions are critical. NewSQL may also simplify data management by removing the traditional separation between NoSQL and SQL (ingest data fast, query it with SQL), as well as between the operational database and the data warehouse / data lake (no more ETLs!). However, a hard problem is scaling out transactions in mixed operational and analytical (HTAP) workloads over big data, possibly coming from different data stores (HDFS, SQL, NoSQL). Today, only a few NewSQL systems have solved this problem. In this paper, we make the case for NewSQL, introducing its basic principles from distributed database systems and illustrating them with Spanner and LeanXcale, two of the most advanced systems in terms of scalable transaction management.
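The HTAP pattern argued for here fits in a few lines: one engine takes the operational write and immediately serves the analytical read, with no ETL step in between. The sketch below uses Python's built-in sqlite3 module purely as a stand-in engine, and the table and column names are ours; in a NewSQL system such as LeanXcale or Spanner the same pattern would run against a scalable, distributed store.

    import sqlite3                   # stand-in engine so the sketch runs anywhere

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, region TEXT, amount REAL)")

    # Operational side: a short ACID transaction ingesting an order.
    with conn:
        conn.execute("INSERT INTO orders (region, amount) VALUES (?, ?)", ("EU", 99.5))

    # Analytical side: SQL aggregation over the same, freshly ingested data (no ETL).
    for region, total in conn.execute(
            "SELECT region, SUM(amount) FROM orders GROUP BY region"):
        print(region, total)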
New Spanish Dinotrema species with propodeal areola or mainly sculptured propodeum (Hymenoptera, Braconidae, Alysiinae)
The illustrated descriptions of eight new species of the genus Dinotrema from Spain, with either the propodeum mainly sculptured or a large propodeal areola, are provided, viz. D. amparoae sp. n., D. benifassaense sp. n., D. lagunasense sp. n., D. pilarae sp. n., D. robertoi sp. n., D. teresae sp. n., D. tinencaense sp. n., and D. torreviejaense sp. n.
New western Palaearctic Dinotrema species with mesoscutal pit and only medially sculptured propodeum (Hymenoptera, Braconidae, Alysiinae)
Descriptions of four new species of the genus Dinotrema Foerster with a mesoscutal pit and an only medially sculptured propodeum are given: Dinotrema alysiae sp. n. (Denmark, England, Netherlands, Spain), D. paramicum sp. n. (Denmark, Finland), D. tirolense sp. n. (Italy) and D. valvulatum sp. n. (Denmark, Italy).
PolyVaccine: Protecting Web Servers against Zero-Day, Polymorphic and Metamorphic Exploits
Today, web servers are ubiquitous, having become critical infrastructure for many organizations. However, they are still one of the most vulnerable parts of an organization's infrastructure. Exploits are often used by worms to propagate rapidly across the whole Internet, and web servers are one of their main targets. New exploit techniques have arisen in the last few years that have rendered traditional IDS techniques based on signature identification useless. Exploits use polymorphism (code encryption) and metamorphism (code obfuscation) to evade detection by signature-based IDSs. In this paper, we address precisely the topic of how to protect web servers against zero-day (new), polymorphic, and metamorphic malware embedded in data streams (requests) that target web servers. We rely on a novel technique to detect harmful binary code injection (i.e., exploits) in HTTP requests that is more efficient than current techniques based on binary code emulation or instrumentation of virtual engines. The detection of exploits is done through sandbox processes. The technique is complemented by further techniques, such as caching and pooling, to reduce its cost to negligible levels. Our technique makes few assumptions about the exploit, unlike previous approaches that assume the existence of sled or getPC code, loops, reads of the payload, writes to different addresses, etc. The evaluation shows that caching is highly effective and that the average latency introduced by our system is negligible.
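The cost-reduction argument (sandbox detection plus caching) can be summarized in a short Python sketch. This is our own illustration, with detect_in_sandbox as a placeholder for the paper's sandboxed execution of request bytes: repeated payloads hit the verdict cache and never pay the sandbox cost again.

    import hashlib

    verdict_cache = {}     # SHA-256 of payload -> previously computed verdict

    def detect_in_sandbox(payload):
        """Placeholder for the paper's detector: it would execute the request
        bytes in an instrumented sandbox process and flag injected executable code."""
        return False

    def is_malicious(payload):
        key = hashlib.sha256(payload).hexdigest()
        if key not in verdict_cache:         # cache miss: pay the sandbox cost once
            verdict_cache[key] = detect_in_sandbox(payload)
        return verdict_cache[key]            # repeated payloads: near-zero latency

    print(is_malicious(b"GET /index.html HTTP/1.1\r\nHost: example.org\r\n\r\n"))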