155 research outputs found
When the Few Outweigh the Many: Illicit Content Recognition with Few-Shot Learning
The anonymity and untraceability of the Dark Web account for its exponential
growth in popularity, while also providing fertile ground for many illicit
activities. Hence, in collaboration with cybersecurity and law enforcement
agencies, research has provided approaches for recognizing and classifying
illicit activities, most of which exploit the textual content of dark web
markets; few such approaches use images originating from dark web content.
This paper investigates this alternative technique for recognizing illegal
activities from images. In particular, we investigate label-agnostic learning
techniques such as One-Shot and Few-Shot learning featuring the use of Siamese
neural networks, a state-of-the-art approach in the field. Our solution
handles small-scale datasets with promising accuracy: Siamese neural networks
reach 90.9% on 20-Shot experiments over a 10-class dataset. This leads us to
conclude that such models are a promising and cheaper alternative for building
automated law-enforcement machinery over the dark web.
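At inference time, a Siamese model of the kind described above classifies a query image by comparing its embedding against a handful of labelled support examples per class. A minimal sketch of that few-shot decision rule, using a stand-in random linear map in place of a trained Siamese branch (all weights and names here are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frozen embedding: a random linear map standing in for the
# trained Siamese branch (both inputs of the pair share these weights).
W = rng.normal(size=(16, 64))

def embed(x):
    """Shared-weight branch of the Siamese network (illustrative only)."""
    return np.tanh(W @ x)

def few_shot_predict(query, support_set):
    """Label the query with the class of the nearest support embedding.

    support_set: dict mapping class label -> list of example vectors
    (the K 'shots' per class).
    """
    q = embed(query)
    best_label, best_dist = None, np.inf
    for label, examples in support_set.items():
        for ex in examples:
            d = np.linalg.norm(q - embed(ex))  # distance in embedding space
            if d < best_dist:
                best_label, best_dist = label, d
    return best_label

# Toy usage: two classes, 3 shots each; query is a noisy copy of a class-"a" shot.
a = [rng.normal(size=64) for _ in range(3)]
b = [rng.normal(size=64) for _ in range(3)]
query = a[0] + 0.01 * rng.normal(size=64)
print(few_shot_predict(query, {"a": a, "b": b}))  # -> a
```

In a real pipeline the embedding would be the trained convolutional branch of the Siamese network, and the support set would hold the K labelled dark-web images per class.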
Characterization of polyaniline-detonation nanodiamond nanocomposite fibers by atomic force microscopy based technique
Polyaniline (PANI) fibers were synthesized in the presence of detonation nanodiamond (DND) particles by a precipitation polymerization technique. Morphological, electrical, and mechanical characterizations of the obtained PANI/DND nanocomposites were performed by several standard and advanced atomic force microscopy (AFM) based techniques. Morphological characterization by tapping-mode AFM supplied information about the structure of the fibers and ribbons forming the PANI/DND network. An AFM-based technique, taking advantage of an experimental configuration specifically devised for the purpose, was used to assess the electrical properties of the fibers, in particular to verify their conductivity. Finally, mechanical characterization was carried out synergistically using two different, recently proposed AFM-based techniques, one based on AFM tapping mode and the other requiring AFM contact mode, which probed the nanocomposite nature of the PANI/DND fiber sample down to different depths. © 2013 Elsevier Ltd. All rights reserved.
Verifying big data topologies by-design: a semi-automated approach
Big data architectures have been gaining momentum in recent years. For instance, Twitter uses stream-processing frameworks like Apache Storm to analyse billions of tweets per minute and learn the trending topics. However, architectures that process big data involve many different components interconnected via semantically different connectors. Such complex architectures make any possible refactoring of the applications a difficult task for software architects, as applications might diverge considerably from their initial designs. As an aid to designers and developers, we developed OSTIA (Ordinary Static Topology Inference Analysis), which detects the occurrence of common anti-patterns across big data architectures and exploits software verification techniques on the elicited architectural models. This paper illustrates OSTIA and evaluates its uses and benefits on three industrial-scale case studies.
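As one concrete illustration of the kind of static check such a tool can run over an elicited topology model, consider flagging cycles in a stream-processing topology, since Storm topologies are expected to form a directed acyclic graph. This is a hypothetical rule for illustration, not necessarily part of OSTIA's actual rule set:

```python
# Minimal sketch of a static anti-pattern check over a topology graph
# (hypothetical rule, not OSTIA's actual rule set): flag cycles, since
# stream-processing topologies are expected to be acyclic.

def has_cycle(topology):
    """topology: dict mapping component -> list of downstream components."""
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {node: WHITE for node in topology}

    def visit(node):
        colour[node] = GREY
        for nxt in topology.get(node, []):
            if colour.get(nxt, WHITE) == GREY:
                return True          # back edge found: the graph has a cycle
            if colour.get(nxt, WHITE) == WHITE and visit(nxt):
                return True
        colour[node] = BLACK
        return False

    return any(colour[n] == WHITE and visit(n) for n in list(topology))

# A spout feeding two bolts is fine; a feedback edge between bolts is flagged.
acyclic = {"spout": ["bolt_a", "bolt_b"], "bolt_a": ["bolt_b"], "bolt_b": []}
cyclic = {"spout": ["bolt_a"], "bolt_a": ["bolt_b"], "bolt_b": ["bolt_a"]}
print(has_cycle(acyclic), has_cycle(cyclic))  # -> False True
```

Once the topology has been inferred as a plain graph like this, the same representation can feed heavier verification back-ends, which is the workflow the abstract describes.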
Scaling relations of cluster elliptical galaxies at z~1.3. Distinguishing luminosity and structural evolution
[Abridged] We studied the size-surface brightness and the size-mass relations
of a sample of 16 cluster elliptical galaxies in the mass range
10^{10}-2x10^{11} M_sun which were morphologically selected in the cluster RDCS
J0848+4453 at z=1.27. Our aim is to assess whether they have completed their
mass growth at their redshift or significant mass and/or size growth can or
must take place until z=0 in order to understand whether elliptical galaxies of
clusters follow the observed size evolution of passive galaxies. To compare our
data with the local universe we considered the Kormendy relation derived from
the early-type galaxies of a local Coma Cluster reference sample and the WINGS
survey sample. The comparison with the local Kormendy relation shows that the
luminosity evolution due to the aging of the stellar content already assembled
at z=1.27 brings them onto the local relation. Moreover, this stellar content
places them on the size-mass relation of the local cluster ellipticals. These
results imply that for a given mass, the stellar mass at z~1.3 is distributed
within these ellipticals according to the same stellar mass profile of local
ellipticals. We find that a pure size evolution, even mild, is ruled out for
our galaxies since it would lead them away from both the Kormendy and the
size-mass relation. If an evolution of the effective radius takes place, this
must be compensated by an increase in the luminosity, hence of the stellar mass
of the galaxies, to keep them on the local relations. We show that to follow
the Kormendy relation, the stellar mass must increase as the effective radius.
However, this mass growth is not sufficient to keep the galaxies on the
size-mass relation for the same variation in effective radius. Thus, if we want
to preserve the Kormendy relation, we fail to satisfy the size-mass relation
and vice versa.
Comment: Accepted for publication in A&A, updated to match the final journal
version.
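The luminosity-size coupling invoked above can be made explicit. A sketch of the scaling, assuming a generic Kormendy slope beta (the paper's fitted coefficients are not reproduced here):

```latex
% Mean surface brightness within the effective radius:
%   \langle\mu\rangle_e = -2.5\log_{10}\!\big(L/(2\pi R_e^2)\big) + c
% Kormendy relation:
%   \langle\mu\rangle_e = \alpha + \beta \log_{10} R_e
% Equating the two and solving for L:
%   -2.5\log_{10} L + 5\log_{10} R_e + c' = \alpha + \beta \log_{10} R_e
\begin{equation}
  L \propto R_e^{(5-\beta)/2.5}
\end{equation}
% Any growth in R_e must therefore be matched by a luminosity increase
% (and, at fixed M/L, a stellar-mass increase) to keep a galaxy on the
% relation; for \beta \simeq 2.5 the required growth is M_* \propto R_e.
```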
RADON: Rational decomposition and orchestration for serverless computing
Emerging serverless computing technologies, such as function as a service (FaaS), enable developers to virtualize the internal logic of an application, simplifying the management of cloud-native services and allowing cost savings through billing and scaling at the level of individual functions. Serverless computing is therefore rapidly shifting the attention of software vendors to the challenge of developing cloud applications deployable on FaaS platforms. In this vision paper, we present the research agenda of the RADON project (http://radon-h2020.eu), which aims to develop a model-driven DevOps framework for creating and managing applications based on serverless computing. RADON applications will consist of fine-grained and independent microservices that can efficiently and optimally exploit FaaS and container technologies. Our methodology strives to tackle complexity in designing such applications, including the solution of optimal decomposition, the reuse of serverless functions as well as the abstraction and actuation of event processing chains, while avoiding cloud vendor lock-in through models
Definition of a reference architecture for data service platforms
Big Data refers to data sets whose volume, velocity, and variety make them difficult to capture, manage, and process using conventional technologies and tools. This concept has generated new needs in organizations to enable the capture, storage, and analysis of data with these characteristics and thus obtain relevant information for decision-making.
A challenge for organizations is the implementation of an architecture that covers these needs, since they must consider the different existing technologies and must establish the policies for governance of the data that is in users' hands. A reference architecture for a data analytics platform that is decoupled from specific technological tools is a guide that allows organizations to chart a path towards managing large volumes of data and thus to have effective tools for business decision-making. The reference architecture is general enough to be implemented with different technologies, computing paradigms, and analytical software, depending on the requirements and purposes of each organization. In the developed project, the architecture was implemented with data from emergency care in hospitals in the city of Medellín. One of the results of the research work is that the proposed architecture considers different types of users and data sources, does not create dependency on the particular technological tools used, and establishes a layer for data governance.
Local elastic measurement in nanostructured materials via atomic force acoustic microscopy technique
The response of nematodes to deep-sea CO2 sequestration : a quantile regression approach
Author Posting. © The Author(s), 2010. This is the author's version of the work. It is posted here by permission of Elsevier B.V. for personal use, not for redistribution. The definitive version was published in Deep Sea Research Part I: Oceanographic Research Papers 57 (2010): 696-707, doi:10.1016/j.dsr.2010.03.003.
One proposed approach to ameliorate the effects of global warming is sequestration of
the greenhouse gas CO2 in the deep sea. To evaluate the environmental impact of this
approach, we exposed the sediment-dwelling fauna at the mouth of the Monterey
Submarine Canyon (3262 m) and a site on the nearby continental rise (3607 m) to CO2-
rich water. We measured meiobenthic nematode population and community metrics
after ~30-day exposures along a distance gradient from the CO2 source and with
sediment depth to infer the patterns of mortality. We also compared the nematode
response with that of harpacticoid copepods. Nematode abundance, average sediment
depth, tail-group composition, and length:width ratio did not vary with distance from
the CO2 source. However, quantile regression showed that nematode length and
diameter increased in close proximity to the CO2 source in both experiments. Further,
the effects of CO2 exposure and sediment depth (nematodes became more slender at
one site, but larger at the other, with increasing depth in the sediment) varied with body
size. For example, the response of the longest nematodes differed from those of
average length. We propose that nematode body length and diameter increases were
induced by lethal exposure to CO2-rich water and that nematodes experienced a high
rate of mortality in both experiments. In contrast, copepods experienced high mortality
rates in only one experiment, suggesting that CO2 sequestration effects are
taxon-specific.
The Department of Energy Office of Biological and Environmental Research
supported this research under award numbers DE-FG02-05ER64070 and
DE-FG03-01ER63065 and the U.S. Department of Energy, Fossil Energy Group
(award DE-FC26-00NT40929). We also appreciate significant support provided by
the Monterey Bay Aquarium Research Institute (project 200002).
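The quantile-regression reasoning above exploits the fact that the tau-th quantile of a distribution is exactly the minimizer of the pinball (check) loss, so each quantile of nematode body size can be modelled separately rather than just the mean. A minimal numeric sketch of that property (synthetic data; the values are illustrative, not the study's measurements):

```python
import numpy as np

def pinball_loss(y, q, tau):
    """Average pinball (quantile) loss of predicting the constant q for sample y."""
    r = y - q
    return np.mean(np.maximum(tau * r, (tau - 1) * r))

rng = np.random.default_rng(1)
# Synthetic stand-in for body lengths (arbitrary units, illustrative only).
y = rng.normal(loc=50.0, scale=5.0, size=2000)

# The tau-quantile minimizes the pinball loss; search a grid of candidates.
tau = 0.9
grid = np.linspace(y.min(), y.max(), 2001)
losses = [pinball_loss(y, q, tau) for q in grid]
q_hat = grid[np.argmin(losses)]

print(round(q_hat, 1), round(np.quantile(y, tau), 1))  # the two agree closely
```

Replacing the constant q with a linear function of distance from the CO2 source turns this into quantile regression proper, which is how responses of the longest nematodes can be distinguished from those of average length.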