Hierarchical Deep Learning Architecture For 10K Objects Classification
The evolution of visual object recognition architectures based on Convolutional
Neural Networks (CNNs) and Convolutional Deep Belief Networks (CDBNs) has
revolutionized artificial vision science. These architectures extract and learn
real-world hierarchical visual features using supervised and unsupervised
learning approaches, respectively. However, neither approach scales
realistically to recognize a very large number of objects, on the order of 10K.
We propose a two-level hierarchical deep learning architecture, inspired by the
divide-and-conquer principle, that decomposes the large-scale recognition
architecture into root-level and leaf-level model architectures. Each root- and
leaf-level model is trained exclusively on its subproblem to provide better
results than any single-level deep learning architecture prevalent today. The
proposed architecture classifies objects in two steps. In the first step, the
root-level model classifies the object into a high-level category. In the
second step, the leaf-level recognition model for the recognized high-level
category is selected from among all the leaf models and presented with the same
input image, which it classifies into a specific category. We also propose a
blend of leaf-level models trained with either supervised or unsupervised
learning; unsupervised learning is suitable whenever labelled data is scarce
for a particular leaf-level model. Training of the leaf-level models is in
progress: 25 of the 47 leaf-level models have been trained so far, with a
best-case top-5 error rate of 3.2% on the validation data set for the
corresponding leaf models. We also demonstrate that the validation error of the
leaf-level models saturates towards the above-mentioned accuracy as the number
of epochs is increased beyond sixty.

Comment: As appeared in the proceedings of CS & IT 2015 - Second International
Conference on Computer Science & Engineering (CSEN 2015)
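The two-step inference described above can be sketched as follows. This is an illustrative outline only, not the authors' implementation: the model callables, category names, and feature dictionary are assumptions standing in for trained networks.

```python
# Hypothetical sketch of the two-step hierarchical classification:
# a root model picks a high-level category, then the leaf model for
# that category assigns the specific class. Toy stand-ins replace
# the trained CNN/CDBN models from the paper.

def classify(image, root_model, leaf_models):
    """Classify an image in two steps: coarse category, then fine label."""
    # Step 1: the root-level model predicts a high-level category.
    coarse = root_model(image)
    # Step 2: the matching leaf-level model predicts the specific class.
    leaf = leaf_models[coarse]
    return coarse, leaf(image)

# Toy stand-ins for trained models (illustrative only):
root = lambda img: "animals" if img["furry"] else "vehicles"
leaves = {
    "animals":  lambda img: "cat",
    "vehicles": lambda img: "truck",
}

print(classify({"furry": True}, root, leaves))  # ('animals', 'cat')
```

One design consequence worth noting: because each leaf model only ever sees inputs from its own high-level category, its output space is far smaller than 10K, which is what makes the per-model training tractable.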
Forum Session at the First International Conference on Service Oriented Computing (ICSOC03)
The First International Conference on Service Oriented Computing (ICSOC) was held in Trento, December 15-18, 2003. The focus of the conference ---Service Oriented Computing (SOC)--- is the new emerging paradigm for distributed computing and e-business processing that has evolved from object-oriented and component computing to enable building agile networks of collaborating business applications distributed within and across organizational boundaries. Of the 181 papers submitted to the ICSOC conference, 10 were selected for the forum session, which took place on December 16th, 2003. The papers were chosen based on their technical quality, originality, relevance to SOC, and their suitability for a poster presentation or a demonstration. This technical report contains the 10 papers presented during the forum session at the ICSOC conference. In particular, the last two papers in the report were submitted as industrial papers.
Using Ontologies for the Design of Data Warehouses
Obtaining an implementation of a data warehouse is a complex task that forces
designers to acquire wide knowledge of the domain, thus requiring a high level
of expertise and making it a prone-to-fail task. Based on our experience, we
have identified a set of situations encountered in real-world projects in
which we believe the use of ontologies would improve several aspects of data
warehouse design. The aim of this article is to describe several shortcomings
of current data warehouse design approaches and discuss the benefit of using
ontologies to overcome them. This work is a starting point for discussing the
convenience of using ontologies in data warehouse design.

Comment: 15 pages, 2 figures
Adaptive learning-based resource management strategy in fog-to-cloud
Technology in the twenty-first century is developing rapidly, driving us into a new smart computing world and
giving rise to many new computing architectures. Fog-to-Cloud (F2C) is one of them: it emerges to bring greater
computing capability near the edge of the network and to help large-scale computing systems become more
intelligent. As F2C is still in its infancy, one of the biggest challenges for this computing paradigm is to
manage computing resources efficiently. To address this challenge, in this work we focus on designing an initial
architectural framework for a proper, adaptive and efficient resource management mechanism in F2C.
F2C has been proposed as a combined, coordinated and hierarchical computing platform in which a vast number of
heterogeneous computing devices participate. Their versatility creates a massive challenge for handling them
effectively. In any large-scale smart computing system, various kinds of services are provided for different
purposes, and every service comprises various tasks with different resource requirements. Knowing the
characteristics of the participating devices and of the services the system offers is therefore an advantage in
building an effective resource management mechanism in an F2C-enabled system. Considering these facts, we first
focus on identifying and defining a taxonomic model for all participating devices and for the services and tasks
involved in the system.
Any F2C-enabled system consists of a large number of small Internet-of-Things (IoT) devices that generate a
continuous and colossal amount of sensing data by capturing various environmental events. This sensing data is a
key ingredient of the various smart services offered by the F2C-enabled system. Resource statistical information
also plays a crucial role in providing services efficiently to the system's consumers: continuous monitoring of
the participating devices generates a massive amount of such information, and having it makes it much easier to
know a device's availability and suitability for executing the tasks behind a service. Therefore, to ensure
better service for latency-sensitive services, it is essential to distribute the sensing data and resource
statistical information securely over the network. Considering these matters, we also propose and design a
secure, distributed database framework for distributing the data over the network effectively and securely.
Building an advanced and smarter system requires an effective mechanism for utilizing system resources.
Typically, resource utilization and handling depend mainly on the resource selection and allocation mechanism.
Predicting resource usage (e.g., RAM, CPU, disk) and performance (i.e., task execution time) helps the selection
and allocation process. Thus, adopting machine learning (ML) techniques is useful for designing an advanced and
sophisticated resource allocation mechanism in an F2C-enabled system. Adopting and applying ML techniques in an
F2C-enabled system is, however, a challenging task: the overall diversification of the system, among other
issues, poses a massive obstacle. We therefore propose and design two possible architectural schemes for
applying ML techniques in the F2C-enabled system to achieve an adaptive, advanced and sophisticated resource
management mechanism. Our proposals are initial steps towards an overall architectural framework for resource
management in F2C-enabled systems.
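The prediction step described above can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's mechanism: it uses a 1-nearest-neighbour lookup over past observations, and the feature names (CPU load, free RAM) and numbers are invented for illustration.

```python
# Illustrative sketch: predicting a task's execution time on a device from
# resource statistics, the kind of ML step a resource-selection mechanism
# could use. A 1-nearest-neighbour model stands in for a real ML technique;
# features and values are hypothetical.

def predict_exec_time(history, candidate):
    """Return the execution time of the most similar past observation.

    history:   list of (features, exec_time_seconds) pairs,
               where features = (cpu_load, free_ram_mb)
    candidate: features of the device/task being considered
    """
    def dist(a, b):
        # Squared Euclidean distance between feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, t = min(history, key=lambda pair: dist(pair[0], candidate))
    return t

# Hypothetical monitoring history: (cpu_load, free_ram_mb) -> seconds.
past = [((0.2, 4096), 1.1), ((0.9, 512), 8.4), ((0.5, 2048), 3.0)]
print(predict_exec_time(past, (0.25, 3900)))  # 1.1
```

In practice the features would need scaling (RAM in MB dominates the distance here) and a real regressor would replace the nearest-neighbour lookup; the sketch only shows where prediction plugs into selection and allocation.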
Engineering Crowdsourced Stream Processing Systems
A crowdsourced stream processing system (CSP) is a system that incorporates
crowdsourced tasks in the processing of a data stream. This can be seen as
enabling crowdsourcing work to be applied on a sample of large-scale data at
high speed, or equivalently, enabling stream processing to employ human
intelligence. It also leads to a substantial expansion of the capabilities of
data processing systems. Engineering a CSP system requires the combination of
human and machine computation elements. From a general systems theory
perspective, this means taking into account inherited as well as emerging
properties from both these elements. In this paper, we position CSP systems
within a broader taxonomy, outline a series of design principles and evaluation
metrics, present an extensible framework for their design, and describe several
design patterns. We showcase the capabilities of CSP systems by performing a
case study that applies our proposed framework to the design and analysis of a
real system (AIDR) that classifies social media messages during time-critical
crisis events. Results show that, compared to a pure stream processing system,
AIDR achieves higher data classification accuracy, while compared to a pure
crowdsourcing solution, it makes better use of human workers by requiring much
less manual effort.
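The core routing idea behind a CSP system can be sketched in a few lines. This is a hedged illustration, not AIDR's actual pipeline: the classifier, the confidence threshold, and the toy messages are all assumptions.

```python
# Sketch of the central CSP design pattern: a machine classifier handles
# high-confidence items, and uncertain ones are queued as crowdsourced
# tasks for human workers. Threshold and classifier are hypothetical.

def process_stream(items, classifier, threshold=0.8):
    """Split a stream into machine-labelled items and a crowd task queue."""
    auto_labelled, crowd_queue = [], []
    for item in items:
        label, confidence = classifier(item)
        if confidence >= threshold:
            auto_labelled.append((item, label))  # machine decision accepted
        else:
            crowd_queue.append(item)             # routed to human workers
    return auto_labelled, crowd_queue

# Toy classifier: longer messages are "relevant" with high confidence.
clf = lambda msg: ("relevant", 0.9) if len(msg) > 20 else ("irrelevant", 0.5)
auto, crowd = process_stream(["bridge collapsed near the river", "ok"], clf)
print(len(auto), len(crowd))  # 1 1
```

The threshold is the lever that trades accuracy for human effort: raising it sends more items to the crowd, which matches the paper's observation that a hybrid system needs less manual work than pure crowdsourcing while classifying more accurately than pure machine processing.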
Conceptualizing the IT Artifact for MIS Research
The notion of the information technology (IT) artifact has received a great deal of attention, particularly since Benbasat and Zmud's (2003) call for it to be the core of the information systems discipline. Yet, little work has focused on defining and discussing the IT artifact in a way that can facilitate consistent treatment across studies. In this paper, we develop a taxonomy of the IT artifact. The taxonomy is derived from literature on general systems theory and Akerman and Tyree's (2006) architectural ontology. We provide a preliminary explication of the taxonomy using four different systems as examples. We also discuss the iterative approach we will take to develop the taxonomy more completely. Our objective is to develop a taxonomy that will provide a language for IS researchers to use when discussing the IT artifact.