The Emerging Internet of Things Marketplace From an Industrial Perspective: A Survey
The Internet of Things (IoT) is a dynamic global information network
consisting of internet-connected objects, such as Radio-frequency
identification (RFIDs), sensors, actuators, as well as other instruments and
smart appliances that are becoming an integral component of the future
internet. Over the last decade, we have seen a large number of the IoT
solutions developed by start-ups, small and medium enterprises, large
corporations, academic research institutes (such as universities), and private
and public research organisations making their way into the market. In this
paper, we survey over one hundred IoT smart solutions in the marketplace and
examine them closely in order to identify the technologies used,
functionalities, and applications. More importantly, we identify the trends,
opportunities, and open challenges in industry-based IoT solutions.
Based on the application domain, we classify and discuss these solutions under
five different categories: smart wearable, smart home, smart city, smart
environment, and smart enterprise. This survey is intended to serve as a
guideline and conceptual framework for future research in the IoT and to
motivate and inspire further developments. It also provides a systematic
exploration of existing research and suggests a number of potentially
significant research directions.
Comment: IEEE Transactions on Emerging Topics in Computing 201
Sensor Search Techniques for Sensing as a Service Architecture for The Internet of Things
The Internet of Things (IoT) is part of the Internet of the future and will
comprise billions of intelligent communicating "things" or Internet Connected
Objects (ICO) which will have sensing, actuating, and data processing
capabilities. Each ICO will have one or more embedded sensors that will capture
potentially enormous amounts of data. The sensors and related data streams can
be clustered physically or virtually, which raises the challenge of searching
and selecting the right sensors for a query in an efficient and effective way.
This paper proposes a context-aware sensor search, selection and ranking model,
called CASSARAM, to address the challenge of efficiently selecting a subset of
relevant sensors out of a large set of sensors with similar functionality and
capabilities. CASSARAM takes into account user preferences and considers a
broad range of sensor characteristics, such as reliability, accuracy, location,
battery life, and many more. The paper highlights the importance of sensor
search, selection and ranking for the IoT, identifies important characteristics
of both sensors and data capture processes, and discusses how semantic and
quantitative reasoning can be combined together. This work also addresses
challenges such as efficient distributed sensor search and
relational-expression based filtering. CASSARAM testing and performance
evaluation results are presented and discussed.
Comment: IEEE Sensors Journal, 2013. arXiv admin note: text overlap with
arXiv:1303.244
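The context-aware, preference-weighted ranking described above can be sketched as a small multi-criteria scoring routine. This is a minimal illustration, not CASSARAM's actual algorithm: the attribute names, weights, and the weighted-Euclidean-distance scoring are assumptions introduced here for clarity.

```python
# Hypothetical sketch of preference-weighted sensor ranking: each sensor is a
# point in a normalized characteristic space (reliability, accuracy, battery
# life, ...) and sensors are ranked by weighted Euclidean distance to the
# user's ideal point. Attributes and weights are illustrative, not the paper's.
import math

def rank_sensors(sensors, ideal, weights):
    """Return sensor ids sorted best-first by weighted distance to `ideal`.

    sensors: dict of id -> dict of normalized attribute values in [0, 1]
    ideal:   dict of attribute -> desired value in [0, 1]
    weights: dict of attribute -> user-assigned importance
    """
    def distance(attrs):
        return math.sqrt(sum(
            w * (attrs[a] - ideal[a]) ** 2 for a, w in weights.items()))
    return sorted(sensors, key=lambda sid: distance(sensors[sid]))

sensors = {
    "s1": {"reliability": 0.9, "accuracy": 0.8, "battery": 0.4},
    "s2": {"reliability": 0.6, "accuracy": 0.9, "battery": 0.9},
    "s3": {"reliability": 0.3, "accuracy": 0.5, "battery": 0.7},
}
ideal = {"reliability": 1.0, "accuracy": 1.0, "battery": 1.0}
weights = {"reliability": 0.5, "accuracy": 0.3, "battery": 0.2}
print(rank_sensors(sensors, ideal, weights))  # best sensor first
```

Adjusting the weights changes the ranking, which is the point: users with different priorities (for instance, battery life over accuracy) select different subsets from the same large sensor pool.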
Context-awareness for mobile sensing: a survey and future directions
The evolution of smartphones, together with their increasing computational power, has empowered developers to create innovative context-aware applications that recognize user-related social and cognitive activities in any situation and at any location. Awareness of context gives mobile devices the capability of being conscious of the physical environments or situations around their users, allowing network services to respond proactively and intelligently. The key idea behind context-aware applications is to encourage users to collect, analyze and share local sensory knowledge for large-scale community use by creating a smart network. The desired network is capable of making autonomous logical decisions to actuate environmental objects and also to assist individuals. However, many open challenges remain, arising mostly because the middleware services provided on mobile devices have limited resources in terms of power, memory and bandwidth. It is therefore critically important to study how these drawbacks can be analyzed and resolved, and at the same time to better understand the opportunities for the research community to contribute to context-awareness. To this end, this paper surveys the literature over the period 1991-2014, from the emerging concepts to applications of context-awareness in mobile platforms, providing up-to-date research and future research directions. Moreover, it points out the challenges faced in this regard and addresses them by proposing possible solutions.
Domain Conditioned Adaptation Network
Tremendous research efforts have been made to advance deep domain adaptation
(DA) by seeking domain-invariant features. Most existing deep DA models only
focus on aligning feature representations of task-specific layers across
domains while integrating a totally shared convolutional architecture for
source and target. However, we argue that such strongly-shared convolutional
layers might be harmful for domain-specific feature learning when source and
target data distribution differs to a large extent. In this paper, we relax a
shared-convnets assumption made by previous DA methods and propose a Domain
Conditioned Adaptation Network (DCAN), which aims to excite distinct
convolutional channels with a domain conditioned channel attention mechanism.
As a result, the critical low-level domain-dependent knowledge could be
explored appropriately. As far as we know, this is the first work to explore
the domain-wise convolutional channel activation for deep DA networks.
Moreover, to effectively align high-level feature distributions across two
domains, we further deploy domain conditioned feature correction blocks after
task-specific layers, which will explicitly correct the domain discrepancy.
Extensive experiments on three cross-domain benchmarks demonstrate the proposed
approach outperforms existing methods by a large margin, especially on very
tough cross-domain learning tasks.
Comment: Accepted by AAAI 202
Research-based versus clinical serum creatinine measurements and the association of acute kidney injury with subsequent kidney function: findings from the Chronic Renal Insufficiency Cohort study.
Background: Observational studies relying on clinically obtained data have shown that acute kidney injury (AKI) is linked to accelerated chronic kidney disease (CKD) progression. However, prior reports lacked uniform collection of important confounders such as proteinuria and pre-AKI kidney function trajectory, and may be susceptible to ascertainment bias, as patients may be more likely to undergo kidney function testing after AKI. Methods: We studied 444 adults with CKD who participated in the prospective Chronic Renal Insufficiency Cohort (CRIC) Study and were concurrent members of a large integrated healthcare delivery system. We estimated glomerular filtration rate (eGFR) trajectories using serum creatinine measurements from (i) the CRIC research protocol (yearly) and (ii) routine clinical care. We used linear mixed effects models to evaluate the associations of AKI with acute absolute change in eGFR and post-AKI eGFR slope, and explored whether these varied by source of creatinine results. Models were adjusted for demographic characteristics, diabetes status and albuminuria. Results: During a median follow-up of 8.5 years, the mean rate of eGFR loss was -0.31 mL/min/1.73 m2/year overall, and 73 individuals experienced AKI (55% Stage 1). A significant interaction existed between AKI and source of serum creatinine for acute absolute change in eGFR level after discharge; in contrast, AKI was independently associated with a faster rate of eGFR decline (mean additional loss of -0.67 mL/min/1.73 m2/year), which was not impacted by source of serum creatinine. Conclusions: AKI is independently associated with subsequent steeper eGFR decline regardless of the serum creatinine source used, but the strength of association is smaller than observed in prior studies after taking into account key confounders such as pre-AKI eGFR slope and albuminuria.
CoinSeg: Contrast Inter- and Intra- Class Representations for Incremental Segmentation
Class incremental semantic segmentation aims to strike a balance between the
model's stability and plasticity by maintaining old knowledge while adapting to
new concepts. However, most state-of-the-art methods use the freeze strategy
for stability, which compromises the model's plasticity. In contrast, releasing
parameter training for plasticity could lead to the best performance for all
categories, but this requires discriminative feature representation. Therefore,
we prioritize the model's plasticity and propose the Contrast inter- and
intra-class representations for Incremental Segmentation (CoinSeg), which
pursues discriminative representations for flexible parameter tuning. Inspired
by the Gaussian mixture model that samples from a mixture of Gaussian
distributions, CoinSeg emphasizes intra-class diversity with multiple
contrastive representation centroids. Specifically, we use mask proposals to
identify regions with strong objectness that are likely to be diverse
instances/centroids of a category. These mask proposals are then used for
contrastive representations to reinforce intra-class diversity. Meanwhile, to
avoid bias from intra-class diversity, we also apply category-level
pseudo-labels to enhance category-level consistency and inter-category
diversity. Additionally, CoinSeg ensures the model's stability and alleviates
forgetting through a specific flexible tuning strategy. We validate CoinSeg on
Pascal VOC 2012 and ADE20K datasets with multiple incremental scenarios and
achieve superior results compared to previous state-of-the-art methods,
especially in more challenging and realistic long-term scenarios. Code is
available at https://github.com/zkzhang98/CoinSeg.
Comment: Accepted by ICCV 202
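The multi-centroid idea described above (keeping several representation centroids per class, as in a Gaussian mixture, rather than a single prototype) can be sketched with a nearest-centroid attraction term. This is a hypothetical illustration of the concept, not CoinSeg's loss: the function name, loss form, and toy data are assumptions.

```python
# Hypothetical sketch of multiple contrastive centroids per class: each
# feature is pulled only toward its *nearest* same-class centroid, so
# distinct modes (instances) of a class can stay diverse instead of
# collapsing onto one prototype. Loss form and data are illustrative.
import numpy as np

def multi_centroid_loss(feats, labels, centroids):
    """feats: (N, D); labels: (N,); centroids: dict class -> (K, D)."""
    loss = 0.0
    for f, y in zip(feats, labels):
        cents = centroids[int(y)]
        d2 = ((cents - f) ** 2).sum(axis=1)  # squared distance to each centroid
        loss += d2.min()                     # attract to nearest centroid only
    return loss / len(feats)

rng = np.random.default_rng(1)
feats = rng.normal(size=(6, 3))             # toy features
labels = np.array([0, 0, 0, 1, 1, 1])       # two classes
centroids = {c: rng.normal(size=(2, 3)) for c in (0, 1)}  # K=2 per class
print(round(multi_centroid_loss(feats, labels, centroids), 3))
```

Taking the minimum over centroids is what preserves intra-class diversity: two features of the same class can each sit near a different centroid without being penalized toward a common mean.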
Bridge the Points: Graph-based Few-shot Segment Anything Semantically
The recent advancements in large-scale pre-training techniques have significantly enhanced the capabilities of vision foundation models, notably the Segment Anything Model (SAM), which can generate precise masks based on point and box prompts. Recent studies extend SAM to Few-shot Semantic Segmentation (FSS), focusing on prompt generation for SAM-based automatic semantic segmentation. However, these methods struggle with selecting suitable prompts, require specific hyperparameter settings for different scenarios, and suffer prolonged one-shot inference times due to the overuse of SAM, resulting in low efficiency and limited automation ability. To address these issues, we propose a simple yet effective approach based on graph analysis. In particular, a Positive-Negative Alignment module dynamically selects the point prompts for generating masks, notably uncovering the potential of the background context as a negative reference. A subsequent Point-Mask Clustering module aligns the granularity of masks and selected points as a directed graph, based on mask coverage over points. These points are then aggregated by efficiently decomposing the weakly connected components of the directed graph, constructing distinct natural clusters. Finally, positive and overshooting gating, benefiting from graph-based granularity alignment, aggregates high-confidence masks and filters out false-positive masks for the final prediction, reducing the need for additional hyperparameters and redundant mask generation. Extensive experimental analysis across standard FSS, One-shot Part Segmentation, and Cross-Domain FSS datasets validates the effectiveness and efficiency of the proposed approach, which surpasses state-of-the-art generalist models with a mIoU of 58.7% on COCO-20i and 35.2% on LVIS-92i. The code is available at https://andyzaq.github.io/GF-SAM/
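The clustering step described above (decomposing weakly connected components of a directed point-mask coverage graph) can be sketched with a small union-find routine that ignores edge direction, which is exactly what weak connectivity means. The toy nodes and edges below are assumptions for illustration, not data from the paper.

```python
# Illustrative sketch of the graph-decomposition step: build a directed graph
# from mask coverage over point prompts, then take its weakly connected
# components (components of the undirected version), so points covered by
# overlapping masks fall into one natural cluster. Toy data is hypothetical.
def weakly_connected_components(nodes, edges):
    parent = {n: n for n in nodes}
    def find(n):
        while parent[n] != n:
            parent[n] = parent[parent[n]]  # path halving
            n = parent[n]
        return n
    for u, v in edges:                     # direction ignored: weak connectivity
        parent[find(u)] = find(v)
    comps = {}
    for n in nodes:
        comps.setdefault(find(n), set()).add(n)
    return sorted(map(sorted, comps.values()))

# points p1..p4, masks m1..m2; an edge m -> p means mask m covers point p
nodes = ["p1", "p2", "p3", "p4", "m1", "m2"]
edges = [("m1", "p1"), ("m1", "p2"), ("m2", "p3")]
print(weakly_connected_components(nodes, edges))
# p4 is covered by no mask, so it forms its own singleton cluster
```

Each resulting component groups one set of mutually covering masks and points, giving the "distinct natural clusters" the abstract refers to without any clustering hyperparameters.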
VBLC: Visibility Boosting and Logit-Constraint Learning for Domain Adaptive Semantic Segmentation under Adverse Conditions
Generalizing models trained on normal visual conditions to target domains
under adverse conditions is demanding in practical systems. One prevalent
solution is to bridge the domain gap between clear- and adverse-condition
images to make satisfactory predictions on the target. However, previous
methods often rely on additional reference images of the same scenes taken
under normal conditions, which are quite difficult to collect in reality.
Furthermore, most of them mainly focus on an individual adverse condition,
such as nighttime or fog, weakening the model's versatility when encountering
other adverse weather. To overcome the above limitations, we propose a novel
framework, Visibility Boosting and Logit-Constraint learning (VBLC), tailored
for superior normal-to-adverse adaptation. VBLC explores the potential of
dispensing with reference images and resolving a mixture of adverse conditions
simultaneously. In detail, we first propose the visibility boost module to
dynamically improve target images via certain priors at the image level. Then,
we identify the overconfidence drawback of the conventional cross-entropy loss
for self-training and devise logit-constraint learning, which enforces a
constraint on logit outputs during training to mitigate this pain point. To
the best of our knowledge, this is a new perspective for tackling such a
challenging task. Extensive experiments on two normal-to-adverse domain
adaptation benchmarks, i.e., Cityscapes -> ACDC and Cityscapes ->
FoggyCityscapes + RainCityscapes, verify the effectiveness of VBLC, where it
establishes the new state of the art. Code is available at
https://github.com/BIT-DA/VBLC.
Comment: Camera ready for AAAI 2023. Code is available at
https://github.com/BIT-DA/VBL
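The general idea of constraining logit outputs to curb overconfident self-training can be sketched with a cross-entropy computed on norm-bounded logits. This is a hedged illustration of the concept, not necessarily VBLC's exact formulation: the L2 normalization and the toy logits are assumptions.

```python
# Hedged sketch of a logit-constraint idea: L2-normalizing logits before the
# softmax cross-entropy caps the achievable confidence, so arbitrarily large
# raw logits can no longer drive the loss to zero. The normalization form is
# an assumption for illustration, not necessarily the paper's formulation.
import numpy as np

def constrained_ce(logits, label, eps=1e-8):
    z = logits / (np.linalg.norm(logits) + eps)  # constrain logit norm to 1
    z = z - z.max()                              # numerically stable softmax
    logp = z - np.log(np.exp(z).sum())
    return -logp[label]

raw = np.array([12.0, 1.0, 0.5])    # very confident raw logits
print(constrained_ce(raw, 0))       # loss stays bounded despite confidence
print(constrained_ce(raw * 10, 0))  # scaling logits no longer changes the loss
```

The scale invariance in the second call is the key property: under plain cross-entropy, self-training can reduce loss simply by inflating logit magnitudes, whereas the constrained form removes that incentive.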
