Project Titles 2016-17
===============================================================================================
ECE (IEEE 2015):(EMBEDDED SYSTEM)
ECE (IEEE 2015):(MATLAB)
EEE (IEEE 2015):(PED,PS,EM)
ALL TITLES (IEEE 2015-ALL DEPARTMENTS):
===============================================================================================
Project Titles 2014
CSE/IT (IEEE 2014):(JAVA/DotNet)
CSE (IEEE 2014):(ANDROID)
EEE (IEEE 2014)
ECE (IEEE 2014):(EMBEDDED SYSTEM)
ECE (IEEE 2014):(MATLAB IMAGE PROCESSING,DSP,COMMUNICATION)
ECE (IEEE 2014):(ANDROID)
MECHANICAL (IEEE 2014):
===============================================================================================
To get any base paper from this list, send your details (college name, department, contact number) to our mail ID spectrumpondicherry@gmail.com.
===============================================================================================
===============================================================================================
JAVA/NS-2
===============================================================================================
APPENDIX (abbreviations used in the project codes; MM and SE are inferred from the titles below):
D  - DotNet
J  - Java
IP - Image Processing
DM - Data Mining
NS - Network Security
NW - Networking
MC - Mobile Computing
SC - Service Computing
PD - Parallel & Distributed
CC - Cloud Computing
MM - Multimedia
SE - Software Engineering
S.No | Code | Title | Year | Abstract

1 | JCCZ-01 | Audit-Free Cloud Storage via Deniable Attribute-Based Encryption | IEEE-2015
Cloud storage
services have become increasingly popular. Because of the importance of
privacy, many cloud storage encryption schemes have been proposed to protect
data from those who do not have access. All such schemes assumed that cloud
storage providers are safe and cannot be hacked; however, in practice, some
authorities (i.e., coercers) may force cloud storage providers to reveal user
secrets or confidential data on the cloud, thus altogether circumventing
storage encryption schemes. In this paper, we present our design for a new
cloud storage encryption scheme that enables cloud storage providers to
create convincing fake user secrets to protect user privacy. Since coercers
cannot tell if obtained secrets are true or not, the cloud storage providers
ensure that user privacy is still securely protected.

2 | JCCZ-02 | CHARM: A Cost-Efficient Multi-Cloud Data Hosting Scheme with High Availability | IEEE-2015
Nowadays,
more and more enterprises and organizations are hosting their data into the
cloud, in order to reduce the IT maintenance cost and enhance the data
reliability. However,
facing
the numerous cloud vendors as well as their heterogeneous pricing policies,
customers may well be perplexed about which cloud(s) are suitable for storing
their data and what hosting strategy is cheaper. The general status quo is
that customers usually put their data into a single cloud (which is subject
to the vendor lock-in risk) and then simply trust to luck. Based on
comprehensive analysis of various state-of-the-art cloud vendors, this paper
proposes a novel data hosting scheme (named CHARM) which integrates two key
functions desired. The first is selecting several suitable clouds and an
appropriate redundancy strategy to store data with minimized monetary cost
and guaranteed availability. The second is triggering a transition process to
re-distribute data according to the variations of data access pattern and
pricing of clouds. We evaluate the performance of CHARM using both
trace-driven simulations and prototype experiments. The results show that
compared with the major existing schemes, CHARM not only saves around 20% of
monetary cost but also exhibits sound adaptability to data and price
adjustments.
Index Terms—Multi-cloud; data hosting; cloud
storage.
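To get a feel for CHARM's first function (choosing clouds that meet an availability
target at minimum cost), here is a brute-force Python sketch. The cloud names, prices,
and availability figures are invented, and the paper's scheme also selects an
appropriate redundancy strategy, which this replication-only sketch omits.

    from itertools import combinations

    # Hypothetical clouds: (name, price per GB-month, availability).
    clouds = [("c1", 0.020, 0.990), ("c2", 0.024, 0.995),
              ("c3", 0.018, 0.980), ("c4", 0.030, 0.999)]

    def replication_availability(subset):
        # With full replication, data is lost only if every chosen cloud fails.
        p_all_fail = 1.0
        for _, _, a in subset:
            p_all_fail *= (1.0 - a)
        return 1.0 - p_all_fail

    def cheapest_subset(target=0.9999):
        best = None
        for r in range(1, len(clouds) + 1):
            for subset in combinations(clouds, r):
                cost = sum(price for _, price, _ in subset)  # r full replicas
                if replication_availability(subset) >= target:
                    if best is None or cost < best[0]:
                        best = (cost, [name for name, _, _ in subset])
        return best

    print(cheapest_subset())  # (monthly cost per GB, chosen clouds)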

3 | JCCZ-03 | Enabling Cloud Storage Auditing with Key-Exposure Resistance | IEEE-2015
Cloud
storage auditing is viewed as an important service to verify the integrity of
the data in public cloud. Current auditing protocols are all based on the
assumption that the client’s secret key for auditing is absolutely secure.
However, such an assumption may not always hold, due to the possibly weak
sense of security and/or low security settings at the client. If such a
secret key for auditing is exposed, most of the current auditing protocols
would inevitably become unable to work. In this paper, we focus on this new
aspect of cloud storage auditing. We investigate how to reduce the damage of
the client’s key exposure in cloud storage auditing, and give the first
practical solution for this new problem setting. We formalize the definition
and the security model of auditing protocol with key-exposure resilience and
propose such a protocol. In our design, we employ the binary tree structure
and the pre-order traversal technique to update the secret keys for the
client. We also develop a novel authenticator construction to support the
forward security and the property of blockless verifiability. The security
proof and the performance analysis show that our proposed protocol is secure
and efficient.
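The binary-tree idea can be pictured with a small hash-based sketch: key periods are
numbered in pre-order, and each child key is derived one-way from its parent, so keys
erased after use cannot be recomputed. This mirrors only the traversal structure,
assuming SHA-256 as the derivation step; it is not the paper's full authenticator
construction.

    import hashlib

    HEIGHT = 3  # binary tree of height 3 => 2**4 - 1 = 15 key periods

    def preorder_path(t, h=HEIGHT):
        # Branch bits of the root-to-node path for pre-order index t.
        path = []
        while t > 0:
            left_size = 2 ** h - 1          # nodes in the left subtree
            if t <= left_size:
                path.append(0); t -= 1
            else:
                path.append(1); t -= 1 + left_size
            h -= 1
        return path

    def period_key(root_secret: bytes, t: int) -> bytes:
        # Each child key is a one-way hash of its parent key and the branch
        # bit, so old keys cannot be recomputed once erased (forward security).
        k = root_secret
        for bit in preorder_path(t):
            k = hashlib.sha256(k + bytes([bit])).digest()
        return k

    print(period_key(b"root-secret", 5).hex()[:16])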

4 | JCCZ-04 | MobiContext: A Hybrid Cloud-Based Bi-Objective Recommendation Framework for Mobile Social Networks | IEEE-2015
In
recent years, recommendation systems have seen significant evolution in the
field of knowledge engineering. Most of the existing recommendation systems
based their models on collaborative filtering approaches that make them
simple to implement. However, the performance of most existing
collaborative filtering based recommendation systems suffers due to challenges
such as: (a) cold start, (b) data sparseness, and (c) scalability. Moreover, the
recommendation problem is often characterized by the presence of many
conflicting objectives or decision variables, such as users’ preferences and
venue closeness. In this paper, we propose MobiContext, a hybrid
cloud-based Bi-Objective Recommendation Framework (BORF) for mobile social
networks. MobiContext utilizes multi-objective optimization techniques to
generate personalized recommendations. To address the issues pertaining to
cold start and data sparseness, the BORF performs data preprocessing by using
the Hub Average (HA) inference model. Moreover, the Weighted Sum Approach
(WSA) is implemented for scalar optimization and an evolutionary algorithm
(NSGA-II) is applied for vector optimization to provide optimal
suggestions to the users about a venue.
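Of these, the Weighted Sum Approach is the simplest to picture: it collapses the two
objectives (predicted preference vs. venue closeness) into one scalar score. A minimal
sketch with made-up venues and weights:

    # Weighted Sum Approach: collapse two objectives into one scalar score.
    # Venues: (name, predicted preference in [0,1], distance in km) -- made up.
    venues = [("cafe", 0.9, 5.0), ("park", 0.6, 0.5), ("museum", 0.8, 2.0)]

    def wsa_score(pref, dist, w_pref=0.7, w_dist=0.3, max_dist=10.0):
        closeness = 1.0 - min(dist, max_dist) / max_dist   # normalize to [0,1]
        return w_pref * pref + w_dist * closeness

    best = max(venues, key=lambda v: wsa_score(v[1], v[2]))
    print(best[0])   # venue with the best combined score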

5 | JCCZ-05 | OPoR: Enabling Proof of Retrievability in Cloud Computing with Resource-Constrained Devices | IEEE-2015
Cloud
Computing moves the application software and databases to the centralized
large data centers, where the management of the data and services may not be
fully trustworthy. In this work, we study the problem of ensuring the integrity
of data storage in Cloud Computing. To reduce the computational cost at user
side during the integrity verification of their data, the notion of public
verifiability has been proposed. However, the challenge is that the
computational burden is too huge for the users with resource-constrained
devices to compute the public authentication tags of file blocks. To tackle
the challenge, we propose OPoR, a new cloud storage scheme involving a cloud
storage server and a cloud audit server, where the latter is assumed to be
semi-honest. In particular, we consider the task of allowing the cloud audit
server, on behalf of the cloud users, to pre-process the data before
uploading to the cloud storage server and later verifying the data integrity.
OPoR outsources the heavy computation of the tag generation to the cloud
audit server and eliminates the involvement of user in the auditing and in
the preprocessing phases. Furthermore, we strengthen the Proof of
Retrievability (PoR) model to support dynamic data operations, as well as
ensure security against reset attacks launched by the cloud storage server in
the upload phase.

6 | JCCZ-06 | Privacy-Preserving Public Auditing for Regenerating-Code-Based Cloud Storage | IEEE-2015
To
protect outsourced data in cloud storage against corruptions, adding fault
tolerance to cloud storage together with data integrity checking and failure
reparation becomes critical. Recently, regenerating codes have gained
popularity due to their lower repair bandwidth while providing fault
tolerance. Existing remote checking methods for regenerating-coded data only
provide private auditing, requiring data owners to always stay online and
handle auditing, as well as repairing, which is sometimes impractical. In
this paper, we propose a public auditing scheme for the regenerating-code-based
cloud storage. To solve the regeneration problem of failed authenticators in
the absence of data owners, we introduce a proxy, which is privileged to
regenerate the authenticators, into the traditional public auditing system
model. Moreover, we design a novel public verifiable authenticator, which is
generated by a couple of keys and can be regenerated using partial keys.
Thus, our scheme can completely release data owners from online burden. In
addition, we randomize the encode coefficients with a pseudorandom function
to preserve data privacy. Extensive security analysis shows that our scheme
is provably secure under the random oracle model, and experimental evaluation
indicates that our scheme is highly efficient.

7 | JCCZ-07 | A Profit Maximization Scheme with Guaranteed Quality of Service in Cloud Computing | IEEE-2015
As
an effective and efficient way to provide computing resources and services to
customers on demand, cloud computing has become more and more popular. From
cloud service providers’ perspective, profit is one of the most important
considerations, and it is mainly determined by the configuration of a cloud
service platform under given market demand. However, a single long-term
renting scheme is usually adopted to configure a cloud platform, which cannot
guarantee the service quality but leads to serious resource waste. In this
paper, a double resource renting scheme combining short-term and long-term
renting is first designed to address the existing issues. This double renting
scheme can effectively guarantee the quality of
service of all requests and reduce the resource waste greatly. Secondly, a
service system is considered as an M/M/m+D queuing model and the
performance indicators that affect the profit of our double renting scheme
are analyzed, e.g., the average charge, the ratio of requests that need
temporary servers, and so forth. Thirdly, a profit maximization problem is
formulated for the double renting scheme and the optimized configuration of a
cloud platform is obtained by solving the profit maximization problem.
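As a toy illustration of the optimization step only: the sketch below scans the number
of rented servers m for a plain M/M/m queue using the Erlang C formula. The paper's
M/M/m+D model (with deadline D) and its double renting scheme are more involved, and
every rate, price, and penalty here is invented.

    from math import factorial

    def erlang_c(lam, mu, m):
        # Probability of waiting in an M/M/m queue (requires lam < m * mu).
        a = lam / mu                                   # offered load
        tail = (a ** m / factorial(m)) * (m / (m - a))
        head = sum(a ** k / factorial(k) for k in range(m))
        return tail / (head + tail)

    def profit(lam, mu, m, price=2.0, rent=0.8, wait_penalty=5.0):
        # Revenue per unit time, minus server rental, minus a toy penalty
        # proportional to the mean queueing delay (not the paper's model).
        wq = erlang_c(lam, mu, m) / (m * mu - lam)     # mean wait in queue
        return lam * price - m * rent - wait_penalty * lam * wq

    lam, mu = 8.0, 1.0                                 # arrival / service rates
    best_m = max(range(9, 20), key=lambda m: profit(lam, mu, m))
    print(best_m, round(profit(lam, mu, best_m), 3))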

8 | JCCZ-08 | Reactive Resource Provisioning Heuristics for Dynamic Dataflows on Cloud Infrastructure | IEEE-2015
The
need for low latency analysis over high-velocity data streams motivates the
need for distributed continuous dataflow systems. Contemporary stream
processing systems use simple techniques to scale on elastic cloud resources
to handle variable data rates. However, application QoS is also impacted by
variability in resource performance exhibited by clouds and hence
necessitates “dynamic dataflows” which
utilize alternate tasks as additional control over the dataflow’s cost and
QoS. Further, we formalize an optimization problem to represent deployment
and runtime resource provisioning that allows us to balance the application’s
QoS, value, and the resource cost. We propose two greedy heuristics, centralized
and sharded, based on the variable-sized bin packing algorithm and compare
against a Genetic Algorithm (GA) based heuristic that gives a near-optimal
solution. A large-scale simulation study, using the Linear Road Benchmark and
VM performance traces from the AWS public cloud, shows that while GA-based
heuristic provides a better quality schedule, the greedy heuristics are more
practical, and can intelligently utilize cloud elasticity to mitigate the
effect of variability, both in input data rates and cloud resource
performance, to meet the QoS of fast data applications.
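The greedy heuristics build on variable-sized bin packing. A first-fit-decreasing
sketch with hypothetical VM types is below; it illustrates only the underlying packing
idea, not the paper's centralized and sharded algorithms.

    # Greedy first-fit-decreasing sketch for variable-sized bin packing:
    # pack task demands into VM "bins" of different sizes and costs.
    vm_types = [("small", 2.0, 0.05), ("large", 8.0, 0.17)]  # (name, cap, $/h)

    def provision(demands):
        by_unit_cost = sorted(vm_types, key=lambda v: v[2] / v[1])  # $/unit
        bins = []  # each bin: [type_name, free_capacity]
        for d in sorted(demands, reverse=True):          # largest first
            for b in bins:                               # first fit
                if b[1] >= d:
                    b[1] -= d
                    break
            else:  # no open bin fits: open the most cost-efficient type that does
                name, cap, _ = next(v for v in by_unit_cost if v[1] >= d)
                bins.append([name, cap - d])
        return bins

    print(provision([1.5, 0.5, 3.0, 0.7, 6.0]))  # [type, leftover] per VM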

9 | JCCZ-09 | SAE: Toward Efficient Cloud Data Analysis Service for Large-Scale Social Networks | IEEE-2015
Social
network analysis is used to extract features of human communities and proves
to be very instrumental in a variety of scientific domains. The dataset of a
social network is often so large that a cloud data analysis service, in which
the computation is performed on a parallel platform in the cloud, becomes a
good choice for researchers not experienced in parallel programming. In the
cloud, a primary challenge to efficient data analysis is the computation and
communication skew (i.e., load imbalance) among computers caused by
humanity’s group behavior (e.g., bandwagon effect). Traditional load
balancing techniques either require significant effort to re-balance loads on
the nodes, or cannot well cope with stragglers. In this paper, we propose a
general straggler-aware execution approach, SAE, to support the analysis
service in the cloud. It offers a novel computational decomposition method
that factors straggling feature extraction processes into more fine-grained
sub-processes, which are then distributed over clusters of computers for
parallel execution. Experimental results show that SAE can speed up the
analysis by up to 1.77 times compared with state-of-the-art solutions.

10 | JCCZ-10 | Service Operator-Aware Trust Scheme for Resource Matchmaking Across Multiple Clouds | IEEE-2015
This
paper proposes a service operator-aware trust scheme (SOTS) for resource
matchmaking across multiple clouds. Through analyzing the built-in
relationship between the users, the broker, and the service resources, this
paper proposes a middleware framework of trust management that can
effectively reduce user burden and improve system dependability. Based on
multi-dimensional resource service operators, we model the problem of trust
evaluation as a process of multi-attribute decision-making, and develop an
adaptive trust evaluation approach based on information entropy theory. This
adaptive approach can overcome the limitations of traditional trust schemes,
whereby the trusted operators are weighted manually or subjectively. As a
result, using SOTS, the broker can efficiently and accurately prepare the
most trusted resources in advance, and thus provide more dependable resources
to users. Our experiments yield interesting and meaningful observations that
can facilitate the effective utilization of SOTS in a large-scale multi-cloud
environment.
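The information-entropy step can be illustrated with the classic entropy weight method
from multi-attribute decision-making: attributes whose values vary more across
candidates receive larger weights. The attribute matrix below is fabricated; SOTS
layers this into a broker middleware.

    import math

    # Rows are candidate resources, columns are trust attributes
    # (e.g. latency, success rate, throughput); numbers are invented.
    X = [[0.9, 0.7, 0.8],
         [0.6, 0.9, 0.7],
         [0.8, 0.8, 0.9]]

    def entropy_weights(X):
        n, m = len(X), len(X[0])
        weights = []
        for j in range(m):
            col = [X[i][j] for i in range(n)]
            p = [v / sum(col) for v in col]
            # Normalized Shannon entropy of the attribute's distribution.
            e = -sum(v * math.log(v) for v in p if v > 0) / math.log(n)
            weights.append(1.0 - e)  # lower entropy => more discriminating
        s = sum(weights)
        return [w / s for w in weights]

    w = entropy_weights(X)
    scores = [sum(wj * xj for wj, xj in zip(w, row)) for row in X]
    print(w, scores.index(max(scores)))  # weights and most trusted candidate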

11 | JCCZ-11 | Towards Optimized Fine-Grained Pricing of IaaS Cloud Platform | IEEE-2015
Although
many pricing schemes in IaaS platform are already proposed with pay-as-you-go
and subscription/spot market policy to guarantee service level agreement, it
is still inevitable to suffer from wasteful payment because of coarsegrained
pricing scheme. In this paper, we investigate an optimized fine-grained and
fair pricing scheme. Two tough issues are addressed: (1) the profits of
resource providers and customers often contradict mutually; (2)
VM-maintenance overhead like startup cost is often too huge to be neglected.
Not only can we derive an optimal price in the acceptable price range that
satisfies both customers and providers simultaneously, but we also find a
best-fit billing cycle to maximize social welfare (i.e., the sum of the cost
reductions for all customers and the revenue gained by the provider). We
carefully evaluate the proposed optimized fine-grained pricing scheme with
two large-scale real-world production traces (one from Grid Workload Archive
and the other from Google data center). We compare the new scheme to classic
coarse-grained hourly pricing scheme in experiments and find that customers
and providers can both benefit from our new approach. The maximum social
welfare can be increased up to 72.98% and 48.15% with respect
to DAS-2 trace and Google trace respectively.

12 | JCCZ-12 | Understanding the Performance and Potential of Cloud Computing for Scientific Applications | IEEE-2015
Commercial clouds bring a great opportunity to the scientific computing area.
Scientific applications usually require significant resources, yet not all
scientists have access to sufficient high-end computing systems. Cloud
computing has gained the attention of scientists as a competitive resource to
run HPC applications at a potentially lower cost. But as a different
infrastructure, it is unclear whether clouds are capable of running scientific
applications with a reasonable performance per money spent. This work provides
a comprehensive evaluation of EC2 cloud in different aspects. We first analyze
the potentials of the cloud by evaluating the raw performance of different
services of AWS such as compute, memory, network and I/O. Based on the
findings on the raw performance, we then evaluate the performance of the
scientific applications running in the cloud. Finally, we compare the
performance of AWS with a private cloud, in order to find the root cause of
its limitations while running scientific applications. This paper aims to
assess the ability of the cloud to perform well, as well as to evaluate the
cost of the cloud in terms of both raw performance and scientific applications
performance. Furthermore, we evaluate other services including S3, EBS and
DynamoDB among many AWS services in order to assess their suitability for use
by scientific applications and frameworks. We also evaluate a real scientific
computing application through the Swift parallel scripting system at scale.

13 | JDMZ-01 | Anonymizing Collections of Tree-Structured Data | IEEE-2015
Collections of
real-world data usually have implicit or explicit structural relations. For
example, databases link records through foreign keys, and XML documents
express associations between different values through syntax. Privacy
preservation, until now, has focused either on data with a very simple
structure, e.g. relational tables, or on data with very complex structure e.g.
social network graphs, but has ignored intermediate cases, which are the most
frequent in practice. In this work, we focus on tree structured data. Such
data stem from various applications, even when the structure is not directly
reflected in the syntax, e.g. XML documents. A characteristic case is a
database where information about a single person is scattered amongst
different tables that are associated through foreign keys. The paper defines
k(m,n)-anonymity, which provides protection against identity disclosure and
proposes a greedy anonymization heuristic that is able to sanitize large
datasets. The algorithm and the quality of the anonymization are evaluated
experimentally.

14 | JDMZ-02 | FOCS: Fast Overlapped Community Search | IEEE-2015
However, most of the existing algorithms
that detect overlapping communities assume that the communities are denser
than their surrounding regions and falsely identify overlaps as communities.
Further, many of these algorithms are computationally demanding and thus, do
not scale reasonably with varying network sizes. In this article, we propose
FOCS (Fast Overlapped Community Search), an algorithm that accounts for local
connectedness in order to identify overlapped communities. FOCS is shown to
be linear in number of edges and nodes. It additionally gains in speed via
simultaneous selection of multiple near-best communities rather than merely
the best, at each iteration. FOCS outperforms some popular overlapped
community finding algorithms in terms of

15 | JDMZ-03 | Making Digital Artifacts on the Web Verifiable and Reliable | IEEE-2015
The
current Web has no general mechanisms to make digital artifacts — such as
datasets, code, texts, and images — verifiable and permanent. For digital
artifacts that are supposed to be immutable, there is moreover no commonly
accepted method to enforce this immutability. These shortcomings have a
serious negative impact on the ability to reproduce the results of processes
that rely on Web resources, which in turn heavily impacts areas such as
science where reproducibility is important. To solve this problem, we propose
trusty URIs containing cryptographic hash values. We show how trusty URIs can
be used for the verification of digital artifacts, in a manner that is
independent of the serialization format in the case of structured data files
such as nanopublications.
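The verification pattern behind trusty URIs can be sketched in a few lines, assuming
SHA-256 and Base64url encoding; the paper specifies its own module and encoding
scheme, so treat this only as the general idea.

    import base64, hashlib

    def trusty_suffix(content: bytes) -> str:
        # Hash the (normalized) content and embed a URL-safe digest in the
        # URI, so anyone dereferencing it can re-hash and verify the artifact.
        digest = hashlib.sha256(content).digest()
        return base64.urlsafe_b64encode(digest).decode().rstrip("=")

    data = b"example dataset, canonicalized"
    uri = "http://example.org/artifact." + trusty_suffix(data)
    assert trusty_suffix(data) == uri.rsplit(".", 1)[1]  # verification step
    print(uri)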

16 | JDMZ-04 | Privacy Policy Inference of User-Uploaded Images on Content Sharing Sites | IEEE-2015
With the increasing volume of images users share through
social sites, maintaining privacy has become a major problem, as demonstrated
by a recent wave of publicized incidents where users inadvertently shared
personal information. In light of these incidents, the need for tools to help
users control access to their shared content is apparent. Toward addressing
this need, we propose an Adaptive Privacy Policy Prediction (A3P) system to
help users compose privacy settings for their images. We propose a two-level
framework which according to the user’s available history on the site,
determines the best available privacy policy for the user’s images being uploaded.
Our solution relies on an image classification framework for image categories
which may be associated with similar policies, and on a policy prediction
algorithm to automatically generate a policy for each newly uploaded image,
also according to users’ social features.

17 | JDMZ-05 | RRW: A Robust and Reversible Watermarking Technique for Relational Data | IEEE-2015
Advancement
in information technology is playing an increasing role in the use of
information systems comprising relational databases. These databases are used
effectively in collaborative environments for information extraction;
consequently, they are vulnerable to security threats concerning ownership
rights and data tampering. Watermarking is advocated to enforce ownership
rights over shared relational data and for providing a means for tackling
data tampering. When ownership rights are enforced using watermarking, the
underlying data undergoes certain modifications; as a result of which, the
data quality gets compromised. Reversible watermarking is employed to ensure
data quality along-with data recovery. However, such techniques are usually
not robust against malicious attacks and do not provide any mechanism to
selectively watermark a particular attribute by taking into account its role
in knowledge discovery. Therefore, a reversible watermarking technique is
required that ensures: (i) watermark encoding and decoding by accounting for
the role of all the features in knowledge discovery; and (ii) original data
recovery in the presence of active malicious attacks.

18 | JDMZ-06 | Sparsity Learning Formulations for Mining Time-Varying Data | IEEE-2015
Traditional
clustering and feature selection methods consider the data matrix as static.
However, the data matrices evolve smoothly over time in many applications. A
simple approach to learn from these time-evolving data matrices is to analyze
them separately. Such strategy ignores the time-dependent nature of the
underlying data. In this paper, we propose two formulations for evolutionary
co-clustering and feature selection based on the fused Lasso regularization. The
evolutionary co-clustering formulation is able to identify smoothly varying
hidden block structures embedded into the matrices along the temporal
dimension. Our formulation is very flexible and allows for imposing
smoothness constraints over only one dimension of the data matrices. The
evolutionary feature selection formulation can uncover shared features in
clustering from time-evolving data matrices. We show that the optimization
problems involved are non-convex, non-smooth and non-separable. To compute
the solutions efficiently, we develop a two-step procedure that optimizes the
objective function iteratively. We evaluate the proposed formulations using
the Allen Developing Mouse Brain Atlas data. Results show that our
formulations consistently outperform prior methods.

19 | JDMZ-07 | Structured Learning from Heterogeneous Behavior for Social Identity Linkage | IEEE-2015
Social
identity linkage across different social media platforms is of critical
importance to business intelligence by gaining from social data a deeper
understanding and more accurate profiling of users. In this paper, we propose
a solution framework, HYDRA, which consists of three key steps: (I) we model
heterogeneous behavior by long-term topical distribution analysis and
multi-resolution temporal behavior matching against high noise and
information missing, and the behavior similarity is described by a
multi-dimensional similarity vector for each user pair; (II) we build
structure consistency models to maximize the structure and behavior
consistency on users’ core social structure across different platforms, thus
the task of identity linkage can be performed on groups of users, which is
beyond the individual level linkage in previous study; and (III) we propose a
normalized-margin-based linkage function formulation, and learn the linkage
function by multi-objective optimization where both supervised pair-wise
linkage function learning and structure consistency maximization are
conducted towards a unified Pareto optimal solution. The model is able to
deal with drastic information missing, and avoid the curse-of-dimensionality
in handling high dimensional sparse representation.

20 | JDMZ-08 | Subgraph Matching with Set Similarity | IEEE-2015
In
real-world graphs such as social networks, Semantic Web and biological
networks, each vertex usually contains rich information, which can be modeled
by a set of tokens or elements. In this paper, we study a subgraph matching
with set similarity (SMS2) query over a
large graph database, which retrieves subgraphs that are structurally
isomorphic to the query graph, and meanwhile satisfy the condition of vertex
pair matching with the (dynamic) weighted set similarity. To efficiently
process the SMS2 query, this paper
designs a novel lattice-based index for data graph, and lightweight
signatures for both query vertices and data vertices. Based on the index and
signatures, we propose an efficient two-phase pruning strategy including set
similarity pruning and structure-based pruning, which exploits the unique
features of both (dynamic) weighted set similarity and graph topology. We
also propose an efficient dominating-set-based subgraph matching algorithm
guided by a dominating set selection algorithm to achieve better query
performance. Extensive experiments on both real and synthetic datasets demonstrate
that our method outperforms state-of-the-art methods by an order of
magnitude.
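The vertex-pair matching condition can be pictured with a weighted Jaccard measure
over token sets; the token weights below are arbitrary stand-ins, and the paper's
dynamic weighted similarity and lattice-based index go well beyond this.

    # Weighted set similarity between two vertices' token sets, as used for
    # vertex-pair matching; the token weights here are arbitrary examples.
    weights = {"ml": 3.0, "graphs": 2.0, "db": 1.0, "vision": 2.5}

    def weighted_jaccard(s1, s2):
        inter = sum(weights.get(t, 1.0) for t in s1 & s2)
        union = sum(weights.get(t, 1.0) for t in s1 | s2)
        return inter / union if union else 0.0

    print(weighted_jaccard({"ml", "graphs", "db"}, {"ml", "db", "vision"}))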

21 | JDMZ-09 | The Impact of View Histories on Edit Recommendations | IEEE-2015
Recommendation
systems are intended to increase developer productivity by recommending files
to edit. These systems mine association rules in software revision histories.
However, mining coarse-grained rules using
only edit histories produces recommendations with low accuracy, and can only
produce recommendations after a developer edits a file. In this work, we
explore the use of finer-grained association rules, based on the insight that
view histories help characterize the contexts of files to edit. To leverage
this additional context and fine-grained association rules, we have developed
MI, a recommendation system extending ROSE, an existing edit-based
recommendation system. We then conducted a
comparative simulation of ROSE and MI using the interaction histories stored
in the Eclipse Bugzilla system. The simulation demonstrates that MI predicts
the files to edit with significantly higher recommendation accuracy than ROSE
(about 63% over 35%), and makes recommendations earlier, often before
developers begin editing. Our results clearly demonstrate the value of
considering both views and edits in systems to recommend files to edit, and
results in more accurate, earlier, and more flexible recommendations.
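The underlying view-to-edit association rules can be pictured by counting rule
confidence over interaction sessions; the sessions below are fabricated, and MI's
actual mining is considerably more refined.

    from collections import Counter

    # Sessions: (files viewed, files edited). Mining "viewed X => edited Y"
    # rules by confidence; the data is made up for illustration.
    sessions = [({"a.java", "b.java"}, {"b.java"}),
                ({"a.java"}, {"a.java", "c.java"}),
                ({"a.java", "c.java"}, {"c.java"})]

    viewed, rule = Counter(), Counter()
    for views, edits in sessions:
        for v in views:
            viewed[v] += 1
            for e in edits:
                rule[(v, e)] += 1

    for (v, e), n in rule.items():
        print(f"view {v} => edit {e}: confidence {n / viewed[v]:.2f}")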

22 | JDMZ-10 | Towards Effective Bug Triage with Software Data Reduction Techniques | IEEE-2015
Software companies spend over 45 percent of cost in dealing with
software bugs. An inevitable step of fixing bugs is bug triage, which aims to
correctly assign a developer to a new bug. To decrease the time cost in
manual work, text classification techniques are applied to conduct automatic
bug triage. In this paper, we address the problem of data reduction for bug
triage, i.e., how to reduce the scale and improve the quality of bug data. We
combine instance selection with feature selection to simultaneously reduce
data scale on the bug dimension and the word dimension. To determine the
order of applying instance selection and feature selection, we extract
attributes from historical bug data sets and build a predictive model for a
new bug data set. We empirically investigate the performance of data
reduction on a total of 600,000 bug reports from two large open source projects,
namely Eclipse and Mozilla. The results show that our data reduction can
effectively reduce the data scale and improve the accuracy of bug triage. Our
work provides an approach to leveraging techniques on data processing to form
reduced and high-quality bug data in software development and maintenance.

23 | JIPZ-01 | Multiview Alignment Hashing for Efficient Image Search | IEEE-2015
Hashing
is a popular and efficient method for nearest neighbor search in large-scale
data spaces, by embedding high-dimensional feature descriptors into a
similarity preserving Hamming space with a low dimension. For most hashing
methods, the performance of retrieval heavily depends on the choice of the
high-dimensional feature descriptor. Furthermore, a single type of feature
cannot be descriptive enough for different images when it is used for
hashing. Thus, how to combine multiple representations for learning effective
hashing functions is an imminent task. In this paper, we present a novel
unsupervised Multiview Alignment Hashing (MAH) approach based on Regularized
Kernel Nonnegative Matrix Factorization (RKNMF),
|

24 | JIPZ-02 | YouTube Video Promotion by Cross-Network Association | IEEE-2015
emergence and rapid proliferation of various social media networks have
reshaped the way how video contents are generated, distributed and consumed
in traditional video sharing portals. Nowadays, online videos can be accessed
from far beyond the internal mechanisms of the video sharing portals, such as
internal search and front page highlight. Recent studies have found that
external referrers, such as external search engines and other social media
websites, arise to be new and important portals that lead users to online
videos. In this paper, we introduce a novel cross-network collaborative
application to help drive the online traffic for given videos in traditional
video portal YouTube by leveraging the high propagation efficiency of the
popular Twitter followees.

25 | JMCZ-01 | Modelling and Analysis of Communication Traffic Heterogeneity in Opportunistic Networks | IEEE-2015
In opportunistic networks, direct communication between mobile devices is used
to extend the set of services accessible through cellular or WiFi networks.
Mobility patterns and their impact in such networks have been extensively
studied. In contrast, this has not been the case with communication traffic
patterns, where homogeneous traffic between all nodes is usually assumed.
This assumption is generally not true, as node mobility and social
characteristics can significantly affect the end-to-end traffic demand
between them. To this end, in this paper we explore the joint effect of
traffic patterns and node mobility on the performance of popular forwarding
mechanisms, both analytically and through simulations. Among the different
insights stemming from our analysis, we identify conditions under which
heterogeneity renders the added value of using extra relays more/less useful.
Furthermore, we confirm the intuition that an increasing amount of
heterogeneity closes the performance gap between different forwarding
policies, making end-to-end routing more challenging in some cases.

26 | JMCZ-02 | Towards Information Diffusion in Mobile Social Networks | IEEE-2015
The
emerging of mobile social networks opens opportunities for viral marketing.
However, before fully utilizing mobile social networks as a platform for
viral marketing, many challenges have to be addressed. In this paper, we
address the problem of identifying a small number of individuals through whom
the information can be diffused to the network as soon as possible, referred
to as the diffusion minimization problem. Diffusion minimization under the
probabilistic diffusion model can be formulated as an asymmetric k-center
problem which is NP-hard, and the best known approximation algorithm for the
asymmetric k-center problem has an approximation ratio of log* n and time
complexity O(n^5). Clearly, the performance and the time complexity of the
approximation algorithm are not satisfactory in large-scale mobile social
networks.

27 | JMMZ-01 | Color Imaging (Multimedia) | IEEE-2015
Multimedia data with associated semantics is
omnipresent in today’s social online platforms in the form of keywords, user
comments, and so forth. This article presents a statistical framework
designed to infer knowledge in the imaging domain from the semantic domain.
Note that this is the reverse direction of common computer vision
applications. The framework relates keywords to image characteristics using a
statistical significance test. It scales to millions of images and hundreds
of thousands of keywords. We demonstrate the usefulness of the statistical
framework with three color imaging applications: 1) semantic image
enhancement: re-render an image in order to adapt it to its semantic context;
2) color naming: find the color triplet for a given color name; and 3) color
palettes: find a palette of colors that best represents a given arbitrary
semantic context and that satisfies established harmony constraints.

28 | JPDZ-01 | Secure Distributed Deduplication Systems with Improved Reliability | IEEE-2015
Data
deduplication is a technique for eliminating duplicate copies of data, and
has been widely used in cloud storage to reduce storage space and upload
bandwidth. However, there is only one copy for each file stored in the cloud even
if such a file is owned by a huge number of users. As a result, deduplication
system improves storage utilization while reducing reliability. Furthermore,
the challenge of privacy for sensitive data also arises when they are
outsourced by users to cloud. Aiming to address the above security
challenges, this paper makes the first attempt to formalize the notion of
distributed reliable deduplication system. We propose new distributed
deduplication systems with higher reliability in which the data chunks are
distributed across multiple cloud servers.
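The deduplication mechanism itself can be pictured as a content-addressed chunk
index, assuming SHA-256 fingerprints; the proposed systems additionally spread
chunks across multiple servers with secret sharing for reliability, which the
hash-based placement line below only gestures at.

    import hashlib

    # Content-addressed deduplication sketch: identical chunks are stored
    # once, keyed by their fingerprint; placement by hash is a stand-in for
    # the paper's multi-server distribution of chunks.
    store = {}          # fingerprint -> (server_id, chunk)
    SERVERS = 4

    def put(chunk: bytes):
        fp = hashlib.sha256(chunk).hexdigest()
        if fp not in store:                 # duplicate chunks are not re-stored
            store[fp] = (int(fp, 16) % SERVERS, chunk)
        return fp

    refs = [put(b"block-A"), put(b"block-B"), put(b"block-A")]
    print(len(refs), "references,", len(store), "stored chunks")  # 3 refs, 2 chunks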

29 | JPDZ-02 | Service Operator-Aware Trust Scheme for Resource Matchmaking Across Multiple Clouds | IEEE-2015
This
paper proposes a service operator-aware trust scheme (SOTS) for resource
matchmaking across multiple clouds. Through analyzing the built-in
relationship between the users, the broker, and the service resources, this
paper proposes a middleware framework of trust management that can
effectively reduce user burden and improve system dependability. Based on
multi-dimensional resource service operators, we model the problem of trust
evaluation as a process of multi-attribute decision-making, and develop an
adaptive trust evaluation approach based on information entropy theory. This
adaptive approach can overcome the limitations of traditional trust schemes,
whereby the trusted operators are weighted manually or subjectively. As a
result, using SOTS, the broker can efficiently and accurately prepare the
most trusted resources in advance, and thus provide more dependable resources
to users. Our experiments yield interesting and meaningful observations that
can facilitate the effective utilization of SOTS in a large-scale multi-cloud
environment.

30 | JSCZ-01 | T-broker: A Trust-Aware Service Brokering Scheme for Multiple Cloud Collaborative Services | IEEE-2015
Oriented by requirement of
trust management in multiple cloud environment, this paper presents T-broker,
a trust-aware service brokering scheme for efficiently matching cloud services
(or resources) to satisfy various user requests. First, a trusted third
party-based service brokering architecture is proposed for multiple cloud
environment, in which the T-broker acts as a middleware for cloud
trust management and service matching. Then, T-broker uses a
hybrid and adaptive trust model to compute the overall trust degree of
service resources, in which trust is defined as a fusion evaluation
result from adaptively combining the direct monitored evidence with
the social feedback of the service resources. More importantly, T-broker uses
the maximizing deviation method to compute the direct experience based on
multiple key trusted attributes of service resources, which can overcome the
limitations of traditional trust schemes, in which the trusted attributes are
weighted manually or subjectively. Finally, T-broker uses a lightweight
feedback mechanism, which can effectively reduce networking risk and improve
system efficiency. The experimental results show that, compared with the
existing approaches, our T-broker yields very good results in many
typical cases, and the proposed system is robust to deal with various numbers
of dynamic service behavior from multiple cloud sites.

31 | JSCZ-02 | Collusion-Tolerable Privacy-Preserving Sum and Product Calculation Without Secure Channel | IEEE-2015
Much
research has been conducted to securely outsource multiple parties’ data
aggregation to an untrusted aggregator without disclosing each individual’s
privately owned data, or to enable multiple parties to jointly aggregate
their data while preserving privacy. However, those works either require
secure pair-wise communication channels or suffer from high complexity. In
this paper, we consider how an external aggregator or multiple parties can
learn some algebraic statistics (e.g., sum, product) over participants’
privately owned data while preserving the data privacy. We assume all channels
are subject to eavesdropping attacks, and all the communications throughout
the aggregation are open to others. We first propose several protocols that
successfully guarantee data privacy under semi-honest model, and then present
advanced protocols which tolerate up to k passive adversaries who do not try
to tamper with the computation. Under this weak assumption, we limit both the
communication and computation complexity of each participant to a small
constant. At the end, we present applications which solve several interesting
problems via our protocols.
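The flavor of such protocols can be shown with a pairwise-masking sum: every pair of
participants shares a random mask that cancels in the aggregate, so the aggregator
learns only the total. This sketch assumes the pairwise randomness is already shared,
whereas the paper derives it without secure channels.

    import random

    def masked_sum(values, modulus=2**31 - 1):
        n = len(values)
        # One shared random mask per unordered pair of participants.
        masks = {(i, j): random.randrange(modulus)
                 for i in range(n) for j in range(i + 1, n)}
        reports = []
        for i, x in enumerate(values):
            m = x
            for j in range(n):
                if i < j:   m = (m + masks[(i, j)]) % modulus  # add as lower index
                elif j < i: m = (m - masks[(j, i)]) % modulus  # subtract as higher
            reports.append(m)   # each report alone looks uniformly random
        return sum(reports) % modulus   # masks cancel pairwise in the total

    print(masked_sum([5, 11, 7]))  # 23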

32 | JSCZ-03 | Control Cloud Data Access Privilege and Anonymity With Fully Anonymous Attribute-Based Encryption | IEEE-2015
Cloud computing is a revolutionary computing
paradigm, which enables flexible, on-demand, and low-cost usage of computing
resources, but the data is outsourced to some cloud servers, and various
privacy concerns emerge from it. Various schemes based on the attribute-based
encryption have been proposed to secure the cloud storage. However, most work
focuses on the data contents privacy and the access control, while less
attention is paid to the privilege control and the identity privacy. In this
paper, we present a semi-anonymous privilege control scheme, AnonyControl, to
address not only the data privacy, but also the user identity privacy in
existing access control schemes. AnonyControl decentralizes the
central authority to limit the identity leakage and thus achieves
semi-anonymity. Besides, it also generalizes the file access control to the
privilege control, by which privileges of all operations on the cloud data
can be managed in a fine-grained manner. Subsequently, we present
AnonyControl-F, which fully prevents the identity leakage and achieves full
anonymity. Our security analysis shows that both AnonyControl and AnonyControl-F are
secure under the decisional bilinear Diffie–Hellman assumption, and our
performance evaluation exhibits the feasibility of our schemes.

33 | JSCZ-04 | Data Lineage in Malicious Environments | IEEE-2015
Intentional
or unintentional leakage of confidential data is undoubtedly one of the most
severe security threats that organizations face in the digital era. The
threat now extends to our personal lives: a plethora of personal information
is available to social networks and smartphone providers and is indirectly
transferred to untrustworthy third party and fourth party applications. In
this work, we present a generic data lineage framework LIME
for data flow across multiple entities that
take two characteristic, principal roles (i.e., owner and consumer). We
define the exact security guarantees required by such a data lineage
mechanism toward identification of a guilty entity, and identify the
simplifying non-repudiation and honesty assumptions. We then develop and
analyze a novel accountable data transfer protocol between two entities
within a malicious environment by building upon oblivious transfer, robust
watermarking, and signature primitives. Finally, we perform an experimental
evaluation to demonstrate the practicality of our protocol and apply our
framework to the important data leakage scenarios of data outsourcing and
social networks. In general, we consider LIME, our lineage framework for data
transfer, to be a key step towards achieving accountability by design.

34 | JSCZ-05 | Enabling Cloud Storage Auditing with Key-Exposure Resistance | IEEE-2015
Cloud
storage auditing is viewed as an important service to verify the integrity of
the data in public cloud. Current auditing protocols are all based on the
assumption that the client’s secret key for auditing is absolutely secure.
However, such an assumption may not always hold, due to the possibly weak
sense of security and/or low security settings at the client. If such a
secret key for auditing is exposed, most of the current auditing protocols
would inevitably become unable to work. In this paper, we focus on this new
aspect of cloud storage auditing. We investigate how to reduce the damage of
the client’s key exposure in cloud storage auditing, and give the first
practical solution for this new problem setting. We formalize the definition
and the security model of auditing protocol with key-exposure resilience and
propose such a protocol. In our design, we employ the binary tree structure
and the pre-order traversal technique to update the secret keys for the
client. We also develop a novel authenticator construction to support the
forward security and the property of blockless verifiability. The security
proof and the performance analysis show that our proposed protocol is secure
and efficient.

35 | JSCZ-06 | Formalization and Verification of Group Behavior Interactions | IEEE-2015
Group behavior interactions, such as
multirobot teamwork and group communications in social networks, are widely
seen in natural, social, and artificial behavior-related applications.
Behavior interactions in a group are often associated with varying coupling
relationships, for instance, conjunction or disjunction. Such coupling
relationships challenge existing behavior representation methods, because
they involve multiple behaviors from different actors, constraints on the
interactions, and behavior evolution. In addition, the quality of behavior
interactions is not checked through verification techniques. In this paper,
we propose an ontology-based behavior modeling and checking system (OntoB for
short) to explicitly represent and verify complex behavior relationships,
aggregations, and constraints. The OntoB system provides both a visual
behavior model and an abstract behavior tuple to capture behavioral elements,
as well as building blocks. It formalizes various intra-coupled interactions
(behaviors conducted by the same actor) via transition systems (TSs), and
inter-coupled behavior aggregations (behaviors conducted by different actors)
from temporal, inferential, and party-based perspectives.

36 | JSCZ-07 | Group Key Agreement with Local Connectivity | IEEE-2015
In
this paper, we study a group key agreement problem where a user is only aware
of his neighbors while the connectivity graph is arbitrary. In our problem,
there is no centralized initialization for users. A group key agreement with
these features is very suitable for social networks. Under our setting, we
construct two efficient protocols with passive security. We obtain lower
bounds on the round complexity for this type of protocol, which demonstrates
that our constructions are round efficient. Finally, we construct an actively
secure protocol from a passively secure one.

37 | JSCZ-08 | Privacy-Preserving Public Auditing for Regenerating-Code-Based Cloud Storage | IEEE-2015
To
protect outsourced data in cloud storage against corruptions, adding fault
tolerance to cloud storage together with data integrity checking and failure
reparation becomes critical. Recently, regenerating codes have gained
popularity due to their lower repair bandwidth while providing fault tolerance.
Existing remote checking methods for regenerating-coded data only provide
private auditing, requiring data owners to always stay online and handle
auditing, as well as repairing, which is sometimes impractical. In this
paper, we propose a public auditing scheme for the regenerating-code-based
cloud storage. To solve the regeneration problem of failed authenticators in the
absence of data owners, we introduce a proxy, which is privileged to
regenerate the authenticators, into the traditional public auditing system
model.

38 | JSEZ-01 | The Impact of View Histories on Edit Recommendations | IEEE-2015
Recommendation
systems are intended to increase developer productivity by recommending files
to edit. These systems mine association rules in software revision histories.
However, mining coarse-grained rules using only edit histories produces
recommendations with low accuracy, and can only produce recommendations after
a developer edits a file. In this work, we explore the use of fine-grained
association rules, based on the insight that view histories help characterize
the contexts of files to edit. To leverage this additional context and
fine-grained association rules, we have developed MI, a recommendation
system extending ROSE, an existing edit-based recommendation system.
We then conducted a comparative simulation of ROSE and MI using the
interaction histories stored in the Eclipse Bugzilla system. The simulation
demonstrates that MI predicts the files to edit with significantly higher
recommendation accuracy than ROSE (about 63% over 35%), and makes
recommendations earlier, often before developers begin editing. Our results
clearly demonstrate the value of considering both views and edits in systems
to recommend files to edit, and results in more accurate, earlier, and more
flexible recommendations.

===============================================================================================
NS-2
===============================================================================================
39 | NSZ-01 | A Distributed Fault-Tolerant Topology Control Algorithm for Heterogeneous Wireless Sensor Networks | IEEE-2015
This paper introduces a distributed fault-tolerant topology
control algorithm, called the Disjoint Path Vector (DPV), for heterogeneous
wireless sensor networks composed of a large number of sensor nodes with
limited energy and computing capability and several supernodes with unlimited
energy resources. The DPV algorithm addresses the k-degree Anycast Topology
Control problem where the main objective is to assign each sensor’s
transmission range such that each has at least k-vertex-disjoint paths to
supernodes and the total power consumption is minimum. The resulting
topologies are tolerant to k - 1 node failures in the worst case. We prove
the correctness of our approach by showing that topologies generated by DPV
are guaranteed to satisfy k-vertex supernode connectivity. Our simulations
show that the DPV algorithm achieves up to 4-fold reduction in total
transmission power required in the network and 2-fold reduction in maximum
transmission power required in a node compared to existing solutions.
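The DPV invariant (every sensor keeps at least k vertex-disjoint paths to the
supernode set) can be checked on a candidate topology by attaching a virtual sink to
all supernodes and testing s-t node connectivity. A sketch assuming the networkx
library is installed; the graph is a made-up example.

    import networkx as nx

    # Reduce "k vertex-disjoint paths to any supernode" to s-t connectivity
    # against a virtual sink attached to all supernodes.
    def k_supernode_connected(G, sensors, supernodes, k):
        H = G.copy()
        H.add_node("SINK")
        for s in supernodes:
            H.add_edge(s, "SINK")
        return all(nx.node_connectivity(H, v, "SINK") >= k for v in sensors)

    G = nx.Graph([("s1", "s2"), ("s1", "u1"), ("s2", "u1"),
                  ("s2", "u2"), ("s1", "u2")])
    print(k_supernode_connected(G, ["s1", "s2"], ["u1", "u2"], k=2))  # True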

40 | NSZ-02 | Adaptive Algorithms for Diagnosing Large-Scale Failures in Computer Networks | IEEE-2015
We propose a greedy algorithm, Cluster-MAX-COVERAGE (CMC), to
efficiently diagnose large-scale clustered failures.
We primarily address the challenge of determining faults with
incomplete symptoms. CMC makes novel use of both positive and negative
symptoms to output a hypothesis list with a low number of false negatives and
false positives quickly. CMC requires reports from about half as many nodes
as other existing algorithms to determine failures with 100 percent accuracy.
Moreover, CMC accomplishes this gain significantly faster (sometimes by two
orders of magnitude) than an algorithm that matches its accuracy. When there
are fewer positive and negative symptoms at a reporting node, CMC performs
much better than existing algorithms. We also propose an adaptive algorithm
called Adaptive-MAX-COVERAGE (AMC) that performs efficiently during both
independent and clustered failures. During a series of failures that include
both independent and clustered, AMC results in a reduced number of false
negatives and false positives.
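CMC belongs to the greedy max-coverage family: repeatedly pick the candidate failure
that explains the most still-unexplained symptoms. A sketch with a fabricated
failure-to-symptom map (the real algorithm also exploits negative symptoms and
incomplete reports):

    # Candidate failures and the symptoms each would explain (made up).
    candidates = {"link1": {"s1", "s2"}, "link2": {"s2", "s3", "s4"},
                  "node7": {"s4", "s5"}}

    def greedy_diagnosis(observed):
        hypothesis, uncovered = [], set(observed)
        while uncovered:
            best = max(candidates, key=lambda c: len(candidates[c] & uncovered))
            gain = candidates[best] & uncovered
            if not gain:
                break                      # remaining symptoms unexplained
            hypothesis.append(best)
            uncovered -= gain
        return hypothesis

    print(greedy_diagnosis({"s1", "s2", "s3", "s4", "s5"}))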

41 | NSZ-03 | Delay Optimization and Cross-Layer Design in Multihop Wireless Networks With Network Coding and Successive Interference Cancelation | IEEE-2015
Network coding (NC) and multipacket
reception with successive interference cancelation (SIC) have been shown to
improve the performance of multihop wireless networks (MWNs). However,
previous work emphasized maximization of network throughput without
considering quality of service (QoS) requirements, which may lead to high
packet delays in the network. The objective of this work is minimization of
packet delay in a TDMA-based MWN that is jointly utilizing NC and SIC
techniques for a given traffic demand matrix. We assume conflict-free
scheduling and allow multipath routing. We formulate a cross-layer
optimization that assigns time slots to links in a way that the average
packet delay is minimized. The problem formulation results in a difficult
mixed integer nonlinear program (MINLP) that state-of-the-art software
can only solve for very small networks. For large networks, we develop
a heuristic approach that iteratively determines the optimal solution. We
present numerical results, which show that the average packet delay and
traffic handling capacity of a network improve progressively across the
schemes without NC+SIC, with NC, with SIC, and with NC+SIC. The traffic
capacity with NC+SIC is double that of the scheme without NC+SIC. Thus,
combined utilization of NC and SIC techniques results in significant
performance improvement.

42 | NSZ-04 | Distributed Denial of Service Attacks in Software-Defined Networking with Cloud Computing | IEEE-2015
software-defined networking (SDN) brings numerous benefits by decoupling the
control plane from the data plane, there is a contradictory relationship
between SDN and distributed denial-of-service (DDoS) attacks. On one hand,
the capabilities of SDN make it easy to detect and to react to DDoS attacks.
On the other hand, the separation of the control plane from the data plane of
SDN introduces new attacks. Consequently, SDN itself may be a target of DDoS
attacks. In this paper, we first discuss the new trends and characteristics
of DDoS attacks in cloud computing environments. We show that SDN brings us a
new chance to defeat DDoS attacks in cloud computing environments, and we
summarize good features of SDN in defeating DDoS attacks. Then we review the
studies about launching DDoS attacks on SDN and the methods against DDoS attacks
in SDN. In addition, we discuss a number of challenges that need to be
addressed to mitigate DDoS attacks in SDN with cloud computing. This work
can help understand how to make full use of SDN’s advantages to defeat DDoS
attacks in cloud computing environments and how to prevent SDN itself from
becoming a victim of DDoS attacks.

43 | NSZ-05 | Dynamic OpenFlow-Controlled Optical Packet Switching Network | IEEE-2015
This paper presents and experimentally demonstrates the generalized
architecture of an OpenFlow-controlled optical packet switching (OPS) network.
OpenFlow control is enabled by introducing an OpenFlow/OPS agent into the OPS
network, which realizes the OpenFlow protocol translation and message
exchange between the OpenFlow control plane and the underlying OPS nodes.
With software-defined networking (SDN) and the OpenFlow technique, the complex
control functions of the conventional OPS network can be offloaded into a
centralized and flexible control plane, while improved control and operations
can be provided due to centralized coordination of network resources.
Furthermore, a contention-aware routing/rerouting strategy as well as a fast
network adjustment mechanism is proposed and demonstrated for the first time
as advanced OpenFlow control to route traffic and handle network
dynamics. With centralized SDN/OpenFlow control, the OPS network has the
potential to achieve better resource utilization and enhanced network
resilience at lower cost and less node complexity. Our work will accelerate
the development of both OPS and SDN evolution.

44 | NSZ-06 | Game-Theoretic Topology Control for Opportunistic Localization in Sparse Underwater Sensor Networks | IEEE-2015
In this paper, we propose a localization
scheme named Opportunistic Localization by Topology Control (OLTC),
specifically for sparse Underwater Sensor Networks (UWSNs). In a UWSN, an
unlocalized sensor node finds its location by utilizing the spatio-temporal
relation with the reference nodes. Generally, UWSNs are sparsely deployed
because of the high implementation cost, and unfortunately, the network
topology experiences partitioning due to the effect of passive node mobility.
Consequently, most of the underwater sensor nodes lack the required number of
reference nodes for localization in underwater environments. The existing
literature is deficient in addressing the problem of node localization in the
above-mentioned scenario. In contrast, we argue that even in
such a sparse UWSN context, it is possible to localize the nodes by exploiting
their available opportunities.

45 | NSZ-07 | Improving Physical-Layer Security in Wireless Communications Using Diversity Techniques | IEEE-2015
Due
to the broadcast nature of radio propagation, wireless transmission can be
readily overheard by unauthorized users for interception purposes and is thus
highly vulnerable to eavesdropping attacks. To this end, physical-layer
security is emerging as a promising paradigm to protect the wireless
communications against eavesdropping attacks by exploiting the physical
characteristics of wireless channels. This article is focused on the
investigation of diversity techniques to improve physical-layer security
differently from the conventional artificial noise generation and beamforming
techniques, which typically consume additional power for generating
artificial noise and exhibit high implementation complexity for beamformer
design. We present several diversity approaches to improve wireless
physical-layer security, including multiple-input multiple-output (MIMO),
multiuser diversity, and cooperative diversity. To illustrate the security
improvement through diversity, we propose a case study.

46 | NSZ-08 | Interference-Based Topology Control Algorithm for Delay-Constrained Mobile Ad Hoc Networks | IEEE-2015
As the foundation of routing, topology control should minimize
the interference among nodes, and increase the network capacity. With the
development of mobile ad hoc networks (MANETs), there is a growing
requirement of quality of service (QoS) in terms of delay. In order to meet
the delay requirement, it is important to consider topology control in delay
constrained environment, which is contradictory to the objective of
minimizing interference. In this paper, we focus on the delay-constrained
topology control problem, and take into account delay and interference
jointly.

47 | NSZ-09 | Joint Optimal Data Rate and Power Allocation in Lossy Mobile Ad Hoc Networks with Delay-Constrained Traffic | IEEE-2015
In this paper, we consider lossy mobile ad hoc networks where
the data rate of a given flow becomes lower and lower along its routing path.
One of the main challenges in lossy mobile ad hoc networks is how to achieve
the conflicting goal of increased network utility and reduced power
consumption, without following the instantaneous state of a fading
channel. To address this problem, we propose a cross-layer rate-effective
network utility maximization (RENUM) framework by taking into account the lossy
nature of wireless links and the constraints of rate outage probability and
average delay. In the proposed framework, the utility is associated with the
effective rate received at the destination node of each flow instead of the
injection rate at the source of the flow. We then present a distributed joint
transmission rate, link power and average delay control algorithm, in which
explicit broadcast message passing is required for the power allocation
algorithm.
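
The effective-rate idea can be illustrated in a few lines; the sketch below is
our own toy with made-up delivery probabilities: it discounts the injection
rate by each hop's delivery probability before evaluating a log utility.

public class EffectiveRateUtility {
    public static void main(String[] args) {
        double injectionRate = 2.0;                   // Mb/s at the source
        double[] delivery = {0.9, 0.8, 0.95};         // per-hop delivery probability
        double effective = injectionRate;
        for (double d : delivery) effective *= d;     // rate surviving the lossy path
        System.out.printf("effective rate at destination: %.3f Mb/s%n", effective);
        System.out.printf("utility log(x): injection %.3f vs effective %.3f%n",
                Math.log(injectionRate), Math.log(effective));
    }
}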
|
|
48
|
NSZ-10
|
Max
Contribution: An Online Approximation of Optimal Resource Allocation in Delay
Tolerant Networks
|
IEEE-2015
|
In this paper, a joint optimization of link scheduling,
routing and replication for delay-tolerant networks (DTNs) has been studied.
The optimization problems for resource allocation in DTNs are typically
solved using dynamic programming which requires knowledge of future events
such as meeting schedules and durations. This paper defines a new notion of
approximation to optimality for DTNs, called snapshot approximation, in which
nodes are not clairvoyant, i.e., they do not look ahead into future events, and
thus decisions are made using only currently available knowledge.
Unfortunately, the snapshot approximation still requires solving an NP-hard
problem of maximum weighted independent set (MWIS) and a global knowledge of
who currently owns a copy and what their delivery probabilities are. This
paper proposes an algorithm, Max-Contribution (MC), that approximates the MWIS
problem with a greedy method, together with its distributed online approximation
algorithm, Distributed Max-Contribution (DMC).
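
For intuition, here is a minimal greedy sketch of max-weight independent set
selection, the flavor of approximation MC applies to link scheduling:
repeatedly take the heaviest remaining vertex and discard its neighbors. The
conflict graph, weights, and names are illustrative, not taken from the paper.

import java.util.*;

public class GreedyMwis {
    static List<Integer> greedy(double[] w, boolean[][] conflict) {
        int n = w.length;
        Integer[] order = new Integer[n];
        for (int i = 0; i < n; i++) order[i] = i;
        Arrays.sort(order, (a, b) -> Double.compare(w[b], w[a])); // heaviest first
        boolean[] removed = new boolean[n];
        List<Integer> chosen = new ArrayList<>();
        for (int v : order) {
            if (removed[v]) continue;
            chosen.add(v);                            // schedule this link
            for (int u = 0; u < n; u++)
                if (conflict[v][u]) removed[u] = true; // conflicting links must wait
        }
        return chosen;
    }

    public static void main(String[] args) {
        double[] w = {0.9, 0.5, 0.8, 0.3};            // e.g., per-link delivery contribution
        boolean[][] c = new boolean[4][4];
        c[0][1] = c[1][0] = true;                     // links 0 and 1 interfere
        c[2][3] = c[3][2] = true;                     // links 2 and 3 interfere
        System.out.println("scheduled links: " + greedy(w, c)); // prints [0, 2]
    }
}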
|
|
49
|
NSZ-11
|
Neighbor
Similarity Trust against Sybil Attack in P2P E-Commerce
|
IEEE-2015
|
Peer
to peer (P2P) e-commerce applications exist at the edge of the Internet with
vulnerabilities to passive and active attacks. These attacks have pushed away
potential business firms and individuals whose aim is to get the best benefit
in e-commerce with minimal losses. The attacks occur during interactions
between the trading peers as a transaction takes place. In this paper, we
propose an approach to address the Sybil attack, an active attack in which peers
can assume multiple bogus identities to fake their own. Most existing work,
which concentrates on social networks and trusted certification, has not been
able to prevent Sybil attack peers from doing transactions. Our work exploits
the neighbor similarity trust relationship to address Sybil attack. In our
approach, duplicated Sybil attack peers can be identified as the neighbor
peers become acquainted and hence more trusted to each other. Security and
performance analysis shows that Sybil attack can be minimized by our proposed
neighbor similarity trust.
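
As a toy illustration of neighbor-similarity scoring (the paper's trust model
is richer), the sketch below flags peer pairs whose neighbor sets have an
unusually high Jaccard overlap, the telltale footprint of Sybil identities
spawned by a single attacker. Peers and neighbor sets are invented.

import java.util.*;

public class NeighborSimilarity {
    static double jaccard(Set<String> a, Set<String> b) {
        Set<String> inter = new HashSet<>(a); inter.retainAll(b);
        Set<String> union = new HashSet<>(a); union.addAll(b);
        return union.isEmpty() ? 0 : (double) inter.size() / union.size();
    }

    public static void main(String[] args) {
        Set<String> p1 = Set.of("alice", "bob", "carol", "dave");
        Set<String> p2 = Set.of("alice", "bob", "carol", "dave", "erin"); // likely Sybil pair
        Set<String> p3 = Set.of("frank", "grace");
        System.out.printf("p1~p2 similarity: %.2f%n", jaccard(p1, p2)); // 0.80 -> suspicious
        System.out.printf("p1~p3 similarity: %.2f%n", jaccard(p1, p3)); // 0.00 -> normal
    }
}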
|
|
50
|
NSZ-12
|
Power
Control and Soft Topology Adaptations in Multihop Cellular Networks With
Multi-Point Connectivity
|
IEEE-2015
|
The LTE standards account for the use of relays to enhance coverage
near the cell edge. In a traditional topology, a mobile can either establish
a direct link to the base station (BS) or a link to the relay, but not both.
In this paper, we consider the benefit of multipoint connectivity in allowing
user equipment (UEs) to split their transmit power over simultaneous links to
the BS and the relay, in effect transmitting two parallel flows. We model
decisions by the UEs as to: (i) which point of access to attach to (either a
relay or a relay and the BS or only the BS); and (ii) how to allocate
transmit power over these links so as to maximize their total rate. We show
that this flexibility in the selection of points of access leads to a
substantial increase in network capacity compared with operation in a fixed
network topology. Individual adaptations by UEs, in terms of both point of
access and transmit power, are interdependent due to interference and to the
possibility of overloading the backhaul links.
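
A two-link toy version of the power-split decision can be written down
directly; the sketch below (our illustration, with assumed channel gains)
maximizes log2(1 + gBs*p) + log2(1 + gRelay*(P - p)) using the classic
water-filling split, clipped to [0, P].

public class PowerSplit {
    // Stationary point of the two-term sum rate, clipped to the budget [0, P].
    static double bestSplit(double gBs, double gRelay, double P) {
        double p = (P + 1.0 / gRelay - 1.0 / gBs) / 2.0;
        return Math.max(0, Math.min(P, p));
    }

    static double rate(double gBs, double gRelay, double P, double p) {
        return (Math.log(1 + gBs * p) + Math.log(1 + gRelay * (P - p))) / Math.log(2);
    }

    public static void main(String[] args) {
        double gBs = 0.2, gRelay = 1.0, P = 10;       // assumed gains and power budget
        double p = bestSplit(gBs, gRelay, P);
        System.out.printf("power to BS: %.2f, split rate: %.2f bit/s/Hz%n",
                p, rate(gBs, gRelay, P, p));
        System.out.printf("relay-only rate (fixed topology): %.2f bit/s/Hz%n",
                rate(gBs, gRelay, P, 0));             // the split beats the single link
    }
}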
|
|
51
|
NSZ-13
|
Privacy-Preserving
Detection of Sensitive Data Exposure
|
IEEE-2015
|
Statistics from security firms, research institutions and government
organizations show that the number of data-leak
instances has grown rapidly in recent years. Among various
data-leak cases, human mistakes are one of the main causes of data
loss. Solutions exist that detect inadvertent sensitive data leaks caused
by human mistakes and provide alerts for organizations. A common approach
is to screen content in storage and transmission for exposed sensitive
information. Such an approach usually requires the detection operation to be
conducted in secrecy. However, this secrecy requirement is challenging to
satisfy in practice, as detection servers may be compromised or outsourced.
In this paper, we present a privacy-preserving data-leak detection (DLD)
solution to address this issue, in which a special set of sensitive data digests is
used in detection. The advantage of our method is that it enables the data
owner to safely delegate the detection operation to a semi-honest provider
without revealing the sensitive data to the provider. We describe how
Internet service providers can offer their customers DLD as an add-on service
with strong privacy guarantees. The evaluation results show that our method
can support accurate detection with a very small number of false alarms under
various data-leak scenarios.
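
One way to picture digest-based detection is hashed sliding n-grams; the
sketch below is our simplification (the paper's fuzzy-fingerprint construction
differs, and String.hashCode stands in for a real digest): the owner releases
only shingle hashes, and the provider flags traffic whose shingles collide
with them, without ever seeing the sensitive text itself.

import java.util.*;

public class DigestLeakDetection {
    static final int N = 8;                           // shingle (n-gram) length

    // Hashes of all sliding n-grams of the sensitive text; only these leave the owner.
    static Set<Integer> digests(String s) {
        Set<Integer> set = new HashSet<>();
        for (int i = 0; i + N <= s.length(); i++)
            set.add(s.substring(i, i + N).hashCode()); // stand-in for a real digest
        return set;
    }

    public static void main(String[] args) {
        Set<Integer> sensitive = digests("SSN 078-05-1120 belongs to J. Doe");
        String traffic = "forwarding record: SSN 078-05-1120 as requested";
        int hits = 0, total = 0;
        for (int i = 0; i + N <= traffic.length(); i++, total++)
            if (sensitive.contains(traffic.substring(i, i + N).hashCode())) hits++;
        System.out.printf("matched %d of %d shingles -> %s%n", hits, total,
                hits > 3 ? "ALERT" : "ok");
    }
}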
|
|
52
|
NSZ-14
|
Security-Aware
Relaying Scheme for Cooperative Networks With Untrusted Relay Nodes
|
IEEE-2015
|
This paper studies the problem of secure transmission in dual-hop
cooperative networks with untrusted relays, where each relay acts as both a
potential helper and an eavesdropper. A security-aware relaying scheme is
proposed, which employs the alternate jamming and secrecy-enhanced relay
selection to prevent the confidential message from being eavesdropped by the
untrusted relays. To evaluate the performance of the proposed strategies, we
derive the lower bound of the achievable ergodic secrecy rate (ESR), and
conduct the asymptotic analysis to examine how the ESR scales as the number
of relays increases.
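
The quantity being bounded can be estimated numerically; the Monte Carlo
sketch below (ours, not the paper's derivation) averages
max(0, log2(1 + snrD) - log2(1 + snrE)) over Rayleigh fading, where the
untrusted relay plays the eavesdropper and the mean SNRs are assumptions.

import java.util.Random;

public class ErgodicSecrecyRate {
    public static void main(String[] args) {
        Random rng = new Random(7);
        double avgSnrD = 15.0, avgSnrE = 3.0;         // mean SNR: destination vs. relay
        double sum = 0;
        int trials = 1_000_000;
        for (int i = 0; i < trials; i++) {
            // Rayleigh fading: instantaneous SNR is exponentially distributed.
            double snrD = -avgSnrD * Math.log(1 - rng.nextDouble());
            double snrE = -avgSnrE * Math.log(1 - rng.nextDouble());
            sum += Math.max(0, (Math.log(1 + snrD) - Math.log(1 + snrE)) / Math.log(2));
        }
        System.out.printf("estimated ergodic secrecy rate: %.3f bit/s/Hz%n", sum / trials);
    }
}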
|
|
53
|
NSZ-15
|
Self-Organizing Resource Management
Framework in OFDMA Femtocells
|
IEEE-2015
|
Next
generation wireless networks (e.g., WiMAX, LTE) provide higher bandwidth and spectrum
efficiency leveraging smaller (femto) cells with orthogonal frequency
division multiple access (OFDMA). The uncoordinated, dense deployments of
femtocells however, pose several unique challenges relating to interference
and resource management in OFDMA femtocell networks. Towards addressing these
challenges, we propose RADION, a distributed resource management framework
that effectively manages interference across femtocells. RADION’s core
building blocks enable femtocells to opportunistically determine the
available resources in a completely distributed and efficient manner.
Further, RADION’s modular nature paves the way for different resource
management solutions to be incorporated in the framework. We implement RADION
on a real WiMAX femtocell testbed deployed in a typical indoor setting. Two
distributed solutions are enabled through RADION and their performance is
studied to highlight their quick self-organization into efficient resource
allocations.
|
|
54
|
NSZ-16
|
Statistical Dissemination Control in Large
Machine-to-Machine Communication Networks
|
IEEE-2015
|
Cloud-based machine-to-machine (M2M) communications have emerged to
achieve ubiquitous and autonomous data transportation for future daily life
in the cyber-physical world. In light of the need of network
characterizations, we analyze the connected M2M network in the machine swarm
of geometric random graph topology, including degree distribution, network
diameter, and average distance (i.e., hops). Since it avoids the catastrophic
complexity of maintaining end-to-end information, information dissemination
appears to be an effective approach in the machine swarm. To fully understand
practical data transportation, a G/G/1 queuing network model is exploited to
obtain average
end-to-end delay and maximum achievable system throughput. Furthermore, as
real applications may require dependable networking performance across the
swarm, quality of service (QoS) along with large network diameter creates a
new intellectual challenge.
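
One of the quoted characterizations is easy to sanity-check numerically: for n
machines placed uniformly in the unit square with connection radius r, the
expected degree is roughly (n - 1) * pi * r^2, ignoring border effects. The
sketch below is our own check by simulation, not taken from the paper.

import java.util.Random;

public class GeoGraphDegree {
    public static void main(String[] args) {
        int n = 2000;
        double r = 0.05;
        Random rng = new Random(1);
        double[] x = new double[n], y = new double[n];
        for (int i = 0; i < n; i++) { x[i] = rng.nextDouble(); y[i] = rng.nextDouble(); }
        long edges = 0;                               // count pairs within radius r
        for (int i = 0; i < n; i++)
            for (int j = i + 1; j < n; j++) {
                double dx = x[i] - x[j], dy = y[i] - y[j];
                if (dx * dx + dy * dy <= r * r) edges++;
            }
        System.out.printf("avg degree: %.2f (theory ~ %.2f, slightly high due to borders)%n",
                2.0 * edges / n, (n - 1) * Math.PI * r * r);
    }
}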
|
|
55
|
NSZ-17
|
Toward Transparent Coexistence for Multihop Secondary
Cognitive Radio Networks
|
IEEE-2015
|
The dominant spectrum sharing paradigm of today is interference
avoidance, where a secondary network can use the spectrum only when such a
use is not interfering with the primary network. However, with the advances
of physical-layer technologies, the mindset of this paradigm is being
challenged. This paper explores a new paradigm called “transparent
coexistence” for spectrum sharing between primary and secondary nodes in a
multihop network environment. Under this paradigm, the secondary network is
allowed to use the same spectrum simultaneously with the primary network as
long as their activities are “transparent” (or “invisible”) to the primary
network. Such transparency is accomplished through a systematic interference
cancelation (IC) by the secondary nodes without any impact on the primary
network. Although such a paradigm has been studied in the information theory
(IT) and communications (COMM) communities, it is not well understood in the
wireless networking community, particularly for multihop networks.
|
|