IEEE 2010 Project Titles
S.N. |
IEEE 2010 Project Titles |
Domain |
Lang/Year |
1. |
Toward Optimal Network Fault Correction in Externally Managed
Overlay Networks Abstract: We consider an end-to-end approach of
inferring probabilistic data-forwarding failures in an externally managed
overlay network, where overlay nodes are independently operated by various
administrative domains. Our optimization goal is to minimize the expected
cost of correcting (i.e., diagnosing and repairing) all faulty overlay nodes
that cannot properly deliver data. Instead of first checking the most likely
faulty nodes as in conventional fault localization problems, we prove that an
optimal strategy should start with checking one of the candidate nodes,
which are identified based on a potential function that we develop. We
propose several efficient heuristics for inferring the best node to be
checked in large-scale networks. By extensive simulation, we show that we can
infer the best node at least 95% of the time, and that first checking the
candidate nodes rather than the most likely faulty nodes can decrease the
cost of correcting all faulty nodes. |
Parallel & Distributed |
2010/
.Net |
2.
|
Slow Adaptive OFDMA Systems Through Chance Constrained
Programming Abstract: Adaptive
OFDMA has recently been recognized as a promising technique for providing high
spectral efficiency in future broadband wireless systems. The research over
the last decade on adaptive OFDMA systems has focused on adapting the
allocation of radio resources, such as subcarriers and power, to the
instantaneous channel conditions of all users. However, such “fast”
adaptation requires high computational complexity and excessive signaling
overhead. This hinders the deployment of adaptive OFDMA systems worldwide.
This paper proposes a slow adaptive OFDMA scheme, in which the subcarrier allocation
is updated on a much slower timescale than that of the fluctuation of
instantaneous channel conditions. Meanwhile, the data rate requirements of
individual users are accommodated on the fast timescale with high
probability, thereby meeting the requirements except for occasional outages. Such
an objective has a natural chance constrained programming formulation, which
is known to be
intractable. To circumvent this difficulty, we formulate safe tractable constraints
for the problem based on recent advances in chance constrained programming.
We then develop a polynomial-time algorithm for computing an optimal solution
to the reformulated problem. Our results show that the proposed slow
adaptation scheme drastically reduces both computational cost and control
signaling overhead when compared with the conventional fast adaptive OFDMA.
Our work can be viewed as an initial attempt to apply the chance constrained
programming methodology to wireless system designs. Given that most wireless
systems can tolerate an occasional dip in the quality of service, we hope
that the proposed methodology will find further applications in wireless
communications. |
Network |
2010/
.Net |
3. |
Privacy-Conscious Location-Based Queries in Mobile
Environments Abstract: In
location-based services, users with location-aware mobile devices are able to
make queries about their surroundings anywhere and at any time. While this
ubiquitous computing paradigm brings great convenience for information
access, it also raises concerns over potential intrusion into user location
privacy. To protect location privacy, one typical approach is to cloak user locations
into spatial regions based on user-specified privacy requirements, and to
transform location-based queries into region-based queries. In this paper, we
identify and address three new issues concerning this location cloaking
approach. First, we study the representation of cloaking regions and show
that a circular region generally leads to a small result size for region-based
queries. Second, we develop a mobility-aware location cloaking technique to
resist trace analysis attacks. Two cloaking algorithms, namely MaxAccu
Cloak and MinComm Cloak, are designed based on different
performance objectives. Finally, we develop an efficient polynomial algorithm
for evaluating circular-region-based kNN queries. Two query processing modes, namely bulk and progressive,
are presented to return query results either all at once or in an incremental
manner. Experimental results show that our proposed mobility-aware cloaking
algorithms significantly improve the quality of location cloaking in terms of
an entropy measure without compromising much on query latency or
communication cost. Moreover, the progressive query processing mode achieves
a shorter response time than the bulk mode by
parallelizing the query evaluation and result transmission. |
Parallel & Distributed |
2010/Java |
4. |
On the Performance of Content Delivery under Competition in a Stochastic
Unstructured Peer-to-Peer Network Abstract: In this paper, we investigate the
impact of the interaction and competition among peers on downloading
performance under stochastic, heterogeneous, unstructured P2P settings,
thereby greatly extending the existing results on stochastic P2P networks
made only under a single downloading peer in the network. To analyze the
average download time in a P2P network with multiple competing downloading
peers, we first introduce the notion of system utilization tailored to a P2P
network. We investigate the relationship between the average download time,
system utilization and the level of competition among downloading peers in a
stochastic P2P network. We then derive an achievable lower bound on the
average download time and propose algorithms to give the peers the minimum
average download time. Our result can significantly improve the download performance
compared to earlier results in the literature. Our results also provide a
theoretical explanation for the inconsistency of performance improvement by
using parallel connections (parallel connections sometimes do not outperform
a single connection) observed in some measurement studies. |
Parallel & Distributed |
2010/Java |
5. |
Multicast Multi-path Power Efficient Routing in Mobile Ad Hoc
Networks Abstract: This
paper presents a measurement-based routing algorithm to load-balance
intradomain traffic along multiple paths for multiple multicast
sources. Multiple paths are established using application-layer overlaying.
The proposed algorithm is able to converge under different network models,
where each model reflects a different set of assumptions about the
multicasting capabilities of the network. The algorithm is derived from
simultaneous perturbation stochastic approximation and relies only on noisy
estimates from measurements. Simulation results are presented to demonstrate
the additional benefits obtained by incrementally increasing the multicasting
capabilities. The main application of mobile ad hoc network is in emergency
rescue operations and battlefields. This paper addresses the problem of power
awareness routing to increase lifetime of overall network. Since nodes in
mobile ad hoc network can move randomly, the topology may change arbitrarily
and frequently at unpredictable times. Transmission and reception parameters
may also impact the topology. Therefore it is very difficult to find and
maintain an optimal power-aware route. In this work, a scheme has been
proposed to maximize the network lifetime and minimize the power consumption
during source-to-destination route establishment. The proposed work aims
to provide efficient power-aware routing considering both real-time and
non-real-time data transfer. |
Network
Security |
2010/
.Net |
6. |
Layered Approach Using Conditional Random Fields for Intrusion
Detection Abstract: Intrusion detection faces a number of
challenges; an intrusion detection system must reliably detect malicious
activities in a network and must perform efficiently to cope with the large
amount of network traffic. In this paper, we address these two issues of
Accuracy and Efficiency using Conditional Random Fields and Layered Approach.
We demonstrate that high attack detection accuracy can be achieved by using
Conditional Random Fields and high efficiency by implementing the Layered
Approach. Experimental results on the benchmark KDD ’99 intrusion data set
show that our proposed system based on Layered Conditional Random Fields
outperforms other well-known methods such as the decision trees and the naive
Bayes. The improvement in attack detection accuracy is very high,
particularly, for the U2R attacks (34.8 percent improvement) and the R2L
attacks (34.5 percent improvement). Statistical tests also demonstrate higher
confidence in detection accuracy for our method. Finally, we show that our system is robust and is able to handle
noisy data without compromising performance. |
Secure
Computing |
2010/
Java |
7. |
IRM Integrated File Replication and Consistency Maintenance in
P2P Systems Abstract: In peer-to-peer file sharing systems,
file replication and consistency maintenance are widely used techniques for
high system performance. Despite significant interdependencies between them,
these two issues are typically addressed separately. Most file replication
methods rigidly specify replica nodes, leading to low replica utilization,
unnecessary replicas and hence extra consistency maintenance overhead. Most
consistency maintenance methods propagate update messages based on message
spreading or a structure without considering file replication dynamism,
leading to inefficient file update and hence high possibility of outdated
file response. This paper presents an Integrated file Replication and
consistency Maintenance mechanism (IRM) that integrates the two techniques in
a systematic and harmonized manner. It achieves high efficiency in file
replication and consistency maintenance at a significantly lower cost. Instead
of passively accepting replicas and updates, each node determines file
replication and update polling by dynamically adapting to time-varying file
query and update rates, which avoids unnecessary file replications and
updates. Simulation results demonstrate the effectiveness of IRM in
comparison with other approaches. It dramatically reduces overhead and yields
significant improvements on the efficiency of both file replication and
consistency maintenance approaches. |
Parallel & Distributed |
2010/Java |
8. |
Fault-tolerant Mobile Agent-based Monitoring Mechanism for
Highly Dynamic Distributed Networks Abstract: Owing to the asynchronous and dynamic nature of
mobile agents, a number of mobile agent-based monitoring mechanisms
have been actively developed to monitor large-scale and dynamic distributed
networked systems adaptively and efficiently. Among them, some mechanisms
attempt to adapt to dynamic changes in various aspects such as network
traffic patterns, resource addition and deletion, network topology and so on.
However, failures of some domain managers are very critical to providing
correct, real-time and efficient monitoring functionality in a large-scale
mobile agent-based distributed monitoring system. In this paper, we present a
novel fault tolerance mechanism to have the following advantageous features
appropriate for large-scale and dynamic hierarchical mobile agent-based
monitoring organizations. It supports fast failure detection functionality
with low failure-free overhead by each domain manager transmitting heart-beat
messages to its immediate higher-level manager. Also, it minimizes the number
of non-faulty monitoring managers affected by failures of domain managers.
Moreover, it allows consistent failure detection actions to be performed
continuously in case of agent creation, migration and termination, and is
able to execute consistent takeover actions even in concurrent failures of
domain managers. |
Parallel & Distributed |
2010/Java |
9. |
Engineering Wireless Mesh Networks Joint Scheduling, Routing,
Power Control, and Rate Adaptation Abstract: We present a
number of significant engineering insights on what makes a good configuration
for medium- to large size wireless mesh networks (WMNs) when the objective
function is to maximize the minimum throughput among all flows. For this, we
first develop efficient and exact computational tools using column generation
with greedy pricing that allow us to compute exact solutions for networks
significantly larger than what has been possible so far. We also develop very
fast approximations that compute nearly optimal solutions for even larger
cases. Finally, we adapt our tools to the case of proportional fairness and
show that the engineering insights are very similar. |
Network |
2010/Java |
10. |
Congestion Control of Transmission Control Protocol Based on
Bandwidth Estimation Abstract: This paper presents a framework for
TCP congestion control, called “Bandwidth based TCP”, which differs from most
TCP algorithms by using the bandwidth estimation as the congestion measure to
control the window size increment. It tries to predict the equilibrium point
of the window size and then makes the congestion window approach this point
within one round-trip time. First, an overview of TCP and AQM is introduced. Then,
the stability of the mechanisms is also investigated via linearization.
Finally, through the simulations, the performance of the proposed scheme is
shown to be better than TCP-Vegas under homogeneous and heterogeneous
environments. |
Network |
2010/Java |
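The abstract above describes driving the congestion window toward a predicted equilibrium point derived from a bandwidth estimate. Below is a minimal Java sketch of that idea; the equilibrium estimate (bandwidth-delay product in segments) and the smoothing gain are illustrative assumptions, not the paper's exact "Bandwidth based TCP" rules.

/**
 * Illustrative sketch of a bandwidth-based congestion window update.
 * The equilibrium estimate and the smoothing gain are assumptions for
 * illustration, not the exact algorithm of the paper above.
 */
public final class BandwidthBasedCwnd {
    private double cwndSegments = 10.0;     // current congestion window, in segments
    private static final double GAIN = 0.5; // fraction of the gap closed per RTT (assumed)

    /** Called once per round-trip time with fresh measurements. */
    public void onRttTick(double estimatedBandwidthBps, double rttSeconds, double segmentBytes) {
        // Estimated equilibrium window = bandwidth-delay product expressed in segments.
        double equilibrium = (estimatedBandwidthBps * rttSeconds) / (8.0 * segmentBytes);
        // Move the window a fraction of the way toward the equilibrium point.
        cwndSegments += GAIN * (equilibrium - cwndSegments);
        cwndSegments = Math.max(1.0, cwndSegments); // never drop below one segment
    }

    public double cwnd() { return cwndSegments; }
}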
11. |
Anonymous Query Processing in Road Networks Abstract: The increasing availability of
location-aware mobile devices has given rise to a flurry of location-based
services (LBS). Due to the nature of spatial queries, an LBS needs the user position
in order to process her requests. On the other hand, revealing exact user
locations to a (potentially untrusted) LBS may pinpoint
their identities and breach their privacy. To address this issue, spatial
anonymity techniques obfuscate user locations, forwarding to the LBS a
sufficiently large region instead. Existing methods explicitly target
processing in the Euclidean space, and do not apply when proximity to the
users is defined according to network distance (e.g., driving time through
the roads of a city). In this paper, we propose a framework for anonymous
query processing in road networks. We design location obfuscation techniques
that (i) provide anonymous LBS access to the users, and (ii) allow efficient
query processing at the LBS side. Our techniques exploit existing network
database infrastructure, requiring no specialized storage schemes or
functionalities. We experimentally compare alternative designs in real road
networks and demonstrate the effectiveness of our techniques. |
Data
Engineering |
2010/
.Net |
12. |
Agent Based Efficient Anomaly Intrusion Detection System in
Ad Hoc Networks Abstract: Networks are protected using many firewalls and
encryption software packages, but many of them are not sufficient or effective. Most
intrusion detection systems for mobile ad hoc networks focus on either
routing protocols or their efficiency, but fail to address the security
issues. Some of the nodes may be selfish, for example, by not forwarding the
packets to the destination, thereby saving the battery power. Some others may
act malicious by launching security attacks like denial of service or hack
the information. The ultimate goal of the security solutions for wireless
networks is to provide security services, such as authentication,
confidentiality, integrity, anonymity, and availability, to mobile users.
This paper incorporates agents and data mining techniques to prevent anomaly
intrusion in mobile ad hoc networks. A home agent present in each system
collects the data from its own system and uses data mining techniques to
observe the local anomalies. The mobile agents monitor the neighboring
nodes and collect the information from neighboring home agents to determine
the correlation among the observed anomalous patterns before the data is
sent. This system was able to stop all of the successful attacks in an ad hoc
network and reduce false positives. |
Secure
Computing |
2010/Java |
13. |
A Distributed Protocol to Serve Dynamic Groups for Peer-to-Peer
Streaming Abstract: Peer-to-peer (P2P) streaming has been
widely deployed over the Internet. A streaming system usually has multiple
channels, and peers may form multiple groups for content distribution. In
this paper, we propose a distributed overlay framework (called SMesh) for
dynamic groups where users may frequently hop from one group to another while
the total pool of users remain stable. SMesh first builds a relatively stable
mesh consisting of all hosts for control messaging. The mesh supports dynamic
host joining and leaving, and will guide the construction of delivery trees.
Using the Delaunay Triangulation (DT) protocol as an example, we show how to construct an efficient mesh
with low maintenance cost. We further study various tree construction
mechanisms based on the mesh, including embedded, bypass, and intermediate
trees. Through simulations on Internet-like topologies, we show that SMesh
achieves low delay and low link stress. |
Parallel & Distributed |
2010/Java |
14. |
A Distributed CSMA Algorithm for Throughput and Utility
Maximization in Wireless Networks Abstract: In multihop
wireless networks, designing distributed scheduling algorithms to achieve the
maximal throughput is a challenging problem because of the complex
interference constraints among different links. Traditional maximal-weight
scheduling (MWS), although throughput-optimal, is difficult to implement in
distributed networks. On the other hand, a distributed greedy protocol
similar to IEEE 802.11 does not guarantee the maximal throughput. In this
paper, we introduce an adaptive carrier sense multiple access (CSMA)
scheduling algorithm that can achieve the maximal throughput in a distributed manner.
Some of the major advantages of the algorithm
are that it applies to a very general interference model and that it is
simple, distributed, and asynchronous. Furthermore, the algorithm is combined
with congestion control to achieve the optimal utility and fairness of
competing flows. Simulations verify the effectiveness of the algorithm. Also,
the adaptive CSMA scheduling is a modular MAC-layer algorithm that can be combined with
various protocols in the transport layer and network layer. Finally, the
paper explores some implementation issues in the setting of 802.11 networks. |
Network |
2010/
.Net |
15.
|
Secure Data Collection in Wireless Sensor Networks Using
Randomized Dispersive Routes Abstract: Compromised-node
and denial-of-service are two key attacks in wireless sensor networks (WSNs).
In this paper, we study routing mechanisms that circumvent (bypass) black
holes formed by these attacks. We argue that existing multi-path routing
approaches are vulnerable to such attacks, mainly due to their deterministic
nature. So once an adversary acquires the routing algorithm, it can compute
the same routes known to the source, and hence endanger all information sent
over these routes. In this paper, we develop mechanisms that generate
randomized multipath routes. Under our design, the routes taken by the
“shares” of different packets change over time. So even if the routing
algorithm becomes known to the adversary, the adversary still cannot pinpoint
the routes traversed by each packet. Besides randomness, the routes generated
by our mechanisms are also highly dispersive and energy-efficient, making
them quite capable of bypassing black holes at low energy cost. Extensive
simulations are conducted to verify the validity of our mechanisms. |
Mobile
Computing |
2010/
.Net |
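The entry above argues that routes taken by the "shares" of different packets should change over time so that an adversary cannot pinpoint them. A minimal Java sketch of that randomized-share forwarding idea follows; the neighbor model and share count are illustrative assumptions, not the paper's specific route-generation mechanisms.

import java.util.List;
import java.util.Random;

/**
 * Minimal sketch of randomized multipath forwarding of packet "shares":
 * each share independently picks a random next hop, so the set of routes
 * changes from packet to packet. Illustrative only.
 */
public final class RandomizedShareForwarder {
    private final Random rng = new Random();

    /** Pick an independent random next hop for each of the k shares. */
    public int[] pickNextHops(List<Integer> neighborIds, int shareCount) {
        int[] nextHops = new int[shareCount];
        for (int i = 0; i < shareCount; i++) {
            nextHops[i] = neighborIds.get(rng.nextInt(neighborIds.size()));
        }
        return nextHops;
    }
}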
16. |
Throughput Analysis for a Contention-Based Dynamic Spectrum
Sharing Model Abstract: In this
paper we present throughput analysis for a contention-based dynamic spectrum
sharing model. We consider two scenarios of allocating channels to primary
users, fixed allocation and random allocation. In fixed allocation, the
number of primary users allocated to a channel is fixed all the time, but the
number of users in different channels may be different. In random allocation,
each primary user dynamically and randomly selects a channel in each time
slot. We assume that the spectrum band of primary users is divided into
multiple channels and the time is slotted. Primary users allocated to a
specific channel compete to access this channel in each time slot. Secondary
users are able to dynamically detect the idle channels in each time slot, and
compete to access these channels. We develop analytical models for the
throughput of primary users and secondary users in both scenarios and examine
the impact of the number of secondary users on the throughput of the system.
For a given number of primary users, channels and traffic generation
probability, we aim to find the number of secondary users to maximize the
total throughput of both primary users and secondary users. Our solutions
match closely with the numerical results. |
Network |
2010/
.Net |
17. |
On the Throughput Performance of Multirate IEEE 802.11
Networks with Variable-Loaded Stations: Analysis, Modeling, and a Novel
Proportional Fairness Criterion Abstract: This paper focuses
on multirate IEEE 802.11 Wireless LANs employing the mandatory Distributed
Coordination Function (DCF) option. Its aim is threefold. First, starting from
the multi-dimensional Markovian state transition model proposed by Malone et al.
for characterizing the behavior of the IEEE 802.11 protocol at the Medium
Access Control layer, it presents an extension accounting for packet
transmission failures due to channel errors. Second, it establishes the
conditions under which a network constituted by N stations, each station
transmitting with its own bit rate, R_d^(s), and packet rate, λ_s, can be
assumed loaded. Finally, it proposes a
modified Proportional Fairness (PF) criterion, suitable for mitigating the rate
anomaly problem of multirate loaded IEEE 802.11 Wireless LANs, employing
the mandatory DCF option. Compared to the widely adopted assumption of
saturated network, the proposed fairness criterion can be applied to general
loaded networks. The throughput allocation resulting from the proposed
algorithm is able to greatly increase the aggregate throughput of the DCF,
while ensuring fairness levels among the stations of the same order as the
ones guaranteed by the classical PF criterion. Simulation results are
presented for some sample scenarios, confirming the effectiveness of the
proposed criterion for optimized throughput allocation. |
Mobile
Computing |
2010/
.Net |
18. |
Localized Multicast: Efficient and Distributed Replica
Detection in Large-Scale Sensor Networks Abstract: Due to the poor physical protection of
sensor nodes, it is generally assumed that an adversary can capture and
compromise a small number of sensors in the network. In a node replication
attack, an adversary can take advantage of the credentials of a compromised
node to surreptitiously introduce replicas of that node into the network.
Without an effective and efficient detection mechanism, these replicas can be
used to launch a variety of attacks that undermine many sensor applications
and protocols. In this paper, we present a novel distributed approach called
Localized Multicast for detecting node replication attacks. The efficiency
and security of our approach are evaluated both theoretically and via
simulation. Our results show that, compared to previous distributed
approaches proposed by Parno et al., Localized Multicast is more efficient in
terms of communication and memory costs in large-scale sensor networks, and
at the same time achieves a higher probability of detecting node replicas. |
Mobile
Computing |
2010/
.Net |
19. |
VEBEK: Virtual Energy-Based Encryption and Keying for Wireless
Sensor Networks Abstract: Designing cost-efficient, secure
network protocols for Wireless Sensor Networks (WSNs) is a challenging
problem because sensors are resource-limited wireless devices. Since the
communication cost is the most dominant factor in a sensor’s energy
consumption, we introduce an energy-efficient Virtual Energy-Based Encryption
and Keying (VEBEK) scheme for WSNs that significantly reduces the number of
transmissions needed for rekeying to avoid stale keys. In addition to the
goal of saving energy, minimal transmission is imperative for some military
applications of WSNs where an adversary could be monitoring the wireless
spectrum. VEBEK is a secure communication framework where sensed data is
encoded using a scheme based on a permutation code generated via the RC4
encryption mechanism. The key to the RC4 encryption mechanism dynamically
changes as a function of the residual virtual energy of the sensor. Thus, a
one-time dynamic key is employed for one packet only and different keys are
used for the successive packets of the stream. The intermediate nodes along
the path to the sink are able to verify the authenticity and integrity of the
incoming packets using a predicted value of the key generated by the sender’s
virtual energy, thus eliminating the need for specific rekeying messages. VEBEK
is able to efficiently detect and filter false data injected into the network
by malicious outsiders. The VEBEK framework consists of two operational modes
(VEBEK-I and VEBEK-II), each of which is optimal for different scenarios. In
VEBEK-I, each node monitors its one-hop neighbors, whereas VEBEK-II
statistically monitors downstream nodes. We have evaluated VEBEK’s feasibility
and performance analytically and through simulations. Our results show that
VEBEK, without incurring transmission overhead (increasing packet size or
sending control messages for rekeying), is able to eliminate malicious data
from the network in an energy efficient manner. We also show that our
framework performs better than other comparable schemes in the literature
with an overall 60-100 percent improvement in energy savings without the
assumption of a reliable medium access control layer. |
Mobile
Computing |
2010/
.Net |
20. |
On Wireless Scheduling Algorithms for Minimizing the
Queue-Overflow Probability Abstract: In this
paper, we are interested in wireless scheduling algorithms for the downlink
of a single cell that can minimize the queue-overflow probability.
Specifically, in a large-deviation setting, we are interested in algorithms
that maximize the asymptotic decay rate of the queue-overflow probability, as
the queue-overflow threshold approaches infinity. We first derive an upper bound
on the decay rate of the queue-overflow probability over all scheduling
policies. We then focus on a class of scheduling algorithms collectively
referred to as the “α-algorithms.” For a given α, the α-algorithm picks the user
for service at each time slot that has the largest product of the transmission
rate multiplied by the backlog raised to the power α. We show that, when the
overflow metric is appropriately modified, the minimum-cost-to-overflow under
the α-algorithm can be achieved by a simple linear path, and it can be written
as the solution of a vector-optimization problem. Using this structural
property, we then show that, as α approaches infinity, the α-algorithms
asymptotically achieve the largest decay rate of the queue-overflow
probability. Finally, this result enables us to design scheduling algorithms
that are both close to optimal in terms of the
asymptotic decay rate of the overflow probability and empirically shown to
maintain small queue-overflow probabilities over queue-length ranges of
practical interest. |
Network |
2010/
.Net |
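The scheduling rule in the entry above is concrete: serve the user with the largest product of transmission rate and backlog raised to the power α. A minimal Java sketch of that rule follows; the array-based data layout is an assumption for illustration.

/**
 * Sketch of the α-scheduling rule described above: among all users, serve
 * the one with the largest (transmission rate) x (queue backlog)^alpha.
 */
public final class AlphaScheduler {
    /** Returns the index of the user to serve in this time slot. */
    public static int pickUser(double[] rates, double[] backlogs, double alpha) {
        int best = 0;
        double bestWeight = -1.0;
        for (int i = 0; i < rates.length; i++) {
            double weight = rates[i] * Math.pow(backlogs[i], alpha);
            if (weight > bestWeight) {
                bestWeight = weight;
                best = i;
            }
        }
        return best;
    }
}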
21. |
Uncertainty Modeling and Reduction in MANETs Abstract: Evaluating and quantifying trust
stimulates collaboration in mobile ad hoc networks (MANETs). Many existing reputation
systems sharply divide the trust value into right or wrong, thus ignoring
another core dimension of trust: uncertainty. As uncertainty deeply impacts a
node’s anticipation of others’ behavior and decisions during interaction, we
include uncertainty in the reputation system. Specifically, we define a new
uncertainty model to directly reflect a node’s confidence in the sufficiency
of its past experience, and study how the collection of trust information
affects uncertainty in nodes’ opinions. After defining a way to reveal and
compute the uncertainty in trust opinions, we exploit mobility, one of the
important characteristics of MANETs, to efficiently reduce uncertainty and to
speed up trust convergence. Two different categories of mobility-assisted
uncertainty reduction schemes are provided: the proactive schemes exploit
mobile nodes to collect and broadcast trust information to achieve trust
convergence; the reactive schemes provide the mobile nodes methods to get
authenticated and bring their reputation in the original region to the
destination region. Both of the schemes offer a controllable trade-off
between delay, cost, and uncertainty. Extensive analytical and simulation
results are presented to support our uncertainty model and mobility-assisted
reduction schemes. |
Mobile
Computing |
2010/
.Net |
22. |
Inference From Aging Information Abstract: For many
learning tasks the duration of the data collection can be greater than the time
scale for changes of the underlying data distribution. The question we ask is
how to include the information that data are aging. Ad hoc methods to
achieve this include the use of validity windows that prevent the learning
machine from making inferences based on old data. This introduces the problem
of how to define the size of validity windows. In this brief, a new adaptive
Bayesian inspired algorithm is presented for learning drifting concepts. It
uses the analogy of validity windows in an adaptive Bayesian way to
incorporate changes in the data distribution over time. We apply a
theoretical approach based on information geometry to the classification
problem and measure its performance in simulations. The uncertainty about the
appropriate size of the memory windows is dealt with in a Bayesian manner by
integrating over the distribution of the adaptive window size. Thus, the
posterior distribution of the weights may develop algebraic tails. The
learning algorithm results from tracking the mean and variance of the
posterior distribution of the weights. It was found that the algebraic tails
of this posterior distribution give the learning algorithm the ability to
cope with an evolving environment by permitting the escape from local traps. |
Neural
Network |
2010/
.Net |
24. |
Bayesian classifier programmed in SQL Abstract: The Bayesian classifier is a
fundamental classification technique. In this work, we focus on programming
Bayesian classifiers in SQL. We introduce two classifiers: Naive Bayes and a
classifier based on class decomposition using K-means clustering. We consider
two complementary tasks: model computation and scoring a data set. We study
several layouts for tables and several indexing alternatives. We analyze how
to transform equations into efficient SQL queries and introduce several query
optimizations. We conduct experiments with real and synthetic data sets to evaluate
classification accuracy, query optimizations, and scalability. Our Bayesian
classifier is more accurate than Naive Bayes and decision trees. Distance
computation is significantly accelerated with horizontal layout for tables,
denormalization, and pivoting. We also compare Naive Bayes implementations in
SQL and C++: SQL is about four times slower. Our Bayesian classifier in SQL
achieves high classification accuracy, can efficiently analyze large data
sets, and has linear scalability. |
Data
Mining |
2010/
.Net |
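As context for the entry above, the following Java sketch shows Gaussian Naive Bayes scoring, the baseline classifier the paper expresses in SQL; the per-class priors, means, and variances are assumed to come from the separate model-computation step, and this is not the paper's SQL formulation itself.

/**
 * Sketch of Gaussian Naive Bayes scoring. Model parameters (priors, means,
 * variances per class and feature) are assumed precomputed.
 */
public final class NaiveBayesScorer {
    /** Returns the index of the most probable class for one row x. */
    public static int score(double[] x, double[] priors, double[][] mean, double[][] var) {
        int best = 0;
        double bestLog = Double.NEGATIVE_INFINITY;
        for (int k = 0; k < priors.length; k++) {
            double logP = Math.log(priors[k]);
            for (int d = 0; d < x.length; d++) {
                double diff = x[d] - mean[k][d];
                // Log of the Gaussian likelihood for feature d under class k.
                logP += -0.5 * Math.log(2.0 * Math.PI * var[k][d])
                        - (diff * diff) / (2.0 * var[k][d]);
            }
            if (logP > bestLog) { bestLog = logP; best = k; }
        }
        return best;
    }
}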
25. |
A Stochastic Approach to Image Retrieval Using Relevance
Feedback and Particle Swarm Optimization Abstract: Understanding the subjective meaning of a
visual query, by converting it into numerical parameters that can be
extracted and compared by a computer, is the paramount challenge in the field
of intelligent image retrieval, also referred to as the “semantic gap”
problem. In this paper, an innovative approach is proposed that combines a
relevance feedback (RF) approach with an evolutionary stochastic algorithm,
called particle swarm optimizer (PSO), as a way to grasp user's semantics
through optimized iterative learning. The retrieval uses human interaction to
achieve a twofold goal: 1) to guide the swarm particles in the exploration of
the solution space towards the cluster of relevant images; 2) to dynamically
modify the feature space by appropriately weighting the descriptive features
according to the users' perception of relevance. Extensive simulations showed
that the proposed technique outperforms traditional deterministic RF
approaches of the same class, thanks to its stochastic nature, which allows a
better exploration of complex, nonlinear, and highly-dimensional solution
spaces. |
Image
Processing |
2010/Java |
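The entry above applies a particle swarm optimizer to move candidate feature weights toward regions the user marks as relevant. Below is a minimal Java sketch of the standard PSO velocity and position update on a weight vector; the inertia and acceleration constants are common textbook defaults, not values taken from the paper.

import java.util.Random;

/**
 * Sketch of a standard particle swarm update applied to a vector of
 * feature weights, as in an RF+PSO retrieval loop. Constants are assumed.
 */
public final class PsoFeatureWeights {
    private static final double W = 0.7, C1 = 1.5, C2 = 1.5; // assumed PSO constants
    private final Random rng = new Random();

    /** One PSO step: update velocity and position of a particle in place. */
    public void step(double[] position, double[] velocity,
                     double[] personalBest, double[] globalBest) {
        for (int d = 0; d < position.length; d++) {
            velocity[d] = W * velocity[d]
                    + C1 * rng.nextDouble() * (personalBest[d] - position[d])
                    + C2 * rng.nextDouble() * (globalBest[d] - position[d]);
            position[d] += velocity[d];
        }
    }
}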
26. |
PAM: An Efficient and Privacy-Aware Monitoring Framework for
Continuously Moving Objects Abstract: Efficiency and privacy are two fundamental
issues in moving object monitoring. This paper proposes a privacy-aware
monitoring (PAM) framework that addresses both issues. The framework
distinguishes itself from the existing work by being the first to
holistically address the issues of location updating in terms of monitoring
accuracy, efficiency, and privacy, particularly, when and how mobile clients
should send location updates to the server. Based on the notions of safe
region and most probable result, PAM performs location updates only when they
would likely alter the query results. Furthermore, by designing various
client update strategies, the framework is flexible and able to optimize
accuracy, privacy, or efficiency. We develop efficient query
evaluation/reevaluation and safe region computation algorithms in the
framework. The experimental results show that PAM substantially outperforms
traditional schemes in terms of monitoring accuracy, CPU cost, and
scalability while achieving close-to-optimal communication cost. |
Software
Engineering |
2010/
.Net |
27. |
Predictive Network Anomaly detection and visualization Abstract: Various
approaches have been developed for quantifying and displaying network traffic
information for determining network status and in detecting anomalies. Although
many of these methods are
effective, they rely on the collection of long-term network statistics. Here,
we present an approach that uses short-term observations of network features
and their respective time averaged
entropies. Acute changes are localized in network feature space using
adaptive Wiener filtering and auto-regressive moving average modeling. The
color-enhanced datagram is designed to
allow a network engineer to quickly capture and visually comprehend at a
glance the statistical characteristics of a network anomaly. First, average
entropy for each feature is calculated for every second of observation. Then,
the resultant short-term measurement is subjected to first- and second-order
time averaging statistics. These measurements are the basis of a novel
approach to anomaly estimation based on the well-known Fisher linear
discriminant (FLD). Average port, high port, server ports, and peered ports
are some of the network features used for stochastic clustering and
filtering. We empirically determine that these network features obey
Gaussian-like distributions. The proposed algorithm is tested on real-time
network traffic data from Ohio University’s main Internet connection.
Experimentation has shown that the presented FLD-based scheme is accurate in
identifying anomalies in network feature space, in localizing anomalies in
network traffic flow, and in helping network engineers to prevent potential
hazards. Furthermore, its performance is highly effective in providing a
colorized visualization chart to network analysts in the presence of bursty
network traffic. |
Secure
Computing |
2010/J2EE |
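The entry above computes, for every second of observation, the average entropy of each network feature before time-averaging and FLD-based clustering. A minimal Java sketch of that per-window Shannon entropy computation follows; the count-map layout is an illustrative assumption.

import java.util.Map;

/**
 * Sketch of per-second feature entropy: Shannon entropy of the empirical
 * distribution of a network feature (e.g., destination port) observed
 * during a one-second window.
 */
public final class FeatureEntropy {
    /** Shannon entropy (bits) of value counts collected in one window. */
    public static double entropy(Map<Integer, Integer> valueCounts) {
        double total = 0.0;
        for (int c : valueCounts.values()) total += c;
        double h = 0.0;
        for (int c : valueCounts.values()) {
            if (c == 0) continue;
            double p = c / total;
            h -= p * (Math.log(p) / Math.log(2.0));
        }
        return h;
    }
}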
28. |
Privacy-Preserving Public Auditing for Data Storage
Security in Cloud Computing Abstract: Cloud Computing is the long dreamed vision
of computing as a utility, where users can remotely store their data into the
cloud so as to enjoy the on-demand high quality applications and services
from a shared pool of configurable computing resources. By data outsourcing,
users can be relieved from the burden of local data storage and maintenance.
However, the fact that users no longer have physical possession of the
possibly large size of outsourced data makes the data integrity protection in
Cloud Computing a very challenging and potentially formidable task,
especially for users with constrained computing resources and capabilities.
Thus, enabling public auditability for cloud data storage security is of
critical importance so that users can resort to an external audit party to
check the integrity of outsourced data when needed. To securely introduce an
effective third party auditor (TPA), the following two fundamental
requirements have to be met: 1) TPA should be able to efficiently audit the
cloud data storage without demanding the local copy of data, and introduce no
additional on-line burden to the cloud user; 2) The third party auditing
process should bring in no new vulnerabilities towards user data privacy. In
this paper, we utilize the public key based homomorphic authenticator and uniquely
integrate it with random mask technique to achieve a privacy-preserving
public auditing system for cloud data storage security while keeping all
above requirements in mind. To support efficient handling of multiple
auditing tasks, we further explore the technique of bilinear aggregate
signature to extend our main result into a multi-user setting, where TPA can
perform multiple auditing tasks simultaneously. Extensive security and
performance analysis shows the proposed schemes are provably secure and highly
efficient. |
Cloud
Computing |
2010/
.Net |
29. |
Collaborative Sensing to Improve Information Quality for
Target Tracking in Wireless Sensor Networks Abstract: Due to limited network resources for sensing,
communication and computation, information quality (IQ) in a wireless sensor
network (WSN) depends on the algorithms and protocols for managing such
resources. In this paper, for target tracking application in WSNs consisting
of active sensors (such as ultrasonic sensors) in which normally a sensor
senses the environment actively by emitting energy and measuring the
reflected energy, we present a novel collaborative sensing scheme to improve the
IQ using joint sensing and adaptive sensor scheduling. With multiple sensors
participating in a single sensing operation initiated by an emitting sensor,
joint sensing can increase the sensing region of an individual emitting
sensor and generate multiple sensor measurements simultaneously. By adaptive
sensor scheduling, the emitting sensor for the next time step can be selected
adaptively according to the predicted target location and the detection
probability of the emitting sensor. Extended Kalman filter (EKF) is employed
to estimate the target state (i.e., the target location and velocity) using
sensor measurements and to predict the target location. A Monte Carlo method
is presented to calculate the detection probability of an emitting sensor. It
is demonstrated by simulation experiments that collaborative sensing can
significantly improve the IQ, and hence the tracking accuracy, as compared to individual sensing. |
Network |
2010/
.Net |
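The entry above uses an extended Kalman filter to estimate the target state and predict the next target location for sensor scheduling. The Java sketch below shows only the prediction step under a constant-velocity state model [x, y, vx, vy]; the full EKF measurement update with covariances is omitted, and the state layout is an assumption for illustration.

/**
 * Sketch of the prediction step used in tracking: a constant-velocity
 * state model advanced by dt. This is only the "predict the target
 * location" part of the EKF loop; the update step is not shown.
 */
public final class TargetPredictor {
    /** Predict the next state under a constant-velocity model. */
    public static double[] predict(double[] state, double dt) {
        return new double[] {
            state[0] + state[2] * dt, // x  += vx * dt
            state[1] + state[3] * dt, // y  += vy * dt
            state[2],                 // vx unchanged
            state[3]                  // vy unchanged
        };
    }
}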
30. |
Conditional Shortest Path Routing in Delay Tolerant Networks Abstract: Delay
tolerant networks are characterized by the sporadic connectivity between
their nodes and therefore the lack of stable end-to-end paths from source to
destination. Since the future node connections are mostly unknown in these
networks, opportunistic forwarding is used to deliver messages. However,
making effective forwarding decisions using only the network characteristics
(i.e. average intermeeting time between nodes) extracted from contact history
is a challenging problem. Based on the observations about human mobility
traces and the findings of previous work,
we introduce a new metric called conditional intermeeting
time, which computes the average
intermeeting time between two nodes relative to a meeting with a third node using
only the local knowledge of the past contacts. We then look at the effects of
the proposed metric on the shortest path based routing designed for delay
tolerant networks. We propose Conditional Shortest Path Routing (CSPR)
protocol that routes the messages over conditional shortest paths in which
the cost of links between nodes is defined by conditional intermeeting times
rather than the conventional intermeeting times. Through trace-driven
simulations, we demonstrate that CSPR achieves higher delivery rate and lower
end-to-end delay compared to the shortest path based routing protocols that
use the conventional intermeeting time as the link metric. |
Network |
2010/Java |
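The conditional intermeeting time described in the entry above is the average time between meeting a third node and the next meeting with the destination, computed from local contact history only. A minimal Java sketch follows; the contact-log format (peer id plus timestamp pairs) is an illustrative assumption.

import java.util.ArrayList;
import java.util.List;

/**
 * Sketch of conditional intermeeting time: from a node's time-ordered
 * contact history, average the time from each meeting with node c to the
 * next meeting with node b.
 */
public final class ConditionalIntermeetingTime {
    /** Average time from a contact with c until the next contact with b. */
    public static double estimate(List<int[]> contacts /* {peerId, timeSec} */,
                                  int b, int c) {
        List<Double> samples = new ArrayList<>();
        for (int i = 0; i < contacts.size(); i++) {
            if (contacts.get(i)[0] != c) continue;
            for (int j = i + 1; j < contacts.size(); j++) {
                if (contacts.get(j)[0] == b) {
                    samples.add((double) (contacts.get(j)[1] - contacts.get(i)[1]));
                    break;
                }
            }
        }
        if (samples.isEmpty()) return Double.POSITIVE_INFINITY;
        double sum = 0.0;
        for (double s : samples) sum += s;
        return sum / samples.size();
    }
}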
32. |
Cross-Layer Design in Multihop Wireless Networks Abstract: In this paper, we take a holistic
approach to the protocol architecture design in multihop wireless networks.
Our goal is to integrate various protocol layers into a rigorous framework,
by regarding them as distributed computations over the network to solve some
optimization problem. Different layers carry out distributed computation on
different subsets of the decision variables using local information to
achieve individual optimality. Taken together, these local algorithms (with
respect to different layers) achieve a global optimality. Our current theory
integrates three functions—congestion control, routing and scheduling—in
transport, network and link layers into a coherent framework. These three
functions interact through and are regulated by congestion price so as to
achieve a global optimality, even in a time-varying environment. Within this
context, this model allows us to systematically derive the layering structure
of the various mechanisms of different protocol layers, their interfaces, and
the control information that must cross these interfaces to achieve a certain
performance and robustness. |
Network |
2010/Java |
33. |
Deactivation of Unwelcomed
Deep Web Extraction Services through Random Injection Abstract: Websites serve content both through Web Services
as well as through user-viewable webpages. While the consumers of
web-services are typically ‘machines’, webpages are meant for human users. It
is highly desirable (for reasons of security, revenue, ownership,
availability etc.) for service providers that content that will undergo
further processing be fetched in a prescribed fashion, preferably through a
supplied Web Service. In fact, monetization of partnerships within a
services ecosystem normally means that website data translate into valuable
revenue. Unfortunately, it is quite commonplace for arbitrary developers to
extract or leverage information from websites without asking for permission
and/or negotiating a revenue-sharing agreement. This may translate to
significant lost income for content providers. Even in cases where
website owners are happy to share the data, they may want users to adopt
dedicated Web Service APIs (and associated API-servers) rather than putting a
load on their revenue-generating websites. In this paper, we introduce a
mechanism that disables automated web scraping agents, thus forcing clients
to conform to the provided Web Services. |
Web
Services |
2010/
.Net |
34. |
Distributed Algorithms for Minimum Cost Multicast with Network
Coding in Wireless Networks Abstract: We adopt the
network coding approach to achieve minimum-cost multicast in
interference-limited wireless networks where link capacities are functions of
the signal-to-noise-plus interference ratio (SINR). Since wireless link
capacities can be controlled by varying transmission powers, minimum-cost
multicast must be achieved by jointly optimizing network coding subgraphs
with power control and congestion control schemes. To address this, we design
a set of node-based distributed gradient projection algorithms which iteratively
adjust local control variables so as to converge to the optimal power
control, coding subgraph, and congestion control configuration. We
explicitly derive the scaling matrices required in the gradient projection
algorithms for fast, guaranteed global convergence, and show how the scaling
matrices can be computed in a distributed manner. |
Distributed
Computing |
2010/
Java |
35. |
Fast Algorithms for Resource Allocation in Cellular Networks Abstract: We consider a wireless cellular network
where the channels from the base station to the n mobile users undergo flat
fading. Spectral resources are to be divided among the users using time
division multiple access (TDMA) in order to maximize total user utility. We
show that this problem can be cast as a nonlinear
convex optimization problem, and describe an O(n)
algorithm to solve it. Computational experiments show that the algorithm
typically converges in around 25 iterations, where each iteration has a cost
that is O(n), with a modest constant.
When the algorithm starts from an initial resource allocation that is close
to optimal, convergence typically takes even fewer iterations. Thus, the
algorithm can efficiently track the optimal resource allocation as the
channel conditions change due to fading. While, in this paper, we focus on
TDMA systems, our approach extends to frequency selective channels, and to
frequency division multiple access (FDMA), and code division multiple access
(CDMA) systems. We briefly describe such extensions. |
||
36. |
Minimizing Delay and Maximizing Lifetime for Wireless Sensor
Networks With Anycast Abstract: In this
paper, we are interested in minimizing the delay and maximizing the lifetime of
event-driven wireless sensor networks, for which events occur infrequently.
In such systems, most of the energy is consumed when the radios are on,
waiting for an arrival to occur. Sleep-wake scheduling is an effective
mechanism to prolong the lifetime of these energy-constrained wireless sensor
networks. However, sleep-wake scheduling could result in substantial delays
because a transmitting node needs to wait for its next-hop relay node to wake
up. An interesting line of work
attempts to reduce these delays by developing “anycast”-based packet
forwarding schemes, where each node opportunistically forwards a packet to
the first neighboring node that wakes up among multiple candidate nodes. In
this paper, we first study how to optimize the anycast forwarding schemes for
minimizing the expected packet-delivery delays from the sensor nodes to the
sink. Based on this result, we then provide a solution to the joint control
problem of how to optimally control the system parameters of the sleep-wake
scheduling protocol and the anycast packet-forwarding protocol to maximize
the network lifetime, subject to a constraint on the expected end-to-end
packet-delivery delay. Our numerical results indicate that the proposed
solution can outperform prior heuristic solutions in the literature,
especially under the practical scenarios where there are obstructions, e.g.,
a lake or a mountain, in the coverage area of wireless sensor networks. |
Network |
2010/Java |
37. |
Opportunistic Routing in Multi-radio Multi-channel Multi-hop
Wireless Networks Abstract: Two major
factors that limit the throughput in multi-hop wireless networks are the
unreliability of wireless transmissions and co-channel interference. One
promising technique that combats lossy wireless transmissions is opportunistic
routing (OR). OR involves multiple forwarding candidates to relay packets by
taking advantage of the broadcast nature and spatial diversity of the
wireless medium. Furthermore, recent advances in multi-radio multi-channel
transmission technology allow more concurrent transmissions in the network,
and shows the potential of substantially improving the system capacity.
However, the performance of OR in multi-radio multi-channel multi-hop
networks is still unknown, and the methodology of studying the performance of
traditional routing (TR) cannot be directly applied to OR. In this paper, we
present our research on computing an end-to-end throughput bound of OR in
multi-radio multi-channel multi-hop wireless networks. We formulate the capacity of OR as a linear programming
(LP) problem which jointly solves the radio-channel assignment and
transmission scheduling. Leveraging our analytical model, we gain the
following insights into OR: 1) OR can achieve better performance than TR
under different radio/channel configurations, however, in particular
scenarios, TR is more preferable than OR; 2) OR can achieve comparable or
even better performance than TR by using less radio resource; 3) for OR, the
throughput gained from increasing the number of potential forwarding
candidates becomes marginal. |
Mobile
Computing |
2010/Java |
38. |
Optimal Jamming Attacks and Network Defense Policies in
Wireless Sensor Networks Abstract: We consider a scenario where a sophisticated jammer
jams an area in a single-channel wireless sensor network. The jammer controls
the probability of jamming and transmission range to cause maximal damage to
the network in terms of corrupted communication links. The jammer action
ceases when it is detected by a monitoring node in the network, and a
notification message is transferred out of the jamming region. The jammer is
detected at a monitor node by employing an optimal detection test based on
the percentage of incurred collisions. On the other hand, the network
computes channel access probability in an effort to minimize the jamming
detection plus notification time. In order for the jammer to optimize its
benefit, it needs to know the network channel access probability and number
of neighbors of the monitor node. Accordingly, the network needs to know the
jamming probability of the jammer. We study the idealized case of perfect
knowledge by both the jammer and the network about the strategy of one
another, and the case where the jammer or the network lack this knowledge.
The latter is captured by formulating and solving optimization problems, the
solutions of which constitute best responses of the attacker or the network
to the worst-case strategy of each other. We also take into account potential
energy constraints of the jammer and the network. We extend the problem to
the case of multiple observers and adaptable jamming transmission range and
propose an intuitive heuristic jamming strategy for that case. |
Network |
2010/Java |
39. |
Large-Scale
Software Testing Environment using Cloud Computing Technology for Dependable
Parallel and Distributed Systems Abstract: Various information systems are widely
used in the information society era, and the demand for highly dependable systems
is increasing year after year. However, software testing for such a system
becomes more difficult due to the enlargement and the complexity of the
system. In particular, it is too difficult to test parallel and distributed
systems sufficiently although dependable systems such as high-availability
servers usually form parallel and distributed systems. To solve these
problems, we proposed a software testing environment for dependable parallel
and distributed system using the cloud computing technology, named D-Cloud.
D-Cloud includes Eucalyptus as the cloud management software, and FaultVM
based on QEMU as the virtualization software, and D-Cloud frontend for
interpreting test scenario. D-Cloud enables not only to automate the system
configuration and the test procedure but also to perform a number of test
cases simultaneously, and to emulate hardware faults flexibly. In this paper,
we present the concept and design of D-Cloud, and describe how to specify the
system configuration and the test scenario. Furthermore, a preliminary
software-testing example using D-Cloud is presented. The result shows
that D-Cloud allows the test environment to be set up easily and the
software for a distributed system to be tested. |
Cloud
Computing |
2010/
J2EE |
40. |
Dynamic Multichannel Access With Imperfect Channel State
Detection Abstract:
A restless multi-armed bandit
problem that arises in multichannel opportunistic communications is
considered, where channels are modeled as independent and identical
Gilbert–Elliot channels and channel state detection is subject to errors. A
simple structure of the myopic policy is established under a certain
condition on the false alarm probability of the channel state detector. It is
shown that myopic actions can be obtained by maintaining a simple channel
ordering without knowing the underlying Markovian model. The optimality of
the myopic policy is proved for the case of two channels and conjectured for
general cases. Lower and upper bounds on the performance of the myopic policy
are obtained in closed-form, which characterize the scaling behavior of the
achievable throughput of the multichannel opportunistic system. The
approximation factor of the myopic policy is also analyzed to bound its
worst-case performance loss with respect to the optimal performance. |
Network |
2010/
.Net |
S.N. |
IEEE 2009 Project Titles |
Domain |
Lang/Year |
1. |
A Gen2-based RFID Authentication Protocol for Security and
Privacy Abstract: EPCglobal
Class-1 Generation-2 specification (Gen2 in brief) has been approved as
ISO18000-6C for global use, but the identity of tag (TID) is transmitted in
plaintext which makes the tag traceable and clonable. Several solutions have
been proposed based on traditional encryption methods, such as symmetric or
asymmetric ciphers, but they are not suitable for low-cost RFID tags.
Recently, some lightweight authentication protocols conforming to Gen2 have
been proposed. However, the message flow of these protocols is different from
Gen2. Existing readers may fail to read new tags. |
Mobile Computing |
2009/.Net |
2. |
A Tabu Searching Algorithm For Cluster Building in Wireless
Sensor Networks Abstract: The
main challenge in wireless sensor network deployment pertains to optimizing
energy consumption when collecting data from sensor nodes. Compared to other
methods (CPLEX-based method, distributed method, simulated annealing-based
method), the results show that our tabu search-based approach returns
high-quality solutions in terms of cluster cost and execution time. As a
result, this approach is suitable for handling network extensibility in a
satisfactory manner. |
Mobile Computing |
2009/.Net |
3. |
Analysis of Shortest Path Routing for Large Multi-Hop Wireless
Networks Abstract: In this
paper, we analyze the impact of straight line routing in large homogeneous
multi-hop wireless networks.We estimate the nodal load, which is defined as
the number of packets served at a node, induced by straight line routing. For
a given total offered load on the network, our analysis shows that the nodal
load at each node is a function of the node’s Voronoi cell, the node’s
location in the network, and the traffic pattern specified by the source and
destination randomness and straight line routing. In the asymptotic regime,
we show that the probability that a node serves a packet arriving
to the network approaches the product of half the length of the node’s
Voronoi cell perimeter and the load density function of a packet passing
through the node’s location. The density function depends on the
traffic pattern generated by straight line routing, and determines where the
hot spot is created in the network. Hence, contrary to conventional wisdom,
straight line routing can balance the load over the network, depending on the
traffic patterns. |
Network Computing |
2009/.Net |
4. |
Biased Random Walks in Uniform Wireless Networks Abstract: A recurrent
problem when designing distributed applications is to search for a node with
known property. File searching in peer-to-peer (P2P) applications, resource
discovery in service-oriented architectures (SOAs), and path discovery in
routing can all be cast as a search problem. Random walk-based search
algorithms are often suggested for tackling the search problem, especially in
very dynamic systems like mobile wireless networks. The cost and the
effectiveness of a random walk-based search algorithm are measured by the
expected number of transmissions required before hitting the target. Hence,
to have a low hitting time is a critical goal. |
Mobile Computing |
2009/.Net |
5. |
Cell Breathing Techniques for Load Balancing in Wireless LANs Abstract: Maximizing
network throughput while providing fairness is one of the key challenges in
wireless LANs (WLANs). This goal is typically achieved when the load of
access points (APs) is balanced. Recent studies on operational WLANs,
however, have shown that AP load is often substantially uneven. To alleviate
such imbalance of load, several load balancing schemes have been proposed.
These schemes commonly require proprietary software or hardware at the user
side for controlling the user-AP association. In this paper we present a new
load balancing technique by controlling the size of WLAN cells (i.e., AP's
coverage range), which is conceptually similar to cell breathing in cellular
networks. The proposed scheme requires neither modification at the user side nor any change to the IEEE 802.11 standard. It only requires the ability to dynamically change the transmission power of the AP beacon messages. We develop a set
of polynomial time algorithms that find the optimal beacon power settings
which minimize the load of the most congested AP. We also consider the
problem of network-wide min-max load balancing. Simulation results show that
the performance of the proposed method is comparable with or superior to the
best existing association-based methods. |
Mobile Computing |
2009/.Net |
6. |
Compaction of Schedules and a Two-Stage Approach for
Duplication-Based DAG Scheduling Abstract: Many DAG scheduling algorithms
generate schedules that require a prohibitively large number of processors. To
address this problem, we propose a generic algorithm, SC, to minimize the
processor requirement of any given valid schedule. SC preserves the schedule
length of the original schedule and reduces processor count by merging
processor schedules and removing redundant duplicate tasks. To the best of
our knowledge, this is the first algorithm to address this highly unexplored
aspect of DAG scheduling. On average, SC reduced the processor requirement by 91, 82, and 72 percent for schedules generated by the PLW, TCSD, and CPFD algorithms, respectively. The SC algorithm has low complexity compared to most duplication-based algorithms. Moreover, it decouples processor economization from the schedule length minimization problem. To take advantage of these
features of SC, we also propose a scheduling algorithm SDS, having the same
time complexity as SC. Our experiments demonstrate that schedules generated
by SDS are only 3 percent longer than CPFD, one of the best algorithms in
that respect. SDS and SC together form a two-stage scheduling algorithm that
produces schedules with high quality and low processor requirement, and has
lower complexity than the comparable algorithms that produce similar
high-quality results. |
Distributed Computing |
2009/.Net |
7. |
Delay Analysis for Maximal Scheduling With Flow Control in Wireless
Networks With Bursty Traffic Abstract: We consider the delay properties of
one-hop networks with general interference constraints and multiple traffic
streams with time-correlated arrivals. We first treat the case when arrivals
are modulated by independent finite state Markov chains. We show that the
well known maximal scheduling algorithm achieves average delay that grows at
most logarithmically in the largest number of interferers at any link.
Further, in the important special case when each Markov process has at most
two states (such as bursty ON/OFF sources), we prove that average delay is
independent of the number of nodes and links in the network, and hence is
order-optimal. We provide tight delay bounds in terms of the individual
auto-correlation parameters of the traffic sources. These are perhaps the
first order-optimal delay results for controlled queueing networks that
explicitly account for such statistical information. Our analysis treats
cases both with and without flow control. |
Network Computing |
2009/.Net |
8. |
Energy Maps For Mobile Wireless Networks Coherence Time versus
Spreading Period Abstract: We show that even though mobile
networks are highly unpredictable when viewed at the individual node scale, the
end-to end quality-of-service (QoS) metrics can be stationary when the mobile
network is viewed in the aggregate. We define the coherence time as the
maximum duration for which the end-to-end QoS metric remains roughly
constant, and the spreading period as the minimum duration required to spread
QoS information to all the nodes. We show that if the coherence time is
greater than the spreading period, the end-to-end QoS metric can be tracked.
We focus on the energy consumption as the end-to-end QoS metric, and describe
a novel method by which an energy map can be constructed and refined in the
joint memory of the mobile nodes. Finally, we show how energy maps can be
utilized by an application that aims to minimize a node’s total energy
consumption over its near-future trajectory. |
Mobile Computing |
2009/.Net |
9. |
Enforcing Minimum-Cost Multicast Routing against Selfish
Information Flows Abstract: We
study multicast in a non cooperative environment where information flows
selfishly route themselves through the cheapest paths available. The main
challenge is to enforce such selfish multicast flows to stabilize at a
socially optimal operating point incurring minimum total edge cost, through
appropriate cost allocation and other economic measures, with replicable and
encodable properties of information flows considered. We show that known cost
allocation schemes are not sufficient. We relate the taxes to VCG payment
schemes and discuss an efficient primal-dual algorithm that simultaneously
computes the taxes, the cost allocation, and the optimal multicast flow, with the potential for fully distributed implementation. |
Distributed Computing |
2009/.Net |
10. |
Explicit Load Balancing Technique for NGEO Satellite IP
Networks With On-Board Processing Capabilities Abstract: Non-geostationary
(NGEO) satellite communication systems offer an array of advantages over
their terrestrial and geostationary counterparts. They are seen as an
integral part of next generation ubiquitous communication systems. Given the
non-uniform distribution of users in satellite footprints, due to several
geographical and/or climatic constraints, some Inter-Satellite Links (ISLs)
are expected to be heavily loaded with data packets while others remain
underutilized. Such a scenario obviously leads to congestion of the heavily loaded links and ultimately results in buffer overflows, higher queuing delays, and significant packet drops. The proposed explicit load balancing technique guarantees a better distribution of traffic among satellites, thereby avoiding congestion and packet drops at the satellites and ensuring a more even use of the entire satellite constellation. |
Network Computing |
2009/.Net |
11. |
Greedy Routing with Anti-Void Traversal for Wireless Sensor
Networks Abstract: The
unreachability problem (i.e., the so-called void problem) that exists in the
greedy routing algorithms has been studied for the wireless sensor networks.
Some of the current research work cannot fully resolve the void problem,
while there exist other schemes that can guarantee the delivery of packets
with the excessive consumption of control overheads. Moreover, the hop count
reduction (HCR) scheme is utilized as a short-cutting technique to reduce the
routing hops by listening to the neighbor’s traffic, while the intersection
navigation (IN) mechanism is proposed to obtain the best rolling direction
for boundary traversal with the adoption of shortest path criterion. In order
to maintain the network requirement of the proposed RUT scheme under the
non-UDG networks, the partial UDG construction (PUC) mechanism is proposed to
transform the non-UDG into UDG setting for a portion of nodes that facilitate
boundary traversal. These three schemes are incorporated within the GAR
protocol to further enhance the routing performance with reduced
communication overhead. The proofs of correctness for the GAR scheme are also
given in this paper. |
Mobile Computing |
2009/.Net |
12. |
Information Content-Based Sensor Selection and Transmission Power
Adjustment for Collaborative Target Tracking Abstract: For target tracking applications,
wireless sensor nodes provide accurate information since they can be deployed
and operated near the phenomenon. These sensing devices can collaborate among themselves to improve the target localization and
tracking accuracies. An energy-efficient collaborative target tracking
paradigm is developed for wireless sensor networks (WSNs). In addition, a
novel approach to energy savings in WSNs is devised in the
information-controlled transmission power (ICTP) adjustment, where nodes with
more information use higher transmission powers than those that are less
informative to share their target state information with the neighboring
nodes. |
Mobile Computing |
2009/.Net |
13. |
Local Construction of Near-Optimal Power Spanners for Wireless
Ad Hoc Networks Abstract: We present a local distributed algorithm
that, given a wireless ad hoc network modeled as a unit disk graph U in the plane,
constructs a planar power spanner of U whose degree is bounded by k and whose stretch factor is bounded by 1 + (2 sin(π/k))^p, where k ≥ 10 is an integer parameter and p ∈ [2, 5] is the power exponent constant. For the same degree bound k, the stretch factor of our algorithm significantly improves the previous best bounds by Song et al. We show that this bound is near-optimal by proving that the slightly smaller stretch factor of 1 + (2 sin(π/(k+1)))^p is unattainable for the same degree bound k. In contrast
to previous algorithms for the problem, the presented algorithm is local. As
a consequence, the algorithm is highly scalable and robust. Finally, while
the algorithm is efficient and easy to implement in practice, it relies on
deep insights on the geometry of unit disk graphs and novel techniques that
are of independent interest. |
Mobile Computing |
2009/.Net |
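For a quick numerical feel of the stretch-factor bound quoted in the entry above, the snippet below evaluates 1 + (2 sin(π/k))^p for the sample values k = 10 and p = 2 (both arbitrary choices within the stated ranges), giving a bound of roughly 1.38.

// Evaluates the stretch-factor bound 1 + (2 sin(pi/k))^p from the entry above.
// k = 10 and p = 2 are arbitrary sample values within the stated ranges.
public class StretchFactorBound {
    public static void main(String[] args) {
        int k = 10;       // degree bound, k >= 10
        double p = 2.0;   // power exponent, p in [2, 5]
        double bound = 1.0 + Math.pow(2.0 * Math.sin(Math.PI / k), p);
        System.out.printf("k=%d, p=%.1f -> stretch factor <= %.4f%n", k, p, bound);
    }
}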
14. |
Movement-Assisted Connectivity Restoration in Wireless Sensor
and Actor Networks Abstract: Recent years have witnessed a growing interest
in applications of wireless sensor and actor networks (WSANs). In these
applications, a set of mobile actor nodes are deployed in addition to sensors
in order to collect sensors’ data and perform specific tasks in response to
detected events/objects. In most scenarios, actors have to respond
collectively, which requires interactor coordination. Therefore, maintaining
a connected interactor network is critical to the effectiveness of WSANs.
However, WSANs often operate unattended in harsh environments where actors
can easily fail or get damaged. An actor failure may lead to partitioning the
interactor network and thus hinder the fulfillment of the application
requirements. |
Distributed Computing |
2009/.Net |
15. |
On the Planning of Wireless Sensor Networks Energy-Efficient
Clustering under the Joint Routing and Coverage Constraint Abstract: Minimizing
energy dissipation and maximizing network lifetime are important issues in
the design of applications and protocols for sensor networks. Energy-efficient
sensor state planning consists in finding an optimal assignment of states to
sensors in order to maximize network lifetime. For example, in area
surveillance applications, only an optimal subset of sensors that fully
covers the monitored area can be switched on while the other sensors are turned off. In this paper, we address the optimal planning of sensors' states in
cluster-based sensor networks. Typically, any sensor can be turned on, turned
off, or promoted cluster head, and a different power consumption level is
associated with each of these states. We seek an energy-optimal topology that
maximizes network lifetime while ensuring simultaneously full area coverage
and sensor connectivity to cluster heads, which are constrained to form a
spanning tree used as a routing topology. |
Mobile Computing |
2009/.Net |
16. |
Performance of Orthogonal Fingerprinting Codes under
Worst-Case Noise Abstract: We study the
effect of the noise distribution on the error probability of the detection
test when a class of randomly rotated spherical fingerprints is used. The
detection test is performed by a focused correlation detector, and the
spherical codes studied here form a randomized orthogonal constellation. The
colluders create a noise-free forgery by uniform averaging of their
individual copies, and then add a noise sequence to form the actual forgery. We derive the
noise distribution that maximizes the error probability of the detector under
average and almost-sure distortion constraints. Moreover, we characterize the
noise distribution that minimizes the decoder’s error exponent under a
large-deviations distortion constraint. |
Secure
Computing |
2009/.Net |
17. |
PRESTO Feedback-Driven Data Management in Sensor Networks Abstract: This paper
presents PRESTO, a novel two-tier sensor data management architecture
comprising proxies and sensors that cooperate with one another for acquiring
data and processing queries. PRESTO proxies construct time-series models of
observed trends in the sensor data and transmit the parameters of the model
to sensors. Sensors check sensed data with model-predicted values and
transmit only deviations from the predictions back to the proxy. Such a
model-driven push approach is energy-efficient, while ensuring that anomalous
data trends are never missed. In addition to supporting queries on current
data, PRESTO also supports queries on historical data using interpolation and
local archival at sensors. PRESTO can adapt model and system parameters to
data and query dynamics to further extract energy savings. |
Network Computing |
2009/.Net |
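To make the model-driven push idea above concrete, here is a hedged Java sketch (not PRESTO's implementation): a sensor compares each reading with the proxy-supplied prediction and transmits only when the deviation exceeds a tolerance. The linear model, its parameters, and the tolerance are assumptions.

// Illustrative model-driven push: the sensor reports a reading only when it
// deviates from the model's prediction by more than a tolerance. The linear
// model and the tolerance are assumptions, not PRESTO's actual parameters.
public class ModelDrivenPushSketch {

    // Proxy-supplied prediction model: value ~ intercept + slope * t.
    static double predict(double intercept, double slope, int t) {
        return intercept + slope * t;
    }

    public static void main(String[] args) {
        double intercept = 20.0, slope = 0.05;   // model parameters pushed by the proxy
        double tolerance = 0.5;                  // allowed deviation before reporting

        double[] sensed = {20.1, 20.2, 20.9, 20.1, 21.6, 20.3}; // sample readings
        for (int t = 0; t < sensed.length; t++) {
            double predicted = predict(intercept, slope, t);
            double deviation = Math.abs(sensed[t] - predicted);
            if (deviation > tolerance) {
                // Only anomalous readings cost a radio transmission.
                System.out.printf("t=%d: push %.2f (predicted %.2f)%n", t, sensed[t], predicted);
            } else {
                System.out.printf("t=%d: suppressed (within %.2f of model)%n", t, tolerance);
            }
        }
    }
}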
18. |
RandomCast An Energy-Efficient Communication Scheme for
Mobile Ad Hoc Networks Abstract: In mobile
ad hoc networks (MANETs), every node overhears every data transmission
occurring in its vicinity and thus consumes energy unnecessarily. However,
since some MANET routing protocols such as Dynamic Source Routing (DSR)
collect route information via overhearing, they would suffer if they are used
in combination with 802.11 PSM. Allowing no overhearing may critically
deteriorate the performance of the underlying routing protocol, while
unconditional overhearing may offset the advantage of using PSM. |
Mobile Computing |
2009/.Net |
19. |
Resequencing Analysis of Stop-and-Wait ARQ for Parallel
Multichannel Communications Abstract: We evaluate the resequencing delay and the resequencing buffer occupancy. Under the assumption that all channels
have the same transmission rate but possibly different time-invariant error
rates, we derive the probability generating function of the resequencing
buffer occupancy and the probability mass function of the resequencing delay.
From numerical and simulation results, we analyze trends in the mean
resequencing buffer occupancy and the mean resequencing delay as functions of
system parameters. We expect that the modeling technique and analytical
approach used in this paper can be applied to the performance evaluation of
other ARQ protocols (e.g., the selective-repeat ARQ) over multiple
time-varying channels. |
Network Computing |
2009/.Net |
20. |
Resource Allocation in OFDMA Wireless Communications Systems
Supporting Multimedia Services Abstract: We
design a resource allocation algorithm for downlink of orthogonal frequency
division multiple access (OFDMA) systems supporting real-time (RT) and
best-effort (BE) services simultaneously over a time-varying wireless
channel. We formulate the optimization problem representing the resource allocation under consideration and solve it by using the dual optimization technique and the projection stochastic subgradient method. Simulation results show that the proposed algorithm meets the QoS requirements well while achieving high throughput, and outperforms the modified largest weighted delay first
(M-LWDF) algorithm that supports similar QoS requirements. |
Network Computing |
2009/.Net |
21. |
Route Stability in MANETs under the Random Direction Mobility
Model Abstract: A fundamental issue arising in mobile ad hoc networks (MANETs)
is the selection of the optimal path between any two nodes. A method that has
been advocated to improve routing efficiency is to select the most stable
path so as to reduce the latency and the overhead due to route
reconstruction. In this work, we study both the availability and the duration
probability of a routing path that is subject to link failures caused by node
mobility. In particular, we focus on the case where the network nodes move
according to the Random Direction model, and we derive both exact and
approximate (but simple) expressions of these probabilities. Through our
results, we study the problem of selecting an optimal route in terms of path
availability. Finally, we propose an approach to improve the efficiency of
reactive routing protocols. |
Mobile Computing |
2009/.Net |
22. |
Secure and Policy-Compliant Source Routing Abstract: In
today’s Internet, inter-domain route control remains elusive; nevertheless,
such control could improve the performance, reliability, and utility of the
network for end users and ISPs alike. While researchers have proposed a
number of source routing techniques to combat this limitation, there has thus
far been no way for independent ASes to ensure that such traffic does not
circumvent local traffic policies, nor to accurately determine the correct
party to charge for forwarding the traffic. |
Network Computing |
2009/.Net |
23. |
Single-Link Failure Detection in All-Optical Networks Using Monitoring Cycles and Paths Abstract: In this paper, we consider
the problem of fault localization in all-optical networks. We introduce the
concept of monitoring cycles (MCs) and monitoring paths (MPs) for unique
identification of single-link failures. MCs and MPs are required to pass
through one or more monitoring locations. They are constructed such that any
single-link failure results in the failure of a unique combination of MCs and
MPs that pass through the monitoring location(s). For a network with only one
monitoring location, we prove that three-edge connectivity is a necessary and
sufficient condition for constructing MCs that uniquely identify any
single-link failure in the network. For this case, we formulate the problem
of constructing MCs as an integer linear program (ILP). We also develop
heuristic approaches for constructing MCs in the presence of one or more
monitoring locations. For an arbitrary network (not necessarily three-edge
connected), we describe a fault localization technique that uses both MPs and
MCs and that employs multiple monitoring locations. We also provide a
linear-time algorithm to compute the minimum number of required monitoring
locations. Through extensive simulations, we demonstrate the effectiveness of
the proposed monitoring technique. |
Network Computing |
2009/.Net |
24. |
Spread-Spectrum Watermarking Security Abstract: This paper presents both theoretical and
practical analyses of the security offered by watermarking and data hiding methods
based on spread spectrum. In this context, security is understood as the
difficulty of estimating the secret parameters of the embedding function
based on the observation of watermarked signals. On the theoretical side, the
security is quantified from an information-theoretic point of view by means
of the equivocation about the secret parameters. The main results reveal
fundamental limits and bounds on security and provide insight into other
properties, such as the impact of the embedding parameters, and the tradeoff
between robustness and security. On the practical side, workable estimators
of the secret parameters are proposed and theoretically analyzed for a
variety of scenarios, providing a comparison with previous approaches, and
showing that the security of many schemes used in practice can be fairly low. |
Secure
Computing |
2009/.Net |
25. |
The Effectiveness of Checksums for Embedded Control Networks Abstract: Embedded control networks commonly use
checksums to detect data transmission errors. However, design decisions about
which checksum to use are difficult because of a lack of information about
the relative effectiveness of available options. We study the error detection
effectiveness of the following commonly used checksum computations: exclusive
or (XOR), two's complement addition, one's complement addition, Fletcher
checksum, Adler checksum, and cyclic redundancy codes (CRCs). A study of
error detection capabilities for random independent bit errors and burst
errors reveals that XOR, two's complement addition, and Adler checksums are
suboptimal for typical network use. Instead, one's complement addition should
be used for networks willing to sacrifice error detection effectiveness to
reduce compute cost, Fletcher checksum for networks looking for a balance of
error detection and compute cost, and CRCs for networks willing to pay a
higher compute cost for significantly improved error detection. |
Secure
Computing |
2009/.Net |
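As a purely illustrative aside (not code from the paper above), the following Java sketch implements two of the checksum computations named in that entry, XOR and one's complement addition over 16-bit words; the word size, byte order, and padding choices are assumptions.

// Illustrative 16-bit checksums over a byte array: XOR and one's complement
// addition (Internet-checksum style). Word size and padding are assumptions.
public class ChecksumSketch {

    // XOR of consecutive 16-bit words (big-endian); an odd trailing byte is padded with 0.
    static int xor16(byte[] data) {
        int sum = 0;
        for (int i = 0; i < data.length; i += 2) {
            int hi = data[i] & 0xFF;
            int lo = (i + 1 < data.length) ? (data[i + 1] & 0xFF) : 0;
            sum ^= (hi << 8) | lo;
        }
        return sum & 0xFFFF;
    }

    // One's complement addition of 16-bit words: carries wrap around into the sum.
    static int onesComplement16(byte[] data) {
        int sum = 0;
        for (int i = 0; i < data.length; i += 2) {
            int hi = data[i] & 0xFF;
            int lo = (i + 1 < data.length) ? (data[i + 1] & 0xFF) : 0;
            sum += (hi << 8) | lo;
            sum = (sum & 0xFFFF) + (sum >>> 16);   // fold the carry back in
        }
        return (~sum) & 0xFFFF;                    // final complement
    }

    public static void main(String[] args) {
        byte[] frame = "embedded control frame".getBytes();
        System.out.printf("XOR checksum:   0x%04X%n", xor16(frame));
        System.out.printf("1's complement: 0x%04X%n", onesComplement16(frame));
    }
}

Running both on the same frame, and then on a copy with a few flipped bits, gives a quick feel for how differently the two computations react to injected errors.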
26. |
Two Blocking Algorithms on Adaptive Binary Splitting Single and
Pair Resolutions for RFID Tag Identification Abstract: In radio frequency identification (RFID)
systems, the reader identifies tags through communication over a shared
wireless channel. When multiple tags transmit their IDs simultaneously, their
signals collide, increasing the identification delay. Therefore, many
previous anti-collision algorithms, including an adaptive query splitting
algorithm (AQS) and an adaptive binary splitting algorithm (ABS), focused on
solving this problem. This paper proposes two blocking algorithms, a single
resolution blocking ABS algorithm (SRB) and a pair resolution blocking ABS
algorithm (PRB), based on ABS. SRB not only inherits the essence of ABS which
uses the information of recognized tags obtained from the last process of tag
identification, but also adopts a blocking technique which prevents
recognized tags from being collided by unrecognized tags. PRB further adopts
a pair resolution technique which couples recognized tags and thus needs only half the time to identify these recognized tags in the next round. We formally analyze the
performance of SRB and PRB. Finally, the analytic and simulation results show
that SRB slightly outperforms ABS and PRB significantly surpasses ABS. |
Network Computing |
2009/.Net |
27. |
Virus Spread in Networks Abstract: The influence of the
network characteristics on the virus spread is analyzed in a new model, the N-intertwined Markov chain model, whose only approximation lies in the application of mean field theory. The mean field approximation is quantified in detail. The N-intertwined model has been compared with the exact 2^N-state Markov model and with previously proposed "homogeneous" or "local" models. The sharp epidemic threshold τc, which is a consequence of mean field theory, is rigorously shown to be equal to τc = 1/λmax(A), where λmax(A) is the largest eigenvalue (the spectral radius) of the adjacency matrix A. A
continued fraction expansion of the steady-state infection probability at
node j is presented as well as several upper bounds. |
Network Computing |
2009/.Net |
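The entry above pins the epidemic threshold to the spectral radius of the adjacency matrix, τc = 1/λmax(A). The hedged Java sketch below (not the paper's code) estimates λmax by power iteration on a small example graph and reports the resulting threshold; the graph and the iteration count are arbitrary assumptions.

// Power iteration to estimate the spectral radius (largest eigenvalue) of a
// symmetric adjacency matrix, then the epidemic threshold tau_c = 1 / lambda_max.
// The sample graph and the fixed iteration count are illustrative assumptions.
public class EpidemicThreshold {

    static double spectralRadius(double[][] a, int iterations) {
        int n = a.length;
        double[] x = new double[n];
        java.util.Arrays.fill(x, 1.0);              // non-zero start vector
        double lambda = 0.0;
        for (int it = 0; it < iterations; it++) {
            double[] y = new double[n];
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    y[i] += a[i][j] * x[j];         // y = A x
            double norm = 0.0;
            for (double v : y) norm += v * v;
            norm = Math.sqrt(norm);
            for (int i = 0; i < n; i++) x[i] = y[i] / norm;
            lambda = norm;                          // ||Ax|| for unit x converges to lambda_max
        }
        return lambda;
    }

    public static void main(String[] args) {
        // 4-node ring with one chord (undirected, unweighted).
        double[][] adjacency = {
            {0, 1, 0, 1},
            {1, 0, 1, 1},
            {0, 1, 0, 1},
            {1, 1, 1, 0}
        };
        double lambdaMax = spectralRadius(adjacency, 100);
        System.out.printf("lambda_max ~ %.4f, tau_c ~ %.4f%n", lambdaMax, 1.0 / lambdaMax);
    }
}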
28. |
Capturing Router Congestion and Delay Abstract: Using a unique
monitoring experiment, we capture all packets crossing a (lightly utilized)
operational access router from a Tier-1 provider, and use them to provide a
detailed examination of router congestion and packet delays. The complete
capture enables not just statistics as seen from outside the router, but also
an accurate physical router model to be identified. |
Network Computing |
2009/Java |
29. |
Continuous Monitoring of Spatial Queries in Wireless Broadcast Environments Abstract: Wireless
data broadcast is a promising technique for information dissemination that
leverages the computational capabilities of the mobile devices in order to
enhance the scalability of the system. Under this environment, the data are
continuously broadcast by the server, interleaved with some indexing
information for query processing. Clients may then tune in the broadcast
channel and process their queries locally without contacting the server.
Previous work on spatial query processing for wireless broadcast systems has
only considered snapshot queries over static data. |
Mobile Computing |
2009/Java |
30. |
Energy Maps For Mobile Wireless Networks Coherence Time versus Spreading Period Abstract: We
show that even though mobile networks are highly unpredictable when viewed at
the individual node scale, the end-to-end quality-of-service (QoS) metrics
can be stationary when the mobile network is viewed in the aggregate.
Finally, we show how energy maps can be utilized by an application that aims
to minimize a node's total energy consumption over its near-future
trajectory. |
Mobile Computing |
2009/Java |
31. |
Energy-Efficient SINR-Based Routing for Multihop Wireless Networks Abstract: In this paper, we develop an
energy-efficient routing scheme that takes into account the interference
created by existing flows in the network. The routing scheme chooses a route such that the network expends the minimum energy while satisfying the minimum constraints of the flows. Unlike previous works, we explicitly study the impact of routing a new flow on the energy consumption of the network. Through implementation, we show that the routes chosen by our algorithms (centralized and distributed)
are more energy efficient than the state of the art. |
Mobile Computing |
2009/Java |
32. |
Evaluating the Vulnerability of Network Traffic Using Joint
Security and Routing Analysis Abstract: Joint analysis of security and routing
protocols in wireless networks reveals vulnerabilities of secure network
traffic that remain undetected when security and routing protocols are
analyzed independently. We formulate a class of continuous metrics to
evaluate the vulnerability of network traffic as a function of security and
routing protocols used in wireless networks. We develop two complementary
vulnerability definitions using set theoretic and circuit theoretic
interpretations of the security of network traffic, allowing a network
analyst or an adversary to determine weaknesses in the secure network. |
Network Computing |
2009/Java |
33. |
Large Connectivity for Dynamic Random Geometric Graphs Abstract: We provide the first rigorous analytical
results for the connectivity of dynamic random geometric graphs—a model for
mobile wireless networks in which vertices move in random directions in the
unit torus. The model presented here follows one described in prior work. We provide
precise asymptotic results for the expected length of the connectivity and
disconnectivity periods of the network. We believe that the formal tools
developed in this work could be extended to be used in more concrete settings
and in more realistic models, in the same manner as the development of the
connectivity threshold for static random geometric graphs has affected a lot
of research done on ad hoc networks. |
Mobile Computing |
2009/Java |
34. |
Measuring Capacity Bandwidth of Targeted Path Segments Abstract: Accurate
measurement of network bandwidth is important for network management
applications as well as flexible Internet applications and protocols which
actively manage and dynamically adapt to changing utilization of network
resources. Extensive work has focused on two approaches to measuring
bandwidth: measuring it hop-by-hop, and measuring it end-to-end along a path.
Unfortunately, best-practice techniques for the former are inefficient and
techniques for the latter are only able to observe bottlenecks visible at
end-to-end scope. In this paper, we develop end-to-end probing methods which
can measure bottleneck capacity bandwidth along arbitrary, targeted subpaths of a path in the network, including subpaths shared by a
set of flows. We evaluate our technique through ns simulations, then provide
a comparative Internet performance evaluation against hop-by-hop and
end-to-end techniques. We also describe a number of applications which we
foresee as standing to benefit from solutions to this problem, ranging from
network troubleshooting and capacity provisioning to optimizing the layout of
application-level overlay networks, to optimized replica placement. |
Network Computing |
2009/Java |
35. |
Mitigation of Control Channel Jamming Under Node Capture Attacks Abstract: Availability
of service in many wireless networks depends on the ability for network users
to establish and maintain communication channels using control messages from
base stations and other users. An adversary with knowledge of the underlying
communication protocol can mount an efficient denial of service attack by
jamming the communication channels used to exchange control messages. The use
of spread spectrum techniques can deter an external adversary from such
control channel jamming attacks. However, malicious colluding insiders or an
adversary who captures or compromises system users is not deterred by spread
spectrum, as they know the required spreading sequences. |
Mobile Computing |
2009/Java |
36. |
Mobility Management Approaches for Mobile IP Networks Abstract: In wireless networks, efficient
management of mobility is a crucial issue to support mobile users. The Mobile
Internet Protocol (MIP) has been proposed to support global mobility in IP
networks. Several mobility management strategies have been proposed which aim at reducing the signaling traffic related to the Mobile Terminals' (MTs)
registration with the Home Agents (HAs) whenever their Care-of-Addresses
(CoAs) change. They use different Foreign Agents (FAs) and Gateway FAs (GFAs)
hierarchies to concentrate the registration processes. For high-mobility MTs,
the Hierarchical MIP (HMIP) and Dynamic HMIP (DHMIP) strategies localize the
registration in FAs and GFAs, yielding to high-mobility signaling. The
Multicast HMIP strategy limits the registration processes in the GFAs. For
high-mobility MTs, it provides the lowest mobility signaling delay compared to the HMIP and DHMIP approaches. However, it is a resource-consuming strategy unless MT mobility is frequent. Hence, we propose an analytic model to
evaluate the mean signaling delay and the mean bandwidth per call according
to the type of MT mobility. In our analysis, the MHMIP outperforms the DHMIP
and MIP strategies in almost all the studied cases. The main contribution of
this paper is the analytic model that allows the performance evaluation of the mobility management approaches. |
Mobile Computing |
2009/Java |
37. |
Multiple Routing Configurations for Fast IP Network Recovery Abstract: As the
Internet takes an increasingly central role in our communications
infrastructure, the slow convergence of routing protocols after a network
failure becomes a growing problem. To assure fast recovery from link and node
failures in IP networks, we present a new recovery scheme called Multiple
Routing Configurations (MRC). We also show how an estimate of the traffic
demands in the network can be used to improve the distribution of the
recovered traffic, and thus reduce the chances of congestion when MRC is
used. |
Network Computing |
2009/Java |
38. |
Residual-Based Estimation of Peer and Link Lifetimes in P2P
Networks Abstract: Existing methods for measuring lifetimes in P2P systems usually rely on the so-called Create-Based Method (CBM), which divides a given observation window into two halves and samples users “created” in the first half at fixed time intervals until they die or the observation period ends. Despite its frequent use, this approach has no rigorous accuracy or overhead analysis in the literature. To shed more light on its performance, we first derive a model for CBM and show that a small window size or a large sampling interval may lead to highly inaccurate lifetime distributions. We then show that create-based sampling exhibits an inherent tradeoff between
overhead and accuracy, which does not allow any fundamental improvement to
the method. Instead, we propose a completely different approach for sampling
user dynamics that keeps track of only residual lifetimes of peers and
uses a simple renewal-process model to recover the actual lifetimes from the
observed residuals. Our analysis indicates that for reasonably large systems,
the proposed method can reduce bandwidth consumption by several orders of
magnitude compared to prior approaches while simultaneously achieving higher accuracy.
We finish the paper by implementing a two-tier Gnutella network crawler
equipped with the proposed sampling method and obtain the distribution of
ultrapeer lifetimes in a network of 6.4 million users and 60 million links. |
Network Computing |
2009/Java |
39. |
SIMPS Using Sociology for Personal Mobility Abstract: Assessing
mobility in a thorough fashion is a crucial step toward more efficient mobile
network design. Recent research on mobility has focused on two main points:
analyzing models and studying their impact on data transport. These works
investigate the consequences of mobility. The SIMPS model, in contrast, defines a process called sociostation, rendered by two complementary behaviors, namely socialize and isolate, that regulate an
individual with regard to her/his own sociability level. SIMPS leads to
results that agree with scaling laws observed both in small-scale and
large-scale human motion. Although our model defines only two simple
individual behaviors, we observe many emerging collective behaviors (group
formation/splitting, path formation, and evolution). |
Mobile Computing |
2009/Java |
40. |
Spatio-Temporal Network Anomaly Detection by Assessing
Deviations of Empirical Measures Abstract: We introduce
an Internet traffic anomaly detection mechanism based on large deviations
results for empirical measures. Using past traffic traces we characterize
network traffic during various time-of-day intervals, assuming that it is
anomaly-free. We present two different approaches to characterize traffic:
(i) a model-free approach based on the method of types and Sanov’s theorem,
and (ii) a model-based approach modeling traffic using a Markov modulated
process. Using these characterizations as a reference we continuously monitor
traffic and employ large deviations and decision theory results to “compare”
the empirical measure of the monitored traffic with the corresponding
reference characterization, thus, identifying traffic anomalies in real-time.
Our experimental results show that, by applying our methodology, even short-lived anomalies are identified within a small number of observations.
Throughout, we compare the two approaches presenting their advantages and
disadvantages to identify and classify temporal network anomalies. We also
demonstrate how our framework can be used to monitor traffic from multiple
network elements in order to identify both spatial and temporal anomalies. We
validate our techniques by analyzing real traffic traces with time-stamped
anomalies. |
Network Computing |
2009/Java |
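As a simplified stand-in for the large-deviations comparison described above (this is not the paper's method), the sketch below measures how far the empirical distribution of monitored traffic drifts from an anomaly-free reference using the Kullback-Leibler divergence and flags an anomaly above a threshold; the bins, counts, and threshold are assumptions.

// Simplified anomaly check: compare the empirical distribution of monitored
// traffic against a reference distribution using KL divergence and flag an
// anomaly above a threshold. Bins, counts, and threshold are assumptions.
public class EmpiricalMeasureAnomaly {

    static double[] normalize(double[] counts) {
        double total = 0;
        for (double c : counts) total += c;
        double[] p = new double[counts.length];
        for (int i = 0; i < counts.length; i++) p[i] = counts[i] / total;
        return p;
    }

    // D(p || q) = sum_i p_i * log(p_i / q_i); assumes q_i > 0 for every bin.
    static double klDivergence(double[] p, double[] q) {
        double d = 0;
        for (int i = 0; i < p.length; i++) {
            if (p[i] > 0) d += p[i] * Math.log(p[i] / q[i]);
        }
        return d;
    }

    public static void main(String[] args) {
        double[] reference = normalize(new double[]{50, 30, 15, 5});  // anomaly-free profile
        double[] monitored = normalize(new double[]{20, 25, 30, 25}); // current window
        double divergence = klDivergence(monitored, reference);
        double threshold = 0.1;                                       // assumed detection threshold
        System.out.printf("KL divergence = %.4f -> %s%n", divergence,
                divergence > threshold ? "ANOMALY" : "normal");
    }
}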
41. |
Flexible Rollback Recovery in Dynamic Heterogeneous Grid
Computing Abstract: Large applications executing on Grid
or cluster architectures consisting of hundreds or thousands of computational
nodes create problems with respect to reliability. The source of the problems
is node failures and the need for dynamic configuration over extensive
runtime. By allowing recovery even under different numbers of processors, the proposed rollback-recovery approaches are especially suitable for applications with a need for adaptive
or reactionary configuration control. The low-cost protocols offer the
capability of controlling or bounding the overhead. A formal cost model is
presented, followed by an experimental evaluation. It is shown that the
overhead of the protocol is very small, and the maximum work lost by a
crashed process is small and bounded. |
Secure
Computing |
2009/Java |
42. |
Dynamic Routing with Security Considerations Abstract: Security has become one of the major
issues for data communication over wired and wireless networks. Unlike past work on the design of cryptographic algorithms and system infrastructures, we propose a dynamic routing algorithm that could
randomize delivery paths for data transmission. The algorithm is easy to
implement and compatible with popular routing protocols, such as the Routing
Information Protocol in wired networks and Destination-Sequenced Distance
Vector protocol in wireless networks, without introducing extra control messages.
An analytic study on the proposed algorithm is presented, and a series of
simulation experiments are conducted to verify the analytic results and to
show the capability of the proposed algorithm. |
Secure
Computing |
2009/Java |
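As a hedged illustration of the path-randomization idea above (not the paper's actual algorithm), the sketch below lets a node hold several candidate next hops per destination and pick one uniformly at random for each packet, so consecutive deliveries may follow different paths; the static candidate table is an assumption.

// Illustrative randomized delivery: each node keeps several candidate next hops
// per destination and picks one at random per packet. The static candidate table
// is an assumption; a real protocol would derive it from its routing tables.
import java.util.List;
import java.util.Map;
import java.util.Random;

public class RandomizedNextHop {

    private final Map<String, List<String>> candidates;   // destination -> candidate next hops
    private final Random rng = new Random();

    RandomizedNextHop(Map<String, List<String>> candidates) {
        this.candidates = candidates;
    }

    // Pick a next hop uniformly at random among the candidates for this destination.
    String nextHop(String destination) {
        List<String> options = candidates.get(destination);
        if (options == null || options.isEmpty()) {
            throw new IllegalStateException("no route to " + destination);
        }
        return options.get(rng.nextInt(options.size()));
    }

    public static void main(String[] args) {
        RandomizedNextHop router = new RandomizedNextHop(
            Map.of("D", List.of("B", "C", "E")));      // three candidate next hops toward D
        for (int packet = 1; packet <= 5; packet++) {
            System.out.println("packet " + packet + " forwarded via " + router.nextHop("D"));
        }
    }
}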
43. |
Adaptive Fuzzy Filtering for Artifact Reduction in Compressed
Images and Videos Abstract: |
Image
Processing |
2009/Java |
44. |
Detecting Malicious Packet Losses Abstract: We consider the problem of
detecting whether a compromised router is maliciously manipulating its stream
of packets. In particular, we are concerned with a simple yet effective
attack in which a router selectively drops packets destined for some Victim.
Unfortunately, it is quite challenging to attribute a missing packet to a
malicious action because normal network congestion can produce the same
effect. Modern networks routinely drop packets when the load temporarily
exceeds their buffering capacities. Previous detection protocols have tried
to address this problem with a user-defined threshold: too many dropped
packets imply malicious intent. However, this heuristic is fundamentally
unsound; setting this threshold is, at best, an art and will certainly create
unnecessary false positives or mask highly focused attacks. |
Distributed Computing |
2009/Java |
45. |
On the Security of Route Discovery in MANETs Abstract: Mobile ad hoc networks (MANETs) are
collections of wireless mobile devices with restricted broadcast range and
resources, and no fixed infrastructure. Communication is achieved by relaying
data along appropriate routes that are dynamically discovered and maintained
through collaboration between the nodes. Discovery of such routes is a major
task, both from efficiency and security points of view. Recently, a security model tailored to the specific requirements of MANETs was proposed. Among the novel characteristics of this security model is that it promises a security guarantee
under concurrent executions, a feature of crucial practical implication for
this type of distributed computation. A novel route discovery algorithm
called endairA was also proposed, together with a claimed security proof
within the same model. |
Secure
Computing |
2009/.Net |
46.
|
A Distributed Stream Query Optimization Framework through
Integrated Planning and Deployment Abstract: This paper addresses the problem of
optimizing multiple distributed stream queries that are executing
simultaneously in distributed data stream systems. We argue that the static
query optimization approach of “plan, then deployment” is inadequate for
handling distributed queries involving multiple streams and node dynamics
faced in distributed data stream systems and applications. Thus, the
selection of an optimal execution plan in such dynamic and networked
computing systems must consider operator ordering, reuse, network placement,
and search space reduction. |
Distributed Computing |
2009/Java |
47. |
Facial Recognition Using Multisensor Images Based on Localized Kernel Eigenspaces Abstract: A feature
selection technique along with an information fusion procedure for improving
the recognition accuracy of a visual and thermal image-based facial
recognition system is presented in this paper. A novel modular kernel
eigenspaces approach is developed and implemented on the phase congruency
feature maps extracted from the visual and thermal images individually.
Smaller sub-regions from a predefined neighborhood within the phase
congruency images of the training samples are merged to obtain a large set of
features. These features are then projected into higher dimensional spaces
using kernel methods. The proposed localized nonlinear feature selection
procedure helps to overcome the bottlenecks of illumination variations,
partial occlusions, expression variations and variations due to temperature
changes that affect the visual and thermal face recognition techniques. AR
and Equinox databases are used for experimentation and evaluation of the
proposed technique. The proposed feature selection procedure has greatly
improved the recognition accuracy for both the visual and thermal images when
compared to conventional techniques. Also, a decision level fusion
methodology is presented which along with the feature selection procedure has
outperformed various other face recognition techniques in terms of
recognition accuracy. |
Image
Processing |
2009/Java |
48. |
Ranking and Suggesting
Popular Items Abstract: We consider the problem of ranking and suggesting popular items based
on user feedback that appears in applications such as social tagging and
search query suggestions. In particular, we assume that the user feedback is
generated as follows. The system suggests to each user a (small) subset of
items from the set of all possible items. The user can then choose an item
from her suggestion set, or alternatively choose an item from the set of all
possible items. Using this feedback, the goal is to quickly learn the true popularity of items, and hence be able to suggest items to users that are
indeed popular. The difficulty that arises in this context is that making
suggestions to users can reinforce the popularity of some items, and hence
distort the resulting item ranking. In this paper, we provide an analysis of
this problem. We first formally show that suggesting items to users can
indeed lead to a skewed popularity ranking of items. We then propose several
algorithms for ranking and suggesting items, and study their performance. In
addition, we illustrate our results using a numerical case study that is based on the inferred popularity of tags from a month-long crawl of a popular social bookmarking service. While “naïve” algorithms can lead to a skewed ranking, our results suggest that there exist simple algorithms for ranking and suggesting items that lead to good performance (a toy suggestion-rule sketch follows this entry). |
Web
Mining |
2009/J2EE |
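As the toy suggestion-rule sketch referenced in the entry above (this particular rule is an assumption, not necessarily one of the paper's algorithms), the following Java snippet samples the suggestion set with probability proportional to observed selection counts instead of always showing the current top items, which softens the reinforcement of early leaders.

// Toy suggestion rule: sample suggested items with probability proportional to
// their observed selection counts, rather than always showing the current top
// items. The item set, counts, and suggestion size are assumptions.
import java.util.ArrayList;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

public class ProportionalSuggester {

    static List<String> suggest(Map<String, Integer> counts, int k, Random rng) {
        Map<String, Integer> remaining = new LinkedHashMap<>(counts);
        List<String> suggestion = new ArrayList<>();
        for (int pick = 0; pick < k && !remaining.isEmpty(); pick++) {
            int total = remaining.values().stream().mapToInt(Integer::intValue).sum();
            int r = rng.nextInt(total) + 1;          // weighted draw without replacement
            for (Iterator<Map.Entry<String, Integer>> it = remaining.entrySet().iterator(); it.hasNext(); ) {
                Map.Entry<String, Integer> e = it.next();
                r -= e.getValue();
                if (r <= 0) {
                    suggestion.add(e.getKey());
                    it.remove();
                    break;
                }
            }
        }
        return suggestion;
    }

    public static void main(String[] args) {
        Map<String, Integer> tagCounts = new LinkedHashMap<>();
        tagCounts.put("java", 40);
        tagCounts.put("networks", 25);
        tagCounts.put("security", 20);
        tagCounts.put("sensors", 15);
        System.out.println("Suggested tags: " + suggest(tagCounts, 2, new Random()));
    }
}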
49. |
Monitoring the
Application-Layer DDoS Attacks for Popular Websites Abstract: Distributed denial of service (DDoS)
attack is a continuous and critical threat to the Internet. Evolving from lower-layer attacks, new application-layer DDoS attacks that utilize legitimate HTTP requests to overwhelm victim resources are much harder to detect. The case may be
more serious when such attacks mimic or occur during the flash crowd event of
a popular Website. Focusing on the detection for such new DDoS attacks, a
scheme based on document popularity is introduced. An Access Matrix is
defined to capture the spatial-temporal patterns of a normal flash crowd.
Principal component analysis and independent component analysis are applied
to abstract the multidimensional Access Matrix. |
||
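To illustrate the Access Matrix idea in the simplest terms (the log format and slot length are assumptions, and this is not the paper's implementation), the snippet below counts requests per document per time slot, producing the kind of matrix that principal and independent component analysis would then be applied to.

// Builds a toy "access matrix": rows are time slots, columns are documents, and
// each cell counts requests for that document in that slot. Log format and slot
// length are assumptions; PCA/ICA would then be applied to this matrix.
public class AccessMatrixSketch {

    public static void main(String[] args) {
        // Each request: {timestampSeconds, documentId}.
        int[][] requests = {{3, 0}, {5, 1}, {12, 0}, {14, 0}, {18, 2}, {25, 1}, {27, 1}};
        int slotSeconds = 10;     // assumed time-slot length
        int numDocs = 3, numSlots = 3;

        int[][] accessMatrix = new int[numSlots][numDocs];
        for (int[] req : requests) {
            int slot = req[0] / slotSeconds;
            accessMatrix[slot][req[1]]++;
        }

        for (int s = 0; s < numSlots; s++) {
            StringBuilder row = new StringBuilder("slot " + s + ":");
            for (int d = 0; d < numDocs; d++) row.append(' ').append(accessMatrix[s][d]);
            System.out.println(row);
        }
    }
}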
50. |
Multipath Dissemination in
Regular Mesh Topologies Abstract: Mesh
topologies are important for large-scale peer-to-peer systems that use
low-power transceivers. The Quality of Service (QoS) in such systems is known
to decrease as the scale increases. We present a scalable approach for
dissemination that exploits all the shortest paths between a pair of nodes
and improves the QoS. Despite the presence of multiple shortest paths in a
system, we show that these paths cannot be exploited by spreading the
messages over the paths in a simple round-robin manner; nodes along one of
these paths will always handle more messages than the nodes along the other
paths. |
Distributed Computing |