Technical Reports


IITB/CSE/2017/December/87, TR-CSE-2017-87
Pushing The Envelope for Boolean Functional Synthesis
S. Akshay, Supratik Chakraborty, Shubham Goel, Sumith Kulal, Shetal Shah



Boolean functional synthesis using AIGs and CEGAR has been recently
proposed as a promising alternative to synthesis based on BDDs or on
DPLL+CDCL style clausal reasoning. In this paper, we delve deeper into
the AIG+CEGAR approach and propose techniques that significantly improve
the performance of Boolean functional synthesis, sometimes by orders
of magnitude vis-a-vis state-of-the-art tools. This is achieved by a
combination of algorithmic modifications resulting from an improved
theoretical analysis of the AIG+CEGAR approach. Our approach also
harnesses the power of recently proposed efficient almost-uniform
samplers to improve the performance of synthesis.
Empirical evaluation on a suite of benchmarks shows that our approach
outperforms existing state-of-the-art Boolean functional synthesis
tools on most of these benchmarks.
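
For readers unfamiliar with the CEGAR loop at the heart of this line of work, the following is a minimal, self-contained sketch of counterexample-guided synthesis of a Boolean function from a relational specification. It is an illustration only: the exhaustive truth-table search and the toy XOR specification stand in for the AIG representations and SAT-based checks that actual synthesis tools use.

```python
# Minimal CEGAR loop for Boolean functional synthesis (illustration only).
# Goal: given a relational spec F(x, y), synthesize y = f(x) such that
# whenever some y makes F(x, y) true, F(x, f(x)) is true. The exhaustive
# search below stands in for the AIG/SAT machinery used by real tools.
from itertools import product

def synthesize(spec, n_inputs):
    # Candidate function represented as a lookup table, initially all-zero.
    f = {x: 0 for x in product([0, 1], repeat=n_inputs)}
    while True:
        # Counterexample check: an x where the candidate fails but some y works.
        cex = next((x for x in f
                    if not spec(x, f[x]) and any(spec(x, y) for y in (0, 1))),
                   None)
        if cex is None:
            return f          # candidate is correct on all inputs
        # Refine: patch the candidate on the counterexample point.
        f[cex] = 1 - f[cex]

# Toy spec: y must equal the XOR of the two inputs.
spec = lambda x, y: y == (x[0] ^ x[1])
table = synthesize(spec, 2)
assert all(spec(x, y) for x, y in table.items())
```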


IITB/CSE/2017/March/85, TR-CSE-2017-85
Kleene Theorems for Labelled Free Choice Nets without product acceptance condition
Ramchandra Phawade

In an earlier work [PhaLod14], we provided expressions for free choice nets having the 'distributed choice property', which makes the nets representable by direct products. In a recent work [Pha16], we gave equivalent syntax for a larger class of free choice nets obtained by dropping the 'distributed choice property', thus syntactically characterizing nets which are not representable by direct products.

In both [PhaLod14, Pha16], the classes of free choice nets were
restricted by a 'product condition' on the set of final markings.
In this paper we do away with this restriction and give expressions
for the resultant classes of nets.
We work with 1-bounded, labelled free choice nets given with an S-cover,
where the labels come from a distributed alphabet and S-components of
the associated S-cover respect the alphabet distribution.


IITB/CSE/2017/March/84, TR-CSE-2017-84
Ramsey-Based Inclusion Checking for Dense-Stack Visibly Pushdown Automata
Devendra Bhave and Ramchandra Phawade

Visibly pushdown automata are popular as they are closed under Boolean operations and determinization.
There exist multiple notions of timed pushdown systems, such as recursive timed automata and dense-time pushdown automata.
We explore a generalization of visibly pushdown automata over infinite words with a parity acceptance condition, in which stack elements carry real-valued time stamps, named dense-stack visibly pushdown automata.
We prove its closure under union and intersection and show that
inclusion checking for this class is decidable using Ramsey-based techniques.


IITB/CSE/2016/October/83, TR-CSE-2016-83
Assumption Propagation through Annotated Programs
Dipak L. Chaudhari and Om Damani

In the correct-by-construction programming methodology, programs are incrementally derived from their
formal specifications, by repeatedly applying transformations to partially derived programs. At an intermediate stage in a derivation, users may have to make certain assumptions to proceed further. To ensure
that the assumptions hold true at that point in the program, certain other assumptions may need to be
introduced upstream as loop invariants or preconditions. Typically these other assumptions are made in an
ad hoc fashion and may result in unnecessary rework, or worse, complete exclusion of some of the alternative
solutions. In this work, we present rules for propagating assumptions through annotated programs. We show
how these rules can be integrated in a top-down derivation methodology to provide a systematic approach
for propagating the assumptions, materializing them with executable statements at a place different from
the place of introduction, and strengthening loop invariants, all with minimal additional proof effort.
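
As a concrete, hypothetical illustration of the idea, consider an assumption made during a derivation, propagated upstream as a precondition, and alternatively materialized with an executable statement elsewhere. Python assertions stand in for the formal annotations used in the paper.

```python
# Hypothetical illustration of assumption propagation (not the paper's notation).
# While deriving the body, suppose we assume n > 0 so that the loop below
# establishes total = xs[0] + ... + xs[n-1] without a special empty case.

def sum_first(xs, n):
    # The assumption, propagated upstream as a precondition of the routine:
    assert 0 < n <= len(xs), "precondition introduced by propagation"
    total = xs[0]                    # relies on n > 0
    for i in range(1, n):            # invariant: total == sum(xs[:i])
        total += xs[i]
    return total

def sum_first_materialized(xs, n):
    # Alternative: materialize the assumption with an executable statement
    # at a place different from where it was introduced.
    if n <= 0:
        return 0                     # discharges the n > 0 assumption locally
    return sum_first(xs, n)

assert sum_first([3, 1, 4], 2) == 4
assert sum_first_materialized([3, 1, 4], 0) == 0
```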


IITB/CSE/2016/October/82, TR-CSE-2016-82
WebQ: A Virtual Queue For Improving User Experience During Web Server Overload
Murali Suresh, Ravi Shankar Mondal, Stanly Thomas, Mythili Vutukuru

While several techniques exist today to build high-capacity web
servers, little attention is paid to the fact that servers often crash
when faced with transient overload, causing user experience to degrade
sharply when incoming load exceeds capacity. Existing overload control
mechanisms focus on some form of admission control to protect the web
server from overload. However, all such techniques result in excess
user requests failing, and there is no feedback to the frustrated user
about when to retry. This paper describes WebQ, a system
consisting of two web proxies that together simulate a virtual queue
of web requests and shape incoming load to match server
capacity. Users accessing a server protected by WebQ receive an HTTP
redirect response specifying a wait time in the virtual queue, and are
automatically redirected to the web server upon expiration of the wait
time. The wait times are calculated using an estimate of the server's
capacity that is computed by WebQ. Users in the virtual queue are
provided with a secure cryptographic token, which is checked to
guarantee that the user has waited the prescribed time in the queue,
without having to maintain per-user state. The design of the WebQ
proxies lends itself to a distributed implementation, making the
system itself scalable and robust to overload. Our experiments with
our WebQ prototype show that, with WebQ in place, users experience
zero server failures and significantly better response times from a
web server, even when the peak load is several times its capacity.
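
The abstract does not spell out the token construction; one plausible minimal realization is an HMAC binding a client identifier to its dequeue time, which lets a proxy verify the wait without keeping per-user state. All names and fields below are assumptions.

```python
# Hypothetical sketch of WebQ-style stateless wait tokens using an HMAC.
# The proxy issues a token binding a client id to its dequeue time; on retry
# it verifies the signature and that the wait has elapsed - no per-user state.
import hmac, hashlib, time

SECRET = b"proxy-shared-secret"  # assumed shared between the WebQ proxies

def issue_token(client_id: str, wait_seconds: int) -> str:
    dequeue_at = int(time.time()) + wait_seconds
    msg = f"{client_id}:{dequeue_at}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"{client_id}:{dequeue_at}:{sig}"

def verify_token(token: str) -> bool:
    client_id, dequeue_at, sig = token.rsplit(":", 2)
    msg = f"{client_id}:{dequeue_at}".encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    # Valid only if the signature matches and the prescribed wait has elapsed.
    return hmac.compare_digest(sig, expected) and time.time() >= int(dequeue_at)

tok = issue_token("client-42", wait_seconds=0)
assert verify_token(tok)
```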


IITB/CSE/2016/August/81, TR-CSE-2016-81
Combining Free Choice and Time in Petri Nets
S. Akshay, Loïc Hélouët and Ramchandra Phawade

Time Petri nets (TPNs) (Merlin74) are a classical extension of Petri nets with timing constraints attached to transitions, for which most verification problems are undecidable.
We consider TPNs under a strong semantics with multiple enabling of transitions. We focus on a structural subclass of unbounded TPNs, where the underlying untimed net is free choice, and show that it enjoys nice properties in the timed setting under a multi-server semantics. In particular, we show that the questions of firability (whether a chosen transition can fire) and termination (whether the net has a non-terminating run) are decidable for this class. We then consider the problem of robustness under guard enlargement (Puri00), i.e., whether a given property is preserved even if the system is implemented on an architecture with imprecise time measurement. This question was studied for TPNs in (Akshay et al. 2016), and decidability of several problems was obtained for bounded classes of nets. We show that robustness of firability is decidable for unbounded free choice TPNs with a multi-server semantics.


IITB/CSE/2016/June/80, TR-CSE-2016-80
A Perfect class of Context-Sensitive Timed Languages
Devendra Bhave, Vrunda Dave, S. N. Krishna, Ramchandra Phawade, Ashutosh Trivedi

Perfect languages---a term coined by Esparza, Ganty, and Majumdar---are classes of languages that are closed under Boolean operations and enjoy a decidable emptiness problem. Perfect languages form the basis for decidable automata-theoretic model checking for the respective class of models. Regular languages and visibly pushdown languages are paradigmatic examples of perfect languages. Alur and Dill initiated the language-theoretic study of timed languages and introduced timed automata, capturing a timed analog of regular languages. However, unlike their untimed counterparts, timed regular languages are not perfect. Alur, Fix, and Henzinger later discovered a perfect subclass of timed languages recognized by event-clock automata. Since then, a number of perfect subclasses of timed context-free languages, such as event-clock visibly pushdown languages, have been proposed. There exist examples of perfect languages even beyond context-free languages: La Torre, Madhusudan, and Parlato characterized the first perfect class of context-sensitive languages via multistack visibly pushdown automata with an explicit bound on the number of stages, where in each stage at most one stack is used. In this paper we extend their work to timed languages by characterizing a perfect subclass of timed context-sensitive languages, and provide a logical characterization for this class of timed languages.


IITB/CSE/2016/May/79, TR-CSE-2016-79
Kleene Theorem for Labelled Free Choice Nets without Distributed Choice
Ramchandra Phawade

For 1-bounded nets, equivalent syntax has been given by Grabowski (1981), Garg and Ragunath (1992), and other authors (Lodaya 2006). We work with 1-bounded, labelled free choice nets given with an S-cover, where the labels come from a distributed alphabet and the S-components of the associated S-cover respect the alphabet distribution. In earlier work (Phawade and Lodaya, PNSE 2014), we provided expressions for free choice nets having the 'distributed choice property', which makes the nets representable by direct products. These expressions have 'pairings' of blocks of derivatives--representing automaton states--which specify how synchronizations take place. In this work, we give equivalent syntax for a larger class of free choice nets obtained by dropping the 'distributed choice property'. Hence we also deal with nets which are not representable by direct products. Now 'pairings' relate tuples of blocks of derivatives and their effects--representing automaton transitions--in our expressions.


IITB/CSE/2016/January/78, TR-CSE-2016-78
Design, Implementation and Performance Analysis of Highly Efficient Algorithms for AES Key Retrieval in Access-driven Cache-based Side Channel Attacks
Ashokkumar C, M. Bhargav Sri Venkatesh, Ravi Prakash Giri, Bernard Menezes

Leakage of information between two processes sharing the same processor cache has been exploited in many novel approaches targeting various cryptographic algorithms. The software implementation of AES is an especially attractive target since it makes extensive use of cache-resident table lookups. We consider two attack scenarios where either the plaintext or the ciphertext is known. We employ a multi-threaded spy process and ensure that each time slice provided to the victim (running AES) is small enough so that it makes a very limited number of table accesses. We design and implement a suite of algorithms to deduce the 128-bit AES key using as input the set of (unordered) cache line numbers captured by the spy threads in an access-driven cache-based side channel attack. Our algorithms are expressed using simple relational algebraic operations and run in under a minute. Above all, our attack is highly efficient: we demonstrate recovery of the full AES key given only about 6-7 blocks of plaintext or ciphertext (theoretically, even a single block would suffice). This is a substantial improvement over previous cache-based side channel attacks that require between 100 and a million encryptions. Moreover, our attack supports varying cache hit/miss observation granularities, does not need frequent interruptions of the victim, and will work even if the victim makes up to 60 cache accesses before being interrupted. Finally, we develop analytic models to estimate the number of encryptions/decryptions required as a function of access granularity and compare model results with those obtained from our experiments.
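
As background for how unordered cache line observations constrain the key, the following simplified sketch applies the classic first-round analysis: with 64-byte cache lines and 4-byte T-table entries, an observed line reveals the high nibble of plaintext XOR key, and intersecting candidate sets across blocks isolates each key byte's high nibble (the low nibbles require second-round analysis; the paper's relational-algebra algorithms go considerably further).

```python
# Simplified first-round analysis for an access-driven attack on AES
# (illustration; the paper's algorithms are more general and complete).
# With 64-byte lines and 4-byte T-table entries, a cache line covers 16
# entries, so an observed line index reveals the high nibble of p XOR k.

def candidates_for_byte(observations):
    # observations: list of (plaintext_byte, set_of_observed_line_indices)
    cands = set(range(256))
    for p, lines in observations:
        # Key bytes consistent with at least one observed line for this block.
        consistent = {((((p >> 4) ^ line) << 4) | low)
                      for line in lines for low in range(16)}
        cands &= consistent
    return cands

# Toy demo: the true key byte's high nibble survives intersection.
key_byte = 0x7A
obs = []
for p in (0x00, 0x35, 0xC1, 0x9E, 0x4D, 0xF2):
    line = (p ^ key_byte) >> 4          # line actually touched by the victim
    obs.append((p, {line}))             # spy observed exactly this line
cands = candidates_for_byte(obs)
assert key_byte in cands and len(cands) <= 16   # high nibble pinned down
```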


IITB/CSE/2015/December/77, TR-CSE-2015-77
A Logical Characterization for Dense-time Visibly Pushdown Automata
Devendra Bhave, Vrunda Dave, Shankara Narayanan Krishna, Ramchandra Phawade, Ashutosh Trivedi

Two of the most celebrated results that effectively exploit visual representation to give logical characterization and decidable model checking are visibly pushdown automata (VPA) by Alur and Madhusudan and event-clock automata (ECA) by Alur, Fix and Henzinger. By making the call-return edges visible and the clock-reset operations visible, respectively, VPA and ECA recover decidability of the verification problem for pushdown automata implementations against visibly pushdown automata specifications, and for timed automata implementations against event-clock timed automata specifications. In this work we combine and extend these two works to introduce dense-time visibly pushdown automata, which make both the call-return edges and the clock resets visible. We present a monadic second order logic characterization of these automata and prove the decidability of their emptiness problem, paving the way for the verification of dense-timed pushdown automata implementations against dense-time visibly pushdown automata specifications.


IITB/CSE/2015/December/76, TR-CSE-2015-76
Benchmarking Multiprocessing Parameters in a Virtualized Multi-Core Environment
Chinmay Satish Kulkarni and Purushottam Kulkarni

A virtualized multi-core environment is a multi-core machine running virtual machines with multiple virtual CPUs. Previous research has demonstrated that the performance of multi-threaded, disk intensive, and network intensive applications in such an environment can be greatly improved by correctly tuning certain simple multiprocessing related system parameters like CPU affinity and interrupt affinity. However, there is no clarity on the implications of tuning these parameters together, or of tuning them in an environment where application behaviour is dynamic. We present preliminary results of experiments designed towards filling these gaps and developing a unified framework to tune this set of parameters.
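
Both knobs mentioned above are settable from user space on Linux; a minimal sketch follows (the IRQ number is hypothetical, and writing interrupt affinity requires root).

```python
# Minimal Linux sketch of the two knobs discussed above (values illustrative).
import os

# CPU affinity: pin this process (pid 0 = self) to cores 0 and 1
# (assumes the machine has at least two CPUs).
os.sched_setaffinity(0, {0, 1})
print("affinity:", os.sched_getaffinity(0))

# Interrupt affinity: steer IRQ 42's interrupts to CPU 0. The IRQ number is
# hypothetical - consult /proc/interrupts for real ones on a given machine.
try:
    with open("/proc/irq/42/smp_affinity", "w") as f:
        f.write("1")  # bitmask: CPU 0
except OSError as e:
    print("could not set interrupt affinity (needs root):", e)
```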


IITB/CSE/2015/September/75, TR-CSE-2015-75
On the Approximability of the Minimum Rainbow Subgraph Problem and Other Related Problems
Sumedh Tirodkar and Sundar Vishwanathan

In this paper, we study the approximability of the Minimum Rainbow Subgraph (MRS) problem and other related problems. The input to the problem is an $n$-vertex undirected graph, with each edge colored with one of $p$ colors. The goal is to find a subgraph on a minimum number of vertices which has one induced edge of each color. The problem is known to be NP-hard, and has an upper bound of $O(\sqrt{n})$ and a lower bound of $\Omega(\log n)$ on its approximation ratio.

We define a new problem called the Densest $k$ Colored Subgraph problem, which has the same input as the MRS problem along with a parameter $k$. The goal is to output a subgraph on $k$ vertices which has the maximum number of edges of distinct colors. We give an $O(n^{1/3})$ approximation algorithm for it, and then, using that algorithm, give an $O(n^{1/3}\log n)$ approximation algorithm for the MRS problem. We observe that the MIN-REP problem is in fact a special case of the MRS problem. This also implies a combinatorial $O(n^{1/3}\log n)$ approximation algorithm for the MIN-REP problem. Previously, Charikar et al. showed an ingenious LP-rounding based algorithm with an approximation ratio of $O(n^{1/3}\log^{2/3} n)$ for MIN-REP. It is quasi-NP-hard to approximate the MIN-REP problem to within a factor of $2^{\log^{1-\epsilon} n}$. The same hardness result now applies to the MRS problem. We also give approximation preserving reductions between various problems related to the MRS problem for which the best known approximation ratio is $O(n^c)$, where $n$ is the size of the input and $c$ is a fixed constant less than one.


IITB/CSE/2015/June/74, TR-CSE-2015-74
Booklet with all the abstracts submitted for RS Poster-Mela 2015
-

-


IITB/CSE/2015/June/72, TR-CSE-2015-72
An Empirical Evaluation of Optimization Techniques for Virtual Machine Migration
Senthil Nathan, Umesh Bellur and Purushottam Kulkarni

The pre-copy live migration of a virtual machine (VM) has many uses, including mitigating resource hot-spots on an overloaded physical machine (PM) and evacuating a PM for software and hardware upgrades. In each of these cases, an important requirement is to perform VM migration as quickly as possible. Hence, various optimizations for pre-copy live migration have been proposed in the literature, such as smart stop-and-copy, delta compression, page skipping, deduplication, and data compression, to reduce the migration time. A comprehensive empirical evaluation and comparison of these optimizations in terms of performance and cost does not exist in the current literature. As a result, it is not clear (i) which optimization is best in terms of performance and cost for a given migration scenario and application, and (ii) whether combining optimizations can increase the performance gain and decrease the resource cost, and if so, which combinations should be applied together.

In this paper, we present a comprehensive empirical study to understand and compare the tradeoff between the performance and cost of each optimization. We found that certain application behaviour, such as disk read/write rate, impacts the performance gain. Hence, we establish a correlation between the application behaviour and the optimizations, such that a suitable set of optimizations can be gainfully employed for a given scenario. The empirical study reveals that page skipping is a predominant optimization, as it reduces network traffic by 20% without increasing the CPU utilization. The performance gain due to data compression is very significant (37% reduction in network traffic while increasing CPU utilization by 5x), and hence we can trade off CPU utilization for network bandwidth. The deduplication technique needs to be applied with utmost care, as the increase in CPU utilization is 13x.


IITB/CSE/2015/May/71, TR-CSE-2015-71
On Exploiting Page Sharing in a Virtualized Environment - an Empirical Study of Virtualization Versus Lightweight Containers
Ashish Sonone, Anand Soni, Senthil Nathan, Umesh Bellur

While virtualized solutions are firmly entrenched in cloud data centers to provide isolated execution environments, the chief overhead they suffer from is memory consumption. Even pages that are common to multiple virtual machines (VMs) on the same physical machine (PM) are not shared, and multiple copies exist, thereby draining valuable memory resources and capping the number of VMs that can be instantiated on a PM. Lightweight containers (LWCs), on the other hand, do not suffer from this situation, since virtually everything is shared via copy-on-write semantics. As a result, the capacity of a PM to host LWCs is far higher than for hosting equivalent VMs. In this paper, we evaluate a solution using uKSM to exploit memory page redundancy amongst multiple VMs on a PM executing the same operating system, thereby enabling a comparison of the two technologies as far as their capacity to instantiate multiple instances is concerned. We performed a thorough empirical evaluation of the two solutions and present comprehensive results.


IITB/CSE/2015/April/70, TR-CSE-2015-70
WebQ: A Virtual Queue For Improving User Experience During Web Server Overload
Bhavin Doshi, Chandan Kumar, Pulkit Piyush, Mythili Vutukuru

This paper describes a system for improving user experience when accessing overloaded web servers. While several techniques exist today to build high-capacity web servers, little attention is paid to the fact that servers often crash when faced with transient overload, causing user experience to degrade sharply when incoming load exceeds capacity. Existing overload control mechanisms focus on some form of admission control to protect the web server from overload. However, all such techniques result in user requests failing, either due to timing out or receiving a "service unavailable" message. More importantly, there is no feedback to the frustrated user about when to retry, leading to ad hoc retries. This paper describes WebQ, a system consisting of two web proxies that together simulate a virtual queue of web requests and shape incoming load to match server capacity. Users accessing a server protected by WebQ receive an HTTP redirect response specifying a wait time in the virtual queue, and are automatically redirected to the web server upon expiration of the wait time. The wait times are calculated using an estimate of the server's capacity that is computed by WebQ. Users in the virtual queue are provided with a secure cryptographic token, which is checked to guarantee that the user has waited the prescribed time in the queue. We evaluate the ideas of WebQ using a real prototype implementation and a simulation model. Our experiments show that, with WebQ in place, users experience zero server failures and significantly better response times from a web server, even when the peak load is several times the provisioned capacity of the server.


IITB/CSE/2015/February/69, TR-CSE-2015-69
Towards a Comprehensive Performance Model of Virtual Machine Live Migration
Senthil Nathan, Umesh Bellur, Purushottam Kulkarni

Although many models exist to predict the time taken to migrate a virtual machine from one physical machine to another, our empirical validation of these models has shown the 90th percentile error to be 46% (43 secs) and 159% (112 secs) for KVM and Xen live migration, respectively. Our analysis reveals that these models are fundamentally flawed as they all fail to take into account the following three critical parameters: (i) the writable working set size, (ii) the number of pages eligible for the skip technique, (iii) the relation of the number of skipped pages with the page dirty rate and the page transfer rate, and incorrectly model the key parameter—the number of new pages dirtied per unit time. In this paper, we propose a novel model that takes all these parameters into account. We present a thorough validation with 53 workloads and show that the 90th percentile error in the estimated migration times is only 12% (8 secs) and 19% (14 secs) for KVM and Xen live migration, respectively.
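
For context, the baseline iterative pre-copy model that prior work builds on (and that this paper augments with writable-working-set and page-skip terms) can be simulated in a few lines; all constants below are illustrative.

```python
# Baseline pre-copy model that prior migration-time models build on
# (the paper adds writable-working-set and page-skip terms on top of this).
def precopy_time(vm_pages, dirty_rate, xfer_rate, stop_copy_thresh=1000,
                 max_rounds=30):
    """All quantities are in pages (or pages/sec); constants are illustrative."""
    to_send, total_time = vm_pages, 0.0
    for _ in range(max_rounds):
        t = to_send / xfer_rate                  # time to push this round's pages
        total_time += t
        to_send = min(vm_pages, dirty_rate * t)  # pages dirtied meanwhile
        if to_send <= stop_copy_thresh:          # small enough: stop-and-copy
            break
    return total_time + to_send / xfer_rate

# Converges only when the dirty rate stays below the transfer rate.
print(precopy_time(vm_pages=262144, dirty_rate=5000, xfer_rate=30000))
```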


IITB/CSE/2015/January/68, TR-CSE-2015-68
Adding Inferred Constraints Leads to Runtime Inefficiencies in the Optimization of Piped Water Networks
Nikhil Hooda, Om P. Damani

Pressure-constrained piped water network cost optimization is an important problem with significant real world impact. The main design decision to be made for a piped water network is the choice of pipe diameters from a set of commercially available pipe diameters. Rural water networks in the developing world are typically branched networks with a single water source. For such networks, existing methods either solve the constrained optimization problem heuristically, or optimally solve the specific case of a single pipe segment per link. In the present study, we discuss two approaches to solving the general problem optimally. We show that the approach that attempts to model constraints capturing the inferred structure of the optimal solution results in much worse runtime performance than a more general approach. The general formulation proposed is optimal as well as faster than both the previous formulations for the specific problem of one pipe segment per link and the heuristic approach used to solve the general problem. We have implemented the general formulation in a constraint optimization tool and have made it publicly available.
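
The abstract does not reproduce the general formulation; for orientation, the standard linear program for a branched network with multiple pipe segments per link looks roughly as follows, with made-up costs and head-loss gradients and a single link for brevity.

```python
# Rough sketch of the standard LP for branched pipe networks: choose how much
# of each link to build at each commercial diameter, minimizing cost subject
# to length and pressure constraints (all numbers below are made up).
from scipy.optimize import linprog

# One link of 1000 m feeding a node that needs at least 20 m residual head,
# source head 60 m. Two candidate diameters: (cost Rs/m, head-loss m/m).
diams = [(500, 0.010), (800, 0.004)]
length, src_head, min_head = 1000.0, 60.0, 20.0

c = [cost for cost, _ in diams]                 # minimize build cost
A_eq = [[1.0] * len(diams)]                     # segments must cover the link
b_eq = [length]
A_ub = [[grad for _, grad in diams]]            # total head loss is bounded
b_ub = [src_head - min_head]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, length)] * len(diams))
print(res.x, res.fun)   # metres built at each diameter, total cost
```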


IITB/CSE/2014/April/62, TR-CSE-2014-62
HTTP Web Browsing Traffic Performance in TDMA Mesh Networks
Vishal Sevani, Bhaskaran Raman

TDMA based wireless mesh networks have gained prominence as some recent standards, such as WiMAX and 802.11s, have proposed the use of TDMA based MAC protocols for mesh networks. Although HTTP web browsing traffic constitutes a significant percentage of Internet traffic, there have as yet been no attempts to study the performance of HTTP based web browsing traffic in TDMA mesh networks. HTTP web browsing traffic has different characteristics compared with other types of traffic. In particular, as HTTP traffic consists of a large number of small file transfers (the median file size is 10KB), it can impose high scheduling overhead. As we highlight, HTTP traffic requires that the RTT (round trip time) be small, and also that large flows be allocated a higher share of bandwidth. Given these characteristics of HTTP traffic, it is not clear which protocol design for TDMA mesh networks performs best. In this work we have compared the performance of four different TDMA MAC protocols for HTTP web browsing traffic: two of these protocols follow a distributed scheduling approach, one a centralized approach, and the other a naive static fixed-schedule approach.

We have evaluated the performance of the four protocols under varying load, in different wireless packet loss conditions, for single as well as multi-channel operation, and in the presence versus absence of large HTTP file downloads. A crucial aspect that our results point out is that the two distributed protocols specified by the recent WiMAX and 802.11s standards are incapable of handling wireless packet loss and perform poorly in the presence of large HTTP file downloads. Likewise, the static approach performs poorly under high load and single channel operation. In contrast, the centralized protocol performs well across the different experiment cases that we have considered. However, our analysis reveals that the performance of the centralized protocol can be further improved if we eliminate the flow set-up phase and minimize the delay in propagating the TDMA schedule. Accordingly, we make the required modifications to the centralized protocol and observe that the performance of the modified protocol, which we term the sf-centralized (set-up free centralized) protocol, improves by as much as 19% under moderate load and 47% under high load.


IITB/CSE/2014/April/61, TR-CSE-2014-61
Benchmarking of Complex Event Processing Engine - Esper
Arun Mathew

Esper is an open source Complex Event Processing (CEP) engine. The project website has benchmarking details for ESP queries on a 100 Mbit/s network. In this project we benchmark the CEP features of Esper. We measure the CPU utilization, memory utilization, latency and throughput for different kinds of CEP queries, such as conjunction, disjunction, and sequence.


IITB/CSE/2014/April/60, TR-CSE-2014-60
CONFIDE: Content-based Fixed-sized I/O Deduplication
Sujesha Sudevalayam, Purushottam Kulkarni, Rahul Balani and Akshat Verma

Due to the increased adoption of virtualization-based systems, there is an increase in the inherent content similarity of systems like email servers, virtualized servers and file servers. Inherent content similarity refers to the same content being present at physically different locations. Harnessing content similarity across different physical locations can help avoid duplicate disk I/O requests that fetch the same content repeatedly. This is referred to as I/O deduplication. Generally, caches are used to store the most recently or most frequently accessed blocks so as to avoid disk accesses. These caches are looked up by block number, and are hence referred to as block-based caches. Existing work on I/O deduplication maintains an additional content-based cache in order to serve more disk I/O requests from cache than before. It shows that any given cache can be better utilized if used as a content-based cache rather than a block-based one. However, the extra cache introduces cache exclusivity concerns. In our work, we incorporate intelligent I/O redirection within the storage virtualization engine of the device to manipulate the underlying block-based cache itself as a content-based cache.

We build and evaluate CONFIDE, a storage read-access optimization that identifies content similarity across fixed-sized blocks. This work is applicable to any device containing a storage virtualization layer — a SAN volume controller, a shared object store, or a hypervisor — and our target implementation for the scope of this paper is a virtualized host. The CONFIDE system maintains a metadata store and performs read I/O request redirection such that the underlying cache behaves like a content-based cache. Our trace-based evaluation reveals that CONFIDE always performs equal to or better than a vanilla system, achieving up to 20% better cache-hit ratios and averting up to 80% more disk reads.
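
A toy sketch of the read-redirection idea: if the metadata store says the requested block's content already sits in the cache under another block number, the read is served from the cache instead of from disk. This ignores writes, eviction, and metadata maintenance, which are the hard parts.

```python
# Toy sketch of CONFIDE-style read redirection over a block-based cache:
# if the requested block's content (per the metadata store) already sits in
# the cache under another block number, redirect instead of hitting disk.
class ContentAwareReader:
    def __init__(self, disk, block_to_hash):
        self.disk = disk                    # block number -> bytes
        self.block_to_hash = block_to_hash  # metadata store (assumed given)
        self.cache = {}                     # block number -> bytes
        self.hash_to_block = {}             # content hash -> cached block no.

    def read(self, blk):
        if blk in self.cache:
            return self.cache[blk]          # ordinary block-based hit
        h = self.block_to_hash.get(blk)
        if h in self.hash_to_block:         # same content cached elsewhere:
            return self.cache[self.hash_to_block[h]]   # redirected hit
        data = self.disk[blk]               # unavoidable disk read
        self.cache[blk] = data
        if h is not None:
            self.hash_to_block.setdefault(h, blk)
        return data

disk = {1: b"AAAA", 7: b"AAAA", 9: b"BBBB"}
meta = {1: "hA", 7: "hA", 9: "hB"}
r = ContentAwareReader(disk, meta)
r.read(1)
assert r.read(7) == b"AAAA" and 7 not in r.cache   # served by redirection
```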


IITB/CSE/2014/March/59, TR-CSE-2014-59
Colocation-Aware Modeling of CPU Usage for P2V Transitioning Applications
Sujesha Sudevalayam

Traditional data centers are giving way to virtualization based shared hosting platforms. This requires knowledge of how many resources are required to host a set of virtualized services. Due to the resource overhead incurred by virtualization, it is essential to estimate the virtual resource usage correctly, in order to avoid inefficiency due to excessive provisioning as well as to prevent performance degradation due to over-aggressive multiplexing. Thus, the first and foremost issue that needs to be addressed to provision applications in virtual execution environments is the mapping of resource requirements from physical to virtual environments. In a previous work, we demonstrated the CPU savings when communicating VMs are placed in a co-located manner, as opposed to dispersed. In this work, we apply this knowledge to the P2V (physical to virtual) transition for virtualizing applications and services. We present an automated system that can account for virtualization CPU overheads and predict the virtualized CPU requirement, given the physical resource usage levels. We develop prediction models for two virtualization technologies --- Xen and KVM --- to demonstrate the generality of the modeling approach. We further conduct extensive experiments with synthetic and application benchmarks to validate the models. Our experiments show that with synthetic benchmarks, the 90th percentile error is around 5% absolute CPU for all workload types. With application benchmark testing, the 90th percentile error is within 6%.


IITB/CSE/2014/January/58, TR-CSE-2014-58
A Novel Power Model and Completion Time Model for Virtualized Environments
Swetha P.T. Srinivasan, Umesh Bellur

As power consumption costs take up to half of the operational expenses of a data center, power management is now a critical concern of high performance computing and cloud computing data centers alike. Recent advances in processor technology have provided fine-grained control over the frequency and voltage at which processors operate and increased the dynamic power range of modern computers, thus enabling power-performance negotiations. At the same time, the trend towards virtualization of data centers and HPC environments gives us a clear boundary for control and helps us map virtual machines (VMs) to specific cores of a physical machine. This control, coupled with the ability to set the operating frequency of the cores, gives us the opportunity to play with the power-performance tradeoffs of the VMs in a relatively simple manner. The complexity in effective power management arises from two sources: being able to predict the power consumption at a given performance level accurately, and then using this prediction in VM provisioning. Although many power and performance models exist, the combined effect of frequency and compute resource variations on the power and performance of VMs has not been analyzed thoroughly. Moreover, the models do not perform well for memory or I/O-intensive tasks.

We present an empirically derived power model and a completion time model using linear regression with CPU utilization and operating frequency of the server
as parameters. We have validated the power model by using several processors in the Intel ‘i’-series, the Xeon series as well as the AMD processor in the x4 series and shown that our model predicts power consumption within a range
of 2-7% of the measured power. We have also validated the completion time model by predicting the execution time of six CPU, memory and I/O-intensive benchmarks on four heterogeneous systems and our model predicts the completion time within a range of 1-6% of the observed execution time. We are now in the process of employing these models to control VM provisioning to minimize power consumption while at the same time meeting the service level agreements of the applications executing within VMs. Our work applies to any virtualized environment - from a Hadoop running on a HPC cluster to enterprise applications in Cloud data centers.
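
A minimal version of fitting such a regression model is shown below; the feature set (utilization, frequency, and their interaction term) and the sample readings are assumptions for illustration, not the paper's data.

```python
# Minimal least-squares fit of a power model P = a + b*u + c*f + d*u*f,
# with CPU utilization u and operating frequency f as parameters
# (feature set and sample readings are illustrative, not the paper's data).
import numpy as np

u = np.array([0.1, 0.4, 0.7, 0.9, 0.3, 0.8])        # CPU utilization
f = np.array([1.6, 1.6, 2.4, 2.4, 2.0, 2.0])        # frequency (GHz)
P = np.array([48.0, 55.0, 78.0, 88.0, 56.0, 75.0])  # measured power (W)

X = np.column_stack([np.ones_like(u), u, f, u * f])  # design matrix
coef, *_ = np.linalg.lstsq(X, P, rcond=None)
predict = lambda u, f: np.array([1.0, u, f, u * f]) @ coef
print(coef, predict(0.5, 2.0))
```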


IITB/CSE/2014/January/57, TR-CSE-2014-57
InSight: A Framework for Application Diagnosis using Virtual Machine Record and Replay
Senthilkumaran R and Purushottam Kulkarni

Non-deterministic execution poses several challenges toward the diagnosis (debugging, profiling and execution state mining) of software systems (user-level applications and operating systems). While several techniques using modified libraries, library wrappers, binary instrumentation and memory shadowing exist, we aim to exploit the record and replay technique enabled by virtualization to provide a generalized diagnosis framework. Our solution is motivated by the requirements of reproducibility of execution, no application source code modification, and no or minimal binary instrumentation overheads — all of which are seldom provided by existing techniques. InSight, our diagnosis framework, consists of two stages: the first records the execution state of applications and the virtual machine, and the second replays and analyses the recorded execution. We implement InSight on the Linux kernel-based virtual machine (KVM) platform and, as contributions, implement an efficient record and replay substrate for KVM and a diagnosis framework using this substrate. This paper describes the design and implementation of these components and develops a set of diagnosis tools for potential deadlock detection, lock usage profiling and function profiling. We also present experiments to demonstrate correctness, the low overheads of InSight, and the related diagnosis outcomes.


IITB/CSE/2013/September/56, TR-CSE-2013-56
Redesigning Khardi Rural Piped Water Network Scheme for Sustainability
Varsha Choudhary, Om P. Damani, Rajaram Desai, Aditya Joshi, Monika Kanwat, Manju Kaushal, Swati Kharole , Yogesh Pawde, Prerana Rathore and MilindSohoni

This report presents a performance analysis of the 25-year-old Khardi multi-village drinking water pipeline scheme (MVS) from technical, operational, institutional, financial, and social perspectives.

It also presents the design of a Kundan dam based scheme as a sustainable alternative to the new Bhatsa backwater based scheme for Khardi proposed by MJP, along with a comparative feasibility analysis of the two.

Brief History: The current scheme, based on tail water of the Bhatsa dam, was designed in 1983-84 by MJP, the then WSSB (Water Supply and Sewerage Board), to supply piped drinking water to Khardi and five other villages, namely Umbarkhand, Chanda, Golban, Lahe and Kukambe, in Shahapur taluka of Thane district in Maharashtra. After its implementation in 1990, it was successfully operated by MJP for a period of two years and then handed over to Thane ZP. Since then it has been owned and operated by the ZP.

Current Status: Currently, only three villages, namely Khardi, Lahe and Kukambhe, are serviced throughout the year. The remaining three villages, namely Umbarkhand, Chanda and Golban, are supplied water from the scheme only during the pre-summer and summer months (January to May). Although the scheme is functional, it hardly meets the desired objective of drinking water supply, as outlined below.

• Due to breakages of the old A.C. pipeline on the outskirts of the villages, the performance of the scheme has degraded over the years.

• The beneficiary villages are able to get water once in 2-4 days, only for a period of 45 minutes. This is far from meeting the minimum needs of the people. In summer, Khardi, the tail end village, gets water almost once a week, making people dependent on bore-wells, public wells and private tankers to meet their household water needs at additional financial cost.

• Water treatment is carried out by adding TCL in the ESR/GSR or open wells of individual villages, due to which it is difficult to maintain the quality of water.

Issues: The scheme faces the following issues:

• There is no Mass Balancing Reservoir (MBR). Hence there is no buffer storage to maintain continuity of service in the face of the frequent power failures prevalent in the area.

• There is no WTP; hence, it is difficult to maintain the quality of water consistently across the villages.

• The elevation of the source location (62 m) is too low compared to the elevation of the villages (220-240 m), jeopardising the financial sustainability of the scheme due to high pumping costs. The tariff recovery is hardly Rs. 4 lakh, while the annual energy cost is around Rs. 16 lakh.

• Due to the frequent power failures prevalent in the area, the ten km long main pipeline empties to a great extent. The loss of service time caused by this, combined with the loss of pressure due to leakages in the AC pipeline, has made the water supply insufficient, erratic and unreliable.

• Since the pipelines are not laid along the road, maintenance and repairs become extremely difficult, consuming a lot of time and effort.

• The operational staff have been working as daily wage earners for many years at a low wage of Rs 154 per day. Hence they have very low morale.

New Khardi Scheme Proposed by MJP: To address the failure of the current scheme, MJP has been working since 2005 on the design of a new scheme based on Bhatsa backwater as the source. The salient features of the scheme are given below.
• The scheme is designed only for Khardi village and the newly developing semi-urban area around it. The scheme does not cover the neighbouring villages like Umbarkhand, Golban and Chanda, which are still dependent on the old scheme.

• The elevation of the source (~122 m) is still very low compared to Khardi (~240 m). Also, the raw water main is 10 km long. This has resulted in high capital as well as energy costs.

• The per capita capital cost of the scheme is Rs. 3092, while the energy cost is Rs. 8.73 out of an operation and maintenance cost of Rs. 13 per 1000 litres of water.

Kundan Dam Based Scheme Proposed by CTARA, IIT Bombay: To address the question of long term financial sustainability raised by the MJP scheme due to its relatively high energy and O&M costs, CTARA, IIT Bombay undertook an exercise of redesigning the Khardi scheme for sustainability by evaluating all the alternate sources, with an emphasis on reducing energy cost. The logical outcome of this exercise was a scheme proposal based on Kundan dam as the source. Kundan dam offers a critical advantage over Bhatsa due to its higher elevation than Khardi, resulting in a drastic reduction in energy cost and thereby paving the way for long term financial viability. The scheme was designed by us using the MJP protocol. The BRANCH software was used for network design, and extensive hydraulic performance simulations were done using EPANET.

• The gravity assisted scheme based on Kundan dam, covering two more villages besides Khardi, is more cost effective than the MJP scheme design based on Bhatsa backwater, with drastic reductions in capital, energy and O&M costs.

• The per capita capital cost of the scheme based on Kundan dam is Rs. 1917, against Rs. 3092 for the MJP scheme.

• The energy cost is Rs. 2.04 and the operating and maintenance cost is Rs. 6.92 per 1000 litres of water, compared to Rs. 8.73 and Rs. 13 respectively for the MJP scheme.

• In light of the reducing demand for irrigation water due to urbanization in the surrounding area, the water from the dam can be used for drinking purposes with permission from the appropriate authorities.

• Even if the current reservation of water for irrigation purposes is maintained, adequate storage for drinking water supply can be created by increasing the height of the dam by 3 m.

In summary, the current scheme is financially unsustainable due to intrinsic design issues, outdated assets and its failure to meet even the bare minimum drinking water needs of the people. The MJP design based on Bhatsa backwater, though better than the current scheme, does not have long term financial sustainability due to high energy costs. Comparatively, the Kundan dam based gravity assisted scheme provides the best option as a viable and cost effective solution, and hence is recommended as a long term solution to the drinking water crisis in the Khardi area. The lesson learnt through this exercise calls for overhauling the current process of source selection by placing energy considerations at a higher priority than mere proximity of the source in every future scheme design.

Keywords: Piped water supply, rural water supply, multi village scheme, piped network design, drinking water, piped water source selection.


IITB/CSE/2013/September/55, TR-CSE-2013-55
Design and Optimization of Piped Water Network for Tanker Fed Villages in Mokhada Taluka
Nikhil Hooda, Rajaram Desai, Om P. Damani

Many parts of the coastal area of Maharashtra face severe drinking water shortage in spite of high rainfall, typically in the range of 2000-3000 mm. Thane district alone has more than 160 tanker fed villages, a majority of which are concentrated in Jawhar, Mokhada and Shahapur talukas, which are dominated by tribal population [4]. Ironically, this area has the distinction of being the major supplier of drinking water to the city of Mumbai through reservoirs like Upper, Lower and Middle Vaitarna and Bhatsa. The Middle Vaitarna project, the latest among them, undertaken by the Municipal Corporation of Greater Mumbai (MCGM), is expected to augment the drinking water supply of Mumbai city by 455 MLD at a cost of about Rs. 2000 crores.

The source of the Karegaon scheme, a piped water scheme based on water supply from the Vaitarna river, will be submerged due to the above project. It is learnt from local Water Supply department officials that MCGM has assured finance for the new Karegaon scheme. However, the scope of the new scheme under design, in terms of coverage, has been left unchanged. There is a cluster of tanker fed villages north of Karegaon (Palsunde to Kiniste) and another one south of Karegaon extending from Vihigaon to Kothare. Earlier studies have indicated that it would be possible to solve the water scarcity problem of the cluster of tanker fed villages at a tiny fraction (~1%) of the cost of the Middle Vaitarna project by appropriately designing a multi village scheme based on Upper Vaitarna as the source.

The present study was undertaken to evaluate the techno-economic feasibility of a multi-village scheme for supplying water to the north cluster of tanker fed villages, including the four villages covered by the Karegaon scheme, based on Upper Vaitarna as the source of water. The average elevation of the 13 tanker fed villages in this area is 363 m. Preliminary simulation studies had indicated that a multi-village scheme based on Middle Vaitarna as source is not feasible due to the large elevation difference between the source, located at an elevation of 285 m, and the villages in the cluster. The much higher elevation of Upper Vaitarna (603 m), on the other hand, offers a critical advantage to the proposed scheme by leveraging gravity flow of water, thereby reducing both the capital cost of piping as well as the energy cost for pumping. A step by step design methodology based on protocols followed by MJP was used for the design and cost estimation of the proposed scheme. The study revealed that the per capita cost of supplying water to the cluster of 17 villages, having a current population of about 18000 and an ultimate design population of 48407, works out to be Rs. 2888, much below the current rural norm of Rs. 3250 and the estimated cost of Rs. 5083 of the Karegaon scheme being designed [1]. Also, the annual O&M cost including energy cost works out to be Rs. 6.35 per 1000 L of water, much below the norm of Rs. 16. Thus, by shifting the source from Middle Vaitarna to Upper Vaitarna, it is possible to drastically bring down the cost, thereby providing a permanent solution to the water scarcity problem of the 13 tanker fed villages. Hence, revamping the scope of the current Karegaon scheme by including the cluster of 13 tanker fed villages and shifting the source to Upper Vaitarna deserves serious high-level consideration. Furthermore, by adopting the 'inclusion' model, it should be possible to solve the water scarcity problem in similar areas well within the current norms of capital cost. It will not only help do away with the recurring cost of supplying water by tankers but also remove the perpetual dependency of a large number of rural people on tanker supplied water at subsistence level.

Keywords:
Piped water supply, rural water supply, multi village scheme, piped network design, drinking water, domestic water, Karegaon, Mokhada, Upper Vaitarna, Maharashtra


IITB/CSE/2013/August/54, TR-CSE-2013-54
Stochastic Model Based Opportunistic Spectrum Access in Wireless Networks
Manuj Sharma and Anirudha Sahoo

We present a stochastic model based opportunistic channel access and transmission scheme for cognitive radio-enabled secondary users on a single data channel. We refer to this scheme as RIBS (Residual Idle Time Distribution based Scheme). In this scheme, the secondary user (SU) randomly senses the channel, and if the channel is sensed idle, it uses the residual idle time distribution to estimate its transmission duration such that the probability of its interference with the primary user (PU) is below a predefined threshold. We derive analytical formulae for computing the average raw SU frames per transmission operation, the average raw SU throughput, and the average sensing overhead for the SU. Using simulation experiments, we show that using RIBS, the SU can use the channel opportunistically without violating the interference probability constraint. We conduct simulation experiments by collecting channel occupancy data due to PU traffic in two different scenarios. In the first scenario, we synthetically generate channel occupancy data due to PU transmissions using two standard distributions (2-phase Erlang and Uniform). We validate the analytical formulations using simulation with this synthetic channel occupancy. In the second scenario, we simulate a TDMA-based PU network that runs realistic applications (VoIP and Web browsing). A pair of sender and receiver SUs uses RIBS to opportunistically transmit on the channel. We analyze the characteristics of the white spaces obtained from these realistic applications, list some of the challenges in using RIBS in realistic scenarios, and provide a comprehensive methodology for using RIBS in a primary network running real applications.
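
The core duration rule can be phrased compactly: transmit for the largest duration T such that P(residual idle time < T) stays below the interference threshold. A sketch follows, assuming an exponential residual idle time distribution purely for illustration.

```python
# Sketch of the RIBS duration rule: transmit for the largest duration T whose
# interference probability P(residual idle time < T) stays below a threshold.
# An exponential idle-time distribution is assumed here for illustration.
import math

def max_tx_duration(cdf_residual_idle, p_max, hi=10.0, iters=60):
    # Binary search for the largest T with cdf(T) <= p_max (cdf is monotone).
    lo = 0.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if cdf_residual_idle(mid) <= p_max else (lo, mid)
    return lo

mean_idle = 2.0   # seconds (illustrative)
cdf = lambda t: 1.0 - math.exp(-t / mean_idle)   # exponential residual idle
T = max_tx_duration(cdf, p_max=0.05)
print(f"transmit for {T:.3f}s, interference prob {cdf(T):.3f}")
```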


IITB/CSE/2013/August/53, TR-CSE-2013-53
Analytical Modeling and Performance Study of TGREP System
Mayank Singhal and Anirudha Sahoo

Interworking of Voice over IP (VoIP) networks with the traditional Public Switched Telephone Network (PSTN) is essential for wide adoption of VoIP services. Telephony gateways act as the interface between PSTN and IP telephony networks. These gateways use the Telephony Gateway REgistration Protocol (TGREP) to advertise their resource information to Location Servers, which are responsible for routing calls to the appropriate gateway. The periodic interval, known as the update interval (UI), at which gateways advertise their resources plays an important role in deciding the performance of a TGREP system in terms of call blocking probability. In this paper, we provide an analytical method to model the TGREP system. To the best of our knowledge, this is the first study which provides accurate modeling of a TGREP system. We validate our model by comparing its characteristics with those of a discrete event simulator developed by us. Using the model, we then analyze the performance of the TGREP system as various operating parameters, such as the update interval, load, call arrival rate and average call length, are varied. This study gives very useful insight into the system. For example, our study shows that a TGREP system should never run above a relative load of 1.0 in order to keep the blocking probability below 0.1. Our study also shows that when the system is running under light load, the update interval can be arbitrarily large. Based on our study, we provide a reference table which will be useful for TGREP system administrators in setting the update interval based on the system configuration and load.
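
The paper's analytical model is not reproduced in the abstract; as a generic point of reference, blocking at a single gateway with n circuits and offered load a erlangs follows the Erlang-B formula, computable by the standard recurrence (the actual TGREP model must additionally account for stale advertisements between updates).

```python
# Generic reference point, not the paper's model: Erlang-B blocking for a
# single gateway with n circuits and offered load a erlangs.
def erlang_b(n, a):
    # Standard recurrence: B(0) = 1, B(k) = a*B(k-1) / (k + a*B(k-1)).
    b = 1.0
    for k in range(1, n + 1):
        b = a * b / (k + a * b)
    return b

# Even at relative load 1.0 (a = n), blocking is already noticeable.
print(erlang_b(30, 30.0))
```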


IITB/CSE/2013/June/52, TR-CSE-2013-52
CMC: A Model Computer Science Curriculum for K-12 Schools
Sridhar Iyer, Farida Khan, Sahana Murthy, Vijayalakshmi Chitta, Malathy Baru and Usha Vishwanathan

Schools in India have been offering Computers as a subject to their students for the last 10 years or so. While the syllabi for 9th – 12th grades are defined by various examination Boards, there is no formal curriculum for 1st – 8th grades. The NCERT (National Council of Educational Research and Training) specifies the computer literacy competencies and skills to be developed at the Primary, Middle and Secondary levels. However, these specifications are broad, leading to variations in their interpretation across textbooks and schools. Hence there is a need for a detailed specification of the computer curriculum. CMC, described in this document, fills this gap.

This document provides: (i) the rationale, underlying philosophy, and key features of CMC, (ii) details of topics and specific learning objectives to be addressed in each grade, (iii) recommendations for teaching-learning strategies, and (iv) one instance of how CMC was implemented through textbooks.


IITB/CSE/2013/February/50, TR-CSE-2013-50
Resource Availability Based Performance Benchmarking of Virtual Machine Migrations
Senthil Nathan, Purushottam Kulkarni, and Umesh Bellur

Virtual machine migration enables load balancing, hot spot mitigation and server consolidation in virtualized environments. Live VM migration can be of two types: adaptive, in which the rate of page transfer adapts to virtual machine behavior (mainly the page dirty rate), and non-adaptive, in which the VM pages are transferred at the maximum possible network rate. In either method, migration requires a significant amount of CPU and network resources, which can seriously impact the performance of both the VM being migrated and other VMs. This calls for building a good understanding of the performance of migration itself and the resource needs of migration. Such an understanding can help select the appropriate VMs for migration while at the same time allocating the appropriate amount of resources for migration. While several empirical studies exist, a comprehensive evaluation of migration techniques with resource availability constraints is missing. As a result, it is not clear which migration technique to employ under a given set of conditions. In this work, we conduct a comprehensive empirical study to understand the sensitivity of migration performance to resource availability and other system parameters (like page dirty rate and VM size). The empirical study (with the Xen Hypervisor) reveals several shortcomings of the migration process. We propose several fixes and develop the Improved Live Migration technique (ILM) to overcome these shortcomings. Over the set of workloads used to evaluate ILM, the network traffic for migration was reduced by 14-93% and the migration time was reduced by 34-87% compared to the vanilla live migration technique. We also quantified the impact of migration on the performance of applications running on the migrating VM and other co-located VMs.


IITB/CSE/2013/February/49, TR-CSE-2013-49
All page sharing is equal, but some sharing is more equal than others
Shashank Rachamalla, Debadatta Mishra, Purushottam Kulkarni

Content based memory sharing in virtualized environments has been proven to be a useful memory management technique. As part of this work, we are interested in studying the trade-off between the extent of sharing opportunities that can be identified (compared to the actual sharing potential) and the overheads of doing so. Our work is based on the Kernel Virtual Machine (KVM) in Linux, which in turn uses Kernel Samepage Merging (KSM) to identify and exploit sharing opportunities. We instrument the Linux kernel and KSM to log sharing and copy-on-write related parameters for accurate estimation of the useful and effective KSM sharing extent. For several combinations of workloads exhibiting different memory usage characteristics, we benchmark KSM performance (achieved savings vs. total possible savings) and CPU overheads for different KSM configurations. The benchmarking results are used to develop an adaptive scheme that dynamically reconfigures the KSM configuration parameters (scan rate and pages per scan) to maximize sharing opportunities at minimal overheads. We evaluate the adaptive technique and compare it with the default settings of KSM to show its efficacy. Based on our experiments we show that the adaptive sharing technique correctly adapts the scan rate of KSM based on sharing potential. With our setup, the adaptive sharing technique yields a factor of 10 improvement in memory savings at a negligible increase in CPU utilization.
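
The abstract describes the adaptive scheme only at a high level; one plausible control loop in its spirit scales the scan rate with the recently observed sharing yield. All constants below are assumptions.

```python
# Plausible control loop in the spirit of the paper's adaptive scheme:
# raise KSM's scan rate while recent scans keep finding shareable pages,
# back off when the yield dries up (all constants are assumptions).
def next_scan_rate(rate, pages_scanned, pages_merged,
                   lo=100, hi=10000, up=2.0, down=0.5, yield_thresh=0.01):
    sharing_yield = pages_merged / max(pages_scanned, 1)
    factor = up if sharing_yield >= yield_thresh else down
    return min(hi, max(lo, int(rate * factor)))

rate = 500
for scanned, merged in [(500, 40), (1000, 55), (2000, 3), (1000, 1)]:
    rate = next_scan_rate(rate, scanned, merged)
    print(rate)   # ramps to 2000 while yield is high, then decays to 500
```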


IITB/CSE/2012/September/48, TR-CSE-2012-48
Building A Low Cost Low Power Wireless Network To Enable Voice Communication In Developing Regions
Vijay Gabale, Jeet Patani, Rupesh Mehta, Ramakrishnan Kalyanaraman, Bhaskaran Raman

In this work, we describe our experiences in building a low cost and low power wireless mesh network using IEEE 802.15.4 technology to provide telephony services in rural regions of the developing world. 802.15.4 was originally designed for a completely different application space of non-real-time, low data rate embedded wireless sensing. We use it to design and prototype a telephony system, which we term Lo3 (Low cost, Low power, Local voice). Lo3 primarily provides two use cases: (1) local and broadcast voice within the wireless mesh network, and (2) remote voice to a phone in the outside world. A Lo3 network can cost as little as $2K, and can last for several days without power “off the grid”, thus making it an ideal choice to meet the cost and power constraints of rural regions. We test-deployed a full-fledged Lo3 system in a village near Mumbai, India for 18 hours over 3 days. We established voice calls with an end-to-end latency of less than 120 ms, an average packet loss of less than 2%, and a MOS of 3.6, which is considered good in practice. The users too gave a positive response to our system. We also tested Lo3 within our department, where it can be used as a wireless intercom service. To our knowledge, Lo3 is the first system to enable such voice communication using 802.15.4 technology, and to show its effectiveness in operational settings.


IITB/CSE/2012/July/46, TR-CSE-2012-46
Host-Bypass: Approximating Node-Weighted Connectivity Problems In Two-Tier Networks
Vijay Gabale, Ashish Chiplunkar

We focus on network design problems in networks which have two distinct sets or tiers of network nodes: hosts and infrastructure nodes. For instance, wireless mesh networks are often composed of infrastructure nodes which relay the data between the client nodes. Similarly, the switches (infrastructure) in data center networks sit between the servers (hosts) to route the information flow. In such two-tier networks, a network designer typically requires paths between hosts in the host-tier through nodes in the infrastructure-tier while minimizing the cost of building the network. A subtle constraint in choosing such paths, which we call the host-tier constraint, requires that no host in the host-tier is an intermediate node on any path. Often, this constraint is necessary for designing network topologies in practice. In this work, we first show that the network design algorithms in prior work are inapplicable to the above described Two-Tier Network Connectivity (TTNC) problem with the host-tier constraint. We then investigate the TTNC problem with the goal of minimizing the cost of building the network. We show that the TTNC problem is NP-hard, and present Host-Bypass, the first algorithm for the TTNC problem with a provable performance bound on cost of $O(h^{\frac{1}{2}+\epsilon} \log^2 h)$, where $h$ is the number of host nodes and $\epsilon > 0$. Host-Bypass has a polynomial running time of $O(n^{1+\frac{1}{\epsilon}} h^{2+\frac{2}{\epsilon}})$ (where $n$ is the total number of nodes), and our simulation study shows that Host-Bypass performs close to optimal and is 30% better than a heuristic derived from an algorithm in prior work.


IITB/CSE/2012/July/45, TR-CSE-2012-45
A Per-File Partitioned Page Cache
Prateek Sharma, Purushottam Kulkarni

In this paper we describe a new design of the operating system page
cache. Page caches form an important part of the memory hierarchy and
are used to access file-system data. In most operating systems, there
exists a single page cache whose contents are replaced according to
an LRU eviction policy.

We design and implement a page cache which is partitioned by
file---the per-file page cache. The per-file page cache provides
the ability to control fine-grained caching parameters such as cache
size and eviction policies for each file. Furthermore, the deterministic
cache allocation and partitioning allows for improved isolation among
processes. We provide dynamic cache partitioning (among files) by
using utility derived from the miss-ratio characteristics of the
file accesses.

We have implemented the per-file page cache architecture in the Linux kernel
and demonstrate its efficacy using disk image files of virtual machines
and different types of file access patterns by processes.
Experimental results show that the utility-based partitioning can
reduce the cache size by up to an order of magnitude while increasing
cache hit ratios by up to 20%. Among other features, the per-file page
cache has fadvise integration, a scan-resistant eviction algorithm,
and reduced lock contention and overhead during the eviction process.
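
The following Python sketch illustrates the flavour of a per-file page cache with per-file LRU and a crude utility-driven repartitioning step; it is a simplified stand-in, not the kernel implementation described above:

    # Minimal per-file page cache sketch: one LRU cache per file plus a
    # utility-based quota assignment (all names here are illustrative).
    from collections import OrderedDict

    class PerFilePageCache:
        def __init__(self):
            self.caches = {}   # file id -> OrderedDict of page no -> data
            self.quota = {}    # file id -> max pages (set by partitioner)

        def access(self, file_id, page_no, load_page):
            cache = self.caches.setdefault(file_id, OrderedDict())
            if page_no in cache:
                cache.move_to_end(page_no)         # LRU hit
                return cache[page_no]
            data = load_page(file_id, page_no)     # miss: read from disk
            cache[page_no] = data
            while len(cache) > self.quota.get(file_id, 64):
                cache.popitem(last=False)          # evict LRU page
            return data

        def repartition(self, utility):
            # utility: file id -> marginal hit-rate gain per extra page,
            # e.g. derived from miss-ratio curves; a crude stand-in for
            # the paper's utility-based partitioning scheme.
            total = sum(utility.values()) or 1.0
            budget = 4096                          # total pages available
            for f, u in utility.items():
                self.quota[f] = max(1, int(budget * u / total))

    cache = PerFilePageCache()
    cache.repartition({"a.img": 0.9, "b.img": 0.1})
    page = cache.access("a.img", 0, lambda f, p: b"\x00" * 4096)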


IITB/CSE/2012/February/42, TR-CSE-2012-42
Singleton: System-wide Page Deduplication in Virtual Environments
Prateek Sharma, Purushottam Kulkarni

We consider the problem of providing memory management in hypervisors and propose Singleton, a KVM-based system-wide page deduplication solution to increase memory usage efficiency. Specifically, we address the problem of double-caching that occurs in KVM: the same disk blocks are cached at both the host (hypervisor) and the guest (VM) page-caches. Singleton's main components are identical-page sharing across guest virtual machines and an implementation of an exclusive cache for the host and guest page-cache hierarchy. We use and improve KSM (Kernel Samepage Merging) to identify and share pages across guest virtual machines. We utilize guest memory-snapshots to scrub the host page-cache and maintain a single copy of a page across the host and the guests. Singleton operates on a completely black-box assumption: we do not modify the guest or even the hypervisor. We show that conventional operating system cache management techniques are sub-optimal for virtual environments, and how Singleton supplements and improves the existing Linux kernel memory management mechanisms. Singleton is able to improve the utilization of the host cache by reducing its size (by up to an order of magnitude) and increasing the cache-hit ratio (by a factor of 2). This translates into better VM performance (40% faster I/O). Singleton's unified page deduplication and host cache scrubbing reclaims large amounts of memory and facilitates higher levels of memory overcommitment. The optimizations to page deduplication we have implemented keep the overhead below 20% CPU utilization.
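
The page-sharing component can be pictured with a small content-hashing sketch in Python (in the spirit of KSM; the actual in-kernel mechanism also compares page contents byte by byte and differs in many details):

    # Illustrative content-based page sharing: pages with identical
    # content are mapped to a single canonical copy.
    import hashlib

    def deduplicate(pages):
        """pages: dict mapping (vm_id, pfn) -> bytes of page content.
        Returns a mapping from each page to a canonical copy."""
        canonical = {}   # content hash -> representative (vm_id, pfn)
        mapping = {}
        for key, content in pages.items():
            h = hashlib.sha1(content).hexdigest()
            if h in canonical:
                mapping[key] = canonical[h]   # share the existing copy
            else:
                canonical[h] = key            # first copy becomes canonical
                mapping[key] = key
        return mapping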


IITB/CSE/2011/December/41, TR-CSE-2011-41
General Caching with Lifetimes
Ashish Chiplunkar, Sundar Vishwanathan

We consider the problem of caching with lifetimes, where a lifetime is specified whenever a page is loaded into the cache. The copy of a page loaded into the cache may be used to serve requests to the same page only until its expiration time. We present a generic method to obtain an algorithm for caching with lifetimes from an algorithm for caching without lifetimes. This method works for any cost model, in online as well as offline settings. In the online (resp. offline) setting, the competitive (resp. approximation) ratio of the resulting algorithm for caching with lifetimes is one more than the competitive (resp. approximation) ratio of the original algorithm for caching without lifetimes. Using this method and existing algorithms for caching without lifetimes, we get an $(H_k+1)$-competitive randomized algorithm and a $2$-approximation algorithm for standard caching with lifetimes, where $k$ is the cache size. This is an improvement over the $(2H_k+3)$-competitive algorithm and the $3$-approximation algorithm given by Gopalan et al. \cite{GopalanKMMV02}. We also get $\mathcal{O}(\log k)$-competitive randomized algorithms for various subclasses of general caching such as weighted caching and the Bit and Fault models, asymptotically matching the lower bound of $H_k$ on the competitive ratio, and a $5$-approximation algorithm for the general offline caching problem.
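
The reduction can be pictured with a small sketch: wrap a lifetime-oblivious caching algorithm and treat expired copies as misses. The interface below is hypothetical, and the paper's actual reduction and cost accounting are more careful:

    # Hedged sketch: an expired cached copy cannot serve a request, so it
    # is dropped and the underlying (lifetime-oblivious) algorithm is
    # asked to handle the resulting miss.
    def serve(page, lifetime, now, cache, expiry, base_alg):
        """cache: set of pages; expiry: page -> expiration time;
        base_alg.load(cache, page) evicts as the lifetime-oblivious
        algorithm would and inserts page (hypothetical interface)."""
        if page in cache and expiry.get(page, 0) > now:
            return "hit"
        cache.discard(page)          # drop any expired copy
        base_alg.load(cache, page)   # delegate the eviction decision
        expiry[page] = now + lifetime
        return "miss"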


IITB/CSE/2011/December/40, TR-CSE-2011-40
Importance Aware Bloom Filter for Set Membership Queries in Streaming Data
Purushottam Kulkarni, Ravi Bhoraskar, Vijay Gabale, Dhananjay Kulkarni

Various data streaming applications such as news feeds and stock trading generate a continuous sequence of data items. In such applications, based on priority, some items are more important than others. For instance, in stock trading, some stocks are more important than others; in a cache-based system, the cost of fetching new objects into the cache is directly related to the size of the objects. Motivated by these examples, our work focuses on developing a time and space efficient indexing and membership query scheme which takes into account data items with different importance levels. In this respect, we propose the Importance-aware Bloom Filter (IBF), an extension of the Bloom Filter (BF), a popular data structure to approximately answer membership queries on a set of items. As part of IBF, we provide a set of insertion and deletion algorithms to make BF importance-aware and to handle a set of unbounded data items. Our comparison of IBF with other Bloom filter-based mechanisms, for synthetic as well as real data sets, shows that IBF performs very well: it has low false positives, and low false negatives for important items. Importantly, we derive properties of IBF analytically; for instance, we show that there exists a tight upper bound on the false positive rate independent of the size of the data stream. We believe IBF provides a practical framework to balance application-specific requirements to index and query data items based on data semantics.
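
As a rough illustration of importance-aware indexing (a simplified stand-in, not the report's exact IBF algorithms), one can vary the number of hash positions with an item's importance; at moderate load, more positions lower an important item's false-positive probability:

    # Illustrative importance-aware Bloom filter: more important items
    # use more hash positions (importance and sizing are placeholders).
    import hashlib

    class ImportanceBloomFilter:
        def __init__(self, m=1 << 16, k_max=8):
            self.bits = bytearray(m)
            self.m, self.k_max = m, k_max

        def _positions(self, item, importance):
            k = max(1, min(self.k_max, importance))   # importance -> #hashes
            for i in range(k):
                h = hashlib.sha1(f"{i}:{item}".encode()).digest()
                yield int.from_bytes(h[:8], "big") % self.m

        def insert(self, item, importance):
            for p in self._positions(item, importance):
                self.bits[p] = 1

        def query(self, item, importance):
            return all(self.bits[p] for p in self._positions(item, importance))

    ibf = ImportanceBloomFilter()
    ibf.insert("AAPL", importance=8)
    print(ibf.query("AAPL", importance=8))   # True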


IITB/CSE/2011/September/39, TR-CSE-2011-39
"At least one" caching
Ashish Chiplunkar, Sundar Vishwanathan

We consider a variant of the caching problem where each request is a set of pages of a fixed size, instead of a single page. In order to serve such a request, we require at least one of those pages to be present in the cache. Each page is assumed to have unit size and unit cost for getting loaded into the cache. We prove lower bounds on the competitive ratio for this problem in both the deterministic and the randomized settings. We also give online algorithms for both settings and analyze their competitive ratios.
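
A naive online strategy for such set-requests, shown below purely for illustration (the report's algorithms and their analysis differ), serves a request if any of its pages is cached and otherwise loads one page of the set:

    # Naive "at least one" serving with LRU eviction (illustrative only).
    from collections import OrderedDict

    def serve(request_pages, cache, capacity):
        """cache: OrderedDict used as an LRU set of pages."""
        for p in request_pages:
            if p in cache:
                cache.move_to_end(p)       # hit: any one cached page suffices
                return "hit"
        if len(cache) >= capacity:
            cache.popitem(last=False)      # evict least recently used page
        cache[next(iter(request_pages))] = True   # load one page of the set
        return "miss"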


IITB/CSE/2011/May/35, TR-CSE-2011-35
Efficient Rule Ensemble Learning using Hierarchical Kernels
Pratik J., J. Saketha Nath and Ganesh R.

This paper addresses the problem of Rule Ensemble Learning (REL), where the goal is simultaneous discovery of a small set of simple rules and their optimal weights that lead to good generalization. Rules are assumed to be conjunctions of basic propositions
concerning the values taken by the input features. From the perspectives of interpretability as well as generalization, it is highly desirable to construct rule ensembles with low training error, having rules that are i) simple, {\em i.e.}, involve few conjunctions and ii) few in number.
We propose to explore the (exponentially) large feature space of all possible conjunctions optimally and efficiently by employing the recently introduced Hierarchical Kernel Learning (HKL) framework. The regularizer employed in the HKL formulation can be interpreted as a potential for discouraging the selection of rules involving a large number of conjunctions -- justifying its suitability for constructing rule ensembles. Simulation results show that, on many benchmark datasets, the proposed approach improves over state-of-the-art REL algorithms in terms of generalization and indeed learns simple rules.
Although this is encouraging, it can be shown that HKL selects a conjunction only if all its subsets are selected, which is highly undesirable. We propose a novel convex formulation which alleviates this problem and generalizes the HKL framework. The main technical contribution of this paper is an efficient mirror-descent based active set algorithm for solving the new formulation. Empirical evaluations on REL problems illustrate the utility of generalized HKL.


IITB/CSE/2011/February/34, TR-CSE-2011-34
Affinity-aware Modeling of CPU Usage for Provisioning Virtualized Applications
Sujesha Sudevalayam, Purushottam Kulkarni

As virtualization-based systems become a reality, an important issue is that of virtual machine migration-enabled consolidation and dynamic resource provisioning. Mutually communicating virtual machines, as part of migration and consolidation strategies, may get co-located on the same physical machine or placed on different machines. In this work, we argue the need for network affinity-aware resource provisioning for virtual machines. First, we empirically demonstrate and quantify the resource savings due to co-location of communicating virtual machines. We also discuss the effect on resource usage due to dispersion of previously co-located virtual machines. Next, we build models based on different resource-usage micro-benchmarks to predict the resource usages when moving from non-co-located placements to co-located placements and vice versa. These generic models can serve as an important input for consolidation and splitting decisions. Via extensive experimentation, we evaluate the applicability of our models using synthetic and real application workloads.


IITB/CSE/2010/December/33, TR-CSE-2010-33
Distributed Fault Tolerance for WSNs with Routing Tree Overlays
Chilukuri Shanti and Anirudha Sahoo

WSNs are inherently power constrained and are often deployed in harsh environments. As such, node death is a possibility that must be considered while designing protocols for such networks. Rerouting of data is generally necessary so that data from the descendant nodes of the dead node can reach the sink. Since slot allocation in TDMA MAC protocols is generally done based on the routing tree, all the nodes must switch to the new routing tree to avoid collisions. This necessitates disseminating the fault information to all the nodes reliably. We propose a flooding algorithm for disseminating fault information through the network reliably even over a lossy channel. Simulation results show that the proposed flooding scheme consumes less energy and converges faster than a simple flooding scheme. Rerouting the data may result in an increased TDMA schedule length. The energy expenditure of the newly assigned parents also increases because they have to relay data from more children than before. We propose two distributed parent assignment algorithms in this paper. The first algorithm minimizes the change in the TDMA schedule and the second balances the load among the newly assigned parents. Simulation of random topologies shows that the increase in the TDMA frame length is smaller with the first algorithm than with random parent assignment. We also observe that the lifetime of the most energy-constrained node (after fault recovery) is longer with the second algorithm than with random parent assignment.
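
The load-balancing idea behind the second algorithm can be sketched as follows (hypothetical names; the actual distributed algorithm in the paper differs): each orphaned child picks, among its feasible parents, the one with the least current load.

    # Greedy load-balancing parent assignment sketch (illustrative only).
    def assign_parents(orphans, candidates, load):
        """orphans: nodes to re-home; candidates: node -> feasible parents;
        load: parent -> number of children currently relayed."""
        assignment = {}
        for child in orphans:
            parent = min(candidates[child], key=lambda p: load[p])
            assignment[child] = parent
            load[parent] += 1      # account for the newly added child
        return assignment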


IITB/CSE/2010/November/32, TR-CSE-2010-32
On Delay-constrained Scheduling in Multi-radio, Multi-channel Wireless Mesh
Vijay Gabale, Ashish Chiplunkar, Bhaskaran Raman

In this work, we consider the goal of scheduling the maximum number of voice calls in a TDMA-based multi-radio, multi-channel mesh network. One of the main challenges in achieving this goal is the difficulty of providing strict (packet-level) delay guarantees for voice traffic in capacity-limited multi-hop wireless networks. In this respect, we propose DelayCheck, an online centralized scheduling and call-admission-control (CAC) algorithm which effectively schedules constant-bit-rate voice traffic in TDMA-based mesh networks. DelayCheck solves the joint routing, channel assignment and link scheduling problem along with the delay constraint. We formulate a relaxed version of this scheduling problem as an Integer Linear Program (ILP), the LP version of which gives us an optimality upper bound. We compare the output of DelayCheck with the LP-based upper bound as well as with two state-of-the-art prior scheduling algorithms. DelayCheck performs remarkably well, accepting about 93\% as many voice calls as the LP-based upper bound. As compared to state-of-the-art algorithms, DelayCheck improves scheduler efficiency by more than 34\% and reduces call rejections twofold. We also demonstrate that DelayCheck efficiently exploits the number of channels available for scheduling. With implementation optimizations, we show that DelayCheck has low memory and CPU requirements, making it practical.
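
A toy LP relaxation in the spirit of the ILP above can be written with the PuLP library; the model below is heavily simplified and hypothetical, with unit-capacity (slot, channel) resources and one acceptance variable per call:

    # Toy LP relaxation sketch (not DelayCheck's actual formulation).
    from pulp import LpProblem, LpMaximize, LpVariable, lpSum

    calls = ["c1", "c2", "c3"]
    demand = {"c1": [("t0", "ch0"), ("t1", "ch0")],   # resources a call occupies
              "c2": [("t0", "ch0")],
              "c3": [("t1", "ch1")]}

    prob = LpProblem("call_admission_relaxation", LpMaximize)
    x = {c: LpVariable(f"x_{c}", lowBound=0, upBound=1) for c in calls}
    prob += lpSum(x.values())                      # maximize accepted calls
    resources = {r for reqs in demand.values() for r in reqs}
    for r in resources:                            # unit capacity per resource
        prob += lpSum(x[c] for c in calls if r in demand[c]) <= 1
    prob.solve()
    print({c: x[c].value() for c in calls})        # LP optimality upper bound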


IITB/CSE/2010/August/31, TR-CSE-2010-31
Ranking in Information Retrieval
Joydip Datta

In this report we study several aspects of information retrieval, with a focus on ranking. First we introduce basic concepts of information retrieval and the components of an information retrieval system. Then we discuss important theoretical models of IR. Web-specific topics like link analysis and anchor text are presented next. We discuss how IR systems are evaluated and different IR evaluation forums. We end the report with a case study of the cross-lingual information retrieval system at IIT Bombay. As future scope, we introduce upcoming trends in Web IR such as user modeling and Quantum IR.


IITB/CSE/2010/June/30, TR-CSE-2010-30
Residual White Space Distribution Based Opportunistic Channel Access Scheme for Cognitive Radio Systems
Manuj Sharma and Anirudha Sahoo

We propose an opportunistic channel access scheme for cognitive radio-enabled
secondary networks. In our work, we model the channel occupancy due to Primary
User (PU) activity as a 2-state Alternating Renewal Process, with alternating
busy and idle periods. Once a Secondary Node (SN) senses the channel idle, the
proposed scheme uses the residual idle time distribution to estimate the transmission
duration in the remaining idle time. The SN transmits the frames within the
transmission duration without further sensing the channel, thereby reducing average
sensing overhead per transmitted frame. The analytical formulations used by
the scheme do not require the SN to know the start of the idle period. We validate
the analytical formulations using simulations, and compare the performance of the
proposed scheme with a Listen-Before-Talk (LBT) scheme.
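
For reference, the residual idle time in an alternating renewal process follows the standard equilibrium (residual-life) distribution from renewal theory; the notation below is generic and not necessarily the report's:

    F_R(t) = \frac{1}{\mu_I} \int_0^{t} \left( 1 - F_I(x) \right) dx

where $F_I$ is the idle-period distribution and $\mu_I$ its mean. A node that senses the channel idle at a random instant can use $F_R$ to estimate how much of the idle period is likely to remain.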


IITB/CSE/2010/April/29, TR-CSE-2010-29
Balancing Response Time and CPU allocation in Virtualized Data Centers using Optimal Controllers
Varsha Apte, Purushottam Kulkarni, Sujesha Sudevalayam and Piyush Masrani

The cloud computing paradigm proposes a flexible payment model, where a cloud user pays only for the amount of resources used. Thus, the cloud provider must allocate only as many resources to a user as are needed to meet client performance requirements. In this paper, we present a control-theoretic approach to tune the CPU resource allocated to a virtual machine (VM) such that application performance metrics are optimized while using minimal CPU resource. Moreover, our approach requires no performance-target inputs to steer the system towards optimal values of CPU share allocation and system performance. We prototype and validate our methodology on a small virtualized testbed using the Xen virtualization environment. A simple server consolidation scheme, which triggers VM migrations and powers servers up and down, uses our control-theoretic resource tuning approach. Preliminary results demonstrate that our controller allocates CPU share optimally, even in the presence of changing load, and can work well with a server consolidation scheme to achieve good application performance.


IITB/CSE/2010/March/25, TR-CSE-2010-25
Reachability of Safe State for Online Update of Concurrent Programs
Yogesh Murarka and Umesh Bellur

Online update helps in reducing maintenance downtime by applying a patch to a running process. To avoid the errors that can arise due to an online update, existing online update solutions interleave the update with process execution at specific states called safe states. This approach works well for sequential processes, but for multithreaded processes it presents the challenge of reaching a safe state across all threads. It turns out that in concurrent processes, even if individual threads can reach a safe state in bounded time, a process may take unbounded time to reach a safe state or may never reach it. Therefore, with existing update solutions a user is uncertain whether a patch can be applied until the user tries to apply it.

In this report we address the problem of safe state reachability for multithreaded processes. We prove that identifying whether a process can reach a safe state is undecidable. Hence, we derive a sufficient (conservative) condition for checking the reachability of a safe state. Our novel thread scheduling technique can force a process to reach a safe state in bounded time if the sufficient condition is met. The proposed approach eliminates the uncertainty of online updates. Complexity analysis of our algorithms shows that the practical implementation of our technique will work well with most real-world applications.


IITB/CSE/2009/October/24, TR-CSE-2009-24
Energy aware contour covering using collaborating mobile sensors
Sumana Srinivasan, Krithi Ramamritham, Purushottam Kulkarni

Environmental sensing systems are useful in the area of disaster management, where the system can provide alerts as well as remediation services in the event of a disaster such as a pollutant spill. Recent advances in robotics technology have led to the development of sensors with the ability to sense and move; we show how such sensors can be deployed to perform tasks such as locating and covering a hazardous concentration contour in a pollutant spill. As mobile sensors consume energy for movement and have limited available energy, it is important for the sensors to plan their movement such that the coverage is maximized and the error in coverage (i.e., area not relevant to the contour) is minimized. In this paper, we address the nontrivial problem of continuously adjusting the direction of movement by presenting a distributed algorithm where a sensor determines its direction of movement by (i) estimating the distance to the contour and the perimeter of the contour using locally sensed information as well as information gathered through collaboration, and (ii) deciding dynamically whether to directly approach the contour or surround it, based on the amount of work remaining to be done by the sensor vis-a-vis the energy remaining. We show that our proposed algorithm has the best coverage vs. error trade-off when compared to algorithms that directly approach or surround the contour.


IITB/CSE/2009/October/23, TR-CSE-2009-23
Light-trains: An Integrated Optical-Wireless Solution for High Bandwidth Applications in High-Speed Metro-Trains
Ashwin Gumaste, Akhil Lodha, Saurabh Mehta, Jianping Wang and Si Qing Zheng

Moving trains represent a voluminous mass of users moving at high velocities that require bandwidth on demand. Wireless solutions alone cannot scale efficiently to provide bandwidth to such fast-moving, voluminous user groups. A new solution is proposed that facilitates dynamic provisioning, good scalability and efficient use of available bandwidth. The proposed solution, called light-trains, seamlessly integrates optical and wireless networking modules to provide an ideal broadband Internet access solution to users in moving trains. The solution identifies the set of requirements such a system must meet – such as fast hand-off, low cost of deployment, mature technology and the ability to provide dynamic bandwidth provisioning (and hence low experienced delay).


IITB/CSE/2009/August/22, TR-CSE-2009-22
Model-Based Opportunistic Channel Access in Cognitive Radio Enabled Dynamic Spectrum Access Networks
Manuj Sharma, Anirudha Sahoo, K.D. Nayak

We propose a model-based channel access mechanism for a cognitive radio-enabled secondary network, which opportunistically uses the channel of an unslotted primary network when the channel is sensed idle. We refer to the primary network as the network which carries the main traffic in a designated spectrum band. We have considered IEEE 802.11 WLAN as a de facto primary network operating in the ISM band. Our study focuses on a single WLAN channel that is used by WLAN clients and a WLAN server for a mix of Email, FTP, and HTTP-based web browsing applications. We model the occupancy of the channel by primary WLAN nodes as an alternating renewal process. When the secondary sender node has one or more frames to send, this model is used by the sender and receiver pair to estimate the residual idle time duration after the channel is sensed as idle. The secondary sender then opportunistically transmits frames in that duration without significantly degrading the performance of the primary WLAN applications. Our simulation results show that the performance of the secondary network is sensitive to the channel sensing duration and that high secondary throughput can be achieved without affecting the primary network significantly by choosing an appropriate value of the channel sensing duration.


IITB/CSE/2008/December/19, TR-CSE-2008-19
Energy Harvesting Sensor Nodes: Survey and Implications
Sujesha Sudevalayam, Purushottam Kulkarni

Sensor networks with battery-powered nodes can seldom simultaneously meet the design goals of lifetime,
cost, sensing reliability and sensing and transmission coverage. Energy-harvesting, converting ambient energy to
electrical energy, has emerged as an alternative to power sensor nodes. By exploiting recharge opportunities and
tuning performance parameters based on current and expected energy levels, energy harvesting sensor nodes have the potential to address the conflicting design goals of lifetime and performance. This paper surveys various aspects of energy harvesting sensor systems---architecture, energy sources and storage technologies, and examples of harvesting-based nodes and applications. The study also discusses the implications of recharge opportunities on sensor node operation and the design of sensor network solutions.


IITB/CSE/2008/October/18, TR-CSE-2008-18
Graph Theoretic Concepts for Highly Available Underlay Aware P2P Networks
Madhu Kumar S D and Umesh Bellur

In our previous work, we demonstrated that underlay awareness is necessary in P2P overlays for the availability of overlay paths, and proved that the problem of forming overlay networks with guaranteed availability is NP-complete. Despite this complexity, underlay aware overlay networks, which use knowledge of the underlay to provide guaranteed levels of availability, can be efficiently formed and maintained under a specified set of constraints. In this technical report, we present the graph theoretic concepts developed, leading to efficient algorithms for availability-guaranteed overlay construction and maintenance.


IITB/CSE/2008/October/17, TR-CSE-2008-17
Towards a Transponder for Serial 100 Gigabit Ethernet using a Novel Optical SERDES
Ashwin Gumaste

A 100 Gbps serial transponder is proposed using off-the-shelf 10 Gbps optics, a novel combining optical SERDES and mature optical logic. Data-center and metro connectivity is simulated, validating the preliminary design.


IITB/CSE/2008/August/16, TR-CSE-2008-16
TDMA Scheduling in Long-Distance WiFi Networks
Debmalya Panigrahi and Bhaskaran Raman

In the last few years, long-distance WiFi networks have been used to provide Internet connectivity in rural areas. The strong requirement to support real-time applications in these settings leads us to consider TDMA link scheduling. Such scheduling takes on a different flavour in long-distance mesh networks due to the unique spatial reuse pattern. In this paper, we consider the FRACTEL architecture for long-distance mesh networks. We propose and substantiate a novel angular interference model. This model is not only practical, but also makes the problem of TDMA scheduling tractable. We then make two significant algorithmic contributions. (1) We first present a simple, 3/2-approximate algorithm for TDMA scheduling. (2) We then consider delay-bounded scheduling and present an algorithm which uses at most one-third more time-slots than the optimal number of slots required without the delay bound. Our evaluation on random as well as real network topologies shows that the algorithms are practical, and are more efficient in practice than their worst-case bounds.


IITB/CSE/2008/July/15, TR-CSE-2008-15
Alternating Renewal Theory-based MAC Protocol for Cognitive Radio Networks
Manuj Sharma, Anirudha Sahoo and K.D. Nayak

We propose a MAC protocol for cognitive radio-enabled secondary networks, which opportunistically use the channels of a CSMA/CA-based primary network. We have considered IEEE 802.11 WLAN as a de facto primary network in the ISM band. Our study focuses on a single WLAN channel that is used by a WLAN client and server for an HTTP-based web browsing application. We use the theory of alternating renewal processes to model the occupancy of the channel by WLAN nodes. This model is used by a pair of secondary nodes to estimate idle time durations on the channel and opportunistically transmit frames during the estimated idle times, maximizing channel throughput without significantly degrading WLAN and web application performance.


IITB/CSE/2008/July/14, TR-CSE-2008-14
Channel Modeling based on Interference Temperature in Underlay Cognitive Wireless Networks
Manuj Sharma, Anirudha Sahoo and K.D. Nayak

Cognitive radio based dynamic spectrum access networks are emerging as a technology to address spectrum scarcity. In this study, we assume that the channel is licensed to some primary (licensed) operator. We consider a sensor network with cognitive radio capability that acts as a secondary (unlicensed) network and uses the channel in underlay mode. The secondary network uses the interference temperature model [3] to ensure that the interference to the primary devices remains below a predefined threshold. We use a Hidden Markov Model (HMM) to model the interference temperature dynamics of the primary channel. On sensing the event of interest, the sensor nodes transmit observation packets (to some central monitoring station), which constitute the interference traffic on the primary channel. One of the secondary sensor nodes periodically measures the interference temperature on the channel. If the measured interference temperature is greater than a predefined threshold, the measuring node records symbol 1; otherwise it records symbol 0. Using a sequence of symbols measured over a period of time, the node constructs and trains an HMM using the Baum-Welch procedure. The trained HMM is shown to be statistically stable. The node uses this trained HMM to predict future sequences and uses them to compute the value of a Channel Availability Metric for the channel, which is used to select a primary channel. Results of applying such trained HMMs to channel selection in a multi-channel wireless network are presented.
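
A minimal forward-algorithm sketch shows how a measured 0/1 symbol sequence can be scored under a trained 2-state HMM; the parameters below are placeholders, whereas the report trains them with the Baum-Welch procedure on real measurements:

    # Forward algorithm: likelihood of an observed symbol sequence under
    # an HMM with initial probs pi, transitions A, and emissions B.
    def forward_likelihood(obs, pi, A, B):
        """obs: list of symbols (0/1); pi[i]: initial prob of state i;
        A[i][j]: transition prob; B[i][o]: emission prob of symbol o."""
        n = len(pi)
        alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
        for o in obs[1:]:
            alpha = [B[j][o] * sum(alpha[i] * A[i][j] for i in range(n))
                     for j in range(n)]
        return sum(alpha)

    pi = [0.6, 0.4]                        # placeholder parameters
    A = [[0.9, 0.1], [0.2, 0.8]]
    B = [[0.8, 0.2], [0.3, 0.7]]           # P(symbol | state)
    print(forward_likelihood([0, 0, 1, 0], pi, A, B))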


IITB/CSE/2008/July/13, TR-CSE-2008-13
Self Organizing Underlay Aware P2P Networks
Madhu Kumar S D, Umesh Bellur and V.K Govindan

Computing nodes in modern distributed applications on large scale networks form an overlay network, a logical abstraction of the underlying physical network. Reliability of data delivery is an essential requirement for many of these applications. Traditional methods of ensuring guaranteed data delivery among overlay nodes are built on the idea of duplicating routing tables at multiple overlay nodes, to handle data delivery even when overlay nodes or links fail. This does not guarantee data delivery, as the new routing path to the destination node may also contain the failed node or link in the underlay path. We claim that reliable data delivery in overlay networks can be achieved only through underlay awareness at the overlay level. We propose that reliable data delivery can be achieved by using highly available overlays. An overlay network with reliability guarantees has to be self organizing, due to the large size, distributed nature and cost of redeployment. The major contribution of this paper is a chordal ring based self organizing overlay which is self-healing up to two node/link failures. We also present asynchronous distributed algorithms that are executed by the nodes that join and leave the overlay, which ensure that the data delivery guarantees of the overlay are maintained. The algorithms are proved to be correct under distributed and concurrent executions, and the time, space and message complexities are O(pathlength), where 'pathlength' is the average path length between overlay nodes, counting underlay nodes of degree greater than two. The scalability of the algorithm is demonstrated with a simulation study.


IITB/CSE/2008/June/12, TR-CSE-2008-12
Correctness of Request Executions in Online Updates of Concurrent Programs
Yogesh Murarka and Umesh Bellur

Online update is a technique which reduces the disruption caused during software update by applying a patch to a running process. It is a challenge to update a process while ensuring that it continues to operate correctly during and after the update. Most continuously running processes concurrently execute the requests arriving in a stream. Online update should guarantee a correct outcome for the requests which execute during and after an update. In this report we provide a solution for the same. We first define a request execution criterion to ensure the correct outcome of the requests which execute concurrently with an update. The report then describes an online update approach that fulfills this criterion. The approach avoids deadlocks during update by analyzing inter-thread dependencies and guarantees that the process remains in a consistent state after the update. Thus, the update procedure is guaranteed to terminate, and the requests which execute during and after an update are ensured correct execution. Our literature survey reveals that this is the first solution to update concurrent programs while requests are executing and ensure correctness.


IITB/CSE/2008/April/11, TR-CSE-2008-11
Evolution of Carrier Ethernet – Technology Choices and Drivers
Ashwin Gumaste, Naresh Reddy, Deepak Kataria and Nasir Ghani

The shift from native Ethernet in LANs to switched Ethernet in WANs has propelled efforts to make Ethernet an ideal candidate technology for transport. Carrier Ethernet has the advantage of being able to be offered as a service to customers. Two questions that we seek to answer in this paper are (1) how Carrier Ethernet can scale as a service in the metropolitan and access premises and (2) what are the key applications that will most benefit from such technology development. We attempt to answer the first question by reviewing the multiple strategies adopted by vendors in deploying Carrier Ethernet. We then show how scalable Carrier Ethernet can be used for multiple service offerings, especially focusing on video distribution/SAN/mobile-backhaul. We then discuss the service requirements which need to be satisfied to make native Ethernet carrier class. The paper also discusses impacting underlying technologies which are crucial in making Carrier Ethernet a success. A simulation study sheds light on the different strategies for implementing Carrier Ethernet.


IITB/CSE/2008/April/10, TR-CSE-2008-10
A CAVALIER Architecture for Metro Data Center Networking
Akhil Lodha and Ashwin Gumaste

Data Center Networking (DCN) is a fast emerging enterprise application that is driving metropolitan bandwidth needs. This paper evaluates the needs of this emerging IT-centric, bandwidth voluminous and service rendering application from a metro optical networking perspective. We identify a set of needs called CAVALIER (Consolidation, Automation, Virtualization, Adaptability, Latency, Integration, Economy and Reliability) that are underlying requirements for a network to support DCN services. The CAVALIER requirements are met by proposing a metro optical solution which is based on light-trail ROADM technology. Light-trails exhibit properties such as dynamic bandwidth provisioning, optical multicasting, sub-wavelength granular support and low cost of deployment. Adapting light-trails to DCN needs is discussed in this paper through engineering requirements and network-wide design. Each aspect of the CAVALIER requirements is then mapped onto light-trail technology. Simulation results demonstrate performance improvements.
Keywords: data-center metro networks, light-trails, ROADMs


IITB/CSE/2008/January/9, TR-CSE-2008-9
Topology Abstraction Algorithms for Light-Mesh - An Alternate Model for PON
Anuj Agrawal, Ashwin Gumaste, Mohit Chamania and Nasir Ghani

Light-mesh, an alternate solution for access networks, is presented. Two heuristic topology algorithms are discussed and simulated, showing cost and performance benefits.


IITB/CSE/2007/November/8, TR-CSE-2007-8
Complexity Analysis of Availability Models for Underlay Aware Overlay Networks
Madhu Kumar S.D, Umesh Bellur

Availability of an overlay network is a necessary condition for event delivery in event based systems. The availability of the overlay links depends on the underlying physical network. Overlay networks have to be underlay aware in order to provide assured levels of availability. We propose two models of availability for overlay networks, Manifest and Latent Availability. In the Manifest availability model, distinct paths at the overlay level are also node disjoint at the underlay, and hence the alternate paths viewed by the overlay are independent in the underlay also. In the Latent availability model, it is only guaranteed that any two overlay nodes have a guaranteed number of node disjoint paths between them in the underlay. We analyze both models for complexity of formation and maintenance, and prove that in the general case, both are NP-complete. We then identify a set of practical constraints applicable to large scale networks. We demonstrate that under these constraints, the latent availability problem becomes solvable in polynomial time. We also introduce the concept of reduced underlays, and further reduce the complexity of the problem of determining latent availability overlays.


IITB/CSE/2007/November/7, TR-CSE-2007-7
Tracking Dynamic Boundary Fronts using Range Sensors
S. Duttagupta, K. Ramamritham and P. Kulkarni and K. M. Moudgalya

We examine the problem of tracking dynamic boundaries occurring in natural phenomena using sensor networks. Remotely placed sensor nodes produce noisy measurements of various points on the boundary using range-sensing. The two main challenges of the boundary tracking problem are energy-efficient boundary estimation from noisy observations and continuous tracking of the boundary. We propose a novel approach which uses discrete estimation of points on the boundary via a regression-based spatial estimation technique and a smoothing interpolation scheme to estimate a confidence band around the entire boundary. In addition, a Kalman Filter-based temporal estimation is used to selectively refresh the estimated boundary at a point only if the boundary is predicted to move out of the previously estimated interval at that point. Our algorithm for dynamic boundary tracking (DBTR), which combines temporal estimation with an aperiodically updated spatial estimation, provides a low overhead solution to track dynamic boundaries and does not require prior knowledge about the nature of the dynamics. Experimental results demonstrate the effectiveness of our algorithm; estimated confidence bands achieve a loss of coverage of less than 2% for smooth boundaries.
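
The temporal estimation step can be pictured with a 1-D constant-velocity Kalman filter that triggers a refresh only when the predicted position leaves the last confidence interval; the model and thresholds below are illustrative, not the paper's:

    # 1-D Kalman filter sketch for predicting a boundary point's motion.
    import numpy as np

    F = np.array([[1.0, 1.0], [0.0, 1.0]])   # constant-velocity dynamics
    H = np.array([[1.0, 0.0]])               # we observe position only
    Q = 0.01 * np.eye(2)                      # process noise
    R = np.array([[0.25]])                    # range-sensing noise

    def predict(x, P):
        return F @ x, F @ P @ F.T + Q

    def update(x, P, z):                      # z: shape-(1,) measurement
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
        x = x + K @ (z - H @ x)
        P = (np.eye(2) - K @ H) @ P
        return x, P

    def needs_refresh(x_pred, band):          # band = (lo, hi) at this point
        lo, hi = band
        return not (lo <= x_pred[0] <= hi)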


IITB/CSE/2007/October/6, TR-CSE-2007-6
A Novel Node Architecture for Light-trail Provisioning in Mesh WDM Metro Networks
Ashwin Gumaste, Admela Jukan, Akhil Lodha, Xiaomin Chen and Nasir Ghani

We propose an efficient node architecture to support light-trails (intelligent-shared-wavelength bus) in mesh WDM metro networks. Simulation results and performance benefits as compared to legacy technologies are shown.


IITB/CSE/2007/October/5, TR-CSE-2007-5
Light-trains: An Integrated Optical-Wireless Solution for High Bandwidth Applications in High-Speed Metro-Trains
Ashwin Gumaste, Akhil Lodha, Jianping Wang, Nasir Ghani and Si Qing Zheng

Moving trains represent a voluminous mass of users moving at high velocities that require bandwidth on demand. Existing solutions, typically based on wireless technology alone, cannot scale efficiently to provide bandwidth to such fast-moving voluminous users. A new approach is proposed that facilitates dynamic provisioning, good scalability and efficient use of available bandwidth. The proposed approach, called light-trains, seamlessly integrates optical and wireless networking techniques to provide an ideal broadband Internet access solution to users in moving trains. We identify the set of requirements that a solution would require – such as fast hand-off, low cost of deployment, mature technology and the ability to provide dynamic bandwidth provisioning (and hence low experienced delay). These requirements influence our choices of technologies for different domains in the network: (1) For access to end-users that reside within the train we use an efficient wireless last-inch solution like WiFi in the Point Coordination Function (PCF) mode. (2) For providing access between the wired backbone network and the train we use a modified WiMax system that gives good throughput and efficient channel utilization capacity in a point-to-point configuration. (3) Finally, to be able to provision bandwidth between the core network and the WiMax base-stations we propose the use of light-trail technology at the optical layer. The light-trail technology allows for dynamic provisioning of bandwidth at the optical layer through a unique node architecture and an out-of-band control protocol. It turns out that the light-trail control protocol is useful for assisting fast hand-offs. The drastically reduced hand-off time enables efficient utilization of the wireless channel even at very high speeds. A protocol that enables end-to-end provisioning across the three domains of technology, namely light-trails, WiMax and WiFi, is also proposed. The proposed protocol is a cornerstone mechanism for providing inter-domain (technology) connectivity in a pragmatic way. Different aspects of the protocol are considered, amongst which delay and efficiency are computed. The protocol, juxtaposed with the system requirements, is simulated extensively. Results pertaining to utilization, delay, efficiency and network-wide performance are all showcased. The viability of the model in providing bandwidth to moving users is shown.


IITB/CSE/2007/July/3, TR-CSE-2007-3
1-Persistent Collision-Free MAC Protocols for Opportunistic Optical Hyperchannels
Jing Chen, Ashwin Gumaste, Jianping Wang and Si Qing Zheng

Recently, a new WDM optical network infrastructure named SMART [1], which is capable of dynamically setting up, modifying, and tearing down optical connections, was proposed. The performance of SMART is determined by the performance of its hyperchannels, which are essentially optical buses or light-trails. Previously proposed MAC protocols for unidirectional optical buses can be used as distributed medium access control protocols for hyperchannels. However, these protocols require optical detection, and they are either unfair or not work-conserving. In this paper, we propose a class of work-conserving, collision-free, detection-free, and fair MAC protocols for opportunistic hyperchannels, which are hyperchannels tailored for SMART. We compare our proposed MAC protocols with known pi-persistent CSMA protocols and a priority-based CSMA protocol by simulation. We show that the performance of our proposed protocols is much better than that of pi-persistent CSMA and priority-based CSMA in terms of throughput and fairness.


IITB/CSE/2007/July/2, TR-CSE-2007-2
Merge-by-Wire: Algorithms and System Support
Gurulingesh Raravi, Vipul Shingde, Ashish Gudhe, Krithi Ramamritham

Automakers are trying to make vehicles more intelligent and safe by embedding processors which can be used to implement by-wire applications for taking smart decisions on the road or assisting the driver in doing the same. Given this proliferation, there is a need to minimize the computing power required without affecting the performance and safety of the applications. The latter is especially important since these by-wire applications are distributed and real-time in nature and hence deal with critical data and involve deadline-bound computations on data gathered from the environment. These applications have stringent requirements on the freshness of data items and the completion time of tasks. Our work studies one such safety-related application, namely Automatic Merge Control (AMC), which ensures safe vehicle maneuvers in the region where two or more roads intersect.

As our contributions, we (i) propose two merge algorithms for AMC: Head of the Lane (HoL) and All Feasible Sequences (AFS); (ii) demonstrate how a DSRC-based wireless communication protocol can be leveraged for the development of AMC; (iii) present a real-time approach towards designing AMC by integrating mode-change and real-time repository concepts to reduce processing power requirements; and (iv) identify the task characteristics of AMC and provide a scheduling strategy to meet their timing requirements. Simulations demonstrate the advantages of using our approach for constructing merge-by-wire systems.


IITB/CSE/2007/July/1, TR-CSE-2007-1
DynaSPOT: Dynamic Services Provisioned Optical Transport Test-bed – Achieving Multi-Rate Multi-Service Dynamic Provisioning using Strongly connected Light-trail (SLiT) Technology
Ashwin Gumaste, Nasir Ghani, Paresh Bafna, Akhil Lodha, Anuj Agrawal, Tamal Das and Si Qing Zheng

We report on the DynaSPOT (Dynamic Services Provisioned Optical Transport) test-bed – a next-generation metro ring architecture that facilitates provisioning of emerging services such as Triple Play, Video-on-Demand (VoD), Pseudo Wire Edge to Edge Emulation (PWE3), IPTV and Data Center Storage traffic. The test-bed is based on the recently proposed Strongly connected Light-trail (SLiT) technology that enables the triple features of dynamic provisioning, spatial sub-wavelength grooming and optical multicasting – features that are quintessential for provisioning of the aforementioned emerging services. SLiT technology entails the use of a bidirectional optical wavelength bus that is time-shared by nodes through an out-of-band control channel. To do so, the nodes in a SLiT exhibit architectural properties that facilitate bus function. These properties at the network side include the ability to support the dual signal flow of drop and continue as well as passive add, while at the client side they include the ability to store data in order to support time-shared access. The latter (client side) improvisation is done through a new type of transponder card – called the trailponder – that provides for storage (electronic) of data and fast transmission (burst-mode) onto the SLiT. Further, in order to efficiently provision services over the SLiT, there is a need for an efficient algorithm that facilitates meeting service requirements. To meet service requirements we propose a dynamic bandwidth allocation algorithm that allocates data time-slots to nodes based on a valuation method. The valuation method is principally based on an auctioning scheme whereby nodes send their valuations (bids) and a controller node responds to bids by sending a grant message. The auctioning occurs in the control layer, out-of-band and ahead in time. The novelty of the algorithm is the ability to take into consideration the dual service requirements of bandwidth request as well as delay sensitivity. At the hardware level, implementation is complex – as our trailponders are layer-2 devices that have limited service differentiation capability. Here, we propose a dual VLAN tag and GFP based unique approach that is used for providing service differentiation at layer-2. Another innovation in our test-bed is the ability to support multi-speed traffic. While some nodes function at 1 Gbps, and others function at 2.5 Gbps (using corresponding receivers), a select few nodes can support both 1 Gbps and 2.5 Gbps operation. This novel multi-speed support coalesced with the formerly mentioned multi-service support is a much needed boost for services in the metro networks. We showcase the test-bed and associated results as well as descriptions of hardware subsystems.

