Israeli Networking Day 2011

Hosted by Google R&D Center, Israel

31 March 2011, 09:30-18:00

Location: Google Tel-Aviv, Levinstein Tower

You are invited to join us for the 6th Israeli Networking Seminar, hosted this year by Google in Tel Aviv. We hope to bring together researchers and technologists in networking and communications from Israeli academia and the high-tech industry for a one-day seminar on networking.

Participation is free, but space is limited. Please register in advance at:

https://services.google.com/fb/forms/ilnetworkingseminar2011/

Registration has now re-opened. Please email Hanoch and David if you haven't registered yet.

See you at the seminar!


Hanoch Levy, Tel Aviv University (hanoch@cs.tau.ac.il)

David Hay, The Hebrew University (dhay@cs.huji.ac.il)


Program schedule (the presenting speaker is listed with each abstract below):

09:30-10:00    Gathering

10:00-10:05    Yossi Matias
               Welcome

10:05-10:30    David Ziegler
               Distributed Peta-Scale Data Transfer

10:30-10:55    Udi Ben-Porat, Anat Bremler-Barr, Hanoch Levy and Bernhard Plattner
               On the Vulnerability of the Proportional Fairness Scheduler to Retransmission Attacks

10:55-11:20    Amir Herzberg and Moti Geva
               QoSoDoS: If You Can't Beat Them Join Them!

11:20-11:45    Udi Weinsberg, Augustin Soule and Laurent Massoulie
               Inferring Traffic Shaping and Policy Parameters using End Host Measurements

11:45-12:05    Coffee Break

12:05-12:15    Boris Oklander and Moshe Sidi
               Modeling and Analysis of System Dynamics and State Estimation in Cognitive Radio Networks

12:15-12:25    Bracha Hod, Gaddi Blumrosen, Tal Anker, Danny Dolev and Boris Rubinsky
               Enhancing RSSI-based Tracking Accuracy in Wireless Sensor Networks

12:25-12:35    Ori Rottenstreich and Isaac Keslassy
               Worst-Case TCAM Rule Expansion

12:35-12:45    Ofer Hermoni, Niv Gilboa, Eyal Felstaine, Yuval Elovici and Shlomi Dolev
               Rendezvous Tunnel for Anonymous Publishing

12:45-13:40    Lunch

13:40-14:05    Dror Rawitz, Yishay Mansour and Boaz Patt-Shamir
               Overflow Management with Multipart Packets

14:05-14:30    Alex Shpiner, Isaac Keslassy, Gabi Bracha, Eyal Dagan, Ofer Iny and Eyal Soha
               A Switch-Based Approach to Throughput Collapse and Starvation in Data Centers

14:30-14:55    Moti Medina, Guy Even, Gregor Schaffrath and Stefan Schmid
               Competitive and Deterministic Embeddings of Virtual Networks

14:55-15:20    Gabriel Scalosub, Isaac Keslassy, Kirill Kogan and Michael Segal
               Providing Performance Guarantees in Multipass Network Processors

15:20-15:45    Michal Segalov, Haim Kaplan, Tzvika Hartman, Avinatan Hassidim and Danny Raz
               How to split a flow?

15:45-16:10    Coffee Break

16:10-16:35    Ittay Eyal, Idit Keidar and Raphael Rom
               LiMoSense - Live Monitoring in Dynamic Sensor Networks

16:35-17:00    Chen Avin, Michael Borokhovich, Keren Censor-Hillel and Zvi Lotker
               Order Optimal Information Spreading Using Algebraic Gossip

17:00-17:25    Zohar Naor and Sajal Das
               Mobile Real-Time Group Communication Service

17:25-17:50    David Hadas, S. Guenender, B. Rochwerger, A. Landau and M. Ben-Yehuda
               Plugging the hypervisor abstraction leaks caused by virtual networking


Directions:

The Google Tel Aviv office is on the 4th floor of Levinstein Tower, 23 Menachem Begin Road, Tel Aviv. It is a 15-minute walk from the HaHagana train station and a 25-minute walk from the Azrieli Center. The office is a few minutes by taxi from both locations.

Parking: If you decide to arrive by car, you will find an underground parking lot below the Levinstein Tower. Please bring the parking ticket with you to the event. Our receptionist will stamp the parking ticket so that you can get a discounted hourly rate.


Previous Israeli Networking Days

  1. 2009: Danny Raz and Ori Gerstel, http://www.cs.technion.ac.il/~danny/INS-2009.htm
  2. 2008: Shlomi Dolev and Idit Keidar, http://www.cs.bgu.ac.il/~dolev/NetSem/
  3. 2007: Isaac Keslassy and Anat Bremler-Barr, http://www.faculty.idc.ac.il/bremler/seminar.htm
  4. 2006: Yuval Shavitt and Reuven Cohen, http://www.eng.tau.ac.il/~shavitt/NetArtzi.html
  5. 2005: Yehuda Afek, David Belz and Israel Cidon, http://www.math.tau.ac.il/~afek/NetArtzi.html


List of Abstracts

Distributed peta-scale data transfer, David Ziegler, Google

Many internal Google systems need to copy huge volumes of data between Google's data centers. Solving this problem requires managing very high traffic volumes, dealing effectively with packet loss, scheduling and prioritizing the use of constrained resources, and avoiding single points of failure. Effingo, Google's internal large-scale copy service, is designed to scale and sustain very high traffic throughput. Effingo features the following characteristics:
- A custom network protocol over UDP.
- Distributed scheduling decisions when managing the use of constrained resources.
- Multi-point topological efficiency.
- High availability even when clusters are down or slow.
- Scalability to many files, many users and very high global throughput.
In this talk I'll describe the challenges of peta-scale data transfer, and how they are addressed using Effingo.

Joint work with many other Google engineers and researchers.

On the Vulnerability of the Proportional Fairness Scheduler to Retransmission Attacks, Udi Ben-Porat, ETH Zurich

Channel-aware schedulers of modern wireless networks – such as the popular Proportional Fairness Scheduler (PFS) – improve throughput performance by exploiting channel fluctuations while maintaining fairness among the users. To simplify the analysis, PFS was introduced and extensively investigated in a model where frame losses do not occur, which is of course not the case in practical wireless networks. Recent studies focused on the efficiency of various implementations of PFS in a realistic model where frame losses can occur. In this work we show that the common straightforward adaptation of PFS to frame losses exposes the system to a malicious attack (which can alternatively be caused by malfunctioning user equipment) that can drastically degrade the performance of innocent users. We analyze the factors behind the vulnerability of the system and propose a modification of PFS for the frame-loss model that is resilient to such attacks while maintaining the fairness properties of the original PFS.
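
For readers unfamiliar with PFS, the classic scheduling rule can be pictured as follows. This is a minimal textbook sketch, not the authors' loss-aware modification: in each slot the scheduler serves the user maximizing the ratio of its instantaneous achievable rate to its exponentially averaged throughput.

```python
# Minimal sketch of the classic Proportional Fairness scheduling rule
# (textbook form; not the authors' loss-aware variant).
def pfs_schedule(inst_rates, avg_thrpt, alpha=0.01):
    """Pick the user maximizing instantaneous rate / average throughput,
    then update the exponentially averaged throughputs."""
    chosen = max(range(len(inst_rates)),
                 key=lambda i: inst_rates[i] / avg_thrpt[i])
    for i in range(len(avg_thrpt)):
        served = inst_rates[i] if i == chosen else 0.0
        avg_thrpt[i] = (1 - alpha) * avg_thrpt[i] + alpha * served
    return chosen

# Example: user 1 currently has the best channel relative to its past service.
rates = [1.0, 2.0, 0.5]      # achievable rates in this slot
history = [1.0, 1.0, 1.0]    # exponentially averaged throughputs
print(pfs_schedule(rates, history))  # -> 1
```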

Joint work with Anat Bremler-Barr, Hanoch Levy and Bernhard Plattner. To be presented at Infocom 2011. [Presentation]

QoSoDoS: If You Can't Beat Them Join Them!, Amir Herzberg, Bar-Ilan U.

We present QoSoDoS, a protocol that ensures QoS over a DoS-prone (best-effort) network. QoSoDoS ensures timely delivery of time-sensitive messages over an unreliable network susceptible to high congestion and network-flooding DoS attacks. QoSoDoS is based on scheduling multiple transmissions of packets while attempting to minimize overhead and load and to avoid creating a DoS condition of its own. We present a model and initial empirical results of a QoSoDoS implementation. Our results show that under typical scenarios, QoSoDoS can handle high congestion and DoS attacks quite well.
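
To get a feel for the reliability/overhead trade-off behind scheduling multiple transmissions, here is a back-of-the-envelope sketch. It is illustrative only and assumes independent per-copy losses; the actual QoSoDoS scheduling policy is described in the paper.

```python
import math

# Illustrative redundancy calculation (not the paper's scheduling policy):
# with independent per-copy loss probability p, k copies arrive in time with
# probability 1 - p**k.
def copies_needed(loss_prob, delivery_target):
    """Smallest k with 1 - loss_prob**k >= delivery_target."""
    return math.ceil(math.log(1 - delivery_target) / math.log(loss_prob))

# Under a flooding attack pushing loss to 50%, a 99.9% deadline-delivery
# target costs 10 copies; at 30% loss, 6 copies suffice.
print(copies_needed(0.5, 0.999))  # -> 10
print(copies_needed(0.3, 0.999))  # -> 6
```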

Joint work with Moti Geva. To be presented at Infocom 2011.

Inferring Traffic Shaping and Policy Parameters using End Host Measurements, Udi Weinsberg, Tel Aviv U.

The increasing adoption of high-speed Internet connectivity in homes has led to the development of bandwidth-hungry applications. This, in turn, induces ISPs to protect their core networks by deploying traffic-shaping devices. End users, ISPs and regulators all need to better understand the shaping policies enforced by the network.

The paper presents a method for inferring flow discrimination and shaping parameters in the presence of cross traffic using active probing. The key concept is a stochastic comparison of the packet inter-arrival times and measured bandwidth of a baseline flow and the measured flow. We present Packsen, a framework designed to provide high detection accuracy by sending interleaved flows at a very precise bandwidth, and use it for measurements on a local testbed and on PlanetLab. Evaluation shows the accuracy and robustness of the proposed method for detecting traffic shaping and inferring its parameters.
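
As a rough illustration of such a stochastic comparison, the sketch below flags a flow whose packet inter-arrival distribution deviates from a baseline flow's, using a two-sample Kolmogorov-Smirnov test. The actual Packsen statistic and decision thresholds may differ.

```python
import numpy as np
from scipy.stats import ks_2samp

# Hypothetical sketch of comparing a baseline flow to a possibly shaped flow
# via packet inter-arrival times (Packsen's exact statistic is in the paper).
def looks_shaped(baseline_ts, measured_ts, alpha=0.01):
    """Flag the measured flow if its inter-arrival distribution differs
    significantly from the baseline flow's (two-sample KS test)."""
    base_gaps = np.diff(np.sort(baseline_ts))
    meas_gaps = np.diff(np.sort(measured_ts))
    stat, p_value = ks_2samp(base_gaps, meas_gaps)
    return p_value < alpha

# Example: a shaper paces packets out at a fixed rate.
rng = np.random.default_rng(0)
baseline = np.cumsum(rng.exponential(0.01, 1000))  # unshaped arrivals
shaped = np.cumsum(np.full(1000, 0.02))            # paced at a constant gap
print(looks_shaped(baseline, shaped))  # -> True
```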

Joint work with Augustin Soule and Laurent Massoulie (Technicolor, Paris). To be presented at the Infocom 2011 mini-conference.

Modeling and Analysis of System Dynamics and State Estimation in Cognitive Radio Networks, Boris Oklander, Technion

Cognitive radio networks (CRNs) are the subject of multidisciplinary research in wireless networking, digital communications, artificial intelligence, and other fields. Special emphasis is given to investigating the capability of CRNs to improve RF spectrum utilization by means of dynamic spectrum access (DSA). This talk presents a new analytic framework for modeling the adaptive behavior of CRNs. In particular, we develop a model of a CRN that incorporates sensing the electromagnetic environment, estimating the states of the communication channels, making decisions based on some policy, and dynamically switching transmissions to channels that are not currently occupied by primary users. The proposed framework is analyzed using mathematical tools of queueing theory, and the performance of the CRN is evaluated. The analysis shows the impact of the sensing rate and the system dynamics on the waiting times of secondary users.

Joint work with Moshe Sidi. Presented at PIMRC'10 (CogCloud'10 Workshop).

Enhancing RSSI-based Tracking Accuracy in Wireless Sensor Networks, Bracha Hod, Hebrew U.

In recent years, the demand for high-precision tracking systems has significantly increased in the field of Wireless Sensor Networks (WSNs). WSN tracking systems that utilize radio technology can be based on the Received Signal Strength Indicator (RSSI).

RSSI is a measurement of the signal power on a radio link. It has been extensively used as one of the ranging techniques in WSNs due to its simplicity, low power consumption and low cost.

We propose a new RSSI-based tracking system that consists of a few sensor nodes deployed in close proximity and transmitting data at a relatively high rate.

The close proximity enables accurate conversion of RSSI measurements to range estimates, and the high transmission rate enables exploiting the spatial and temporal diversity of the RSSI measurements. The system includes an advanced calibration scheme, range estimation between the nodes and location estimation. Advanced statistical and signal-processing methods are used for optimal transmit power level selection, to mitigate channel distortion and to compensate for packet loss. The system is evaluated in indoor settings and achieves a tracking resolution of a few centimeters, which is compatible with the theoretical bounds.
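
For intuition, RSSI readings are commonly mapped to range estimates with a log-distance path-loss model. The sketch below uses illustrative parameters; the system described in the talk calibrates them and applies further statistical processing.

```python
# Minimal sketch of converting RSSI to a range estimate with the standard
# log-distance path-loss model (illustrative parameters only).
def rssi_to_distance(rssi_dbm, rssi_at_d0=-45.0, d0=1.0, path_loss_exp=2.5):
    """Invert RSSI(d) = RSSI(d0) - 10*n*log10(d/d0)."""
    return d0 * 10 ** ((rssi_at_d0 - rssi_dbm) / (10 * path_loss_exp))

print(round(rssi_to_distance(-45.0), 2))  # -> 1.0 m (reference distance)
print(round(rssi_to_distance(-60.0), 2))  # -> ~3.98 m
```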

In the talk, I'll give an introduction to the topic. Then I'll describe the system model and the problem statement, explain our data-processing phases for solving the problem, and show the experimental setup and results.

Joint work with Gaddi Blumrosen, Dr. Tal Anker, Prof. Danny Dolev and Prof. Boris Rubinsky. Part of the talk is based on a paper presented in the International Conference on Body Sensor Networks (BSN 2010), Singapore, June 2010. [Presentation]

Worst-Case TCAM Rule Expansion, Ori Rottenstreich, Technion

In recent years, hardware-based packet classification has become an essential component in many networking devices. It often relies on ternary content-addressable memories (TCAMs), which can compare a packet header against a large set of rules in parallel. Designers of TCAMs often have to deal with unpredictable sets of rules, which result in highly variable rule expansions, and can only rely on heuristic encoding algorithms with no reasonable guarantees.

In this talk, given several types of rules, we provide new upper bounds on the TCAM worst-case rule expansions, improving upon the previously known bounds by almost a factor of two. We also introduce new analytical tools based on independent sets and alternating paths, and use these tools to prove the tightness of the upper bounds. Last, we propose a modified TCAM architecture that can use additional logic to significantly reduce the rule expansions, both in the worst case and on real-life classification databases.
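
The source of rule expansion is that a range field must be covered by ternary prefixes, and an awkward range needs several of them. Below is a small sketch of the standard prefix-cover expansion, for illustration only.

```python
# Sketch of why range rules expand in a TCAM: a range field has to be covered
# by ternary prefixes.  Standard prefix-cover expansion of [lo, hi] on `width` bits.
def range_to_prefixes(lo, hi, width):
    prefixes = []
    while lo <= hi:
        size = lo & -lo if lo > 0 else 1 << width  # largest block aligned at lo
        while size > hi - lo + 1:                  # ...that still fits in the range
            size //= 2
        fixed_bits = width - (size.bit_length() - 1)
        prefixes.append(format(lo >> (width - fixed_bits), f'0{fixed_bits}b')
                        + '*' * (width - fixed_bits))
        lo += size
    return prefixes

# The port range [1, 6] on 3 bits already needs 4 TCAM entries:
print(range_to_prefixes(1, 6, 3))  # -> ['001', '01*', '10*', '110']
```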

Joint work with Isaac Keslassy. Presented at the Infocom 2010 mini-conference and at ISIT 2010.

Rendezvous Tunnel for Anonymous Publishing, Ofer Hermoni, Ben-Gurion U.

Many anonymous peer-to-peer (P2P) file-sharing systems have been proposed in recent years. In a P2P file-sharing system there are three types of participants: the publisher, who inserts content into the system; the server, which stores the content; and the reader, who retrieves the content from the server. Existing anonymity solutions typically consider only the server and the reader, or the publisher and the reader, but do not consider the anonymity of all three types of participants. In this work we propose a novel solution for a P2P file-sharing system that provides anonymity to all three types of participants. The proposed solution is simple to understand and implement, since it is based on (three) anonymity tunnels.

Under the assumption that the adversary controls only a limited number of nodes in the system, a publisher can insert a document anonymously into the system, a server that holds the document preserves its anonymity even with respect to itself, and a reader can retrieve documents anonymously from the system. If t is the maximum number of nodes that the adversary controls, then the total communication in the network is 2t times the total communication required for standard file transfer. The storage overhead is insignificant.

Joint work with Niv Gilboa, Eyal Felstaine, Yuval Elovici and Shlomi Dolev. Presented as a poster at ACM CCS 2010.

Overflow Management with Multipart Packets, Dror Rawitz, Tel Aviv U.

We study an abstract setting where the basic information units (called ``superpackets'') do not fit into a single packet and are therefore spread over multiple packets. We assume that a superpacket is useful only if the number of its delivered packets is above a certain threshold. Our focus of attention is communication link ingresses, where large arrival bursts result in dropped packets. The algorithmic question we address is which packets to drop so as to maximize goodput. Specifically, suppose that each superpacket consists of $k$ packets, and that a superpacket can be reconstructed if at most $\beta \cdot k$ of its packets are lost, for some given parameter $0 \leq \beta < 1$. We present a simple online distributed randomized algorithm in this model, and prove that in any scenario its expected goodput is $\Omega(\mathrm{OPT}/(k \sqrt{(1-\beta)\sigma}))$, where $\mathrm{OPT}$ denotes the best possible goodput by any algorithm and $\sigma$ denotes the size of the largest burst (the bound can be improved as a function of burst-size variability). We also analyze the effect of buffers on goodput under the assumption of fixed burst size, and show that in this case, when the buffer is not too small, our algorithm can attain, with high probability, $(1-\epsilon)$ goodput utilization for any $\epsilon>0$. Finally, we present simulation results demonstrating that the behavior of our algorithm in practice is far better than our worst-case analytical bounds.
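
One natural randomized policy in this spirit assigns each superpacket a single random priority and, when a burst overflows the link, drops the packets of the lowest-priority superpackets. The sketch below is purely illustrative and not necessarily the algorithm analyzed in the paper.

```python
import random

# Illustrative randomized drop policy in the superpacket model (not necessarily
# the paper's exact algorithm): each superpacket draws one random priority, and
# when a burst overflows the link the lowest-priority packets are dropped.
def admit_burst(burst, capacity, priority):
    """burst: list of superpacket ids (one entry per packet); capacity: max
    packets forwarded this slot; priority: dict superpacket id -> random key."""
    for sp in burst:
        priority.setdefault(sp, random.random())
    ranked = sorted(burst, key=lambda sp: priority[sp], reverse=True)
    return ranked[:capacity]

# A burst of 6 packets from 3 superpackets through a link that forwards 4:
print(admit_burst(['a', 'a', 'b', 'b', 'c', 'c'], 4, {}))
# e.g. -> ['c', 'c', 'a', 'a'], depending on the random priorities
```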

Joint work with Yishay Mansour and Boaz Patt-Shamir. To be presented at Infocom 2011. [Presentation]

A Switch-Based Approach to Throughput Collapse and Starvation in Data Centers, Alex Shpiner, Technion

Data center switches need to satisfy stringent low-delay and high-capacity requirements. To do so, they rely on small switch buffers. However, in case of congestion, data center switches can incur throughput collapse for short TCP flows as well as temporary starvation for long TCP flows.

In this talk, we introduce a lightweight hash-based algorithm called HCF (Hashed Credits Fair) to solve these problems at the switch level while remaining transparent to the end users. We show that it can be readily implemented in data center switches with O(1) complexity and negligible overhead. We illustrate, using simulations, how HCF mitigates the throughput collapse of short flows. We also show how HCF reduces unfairness and starvation for long-lived TCP flows as well as for short TCP flows, yet maximizes the utilization of the congested link. Last, even though HCF can store packets of the same flow in different queues, we also prove that it prevents packet reordering.

Joint work with Isaac Keslassy, Gabi Bracha, Eyal Dagan, Ofer Iny and Eyal Soha. Recipient of the best paper award at IWQoS 2010. [Presentation]

Competitive and Deterministic Embeddings of Virtual Networks, Moti Medina, Tel Aviv U.

Network virtualization is a paradigm that allows for flexible and efficient allocation of resources among multiple virtual networks (VNets). In this paper we deal with the problem of embedding dynamically arriving VNet requests. We describe a generic algorithm for the online VNet embedding problem and analyze its competitive ratio; that is, we compare the benefit accumulated by the algorithm with the benefit of an optimal offline algorithm. We prove that the competitive ratio of our online algorithm is, loosely speaking, logarithmic in the sum of the resources. Our algorithm is generic in the sense that it supports multiple traffic models and multiple routing models, and allows for nonuniform benefits and durations of VNet requests. Concretely, the routing models considered in this paper include multipath, single-path, and tree routing. For modeling traffic, we study the customer-pipe model, the hose model, and a new traffic model, called the aggregate ingress model, that is well suited for modeling multicasts and multi-party video conferences.

Joint work with Guy Even, Gregor Schaffrath, and Stefan Schmid. [Presentation]

Providing Performance Guarantees in Multipass Network Processors, Gabriel Scalosub, Ben-Gurion U.

Multi-core Network Processors (NPs) are widely used to perform complex packet-processing tasks in modern high-speed routers. NPs are able to address such diverse functions as forwarding, classification, protocol conversion, DPI, intrusion detection, SSL, NAT, firewalling, and traffic engineering. They are often implemented using many processing cores. These cores are either arranged as a pool of identical cores (e.g., the Cavium CN68XX), as a long pipeline of cores (e.g., the Xelerated X11), or as a combination of both (e.g., the EZChip NP-4).

Current NPs increasingly have to deal with packets with heterogeneous processing times. In such an environment, packets that require many processing cycles delay low-latency traffic, because the common approach in today's NPs is to employ run-to-completion processing. These difficulties have led to the emergence of the multipass NP architecture, where, after a processing cycle ends, all processed packets are recycled into the buffer and re-compete for processing resources (e.g., in the recent Cisco QuantumFlow NP, which forms the heart of Cisco's most recent ASR 1000 edge routers).
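
A toy model of this recycling behavior is sketched below. It is purely illustrative; the scheduling and buffer-management policies studied in the work are what determine actual performance.

```python
from collections import deque

# Toy model of a multipass NP: each cycle the cores grab up to `cores` packets
# from the head of the buffer, process one pass each, and recycle packets that
# still need more passes (illustrative only).
def run_multipass(passes_needed, cores):
    buffer = deque((pkt, need) for pkt, need in enumerate(passes_needed))
    cycles, completed = 0, []
    while buffer:
        batch = [buffer.popleft() for _ in range(min(cores, len(buffer)))]
        for pkt, need in batch:
            if need - 1 == 0:
                completed.append(pkt)
            else:
                buffer.append((pkt, need - 1))  # recycled, re-competes next cycle
        cycles += 1
    return cycles, completed

# Three light packets and one heavy (5-pass) packet on a 2-core NP:
print(run_multipass([1, 1, 1, 5], cores=2))  # -> (6, [0, 1, 2, 3])
```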

In this work we provide a model that captures many of the characteristics of this architecture, and consider several scheduling and buffer management algorithms that are specially designed to optimize the performance of multipass network processors. In particular, we provide analytical guarantees for the throughput performance of our algorithms. We further conduct a comprehensive simulation study which validates our results.

Joint work with Isaac Keslassy, Kirill Kogan and Michael Segal. To be presented at Infocom 2011.

How to split a flow?, Michal Segalov, Google


Algorithms that compute a flow or a multicommodity flow typically produce the flow or flows (if there are many commodities) as a set of values associated with the arcs, subject to capacity and conservation constraints. However, to deploy a flow in a network we often need to represent it as a set of source-sink paths.

In this paper we consider the problem of decomposing a flow into a small number of paths. We show that the problem is NP-hard and MAX-SNP-hard even if the flow values on the arcs are only $1$, $2$ and $4$. We also show that straightforward greedy algorithms for the problem can produce large decompositions. On the positive side, we give a new approximation algorithm that decomposes all but an $\epsilon$-fraction of the flow into at most $O(1/\epsilon^2)$ times the smallest possible number of paths.
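
The straightforward greedy decomposition referred to above repeatedly peels off a source-sink path carrying its bottleneck value. A minimal sketch:

```python
from collections import defaultdict

# Minimal sketch of the straightforward greedy decomposition: repeatedly find a
# source-sink path through arcs that still carry flow and peel off its
# bottleneck value.  (The paper shows this can be far from the smallest
# decomposition.)  Note: mutates `flow` in place.
def greedy_decompose(flow, source, sink):
    paths = []
    while True:
        adj = defaultdict(list)
        for (u, v), f in flow.items():
            if f > 0:
                adj[u].append(v)
        path, seen = [source], {source}

        def dfs(u):
            if u == sink:
                return True
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    path.append(v)
                    if dfs(v):
                        return True
                    path.pop()
            return False

        if not dfs(source):
            break
        bottleneck = min(flow[(path[i], path[i + 1])] for i in range(len(path) - 1))
        for i in range(len(path) - 1):
            flow[(path[i], path[i + 1])] -= bottleneck
        paths.append((list(path), bottleneck))
    return paths

# A 4-unit flow from s to t that splits over two routes:
f = {('s', 'a'): 3, ('a', 't'): 3, ('s', 'b'): 1, ('b', 't'): 1}
print(greedy_decompose(f, 's', 't'))
# -> [(['s', 'a', 't'], 3), (['s', 'b', 't'], 1)]
```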

We compare the decompositions produced by these algorithms on real and random data. The experiments show that the greedy algorithm works best in practice. Our new algorithm typically produces a slightly larger decomposition, but its value is its theoretical worst-case guarantee.

Joint work with Haim Kaplan, Tzvika Hartman, Avinatan Hassidim and Danny Raz.

LiMoSense - Live Monitoring in Dynamic Sensor Networks, Ittay Eyal, Technion

We present LiMoSense, a fault-tolerant live monitoring algorithm for dynamic sensor networks. LiMoSense uses gossip to dynamically track and aggregate a large collection of ever-changing sensor reads. It overcomes message loss, node failures and recoveries, and dynamic network topology changes. We formally prove the correctness of LiMoSense, and we use simulations to illustrate its ability to quickly react to network and value changes and provide accurate information.
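
For readers unfamiliar with gossip aggregation, the sketch below shows a bare-bones push-sum style averaging step. LiMoSense itself adds the fault-tolerance and dynamic-input machinery that is the contribution of the work.

```python
import random

# Bare-bones gossip averaging (push-sum style): each node keeps a (value, weight)
# pair and repeatedly pushes half of it to a random peer; every node's ratio
# converges to the network-wide average of the sensor reads.
def gossip_average(reads, rounds=200, seed=1):
    rng = random.Random(seed)
    vals = list(reads)
    wts = [1.0] * len(reads)
    for _ in range(rounds):
        i, j = rng.randrange(len(reads)), rng.randrange(len(reads))
        if i == j:
            continue
        vals[i], wts[i] = vals[i] / 2, wts[i] / 2  # keep half...
        vals[j] += vals[i]                         # ...and push the other half
        wts[j] += wts[i]
    return [v / w for v, w in zip(vals, wts)]

print([round(x, 3) for x in gossip_average([10.0, 20.0, 30.0, 40.0])])
# all estimates converge toward the true average, 25.0
```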

Joint work with Idit Keidar and Raphael Rom.

Order Optimal Information Spreading Using Algebraic Gossip, Chen Avin, Ben Gurion U.

In this paper we study gossip-based information spreading with bounded message sizes. We use algebraic gossip to disseminate $k$ distinct messages to all $n$ nodes in a network. For arbitrary networks we provide a new upper bound for uniform algebraic gossip of $O((k+\log n + D)d)$ rounds with high probability, where $D$ and $d$ are the diameter and the maximum degree of the network, respectively. For many topologies and selections of $k$ this bound improves previous results; in particular, for graphs with a constant maximum degree it implies that uniform gossip is order optimal and the stopping time is $\Theta(k + D)$.

To eliminate the factor of $d$ from the upper bound, we propose a non-uniform gossip protocol, TAG, which is based on algebraic gossip and an arbitrary spanning tree protocol $S$. The stopping time of TAG is $O(k + \log n + D(S) + t(S))$, where $t(S)$ is the stopping time of the spanning tree protocol and $D(S)$ is the diameter of the spanning tree. We provide two general cases in which this bound leads to an order-optimal protocol. The first is $k=\Omega(n)$, where, using a simple gossip broadcast protocol that creates a spanning tree in at most linear time, we show that TAG finishes after $\Theta(n)$ rounds for any graph. The second uses a sophisticated, recent gossip protocol to build a fast spanning tree on graphs with large weak conductance. In turn, this leads to the optimality of TAG on these graphs for $k=\Omega(\mathrm{polylog}(n))$. The technique used in our proofs relies on queueing theory, which is an interesting approach that can be useful in future gossip analysis.

Joint work with Michael Borokhovich, Keren Censor-Hillel, and Zvi Lotker. To be presented at PODC 2011. [Presentation]

Mobile Real-Time Group Communication Service, Zohar Naor, Haifa U.

A scalable framework for mobile real-time group communication services is developed in this paper. Examples of possible applications of this framework are mobile social networks, mobile conference calls, mobile instant messaging services, and mobile multi-player online games. A key requirement for enabling a real-time group communication service is the tight constraint imposed on the call delivery delay. Since establishing such a communication service for a group of independent mobile users under a tight delay constraint is NP-hard, a two-tier architecture is proposed that can meet the delay constraint imposed by the real-time service requirement for many independent mobile clients in a scalable manner. The time and memory complexity associated with the group services provided by the proposed framework is O(N) per service, where N is the number of nodes being served, whereas a distributed scheme requires O(N^2) time and memory.

Joint work with Sajal Das. Presented at Infocom 2010.

Plugging the hypervisor abstraction leaks caused by virtual networking, David Hadas, IBM Haifa

Virtual machines are of very little use if they cannot access the underlying physical network. Virtualizing the network has traditionally been considered a challenge best met by network-centric measures such as VLANs, implemented by switches. In this work, we present the benefits of adding network abstraction to hypervisors, moving the responsibility for network virtualization from the network to the server. Modern hypervisors do a poor job of virtualizing the network, leaking details of the physical network into virtual machines. For example, IP addresses used across the host's physical network are exposed to guest virtual machines. We propose a method for plugging these network-related leaks by ensuring that virtual network traffic is encapsulated inside a host envelope prior to transmission across the underlying physical network.

Joint work with S. Guenender, B. Rochwerger, A. Landau, and M. Ben-Yehuda. Presented at SYSTOR 2010.