The Hebrew University – Experimental Systems Lab
(previously Parallel Systems Lab)


Excellent students who would like to work on a project in our lab (or to initiate their own project) are invited to look at the list of projects and to contact Dror Feitelson.
CURRENT PROJECTS:

Our research is generally about experimental computer science, in the natural-sciences sense of the term: using observation and measurement to learn about the world, in our case the world of computer systems. The "parallel systems" title is outdated, as much of the work is no longer related to parallelism.

Code Complexity and Comprehension

We are interested in what makes program code hard to understand. This work started with the high-MCC functions of Linux, but has since taken on a life of its own.
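As background on what "high-MCC" refers to: McCabe's cyclomatic complexity (MCC) is essentially one plus the number of branch points in a function. The following is a minimal sketch of that count using Python's ast module — an illustration only, not the lab's actual measurement tooling, and it counts each boolean expression once, which slightly undercounts multi-operand conditions:

```python
import ast

def mcc(func_source):
    """Estimate McCabe cyclomatic complexity: 1 plus the number of
    branch points (if/for/while/boolean op/ternary/except)."""
    tree = ast.parse(func_source)
    branches = (ast.If, ast.For, ast.While, ast.BoolOp,
                ast.IfExp, ast.ExceptHandler)
    return 1 + sum(isinstance(node, branches)
                   for node in ast.walk(tree))

src = """
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    return "positive"
"""
print(mcc(src))  # the if/elif chain is two branch points, so this prints 3
```

Functions flagged as hard to understand in this line of work tend to have MCC values far above such toy examples.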

Linux Kernel Development

We are not so much participating in developing Linux as observing the development process, in the interest of understanding software evolution in general and open-source development in particular.

Workloads and Their Effect on Design and Performance

The performance (and indeed the behavior) of a computer system depends on its design, its implementation, and its workload — what it is actually requested to do. We are involved in various aspects of workload characterization and modeling, including data cleaning. We are also interested in how what we learn about workloads may inform new design ideas.
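A toy example of why workload characterization matters, sketched as a simple FIFO-queue simulation (an illustration with assumed parameters, not the lab's actual tools): two arrival processes with identical average load can produce very different waiting times if one is bursty.

```python
import random

def mean_wait(interarrivals, services):
    """Mean waiting time in a single FIFO queue, via the Lindley
    recursion: next_wait = max(0, wait + service - interarrival)."""
    wait, total = 0.0, 0.0
    for a, s in zip(interarrivals, services):
        wait = max(0.0, wait + s - a)
        total += wait
    return total / len(services)

random.seed(0)
n = 100_000
services = [random.expovariate(1.0) for _ in range(n)]  # mean service time 1.0

# Smooth workload: exponential interarrivals, mean 2.0 (utilization 0.5).
smooth = [random.expovariate(0.5) for _ in range(n)]

# Bursty workload: same mean interarrival of 2.0, but a hyperexponential
# mix of short gaps (bursts of requests) and long silent periods.
bursty = [random.expovariate(2.0) if random.random() < 0.9
          else random.expovariate(1 / 15.5) for _ in range(n)]

print("smooth mean wait:", round(mean_wait(smooth, services), 2))
print("bursty mean wait:", round(mean_wait(bursty, services), 2))
```

At the same utilization, the bursty arrival stream yields a several-fold higher mean wait — which is why evaluating a design against an unrealistic workload can be badly misleading.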
PAST PROJECTS:

Global Scheduling for Virtualization (2007 – 2013)

Scheduling and resource management in virtualization environments (especially server consolidation) typically boil down to supporting pre-defined allocations. But what does it mean to give a certain virtual machine 20% of the resources, when it needs more of the CPU but less of the network? Our work tried to define the semantics of multi-resource allocations, and to support them by carefully scheduling the bottleneck device(s).
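The bottleneck idea can be sketched as follows (hypothetical numbers and function names, not the project's actual scheduler): for each VM, find the resource on which its relative demand is highest; that is the device whose scheduling decides whether the VM's overall entitlement is honored.

```python
def bottleneck(demand, capacity):
    """Return the resource on which this VM's relative demand
    (demand as a fraction of capacity) is highest."""
    return max(demand, key=lambda r: demand[r] / capacity[r])

# Assumed capacities and per-VM demands, in arbitrary units.
capacity = {"cpu": 100.0, "net": 100.0}
vms = {
    "web":   {"cpu": 30.0, "net": 80.0},  # network-bound
    "batch": {"cpu": 90.0, "net": 10.0},  # CPU-bound
}

for name, demand in vms.items():
    print(name, "is bottlenecked on", bottleneck(demand, capacity))
```

Here the "web" VM is bottlenecked on the network and "batch" on the CPU, so giving each "20% of the resources" means enforcing 20% on a different device for each.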

What's Your System Doing? (2000 – 2008)

Modern computer systems, from individual microprocessors through operating systems and up to the complete Internet, are very complex. Even their designers cannot claim a real understanding of what they do and how they operate. Our project developed monitoring tools that look into the system and record what it really does in everyday use, mainly at the operating-system level. KLogger continues to be used, and is ported to new versions of Linux from time to time.

The ParPar Project (1996 – 2005)

The ParPar system was our cluster research platform, perpetually in advanced stages of implementation. It was a distributed-memory machine based on Pentium PCs connected by a high-speed LAN. We developed system software that ran above the BSDI Unix kernel and provided support for parallel processing. A brief description of the design is available on-line, with a link to the full design document and pictures of the prototype cluster.

"Parpar" means butterfly in Hebrew; if you're interested in the fluttery variety, look here.

The BoW Project (1997 – ?)

This project is only superficially connected to parallel systems. It deals with the creation and maintenance of an on-line bibliography (BoW stands for "bibliography on the web"). The idea is that users can add bibliographical entries and index them according to a comprehensive hypertext-based subject index. They can also add comments on existing entries, and links among related entries. Finally, they can give feedback about their satisfaction, which the system displays to help identify useful, high-quality pages.

A nearly fully implemented demo version had been available for much longer than the underlying technology was expected to last, but the server was finally decommissioned in 2012.

The SEA Project (1994 – 2002)

This was a joint project with Ronnie Agranat's group from Applied Physics to develop a prototype optical interconnection network. Ronnie's group did the optics, based on innovative physical effects in certain crystals. We were responsible for the software layers and the development of communication protocols. The goal of creating a real working parallel computer using this technology did not materialize.

The Virtual Servers Project (1998 – 2000)

This was a joint project with Danny Dolev's group and several other groups from the Technion and Haifa University. The idea was to create a framework in which the notion of a "server" is divorced from the notion of "a box": servers may expand across the network when the load on them increases (that is, when they receive more requests for service), and then shrink again when they are no longer needed.

Our Technion partners constructed a CORBA-like environment, called Symphony, to support virtual servers.

The Makbilan Project (1989 – 1995)

The Makbilan was our previous research machine, put into service in 1989. It was a home-built parallel machine consisting of 16 processors interconnected by a Multibus II. Each processor board contained an Intel 386/387 CPU, 4 MB of memory, a cache, and a special MPC chip that controlled access to the system bus. All memory was globally accessible. In addition, a Unix host was also connected to the bus, supporting Unix operations for the parallel application.

More details (and pictures!) are available here.


People

For PUBLICATIONS, see individual home pages

Software Distributions


Useful Links


Aug 4, 2005