This code is released for **academic research only**
under the GNU Lesser General Public License.

If you use this code, please cite the following paper:

```
@inproceedings{Elidan+al:2002,
  author    = "Gal Elidan and Matan Ninio and Nir Friedman and Dale Schuurmans",
  title     = "Data Perturbation for Escaping Local Maxima in Learning",
  booktitle = "Proc. National Conference on Artificial Intelligence (AAAI-02)",
  pages     = "132-139",
  year      = "2002",
}
```

For purposes other than academic research, you should contact:

The idea of **Weight Annealing** is to introduce perturbations into the
search process by changing the weights of the instances. Combined
with annealing, this gives state-of-the-art results in many domains
(so far we have tried Bayesian networks, neural-network-like functions,
clustering, decision trees, and more). The outline of the algorithm is very
simple: given the current model, new weights are calculated. These are
fed into a black-box optimizer (e.g. greedy hill climbing, gradient
ascent, ...) that produces a new hypothesis, and so on. The process is
annealed toward the original weights so that at the final stage
optimization is performed on the actual target function.
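The outline above can be sketched as follows. This is only an illustrative sketch, not the library's code; the function and parameter names are made up for the example, and the model is a single `double` for simplicity.

```cpp
#include <functional>
#include <vector>

// Sketch of the Weight Annealing outline above. `reweight` computes new
// instance weights at temperature T, and `optimize` is the black-box local
// optimizer; both are placeholders for your own routines.
double weightAnneal(
    std::vector<double> origWeights, double startTemp, double stopTemp,
    double coolFactor, double model,
    const std::function<std::vector<double>(const std::vector<double>&,
                                            double, double)>& reweight,
    const std::function<double(double, const std::vector<double>&)>& optimize) {
    double T = startTemp;
    while (T > stopTemp) {
        // Perturb the weights around the originals; the perturbation
        // shrinks as T anneals toward zero.
        std::vector<double> w = reweight(origWeights, T, model);
        model = optimize(model, w);  // black-box step on the reweighted score
        T *= coolFactor;             // cool down toward the original weights
    }
    // Final stage: optimize the actual (unperturbed) target function.
    return optimize(model, origWeights);
}
```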

At the heart of this process is the re-weighting procedure. There are two schemes:

- **Random Reweighting** samples weights distributed around the original weights, with a variance that is proportional to the temperature.
- **Adversarial Reweighting** challenges the local search by trying to minimize the score of the current model. This is achieved by reweighting each instance as a function of its gradient: simply put, it increases the weight of "bad" (low-scoring) instances and reduces the weight of good ones (this is not **Boosting** - see the paper for an explanation).
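The two schemes can be sketched like this. Again, this is not the library's implementation (see WeightUpdate.cpp for that); the function names, the Gaussian noise model, and the clipping at zero are illustrative assumptions.

```cpp
#include <algorithm>
#include <cmath>
#include <random>
#include <vector>

// Random reweighting (sketch): sample each weight around its original value
// with a standard deviation proportional to the temperature T, clipping
// negative samples to zero.
std::vector<double> randomReweight(const std::vector<double>& w0, double T,
                                   std::mt19937& rng) {
    std::vector<double> w(w0.size());
    for (size_t i = 0; i < w0.size(); ++i) {
        std::normal_distribution<double> noise(w0[i], std::max(T, 1e-12));
        w[i] = std::max(0.0, noise(rng));
    }
    return w;
}

// Adversarial reweighting (sketch): move each weight against the
// per-instance score gradient, so low-scoring ("bad") instances gain weight
// and high-scoring ones lose it. grad[i] is d(score)/d(w[i]) at the
// current model.
std::vector<double> adversarialReweight(const std::vector<double>& w0,
                                        const std::vector<double>& grad,
                                        double T) {
    std::vector<double> w(w0.size());
    for (size_t i = 0; i < w0.size(); ++i)
        w[i] = std::max(0.0, w0[i] - T * grad[i]);
    return w;
}
```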

In order to use **Weight Annealing** you should be able to formulate your
target function (or score) as a function of the instances and their weights.
If you want to use **Adversarial Reweighting**, the target function also
has to be differentiable with respect to the weights.

As a simple demo, consider the target function sum( w(i) * cos(PI * i/20 * x) ), where i = { 1...20 }. We look for the x that maximizes this function. This can be viewed as finding the frequency that best fits the cosine to our instance points.
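In code, the demo target function and its gradient with respect to the weights (the quantity **Adversarial Reweighting** needs) look like this; the function names are ours, not the demo's:

```cpp
#include <cmath>
#include <vector>

const double kPi = 3.14159265358979323846;

// Demo target function: f(x) = sum_{i=1..20} w[i-1] * cos(PI * i/20 * x).
double score(double x, const std::vector<double>& w) {
    double s = 0.0;
    for (int i = 1; i <= 20; ++i)
        s += w[i - 1] * std::cos(kPi * i / 20.0 * x);
    return s;
}

// Gradient of the score with respect to each weight, as required for
// adversarial reweighting: df/dw[i-1] = cos(PI * i/20 * x).
std::vector<double> scoreGradient(double x) {
    std::vector<double> g(20);
    for (int i = 1; i <= 20; ++i)
        g[i - 1] = std::cos(kPi * i / 20.0 * x);
    return g;
}
```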

Both reweighting objects are initialized with the starting and stopping
temperatures and the cooling factor. The object progresses via
iterative calls to the **CoolDown()** method, followed by
a call to **GetWeights()**. See the
header file for more information.

The easiest way to get started is to look at the simple example
described above, implemented in testAnneal.cpp.
Start with the **main()** function.

All the files you need can be found in
this tarball.

In the tarball you will find:

- readme.txt - instructions for using the library
- WeightUpdate.h - weight reweighting header file
- WeightUpdate.cpp - weight reweighting implementation file
- RandomProb.h - utility random probability generator header file
- RandomProb.cpp - utility random probability generator implementation file
- testNoAnneal.cpp - demo implementation of local search with no annealing
- testAnneal.cpp - same problem as above, but with Adversarial Weight Annealing
- testRandomAnneal.cpp - same problem as above, but with Random Weight Annealing
- testCommon.h - header for utility functions for the demo
- testCommon.cpp - utility functions for the demo
- Makefile - generates executables and library