Using autoencoders to optimize two-dimensional signal constellations for fiber optic communication systems

# ConstellationNet

This is the source code for the “Using Autoencoders to Optimize Two-Dimensional Signal Constellations for Fiber Optic Communication Systems” project.

*Results for 4, 16 and 32 QAM*

## Structure

The ConstellationNet model is defined inside the constellation/ folder. At the root are several scripts (described below) for training the model and testing it.

## Available scripts

### Training

train.py is a script for training a ConstellationNet network. Hyperparameters are defined at the top of that file and can be changed there. After training, the resulting model is saved as output/constellation-order-X.pth, where X is the order of the trained constellation.
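The idea the network is trained on can be illustrated without the repository's code. The sketch below is a simplification, not ConstellationNet itself: a fixed QPSK-like constellation stands in for the learned encoder, a Gaussian channel model adds noise, and nearest-point decoding stands in for the decoder network. The names `transmit` and `decode` are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained encoder: a fixed 4-point constellation,
# normalized to unit average energy.
points = np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]])
points /= np.sqrt((points ** 2).sum(axis=1).mean())

def transmit(symbols, points, noise_std, rng):
    """Map symbol indices to 2-D points and add Gaussian channel noise."""
    sent = points[symbols]
    return sent + rng.normal(0.0, noise_std, size=sent.shape)

def decode(received, points):
    """Nearest-point decoding, which the decoder network approximates."""
    distances = np.linalg.norm(received[:, None, :] - points[None, :, :], axis=2)
    return distances.argmin(axis=1)

symbols = rng.integers(0, len(points), size=10_000)
received = transmit(symbols, points, noise_std=0.1, rng=rng)
symbol_error_rate = (decode(received, points) != symbols).mean()
```

Training the real model amounts to making the encoder's point placement and the decoder's classification learnable and minimizing the error rate end to end.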

### Plotting

plot.py generates plots of trained models. It loads a trained model from output/constellation-order-X.pth, where X is the order (which can be changed at the top of the file). The constellation learned by the encoder is plotted as points, and the decision regions learned by the decoder as colored areas around those points.
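The usual way to render such decision regions is to sample the plane on a grid and classify every grid location. The sketch below assumes a hypothetical 4-point constellation and uses nearest-point classification in place of a forward pass through the trained decoder:

```python
import numpy as np

# Hypothetical 4-point constellation standing in for a loaded model;
# nearest-point classification stands in for the trained decoder.
points = np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]])

# Sample the plane on a regular grid.
axis = np.linspace(-2.0, 2.0, 200)
grid_x, grid_y = np.meshgrid(axis, axis)
grid = np.stack([grid_x.ravel(), grid_y.ravel()], axis=1)

# Classify every grid location by its nearest constellation point.
distances = np.linalg.norm(grid[:, None, :] - points[None, :, :], axis=2)
regions = distances.argmin(axis=1).reshape(grid_x.shape)
# `regions` holds one class label per pixel; matplotlib's contourf or
# pcolormesh can render it as colored areas behind the constellation points.
```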

### Experimentation

experiment.py runs experiments to find the best hyperparameters for each constellation order. Currently, the following parameters are tested:

  • order, the number of points in the constellation, is 4, 16 or 32.
  • initial_learning_rate, the first learning rate used by the optimizer, is 10^-2, 10^-1 or 1.
  • batch_size, the number of training examples in each batch, expressed as a multiple of order, is 8 or 2048.
  • first_layer, the size of the first hidden layer, ranges from 0 to the constellation's order in steps of 4 (0 meaning that there is no hidden layer).
  • last_layer, the size of the second hidden layer, ranges from 0 to first_layer in the same steps of 4 (the second layer is never larger than the first).

This yields 378 different configurations in total. To allow the experiment to be parallelized, the set of configurations can be partitioned using two command-line arguments: the first gives the total number of parts to divide the set into, and the second gives the index of the part to test.
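The configuration count can be checked with a short sketch. The step size of 4 for last_layer and the round-robin split in `part` are assumptions for illustration, not necessarily the repository's exact logic:

```python
from itertools import product

orders = [4, 16, 32]
learning_rates = [1e-2, 1e-1, 1.0]
batch_multipliers = [8, 2048]

# Enumerate every tested combination of hyperparameters.
configs = []
for order, lr, batch in product(orders, learning_rates, batch_multipliers):
    for first_layer in range(0, order + 1, 4):  # 0 means no hidden layer
        for last_layer in range(0, first_layer + 1, 4):
            configs.append((order, lr, batch, first_layer, last_layer))

print(len(configs))  # 378

def part(configs, num_parts, index):
    """Round-robin split: one worker's share of the configuration set."""
    return configs[index::num_parts]
```

Splitting round-robin rather than in contiguous chunks keeps each worker's share balanced across constellation orders, so no single part is dominated by the largest (slowest-to-train) networks.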

Results of this experiment are available in results/experiment.csv.