Introduction

My aim is to try to encode a simulated swarm into a lower-dimensional representation, so that I can identify interesting parameters to map into a sound-generating system.

My initial research into reducing complex time-series data led me to the TimeCluster paper by Ali et al.

They use convolutional layers in an autoencoder to perform dimensionality reduction on various time-series data.

This seemed like a good place to start, so initially I am just documenting my attempt to recreate their research, using a single agent's coordinates in 3D space to match the shape of their model.

Details

Their approach uses 1D convolutions, with the data split into overlapping sliding windows:

They used a 60-frame window over 3 accelerometer readings, resulting in an input of shape 60×3.
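
To make the windowing concrete, here is a toy sketch (mine, not from the paper) of stride-1 sliding windows over a short 1D series:

series = collect(1:8)
wsize  = 4
# each window starts one step after the previous (stride = 1)
windows = [series[i:i+wsize-1] for i in 1:(length(series) - wsize + 1)]
# 5 overlapping windows: [1,2,3,4], [2,3,4,5], …, [5,6,7,8]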

As a test of their approach, I used a single agent's movement data to match the original model specification.

Data is captured from a separate flocking simulation of 300 agents over approximately 30 seconds (source available on my GitHub soon).

using DrWatson, Flux, CSV, DataFrames, MLDataPattern, CUDA, Plots, WebIO;
plotly()
Plots.PlotlyBackend()
df = DataFrame(CSV.File("$(datadir())/exp_raw/data.csv"; types=Float32))
df = df[:,1:3]  # keep just the first agent's x, y, z coordinates

plot(df[:,1],df[:,2], df[:,3])

We can treat this 3D trajectory as three separate 1D time series, matching the original TimeCluster input shape:

plot(df[:,1], label="x")
plot!(df[:,2], label="y")
plot!(df[:,3], label="z")

Data normalisation and preparation

We normalise the data into the range [0, 1] as per the paper, then create sliding windows using the paper's defaults of stride = 1 and window_size = 60.

function normalise(M)
    # min-max scale the whole matrix jointly, so all three axes share one scale
    lo, hi = extrema(M)
    return (M .- lo) ./ (hi - lo)
end

normalised = Array(df) |> normalise
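
A quick sanity check that the rescaling behaved as expected:

# every value should now lie in [0, 1]
extrema(normalised)  # (0.0f0, 1.0f0)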

window_size = 60

# transpose to 3×N so the window slides along the time axis
data = slidingwindow(normalised', window_size, stride=1);
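
Assuming MLDataPattern's default conventions, each observation should now be one 3×60 window (channels × frames), which we can confirm with getobs:

# the first window: 3 coordinate channels × 60 frames
size(getobs(data, 1))  # (3, 60)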

Define the encoder and decoder

We create the autoencoder network as per the paper:

function create_ae()
    # Encoder: shapes noted as frames × channels × batch
    encoder = Chain(
        # 60×3×b
        Conv((10,), 3 => 64, relu; pad = SamePad()),
        MaxPool((2,)),
        # 30×64×b
        Conv((5,), 64 => 32, relu; pad = SamePad()),
        MaxPool((2,)),
        # 15×32×b
        Conv((5,), 32 => 12, relu; pad = SamePad()),
        MaxPool((3,)),
        # 5×12×b
        Flux.flatten,
        # 5 * 12 = 60, so the latent code is window_size-dimensional
        Dense(window_size, window_size)
    )
    # Decoder: mirrors the encoder, upsampling back to the input shape
    decoder = Chain(
        # input: 60×b latent codes
        (x -> reshape(x, (window_size ÷ 12, 12, :))),
        # 5×12×b
        ConvTranspose((5,), 12 => 32, relu; pad = SamePad()),
        Upsample((3,)),
        # 15×32×b
        ConvTranspose((5,), 32 => 64, relu; pad = SamePad()),
        Upsample((2,)),
        # 30×64×b
        ConvTranspose((10,), 64 => 3, relu; pad = SamePad()),
        Upsample((2,)),
        # 60×3×b
    )
    return Chain(encoder, decoder)
end
create_ae (generic function with 1 method)
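
Before training, a dummy forward pass is a cheap way to confirm the layer arithmetic; this sketch assumes window_size = 60 is still in scope:

m  = create_ae()
xb = rand(Float32, window_size, 3, 2)  # 60 frames × 3 channels × batch of 2
size(m[1](xb))  # (60, 2): the 60-dimensional latent codes
size(m(xb))     # (60, 3, 2): reconstruction matches the input shape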

Training

In keeping with the paper, we use the mean squared error (MSE) loss function and the Adam optimiser.
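
For reference, the MSE is just the mean of the squared element-wise differences; a hand-rolled equivalent (named my_mse to avoid clashing with the loss defined below) would be:

# equivalent to Flux.Losses.mse for arrays of the same shape
my_mse(x̂, x) = sum(abs2, x̂ .- x) / length(x)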

function train_model!(model, data, opt; epochs=20, bs=16, dev=Flux.gpu)
    model  = model |> dev
    ps     = params(model)
    t      = shuffleobs(data)
    local l
    losses = Vector{Float64}()
    for e in 1:epochs
        for x in eachbatch(t, size=bs)
            # x is a batch of bs windows, each 3×60
            x = cat(x..., dims=3)        # 3×60×bs
            x = permutedims(x, [2,1,3])  # 60×3×bs, matching the model input
            x = x |> dev                 # move the batch to the same device as the model
            gs = gradient(ps) do
                l = loss(model(x), x)
            end
            Flux.update!(opt, ps, gs)
        end
        # record the loss of the final batch of each epoch
        l = round(l; digits=6)
        push!(losses, l)
        println("Epoch $e/$epochs - train loss: $l")
    end
    # return the trained model on the CPU, since the GPU copy is local to this function
    return (model |> cpu, losses)
end
train_model! (generic function with 1 method)
loss(x,y) = Flux.Losses.mse(x, y)
opt       = Flux.Optimise.ADAM(0.00005)
epochs    = 100
model     = create_ae()
model, losses_01 = train_model!(model, data, opt; epochs=epochs);

Epoch 1/100 - train loss: 0.018689
Epoch 2/100 - train loss: 0.004161
Epoch 3/100 - train loss: 0.002182
Epoch 4/100 - train loss: 0.001568
Epoch 5/100 - train loss: 0.001264
⋮ (loss falls steadily, flattening out at ~1.0e-5 from epoch 78 onwards)
Epoch 98/100 - train loss: 1.0e-5
Epoch 99/100 - train loss: 1.0e-5
Epoch 100/100 - train loss: 1.0e-5
plot(losses_01, label="")
xlabel!("Epochs")
ylabel!("Mean Squared Error")

Let's see how well the model reconstructs a random segment of the data:

input = rand(data)'   # a random 3×60 window, transposed to 60×3
plot(input[:,1], input[:,2], input[:,3], label="original")

# overlay the reconstruction, dropping the singleton batch dimension
output = dropdims(model(Flux.unsqueeze(input, 3)); dims=3)
plot!(output[:,1], output[:,2], output[:,3], label="reconstructed")

@info "loss:" loss(input, output)
┌ Info: loss:
│   loss(input, output) = 6.2904514e-6
└ @ Main

Conclusion

This approach is clearly able to reconstruct the input data from a low-dimensional representation. Before we go further (and use clustering on the latent space to identify parameters), let's try to scale this up to the full 300-agent swarm and see what changes we need to make.
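
As a rough sketch of what that change might involve (my assumption, not yet tested): with 300 agents and 3 coordinates per frame, each window becomes 60×900, so at minimum the channel counts of the outermost convolution layers must change:

# hypothetical first encoder layer and last decoder layer for 300 agents
Conv((10,), 900 => 64, relu; pad = SamePad())
ConvTranspose((10,), 64 => 900, relu; pad = SamePad())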