Well, this is the first time I am posting about a subject other than VHDL, FPGA, or verification. Now it is time for SOFTWARE and ALGORITHMS !!!

Nowadays (I am talking about the 2020s), if you are a digital design engineer working with HDL and FPGAs or ASICs, it is not possible to avoid software or algorithm design. The funny thing is, C and C++ are considered “low-level” by the SW guys; well, hold on to that, there is VHDL !!! The other funny thing is that high-level synthesis tools are now getting very popular among digital designers, and you know which languages are considered “high-level” by the FPGA guys: C and C++ hahaha 😊😊😊

I am working on a Kalman filter implementation for my PhD thesis, and my actual goal is to accelerate the algorithm on an FPGA. I first want to implement it in C++ with a software approach, then use the same (or nearly the same) code in an HLS tool to get synthesizable VHDL and compare the latency results. In this post, I will show how to implement a Kalman filter in C++ using the Eigen library.

I use Microsoft Visual Studio 2022 Community to compile the code, which is free to download. I created a C++ project, and the first step is to get the Eigen library. One way is to go to the Eigen website, download the latest release, and save it to a folder. Then go to your project settings and add that path as an include directory (Eigen is header-only):

There is another way to get the Eigen library. In Visual Studio, go to Tools -> NuGet Package Manager -> Manage NuGet Packages for Solution. Then go to Browse and enter “eigen”:

The manager will find Eigen; I installed the Eigen3 package. Eigen is split into modules such as “Dense” and “Sparse”. I will use “Dense”, because the matrices used here are small and not at all sparse.

Now, if I tried to explain what the Kalman filter is in this post, the post would get very long, and believe me, you don’t want to learn estimation concepts from me 😊 So I leave the explanation of this important theoretical subject to the experts. I will simply show how I implemented a 6-state Kalman filter in C++ with the Eigen library and present the results. I can suggest a great free online book with a Python implementation of the Kalman filter, “Kalman and Bayesian Filters in Python” by Roger R. Labbe:


I used the “random” library to generate Gaussian (normal) distributed simulated data for the states and measurements. Adding the “Eigen” namespace is beneficial, since we will use data types and functions from the Eigen library extensively and I don’t want to prefix them with “Eigen::” every time. So the header section is:

#include <iostream>
#include <Eigen/Dense>
#include <random>
using namespace Eigen;

I want the code to be as parametric as possible, so I defined 7 constants:

#define T                   1       // second
#define MC_RUN              100     // Monte-Carlo run number
#define SIM_NUM             1000    // simulation run number

#define STATE_VECTOR        6       // length of the state vector
#define MEASUREMENT_VECTOR  3       // length of the measurement vector
#define PROCESS_NOISE_SIGMA 0.5     // process std_dev
#define MEAS_NOISE_SIGMA    5       // measurement std_dev

So there will be 6 states to be estimated and 3 measurements. The states are the 3-axis position and velocity of a body; the measurements are only the 3-axis positions. I assume the position measurements have a 5-meter standard deviation, and the model is a constant velocity model with a 0.5 process standard deviation. The sampling rate for the measurements is 1 Hz, the simulation runs for 1000 seconds, and there are 100 independent runs so the results can be averaged over the Monte Carlo runs. With these constant definitions it is easy to change the parameters and observe different results if needed.
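For reference, the model just described can be written in standard state-space notation (the symbols below are textbook notation, my addition, not from the post):

```latex
% state x = [p_x, v_x, p_y, v_y, p_z, v_z]^T, measurement z = [p_x, p_y, p_z]^T
x_{k+1} = F x_k + \Gamma v_k, \qquad v_k \sim \mathcal{N}(0,\, \sigma_v^2 I_3)
z_k = H x_k + w_k, \qquad\;\;\; w_k \sim \mathcal{N}(0,\, \sigma_w^2 I_3)
```

Here \(\sigma_v = 0.5\) is the process standard deviation and \(\sigma_w = 5\) is the measurement standard deviation from the constants above.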

The Eigen library has a Matrix class, which is defined in the Matrix.h header file. You can read the Eigen library manual; reading the code is also helpful. Matrix variables can be fixed-size (static) or dynamic-size. I will use static memory allocation instead of dynamic, because I plan to use the same code in the Vivado HLS (High-Level Synthesis) tool, where dynamic memory allocation is not available.

I defined 7 Matrix variables whose column number is SIM_NUM. These matrices keep the results of each Monte Carlo run during the simulation and are used to calculate the deviation from the truth.

Matrix<float, STATE_VECTOR,       SIM_NUM> x;
Matrix<float, STATE_VECTOR,       SIM_NUM> x_est;
Matrix<float, MEASUREMENT_VECTOR, SIM_NUM> meas;
Matrix<float, MEASUREMENT_VECTOR, SIM_NUM> meas_ms_error;
Matrix<float, MEASUREMENT_VECTOR, SIM_NUM> x_est_ms_error;
Matrix<float, MEASUREMENT_VECTOR, SIM_NUM> rms_xest;
Matrix<float, MEASUREMENT_VECTOR, SIM_NUM> rms_meas;

You can define 1-D Matrix variables (vectors) with the same Matrix class by simply setting the column number to 1:

Matrix<float, STATE_VECTOR,         1> x_init;              // initial state
Matrix<float, STATE_VECTOR,         1> gama;                // process noise gain
Matrix<float, STATE_VECTOR,         1> v;                   // to add process noise
Matrix<float, STATE_VECTOR,         1> X_pre;               // state prediction
Matrix<float, MEASUREMENT_VECTOR,   1> meas_pre;            // measurement prediction
Matrix<float, MEASUREMENT_VECTOR,   1> MR;                  // measurement residual

Other n×m Matrix variables can also be defined easily with the Matrix class:

Matrix<float, STATE_VECTOR,         STATE_VECTOR> F;        // state transition matrix
Matrix<float, STATE_VECTOR,         STATE_VECTOR> Q;        // process covariance matrix
Matrix<float, STATE_VECTOR,         STATE_VECTOR> P;        // state prediction covariance
Matrix<float, STATE_VECTOR,         STATE_VECTOR> P_pre;    // covariance matrix of the predicted state
Matrix<float, STATE_VECTOR,         MEASUREMENT_VECTOR> W;  // filter gain
Matrix<float, MEASUREMENT_VECTOR,   STATE_VECTOR> H;        // measurement transformation matrix
Matrix<float, MEASUREMENT_VECTOR,   MEASUREMENT_VECTOR> R;  // measurement covariance matrix
Matrix<float, MEASUREMENT_VECTOR,   MEASUREMENT_VECTOR> S;  // measurement prediction covariance

Assignment of a Matrix variable can be done in several ways; the comma-initializer syntax below seems easiest to me. For the constant velocity model, the state transition matrix F (with the state ordered as position/velocity per axis) is:

F << 1,T,0,0,0,0,
     0,1,0,0,0,0,
     0,0,1,T,0,0,
     0,0,0,1,0,0,
     0,0,0,0,1,T,
     0,0,0,0,0,1;

I created two normal distributions with different standard deviation parameters to add noise to the process and the measurements:

// Gaussian distributed errors
std::default_random_engine generator;
std::normal_distribution<float> distribution_process(0, PROCESS_NOISE_SIGMA);
std::normal_distribution<float> distribution_meas(0, MEAS_NOISE_SIGMA);

These require the “random” header, which is part of the C++ standard library.

Here is the simulated data generation part:

        // simulated data generation
        for (int j = 0; j < SIM_NUM - 1; j++) {
            float v1 = distribution_process(generator);
            float v2 = distribution_process(generator);
            float v3 = distribution_process(generator);
            float w1 = distribution_meas(generator);
            float w2 = distribution_meas(generator);
            float w3 = distribution_meas(generator);
            v(0, 0) = v1 * gama(0, 0);
            v(1, 0) = v1 * gama(1, 0);
            v(2, 0) = v2 * gama(2, 0);
            v(3, 0) = v2 * gama(3, 0);
            v(4, 0) = v3 * gama(4, 0);
            v(5, 0) = v3 * gama(5, 0);
            x.block<6, 1>(0, j + 1) = F * x.block<6, 1>(0, j) + v;
            meas(0, j + 1) = x(0, j + 1) + w1;
            meas(1, j + 1) = x(2, j + 1) + w2;
            meas(2, j + 1) = x(4, j + 1) + w3;
        }

The Kalman filter iterations, together with the error calculation, are:

        // Kalman filter iterations
        for (int j = 1; j < SIM_NUM - 3; j++) {
            X_pre = F * x_est.block<6, 1>(0, j);        // State prediction k+1
            meas_pre = H * X_pre;                       // Measurement prediction k+1
            MR = meas.block<3, 1>(0, j) - meas_pre;     // Measurement residual

            P_pre = F * P * F.transpose() + Q;          // State prediction covariance
            S = R + H * P_pre * H.transpose();          // Measurement prediction covariance
            W = P_pre * H.transpose() * S.inverse();    // Filter gain
            x_est.block<6, 1>(0, j + 1) = X_pre + W * MR;
            P = P_pre - W * S * W.transpose();

            // error calculations
            meas_ms_error(0, j) = (x(0, j) - meas(0, j)) * (x(0, j) - meas(0, j));
            meas_ms_error(1, j) = (x(2, j) - meas(1, j)) * (x(2, j) - meas(1, j));
            meas_ms_error(2, j) = (x(4, j) - meas(2, j)) * (x(4, j) - meas(2, j));

            rms_meas(0, j) += meas_ms_error(0, j);
            rms_meas(1, j) += meas_ms_error(1, j);
            rms_meas(2, j) += meas_ms_error(2, j);

            x_est_ms_error(0, j) = (x(0, j) - x_est(0, j + 1)) * (x(0, j) - x_est(0, j + 1));
            x_est_ms_error(1, j) = (x(2, j) - x_est(2, j + 1)) * (x(2, j) - x_est(2, j + 1));
            x_est_ms_error(2, j) = (x(4, j) - x_est(4, j + 1)) * (x(4, j) - x_est(4, j + 1));

            rms_xest(0, j) += x_est_ms_error(0, j);
            rms_xest(1, j) += x_est_ms_error(1, j);
            rms_xest(2, j) += x_est_ms_error(2, j);
        }

And finally, I saved the results to text files (closing each file when done):

    FILE* f0 = fopen("x_est_rms_error.txt", "w");
    for (int i = 1; i < SIM_NUM - 3; i++)
        fprintf(f0, "%f,%f,%f\n", rms_xest(0, i), rms_xest(1, i), rms_xest(2, i));
    fclose(f0);

    FILE* f1 = fopen("meas_rms_error.txt", "w");
    for (int i = 1; i < SIM_NUM - 3; i++)
        fprintf(f1, "%f,%f,%f\n", rms_meas(0, i), rms_meas(1, i), rms_meas(2, i));
    fclose(f1);

    FILE* f2 = fopen("meas.txt", "w");
    for (int i = 0; i < SIM_NUM - 3; i++)
        fprintf(f2, "%f,%f,%f\n", meas(0, i), meas(1, i), meas(2, i));
    fclose(f2);

    FILE* f3 = fopen("x_real.txt", "w");
    for (int i = 0; i < SIM_NUM - 3; i++)
        fprintf(f3, "%f,%f,%f,%f,%f,%f\n", x(0, i), x(1, i), x(2, i), x(3, i), x(4, i), x(5, i));
    fclose(f3);

    FILE* f4 = fopen("x_est.txt", "w");
    for (int i = 0; i < SIM_NUM - 3; i++)
        fprintf(f4, "%f,%f,%f,%f,%f,%f\n", x_est(0, i), x_est(1, i), x_est(2, i), x_est(3, i), x_est(4, i), x_est(5, i));
    fclose(f4);

The results show that our Kalman filter (actually a state estimator) works correctly and achieves better results than the measurement-only solution:

So from the last plot it is clear that the measurements have a position error of around 5 meters, while the Kalman filter estimate’s error converges to around 3 meters within a few seconds.

For the next post, I plan to use the same C++ code in the Vivado HLS tool and see the synthesis results. Wait for it !!!


Mehmet Burak AYKENAR

You can connect with me via LinkedIn: just send me an invitation.

