Posts

  • What is a Kalman filter?

    A Kalman filter is a powerful algorithm used in statistics and control theory for estimating the state of a system from a series of noisy measurements.
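
    As a minimal illustration of the predict–update cycle, here is a sketch with made-up values, estimating a scalar constant under a random-walk model:

    ```python
    import numpy as np

    # Minimal linear Kalman filter: estimate a scalar constant from
    # noisy measurements. All parameter values are illustrative.
    x_hat, P = 0.0, 1.0        # initial estimate and its variance
    Q, R = 1e-5, 0.1 ** 2      # process and measurement noise variances
    true_value = 0.5

    rng = np.random.default_rng(42)
    for _ in range(50):
        z = true_value + rng.normal(0.0, 0.1)   # noisy measurement
        P = P + Q                   # predict: only the uncertainty grows
        K = P / (P + R)             # Kalman gain
        x_hat = x_hat + K * (z - x_hat)         # update with innovation
        P = (1.0 - K) * P
    print(x_hat)                    # close to 0.5
    ```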

  • How to tune a Kalman filter?

    Tuning a Kalman filter involves adjusting its parameters, specifically the process and measurement noise covariances, to optimize performance. There are several key points and methods for tuning a Kalman filter.
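
    As a minimal, illustrative sketch of what tuning changes (scalar random-walk filter, made-up values): the ratio of the process noise Q to the measurement noise R decides whether the filter trusts the model (smooth but laggy estimates) or the measurements (responsive but noisy estimates).

    ```python
    import numpy as np

    def kf_1d(zs, Q, R, x0=0.0, P0=1.0):
        """Scalar random-walk Kalman filter; returns the estimates."""
        x, P, out = x0, P0, []
        for z in zs:
            P += Q                  # predict
            K = P / (P + R)         # gain
            x += K * (z - x)        # update
            P *= 1.0 - K
            out.append(x)
        return np.array(out)

    rng = np.random.default_rng(0)
    truth = np.sin(np.linspace(0, 4 * np.pi, 200))
    zs = truth + rng.normal(0, 0.3, size=truth.shape)

    smooth = kf_1d(zs, Q=1e-4, R=0.3 ** 2)  # trusts the model: smooth, laggy
    agile = kf_1d(zs, Q=1e-1, R=0.3 ** 2)   # trusts the data: fast, noisy
    ```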

  • Joseph form

    One of the biggest challenges in implementing Kalman filters is that numerical errors can cause the covariance matrix to lose its symmetry and positive definiteness, at which point the filter diverges and no longer provides a usable estimate. This happens especially in system models with many degrees of freedom. The solution is to use a numerically more stable equation. For the update equation this is the so-called Joseph form, which has a higher computation time but is less sensitive to numerical errors.
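
    For reference, using the usual conventions (Kalman gain \( \mathbf{K}(k) \), measurement matrix \( \mathbf{H}(k) \), measurement noise covariance \( \mathbf{R}(k) \); these symbols are not defined in the excerpt above), the standard covariance update \[ \mathbf{P}(k|k) = (\mathbf{I} - \mathbf{K}(k) \mathbf{H}(k)) \, \mathbf{P}(k|k-1) \] is replaced by the Joseph form \[ \mathbf{P}(k|k) = (\mathbf{I} - \mathbf{K}(k) \mathbf{H}(k)) \, \mathbf{P}(k|k-1) \, (\mathbf{I} - \mathbf{K}(k) \mathbf{H}(k))^{T} + \mathbf{K}(k) \, \mathbf{R}(k) \, \mathbf{K}(k)^{T} \] which, as a sum of a symmetric product and a positive semidefinite term, preserves symmetry and positive definiteness for any gain.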

  • Pairs trading

    A Kalman filter can be used in a trading strategy that is known as “Pairs Trading” or “Statistical Arbitrage Trading”. Pairs trading aims to capitalize on the mean-reverting tendencies of a specific portfolio. The foundational assumption of this strategy is that the spread of co-integrated instruments is inherently mean-reverting. Therefore, significant deviations from the mean are viewed as potential opportunities for arbitrage.
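
    As a rough sketch of the signal logic (synthetic data, made-up thresholds; a Kalman filter is commonly used to track the hedge ratio adaptively, which is kept fixed here for brevity):

    ```python
    import numpy as np

    # Two synthetic co-integrated price series sharing a common random walk.
    rng = np.random.default_rng(1)
    common = np.cumsum(rng.normal(0, 1, 500))
    a = 100 + common + rng.normal(0, 0.5, 500)
    b = 50 + 0.5 * common + rng.normal(0, 0.5, 500)

    beta = 2.0                        # hedge ratio (fixed here; a Kalman
                                      # filter could track it over time)
    spread = a - beta * b
    z = (spread - spread.mean()) / spread.std()

    long_spread = z < -2.0    # spread unusually low: buy A, sell beta*B
    short_spread = z > 2.0    # spread unusually high: sell A, buy beta*B
    ```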

  • Unscented Kalman Filter

    For the state estimation of nonlinear systems, a variant of the Kalman filter was published shortly after Rudolf Kalman's original work: the nonlinear system is linearized around the current state estimate, which requires deriving the Jacobians of the system equations analytically. For many years this procedure, known as the Extended Kalman Filter (EKF), was the standard solution for the nonlinear systems that occur frequently in practice. An essential disadvantage of this method is that the filter can diverge easily: since the estimate never exactly matches the true state, the linearization is always performed at a slightly wrong point. With unfavorable nonlinearities the estimate drifts further from the true state, so the linearization point becomes even worse, and after a few steps the filter diverges completely.
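
    Instead of linearizing, the UKF propagates a small set of deterministically chosen sigma points through the nonlinearity. A minimal sketch of this unscented transform (basic \( \kappa \) parameterization, illustrative values):

    ```python
    import numpy as np

    def sigma_points(x, P, kappa=1.0):
        """Basic unscented transform: 2n+1 sigma points with weights."""
        n = x.size
        S = np.linalg.cholesky((n + kappa) * P)    # matrix square root
        pts = ([x] + [x + S[:, i] for i in range(n)]
                   + [x - S[:, i] for i in range(n)])
        w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
        w[0] = kappa / (n + kappa)
        return np.array(pts), w

    # Propagate through a nonlinear function and recover mean/covariance.
    f = lambda x: np.array([x[0] * np.cos(x[1]), x[0] * np.sin(x[1])])
    pts, w = sigma_points(np.array([1.0, 0.5]), np.diag([0.01, 0.04]))
    ys = np.array([f(p) for p in pts])
    mean = w @ ys
    cov = (ys - mean).T @ np.diag(w) @ (ys - mean)
    ```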

  • Cubature Kalman Filter

    The Cubature Kalman Filter (CKF) is the newest representative of the sigma-point methods. The selection of sigma points in the CKF differs slightly from that of the Unscented Kalman Filter (UKF) and is based on the cubature rule derived by Arasaratnam and Haykin [1]. Like the UKF, the CKF follows the idea that it is easier to approximate a probability distribution than to linearize a nonlinear function.
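
    A sketch of the cubature point set (the third-degree spherical–radial rule of [1]: 2n points with identical weights):

    ```python
    import numpy as np

    def cubature_points(x, P):
        """Third-degree cubature rule: 2n equally weighted points."""
        n = x.size
        S = np.linalg.cholesky(P)                   # P = S S^T
        xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])
        pts = x[:, None] + S @ xi                   # shape (n, 2n)
        w = np.full(2 * n, 1.0 / (2 * n))           # weights sum to 1
        return pts, w
    ```

    In contrast to the UKF, no tuning parameter such as \( \kappa \) appears, and all weights are positive.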

  • Estimating the state of a dc motor

    This example focuses on estimating the angular position \( \theta \), angular velocity \( \dot{\theta} \) and armature current \( i \) of a DC motor with a linear Kalman filter. When modeling DC motors, it should be noted that nonlinear models are certainly superior to linear ones; for didactic purposes, however, the following widely used linear model is sufficient.
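
    A possible state-space formulation of such a model, with state \( \mathbf{x} = (\theta, \dot{\theta}, i)^T \) and the armature voltage as input; the parameter values below are illustrative placeholders, not taken from the post:

    ```python
    import numpy as np

    # Illustrative DC motor parameters (placeholders).
    J, b = 3.2e-6, 3.5e-6     # rotor inertia, viscous friction
    Kt = Ke = 0.0274          # torque constant = back-EMF constant
    R, L = 4.0, 2.75e-6       # armature resistance and inductance

    # x_dot = A x + B u with x = [theta, theta_dot, i], u = voltage
    A = np.array([[0.0, 1.0,     0.0],
                  [0.0, -b / J,  Kt / J],
                  [0.0, -Ke / L, -R / L]])
    B = np.array([[0.0], [0.0], [1.0 / L]])
    C = np.array([[1.0, 0.0, 0.0]])   # e.g. only the angle is measured
    ```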

  • Multiple Model State Estimation

    Central to the performance of a model-based state estimator is how well the model matches the process being observed. In practice, the question arises whether the process always behaves exactly according to one model or switches between different models. The solution is to account for this by using multiple models; in the literature this approach is called multiple model estimation. The challenge lies both in how this is done exactly, i.e. with only one model active at a time or with a mixture of several models, and in the rules by which the process switches between the models.
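
    One common variant runs a bank of filters in parallel and updates a probability for each model from its innovation. A sketch of that probability update (the IMM algorithm additionally mixes the state estimates, which is omitted here):

    ```python
    import numpy as np
    from scipy.stats import multivariate_normal

    def update_model_probs(probs, innovations, covariances):
        """Bayesian model-probability update from each filter's innovation."""
        likelihoods = np.array([
            multivariate_normal.pdf(nu, mean=np.zeros(nu.size), cov=S)
            for nu, S in zip(innovations, covariances)
        ])
        posterior = probs * likelihoods
        return posterior / posterior.sum()

    probs = np.array([0.5, 0.5])                 # two candidate models
    innovations = [np.array([0.1]), np.array([2.3])]
    covariances = [np.eye(1), np.eye(1)]
    print(update_model_probs(probs, innovations, covariances))
    ```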

  • Normalized Innovation Squared (NIS)

    The Normalized Innovation Squared (NIS) metric makes it possible to check whether a Kalman filter is consistent, based on the measurement residual \( \nu(k) \) and the associated innovation covariance matrix \( \mathbf{S}(k) \).
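
    A minimal sketch of the check (standard chi-square bounds; dimensions and values are illustrative):

    ```python
    import numpy as np
    from scipy.stats import chi2

    def nis(nu, S):
        """Normalized Innovation Squared: nu^T S^{-1} nu."""
        return float(nu.T @ np.linalg.solve(S, nu))

    # For a consistent filter the NIS is chi-square distributed with
    # dim(nu) degrees of freedom; test against a 95% two-sided interval.
    m = 2
    lo, hi = chi2.ppf([0.025, 0.975], df=m)
    nu = np.array([0.3, -0.1])
    S = np.diag([0.2, 0.2])
    print(lo <= nis(nu, S) <= hi)
    ```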

  • Root Mean Square Error (RMSE)

    To evaluate the performance of state estimators, the estimation error \( \tilde{\mathbf{x}}(k) = \mathbf{x}(k) - \mathbf{\hat{x}}(k) \) is examined. The root mean square error (RMSE), a widely used quality measure, is suitable for this purpose. The basis is the estimation error \( \tilde{\mathbf{x}}(k) \) for each time step \( k \in \{1, \dots, K\} \).

    In simulation

    For a simulation with a length of \( K \) time increments, the RMSE is averaged over \( N \) Monte Carlo runs in order to achieve high statistical significance \[ \text{RMSE}(\tilde{\mathbf{x}}(k)) = \sqrt{\frac{1}{N} \sum^{N}_{i=1} \left( \tilde{x}^i_1(k)^2 + \dots + \tilde{x}^i_n(k)^2 \right)} \] where \( n = \dim(\tilde{\mathbf{x}}(k)) \).
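
    The same computation as a short sketch (array shapes assumed as noted):

    ```python
    import numpy as np

    def rmse(err):
        """RMSE per time step; err has shape (N, K, n):
        Monte Carlo run i, time step k, state dimension n."""
        return np.sqrt(np.mean(np.sum(err ** 2, axis=2), axis=0))

    err = np.random.default_rng(0).normal(size=(100, 50, 3))
    print(rmse(err).shape)   # one RMSE value per time step k
    ```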

  • Normalized Estimation Error Squared (NEES)

    A desired property of a state estimator is that it indicates the quality of its estimate correctly. This ability is called consistency and has a direct impact on the estimation error \( \tilde{\mathbf{x}}(k) \): an inconsistent state estimator does not provide an optimal result. Correctly indicating the estimation quality matters because an increase in sample size leads to a growth in information content, and the state estimate \( \hat{\mathbf{x}}(k) \) shall be as close as possible to the true state \( \mathbf{x}(k) \). This results from the requirement that a state estimator be unbiased. Mathematically, this is expressed by the expected value of the estimation error \( \tilde{\mathbf{x}}(k) \) being zero: \[ E [ \mathbf{x}(k) - \hat{\mathbf{x}}(k) ] = E [ \tilde{\mathbf{x}}(k) ] \overset{!}{=} 0 \]
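
    The NEES itself, not shown in this excerpt, weights the estimation error with the filter's own covariance \( \mathbf{P}(k) \); for a consistent estimator it follows a chi-square distribution with \( \dim(\mathbf{x}) \) degrees of freedom (the standard definition). A minimal sketch:

    ```python
    import numpy as np
    from scipy.stats import chi2

    def nees(x_true, x_hat, P):
        """Normalized Estimation Error Squared: e^T P^{-1} e."""
        e = x_true - x_hat
        return float(e.T @ np.linalg.solve(P, e))

    n = 2
    lo, hi = chi2.ppf([0.025, 0.975], df=n)   # 95% two-sided interval
    eps = nees(np.array([1.0, 0.5]), np.array([0.9, 0.6]),
               np.diag([0.05, 0.05]))
    print(lo <= eps <= hi)
    ```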

  • Discretization of linear state-space model

    In practice, the discretization of the continuous-time state-space equations is of particular importance.
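
    A standard zero-order-hold discretization can be computed with a single matrix exponential of an augmented matrix; a sketch with an illustrative constant-velocity model:

    ```python
    import numpy as np
    from scipy.linalg import expm

    # Continuous model x_dot = A x + B u (constant-velocity example).
    A = np.array([[0.0, 1.0],
                  [0.0, 0.0]])
    B = np.array([[0.0],
                  [1.0]])
    T = 0.1   # sample time

    # exp([[A, B], [0, 0]] * T) = [[F, G], [0, I]] under zero-order hold.
    n, m = A.shape[0], B.shape[1]
    M = np.zeros((n + m, n + m))
    M[:n, :n], M[:n, n:] = A, B
    Md = expm(M * T)
    F = Md[:n, :n]    # discrete system matrix, e^{AT}
    G = Md[:n, n:]    # discrete input matrix, integral of e^{As} B ds
    ```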

subscribe via RSS