Algorithms for Positioning with Nonlinear Measurement Models and Heavy-tailed and Asymmetric Distributed Additive Noise

    Research output: Book/Report › Doctoral thesis › Collection of Articles



    Determining the unknown position of a user equipment from measurements obtained from transmitters with known locations generally results in a nonlinear measurement function. The measurement errors can have a heavy-tailed and/or skewed distribution, and the likelihood function can be multimodal.

    A positioning problem with a nonlinear measurement function is often solved by a nonlinear least squares (NLS) method or, when filtering is desired, by an extended Kalman filter (EKF). However, these methods cannot capture multiple peaks of the likelihood function and do not address heavy-tailedness or skewness. Approximating the likelihood by a Gaussian mixture (GM) and using a GM filter (GMF) solves this problem. The drawback is that a precise approximation requires a large number of GM components, which makes the approach unsuitable for real-time positioning on small mobile devices.
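    The NLS baseline described above can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the transmitter positions, noise level, and initial guess are hypothetical, and `scipy.optimize.least_squares` stands in for a generic NLS solver applied to the nonlinear range-measurement function.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical setup: four transmitters with known positions (metres)
anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
true_pos = np.array([30.0, 60.0])

# Simulated range measurements with additive Gaussian noise (illustrative sigma)
rng = np.random.default_rng(0)
ranges = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0.0, 0.5, 4)

def residuals(p):
    # Nonlinear measurement function h(p): Euclidean distance to each transmitter
    return np.linalg.norm(anchors - p, axis=1) - ranges

# NLS position estimate from an arbitrary initial guess
est = least_squares(residuals, x0=np.array([50.0, 50.0])).x
```

    With well-behaved Gaussian noise the NLS estimate lands close to the true position; the abstract's point is that this breaks down when the likelihood is multimodal or the noise is heavy-tailed.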

    This thesis studies a generalised version of the Gaussian mixture, called the GGM, to capture multiple peaks. It relaxes the GM's restriction to non-negative component weights. The analysis shows that, compared with the GM, the GGM allows a significant reduction in the number of Gaussian components required to approximate the measurement likelihood of a transmitter with an isotropic antenna. The GGM therefore facilitates real-time positioning on small mobile devices. In tests in a cellular telephone network and in an ultra-wideband network, the GGM and its filter provide significantly better positioning accuracy than the NLS and the EKF.

    For positioning with nonlinear measurement models and heavy-tailed, skewed measurement errors, an Expectation Maximisation (EM) algorithm is studied. The EM algorithm is compared with a standard NLS algorithm in simulations and in tests with realistic emulated data from a Long Term Evolution (LTE) network. The EM algorithm is more robust to measurement outliers. If the errors in the training and positioning data are similarly distributed, the EM algorithm yields significantly better position estimates than the NLS method. The improvement in accuracy and precision comes at the cost of moderately higher computational demand and greater vulnerability to changes in the error distribution between training and positioning data. This vulnerability stems from the fact that the skew-t distribution (used in EM) has four parameters while the normal distribution (used in NLS) has only two; the skew-t therefore fits the pattern in the training data more closely. On the downside, if the patterns in the training and positioning data differ, the skew-t fit is not necessarily better than the normal fit, which weakens the EM algorithm's positioning accuracy and precision. This reduced generalisability due to overfitting is a basic principle of machine learning.
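    The four-parameter skew-t versus two-parameter normal contrast can be made concrete. The sketch below uses an Azzalini-type skew-t density (location, scale, shape, and degrees of freedom) built from `scipy.stats.t`; this is an assumed standard parameterisation for illustration, not necessarily the exact form used in the thesis.

```python
import numpy as np
from scipy.stats import t, norm

def skew_t_pdf(x, loc=0.0, scale=1.0, shape=2.0, df=4.0):
    # Azzalini-type skew-t density with 4 parameters:
    # loc (location), scale, shape (skewness), df (tail heaviness).
    # A positive shape skews mass to the right; small df gives heavy tails.
    z = (x - loc) / scale
    w = shape * z * np.sqrt((df + 1.0) / (df + z * z))
    return 2.0 / scale * t.pdf(z, df) * t.cdf(w, df + 1.0)

# Heavier right tail than the 2-parameter normal at the same point
tail_skew_t = skew_t_pdf(5.0)
tail_normal = norm.pdf(5.0)
```

    The extra two parameters are exactly what lets the skew-t track heavy-tailed, asymmetric training residuals closely, and also what makes it overfit when the positioning-data error pattern differs.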

    This thesis additionally shows how the parameters of heavy-tailed and skewed error distributions can be fitted to training data. It furthermore gives an overview of other parametric methods for solving the positioning problem, how they handle and summarise training data, how they perform positioning, and how they compare with nonparametric methods. These methods are analysed in extensive tests in a wireless area network, which reveal the strengths and weaknesses of each method.
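    Fitting a heavy-tailed distribution to training residuals can be sketched with maximum likelihood. The example below fits a Student's t (a symmetric special case of the skew-t) to synthetic outlier-contaminated residuals via `scipy.stats.t.fit`; the contamination fractions and noise levels are invented for illustration and are not data from the thesis.

```python
import numpy as np
from scipy.stats import t, norm

# Synthetic "training residuals": mostly Gaussian, with 5% large outliers
rng = np.random.default_rng(1)
errors = np.concatenate([rng.normal(0.0, 1.0, 950),
                         rng.normal(0.0, 10.0, 50)])

# Maximum-likelihood fit of a heavy-tailed t distribution (df, loc, scale)
df, loc, scale = t.fit(errors)

# Two-parameter normal fit for comparison
mu, sigma = norm.fit(errors)

# Log-likelihoods of each fitted model on the training data
ll_t = float(np.sum(t.logpdf(errors, df, loc, scale)))
ll_norm = float(np.sum(norm.logpdf(errors, mu, sigma)))
```

    On heavy-tailed residuals the fitted t attains a markedly higher training log-likelihood than the normal, and the estimated degrees of freedom come out small, confirming the heavy tails; the same comparison on held-out data is what exposes overfitting.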
    Original language: English
    Publisher: Tampere University of Technology
    Number of pages: 82
    ISBN (Electronic): 978-952-15-3784-4
    ISBN (Print): 978-952-15-3769-1
    Publication status: Published - 26 Aug 2016
    Publication type: G5 Doctoral dissertation (articles)

    Publication series

    Name: Tampere University of Technology. Publication
    ISSN (Print): 1459-2045


