## Measurements, errors and measurement models

No measurement result is exact

No measurement can be made without error. The measurand (the quantity intended to be measured) is $$\label{eq:measurement_error} Y = y - E \;,$$ the difference between $$y$$, the measurement result, and $$E$$, an unknown error. Although $$E$$ is never known exactly, some information will be available about the likely distribution of its values. This can be used to assess the uncertainty of $$y$$ as an estimate of $$Y$$.

An equation relates the measurand to all the influence quantities

In general there will be a number of measured, or otherwise approximately known, quantities $$X_1,X_2,\cdots,X_l$$ that influence a measurement (including any undesirable effects that cannot be entirely eliminated from the measurement procedure). The relationship between these influences and the measurand can be expressed as an equation $$\label{eq:measurement_quantity_eqn} Y = f(X_1,X_2,\cdots,X_l) \; ,$$

which may be called a 'measurement equation' or 'measurement model'.

If $$X_1,X_2,\cdots,X_l$$ were known, equation \eqref{eq:measurement_quantity_eqn} would yield the measurand. But in practice only estimates $$x_1,x_2,\cdots,x_l$$ are available. So instead $$\label{eq:measurement_estimate_eqn} y = f(x_1,x_2,\cdots,x_l) \;$$ is obtained. The uncertainty of $$y$$ as an estimate of $$Y$$ then depends on the uncertainty of each estimate $$x_1,x_2,\cdots,x_l$$ and on the function $$f()$$.
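As a minimal sketch (the model and the numbers here are hypothetical), a measurement equation can be evaluated directly, with influence estimates substituted for the influence quantities:

```python
# Hypothetical measurement model: a resistance measurand R = V / I,
# where V (voltage) and I (current) are the influence quantities.
def f(V, I):
    return V / I

v, i = 5.07, 0.0102   # measurement results: estimates of V and I
r = f(v, i)           # estimate of the measurand, y = f(x1, x2)
```

The estimate `r` inherits uncertainty from both `v` and `i`; quantifying that is the subject of the next section.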

The measurement model can be defined by a sequence of equations

Measurements usually comprise a number of steps, in which data is acquired or processed to obtain intermediate results. It can be difficult to express these steps as a single equation; it is often more natural to express the relationship between $$Y$$ and $$X_1,X_2,\cdots,X_l$$ as a sequence of equations.

Formally, the $$i^\mathrm{th}$$ step of a measurement model can be represented by an equation that obtains an intermediate result $$X_i$$ from a particular set of inputs $$\Lambda_i$$, which may include intermediate results from previous steps $$\label{eq:multi_eqn_step} X_i = f_i(\Lambda_i)\;.$$ If there are $$l$$ influences and $$m-l$$ intermediate steps, then $$Y = X_m$$ is obtained after evaluating $$X_i = f_i(\Lambda_i)\;$$ for $$i = l+1,\cdots, m$$.
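The indexing scheme above can be sketched in a few lines of code (the model and numbers are hypothetical): with $$l = 2$$ influences and $$m = 4$$, evaluating the steps for $$i = l+1, \cdots, m$$ yields the result $$y = x_m$$.

```python
# Hypothetical multi-step model with l = 2 influences and m = 4:
#   X3 = f3(X1, X2) = X1 + X2     (intermediate result)
#   Y = X4 = f4(X3, X2) = X3 * X2
x = {1: 1.5, 2: 2.0}            # influence estimates x1, x2
steps = {
    3: lambda x: x[1] + x[2],   # f3, with Lambda_3 = {x1, x2}
    4: lambda x: x[3] * x[2],   # f4, with Lambda_4 = {x3, x2}
}
for i in range(3, 5):           # i = l+1, ..., m
    x[i] = steps[i](x)
y = x[4]                        # y = x_m
```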

## The GUM method of uncertainty propagation

Properties of the input quantities are propagated through the measurement equation

To make a statement about the uncertainty of $$y$$ as an estimate of $$Y$$, the GUM uses two properties of $$y$$: the standard uncertainty $$u(y)$$ and the effective number of degrees of freedom $$\nu(y)$$.

Different mathematical procedures are used to evaluate $$u(y)$$ and $$\nu(y)$$. The Law of Propagation of Uncertainty (LPU) takes the input uncertainties associated with $$x_1,x_2,\cdots,x_l$$ and propagates them through the measurement equation to obtain $$u(y)$$. The Welch-Satterthwaite (WS) equation propagates the input degrees of freedom $$\nu(x_1),\nu(x_2),\cdots,\nu(x_l)$$ through $$f()$$ to obtain $$\nu(y)$$.

The Law of Propagation of Uncertainty

When a measurement model is expressed as a single equation, the LPU may be written as $$\label{eq:lpu_single} u(y) = \left[ \sum_{i=1}^{l} \sum_{j=1}^{l} u_i(y) \, r(x_i,x_j) \, u_j(y) \right]^{1/2} \;,$$ where the component of uncertainty in $$y$$ due to uncertainty in $$x_i$$ is $$\label{eq:u_component} u_i(y) = \frac{\partial f}{\partial x_i} u(x_i) \;$$ and $$r(x_i,x_j)$$ is the correlation coefficient between the estimates $$x_i$$ and $$x_j$$. Strictly speaking, the partial derivative in \eqref{eq:u_component} is evaluated at the estimates: $$\left. \frac{\partial f}{\partial x_i} = \frac{\partial f}{\partial X_i} \right|_{X_1 = x_1, X_2 = x_2, \cdots} \;.$$
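A direct transcription of \eqref{eq:lpu_single} might look as follows (a minimal sketch; the model and values in the usage example are hypothetical):

```python
import numpy as np

def lpu(df_dx, u_x, r):
    """u(y) from sensitivity coefficients df/dx_i, standard
    uncertainties u(x_i), and the correlation matrix r(x_i, x_j)."""
    u_i = np.asarray(df_dx) * np.asarray(u_x)   # components u_i(y)
    return float(np.sqrt(u_i @ np.asarray(r) @ u_i))

# y = x1 + x2 with independent estimates: u(y) = sqrt(0.3**2 + 0.4**2)
u_y = lpu([1.0, 1.0], [0.3, 0.4], np.eye(2))
```

With perfectly correlated inputs (`r` a matrix of ones), the same call would return the arithmetic sum of the components, 0.7, as expected from \eqref{eq:lpu_single}.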

The LPU for multistep models

When the measurement model is expressed as a sequence of equations, the component of uncertainty in the result $$y = x_m$$ due to uncertainty in the influence estimate $$x_j$$ is obtained after evaluating $$u_j(x_i) = \sum_{x_k \in \Lambda_i} \frac{\partial f_i}{\partial x_k} u_j(x_k) \;,$$ for $$i = l+1,\cdots, m$$.
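This recursion can be sketched numerically (a hypothetical model; the components obtained step by step must agree with those from the equivalent single-equation model):

```python
# Hypothetical two-step model (l = 2, m = 4): x3 = x1 + x2, y = x4 = x3 * x2
x1, x2 = 1.5, 2.0
u1, u2 = 0.1, 0.2                  # standard uncertainties u(x1), u(x2)
x3 = x1 + x2

# start with u_j(x_k) = u(x_j) when k == j, and zero otherwise
u_1 = {1: u1, 2: 0.0}
u_2 = {1: 0.0, 2: u2}
for u_j in (u_1, u_2):
    u_j[3] = 1.0 * u_j[1] + 1.0 * u_j[2]   # df3/dx1 = df3/dx2 = 1
    u_j[4] = x2 * u_j[3] + x3 * u_j[2]     # df4/dx3 = x2, df4/dx2 = x3

# cross-check against the single-equation form y = (x1 + x2) * x2:
# df/dx1 = x2 and df/dx2 = x1 + 2*x2, so u_1(y) = x2*u1, u_2(y) = (x1 + 2*x2)*u2
```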

The Welch-Satterthwaite equation

The WS equation may be expressed as $$\label{eq:ws_real} \frac{ u(y)^4 }{\nu(y)} = \sum_{i=1}^l \frac{u_i(y)^4}{ \nu(x_i) }\;.$$ It is important to note that, for the WS calculation to be valid, all input estimates with finite degrees of freedom should be mutually independent.
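Rearranged for $$\nu(y)$$, the WS equation can be sketched as follows (the numbers in the usage example are hypothetical):

```python
def welch_satterthwaite(u_y, components, dof):
    """nu(y) from u(y), the components u_i(y), and the input dof nu(x_i),
    assuming mutually independent input estimates."""
    return u_y**4 / sum(u_i**4 / nu_i for u_i, nu_i in zip(components, dof))

# two independent components with 9 and 4 degrees of freedom
u_i = [0.3, 0.4]
u_y = (0.3**2 + 0.4**2) ** 0.5
nu_y = welch_satterthwaite(u_y, u_i, [9, 4])
```

The result lies between the smaller input degrees of freedom and their sum, as is typical of the WS calculation.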

## Extended forms of LPU and WS for complex quantities

Extended forms of the LPU and WS equation are needed for complex quantities

When measurements of complex quantities are made, different forms of the LPU and the WS equation must be used.

As before, the relationship between a measurand and a set of complex influence quantities can be expressed as a model in which all quantities are now complex. This may take the form of a single equation $$\label{eq:measurement_quantity_eqn_complex} Y = f(X_1,X_2,\cdots,X_l) \; ,$$ or a sequence of equations (for $$i = l+1,\cdots, m$$) $$X_i = f_i(\Lambda_i)\;.$$

The uncertainty of a complex measurement result depends on the uncertainty of the real and imaginary components of that estimate and on the correlation between them. This is conveniently represented by a symmetric $$2 \times 2$$ covariance matrix $$\mathbf{v}(y) = \begin{bmatrix} u^2(y_\mathrm{re}) & u(y_\mathrm{re},y_\mathrm{im})\\ u(y_\mathrm{re},y_\mathrm{im}) & u^2(y_\mathrm{im}) \end{bmatrix} \;,$$ where the covariance between the real and imaginary components $$u(y_\mathrm{re},y_\mathrm{im}) = u(y_\mathrm{re}) r(y_\mathrm{re},y_\mathrm{im}) u(y_\mathrm{im}) \; .$$
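For illustration (all values hypothetical), the covariance matrix can be assembled from the two standard uncertainties and the correlation coefficient:

```python
import numpy as np

u_re, u_im = 0.2, 0.5              # u(y_re), u(y_im)
r_ri = -0.3                        # r(y_re, y_im)
cov = u_re * r_ri * u_im           # covariance u(y_re, y_im)
v_y = np.array([[u_re**2, cov],
                [cov,     u_im**2]])
```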

Single equation measurement models

For a single measurement equation, the uncertainty covariance matrix can be obtained as the matrix sum of products $$\label{eq:lpu_complex} \mathbf{v}(y) = \sum_{i=1}^l \sum_{j=1}^l \mathbf{u}_i(y) \mathbf{r}(x_i,x_j) \mathbf{u}_j(y)\; ,$$ where the component of uncertainty in $$y$$ due to uncertainty in $$x_j$$ is a $$2 \times 2$$ matrix \begin{align} \mathbf{u}_j(y) &= \left[ \frac{\partial f}{\partial x_j} \right] \mathbf{u}(x_j) \\ &= \begin{bmatrix} \frac{\partial f_\mathrm{re}}{\partial x_{j \cdot \mathrm{re}}} & \frac{\partial f_\mathrm{re}}{\partial x_{j \cdot \mathrm{im}}}\\ \frac{\partial f_\mathrm{im}}{\partial x_{j \cdot \mathrm{re}}} & \frac{\partial f_\mathrm{im}}{\partial x_{j \cdot \mathrm{im}}} \end{bmatrix} \begin{bmatrix} u(x_{j\cdot \mathrm{re}}) & 0 \\ 0 & u(x_{j\cdot \mathrm{im}}) \\ \end{bmatrix} \; \end{align} and the matrix $$\label{eq:r_matrix_complex} \mathbf{r}(x_i,x_j) = \begin{bmatrix} r(x_{i \cdot \mathrm{re} }, x_{j \cdot \mathrm{re} }) & r(x_{i \cdot \mathrm{re} }, x_{j \cdot \mathrm{im} })\\ r(x_{i \cdot \mathrm{im} }, x_{j \cdot \mathrm{re} }) & r(x_{i \cdot \mathrm{im} }, x_{j \cdot \mathrm{im} }) \end{bmatrix} \;$$ contains the correlation coefficients between the components of the complex quantities.
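A numerical sketch of \eqref{eq:lpu_complex} (the model and values are hypothetical): for a complex product, the Jacobian matrices $$[\partial f/\partial x_j]$$ follow directly from the real and imaginary parts of $$f$$, and with independent inputs only the $$i = j$$ terms of the double sum survive.

```python
import numpy as np

# Hypothetical single-equation model: Y = X1 * X2 for complex X1, X2.
# With f_re = x1re*x2re - x1im*x2im and f_im = x1re*x2im + x1im*x2re,
# the 2x2 Jacobian matrices [df/dx_j] are:
x1, x2 = 1.0 + 2.0j, 3.0 - 1.0j
J1 = np.array([[x2.real, -x2.imag],
               [x2.imag,  x2.real]])        # [df/dx1]
J2 = np.array([[x1.real, -x1.imag],
               [x1.imag,  x1.real]])        # [df/dx2]

U1 = J1 @ np.diag([0.1, 0.2])               # u_1(y): u(x1_re), u(x1_im)
U2 = J2 @ np.diag([0.05, 0.05])             # u_2(y): u(x2_re), u(x2_im)

# independent inputs: each r(x_j, x_j) is the identity, cross terms vanish
v_y = U1 @ U1.T + U2 @ U2.T
```

The result is a symmetric $$2 \times 2$$ covariance matrix, as in the previous section.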

Components of uncertainty for multi-step measurement models

Calculation of the component-of-uncertainty matrices for a multi-step model is straightforward. The component of uncertainty in $$y = x_m$$ due to uncertainty in the estimate $$x_j$$ is obtained after evaluating $$\mathbf{u}_j(x_i) = \sum_{x_k \in \Lambda_i} \left[ \frac{\partial f_i}{\partial x_k} \right] \mathbf{u}_j(x_k) \;$$ for $$i = l+1, \cdots, m$$.

Degrees of freedom when influence estimates are independent

To evaluate the number of effective degrees of freedom, we must take into account the correlation between estimates of the real and imaginary components of each influence. The following method assumes, however, that the estimates of different influence quantities are independent.

For each influence $$x_j$$, we calculate the symmetric $$2 \times 2$$ matrix $$\label{eq:w_i_matrix_complex} \mathbf{w}_j = \mathbf{u}_j(y) \, \mathbf{r}(x_j,x_j) \, \mathbf{u}_j^\prime(y) \;,$$ where the prime symbol indicates a transposed matrix and $$\mathbf{r}(x_j,x_j)$$, defined by \eqref{eq:r_matrix_complex}, contains the correlation coefficients between $$x_{j \cdot \mathrm{re}}$$ and $$x_{j \cdot \mathrm{im}}$$. From these matrices the following sums are obtained \begin{align*} A &= \Big(\sum_j w_{j \cdot 11}\Big)^2 \\ D & = \sum_j w_{j \cdot 11} \sum_j w_{j \cdot 22} + \Big(\sum_j w_{j \cdot 12}\Big)^2 \\ F &= \Big(\sum_j w_{j \cdot 22}\Big)^2 \end{align*} and \begin{align*} a &= \sum_j w_{j \cdot 11}^2/\nu_j \\ d & = \sum_j ( w_{j \cdot 11} w_{j \cdot 22} + w_{j \cdot 12}^2)/\nu_j \\ f &= \sum_j w_{j \cdot 22}^2 /\nu_j \;. \end{align*} Finally, the effective degrees of freedom associated with the uncertainty in the estimate of $$Y$$ is $$\nu(y) = \frac{2A + D + 2F}{2a + d + 2f} \;.$$
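This calculation can be sketched as follows (the matrices and degrees of freedom in the usage example are hypothetical). The sums $$A$$, $$D$$ and $$F$$ are evaluated as squared sums over the influences, so the result agrees term-by-term with the generalised formula \eqref{eq:ws_gen_complex} given later.

```python
import numpy as np

def dof_complex(w_list, nu_list):
    """nu(y) from the 2x2 matrices w_j and the input dof nu_j,
    assuming mutually independent influence estimates."""
    w11 = np.array([w[0, 0] for w in w_list])
    w12 = np.array([w[0, 1] for w in w_list])
    w22 = np.array([w[1, 1] for w in w_list])
    nu = np.array(nu_list, dtype=float)
    A = w11.sum() ** 2
    D = w11.sum() * w22.sum() + w12.sum() ** 2
    F = w22.sum() ** 2
    a = (w11 ** 2 / nu).sum()
    d = ((w11 * w22 + w12 ** 2) / nu).sum()
    f = (w22 ** 2 / nu).sum()
    return (2 * A + D + 2 * F) / (2 * a + d + 2 * f)

w = np.array([[0.10, 0.02],
              [0.02, 0.30]])   # hypothetical w_j matrix
```

As a sanity check, a single influence recovers its own degrees of freedom, and two identical independent influences double it, matching the behaviour of the real-valued WS equation.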

## Extension to the WS calculation when estimates are correlated

Sometimes correlation among estimates can be handled by an extension to the WS calculation

The Welch-Satterthwaite calculation is derived under the assumption that all input estimates are independent, and the extended WS calculation for complex quantities also assumes that individual complex estimates are independent. Fortunately, both calculations can be extended to handle a source of correlation that occurs readily in measurements.

When $$n$$ samples of data for several different quantities have been collected together during the same experiment, the contribution to the calculation of degrees of freedom from these samples can be reduced to a single term, with degrees of freedom equal to $$n-1$$. If there are several different sets of such multidimensional samples, then each can be reduced to a single term for the purposes of calculating the effective degrees of freedom.

This extension applies, for example, when estimating the parameters of a straight line by ordinary least-squares regression: the uncertainties of the intercept and slope are based on the same sum of squared residuals. Similarly, it applies when a function of a complex quantity is evaluated from $$n$$ simultaneous measurements of the real and imaginary components, with the variances estimated from the spread of the results.

There is a more general form of the WS equation for real-valued quantities

When dealing with real-valued quantities, the generalised form of the effective degrees of freedom calculation can be written in the same form as \eqref{eq:ws_real}, although the range of the summation over $$i$$ and the meaning of the $$u_i(y)$$ now depend on the situation $$\label{eq:ws_gen_real} \frac{ u(y)^4 }{\nu(y)} = \sum_{i} \frac{u_i(y)^4}{ \nu(x_i) }\;.$$ There are three possibilities:

1. The basic situation is where each set of measurements is obtained in an independent experiment. In that case, the summation in \eqref{eq:ws_gen_real} runs from $$i=1$$ to $$i = l$$ and the $$u_i(y)$$ are all defined by \eqref{eq:u_component}.
2. The second possible situation is where the first $$h$$ estimates are obtained from $$n_{m+1}$$ measurements made in the same experiment (so $$n_{m+1} = n_1 = \cdots = n_h$$). In that case we set $$u_{m+1}(y)^2 = \mathrm{w}_{m+1}^\prime(y) \, \mathrm{r}_{{m+1},{m+1}}\, \mathrm{w}_{m+1}(y) \;,$$ where $$\mathrm{w}_{m+1}(y)$$ is a vector of components of uncertainty $$\mathrm{w}_{m+1}(y) = [u_1(y),u_2(y),\cdots,u_h(y) ]^\prime \;$$ and $$\mathrm{r}_{{m+1},{m+1}}$$ is a matrix of correlation coefficients $$\mathrm{r}_{{m+1},{m+1}} = \begin{bmatrix} r(x_1,x_1) & \cdots & r(x_1,x_h) \\ \vdots & \ddots & \vdots \\ r(x_h,x_1) & \cdots & r(x_h,x_h) \end{bmatrix} \;.$$ Then the summation in \eqref{eq:ws_gen_real} runs from $$i=h+1$$ to $$i = m+1$$.
3. In a more general situation, the first $$h_1$$ estimates are obtained from $$n_{m+1}$$ simultaneous measurements in the same experiment, the next $$h_2$$ from $$n_{m+2}$$ simultaneous measurements, and so on. If there are $$k$$ such groups of estimates, we can set $$u_{m+i}(y)^2 = \mathrm{w}_{m+i}^\prime(y) \, \mathrm{r}_{{m+i},{m+i}}\, \mathrm{w}_{m+i}(y) \;.$$ The summation in \eqref{eq:ws_gen_real} then runs from $$i=\sum_{j=1}^k h_j+1$$ to $$i = m+k$$.
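The second situation can be sketched numerically (all values hypothetical): two correlated estimates obtained from the same $$n = 10$$ observations collapse into a single term whose degrees of freedom are $$n - 1 = 9$$.

```python
import numpy as np

w = np.array([0.3, 0.4])                 # components u_1(y), u_2(y)
r = np.array([[1.0, 0.5],
              [0.5, 1.0]])               # r(x_i, x_j) within the group
u2_group = float(w @ r @ w)              # quadratic form: u_{m+1}(y)^2
n = 10                                   # simultaneous observations

# if this group were the only contribution, the generalised WS equation
# reduces to a single term with n - 1 degrees of freedom
nu_y = u2_group**2 / (u2_group**2 / (n - 1))
```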

There is a more general form of the WS equation for complex-valued quantities

When dealing with complex-valued quantities, the generalised form of the effective degrees of freedom equation can be expressed as $$\label{eq:ws_gen_complex} \nu(y) = \frac{ 2(\sum_i w_{i \cdot 11})^2 + \sum_i w_{i \cdot 11} \sum_i w_{i \cdot 22} + (\sum_i w_{i \cdot 12})^2 + 2(\sum_i w_{i \cdot 22})^2 }{ \sum_i( 2w_{i \cdot 11}^2 + w_{i \cdot 11}w_{i \cdot 22} + w_{i \cdot 12}^2 + 2w_{i \cdot 22}^2 )/(n_i-1) } \;,$$ where $$w_{i \cdot jk}$$ represents the $$jk^\mathrm{th}$$ element of a matrix $$\mathbf{w}_i$$. As above, there are three situations to consider:

1. The basic situation is where each set of measurements of complex quantities is obtained in an independent experiment. In this case, $$\mathbf{w}_i$$ is defined by \eqref{eq:w_i_matrix_complex} and the summation in \eqref{eq:ws_gen_complex} is over $$i=1$$ to $$i=l$$.
2. The second situation is where the first $$h$$ estimates are obtained from $$n_{m+1}$$ measurements in the same experiment (so $$n_{m+1} = n_1 = \cdots = n_h$$). We set $$\mathbf{w}_{m+1} = \begin{bmatrix} \mathbf{u}_{m+1}(y) \end{bmatrix} \, \begin{bmatrix}\mathbf{R}_{m+1}\end{bmatrix} \, \begin{bmatrix} \mathbf{u}_{m+1}(y) \end{bmatrix}^\prime \;,$$ with the $$2 \times 2h$$ matrix $$\begin{bmatrix} \mathbf{u}_{m+1}(y) \end{bmatrix} = \begin{bmatrix} \mathbf{u}_1(y)& \cdots & \mathbf{u}_h(y) \end{bmatrix}$$ and the $$2h \times 2h$$ matrix $$\begin{bmatrix}\mathbf{R}_{m+1}\end{bmatrix} = \begin{bmatrix} \mathbf{r}(x_1,x_1)& \cdots & \mathbf{r}(x_1,x_h) \\ \vdots & \ddots & \vdots \\ \mathbf{r}(x_h,x_1)& \cdots & \mathbf{r}(x_h,x_h) \end{bmatrix} \;,$$ in which the submatrices $$\mathbf{r}(x_i,x_j)$$ are defined by \eqref{eq:r_matrix_complex}. The summation in \eqref{eq:ws_gen_complex} is over $$i=h+1$$ to $$i=m+1$$.
3. In the most general situation, the first $$h_1$$ estimates are obtained from $$n_{m+1}$$ simultaneous measurements in the same experiment, the next $$h_2$$ from $$n_{m+2}$$ simultaneous measurements, and so on. If there are $$k$$ such groups of estimates, we can set $$\mathbf{w}_{m+i} = \begin{bmatrix} \mathbf{u}_{m+i}(y) \end{bmatrix} \, \begin{bmatrix}\mathbf{R}_{m+i}\end{bmatrix} \, \begin{bmatrix} \mathbf{u}_{m+i}(y) \end{bmatrix}^\prime$$ and carry out the summation in \eqref{eq:ws_gen_complex} from $$i=\sum_{j=1}^k h_j+1$$ to $$i = m+k$$.