
Error analysis, an indispensable aspect of scientific research, aims to quantify the uncertainty inherent in measurement processes. This paper delves into the methodology of estimating measurement uncertainty, focusing on the propagation of uncertainty through calculations, the distinction between systematic and random errors, and the application of error theory. Our exploration is contextualized within the objective of minimizing error in the measurement of ThS samples, employing both theoretical and practical error analysis techniques.

Some numerical statements are exact: 1+3=4.

However, all measurements have some degree of uncertainty that may come from a variety of sources. The process of evaluating the uncertainty associated with a measurement result is often called uncertainty analysis or error analysis.

An alternative, and sometimes simpler procedure, to the tedious propagation of uncertainty law, is the upper-lower bound method of uncertainty propagation. This alternative method does not yield a standard uncertainty estimate (with a 68% confidence interval), but it does give a reasonable estimate of the uncertainty for practically any situation.

The basic idea of this method is to use the uncertainty ranges of each variable to calculate the maximum and minimum values of the function. You can also think of this procedure as examining the best- and worst-case scenarios.

For example, suppose you measure an angle to be: θ = 25° ± 1° and you needed to find f = cos θ, then:

fmax = cos(24°) = 0.9135

fmin = cos(26°) = 0.8988

∴ f = 0.906 ± 0.007

where 0.007 is half the difference between fmax and fmin.

Note that even though θ was only measured to 2 significant figures, f is known to 3 figures.
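The upper-lower bound calculation above can be sketched in a few lines of Python, using the angle and range from the example:

```python
import math

# Upper-lower bound estimate for f = cos(theta), with theta = 25 deg +/- 1 deg.
theta, dtheta = 25.0, 1.0  # degrees

f_max = math.cos(math.radians(theta - dtheta))  # cos(24 deg), the best case
f_min = math.cos(math.radians(theta + dtheta))  # cos(26 deg), the worst case

f = (f_max + f_min) / 2    # best estimate
df = (f_max - f_min) / 2   # half the spread between the bounds

print(f"f = {f:.3f} +/- {df:.3f}")  # f = 0.906 +/- 0.007
```

Because cos is decreasing on this interval, the maximum of f occurs at the smaller angle, which is why the bounds are evaluated at 24° and 26°.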

By using the propagation of uncertainty law:

σf = |sin θ| σθ = (0.423) × (1° × π/180°) = 0.0074 (the same result as above). Note that σθ must be converted to radians before multiplying.

The uncertainty estimate from the upper-lower bound method is generally larger than the standard uncertainty estimate found from the propagation of uncertainty law, but both methods will give a reasonable estimate of the uncertainty in a calculated value.

The upper-lower bound method is especially useful when the functional relationship is not clear or is incomplete. One practical application is forecasting the expected range in an expense budget. In this case, some expenses may be fixed, while others may be uncertain, and the range of these uncertain terms could be used to predict the upper and lower bounds on the total expense [2].
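A minimal sketch of this budget application, with entirely hypothetical expense figures, would look like:

```python
# Hypothetical expense budget: fixed items plus uncertain items given as
# (estimate, +/- range). All figures are invented for illustration.
fixed = {"rent": 1200.0, "insurance": 150.0}
uncertain = {"utilities": (180.0, 40.0), "groceries": (420.0, 60.0)}

fixed_total = sum(fixed.values())
expected = fixed_total + sum(v for v, _ in uncertain.values())
low = fixed_total + sum(v - u for v, u in uncertain.values())
high = fixed_total + sum(v + u for v, u in uncertain.values())

print(f"expected {expected:.2f}, range [{low:.2f}, {high:.2f}]")
# expected 1950.00, range [1850.00, 2050.00]
```

The fixed items contribute nothing to the spread; only the uncertain terms widen the interval between the lower and upper bounds.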


In our previous example, the average width x is 31.19 cm. The deviations of the five measurements from this average are 0.14, 0.04, 0.07, 0.17, and 0.01 cm.

The average deviation is: d = 0.086 cm.

The standard deviation is:

s = √[((0.14)² + (0.04)² + (0.07)² + (0.17)² + (0.01)²) / (5 − 1)] = 0.12 cm

The significance of the standard deviation is this: if you now make one more measurement using the same meter stick, you can reasonably expect (with about 68% confidence) that the new measurement will be within 0.12 cm of the estimated average of 31.19 cm. It is reasonable to use the standard deviation as the uncertainty associated with this single new measurement. However, the uncertainty of the average value is the standard deviation of the mean, which is always less than the standard deviation (see next section).
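The average deviation and sample standard deviation quoted above can be reproduced from the listed deviations; a short sketch, assuming the five deviation magnitudes given:

```python
import math

# Deviations (magnitudes) of the five width measurements from the 31.19 cm mean.
deviations = [0.14, 0.04, 0.07, 0.17, 0.01]  # cm
n = len(deviations)

avg_dev = sum(deviations) / n  # average deviation
s = math.sqrt(sum(d**2 for d in deviations) / (n - 1))  # sample std deviation

print(f"d = {avg_dev:.3f} cm, s = {s:.2f} cm")  # d = 0.086 cm, s = 0.12 cm
```

The n − 1 divisor (rather than n) is the standard Bessel correction for a sample standard deviation.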

Consider an example where 100 measurements of a quantity were made. The average or mean value was 10.5 and the standard deviation was s = 1.83. The figure below is a histogram of the 100 measurements, which shows how often a certain range of values was measured. For example, in 20 of the measurements, the value was in the range 9.5 to 10.5, and most of the readings were close to the mean value of 10.5. The standard deviation s for this set of measurements is roughly how far from the average value most of the readings fell.

For a large enough sample, approximately 68% of the readings will be within one standard deviation of the mean value, 95% of the readings will be in the interval x ± 2 s, and nearly all (99.7%) of readings will lie within 3 standard deviations from the mean. The smooth curve superimposed on the histogram is the gaussian or normal distribution predicted by theory for measurements involving random errors. As more and more measurements are made, the histogram will more closely follow the bell-shaped gaussian curve, but the standard deviation of the distribution will remain approximately the same.
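The 68%, 95%, and 99.7% figures follow from the gaussian distribution itself: the fraction of readings within k standard deviations of the mean is erf(k/√2), which a short script can confirm:

```python
import math

# Fraction of a normal distribution within k standard deviations of the mean:
# P(|x - mean| < k * sigma) = erf(k / sqrt(2)).
for k in (1, 2, 3):
    p = math.erf(k / math.sqrt(2))
    print(f"within {k} sigma: {100 * p:.2f}%")
# within 1 sigma: 68.27%
# within 2 sigma: 95.45%
# within 3 sigma: 99.73%
```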

When we report the average value of N measurements, the uncertainty we should associate with this average value is the standard deviation of the mean, often called the standard error (SE).

SE = s/√N

The standard error is smaller than the standard deviation by a factor of 1/√N.

This reflects the fact that we expect the uncertainty of the average value to get smaller as the number of measurements, N, increases. In the previous example, we find the standard error is 0.05 cm, obtained by dividing the standard deviation of 0.12 cm by √5.

The result should then be reported as: Average paper width = 31.19 ± 0.05 cm.
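The standard error calculation can be checked directly:

```python
import math

# Standard error of the mean for the paper-width example.
s, n = 0.12, 5                 # sample standard deviation (cm), number of readings
se = s / math.sqrt(n)          # SE = s / sqrt(N)
print(f"Average paper width = 31.19 +/- {se:.2f} cm")
# Average paper width = 31.19 +/- 0.05 cm
```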

Independent and correlated errors affect the resultant error in a calculation differently. For example, suppose you measure one side of a square metal plate and find it to be 1.001 in, with an uncertainty of 0.001 in. To find the area we multiply the width (W) by the length (L). The area then is:

L × W = (1.001 in) × (1.001 in) = 1.002001 in², which rounds to 1.002 in². If the square were exactly 1 in on a side, this corresponds to an error of 0.002 in².

This is an example of correlated (non-independent) error, since the errors in L and W come from the same measurement: the error in L is perfectly correlated with the error in W. Now suppose instead that the width and length are determined by separate, independent measurements, each with an error of 0.001 in. In this case the errors are independent (uncorrelated), and the error in the result (the area) is calculated differently (rule 1 below). First, find the relative error (error/quantity) in each quantity that enters the calculation; the relative error in the width is 0.001/1.001 = 0.000999.

The resultant relative error is:

Relative error in area = √[(ΔL/L)² + (ΔW/W)²]

Therefore, the absolute error is:

(relative error) × (quantity) = 0.0014128 × 1.002001 = 0.0014156, which rounds to 0.001.

Therefore, the area is 1.002 in² ± 0.001 in².

This shows that random relative errors do not simply add arithmetically, rather, they combine by root-mean-square sum rule (Pythagorean theorem). Let’s summarize some of the rules that apply to combine error when adding (or subtracting), multiplying (or dividing) various quantities. This topic is also known as error propagation.
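The contrast between the correlated and independent cases in the area example can be sketched as follows, using the values from the example above:

```python
import math

L, W = 1.001, 1.001     # measured side lengths (in)
dL = dW = 0.001         # absolute uncertainty in each side (in)
area = L * W

# Correlated case: the same measurement is used for both sides,
# so the relative errors add directly.
rel_corr = dL / L + dW / W
# Independent case: separate measurements, so the relative errors
# combine in quadrature (root-mean-square sum).
rel_indep = math.sqrt((dL / L) ** 2 + (dW / W) ** 2)

print(f"area = {area:.3f} in^2")
print(f"correlated:  +/- {rel_corr * area:.3f} in^2")   # +/- 0.002 in^2
print(f"independent: +/- {rel_indep * area:.3f} in^2")  # +/- 0.001 in^2
```

The quadrature sum is smaller by a factor of √2 here, which is exactly the difference between the 0.002 and 0.001 results in the text.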

Error propagation for special cases:

Let σx denote the error in a quantity x. Further assume that two quantities x and y, with errors σx and σy, are measured independently. In this case the relative and percent errors are defined as: relative error = σx/x, percent error = 100 (σx/x).
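These definitions, together with the quadrature rules for independent errors, can be sketched as small helper functions (the helper names are illustrative, not from the text):

```python
import math

def err_sum(sx, sy):
    """Absolute error of x + y or x - y, for independent errors:
    the absolute errors combine in quadrature."""
    return math.sqrt(sx**2 + sy**2)

def err_product(x, sx, y, sy):
    """Absolute error of x * y, for independent errors:
    the relative errors combine in quadrature."""
    rel = math.sqrt((sx / x) ** 2 + (sy / y) ** 2)
    return abs(x * y) * rel

# Sum rule: combining a 0.12 cm and a hypothetical 0.05 cm uncertainty.
print(round(err_sum(0.12, 0.05), 2))  # 0.13
# Product rule: the independently measured 1.001 in square from above.
print(round(err_product(1.001, 0.001, 1.001, 0.001), 3))  # 0.001
```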

Systematic errors, which result from inadequacies of the measuring devices, shift a measurement result by the same amount in the same direction every time. They cannot be detected by repeating the measurement, and they cannot be reduced by averaging; they must be identified separately and eliminated, for example by calibrating or replacing the equipment. In contrast, statistical or random error (random uncertainty) can shift the result in either direction, and often originates with the observer. In principle it is unavoidable, but its size can be estimated by repeated measurements and suitable evaluation methods [4].

The aim of all these considerations must therefore be to find suitable approximate values and uncertainty intervals for the measured quantities and to state them with the measurement result. This allows us to estimate how close we are to the true value of the measured quantity (a value fixed by the object being measured). We can never determine this true value exactly, but measurement provides an approximation to it. All of this is part of the culture of every experiment, even if it is difficult. From a whole complex of possibilities, the appropriate methods must be selected and applied for each experiment, and the experimenter retains a certain discretionary scope in this respect.

Tools available for this purpose are:

- Error estimation
- Error statistics
- Linear error propagation
- Gaussian error propagation

Error theory provides a framework for choosing appropriate methods to estimate and report uncertainties in measured quantities, emphasizing the importance of conveying how closely the measured values approximate the true values.
