When we want to determine the length, mass, time, temperature, or any other property of an object, we need to measure it. But even with the most precise instruments, our measurements will always have some degree of error: the difference between the measured value and the actual value.
When taking measurements, it’s important to understand the difference between precision and accuracy.
Accuracy refers to how close a measured value is to the actual or true value. When a measurement is inaccurate, it can be due to various factors such as faulty equipment, inadequate data processing, or even human error.
Precision refers to how closely the individual measurements match each other, or how consistent they are. For example, a measurement that is precise will yield similar results if taken repeatedly under the same conditions.
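The distinction can be made concrete with a short sketch. Using hypothetical readings of a 100.0 g reference mass (the values below are invented for illustration), the distance of the mean from the true value indicates accuracy, while the standard deviation of the readings indicates precision:

```python
import statistics

# Hypothetical readings of a 100.0 g reference mass (illustrative values only).
true_value = 100.0
readings = [100.2, 99.8, 100.1, 99.9, 100.0]    # accurate and precise
biased = [102.1, 102.0, 102.2, 101.9, 102.0]    # precise but not accurate

def accuracy_error(values, true_value):
    """Distance of the mean from the true value (smaller = more accurate)."""
    return abs(statistics.mean(values) - true_value)

def spread(values):
    """Standard deviation of the readings (smaller = more precise)."""
    return statistics.stdev(values)

print(accuracy_error(readings, true_value), spread(readings))
print(accuracy_error(biased, true_value), spread(biased))
```

The second set of readings clusters tightly (small spread, so high precision) but sits about 2 g above the true value, so it is precise without being accurate.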
No matter how careful we are, there will always be errors in measurement. There are two main types of errors that can occur in measurements:
Random errors are caused by unknown or unpredictable changes in the measurement process, such as fluctuations in environmental conditions or inconsistencies in the position or technique used to take a reading. For example, the last digit on a digital balance may flicker from one reading to the next, or your eye position may shift slightly each time you read a ruler.
These errors can have an unpredictable effect on your measurements, making them either higher or lower than the true value. However, you can reduce random errors by repeating the measurement process multiple times and taking an average.
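Why averaging helps can be shown with a small simulation. The sketch below models each reading as the true value plus zero-mean random noise (the true length and noise level are invented for illustration); averaging many readings lets the positive and negative fluctuations cancel:

```python
import random
import statistics

random.seed(42)  # fixed seed so the simulation is repeatable

true_length = 25.0  # hypothetical true length in mm

def measure():
    """Simulate one reading: the true value plus zero-mean random noise."""
    return true_length + random.gauss(0, 0.5)

single = measure()
averaged = statistics.mean(measure() for _ in range(100))

print(single)    # one reading may be noticeably off
print(averaged)  # the mean of 100 readings is much closer to 25.0
```

Because the noise has no consistent direction, the average of 100 readings is far more reliable than any single one. Note that this trick does nothing for systematic errors, which push every reading the same way.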
Systematic errors arise from consistent biases or flaws in the measurement process itself, usually due to equipment issues or procedural errors. For example, a balance that has not been zeroed, a stretched measuring tape, or a stopwatch that runs slow will bias every reading in the same way.
Unlike random errors, systematic errors skew your measurements in the same direction every time, producing results that are consistently higher or consistently lower than the true value.
To reduce systematic errors, it’s important to improve the measuring equipment or make adjustments to the experimental procedure. It’s also important to identify the sources of error in your experiment, so you can take steps to minimise their impact on your results.
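One common adjustment is a calibration correction: measure a known reference, determine the instrument's constant offset, and subtract it from subsequent readings. A minimal sketch, assuming a hypothetical balance that reads a 50.00 g reference mass as 50.35 g:

```python
# Hypothetical calibration run: a known 50.00 g reference mass
# reads as 50.35 g, revealing a constant systematic offset.
reference_true = 50.00
reference_reading = 50.35
offset = reference_reading - reference_true  # +0.35 g bias

def correct(reading):
    """Subtract the calibrated offset from a raw reading."""
    return reading - offset

raw_readings = [12.55, 30.35, 75.35]
corrected = [correct(r) for r in raw_readings]
print(corrected)
```

This only works because the bias is consistent; the same subtraction would do nothing useful against random error, which has no fixed direction.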
Being aware of the sources of error can also help you to determine the appropriate number of significant figures to use when reporting your results.
When expressing a measured or calculated quantity, it’s important to determine the appropriate number of significant figures. The significant figures of a value are the digits that carry meaningful information about its precision.
There are several rules for determining which digits are significant: all non-zero digits are significant; zeros between non-zero digits are significant; leading zeros are never significant; and trailing zeros are significant only when the number contains a decimal point.
For example, the number 13.2 has three significant figures because all three digits are non-zero. The number 13002 has five significant figures because the zeros between the non-zero digits are also significant. Similarly, the number 0.00230 has three significant figures: the leading zeros are not significant, but the trailing zero after the decimal point is.
Using the appropriate number of significant figures can help to communicate the level of precision in a measurement or calculation. It’s important to be consistent in the use of significant figures and to understand the rules for rounding and approximation.
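Rounding to a given number of significant figures can be done programmatically. A minimal sketch (the helper name `round_sig` is our own, and the simple formula assumes a nonzero input):

```python
from math import floor, log10

def round_sig(x, n):
    """Round x to n significant figures (simple sketch; assumes x != 0)."""
    if x == 0:
        return 0.0
    # log10 gives the position of the leading digit; shift so that
    # exactly n significant digits survive the rounding.
    decimals = -int(floor(log10(abs(x)))) + (n - 1)
    return round(x, decimals)

print(round_sig(13.276, 3))     # 13.3
print(round_sig(0.0023049, 3))  # 0.0023
```

Note that Python's built-in `round` uses round-half-to-even ("banker's rounding"), which can differ from the round-half-up convention taught in some courses for values ending exactly in 5.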