Why understanding pH is key to water purity


Mark Bosley explains why pH is critical to effective management of process water quality

Many process applications depend on high purity water with a known pH value. Although this method of measuring the acidity or alkalinity of a solution has existed since 1909, it is often misunderstood by process engineers. As a result, water purification systems are not always optimised for best performance or water quality, which in turn can lead to increased operating costs.

What is pH?

The pH scale has been in use for over a century, ever since it was first defined in 1909 by the Danish chemist, Søren Sørensen.

In simple terms, the pH scale provides a standardised method of measuring the acidity or alkalinity (basicity) of a solution. This is determined by the concentration of hydrogen ions: the higher the concentration, the greater the acidity. The term pH is generally taken to be the abbreviated form of ‘the power (p) of the concentration of the hydrogen ion (H)’.

The pH scale is logarithmic, calculated as the negative logarithm of the hydrogen ion concentration, so a lower pH represents a higher concentration of hydrogen ions. The formula is pH = −log[H⁺], and values are shown on a scale of 0-14. A solution with a pH below 7 is acidic, a pH of 7 is neutral, and a solution with a pH above 7 is alkaline.

Because the scale is logarithmic, a pH of 3 is 10 times more acidic than a pH of 4 and one hundred times more acidic than a pH of 5. To put this into context, at a temperature of 25°C pure water has a pH of 7, lemon juice 2.4 and household bleach 12.5. As Sørensen was head of the Carlsberg Laboratory’s Chemical Department when he introduced his scale of measurement, it’s appropriate to note that the pH of beer is normally around 4.5.
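This relationship is straightforward to demonstrate. The short Python sketch below applies the simple concentration-based definition (not the activity-based refinement discussed next) to show the tenfold steps between whole pH units; the function name is illustrative only.

import math

def ph_from_concentration(h_plus_molar):
    """pH as the negative base-10 logarithm of [H+] in mol/L."""
    return -math.log10(h_plus_molar)

# Each whole pH unit is a tenfold change in hydrogen ion concentration:
print(ph_from_concentration(1e-3))  # 3.0
print(ph_from_concentration(1e-4))  # 4.0 -- ten times less acidic
print(ph_from_concentration(1e-7))  # 7.0 -- neutral, pure water at 25 degC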

Some years after his initial definition, in 1924, Sørensen realised that the pH of a solution was actually determined by the activity of the hydrogen ions, not their concentration, and that activity levels are affected by changes in temperature.

An improved definition of pH is therefore pH = −log[aH⁺], where aH⁺ denotes the thermodynamic activity of the hydrogen ions. This has since become the basis on which pH measurements, generally taken by special handheld or in-line meters, are now specified by different industry bodies.

pH and conductivity

Measuring the pH of water used in process applications can be one of the quickest methods of determining purity.  For example, dissolved gaseous contaminants such as air or carbon dioxide, as well as the presence of ionic salts, will all affect pH readings.

One of the most effective methods of measuring pH accurately is to capitalise on the ability of dissolved ions in a sample to conduct electricity. Using a conductivity meter it is possible to measure the flow of electricity through a fluid, with the readings being directly proportional to the concentration of the ions, their charge and their mobility. The higher the reading – that is, the higher the conductivity – the greater the concentration of ions.

Ions can be positively charged cations (H⁺, Na⁺ and Mg²⁺) or negatively charged anions (OH⁻, Cl⁻ and SO₄²⁻). Although different ions move in solution at different velocities, the most mobile of the common ions are hydrogen (H⁺) and hydroxide (OH⁻); as a result, highly acidic or alkaline solutions will normally produce the highest conductivity readings.
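Because each ion contributes in proportion to its concentration and mobility, the conductivity of a dilute solution can be roughly estimated from published limiting molar conductivities. The Python sketch below uses standard textbook values at 25°C and assumes high dilution (activity close to concentration); it is illustrative only, not a substitute for a calibrated meter.

# Limiting molar conductivities at 25 degC (S*cm^2/mol), standard table values
LAMBDA = {"H+": 349.8, "OH-": 198.0, "Na+": 50.1, "Cl-": 76.3}

def conductivity_uS_cm(ions):
    """Approximate conductivity (uS/cm) of a dilute solution.
    ions: {ion: concentration in mol/L}. Valid only at high dilution."""
    kappa_S_cm = sum(LAMBDA[ion] * c / 1000.0 for ion, c in ions.items())
    return kappa_S_cm * 1e6

# H+ is the most mobile ion, so an acid outconducts a neutral salt
# at the same concentration:
print(conductivity_uS_cm({"H+": 1e-4, "OH-": 1e-10}))  # ~35 uS/cm (pH 4)
print(conductivity_uS_cm({"Na+": 1e-4, "Cl-": 1e-4}))  # ~12.6 uS/cm (neutral)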

Conductivity and its reciprocal, resistivity, can both be used to give an accurate indication of pH. Conductivity is measured in microsiemens per centimetre (µS/cm) and resistivity in megohm-centimetres (MΩ·cm). Ultrapure or deionised water, with a natural pH of 6.998, has a conductivity of 0.055 µS/cm, equivalent to a resistivity of 18.2 MΩ·cm, at 25°C.
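Because the two quantities are reciprocals, and µS/cm and MΩ·cm are reciprocal unit scales, converting between them is trivial; a minimal sketch:

def resistivity_MOhm_cm(conductivity_uS_cm):
    """Resistivity in MOhm*cm is the reciprocal of conductivity in uS/cm
    (the unit scales cancel: 1 / 1e-6 S/cm = 1e6 Ohm*cm)."""
    return 1.0 / conductivity_uS_cm

print(round(resistivity_MOhm_cm(0.055), 1))  # 18.2 -- ultrapure water at 25 degC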

In practice, conductivity is generally used to measure water with a high concentration of ions, while resistivity is used to measure water with low levels of ions.

Changes in temperature can significantly affect conductivity. In a typical mains water sample, conductivity readings change by around 2% per degree Celsius. Temperature also shifts the neutral point of the pH scale: a neutral pH of 7 is only accurate at 25°C, with the neutral value rising to around 7.5 at freezing point and falling to around 6.2 at 100°C. As a result, conductivity measurements are internationally referenced to 25°C to allow comparison of different samples.
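A common first-order correction applies the 2%-per-degree coefficient mentioned above to normalise a raw reading to the 25°C reference. Real instruments may use more sophisticated, solution-specific compensation curves, so treat this Python sketch as indicative only:

ALPHA = 0.02  # ~2% change per degC, typical for mains water (figure from the text)

def compensate_to_25C(measured_uS_cm, temp_C):
    """Linear temperature compensation of a conductivity reading to 25 degC."""
    return measured_uS_cm / (1.0 + ALPHA * (temp_C - 25.0))

# A raw reading of 550 uS/cm at 18 degC corresponds to ~640 uS/cm at 25 degC:
print(round(compensate_to_25C(550.0, 18.0), 1))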

Measurements are normally carried out using handheld conductivity or resistivity meters, with specially designed probes placed into the liquid sample. Older instruments use two probes, while more recent meters have a combined sensor. In each case the method of operation is similar: a sensor measures the concentration of hydrogen ions against a reference source, with a separate temperature reading taken to allow the meter to calculate the pH value accurately.
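The temperature reading matters because the voltage a pH electrode develops per pH unit follows the temperature-dependent Nernst slope (about 59.2 mV per pH unit at 25°C). A minimal sketch, assuming an idealised electrode whose isopotential point sits at pH 7 with 0 mV output (an assumption for illustration, not a universal instrument specification):

R, F = 8.314, 96485.0  # gas constant (J/mol*K) and Faraday constant (C/mol)

def nernst_slope_mV(temp_C):
    """Ideal Nernst slope in mV per pH unit: 2.303*R*T/F."""
    return 2303.0 * R * (temp_C + 273.15) / F

def ph_from_voltage(e_mV, temp_C):
    """Convert electrode voltage to pH for an ideal electrode (0 mV at pH 7)."""
    return 7.0 - e_mV / nernst_slope_mV(temp_C)

print(round(nernst_slope_mV(25.0), 2))       # ~59.16 mV/pH at 25 degC
print(round(ph_from_voltage(177.5, 25.0), 2))  # ~pH 4.0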

For many process applications, in-line systems are used, as they provide continuous monitoring, give higher levels of accuracy and can be connected to higher-level control systems.

What can go wrong?

Relying solely on pH to gauge water purity can be misleading, for a number of reasons.

Perhaps the most significant factor is that as soon as a sample of purified water is drawn from the system to be tested, and exposed to air, it will absorb carbon dioxide. This reacts with the water to form carbonic acid in solution, which dissociates to release conductive ions. The effect of a few ppm of CO₂ dissolving in a sample of ultrapure water can be significant, pulling the pH down to around 5, even though the water in the purification system itself remains at 18.2 MΩ·cm and its purity is unchanged.
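The scale of the effect can be estimated with a weak-acid approximation for carbonic acid, using the standard first dissociation constant (Ka1 ≈ 4.45 × 10⁻⁷ at 25°C). The Python sketch below ignores the second dissociation and the water's own autoionisation, so the figures are indicative only:

import math

KA1 = 4.45e-7   # first dissociation constant of carbonic acid at 25 degC
MW_CO2 = 44.01  # molar mass of CO2, g/mol

def ph_with_dissolved_co2(ppm_co2):
    """Rough pH of otherwise-pure water containing dissolved CO2,
    using the weak-acid approximation [H+] ~ sqrt(Ka1 * c)."""
    c = ppm_co2 / 1000.0 / MW_CO2  # ppm (mg/L) -> mol/L
    return -math.log10(math.sqrt(KA1 * c))

for ppm in (1, 5, 10):
    print(ppm, "ppm ->", round(ph_with_dissolved_co2(ppm), 2))
# Roughly pH 5.5 at 1 ppm, falling to about pH 5 at 10 ppm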

A pure water sample cannot therefore be held in a container with air, or otherwise exposed to air, without the accuracy of measurement being affected. Contamination is generally rapid, and its effect on conductivity and pH readings is a function of the surface area exposed and the time elapsed.

For the same reason, samples should not be stored before pH levels are measured, while the sample containers used should be clean and manufactured from materials that will not leach contaminants. Similarly, sampling and measurement procedures should be consistent to eliminate the risk of errors.

Sample contamination is one of the most common – and most overlooked – causes of incorrect interpretation of pH measurements. It can often lead to the erroneous assumption that, because pH readings have changed between samples, a malfunction has occurred in the purification or process equipment.

Ideally, pH measurements should be made on closed, flowing samples using in-line instruments. For laboratory sampling, best practice is to take a portable instrument to the source and ensure that the probe is fully immersed at the bottom of the sample container, with the sample being allowed to overflow.

A further problem is that the performance of pH probes or electrodes can be influenced by the contaminants or ions in the water and by temperature. These factors can be especially problematic in in-line applications, where fouling, scaling or chemical poisoning can affect the accuracy of readings and require instruments to be frequently recalibrated.

pH meters should also be used only for the applications for which they have been developed. In particular, general-purpose meters are calibrated with buffer solutions of relatively high ionic strength, which makes them unsuitable for accurately measuring the quality of ultrapure water unless they are fitted with specialised probes.

It is also important to maintain consistent sampling and testing procedures. For example, something as simple as a build-up of salts on probes, a fluctuation in sample temperatures, or a change in personnel or test laboratory can quickly lead to inaccurate and inconsistent readings.

By comparison, conductivity probes offer a far more reliable method of determining water quality. These instruments are simple to use, less prone to the environmental effects that influence pH devices, and automatically compensate for fluctuations in temperature.

pH can be a valuable tool for effective process control in many different applications. Used and interpreted correctly it will give an accurate indication of water purity, helping to improve the efficiency, consistency and reliability of purification and treatment systems, which in turn can enhance levels of quality while reducing operating costs.

Mark Bosley is Business Support Divisional Manager at SUEZ Water Purification Systems UK.