Jonathan Wilkins discusses the rise, fall and rise again of centralised computing power
It's often said that trends go in cycles. Surprisingly, even the winkle picker, the shell suit and the majestic perm have all had their day more than once. Some people would even argue these trends never really went away, but that they've been working hard behind the scenes, biding their time and waiting to become recognised once again.
Thin client computing refers to a network of machines with limited inbuilt processing power and memory, which rely on a central mainframe to do the actual work. Traditionally, this meant working at an output device, or display monitor, called a dumb terminal. These had barely enough computational power to display, send and receive text; everything else was handled by a monolithic cabinet containing the central processing unit and memory.
A dumb terminal could not store anything locally, and the user was pretty much limited to inputting data. That was it.
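To make the arrangement concrete, here is a minimal sketch in Python of a dumb-terminal-style loop: every line of input is shipped to the central machine over a socket, and whatever comes back is simply displayed. Nothing is stored or processed locally. The host, port and function name are illustrative, not taken from any real terminal protocol.

```python
import socket

def dumb_terminal(lines, host, port):
    """Illustrative 'dumb terminal' loop.

    The terminal holds no state of its own: each line of user input is
    sent to the central machine, and the reply text is only displayed
    (collected and returned here so the sketch can be exercised).
    """
    screen = []                                   # the 'display' output
    with socket.create_connection((host, port)) as conn:
        for line in lines:                        # user keystrokes, line by line
            conn.sendall(line.encode() + b"\n")   # all processing happens remotely
            reply = conn.recv(4096)               # text comes back from the mainframe...
            screen.append(reply.decode())         # ...and is merely shown, never saved
    return screen
```

Pointed at any line-oriented text service, the loop above behaves exactly as the terminals described: input goes up, rendered text comes down, and unplugging the terminal loses nothing.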
Like the perms of 1980s footballers, thin client computing was rugged. Terminals had no hard drives, fans or motherboards that could break down, so maintenance was infrequent and inexpensive.
Modern PCs, by comparison, even industrial PCs, are expensive to maintain. Even if properly secured and administered, things can go wrong for a multitude of reasons, including operating system vulnerabilities, constant patching, patches that break other applications and users downloading viruses. Not to mention the added cost of anti-virus and firewall licences, office suite licences, operating system licences and so the list goes on.
Thin client computing boomed throughout the 1970s and 80s, until the 90s brought Britpop and the rise of PC networks. These machines had their own processing power, so the humble dumb terminal and mainframe took a back seat, with many predicting their obsolescence by the time the millennium rolled around.
However, unbeknown to many, thin client computing never completely died out and it continued to be used behind the scenes in sectors from finance to government services - lying in wait for its next moment of glory.
Until now. Whereas in the past thin clients were not quite able to deliver the power needed for high-performance demands, improvements in thin client hardware, software, connection protocols and server technology have driven a resurgence of thin client computing across a cross section of industries.
Forget mainframes and dumb terminals, we are now starting to see high-performance cloud-based workstations that you could carry around in your pocket if you wanted.
When smartphones, tablets and thin client computers are combined with the central processing power of the cloud, they enable employees at multinational companies to send files to each other as if they were hand delivered. Or to work on live documents where there's no confusion about the latest version or revision.
Security is covered too. Encrypted access to private clouds also means thin client computing can actually be safer than PCs.
Furthermore, there's no risk of an employee plugging an infected USB stick into their device and taking down the entire network - because there are no USB sticks where we're going. Or rather, no ports.
What this highlights is that you should never dismiss a technology just because it's obsolete, or considered to be. The technology columnist Stewart Alsop famously predicted that the last mainframe would be unplugged in 1996, but mainframes are still going strong today and Alsop was later forced to eat his words.
Similarly, the world of industrial automation still runs on many motors and inverters that are technically obsolete yet meet current energy efficiency standards and can fit into your Industry 4.0 applications, if you give them a chance. Sometimes you have to go backwards to go forwards.
Jonathan Wilkins is with European Automation.