
How green is "green" hardware?

July 2016

Early computing

In the pre-consumer computer era, computers were extremely expensive to build, buy and maintain. Many toxic materials and production methods were used, but hardware was usually repaired rather than replaced, and many upgrades were performed without changing the whole computer. Although the industrial revolution had already begun to cause massive pollution worldwide, only a fraction of that pollution originated from computers and electronic waste. Large-scale consumerism already existed around vehicles, home appliances, construction and related materials, plastics, detergents, city infrastructure, the food industry and, eventually, audio and multimedia consumer electronics.

As electronic components became smaller, more ubiquitous and less expensive, it became plausible that the home computer would ultimately become a reality. In the 1970s, various home computers appeared on every continent, and some had tremendous success; in some countries, the family computer became commonplace. In the 1980s the family computer evolved into the personal computer, with many homes owning more than one, and small business and home administration were revolutionized by accounting and database software.

The era of game consoles also began, in which children and adolescents owned their own computers, albeit usually limited to running read-only software: games in the form of cartridges. By this point the computer industry was already widely deployed, and it only continued to grow over the years.

Desktop computers

Although desktop computers evolved toward components that are increasingly difficult to repair, or outright disposable, long-lived office desktop workstations usually contain less toxic components and have a smaller energy footprint than they used to, and because of their expansion ports it is often possible to replace or upgrade individual components instead of replacing the whole computer. What did not help, on the other hand, was the migration to increasingly heavy and bloated software, often forcing hardware upgrades before the devices actually failed.

There are a number of reasons why software became heavier over time. More powerful hardware allowed the development and large-scale adoption of less CPU-efficient programming languages, and often freed developers from needing the most efficient (and often more complex) algorithms to achieve their results. Scripting languages designed for administrative, educational or research tasks were also used to develop more complex applications; the increase in data to process, due to larger disks and databases, also posed new software challenges. The need to process higher quality sound, images and eventually animations was also important and resource-costly.

With software having become less expensive to produce than specialized hardware in many scenarios, hardware components were often simplified while software compensated, software modems being an example. Safer and simpler programming languages were also developed, in which more runtime checks are performed for basic operations, or in which an interpreter replaced previously native code. It took many years for those new language implementations to become efficient, but they often gained immense popularity, shortening the time needed to write applications and lowering the barrier of entry for a new generation of programmers who did not need formal engineering training and were less expensive to hire.
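A minimal illustration of that trade-off, using Python purely as one example of such a language: the interpreter validates every sequence access at runtime, which costs cycles on each operation but turns what would be silent memory corruption in unchecked native code into a clean, catchable error.

    # Python validates every list access at runtime.
    data = [10, 20, 30]

    def read(index):
        try:
            return data[index]   # the interpreter checks the index on every access
        except IndexError:
            return None          # an out-of-range access is caught cleanly

    print(read(1))    # 20
    print(read(99))   # None; unchecked native code could silently read garbage here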

One particular PC market whose energy consumption never stopped increasing, and which requires frequent upgrades for performance, is gaming systems. Although component miniaturization continues, gamers must constantly upgrade to more powerful hardware to keep up with new title releases. It is not uncommon to have more than one powerful GPU alongside the main multi-core processor in the same computer, and even to use liquid cooling where necessary in order to overclock the chips. Power Supply Units (PSUs) consequently also increased their output. With the popular game styles somewhat settled, and style or interface revolutions rare, the emphasis is usually put on better graphics and larger virtual worlds, which are hardware resource-intensive.

The outsourcing of manufacturing to countries with fewer regulations, where workers can also be more easily exploited, reduced the cost of hardware considerably. With hardware having become cheap and easy to upgrade in a number of countries, software bloat never caused challenges that would ruin the industry. Software development itself increasingly started to be outsourced as well. Computers became cheap, and nearly everyone in the economically developed world could afford at least one.

Although a second-hand market exists for working-order components considered slow after upgrades, a huge quantity of discarded hardware ends up in landfills, and large quantities are also illegally shipped by boat to other countries, where exploited or needy people attempt to extract value at the risk of their own health. Whatever remains is rarely completely recycled or reused, and ends up in the environment.

When it ends up in someone else's backyard or water, we tend not to care much. It is another matter when sea life and other ecosystems begin to be massively affected. Eventually we realize that pollution is a serious issue, but only once it affects us at home; and by the time that happens, it is an indication that we have waited far too long before taking the matter seriously.

System-on-a-Chip (SoC) consumer devices

But what about the increasingly power-efficient chips designed to prolong the battery life of mobile devices?

These devices now sell in larger quantities than any previous type of computer ever did. Almost everyone now owns a cell phone, usually containing such a miniature computer chip. They are acquired cheaply and massively, and frequently discarded while still in working condition, for software compatibility and upgrade reasons or for mere fashion. As with desktop computers, software requirements may also precipitate faster hardware upgrades.

And sometimes devices are so ubiquitous, cheap and disposable that quality itself becomes less of a concern: a number of devices actually die before their expected lifetime, or are replaced immediately after shipping because low testing standards increase the number of defects, with the additional shipping cost and pollution this entails.

Various manufacturers encourage the return of old devices in exchange for cheaper upgrade prices, allowing a portion of those to be resold to other nations, and sometimes a portion of the components of those which cannot be sold to be recycled, which may help mitigate pollution to some degree.

On the other hand, these products are so massively produced and discarded worldwide, including in many places where no proper regulations against pollution exist, or where the regulations are not enforced, that the result resembles the fate of discarded desktop computers, with reports of many illegal shipments to other countries where unfortunate people attempt to recover any value they can at the cost of their health. And because of the ubiquity of such devices, this happens at an even larger scale.

Such small computers are also increasingly used in applications such as street light control, "smart" electric grids with auto-reporting meters linked to a mesh network, security cameras, cars, toys, and more and more consumer appliances. The massive consumption of SoCs is scary from an environmental standpoint.

Another interesting point is that these do not normally replace the computers people already used. Servers, desktop workstations, laptops, netbooks, subnotebooks, tablets, convertibles, PDAs, e-readers and phones remain in use; we simply add more computers to our collection. And the world population has doubled in the last fifty years, meaning an even larger consumer market than ever before.

"Cloud" services

We have come to commonly call "the cloud" what are technically redundant server clusters. These clusters allow the duplication of user data, as well as the management and hosting of services, usually running under "virtual machines" (VMs), which offer the impression of dedicated operating system and application installation instances while fitting into less hardware than if they were individual colocated servers. Virtual machine monitors also allow precise resource monitoring, management, separation, throttling and billing.

A disadvantage of this model is that many resources are wasted on duplicated data and duplicated code execution. This can be mitigated to some extent with complex deduplication software. However, with proper end-to-end encryption and user-side encrypted data (which should be the norm for security and privacy), what can be deduplicated is limited, as the servers have no means to recognize duplicate data: every encrypted copy appears to be unique random data. Massive deduplication, backup management, virtual machine management and high speed redundant internet connectivity are also costly processes. Redundancy for availability and data integrity also implies that resources are duplicated and, ideally, decentralized, copied across various data centers.
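A minimal sketch of why this is, assuming content-addressed deduplication keyed on a hash of each stored block (the "encryption" below is a toy XOR stand-in for a real cipher, purely for illustration):

    import hashlib
    import os

    def store(server, block):
        # Content-addressed storage: identical blocks share one copy.
        digest = hashlib.sha256(block).hexdigest()
        server[digest] = block
        return digest

    def toy_encrypt(key, nonce, block):
        # Toy stream cipher stand-in (NOT real cryptography): XOR the
        # block with a keystream derived from a per-user key and nonce.
        stream = hashlib.sha256(key + nonce).digest()
        while len(stream) < len(block):
            stream += hashlib.sha256(stream).digest()
        return bytes(a ^ b for a, b in zip(block, stream))

    server = {}
    photo = b"the same cat picture" * 100

    # Two users upload the same plaintext block: it deduplicates to one copy.
    store(server, photo)
    store(server, photo)
    print(len(server))  # 1

    # With client-side encryption under per-user keys, the ciphertexts differ,
    # so the server sees two unrelated random-looking blocks and stores both.
    server.clear()
    for user_key in (os.urandom(32), os.urandom(32)):
        store(server, toy_encrypt(user_key, os.urandom(16), photo))
    print(len(server))  # 2

As a design note, schemes such as convergent encryption (deriving the key from the data itself) can restore some deduplication, but at a known cost to privacy, since identical data becomes recognizable to the server again.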

Large data centers have always been big energy consumers, because they host a lot of computing, storage and network equipment and use intensive air conditioning to keep those devices safe and in operation. This means that "the cloud", composed of such large data centers, cannot be considered "green" technology. Moreover, given the massive increase in data storage required to scale with the growing number of consumer devices which store user data and configuration on those remote servers by default, as well as the regular hardware replacements and upgrades those data centers require, "the cloud" amounts to an incredible pollution industry.

Fortunately, over time, the power requirements per rack and per account are being somewhat reduced by upgrades to more efficient hardware. On the other hand, the need for data centers keeps growing, and so does their number.

Conclusion

Reducing the power consumption of devices has the potential to reduce utilization costs as well as lower energy demand. As a side effect, the pollution footprint may be reduced for the same amount of computing power. Regulating the materials used in hardware can mitigate some health and environmental issues.

On the other hand, what allows the development and deployment of low-cost, low-energy technology is sustained, and often increased, massive consumption and production levels, partly for economic reasons and partly out of greed, or because we have grown accustomed to our way of life and our devices and take them for granted. For the same reasons, the available technology is limited, and many vendors end up using chips designed for ubiquitous mobile phones and tablets in other applications despite their drawbacks, because those chips are already massively produced and are the cheapest to buy.

The industry is actually advocating and pushing the adoption of a greater number of devices per person. Those devices are too cheap to replace to be worth repairing, and their surface-mounted components are too small to service without expensive specialized equipment.

Low power mobile, "Internet of Things" (IoT) and other such small embedded devices often require batteries. Batteries, like those devices, are not eternal; they require resources and generate pollution to produce, and are not universally disposed of and recycled properly, usually ending up in the environment.

Considering the unsustainable number of consumers, the unsustainable level of consumption of every consumer, the unsustainable massive production level using unsustainable methods and materials, the unsustainability of those devices, the unsustainable massive discarding of all those products and the unsustainable fiat, non-resource-based economy, we could hardly realistically call those devices "green".

I am not advocating a return to a pre-industrial way of life or Homo sapiens depopulation. But it is obvious that there is a systemic unsustainability problem, which we can only attempt to somewhat mitigate as part of the Anthropocene. This problem is currently worsening dramatically, to the point where, if it continues, the mitigation measures will unfortunately have to become suddenly radical.

The effects of our footprint are becoming a serious challenge to our societies, and even to the environment from which we derive and on which we depend for survival. Anthropogenic climate change and its effects are rapidly being acknowledged as critical by climatologists. A number of heavily populated cities, and even some countries, are imminently threatened by the frequency of abnormally severe weather events, changes in water levels and forest fires. The Arctic sea ice is almost gone, there are serious concerns about the state of and ongoing changes in the Antarctic, and, as feared, methane stores have already begun to slowly bubble up into the atmosphere.

We seem to be at a critical time, when we have developed communication networks and education projects powerful and widespread enough to help a significant number of people understand the gravity of these issues. Ironically, we needed this accelerated consumption and industry in order to get here. But the direction we have been heading in is unsustainable, and it comes with great responsibility and serious consequences.