Climate scientists have demonstrated that society needs to take immediate action to avert catastrophic increases in global temperature from greenhouse gas emissions.

Utility computing, excluding cryptocurrency mining, contributes up to 3.8% of total human carbon emissions, and cryptocurrency contributes a further 1%. Yet much of the discussion has focused on individual consumer choices rather than on changing industrial practices. As technologists, we have far greater leverage by addressing our industry's impact on global warming.

First, let's quantify the effects of the decisions we make as individuals outside work, so that we have a baseline against which to compare the kinds of changes discussed in the rest of this article. Replacing a gasoline car with an electric car reduces emissions by approximately 2.5 tons of CO2 per year. Switching to a vegetarian diet decreases emissions from food production by around 1.5 tons of CO2 per year. Switching your entire household to solar electricity avoids somewhere around 2-4 tons of CO2 per year once manufacturing emissions are accounted for.

Even if we each make the best possible choices for the environment, regardless of the upfront costs, we can at best reduce non-aviation carbon emissions by about 20 tons of CO2 per household per year.

These individual choices are nowhere near the magnitude of the carbon emissions that we as technologists oversee. Each 96 vCPU / 48-core c5.metal instance running in us-east-1 emits roughly 3.6 tons of CO2 per year as of 2021, according to estimates by Teads. An equivalent 64-core c6g.metal instance emits only 1.2 tons per year, a difference of 2.4 tons per year. Multiply that by the number of instances in your employer's fleet, and you can see that the carbon cost of the services you develop and run at scale dwarfs the impact of your individual choices.
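
To make that concrete, here is a minimal back-of-envelope sketch in Python using the per-instance Teads estimates quoted above; the fleet sizes are hypothetical inputs, not figures from this article.

```python
# Rough fleet-level savings from migrating c5.metal (x86_64) to c6g.metal (ARM64),
# using the Teads per-instance estimates for us-east-1 quoted above.
C5_METAL_TONS_PER_YEAR = 3.6   # estimated t CO2/year per c5.metal instance
C6G_METAL_TONS_PER_YEAR = 1.2  # estimated t CO2/year per c6g.metal instance

def annual_savings_tons(fleet_size: int) -> float:
    """CO2 avoided per year by migrating an entire fleet of this size."""
    return fleet_size * (C5_METAL_TONS_PER_YEAR - C6G_METAL_TONS_PER_YEAR)

for fleet in (10, 100, 1000):  # hypothetical fleet sizes
    print(f"{fleet:5d} instances -> ~{annual_savings_tons(fleet):,.0f} t CO2/year avoided")
```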

And working on ecologically destructive technology such as oil extraction or cryptocurrency has a far worse impact. If your work causes Bitcoin to become 0.001% more valuable, you will have incentivized the emission of 370 tons of CO2 per year downstream. And if you are one of 1,000 people who make it possible to create a single new oil well producing 100 barrels of oil per day, then over a year your share of the resulting emissions will be roughly 15 tons of CO2.
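
For those who want to check the oil-well figure, here is a hedged reconstruction of the arithmetic. The ~0.43 tons of CO2 per barrel combusted is my assumption (it is the commonly cited US EPA equivalency), not a number stated above.

```python
# Back-of-envelope reconstruction of the oil-well example above.
BARRELS_PER_DAY = 100
TONS_CO2_PER_BARREL = 0.43   # assumed combustion factor (EPA equivalency), not from the article
CONTRIBUTORS = 1000          # people who collectively enable the well

annual_tons = BARRELS_PER_DAY * 365 * TONS_CO2_PER_BARREL
print(f"well total: ~{annual_tons:,.0f} t CO2/year")                # ≈ 15,695
print(f"your share: ~{annual_tons / CONTRIBUTORS:.1f} t CO2/year")  # ≈ 15.7
```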

Stop: Use your talents for good, not destruction

Say no to petroleum exploration

Given that we are on the precipice of catastrophic global temperature increases, we know that further extraction of coal, oil, and gas from the ground and from the sea is irresponsible and will prolong the era of widespread carbon emissions. The price of oil must rise to account for its true cost. Lowering the cost of oil encourages further consumption, rather than migration away from it. Thus, as technologists, we should not devote our talents to making otherwise marginal hydrocarbon extraction projects more practical or efficient.

It doesn't matter how carbon-efficiently a geological model runs in the cloud if its purpose is to pump oil out of the ground for combustion! As of 2010, the cost of exploration was $3 per barrel, with a break-even point of $50 per barrel and oil prices of $75 per barrel. Discovery has only become more difficult since then, and the pipeline of new deposits has dried up. Thus, we can make a meaningful dent in the net profitability of oil by allowing exploration costs to rise, and a less profitable product will see less investment and less growth.

For renewables to be financially compelling in the open market, they cannot be forced to compete against subsidized fossil fuels, least of all fossil fuels subsidized by our labor as technologists. If you want to work on an earth science model, work on predicting which houses or regions will benefit most from solar panel installation, or on how to design pumped storage at scale. We owe it to future generations to work on the solution rather than further contributing to the problem.

Say no to blockchain hype

Proof-of-work decentralized financial solutions contribute 1% of anthropogenic carbon emissions, once both their electricity consumption and the emissions from producing and decommissioning the hardware they run on are combined. While there may be some value in them, it is hard to argue that such systems deliver 20% of the total value that computing provides to human society.

Very few problems genuinely benefit from fully decentralized, zero-trust solutions like proof-of-work blockchains. And even where proof-of-work can solve issues of counterparty trust, its inherent risks of theft or breach may still outweigh the benefits. Such a system still either relies upon trusting the creators of a smart contract to update it without ‘pulling the rug’, or upon keeping the contract immutable and hoping that there is not a single bug in its code.

The less trendy solution of a centralized database is more secure, more practical, and far, far less wasteful. Commission artists directly for bespoke work rather than buying NFTs. If you're worried about hosting platform centralization, run your own VMs on DigitalOcean rather than relying on a central host. None of these use cases requires zero-trust, wasteful computation to function.

This is not a problem space where we can blithely proceed and wait for proof-of-stake or another alternative to follow on. The damage caused by irresponsible use of proof-of-work is happening today, while proof-of-stake research remains in a perpetual state of ‘it's just three to five years out!’. And even if proof-of-stake succeeds, it often must be bootstrapped from proof-of-work for initial distribution, incentivizing the waste of the original work. Even alternatives such as proof-of-storage wound up bricking NVMe drives (which incurred emissions to produce) and causing supply shortages and price increases for consumers.

Any action that increases the economic value of Bitcoin or Ethereum increases demand for it, which creates incentives for miners to spend more electricity and more physical resources producing it. Making mining more efficient results in hardware turnover (such as CPUs to GPUs, or GPUs to ASICs), but no net decrease in energy consumption, since the hash rate quickly rises to absorb the efficiency gains.

While some might argue that blockchain technology can incentivize the development of renewables, for instance by paying a portion of its profits to subsidize them, the reality is grimmer. Technology like Ethereum and Bitcoin makes electricity globally fungible and produces a race to the bottom. Workloads migrate towards the cheapest regions, such as Xinjiang, where power comes from dirty coal, or towards the reactivation of otherwise uneconomical gas wells in Canada. Only centralized mechanisms would be able to arbitrate whether energy is sustainably sourced, defeating the entire thesis of a decentralized network.

If a system is neither fit for its intended purpose nor environmentally sustainable, it should be rejected. Autonomous, decentralized systems are an interesting research topic, but employing them at scale in their current state leads to ecological destruction well exceeding their societal benefit. Thus, readers who are conscious of their environmental footprint should generally decline to use decentralized blockchain systems, and should educate others about the security and environmental dangers of using such systems without good reason.

Save: Run the least amount of code to accomplish your workload

The most efficient code is the code that is never executed. But the next most efficient code is code that avoids wasteful computation. Especially as the applications and environments we manage become more complex, it’s easy to have dozens, if not hundreds or thousands, of unseen inefficiencies hidden beneath the busy interaction of innumerable moving parts.

One way to combat this is by implementing profiling and tracing solutions to identify which code is taking the longest to execute, or being executed most frequently. These data points, in turn, provide a way to take the guesswork out of determining where bottlenecks live across even vastly distributed microservice architectures.

Eliminating these bottlenecks will save your business computational resources, decrease emissions, and decrease user-visible latency. Profiling is easy to run as a one-off with the help of tools such as eBPF-based profilers or language-runtime-specific techniques. You might be surprised to find that you are wasting 20-30% of CPU on an easily cacheable computation, or repeatedly hammering your database instead of batching requests so they can be executed more efficiently.
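
As an illustration, here is a minimal sketch of a one-off, runtime-specific profile in Python and the kind of fix it often motivates; the workload and function names are hypothetical, not taken from any particular codebase.

```python
# Profile a hypothetical request handler with cProfile, then cache the hot path.
import cProfile
import functools
import pstats
import time

def expensive_lookup(key: str) -> str:
    time.sleep(0.01)          # stand-in for a slow query or heavy computation
    return key.upper()

@functools.lru_cache(maxsize=1024)
def cached_lookup(key: str) -> str:
    return expensive_lookup(key)

def handle_requests() -> None:
    # The same handful of keys is requested over and over -- easily cacheable.
    for key in ["alpha", "beta", "alpha", "gamma", "alpha"] * 100:
        cached_lookup(key)    # swap in expensive_lookup(key) to see the difference

if __name__ == "__main__":
    cProfile.run("handle_requests()", "profile.out")
    pstats.Stats("profile.out").sort_stats("cumulative").print_stats(10)
```

Running the profile with and without the cache makes the wasted CPU time immediately visible in the output.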

Of course, sometimes things just take time, even when code is behaving exactly the way you expect. That doesn't necessarily mean it's the only way to perform that particular task, or even strictly necessary. Identifying where you don't need to implement a solution can be just as valuable as recognizing where you do. For example, do you spend increasing amounts of time and resources building out staging and test environments that closely mirror live environments? How much of that cost could be reduced or removed entirely by improving your ability to continuously test and update production? There's no one right answer, and tactics for pushing code into production more quickly while minimizing and mitigating disruption to end users vary significantly. But simply by asking these questions, we ensure that each task we ask of our systems is doing a meaningful job, and our ability to answer them improves our ability to make informed decisions across our estate.

Furthermore, we need to consider what ‘good enough’ performance means, and weigh the ecological costs of chasing perfection. Does it really make sense to consume ten times the resources in order to improve the accuracy of our models by 0.1%? Research by Bender, Gebru, et al. shows that there are severely diminishing returns to the use of large language models and other machine learning techniques. If you can use simple heuristics and statistics instead of training a neural network, do so. Use the simplest thing that achieves close to the result you want. Don't pick a technology to fill a buzzword; consider whether it will genuinely solve your problem in the most efficient way possible.
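
As a hedged sketch of what ‘the simplest thing’ can look like in practice, the toy example below flags latency outliers with a median-based heuristic rather than a trained model; the data and threshold are entirely illustrative.

```python
# A trivial statistical baseline: flag latencies more than 3 median absolute
# deviations (MAD) above the median. Often this is 'good enough' and costs
# almost nothing to run compared to training and serving a neural network.
import statistics

historical_latencies_ms = [12, 15, 11, 240, 14, 13, 300, 12, 16, 11]  # illustrative data

median = statistics.median(historical_latencies_ms)
mad = statistics.median([abs(x - median) for x in historical_latencies_ms])

def is_anomalous(latency_ms: float) -> bool:
    return abs(latency_ms - median) > 3 * max(mad, 1e-9)

print([x for x in historical_latencies_ms if is_anomalous(x)])  # -> [240, 300]
```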

Switch: Use the most efficient underlying technology

Architecture matters

There are options other than the ‘default’ x86_64 architecture that you should consider for running your workloads. I know, it's been quite some time since CPU architecture was something we needed to think much about in computer science circles. Back in the 1990s, Intel-based processors became ubiquitous in desktop computing, so much so that their Pentium brand became a household name. The dominance of Intel's x86 architecture only grew as the 90s gave way to the 2000s, first with AMD releasing a 64-bit extension to better support the growing memory needs of evolving workloads, and then with Apple switching from PowerPC to Intel processors in 2005. By the 2010s, nearly all desktop computing ran on a uniform architecture under the hood, whether Windows, macOS, or *nix, with one notable and growing exception: mobile phones.

The introduction of smartphones brought with it a new set of problems and priorities. Instead of optimizing solely for computing power and complex operations, the portable, multifunction nature of smartphones required manufacturers to prioritize low-power, efficient processors that would run ‘well enough’ to bring computing to our pockets. As a result, ARM32, an architecture that had existed since the 1980s but never reached the market saturation of x86/amd64, became the de facto standard for mobile devices, as it provided a compelling balance between computing power and energy efficiency.

With the advent of arm64, ARM's 64-bit evolution, in tandem with the rapidly growing online infrastructure needed to serve an increasingly parallelized and distributed world, it became clear that the efficiencies gained in mobile technology were also well suited to the datacenter. The barriers to adopting arm64 fell further as more and more software stacks were built from source using open source toolchains, rather than with precompiled proprietary tools tightly coupled to their native environments.

Then, in 2019, AWS announced their Graviton2 line of ARM-based EC2 instances with the promise of improved cost, performance, and environmental impact. Cloud availability of ARM processing removed one of the final remaining hurdles to wide adoption. Instances could now be easily provisioned and terminated on demand via familiar mechanisms, greatly reducing the upfront cost of evaluating your existing software stack. This isn't to say that ARM will necessarily be a panacea for your cost overages and performance bottlenecks, but the possibility of improved on-demand pricing and resource efficiency, at the cost of a few afternoons' worth of your or your engineers' time, seems a fair trade. At the scale of cloud-native applications, even a modest improvement can lead to huge savings, and with that, huge environmental impact. It's a win-win.

Even if you have legacy workloads that must remain on the x86_64 processor architecture, the latest generations of hardware will offer efficiency improvements over older generations. Ensuring that you have a strategy for migrating workloads between machine types and rolling over your fleet will allow you to benefit from Intel and AMD's investment in R&D rather than remaining static.
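
One way to start on such a strategy is simply knowing what you run today. The sketch below (assuming boto3 and AWS credentials are configured; the region is a placeholder) inventories the instance families in one region so you can spot older generations that are candidates for migration.

```python
# Count EC2 instance families in a region to find migration candidates
# (e.g. older x86_64 fleets that could move to newer or Graviton instance types).
from collections import Counter

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region
families = Counter()

for page in ec2.get_paginator("describe_instances").paginate():
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            family = instance["InstanceType"].split(".")[0]  # "c5" from "c5.metal"
            families[family] += 1

for family, count in families.most_common():
    print(f"{family}: {count} instances")
```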

I'm proud that at Honeycomb I sparked the shift of a workload equivalent to 5,000 Intel vCPUs onto 3,300 ARM64 cores, allowing us to avoid generating approximately 120 tons of CO2 emissions per year. That's the equivalent of 80 people switching to a vegetarian diet!

Location matters

In combination with more power-efficient processor technology, using genuinely renewable energy dramatically reduces carbon footprint and allows the limited supply of carbon offsets to be reserved for sectors that cannot avoid emissions through full electrification, such as aviation, steel, and concrete production.

Picking cloud regions powered purely by hydroelectricity, such as datacenters in Washington or Oregon rather than Virginia or the Carolinas, means your footprint may not need to be offset in the first place. And using cloud providers with the best PUE (Power Usage Effectiveness) datacenter and server designs means that less energy is spent on cooling for the same amount of computation on the same processors.
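
If your deployment tooling lets you choose regions programmatically, a preference list can encode this. A minimal sketch follows; the ordering reflects only the hydro-powered-Northwest-over-Virginia example above and is not a vetted carbon ranking.

```python
# Prefer lower-carbon regions when capacity allows. The ordering is illustrative,
# following the article's example of Oregon hydro power over Virginia.
LOW_CARBON_PREFERENCE = ["us-west-2", "us-east-1"]  # us-west-2 = Oregon, us-east-1 = Virginia

def pick_region(available_regions: set[str]) -> str:
    """Return the most preferred region that currently has capacity."""
    for region in LOW_CARBON_PREFERENCE:
        if region in available_regions:
            return region
    raise ValueError("no preferred region available")

print(pick_region({"us-east-1", "us-west-2"}))  # -> us-west-2
```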

Conclusion

It's easy to underestimate our ability to affect our industry's ecological footprint, since as individuals it's likely we've felt powerless in the face of seemingly insurmountable forces. What's more, thinking of the performance of our code or the computing power of our servers in terms of environmental impact is rarely what has driven organizations to evaluate a change in their working practices. However, each strategy outlined here improves the sustainability and efficiency of our organizations in addition to having a net-positive impact on the world around us.

After all, reducing wasted compute time and opting for power-efficient technology both serve to decrease the cost of doing business overall. Even opting not to pursue environmentally harmful technologies generates goodwill from our communities and ensures we're not competing for the inherently dwindling resources required by more wasteful options.

The end result is the best of both worlds: our responsibilities as global citizens and our responsibility to our employers are not at odds, but directly aligned. As technologists, we’re uniquely positioned not only to implement environmentally responsible solutions, but also to tie those solutions to what drives our businesses. The only catch is, once we realize we have this power, it is incumbent upon us to use it.