
How NASA Writes Space-Proof Code

When you write some code and put it on a spacecraft headed into the far reaches of space, you need it to work, no matter what. Mistakes can mean loss of mission or even loss of life. In 2006, Gerard Holzmann of the NASA/JPL Laboratory for Reliable Software wrote a paper called The Power of 10: Rules for Developing Safety-Critical Code. The rules focus on testability, readability, and predictability:

  1. Avoid complex flow constructs, such as goto and recursion.
  2. All loops must have fixed bounds. This prevents runaway code.
  3. Avoid heap memory allocation.
  4. Restrict functions to a single printed page.
  5. Use a minimum of two runtime assertions per function.
  6. Restrict the scope of data to the smallest possible.
  7. Check the return value of all non-void functions, or cast to void to indicate the return value is useless.
  8. Use the preprocessor sparingly.
  9. Limit pointer use to a single dereference, and do not use function pointers.
  10. Compile with all possible warnings active; all warnings should then be addressed before release of the software.
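
To make these concrete, here's a minimal C sketch (my own illustration, not actual flight code) of what several of the rules look like in practice: a fixed loop bound, no heap allocation, two assertions per function, and a checked return value at the call site.

```c
#include <assert.h>
#include <stdio.h>

#define MAX_SAMPLES 64  /* Rule 2: every loop gets a fixed upper bound */

/* Sum a sensor buffer while following several of the rules above:
   no heap allocation (Rule 3), a fixed loop bound (Rule 2), and at
   least two assertions (Rule 5). */
static int sum_samples(const int samples[], int count, long *out)
{
    assert(samples != NULL);                     /* Rule 5 */
    assert(count >= 0 && count <= MAX_SAMPLES);  /* Rule 5 */

    long total = 0;
    for (int i = 0; i < count && i < MAX_SAMPLES; i++) {  /* Rule 2 */
        total += samples[i];
    }
    *out = total;
    return 0;  /* success */
}

int main(void)
{
    int readings[4] = {3, 1, 4, 1};  /* Rule 3: stack, not heap */
    long total = 0;

    /* Rule 7: check the return value of every non-void function. */
    if (sum_samples(readings, 4, &total) != 0) {
        return 1;
    }
    (void)printf("total = %ld\n", total);  /* Rule 7: or cast to void */
    return 0;
}
```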

All this might seem a little inside baseball if you’re not a software developer (I caught only about 75% of it; the video embedded above helped a lot), but the goal of the Power of 10 rules is to ensure that developers are working in such a way that their code does the same thing every time, can be tested completely, and is therefore more reliable.

Even here on Earth, perhaps more of our software should work this way. In 2011, NASA applied these rules in their analysis of unintended acceleration in Toyota vehicles and found 243 violations of 9 of the 10 rules. Are the self-driving features found in today’s cars written with these rules in mind, or can recursive, untestable code run off into infinities while it’s piloting people down the freeway at 70 mph?
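
To illustrate what’s at stake, here’s a hypothetical C contrast (the list-counting functions are my own invention): the recursive version has no guaranteed bound, while the rule-following version is iterative, bounded, and trips an assertion on malformed input instead of hanging.

```c
#include <assert.h>

struct node { int value; struct node *next; };

/* What Rules 1 and 2 guard against: no guaranteed bound, so a cyclic
   or corrupted list sends this into unbounded recursion (and a stack
   overflow) with no diagnostic. */
static int count_nodes_risky(const struct node *n)
{
    return (n == NULL) ? 0 : 1 + count_nodes_risky(n->next);
}

/* The rule-compliant shape: iterative with a hard upper bound, so a
   malformed list fails an assertion instead of hanging. */
#define MAX_NODES 1024

static int count_nodes_bounded(const struct node *n)
{
    int count = 0;
    while (n != NULL && count < MAX_NODES) {  /* Rule 2: fixed bound */
        count++;
        n = n->next;
    }
    assert(n == NULL);  /* Rule 5: a list longer than MAX_NODES is a bug */
    return count;
}

int main(void)
{
    /* Rule 3: the list lives on the stack, no heap allocation. */
    struct node c = {3, NULL}, b = {2, &c}, a = {1, &b};
    int ok = (count_nodes_risky(&a) == 3) && (count_nodes_bounded(&a) == 3);
    return ok ? 0 : 1;
}
```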

And what about AI? Anil Dash recently argued that today’s AI is unreasonable:

Amongst engineers, coders, technical architects, and product designers, one of the most important traits that a system can have is that one can reason about that system in a consistent and predictable way. Even “garbage in, garbage out” is an articulation of this principle: a system should be predictable enough in its operation that we can then rely on it when building other systems upon it.

This core concept of a system being reason-able is pervasive in the intellectual architecture of true technologies. Postel’s Law (“Be liberal in what you accept, and conservative in what you send.”) depends on reasonable-ness. The famous IETF keywords list, which offers a specific technical definition for terms like “MUST”, “MUST NOT”, “SHOULD”, and “SHOULD NOT”, assumes that a system will behave in a reasonable and predictable way, and the entire internet runs on specifications that sit on top of that assumption.

The very act of debugging assumes that a system is meant to work in a particular way, with repeatable outputs, and that deviations from those expectations are the manifestation of that bug, which is why being able to reproduce a bug is the very first step to debugging.

Into that world, let’s introduce bullshit. Today’s highly-hyped generative AI systems (most famously OpenAI) are designed to generate bullshit by design.

I bet NASA will be very slow and careful in deciding to run AI systems on spacecraft; after all, they know how 2001: A Space Odyssey ends just as well as the rest of us do.