Self-direction is the key to happiness at Galois, which requires us to put a lot of time into creating trust with one another and into building a shared understanding of what we want to achieve. Only when we trust our own understanding of what’s important to us collectively can we coordinate and pull in the same general direction, all the while relying on self-directed excellence.
One of the things that is important to us is working on projects we’re proud of—projects that try to contribute to the good in the world. We are quite intentional about this, and we have captured our intentions in something we call the “boundary policy.”
The boundary policy refers to how we set up boundaries for the work we feel good about doing vs. that which we would rather not do. This might sound strange coming from a company that works with the Department of Defense, but that is actually why it’s so important.
If we are to succeed, two primary efforts require our shared trust: offer building and research engineering. Offer building involves finding, proposing, and winning great R&D projects. This takes a lot of effort, expertise, and listening. The computer scientists doing this work need guidance on the sort of work the team will feel good about doing on projects taken on for clients.
On the other hand, research engineering involves inventing, designing, and building the prototypes that we deliver for the opportunities we have won. This takes a lot of work, years sometimes. The computer scientists doing this work want to know that the projects we pursue will contribute to the general good in the world. When they are “heads down” on current projects, they need to trust that their colleagues are pursuing work they can be proud of.
“We draw the line on projects where we believe the intended use of our work is predominantly non-damaging.”
The boundary policy is our attempt to create trust around those two concerns. In its shortest form, we draw the line on projects where we believe the intended use of our work is predominantly non-damaging. We then reason through it like this:
We spent a lot of time crafting the way we think about this challenge. We explored shorthand ideas like “defensive vs. offensive” or “white hat vs. black hat,” and these have been useful parts of our conversations. But they are not the core principles we needed to guide our decisions. We ultimately settled on the core statement that the intended use of our work must be “predominantly non-damaging.”
We then spent a lot of time developing a shared understanding of how we think through that in relation to things like vulnerability research, vehicle systems (cars, trucks, airplanes, UAVs, ships), and cryptography. When we consider taking on a project, we think about whether we will be able to publish our work publicly, which is important to us. We also consider the extent to which the work can be used for genuinely good things (meaning ensuring that systems work as they were originally intended, rather than being exploited). Lastly, we consider whether the tools we build might enable simple engineering exploits (bad) rather than the creation of correct and high-assurance systems (good).
The hard part is that, as with many scientific endeavors, our efforts can be used in a multitude of ways, but we consciously reason through this on every single project we pursue. Amongst ourselves, we share the reasons why we might pass on some projects as well as why we might pursue projects that were harder to agree on. We’re not always right, and we don’t always agree. But we are always intentional.
Our obsession with self-direction also allows people to individually choose the projects they will or won’t work on, without any sanction, formal or informal. But because we have this policy, we all know that, in the long run, we will end up with work that we are proud to deliver.
This process may not be perfect; it’s just our best effort to try to be good on purpose.