Radix Engine Code Audit | The Radix Blog | Radix DLT
Radix Publishing is pleased to announce that the results of two in-depth code audits of the Radix Engine code base can now be shared publicly.
After extensive analysis and rigorous testing, the comprehensive audit has been concluded, showcasing exceptional results in security, documentation, code quality, and architecture. Conducted by Hacken, a team of seasoned cybersecurity auditors, this deep dive into the protocol’s inner workings has highlighted the strengths of the technology driving the Radix vision.
This process wasn’t merely about ticking boxes; it involved starting from first principles, identifying key risk areas, and conducting bottom-up analysis of every piece of the engine, just as a well-informed attacker would do while probing for weaknesses to exploit.
Hacken completed two code audits on the Radix Engine code base. The first began in early April 2023, while the core engine work of Babylon was still in progress, and resulted in a total score of 9.4 out of 10. The second began in August as part of launch preparation and resulted in a score of 10 out of 10.
First Audit Goals & Philosophy
Because the Radix asset model, transaction model, and virtual machine are completely different from anything else in crypto, the initial audit had some unusual goals:
- Audit the auditor: Auditors in the crypto space do the overwhelming majority of their work against the Ethereum virtual machine (EVM) and EVM-alikes, and accordingly develop experience with recognizing common errors in that space. There was considerable concern within the development team as to whether it would be possible to find an auditor who could identify problems from first principles and core system understanding, rather than just trying to apply lessons learned within the EVM architecture.
- Get the auditor up to speed for a later audit: Again, given the novelty of the Radix model, it was expected that a significant amount of time would be needed for any auditor to properly internalize the architecture and behavior of the system. Conducting this ramp-up period during a first audit would enable the auditor to hit the ground running on a subsequent audit after stabilization had begun.
- Validate correctness of Scrypto Binary Object Representation (SBOR): The Radix Engine makes extensive use of a custom data serialization format to transmit information between parts of the system, and such implementations are frequently prone to gremlins. Thorough third-party review and testing was a critical step to ensure the correctness and safety of SBOR.
- Validate fuzz testing capability: “Fuzzing” is an automated testing method that injects large volumes of invalid or unexpected payloads into various parts of a system, and is a very effective way to empirically test complex systems with many integration points and moving parts. RDX Works had just completed a first implementation of a fuzzer and needed a comparison point with an externally-written fuzzer to see if both would identify the same issues.
- Architecture review: The development team had applied a rigorous design philosophy for all parts of the stack which clearly delineated the different layers of abstraction, areas of responsibility, and assumptions of each subsystem. Now it was time to find out if any of this made sense to a team coming in with fresh eyes, and get an outside perspective on the decisions.
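To make the SBOR and fuzzing goals above concrete, here is a minimal, hypothetical Rust sketch of the two ideas together: a toy type-prefixed encoder/decoder (this is NOT the real SBOR wire format; all names and tags are invented for illustration) and a naive fuzz loop that feeds pseudo-random bytes to the decoder, checking that malformed input is rejected with an error rather than crashing the process:

```rust
// Toy illustration only: a minimal type-prefixed encoder/decoder in the
// spirit of a custom serialization format, plus a naive fuzz loop.
// NOT the actual SBOR format; names and type tags are hypothetical.

#[derive(Debug, PartialEq)]
enum Value {
    Bool(bool),
    U32(u32),
}

fn encode(v: &Value, out: &mut Vec<u8>) {
    match v {
        Value::Bool(b) => {
            out.push(0x01); // hypothetical type tag for Bool
            out.push(*b as u8);
        }
        Value::U32(n) => {
            out.push(0x02); // hypothetical type tag for U32
            out.extend_from_slice(&n.to_le_bytes());
        }
    }
}

fn decode(bytes: &[u8]) -> Result<Value, String> {
    match bytes.split_first() {
        Some((0x01, rest)) if rest.len() == 1 && rest[0] <= 1 => {
            Ok(Value::Bool(rest[0] == 1))
        }
        Some((0x02, rest)) if rest.len() == 4 => {
            let mut buf = [0u8; 4];
            buf.copy_from_slice(rest);
            Ok(Value::U32(u32::from_le_bytes(buf)))
        }
        _ => Err("invalid payload".to_string()),
    }
}

fn main() {
    // Round-trip check: encode then decode must return the original value.
    let original = Value::U32(42);
    let mut bytes = Vec::new();
    encode(&original, &mut bytes);
    assert_eq!(decode(&bytes), Ok(original));

    // Naive fuzz loop: the decoder must reject garbage with an Err,
    // never panic or crash the process.
    let mut seed: u64 = 0x9e3779b97f4a7c15;
    for _ in 0..10_000 {
        // xorshift pseudo-random generator (no external crates needed)
        seed ^= seed << 13;
        seed ^= seed >> 7;
        seed ^= seed << 17;
        let len = (seed % 8) as usize;
        let garbage: Vec<u8> = (0..len).map(|i| (seed >> (i * 8)) as u8).collect();
        let _ = decode(&garbage); // Ok or Err are both fine; a panic is a bug
    }
    println!("round-trip and fuzz checks passed");
}
```

A production fuzzer like the ones discussed above would use coverage-guided tooling rather than a raw random loop, but the invariant being checked is the same: decoding untrusted bytes must fail safely.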
You will note that nowhere in that list is anything related to identifying the kinds of safety/liveness issues that you might expect would be the usual goals of such an audit. At this point in development (early April 2023), there were already known examples of both problems in internal bug & task tracking, in areas which hadn’t been fully implemented, or which were awaiting a larger refactor and so weren’t yet having bugs addressed. Of greater importance in the short term was making sure that a capable auditor was found and vetted, while gaining validation on some important foundational elements.
First Audit Methodology
Proposals were requested from several vendors, and after conversations with each, Hacken was selected as the audit partner. Hacken exhibited a very strong interest in the workings of Radix and had particular expertise in fuzzing, with a very open and flexible audit process that wasn’t built around EVM-based preconceived notions.
Hacken explicitly requested not to have orientation sessions explaining the architecture and thinking, preferring to learn exclusively from the code and documentation in the source repository. They quickly (and independently) zeroed in on SBOR as a potential hotbed for bugs that warranted a deep dive, and started working through the stack layer by layer, building up a comprehensive fuzzing library as they went.
Bearing in mind that one of the key goals was to validate the performance of the auditor, the development team elected not to share any list of known issues or areas suspected of harboring bugs, preferring to see if Hacken would discover them autonomously.
Each week, the teams met to discuss the latest findings and next areas of investigation.
First Audit Results
Hacken exceeded every expectation, exhibiting an incredibly strong capacity to comprehend and deeply think through an unfamiliar architecture. They independently discovered multiple known issues that had not been revealed to them, in addition to identifying bugs which were not yet being tracked. SBOR passed an exhaustive set of tests and investigation with flying colors, and the in-house fuzz testing wound up comparing well to Hacken’s extensive fuzzing effort.
Perhaps most important was the deep theoretical and empirical validation of Radix’s novel design. Hacken’s team approached the system with considerable skepticism, questioning the efficacy of several design choices and raising multiple concerns. One by one, the doubts fell away as they vetted each part and found that it withstood their scrutiny and testing, and by the end of the audit their opinion had shifted to qualified praise for the approach taken and the quality of implementation. Given that this was during a period of heavy development, with plenty of spots in the code exhibiting TODO comments and much still to be written, qualified praise from a thorough audit was a good starting point.
Here is the full audit report, conducted against the code base in early April 2023. Individual results were as follows:
- Documentation Quality: 8
- Code Quality: 7
- Architecture Quality: 9
- Security: 10
- Overall: 9.4
Second Audit Goals & Philosophy
Not long after the conclusion of the initial audit, development work on the engine transitioned completely to stabilization and production readiness for Babylon, and goals for the second audit shifted accordingly:
- Validate correctness of native blueprints: Multiple native blueprints, such as the Account required by all users, and the AccessController that enables easy multifactor recovery, were now deemed production-ready and needed a thorough outside review.
- Validate recoverable error handling: During most of development, many code paths could result in a validator node crashing. Crashing right at the point of failure was actually a desired state of affairs for the test networks, as it allowed for easier debugging of issues. However, in production, allowing a validator to crash in a recoverable situation opens up a liveness attack, so error handling had been updated to prevent this, and these changes needed vetting.
- Regression testing: Hacken developed extensive test libraries during the first audit, and would run a comprehensive re-test to catch any bugs which crept into previously-tested areas.
- Production-readiness review: A focused review of several attack vectors, and a search for any bugs by which the correctness or liveness of the network could be compromised.
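The recoverable-error goal above can be illustrated with a small, hypothetical Rust sketch (not actual Radix Engine code; the function names and error types are invented) contrasting a crash-at-the-point-of-failure style with a typed-error style that lets a node reject the offending input and keep running:

```rust
// Illustrative sketch only (NOT actual Radix Engine code): contrast a
// "crash at the point of failure" style, convenient on test networks,
// with recoverable error handling suitable for production validators.

#[derive(Debug, PartialEq)]
enum EngineError {
    InsufficientBalance { requested: u64, available: u64 },
}

// Debug-friendly style: abort on bad input. Handy for pinpointing bugs
// on a test network, but a liveness hazard in production if an attacker
// can trigger the panic on demand.
fn withdraw_or_crash(balance: u64, amount: u64) -> u64 {
    if amount > balance {
        panic!("withdrawal exceeds balance");
    }
    balance - amount
}

// Production style: surface the failure as a typed error the caller can
// handle (e.g. by rejecting the offending transaction) while the node
// keeps running.
fn withdraw(balance: u64, amount: u64) -> Result<u64, EngineError> {
    if amount > balance {
        return Err(EngineError::InsufficientBalance {
            requested: amount,
            available: balance,
        });
    }
    Ok(balance - amount)
}

fn main() {
    // The crash-prone version is fine on the happy path...
    assert_eq!(withdraw_or_crash(100, 40), 60);

    // ...but the recoverable version also handles the failure path
    // gracefully, rejecting the bad request and carrying on.
    assert_eq!(withdraw(100, 40), Ok(60));
    assert_eq!(
        withdraw(100, 250),
        Err(EngineError::InsufficientBalance { requested: 250, available: 100 })
    );
    println!("recoverable error handling demo passed");
}
```

The difference matters for liveness: with the `Result`-based version, an attacker who can submit inputs that hit the failure path only gets their own transaction rejected, rather than taking a validator offline.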
In August 2023, a second audit to pursue these goals began.
Second Audit Methodology
At this point, Hacken was already fully up to speed on how Radix operated, and could immediately start a careful review of critical pre-production areas. Unlike in the first audit, any issues identified by the development team during stabilization were promptly shared with Hacken so they wouldn’t spend time on known problems.
With Babylon nearing launch, expectations as to code quality and documentation were higher, and everything was fair game for review and criticism.
Second Audit Results
Hacken quickly identified a critical regression in an area of the code which had just been refactored but hadn’t yet been fully tested, and then proceeded to validate the readiness of multiple native blueprints before continuing through the rest of the audit. That initial regression proved to be the only serious issue identified by the audit, with only two other low-impact issues discovered.
Sentiment from Hacken throughout the second audit was very favorable, with the code now in a proper state of review readiness rather than the mid-implementation state that they saw the first time around, which is certainly reflected in the scores:
- Documentation Quality: 10
- Code Quality: 10
- Architecture Quality: 10
- Security: 10
- Overall: 10
The audit process was tremendously beneficial, meeting or exceeding every goal and providing welcome outside validation of Radix’s architecture and implementation. Watching the Hacken team start from a place of profound skepticism and gradually work their way into enthusiasm over the Radix way of doing things was rewarding in itself, as the toughest audience to win over is one that’s literally being paid to look for every flaw.
Two thorough code audits: thousands of dollars and many team hours. Delivering a Radix revelation to your auditor: priceless.
→ Hacken first audit report – code from April 2023
→ Hacken second audit report – code from August 2023