A case for a “Trustless Computing Group”
Is it possible to imagine a Trustless Computing Group that deploys the same kind of hardware-level security standards deployed to date by the (in)famous Trusted Computing Group – but (a) open and verifiable in the source design of all critical components, (b) subject to extreme levels of verification relative to complexity, (c) extended to oversight of the manufacturing process, and (d) citizen-accountable in its governance – in order to guarantee both much higher levels of user privacy AND stronger protection of the rights of content owners?
The Trusted Computing Group (TCG) is the dominant and undisputed paradigm and standard for IT security in the West. It deeply influences all major IT security standards and certification bodies, and it is supported by a powerful consortium. Over the last decade alone, it has deployed 2,121,475,818 devices in the areas of communications, military systems, and conditional access systems for TVs. It is the basis of today’s Digital Rights Management (DRM) systems. It has worked mostly fine in the financial smart card sector, but it has totally failed at DRM and at protecting the privacy of human communications.
This week, in response to the fallout of the Snowden revelations, the TCG claimed that its model is the right one “to solve today’s most urgent cybersecurity problems”, such as those caused by vulnerabilities in widely used critical free software like OpenSSL. This is an almost paradoxical statement, given its track record.
The TCG paradigm assumes that there is a set of providers and systems that are trusted (i.e. treated as trustworthy) by default, and the goal is to minimize the chances that anyone can modify or in any way compromise such systems – or even study or analyze them (!). Such systems contain hardware, firmware, and software technologies that cannot, in their entirety, be legally (in the US) and/or practically verified in their source designs by open third-party review. This model has been thoroughly debunked by the Snowden revelations. It has become more and more clear how it is failing, and there is every reason to believe such systems are full of vulnerabilities resulting from security through obscurity, far too much complexity, willful subversion of the supply chain (by the NSA and many other parties), incompetence, and/or the lack of open public oversight and testing.
Why are content vendors not revolting? Their DRM keeps getting hacked, but content owners are fine, because its technical weakness was worked around by Apple and similar strategies that turned entire platforms into DRM systems (what Schneier calls the feudal security model), and/or by making it impractical enough for the average user to widely consume pirated content on commercial entertainment computing devices.
Is there an alternative?
We need a complete paradigm shift.
We need trustless computing instead of trusted computing.
We need “trustless computing” in the primary meaning of “trustless”, i.e. “untrusting” and “distrustful”: lacking the need or assumption of trust in anything or anyone, as is the case in proper democratic election systems, commercial aviation certification (FAA), and socio-technical safety systems for critical weapons systems. The only way to ensure IT systems that are meaningfully trustworthy is for them to be completely trustless.
We need computing whereby the user, or experts they trust, can verify or analyze all critical hardware and software, as well as lifecycle and manufacturing processes – and can do so on systems that are not built on a complexity that is beyond verification. We need an approach that carries to its ultimate conclusion the Trust No One model proposed by US security expert Gibson.
In our vision, the current state-of-the-art high-assurance paradigms epitomized by Trusted Computing will be replaced by the model of Trustless Computing, in which zero trust is assumed in any actor or feature of a computing service, except in the self-guaranteeing, transparent, and accountable organizational processes that underlie its operation, lifecycle, and certification governance – processes whose quality can be assessed by moderately educated and informed citizens.
What if instead we flipped it over and created a standards body, the Trustless Computing Certification Body, based on free software and a hardware-based security-through-transparency paradigm, that would use the same user-verifiable processes to guarantee (1) unprecedented privacy and freedom for users, and (2) unprecedented security for content owners?! Why can’t the same socio-technical assurance processes guarantee both users’ data and content owners’ data?!
We finally decided to change the name of our R&D project from User Verified Social Telematics to TRUSTLESS. It better describes our project’s paradigms and aims, and it better aligns with our Trustless Computing Certification Body initiative.