The Economic Development, Science and Innovation Committee has reported back on the Digital Identity Services Trust Framework Bill. The bill is one of those boring administrative ones, establishing a regulatory framework for providers of "digital identity services" - people who validate your identity online. Which normally isn't the sort of thing anyone outside that industry would bother with, except for two things: as originally drafted, the bill included a (weak) secrecy clause, and it included a very expansive immunity from legal liability. The good news is that the first is gone entirely, the committee having agreed that it was unnecessary, while the latter has had its biggest hole plugged by ensuring that providers remain liable for breaches of the Privacy Act.
But problems remain. While leaking private information is the most likely harm caused by digital identity services, there are other harms as well. James Ting-Edwards gives some examples near the end of this article. And because the immunity extends to interactions and communications with users as well as the actual provision of the service, it also covers straight-up discrimination and racism by the provider, both of which would normally attract liability under the Human Rights Act. The exemption covers only "good faith" actions, and arguably racism and discrimination can never be in good faith, but it would be safer to make that explicit by stating that the Human Rights Act still applies. Hopefully the government can be convinced to fix this at the committee stage.