Insights from the Regulatory Roundtable on AI Social Readiness

For social workers, AI is being described as a new way of working. A way to handle more information, spot patterns, and maybe even predict risks before they happen. But there’s another side to that story. When technology steps into spaces that involve vulnerable people, as regulators we have to ask: what does this mean for safety, for fairness, and for public protection? This event was a valuable opportunity to look at how, as regulators, we can respond and work with our registrants to maximise the benefits and opportunities that AI brings, but in ways that do not leave them, or the people they work with, exposed. It was great to work with the IoR on such a key area and, I hope, the start of an important ongoing conversation.
— Sarah Blackmore (Social Work England)

London, January 2026 – In collaboration with the Centre for Collective Intelligence at the UK innovation hub NESTA, we had the opportunity at the end of last month to welcome a dozen regulator‑member representatives to explore AI from a less commonly addressed yet crucial angle: AI Social Readiness.

The regulatory panel discussion focused on public‑sector AI tools and how these are perceived by members of the public, including from a regulatory perspective. The Centre for Collective Intelligence shared findings from a pilot study in which members of the UK public completed a ‘social readiness’ assessment of two AI tools developed for use in the public sector: Consult, developed by the government’s AI Incubator to support the analysis of public consultations, and Magic Notes, created by the social enterprise Beam to support note‑taking and summarisation by social workers.

Facilitated by IoR Chair Marcial Boo, the roundtable welcomed esteemed contributors, including Sarah Blackmore (Executive Director for Professional Practice and External Engagement, Social Work England) and Michael Farrington (Deputy Director, Technology, Ofsted).

The roundtable discussion explored the value of public engagement in evaluating and assuring AI tools, and also touched on the different ways regulators are currently exploring or utilising AI:

  • Many regulators are using AI tools to help write minutes of meetings.

  • Some are exploring AI tools to support the drafting of inspection reports or the preparation of papers for regulatory decision‑making committees.

  • Others are using AI tools to sift and triage complaints.

  • A few larger regulators are beginning to look at how AI might assist in reviewing lengthy compliance documents and providing high‑level summaries to inform further internal analysis.

No discussion of AI is complete without considering the risks it may pose. Concerns raised included:

  • Fairness, including diversity considerations and the handling of vulnerable groups;

  • Security;

  • The pace at which the technology is evolving.

Participants agreed that there is good practice to learn from, and that a range of assurance methods are needed, from the technical to the social. Nevertheless, hidden risks remain to be navigated. For these reasons, community support and peer learning are especially valuable for sharing both best practices and the challenges regulators face.

Indeed, the Institute of Regulation supports its members in engaging with the vital topic of AI in regulation from several perspectives:

  • All Special Interest Groups (SIGs) have discussed AI from their own perspectives over recent years, and the Digital & Technology SIG provides a dedicated space for the technological aspects of AI to be explored among regulators.

  • The Regulation Podcast’s 18th episode, Regulators’ Response to Artificial Intelligence, featuring two global experts, Prof. Julia Black (LSE) and Joey Conway (Deloitte), explores what AI is, the challenges and opportunities it presents, and how regulators should respond.

  • A Regulatory Roundtable designed for Non‑Executive Directors examined the topic ‘What NEDs Need to Know About Cyber Security and Cyber Resilience’ last year.

  • As part of IoR’s member‑exclusive webinar series, members had the opportunity to learn about ‘GenAI Adoption in UK Regulators’ through an insightful presentation by Rob Holtom (Executive Director – DDaT and Customer Experience, ICO) in early 2026.

  • A dedicated track in the Annual Conference 2026 programme will offer a deep dive into the AI Faultline through an immersive experience, organised by Silver Conference Sponsor PA Consulting.

And we don’t stop there. We will continue to organise meetings, webinars, and regulatory roundtables on the topic of Artificial Intelligence in the sector.

IoR members are kept informed about upcoming opportunities through our member‑only newsletter and the Regulation Hub’s events calendar. If you are a member and haven’t yet joined our member platform or signed up for our newsletter, you can do so here.

Not yet part of the IoR membership network? We’ll be happy to provide you with information about your potential membership with us. Get in touch at: membership@ioregulation.org.

Learn more about NESTA’s AI Social Readiness Assessment or get in touch at collective.intelligence@nesta.org.uk.
