Data Security and Artificial Intelligence
All organisations that have access to NHS patient data and systems must use the Data Security and Protection Toolkit to publish an assessment against the National Data Guardian’s 10 data security standards.
We have completed a Data Security and Protection Toolkit self-assessment to demonstrate we are practising good data security and that personal information is handled correctly.
NHS Secure Email Standard
We have achieved the NHS Secure Email Standard.
Emails sent to and from health and social care organisations must meet the secure email standard (DCB1596) so that everyone can be sure that sensitive and confidential information is kept secure.
Our approach to AI
Artificial intelligence, or AI, refers to computer systems that can carry out tasks which usually need human thinking, such as recognising patterns, understanding language, or offering suggestions based on information they’ve been given.
You might come across AI in lots of everyday places, often without realising it. It can appear in a range of tools like Microsoft Copilot, email filters, online search engines, or software that helps analyse data or manage diaries. AI also shows up in things like mobile phone assistants, sat‑nav routing, or even systems that recommend films or music.
We want staff to use AI with confidence, curiosity, and care: safely, legally, and in line with our values.
We are using artificial intelligence (AI) as part of our strategic priority of having a culture of innovation. Please read our Innovation Protocol for further information. We want to embrace AI and the potential it has to improve the way we work. AI offers us wide opportunities, but we know it also brings risks.
Core principles of using AI
To mitigate the downsides of AI as far as possible, we have a set of core principles for its use.
AI must be used in line with our existing policies, protocols, and legal duties
This protocol is not an exhaustive guide to AI use; our existing policy and protocol framework, and any legal duties, must still be adhered to.
Special consideration should be given to our information governance policies, which ensure that we keep data safe. Using unapproved tools would mean sharing company information and data with systems over which we have no control, in breach of our policies and the Data Protection Act.
We have a duty under the Equality Act 2010 to prevent discrimination and bias. AI must be used in an ethical manner, and we should be mindful of potential bias at all times.
People using AI should be mindful of our policies, protocols, and legal duties at all times, and should speak out and raise any concerns about its use so that they can be addressed. Our existing procedures should be followed; for example, a data breach must be reported immediately.
Breaches of our policies, protocols, or legal duties through the use of AI could result in disciplinary action being taken in line with our disciplinary policy.
Only use approved AI tools
You should only use AI tools that have been explicitly approved for work purposes. If you are not sure whether a system has been approved, please ask before you use it. AI is developing rapidly, but at the time of writing Microsoft Copilot is the main approved AI tool, agreed for use by Digital Services and Information Governance.
Using unapproved tools would be a breach of our policies and data protection law, as it means sharing information outside our network with systems over which we have no control. Unapproved tools may also have flaws, such as bias or other issues we are unaware of. In line with other systems on our network, our approved AI tools allow for audit and assurance to make sure they are used appropriately and function properly.
External software companies are looking to deploy AI tools into their systems. If you are responsible for a system and this happens, you should check whether the tool should be used. For example, if the supplier of our Estates system CAFM said they were adding AI features, that would need to be flagged and checked for compliance with our policies.
The person using AI is responsible for how they use the outputs from AI
Individuals who use AI are responsible for their own usage. AI is an assistant, and the user is responsible for what they do with what it generates. AI can and does make mistakes, so its output needs to be checked carefully. If you use AI output that contains a mistake, the mistake is your responsibility.
AI should be used in line with our values
Our values are important to us and we use them to guide our everyday work. Our values mean we are caring and compassionate, respectful, and honest and transparent.
Here are some examples of how our values might be used in practice:
Being caring and compassionate means we never ask AI to carry out a task that should be done by a person, such as making decisions that require human judgement and empathy.
Being respectful of others means we are aware that AI can have bias, and we take care not to carry any bias into our work.
Being honest and transparent means we tell people where we have used AI, including through this policy, and we will never use it in a way that is not honest, for example by generating images that could be deceptive.