Around 19% of IT professionals faced an ethical challenge in their work in 2023, according to a BCS survey.
Every UK technologist working in a high-stakes AI role should be licensed and meet independent ethical standards, according to BCS, the professional body for computing.
A public register of AI professionals, held to an ethical code of conduct, will make an ‘AI version’ of the Post Office Horizon scandal less likely, said BCS, the Chartered Institute for IT.
In its new research report, BCS also recommends strong and safe whistleblowing channels to allow tech experts to call out unethical management.
CEOs and directors making decisions on the resourcing and use of AI should share in the accountability. That could be achieved by requiring large organisations to publish their policies on the ethical use of tech, BCS suggested.
BCS said the measures would rebuild public trust and help the UK set a world-class standard in ethical AI, following the AI safety summit at Bletchley in autumn last year.
The paper ‘Living with AI and emerging technologies: Meeting ethical challenges with professional standards’ led by BCS’ Ethics Specialist Group recommends that:
- Every technologist working in a high-stakes AI role should be a registered professional meeting independent standards of ethical practice, accountability, and competence.
- Government, industry and professional bodies should support and develop these standards together to build public confidence and create the expectation of good practice.
- UK organisations should be required to publish their policies on the ethical use of AI in any relevant systems – and those expectations should extend to leaders who are not technical specialists, including CEOs and governing boards.
- AI professionals should have clear and visible routes for ‘whistleblowing’ if they feel they are being asked to act unethically or to deploy AI in a way that harms colleagues, customers or society.
- The UK government should aim to take the lead, supporting UK organisations in setting world-leading ethical standards.
Rashik Parmar MBE, chief executive of BCS, The Chartered Institute for IT, said: “We have a register of doctors who can be struck off. AI professionals already have a big role in our life chances, so why shouldn’t they be licensed and registered too?
“CEOs and leadership teams, who are often non-technical but still making big decisions about tech, also need to be held accountable for using AI ethically. If this isn’t happening, technologists need to have confidence in the whistleblowing channels available within their organisations to call them out – for example, if they are asked to use AI in ways that discriminate against a minority group.
“This is even more important in the wake of the Post Office Horizon IT scandal, where computer-generated evidence was used by non-IT specialists to prosecute sub-postmasters, with tragic results.
“By setting high standards, the UK can lead the way in responsible computing and be an example for the world. Many people are wrongly convinced that AI will turn out like The Terminator, rather than being a trusted guide and friend – so we need to build public confidence in its incredible potential.”
Georgina Halford-Hall, CEO of WhistleblowersUK and chair of strategy and policy for the All Party Parliamentary Group (APPG) for Whistleblowing, said: “AI and the rapid advances being made in the technology sector have left whistleblowers open to abuse, bullying, harassment and victimisation. We fully support BCS as it looks to build a transparent culture around ethical practice which gives tech professionals the confidence to challenge when those standards are not met.”
Source: DIGIT