Artificial Intelligence (AI) is on the rise in everything from science and industry to government and finance – but too much of how it works is entirely opaque, leaving individuals unaware of whether the decisions made were accurate, fair, or even about them at all, claims a pressure group.
To ensure AI is only ever used responsibly – especially at the government level – a new set of universal principles needs to be adopted: one that maximises AI's benefits, minimises its risks, and ensures the protection of human rights.
Such guidelines should be incorporated into ethical standards, adopted in national law and international agreements, and built into the design of systems, and the primary responsibility for AI systems must reside with those institutions that fund, develop, and deploy these systems.
The proposals were unveiled in Brussels last week by The Public Voice, which describes itself as a coalition established in 1996 by the Electronic Privacy Information Center (EPIC) to promote public participation in decisions concerning the future of the Internet.
They include such ideas as:
Right to Transparency: All individuals have the right to know the basis of an AI decision that concerns them. This includes access to the factors, the logic, and the techniques that produced the outcome.
Right to Human Determination: All individuals have the right to a final determination made by a person.
Identification Obligation: The institution responsible for an AI system must be made known to the public.
Fairness Obligation: Institutions must ensure that AI systems do not reflect unfair bias or make impermissible discriminatory decisions.
Assessment and Accountability Obligation: An AI system should be deployed only after an adequate evaluation of its purpose and objectives, its benefits, and its risks. Institutions must be responsible for decisions made by an AI system.
Accuracy, Reliability, and Validity Obligations: Institutions must ensure the accuracy, reliability, and validity of decisions.
Data Quality Obligation: Institutions must establish data provenance, and assure the quality and relevance of the data input into algorithms.
Public Safety Obligation: Institutions must assess the public safety risks that arise from the deployment of AI systems that direct or control physical devices, and implement safety controls.
Cybersecurity Obligation: Institutions must secure AI systems against cybersecurity threats.
Prohibition on Secret Profiling: No institution shall establish or maintain a secret profiling system.
Prohibition on Unitary Scoring: No national government shall establish or maintain a general-purpose score on its citizens or residents.
Termination Obligation: An institution that has established an AI system has an affirmative obligation to terminate the system if human control of the system is no longer possible.
“By investing in AI systems that strive to meet the [universal] principles, NSF can promote the development of systems that are accurate, transparent, and accountable from the outset… Ethically developed, implemented, and maintained AI systems can and should cost more than systems that are not, and therefore merit investment and research.”