The European Commission issued a code of ethics for artificial intelligence on April 8 to boost trust in the industry. It also announced the start of a pilot phase for the ethics code, inviting businesses, research institutes and government agencies to test it.
Drafted by the EU's High-Level Expert Group on Artificial Intelligence, the guidelines set out seven key requirements for "trustworthy artificial intelligence", namely:
- Human agency and oversight: artificial intelligence systems should enable a fair society by supporting human agency and fundamental rights, rather than reducing, restricting or misguiding human autonomy.
- Robustness and safety: trustworthy AI requires algorithms that are secure, reliable and robust enough to deal with errors or inconsistencies during all phases of the AI system's life cycle.
- Privacy and data governance: citizens should have full control over their own data, and data concerning them should not be used to harm or discriminate against them.
- Transparency: the traceability of AI systems should be ensured.
- Diversity, non-discrimination and fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.
- Societal and environmental well-being: artificial intelligence systems should be used to foster positive social change, sustainability and ecological responsibility.
- Accountability: mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.
The EU defines “artificial intelligence” as “systems that display intelligent behaviour” that can analyse environments and exercise a degree of autonomy to perform tasks.
Responding to the code, Andrus Ansip, Vice-President of the European Commission for the Digital Single Market, said: "AI that meets ethical standards will be a win-win. It can become a competitive advantage for Europe, and Europe can be a trusted, human-centric leader in AI."
The EU's move has nevertheless drawn some criticism.
Matthias Spielkamp, co-founder of the nonprofit AlgorithmWatch, argues that while setting guidelines is a good idea, the definition of "trustworthy artificial intelligence" around which they revolve is unclear, as is how future regulation will be implemented. Some industry insiders also worry that an overly detailed code could make it difficult for many companies, especially small and medium-sized enterprises, to operate. In addition, Thomas Metzinger, a philosophy professor at the University of Mainz in Germany who helped draft the guidelines, criticised the EU for not banning the use of artificial intelligence to develop weapons.