The Federal government recognises that ‘Artificial Intelligence (AI) has the potential to provide social, economic and environmental benefits. For Australia to realise these benefits, it’s important for citizens to have trust in how AI is being designed, developed and used by business and government.’
To seek public input, CSIRO’s Data61 was engaged to draft Artificial Intelligence: Australia’s Ethics Framework – A Discussion Paper.
Data Governance Australia participated in the Future AI Forum hosted by KPMG, which comprised a cross-section of interested parties and provided a joint submission to this Discussion Paper – see here.
As with most innovations, AI has the potential to deliver tremendous benefits, as well as potential risks and challenges. We should all be excited by the possibilities it holds for areas as diverse as health, education, security, social services, finance, agriculture and transport to drive job growth, productivity and a better quality of life for every Australian. Investment in AI will likely see job creation and breakthroughs that lead to new solutions for some of the largest and most intractable problems we face.
The Future AI Forum focussed on two key questions: first, what organisations developing and implementing AI can do, in the context of standards and regulations; and second, what they should do, in the context of their social licence to operate (assuming an inevitable lag between development and legislation).
Some of the themes worth considering:
- The challenge of putting principles into practice
- Organisations’ need for proactive and transparent strategies for remedy
- If ethics is to be an integral part of AI, broader representation is needed to ensure national values are genuinely reflected
- AI should deliver shared prosperity beyond net benefit
- How regulation can adequately reflect values
- Victims should not bear the sole onus of contesting AI
- Education will be key to building public trust in, and engagement with, AI (i.e. increased data, digital and privacy literacy)
- We need a proactive plan to re-skill workers displaced by AI
This Federal government initiative coincides with Australia recently signing on to a new set of OECD global principles for developing ethical AI together with over 40 other countries.
‘The Recommendation aims to foster innovation and trust in AI by promoting the responsible stewardship of trustworthy AI while ensuring respect for human rights and democratic values.
Complementing existing OECD standards in areas such as privacy, digital security risk management, and responsible business conduct, the Recommendation focuses on AI-specific issues and sets a standard that is implementable and sufficiently flexible to stand the test of time in this rapidly evolving field.
The Recommendation identifies five complementary values-based principles for the responsible stewardship of trustworthy AI and calls on AI actors to promote and implement them:
- inclusive growth, sustainable development and well-being;
- human-centered values and fairness;
- transparency and explainability;
- robustness, security and safety; and
- accountability.
In addition to and consistent with these values-based principles, the Recommendation also provides five recommendations to policy-makers pertaining to national policies and international co-operation for trustworthy AI, namely:
- investing in AI research and development;
- fostering a digital ecosystem for AI;
- shaping an enabling policy environment for AI;
- building human capacity and preparing for labour market transformation; and
- international co-operation for trustworthy AI.’
While this global discussion on ethics and AI is to be applauded, it raises the question of why ethics, and data more broadly, have not to date been more central to building data governance frameworks. Organisations need to start having these discussions without delay – ‘just because we can doesn’t mean we should’.
And out of these underpinning ethical questions, strong data governance cultures, frameworks, policies and procedures can be built.