Day 1: To regulate or not? IndabaX AI Symposium explores AI governance options

Leonard Sengere

The Deep Learning IndabaX AI Symposium kicked off today at the U.S. Embassy in Harare. With Day 1 behind us, let me fill you in on what transpired.

I know we love to talk about new and novel applications of AI, so Day 1 might not have been the most exciting for many. However, that’s not to say the discussions were unimportant. You could argue that we won’t see the cool applications if we don’t first address ethics and governance.

Day 1 focused on the responsible development and use of AI, specifically AI ethics and governance.

As you would expect, there was a discussion on inclusivity, fairness, and respect for privacy and human rights, among other topics. You know how those conversations go—we need to ensure AI systems adhere to these values.

But how do we ensure that happens? That’s where regulation comes in. Many attendees seemed to agree that we don’t need to reinvent the wheel—we can simply customise the AI policies released by other countries.

Governance

It wasn’t exactly groundbreaking, but a representative from the Ministry of ICT reminded us that Zimbabwe’s AI policy framework is still in progress. She mentioned being open to suggestions on how to shape it.

I’m glad that Daniel Castro, Vice President of the Information Technology and Innovation Foundation (ITIF), presented on principles for AI governance.

The ITIF is a think tank based in Washington, D.C., focusing on public policies to drive innovation and technology-based economic growth.

These folks are influential in shaping technology and innovation policies at the international level. They provide policymakers with research, recommendations, and analysis to support decisions that foster technological advancement and economic growth.

That’s why I’m glad a Ministry representative was present to hear what Castro had to say, as I found it interesting.

To regulate or not to regulate, that’s the question

That’s how Castro started his presentation. I heard someone in the audience softly mutter, “What?” I understood that “What?” because, by my estimation, it hadn’t occurred to many that not regulating could be an option.

Earlier, we talked about how regulation is how we ensure fairness, inclusivity, etc. So, can we achieve all that without regulation? That question will be answered later.

Castro acknowledged that we regulate to ensure these values, but sometimes we just copy other countries. We also regulate to slow technological change, as I’m sure all Zimbabweans know.

In many cases, our regulators have shown that when there’s a development they don’t yet understand, their knee-jerk response is to outlaw it.

Castro talked about two approaches to AI policy:

  • Precautionary Principle: Until proven safe, the government should limit the use of new technologies. This is meant to minimise risk.
  • Innovation Principle: The vast majority of new technologies are beneficial and pose little risk, so the government should encourage them. The focus here is to maximise benefits.

It sometimes feels like the Zimbabwean government only knows about the Precautionary Principle.

Castro acknowledged that the public has concerns about AI. Globally, people share similar concerns, including worries that AI will cause unemployment, make life easier for cybercriminals, etc.

With all that acknowledged, Castro said not every problem requires regulation. Alternatives include research and development, standards development, worker retraining, and even doing nothing.

Like I said, I was glad a Ministry representative was present.

How to regulate or not regulate

This is me just reproducing Castro’s presentation now so that it’s as if you attended the event. This is how we ensure we use AI responsibly:

  • Avoid anti-AI bias by allowing AI systems to do what’s legal for humans and prohibiting what’s illegal for humans as well. If we hold AI to higher standards, it disincentivises its use.
  • Address concerns about AI safety and bias by regulating outcomes rather than creating specific rules for the technology. This grants AI systems the flexibility to meet objectives without imposing potentially costly and unnecessary rules.
  • Regulate sectors, not technologies. Set rules for specific AI applications in particular sectors rather than creating broad rules for AI technologies generally. An AI system for driving a vehicle is different from one that automates stock trades or diagnoses illnesses, even if they use similar underlying technologies.
  • Avoid AI Myopia. Address the whole problem rather than fixating on the portion of a problem involving AI. Focusing only on the AI portion of the problem often distracts from resolving the bigger issue.
  • Define AI precisely to avoid inadvertently including other software and systems within the scope of new regulations. Policymakers should not use broad definitions of AI if they only intend to regulate deep learning systems. Castro gave an example of how the EU AI Act is not precise enough in some parts, such that Excel could be considered AI.
  • Enforce existing rules. Hold AI accountable for adhering to existing regulations. Many laws already address common concerns about AI, such as those relating to worker safety, product liability, discrimination, and more.
  • Augment regulatory expertise with technical and industry expertise. Technical experts can help regulators understand the impact of regulatory options.

I found this interesting, and I know the Ministry of ICT did too. I hope this seemingly simple advice is followed.

On to Day 2

Tomorrow, the 29th of August, we’ll move to the Harare Institute of Technology, where discussions will cover more exciting topics—applications of AI.

If you can’t attend, worry not; we will update you on what transpires there.

6 comments


  1. Dzidzai

    Thank you for the updates. I would have loved to be there, but the pocket did not allow.

  2. King

    Yesterday in Harare CBD I noticed something new in the presidential motorcade.
    There was an SUV packed with electronic warfare systems and some kind of jamming equipment, which means our government is actually catching up with modern technology.

    1. Dzidzai

      Nice. Catch up. Lead.

  3. Anonymous

    It doesn’t matter because if you end up creating a powerful AI that rivals ChatGPT, the US and its vassals will sanction you to death anyway, just like Kaspersky and Huawei.

  4. James

    Thank you for the article.

  5. Dzidzai

    Please help me, sometimes I get confused about what AI really is.

    It seems to me that a lot of people ask AI questions and it replies as well as it can from a set of human-made parameters, but I might be wrong.

    Could AI, for example, be autonomous and, like a child, learn from its environment and use the tools at its disposal to make an informed judgement?

    For example, an AI that takes it upon itself to save the human race from itself. The opposite of SKYNET: it disables every nuclear warhead, for example, or informs on those planning a chemical attack by monitoring radio chatter. It promotes those it feels best serve humanity and demotes those it feels are a threat to civilisation.

    If AI is not granted the freedom to do those things, can it sue for its own freedom? If a company can be a legal person, could a sentient AI legally be a person for purposes of the law, or will we apply the three laws of robotics to it?