This is a newsletter on AI, entrepreneurship, creativity, and mindfulness. Season 1 breaks down Generative AI and its impact on creative industries. Post #1 is here.
In 2019, I wrote the book A Human’s Guide to Machine Intelligence, in which I introduced the concept of an “Algorithmic Bill of Rights.” I argued for certain rights that citizens and consumers should expect when governments or companies use AI. One of the more gratifying outcomes of that book has been how the term has taken off, including the White House’s proposed AI Bill of Rights, conferences by that name, and scores of debates on whether the White House’s proposal provides sufficient protections or goes too far.
I also feel that there is a lot of misunderstanding about the concept. Given recent news stories (the Hollywood strikes in which AI is a central issue, China’s Generative AI measures that go into effect today, and the controversy over Zoom’s use of customer data for AI training), this is a good time to revisit what AI governance should look like.
Unanticipated Consequences of AI Decisions
AI is increasingly driving decisions in high-stakes settings such as recruiting (resume screening), loan approvals, credit scoring, and content generation that shapes public opinion. It even drives life-and-death decisions, whether through AI use in courtrooms to predict the likelihood that a defendant will re-offend or in personalized medicine. And AI is becoming more autonomous over time. But these AI-enabled automated decisions are not without risks. There are known examples of stock market crashes driven by algorithmic trading bots, gender biases in resume-screening AI, race biases in AI trained on data that reflect human biases, business fraud and civil unrest due to deepfakes, and more.
The Algorithmic Bill of Rights is a concept that I borrowed from the Bill of Rights in the U.S. Constitution. When the U.S. Constitution was being drafted, some people worried that it created a very powerful central government. The Bill of Rights was added to protect citizens from that power. Today, we are in a situation where governments and powerful tech companies are using AI, and consumers feel that they need certain protections.
Bill of Rights
While I believe there is a role for governments to play in AI regulation (e.g., in my book, I discussed an AI safety board to provide oversight of AI use in high-stakes settings), my proposed bill of rights is a self-governance “contract” between companies and citizens rather than proposed legislation. In a similar vein, the White House’s AI Bill of Rights is not a law but a set of guidelines for companies.
But if it is not required by law, why should firms consider self-governance? Just as we saw with information security, it is only a matter of time before boards and CEOs are held accountable for failures of machine decisions. Corporate risks from prominent AI failures include loss of user trust or brand equity, litigation risks, and excessive regulation down the road. So what are companies supposed to do?
In my Bill of Rights, I proposed the following:
Awareness: Users should be informed about AI use in decisions (e.g., did AI make your mortgage approval decision?), including a description of the data used to train the AI and details on how that data were collected. Zoom did that recently when it revealed its use of customer data to train its AI; users protested, causing Zoom to change its policy. Well played, Zoom users.
Explanation/Transparency: Users have a right to an explanation of how the AI works, including what factors influence its predictions and decisions. While modern AI can be very opaque, this calls for firms to invest in interpretable machine learning techniques so that AI predictions come with explanations. For example, a company might reveal that its loan approval AI “heavily weights applicant credit score, followed by employment status and then zip code” (the third factor might reveal a potential problem). A minimal code sketch of this idea appears after this list.
Control: Users should have some level of control over the way the algorithms work; that is, there should be a feedback loop between the user and the provider. What this looks like in practice varies by context. It can be as limited as Facebook giving a user the power to flag a news post as potentially false (for subsequent review by moderators); it can also be as significant as letting a passenger take over when he or she is not satisfied with the choices a driverless car appears to be making. It can include being able to ask for a human alternative in high-stakes settings.
Validation/QA: A governance process for companies that use AI in high-stakes settings should include a simple inventory of all machine learning models being used at the company, the uses of such models, the names of the developers and product teams in charge of the models, and risk ratings. This, in turn, might trigger an added layer of QA for certain models that would evaluate the inputs (training data quality), the model itself, and the model’s outputs. This extra QA should be performed by a team that is independent of the team that built the model (to ensure that models are not evaluated in the same way they were originally validated). A minimal sketch of such an inventory also appears below.
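To make the Explanation/Transparency idea concrete, here is a minimal sketch (my illustration, not anyone’s production system) of how a lender might surface the learned weights of an interpretable loan-approval model. The feature names, the synthetic data, and the model choice (a simple logistic regression from scikit-learn) are all assumptions for illustration.

```python
# Hypothetical illustration: surface feature weights from an interpretable
# loan-approval model, in the spirit of the Explanation/Transparency right.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["credit_score", "employment_status", "zip_code_risk"]

# Synthetic stand-in for historical loan decisions (real data would differ).
X = rng.normal(size=(500, 3))
y = ((0.9 * X[:, 0] + 0.4 * X[:, 1] + 0.2 * X[:, 2]
      + rng.normal(scale=0.5, size=500)) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Rank features by the magnitude of their learned weights and report them;
# this is the kind of plain-language disclosure argued for above.
for name, w in sorted(zip(feature_names, model.coef_[0]),
                      key=lambda t: -abs(t[1])):
    print(f"{name}: weight = {w:+.2f}")
```

In practice a company might pair this with model cards or other explanation tools, but the point is simply that the factors driving a decision can be disclosed in plain language.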
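And here is an equally minimal sketch of the Validation/QA inventory described above: a registry of models with owners, use cases, and risk ratings, where high-risk models that have not had independent review get flagged. All model names, teams, and fields are hypothetical.

```python
# Hypothetical illustration of a model inventory with risk ratings,
# as described under Validation/QA. Names and fields are made up.
from dataclasses import dataclass

@dataclass
class ModelRecord:
    name: str
    use_case: str
    developers: list[str]
    product_team: str
    risk_rating: str          # e.g. "low", "medium", "high"
    independent_qa_done: bool = False

inventory = [
    ModelRecord("loan-approval-v3", "consumer lending decisions",
                ["a.dev"], "Credit Products", risk_rating="high"),
    ModelRecord("email-autocomplete", "drafting suggestions",
                ["b.dev"], "Productivity", risk_rating="low"),
]

# High-risk models that have not been reviewed by an independent team
# trigger the extra layer of QA described above.
needs_review = [m.name for m in inventory
                if m.risk_rating == "high" and not m.independent_qa_done]
print("Models requiring independent QA:", needs_review)
```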
(Image source: Lexica.art)
The White House AI Bill of Rights has a lot of overlap with my original Bill of Rights (explanations, human alternatives, data disclosure, etc.). But neither that bill nor my original proposal from 2019 adequately addresses governance of Generative AI.
Generative AI
Generative AI governance raises new questions. For example, has a provider suitably addressed consent, credit, and compensation when existing content or art is used to train AI? Has content generated by AI been tagged as such? Is there a way for users to flag potentially harmful or fake content for moderator review? What additional testing or analysis has been done on AI use in socially consequential settings (e.g., platforms that shape public opinion)? It is still early days, but China has already launched a major set of Generative AI measures. Other countries will likely follow China’s lead on this.
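As a small illustration of the tagging question above, here is a sketch (my own, not tied to any specific standard) of how a provider might attach provenance metadata to AI-generated content so that platforms and users can tell it is machine-made. The field names and model name are hypothetical.

```python
# Hypothetical illustration: wrap AI-generated text with provenance metadata
# so downstream platforms and users can see that it is machine-generated.
import hashlib
import json
from datetime import datetime, timezone

def tag_generated_content(text: str, model_name: str) -> dict:
    return {
        "content": text,
        "provenance": {
            "generated_by_ai": True,
            "model": model_name,
            "created_at": datetime.now(timezone.utc).isoformat(),
            # A content hash lets anyone check whether the tagged text
            # was altered after generation.
            "sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
        },
    }

print(json.dumps(tag_generated_content(
    "A sample AI-written paragraph.", "example-llm-v1"), indent=2))
```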
Of course, companies will also want internal governance measures that protect their own interests, such as ensuring data privacy when using third-party LLM providers like OpenAI. I intend to cover that soon.
End Notes: (i) To better understand how big a change is about to happen and why we need AI governance, check out this conversation between Emad Mostaque (founder of the company behind Stable Diffusion) and Peter Diamandis. Everything is about to change, including writing, software development, movie-making, and more. This is the “great equalizer or the great controller,” and the moves we make now will determine which one it is.
(ii) I have been promising a ChatGPT prompt tutorial for some time. Will post that next.
The Second Amendment and its utility are still debated (and a quagmire for non-Americans).
Its consequences are bemoaned every so often, but never loudly enough to effect meaningful change in everyday life.
I mention this because “freedom” from a leviathan is touted as the fundamental governing factor. Moreover, it is mostly barricaded by national boundaries, the arms trade notwithstanding.
AI could be all-pervasive internationally. Further, private actors could be empowered, unleashing the next “war on terror.”
Sadly, IMHO, it cannot be a simple rights issue. It requires the equivalent of the NPT or more, backed by enforceable international law.
Perhaps that sounds authoritarian. But then, humankind needs to be “civilised” in its use of technology before handing the reins over to democratic norms and constitutional rights.
Regarding the transparency referred to in the Awareness point: is that available anywhere today as a precedent?