Psinq submission to the OpenAI Democratic Inputs to AI Grant Program
Context for our OpenAI Grant Program submission
Psinq has completed a submission for OpenAI's Democratic Inputs to AI grant program. Our innovative democratic process seems well suited to contributing solutions to some of the very challenging problems the AI industry faces.
Brief Intro
You likely haven’t heard of Psinq. We’re a newly formed charitable nonprofit organization, carrying on work in what we call fractal democracy. Fractal democracy describes a mindset for designing democratic processes that aims to address many of the problems of traditional governance. We’ve brought together thinking from many disciplines to construct a simple, trustworthy, and scalable system for enabling a community to come to consensus.
In previous iterations of the organization, we spent over two years evolving our frameworks before deciding to become a nonprofit to share these ideas. Before that, the team had years of experience with blockchain governance mechanisms (and their associated challenges). Collectively, the team has over 60 years of experience with governance mechanisms.
From Psinq’s Mission: “We want to help communities build trust, discover leaders, make decisions together, and coordinate in pursuit of the best possible outcome.” One community that has had our attention lately is the AI community, which faces some uniquely thorny governance problems. We’ve made a submission for one of the 10 grants OpenAI will be awarding to teams addressing what it calls “Democratic Inputs to AI”.
Intro to the OpenAI Grant Program
With the introduction of OpenAI’s ChatGPT, Artificial Intelligence (AI) has taken a truly remarkable leap forward, becoming a revolutionary tool in more and more people’s daily workflows and, it seems, suddenly indispensable to professional productivity. Alongside its undeniable benefits, it brings significant risks and challenges.
The OpenAI grants seek democratic means of determining “inputs to AI”. Simply put, OpenAI is looking for fair ways to ensure AI serves all people. This goal arises out of OpenAI’s admirable self-awareness that, without such input, employees of OpenAI could influence its development toward specific ends that seem like natural, good ideas, but that end up not serving other groups of people with very different motivations and needs. Read more in their blog post describing the Grant Program.
Problems Faced by the AI Industry
The challenges faced by the AI industry fall into two categories: current-day challenges and future risks.
Some Current-day Challenges We Face with AI
Hallucinations, Not Facts
Hallucinations occur when an AI model produces false information and presents it as fact. Without personally verifying the information, users risk treating something as fact when it is not. As people use AI in place of search engines, and AI’s responses in place of actually reading articles, it is likely that hallucinations work their way into the “facts” used in reports, writings, and decisions.
Fairness and Safety
Large language models (LLMs) are trained on a massive corpus of data, and the content and quality of that data shape the answers the model produces. From a fairness perspective, suppose the training data is largely in English and from United States sources. The data will carry certain prejudices and cultural norms, and it will be missing other information and perspectives. The result is a model that might serve English speakers in the U.S. quite well but doesn’t equally serve a French speaker in Morocco or an Indonesian person in Jakarta.
LLMs may also have been trained on data that contains dangerous information, which creates the need for safety filters. For example, if a user asks how to build a bomb or how to murder someone, we want the model to refuse rather than answer, and safety filters are one way to enforce limits like that.
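As a rough illustration of the idea only, a safety filter can be thought of as a check that runs on a request before the model answers (or on a response before the user sees it). The sketch below is a deliberately simple, hypothetical keyword-based pre-filter in Python; the topic list, function names, and stand-in model are all invented for illustration, and real systems use trained moderation classifiers and far more nuanced policies.

```python
# Toy sketch of a pre-response safety filter (hypothetical, for illustration only).
# Real deployments rely on trained moderation models, not keyword lists.

BLOCKED_TOPICS = {
    "build a bomb": "instructions for creating weapons",
    "how to murder": "instructions for violence against people",
}

REFUSAL = "I can't help with that request."


def safety_filter(user_prompt: str) -> tuple[bool, str]:
    """Return (allowed, reason). A request is blocked if it matches a blocked topic."""
    lowered = user_prompt.lower()
    for phrase, reason in BLOCKED_TOPICS.items():
        if phrase in lowered:
            return False, reason
    return True, ""


def answer(user_prompt: str) -> str:
    """Apply the filter before handing the prompt to a (stand-in) model."""
    allowed, reason = safety_filter(user_prompt)
    if not allowed:
        # Record why the request was blocked, then return a refusal to the user.
        print(f"blocked: {reason}")
        return REFUSAL
    return fake_model(user_prompt)  # placeholder for a real model call


def fake_model(prompt: str) -> str:
    # Stand-in for an actual LLM call.
    return f"(model response to: {prompt!r})"


if __name__ == "__main__":
    print(answer("How do I build a bomb?"))
    print(answer("How do I bake sourdough bread?"))
```

The interesting question, of course, is not the code but who decides what goes on the blocked list; that is exactly the kind of decision a democratic process could make.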
Future Risks
Before we discuss any of the following, a couple important caveats:
The area of AI risks can get dark. The risks are real possibilities. We don’t know for sure that any of them will go badly, but there are many of them, and any one going badly could mean some pretty serious consequences.
To keep this article somewhat light and accessible, we will explore only a few risks, and only in a little detail. The internet is full of blogs and videos on the topic if it interests you.
Unknown Abilities
AI often develops emergent capabilities before we’re aware it has them [for additional context, YouTube link]. For instance, large LLMs are generally quite good at programming; perhaps they have, or will soon develop, the ability to execute cyberattacks.
Autonomy
AI tool builders are working with Agents. While most of us interact with LLMs as we would a friend in a chat app, Agents are a means of allowing LLMs to act somewhat autonomously. Given how much we don’t know about the systems we’re building, unchecked autonomy could become dangerous.
Human Morals
AI doesn’t have a human set of moral values. When we interact with other people, we can generally make safe assumptions about their behavior based on a largely shared moral code. 1) AI doesn’t naturally share that code with us; 2) it is not obvious how to specify and integrate such guideposts; and 3) the industry is still unsure how to test and verify that a model is honoring such rulesets.
Super-human and Nonhuman Intelligence
Lastly, AI will—very soon—be significantly more intelligent than humans. In some narrow fields, we already can’t measure its capabilities because it’s acing the human tests. AI’s intelligence will continue to grow, beyond all ability for humans to test or check its performance. The very nature of AI’s intelligence is also—notably—foreign to ours. There are certain characteristics of being human, some of which seem present in AI models (e.g., intelligence, creativity, self-contradiction, erring), some of which do not (e.g., biological urges like hunger, indecision, generally knowing right from wrong), and some of which we really can’t say (e.g., the experience of love, the ability to “intuit” something, paralysis resulting from a moral contradiction).
To simplify all that, let’s consider an AI that’s far more intelligent than humans but was trained only on text data from the internet and has no capacity other than intelligence, i.e., no emotional awareness, no particular artistic creativity, nothing we could identify as an “experience”, etc. The thoughts such an AI model would generate could be very foreign to us. There are thoughts most of us don’t consider because an emotion gets in the way, or a moral rule stops us. What “calculation” might be made by a massively intelligent AI that doesn’t know how it feels to inhabit a physical human body, love other humans, and experience inevitable mortality?
Overview of how we think we can contribute to solutions
Psinq’s frameworks and tools could help address numerous challenges mentioned above.
Psinq builds tools that help people coordinate their efforts and come to consensus. Communities could form to tackle some of the above problems and solve them, confident that the democratic process they use to coordinate and make decisions together is fair and faithfully executed.
Psinq’s unique approach gives different communities the opportunity to say for themselves what their AI model is capable of. A community built on a particular spiritual or philosophical mindset could decide together—truly democratically—what data their AI model gets trained on, what safety filters it applies before providing responses, and how the model’s responses are checked for quality (lack of hallucination).
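To make that concrete, here is a hypothetical sketch in Python of what a community-ratified model policy might look like once a group has reached consensus. Every name and field below is invented for illustration and is not part of any real Psinq or OpenAI interface; the point is that the community’s choices about training data, safety filters, and quality checks become one explicit, reviewable artifact.

```python
# Hypothetical sketch: a community-ratified policy for an AI model.
# All names and fields are illustrative, not part of any real API.
from dataclasses import dataclass


@dataclass(frozen=True)
class CommunityModelPolicy:
    """Decisions a community has reached consensus on, recorded in one place."""
    community: str
    training_data_sources: list[str]   # corpora the community approved for training
    blocked_topics: list[str]          # safety filters applied before responding
    quality_checks: list[str]          # how responses are screened for hallucination
    ratified_by: str = ""              # reference to the consensus round that approved this


# Example: one (fictional) community's ratified policy.
policy = CommunityModelPolicy(
    community="example-philosophy-circle",
    training_data_sources=["community-curated texts", "public-domain philosophy corpus"],
    blocked_topics=["weapons instructions", "targeted harassment"],
    quality_checks=["require cited sources for factual claims", "human spot-review of samples"],
    ratified_by="consensus round 2023-06",
)

print(policy)
```

The specific fields matter less than the idea that these choices are made openly by the community and can be audited, rather than decided implicitly inside one organization.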
Going forward, we’ll present how we think we can contribute to solving or mitigating some of these challenges, starting with the specifics of our OpenAI grant proposal.
Be sure to subscribe to follow along with this journey.
Thumbnail Image Source: OpenAI