OpenAI Grant Psinq Submission
Our answers and approach to the OpenAI "Democratic Inputs to AI" Grant Program
TL;DR: We are committed to sharing our work publicly, so when OpenAI openly solicited proposals for this grant program, it seemed a natural opportunity to share our ideas.
Below are the OpenAI Grant questions asked and Psinq’s answers submitted.
We look forward to your comments.
Grant Status Update: July 19, 2023
Part I
How long has your team been working in the democratic / consensus-building space?
We have been dedicated to democratic / consensus-building work for 2 years now (under the name fractally). Before that, most of us worked in blockchain; our thought leader has been in that space, focused on consensus, since about 2010.
…
Team-related questions omitted
…
What question(s) are you most interested in piloting? It can be from the list of options below or a different one that you decide to pursue. Questions should be decision relevant for developers, users and affected non-users of AI systems.
Which categories of content, if any, do you believe creators of AI models should focus on limiting or denying? What criteria should be used to determine these restrictions?
Why did you pick this question (or questions) to work on?
Every human being has a unique and valuable life experience; ideally, the democratic processes we use to answer hard questions about AI include all the richness and variety of all those lives. The world population represents approximately 8 billion different perspectives, which we can group in many different ways, e.g., culturally, financially, regionally, demographically, etc. Finding a global set of rules that honors all perspectives is an immense challenge.
We are fascinated by and driven to find how technology can provide the means of honoring *all* perspectives via decentralization. How can we empower individuals to apply their own preferred restrictions on AI or a community to define its own safety filters? And perhaps we can even bring together representatives from multiple communities to come to consensus on a minimal set of filters that need to be there for everyone’s safety but that don’t impinge on individuals’ preferences and unique perspectives. Just as with finding product-market fit, we need to learn the needs and preferences of *some* customers before we can know what’s right for *many* customers.
Psinq is a research team dedicated to solving the problems of human coordination. We have drawn out many relevant factors in effective human coordination, and we look forward to continuing to test, iterate, and spread the tools we’re developing.
AI safety and alignment represent vitally important needs that can be served by effective human coordination. We’re very excited to realize the impact of our ideas and tools on the problems humanity faces with rapidly improving AI.
Why do you think that this question (or questions) are well suited to broader public input? What do you think labs, developers or others might change as a result of input on these questions?
This question requires broad input to comprehend the manifold needs and preferences of different communities. We will start with a single community, but our process and software are built for extension to other communities. Collecting answers to critical questions such as this one from numerous communities will inform broader thinking about how to restrict AI models in general.
Without broad input from many different communities, it is too easy for a small group of people to think they know what’s best for other, underrepresented people. Our proposed process should elicit many different perspectives as well as disseminate those perspectives among our proposed first group. In real-time conversation, we hope to demonstrate to many stakeholders the varying perspectives within the community.
Our process is built to embrace the subjectivity and perspective of the individuals in a community and to ensure any community decision has included all individual perspectives. Only those rules that everyone agrees on will get bubbled up for consideration and implementation.
We believe our process applied in the context of one community will expose a set of rules that would serve that community’s use of an AI model. We further expect the process to expose a multitude of opinions that aren’t shared enough to bubble up and become part of the final answer. It will be very informative to see all the perspectives and opinions that don’t bubble up and contribute to the final outcome.
One size is unlikely to fit all in the case of AI model restrictions. The more our thinking about restrictions can be decentralized and put in the hands of the user (or their community), the more of humanity we can serve effectively and appropriately (by their standards).
Looking forward, repeating an event like our proposed event (but with other communities) will likely show that different communities can come to significantly different conclusions about how they would be served best by AI. Labs, developers, and businesses will learn a lot from seeing the diversity of communities manifested in written statements from each community that differ so much in their goals and needs. We believe it will reorient service developer efforts to focus on decentralized solutions (e.g., community-defined, user-selected filters) rather than a monolithic / global solution that can’t provide nuance and serve local needs.
Process overview: Please provide an overview of how the process that you envision building will work. Please touch on participant selection, topic overview, provision of additional context, content moderation, voting/commenting, aggregation of viewpoints, and provision of feedback to participants. Include key milestones/timelines.
Overview
We propose utilizing a modified version of the process (and software tool) we’ve invented and trialed, in a multi-week experiment where a diverse group is assembled to discuss the topical question via what we call fractal democracy: an innovative democratic means of bubbling up the consensus opinion of the group.
With many years of experience in blockchain, we’ve applied numerous disciplines (e.g., game theory, psychology, economics, etc.) to a “simple” yet importantly innovative democratic process that completely reinvents numerous aspects of traditional governance systems.
We’ve trialed predecessors of our tool with notable success and valuable learnings.
Our first community: approximately 75 of the roughly 500 members meet a few times a year to come to consensus on which members to entrust with spending a portion of the nearly $750,000 in total grants to fulfill the community’s objectives.
Our second community: about 40 people met weekly for nearly 9 months to discuss advancing its mission. The trust, coordination, and relationships that developed were remarkable.
Participant Selection
Rather than attempting to combine many disparate perspectives into some kind of aggregated opinion, we will instead ask one proof of concept community for their input. The approach can then be applied to many other communities.
We propose gathering an English-speaking, tech-savvy, and predominantly North American (timezone) cohort. We will include diversity along only 2 dimensions: race and age. Any other potential dimensions of diversity will be addressed in future experiments.
We will direct our marketing to specific groups via social media followings, newsletters, and other subscriber bases.
Participants will be randomly grouped for discussions, to ensure fairness and a mixing of perspectives. Random grouping gives no one any predictable advantage in terms of who they’ll meet and with whom they’ll need to find common ground.
Topic overview
Participants will discuss and debate responses to the topical question and, in each meeting, advance the participants who can best represent the group’s perspective.
Provision of additional context
We will have a pre-meeting each week to present additional context on the topical question as well as the democratic process participants will engage in.
Content Moderation
Our process relies on real-time interaction, so content moderation consists simply of ensuring productive participation. Participants in each video call will have the ability to eject counter-productive participants if a supermajority of the room agrees they should be ejected. This ensures speedy and fair handling of unproductive behavior without debatable censorship rules. It also maximizes the opportunity for well-intentioned participants to coordinate efforts and find consensus.
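To make the ejection rule concrete, here is a minimal Python sketch of how such a vote might be tallied. The two-thirds threshold and the convention that the participant under the vote does not cast one are our illustrative assumptions; the proposal itself specifies only "a supermajority of the room."

```python
import math

# Illustrative threshold: the proposal specifies only "a supermajority
# of the room"; two-thirds is one common choice.
SUPERMAJORITY = 2 / 3

def ejection_passes(votes_to_eject: int, room_size: int) -> bool:
    """Return True if enough of the room voted to eject a participant.

    Assumption (ours): the participant under the vote does not vote.
    """
    eligible_voters = room_size - 1
    return votes_to_eject >= math.ceil(eligible_voters * SUPERMAJORITY)

# In a room of 6, ejecting someone requires 4 of the other 5 votes.
assert ejection_passes(votes_to_eject=4, room_size=6)
assert not ejection_passes(votes_to_eject=3, room_size=6)
```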
Our process
Our process is what we call a “political playoff”.
Participants will convene on a specific day and sign in to our software to participate.
All participants will be randomly grouped into small video calls of 5 or 6 people.
Each group has about an hour to discuss the topical question and come to consensus on who among them would be best to represent the group’s views and discussion.
Everyone promoted in Round 1 is again randomly regrouped into small video calls to repeat the process at the next level.
For a group of ~500 participants, 2 rounds suffice: ~500 people form ~83 rooms of 5-6 in Round 1, the ~83 promoted representatives form ~14 rooms in Round 2, and the ~14 promoted from those rooms are the final group entrusted by the entire group to represent their perspectives (see the code sketch at the end of this section).
This final group can then work to discuss and articulate the perspectives of the larger group.
The final meeting’s video recording (or written report produced out of that meeting) will be circulated among the participants as feedback from the process.
We will repeat this process in each of the 2 following weeks to allow the community to incorporate learnings from earlier weeks.
The final report will represent the consensus of the entire community across 3 iterations and nearly 300 conversations.
Those who advanced furthest and most consistently will be featured in a leaderboard and interviewed at the end to get their feedback on the process and the topical question.
Points will be assigned based on each group’s consensus view of participants’ ideas. The resultant leaderboard functions as an incentive for each individual to develop and present their best ideas, as well as to compete with others in promoting those ideas. We believe this kind of leaderboard in such a public forum will provide added motivation for valuable and meaningful participation.
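To make the round structure and leaderboard concrete, here is a minimal Python sketch of the playoff as described above: rooms of 5-6, one representative promoted per room, two rounds for a ~500-person cohort. Room consensus is stubbed with a random choice (the real process is an hour of deliberation), and scoring one point per round advanced is our illustrative assumption, since the scoring formula is not specified above.

```python
import math
import random
from collections import defaultdict

def make_rooms(people: list[str], max_size: int = 6) -> list[list[str]]:
    """Randomly partition people into rooms of 5-6 (the sizes proposed above)."""
    random.shuffle(people)
    n_rooms = math.ceil(len(people) / max_size)
    return [people[i::n_rooms] for i in range(n_rooms)]

def run_playoff(participants: list[str], rounds: int = 2):
    """Simulate the playoff: each room promotes one representative per round.

    random.choice stands in for the room's hour-long consensus process.
    Scoring (illustrative): +1 point for each round a participant advances.
    """
    points: dict[str, int] = defaultdict(int)
    field = list(participants)
    for _ in range(rounds):
        promoted = []
        for room in make_rooms(field):
            representative = random.choice(room)  # stand-in for consensus
            promoted.append(representative)
            points[representative] += 1
        field = promoted
    return field, dict(points)

final_group, leaderboard = run_playoff([f"p{i}" for i in range(500)])
# ~500 participants -> ~84 rooms -> ~84 promoted -> 14 rooms -> 14 finalists
print(len(final_group))
```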
Why This Process
This system is specifically and carefully constructed for the following benefits:
avoids the tyranny of both the majority and minority
ensures fairness via randomization, especially over a few iterations
promotes trusted consensus builders and incentivizes finding common ground, given participants only have representation at higher levels if they find a perspective that a supermajority of their group is satisfied with advancing to the next round
is voluntary; power to represent others is derived from consent
is simple enough to easily verify faithful execution of the process, maintaining trust in the process
is credibly neutral
scales to large communities
maximizes individual influence
compensates for “rational ignorance”
cultivates trust and relationship building, which can further spur better discussion
provides real-time, direct feedback to all participants
implicitly establishes a weak identity system via repeated video interactions, which means…
avoids Sybil attacks, bots, and similar process gaming, compared to competing solutions that are purely asynchronous and anonymous
encourages accountability. Participants are on record and can be held accountable for their opinions
For more on the process and principles behind it, see the following (both included as uploaded files):
the book More Equal Animals (uploaded as a pdf)
the ƒractally white paper (available on fractally.com) that conceptualized a crypto token-based weekly meeting version of the ideas in More Equal Animals
Milestones
mid-July: learn the result of the grant application
July - Aug:
Build first 500-participant cohort
Customize software for the experiment
Prepare educational content
Sept: run 3-4 week experiment
Oct: review report, data, and videos and compile final report on the experiment
Participant selection: How do you plan on obtaining a sample of participants for your experiment? How do you think about questions of representativeness and how they might matter for your question and method? Note: OpenAI can advise on methods or resources for obtaining a sample.
Proper representation is a huge challenge for any large group of people. The challenge is that there is always an age range, gender, ethnicity, or other division that is not sufficiently represented.
At Psinq, we flip the script and decentralize instead of attempting to serve everyone with monolithic thinking. We aim to honor the most people by serving the local, regional, and cultural diversity via decentralization and local customization.
We propose starting with smaller groups that decide what is right for them. Safety filters, decisions on what to include in training sets, etc. could all be decided by individual communities.
Eventually, we may want to determine the common safety rules that should apply to all AI, but ideally those common rules would serve (and not restrict) all users. Rather than a single “diverse” group determining the rules that everyone must play by, a global ruleset could arise from the feedback from specific communities’ determinations.
We’ve also seen in our trials that not only do we aggregate opinions; participation frequently alters participant perspectives of the question at hand via the real-time exchange of ideas.
Such a process could produce a truly representative safety filter (for that community), while leaving all other communities free to customize such a filter to their own, unique needs and preferences.
We propose focusing on an English-speaking, largely North American (timezone) first cohort, where we attempt to achieve a degree of diversity only across race and age. That said, our solution is built for any community in any geography, speaking any language, and with any set of cultural norms. It is our intention to repeat this first run with other communities with different constituents to surface their unique determinations.
We will build this first cohort via a social media campaign (in English), reaching out to our existing user base, as well as partnering with and buying access to other subscriber bases, e.g., newsletters, social followings, etc. so we can target diverse races and geographies. We expect geographies to be relatively easy to achieve, given the nature of online communication, but we will be much more targeted in our outreach to subscriber bases, working with user bases where the readership’s demography is pretty well known. Lastly, we will certainly solicit whatever advice and assistance OpenAI can offer to ensure we meet our diversity target.
OpenAI is at the forefront of the AI revolution, which also means it’s at the forefront of data ingestion right now. Using our process (or a successor/customized form of it), we could explore any topic, and in fact do so, to a large degree, in parallel. We’re very excited at the prospect of combining OpenAI’s data and our process. We could begin meaningfully addressing very topical and important questions in AI that desperately need answers that are otherwise quite hard to come by. Not everyone has access to that same data spigot; we believe our organizations would uncover more collectively than separately.
Tooling: Tell us about your plan for the tooling or infrastructure you’ll use for your experiment. Will you use existing tools or build new tools?
If existing tools, please explain what features of those tools make them particularly compelling for your project. If new tools, please explain what features unavailable in existing tooling you plan to build, and what makes these features particularly compelling for your project.
We will be using a new software tool we’re developing that embodies the 3rd iteration of our process. It synthesizes learnings from approximately 2 years of experience with democratic processes and the tooling to support them. We’ll use this new tool to explore the topical question.
Our innovation is in our ground-breaking process; our software is simply an embodiment of the process. We began building this tool to simplify and automate a process we formerly ran manually with spreadsheets and Zoom conferences with breakout rooms. Our software will automate the randomized grouping of participants as well as regrouping those promoted to the next “level” and keeping track of each person’s progression through the “playoff”. No existing tool we’ve found does all this.
It is our mission to release this software to the world for use by any and all communities. To ensure its broad utility, we needed the tech to be license-free, easy for a community to use, and free of dependencies (such as paid subscriptions) that tie a community to a particular service provider. We have built the software open source and leveraged decentralized technologies like WebRTC to fulfill these objectives.
See the Process Overview question above for more details on the process and how it’s unique. Our software embodies each item discussed in that answer.
Limitations: What do you expect to be the biggest limitations of your approach? (e.g., potential for process gaming, types of questions your process would be unable to help answer)
Most notably, our process is time intensive. That said, the required time commitment leads to far more committed participation, and in our experience it has already proven to increase the quality of participation.
The process also embraces the subjectivity of people, which can lead to unproductive communities. A community’s effectiveness depends heavily on the culture and mindset of its members. Like all other communities in the world, communities using our process can range from wildly productive to completely scattered and unproductive. For the proposed event, we’ll use a brief entrance survey to raise the threshold for participation a bit and increase the chance of productive participation.
Our process, rather than trying to correct for cultural variation in communities, is designed specifically to ensure a misaligned community can’t come to a compromised, unrepresentative consensus. Rather, we prefer that such a community divide or re-form under a new mission that brings increased alignment. As with people rowing a boat, there must be sufficient shared goals (a direction to row) for the community to move in a particular direction. Without sufficient alignment, much energy is wasted, and the community’s net progress will not be significant.
We have built our process from the ground up to minimize the potential of being gamed. Between real-time video interaction, consensus at every level, randomization of participant grouping in meetings, and repeated meetings, we feel we have ensured minimal gaming.
That said, 2 characteristics of humans in groups can influence the discussions:
Informational campaigns prior to the event could prime people toward particular mindsets rather than a from-scratch, reasoned debate.
Rational ignorance is another information problem: it is rational for busy people to devote their limited time to personally relevant study and knowledge, leaving them non-experts in specific topics. Obviously, we would prefer informed discussions of the topical question. Groups will largely rely on the knowledge and understanding of their own members as the foundation for discussion. In this particular event, this will be mitigated by the invitation process, which is very likely to filter for people who have used AI to some degree and are somewhat aware of the issues involved.
Lastly, as proposed (short duration, few sessions), the process is likely to experience unproductive elements, e.g., trolls, vandals, etc. Typically, our process would involve an invite-only, peer-to-peer, rate-limited invite system that ensures integrity and accountability around each invite and keeps the community productive over time. For the proposed event, we’ll need a different, immediately applicable system; we’ll build into the software the ability of small groups to eject people when the group is in agreement.
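For illustration, here is a minimal Python sketch of what such a rate-limited, peer-to-peer invite system might look like. The weekly limit of 2 invites and the data shapes are our assumptions; the system is only described above in outline.

```python
from collections import defaultdict

# Illustrative rate limit; the typical system is described only in outline.
INVITES_PER_MEMBER_PER_WEEK = 2

class InviteRegistry:
    """Peer-to-peer invites, rate-limited, with accountability per invite.

    Each invite is recorded against its issuer, so responsibility for a
    disruptive newcomer traces back to whoever invited them.
    """

    def __init__(self) -> None:
        self.invited_by: dict[str, str] = {}  # newcomer -> issuer
        self.issued_this_week: dict[str, int] = defaultdict(int)

    def issue_invite(self, issuer: str, newcomer: str) -> bool:
        """Record an invite if the issuer is under this week's limit."""
        if self.issued_this_week[issuer] >= INVITES_PER_MEMBER_PER_WEEK:
            return False
        self.issued_this_week[issuer] += 1
        self.invited_by[newcomer] = issuer
        return True

    def start_new_week(self) -> None:
        """Reset weekly counters; the accountability record persists."""
        self.issued_this_week.clear()

registry = InviteRegistry()
assert registry.issue_invite("alice", "bob")
assert registry.invited_by["bob"] == "alice"
```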
Resources: How would you plan to use the grant for your experiment?
We are sufficiently funded to continue our work. The grant, however, would allow us to prioritize applying our process to AI governance, a topic we’re super excited about but that had to come later on our roadmap. The opportunity to partner with OpenAI would allow us to contribute to humanity’s critical and timely need for AI governance while refining and discovering innovative governance processes.
The Psinq/OpenAI Grant experiment would become a dedicated drive period within our company’s 2023 OGSM plan.
Budget Planning Context
In the event Psinq’s application moves forward in the consideration process, we are working from the assumption that the relationship would be a two-way dialogue throughout the submission, planning, deployment, and reporting process, rather than a typical grant relationship.
In the event Psinq is engaged in a two-way dialogue with OpenAI representatives, Psinq would draft a budget schematic with audit reporting details.
Budget decision mindset: dedicate the entirety of the budget to OpenAI-specific experiments.
Rough estimates, shown as percentages of the $100,000 grant:
30%: Participant outreach and relations throughout the experiment.
30%: Dedicated Psinq engineering team stakeholders to lead.
30%: Dedicated Psinq research, operations, and communications team stakeholders to lead onboarding, supportive materials, education, and reporting.
10%: Budget padding.
Part II
In your view, what are the top three benefits that AI technology brings to society?
AI generally has an extraordinary ability to uplift and empower by bringing knowledge, creativity, and even reasoning to humanity at large, regardless of an individual’s base level of knowledge, experience, or intelligence. AI like ChatGPT brings the entirety of (publicly available) human knowledge to people’s fingertips in an accessible way that search engines just don’t match. Automated synthesis of answers from broad knowledge is a game changer for every single person.
As AI systems get more familiar and facile with the human world, they will (and have already begun to) automate and improve many tasks humans have historically done manually. Systems like GPT-3.5 and GPT-4 are already proving their ability to automate many programming tasks, allowing software engineers like those on our team to move faster and keep their minds on architecture, testing, and the big picture. It’s very exciting to imagine what humanity could produce with such an empowering technology at its fingertips.
As Artificial Intelligence continues to advance, the super intelligence that seems likely to emerge will enable things that no human or group of coordinating humans could ever achieve. While the risks and unknowns of such super intelligence are manifold, the opportunity for artificial super intelligence to solve some of humanity’s biggest problems and enable massive technological advances seems unlimited.
In your view, what are the biggest drawbacks or risks associated with the widespread use of AI technology?
There are so many risks to discuss! We start with a few important ones, then focus on one in particular, relevant to democratic governance structures.
Potential of existential risk from AI.
a) AI has the potential to become both far more intelligent and intelligent in completely foreign ways, having fundamentally non-human thoughts and perspectives. One “misguided” and unpredictable calculation on AI’s part could be catastrophic.
b) As researchers add capabilities, AI models will expand on the knowledge, intelligence, and creativity they already have and add characteristics like emotions, AI-to-AI coordination, and embodiment in rapidly advancing robots. As AI develops its own intentions, we really can only guess what those intentions might be, and it’s easy to see it calculating a suboptimal utility for human beings.
Regulation of AI itself represents some risk, given there will always be people, groups, and countries that continue to work outside the confines of existing regulation. We must remain aware that regulation may restrict the good actors who respect the regulation, while allowing the bad actors, unimpeded by that same regulation, to develop more powerful AIs more quickly.
Society’s existing misinformation problem could get far worse with AIs. Auto-generated reasonable-sounding arguments and on-demand persuasive speech could further polarize society.
Many technologies lead, at least initially, to a worsening of the wealth distribution. It’s likely that the use of AI will concentrate far more of its benefits in a minority, exacerbating class and wealth inequality.
As with all technology, we risk becoming lazy and ignorant in the face of technology that can provide so much on-demand.
The meta-risk we’re most interested in is simply the speed at which all of the above is approaching. Given the unknowns that will accompany this journey, humans need fast and effective ways of coordinating and responding. This is where we’re most excited to contribute.
What do you see as the most significant challenges in responsibly implementing AI technology, especially in the context of democratic decision-making systems?
Addressing the core purpose of our proposed experiment, we see the need to personalize AI implementations, so they serve all of humanity. Without some degree of personalized filters for AI, we will produce models that unequally serve some over others, leading to unequal access to the massive leverage AI represents to individuals.
Revisiting one of the items from the Risks question: one of the most significant challenges in implementing AI is our apparent ability to produce new AI technology that we don’t fully understand and whose consequences we can’t predict. It will likely be quite easy to build something dangerous before we have any idea of the danger, and we may not know the nature of that danger until we see it expressed.
Our passion and focus is governance. Effective governance requires participants to have good information. Information is compromised via misrepresentations, ignorance-driven misinformation, and propaganda/marketing. AI will be phenomenally good at generating and A/B testing communication at lightning speeds that effectively manipulates humans, likely even without explicitly negative intentions. We see investing in innovative democratic decision-making systems that fundamentally improve humanity’s ability to reach consensus and act collectively as critical to offset the risks of nearly infinite and foreign AI capability and growth rates. Humans will need the best tools they can develop to counteract AI threats.
Grant Submission Update
OpenAI opted not to select our proposal for one of the grants. There is no public information yet as to who was selected, but we did learn that we were one of over 800 submissions competing for the program’s 10 grants, making our chances of receiving one roughly 1 in 80! Psinq is excited to apply our framework to problems in AI, and we expect to make a related announcement soon. Keep an eye out for our AI-related use of fractal governance.
Would you please come to www.interplanetaryunconference.com to share? Dates are being set, so there’s some flexibility.