There has been growing speculation about the potential impact of artificial intelligence, in particular machine learning, on how society operates at the macro level.
Some have suggested that artificial intelligence will have a major impact on economics, providing the “smart hand” of the market to replace the problematic dysfunctions of both capitalism and centralized economies. Others have suggested that artificial intelligence would be better at organizing society than human policymakers - politicians - who seem mired in poor decision-making. Could artificial intelligence systems replace our current models of government and democratic representation? The possibility is exciting, but also alarming in its Big Brother overtones.
Can artificial intelligence truly play a role in a functioning democracy? The idea seems inconsistent with the very definition of the word “democracy,” which means “self-governance.” Self-governance requires that the individual be the final voice of authority on what they want to happen. It might be imagined that an artificial intelligence could do a better job of organizing society than our often chaotic political system - but this is just another variation on the “benevolent dictator” idea, which has failed so many times. There is a Hegelian notion of “becoming” that underlies the very concept of democracy, in which both the definition of the good and the members themselves are in constant, reflexive evolution. Just as the empowerment of individuals is necessary for them to actualize their highest potential - to fail or succeed - so it is for a society: it cannot evolve and reach its full potential without the democratic empowerment and self-accountability of its members.
However, this necessary limitation on the role of artificial intelligence in democracy does not mean that it cannot play an important, and perhaps even crucial, role in the evolution of self-governance. It simply means that the roles must be well defined. Deciding what should happen - the normative component of conscious decision-making - must always ultimately reside in self-willing individuals. However, artificial intelligence systems can make an enormous impact in facilitating that process: explaining what is happening, predicting what will happen, and helping to make things happen. They may even act as a cognitive extension of the act of willing itself - a kind of “meta-cortex.” While there are fears about artificial intelligence in democracy, with the right approach it may in fact be a great liberator. There is very clearly a large and growing potential role for machine learning in improving the machinery of democracy.
This article will explain some of those opportunities in the context of Ethelo’s eDemocracy technology.
A Backgrounder on the Ethelo Technology
Ethelo is a digital decision-making platform where participants collectively solve problems with the support of a “multi-attribute” decision algorithm. Complex decisions are broken down and represented as a set of decision components - options, issues, rules, constraints, criteria, etc. These decision components can be reassembled combinatorially to create an (often large) space of potential outcomes, each with a unique, logical description. Participants proceed through a workflow on the Ethelo platform in which they discuss and provide preference feedback on those decision components. They receive real-time feedback as they do, because the Ethelo algorithm applies their preferences to search the space of potential solutions and identify an optimized outcome unique to each person. Each participant can use the tools to vary the decision components until they are satisfied with the result. The Ethelo algorithm then uses the information provided by all the participants to describe the support distribution across the potential outcomes. It searches this scenario space to find a “best” outcome that maximizes average support while minimizing polarization - a utilitarian outcome that represents an “optimized consensus.” See Rawls and Ethelo. This optimized outcome will often have approval levels above 90%, because it captures trade-offs and compromises that are generally not surfaced through traditional democratic decision-making.
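To make the “optimized consensus” idea concrete, here is a minimal sketch in Python. It is not the actual Ethelo algorithm: the support matrix, the use of standard deviation as a polarization proxy, and the weighting term are illustrative assumptions.

```python
# A minimal sketch of the "optimized consensus" idea described above.
# This is NOT the actual Ethelo objective; the support values, the use of
# standard deviation as a polarization proxy, and the weight `lam` are
# illustrative assumptions.
import numpy as np

def scenario_scores(support: np.ndarray, lam: float = 1.0) -> np.ndarray:
    """support[i, j] = participant i's support for scenario j, in [-1, 1]."""
    mean_support = support.mean(axis=0)        # average support per scenario
    polarization = support.std(axis=0)         # dispersion as a rough polarization proxy
    return mean_support - lam * polarization   # reward support, penalize division

# Toy example: 4 participants, 3 candidate scenarios.
support = np.array([
    [ 0.9,  0.2, -0.8],
    [ 0.7,  0.3,  0.9],
    [ 0.8,  0.1, -0.9],
    [ 0.6,  0.4,  0.8],
])
best = int(np.argmax(scenario_scores(support)))
print("best scenario:", best)
```

In this toy example the scenario with the highest raw average is not chosen, because its support is sharply split; the middle scenario wins as the less divisive compromise.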
The Big Data of Ethelo
A typical Ethelo decision process will occur over 2-3 weeks, involving hundreds and often thousands of participants. In the course of such an engagement, we gather:
- “Vote data.” This is a broad term capturing all quantitative decision-preference information expressed by participants using Ethelo tools, which can include: scoring options, weighting issues, setting constraints, defining relations, applying criteria, making trade-offs, etc. It also includes the configuration settings describing the decision components and their relations.
- Comment data. Generally there are between 20 and 50 comment threads on an Ethelo project, capturing feedback on specific questions or themes. Participants provide 2-3 comments each on average, and can like and reply to other comments. Comment data can be further broken down into proposals (ideas), questions, answers, statements of fact, and value statements, each with an emotional valence.
- Demographic data. This includes all information used to describe participants, usually gathered using survey tools. It can include standard demographic information (age, gender, etc.) but also answers to questions about the participants’ relationship to the community and the issues at hand.
- Metadata. This covers all information generated through the activity of engaging with the platform itself. It includes time on platform, referral source, departure point, completion rates, IP address, device fingerprint and login ID. It can also include feedback on the process and user profile information.
- Ethelo analysis. As each participant goes through the process, Ethelo applies the decision rules to provide real-time feedback showing their personal favourite outcome. As this is an NP-complete problem, Ethelo uses a mixed-integer nonlinear solver, created in partnership with the University of Waterloo, which is able to give near-instant solutions for personal results. On an hourly schedule, the Ethelo engine applies the decision rules to aggregate the vote data across all participants and identify a top scenario for the group.
- Supplementary analysis. These are other types of analysis enabled by data generated on the platform, including bar charts and pie charts of all kinds.
Ethelo provides a custom 20-30 page report for each client, which includes a broad variety of analysis of both quantitative and qualitative data. This can include security analysis and demographic reweighting, where the distribution of influence is adjusted to create representative samples. Here is an example of an Ethelo report.
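As an illustration of demographic reweighting, the sketch below adjusts respondent influence so that group shares match census shares (a simple post-stratification approach). The column name, census figures and pandas-based implementation are assumptions made for the example, not Ethelo’s production method.

```python
# A hedged sketch of demographic reweighting (post-stratification style).
# The column name and census shares are hypothetical; Ethelo's actual
# reweighting procedure may differ.
import pandas as pd

def reweight(responses: pd.DataFrame, census_shares: dict, column: str = "age_group") -> pd.Series:
    """Return a weight per respondent so that group shares match the census."""
    sample_shares = responses[column].value_counts(normalize=True)
    weights = responses[column].map(lambda g: census_shares[g] / sample_shares[g])
    return weights / weights.mean()   # normalize so the average weight is 1

responses = pd.DataFrame({"age_group": ["18-34", "18-34", "35-54", "55+", "55+", "55+"]})
census = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}
print(reweight(responses, census))   # under-represented groups get weights above 1
```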
The Ethelo platform generates enormous amounts of data. It is highly structured data, including correlations between different data types (comments, votes, demographics, etc.) that can be used to generate new understandings of political identity and preference. Moreover, there can be a further level of pattern analysis across different Ethelo projects - for example, our Citizen Budget platform, which is used by more than 100 local governments, often annually.
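To give a sense of how these data types might hang together, here is an illustrative, unofficial schema. Every field name is an assumption made for the sketch; the real Ethelo data model is certainly richer.

```python
# An illustrative (not official) schema for the data types described above.
# Field names are assumptions made for this sketch.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ParticipantRecord:
    participant_id: str
    # Vote data: quantitative preference feedback on decision components
    option_scores: dict[str, float] = field(default_factory=dict)   # option -> score in [-1, 1]
    issue_weights: dict[str, float] = field(default_factory=dict)   # issue -> importance weight
    # Comment data: free-text contributions with reactions
    comments: list[str] = field(default_factory=list)
    # Demographic data: survey answers describing the participant
    demographics: dict[str, str] = field(default_factory=dict)
    # Metadata: activity generated by using the platform itself
    time_on_platform_s: Optional[float] = None
    referral_source: Optional[str] = None
    completed: bool = False
```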
It has been said that the future winners in artificial intelligence will win not because they are developing intelligent machines or learning algorithms, but because they have proprietary data that can be used to guide the development and training of those machines and algorithms. See The Most Important Thing for the Future of Artificial Intelligence. If this is true, then Ethelo is well positioned.
The following are some of the applications we see for machine learning applied to Ethelo data:
Prediction
Training AI systems on the Ethelo dataset will generate very useful insights and predictions about decisions. For example, this information can be aggregated to enable modeling of participant “archetypes,” which can help simplify understanding of the composition and views of a community. With this information, we can make predictions about how residents of other communities might respond to similar processes, transferring learnings and reducing the time to insight. An Ethelo-driven AI could help governments and organizations make accurate, insightful, high-value public-policy and business decisions, drawing on this database of how diverse groups of individuals make tradeoffs and decisions.
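One plausible way to model participant “archetypes” is to cluster vote vectors; the sketch below uses k-means purely as an illustration. The synthetic data, feature construction and number of clusters are all assumptions rather than a description of Ethelo’s approach.

```python
# A sketch of modeling participant "archetypes" by clustering vote data.
# The synthetic data and number of clusters are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

# rows = participants, columns = option scores in [-1, 1]
votes = np.random.uniform(-1, 1, size=(500, 20))

kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(votes)
archetypes = kmeans.cluster_centers_   # one "typical" vote vector per archetype
membership = kmeans.labels_            # archetype assigned to each participant
print(np.bincount(membership))         # how many participants fall into each archetype
```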
Natural Language Processing
The Ethelo platform routinely gathers thousands of comments in a single public consultation process. When clients require in-depth qualitative analysis of comments, we undertake the work by hand, hiring researchers to do the necessary coding and theming. No AI tool we have found can approach the human intelligence and contextual understanding needed to identify emotional tone, meaning and ideas.
Here is an example of an Ethelo comment analysis report.
Although we do not see AI tools approaching human capabilities in this area anytime soon, we can nevertheless see many ways that AI can lighten the load significantly, especially once a sample set of comments has already been themed and coded. Applied within a proper framework alongside human analysis, AI offers an enormous opportunity to speed the analysis and identify larger patterns across very large datasets.
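For example, once a hand-coded sample exists, a simple text classifier could propagate those themes to the remaining comments for human review. The sketch below shows one such approach (TF-IDF plus logistic regression); the themes and comments are invented for illustration and are not drawn from any Ethelo project.

```python
# A sketch of using a hand-coded sample to theme the remaining comments.
# Themes, comments and model choice are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

coded_comments = [
    "Please add more bike lanes downtown",
    "Property taxes are already too high",
    "Protect the riverfront park from development",
]
coded_themes = ["transportation", "taxation", "parks"]

uncoded_comments = [
    "Widen the cycling path on 5th Avenue",
    "Don't raise the mill rate again",
]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(coded_comments, coded_themes)
print(model.predict(uncoded_comments))   # predicted theme for each uncoded comment
```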
Data Structuring
The Ethelo method approaches each decision as a set of decision components interacting according to processes and rules. Complex - and generally important - decisions will have many moving parts. How can we automate, or decentralize, the process of decision-making? For example, a decision process includes: question, convening, ideation, structuring, evaluation, aggregation and ratification. Currently, Ethelo’s team of fulfillment specialists fills this role, using different templates for different processes but also a lot of custom analysis. Could it be possible, with the support of AI, to invoke and manage a complex, structured decision process in a decentralized way? We believe so, especially in combination with roles delegated to specific participants selected by the group to perform certain cognitive functions.
See: the structure of decentralized decision-making
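As a toy illustration of treating this workflow as structured data, the sketch below encodes the stages listed above as an explicit process object that an AI assistant or a delegated facilitator could help drive. The transition logic is an assumption made for the example.

```python
# A minimal sketch of the decision stages listed above as an explicit workflow
# object; the stage names come from the text, everything else (linear
# transitions, gating) is an assumption.
from enum import Enum, auto

class Stage(Enum):
    QUESTION = auto()
    CONVENING = auto()
    IDEATION = auto()
    STRUCTURING = auto()
    EVALUATION = auto()
    AGGREGATION = auto()
    RATIFICATION = auto()

class DecisionProcess:
    order = list(Stage)

    def __init__(self):
        self.stage = Stage.QUESTION

    def advance(self) -> Stage:
        """Move to the next stage; an AI or delegated facilitator could gate this step."""
        i = self.order.index(self.stage)
        if i < len(self.order) - 1:
            self.stage = self.order[i + 1]
        return self.stage

process = DecisionProcess()
print(process.advance())   # Stage.CONVENING
```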
Data Validation
Public consultation processes hosted on Ethelo are often contentious (e.g. the location of a waste disposal plant, or the regulation of woodlots), and there are incentives for participants to influence the outcomes by “stuffing the ballot box” with multiple submissions.
Ethelo currently uses passive processes to monitor participant activity and assign risk levels to accounts displaying signs of repeated submissions, whether automated or manual. While this problem will ultimately be solved - one hopes - with digital identities, there will still be a need to monitor for potentially fraudulent activity. When the stakes are high - such as public policies that affect many people - history has proven that extremely robust systems are needed. AI support will be needed to identify such suspicious activity and provide validated datasets for analysis.
Here is an example of an Ethelo security report.
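As a hedged illustration of how platform metadata could surface suspicious submissions, the sketch below scores sessions with an off-the-shelf anomaly detector. The features and model choice are assumptions made for the example and do not describe Ethelo’s actual security pipeline.

```python
# A hedged sketch of flagging suspicious submissions from platform metadata.
# The features and the use of IsolationForest are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

# Columns: seconds on platform, submissions from the same device, completion rate.
# Real features would be richer (IP, device fingerprint, timing patterns, etc.).
sessions = np.array([
    [620.0, 1, 0.95],
    [540.0, 1, 0.88],
    [ 12.0, 9, 1.00],   # very fast, many submissions from one device
    [480.0, 1, 0.72],
])

detector = IsolationForest(contamination=0.25, random_state=0).fit(sessions)
flags = detector.predict(sessions)   # -1 = flagged as anomalous, 1 = normal
print(flags)
```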
Engine Optimization
The problem the Ethelo algorithm solves is NP-complete, which means that the time to solve it conclusively grows exponentially with the size of the option set. We currently address this using non-linear programming techniques that are practically, but not provably, complete, as there is a chance of optimal solutions hiding in local minima.
We can use AI to help identify these regions of local maxima and minima and increase the thoroughness of the nonlinear-programming approaches. We can also use AI to predict the number of solutions, which is useful to know and does not need to be exact.
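One way AI could assist the solver, sketched below under stated assumptions, is to learn a cheap surrogate of the scenario score from a sample of exactly evaluated scenarios, then use it to rank candidate warm starts for the exact solver. The scoring function, model and problem sizes are illustrative; this is not the production engine.

```python
# A hedged sketch: learn a cheap surrogate of the scenario score from sampled
# scenarios, then rank unexplored scenarios to propose warm starts for the
# exact solver. The scoring function and model are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n_components = 15

def true_score(x: np.ndarray) -> float:
    """Stand-in for the expensive exact evaluation of a scenario."""
    return float(x[:5].sum() - 0.5 * x[5:].sum())

# Evaluate a sample of binary scenarios exactly, then fit the surrogate.
sampled = rng.integers(0, 2, size=(200, n_components))
scores = np.array([true_score(x) for x in sampled])
surrogate = GradientBoostingRegressor().fit(sampled, scores)

# Rank fresh candidate scenarios by predicted score; feed the best to the solver.
candidates = rng.integers(0, 2, size=(1000, n_components))
warm_starts = candidates[np.argsort(-surrogate.predict(candidates))[:5]]
print(warm_starts)
```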
There is reason to believe that machine learning will be more effective than other solvers, such as Bonmin. This is because we are essentially modelling a neural network composed of the neural networks of the individual decision-makers. Human decision-making, whether individual or collective, is built on neural network architecture. For example, the human mind has been optimized through evolution to be expert at modelling future scenarios and how to achieve them, which is the basis of the design of the Ethelo algorithm.
A Backgrounder on Trust Networks
Traditional democracy counts the influence of voters as scalars: one person, one vote. There are various ways of aggregating this vote - majority, plurality, run-off - but the essential result is the same: the option with the most votes wins. Ethelo, on the other hand, represents the intention of voters as vectors composed of real numbers between -1 and 1 for each potential scenario, normalized to unit length. This (often very large) vector - called the influent function - is constructed heuristically from the scores and weights the participant gives to the different options, issues, criteria and other decision components as they move through the Ethelo decision process.
This multi-dimensional nature of the influent function allows voter influence to be delegated in interesting new ways. That is, a voter can participate indirectly by voting on only a subset of the decision components, drawing on the influent functions of others to supply the missing data. For example:
- A voter can assign a “trust weight” to each of a number of delegates (a delegate is a person who has shared their vote with others). Then, by weighted vector addition and renormalization, we can build a new influent function that is essentially the weighted sum of the votes of the chosen delegates, with the level of trust accorded to each delegate determining how much they contribute to the new influent function (a code sketch of this step follows the list).
- Delegates can be assessed (perhaps collectively) by their expertise in one or more of the issues or criteria used in the decision. Then, rather than voting on the delegates themselves, the participant can simply weigh the importance of the issues or criteria, and a new influent function can be created based on the expertise of some or all of the delegates on those issues or criteria.
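Here is the sketch referred to in the first bullet: building a delegated influent function by trust-weighted vector addition and renormalization. The vector sizes and trust weights are made up for illustration.

```python
# A minimal sketch of trust-weighted delegation: the new influent function is
# the trust-weighted sum of delegate influent functions, renormalized to unit
# length. Shapes and trust values are illustrative.
import numpy as np

def delegated_influent(delegate_influents: np.ndarray, trust_weights: np.ndarray) -> np.ndarray:
    """delegate_influents[k] is delegate k's unit-length influent vector."""
    combined = trust_weights @ delegate_influents   # weighted vector addition
    return combined / np.linalg.norm(combined)      # renormalize to unit length

# Two delegates, a scenario space of 4 outcomes, trust weights 0.75 / 0.25.
delegates = np.array([
    [ 0.5,  0.5, -0.5, -0.5],
    [ 0.0,  0.7,  0.0, -0.7],
])
delegates = delegates / np.linalg.norm(delegates, axis=1, keepdims=True)
print(delegated_influent(delegates, np.array([0.75, 0.25])))
```

Because the result is renormalized to unit length, a delegated vote carries the same overall influence as a direct one.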
In liquid democracy, voters can delegate their votes to specific people, depending on the issue, and retract them if they are unsatisfied with the performance of the delegate. This highly flexible and responsive mode of voting is a significant step forward from the static nature of voting as currently practiced. Ethelo’s multi-attribute decision model allows the principles of liquid democracy to be extended and applied in a highly customizable way, across different groups of delegates depending on the issue or criteria.
Click here for a deeper explanation of Ethelo Trust Networks.
Political Avatars
There are various ways that machine learning and artificial intelligence systems can integrate into Ethelo’s Trust Network system.
The Ethelo algorithm includes simple heuristics for identifying other participants whose voting patterns are similar to a given participant’s, for the purpose of providing recommendations and predictions. With machine learning, we can increase the effectiveness of these heuristics, looking at a broader variety of factors and unstructured datasets. This ability to make good predictions of participant preferences will enable us to provide each participant with a personalized, intelligent avatar. That avatar would be trained to represent a participant in complex participatory processes by asking an optimized set of questions (e.g. value-based or demographic questions) and using that information to identify correlations and make predictions. Machine learning can also identify which questions will reduce prediction uncertainty most rapidly - always asking the next-most-revealing question.
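The sketch below illustrates one way to pick the “next-most-revealing question”: given a belief over participant archetypes and a table of how each archetype would answer, it asks the question with the lowest expected posterior entropy. The archetypes, priors and answer table are invented for the example; a deployed avatar would learn these from Ethelo data rather than have them specified by hand.

```python
# A hedged sketch of "asking the next-most-revealing question": pick the
# question whose answer is expected to shrink uncertainty about the
# participant's archetype the most. All numbers here are illustrative.
import numpy as np

priors = np.array([0.4, 0.3, 0.2, 0.1])   # belief over 4 archetypes
# answers[q][k] = archetype k's answer to question q (categorical, simplified)
answers = np.array([
    [0, 0, 1, 1],   # question 0 splits archetypes {0,1} vs {2,3}
    [0, 1, 0, 1],   # question 1 splits them differently
    [0, 0, 0, 1],   # question 2 only isolates archetype 3
])

def entropy(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def expected_posterior_entropy(q):
    total = 0.0
    for a in np.unique(answers[q]):
        mask = answers[q] == a
        p_answer = priors[mask].sum()
        posterior = priors * mask / p_answer
        total += p_answer * entropy(posterior)
    return total

best_q = min(range(len(answers)), key=expected_posterior_entropy)
print("ask question", best_q)
```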
By asking a small set of questions, it should be possible to predict with high accuracy participant responses to the key factors in complex decision processes. Of course, the accuracy would increase the more questions were answered, so each participant could weigh in as deeply - or not - as they wished on a particular issue while preserving the equality of their votes. Moreover, these “avatars” could represent participants over time, learning more about the person they are representing as they go through different processes.
In other words, each person could be personally represented by an AI avatar that they themselves train, and which negotiates to advance their political interests in whatever system emerges. Note that this would not be a completely automated system; it would still require expert participants to act as “delegates,” sharing their votes as they apply their expertise directly to aspects of the decision.
Conclusion
The potential of trust networks and artificial intelligence to expand the effectiveness of group decision-making is not limited to democracy. In fact, there is a higher-level function at work - a constituent function of living consciousness generally - and that is decision-making. The description of “volition” as a fundamental aspect of consciousness goes back to the Buddha. The inevitable and critical nature of this function of consciousness is essentially the same at the individual and collective levels. To live is to will; and “we” are just as alive as “I” am, and for exactly the same reasons.
Concretely, this means that improved group decision-making will affect all aspects of life. It will not stop at democratic governance; it applies equally to the private sphere. Economic historian Robert Heilbroner predicted the end of market capitalism and its potential replacement by what he called “participatory economics.” Indeed, there is potential for a powerful synergy between human and artificial intelligence that will animate a true “smart hand” - a hand guided by an emergent collective consciousness enabled by our new technologies.
We can consider our current democratic system in terms of “bandwidth.” What is the flow rate of information among the group in formal decision-making? In our traditional democracy, while there are many informal channels, the formal channel of power where we “act as one” is incredibly narrow: one election every four years. That is a bandwidth of tens of bytes per year. Channels like Facebook, by contrast, move on the order of 4 petabytes (4 × 10^15 bytes) of information per day. This bottleneck is the reason for our current crisis of governance and decision-making. The world has sped up drastically, and our old systems prevent us from responding effectively to challenges as diverse as climate, inequality, tax evasion, lobbying, and the whole wealth of legislative, financial and regulatory decision-making that takes place.
We simply cannot solve society's current problems using the same decision-making processes that created them.
The high-bandwidth combination of human and artificial intelligence will enable us to address these problems - to “get in front of them,” so to speak - by increasing the throughput of cognitive processing and engaging individuals more effectively in the important decisions. In this world, AI won’t be separate from human consciousness - it will be fully integrated.