“So be sure when you step, Step with care and great tact. And remember that life's A Great Balancing Act. And will you succeed? Yes! You will, indeed! (98 and ¾ percent guaranteed).” ― Dr. Seuss, Oh, the Places You'll Go!
AI systems take an incredible amount of time to build and get right - I know because I have helped scale some of the largest AI systems in the world, systems that have directly and indirectly impacted billions of people. If I step back and reflect briefly: we were promised mass-produced self-driving cars more than a decade ago, and yet we still have barely any autonomous vehicles on the road today. Radiologists haven't been replaced by AI despite predictions virtually guaranteeing as much, and the best consumer robot available is the iRobot Roomba j7 household vacuum.
We often fall far short of the technological exuberance we project into the world, realizing time and time again that building robust production systems is always harder than we anticipate. Indeed, Roy Amara made this observation long ago:
We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.
Even when we have put seemingly incredible technology out into the world, we often overshoot and release it to minimal demand - the litany of technologies that promised to change the world but subsequently failed is a testament to that. So let's be realistic about AI - we definitely have better recommendation systems, better chatbots, translation across many languages, better photos, we are all now better copywriters, and we have helpful voice assistants. I've been involved in using AI to solve real-world problems like saving the Great Barrier Reef and helping people who have lost limbs rediscover their lives. Our world is a better place because of such technologies, and they have been developed and deployed into real applications over the last 10+ years with profoundly positive effect.
But none of this is remotely close to AGI - artificial general intelligence - or anywhere near ASI - artificial superintelligence. We have a long way to go, despite what the media presages like a Hollywood blockbuster storyline. In fact, in my opinion, these views are a bet against humanity because they gravely overshadow the incredible positives that AI is already providing, and will continue to provide - massive improvements in healthcare, climate, manufacturing, entertainment, transport, education, and more - all of which will bring us closer to understanding who we are as a species. We can debate the merits of longtermism forever, but the world has serious problems we can solve with AI today. Instead, we should be asking ourselves why we are scrambling to stifle innovation and implement naively restrictive regulatory frameworks now, at a time when we are still trying to understand how AI works and how it will impact our society.
Current proposals for regulation seem more concerned with the ideological risks associated with transformative AGI - which, barring some incredible change in physics, energy, and computing, or the discovery that AGI exists in a vastly different form from our understanding of intelligence today, is nowhere near close. The hysteria projected by many does not comport with the reality of where we are today or where we will be anytime soon. It's the classic mistake of inductive reasoning - taking a very specific observed example and making overly broad generalizations and logical jumps. Production AI systems do far more than just execute an AI model - they are complex systems - just as a radiologist does far more than look at an image: they treat a person.
For AI to change the world, one cannot point solely to an algorithm or a graph of weights and declare the job done - AI must exist within a successful product that society consumes at scale, and distribution matters a lot. The reality today is that the AI infrastructure we currently possess was developed primarily by researchers, for research purposes. The world doesn't seem to understand that we don't actually have the production software infrastructure to scale and manage AI systems to the enormous computational heights required for them to work practically in products across hardware. At Modular, we have talked about these challenges many times, because AI isn't an isolated problem - it's a systems problem comprising hardware and software across both data centers and the edge.
Regulation: where to start?
So, if we assume that AGI isn't arriving for the foreseeable future, as I and many others do, what exactly are we seeking to regulate today? Is it the models we have already been using for many years in plain sight, or are we trying to pre-empt a far-off future? As always, the truth lies somewhere in between. It is the generative AI revolution - not the forms of AI that have existed for years - that has catalyzed much of the recent excitement around AI. It is here, in this subset of AI, that practical AI regulation should make a focused start.
If our guiding success criterion is broadly something like "enable AI to augment and improve human life," then let's work backward with clarity from that goal and implement legislative frameworks accordingly. If we don't know what goal we are aiming for, how can we possibly define policies to guide us? Our goal cannot be "regulate AI to make it safe," because the very nature of that statement implies the absence of safety from the outset, despite us having lived with AI systems for 10+ years. To realistically achieve a balanced regulatory approach, we should start within the confines of laws that already exist - extending them to address concerns about AI and learning how society reacts - before seeking completely new, sweeping approaches. The overlap between AI and existing law is already substantial: we have data privacy and security laws, discrimination laws, export control laws, communication and content usage laws, and copyright and intellectual property laws, among many other statutory and regulatory frameworks we could seek to amend.
The idea that we need entirely new "AI-specific" laws now feels impractical - the field has already existed for years, and such laws risk immediately curbing innovation for use cases we don't yet fully understand. They would likely be cumbersome and slow to enforce while creating undue complexity that stifles rather than enables innovation. We can look to history for precedent here - there is no single "Internet Act" for the US or the world; instead, we divide and conquer Internet regulation across our existing legislative and regulatory mechanisms. We have seen countless laws that attempt to broadly regulate the internet fail - the Stop Online Piracy Act (SOPA) is one shining US example - while laws that regulate within existing bounds succeed. For example, Section 230 of the Communications Decency Act shields online platforms from liability for content published by their users, and that protection has enabled modern internet services to innovate and thrive to enormous success (e.g., YouTube, TikTok, Instagram), while market competition has pushed those companies to set high content standards of their own to build better product experiences and retain users. If they didn't implement self-enforcing policies and standards, users would simply move to a better and more balanced service, or a new one would be created - that's market dynamics.
Of course, we should be practical and realistic. Any laws we amend or implement will have failings - they will be tested in our judicial system and rightly criticized, but we will learn and iterate. We won't get this right initially - a balanced approach often means some bad actors will succeed - but we can limit those bad actors while building a stronger and more balanced AI foundation for the years ahead. AI will continue evolving, and the laws will not keep pace - take, for example, misinformation, where generative AI makes it far easier to construct alternate truths. Even though this capability has been unlocked, social media platforms still grapple with moderating non-AI-generated content, and have for years. Generative AI will likely create extremely concerning misinformation across services, irrespective of any laws we implement.
EU: A concerning approach
With this context, let's examine one of the most concerning approaches to AI regulation - the European Artificial Intelligence Act - an incredibly aggressive approach that will likely cause Europe to fall far behind the rest of the world in AI. In seeking to protect EU citizens absolutely, the AI Act seemingly forgets that our world is now deeply interconnected and that AI programs are, and will continue to be, woven throughout our global ecosystem. The Act's definition of AI is arguably broad enough to capture essentially any probabilistic method of predicting anything. Article 5, for example, prohibits:
(a) the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm;
Rather than taking a more balanced approach and appreciating that perfect is the enemy of good, the AI Act tries to ensnare all types of AI, not just the generative systems currently at the top of politicians' and business leaders' minds. How does one even hope to enforce "subliminal techniques beyond a person's consciousness"? Does this include the haptic notification trigger on my Apple Watch, powered by a probabilistic model? Or the AI system that powers directions on my favorite mapping service with different visual cues? Further, the Act includes a "high risk" categorization for "recommender systems," which today power essentially all of e-commerce, social media, and modern technology services - are we to govern and require conformity, transparency, and monitoring of all of these too? The thought is absurd, and even if one disagrees, the hurdle posed for generative AI models is so immense that no model meets the standards of the EU AI Act in its current incarnation.
We should not fear AI - we've been living with it for 10+ years in every part of our daily lives - in our news recommendations, mobile phone cameras, search results, car navigation, and more. We can't simply seek to extinguish it from existence retrospectively - it's already here. So, to understand what might work, let's walk the path of history to explore what didn't - the failed technological regulations of the past. Take the Digital Millennium Copyright Act (DMCA), which attempted to enforce DRM by making it illegal to circumvent technological measures used to protect copyrighted works. This broadly failed everywhere: it didn't protect privacy, it stifled innovation, it was routinely circumvented, and, most critically, it was not aligned with consumer interests. It failed in Major League Baseball, the e-book industry abandoned it, and even the EU couldn't get it to pass. What did work in the end? Building products aligned with consumer interests - legitimate ways to access high-quality content. The result? Incredible services like Netflix, Spotify, and YouTube - highly consumer-aligned products that deliver enormous economic value and entertainment for society. While each of these services has its challenges, at a broad level they have significantly improved how consumers access content, decentralized its creation, and enabled enormous and rapid distribution that empowers consumers to vote with their attention and purchasing power.
A great opportunity to lead
The US has an opportunity to lead the world by constructing regulation that enables progressive and rapid innovation - regulation grounded in this country's principles and its role as a global model for democracy. Spending years constructing the "perfect AI legislation" and "completely new AI agencies" will end up like the parable of the blind men and the elephant - repackaged laws that attempt to regulate AI from different angles without holistically solving anything.
Ultimately, seeking to implement broad new statutory protections and government agencies for AI is the wrong approach. It will be slow, it will take years to garner bipartisan support, and meanwhile the AI revolution will roll onwards. We need AI regulation that defaults to action - that opens the door to innovation and creativity while deterring the misuse of data and punishing discriminatory, abusive, and prejudicial conduct. We should seek regulation that is more focused initially - predominantly on misinformation and misrepresentation - and avoid casting an incredibly wide net across all AI innovation, so we can understand the implications of initial regulatory enforcement and how to structure it appropriately. For example, we can regulate the data these models are trained on (e.g. privacy, copyright, and intellectual property laws), the computational resources they use (e.g. export laws), and the products their predictions are made in (e.g. discrimination laws) to start. In this spirit, we should seek to construct laws that target the inputs and outputs of AI systems, not the individual developers or researchers who push the world forward.
Here’s a small non-exhaustive list of near-term actionable ideas:
- Voluntary transparency on the research & development of AI - If we wait for Congress, we wait for an unachievable better path at the expense of a good one today. There is already an incredible body of work on open and transparent disclosure - proposed by OpenAI, in Google's AI Principles, and by others - and a strong will to transcend Washington and just do something now. We can seek to ensure companies exercise a duty of care - a responsibility under common law to identify and mitigate ill effects - and report transparently on their progress. And of course, the same should be true of our government agencies.
- Better AI use case categorization & risk definition - There is no well-defined regulatory concept of "AI," nor of how to determine the risks of particular use cases. While the European AI Act clearly goes too far, it does at least seek to define what AI is - but errs by sweeping in essentially all of probability. It also needs to better classify risk and the classes of AI from which it is actually seeking to protect people. We can create a categorical taxonomy of AI use cases and target regulatory enforcement accordingly. For example, New York is already implementing laws that require employers to notify job applicants if AI is used to review their applications.
- Pursue watermarking standards - Google is leading the charge with watermarking to determine whether content has been AI-generated. While these standards are reminiscent of DRM, they are a useful step in encouraging open standards so that major distribution platforms can bake watermarking in.
- Prompt clearly on AI systems that collect and use our data - Apple did this to great effect with App Tracking Transparency, requiring apps to ask permission before tracking users via the IDFA. We should expect the same of our services for any data used to train AI, along with the AI model use cases that data informs. Increasingly, all data is effectively biometric - from the way one types, to the way one speaks, to the way one looks and even walks. All of this data can be, and is, being used to train AI models that form biometric fingerprints of individuals - this should be made clear and transparent to society at large. Both data privacy and intellectual property laws should protect who you are and how your data is used in a new generative AI world. Data privacy isn't new, but we should realize that the evolution of AI continually raises the stakes.
- Protect AI developers & open source - We should focus regulatory efforts on data inputs and establish clear licensing structures for AI models and software. Researchers and developers should not be held liable for creating models, software tooling, and infrastructure that is distributed to the world, so that an open research ecosystem can continue to flourish. We must promote initiatives like model cards and ML Metadata across the ecosystem and encourage their use. Further, if developers open source AI models, we must ensure that it is incredibly difficult to hold the model developer liable, even as a proximate cause. For example, if an entity uses an open-source AI recommendation model in its system, and one of its users causes harm as a result of one of those recommendations, one can't merely seek to hold the model developer liable - it is foreseeable to a reasonable person who develops and deploys AI systems that rigorous research, testing, and safeguarding must occur before deploying AI models into production. We should ensure this is true even if the model author knew the model was defective. Why? Because a person who uses any open source code should reasonably expect that it might have defects - which is why we have licenses like MIT that provide software "as is." Unsurprisingly, this isn't a new construct; it's how liability has worked for centuries in tort, and we should ensure AI isn't treated differently.
- Empower agencies to have agile oversight - AI consumer scams aren't new - they are already executed by email and phone today. Enabling existing regulatory bodies to take action immediately is better than waiting for congressional oversight. The FTC has published warnings, as have the Department of Justice and the Equal Employment Opportunity Commission - these are existing regulatory agencies that can help more today.
Embrace our future
It is our choice whether to embrace fear or excitement in this AI era. AI shouldn't be seen as a replacement for human intelligence, but rather as a way to augment human life - a way to improve our world, to leave it better than we found it, and to instill great hope for the future. The threat that we perceive - that AI calls into question what it means to be human - is actually its greatest promise. Never before have we had such a character foil to ourselves, and with it, a way to significantly improve how we evolve as a species - to help us make sense of the world and the universe we live in, and to bring about incredibly positive impact in the short term, not in a distant, theoretical future. We should embrace this future wholeheartedly, but do so with care and great tact - for as with all things, life is truly a great balancing act.
Many thanks to Christopher Kauffman, Eric Johnson, Chris Lattner and others for reviewing this post.
Image credits: Tim Davis x DALL-E (OpenAI)