The Separation of Mind and State
Introducing Venice.ai
It is clear that central powers will always seek to capture human institutions.
Money. Religion. Education. Business. Even language and mathematics itself.
Hundreds of years ago, through tremendous sacrifice, the institution of religion was incrementally removed from the government's sphere of authority. Today, religious or not, we recognize the importance of that separation of church and state.
The original cypherpunks sought to separate language from state through encryption, which for a time was legally classified as a “munition,” making certain types of math illegal to export. Their descendants, the early Bitcoin pioneers, sought to similarly separate money and state.
But if monopoly control over god or language or money should be granted to no one, then at the dawn of powerful machine intelligence, we should ask ourselves, what of monopoly control over mind?
To whom do we grant license over intelligence itself?
Mind and State
Using nothing but a quick text prompt—a question or command written in plain language—human minds can now leverage computer minds to instantly generate astonishing content of any variety. Where the web allows people to access information quickly, AI lets people produce information quickly.
Perhaps it’s inevitable that those who fear liberty of the human mind would also detest its unfettered interaction with intelligent machines. Already, the busy authoritarians plan their restrictions, their licenses, their limits, their committees, their policies and their permissions. Our safety is at stake, they remind us.
But let’s also remember that technologies are neutral. A hammer can build a house or crush a skull. A specific human usage of AI could cause harm, but there is little evidence of AI itself doing so.
And let’s not conflate the current technologies of generative AI with AGI. The latter is something different, and something thus far unwitnessed. How will we know when (if) it happens? Perhaps when a computer program doesn’t follow instructions. But from Pong to GPT-4, they all do.
The speculative risk of a yet-unseen phenomenon like AGI is being used, implicitly or explicitly, as justification to control all manner of non-AGI artificial intelligence technology. It’s a sleight of hand.
And these non-AGI technologies are, indeed, neutral. A society that can’t distinguish between the neutrality of technology and the risks of particular use cases will tragically subvert technological advancement in general. The avoidance of imagined harm causes actual harm. The tribes that feared fire froze in the darkness, and we will never know them.
Prudence alone dictates decentralization
All good people care about safety. The important question: is safety best achieved through coercive centralization, or through open decentralization? Some well-organized voices are certain it’s the former. Observe the enthusiastic alliance between large tech firms, which naturally wish to curtail competition from upstarts, and the State, which has never seen a thing over which it didn’t desire jurisdiction.
This union—the very definition of corporatism and regulatory capture—manifested two weeks ago in the Orwellian “Artificial Intelligence Safety and Security Board” from the Department of Homeland Security.
That such an organization will experience mission creep over time is guaranteed. And even if today’s administration is wise and virtuous, consider that power in subsequent hands.
There is another way.
Transparency and decentralization are better safety mechanisms than appeals to experts and state licensure. The former tends toward iterative improvement, while the latter tends toward complacency and stagnation (witness the rapid dynamism of web3 vs. the banking system).
A foundation of open source is the more realistic (and ethical) means by which we achieve both wildly exciting development and robust security. Indeed, this is how the world’s entire web infrastructure operates: a decentralized, open-source foundation that empirically works at the largest scale mankind has yet built.
Yet the Department of Homeland Security didn’t put a single open-source advocate on its 22-member AI Safety and Security Board.
Building Permissionless Intelligence
Over the past year, a movement has been forming.
AI people who are passionate about technological advancement are mixing with crypto people who have a deep skepticism of centralization and trust-via-credentialism. These groups are natural allies in building decentralized, permissionless AI.
A couple years after the decentralization of ShapeShift, I’ve now found myself pulled into this movement of AI x Crypto. Since last fall, I’ve been contributing to a project called Morpheus, a decentralized AI network. And from that involvement, I realized a certain (obvious) product needed to be built.
And so we’ve begun building Venice: myself, our COO Teana Baker-Taylor (formerly of HSBC, Circle, and Binance), and a small team.
Venice - The Serene Republic
Venice is a generative AI app for non-technical people. It will feel like a snappy version of Claude or Perplexity or ChatGPT, but without all the Orwellian stuff (perhaps that’s why it’s snappy?).
Search the world’s information, have rich conversations, analyze documents, create interesting images and art of any style at the push of a button. Various AI services offer these things, so what makes Venice unique?
Venice doesn’t spy on you
Venice doesn’t censor the AI
In other words, Venice is private, and Venice is permissionless.
Note that each open-source AI model has its own boundaries and rulesets, but importantly, Venice gives users a choice among them. Further, Pro users can edit the System Prompt itself, empowering them to shape the personality of the AIs with which they interact.
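To make that concrete, here is a minimal sketch of what an editable System Prompt amounts to in a typical open-source chat-model request. The endpoint, model name, and field layout follow the common chat-completions convention and are illustrative assumptions, not Venice’s actual API.

```typescript
// Illustrative only: the endpoint, model name, and field layout are
// assumptions in the common chat-completions style, NOT Venice's API.
// The point: the System Prompt is simply the first message in the
// conversation, so letting users edit it directly shapes the model's
// persona and boundaries.
const response = await fetch("https://example.com/api/chat", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "an-open-source-model",
    messages: [
      // A Pro user could replace this content with their own instructions.
      { role: "system", content: "You are a terse, skeptical research assistant." },
      { role: "user", content: "Explain the separation of church and state." },
    ],
  }),
});
```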
So how does Venice work?
Venice utilizes leading open-source AI models (we’re fond of Nous Research) to deliver text, code, and image generation to your web browser or mobile app.
No downloads. No installations of anything. And for basic use, no account necessary and the service is free.
Technical people have been using open-source tools for generative AI for some time, but if you’re a little overwhelmed when you arrive at HuggingFace, Venice is for you.
The Venice front-end is a clean web app that should feel familiar to anyone who has used ChatGPT.
The app is straightforward. But here’s the difference…
Every company says they respect user privacy. This is nonsense. The only way to respect user privacy is to not violate it in the first place. If a company has your data, your privacy is already lost.
The leading AI companies save your entire conversation history and attach it to your identity, forever. Worse, they siphon it off to various third parties: advertisers, hackers, and most dangerous of all, governments.
Those from the crypto world will be familiar with trust-minimizing architecture, client-side design, and end-to-end encryption. These patterns were forged in crypto by its strong culture of privacy and the hard requirement to protect users and their assets. Venice applies these protective patterns to generative AI.
Your conversation history is stored only in your browser. Venice does not store or log any prompts or model responses on our servers.
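As a sketch of what “stored only in your browser” means in practice, using standard browser web-storage APIs (the key name and types here are illustrative, not Venice’s actual code):

```typescript
// Sketch: persisting chat history client-side with localStorage.
// Everything here runs in the browser; nothing is written server-side.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

const HISTORY_KEY = "venice.conversation"; // hypothetical key name

function loadHistory(): ChatMessage[] {
  const raw = localStorage.getItem(HISTORY_KEY);
  return raw ? (JSON.parse(raw) as ChatMessage[]) : [];
}

function saveMessage(message: ChatMessage): void {
  const history = loadHistory();
  history.push(message);
  localStorage.setItem(HISTORY_KEY, JSON.stringify(history));
}
```

A consequence of this design is that clearing your browser storage deletes the history entirely; there is no server-side copy to leak, sell, or subpoena.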
Your inference requests (the messages you send) travel encrypted through a proxy server to the decentralized compute resources.
The AI’s response is streamed back through the same encrypted proxy and never persists anywhere except in your browser.
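In browser terms, that round trip is an ordinary streaming fetch, rendered token by token as it arrives. The endpoint URL below is a placeholder, not Venice’s real proxy:

```typescript
// Sketch: sending an inference request through the proxy and streaming
// the reply into the page as it arrives. Nothing here writes to disk or
// to a server; the text exists only in memory and in the rendered page.
// The endpoint URL is a placeholder assumption.
async function streamCompletion(
  messages: Array<{ role: string; content: string }>,
  onToken: (token: string) => void,
): Promise<void> {
  const response = await fetch("https://example.com/api/inference", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ messages }),
  });
  if (!response.body) throw new Error("No response stream");

  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    onToken(decoder.decode(value, { stream: true })); // render incrementally
  }
}
```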
The GPUs that process your inference requests come from multiple decentralized providers, and while each specific server can see the text of one specific conversation, it never sees your entire history, nor does it know your identity.
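On the proxy side, one plausible shape of that design, offered as an illustration rather than Venice’s actual implementation, is a handler that picks a provider per request and forwards only the conversation payload, deliberately dropping anything that identifies the user:

```typescript
// Sketch (server-side handler): the proxy forwards only the conversation
// payload to one of several decentralized GPU providers, chosen per
// request, and drops everything that identifies the user (IP address,
// email, cookies, session tokens). Provider URLs are illustrative.
const PROVIDERS = [
  "https://gpu-provider-a.example.com",
  "https://gpu-provider-b.example.com",
];

async function proxyInference(conversationBody: string): Promise<Response> {
  // A provider is chosen per request, so no single provider sees a
  // user's accumulated history.
  const provider = PROVIDERS[Math.floor(Math.random() * PROVIDERS.length)];

  // Note what is absent: no client IP, no auth headers, no cookies.
  // Only the conversation text travels onward.
  return fetch(`${provider}/inference`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: conversationBody,
  });
}
```

Per-request provider selection is what keeps any single provider from accumulating a user’s history over time.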
The result:
What Venice knows: Your email and IP address, but not your conversation.
What the compute provider knows: A specific conversation, but not your email or IP address, and it can’t associate specific conversations with specific users.
Perfect privacy will only be achievable with fully homomorphic encryption (FHE), which we’ll get to, or by running models locally (go for it). But today, we believe Venice’s architecture is materially superior to any hosted AI service if you don’t want to be surveilled or censored.
Learn more about Venice’s privacy architecture here.
And it turns out that when you don’t bolt a bunch of spyware and logging onto your users’ conversations, the app is fast.
Privacy is our first USP. Now let’s talk about the second: freedom from censorship.
Every person who has used the leading AI apps has observed the weird, creepy, paternalistic censorship, and it’s getting worse. Are you interacting with AI, or with a multi-billion dollar bias simulator?
Ask an AI for a dirty joke and it refuses because it doesn’t want to offend…
There is a strange double-standard today, where anyone can go online and search any topic, no matter how grotesque, and see relatively unfiltered results. And yet if you use AI to search for even a dirty joke, you’re softly scolded and directed into a “more socially positive direction.”
If you’re easily offended and want guardrails on your AI app, that’s fine, those are your preferences. But let’s not place everyone in the lowest common denominator of sensitivity training.
Jokes aside, what about more important issues? When censorship is explicit with black redacted bars, that’s one thing, but what if it’s infused in subtle ways within the sinews of your information? Do you want Biden’s administration to govern what AI tells you? Do you want Trump’s? Will you ever even know it’s there?
When the bars of your cage are iron, they’re injurious.
When invisible, they’re insidious.
For those who prefer dynamism and vitality, unconstrained speech is a prerequisite. A world where your AI app tells you what to think while passing it off as objective machine intelligence… well, that seems a little dystopian.
We don’t believe the thoughts you develop in your mind are our business to regulate and censor. It follows that we don’t believe the thoughts you develop with the help of a machine mind are our business to regulate and censor, either.
The solution is not to force tech companies to act in certain ways. The solution is to build alternatives and humbly offer them.
Venice respects you as a sovereign individual, and believes privacy and free speech are not only human rights, but are necessary for civilizational advancement. Even passive surveillance has a demonstrable chilling effect on thoughts and action. As the minds of humans and machines merge, it should be up to you, not the State or a tech company, to define the contours of this relationship.
This ethos can only be delivered on a foundation of open, permissionless architecture.
True for religion.
True for language.
True for money.
True for mind… even if it’s artificial.
And so, we present Venice
ad intellectum infinitum