In the first part I introduced the idea of an anti-Google and what that might look like.
It seems to me that the average person desires a breath of fresh air in "consumer" technology. Many people want something fundamentally better but find it difficult to articulate exactly what that is.
Using Google as the reference, I am introducing a model from which such an alternative can develop: the anti-Google. What is Google doing? How is Google structured? Now, what is the inverse? This sets us on the right course.
The general security model of Google (and nearly every other company) is to prevent outside access to any internal information, whether personal information of its users, its infrastructure and software, or its internal procedures. Every regular user gets roughly the same level of security and the same level of privacy - privacy from each other and from hackers, but not from Google.
As for internal policies and practices, it's a mystery as to which employees, contractors, or affiliates can see what, for what purpose, and what is actually done with personal information (which includes activity tracking). Regardless of policies and claims, there's no way to verify any of it yourself.
The part about privacy from each other and hackers needs no inversion, but everything else does. There should be no internal access to a user's personal information, as much as that is possible (and up to each user). Security policies, practices, and implementation are to be exposed and regularly verified by both internal and third-party testing.
Most significantly, each user gets an option to raise or lower the security level of their personal information independently of anyone else. The base level of security for each person would be at least industry-standard (and made as strong as possible). From there, anyone whose risk threshold was reached could choose a higher level of security. How? That's next.
I'll make a wild assumption that Google's software is built for efficiency: large databases containing the combined information of many (or all) users. This is normal and usually a great idea. Similarly normal is the centralized control over their free apps, nearly all of which depend on remote servers.
However, focusing on efficiency like that reduces the potential value to the individual.
The inverse architecture is to provide a standalone "app + data" container for each user. Think docker or a virtual machine hosting the app server, background processes, and database. This container controls access to the user's data and performs all serving and processing. Very inefficient, no doubt, but it improves overall security and individual privacy while enabling per-user variation.
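As a sketch of what provisioning such a per-user container might look like, assuming a Docker-based deployment (the image name, volume path, container naming, and port scheme here are all hypothetical, not an actual Benome implementation):

```python
def container_command(user_id: str, port: int) -> list[str]:
    """Build the `docker run` command for one user's private container,
    which bundles the app server, background processes, and database."""
    volume = f"/srv/benome/{user_id}/data"      # per-user data volume (hypothetical path)
    return [
        "docker", "run", "-d",
        "--name", f"benome-{user_id}",          # one container per user
        "-v", f"{volume}:/app/data",            # the user's data never leaves their volume
        "-p", f"{port}:8080",                   # each container serves on its own port
        "benome/app:latest",                    # hypothetical application image
    ]

cmd = container_command("alice", 9001)
print(" ".join(cmd))
```

Because each container is an ordinary, self-contained unit, it can be scheduled on corporate infrastructure, a rented VPS, or a machine in the user's home without changing the application itself.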
Many common attacks (SQL injection, weak input validation, and the like) lose most of their payoff once the large combined databases are gone: a single vulnerability no longer exposes everyone's information, only one user's. By encouraging variation amongst the containers, one exploit stops working against every instance at once. And by encouraging each user to take an active interest in exactly where and how these containers are hosted, the cost of any attack rises significantly.
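A minimal sketch of that isolation property, using one SQLite file per user as a stand-in for the per-container database (the schema and names are illustrative assumptions):

```python
import os
import sqlite3
import tempfile

def open_user_db(root: str, user_id: str) -> sqlite3.Connection:
    """Each container holds one private database file. A query that is
    compromised or injected can only ever reach this one user's rows."""
    conn = sqlite3.connect(os.path.join(root, f"{user_id}.db"))
    conn.execute("CREATE TABLE IF NOT EXISTS notes (body TEXT)")
    return conn

root = tempfile.mkdtemp()
alice = open_user_db(root, "alice")
bob = open_user_db(root, "bob")

alice.execute("INSERT INTO notes VALUES ('alice-private')")
alice.commit()

# Even a fully compromised query running in bob's container leaks nothing
# from anyone else - bob's database simply does not contain alice's data:
leaked = bob.execute("SELECT body FROM notes").fetchall()
print(leaked)   # []
```

Contrast this with a shared multi-tenant table, where one injected query can walk every user's rows at once.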
Each container can be hosted anywhere and moved around as needed. Even if a container is hosted by a 3rd party service, the expectation is that nobody else is authorized to peek inside (while understanding that anything is possible with physical or root access). In this way, decentralization can be rewarded and encouraged without being the initial state. Forced decentralization tends to hamstring adoption.
Let's just say that Google is not an open book. Usually that's a wise business move.
But trust is limited without openness. What is happening now? What's coming next? Without openness, there is no way to know. And with no way to know, there is neither a way to verify nor any control. The power dynamic is made crystal clear: Google knows best. Given the information, talent, and resources they possess, that may well be true. But only if their interests align perfectly with yours.
This can be inverted.
Open source code, open software stack, open architecture and infrastructure, open security strategy and implementation. All of it freely available except the keys and the people, which had better be robust.
Open security may be a bit nerve-wracking at first, but when it's early there's little to lose and much to gain from a bit of "ad hoc testing", also known as getting hacked. By being open about the openness and the consequences of it, appropriate expectations can be set for everyone from the start.
In any case, openness sends the message that providing the highest possible value to each individual user is the top priority. The power dynamic becomes similarly clear: You know best.
Google generates the vast majority of its revenue from ads, fueled by the collection and processing of personal information from its various free apps and services.
The basic inverse here is to monetize directly: trust must be engendered, there is no need to access personal information, and little need to track user activity. Given such a resource-inefficient software architecture, a subscription model makes sense.
However, rather than the usual multi-tier freemium model, a twist. Instead of the free version being arbitrarily limited, it is the best possible version. And instead of upselling toward a high-priced top tier, upsell and encourage adoption of the free version because it offers the highest possible value. No cynical manipulation either: this is a good outcome for all.
Still, many people won't want the free version as it requires effort and responsibility. To get it done perfectly, you have to do it yourself. Otherwise you get to pay.
Think back to when Google began roughly fifteen years ago - a superior search engine with a simple front page. Compare to now. What direction has it taken?
Generally, it's become an ad network with a voracious appetite for information of ever higher quality that somehow does not find its way onto a search result page. Broadly speaking, it has consistently put its own interests ahead of providing value to its users. You might point to all of the free apps and services they offer and you'd be right to say there is value. It's just nowhere near the maximum and clearly not the priority, as this inversion exercise illuminates.
It strikes me that Google has empowered itself at the expense of value to its users, who some would say are the product. Control, privacy, and trust are extremely valuable. Ownership of information is valuable, as is independence. But when looking at the direction Google has taken, these important qualities have been consistently eroded or ignored.
The inversion of Google's direction is to focus on raising those very important qualities over time. Then prove that's what is happening by tracking those qualities directly instead of tracking user activity. Use legitimately available information to improve the overall experience and value rather than degrading it through ads and trackers.
Adding It All Up
Google:

- Forced centralization
- Corporate control over all infrastructure
- Toward greater corporate power and less privacy
- Mixed data, shared services
- Essentially closed
- Free of charge

The anti-Google:

- User-driven decentralization
- User control over container location, level of security, privacy, ownership, trust
- Toward greater individual power and more privacy
- Per-user databases and services
- Essentially open
- Highest value is completely free; everything lesser has a direct cost
From my perspective, these qualities naturally cluster together. Each side breaks down when tainted by elements of the other, making each an all-or-nothing deal. Either fully commit to being the anti-Google and keep pushing hard in that direction, or else give it all up and take the path well travelled. Without a continual hard push against the Google-like grain the tendency is to gravitate toward the status quo.
I believe the average person is seeking an oasis amongst the harshness of modern software. That search is fulfilled by the anti-Google; my take on it is Benome.
To be continued.