Ethics of habit-forming products
Posted on Sep 14, 2020
A new Netflix documentary, "The Social Dilemma", explores how social media platforms exploit addiction and personal data to increase ad revenue. Even though the documentary makes quite bold claims about the consequences, much of it rings true, and it applies to the whole software industry rather than just one group of companies.
Sophisticated algorithms and AI are commonly used by tech companies for user profiling. They not only predict and adapt to what users want, but are also capable of manipulating how users think and behave. Designers call these habit-forming products. Software does this, for example, by serving certain recommendations or rearranging content in favour of catchier items. It might also open rabbit holes that test how users react to different content.
This is where it might get twisted. If a user clicks into a rabbit hole, the whole experience is refreshed with content related to that new catchy topic, making it loom larger in the user's mind. The seed has been planted, and the refined profile can later be used to pull the same user deeper and deeper into similar content, no matter how irrelevant or even harmful that content might be.
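The feedback loop described above can be illustrated with a toy sketch: one click boosts a topic's weight in the user's profile, and the next feed is ranked by those weights, so the catchy topic floats to the top. All class and variable names here are hypothetical, invented purely for illustration; real recommender systems are vastly more complex.

```python
from collections import defaultdict


class ToyFeedRanker:
    """Toy engagement-weighted feed ranker (illustrative only).

    Each click boosts the clicked item's topic in the user's profile,
    so future feeds drift toward whatever was clicked: the rabbit-hole
    feedback loop in miniature.
    """

    def __init__(self, boost: float = 1.0):
        self.boost = boost
        self.profile = defaultdict(float)  # topic -> learned weight

    def record_click(self, topic: str) -> None:
        # Engagement is the only signal: a click makes the topic "catchier".
        self.profile[topic] += self.boost

    def rank(self, items: list[tuple[str, str]]) -> list[str]:
        # items are (title, topic) pairs; sort by learned topic weight.
        ranked = sorted(items, key=lambda it: self.profile[it[1]], reverse=True)
        return [title for title, _topic in ranked]


ranker = ToyFeedRanker()
feed = [("Gardening tips", "hobby"), ("Shocking claim!", "conspiracy")]
ranker.record_click("conspiracy")  # a single click on catchy content...
print(ranker.rank(feed))           # ...and the feed reorders around it
```

Note there is no notion of truth or user well-being anywhere in the loop; the ranker optimizes engagement alone, which is exactly why the seed, once planted, keeps growing.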
This applies not only to social media feeds but also to news sites, discussion forums, search engines, e-commerce and many other platforms. The goal is to make users habituated, which is not necessarily a bad thing. But as a side effect it can also drive people further apart, as AI pushes them toward ever more extreme content in pursuit of clicks.
Everyone, even the most seasoned product designers who know these tricks, lives in a small information bubble created by algorithms. These bubbles lend an illusion of credibility to opinions that would otherwise get no attention for being so fake or far from reality. Fake news spreads roughly six times faster on social media than truthful news. That's good for ad campaigns, but bad for society.
All of this would actually be easier to understand if there were a hidden agenda behind it, perhaps a political conspiracy by someone who wants to rule the world. But the truth is rather boring: the algorithms and AI tools that tech enterprises build do not really care about our political opinions, as long as they can be matched with ads.
It would be wrong to blame only the tech giants for turning their data into something like this. The problem took root across the whole software industry a long time ago. The book "Hooked: How to Build Habit-Forming Products" is still loved by designers and product managers. Where some see a threat, others see an opportunity.
Many product designers share these ideas, but smaller companies do not have the kind of data the tech giants have to build habit-forming products at scale. With smaller data sets and user groups you can train an AI to adapt to a user's thinking, but you cannot train it to manipulate users in a way that becomes massively harmful. For that you need to be a tech giant with a certain level of hegemony.
In this kind of world, data privacy matters more than ever. That's one reason why, for example, Europe is working hard on the GDPR and other data privacy initiatives. To many of us this looks like a somewhat desperate fight against windmills. But thanks to recent developments, most companies in the EU, and a growing number in the US, are now carefully evaluating where they store data and with whom they share it. By keeping data under better control, we also make it harder for anyone to misuse it.
New tools, algorithms and AI are here to stay, and they are reshaping the world fast. We, the product innovators, are on the front line of this change, so let's remember that there are good reasons to keep ethics in mind and to respect end users' data privacy. We are here to build a better future, not to tear societies apart.