I've spoken about how quiet the internet really is, how it's not noisy like a cocktail party. This is mostly because the current model of transmitting and receiving quanta is point-to-point: the receiver selects the streams of quanta they give attention to. In the first world (analogue) we actually have a mass of jelly in our heads that rides the noise and is extremely adept at pulling out the signal as necessary.
I want to add one more aspect to this, one that I think the second world (digital) is better at: format conversion.
I've been mentioning 'streams of quanta'. The term 'quantum' comes from physics, where it means a packet of energy, usually in relation to elementary particles and their behavior. In my case, I am using it as a unit of information, the smallest piece in the stream that is enough to communicate something.
In the first world, this could be our name, a tune, a smell: the thing we interpret in the stream. We also do format changes based on the sense used to receive that stream of quanta. A good example is turning visual quanta into physical quanta, such as printed text into braille.
We can do this much more easily in the second world. For example, André could have been following his friends' playlists via a visualization of the music alone, rather than a text list. And how many people now follow their Twitter streams via their phone's vibration or message tone?
Cultural reference: The Matrix. Cypher and Neo watching the raining green screen, the only way to really follow the noise of the Matrix, to see what is what.
My vision of an Interquantum Translator converts streams of quanta into other streams of quanta.
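As a toy sketch of what such a translator might do, here is one way to convert a stream of text quanta into a stream of audio quanta. Everything here is illustrative and assumed, not a real implementation: the event fields, the tone mapping, and the idea that each kind of quantum gets its own distinct sound so the stream can be followed as background noise rather than read.

```python
from dataclasses import dataclass

@dataclass
class Quantum:
    source: str   # where the stream comes from, e.g. "twitter" (illustrative)
    kind: str     # what sort of quantum it is, e.g. "mention", "reply"
    payload: str  # the raw content of the quantum

# One possible mapping: each kind of quantum becomes a distinct tone,
# so a stream can sit in the background like cocktail-party chatter.
# Labels and frequencies are made up for the sketch.
TONES = {
    "mention": ("high-beep", 880),
    "reply":   ("mid-beep", 440),
    "status":  ("low-hum", 220),
}

def translate(stream):
    """Convert text quanta into (label, frequency) audio quanta."""
    return [TONES.get(q.kind, ("tick", 110)) for q in stream]

stream = [
    Quantum("twitter", "mention", "@you great post!"),
    Quantum("twitter", "status", "lunch time"),
]
print(translate(stream))  # [('high-beep', 880), ('low-hum', 220)]
```

The point of the sketch is only the shape of the idea: the translator never interprets the payload, it just re-renders the stream in a format our jelly is better at filtering.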
Could this then be a key to being able to follow the noisy internet? Text is slightly hard to scan, but what about sound or visuals - both of which our brain is much better at sorting out?
This reminds me of two stories:
- Joi Ito using the voice channel of his WoW guild as a sort of background tribal chatter
- Caterina Fake mentioning the difference in scannability of photos versus video or other media types.
So my questions remain:
- How do we make the internet one big cocktail party, make all our apps noisy?
- Which leads me to ask, how do we then make it easy to pull the signal from the noise?
- Which leads me to ask, how can we, by manipulating the quanta, make use of millennia of refinement and use our own brains to filter out the signal?