Intro

Have you ever participated in an argument where countless points raised by both sides change no one's position? Each successive attempt to convince your opponent makes you look deeper and deeper into what makes them think differently from you. Eventually, you may discover that some very basic aspect of perception differs. A good outcome would be one side (or both) figuring out the imperfections of their model (of the subject domain) and making corrections internally, resulting in a consensus. As you can see, an argument is a collision of different models, and the facts are shots fired from each side. A fact/shot can either be explained/deflected by the receiving model (shield?), or it hits hard, exposing the model's inconsistency. This, in turn, is my model of the argument process, and you are welcome to argue about it.
In the past, I treated facts as anchors binding our fuzzy reality to objectivity. Whenever I listened to someone, read the news, or watched a show, I would always search for the facts, extract them, and process them with my models of understanding. Consequently, anything based on belief, like religion or propaganda, made no sense to me. Lately, though, the focus of my perception has shifted from facts to models. Instead of matching facts against my models, I started constructing images of other people's models and evaluating them as a whole, using the facts at hand for quick verification. This is my new meta-model of perception, and I'd like to register it with this post. I'm going to analyze facts and models in general in an attempt to explain the meta-model to you.
Fact

Simple. One can state a fact in a single sentence, like "it was raining for 2 hours yesterday". It is universally understandable. It sits at a lower level of understanding, well below opinions and feelings, and is thus completely neutral in its message.
Manipulative. A single fact is harmless, but facts always come in bunches. Suppose I showed you another planet with life forms, and all 25 of the creatures you saw were white. Would that make you think all of them are white? What if I had selected them by hand precisely to make you think so? By filtering facts you can change the way people construct their models, and social media has been doing this for ages. One might object that such a filtered fact set is "incomplete", but what would qualify as complete here anyway? It's a tough question, and my answer is: the model defines completeness, which is part of the reason the model is so important.
You don't have to lie to steer people's minds. Next time you see (on TV, in a newspaper, etc.) angry people oppressed by the leaders of their resource-rich country, and supposing you consider it a fact (see the "Elusive" section), ask yourself: do these people represent the majority? Could you find angry people in your own country? Why are they not in the news, then? And so on.
Elusive. What is a fact, anyway? Was it raining for 1h58m yesterday, and where exactly was it raining? Maybe the rain clouds moved over during this time, maybe it was on and off, maybe it was merely reported as raining and nothing actually happened. You can dig into these details indefinitely, and there will always be some rough corners remaining. We don't really exchange facts; they are just little models approximating some experience, and not always valid ones. So you can't catch a fact; it's elusive. We often agree on facts because they are generated by a common model, common sense if you will, but that model can hardly claim to be more objective than any other model we exchange.
When you hear a report of a passenger plane being shot down by X, ask yourself one question: does it fit into your model of X, or is it part of the transmitter's model? What is the "fact" based on: a quality video of the event, or just the words of an involved party? What else could have happened that would be perceived the same way?
Useless. We can remember a fact and transmit it (a model of it, to be precise), but that's it. Actually learning from a fact involves something much more complex: a model, explained in the next section.
Model

Smooth and extendable. A model is what allows us to predict the future and fill gaps in our past. This is how the brain works: essentially, it constructs and manipulates models of the perceived world. Prediction is the sole purpose of our high-level neural system.
A historian, for example, reads the "facts" about German history and tries to construct a model in their head, including, if you will, a psychological profile of Hitler. Only once they have finished the model, checked that the facts match (or buried the ones that don't), exchanged the idea with other historians, and applied the political agenda, do they write a book explaining every little bit of what happened. The model often hides behind the plain facts: it's in their formation, their layout, and, of course, their selection (if you know a bigger subset).
Contagious. As mentioned before, we exchange models, not facts. Once you understand a model, it changes your perception of the future by offering predictions of it. Often, you'd want to adopt a model in order to live in a better future.
In the last century there were a few strong contagious models: capitalism, communism, and fascism. The first envisioned goods and freedom for everyone, the second equality and modesty, and the third prosperity at the cost of other nations. People fought for these ideas all over the world (WW2, Vietnam, Afghanistan, Korea, etc.), along the way constructing enormous mechanisms for spreading ideas far and wide (mass media, the culture of journalism, generations of kids educated in a biased way, etc.). Today we read history books written by the winners, and we listen to their voices, adjusted to keep zombifying us, only this time pointing at different threats.
Complex. A model is the essence of all the facts it explains. It bears the core complexity of the domain, which can be extremely valuable. It is often hard to explain a model to others, especially if it hasn't yet taken shape nicely in your own brain. These qualities make it important to always serialize the important models, which is what I'm doing here.