Mathematics of a Virus
Welcome to another episode of “Mediocrity and Madness”, my podcast about our daily balancing act between aspiration and reality at our workplace, about the ever-widening chasm between talk and truth! Here is another episode in English, and I have to start with a warning: math will play a role in this podcast. Actually, there is nothing to worry about! We’ll hardly go beyond high school level, but I know there is quite a significant number of people who become edgy at the mere mention of mathematics.
Fake Data Driven
What a pity! Aren’t we talking all the time about being data driven? How can you pretend to be data driven if the mere thought of using the most profound tool for making sense of these data – also known as “mathematics” – makes you break a sweat? Well, you might argue that you employ a bunch of fearless data analysts for that purpose, but does that really make things better? What are you going to task them with? There’s another reason for embracing mathematics: it’s just beautiful! Well, I admit, sometimes this beauty is a bit of a hidden beauty, and quite a part of it remains hidden to us mere mortals (as opposed to true mathematicians), but it is possible to grasp a spark of that beauty nevertheless.
Finally, I am sure that the sympathetic listener to this podcast doesn’t harbor such trepidations and maybe trusts its humble author to take her on a journey free of dungeons and dragons. Actually, this episode is in no way a lecture about mathematics as such, but a plea: a plea for trying to get to the bottom of things, a plea for using science to do so, a plea against shallowness and politicization – all the more so as neglecting the former and indulging in the latter is quite a habit these days.
The example we are going to use is as vicious as it is prominent these days. It’s “the virus”: COVID-19.
I may say that I was quite happy with the media coverage we had, at least at the height of “the crisis” – as far as it’s already possible to say this height is behind us. Suddenly, talk show guests were free to admit that there are some things they don’t know, that with hindsight decisions might look different, that there might not be an either-or, only shades of orange. Yet there was one thing I was disappointed by. I missed a proper treatment and interpretation of data.
Fuzziness is a Feature
I found and still find this surprising. Isn’t there a whole new sub-profession calling themselves “data journalists”? Don’t get me wrong. There always was and is data available. And there was and still is that fuss around doubling times and reproduction factors, but more often than not the handling of these was sloppy at best. Possibly the only exception is the New York Times, which can be amazing at turning data into insights … and visualizing them.
It begins with the numbers as such. What was that fuss about data differing between different sources, most notably – in Germany – those from the Robert Koch Institute (RKI, something like the German CDC) and the Johns Hopkins University? You could perceive a sense of national emergency if the RKI reported, say, 4,286 cases on a specific day whilst Johns Hopkins reported 4,642 new infections, based on their respective methods of counting. You could also sense some preference amongst journalists for the Johns Hopkins view. I suppose this was because they had the better user interface in the earlier stages, provided a global perspective and – most of all – managed to communicate the “superiority” of their approach, claiming to incorporate data from different sources. And their figures were always slightly higher, which probably stemmed from these additional sources and appears slightly favourable to the media, too. Well, and the Robert Koch Institute can hardly be praised for giving exciting press conferences.
Thus, it somehow boils down to a question of communication. I haven’t seen any analysis of these mysterious “additional sources” of JHU, but that is not the point at all. The point is: first, both figures are wrong anyway, and second, that doesn’t matter for all practical purposes. By the way, the discussion reminds me of this mysterious enrichment of company data from “other sources” that is supposed to bring step changes. Maybe we’ll come back to this later, but first let’s have a look at these two points.
Any Figure is Wrong. Any.
First, any figure is wrong. Any figure, not only these two. You just can’t give an exact count of infections on a daily basis. Well, you could if you had some sort of automated test and immediate evaluation of your whole population every day at – say – noon. Swab your cheek, put it into some device and there we go. But we aren’t there. Yet. And I don’t think that’s a desirable scenario anyway. But as long as we don’t have something like this, 4,286 is as good as 4,642. Actually, the true message is: “given our current approach to testing, we had about 4,400 identified cases yesterday”. There are reporting errors and delays and – most importantly – there are huge numbers of unreported cases anyway.
I would have truly preferred figures not being reported down to the last digit. It just creates such a wrong impression. On the other hand, I suppose it is that kind of pseudo-accuracy we as “consumers” ask for. Otherwise we would be tempted to claim: “They can’t even count cases!”, wouldn’t we?
At the same time, and that’s maybe even more important: the exact figures don’t matter anyway. It is not like anyone in the government or in the health system would do anything differently if the number were 4,462 instead of 4,286. The only thing that really counts on that level is trends. It is significant whether figures move from about 4,200 to about 4,000, 3,800, 3,500 and so on until we end up at about 400 a day, or whether they move from about 4,200 to about 20,000 within a few weeks. Even the daily view is of limited help only. Numbers fluctuate for many reasons (most notably, weekends have a huge effect) and the impact of measures is seen only weeks after they are taken. But we (the public) want something to chew on on a daily basis. By the way again: so do managers.
There is a lot to learn from this:
- Feigned accuracy doesn’t add value. In science, by the way, we make a big deal out of systematically assessing the margins of error of our measurements and how they propagate if we combine results. Fun fact: even machine computing comes with margins of error. Computers have to truncate any non-integer number somewhere. This is a margin of error as such. Perform the wrong sequence of operations with such numbers and the resulting margin of error goes totally crazy.
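You can see those machine margins of error with your own eyes in a couple of lines of Python – a toy illustration, nothing more:

```python
# 0.1 has no exact binary representation, so even the simplest
# arithmetic carries a tiny truncation error:
print(0.1 + 0.2)         # 0.30000000000000004 – not exactly 0.3
print(0.1 + 0.2 == 0.3)  # False

# Subtracting two nearly equal numbers ("catastrophic cancellation")
# blows the relative margin of error up dramatically:
a = 1.0000001
b = 1.0
print(a - b)             # close to, but not exactly, 1e-7
```

Harmless here, but chain enough of the wrong operations and these slivers of error compound.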
- The really important thing is trends. What’s important, for example, in the case of our virus is whether figures grow or fall and how quickly they do so. Of course, it depends on the question you ask, but more often than not, absolute accuracy is not necessary to determine the trend or the law of nature behind the data. Fuzzy data are quite often good enough. For identifying trends, by the way, you had better stick to one method of measurement. Switching methods in between can blur your analysis more than the better accuracy might help your cause. Note of caution: the situation is totally different if you want to determine the mass of an electron or the value of the cosmological constant. Actually, you should turn the problem around: first ask what question you want to answer, then determine the accuracy you need, then design your measurement.
- If cause and effect are linked through a significant time lag, you will need patience before you can evaluate your data properly. Short-term figures are meaningless. Say – just for the sake of argument – cases surge but you still boast low hospitalization figures. Better wait a couple of weeks!
Just another link back to business. Quite often, we struggle with dealing with fuzziness. After all, double-entry accounting is deeply engrained in our corporate DNA. Assets and liabilities add up to exactly the same value, in principle at least. Enter the world of predictive analytics and life looks different. Prediction comes with a margin of error. By default. Sometimes the customer won’t be inclined to buy the “next best product”. Sometimes she will buy it even though you calculated a low propensity for her to do so. The usual reflex is asking your data analyst to collect better data and refine her calculation. More often than not, though, the better approach might be asking: “What can I achieve with the information I have in my hands?”.
Enter the Math
Now – as I threatened in the beginning: a bit of math. Very simple, indeed. Say we have some data, for example numbers of positively tested people over time. You find excellent – though differing – data on these, for example, on the websites of Johns Hopkins University or The New York Times. If you are interested in Germany, you can use the Robert Koch Institute data as well. The big question is: “What to make of these data?”. What we actually want to know is: “How fast does the virus spread?”. This gives us information about potential consequences like the number of sick people not being able to go to work or hospitalizations in relation to capacity etc. Based on these predictions, we then might make decisions and take action like contact limitations or the like. And of course, we also want to know how these measures will affect growth rates again so we know the metes and bounds.
There are different kinds of “fast”. From “quite slow” as in linear growth, to “relatively fast” as in a power law, to “exceedingly fast” as in exponential growth.
Let’s look at the linear growth scenario first. Linear growth is quite manageable. You have a constant number of cases per day. If you cumulate them, you will get a bar chart whose tops resemble something like a straight line. Don’t mind short term fluctuations. Numbers of – once – infected people will grow steadily, the faster the growth, the steeper the line. Your average TV anchor will say with a sincere expression “the number of infections grew again”. But what’s the real message? Why is this kind of growth quite manageable?
Shades of Bad
At this point I have to bring up that “manageable” is anything but “good” or “OK” or “acceptable”. People will be sick. People will suffer from long-term effects. People will die. “Manageable” does not account for individual suffering, harm, loss. In this way it is an important discussion, whether simply “manageable” is a reasonable goal in the first place. I do not have an answer to this question. All we are discussing are shades of “bad”.
“Manageable” in the context of this podcast merely means “not necessarily breaking the systems” – the systems of health care and economics, that is – not breaking these systems by overflowing them in a way they can’t cope with. So, why is linear growth “manageable” in this sense? Well, say we are looking at hospitals. Then the limiting factor is the number of beds or personnel or ventilators. The respective load is determined by the number of people being sick at a given point in time. In order to calculate this number, we have to deduct the total number of people recovered from the total of infections at that point in time. In a linear scenario this load is – after a period of growth – constant. It equals the average number of days of being sick times the average number of new infections per day. As the latter is constant in our scenario, the product is constant as well.
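The arithmetic of that constant load is simple enough to sketch – with numbers that are, of course, entirely invented for illustration:

```python
# Back-of-envelope check of the constant load in a linear scenario.
# Both numbers are invented for illustration.
new_infections_per_day = 4_400   # constant daily new infections
avg_days_sick = 14               # assumed average duration of illness

# Steady-state load = average days sick x average new infections per day:
steady_state_load = new_infections_per_day * avg_days_sick
print(steady_state_load)         # 61,600 people sick at any given time

# Confirm it with a day-by-day simulation:
daily = []
for day in range(100):
    daily.append(new_infections_per_day)
    currently_sick = sum(daily[-avg_days_sick:])  # cohorts still sick
print(currently_sick)            # settles at the same 61,600
```

After the two-week ramp-up, the simulated load never moves again – exactly the product of the two averages.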
This by no means implies that the situation is an easy one. “Constant” can still be huge. A lot of people might be sick, and as the virus has a certain lethality rate, the number of deaths per day will be constant as well. And there is no end in sight for a long, long time, either. The only consolation this scenario holds is that the virus won’t outgrow capacity as soon as you have brought this capacity to the right level. Doctors, nurses, beds, protective gear, respirators should be in sufficient supply … at least as long as we do not have to cope with fatigue.
So much for this least bad scenario – a situation, by the way, that a number of countries have reached, most of them after a period of fiercely fighting back other growth scenarios. Let’s move to the opposite side of the growth speed range: the exponential scenario. Actually, the exponential scenario is the standard model for viral spread. It goes like this. One person infects a number of other persons within a given period of time. Each of these – again – infects a number of persons within a given period of time. Each of the newly infected infects a number of persons … and so on. Of course, neither the “period of time” each infected person is contagious nor the number of people she or he infects is the same in each case, but for the time being, we are interested in the most basic principles of the spread, so it is absolutely fair to work with averages. Say that period of time is one week and every person infects two others within that period.
Let’s Do Excel
The result is: exponential. You do not even need math to see what happens. You can use Excel, which in my private and very personal opinion is something akin to the very opposite of math. Put a figure into the top left cell (A1) of a blank spreadsheet – preferably the figure one. Then key into the cell below (A2): “=2*A1” (don’t key in the quotation marks!). You should find the figure two in the cell as a result. If your first one was a one, that is. Click on A2 and you will find it framed with a little square in the right bottom corner. Grab that square and drag it down the first column as far as you want and here you have it: the exponential.
Apologies for that very basic guide to using Excel. I am fully aware of the fact that the kind listener of this podcast most probably is an Excel pro, but I wanted to be precise regardless! Let’s have a look at the result. The lines indicate the number of weeks passed. By the way, you will recognize that it actually doesn’t matter whether these are actually weeks. It could be days or months or years all the same. The line’s number actually indicates an abstract number of periods passed, but thinking in terms of weeks doesn’t hurt. The figures show the new infections in the respective week. It starts slow. It takes ten weeks to go from one case to 500. Then it picks up a bit. After another ten weeks, we’re already above 500,000. Well, that’s the equivalent of one bigger city. But wait another ten weeks and we are at 500 million cases. A few days later, we arrive at the equivalent of the population of Europe. Four weeks later, in week 34, every single person on the planet would have been infected following that development. It wouldn’t work like this towards the later stages. We will discuss that in a bit. But it is mind-boggling anyway.
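If you would rather not drag spreadsheet cells, the same column can be produced in a few lines of Python (the weekly doubling and the single starting case are just the assumptions from above):

```python
# The Excel column in a few lines: new infections double every week,
# starting from a single case in week one.
for week in range(1, 35):
    new_infections = 2 ** (week - 1)
    if week in (10, 20, 30, 34):
        # week 10: 512, week 20: 524,288,
        # week 30: 536,870,912, week 34: ~8.6 billion
        print(f"week {week}: {new_infections:,} new infections")
```

Rows 10, 20, 30 and 34 are precisely the milestones described above: a village, a bigger city, Europe-scale, the whole planet.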
And it is counter-intuitive. For ten weeks almost nothing happens. 512 new infections in a country of – say – 80 million. That’s but a drop in an ocean. Ten weeks later, well, you have some sort of issue, but still: 500,000 out of 80 million …? Then, it explodes. BAM! Only eight weeks later, every single person in your country is or has been sick. Maybe 1% of your population, 800,000 people, is dead or dying.
The Biggest Deficiency of Mankind
Someone once said that the biggest deficiency of mankind is our lack of understanding of the exponential curve. I believe there are other deficiencies, but developing an intuition for the exponential curve is very hard indeed. It starts so deceptively slow and then, all of a sudden, it blows up in your face.
At least if you look at absolute numbers, that is. If you look at growth rates, there is actually no surprise at all. Look at your Excel. Figures double every week. They grow by a factor of a thousand every ten weeks. The calculation is more than simple. A few clicks in Excel. Yet we are mostly accustomed to linear developments. One step at a time, not: one today, two tomorrow, four the day after and 500 million steps in a month. Linearity is widespread. If four workmen need a month to lay bricks for a house, how long would it take eight workmen? Or twelve? By the way: it doesn’t work like this in software development, but that’s a completely different story.
In an exponential scenario, the time lag is treacherous. Ten weeks in the early phase translate into minutes another ten weeks later and not even a second another ten weeks on. This is the reason why a) exponential developments can easily be underrated and b) need decisive and big scale intervention if you want to stand a chance fighting them.
From a mathematician’s perspective, exponential functions like the one we simulated with our Excel sheet have a specific beauty. Almost everything you do with them maintains their exponential character. Their derivatives (the slope of the curve) as well as their integrals (the area below an exponential curve) have the same exponential characteristics as the original function. If, instead of the 2 in our example, we used the so-called “Euler’s number” e, which is roughly 2.72 but cannot be written down as a finite or repeating decimal, the value of our exponential function (y equals e to the power of x) would be exactly the value of its derivative. That’s really beautiful, isn’t it? That’s also why the exponential function with Euler’s number as its base is called the “natural” exponential function. So much for what mathematicians regard as “natural”.
Introduce complex numbers, that is, an entity called the “imaginary” number i, whose square is supposed to be minus one, and we go from beautiful to magic. Using an imaginary number, say i times x, as the argument of our “natural” exponential function shows that this function can be composed as a sum of the trigonometric functions sine and cosine. Rummage around in your memory. Sine and cosine are these wavy lines, oscillating between plus and minus one. It’s quite a fascinating leap from these wiggly and limited lines to that explosively and limitlessly growing exponential. It all culminates in the expression “e to the power of i times pi equals minus one”, bringing together Euler’s number and the likewise fundamental pi in one equation with minus one and its square root. Magic.
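You can even let the machine check the magic numerically – up to, well, the floating-point margins of error discussed earlier:

```python
import cmath
import math

# Euler's identity: e^(i*pi) = -1, up to a sliver of rounding error
# in the imaginary part.
z = cmath.exp(1j * math.pi)
print(z)

# Euler's formula: e^(i*x) = cos(x) + i*sin(x), for any x.
x = 0.7
w = cmath.exp(1j * x)
print(abs(w.real - math.cos(x)), abs(w.imag - math.sin(x)))  # both ~0
```

The imaginary residue you see in the first print is exactly the kind of machine margin of error mentioned above, not a flaw in the mathematics.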
I have to stop here. I promised. Back to the virus. Or maybe not immediately. There is another class of growth functions, somewhere in between linear growth and exponential growth: power functions. Here, the cumulative number of infections follows a law of the type “number of periods” to the power of a, with a being an arbitrary number, for example two or five or ten or one hundred. You can try Excel again. The respective formula is for example ROW()^2 or ^5 or ^10 or ^100. Put this into the second column, next to your exponential function, and you can start experimenting. For the mathematically savvy amongst the listeners I’d like to add that the linear function and even the flat horizontal line can of course be classified as power laws, too, but let’s not make too much out of this here.
Experimenting, you will probably see that with the smaller exponents, power functions are dwarfed by the exponential. If you move to greater powers – what a pun – though, it might appear that the power function wins over the exponential. But that’s an illusion. Just extend the number of rows and the exponential will catch up quickly and dwarf the power law again. If Excel can manage numbers that big, that is. Regardless, in the end the exponential will always, always make the power function look minuscule. That’s the nature of the exponential.
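If you prefer certainty over dragging cells: Python’s integers have arbitrary precision, so you can locate the point where the exponential overtakes even n to the power of one hundred (the exponent 100 is, again, just our example from above):

```python
# Find the last crossover between n^100 and 2^n. Beyond roughly
# n = 145 the ratio 2^n / n^100 only grows, so once the exponential
# is ahead, it stays ahead forever.
n = 2
while n ** 100 > 2 ** n:
    n += 1
print(n)  # just below 1000 – from here on, 2^n wins for good
```

No overflow, no illusion: the spreadsheet simply runs out of rows before the truth emerges.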
We will get back to the virus in a minute, just another detour. The difference in the speed of growth between exponential and power (or polynomial) functions is utterly important in computer science, too. Computer scientists define the complexity of a problem by how the computation time for solving the problem by some algorithm scales with the “size” of the problem. If that time scales exponentially, the problem is really a toughie.
Computing power itself follows an exponential law, the so-called “Moore’s law”. It isn’t exactly a law, but a correlation that has held for several decades now. Moore’s law says that computing power per square inch of a chip, or per dollar of cost, doubles roughly every two years. Read the “period of time” in your Excel as two years and assume that the first microprocessors were developed around 1970; then you will see that in the 50 years since, computing power has increased by a factor of 3,300,000,000 percent. Wait another ten years and that factor may well be 100,000,000,000 percent. Impressive as this growth may be, you will hardly beat a problem that scales exponentially this way. You simply tune up the “size” of that problem and even the latest generation of supercomputers won’t be faster in solving it than their predecessors were with some smaller-sized version.
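Those percentages are nothing but our Excel doubling in disguise – two lines of Python, under the stated assumptions of a 1970 start and a doubling every two years:

```python
# 1970 to 2020: 50 years at one doubling per two years = 25 doublings.
doublings = (2020 - 1970) // 2
print(f"{2 ** doublings * 100:,} percent")        # ~3.3 billion percent

# Ten more years add five more doublings:
print(f"{2 ** (doublings + 5) * 100:,} percent")  # ~100 billion percent
```

Twenty-five doublings are a factor of about 33.5 million, i.e. roughly 3.3 billion percent – the figure quoted above.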
One problem of that sort is integer factorization, factorizing numbers into their prime factors, like in “91 is the arithmetic product of 7 and 13”. The “size” of the problem is determined by the length – the number of digits – of the number you want to factorize. All the algorithms we have today for solving that problem scale exponentially, or at least far worse than any power law. This is the reason why encryption based on this method still works. Certain elements of blockchains also make use of this principle.
On the other hand, there is the class of problems and their algorithms that basically scale by power laws. These are the easy ones. It may take a while – like in your Excel sheet – but ultimately Moore’s law will conquer these problems, once and forever. There are shades, but in essence, the world is pretty black and white from a computer science complexity perspective. If it scales by a power function: easy-peasy. If it scales exponentially: tough cookie. Interestingly, for some problems in the latter class – tough cookie – verifying a result is in the former one – easy-peasy. Factorization, for instance. It is utterly time-consuming to find the prime factors of a given, sufficiently large number, but it is a piece of cake to verify whether a set of figures constitutes the factorization of a certain number. – Just multiply!
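A sketch of that asymmetry, hedged as ever: trial division is only the most naive of factorization algorithms, but it illustrates the point – finding factors is laborious, verifying them is a single chain of multiplications.

```python
from math import isqrt

def factorize(n: int) -> list:
    """Naive trial division - fine for small numbers, hopeless for
    the hundreds-of-digits numbers used in encryption."""
    factors = []
    d = 2
    while d <= isqrt(n):
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)   # whatever remains is itself prime
    return factors

def verify(n: int, factors: list) -> bool:
    """Verification is just multiplication."""
    product = 1
    for f in factors:
        product *= f
    return product == n

print(factorize(91))         # [7, 13]
print(verify(91, [7, 13]))   # True
```

Swap in a 300-digit number and `factorize` would outlive the universe, while `verify` would still answer instantly.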
P vs. NP
This brings us to one of the most chased problems of mathematics, and for a brief moment, we leave the realm of high school mathematics. The problem is in the same category of renown amongst mathematicians as, for example, Fermat’s last theorem, except for the fact that Fermat’s theorem has finally been proved, whilst the one I am going to sketch now is still neither proven nor disproven. You can actually win a million dollars by doing either. The problem is called the “P versus NP problem”. We have already laid all the ground for understanding the question. “P” is the abbreviation for the class of all problems that can be solved by an algorithm scaling according to a power law. “NP” is the abbreviation for the class of all problems whose solutions can be verified by an algorithm scaling according to a power law. The simple but million-dollar question now is: Does P equal NP? Or: Can all problems that can be verified in polynomial time be solved in polynomial time, too?
If the answer were “yes”, this would for example imply that there is an algorithm for factorization that is in the easy-peasy category. Just a pity that nobody has found such an algorithm yet. Here you can grasp the gist of mathematicians’ mathematics. The mere fact that over decades no one has found such an algorithm proves nothing. In fact, it could be possible to prove that P=NP without creating a blueprint for developing such algorithms. This would be a bit humiliating, wouldn’t it? We would know that there is an easy-peasy solution; we are just too stupid to find it.
Thus, we might tend to believe that P does not equal NP, that there are indeed problems that are tough to solve but whose solutions can be easily verified. Proving that would seem relatively straightforward. Just take one such problem, factorization for example, and prove that there is no conceivable easy-peasy algorithm. Done. But how do you grasp “every conceivable algorithm” in a way that fulfils mathematics’ aspiration of watertightness?
Well, if I have sparked your ambition now, you might want to delve deeper and chase the million. If you believe – as I do – that there are simpler ways of earning money, I guess it’s time to summarize what we might have learnt on this detour into the realms of computer science and mathematics:
- Problems that scale following power laws are easy-peasy. Moore’s law will let us conquer them sooner rather than later.
- Problems that scale exponentially are tough cookies. I was somehow inclined to say these problems were “bad”, but actually we should be grateful for having at least a few of them, so we can do, for example, encryption or blockchains.
- There are still unsolved mathematical problems out there in the league of Fermat’s last theorem. The P-NP problem is one of them. You can actually earn some money and fame solving one of them, but for most of us mortals, there will be more realistic ways to do so.
The second of these insights might be the one to bear in mind as a general angle from which to look at the world: as long as it is exponential, control is an illusion. That can be doom, as in the case of a spiraling plague; it can also be some sort of blessing, as in the case of computing power growing according to Moore’s law. There is even an undecided case in terms of doom versus blessing: the development of Artificial Intelligence.
This raises an interesting question about humankind: if exponentiality is such a dramatic force, why haven’t we developed a better intuitive understanding of it? After all, our reptile brains have developed quite some instinctive behavior for threats to our existence. One answer might be that truly out-of-control exponential growth is the exception rather than the rule in nature. True, exponential growth is the standard model for viral spread, but usually there are other forces at work, too.
One famous example is predator-prey systems. These are systems with a “predator” population that would actually grow exponentially given an abundance of “prey” to feed on, and a “prey” population that also bears the potential of exponential growth in the absence of those vicious predators feeding on them. In order to avoid dreams of vicious carnage, imagine panda bears as the predators and bamboo as the poor prey. Pandas guzzle up to 18 kilos of bamboo per day. You can easily grasp what happens if you start with a certain population of pandas and bamboo. Say there is plenty of bamboo in the beginning; then pandas will reproduce as happily as they feed. Due to the exponential growth of the panda population, bamboo will be in dire straits and decimated to near extinction. Yet as the bamboo supply decreases, pandas will famish and at least lose interest in reproduction. Or starve. Thus, the panda population will decrease, and bamboo will recover and even surge while pandas are down. Slowly, but acceleratingly, the panda population will recover again, and the cycle begins anew. There are actually solutions, too, that are not that dynamic: the pretty sad and “trivial” one in which both pandas and bamboo cease to exist, and another one where both populations exist in a healthy and stable equilibrium.
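This cycle can be sketched with the classic Lotka-Volterra predator-prey equations – a crude simulation with entirely invented parameters, pandas and bamboo included:

```python
# A crude Euler-step simulation of the Lotka-Volterra predator-prey
# cycle. All parameters are invented for illustration.
alpha, beta = 1.0, 0.1    # bamboo growth rate / grazing pressure
delta, gamma = 0.05, 0.5  # panda reproduction / starvation rate
bamboo, pandas = 20.0, 5.0
dt = 0.001                # small time step for the simple scheme

peaks = 0
prev, curr = bamboo, bamboo
for _ in range(40_000):
    d_bamboo = (alpha * bamboo - beta * bamboo * pandas) * dt
    d_pandas = (delta * bamboo * pandas - gamma * pandas) * dt
    bamboo += d_bamboo
    pandas += d_pandas
    if prev < curr > bamboo:  # a local bamboo maximum: one full cycle
        peaks += 1
    prev, curr = curr, bamboo
print(f"bamboo peaks in the simulated window: {peaks}")
```

Several peaks appear in the simulated window: bamboo surges, pandas follow, bamboo crashes, pandas starve, and round it goes.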
For the Masters of Business Administration amongst the kind listeners: It’s all a question of demand and supply!
The standard viral dynamic is not much different. First, the virus surges with an abundance of people to be infected. As more and more people recover or die, they deplete the virus of potential victims. The spread slows and finally becomes a mere trickle as reproduction rates fall below one. This is what is called “herd immunity”. As you will have noticed, this is not the up-and-down behavior we observed in our panda vs. bamboo example. The reason is that we kind of assumed that, as opposed to bamboo, our virus’ “prey” will not regrow, i.e., that once a person has recovered from the virus, she will be immune for her whole life. Unfortunately, this doesn’t seem to be the case. Corona virus immunity appears to persist only a few months. Thus, we might very well be back to the tidal scenario of predator-prey systems.
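The standard model sketched here is usually called the SIR model (susceptible, infected, recovered). A crude day-by-day version with invented parameters shows how the spread throttles itself as the pool of susceptibles drains:

```python
# A crude, day-by-day SIR model (susceptible-infected-recovered)
# with invented parameters, showing how the spread throttles itself.
population = 1_000_000
susceptible, infected, recovered = population - 1.0, 1.0, 0.0
r0 = 2.5             # assumed basic reproduction number
recovery_rate = 0.1  # 10% of the infected recover per day
infection_rate = r0 * recovery_rate

for day in range(365):
    new_infections = infection_rate * infected * susceptible / population
    new_recoveries = recovery_rate * infected
    susceptible -= new_infections
    infected += new_infections - new_recoveries
    recovered += new_recoveries

ever_infected = (population - susceptible) / population
print(f"{ever_infected:.0%} of the population ever infected")  # ~90%
```

Note that even with lifelong immunity assumed, the epidemic burns through roughly nine tenths of the population before the reproduction rate drops below one – herd immunity is an endpoint, not a free pass.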
Beyond the Standard Model
Anyway, with current case numbers we’re in the single-digit percentages of total population even in badly hit countries and even considering unregistered cases. Thus, we are leagues away from any herd immunity effect. Yet if we look at the statistics, there hardly seems to be true exponential growth. So, what’s the reason? Actually, it is this question where I’d like to see more answers. Seemingly, there are some other factors at work.
First, testing. The mere fact that only a fraction of the population is tested at any given point in time somehow influences results. Not in the crude sense that more testing creates more cases, of course. If you had only a limited number of cases, then testing a bigger number of people would only bring the share of confirmed cases down. On the other hand, the number of tests you conduct definitely limits the absolute number of cases you can confirm. Thus, it may actually work the other way round. In addition, it matters whom you are testing. If you are testing only people with symptoms, your positivity rate should be higher than if you do broader-scale testing. If you test people potentially exposed to the virus, your rate should be higher than otherwise. And the other way around: if you are testing ever more people AND your rate of positive cases goes up, then you most probably are in trouble. Regardless, the analysis of the effects of testing regimes on quantifying the virus spread appears a bit underdeveloped.
At this point, I can’t resist another detour from purely statistical questions: testing. In order to start this detour with a tiny provocation, let me say this: there are advocates in some parts of the world arguing that one should limit current Corona virus testing activities, because more testing – quote – creates – unquote – more positive cases. I would say they are right in that you could well think of stopping testing in some places, but not for the reasons they have in mind. Here we go.
Testing, as any kind of measuring, actually serves two purposes. First, to give you information about how you are doing. More often than not, you can gain a proper understanding of your situation with somewhat limited data. Say you’re testing 50,000 people a day in a state of 20 million residents and your positivity rate is 20%; then the insight you get is: “We are in deep … erh … mud”. It actually doesn’t matter whether the mud reaches your hips, your belly or your chin. It’s too deep to move anyway. Well, in the latter case speaking might also become critical. It also doesn’t matter much whether you test another 50,000 people; the realization won’t change.
Head in the Sand
The second purpose of testing, or any kind of measuring, is directing action. This is actually the core of what we have as the strategy for fighting the virus as of today: testing, tracing, isolation. Find the people potentially infected and get them (self-)isolated in order to disrupt the chain of infection without big-scale general lockdowns. The problem is: this can work fairly well only as long as you are in the first rows of your Excel sheet. Even there, it is quite a challenge. For every person infected, you have to trace and potentially isolate dozens of contacts. If you go to the second-order contacts, we’re in the hundreds. If you aspire to trace more than direct contacts, like contacts in restaurants or on trains, we’re talking way higher numbers. Thus, testing as a tool makes sense only if a) you have a proper organization in place to trace and isolate and b) numbers are in a manageable range in the first place.
10,000 confirmed cases a day are already out of that range. Add to this the turnaround time between testing and delivering the results. If that turnaround time is significant, you’re still working on – say – line 14 of your spreadsheet, i.e., 8,192 cases, while reality is already at line 15 or 16: 32,768 new cases you still have no clue about. And as soon as you get to work on the latter, the real world is again two lines ahead: 131,072. You get the idea. At this point, you might actually stop testing altogether and change strategy massively. … Well, or bury your head in the sand, let everything spiral further out of control and pray that it miraculously goes away some day.
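That lag can be sketched in a few lines, assuming the spreadsheet picture of one doubling per row and a turnaround of two rows (both illustrative numbers, not epidemiological estimates):

```python
# A minimal sketch of the "Excel sheet" doubling model from the text:
# row n holds 2**(n-1) new cases, and a fixed test turnaround of two
# doubling periods means you always act on data two rows behind reality.

def cases_at_row(row: int) -> int:
    """New cases in a given spreadsheet row, doubling each row."""
    return 2 ** (row - 1)

def visible_vs_actual(current_row: int, turnaround_rows: int = 2):
    """What the test results show vs. what is really happening."""
    visible = cases_at_row(current_row - turnaround_rows)
    actual = cases_at_row(current_row)
    return visible, actual

if __name__ == "__main__":
    for row in (16, 18):
        visible, actual = visible_vs_actual(row)
        print(f"row {row}: acting on {visible:,} cases, reality is {actual:,}")
```

With a two-row turnaround, acting on the 8,192 cases of row 14 means reality is already at row 16 with 32,768; catch up to those and it has moved on to 131,072.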
Back to our statistical deliberations. One thing is for sure: the effects of testing regimes will be a bit complex. Way more complex than claiming “if we tested less, we’d have fewer cases”.
Also a bit more complex than the simple geometric progression reflected in our Excel sheet: one person infects two others, these infect four more, these infect eight, and so on. The actual chain of infection seems to work a bit differently. For a start, the virus needs some “path” to the next person. In order to infect another two people, you might have to meet – say – at least twenty people. And the contact has to be of a certain type, i.e., reasonably close and sustained over some period of time. If you don’t get into that sort of contact with that number of people, the path might be a dead end. From the virus’s perspective, that is.
Thus, the spread is determined by our social interactions, and these vary. If you commute to your workplace in a big city on a daily basis and work in a cramped and maybe even chilly interior space, the avenues for the virus are plenty. If you live in a rural area and meet only a limited circle of people on a regular basis, things might be very different. In addition, this specific type of virus appears a little – how should I say – selective. In our vicinity, we had an actual case, a medical doctor from our regional hospital. She and a whole group of her colleagues got infected. Of course, she quarantined. Together with her family, homeschooling and all. Surprisingly, none of her family contracted the virus. On the other hand, we have these examples from choirs and congregations and other spreader events. To me, it looks as if the virus spreads like one of those domino-toppling setups. At times, a whole area topples exponentially; then the spread trickles down to a single row of dominoes, toppling unhurriedly one after another before it reaches the next super-spreader area. This kind of behavior should leave marks on the mathematics of the spread dynamics.
Unfortunately, there is not much analysis available in that regard, at least not in the public domain. I would suppose that virologists have better models. After all, putting together a proper computer simulation should not be difficult at all. Well, there is also the slight chance that the models have simply been too simple until now, following the notion “if it’s viral, it’s exponential”. Time to improve.
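In that spirit, here is a minimal sketch of what such a simulation could look like: a branching process whose offspring numbers are drawn from a negative binomial distribution, where a small dispersion parameter k makes the spread “domino-like” – many chains fizzle out, a few explode. The values of r0 and k below are illustrative assumptions, not fitted estimates:

```python
import math
import random

def poisson(lam: float) -> int:
    """Poisson sample via Knuth's method (fine for small lam)."""
    if lam <= 0:
        return 0
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while p > limit:
        k += 1
        p *= random.random()
    return k - 1

def offspring(r0: float, k: float) -> int:
    """Secondary cases per case: a negative binomial drawn as a
    Gamma-Poisson mixture with mean r0 and dispersion k
    (small k = a few super-spreaders, many dead ends)."""
    return poisson(random.gammavariate(k, r0 / k))

def chain_size(r0: float, k: float, cap: int = 10_000) -> int:
    """Total cases in one chain started by a single infection."""
    total, active = 1, 1
    while active and total < cap:
        active = sum(offspring(r0, k) for _ in range(active))
        total += active
    return total

if __name__ == "__main__":
    random.seed(1)
    for k in (0.2, 10.0):
        sizes = [chain_size(2.5, k) for _ in range(300)]
        fizzled = sum(s < 10 for s in sizes) / len(sizes)
        print(f"dispersion k={k}: {fizzled:.0%} of chains fizzle out early")
```

With strong overdispersion (k around 0.2), most introductions die out on their own, exactly like the single row of dominoes; with near-Poisson spread (large k), most chains take off – the same average r0 producing very different dynamics.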
Before I conclude this podcast, I would have liked to pick up some loose ends, like indicators such as the reproduction number and doubling time, or the question whether even the countries that reacted decisively could have done so earlier, quicker and better. But I feel it is time to come to an end. Only this much with regard to that last question: I would not dare to cast the first stone. Not even with hindsight, which – as always – is too easy. Look at your Excel sheet one last time. In which row would you have begun locking down a country? Row five? Ten? Eleven? Fifteen?
Time to conclude.
Glad and Grateful
How to conclude a podcast that has meandered from coronavirus statistics via rather basic mathematics, even an Excel spreadsheet, to the heights of an unsolved problem of mathematics and computer science, to panda bears and predator-prey models, and finally back to the virus again, touching on the purpose and the intricacies of testing and tracing? Well, I’d like to conclude on a note I hardly imagined I would ever strike publicly. Here it is.
I am glad and grateful to live in Germany, a country governed by Angela Merkel, a woman and a scientist by training, a country where in times of crisis politicians of all provenances and genders (by the way, the male ones, too!) try to dig deep. At the same time, they acknowledge their responsibility for taking decisions against a background of uncertainty, complexity and ambiguity. I am glad to live in a country where federalism proves a strength and not a reflection of divisiveness. I am proud of my fellow countrywomen and countrymen of all ages, most of us showing rigour but equal ease in keeping a kind distance, wearing masks or queueing in alleys and on staircases. I am glad to witness Europe grappling with her next stage of evolution, and cautiously taking that step. Ah, well, and when I look out of my window, I am glad to be rooted in Bavaria, a state where the sky always appears a little bluer and the clouds a little whiter than elsewhere, and where “live and let live” is our motto … though it sometimes takes a while to see through our grumpy guise.
This was the latest edition of “Mediocrity and Madness”, my podcast about our daily balancing act between aspiration and reality at our workplace, about the ever-widening chasm between talk and truth!
Thank you for listening! Stay healthy! Try to get to the bottom of things! If you liked this podcast, feel free to recommend it to a friend.
Until next time …