Blue Tuesday: Is there too much work against Blue Monday?


This bear is leaving home because its owners believe that Blue Monday has a scientific origin. (Attribution)

Yesterday wasn’t Blue Monday. Or to use its full name, Blue Monday (A Normal Day Of The Year Which Was Rebranded Through Marketing With A False Veneer Of Misleading Science). Blue Monday (ANDOTYWWRTMWAFVOMS) became a “not a thing” as a result of holiday seller Sky Travel and public relations company Porter Novelli selling holidays and public relating. They invented a formula which supposedly calculates that the third Monday in January is the most depressing day of the year, and stuck what looks like a scientist on the front to complete its fancy-dress costume of a sexy fake science concept. Needless to say, the average mood of everyone is far too complex a thing to calculate with the simple equation being touted. Saying it can is a horrendous misrepresentation of the scientific method, human emotions and mental health. The added scientist, Cliff Arnall, is not a doctor or a professor of psychology. Or of anything. Saying he is is…

It’s difficult to argue with the success of the Blue Monday (ANDOTYWWRTMWAFVOMS) idea as a piece of marketing. On the day itself, the number of companies, including charities, that use the term to promote their products or causes is vast. With the general theme of spending money to improve your mood, Blue Monday (ANDOTYWWRTMWAFVOMS) is used to sell pretty much everything, be that the holidays it was designed to sell, cars, chocolate or financial advice. Perhaps more subtly, some groups have tried to re-purpose Blue Monday (I’ll stop now). They argue that while the supposed science might be a gargantuan heap o’ nonsense, it can still be a day to consider and support those who are unhappy. In addition, a lot of people have put a lot of work into explaining why, as a scientific concept, Blue Monday has the same credibility as half a brick with a picture of Dr Emmett Brown sneezed onto it by a guinea pig. So much so that the publication of pieces debunking the science of Blue Monday has become as much of a tradition as the shower of gaudy sadverts.


This dog is more scientific than the formula for Blue Monday. (Attribution).

For the last few years, I have gained the impression that the pieces attempting to counteract the Blue Monday information have become more common than the items using its selling power. If this were indeed the case, the main thing keeping Blue Monday alive would be the valiant efforts to kill it. This could be placed in the overlapping section of the Venn diagram of ironic things and bad things. However, whether this is the case is far from decided. While I have seen the same claim from others, my perception that anti Blue Monday work is more common than pro Blue Monday work is just that, a perception. Perceptions are at risk of bias.

Confirmation bias would mean that I might be interpreting information in a way that confirms my pre-existing beliefs. All the evidence I’ve seen shows that confirmation bias exists. The Baader-Meinhof phenomenon (or frequency illusion) would mean that something that’s recently been noticed by me suddenly seems to occur at a greatly increased rate. Once you’ve noticed the Baader-Meinhof phenomenon, you’ll start seeing it everywhere. Finally, the perception that anti Blue Monday work is more common than pro Blue Monday work might be the result of an echo chamber. I’m more likely to associate (digitally or in the great outdoors) with people who hold similar points of view to me. I’ll therefore see opinions the same as mine with greater frequency, and if I’m not careful I’ll come to believe that those opinions are the most common. Everything I’ve seen on Twitter confirms I’m right.

One potential antidote to the plethora of human biases is correctly analysed data. I didn’t have that, so I took to the internet. On 16th January 2017, I searched for the term “Blue Monday” on Twitter. I didn’t specifically use the hashtag because I wanted to avoid people or organisations using it just to make their tweets more locatable on the specific day. On a separate note, SEX! I then counted the tweets that seemed to believe in the effect of Blue Monday, the tweets that actively opposed the effect of Blue Monday, and the tweets that didn’t believe in Blue Monday but wanted to use it to at least gain some benefit. I did this until the total number of tweets I’d counted reached 100. To be counted, a tweet had to at least hint at belief in Blue Monday or otherwise. It couldn’t just spout a load of nonsense about sofas and then end with a hashtag. I also did a similar thing with Google (in an incognito window, to avoid the influence of my search history) to count sites, news items, blog posts etc. and place them in the same categories as were used for the tweets. This was also stopped when the total number of links reached 100. I later checked the Google search on a separate device and found the resulting list to be practically the same.
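For what it’s worth, the counting itself is trivial once each tweet or link has been hand-coded into one of the three categories. A minimal sketch of that tallying step is below; the category labels and example judgements are invented for illustration, since the post doesn’t share the raw data.

```python
from collections import Counter

# Hypothetical hand-coded judgements, one per tweet/link, in the order encountered:
# "pro" = appears to believe in Blue Monday, "anti" = actively opposes it,
# "repurpose" = doesn't believe it but uses the day anyway.
judgements = ["pro", "pro", "anti", "repurpose", "pro", "anti"]  # ...up to 100 in the real count

SAMPLE_SIZE = 100
tally = Counter()

for category in judgements[:SAMPLE_SIZE]:  # stop once 100 countable items are reached
    tally[category] += 1

total = sum(tally.values())
for category, count in tally.most_common():
    print(f"{category}: {count} ({100 * count / total:.0f}%)")
```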

The results can be seen below. In summary, the pro Blue Monday items were much greater in number than the anti Blue Monday items. These were both much more prevalent than items trying to re-purpose the day. My perception was wrong, and unfortunately the work to demonstrate that the idea of Blue Monday is anti-scientific rubbish appears to still have some way to go.

Pie chart showing the proportion of pro Blue Monday, anti Blue Monday and re-purposing Blue Monday items.

One thing to note, however, was that of the pro Blue Monday items, 72% were advertisements. As discussed, these would make the argument that it’s the saddest day of the year, so why not buy chocolate/hair gel/happiness? It is unclear to what extent the people behind these believe that Blue Monday is a scientific concept. While their adverts vaguely hint at belief, it’s just as likely that the mention of Blue Monday and its supposed effects is being used as a device to enhance how noticeable their brand is on a specific day. An increasingly difficult task given how common the use of the Blue Monday “brand” is. It seems to me that an advert that went with something other than Blue Monday marketing on the third Monday in January would be the one to stand out.

I’m not sure why efforts to educate people as to the non-scientific origins of Blue Monday are not working or even if they are actually not working in the first place. As discussed, it’s possible people know all of this, but find the term useful for their purposes; whether these are charitable or otherwise. Indeed, some news outlets may be using anti Blue Monday work to join in and take advantage of the temporary interest while maintaining an appearance of credibility. There’s no point in having your cake if you can’t eat it.

Ultimately and unfortunately, it appears that not much can be done about the Blue Monday juggernaut. I might still hold out hope for those valiantly explaining the gibberish behind the claims and even for those re-purposing the day for more noble causes. Judging by the current proportions, these efforts need to increase or change their methods to become more effective. How? I don’t know, although at least I’ve got nearly a year to think about it.


How unreliable are the judges on Strictly Come Dancing?


That very clean glass wall won’t hold itself up. Photo by Dogboy82 – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=44203685

Strictly Come Dancing, one of the BBC’s most popular shows involving celebrities moving in specific ways with experts at moving in specific ways while other experts check if they’re moving specifically enough, contains certainties and uncertainties. We’re not sure who will be voted out in any particular week. We don’t know what the audience are going to complain about. An injured woman not dancing! I was furious with rage! We do know that Craig Revel Horwood will use the things he knows to make a decision about whether he likes a dance or not while saying something mean. We can be pretty sure what Len Goodman’s favourite river in Worcestershire, film starring Brad Pitt and Morgan Freeman, and Star Trek: Voyager character is. But can we be sure that the scores awarded by the judges to the dancers are accurate and fair?

In science, a good scoring system has at least three qualities. These include validity (it measures what it’s supposed to measure), usability (it’s practical) and reliability (it’s consistent). It’s difficult to assess the extent to which the scoring system in Strictly Come Dancing possesses these qualities. We don’t really know the criteria (if any) that the judges use to assign their scores other than they occasionally involve knees not quite being at the right angle, shoulders not quite being at the right height, and shirts not quite being able to be done up. As such, deciding whether the scores are valid or not is tricky. The scoring system appears to be superficially usable in that people use it regularly in the time it takes for a person to walk up some stairs and talk to Claudia Winkleman about whether they enjoyed or really enjoyed the kinetic energy they just transferred. In some ways, checking reliability is easier. Especially if we have a way to access every score the judges have ever awarded. And we do. Thanks Ultimate Strictly!

For a test to be reliable, we need it to give the same score when it’s measuring the same thing under the same circumstances. If the same judge saw the same dance twice under consistent conditions, we’d expect the dance to get the same score both times. This sort of test-retest reliability is difficult to achieve with something like Strictly Come Dancing. The judges aren’t really expected to provide scores for EXACTLY the same dance more than once. Otherwise you’d end up getting the same comments all the time, which would be as difficult to watch as the rumba is for men to dance. Ahem. However, you can look at how consistently (reliably) different judges score the same dance. If all judges consistently award similar scores to the same dances, then we can be more sure that the system for scoring dancing is reliable between raters. If judges consistently award wildly different scores for the same dances, we might be more convinced that they’re just making it up as they go along, or “Greenfielding it” as they say in neuroscience.
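The post goes on to use a Kruskal-Wallis test on the full score history, but as a complementary (and entirely unofficial) sketch, pairwise rank correlations between judges’ scores for the same dances are one common way to get a feel for inter-rater consistency. The scores below are invented; Spearman’s correlation is used because the scores are ordinal.

```python
from itertools import combinations
from scipy.stats import spearmanr

# Invented example: each judge's scores for the same five dances.
scores = {
    "Craig":  [6, 7, 5, 8, 7],
    "Len":    [7, 8, 6, 9, 8],
    "Bruno":  [7, 8, 7, 9, 8],
    "Darcey": [7, 8, 6, 8, 8],
}

# Spearman's rho asks whether judges rank the dances in a similar order,
# not whether the raw numbers match exactly.
for a, b in combinations(scores, 2):
    rho, p = spearmanr(scores[a], scores[b])
    print(f"{a} vs {b}: rho = {rho:.2f} (p = {p:.3f})")
```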

To test this, all scores from across all series (except the current series, Christmas specials and anything involving Donny Osmond as a guest judge) were collated and compared. Below, we can see that by and large the judges have fairly similar median scores (Arlene Phillips and Craig = 7; Len, Bruno Tonioli, Alesha Dixon and Darcey Bussell = 8). The main differences appear to be in the range of scores, with Craig and Arlene appearing to use a more complete range of the possible scores.


Box plot (shows median scores, inter-quartile ranges, maximum and minimum scores for each judge)

A similar picture is seen if we use the mean score as an average, with Craig (mean score = 6.60) awarding lower scores than the other judges, whose mean scores range from 7.05 (Arlene) to 7.65 (Len and Darcey). Strictly speaking (ironically), we shouldn’t be using the mean as an average for the dance scores. The dance scores can be classified as ordinal data (scores can be ordered, but there is no evidence that the difference between consecutive scores is equal), so many would argue that any mean value calculated is utter nonsense, meaningless, not an optimum method for observing central tendency. However, I think in this situation there are enough scores (9) for the mean to be useful; like the complete and utter measurement transgression that I am. At a first glance, these scores don’t look too different and we might consider getting out the glitter-themed cocktails and celebrating the reliability of our judges.
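As a rough sketch of that median-versus-mean comparison (with invented stand-in numbers rather than the real Ultimate Strictly data), the per-judge summaries might be computed like this:

```python
from statistics import mean, median

# Invented stand-in for the full score history: every score each judge has awarded.
scores_by_judge = {
    "Craig":  [5, 6, 7, 7, 8, 6, 7, 9],
    "Len":    [7, 8, 8, 9, 8, 7, 8, 9],
    "Arlene": [5, 7, 7, 8, 6, 7, 9, 8],
}

for judge, scores in scores_by_judge.items():
    # The median is the safer average for ordinal scores; the mean is shown too,
    # in the spirit of the post's measurement transgression.
    print(f"{judge}: median = {median(scores)}, mean = {mean(scores):.2f}, "
          f"range = {min(scores)}-{max(scores)}")
```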


Bar chart (shows mean scores for each judge)

In order to test the hypothesis that there was no real effect of “judge” on dance scores, I did a statistics at the data. In this case a Kruskal-Wallis test, because of the type of measures in use (one independent variable of ‘judge’, with the different judges as its levels, and one dependent variable of ordinal data, the scores). And yes, it would be simpler if Kruskal-Wallis was what it sounded like, a MasterChef judge with a fungal infection. Perhaps surprisingly, the results could be interpreted as showing that, if the judge really had no effect on the score, results like these would be expected less than 1 time in 10,000 (P < 0.0001). The table below shows between which judges the differences are likely to exist (P < 0.0001 for all comparisons shown in red).
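A minimal sketch of that test using scipy is below. The score lists are invented stand-ins for the real data, and the post doesn’t say how the pairwise follow-up comparisons in the table were done (a post-hoc procedure such as Dunn’s test would be one conventional choice).

```python
from scipy.stats import kruskal

# Invented stand-in data: every score ever awarded by each judge.
craig  = [5, 6, 7, 7, 8, 6, 7, 9]
len_g  = [7, 8, 8, 9, 8, 7, 8, 9]
arlene = [5, 7, 7, 8, 6, 7, 9, 8]
bruno  = [7, 8, 8, 9, 9, 7, 8, 8]

# Kruskal-Wallis is a rank-based test of whether the samples come from the
# same distribution, which suits ordinal scores better than a one-way ANOVA.
h_stat, p_value = kruskal(craig, len_g, arlene, bruno)
print(f"H = {h_stat:.2f}, p = {p_value:.4f}")
```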


Table showing potential differences between judges in terms of scores they give to dancers

Thus it would seem that the probability of seeing differences this large if Craig were not having an effect on the scores is relatively small. In this instance, Craig appears to be awarding slightly lower scores compared with the other judges. The same could be said for Arlene, except when she is compared with Craig, relative to whom she seems to award slightly higher scores.

So it transpires that the scores on Strictly Come Dancing are indeed unreliable. Arlene did, and Craig is, throwing the whole system out of alignment like a couple of Paso Dobles doing a Jive at a Waltz. Tango!

Possibly not though, for a number of reasons. 4.) I am clearly not an expert in statistics, so I may have just performed the analysis incorrectly. 2.) If differences do exist, they are relatively subtle and are likely to be meaningless within individual shows, only coming to light (and bouncing off a glitter ball) when we look across large numbers of scores. That is to say, that a statistical difference may exist, but this difference likely makes no practical difference. A.) At least it’s not The X Factor.

Keep dancing. And doing maths.

Does Sean Bean Always Die at the End?

The Alpha Sean Bean, shown here to be still alive.
“Sean Bean TIFF 2015” by NASA/Bill Ingalls. Licensed under Public Domain via Wikimedia Commons .

There’s a quote from a character in The Lord of the Rings: The Fellowship of the Ring, and J.R.R. Tolkien’s character from some book or other, that has been doing the rounds as an internet meme for quite some time: “War makes corpses of us all.” Of course you all know it, it’s ridiculously famous; after all, one does not simply forget a Faramir quote. Much better than Boromir. In Sean Bean’s case however, the quote might as well be “appearing in a role in television or film makes a corpse of me, Sean Bean.” Sean Bean is well known for dying in films. So much so that there exists a campaign specifically against the further onscreen killing of Sean Bean. At least, I think it still exists. It might have died.

Basically, it is a fairly common assumption that if Sean Bean is in something, he will most likely not make it to the end. However, everyone knows what happens when you assume; you make a prick of yourself. Is it actually true that Sean Bean always dies? In psychology, confirmation bias describes the tendency for people to better recall information that confirms their existing beliefs than information that would refute them. The frequency illusion is where something (it can be an event or just an object) which has recently been brought to a person’s attention suddenly seems to occur or appear with greater frequency than it did before it had been noticed. This is also known as the Baader-Meinhof phenomenon, and once you know about it, you’ll start seeing it everywhere. So it is possible that the appearance of Sean Bean’s repeated celluloid mortality is a function of some common cognitive biases rather than him actually ending more times than a Sunday furniture sale. The following information, which was collected to test this, may contain spoilers for Sean Bean projects. Unless you believe the appearance of Sean Bean in a cast list is in itself a spoiler.

Using some sort of internet search engine (if you want to find a similar one, you can look it up on Google), all of Sean Bean’s roles in film and television were listed to create a population of Sean Beans. From here forward, the collective noun for Sean Beans used will be “population” rather than the perhaps more common “can” or “cemetery.” Sean Bean’s roles in theatre and his voiceover work in video games were not included, due to a combination of them being too difficult to include, laziness and the words “Sean Bean” starting to lose all meaning. The actual actor Sean Bean (the Alpha Sean) was also included, as while technically it is an ongoing role, we do know with reasonable certainty that Sean Bean will die at the end of it. The Alpha Sean was not included in any cause of death calculations in case I end up as a suspect in a future murder investigation. Jupiter Ascending was not included for obvious reasons.

The number of times Sean Bean was dead at the end of a film/TV show and the number of times Sean Bean was alive at the end of a film/TV show were counted and used to calculate the incidence of death for the total population of Sean Beans. The incidence rate is the number of new cases of a disorder (or deaths) within a population over a specified period of time. This is commonly expressed per 100,000 persons per year. In terms of deaths, this can in some ways be seen as equivalent to the mortality rate. Some basic demographics, causes of death and the intentionality of deaths were also calculated.
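The calculation itself boils down to deaths divided by person-time, scaled up to 100,000. A hedged sketch using the post’s own headline figures (30 dead Sean Beans out of 75) is below; the exact observation window isn’t spelled out in the post, so the span is a placeholder and the result will only be in the same ballpark as the reported 4.85 per 100,000 person-years.

```python
def incidence_per_100k(deaths: int, population: int, years: float) -> float:
    """Deaths per 100,000 person-years: deaths / (population * years) * 100,000."""
    person_years = population * years
    return deaths / person_years * 100_000

# Figures from the post: 30 deaths in a population of 75 Sean Beans,
# observed over roughly 6000 BCE to 2072 CE (about 8,072 years).
print(f"{incidence_per_100k(deaths=30, population=75, years=6000 + 2072):.2f}")
```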

The demographics for the population of Sean Beans are shown in Table 1.

Table 1. Sean Bean Demographics

Characteristic                     Sean Bean Numbers
N                                  75
Mean (SD) age, years               60,810,851.05 (523,114,369.60)
Species, n (%)
  Actor                            1 (1.33)
  Human                            71 (94.67)
  Lion                             1 (1.33)
  Portrait                         1 (1.33)
  God                              1 (1.33)
Survival, n (%)
  Alive                            45 (60.00)
  Dead                             30 (40.00)

The incidence of Sean Bean deaths across the total existence so far of Sean Beans (6000 BCE to 2072) is 4.85 per 100,000 persons per year. The causes of Sean Bean death and the intentionality of Sean Bean death are shown in figures 1 and 2, respectively. The most common cause of death was being shot with a gun. The best cause of death was falling from a cliff due to a herd of cows. Most Sean Bean deaths were intentional (the result of homicide), compared with accidental deaths and orcicide.


Figure 1. Cause of Sean Bean death.


Figure 2. Intentionality of Sean Bean death.

The aim of all this Beanian death numbering was to determine if there was any truth to the common belief that Sean Bean always dies at the end. Examination of a fairly complete population of Sean Beans shows that this is not the case, with 60% of Sean Beans managing to survive the time it takes for many film and TV directors to tell a story. If you are a Sean Bean though, it seems you are most likely to die by being shot by a human. There may be some money to be made in a line of Sean Bean-specific bullet-proof vests.

So why is the belief that Sean Bean always shuffles off the mortal coil at the end so common? The application of confirmation bias to this has already been discussed, but for that particular bias to take effect, there must be an existing belief to confirm. The earliest manifestation of Sean Bean’s tendency for premature televisual corpse shenanigans that I could find was around his fourth appearance. However, at a preliminary glance, Sean Beans don’t seem to kick the bucket often enough early on in the ascendance of Sean Beans to make any reputational impact.

If we divide the appearance of Sean Beans into tertiles (an ordered distribution divided into three parts, each containing a third of the population, not an aquatic reptile with a shell) and look at the proportion of deaths as time progresses, we get something that looks like figure 3.


Figure 3. Proportion of Sean Bean deaths by Sean Bean time tertile.
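For the curious, splitting an ordered list of roles into tertiles and taking the death rate in each is only a few lines with pandas. The roles and outcomes below are invented placeholders, not the real Sean Bean dataset.

```python
import pandas as pd

# Invented stand-in: one row per Sean Bean role, in order of appearance.
beans = pd.DataFrame({
    "role":  ["Role A", "Role B", "Role C", "Role D", "Role E", "Role F"],
    "order": [1, 2, 3, 4, 5, 6],
    "died":  [False, True, True, True, False, False],
})

# Split the ordered roles into three equal-sized groups (tertiles)
# and compute the proportion of deaths in each.
beans["tertile"] = pd.qcut(beans["order"], q=3, labels=[1, 2, 3])
print(beans.groupby("tertile", observed=True)["died"].mean())
```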

We can see that if 3 is the most recent tertile and 1 is the furthest in the past, then the Sean Bean death rate appears to be greatest in the middle of the population’s progression through time. In psychology, the serial position effect describes the tendency for people to recall items earlier (the primacy effect) and later (the recency effect) in a list the best, with items in the middle being recalled the least. This would not explain the “Sean Bean always dies” reputation, as in such a model we would expect more deaths in the first and last tertiles. Besides, one explanation for the serial position effect is that earlier items are stored more effectively in long-term memory than the other items, while more recent items are still present in working memory and are thus easily available for recall. This would only apply to these data if people experienced Sean Bean necrosis as a list in front of them, which most people (besides me) don’t. Even if the data matched a serial position explanation, it would be a stretch (i.e. wrong) to use it to explain the “Sean Bean deceased at the finale” reputation phenomenon.

Rise of the Nicole Kidmen would be a good episode of Doctor Who.

Characters don’t become instantly well known in popular culture. It takes time for a reputation to build and saturate society. In this respect, perhaps we can consider the middle tertile to be more akin to the starting point for a reputation, i.e. Sean Beans will be better known, with more opinions being formed about them. The Sean Bean death rate here is 52%, meaning that during this period Sean Beans were slightly more likely than not to die at the end. This may be enough to start the rumour of Sean Beans’ non-existence by the credits and establish a source for confirmation bias.

Characters don’t exist in isolation. They usually exist in a complex ecosystem of other populations. The Sean Bean population exists alongside the population of Bruce Willises (Willi?) and the population of Nicole Kidmans (Kidmen?) among others. Important data to consider would therefore be how often Sean Beans die in comparison to other populations. If the comparative death rate of Sean Beans is noticeably higher than that of other comparable populations, then this may explain the Sean Bean clog-popping conundrum. Future “research” should focus on this (I can’t be bothered right now).

It was suggested to me by KTBUG (kgwright73) that the popularity of the mode of presentation of Sean Bean would have an impact on the perception of his tendency for pushing up the daisies. It seems feasible that if Sean Beans die in more popular things and live in less popular things, then the public perception would be that of a gentleman prone to leaving his life behind. To this end (where available), I took an average of lifetime box office takings for films where Sean Bean died and films where Sean Bean lived (figure 4).


Figure 4. Average lifetime box office takings by Sean Bean survival.
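A sketch of that comparison, assuming a simple table of films with a survival flag and lifetime takings (the figures here are invented, not the ones behind figure 4):

```python
import pandas as pd

# Invented figures standing in for lifetime box office takings, where available.
films = pd.DataFrame({
    "film":           ["Film A", "Film B", "Film C", "Film D"],
    "bean_dies":      [True, True, False, False],
    "box_office_usd": [870_000_000, 450_000_000, 210_000_000, 95_000_000],
})

# Mean lifetime takings, split by whether Sean Bean survives to the credits.
print(films.groupby("bean_dies")["box_office_usd"].mean())
```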

Figure 4 shows that films where Sean Bean shook hands with the Grim Reaper on average took more at the box office than films where Sean Bean continued respiring. If we use this as a crude measure of popularity (and it is very crude, subject to bias from missing TV shows and films where I simply couldn’t get the info) and impact on cultural awareness, then films where Sean Bean becomes an ex Sean Bean seem to have made a larger cultural impact. This could certainly be at least one source of the idea that Sean Bean always dies.

Please note, I am in no way suggesting that Sean Bean dying in it makes a film popular. As the old saying goes, “Sean Bean death correlation does not prove film popularity causation.” You all know it.

In conclusion, it would seem that Sean Bean’s reputation for always dying at the end is somewhat exaggerated, with a death rate of approximately 40%. Sean Beans are most likely to die from being shot intentionally by a human or from being in the middle of their career trajectory. The Sean Bean Ex-Parrot Meme may be best explained by a high death rate at a time when Sean Beans were likely to be reaching their maximum prevalence in the public eye, and by films which feature a Sean Bean death having made a larger cultural impact than films that feature a living Sean Bean at the end. These perceptions feed into confirmation bias. And then Sean Bean died.

The Science of “The Science of…” Articles

Word cloud generated from the survey responses described in the Method.

Introduction

The science of biscuits, ducks, politics, trains and glitter. If not for these, then it’s likely you’ll have seen at least one article in the media that claims to explain the science of something. But what does that mean? Does it mean the articles contain a list of facts? Sometimes, although you’d have thought not, because science is a methodology rather than a list of facts, and articles with the title “The Facts about…” are often confined to pieces about supermarkets, dieting and celebrity gossip. “The Facts about Ryan Gosling, the secret food behind his rock-hard toe muscles and how he used them to woo an eagle.” Like that.

However, neither should “The Science of…” articles be the direct reporting of a piece of research. That’s what we have scientific journals and 0.5% of science press releases for. It might be fairly safe to argue (he said on the internet) that a “The Science of…” article should contain some facts that were obtained using the scientific method, plus some description and criticism of that method and how it explains the phenomenon in question. Is this what “The Science of…” articles are doing? Is this what people think “The Science of…” articles should be doing? Is the device of asking yourself a question overused in writing? Yes.

The aim of this post is therefore to aim some science(ish) at “The Science of…” articles to investigate what they should contain and whether they contain it. The hypothesis is that “The Science of…” articles exist and contain some stuff. The null hypothesis is that “The Science of…” articles don’t exist and don’t contain stuff. The square of the hypotenuse is equal to the sum of the squares of the other two sides. Hippopotamuses are large mammals.

Method

First I conducted a carefully thought out pre-study survey (I asked people on Twitter what they thought should be in a good “The Science of…” article and had a cup of tea and a biscuit). I received 23 responses. This places the power of this experiment closer to the “kitten pulling a super-tanker using string woven from Climate Change Denier accuracy” end of the spectrum than to the “Superman with He-Man’s sword, SuperTed’s secret magic word and The Black Widow’s everything” end. I then took all of the responses and used them to make a word cloud, because of infographics. When this accomplished nothing more than my delight at seeing it create the phrase “inductive gorilla”, I decided some more analysis was needed.
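Generating a word cloud like that takes only a few lines if you are happy to lean on the third-party wordcloud package (an assumption on my part; the post doesn’t say how its cloud was made). The responses below are invented stand-ins for the 23 real replies.

```python
# Requires the third-party `wordcloud` package (pip install wordcloud).
from wordcloud import WordCloud

# Invented stand-ins for the 23 survey responses.
responses = [
    "evidence and references",
    "pictures and a diagram or infographic",
    "inductive reasoning, not just a list of facts",
    "links to more in-depth stuff",
]

cloud = WordCloud(width=800, height=400, background_color="white")
cloud.generate(" ".join(responses))        # word frequencies drive word sizes
cloud.to_file("science_of_word_cloud.png")
```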

Using the word cloud and the most obvious themes from the responses, I made a list of what a good “The Science of…” article should contain or be. This was as follows:

  • Big words
  • Comprehensive
  • Diagram/Graph/Infographic
  • Evidence
  • Inductive reasoning
  • Links to more in-depth stuff
  • Pictures
  • Referencing
  • Theory

This list isn’t necessarily what I think a good “The Science of…” article should contain. It may not even be what the majority of people think a good “The Science of…” article should contain. I can only speak for the people who responded to my question and sadly can’t take the opinions of the people who didn’t respond (the grey Twitterature) into account. I also asked what improvement could be made to produce a better class of “The Science of…” article, but I’ll save talking about that until the discussion.

I then used an internet search engine which might have been Bing (it wasn’t Bing), typed in “The Science of” and took the first 10 articles that were “The Science of…” articles.

Figure 1. The search engine suggestions for “The Science of”, a.k.a. a fairly depressing poem.

I read the 10 articles and, after multiple moments of increasingly less quiet despair, determined whether they satisfied the criteria identified by my survey. I then turned the results into graphs, because of graphs, and had a look to see what I thought/wanted them to show. By this, of course, I mean that the results were analysed and any trends in the data were identified.
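Turning those judgements into the proportions plotted in figure 3 is just a tally. A sketch of that step is below, using the criteria from the list above and invented yes/no judgements in place of the real ones.

```python
criteria = [
    "Big words", "Comprehensive", "Diagram/Graph/Infographic", "Evidence",
    "Inductive reasoning", "Links to more in-depth stuff", "Pictures",
    "Referencing", "Theory",
]

# Invented judgements: one dict per article, True where the article satisfied a criterion.
articles = [
    {"Big words": True, "Pictures": True, "Evidence": False},
    {"Big words": False, "Pictures": True, "Evidence": True},
    # ...eight more in the real analysis
]

for criterion in criteria:
    satisfied = sum(article.get(criterion, False) for article in articles)
    print(f"{criterion}: {100 * satisfied / len(articles):.0f}% of articles")
```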

Results

The table below displays the first 10 articles produced by my internet search that were “The Science of…” articles and what I initially thought before reading them. This doesn’t even slightly matter, but I was told once that people relate to science articles more if they contain a personal element. The story of how I was told this is of course heart-warming.

Table 1. The articles found and my initial reaction to them


Figure 2 shows the source of the articles found. Obviously as I used an internet search engine, the articles were all technically on websites, but some of those articles (40%) were associated with specific newspapers and magazines. Magazines and newspapers put articles online! You can practically hear the ground breaking.


Figure 2. Sources of the “Science of…” articles.

Figure 3 shows the proportion of articles that contained the desirable qualities identified by the survey. This was decided by me after reading them. You might come to a different conclusion and you’re welcome to read them and see what you think. I wouldn’t recommend it though. Unless you hate your spare time.

Figure 3. Proportion of articles containing each of the qualities identified by the survey.

Figure 4 is an Action Man with eagle-eye action.


Figure 4. You know.

Discussion

The whole point of this post was to use scientific(ish) methods to question what makes a good “The Science of…” article and to see whether “The Science of…” articles are doing those things. That depends. In science, it always depends. Note to self: make an “In science, it always depends” t-shirt. As you can see from figure 3, in terms of using big words and pictures, “The Science of…” articles are doing quite well. Ninety percent of articles had a picture and 60% used big words! Make them waterproof and you’ve got the ingredients for an educational children’s book! Over 50% of the articles contained evidence, inductive reasoning and theory. This seems good, but isn’t. In fact it doesn’t even seem good. A science article without evidence?! Fine. I’ll get back to you with what I think about that when I’ve finished watching this football game that doesn’t have a leather orb or any teams of entitled orb-kickers. The rest of the results are similarly dismal, with only 40% of articles being judged as comprehensive and about 25% of the articles linking to more in-depth material. I’ll let them off in terms of diagrams and formal references, on account of them being articles about science rather than actual research papers. Something I think scientists would do well to keep in mind when reading and criticising science articles.

Can “The Science of…” articles be improved? Well, you’ll recall I asked about this. The suggestions for improvement are shown in the table below.

Table 2. Suggestions for improving “The Science of…” articles.


So there you go, science writers. Your problems are solved. If your problems were a lack of pedantic titles and sparse nudity.

Ultimately, what I read seems to indicate that “The Science of…” in an article’s title is generally shorthand for “This article offers to explain something. It might mention science. Go on. Read it. SCIENCE!” If it actually contains some well-written information about the scientific method, what it found and how it might (and might not) explain the subject at hand, then that’s a bonus. Although it should be a given.

I should probably point out the flaws in this research, which for the most part are obvious as well as numerous. I only looked at 10 articles, they were the first 10 articles I found, it was only me that looked at them, and the criteria I used to judge them, while not arbitrary, were certainly not extensive. This clearly isn’t a high-standard or even legitimate piece of scientific research. Unless Nature wants to publish it, in which case that stuff just then was a hilarious joke. Most of the criteria could probably be applied to this post, with it doing quite well as a “The Science of…” article, and that perhaps would be a travesty (adj. music like that of Travis). However, maybe you’ll have a think about what “The Science of…” articles should be like and expect a decent standard from any such articles you read in the future. The Guardian has a series of articles/posts about science writing and how to do it, if you’re interested in that sort of thing. You must be a little bit. You just read this for a start. If nothing else, you’ll have seen some of the sections that go into the write-up of scientific research (introduction, method and so on). Also the inductive gorilla.

References

The Guardian. Secrets of Good Science Writing. Available from: http://www.theguardian.com/science/series/secrets-science-writing

Google. Google. [Online][Accessed loads] Available from: Google it.

BBC. The Science of Love. Available from: http://www.bbc.co.uk/science/hottopics/love/

BBC. The Science Behind Why We Take Selfies. Available from: http://www.bbc.co.uk/news/blogs-magazine-monitor-25763704

Bartlett, T. The Science of Hatred. Available from: http://chronicle.com/article/The-Science-of-Hatred/143157

Fermilab. The Science of Matter, Space and Time. Available from: http://www.fnal.gov/pub/science/inquiring/matter/

Adams, S. The Science of Hangovers. Available from: http://www.theguardian.com/science/sifting-the-evidence/2013/dec/19/the-science-of-hangovers

Keim, B. The Science of Handwriting. Available from: http://www.scientificamerican.com/article/the-science-of-handwriting/

Boggs, B. The Science of Citizenship. Available from: http://www.orionmagazine.org/index.php/articles/article/7810

Woolaston, V. The Science of Santa: Mr Claus will eat 150 BILLION calories and visit 5,556 houses per SECOND this Christmas Eve. Available from: http://www.dailymail.co.uk/sciencetech/article-2521973/The-science-Santa-Mr-Claus-eat-150-BILLION-calories-visit-5-556-houses-SECOND-Christmas-Eve.html

Chivers, T. The Science of Christmas: Santa Claus, his sleigh and presents. Available from: http://www.telegraph.co.uk/topics/christmas/8188997/The-science-of-Christmas-Santa-Claus-his-sleigh-and-presents.html

Popper, B. The Science of ‘Her’: we’re going to start falling in love with our computers. Available from: http://www.theverge.com/2013/12/16/5216522/can-humans-love-computers-sex-robots-her-spike-jonze