Just a very quick Sunday night* post, partly because I'm feeling guilty: I'd promised myself I'd write something this weekend and I hadn't yet managed it.
Over at Mentor Thinks (a blog I think is fantastic), Andrew Brown posted a little while ago about some data released by the London Data Store covering ambulance call-outs. He suggested that although there might be data quality/recording issues, the trends shown for what were classified as call-outs relating to 'binge' drinking weren't exactly encouraging, as they had risen recently. I'm going to use this data release as a kind of case study in analysing these sorts of things, and thinking about some of the many possible factors that might be in play.
Now, as I'm sure those who work with me would testify, looking in detail at these sorts of trends is one of my fascinations, and I'm particularly sceptical whenever rising alcohol-related emergency admissions** are mentioned, having seen the number of diagnoses and codes that go into producing the headlines. As government and wider society have become more concerned about alcohol consumption and 'binge' drinking specifically, so recording practice has developed - hence the Cardiff Model, for example.
You might call this the observer effect, but actually it's not (necessarily) so much a bias as an attempt to improve the quality of the measurement and data. Admittedly, there's often a political element to this too. (Think of the drug treatment example in my last post: how does an organisation look like it's doing something useful, or convince others of a particular problem? Numbers. Ideally ones with pound signs in front of them.)
At the same time, though, I'd be on sticky ground if I were suggesting there's something wrong with recording all the relevant factors in a call-out, particularly on this blog, where I've just trumpeted my commitment to clarity of thinking and transparent, well-evidenced decisions. However, even accepting that the new data aren't wrong or a form of exaggeration, comparing them with the old can still be misleading.
In this particular example, I did a quick bit of analysis on the ambulance data to filter out seasonal effects: I created a rolling 12-month figure for call-outs, shown in the graph below. What you'll see is that there isn't one steady upward trend. There's a pretty steady rise at first, through 2010 and the start of 2011, but then numbers fall and stay largely static until the start of 2012. After a bit of a wobble, a new steady (and steep!) rise sets in over the past 12 months - since April 2012.
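For anyone curious, the smoothing itself is simple enough to sketch in a few lines. The numbers below are made up purely for illustration (the real London Data Store figures would go in their place), but they show why a rolling 12-month total cancels out seasonal peaks:

```python
# Sketch of the seasonal smoothing described above: a rolling 12-month
# total over monthly call-out counts. Counts here are illustrative only.

def rolling_12_month(monthly_counts):
    """Return rolling 12-month totals for a list of monthly counts.

    The first total covers months 1-12, the next months 2-13, and so on,
    so every point absorbs a full year of data."""
    return [sum(monthly_counts[i:i + 12])
            for i in range(len(monthly_counts) - 11)]

# Two flat years, each with a December spike (a crude seasonal effect).
counts = [100] * 11 + [160] + [100] * 11 + [160]
totals = rolling_12_month(counts)
print(totals)  # every window contains exactly one spike, so the
               # rolling totals come out flat: [1260, 1260, ...]
```

Each 12-month window contains exactly one December, so the seasonal spike contributes the same amount to every point and the smoothed line stays flat - which is exactly why any remaining slope is worth explaining.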
Two things occur to me about this pattern.
First, it seems unlikely that there's been a huge rise in 'binge' drinking since April 2012, particularly given recent reported patterns in alcohol consumption amongst younger people - and it's younger people who matter here, as the ambulance service defines a 'binge' drinking call-out as alcohol poisoning where the patient is aged under 40. (In the report I link to there, Chris Sorek talks about the 'hidden binge drinkers' aged 25-44, who are drinking more than their younger counterparts. Most of them would still be counted in these 'binge' drinking figures, but even that can't explain what is in fact a one-year trend.)
Second, the three phases of the line in the graph seem to match financial years very closely. Funding is often granted by financial year, meaning that an alcohol liaison post or a Cardiff Model data lead might start at the beginning of one. Or a new policy might come in. Just having a new policy in place, or a new member of staff leading on this, can improve the data no end - as I'm sure plenty of local area alcohol leads can testify. The line would then rise consistently, as it does, because each month sees a month of the 'old' regime data replaced in the rolling 12-month total by one of the 'new' regime. A consistent policy would lead to a consistent rise, and a relatively straight line - which I'd say is what we have in the graph.
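That 'regime change' mechanism is easy to demonstrate with toy numbers (assumed purely for illustration, not the real data). If the true monthly count never changes but recording suddenly captures 25% more cases from one April, the rolling 12-month total climbs in a dead-straight line for exactly twelve months:

```python
# Toy illustration of the recording-regime explanation: incidents are
# constant, but recording practice improves partway through. Numbers
# are assumed for the sketch, not taken from the ambulance data.

old_regime = [400] * 12        # a year recorded under the old practice
new_regime = [500] * 12        # same incidents, fuller recording
monthly = old_regime + new_regime

# Rolling 12-month totals, as in the graph discussed above.
rolling = [sum(monthly[i:i + 12]) for i in range(len(monthly) - 11)]
print(rolling)
# [4800, 4900, 5000, ..., 6000]: each step swaps one old-regime month
# for a new-regime one, adding a constant 100 - hence a straight line.
```

Once the window is entirely made up of new-regime months, the line flattens out again at the higher level - which is one reason the 2013-14 data will be so telling.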
So, what of the broader pattern? I don't know, is the answer. In terms of these stats, we'd have to see what happens after April in the new 2013-14 financial year. In the meantime, I remain unconvinced that there's been a 25% rise in drunkenness or alcohol-related incidents in London over the past 12 months, as one interpretation of the data might suggest (not Andrew's, I hasten to add!). I'd agree with Andrew, though, that however you cut the data they don't tell a pretty story. I'm just not sure whether the story's new.
(I'd welcome anyone else's explanations on what might cause the patterns. I initially wondered about the Olympics, but for reasons of timing and the sorts of tourism it brought about I can't see that it would have any noticeable effect. As for Euro 2012, timing again shouldn't make the difference, and I can't see why that would show up differently from World Cup 2010, which is way down where the graph starts.)
*Technically Monday morning by the time I've 'published' it...
**I want to be clear that I don't see any inherent problem with general alcohol-related hospital admissions not moving in step with consumption figures; there might well be a time lag with long-term conditions not appearing. This is why I think the argument sometimes made by the industry that the two are not directly related is disingenuous. However, changes in recording practice also have a role to play and should be acknowledged if we're to really understand what we're dealing with.