Grade inflation – inevitable and holding the UK back?
As an engineer I am passionate about education. We need more students to learn the STEM subjects (Science, Technology, Engineering, Maths) if we are to be a knowledge-led economy, and for this we undoubtedly need an excellent education system. As the world gets ever more technological, it frightens me that these two truths are not evident to all. I have spent some time and my own resources helping in this area, and one concrete result was the When STEM report I helped sponsor and create along with the IMechE.
I try not to veer into political minefields in my writings, but sometimes it is just unavoidable. In this case I feel that the overall UK education debate is cheapened by the continual yearly squabble about grade inflation and exams becoming easier. It seems to me, looking at this with an engineer’s eye, that there is a very simple solution which would allow all of us and “the system” to move on from this debate and consider the real changes that are needed.
Before I give my solution – which I hope more informed readers in this subject area will criticise, refine or reject with reasons – I will give my position on the main areas of criticism. Has there been unwarranted grade inflation over the years? Almost certainly yes, judging by what my own children have studied and achieved and by what younger staff bring to the table. Do we nonetheless have an intelligent and able youth? Yes – it is just harder to distinguish who shines in which subjects. Have tests got easier over the years? Yes and no, I think. The topics in each subject area, and hence the questions, have changed – and they have to as the world progresses – so making comparisons from one decade to another is inherently difficult. There has been some clear “dumbing down”, though, as shown by the “scandal” (it should have been a big scandal, but was reported as a misdemeanour) of certain exam boards helping teachers effectively cheat for their pupils.

My view is that one of the big reasons for the obfuscation has been school league tables. “What gets measured gets done” is the business mantra – very true. But if you measure the wrong things, the wrong things get done, and if your management is not good, the people being measured will work out how to game the system. In STEM this has been particularly virulent: the combined science award is rated as two GCSEs rather than the one it is, so instead of being taken by the minority of pupils who find the single sciences excruciatingly difficult, it is now taken by the majority of pupils in England & Wales because it helps boost table places.
So what to do? Well, the first thing is to work out what the exams are for. They are to position you as someone who can study and know a particular subject well (or not). They are used to determine your suitability for the next step on the education ladder, or for your first/next step on the employment ladder. They are essentially ephemeral: once you have taken the next step, it is what you do in that step that counts for your subsequent progression, not your previous exam results. It matters not what GCSE (O-level) results I got 30+ years ago, but what I did afterwards: my A-levels got me onto a degree course, the degree got me my first job, my ability in my first job got me my second job, and so on. Your results might have some longevity beyond one step in your education/career progression, but at most they will influence two steps, except in rare circumstances. Therefore there is no real need to compare exam results from one generation to the next – they are for comparing individuals within a single year (or cohort).
Therefore, if the results are only for in-year comparison, we simply need to be consistent in what we give an A* for, an A, and so on, so that, for each individual subject, anyone can be placed in rank order. This is exactly what grading against a distribution achieves: align grade boundaries with whatever standard-deviation cut-offs you like (using the normal distribution, or the Poisson or another distribution if it better fits the shape of exam results), or, more simply, fix the percentage of the cohort that earns each grade. For example, the top 5% of results in a particular subject could get an A*, the next 10% an A, the next 15% a B, the next 20% a C, and so on. The mark at each grade boundary will change for each subject each year – as a result of the ease or difficulty of the exam and the ability of the cohort – but this is no issue, as we get an absolute in-year ranking.
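To make the fixed-percentage idea concrete, here is a minimal Python sketch of rank-order grading. The band percentages, student names and the catch-all “U” grade are purely illustrative examples, not a proposal for what the real boundaries should be:

```python
def assign_grades(marks, bands):
    """Assign grades purely by rank within the cohort.

    marks: dict mapping candidate -> raw mark
    bands: list of (grade, fraction of cohort), best grade first,
           e.g. [("A*", 0.05), ("A", 0.10), ...]
    Anyone below the listed bands gets the catch-all grade "U".
    """
    # Rank the whole cohort by raw mark, highest first.
    ranked = sorted(marks.items(), key=lambda kv: kv[1], reverse=True)
    n = len(ranked)
    grades = {}
    i = 0
    for grade, fraction in bands:
        count = round(fraction * n)  # size of this grade's slice of the cohort
        for name, _ in ranked[i:i + count]:
            grades[name] = grade
        i += count
    for name, _ in ranked[i:]:  # everyone remaining
        grades[name] = "U"
    return grades


# Illustrative cohort of 20 candidates with distinct marks.
marks = {f"candidate{i}": 100 - i for i in range(20)}
bands = [("A*", 0.05), ("A", 0.10), ("B", 0.15), ("C", 0.20)]
grades = assign_grades(marks, bands)
```

Because the grades depend only on rank within the cohort, the mark at each boundary (the lowest mark awarded each grade) falls out automatically, whatever the difficulty of that year’s paper. Note that a real scheme would also need a rule for candidates tied exactly on a boundary mark, which this sketch glosses over.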
There is still one area open, though: if different exam boards set papers of different difficulty, then an A in one may not match an A in another. This issue has been around for years – I remember back in 1979 some of us being told we were doing the Cambridge exams because they were harder and would be better regarded, while those in the class who would not get an A with Cambridge were entered for the AEB exams instead, as the easier papers would get them one grade higher – a nonsense! We have a single national syllabus, so why don’t we have a single national exam? So my second change would be to have only one national exam for each subject. If the exam boards want to pitch for the right to set and oversee that exam, then great – we can have different exam boards, but only one for each subject.
These two simple changes – fixed percentage grade boundaries and a single national exam board for each subject – would, I believe, remove the thorny subjects of grade inflation and exam dumbing-down from the agenda, allowing everyone to concentrate on actually improving the overall standard of achievement. They would also give schools, universities and employers much-needed clarity on what grades actually mean, and the ability to compare people from different educational and geographical backgrounds.
I’d be interested in hearing from anyone who thinks I may be being too simplistic here or plain wrong (and of course from anyone who thinks I am right!). If I am wrong, what is the logic for the current situation, or what would be a better alternative system?
(One comment I received on an early draft of this blog was: what happens if someone wants to compare one year with the next? Such comparisons are always difficult because curricula change, but if you did want to make them, then publishing the grade boundary marks each year would put that information in the public domain for those with a specific interest.)