|Darren Millar AC|
|John Griffiths AC|
|Lynne Neagle AC||Cadeirydd y Pwyllgor|
|Committee Chair|
|Llyr Gruffydd AC|
|Mark Reckless AC|
|Marian Morris||Cyfarwyddwr, SQW|
|Director, SQW|
|Robert Smith||Rheolwr Ymchwil, Sefydliad Cenedlaethol ar gyfer Ymchwil Addysgol|
|Research Manager, National Foundation for Educational Research|
|Yr Athro Christopher Taylor||Athro Polisi Addysg ym Mhrifysgol Caerdydd a Chyd-gyfarwyddwr Caerdydd o Sefydliad Ymchwil, Data a Dulliau Cymdeithasol ac Economaidd Cymru|
|Professor of Education Policy at Cardiff University and the Cardiff Co-director of the Wales Institute for Social and Economic Research, Data and Methods|
|Gareth Rogers||Ail Glerc|
|Second Clerk|
|Sarah Bartlett||Dirprwy Glerc|
|Deputy Clerk|
|2. Cyflwyniad, Ymddiheuriadau, Dirprwyon a Datgan Buddiannau||2. Introductions, Apologies, Substitutions and Declarations of Interest|
|3. Ymchwiliad i Gyllid wedi'i Dargedu i Wella Canlyniadau Addysgol: Sesiwn Dystiolaeth 1||3. Inquiry into Targeted Funding to Improve Educational Outcomes: Evidence Session 1|
|4. Ymchwiliad i Gyllid wedi'i Dargedu i Wella Canlyniadau Addysgol: Sesiwn Dystiolaeth 2||4. Inquiry into Targeted Funding to Improve Educational Outcomes: Evidence Session 2|
|5. Papurau i'w Nodi||5. Papers to Note|
|6. Cynnig o dan Reol Sefydlog 17.42(ix) i Benderfynu Gwahardd y Cyhoedd o Weddill y Cyfarfod||6. Motion under Standing Order 17.42(ix) to Resolve to Exclude the Public for the Remainder of the Meeting|
Cofnodir y trafodion yn yr iaith y llefarwyd hwy ynddi yn y pwyllgor. Yn ogystal, cynhwysir trawsgrifiad o’r cyfieithu ar y pryd. Lle mae cyfranwyr wedi darparu cywiriadau i’w tystiolaeth, nodir y rheini yn y trawsgrifiad.
The proceedings are reported in the language in which they were spoken in the committee. In addition, a transcription of the simultaneous interpretation is included. Where contributors have supplied corrections to their evidence, these are noted in the transcript.
Dechreuodd rhan gyhoeddus y cyfarfod am 10:00.
The public part of the meeting began at 10:00.
Good morning, everyone. Welcome to the Children, Young People and Education Committee. We've received apologies for absence from Julie Morgan and Michelle Brown. Can I ask Members whether there are any declarations of interest, please? No. Okay, thank you.
Item 3 this morning, then, is our first oral evidence session for our inquiry into targeted funding to improve educational outcomes. I'm very pleased to welcome Marian Morris, director of SQW. Thank you very much for attending this morning; we appreciate your time. If you're happy, we'll go straight into questions.
Yes, sure. That's fine.
Could you maybe just give a very brief introduction to SQW and the work that you did on the evaluation?
SQW, which, if I actually spell it out, is Segal, Quince and Wicksteed, which is what it used to be called and it sounded like a firm of dodgy lawyers, so we changed it. [Laughter.] It's just known as SQW now. It's basically a socioeconomic research consultancy.
We work in a number of different areas across children and young people, across the innovation field in industry and across spatial—. So, we have quite a broad remit, which enables us to bring techniques from one type of research into others, which is actually quite helpful.
The evaluation itself was basically over the two-year period, 2014-15 to 2015-16, and it was always set up to finish in 2016. There was a possible extension for a few months to enable us to use data from the 2015-16 attainment from the schools, which actually did happen, but it was never intended to cover the entire period. It was a mixed method piece of research. We knew from the outset that there was no way that we could possibly do a true counterfactual, because every school in the country was getting support one way or another through Schools Challenge Cymru. We were only focusing on the Pathways to Success schools—those most challenged schools—which is why we adopted a contribution-analysis approach in the end to actually see, 'Well, given that they've got all of these things coming in, has it actually led to the kind of outcomes they were expecting?' and to what extent did the SCC funding contribute to that. So, that was the main element of it. Is there anything else that you'd like to know before we launch in?
No, that's great. We've got quite a few detailed questions to come, anyway. You mentioned that the evaluation was only for two years of the programme and that that was always the case, so we're assuming from that, then, that there were no discussions with Welsh Government about evaluating the final year of the programme.
No, and I think it was possibly partly because it was announced that the funding was going to cease prior to that time anyway, so they didn't continue it on. I don't know why, from the outset, it was always just a two-year thing. I think it was just to see whether it was actually contributing, but I don't know why the decision was made.
And do you think it would've been worthwhile to evaluate the third year of the programme?
I think it would've been helpful, but it's one of those things that, again—because I think it was to inform things, going forward—the data from the 2016-17 academic year wouldn't be available until now. So, it's everything—. It's always the problem with the time delay and data as to what comes out. From a trend point of view, it would have been a lot better to be able to use a third year's-worth of data, yes.
You identify four main areas of support for the school: additional funding, the challenge advisers, accelerating improvement boards, and the school development plans. How effective do you believe that each of those four were, respectively?
They were variously effective, I think, on different things, and I think, in a sense, it was the combination. I don't think any one of those on their own—. The accelerated improvement boards were actually pretty critical, but they were only effective where you had a mature system in place. So, again, the challenge advisers I think were the ones who actually moved things forward most, but if you talk to the schools, it was the funding, because, of course, what it was covering for many of them was revenue shortfall, staffing shortfall and the inability to send teachers on continuing development—. So, I think it was the combination at the time and the way it was used, rather than an individual thing on its own. Without the challenge advisers, though, I don't think it would've been effective.
Your conclusion is statistically robust, if you look at the range of performance and how it changed. The challenge advisers appear to have driven that and had some statistically significant correlation, if not potentially causation, but the funding doesn't have that. Are you saying that?
No, I don't think you can talk about the statistically significant impact of challenge advisers. The process we used is one that you have to—well, I wouldn't say you have to use, but in a situation where you've only got 39 schools, trying to identify the impact of any single intervention statistically is a bit challenging, really, and I think I would be taken up by any statisticians out there saying, 'You can't do this and you can't do that.'
But the approach we used tries, in a sense, to quantify the weight of the evidence and the quality of the evidence from various sources. So, the only statistical analysis that we did was with the pupil attainment data, and it was an experimental approach there to try and see what we could actually do to enable us to get better insight into progress for the schools. So, I think the weight of evidence—there's a strong weight of evidence, I think, on the qualitative side, and it's kind of borne out by what we see from the attainment for the young people, but you can't actually talk about a statistical significance of the challenge advisers.
The three baskets of schools—A, B and C—that you've divided them into: was the impact of the four different areas of support—did it differ between those groups A, B and C?
I think I ought to first say that those baskets of schools—when the schools were initially identified, there was no concept of a basket of schools. It was just something that came out of the evaluation. We did look at all of those things, and in fact I think it depended—. I think the difference was that for the schools in what we'd called the group A, the schools that had been showing a decline, one of the major issues for them had been about management of the senior leadership teams. In some cases, the role of the challenge adviser had been to go in and actually just change the senior leadership team. In some cases it was about developing them. Certainly by the end of the first year, a lot of those changes had been made. So, by the second year, a lot of that push on the senior leadership was going down much more to the middle leaders.
But I think in terms of actually trying to balance out how these varied in terms of their impact—the challenge advisers, I think, were critical for all, and the accelerated improvement board element of it worked better in some ways in the group C schools than in groups B and A, but that's only a sort of—. That's partially, I think, because group A schools adopted it and took it on wholeheartedly. When you had a change in the senior leadership, or you had a move in the senior leadership, actually they were as effective there. Because it was all the interrelationships, it wasn't just, 'If you have this type of school, this intervention works, and if you have this type of school, this intervention works.' It was much more about how that all worked together, and they were all on a trajectory moving forward. I'd find it difficult to say, 'If you've got this type of school, this is what you should do, and if you've got this type of school, this is what you should do', but they do need to be adapted. And I think all throughout the report we're talking about that.
So, the important area to distinguish was whether it was a school where you were needing to replace the main school leadership team or whether it was one where you were working with them to—
On where the priorities were, yes.
Focusing more on attainment, then, and impact on attainment, you found that all the Pathways to Success schools progressed academically in those two years, 2013-14 to 2015-16, but some of the attainment data doesn't bear that out, for example some of the level 2 data. So, how would you explain that?
Right. What we're not talking about here is just raw data, and looking at what the attainment of those schools was compared to other schools, or compared even to where they were the previous year. I think that was one of our challenges right from the outset—that each of these schools needed to be judged from where it was starting, and the context in which it was starting. Our analysis actually grouped the schools because, again, trying to do anything statistically significant with one school is not possible. So, what we tried to do was to take all of the schools to look, using a sort of hierarchical approach—again with 39 schools it's just about stable—to identify what were the key features that appeared to be, over the five years previously, associated with higher levels of attainment, to isolate those features, and then we put those into a conditional forecasting model. Basically, the forecasting model was saying, 'For all schools, this is what we'd expect. For these schools, what trajectory, based on their current cohort and the history of the previous cohorts, would we expect?' And what we were finding was, in most cases, attainment was on a par with, or slightly above, that which would be expected by that trajectory, just controlling for all those different features.
So, it's more to do with the groupings of the schools, and within those, we did actually run models for each school, but it gets very, very messy and not the sort of thing you'd want to put in a report. But in most cases, what we were seeing was, even if progress was slow to start with, they were improving by that second year. But we only had two years' worth of data, so trying to establish a proper trend is really challenging with that. It's still—. I wouldn't call them stable models, and you'll see in the report that the confidence intervals were kind of fairly large.
So, what you've just described is the forecasting model that you use. How robust is that then? Is it well established, the use of that sort of model in the way that you devised it?
It's very well established in other fields. It's very well established in economic modelling, econometric modelling, et cetera. And I think that's one of the things that we were exploring, because hierarchical modelling is very well established in education, but it only tells you what's been achieved. It doesn't enable you to look forward. And what we were looking at was to try and see—. Well, with these schools, we had explored, we'd looked at their back trajectory and, in fact, what we were seeing there was where they had been over the previous three years, prior to the introduction of Pathways to Success. Actually, if you look further back, some of them had actually been doing really well five, six years previously, but were now on a downward trajectory; some of them were fairly stable; and some of them were on the way up, but the three years wasn't sufficient to actually see some of that. So, what we were trying to see was, 'Okay, based on their past trajectories'—and we initially considered just looking at past trajectories—'are they on the same path?' But the trouble with the trajectories is that they don't take account of context. And for some of these schools, their rolls were falling, the proportion of free-school-meal students they were getting was increasing, various other contexts were changing, so we felt we needed to control for that. And a lot of economic modelling is based on conditional forecasting, and we had an econometrician in the team, so we basically just explored with that: how best to do it, what things to control for. We only had the information that was on the NPD, so we looked at which features came out of the hierarchical modelling as being associated with attainment, put those into the model and then saw what we could do with it.
So, I think it was very much an experimental approach, but it did actually show, even within that small scale of time, you could actually start seeing where the changes were beginning to occur. And even at the level 2, inclusive, that was actually showing that, particularly—I think it was this one I'm looking at, the different types of schools—schools in group A who were the ones that had been the lowest performing schools had actually made more progress than might have been expected, given both their context and their previous trajectory.
So, you could see some sort of pattern then for those three groupings of schools.
Yes, it was. I think for group A, they made more progress than expected. Group C performed as expected. Those were the ones that were already on the way up. The group B schools, although they'd moved closer to the all-Wales average, they actually made slightly less progress than was expected. And I think it just shows you that, in a sense, they were the schools that were stable, but well-below average, and often face, I think—. For a group A school, the ones who had the furthest to go, basically it's easier to move from a lower position. So, I think, over those two years, we've really got insufficient data to say, but we did see changes like that. And it varied according to subject as well. I think they all made more progress in English than expected, but not in maths, which was a bit more of a worry.
Okay, and in terms of this third year, I heard what you've already said, but is there anything more that you could say about the third year—any observations or—?
We've not been involved in it at all, so we've not actually had access to the data—we've not seen anything, so it's difficult. We use pupil-level data and not whole-school data.
Yes, just on the attainment, if I may. So, one thing that we know has happened nationally in the year following your work is that, obviously, there's been a significant opening of the gap between free-school-meal children's GCSE-level attainment. How easily would SQW be able to drill down into those individual schools, in terms of if you were commissioned to do a piece of follow-up work on this, to see whether the deprivation grant—or the development grant as it's called now—is actually still making a difference in those schools, or perhaps they weren't falling behind quite as much?
We weren't involved in the deprivation grant—
But for those 39, yes, we could because we know which ones they were—
Yes. We've got all the prior data; we've got all the modelling data.
So, you can pick up very easily and see whether that relationship in each of the groups continued into the future.
So, in terms of your expectations, you clearly obviously set expectations for each of the baskets of schools at the start of the evaluation. Presumably, you could have an expectation for those schools for that third year and then look back at whether they were realised or not.
Yes, it's not so much setting expectations as actually plotting trajectory, but it's the groups of schools together, because I think with the models themselves, as we said, even with those numbers of schools in them, you've got a very large confidence interval. We did it for individual schools, but it's kind of messy. But I think with the groupings, we could still do that. Yes, we've got all that information—it would be perfectly possible to do that kind of analysis.
The confidence interval means basically how confident we are that this is a statistically significant—
And that's what I'm struggling with a little bit, because clearly you were working in very narrow parameters really and the window for you to take a meaningful snapshot of the situation, maybe, was very limiting. So, how much can we really take from the work that was done?
I wouldn't base an entire policy on it, but I think what it does show was that—. With those considerations—. Because what we did is, we tried going back with data from three years and from five years, when we had access to data from five years—we didn't have any access to data prior to that. The five-year model actually was quite stable in the lead-up bit; it's the fact that we had only two years' worth of data post intervention, and I would never base anything on two years' worth of data. So, I think that's why we were saying that it was all moving in the right direction, but it's within—. We did not try and claim through this, but we were trying to put it alongside all the qualitative information that was in there, which was basically suggesting—. Because the teachers were saying, 'Well, we are seeing improvements in these areas and we're seeing improvement in that and we're not seeing improvement in this.' This was a means of saying, 'Well, if you're going to examine these things over a period of time, perhaps there are other ways of doing it rather than just looking at straight, raw data.'
Could I ask about entry patterns and whether there's anything you might be able to say in terms of vocational qualifications compared with GCSEs—whether there was any noticeable movement and, if so, what might have been linked in terms of 16 to 17 to different performance measures?
I'm afraid we didn't have that data. So, I can't really say anything about it.
Okay. You mentioned that there was a difference really in terms of what you were able to say in terms of qualitative and quantitative evidence. To what extent did qualitative evidence play a part in your evaluation?
It played a pretty important part really because we knew from the outset that, I think, when we set up the evaluation and were initially commissioned to do it, it was always known that a quantitative form was virtually impossible for this. So, it was trying to take the strength of a qualitative approach. We built logic models to actually say, 'Well, what was expected? What was going in? What might you see coming out of that?' and then used that to inform the model that we've included in here, which was to say, 'What were people really focusing on doing?' which was moving the senior leadership forward, teaching and learning forward, and pupil engagement forward.
Within that, we were then exploring the extent. We were talking to teachers and we were talking to senior leaders, we were talking to external partners, talking to challenge advisers, talking to local authority staff and talking to consortia staff about what their perceptions were of all of these things, so, what evidence they had, because the schools were obviously keeping a lot of detailed records—well, some of the schools were keeping a lot of detailed records, let's put it that way. So, how could they support the changes they were seeing? What evidence did they have to support that, so that we could then look at not just people's perceptions, but the weight of the evidence behind what they were saying, which is why we used—.
Contribution analysis is basically a theory-based approach that tries to look at if there are changes being seen and being observed, to what extent people and the evidence suggest it's to do with the contribution of the particular intervention, or whether there are other things that could explain it, which is why, when we talked to school staff, they often didn't say anything about the challenge advisers because they weren't aware of how much they were doing. We had to take that into account, so, yes, the qualitative information was pretty important.
Okay. If I could just ask you about a few particular impacts that you might have evidenced or noticed. Pupil attendance and behaviour: what would you say about the impact of Schools Challenge Cymru?
Limited at this stage, but a lot of the schools were saying that what they were seeing—. We didn't see any statistically significant improvements in attendance at all. We saw, I'd say, more things to do with unauthorised absence, which seemed to be going down. What the young people themselves were reporting on when we did the pupil surveys was that, over the years, we were seeing a reduction in truancy, self-reported truancy. It's surprising how honest young people often are about their levels of truancy—they generally are, although they always see their friends as being more prone to truancy than themselves. We saw a reduction in that, but what a lot of the staff were saying was that it was too short a period to start seeing it translating into statistics. You can bring one or two children back into the fold; you can get them engaging more and you can get them attending better, but that's not going to affect your overall statistics. They were all saying that two years was simply too short to actually see that happening.
The bigger challenge for some of them was that they'd used the revenue from Schools Challenge Cymru to bring on board support staff who were specifically working on improving attendance, going out and talking to young people, bringing them in and talking to their parents and getting them more involved. And, the funding for that, many of them were feeling that they weren't going to be able to fund that post Schools Challenge Cymru. So, in a sense, what they managed to achieve with that particular cohort was not necessarily going to be achieved with subsequent cohorts.
It was interesting because I know that was a focus of the intervention, but what most schools were doing with that was talking about pupil engagement, which, for most of them, ended up translating into pupil attendance and actually getting them into school. What was interesting, looking at what the pupils themselves said, was that there appeared to be some quite significant changes for some young people in terms of the level to which they felt engaged with learning, or engaged with the school over that period. But I would say that the focus for the schools seemed to be more on actually getting the young people into school and staying in school and attending classes, than on the more nuanced elements of that.
I think it was probably true in the schools where they'd brought in additional staff to do that and they were focusing on it. But it was almost a third element of what they were doing, so the first thing is making sure that your management is up to scratch, then that your teaching and learning is up to scratch and then about getting the young people. So, it's kind of a process, and I think I would find it very difficult to say that they'd made a significant impact on well-being.
Okay. And finally from me, could you tell us to what extent the funding was used to support the more able and talented learners?
Interestingly enough, I don't think any of the schools mentioned that during the interviews. They were all talking about either a whole-school approach or targeting, particularly targeting young people on free school meals. So, there was no—. In fact, we went through it and there was no evidence of anybody mentioning that.
Yes, clearly there are different aspects to the support available, from funding to the challenge advisers to the board as well. But I'd just like to focus on the challenge advisers, really, and just ask what impact you feel they had, and particularly, maybe, how schools responded to them.
I think, to be honest, they were pretty critical. But it really did depend on the relationship. So, I think in the first year, some of the schools, the relationships were a bit—not too good. They were a bit challenging, and I think it was partly because in some cases I think the initial match of expertise and need were not always as good as they became, without going into details on that one. And I think some of the challenge advisers in the first year had too many schools to deal with, and I think when that was sorted out, when they redistributed some of that, the relationships were pretty strong in most cases. I remember one school in particular where there had been talk of changing the challenge adviser, and the senior leadership team were up in arms, saying 'No way'. They valued the stability, they valued the expertise. So, I think where, in a sense, it was a bit like the accelerated improvement boards—it was that level of maturity, where the senior leaders saw the challenge adviser as a support and as a problem solver and as a critical friend and as an expert rather than as somebody that was coming in to threaten their leadership. It worked, and I think that was where that change happened over time.
So the relationship between the challenge adviser and the school was critical. The amount of time the challenge adviser spent in the schools was critical, because they needed to know—and their visibility in the school was really important as well. I think that was probably more important for the rest of the staff than for the senior leadership team because they need to see it as a journey of the whole school, not just the head. It was interesting talking to some of the schools, particularly in the first year, where the rest of the staff weren't particularly aware there was a challenge adviser involved at all, because the challenge adviser was working with the senior leadership team and possibly, in some cases, just the head, and so they were not particularly aware of them. But over time that changed as well.
And Schools Challenge Cymru was happening at the time, and the regional consortia as well were coming in line, and I was just wondering what the relationship was—whether the regional consortia were involved in any way with that. Because the roles are pretty similar, aren't they?
They're pretty similar except in the sense that the challenge advisers were selected in discussion with the regional consortia, and they had more time in the schools. In a sense it was about freeing up the consortia to focus on the wider groups of schools. Sometimes there were other representatives from the regional consortia on the accelerated improvement board, but it was really the challenge adviser was the critical role in there. They had a link officer as well, between the challenge adviser—
Because what I'm after here is whether there was any synergy or in fact whether there was any duplication in the role of the consortia and the challenge adviser, because the consortia themselves have challenge advisers, and I was wondering whether it was the same person or was there the potential for duplication there?
Actually, that wasn't something that we came across at all. We didn't come across duplication because it was more or less that the Schools Challenge Cymru challenge adviser was specifically for those schools to enable the regional consortia to focus on other schools. So, there didn't seem to be any duplication of effort—not that we came across.
There were occasional members of the regional consortia linking into the AIB, but—
Yes, it was pretty aligned. I think there was a lot of discussion locally and I think a lot of sense made of the relationships, yes.
On the accelerated improvement boards, you refer to them having an indirect rather than a direct effect. Can you just expand on that?
I think the reason we said that was because basically the accelerated improvement boards worked best when there was a mature system in place—when the senior leadership was feeling confident, with a good relationship with the challenge advisers, so that they felt it was more of a process for challenge and support, and problem solving. I think in some of the group A schools the accelerated improvement boards didn't achieve as much as they could have done because they were seen—. Often, senior leaders were a bit defensive, because they were seen as challenging their role in the school. They felt like somebody else was coming in, and generally, elsewhere, where heads were chairing those boards, and it was seen as a head-led thing, sometimes that wasn't happening in some of those group A schools, and the challenge adviser was being asked to chair it because the head didn't feel confident, or the head was in there supporting it, but, actually, almost rejecting some of the things that were being said, because it was a challenge to what they were doing. So, I think there was a level of—.
Where it worked was, there were some schools, even amongst one of the group A schools I remember, that were actually saying, 'This is a really useful thing because we've got people here who are there to support us—to challenge us, but to support us.' And they were actually having pre-meets, so they would run through the data, they'd run through what they were going to do, so that when they came to the meeting, they felt really charged up and able to do it. But it was very difficult for schools, for example, to start talking about pupil progress if they didn't have good tracking systems in place. And when you go to the accelerated improvement board, you're trying to present data but you haven't got anything there. You haven't got the data to support it. You're being asked to bring your middle leaders in to do things, and the middle leaders have got absolutely no idea because they've never been challenged on these things before. So, I think it was to do with—. That's why we say it was indirect rather than direct, because they didn't drive things forward. They supported and enabled things to happen.
From what you saw of the accelerated improvement boards, do you think that could be a useful model to apply to the education system more generally in Wales?
I think it could, but it does need that level of maturity and level of confidence, because I think if you put it into every single school, on its own, without support, and without the sort of direction, in a sense, that the challenge advisers brought to it, I think it would just fall down because it was the fact that the—. Where it worked really well was because the schools were in a position to accept it and they were being encouraged, they were being enabled to see how best to make use of it. And I think when we're talking about some of these schools, they really needed support to even diagnose what was wrong in their school. So, there were challenges there, but they didn't actually know what they were. They knew there were things that needed changing, but they weren't necessarily focusing on the right things.
So, to put an accelerated improvement board in place, you're relying on—. You've got people from the primary clusters in there. You've got people coming in from outside. It kind of exposes all your weaknesses. So, I think it's something that, in a more mature part of the system, I think would work very well, but it won't necessarily work with the schools in most need.
Thank you. I wonder if we might consider that as a potential role for the regional consortia.
Can I just ask you about the role of data there? You mentioned the fact that schools weren't always able to identify what the problems were. On a visit that a small group of the committee went to in north Wales at Ysgol Clywedog, which had obviously been involved in the Schools Challenge programme, they told us that one of the key decisions that the new headteacher took there was to invest in a data programme, essentially, to help track progress at the school. I think it was called 'Alps', but I may be incorrect.
Do you think that there ought to be perhaps a little bit more prescription, really, from the regional consortia, or the Welsh Government, about the sort of data tools and key performance indicators that need to be measured in order to be able to demonstrate the differences that are being made and achieved here?
I'm not sure if 'prescription' is the right term for it, because I'm always slightly concerned about prescription. But I think there is. The reality for a lot of these schools is that they simply didn't have the information technology systems, the data systems, to enable them to work out what was going on in their classes. And if you don't have those data systems in place to record pupil progress, however you're going to do it, then it becomes a self-fulfilling prophecy in a way. And there were quite a few schools that didn't have that, and that is what they were doing—they were buying in various things; some of them, I think, brought IRIS in. The trouble is that there are a lot of these systems on the market. They're not all relational. They don't all speak to your other bits of software. They're expensive and they need updating, and I think that's the challenge when schools are being asked to do all of these things, as an essential part of school improvement, if there is no investment. You've basically therefore got the schools that, in some areas, can do better in their data tracking and everything simply because they've got the right systems in place.
Yes. It was very clear to us during that visit that this was a key tool that they were using to make progress—to make the progress that they needed to make and to track individual pupils to see what they were expected to achieve, and if they were falling below that standard, to give the appropriate support. I accept what you say, absolutely, about the need for local decision making, but at the end of the day, if we're going to make sure that every school has some monitoring arrangements within those schools to be able to target support at individual pupils, then this really is the only way, isn't it?
On this, I was on that same visit with Darren and Llyr and was similarly struck by the importance of the data system that they had. Would you know: is there any reason why a particular system that works very well in one school in Wales, such as the one that we've heard about in the north Wales school, wouldn't work generally right across Wales? These systems aren't particularly reliant on local circumstances or what, historically, schools have had in terms of data systems.
I don't think there is, other than if schools have got things in place that they know work really well for them and don't necessarily want to change them. When you bring these things in, it's not just about introducing it—somebody's got to support it in the school, they've got to have the hardware, they've got to have the software, and they've got to have the tech support, and I think it's quite a major investment. In a sense, by the time you've identified one thing that works for one, others have moved on. I think that's the challenge now. If it was someone from the outset saying, 'We need to do data tracking. Right, we're all going to have the same system'—but you may well find some schools that will say, 'Well, actually, our system works perfectly well, thank you, and we've managed to customise it and do this, that and the other with it.' But it would be really worthwhile, I think, to find out what is being used across various schools and what does appear to be more effective or more customisable, which I think is also the need.
The Cabinet Secretary has made a decision now to bring the programme to an end or not to extend it. She says that it was always meant to be a short-term leg-up for those schools that needed it most, really. I'm just wondering to what extent you think, or how sustainable would you say is the progress that's been made by some of the Pathway to Success schools and what impact ending the programme will have on them?
I think for some of them, some of it is not going to be sustainable because it has been highly dependent on getting staff in or getting systems in, which there won't be funding for afterwards, because we didn't initially expect to identify groups of schools within this, because they were all selected in a similar sort of way. I think there were some schools that were struggling with major budget deficits and this enabled them to overcome some of that. Some of them were struggling with leadership, and so leadership has been improved—the things that can be put in place on those terms may well be sustained, but there are other things that I think are going to fall out of the window and I think a lot of the heads were concerned about that.
I think the bigger issue for most of the schools was that, when the funding came in—when moneys are made available and when school planning happens, they are not on the same trajectory at all. So, they were often finding that they were getting confirmation of budget input coming in towards the end of the summer term, and then one of the schools saying, 'We don't even know if we're going to be open in September.' But for a lot of them, what they were saying is that they start doing their planning in the spring term for the subsequent year, but they don't know what budget's available until the summer, and if some of that doesn't come in, then some of the plans they've made are just going to fall by the wayside, or there are plans that they could have made, if they knew that the money would have been coming in. So, I know I've gone off the point of sustainability, but I think there is a real issue there about tying those systems together—about when budgets become available and when school planning takes place, because it's not a neat fit at the moment. I think that's why some of the things here—. Some schools probably didn't go as far as they would have liked or could have gone, and therefore the changes they made won't be sustainable. One school, in particular, said that they didn't want to become reliant on funding that was not necessarily forthcoming, and so, therefore, were not as ambitious as they could have been. So, I think they made slower progress.
It may be difficult to answer this question, but how long do you think typically these schools would need in order to build up that sustainability within the system so that they could carry on?
I think nearer five years, rather than three, to be honest. I've done quite a lot of work in the Republic of Ireland and they talk about 10-year planning cycles. And if they're putting something in place, after five years they'll review what's going on, and they will either continue or discontinue elements of it. They just don't think that two or three years is sufficient and I think in this case it was based, in a sense, on two years' worth of data.
But they got what they got, so, how effectively would you say that funding was used?
I think it was used pretty effectively actually, as well as it could be. As I say, one or two schools, I suspect, were probably less ambitious than they could have been simply because they just felt, 'Well, there's no point in us planning this if the funding disappears, because we're going to be back worse than we were.' But I think, generally, the funds were well-used within the schools, and the schools made very good use of the challenge advisers to identify where best to put that. And they were putting it into things that they would hope would sustain them for the future. But the reality is that in some cases some of those things may not be sustainable.
So, some of that was more about cultural change than anything else really?
Yes, and it wasn't long enough to do that. It wasn't long enough to change culture, but, yes.
No, no, quite. But you could start on that journey, but then hopefully it would—
Hopefully some things will—. Some things I think will continue, but it will be within individual schools and it will be different things in each school.
So, what would your main recommendation be in terms of trying to make sure that some of that progress now continues beyond the end of this programme? Where are the strengths or what are the features that are most prominent in terms of the positivity coming out of this?
Well, I think the two most positive things about it, apart from the funding being available in the first place, were the challenge advisers and the accelerated improvement boards, even if I talk about indirect rather than direct, because I think those two work together.
So, is there a role there for the consortia then to step in and to maybe try and continue some of that work?
I think potentially, yes.
Thank you. Have you got any observations to make on how effective the programme was in reducing the gap between children on free school meals and children who aren't on free school meals?
I think a lot of the schools focused a lot of their activities on the young people with free school meals. In terms of the statistical analysis we could do, that level of differentiation—the modelling—we simply couldn't do it, because there just wasn't enough data over the two years. We did look at it, but I don't think we can comment on that. We can look at the raw data, but not in that modelling context. So, I think I'd feel loath to make a statement about that.
Okay. Earlier on, you said that the programme achieved better results for maths than for English.
No, for English than for maths.
I think that's generally the case, actually. We've been looking at the national literacy and numeracy programme as well and some of the initial progress in English was a lot more effective than in maths. I think there's a difference between key stage 3 and key stage 4 as well. So, I think key stage 4—. I couldn't tell you exactly why, because we weren't looking at them on a subject by subject basis within the schools—we were looking at whole-school progress—but I think all we can point to is what the data was showing really.
Okay. And is there anything else you'd like to bring to the committee's attention that would be useful for our inquiry?
I think, to my mind on this one, there is a need to look exactly at how schools are assessed and measured. It's not just about moving averages with attainment and I think that's why some of the challenges for some of these schools—. When we looked at these schools initially, we were struggling to work out why some of them were in that section, that category of most challenged schools, because their trajectory would have suggested something other than that. There were clearly other things in operation, but I think there is scope here for approaching the analysis of school performance in a way that is not necessarily done at the moment.
Okay. Well, can I thank you very much, on behalf of the committee, for attending this morning and answering all our questions? We very much appreciate your time. You will be sent a transcript of the Record of the meeting to check for accuracy, just to check that everything is as you've said it. So, thank you very much for your attendance this morning. The committee will now break until 10:55.
Thank you very much.
Gohiriwyd y cyfarfod rhwng 10:45 a 10:56.
The meeting adjourned between 10:45 and 10:56.
Can I welcome everyone back for the next item, which is an evidence session for our inquiry into targeted funding to improve educational outcomes? I'm very pleased to welcome Christopher Taylor, professor of education policy at Cardiff University and Cardiff co-director of the Wales Institute for Social and Economic Research, Data and Methods, and Robert Smith, research manager, National Foundation for Educational Research. Thank you both for attending this morning. If you're happy, we'll go straight into questions.
Can I just ask you to give a very brief overview of the evaluation that you did?
Yes. I'll start. Thanks for inviting me. Our evaluation was in partnership with Ipsos MORI. It began in April 2013; it was a three-year evaluation—it was extended into the third year. We undertook a survey of 201 schools. We had 22 case studies, which involved talking to teachers, practitioners, headteachers, parents and pupils, and some detailed analysis of education and admin data over a five-year period—two years preceding the introduction of the pupil deprivation grant and three years after the introduction of the PDG. The key findings were that, obviously, the attainment gap between free-school-meal kids and non-FSM children has closed over the time period that we were evaluating the PDG and that schools had been using their PDG quite effectively in terms of tracking and monitoring pupils, introducing new interventions, employing teaching assistants to support particularly vulnerable children in their learning. So, those kinds of activities had proved to be quite effective.
Two main concerns for me—I'll say those and we can go into more detail later. The first one is that schools tended to blur the categorisation of who was the target audience for the PDG. So, quite rightly in some respects, they considered other children to be in need who were living in poverty. For example, for every one child with free school meals, there are two children living in poverty. So, inevitably, they saw a wider audience for this. The second blurring was the blurring of the targeted intervention to mitigate disadvantage with the wider school improvement funding and grants and priorities. Consequently, schools were using their money to raise low attainment, not necessarily raising the attainment of all children who are from disadvantaged backgrounds, which is kind of a fundamental misunderstanding of the conceptual basis of the policy.
NFER's work was focused on the early years pupil deprivation grant and was a smaller scale piece of work, and I think we've got to bear in mind that that grant came in in, I think, 2015. So, when we looked at it in 2017, it had been paid for about 18 months to two years. Our focus was on how the grant is interpreted and implemented by practitioners looking at maintained settings and non-maintained settings across the board, whether it's being implemented as intended and then identifying emerging practice. I think what we found was that the settings are following what-works recommendations from the evidence we saw. They're accessing a range of sources of information and support, including work by the Sutton Trust and the Education Endowment Foundation, and they are being supported to do so to varying degrees by the consortia and the local authorities, also drawing on Welsh Government guidance and advice, and there is some evidence or emerging evidence of changes to systems and changes to the way settings are approaching their work.
A subsidiary piece of work was scoping an impact evaluation, and I think there's further work to be done on that. There are various statistical models that could be used to try and establish the impact of the early years pupil deprivation grant, but I think you've got to bear in mind that it's got to be commensurate with the amount of funding that this provides, and we've got to look at the monitoring systems that go into the pupil deprivation grant itself. We need to be reasonable about what we expect schools to be collecting and the level of data collection.
Okay, thank you. Can I just ask, Professor Taylor, was there a delay in publishing the final evaluation report, given that the year 1 report was published in October 2014 and the year 2 report was published in December 2015?
The evaluation is led by Ipsos MORI, they're the lead partner in this, so they will know more accurately whether there was a delay and why there was a delay. As far as I'm concerned, the evaluation was for three years—from April 2013 to summer 2016. We completed our part of the analysis in that last academic year, 2015-16. So, why the report wasn't published until autumn 2017—it's probably for a variety of reasons, many of which I have no idea of.
Okay, thank you. And can I ask Robert Smith for an update on when the evaluation of the early years PDG will be completed and finished, and whether you've shared any interim findings with Welsh Government?
The research has been completed. The final report is with Welsh Government for sign-off. So, then the next stage will be translating the report and it will be published.
One of the key things that comes through in a lot of the evidence that we've received so far is this blurring of the funding, as you described it, and that you can't actually say that money is being—well, rarely can you say that the money is exclusively being used for those eligible for free school meals, that it is trying to have that sort of impact on a broader group of pupils. I think you said 'rightly so'—maybe not—in outlining that issue for us. I'm just wondering whether it is more efficient to do it that way and, if it is, then how can you actually attribute any benefits back to the pupil deprivation grant because, clearly, a lot of people say it's just making up for deficiencies in core funding?
That's right. When we first decided to do the evaluation, I used to attend practitioner conferences on this, and I was very clear. I said, 'Look, if you don't target free-school-meals pupils, you won't close that attainment gap, which might then mean that the policy gets withdrawn, because you won't be able to demonstrate the impact that you need to have.' And many practitioners—headteachers—would respond by saying, 'Yes, but we know that there are other children who are disadvantaged as much as these FSM children.' So, actually, I undertook some further work of our own in WISERD looking at free-school-meals pupils and their characteristics, using a large-scale data set called the millennium cohort study. That very clearly showed that FSM children are those in disadvantage, but clearly they are not the only ones with those low levels of income and living in poverty. So, I understand why schools would broaden their reach, but I also recognise that, by doing so, inevitably, the attainment gap that they're trying to close is not going to close as much as it would if they were targeting it. It's a big dilemma. I remember teachers saying to me, 'This is a moral dilemma we have, a moral dilemma.' I would say, 'Yes, there is a moral dilemma. There's also a practical dilemma about whether you want to show the effectiveness of this intervention.'
I think the other blurring is the one that is more important, and that's the blurring of low attainment versus mitigating the impact of disadvantage. Because funding of schools is such that—. You've just had evidence about Schools Challenge Cymru. Schools are using this money alongside their other funding, their education improvement funding, to try to get economies of scale. Again, I understand that, but what they end up doing is targeting low-attaining children. Now, for every free-school-meals child who doesn't achieve level 2 thresholds at key stage 4—that's five GCSEs—there are four non-free-school-meals children also with low attainment. So, again, if you're prioritising attainment in your school, you would look at low-attaining children, and you'd look around the classroom and say, 'Well, all these children need this help.' Only one in four of those are going to be free-school-meals pupils, because the proportion of free-school-meals pupils is 17 per cent—it just means they're lower. Now, the problem with that is that that misunderstands the point of the policy. The point of the policy was to mitigate the structural inequalities that some of these children experienced living in poverty. It doesn't matter what their levels of attainment are; they can be high-achieving pupils, for all I care. They also ought to receive the benefit, because the argument is that they should be doing better than where they are now. And I think schools have not really grasped that, partly again because there are other priorities in the school, particularly for many schools about raising levels of attainment.
That was the point I was going to make subsequently: that focus is nearly always—and Estyn has backed this up as well—on low attainment and not maybe on the discrete needs of those individuals, some of whom will be more able and talented.
There's been some mitigation of that blurring, and the school categorisation in Wales, which requires schools to report the attainment gap and the school development plans, which now increasingly many schools use to report their attainment gaps, really focus the mind—and I'm not saying that schools aren't focused on this gap, but when it comes to the practical implementation of activities and policies and interventions, that's where it starts to blur a bit.
Yes. I just wonder—. The underlying objective of Welsh Government—isn't that to reduce inequality? And isn't that FSM-based policy coming from that root belief? And therefore, dealing with low attainment—isn't that, in that sense, consistent with at least the underlying aims of the policy?
That suggests that you've got two groups of children: free school meals and non-free school meals and the free-school-meal children achieve at this end of the threshold and these children achieve at this end. The reality is that there's huge overlap in levels of attainment. There are structural differences between them because the averages are different between the two groups, and it's only the averages that are different—there's huge overlap otherwise. There are many—and there should be—free-school-meal children who are doing very well at school. The point is that they are doing well despite their circumstances, and the policy, in my mind, was designed to address the structural inequality, which means that their attainment should be raised too. It doesn't matter whether they're average attainment or above-average attainment, they should receive the benefits of it too. That's the only way you address the structural inequality.
So you're making the argument then—not only welcoming the initial flexibility in the guidance but actually to take that further and to fully allow the use of it beyond—. Or maybe not necessarily to use the—
I think the guidance is to make it clearer that it's about children of all abilities. That's the argument we've been making throughout the evaluation.
Yes. Okay. Beyond attainment, although I suppose these other aspects I'm about to raise would have an impact, what about the use of PDG money on well-being and emotional resilience of pupils—some of the softer impacts?
Yes. I know the by-product of the focus on attainment is that it gets very narrowly used in terms of particularly literacy and numeracy and particularly secondary in terms of GCSE maths and English or Welsh. What that does then mean is that they're not looking at the broader needs of those learners, which may be social and emotional. They may be other kinds of things like aspiration—you know, in terms of realising their aspirations to go to higher education, for example. That would be an obvious example. So, yes, I agree with you. There are some schools that are doing some of that work. They're doing parental engagement activities with their money. But, again, if the priority in the school—. If the context of the school is, 'We have low attainment here and Estyn are knocking on our door and our local authority are knocking on our door saying, "You need to raise your attainment"', you will use your PDG to raise attainment, and that's the first starting point for the policy.
It's realised through the attainment gap. If you look at A* attainment gaps between FSM and non-FSM kids, that's hardly shifted, and that's really striking.
I think, with the early years deprivation grant, the focus of that was more on the softer skills. It was on working with parents, it was on children's communication, interaction with others. So, if you define 'well-being' broadly enough to encapsulate that, in the early years that's certainly a focus.
So, are we able to see any improvement then in that respect in the short time that—
Well, it's a very short time, but there are examples of more work being done with parents and so on.
And another measure is attendance as well, of course, which is another—
It is the only measure we have. My argument would be that we don't really have in Wales a systematic data collection of these kinds of softer outcomes, other than attendance. Attendance has improved clearly with the PDG. Attendance has improved generally across Wales, and that gap has closed for FSM and non-FSM children, particularly around persistent absence as well. It's still there, but that's the one that's closed the most. So, that's a good indication, but we don't know very much about student well-being in the classroom. We don't know much about student involvement in their learning. The only evaluation that's been funded in Wales to do that was the foundation phase evaluation, which we undertook several years ago now.
So, based on what you've said, then, are you saying that the guidance, which obviously was 2015, should be updated now to address these concerns that you're flagging?
There's always been this tension between—. I mean, we've highlighted it in the guidance. It used to say it was for free-school-meal children, and then it would talk about socioeconomically disadvantaged learners, because the problem is FSM is just a proxy for that in the first place. So, they're in a catch-22. Because they either say it's about free school meals, which is kind of a bit of an odd, arbitrary statement to make, but it is the proxy for the fundamental issue, which is socioeconomic disadvantage, so they end up putting both in the guidance material, and it allows schools—. It gives schools the free rein to move beyond the free-school-meal pupils, but in practice it's actually very hard to put interventions that just target individuals anyway. If you're in a school with only one child who's FSM, and you've got your £1,500 to spend on that child, you're not really going to just take that child out and give them one intervention on their own at a time. It's very hard. And if you've got a classroom of 20 out of 30, again, are you going to take the other 10 out? There are some practical implications for this, and inevitably there's going to be some, if you like, blurring of the boundaries of who gets the benefit. It's where the concentrated effort goes.
Do we get too wound up about these boundaries, then? Should we just relax a bit?
Well, then you have to argue: what are you evaluating and how do you evaluate success?
The fundamental thing is that they're bound to blur the boundary, because I understand why they blur the boundary, but as long as it's a realisation that all those children who are from socioeconomically disadvantaged backgrounds, whatever their levels of ability, ought to be the benefactors of it, not just the ones at the end of the low-attainment spectrum. That seems to be even more fundamentally flawed, just to target only those that happen to have low attainment. What about the ones who are C grade students at GCSE, who could be getting As and A*s and going to Oxford and Cambridge? They should be the benefactors of this.
Back to attainment, what would you say are the overall trends in the attainment of free-school-meal pupils over recent years? And to what extent can those trends be attributed to the pupil deprivation grant?
The evaluation showed very clearly that the rates of improvement for free-school-meal children, in the main, have been faster than those that are non-free-school-meals, which then leads to the closing of the attainment gap. The evaluation, though, also showed that some of those gaps were closing before the PDG was introduced. So, to some extent, schools were either directly or indirectly addressing some of these issues with their other focus on raising attainment, and that's an obvious point to make, going back to the argument that many of these children have low attainment to start with. So, if you focus on attainment, inevitably you will concentrate your efforts on those groups more than others. It's impossible for us to really be able to say, categorically, that this is directly linked to the PDG. As I say, closing the gap predates the introduction of it. There have been so many other policies and interventions in education in Wales over that time period. All I can say is that it does at least recognise that the cost of educating children varies according to their needs, and it therefore represents a key funding formula, if you like, to address a particular set of needs that the previous funding system didn't provide for, irrespective of what the outcome of it is.
And in terms of its impact for different ages of pupils, how would you say key stage 4 compares with earlier stages?
Yes. Certainly, the rate of improvement has been much greater at primary age, at the end of key stage 2, than it has been for key stage 4. You can see that in the evidence. Now, that might be because the PDG is, you might argue, being used more effectively in primary years. I suspect it's probably because the key stage 4 results are, if you like, dependent on your prior attainment too. So, if you like, the impact is exaggerated over time as you go through the education system. So, consequently, you might argue the PDG needs to be spent—. More of it needs to be spent later in the years to compensate for that accumulation of disadvantage in your learning. But a lot of the debate is about moving funding to the early years to address these issues at an early stage, which I understand too. But I think what we don't recognise is that, actually, these gaps grow after you start school. They're not strong at the beginning of school.
The analysis I talked about before about free-school-meal attainment of children in Wales that I did outside the evaluation clearly shows that, at age seven, the gap hardly exists. FSM status does not predict your levels of ability—cognitive ability—at age seven, but by age 11, they do. So, pumping your resources into early years, which I know has been a trend in England, Wales and Scotland over many years, has been for a right reason, because the economic argument is that, for every pound you spend early on, the greater benefits you get later. It forgets that the disadvantage continues after that intervention has happened at the beginning. So, you have to keep the intervention going, and I think that's why we see a slower rate of progress at key stage 4. The other part of the picture is, of course, that key stage 4 tends to focus on the thresholds, so grade C at GCSE, not at other grades. So, consequently, again, as long as you're getting children to grade C, your intervention ends and you move on to the next child, and I think that's a bit of a shame.
[Inaudible.] difference between key stage 1 and the lack of evidence—I was very interested to hear that—of any difference between free-school-meal, non-free-school-meal and key stage 2, may that reflect the different level of robustness of the assessments of those two?
Both are teacher assessments. At the end of the foundation phase and at key stage 2 they will be teacher assessments, but the analyses I'm talking about use cognitive assessments, which we don't do in schools, but which we do as part of the cohort study that I used the data from. So, they're not perfect and, of course, every child performs differently on a different day, so they're only a snapshot of somebody's ability, but these assessments are taken outside the school system, so I'd say these analyses are fairly robust.
The millennium cohort study.
UK-wide, there are about 15,000 children born in 2000-01. We've been following them every three or four years throughout their lives. We have about 1,500 in Wales.
And is this from a different series than the, I think, 1995, 1970 and 1958—
That's correct. This is the most recent birth cohort study we have, yes.
But it's not on the consistent basis of those three longitudinal studies?
No. They're all the same sorts of longitudinal studies. So, the last data sweep was when they were aged 15. They're currently collecting data now that they're aged 17.
I just wanted to follow up on this attainment issue, if I can. So, we've seen significant increases in the pupil development grant in recent years, and yet the attainment gap, particularly at level 4, seems to have increased quite significantly in the last academic year. Do you have any idea as to why that might be?
Since the evaluation was completed?
So, we didn't do any further analysis after that year.
I appreciate that. Were there any indicators that made you expect that?
The most likely scenario is because children were being entered for GCSE science instead of BTEC science in the last year. I suspect that has something to do with it. Some of the benefits we've seen at key stage 4 were because many children were not being entered for GCSE science; they were being entered for BTEC science. The analysis in the evaluation shows that the pass rate is much higher on a BTEC than it is on a GCSE and, consequently, those FSM children who are disproportionately being entered onto BTECs were looking like they were doing much better than their GCSE equivalents. Now, the policy's been to move away from BTEC science, and what you might have seen is a readjustment of the figures, but, of course, time will tell. I wouldn't take a one-year snapshot as an illustration of any structural changes just yet.
But in terms of the BTEC science, obviously, the non-free-school-meal pupils would have been doing that science subject as well.
Disproportionately, they would be doing GCSE science.
Although, in many schools, the whole cohort was doing BTEC rather than GCSE.
Darren, that's a useful point. I'd forgotten to mention throughout this that the analysis I'm talking about is of the national picture, not of the school level—
—and there are huge differences in the outcomes if we look at it on a school-by-school basis, and we might come back to that later. Whilst, yes, in an individual school the whole cohort might be entered for BTEC science, nationally the picture I'm looking at is that non-free-school-meal children are disproportionately entered for GCSE and FSM children are disproportionately entered for BTECs. And that difference, and changes over time in that difference, might account for some of the adjustments in key stage 4 attainment, I suspect.
And I just wonder as well what the impact is in the non-free-school-meals cohort of children, because there is some evidence that suggests that they are perhaps not doing as well as they ought to be. It could be perhaps because of the diversion of resources to the free-school-meals children, I don't know. But do you have any evidence on that front? We know that Estyn, the Programme for International Student Assessment and others have said, particularly, the more able and talented kids aren't doing as well as they ought to be.
I think it's important to ask that question, because you have to draw a line in terms of where you allocate your resources—resources are finite. You're basically saying, 'Are we disproportionately giving too much money to particular groups over others, and does that therefore affect their attainment?' The evaluation shows very clearly that, whilst the attainment gap was being closed over the time period, all the time, the non-free-school-meals pupils were doing better each year as well, so they were not doing worse. Now, the question is: could they have been doing even more? But then my argument would be: of course they would, if you give them more resources. But that's not the political decision we're making as evaluators, is it? At the end of the day, somebody has to decide how much of the resource, disproportionately from others, each group should have. And the decision has been made to give an extra £1,500 to free-school-meals pupils. One might argue that's the correct decision; some people might say that's not anywhere near enough.
When you say that FSM children are disproportionately entered for BTEC rather than GCSE in science, is that free-school-meals children as a weak proxy for attainment, or is it actually FSM that's the driver of people being entered, even adjusting for attainment?
I've no evidence for either of those. I think they are good questions to ask: 'Is it the low attainment or is it because of the labelling?' There's some evidence previously, not from this evaluation, about how some children who are from particular backgrounds are, if you like, labelled within the education system as being low achieving. Therefore, there's already an assumption that they should be entered for lower grade qualifications or alternative qualifications, irrespective of what their real ability could be on the day when they sit an exam. Predicted grades of FSM children, for example at A-level or GCSE, are much lower than their actual grades, when you compare that with non-FSM children. So, there is some evidence that decisions made in schools tend to downplay the abilities of these children.
Does that suggest it may be counterproductive to be identifying FSM children, even if we're giving more money on that basis if that's leading to—[Inaudible.]?
That's right. That's when you have to make that decision as to, 'Could having it be more detrimental than the advantages it brings?' At the moment, I'd suggest the evidence says very clearly that there are advantages to having it. But I think we should be mindful of the point that this could actually simply be pointing out to schools and teachers, 'That's a free-school-meals child; we wouldn't expect them to do as well as the other ones.' But that's only if you assume it's about attainment and low attainment. If this is about structural inequality, then I think you change the focus and you go, 'Actually, they do deserve it, and why shouldn't we target them? They should have extra resources, extra tuition—whatever it might be. Extra social and emotional support—whatever it needs to be, because they don't get that at home.'
You mentioned the fact that resources are finite, so it's imperative that we maximise the impact of this, really, and I'm just wondering what you can tell us about how effective regional consortia and local authorities have been in helping schools and early years settings to maximise that potential impact?
If I can start with that, the regional consortia would support schools as part of a broader package of support. There's very little difference between EYPDG and PDG—it's all part of the same support package. With the early years settings—the non-maintained settings, sorry—one of the things we found was that they were less aware of the various sources of support that were available, but they tended to rate the support they had a lot higher. And I think that highlights one of the issues here—that we need to look more closely at the extent to which non-maintained settings have been supported and what initial and continuing professional development opportunities are available to them. I don't think it's a question of capability, but it's certainly a question of capacity there, and that needs further research. I don't think our research will give you any conclusive evidence, but I think it's certainly an important avenue for further research.
So, is that raising their awareness of what's available as well as, maybe, encouraging consortia to be more proactive and reaching out to them?
Absolutely. It was the local authorities that supported the non-maintained settings.
Certainly, from our experience, the challenge advisers haven't really pushed this as a focus of their work with schools. Again, their focus is on attainment and school improvement generally, and the real danger—this is another example of the blurring—is that, if their role is about school improvement and raising low attainment, then by giving them the responsibility of encouraging and supporting schools to address socially disadvantaged learners, it will inevitably become about low-attaining free-school-meal children, not free-school-meal children generally. But that's okay, because, to some extent, there needs to be some additional advice.
I'd also suggest that governors have a key role in this too. The more people who scrutinise schools over how they use their PDG funding, and ask them the questions, 'Well, you did that last year, why are you doing it again this year? What was the success of that? What were the benefits of it?', then we'll start to create the evidence base within schools to decide what they should be doing to use this resource effectively, as you suggested.
But you still see the challenge adviser as playing that role, albeit they need to step up—?
I do see that, yes, without a doubt. Certainly, more could be made of their role. I think Robert, in your previous discussion, made the point, though, that, at the end of the day, it doesn't matter who it is, as long as they are equipped with the skills and expertise to either challenge or advise the school on what they're doing with their PDG. Challenge advisers aren't necessarily the right people, and they will themselves have to go and find out what they need to know to be able to advise schools appropriately. We talk a bit about the Sutton Trust toolkit and the Education Endowment Foundation's resources to help schools. The same would apply to the challenge advisers. If they're really going to go into schools and start helping them, then they need to understand what's in that toolkit, what's useful, what the evidence in research circles says would and wouldn't be effective, and how to apply it in the school—that kind of thing.
Certainly, there's nowhere near enough of that happening.
Okay. So, have you identified any variation in terms of regional consortia's approaches? Have some been more effective than others? Is that something that you can identify?
We didn't look at that specifically. But, obviously, there are three consortia that tend to work with a more centralised approach. The fourth consortium's work was led by the constituent local authorities. It wouldn't be fair for us to comment on that, to be honest, because it wasn't something we looked at.
So, you wouldn't be able to say whether one approach is more successful than the other?
That said, there are examples in the evaluation of consortia and local authorities working with clusters of schools, developing cluster-wide policies and interventions, which is, again, another good idea. We were not looking at the attainment gap below the national level. Now, you might argue that we should have done, and we did some preliminary work on school-level analysis to explore that, but it was outside our evaluation remit.
I think it's fair to say that there's good practice across the four consortia, but comparing one with the other—no, we wouldn't be able to do that.
And in terms of what you said about the role of governing bodies providing that level of challenge, in your experience, is there a good understanding of the PDG at governing-body level and are they providing that challenge about the use of it?
Our evaluation didn't interview governors, interestingly. But other work we've been doing, where we interview headteachers and governors about their schools, would suggest they're not familiar with it and it's not routinely reported to them in terms of the funding of it. I wrote a guidance document for the Cardiff Governors Association, about a year and a half ago, which was circulated to all governors in Cardiff, about the sorts of questions they could be asking of the school in relation to the PDG. That's a resource which—even though I wrote it—would be hugely useful if people were to read it, take it to governors' meetings and apply it.
And that's been our evidence as well. Yes, there is evidence that there are some governors and some governing bodies that are very well briefed on the PDG and the EYPDG, but it's not across the system.
I'm a school governor, and I know the challenge for governors, as well as the headteacher, is that the priorities are in an order. There's a list of priorities: your school could be in special measures; your school might have the lowest levels of attainment. Your primary focus is going to be on that. Attainment gaps will have to come second, and so there are—. But it's part of the developing story. The key message from the evaluation is that you can't judge this in just a short space of two years, really. You need to see the longer-term, sustainable changes that schools are making in terms of training and changes in the teaching practices of classroom practitioners, and what you can do to make this gap close permanently, not just temporarily.
As always, that's the right question to ask the researcher. What would be a reasonable amount of time? I'm one of those people who believes that letting things take a bit longer but making them more sustainable is the right solution. So, I'd be looking at at least five years from the development of this. What was really interesting about the evaluation, because it was a three-year longitudinal evaluation, was that we still saw schools changing their approach in those three years. So, they hadn't yet decided what they needed to do. Even in those three years, they were still learning. Many schools just put the money to one side and other schools took it to the forefront. It will take five years before they establish an understanding of what it's for, and then to embed it will take even longer, quite possibly.
We're going to talk about looked-after and adopted children now. Mark.
Just now I think you didn't want to comment on the regional consortia, but in terms of looked-after children and previously looked-after children who have since been adopted, my understanding is the regional consortia are responsible for the distribution of PDG. What sort of job are they doing on that?
Looked-after children were outside the remit of our evaluation. So, we didn't look at that in any great detail, other than to say that those schools that had relatively large numbers of looked-after children—and we're only talking about a small number of schools, admittedly—took that quite seriously in the use of their PDG. But we didn't look at the regional consortia's role in the allocation of that.
Are you able to say anything about the regional consortia in that context?
No. We didn't look specifically at that. The work we were looking at was more in terms of capacity building with providers than it was in terms of individual children. It was supporting their professional development, it was putting the systems in place. So, it was one step removed from targeting individual children, and, of course, that was partly because in early years there's no free-school-meals eligibility. So, it was a proxy indicator anyway.
You said at least with some schools you saw that they were putting a particular emphasis on looked-after or adopted children. Did that appear effective, to the extent you were able to discern?
We didn't come across any interventions that were targeted specifically at that group and their particular needs. Again, there are probably lots of other strategies and initiatives in place for those kinds of children, so it would be impossible for me to say on the basis of this evaluation.
Some of the data I've seen in terms of looked-after children as university entrants remain shockingly low. Is PDG something that should be one of the tools in the box to seek to address that?
I think, if you start looking at higher education participation, then you help address my concern, which is that this is not just about low attainment, because low-attaining children will very rarely go to university. If you don't make grade B or above at GCSE, your chances of going to university are very, very low anyway. You make reference to the rate of progress of LAC children into higher education—I've tried to find those figures. There are committee reports that cite them and Government White Papers that cite them. I've yet to find the source of those. I don't think that evidence exists yet.
I had some written questions, actually—I just got answers back last week—and they seem to show about a sort of 1 per cent, perhaps 2 per cent, proportion. I've gone back with a follow-up question to try and clarify if I've got the right answer, because I found it difficult to believe it was so low. I'd be willing to share those with you when I get them—
That would be great. That would be fantastic.
Will do, yes. Absolutely. This was a different evaluation, but we created a unique data set that followed every 15-year-old, whether they went to university or not—the first time we've ever achieved that in Wales; they did it in England two years before us. But it's precisely that: you don't know who doesn't go to university—you often know who does go, but you don't know who doesn't—so you can't judge the real rates of participation. So, this is an opportunity to do that.
If I can just clarify what I said earlier, the kind of activities we saw in terms of upskilling staff, early identification of needs, targeting individual children's needs—those kinds of strategies would obviously impact positively on looked-after children.
The longitudinal cohort study you've been using—can I perhaps look at potentially a different approach? If we look at the amount of PDG spending per school over the period for which PDG has been available, can we track that against the gap of free-school-meals versus non-free-school-meals, and see whether there's any statistical correlation between the change in the PDG and the change in that gap?
That's quite possible. You still have the problem in Wales that we have no control group, so we don't have any free-school-meal children who didn't get the benefit of the PDG, because, in effect, it's a per-pupil-funded thing. You can look at the scale of how much money each school might have got, so you can look at whether there are economies of scale. That analysis has not been done but, again, it could be done. But, as I say, we never looked below the national picture.
The longitudinal study would give you far more information about the children's lives, so you could look at the other factors that may mitigate their attainment levels, and that would provide that robust analysis. That would require some detailed work because we'd have to link the cohort to the school data.
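As an aside for readers of the transcript: the school-level check proposed in this exchange (tracking each school's change in PDG funding against the change in its FSM versus non-FSM attainment gap) amounts to a simple correlation analysis. The sketch below uses invented figures purely to show the shape of the calculation; as the witnesses note, the real analysis would also need linked school-level funding and attainment data, and a comparison group, before any causal claim could be made.

```python
# Sketch of the proposed school-level analysis, using invented data.
# delta_pdg[i] = change in PDG received by school i (pounds per pupil)
# delta_gap[i] = change in that school's FSM vs non-FSM attainment gap
#                (percentage points; negative = gap narrowed)

def pearson(x, y):
    """Pearson correlation coefficient, standard library only."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

delta_pdg = [10, 20, 30, 40, 50]            # hypothetical figures
delta_gap = [-1.0, -2.0, -2.5, -4.0, -5.0]  # hypothetical figures

r = pearson(delta_pdg, delta_gap)
print(f"correlation: {r:.2f}")  # strongly negative here by construction
```

A strongly negative r would be consistent with more PDG accompanying a narrowing gap but, as the answers above stress, without a control group of similar pupils who received no PDG, correlation alone cannot establish that the grant caused the change.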
But if it's not possible to discern a significant correlation between the change in the availability of PDG and the change in FSM versus non-FSM attainment, shouldn't that give us concerns about the effectiveness of the policy?
No, because there isn't a control group. So, you can't—. What you really want to have is a situation where you've got some children who didn't get any funding but had the same circumstances as the children who did receive the funding, and you compare the two groups.
But my question is this: the increase in PDG is orientated towards schools with lots of FSM pupils, and the hypothesis is that that spending and the interventions it can support should be closing the gap—is it?
In those schools—in those schools with large numbers, where they've got more money?
Well, surely, for the Welsh Government, the concern is to counter the lack of opportunity that comes from being in a poor family—and I emphasise we're speaking in generalisations rather than about every family. When we start directing PDG on the basis of whether people are on free school meals or not, should we not in response, at least over time, see some reduction in the attainment gap of free-school-meals pupils against non-free-school-meals pupils? And, if we aren't, isn't that a reason to question the policy?
You certainly see the gap closing. The question you're asking is: should the rate of improvement have increased significantly following the introduction of the PDG? The evaluation illustrates that that rate of improvement did not change dramatically in all outcomes. For some it did; in others it didn't. So, science improved quite quickly after the PDG but, as I've pointed out before, some of that might be to do with the fact that many FSM children were not being entered for GCSE science in the first place.
Yes. I'm interested in this link between spending and achievement and attainment. What evidence is there more generally that, if you spend more, you get better results in schools? Because it appears to me that some of the lowest-spending local authorities, for example, per capita, get by far some of the best results in Wales. So—
We're going way beyond the evaluation now—
I appreciate that, but you're an academic with lots of information at your disposal.
I know some things; I don't know a huge amount. But the argument in Wales has always been that there's evidence somewhere out there that shows that the amount of funding you have doesn't necessarily predict levels of attainment. PISA scores are used as a good example of that. But to extrapolate from those analyses to saying how much more attainment you should get for every £1 you put into a child's learning—it's really difficult to say, because the point about this group is that they have particularly challenging circumstances. They may all be in very, very different circumstances, and therefore you need to know what their needs are. They may need social and emotional help; they might just need extra maths tuition, because without maths tuition they can't do science. There's a whole load of knock-on effects. The key thing to remember, for me, is that it's not a linear relationship; it's a non-linear relationship. So, say we've got key stage 2 achievement where 80 per cent of children reach the expected levels, but 20 per cent don't—that's where we were before the foundation phase was introduced. The money to get that remaining 20 per cent to reach the expected levels will be far greater than it was for the previous 20 per cent who got there. The harder it is, the more money you need to put in. It's a non-linear relationship. That's all I can comment on it, really. It would be great to say, 'This is how much money you should spend on a school'; I don't think we really know.
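The non-linear point made here can be illustrated with a toy cost model. The functional form below (marginal cost proportional to 1/(100 − p), where p is the percentage of pupils already at the expected level) is purely an assumption for illustration; nothing in the evidence specifies it, but it captures the claim that each further percentage point of attainment costs more than the last.

```python
# Toy illustration of a non-linear funding/attainment relationship.
# ASSUMED functional form: the marginal cost of raising attainment by
# one percentage point grows as the remaining group shrinks, because
# the pupils left behind are the hardest to reach.

def marginal_cost(p):
    """Relative cost of raising attainment from p to p+1 per cent."""
    return 1.0 / (100 - p)

def cost(start, end):
    """Total relative cost of going from start% to end% attainment."""
    return sum(marginal_cost(p) for p in range(start, end))

first_stretch = cost(80, 90)  # ten points, from 80% to 90%
last_stretch = cost(90, 99)   # nine points, from 90% to 99%

print(f"{first_stretch:.2f} vs {last_stretch:.2f}")
# under this assumed model, the final nine points cost nearly three
# times as much as the previous ten
```

The specific numbers are meaningless; the shape is the point, and it is why a simple pounds-per-grade extrapolation from aggregate figures is unreliable.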
Obviously, we want to make sure that in these times of austerity there's value for money for the taxpayer, and there's been significant additional resource put into this particular policy pot of investing in the free-school-meal pupils against disinvesting in the non-free-school-meal pupils. On the basis of the evidence that you've given us, it's difficult to establish a direct link between—
Yes, but it would have been difficult anyway. Many of the schools with large numbers of free-school-meal children who were receiving the largest amounts of PDG were also getting Schools Challenge Cymru funding. They were also getting funding for other reasons. So, to differentiate it requires a detailed, forensic accounting analysis of a school's budget and levels of attainment—an analysis and a study that I would love for us to be able to do in Wales, and we've tried to seek funding for that in the past. It would be a great thing to do. Because many schools actually raise their own money as well. You talk about schools in areas with low levels of per-pupil funding; well, actually, you might find that those schools are very good at raising additional money, either through philanthropic sources or through their own fundraising through parents' associations, and that can mitigate some of that difference in funding. We just don't know. I would love to be able to say—and that would then help all of us round this table—that, in future, we know that £100 extra means you might raise a grade by one level.
Yes. You've already made it clear that you think the guidance should be updated, but I'm just wondering what other key lessons should we now take from the PDG so far in terms of recommending changes for the future. You've touched on a number of them, I know, but I was just wondering what the main themes would be.
It depends, really, from the policy makers' point of view, what they really want to achieve with this. Is this really about raising the attainment of free-school-meal children who are struggling at the tail end of the attainment range? In which case, as you were—carry on, because that's what this is doing. If this is really about addressing the socioeconomic disadvantage, the structural inequalities, that some of these families face, and our best proxy for identifying those families is that they're eligible for free school meals—there are other ways of doing it, of course—then you need to change your focus slightly. You need to realise that this is about filling in the gaps for those children compared to the non-free-school-meal children who do have those advantages, and then working out what they need. That might take you away from worrying about just literacy and numeracy attainment. It might also take you away from looking at just those with low attainment, towards looking at everybody across the distribution.
And I think that, as I said earlier, it's about clarity about the aims of this. It's about identifying good practice, identifying what works, recognising that each child has individual needs, and that very often you have to tailor your provision to meet those individual needs. So, 'one size fits all' isn't going to work. And, as I mentioned earlier, certainly with the non-maintained settings, there are issues there about initial and continuing professional development, and how the system supports the sector to deliver what we expect of it.
There's a huge amount of awareness gathering that needs to be undertaken amongst practitioners, advisers, governors, policy makers about what it is that funding should be used to do. The first thing is to identify what are the needs of those learners, whoever those learners are.
You can't guess what they are. You can't just go, 'Oh, I remember from when I did my teacher training that this is what we were told.' That's not good enough. Then, secondly, you need to know what the best interventions are to address those particular needs. There are lots of interventions out there—there are probably more interventions and things you can take off the shelf than there are teachers. But the evidence base for the robustness and validity of those interventions is, in the main, very weak, and schools have to either realise that in terms of what they take off the shelf or start collecting their own evidence to support the decisions they're making—and then recognise that they might also make mistakes. So, one year, they might not use something effectively, and I know that might upset people, particularly those holding the purse strings, but, if that means they make a better decision in the second year, surely that's a good mistake to have made—as long as they learn from it, and they need to make sure that they learn from those things.
Thank you. In terms of the fact that your evaluation goes up to 2015, has there been any approach by Welsh Government about doing further evaluation?
Not on the data analysis, no.
Okay, thank you. Are there any other comments or observations you'd like to bring to the attention of the committee, either of you, that would help inform our inquiry?
Just one other observation: as I said, we did some indicative work at school level. It's just important for the committee to note that, whilst the national picture might show the attainment gap closing, that doesn't necessarily mean the attainment gap is closing in every school. In fact, something called Simpson's paradox means that you can have a situation where the national attainment gap is closing but every school's attainment gap is widening. That is perfectly possible. As it happens, in the analysis we did, which was never published, half of the schools were closing their gap and half were widening it after the introduction of PDG, and yet the national picture showed a closing of the attainment gap. So, sometimes, this national picture might only be because there just happens to be one more school doing a good job compared with the schools that aren't. So, the variation in the impact of the use of the PDG at school level—and, as I say, this is only a tentative insight, because what you really need to do is look at it over a number of years, not just a one-year snapshot—would suggest that, actually, there's a very mixed picture behind that national picture.
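The Simpson's paradox scenario described here (every school's gap widening while the national gap narrows) can be reproduced with a small worked example. All figures below are invented purely to show the arithmetic: the effect arises when the mix of FSM pupils shifts towards the higher-attaining school between the two years.

```python
# Invented two-school example of Simpson's paradox: each school's
# FSM/non-FSM gap WIDENS between year 1 and year 2, yet the pooled
# (national) gap NARROWS, because the FSM population shifts towards
# the higher-attaining school A.

def pooled_mean(groups):
    """groups: iterable of (pupil_count, mean_score) -> pooled mean."""
    groups = list(groups)
    total = sum(n for n, _ in groups)
    return sum(n * m for n, m in groups) / total

# per school: (pupil count, mean score) for each group
year1 = {"A": {"fsm": (10, 75), "non_fsm": (90, 80)},   # gap 5
         "B": {"fsm": (90, 50), "non_fsm": (10, 60)}}   # gap 10
year2 = {"A": {"fsm": (50, 74), "non_fsm": (90, 80)},   # gap 6  (wider)
         "B": {"fsm": (50, 49), "non_fsm": (10, 60)}}   # gap 11 (wider)

def national_gap(year):
    fsm = pooled_mean(s["fsm"] for s in year.values())
    non_fsm = pooled_mean(s["non_fsm"] for s in year.values())
    return non_fsm - fsm

print(national_gap(year1), "->", national_gap(year2))  # 25.5 -> 16.5
```

Both schools' gaps widen (5 to 6, 10 to 11), yet the pooled gap narrows from 25.5 to 16.5 points, which is exactly why a closing national gap cannot, on its own, confirm school-level progress.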
Can I just ask a follow-up question to that? I find that very interesting, and it would be interesting to see that analysis by schools. Presumably, it would be much easier for you to do some analysis by local authority area and by the regional consortia area, which, again, might be very useful in terms of planning further interventions and so forth.
That's right. That would be equally valid. Again, though, just because a local authority might be widening or closing the gap doesn't mean that's what's happening in the schools either. Again, Simpson's paradox might apply.
I understand. Yes, okay. But is that information easy for you to disaggregate from your—?
We couldn't do it now, because the evaluation's completed and the data has been destroyed. We would have to make new requests and we'd have to get further funding for that.
Okay. Thank you. We've come to the end of our session. Can I thank you both for attending and for answering all our questions? You will be sent a transcript of the meeting to check for accuracy in due course, but thank you again for your attendance.
Thank you very much. Thanks, everybody. Thanks for your questions.
Item 5, then, is papers to note. Paper to note 1 is all the information that Carol Shillabeer agreed to send us on the Together for Children and Young People programme. Paper to note 2 is additional information from North Wales Police for our inquiry into the emotional and mental health of children and young people. Paper to note 3 is a letter from the children's commissioner to the Cabinet Secretaries for Health and Education, following the evidence session we held with them. Paper to note 4 is a letter from myself to the chief executive of the WJEC in relation to the availability of textbooks. Paper to note 5 is a letter from us to the Chair of the Health, Social Care and Sport Committee highlighting evidence that might help them with their inquiry on suicide. Paper to note 6 is a letter from us to the Chair of the Public Accounts Committee, and paper to note 7 is the reply from the Minister for Welsh Language and Lifelong Learning on the youth work inquiry follow-up. Members will recall that we've decided that we will go out to a targeted consultation on that letter, if everyone is still happy with that. Is everybody happy to note those papers? Thank you.
bod y pwyllgor yn penderfynu gwahardd y cyhoedd o weddill y cyfarfod yn unol â Rheol Sefydlog 17.42(ix).
that the committee resolves to exclude the public from the remainder of the meeting in accordance with Standing Order 17.42(ix).
Cynigiwyd y cynnig.
Motion moved.
Item 6, then. Can I propose, in accordance with Standing Order 17.42, that the committee resolves to meet in private for the remainder of the meeting? Are Members content? Thank you.
Derbyniwyd y cynnig.
Motion agreed.
Daeth rhan gyhoeddus y cyfarfod i ben am 11:49.
The public part of the meeting ended at 11:49.