The future of exams

We are now in exam season, with students all over the world sitting in exam halls with pen (and pencil) and paper, completing their GCSE and A-Level exams. Five years ago it was the same; likewise 10 years ago and 20 years ago. In fact, I suspect we could go back over 100 years and still see a similar scene of rows of students taking paper-based examinations. Isn't it about time we looked at a more modern solution to the need for terminal exams?

Computer-Based Testing – Challenges

One of the big challenges in any computer-based examination solution would be the requirement for schools and colleges to have large numbers of computers available for students to use in taking their exams. If we simply substitute the paper test for an electronic test, where all students across the country are expected to sit the same exam at the same time, I feel this problem will be difficult for schools and colleges to resolve, especially with core subjects like Maths and English.

As an alternative, we could allow students to take tests on their own devices; however, this is equally problematic, as students will not have equal access to equipment and in some cases might not have access to a suitable device at all. There would also be concerns about cheating where students use their own equipment. We saw some of these issues, particularly in relation to access to technology, during the pandemic.

Remote Invigilation or Proctoring

There is also a question as to whether we even need to gather students in a common location. Following the pandemic, where a lot of teaching moved to online tools and video, is it possible to use the same technologies to allow students to take their exams remotely, in their own time? I experienced this myself only a few years ago when taking a cybersecurity exam which involved remote proctoring, with someone monitoring my exam efforts via my web camera. This might be another option worth considering; however, the potential safeguarding implications would need to be thought through.

Adaptive Testing

The use of adaptive testing might be another solution, as in this situation students do not necessarily answer the same questions. Questions are selected from a pool, with the adaptive testing engine choosing subsequent questions based on how students perform on each question. Using adaptive testing, we wouldn't be as worried about all students sitting the same test at the same time, given that students wouldn't be receiving the same questions. As such, schools could use their available IT resources over a period of time to allow students to access the relevant tests. The challenge with adaptive testing, I suspect, will be convincing parents and students that it is fair. Fairness is easy to point to where all students do the same test at the same time, but not so easy where they are doing different questions at different times.
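
As a rough illustration of the mechanics, here is a minimal sketch in Python of how an adaptive engine might select questions. The difficulty scale, the question pool, the simple up/down ability update and the logistic response model are all illustrative assumptions on my part; real systems typically use item response theory and calibrated question banks.

```python
import random

# Each question sits on an arbitrary 1-10 difficulty scale (illustrative).
QUESTION_POOL = [{"id": i, "difficulty": d}
                 for i, d in enumerate([2, 3, 3, 4, 5, 5, 6, 7, 8, 9])]

def next_question(pool, ability_estimate, asked):
    """Pick the unasked question whose difficulty best matches the
    current estimate of the student's ability."""
    candidates = [q for q in pool if q["id"] not in asked]
    return min(candidates, key=lambda q: abs(q["difficulty"] - ability_estimate))

def simulated_answer(question, true_ability):
    """Stand-in for a real student: more likely to answer correctly
    when the question sits at or below their true ability."""
    gap = true_ability - question["difficulty"]
    return random.random() < 1 / (1 + 2.718 ** -gap)  # logistic response

def run_test(true_ability, num_questions=5):
    ability_estimate = 5.0  # start everyone from the middle of the scale
    asked = set()
    for _ in range(num_questions):
        q = next_question(QUESTION_POOL, ability_estimate, asked)
        asked.add(q["id"])
        # Nudge the estimate up after a correct answer, down after a wrong
        # one, so the next question chosen is harder or easier accordingly.
        ability_estimate += 1.0 if simulated_answer(q, true_ability) else -1.0
    return ability_estimate

print(run_test(true_ability=7))
```

The point of the sketch is simply that two students sitting "the same test" can see quite different questions, which is exactly where the fairness perception problem comes from.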

And do we need knowledge-based final assessment?

We also need to question whether there is still a need for the final assessment of students. For some students it is an opportunity to show all they have learned, but for others it is a massive source of stress, with a negative impact on their wellbeing. I have long been a supporter of vocational qualifications based on ongoing assessment throughout the course, rather than the heavily weighted final exams of so-called "academic" qualifications.

Additionally, in a world where we routinely use technology tools such as Google to search for answers and solutions, should we be considering how such technology might have a place in future exams, rather than banning such devices from exam halls?

Conclusion

I don't have an answer for this challenge; any change is likely to be difficult, especially after over 100 years of terminal exams. It is, however, noteworthy that a number of examination bodies are actively looking at and trialling alternative digital exam solutions.

Here is another example of where the pandemic has fuelled an exploration of future solutions. I suspect, however, it will be some years, maybe 10 or more, before any real change happens, although I hope it happens sooner.

Exams: Why should 1/3 of students fail?

Not so long ago I read of a discussion as to whether GCSE English Language should be scrapped. Part of the reasoning behind this is that the subject identifies a third of students as having failed. As a headline, that is difficult to disagree with: how can identifying a third of students as having failed be an acceptable thing to do? On reflection, my view is that this issue is less about the English Language subject and more about the educational system as it is now, and as it has been for over one hundred years.

I remember when I worked within an FE college and was involved in enrolment following the release of the GCSE results. A-Level and Level 3 BTec courses had clear admissions requirements in terms of the minimum number of Bs or Cs required to gain entry to each course, often including the need for at least a C in Maths or English. I also remember working with students on their university applications, post A-Levels, where once again universities had entry requirements which students had to achieve, perhaps three Cs to get onto their preferred university course.

The issue with the above is that a certain set of grades will gain entry while other, lower grades will not. It is therefore easy to perceive some grades as passes and, as a result, the remaining grades as fails. The education system as we know it is built on the ability to group students in terms of their ability, as described by their grades, and through this to identify the opportunities which will be available to them. As a result, independent of the U (ungraded) option, some grades, those that easily permit entrance to the next level of education, will always be perceived as passes and the remainder as fails.

An alternative is to have qualifications which allow all students to pass. From the headline point of view, improving from only two thirds of students passing to one hundred percent sounds logical and a success worth celebrating. The issue is that it is unlikely to result in any real change. FE colleges will still need to set requirements, meaning some passing grades will permit entry while others will not. Universities will also set their requirements, and again some grades will allow students to progress to the next level whereas others will see their applications fail to get them in.

The above alternative continues to be based on an education system where students pass through the system based on their age. Given this, there is a need to differentiate between students, hence grades are assigned based on their exams and coursework.

If we are to consider a system where all students are to achieve, we need to acknowledge that students learn at different rates. We therefore need to allow students to progress through education at different rates. These different rates of progress can then be used to differentiate students and identify when they are ready to progress to the next educational level. Again, this seems like an attractive solution, in that students either complete or can be considered as having not yet completed or achieved. They haven't failed, as the opportunity to complete always exists, available to them at a time that suits their learning and rate of progression. The issue here is once again perception, in that a view will quickly form as to what the expected rate of progression should be. This might be that by the age of 18 students will progress to university. Instantly, with this perception, the media will be able to quote the percentage of students who proceed on or ahead of this target, and therefore the percentage who do not. Again we have those who progress as normally expected, those who pass, and those who progress at a slower rate and therefore have not passed; those who are perceived to have failed.

I don't like the idea of one third of students failing. It simply doesn't feel right. That said, it is difficult to find an alternative solution that won't simply see us back in the same position a couple of years in the future.


Thoughts from the Bryanston Education Summit

I attended the 2nd Bryanston Education Summit during the week just past, on 6th June. I also went to the inaugural event last year, and I must admit to having found both years interesting and useful. The weather both years has been glorious, which helps add to the event and the beautiful surroundings of the school. Here's hoping Bryanston keep it up and run another event next year.

During the day I attended a number of presentations on different topics, so I thought I would share some of my thoughts from these sessions.

The first presentation of the day was from Daisy Christodoulou, who discussed assessment. She drew a really useful analogy, comparing preparing students for their exams with preparing to run a marathon. It isn't something where you can jump straight into a marathon distance on day 1 of training; you need to slowly build up your preparations, focusing on developing certain skills and approaches. You need to have a plan and then work to it, amending it as needed as you progress, should injury arise or due to weather conditions, etc. I found myself wondering how much time we actually spend with our students discussing this plan, the proposed goal of the subject or year, and how we will all, teachers, students, support staff and others, work towards those goals.

Daisy also spent some time discussing summative versus formative assessment, suggesting that the use of grades should be kept to a minimum of only once or twice per year. My first reaction to this was concern, as it seemed to disregard the potential benefits of spaced retrieval testing, which ultimately results in a score representing the number of correct answers. On further thought, my conclusion was that spaced retrieval is very focused on knowledge and simply indicates where an answer is right or wrong, as opposed to grading, which is more a judgement of a student's ability. As such, it may be possible to reduce overall summative assessment grading while still making regular use of testing of student knowledge. I think this also highlights the fact that assessment and testing are actually different things, even though they are often used as interchangeable terms referring to the same thing.

Mary Myatt was the second presenter; she discussed how we might make learning high challenge but low threat. As she discussed Sudoku, I couldn't help but draw parallels with computer gaming. In both cases we engage, of our own free will, in a form of testing, and in both cases the key is the low threat nature of that testing. For me the question is therefore how we make classroom learning and assessment low threat. Mary suggested a path towards this in discussing our expectations with students, such as setting reading outside their current ability level, which is therefore challenging, but telling them this and then promising to work through it with them in future lessons. I think this links to building an appropriate classroom culture and climate, such that students feel able to share the difficulties they face and work through them with the class. It is very much about developing an open culture and a positive, warm climate in which mistakes and difficulties are not seen as something to be feared or embarrassed by, but to be embraced, shared and worked through together. Another thing I took away from Mary's session was a list of books to read; my bookshelf will be added to with some of her recommendations shortly.

The third of the sessions which I found most useful was the one by Andy Buck. He discussed leadership, drawing a number of concepts from Thinking, Fast and Slow by Daniel Kahneman, a book which is one of my favourites. I particularly enjoyed the practical demonstrations in which he showed how we all exhibit bias in our decision making. This is a fact of being human and of the way the brain works: we bring to decision-making processes assumptions and viewpoints based on previous experiences, upbringing, etc. Linked to this, he also demonstrated anchoring, managing to influence a whole room of educational professionals to get a question about the number of Year 11 students in the UK wrong. Statistics suggest that a percentage of the audience should have got this question correct, based on a normal distribution of responses; however, using anchoring, Andy influenced the audience away from the correct answer. I have since used a very similar approach in a lesson with Lower 6 students, to show how easily I can influence their answers and to suggest that Google, Amazon, Facebook, etc., with their huge amounts of data on individuals, may be able to influence people to a far greater extent.

There was also a presentation on VR in education which has opened my mind up a little to the possible applications of VR.   This might therefore be something we experiment with at school in the year ahead.

Microsoft's Ian Fordham presented on the various things Microsoft are currently working on. I continue to find the areas Microsoft are looking at, such as using AI to help individuals with accessibility and in addressing SEN, very interesting indeed. I was also very interested by his mention of PowerBI, as I see significant opportunities in using PowerBI within schools to build dashboards of data which are easy to interrogate and explore. This removes the need for complex spreadsheets, allowing teachers and school leaders to do more with the available data with less effort and time. I believe this hits two key needs in relation to data use in schools: the need to do more with the vast amounts of data held within schools, and the need to do it more efficiently, such that teachers' workload in relation to data can be reduced.

I also saw a presentation by Crispin Weston on data use in schools. His suggestion that we need to use technology more, to allow us to more easily analyse and use data, is one I very much agree with. This partly got me thinking about the Insights functionality in PowerBI as a possible way to make progress in this area. He also talked about causation and correlation, suggesting his belief that there is a link between the two and that the traditional call that "correlation is not causation" is in fact incorrect. At first I was sceptical; however, the key here lies in the type of data. Where the data is simple and results in a simple linear trend line, the reliability of an argument that correlation equals causation is likely to be very low; the world is seldom simple enough to present us with linear trends. If, however, the data varies significantly and randomly over a period of time, and the second data element follows it closely, the reliability of the claim that correlation equals causation is likely to be significantly higher. I think the main message I took away from Crispin's session was to take data and findings with a pinch of salt and to ensure that context is taken into account. If it looks simple and clear, then there is something which hasn't been considered.
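
To make that distinction concrete, here is a small Python sketch using entirely made-up numbers of my own, nothing from Crispin's talk: two unrelated series that both happen to trend upwards produce a headline correlation just as high as a series that genuinely tracks another's random movements.

```python
import random
from statistics import correlation  # available from Python 3.10

random.seed(1)
n = 50

# Case 1: two independent series that both rise steadily, like any two
# quantities that grow year on year. High correlation, no causal link.
trend_a = [i + random.gauss(0, 1) for i in range(n)]
trend_b = [i + random.gauss(0, 1) for i in range(n)]

# Case 2: a randomly varying series and a second series that follows
# its ups and downs closely.
random_walk = [0.0]
for _ in range(n - 1):
    random_walk.append(random_walk[-1] + random.gauss(0, 1))
follower = [x + random.gauss(0, 0.3) for x in random_walk]

print(f"independent trends:     r = {correlation(trend_a, trend_b):.2f}")
print(f"random walk + follower: r = {correlation(random_walk, follower):.2f}")
```

Both correlations come out close to 1, yet only the second offers real evidence of a connection; the r value alone cannot tell the two situations apart, which is why a linear trend is such weak ground for a causal claim.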

Overall the day was a very useful one and the above is a summary of just some of the things I took away.   I must admit to taking 5 or 6 pages of tightly written notes, hastily scribbled on an iPad during the course of the day.

I hope that Bryanston decide to repeat the conference next year and, if the quality of presenters and their sessions continues, that it becomes a reliable yearly event. Here's hoping the trend of good weather also continues should they decide to run the summit again next year.


Standardized Testing

I have written a number of times about my feelings with regard to standardized testing. (You can read some of my previous postings here – Some thoughts on Data, Building Test Machines.) Having worked internationally in schools in the Middle East, I am particularly aware of standardized testing and the weight put on its results. Within the UAE there is a focus on ensuring that education is of an international standard, with the measure of this being the results from the PISA and EMSA testing regimes. As a result, individual schools and their teachers are expected to pore over the EMSA results and analyse what they mean. I feel that this focus on a standardized testing regime such as PISA is misplaced: how can we on the one hand seek differentiated learning tailored to students as individuals, while measuring all students with a single standardized measure?

As such, it was with great interest that I read the article in the TES titled "'Ignore Pisa entirely,' says world expert". The article refers to comments from Professor Yong Zhao, whom I was lucky enough to see at an SSAT conference event back in 2009. Back then I found Professor Zhao to be both engaging and inspiring as a presenter, with some of his thoughts echoing my own and also shaping some of the thoughts and ideas I came to develop. Again I find myself in agreement with Professor Zhao. I particularly liked his comment regarding the need for "creativity, not uniformity".

I feel the focus on PISA is the result of valuing what is measurable, as opposed to measuring what is valued. Measuring student performance in a standardized test is easy, with various statistical methods then allowing for what appears to be complex analysis of the data, apparently enabling us to prove or disprove various theories or beliefs. Newspapers and other publishers then sensationalize the data and create causal explanations. Education in Finland was recently heralded as excellent as a result of its PISA results. Teaching in the UAE was deemed to be below the world average, however better than in most other Middle East countries. Did PISA really provide a measure of the quality of education? I think not!

Can education be boiled down to a simple test? Is a student's ability to do well in the PISA test what we value? Does it take into consideration the student's pathway through learning, given the pathway differs from one country to another? Does it take into consideration local needs? Does it take into consideration the cultural, religious or other contexts within which the learning is taking place? Does it take into account students as individuals? Now, I acknowledge that it may be difficult or even impossible to measure the above; however, does that mean we accept a lesser measure such as PISA just because it is easier?

There may be some place for the PISA results in education; however, I feel we would be much better off focusing at the micro level, on our own individual schools, and on seeking to continually improve, as opposed to what Professor Zhao described as little more than a "beer drinking contest".


Some thoughts on Data

A recent article in the Telegraph (read it here) got me thinking once more about data, and about the book "Thinking, Fast and Slow" by Daniel Kahneman, which I have only recently finished reading. The book highlighted a number of issues which I feel have implications for education and need to be considered by school leaders.

Firstly, the small numbers effect. The Bill and Melinda Gates Foundation commissioned a study to examine schools in search of the most effective ones. It found, unsurprisingly, that small schools, in terms of student numbers, achieved the best results, outperforming larger schools. Contradictorily, it also found that small schools achieved the worst results. The reason for this, as explained by Kahneman, is that where a data set contains only a small number of items, the potential for variability is high. As such, due to a variety of random variables and possibly a little helping of luck, some small schools do particularly well, out-achieving big schools. Other small schools are not so lucky; the variables don't fall so well, resulting in the worst results.


To clarify this, consider throwing three darts at a dartboard, aiming for the centre. This represents the results of a school with a small number of students, with higher scores being darts nearer the centre and lower scores being darts ending further from it. In the case of student results an average would then be calculated for the school, and the same can be done with the positions of the darts. Assuming you are not a professional darts player, you may do well or you may not, due to a variety of random variables. Given the limited number of darts, the potential for variability is high, hence a high or low average is very possible. Next, consider throwing sixty darts at the board and taking the average across all the throws. Given the number of darts, the average will regress towards your mean darts-throwing ability. The increased number of data items means that variability is reduced, as each notably good or poor throw is averaged out among the others.
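
For anyone who prefers to see this rather than take it on trust, here is a quick simulation of the darts analogy in Python; the scores and spreads are illustrative values of my own, not figures from Kahneman's book.

```python
import random
import statistics

random.seed(42)

def average_score(num_throws):
    """One 'school': every throw comes from the same ability (mean 50,
    spread 20); only the number of throws (the sample size) changes."""
    return statistics.mean(random.gauss(50, 20) for _ in range(num_throws))

# Simulate 1000 small schools (3 throws) and 1000 large schools (60 throws).
small = [average_score(3) for _ in range(1000)]
large = [average_score(60) for _ in range(1000)]

print(f"3 throws:  averages range {min(small):.0f} to {max(small):.0f}")
print(f"60 throws: averages range {min(large):.0f} to {max(large):.0f}")
```

Even though every simulated school has identical underlying ability, the 3-throw averages swing far more widely than the 60-throw averages, so both the best and the worst results come from the small samples: exactly the Gates Foundation finding described above.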

Within schools, a great deal of value is attached to statistical analysis of school data, including standardised testing; however, care must be taken. As I have suggested above, a statistical analysis showing school A is better than school B could easily be the result of random factors such as school size, school resourcing and funding, etc., as much as it may be related to better quality teaching and learning and improved student outcomes.

Another issue is how we respond to the results. Kahneman suggests that we commonly look for causal factors; we seek to associate the data with a cause, which in schools could be any number of things, although our tendency is to focus on that which comes easily to mind. As such, poorer (and, less often, better) results are most frequently attributed to teachers and the quality of their teaching, as this is what is most often on the minds of school leaders. We arrive at this conclusion without considering other possible explanations, such as the variable difficulty of the assessments, assessment implementation, the specific cohort concerned, the sample size as discussed earlier, and a multitude of other potential factors. Having arrived so quickly at a causal factor which clearly must be to blame and therefore needs to be rectified, we also fail to consider the statistical validity of our data: the margins for error which may exist in it, and what we would consider acceptable margins for error. We fail, too, to consider a number of other factors which influence our interpretation of the data, including the tendency to focus more on addressing the results which are perceived to be negative. This constant focus on the negative can result in a blame culture developing, which can in turn lead to increasingly negative results and increasing levels of blame. Maybe an alternative approach would be to focus more on the marginally positive results: how they were achieved and how they could be built upon.

The key issue, I believe, is that we need to take care with data and the conclusions we infer from it. We cannot abandon the use of data, as how else would we measure how we are doing? Equally, however, we cannot take it as fully factual. The world is a complex place filled with variables, randomness and luck, and we need to examine school data bearing this in mind. We also need to bear in mind that data is a tool to help us deliver the best learning opportunities for students; data is not an end in itself!


Class sizes

This morning, before walking out the front door, I saw someone on a BBC morning programme suggesting that their political party's contribution to the education sector was a reduction in class sizes.

I find it interesting that class sizes continue to be considered a measure of how good a school or an education system is. In the case of the comments on the BBC, the person making them was equating smaller class sizes with an improvement in the quality of education.

Hanushek (1998) suggested that the linkage between smaller class sizes and improved student results was "generally erroneous". Kahneman (2011) went further, suggesting that the facts associated with such a claim were "wrong".

Kahneman's explanation (2011) was that the findings are down to statistics and what he refers to as the "law of small numbers". A small class is made up of a smaller number of students, which results in higher levels of variability in terms of the average. He uses an example of drawing coloured marbles from a jar to demonstrate this. Consider randomly picking marbles from a jar containing red and blue marbles: there is a higher probability of drawing out 3 of a colour (3 high achievers in a small class) than of drawing out 6 of a colour (the equivalent number of high achievers, but in a bigger class).
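
The arithmetic behind this is easy to check. Assuming a jar with equal numbers of red and blue, and sampling with replacement to keep things simple (a simplification of Kahneman's setup), a run of a single colour is exactly eight times more likely across 3 draws than across 6:

```python
def prob_all_same_colour(num_draws, p_red=0.5):
    """Probability that every draw is the same colour: all red or all blue."""
    return p_red ** num_draws + (1 - p_red) ** num_draws

print(f"3 draws: {prob_all_same_colour(3):.3f}")  # 0.250
print(f"6 draws: {prob_all_same_colour(6):.3f}")  # 0.031
```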


Within a larger class there is a greater tendency towards regression to the mean, and therefore a more stable and less variable average across schools.

The association of improved results with the extra teacher time, support, etc. that smaller class sizes bring is therefore unfounded. The improved results in schools with smaller class sizes are simply a feature of the statistical analysis of small sample sizes. Kahneman suggests that if the researchers were to change their question and ask whether poor results could be linked to small classes, they would find this to be equally true.

My feeling is that, between lower and upper limits, class size doesn't generally have a significant impact on student results. Where the ratio is 1 teacher to 2 or 3 students I would expect to see a positive impact, and equally at 50+ students I would expect to see a negative impact. Within the larger range, between 5 and 50 students, I would expect the impact to be minimal if evident at all.

Care needs to be taken with the use of statistics, and care has to be taken in believing them. As Kahneman explains, it is easy to create a causal explanation for why a given set of statistics, such as those on class size, makes sense. The ease with which a causal explanation comes to mind, however, doesn't necessarily make the explanation and resulting judgement true.

Sources

Hanushek, E. A. (1998), The evidence on class size, W. Allen Wallis Institute of Political Economy

Kahneman, D. (2011), Thinking, fast and slow, Penguin Books


Gaming

The subject of schools "gaming" league tables and performance measures such as Progress 8 has made the news recently, so I have decided to contribute my opinion to the mix. Before doing so, I need to be clear that I don't have any particularly strong views with regard to this issue; I therefore believe that my points represent a balanced viewpoint. I will, however, acknowledge that my assessment of my viewpoint as balanced is based on the context set by my own perceptions and the paradigms within which I operate as an individual. As such, from the point of view of those reading, including yourself, this may not be balanced after all. I make no apologies for this, as all I can offer is my opinion, which is never wrong in the sense that it is mine, formed from my own viewpoint and context.

Back on the subject of "gaming", the discussion seems to have two opposing viewpoints. One is that a school should try to offer its students the best opportunities for success in the future. As such, it is important to enable them to achieve as many successful qualifications as possible. These schools therefore look to enrol students in qualifications which, for minimal effort, return a successful qualification, such as the ECDL.

The other viewpoint is that schools enrolling students in bulk in the ECDL are doing so in order to influence league tables and performance measures such as Progress 8. Educators taking this position are of the opinion that these qualifications are of lesser value than other qualifications which take longer or are more difficult to achieve, yet have a comparable impact on league tables and other performance measures.

For me there may be truth in both viewpoints. If the studying of specific qualifications is in the interest of students' futures, then surely it is the correct thing to do. Consider two schools which are identical in outcomes, except that students in one achieve an additional ECDL qualification. Surely this puts the students who leave with the additional qualification in a more positive position. I myself worked in a school where we delivered the OCR National in IT to all students. The reason we did this was the vocational nature of the qualification, which suited our student cohort, plus the breadth of study and options available, which allowed us to accommodate individual student needs and interests.

Equally, there is truth in the other viewpoint, in that a school putting all students in for the ECDL qualification or the OCR National may have done so purely in the interest of achieving a better league table position than other schools. This may put students under stress where the qualification is additional, or may represent an unfair advantage where an "easy" subject has been substituted for a more difficult or more valued subject with an equivalent or near-equivalent worth in league table or performance measurement points.

Both viewpoints involve the identical action of batch-enrolling students in a given qualification, yet they result in totally opposing opinions. The key is not so much what schools do but why they do it. In one viewpoint it is about the students and the benefit to them, while in the other it is about the school and getting the best league table or performance measure result.

If OFSTED are to clamp down on "gaming", they are therefore going to have to try to identify why a school took the chosen action. How are they going to do this? How are they going to measure the "intentions" of school leaders? Are we going to start seeing OFSTED inspectors administering polygraph tests to school leaders?

I also feel this discussion has a lesser-discussed aspect to it: the value of differing qualifications. Debate has raged for some time over the value of so-called "core" subjects and the perceived lesser value of the arts and creative subjects. The new "gaming" discussion adds differing values in terms of the perceived difficulty level of a course, along with the time taken to deliver it, with shorter courses perceived to have lesser value. Who will decide the relative worth of each course, and the total worth of any individual student's curriculum of study?

We should all be working in the interests of our students, trying to provide them every competitive advantage with regard to Further Education or Higher Education options, options for employment, or, more generally, their future lives. A key part of this is the qualifications they achieve, so we need to get them everything reasonably possible. In teaching we use every trick in the book to try to make sure students are learning, and are ready and able to succeed in whatever assessment is required to achieve a given qualification. If this is "gaming", then maybe we are all involved.

Mandatory Testing?


As I was heading to work on Friday I heard a BBC news story regarding new proposals for "testing" of 4-year-old children at the start of their school experience. This immediately had me asking about the differences between assessment and testing. I am not sure there is a difference; however, I am quite happy to listen to anyone who is able to explain one.

For me, independent of the age of students, one of the first things I need to do is to "test" or assess them. I need to find out a little about them: the things they like, the things they are good at, and the areas in which they still need to develop. I have worked in secondary education, further education and higher education, and at each stage the first thing I have done with new students is to assess or test them in order to help plan their learning experience.

So this led me to ask why the story was so newsworthy. My first assumption was that it related to differing perspectives on, and definitions of, the terms "assessment" and "testing". It could be that some see the two terms as meaning the same thing, as I do, while others see each term as meaning something different. This differing perspective leads to debate around whether the proposal in question is a good or bad thing, and therefore to a newsworthy story.

Upon thinking on it further, and accepting the commonality of the two terms, I came to think that it is not what the two words mean or "are" which is the issue, but the reason why we undertake them. In the case of my testing at the start of working with new students, this is done because I know the benefit such testing will have in terms of providing the best learning experience possible. It is done because good teaching demands it be done. In the case of the news story, they are discussing mandated testing. The reason for mandating such testing may be linked to the reasoning I used in deciding to test; however, the fact it is mandated detracts from this.

The other issue is what is done with the results. In my example the data is solely for me, to inform learning; there needn't be a score or a rubric attached. In the case of mandated data collection, those mandating it want the data, which therefore requires quantifiable and comparable scores and grades, or at least we might assume this is the case.

Maybe we need to trust teachers more, rather than mandating what must be done, as the act of mandating something changes the activity being mandated!


Fitness Fail!

Day 7, and almost a quarter of the way through 29daysofwriting. I am actually quite impressed with myself that at this point I am still going. It's also Sunday, which means a bit of a relaxing day, including the wife's birthday, all finished off with #mltchat and #sltchat at the end of the day.

My posting today will focus loosely on assessment as a result of the below message which appeared on my phone this morning:

[image: the fitness warning message from my phone]

So although I may be doing ok at #29daysofwriting, my phone is unimpressed with my fitness levels. I have never been a particularly fit person, and recently I have noted how much I struggle in terms of the health and fitness aspect of my life. As such, this was something I was trying to build on, and up until this morning I felt I was making some progress; then my phone provided me with this assessment of my performance.

I liken this message to the large and often standardized tests which we give students. I would suggest that students may end up feeling as I did today: dejected, de-motivated, disappointed and disengaged, to name but a few words beginning with "de" or "dis".

Prior to receiving this message I thought I had been making progress, as daily I was seeing an upward trend in the amount of exercise I was doing, my measurement of exercise being steps taken as recorded by my phone. I had also built up a bit of understanding as to how my exercise varied over the week, noting that my worst performance came at the beginning and end of the week, peaking with my best performance in the middle.

[image: chart of my step counts across the week]

This brought the realization that maybe I would have to focus on the start and end of the week, with focused activities to improve my performance, whereas in the middle of the week, when things were going ok, it might be fine to continue as I was.

Again looking at students, this daily or regular feedback might be akin to assessment for learning, with assessment data provided frequently and students required to use the data to drive improvement. At least for me, this regular data did not dishearten or de-motivate as I attempted to improve.

This makes me think that it is important to consider the frequency of testing and assessment, plus how we frame feedback. I will admit that this isn't anything new.

The issue here, though, is how I can get back to exercising following the de-motivational impact of my phone's message. The good thing is I consider myself to be quite resilient, although I will leave that discussion for a later posting.