One thing you might like to try with your standards (or indeed the RP ones):

Make a graph for a given course with:

x-axis = distance
y-axis = average speed required to make standard (i.e. distance (y) / time (s))

You should get a relatively smooth downward curve: as the distance increases, the average speed required to meet the standard drops.

If any of the standard times are obviously out of line with the curve, you should ask yourself why. Is there a geographic feature of the track that would cause it, maybe an extreme downhill or uphill section? Is it a round course / straight course issue? Is the distance incorrect (hello, 7f at Longchamp!)?

And best of all, if one of your standards is based on a small sample of races, can you extrapolate a better standard from the curve?

(You could just graph distance against time but it's a bit harder to pick out the subtleties).
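
If you want to knock this up quickly, something like this minimal Python/matplotlib sketch would do it - the sample figures here are a handful of the RP Newmarket (Rowley) standards used as an example further down, standing in for your own:

Code:
import matplotlib.pyplot as plt

# Course standards: distance in yards -> standard time in seconds.
# Substitute your own (or the RP) figures here.
standards = {1100: 57.6, 1320: 70.0, 1540: 83.0, 1760: 95.5, 2200: 121.0}

distances = sorted(standards)
speeds = [d / standards[d] for d in distances]  # average speed in yards/sec

plt.plot(distances, speeds, "o-")
plt.xlabel("Distance (y)")
plt.ylabel("Average speed required (y/s)")
plt.title("Speed required to make standard")
plt.show()

Any point that sits visibly off the curve is a candidate for the questions above.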
 
There is a similar thing in Mordin On Time, Gareth..basically a "standard" set of track times, and you measure deviation from that standard..doing the same as what you suggest there.
 
The RP standards are absolute nonsense - I stopped using them about 18 months ago and haven't looked back. Much prefer using my own.

It's also worth noting that a lot of Irish official times are bollocks. I have to hand-time most cards in Ireland, although as I don't really focus there aside from the big meetings, I tend not to bother - I just don't have the time.

Interesting about the graph Gareth, something I will definitely try with my own standards.
 
Tip for Irish times - use the free Timeform site. They appear to use their hand-timed clockings instead of the official ones when there's a discrepancy. Not sure if this has always been the case, but looking at the first Curragh card in March suggests it's certainly the case now.
 
The Betfair Timeform site? The times seem to be the same on there for the couple of cards I've just looked at, but it would be fantastic if there's a site with Irish hand-times on.
 
Using the RP standards for Newmarket (Rowley) as an example:

Code:
Newmarket (Rowley)
Distance (y)    Standard (s)    Avg Speed (y/s)
1100            57.6            19.10
1320            70.0            18.86
1540            83.0            18.55
1760            95.5            18.43
1980            108.1           18.32
2200            121.0           18.18
2640            148.5           17.78
3080            175.0           17.60
3520            200.0           17.60
3960            226.0           17.52
4400            254.0           17.32

Straight away there's one obvious problem: the standard times for 3080y (1m6f) and 3520y (2m) require the same average speed. This is clearly wrong - in standardised conditions you can't expect a horse to maintain the same speed over an extra 2f.

If we simply graph Distance against Standard Time, we get the following:

[Graph: Distance (y) vs Standard Time (s), with a linear trendline]


The line is a simple linear trendline. As you can see, it looks almost perfect - as the distance increases, the time required increases, and there's virtually no variance (an R-squared of 0.9999).
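
For anyone doing this outside a spreadsheet, a quick numpy sketch of the same linear fit, using the table above (for a straight-line fit, the R-squared is just the squared correlation coefficient):

Code:
import numpy as np

# RP Newmarket (Rowley) standards from the table above.
dist = np.array([1100, 1320, 1540, 1760, 1980, 2200, 2640, 3080, 3520, 3960, 4400])
time = np.array([57.6, 70, 83, 95.5, 108.1, 121, 148.5, 175, 200, 226, 254])

slope, intercept = np.polyfit(dist, time, 1)  # linear trendline
r2 = np.corrcoef(dist, time)[0, 1] ** 2       # R-squared for a linear fit

print(f"time = {slope:.5f} * dist + {intercept:.2f}, R-squared = {r2:.4f}")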

If we graph Distance against Average Speed (which is itself Distance / Standard Time), we get:

[Graph: Distance (y) vs Average Speed (y/s), with a power trendline]


Again, nothing looks much out of place. This time the line is a power trendline, and again we have a very high R-squared.

However, if we 'zoom in' on the above graph, by reducing the range on the y-axis, we get this:

[Graph: Distance (y) vs Average Speed (y/s), zoomed in by narrowing the y-axis range]


Suddenly some of the points seem a bit out of place. They're exactly the same numbers, and the R-squared is the same extremely high number, yet some of the points look a little 'out'.

We can see that no matter what curve we use, it will never make both the 3080y (1m6f) and 3520y (2m) points seem correct. One of them, or both, is wrong.

Let's assume that the curve is 100% correct (it's almost certainly not) and represents the 'real' standard times. The equation of the curve suggests that, for 2640y (1m4f), the standard should require an average speed of 17.92 y/s rather than 17.78 y/s. That translates into a time of 147.31 instead of 148.5. That difference of 1.19s is less than 1% of the RP standard time, but it's equal to around 7 lengths - a huge amount!
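
For anyone wanting to reproduce that extrapolation: spreadsheet power trendlines are (as far as I know) fitted by linear regression in log-log space, so a sketch like this should land very close to the 17.92 y/s figure above:

Code:
import numpy as np

# RP Newmarket (Rowley) standards from the table above.
dist = np.array([1100, 1320, 1540, 1760, 1980, 2200, 2640, 3080, 3520, 3960, 4400])
time = np.array([57.6, 70, 83, 95.5, 108.1, 121, 148.5, 175, 200, 226, 254])
speed = dist / time  # average speed in yards/sec

# Power trendline speed = a * dist^b, fitted in log-log space.
b, log_a = np.polyfit(np.log(dist), np.log(speed), 1)
a = np.exp(log_a)

implied_speed = a * 2640 ** b        # ~17.92 y/s
implied_time = 2640 / implied_speed  # ~147.3s vs the RP 148.5s
print(f"Implied 1m4f standard: {implied_time:.2f}s at {implied_speed:.2f} y/s")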

One caveat is to be very careful with outliers, particularly those which may come from very small samples. For example, in the case above I'm assuming that the 4400y (2m4f) standard is correct. But how many races are run over that distance on Good ground at Newmarket, from which the sample will have been pulled? For all we know, the last three standard times are completely wrong and are pulling the tail of the curve up when it should drop lower, which would make the 2640y (1m4f) and 3080y (1m6f) points look correct. The good news is that when you're testing your own standards, you know how much data has gone into each point, and you can weight and trust accordingly.
 
Interesting stuff ... except for the R-squared bits! Lost me :)

I could be completely wrong here as I don't pay much attention to standard times these days, but here's my neck stuck out.
The first thing to catch my eye... 12.4s for the extra furlong from 5f to 6f... does that not seem a bit high? Then, 7f to 8f is 12.5s.

It's not as if they move the winning post back up the hill a bit :) - the extra furlong is flat, is it not? Or is it a gentle rise?
 
In simple terms, the R-squared is in this case a measure of how close all the data points are to the trendline. It's a number between 0 and 1; the higher it is, the closer the trendline 'fits' the data.
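
For anyone who wants the actual calculation, it's only a couple of lines - a sketch:

Code:
import numpy as np

def r_squared(actual, predicted):
    # R-squared = 1 - (sum of squared residuals / total sum of squares)
    actual = np.asarray(actual)
    predicted = np.asarray(predicted)
    ss_res = np.sum((actual - predicted) ** 2)
    ss_tot = np.sum((actual - np.mean(actual)) ** 2)
    return 1 - ss_res / ss_tot

A value of 1 means the trendline passes through every point exactly; a value near 0 means it does no better than a flat average.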
 
Thank you, Statto! :)

And what do you think of the 5f - 6f split?

oh an orphaned post. don't like them.
 
You would expect each 'extra' furlong to look slower. It's not just the extra furlong that you have to consider, it's the fact that the horse has to run all of the other furlongs on top of it.
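
To put numbers on it using the RP table above: the 5f standard of 57.6s works out at 11.52s per furlong on average, but the step from the 5f standard to the 6f standard (70 - 57.6) prices that sixth furlong at 12.4s. The 'extra' furlong is run on the back of five furlongs of effort, so it will naturally cost more than the average of the furlongs before it.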
 
I think that going up to 12f is enough, Gareth..I personally don't bother beyond 12f..unless I really want to double-check some round course times..very rare that I do.

The graph would be more informative without the longer distances, for the reasons you give.
 
Most of the time you would treat the straight and round courses completely separately anyway - even in the above example, you could treat everything up to and including 10f (i.e. on the straight course) separately from 12f+ (which includes the turn). Sample sizes do tend to get awfully small after 12f or so, definitely.
 
The problem at Newmarket is that the final 1f is uphill..and part of the last 3f gets steeper downhill...so the 5f track is probably 20% uphill..30% downhill..and 50% flat..ish..and all that's only a guess of course

but the 10f track has only 10% uphill..10% downhill and 80% levelish

it's quite hard to put a ruler on it..whereas if you tried it on a flat track there shouldn't be any reason why your graph couldn't be adhered to pretty well.

another problem with Newmarket is there just aren't enough handicaps on Good ground to get a really good handle on it..I used many years' worth for Newmarket:)..too many stakes races lol
 
Mordin has his "universal" standard table..on a perfectly flat track with no bends, each distance should be run in:

5f = 58.1
6f = 70.5
7f = 82.9
8f = 95.7
9f = 108.8
10f = 122.2
11f = 135.6
12f = 149.0
13f = 162.4
14f = 175.8
15f = 189.2
16f = 202.6

comparing those to the RP Newmarket ones

Newmarket is..

at 5f...the deviation is 0.5s faster than universal
at 6f...0.5s faster
at 7f...0.1s slower
at 8f...0.2s faster
at 9f...0.7s faster
at 10f...1.2s faster
at 12f...0.5s faster
at 14f...0.8s faster
at 16f...2.6s faster

The RP 7f time and the 2-mile time look wrong if you use that as a reference.
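
If anyone wants to run that comparison themselves, a quick sketch using the two sets of figures quoted above (deviation = universal time minus RP time, so positive means Newmarket is faster):

Code:
# Mordin's "universal" standard times (seconds) by distance in furlongs.
universal = {5: 58.1, 6: 70.5, 7: 82.9, 8: 95.7, 9: 108.8, 10: 122.2,
             11: 135.6, 12: 149.0, 13: 162.4, 14: 175.8, 15: 189.2, 16: 202.6}

# RP Newmarket (Rowley) standards at the distances the two tables share.
newmarket = {5: 57.6, 6: 70.0, 7: 83.0, 8: 95.5, 9: 108.1, 10: 121.0,
             12: 148.5, 14: 175.0, 16: 200.0}

for f in sorted(newmarket):
    diff = universal[f] - newmarket[f]
    label = "faster" if diff >= 0 else "slower"
    print(f"{f}f: {abs(diff):.1f}s {label} than universal")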
 
Those kinds of course-specific variables - the uphill and downhill sections you describe - are exactly what you've got to take into account. Once you've controlled for every other variable (in as much as you can), the data should in some way reflect the effect those uphill/downhill bits have. And that's once your data is good in the first place, which is no easy task.
 
I think using Mordin's chart is OK..and your graph too..on certain tracks..or as a rough check on more awkward ones like Newmarket

as you say..in general you have to use the actual times in the first place to make proper standards..the race times remove those awkward calcs..but present their own challenges: using the right times..removing the slow-run races..and taking them from going as near to Good as possible.

alternatively..just take fast-ground times..which there are more of..then adjust those by a set amount to bring them back to Good ground.
 