JOIN Rows Produced vs. Rows Read

It's common knowledge that the maximum number of rows a JOIN can produce is the product of the number of matching rows from each table. A three-table JOIN with 500, 10, and 1 matching rows can produce 5,000 rows (500 × 10 × 1). WHERE conditions will hopefully reduce this number so fewer rows are returned. While working on Optimizing Queries Case 3, I came across a situation where producing more rows caused fewer rows to be read. Rather than just feeling lucky, I wanted to see how this worked and verify I wasn't imagining it. I asked around and got no response, so I set out to prove it rationally if nothing else.
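The upper-bound arithmetic is just a product; as a quick sanity check in plain Python (nothing MySQL-specific):

```python
from math import prod

# Maximum rows a three-table JOIN can produce: the product of each
# table's matching row count (500 x 10 x 1 from the example above).
print(prod([500, 10, 1]))  # 5000
```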

Test Data and Plan P-T-S

   Table P
      p_id =  1 / 2 / 3 / 4 / 5 / 6 / 7 / 8 / 9 / 10 / 11 / 12
      t_id =  1 / 2 / 7 / 4 / 7 / 7 / 8 / 1 / 8 /  2 /  4 /  1
      s_id =  1 / 1 / 3 / 7 / 3 / 3 / 5 / 1 / 5 /  5 /  7 /  3

   Table T
      t_id =  1 / 2 / 4 / 7 / 8 / 10

   Table S
      s_id =  1 / 3 / 5 / 7
   
Rows are listed vertically (so P row 1 has values 1, 1, and 1) to save space. Each _id column is unique and is the PRIMARY KEY of its table. The original JOIN order from Case 3 was P-T-S: a full table scan on P, then matching rows in T and S using their respective PRIMARY keys. For this test data, this results in a join with 12, 1, and 1 matching rows, so 12 rows are produced. However, the number of rows read is not 12 but 36. Quoting from the MySQL manual section on EXPLAIN:
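The test data can be modeled as in-memory Python structures (just a model of the tables above, not MySQL). Note that every P.t_id and P.s_id really does have a match, so the join produces exactly one row per row of P:

```python
# Hypothetical in-memory copies of the test tables.
# P maps p_id -> (t_id, s_id); T and S are sets of their PRIMARY keys.
P = {
    1: (1, 1),  2: (2, 1),  3: (7, 3),  4: (4, 7),
    5: (7, 3),  6: (7, 3),  7: (8, 5),  8: (1, 1),
    9: (8, 5), 10: (2, 5), 11: (4, 7), 12: (1, 3),
}
T = {1, 2, 4, 7, 8, 10}
S = {1, 3, 5, 7}

# Every P.t_id and P.s_id has a matching row in T and S respectively,
# so the P-T-S join produces one row per row of P: 12 rows.
assert all(t in T and s in S for t, s in P.values())
print(len(P))  # 12 rows produced
```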
The tables are listed in the output in the order that MySQL would read them while processing the query. MySQL resolves all joins using a single-sweep multi-join method. This means that MySQL reads a row from the first table, then finds a matching row in the second table, then in the third table, and so on. When all tables are processed, it outputs the selected columns and backtracks through the table list until a table is found for which there are more matching rows. The next row is read from this table and the process continues with the next table.
Reading this carefully, the JOIN process for the P-T-S order reads the first row from P (p_id = 1), then finds a matching row in T (t_id = 1), then finds a matching row in S (s_id = 1). That's three reads: read from P where p_id = 1, read from T where t_id = 1, and read from S where s_id = 1. Stick these values together, output the row, and get the next row from P. In short: 3 rows are read for each row in P; with 12 rows in P this equals 36 rows read. The result set of this JOIN is identical to table P (in a perfect world where every P.t_id and P.s_id actually has a matching T.t_id and S.s_id). Notice also that there are 15 duplicate reads: rows p_id = 1 and 2 both read S.s_id = 1, rows p_id = 5 and 6 both read T.t_id = 7 and S.s_id = 3, and so on. You would think that for T and S, MySQL would read only the rows whose _id is in the distinct set of P.t_id or P.s_id values. However, the manual says this about eq_ref:
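The single-sweep process for plan P-T-S can be sketched as a nested-loop model that counts every row read (a simulation, not MySQL itself):

```python
# Single-sweep model of plan P-T-S: scan P, then do a PRIMARY-key
# lookup into T and into S for every row of P.
P = {1: (1, 1), 2: (2, 1), 3: (7, 3), 4: (4, 7), 5: (7, 3), 6: (7, 3),
     7: (8, 5), 8: (1, 1), 9: (8, 5), 10: (2, 5), 11: (4, 7), 12: (1, 3)}

reads = []                      # one (table, key) entry per row read
for p_id, (t_id, s_id) in P.items():
    reads.append(("P", p_id))   # table scan on P
    reads.append(("T", t_id))   # eq_ref lookup in T
    reads.append(("S", s_id))   # eq_ref lookup in S

# A duplicate read is any repeat read of the same (table, key).
duplicates = len(reads) - len(set(reads))
print(len(reads), duplicates)   # 36 reads, 15 duplicate reads
```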
One row will be read from this table for each combination of rows from the previous tables.
The wording is not singular, like "for each row from the previous table," but plural. For T there are 12 "combinations" of rows from P. For S there are also 12 combinations of rows from T and P; since p_id is unique, there are no duplicate combinations to reduce. Therefore: 12 reads for P, 12 for T, and 12 for S, totaling 36 reads.
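The same total falls out of the combination counting, without simulating the sweep at all:

```python
# eq_ref reads one row per combination of rows from the preceding tables.
P_rows = 12
combos_for_T = P_rows       # 12 combinations of P rows
combos_for_S = P_rows * 1   # 12 combinations of (P, T): each P row matches
                            # exactly one T row, and p_id is unique
total_reads = P_rows + combos_for_T + combos_for_S
print(total_reads)  # 36
```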

Plan T-P-S

After running ANALYZE on all three tables in Case 3, MySQL changed the JOIN order from P-T-S to T-P-S because it avoided a table scan and a filesort (that query uses GROUP BY, ORDER BY, and LIMIT). Although this join plan produced 10,000 more rows, it read 175,000 fewer. For this test data, it results in a join with 6, 2, and 1 matching rows: 12 rows produced (larger, more diverse test data would produce more) and 30 read. The result set of the JOIN is a little different because we read one row from T, then its matching rows from P (two on average), then one row from S for each of those P rows. It looks something like this (read T, P, S):
      1, 1, 1     2, 2, 1     4, 4, 7     7, 3, 3     8, 7, 5
         8, 1       10, 5       11, 7        5, 3        9, 5
        12, 3                                6, 3
   
That's 29 reads actually (T.t_id = 10 matches nothing in P, so only 5 of T's 6 rows lead anywhere) and only 8 duplicate reads (all on S).
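The same sweep model, now driven by T, reproduces the counts in the listing above (like the listing, it doesn't count the unmatched T row, though a real scan would still touch it):

```python
# Single-sweep model of plan T-P-S: scan T, find matching P rows via an
# index on P.t_id, then do a PRIMARY-key lookup into S per P row.
P = {1: (1, 1), 2: (2, 1), 3: (7, 3), 4: (4, 7), 5: (7, 3), 6: (7, 3),
     7: (8, 5), 8: (1, 1), 9: (8, 5), 10: (2, 5), 11: (4, 7), 12: (1, 3)}
T = [1, 2, 4, 7, 8, 10]

reads = []
for t_id in T:
    matches = [(p, s) for p, (t, s) in P.items() if t == t_id]
    if matches:                   # t_id = 10 matches nothing in P
        reads.append(("T", t_id))
    for p_id, s_id in matches:
        reads.append(("P", p_id))
        reads.append(("S", s_id))

duplicates = len(reads) - len(set(reads))
print(len(reads), duplicates)     # 29 reads, 8 duplicates (all on S)
```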

What About Plan S-P-T?

An over-simplified reason why plan T-P-S is better than P-T-S is that the initial set of keys, T.t_id, used to find all common rows is smaller than P.p_id. In other words, T.t_id is a smaller common denominator. The smallest common denominator is S.s_id. In Case 3, MySQL could not use an S-P-T join plan because it requires an index on P.s_id to be practical, and in that case there is no such index. For this test data there is one, which results in a join with 4, 3, and 1 matching rows: 12 produced and 28 read. This join looks like (read S, P, T):
      1, 1, 1     3, 3, 7     5, 7, 8     7, 4, 4
         2, 2        5, 7        9, 8       11, 4
         8, 1        6, 7       10, 2
                    12, 1
   
28 reads and 7 duplicate reads (all on T).
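Driving the same sweep model with S confirms the count for this plan (every S row matches, so nothing is skipped):

```python
# Single-sweep model of plan S-P-T: scan S, find matching P rows via an
# index on P.s_id, then do a PRIMARY-key lookup into T per P row.
P = {1: (1, 1), 2: (2, 1), 3: (7, 3), 4: (4, 7), 5: (7, 3), 6: (7, 3),
     7: (8, 5), 8: (1, 1), 9: (8, 5), 10: (2, 5), 11: (4, 7), 12: (1, 3)}
S = [1, 3, 5, 7]

reads = []
for s_id in S:
    matches = [(p, t) for p, (t, s) in P.items() if s == s_id]
    if matches:                   # every s_id matches at least one P row
        reads.append(("S", s_id))
    for p_id, t_id in matches:
        reads.append(("P", p_id))
        reads.append(("T", t_id))

duplicates = len(reads) - len(set(reads))
print(len(reads), duplicates)     # 28 reads, 7 duplicates (all on T)
```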

Summary

Plan: reads, duplicates
P-T-S: 36, 15
T-P-S: 29,  8
S-P-T: 28,  7
I tested plan S-P-T on the production server from Case 3: it reads about 12,000 fewer rows than T-P-S and 211,000 fewer than P-T-S. However, it's 86% slower than T-P-S because it causes a filesort again, though still 33% faster than P-T-S (T-P-S is 62% faster than P-T-S).