Columns | July 20, 2008 7:06

Chess and math: a happy couple?

Chess and math have always slept side by side. But are they a happy couple? I think every chess player has had the experience of being asked, in high school, whether their math grades were as good as their chess results. Sadly, for me the answer was often 'no'.

By Arne Moll

In fact, the reason I did so badly in high school math was probably ... my chess addiction. I spent so much time on chess that I completely neglected math (and other subjects). But, as the song goes, old habits die hard. And, of course, it's not unreasonable to suppose math and chess are related, or that results in both could be correlated. In fact, several peer-reviewed studies have pointed out the advantages of using chess as a teaching method [1]. Recently, a paper by John Buky and Frank Ho was published on the effect of an integrated math and chess workbook on pupils' math scores [2]. In this article, I will take a look at the results from a chess player's perspective, discuss some problems and give some suggestions for further research.

A critical look at the curriculum

The curriculum and the workbook are described in detail on the site of the Chess and Math Academy based in Chicago [3]. Examples are also mentioned in other peer-reviewed articles by Ho [4]. The idea is that learning about, e.g., algebraic notation and pattern recognition can be transferred to math concepts:

By working on mathematical chess puzzles, students get training on how to transfer chess knowledge to improve math ability. Since chess is a whole number based strategy game so it is important for students to get exposure to computational mathematical chess puzzles. [...] Algebraic notation learned in chess could be transferred to the concepts of coordinates [...]. The King's triangular shape of movement to create opposition in chess is an example on how the use of a geometrical shape would take a special meaning in chess. [...] One notable math knowledge learning in playing chess but not widely taught is the set theory. Chess players constantly use the concept of Venn diagram to look for interaction among pieces.

The first thing to note is that these relations between chess elements and their supposed mathematical mirror-elements are not intuitive for the chess player. Chess is not mathematics, and they work by different types of rules. Mathematics is rigorous; chess is not. In his classic Secrets of Modern Chess Strategy, John Watson remarks: "In chess, general rules will never have universal application [...]" [5]. In chess, unlike in mathematics, there are no absolute truths, as anyone who has ever tried to calculate a 'book' bishop sac on h7 will know. The famous chess grandmaster Richard Reti made the point in his Modern Ideas in Chess (1922) [6]:

What is really a rule in chess? Surely not a rule arrived at with mathematical precision, but rather an attempt to formulate a method of winning in a given position or of reaching an ultimate object, and to apply that method to similar positions.

Although making a link between chess and math teaching is very creative and interesting, it is in a way also slightly suspect. Haven't we all seen stereotyped commercials in which chess or chess pieces stand for 'strategy' or 'cleverness' (or 'nerdiness')? The link is also a bit forced. For instance, although I am familiar with the concept of Venn diagrams, I have, as a chess player, never realized that it could be linked to the interaction among pieces. In fact, even now that I know of a possible relation between the two concepts, I'm not entirely sure how exactly the two can be linked. Should I draw Venn diagrams inside my head next time I'm trying to play a game? Do my chess skills help me understand complex Venn problems better?
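To make the paper's Venn-diagram claim at least concrete, the closest chess analogue I can think of is the overlap between the sets of squares two pieces attack. The following sketch is my own construction, not an exercise from the curriculum:

```python
# Squares attacked on an otherwise empty board, modelled as Python sets;
# the "Venn" overlap between two pieces is then just a set intersection.
# My own illustration, not taken from the Buky/Ho workbook.

def rook_attacks(square):
    f, r = square  # (file, rank), both 0-7
    return {(f, i) for i in range(8) if i != r} | \
           {(i, r) for i in range(8) if i != f}

def bishop_attacks(square):
    f, r = square
    attacked = set()
    for df, dr in ((1, 1), (1, -1), (-1, 1), (-1, -1)):
        nf, nr = f + df, r + dr
        while 0 <= nf < 8 and 0 <= nr < 8:
            attacked.add((nf, nr))
            nf, nr = nf + df, nr + dr
    return attacked

# A rook on a1 and a bishop on c3 both attack a5 and e1:
shared = rook_attacks((0, 0)) & bishop_attacks((2, 2))
print(sorted(shared))  # [(0, 4), (4, 0)]
```

Whether drilling this kind of intersection actually teaches set theory, rather than plain board vision, is exactly the question the curriculum leaves open.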

I also doubt the authors' assertion that chess is a "whole number based game". The idea probably stems from the supposed absolute value of the pieces (a Rook is worth 5 pawns, a Knight 3 pawns, etc.). However, it has long been known that this approach is too simplistic. Most strong chess players will probably agree that a bishop is, on average, worth not 3, but 3.5 pawns. A knight is worth slightly less, perhaps 3, but it all depends on the circumstances. And as every chess player knows, the value of the King in chess is, in a way, "infinite". In my opinion, the idea that chess is a whole number game is, at best, simplistic. It is certainly confusing.

Buky and Ho are themselves aware of the non-obvious relation between chess and math and the possible effect on pupils. As they say:

The effect of transferring math knowledge learned in chess will be less significant if the chess teacher does not take the efforts of emphatically point out the math concepts. The task of transferring math knowledge learned in playing chess would be much easier if students are offered the opportunities to work on mathematical chess puzzles.

In other words, although the relation between the two concepts may not be grasped intuitively, with a good teacher and by doing 'mathematical chess puzzles', this non-intuitiveness could be overcome. The first condition is so obvious it's strange it's mentioned at all; the second sounds plausible, but of course, the puzzles have to be clear enough for pupils to understand the implicit relations between the two. Let's have a look at an example from the workbook.

Even disregarding the question whether one can simply attach values to chess pieces (in the example, a queen is probably worth 9 'pawns' and a bishop 3), I find this example quite confusing. A first practical question is whether we should fill in chess figurines or numbers in the blank boxes. And how are we supposed to interpret the mathematical operations? 12 minus a bishop equals a queen, but what is a queen plus a bishop? These may sound like trivial questions, which can be solved by proper instruction, but if these puzzles are not clear to me, a skilled chess player and a professional IT developer, then how must they appear to 4th graders?
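If the intended reading is pure value substitution, the arithmetic only closes under fixed whole-number values. Here is a sketch assuming the conventional values (queen 9, bishop 3; my assumption, as the workbook page itself does not state them):

```python
# One possible reading of the workbook's piece arithmetic, assuming the
# conventional whole-number values -- the very simplification disputed above.
values = {"queen": 9, "rook": 5, "bishop": 3, "knight": 3, "pawn": 1}

# "12 minus a bishop equals a queen": 12 - 3 = 9, the queen's value.
assert 12 - values["bishop"] == values["queen"]

# "What is a queen plus a bishop?" Presumably just 9 + 3 = 12: a number,
# not a piece, which is exactly the ambiguity the puzzle leaves open.
print(values["queen"] + values["bishop"])  # 12
```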

Here's another example where the same question arises:

To be honest, this example baffles me. I just don't understand what is going on, but I realize it could be just me. So I asked IM Maarten Solleveld, a professional mathematician at Goettingen University, Germany, if he understood these examples. He didn't. Moreover, he thought they were confusing for pupils, especially young ones. He writes:

I don't see the use of most of these examples. In fact, I find many of the exercises weak from an educational point of view, or even counterproductive. The authors clearly cannot imagine that it's possible to confuse kids with strange puzzles.

Before moving on to the research done by Buky and Ho, let me give one last example of one of their puzzles.

I think this example is especially noteworthy. In itself, making a link between chess and statistics is very original, although personally, I would rather be interested in a question like 'What's the chance my opponent is going to find this accurate reply to my bluff move?' than in the probability of two pieces meeting within a certain number of moves. More to the point, it's hard to imagine what use the exercise is, given that in order to solve the problem, the concept of probability has to be taught to pupils anyway - so why involve chess in it? After all, the answer (0) only makes sense if you know the difference between probability 1, probability 0.5 and probability 0. In other words, what is, in all these exercises, the added value of adding chess concepts to the puzzles?

The results of the research

This is also my main criticism of the research mentioned in the authors' article. Buky and Ho tested their ideas on 119 pupils using a paired t-test:

One hundred and nineteen pupils, in grade 1 to grade 8, from five public elementary schools in Chicago, Illinois, USA, participated in the after-school program for 120 minutes, twice a week, for a total of 60 hours of instruction. None of the students has possessed any substantial knowledge in chess. The study began by administering pre-tests in the first week of this study at the beginning of the program on 10/23/06 and a post-test was conducted at the end of the program on 3/28/07.

The test results were encouraging:

Group   Group One (pre-test)   Group Two (post-test)
Mean    36.46                  55.45
SD      15.82                  19.37
SEM     1.45                   1.78
N       119                    119

t = 12.8729

The results show that pupils performed much better on the second (post-) test, showing they learned a lot (36.46 vs. 55.45) in the period between the two tests (roughly 5 months). The authors conclude:

The results of this study demonstrate that a truly integrated math and chess workbook can help significantly improve pupil's math scores.
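For reference, the reported statistic comes from a paired t-test on each pupil's pre- and post-score. A minimal sketch of the computation, on made-up scores (the study's individual results are not published, so the numbers below will not reproduce t = 12.8729):

```python
# Paired t-test computed by hand on hypothetical pre/post scores.
# The individual scores here are invented for illustration only.
import math
import random
import statistics

random.seed(0)
n = 119
pre = [random.gauss(36.5, 15.8) for _ in range(n)]   # roughly the reported pre-test mean/SD
gain = [random.gauss(19.0, 10.0) for _ in range(n)]  # assumed average gain of ~19 points
post = [a + g for a, g in zip(pre, gain)]

# t = mean of the per-pupil differences over its standard error
diffs = [b - a for a, b in zip(pre, post)]
t = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))
print(f"paired t = {t:.2f}")  # a large t means the pre/post means differ significantly
```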

Discussion

This is, of course, good news for the Chess Academy's teaching method, and for the pupils making use of it. Teaching children with the chess and math integrated workbook actually improves their mathematical skills. So far, so good. There is a problem, however. Since Buky and Ho's emphasis is on the difference between their own method and "traditional computation practices", it would, in my view, have been much more relevant (and interesting) to do a control test where one group of pupils gets their math lessons by the chess and math method, and the other group gets "normal" after-school math lessons. After all, only by doing a pre- and post-test for both groups can one establish whether the chess and math method actually works better than the traditional method. This cannot be established with the research done by Buky and Ho.
As far as I can see, we're still left with several possible explanations for the results they got:

  • Learning how to play chess improves the pupils' mathematical skills anyway, regardless of the method or the workbook.
  • Since even without learning chess, pupils' mathematical skills are likely to improve over time, it's possible that pupils always perform better on a post-test conducted several months later.
  • Since the research was done in an after-school program, the pupils have in the meantime been learning a lot of math during school, making any progress in this area no more than natural.
What should we make of this? Because the traditional method was not compared to the chess and math method, I think it's too early to conclude that the new method is actually better than the traditional one. And this ultimately has to be the $64,000 question for the chess and math method. In theory, it's even possible that while the chess and math method scores well on the test, the traditional method (or no method at all!) would score even better. By that rationale, the current research does not yet show that the method actually does anything at all. [7]
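The weakness of the one-group design is easy to demonstrate with a toy simulation (all numbers invented): pupils who receive no special treatment at all, but simply improve over five months of ordinary schooling, also produce a large, 'significant' pre/post t statistic:

```python
# Toy simulation: a "significant" pre/post gain appears even with NO
# treatment, because pupils improve over time anyway. Numbers are invented.
import math
import random
import statistics

random.seed(1)
n = 119
pre = [random.gauss(36.5, 15.8) for _ in range(n)]
growth = [random.gauss(8.0, 6.0) for _ in range(n)]  # ordinary schooling and maturation
post = [a + g for a, g in zip(pre, growth)]

diffs = [b - a for a, b in zip(pre, post)]
t = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))
print(f"paired t without any treatment = {t:.2f}")
```

Only a control group receiving ordinary lessons, tested at the same two moments, would let one compare the two gains directly; that is exactly what the study lacks.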
Also, while it may be true that, according to Buky and Ho, "math and chess integrated work has visual images, chess symbols, directions, spatial relation, and tables; all these are stimuli to kids and keep their interests high while working on computation problems", it is not clear to me that the stimuli used in the curriculum actually contribute to improving mathematical skills. (Perhaps it was simply the teacher's explanations that improved their results, not the puzzles themselves.) A small, non-representative poll among chess players and professional scientists does indicate that adults find the puzzles as presented in the curriculum confusing and/or unclear. Perhaps this is simply a matter of insufficient explanation, but in any case, it's something the authors do not seem to acknowledge in their article.

A final point of doubt concerns the benefits to the pupils' chess abilities, as opposed to the benefits to their math skills. The authors are silent on this. In my opinion, applying simplistic rules like 'a bishop equals 3 pawns' certainly won't help children in chess. Such an approach will most probably backfire. The concept of probabilities, too, can hardly be of use in a practical game. There may be advantages as well, but unfortunately, these are not indicated by the authors.

In the end, we should credit the authors for mentioning the most important aspect of all education: "Children learn best while having fun". This is definitely true, and when the authors say that the children they work with enjoy their method, there's no reason to doubt them. Indeed, if we agree that chess is fun (and I hope all readers of ChessVibes will agree on that!), we should perhaps be able to integrate chess in a useful way into any curriculum, be it math or biology or English literature. But chess is surely not unique in stimulating children to perform better in maths. Who knows, draughts may work even better. Or poker. In any case, if children are confused by exercises, puzzles or chess figurines - even though they may enjoy them visually - it's perhaps too early to write traditional methods off. More research is needed.
And perhaps, math should just be math, and chess should just be chess.

References

[1] For a list of popular and scientific publications on chess and teaching, see the Chess Academy website.

[2] Ho, F. & Buky, J. (2008). The Effect of Math and Chess Integrated Instruction on Math Scores. The Chess Academy.

[3] Ho, F. (2006). Chess for Math Curriculum. The Chess Academy.

[4] Ho, F. (2006). Enriching math using chess. Vector, Journal of the British Columbia Association of Mathematics Teachers, British Columbia, Canada, Volume 47, Issue 2.

[5] Watson, J. (1998). Secrets of Modern Chess Strategy. Gambit Publications, p. 11.

[6] Reti, R. (1922). Modern Ideas in Chess.

[7] See http://www.encyclopedia.com/doc/1O87-onegrouppretestpsttstdsgn.html for a discussion of the one-group pre-test post-test design.

Comments

Frank Ho

Daan:

There is nothing to be shocked about. If you write off the one-way ANOVA report and experiment as "trash", then you basically write off the usefulness of one-way ANOVA, in this case with a small sample size.

Your comments are flawed in implying that just because we used a simple one-way ANOVA to conduct the analysis, it is not scientific. Even worse, you give the impression that we, being math teachers, do not know what we are doing.

Why don't you read my comments carefully?

I have said many times that we are only interested in one factor, and the factor we think influences the results most is how we teach and the teaching materials; this is how the learning industry decides if a method works. You can throw in many other factors and use more advanced statistical analysis, such as factor analysis, to factor some of them out, and we believe it would still show that the most important factor is the teaching method. But since I run the risk of being accused by you of just using scientific terms to present it like a "scientific" paper, I will stop.

If it were presented in another way, such as a general news item, then it would probably be labelled by you as not scientific; so either way we will not win your vote in any case.

Yes, it is a simplistic way of doing an experiment, but the purpose is exactly what we wanted to show. You can disagree with our conclusion by saying that it may not be because of the "special" teaching method, that there are many other factors such as "teachers", or "subjects", or "the length of teaching time"; you could be right in all these cases, but please do not say the report is not scientific.

As I have said many times here, factors have to be ruled out by experimental experience, so we ruled out other factors based on our professional teaching experience. Can you imagine running an ANOVA with 30 factors? Your results may sound scientific, but your factors are so mixed and influence each other that your report is not accurate any more.

I would also caution you that, being a parent and knowing my children have problems with math, if the results show a difference then I am interested, since my children were already at risk (that is, in the control group). But of course, if you are interested in a comparison (to other methods) or in children's progress without this program, this research will not tell you; it has limitations.

If we focus our attention on one very simple question, whether the teaching method shows any improvement, that is all we are asking. There is no need to quote any famous people's sayings to me as you did.

What we are doing here is answering one simple question: we claim the Ho Math and Chess teaching method works, so where is the proof? We answered it by conducting a proof, and that is it.

For your information, there is also a third-party report recently published by the state of Illinois; it runs more factors, also had control groups, and its results confirm our findings. It listed many other "service providers" and ranked their math improvements, and surprisingly, some "methods" or service providers do not show math improvements. So please do not take the simple view that just because children put in time, they will of course "naturally" improve their math.

Just because children go to school on weekdays, of course their math will improve "naturally" - do you agree with this kind of statement?

Frank Ho

I read the report quoted and the hint that it is a model paper we might want to take a look at. But do we want to go to that kind of scale to conduct an experiment, when what we wanted to know is simply whether the "Ho Math and Chess" teaching materials and worksheets made any difference before and after taking the program?

Please do not think this teaching system is just teaching math and then playing chess in the math class. The paper "The Effect of Schooling and Ability on Achievement Test Scores" provides a much more complicated model and conducts analyses to compare two inconsistent conceptions of ability and scholastic achievement; so with our simple view and a simple interest in finding the difference between pre- and post-tests, we should perhaps not have gone through that kind of large-scale study to find it.

Frank Ho

Daan:

You have presented a case which is very classical in statistical research: you can correlate just about anything and perhaps get some correlation out of it, but does it make sense? So please stay focused on the topic.

What bothers you (it seems to me) is that you do not agree that the causer of the impact on children's math marks is the "teaching system". As I have said, we can agree or disagree on this; there is no need to continue to imply that we do not know what we are doing.

It is quite "normal" that when a finding is presented, many researchers may not agree with the findings or results, so you should not be so "shocked" about it.

Take the example of the research paper published by the Office of Extended Learning Opportunities, Chicago Public Schools, entitled "The 2007 Supplemental Educational Services Program: Year 4 Summative Evaluation". One of its findings shows that different SES programs have different impacts on students' math performance, but why?

Page 21 states the following:

"little is known about the actual implementation of SES and how characteristics of programs relate to program impact". In this study, they also studied other causer factors such as gender, disability, grades, race, etc., but it still points to the need for further study on the characteristics of programs and how they relate to impact.

So again and again, I have pointed out to you that you can go by theory, raising many questions and insisting that the model should be complicated enough to encompass all possible factors, but you also have to rely on experience to rule out some factors.

I think the more meaningful discussion is not to dwell on whether you agree with our model, which has only one independent variable. The more meaningful discussion should shift to which teaching method seems to have the greater impact on students' performance, so that parents or students can choose the preferred teaching method for after-school math programs.

I found the 2007 paper useful and was quite surprised that many mainstream after-school programs such as Club Z, Huntington Learning, Sylvan and Princeton Review did not even rank in the top half of providers with over 15% performance (read the results on page 18 for details). Many of these after-school programs can be found on the internet by searching for the keyword "math learning center". Why did these well-known math learning centers not rank in the top 50% of all SES program providers? This would be an interesting and more productive subject to talk about.

Once you get into this kind of discussion, you can start to find out what the causing factors are, which may be the most influential, and which factors you might want to include in the model.

Why does our program seem to have a better percentage of performance than others?

Daan

Frank,
with all due respect, your calculations do not scientifically show that your method "works". Not every analysis can simply be done with a one- or two-way ANOVA. Sometimes you simply need a larger experimental setup and more complicated models to show an effect. Your education program is such a situation. You cannot simply say: "It is too complicated and costly to do a complete and correct analysis, that is why we settle for something simpler."
It is like saying: "we don't know how this saw works, that is why we try to use a hammer to get this tree down."
With an incomplete setup and the use of solely exploratory statistical techniques you cannot make any statements about causality (and that is what you do). Causality and correlation are not the same. Otherwise you could, for example, say that people's hearing deteriorates when they use hearing aids, since these two variables highly correlate! (Although this is not exactly what you do, I like the example.)
Maybe you should watch this, since it's fun in any case.
http://www.ted.com/index.php/talks/michael_shermer_on_believing_strange_...
Ok, I'll leave it here. Good luck with the teaching.

arne

Hi Frank,

You keep saying that we misunderstand your position, but it's been the other way around since the beginning of this discussion.
So, for the last time, I ask you to consider the following: we're not saying the study can't be qualified as 'science' because you didn't include a control group, and we're also not saying that the method or the research is worthless - all we're saying is that your conclusion "that the math and chess integrated method did significantly increase the math scores for children" simply doesn't follow from your research. That's really all! And any statistician will tell you that this is not our personal perception, nor our bias, but a simple fact of statistics. The reason we all react so strongly is that you, despite your 15 years of statistical experience, somehow refuse to admit that we have a point. It's very frustrating.

Arne

Frank Ho

My reply to Eric.

"The point is that your study simply does not show that your teaching method 'has made any before and after difference.' What your study *does* show is that there *is* a 'before and after difference', but we simply cannot say what caused this difference. And since it is theoretically possible that some of your pupils would have performed even better using another method, I think this makes the issue of causality particularly relevant, even for customers who are not interested in the underlying mechanism, but who *do* pay good money for a method that (supposedly) works."

My reply is as follows:
We made the hypothesis that there would be no difference between pre-test and post-test, but the test showed a statistically significant difference, so we rejected our hypothesis. The cause of it is the treatment (program) we engaged in, and the result is the measurement of math scores. To go further and say that some of the pupils would have performed even better using another method, you need to prove it to us, not question our result.
We did not claim our method is better than other methods.

You wrote:
Consider the situation where somebody offers a new medicine, and says: we have a study showing that after taking this medicine, 60% of the patients feel a lot better. You might think, well, that's good news, this medicine is apparently useful. But what this person does not show, is how many patients would have felt better *without* using the medicine, or using a different medicine. Would you, on this basis, recommend that people purchase this particular medicine?

My reply:
Again, you are asking a question which we did not answer in the report. We simply reported the findings we discovered, that is all.

As I have mentioned to you many times, most of these children already have problems with school math, so to say that some of them might get better without any treatment is not reasonable. To compare our method to other methods is a good idea, but it is not within the scope of this study.

Eric said:
That's part of the problem: you use a 'scientific' format to present your results, but the way in which you conducted your study does not quite meet basic standards of scientific research - at least that's my opinion. I do not doubt your sincerity. Personally, I am quite willing to believe that there are many pupils for whom your program is highly motivating, or at least more so than regular maths classes. But your study does not really support that claim, nor does it show *why* this should be the case. Is it really this method? Perhaps you and your colleagues are simply very good teachers, regardless of the method you use. We don't know, because this wasn't controlled for.

My reply:
This kind of education experiment (pre and post) is designed very often, and of course its findings are also limited, as I already described. A treatment is given using a teaching method which is different from the conventional one, and it shows improvement, but it may not answer as many questions as you would like.

arne

John, we're looking forward to the details of your new report. It sounds like good news for your method, but I would like to point out one thing. My blog was written with two purposes: 1. To review the research and the test results; 2. To look at the curriculum from a chess player's point of view.
This last point has so far received little response in the comments, but it was actually my main reason for writing the blog. However useful the method in itself may be, in my view the ideas about chess in the curriculum and the featured exercises have less to do with actual chess as most players know it than with creative use of chess figurines, teaching skills and general concepts that could just as well have been derived from cooking or from baseball. So, apart from the things already mentioned in the comments, there are several other interesting things to examine here:

1. Does the method work better with the use of chess concepts than with concepts of any other game, such as backgammon, or in fact of any other real-life concept?
2. Does it matter that some chess concepts in the curriculum are flawed (such as the idea that chess is a linear game), or doesn't it?
3. Do the pupils not only advance in math skills, but also in chess skills?

And so on. I just want to make the point that from a chess perspective, the curriculum makes a rather strange impression. I basically just wonder whether you and other teachers are aware of this, and whether it is a matter of concern for your method or for the pupils' chess advancement.

Best regards,
Arne

Frank Ho

Eric:

You definitely brought up some very interesting points, and obviously we have different views on the causers which contributed to our results. This is why, when experiments are presented, some people will disagree; this is okay. As I also have said, the bottom line is whether the students have indeed made progress and the parents were happy.

When you ask different people, you will get different ideas on how an experimental model should be designed. If you take the argument that the results could be due to some other factors, this is why many experimental designs include a discussion of whether further study could be done on the results.

To broaden your query a bit further, I can present this to you and see how you answer my question.

There are many after-school math programs out there. Ours is officially called Ho Math and Chess. There are also Kumon, Sylvan, Mathnasium, Oxford, Math Monkey and many others. We all claim that our math programs will help children improve their math ability, so what do you make of this?

I do not think the results would be uniform, so if the results are different, is it because the teachers are different, as you have suspected, or what? What do you think would be the biggest impact factor? If you make the assumption that perhaps it is the teacher, then the teacher is the major contributing factor, and you set up an experiment to find that out. So to set up an experiment (due to budget and other reasons), you do have to limit the factors. But in reality, do you have any idea why there are so many different learning centers? I can tell you that it is because we all claim that we have a very unique teaching method.

To make a teaching system work, you need qualified teachers, but not star teachers. Otherwise the entire teaching system has flaws and will not work. This also answers your query about the contributing factor. With over 30 worldwide locations which have implemented our system, what we try to do is to make the system work, not to hire superstar teachers.

So when you analyze, you cannot just throw any suspected factors into your design model and say all of these could be factors; instead, a thorough analysis must be done to filter out some factors.

When conducting a pre-post test, there is a factor which contributed to the result, so of course there is a causer. But whether you agree the causer is just one factor, or multiple factors, or the factor (our teaching system) as we claimed, it is perfectly alright for you to hold your opinion. Just because you speak loudly here does not "prove" that what you are saying is correct, nor has it disproved what we are saying. In other words, you are simply stating your opinion, and that is all.

As I already said, I disagree with your opinion of throwing in factors like "teachers" or "no study" to confound the experiment and claiming that the results could have been the same or even better; all these are your conjectures, and there is no proof.

If a teaching system has to rely on super teachers to generate results, then it is not workable, and that is sad. How many super teachers can you find and place in every school?

To make a math program workable, we must rely on the teaching method and the system and the workbooks, and that is what we are interested in. We want to show that our teaching system made a difference between before the children joined the program and after they joined the program. You insist that the impact could be teachers or some other factors, etc., but as I have explained to you at length already, we do not think those factors are relevant, and this is based on our experience.

You will not go out and tell parents that some after-school program works simply because it has super teachers. Kumon has over 3000 learning centres worldwide; do they all have superstar teachers at every location? Go take a look and see how they teach. The way it works is entirely based on their system.

To ignore what we are trying to tell you about how unique our teaching system is, and that the contributing factor is our teaching method, will of course make you continue to doubt the results and their contributing factor.

You failed to recognize that the main contributing factor is our math and chess integrated workbooks and teaching philosophy, and that we emphasize thinking skills.

The teaching system does not include teachers alone, but includes workbooks, and the workbooks are the main difference which sets us apart from Kumon, Math Monkey, Mathnasium and all other after-school programs, not our teachers.

Our teachers are qualified just like those at any other learning centers, so there is no major difference as far as we are concerned. So if you ask me how Ho Math and Chess is different from other math programs, I will tell you it is because of our teaching philosophy and our worksheets. Under these conditions, you insist that we cannot tell why our students' math marks improved?

To do an experiment you must also rely on experience; you can think of the model from a textbook, from theory, but you must also rely on experience to filter out confounding factors.

Ask our students and parents why they come to Ho Math and Chess and how Ho Math and Chess is different from other after-school programs; what they tell you will be the contributing factors, and our teaching method is definitely the number one reason which attracts some parents.

    The_Anonymous_Person's picture

    An entire new news article could be created on this topic based on everyone's comments. What do you say, Arne? ;)

    Frank Ho's picture

    Thank you for commenting on the article John and I wrote; we are very delighted that it got people's attention. Arne made a very good comment about the possibility of future research comparing a "control" group and a "treated" group. This is what we would like to do, and perhaps we could conduct such research if we could get a research grant in the future.

    The method described in the paper is not a new one; we have been using it for over 10 years, but we have refined our workbooks and systems, so many of the examples cited by Mr. Moll in his article were our first-generation math and chess worksheets. This was the first time we have "officially" published the results, although we knew all along the method was effective.

    We also teach "pure" chess, so we are really not writing off the "traditional" method at all. We offer an alternative to the traditional math teaching method; chess adds spice to math and makes the subject more interesting for children to work on.

    In summary, the testimonials coming from our students and parents speak louder than any one of us, so our method does work. It increases students' math marks, improves their thinking ability, and also creates a fun learning environment for children.

    We have dedicated over 10 years of teaching and research to this math and chess integration. Frank Ho personally still teaches kindergarten to grade 12 math, 4 hours a day and seven days a week; all those worksheets have been created out of his experience working with children and are classroom-tested.

    It is a true story that I had to hold back my tears when I heard my students ask for more of Frankho Chess Mazes and for more of those math worksheets "which make them go here and there and then get the answer" (these were the student's words in describing what kind of math worksheets they like). I also have students asking for my worksheets after they have done some traditional worksheets.

    Math educators have done a very poor job in the past of creating exciting math worksheets (though great at describing how they should be done), and I feel sorry for the students who do not like math because of the problem of boring math worksheets.

    Integrated math and chess worksheets are not for everyone; they only work for some math skills and not for the entire math curriculum. We are glad that we know how to deliver them and use them in the most effective way.

    Ray Tyler's picture

    My apologies, Arne - my LCD monitor and the lack of my glasses led me to read Ann.

    arne's picture

    Apologies accepted, Ray, and thanks for your very interesting and honest reply. (I think it's the longest comment I've seen so far on Chessvibes! It's no problem for now, but please try to keep space limited: you can always send us an e-mail!). By the way, I took the liberty of editing the layout of your post a bit, and also correcting my wrongly spelled name and gender! :-)

    Frank Ho's picture

    Hi everyone,

    My name is Frank Ho. John and I wrote the article after conducting an experiment; the good news is that this year's results just showed it is even better than last year's.

    We are delighted to see some responses to our paper; it is encouraging.

    This paper is not the first ever published. Two other articles were already published in the peer-reviewed journal of the British Columbia (Canada) Math Teachers' Association - Vector. In the past, Frank has also presented these papers at the BC Math Teachers' Conference twice.

    This is quite contrary to Mr. Ray Tyler's last comment about "writing reviews of his own work".

    We also teach chess separately, if you want to call it the "traditional" method. But we also have integrated math and chess workbooks for those children who do not particularly like to play chess or who find traditional math worksheets boring, so we offer an alternative here.

    Our integrated math and chess worksheets add spice to math and make it more fun for children to work on.

    It is interesting to see so many people interested in this integrated math and chess teaching. We are mainly in math teaching, not chess teaching.

    thorex's picture

    Very nice article!

    I fully agree with Arne. In my opinion the "Buky and Ho" study is very dubious. To get reliable results there must be at least two comparison groups:
    1) The same amount of after-school math training _without_ chess exercises (or "traditional exercises", as Arne called them).
    2) No after-school training at all.

    Skipping this obligatory procedure makes me believe that Buky and Ho don't trust their own system.

    Ray Tyler's picture

    Coming across this lucid critical article by circumstance (via a Chessgames.com link to the current Biel tournament) was a blessed personal vindication. For more than two years I have held my breath in stupefaction that any serious Math OR Chess teacher would find 'Ho Math & Chess' a viable approach to either Chess or Math. Initially I found his use of the Internet to market his ideas rather irresponsible - in particular, writing reviews of his own work.
    My well-meaning attempts at constructive criticism during correspondence with Frank Ho failed to impress upon him some fundamental flaws in his basic assumptions. What I thought would be self-explanatory is the following:

    > Let me provide you an anecdotal insight into the intuitive math and chess connection, which you rightly described when you refer to "intersections" of piece potential movement, and the potential for exploring Cartesian co-ordinates and hence equations. I have time and time again watched youngsters' faces light up when, in certain positions involving the queen, a student is stuck, i.e. cannot seem to 'see' the move... I come to the board, remove the queen in question and replace it with a bishop balancing on top of the rook... try this, you'll be astounded. But the rook=5 and the bishop=3(+1/2) have little or no relevance in terms of the queen's value=9. [Given the rook's constant 14 and the bishop's 7 to 13 moves which define the queen's 24/64 average moves - this isn't quite right, since it is 24/32 in an either-or white vs. black square symmetry.] It is the "functional relationship" - the options of movement in a given position - which suggests any quantifiable value, but even this is not relevant when contrasted with the significance of overall piece integration and co-ordination.
    The idea of evaluating a chess position in terms of move potential has been around for a long time (it was critical in writing the early computer programs) and remains useful in the opening during the piece and pawn development phase. Often overlooked is the absence of potential movement, as in the classic study where the knight is stronger than the queen. The "truth" of Chess is its unfolding synergy, for which to my knowledge no one has developed any workable algorithm. The anticipation and degree of control of the multiple variables (the outcome tree based upon predictable sequences) characterize the challenges facing the Chess player. Reason is finite, whereas the limit of imagination approaches something beyond reason... <

    MY POINT: dividing the Queen (9 pts) by a Bishop (3 pts) does NOT in any way result in the absolute numerical value of 3, as Ho Math suggests. At best the Chess answer is a Rook. But a brighter child could correctly answer 2 Knights, or a Knight & Bishop, or for that matter 2 Bishops... which one might suppose answers the Math question of dividing the Queen by 3. Unfortunately his numeric identities and subsequent extrapolations are somewhat flawed and as such, I believe, from a classical empirical viewpoint, are more a source of cognitive dissonance.

    As I wrote him on 20 July 2005:

    < Your reasoning, although valid within a restricted context, has little applicable value in Chess, the game, where the position dictates any quantifiable notion of value. As for a "math connection" - well, it seems somewhat arbitrary. One could just as easily use coins, flowers with a different number of petals, or any proportional system by analogy, since in effect, regardless, it is a substitution of an integer variable with a specific value. On the surface, dividing a number by a knight or taking the square root of the product of two rooks is at best a tautology; no insight follows from its significance, such as in your excellent Venn diagram analysis of rook and bishop. >

    At one point he proposed that I might act in some editorial capacity (if nothing else, to correct numerous errors in English usage and transcription), but I think his reluctance to fairly discuss the numerous contradictions, anomalies, and misguided assumptions and conclusions within his work led to an unfortunate impasse. Hence I regretfully felt obliged to back down.
    His geometric model had some points in common with my early work in the 1980s with the Chess 'n Math Association (Association Echecs et Maths) of Montreal, founded by a former chess student, Larry Bevand, and as such I had hoped he might recognize my editorial improvements. But since I hold no formal teaching credentials I was obliged to defer, with the caveat that surely at some point his methodology would fail to withstand serious peer review - regardless of whatever his sales or test results produced.
    Hence please understand my sense of personal vindication in reading Arne Moll's article, in particular his quiet conclusion: "Because the traditional method was not compared to the chess and math method, I think it's too early to conclude that the new method is actually better than the traditional method."
    There is a bittersweet irony for me on a personal level, given that for over a quarter century I have languished in obscurity despite having done original work in the domain of Chess and Mathematics. At some point I expect I will publish a personal account on my blog - but self-aggrandizement is a poor substitute for dedicated research and development. "If you can keep your head when all about you are losing theirs..." (Rudyard Kipling)

    For the benefit of the general reader, let me address the common (I believe misguided) assertion by those of us who attempt to bring Chess into the Math curriculum: that Chess enhances Math capabilities. In so doing I of course compromise the integrity of my own bias that Math enhances Chess capabilities. A child's general aptitude is more of a deciding factor than IQ, but even more significant is an evaluation of abstract 'ideation'. Chess illustrates cause-and-effect logic in a practical day-to-day sense. Yet has this ability led to Humanity solving even the simplest of problems? Personally, we have bigger fish to fry than academic quibbling - which is one reason for my lack of motivation to seek academic standing, despite several years of university-level work, and for engaging in relatively fruitless lengthy correspondence with Chess Historians.
    What is fundamental to recognize is that Chess (with its origins in ancient geometry and the arithmetic of the number system which led to the discovery of zero) is a sub-set of Mathematics, notwithstanding that a professional mathematician may categorically dismiss this notion, as Ilan Vardi did in his response to me on 8 Feb. 2004:

    > "I must say that I don't see chess as very mathematical at all. I did write a paper about the N-Queens problem, but that is not about actual chess playing. I view chess as inherently non-fundamental, in the sense that it sets up a special case that one has to deal with; this case (the opening position) is highly disordered, and one has to construct order out of this chaos. From this point of view, chess is kind of like planetary science, that is, dealing with what has been given locally, not at all an intrinsic question. That doesn't mean it's not interesting, just not canonical." <

    Clearly his response in itself requires further elaboration and clarification. And I will admit that I am not qualified to go much further in this part of the debate. But of course I absolutely concur with Frank Ho's realization of the intrinsic geometric properties which are both universal and intuitive: the symbolism of piece displacement is critical to the very creation of Chess in the first place (however, my work is more refined and pre-dates his by 15 years).
    "Chess, like love, like music... has the power to make man happy" [Tarrasch's dictum]. Of course, since the appreciation of its Art is complemented by the rigor of its Science, it holds a special place in human endeavor. Its greatest power is that it lies outside of language ("like love, like music"). Chess remains perhaps one of the few unique gateways whereby we can 'see' into the mind of another fellow being - and God only knows, at this stage of our history we may need such tools more than ever. If we succeed in eliminating the unfortunate association of Chess with an iconic war game, then our efforts as teachers will not have been in vain... however, I turned blue long ago holding my breath in this regard. "You can lead a horse to water - but you cannot make it drink"... similarly, you can lead a person to reason - but cannot make one think.

    Let me make it clear that I have nothing but admiration for Frank Ho's work with children, and I continue to hope he will seek my collaboration, because I have much to contribute to the broader goal. Alternatively, perhaps Chess Vibes might in some way facilitate a more in-depth review of my efforts to continue the responsible discussion of the bona fide relationships between Math and Chess. I look forward to further discussion with Arne Moll, with the goal of developing a workable teaching/learning paradigm.

    Frank Ho's picture

    Arne:

    The objectives of using the math and chess teaching method are very clearly and narrowly defined, and to an extent they are also very practical. The objectives are to:

    Raise math marks
    Improve problem solving ability
    Advance chess knowledge
    Increase Math IQ brainpower

    All the intended objectives must be carried out in a fun, entertaining learning environment, to make children want to learn math.

    Don't make the mistake of thinking that by asking children to play chess in the math class, their math marks will somehow go higher. There is no magic pill here; it will not. If mixed wrongly, children will end up not wanting to do math but only wanting to PLAY chess.

    It took us time first to develop the worksheets, to let parents and children know that our worksheets are different, and then it took us many years of "experiment" to learn how to mix them effectively and how to maintain classroom control. If it is not done right, the room is like a zoo and children quickly come to think math time is "play" time.

    It also does not mean the material is good for kindergarten to grade 7 with results that are all equally effective regardless of grade and gender differences.

    It is also important to understand that chess cannot be mixed with many math skills, because it just does not make sense and is very ineffective.

    Most important is also to realize how math and chess are mixed - it does not mean having children play chess and learn math at the same time under the same roof.

    The major difference between our method and the traditional method is that we have to produce worksheets which make children want to work on them more than on traditional worksheets, because they (integrated math and chess worksheets) are more interesting and more challenging. But here I hope that none of you will get the impression that we should then just abolish traditional worksheets; we definitely shall not. This is what is so great about learning how to teach math to young children, because we also use traditional worksheets. I could write another article on this.

    Our newest material for highly able children moves towards mixing math, chess, chess puzzles, math IQ puzzles, and word problems. It is encouraging to hear some children say to me: "Frank, can I do some fun stuff, because I am getting bored doing this kind of math? I have done 5 pages now." It takes experience and observation on my part to see what children like and what they do not like, and to try to come up with new worksheets. This is how I know why we have done so poorly in creating worksheets: because we as educators do not listen to children's responses or study what they like or dislike. Math teachers just created worksheets and told children to do them.

    I created multiplication table worksheets that are multi-directional and multi-functional, explained how they should be done, and asked a girl who was in grade 3 at that time to try one; she told me she liked it better than the traditional top/down or left/right style. The girl will be in grade 7 this September and she has been with me for 5 years. I still teach her and track her progress.

    I created these worksheets, used them on my own students, saw how they responded, and also asked if they liked them. If they did not make progress, I spent time reviewing my worksheets and teaching method to see how they could be improved. Many worksheets have been improved because of this learning process.

    Gopinath's picture

    To whoever is criticising any new initiative (including Ho Math & Chess here)... you may find it helpful to read the following quote I read somewhere:

    "Don't give me 1000 reasons why it won't work, but give me 1 reason why it would work"

    Daan's picture

    Dear Frank,
    you are right. I should not have mentioned religion. That was impolite and I am sorry for that comment. I guess I became a bit too wanton during the discussion; I did not intend to insult you or anyone else.
    Besides that, I think we will not solve the issue here, and I definitely do not want to bother you in your work, since I think your intentions are great. That is why I agree we can stop the discussion here.
    Good luck with the math and chess project,
    Daan

    Frank Ho's picture

    Arne:

    Fortunately the question you asked in your last post is very easy for me to answer, since it is very primitive.

    You said "we're not saying the study can't be qualified as 'science' because you didn't include a control group, and we're also not saying that the method or the research is worthless". This is certainly not true. The truth is that it may not have come from you, but it certainly has come from those serious researchers whom you happen to know.

    You said "all we're saying is that your conclusion - that the math and chess integrated method did significantly increase the math scores for children - simply doesn't follow from your research. That's really all! And any statistician will tell you."

    You must be joking here. The t-test shows that the probability that the conclusion "that the math and chess integrated method did significantly increase the math scores for children" could be wrong is less than 0.01. So you say that we could not draw that conclusion?

    Eric's picture

    Wow, the discussion just keeps going. Frank, nobody is denying your test results. What is in dispute, is the CAUSE of these results. Neither a t-test nor an ANOVA will give you any information about causality. Finding a statistically significant difference is not the same as finding evidence for a specific causal relation.
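    Eric's distinction between detecting a difference and identifying its cause can be sketched with a toy simulation (all numbers invented for illustration): every simulated student gains about five points purely from a practice effect of retaking a similar test, with no teaching intervention acting at all, yet a paired t-test on the pre/post scores still comes out highly significant.

```python
# Toy simulation: a significant paired t-test without any treatment effect.
# All numbers are invented for illustration.
import math
import random
import statistics

random.seed(1)

# Pre-test scores for 30 hypothetical students.
pre = [random.gauss(65, 10) for _ in range(30)]

# Post-test scores: each student gains ~5 points from a practice effect
# (familiarity with the test), not from any teaching method.
post = [score + random.gauss(5, 2) for score in pre]

# Paired t-test = one-sample t-test on the per-student differences.
diffs = [b - a for a, b in zip(pre, post)]
n = len(diffs)
t_stat = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))

# t lands far above the two-tailed 0.01 critical value (~2.76 for df = 29),
# even though the "improvement" owes nothing to any teaching method.
print(f"t = {t_stat:.2f}, df = {n - 1}")
```

    The test correctly reports that scores rose; it simply cannot say why they rose, which is exactly the gap a control group is meant to close.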

    Summarizer's picture

    Dear Frank, it seems you want to know the answer to this question, I quote:

    "You must be joking here. The t-test shows that the probability that the conclusion 'that the math and chess integrated method did significantly increase the math scores for children' could be wrong is less than 0.01. So you say that we could not draw that conclusion?"

    The answer to this question, more or less given by most of the commentators is:
    "Yes, you cannot draw this conclusion."
    Intuitively this might be strange, but when you think about it I am afraid this is correct. In psychology we call your approach/analysis "hypothesis generating" and not "hypothesis confirming".
    I hope this answers your question....

    Greetings from a psychological statistician....

    Amanda's picture

    The discussion seems to have gone way off topic from Buky and Ho's research paper and has become really technical.

    After reading the paper, I found that the "Results" section does not state any "causer" but merely points out that there is a significant difference between pre-test and post-test. Based on the statistical analysis (a t-test) used, the result is really just a standard statement; there is no point in debating it here.

    I have found they have used a very simple t-test to conduct the experiment and their result statement has not deviated from the point of view of a standard paired t-test. Their paper is fairly presented if all they wanted was to show if there is a difference between pre-test and post-test after administering their innovative math and chess teaching method.

    Good luck with your continued research.

    Statistician, Ph.D.

    Amanda's picture

    What we can go by is what has been said in the paper, so the "Results" in their paper certainly hold true and this is what I will go by.

    Everything else said (including all the postings here) about possible results or "causers" are not direct results of their experiment but opinions, and they are subject to disagreement.

    I have no crystal ball to say what are or are not the "causers" in their experiment, since I have no experimental design to prove either.

    arne's picture

    Amanda, if I understand you correctly, you're saying that because basically anything can be concluded from their results (a poker and math course could also help improve pupils' scores), the conclusion the authors reached is also allowed?

    Ray Tyler's picture

    The debate has been fascinating and discouraging - yet symbolic of our modern world: isolated soliloquies 'full of sound and fury' - (perhaps) 'signifying nothing'

    As I am likely the least formally qualified, I guess I should bow out gracefully. It saddens me that I (we) failed miserably to get Frank Ho to address the content of his method itself (upon which, surprisingly, I seem to be the only one focussed), but I can see how, this far into the marathon, it is too late to get back to the race circuit after making a wrong turn.

    However, I am grateful to have had this opportunity to return to the challenge of picking up where I left off... continuing to define the 'Math OF Chess'. Perhaps I could submit something to you for publication, hopefully by fall. With a little expert help I could design a valid test for my 'geometric' introductory method of teaching Chess. Of course I'll still be faced with the 'science versus commerce' dilemma.

    Frank Ho's picture

    Arne:

    You perhaps got me wrong in saying that I did not "listen"; it is because I have listened so many times that I was very patient and continued to explain again and again.

    You mentioned that you happen to know that these guys are serious researchers. Then I would also like to let you know that I used to work at universities as a statistical consultant at a Faculty of Education and a Computing Centre for over 15 years, doing nothing but analyzing research data, and many times provided consultations to professors and graduate students on their research methods.

    They insisted that I must have a "control" group in order for this paper to qualify as "scientific".

    I will try to clarify this one more time and then I will not post here anymore.

    The statistical test we used is a paired t-test, conducted by John. The test was administered before the teaching method was applied and again after the class was finished. We then wanted to see if there was a significant difference between the pre-test scores and the post-test scores. After the statistical analysis was done, we concluded that the math and chess integrated method did significantly increase the math scores for children. That is all we said.
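    The design described here is a standard paired (dependent) t-test, and a minimal sketch with invented scores shows the mechanics. The paired t-test reduces to a one-sample t-test on the per-student differences, so the sketch needs no external statistics library (a real analysis would typically use something like scipy.stats.ttest_rel instead).

```python
# Minimal sketch of a paired (dependent) t-test on pre/post scores.
# The scores below are invented for illustration, not the study's data.
import math
import statistics

pre  = [62, 70, 55, 68, 74, 60, 58, 66]   # hypothetical pre-test scores
post = [71, 78, 60, 75, 80, 69, 64, 73]   # hypothetical post-test scores

# Per-student differences; the paired t-test asks whether
# their mean differs significantly from zero.
diffs = [b - a for a, b in zip(pre, post)]
n = len(diffs)
t_stat = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))

# Compare against the two-tailed critical value for df = 7
# at alpha = 0.01, which is about 3.50.
print(f"t = {t_stat:.2f}, df = {n - 1}")
```

    A t statistic beyond the critical value supports exactly the statement that the pre/post difference is unlikely to be chance, and nothing more about what produced the difference.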

    I have further explained why we are not interested in comparing with a "no treatment" group, and what parents would like to find out about our program, etc. This apparently all fell on deaf ears. Still: no "control" group, no valid results.

    I am simply saying that you cannot call this paper unscientific just because it does not have a "group" factor. If we had used a group factor then we probably would not have needed to do a pre-test, so that is a different experimental design (using a control group).

    All you have to do is go to the library, borrow some statistics books, and read the chapter called dependent t-test, and you will see many examples given which are very similar to our experiment.

    One of them could be something like a group of the same subjects administered a pre-test and a post-test around a new teaching method. Of course, in this case the findings of the inferential statistical results are also very limited in what one can say. This is, again and again, what I have been saying.

    I may sound extremely defensive; that is because it is one thing to say we might want to include some other factors in a future study, and quite another to say that just because we did not include a "control" group this paper is not scientific and is worth nothing.

    So I hope you can see my point.

    P.S. I do want to thank those who have taken the time to read our paper and contributed here. Agree or disagree, we might meet again, and your arguments give me a chance to reflect on what I do; they also tell me I need to work even harder.

    arne's picture

    Frank, to be honest I kind of understand Daan's frustration. I have just re-read all the comments on my post and I believe that you have never, not even once, conceded that any of us - be it Eric, Daan, Maarten, Ray, me or anyone else - might perhaps have a point that's worth reconsidering.
    I happen to know these guys are not just random people reacting on a silly forum, but serious, experienced and qualified researchers who are genuinely interested in the subject at hand. However, I am not surprised that they are getting a bit tired of your extremely defensive attitude, even though nobody is attacking your intentions. In fact, we're all quite sympathetic towards your method. Perhaps you should look at the whole issue in a more detached way, although I understand this may not be easy. Then you'll no doubt see that what we have to say may really be quite valuable for you and the chess and math method.
    I wish you all the best,
    Arne

    Frank Ho's picture

    Daan:

    Unfortunately, even with the one sentence you quote from the paper above, it still does not reveal that a "control group" was used, so I do not know for sure until I read it myself. I need to see how they designed their experimental model. I am not saying they did not use one for sure; it is just that the information provided does not tell, and even the one sentence you quoted does not tell anything.

    Many times we have to go to the research presentation to "really" understand how an experiment was conducted. Many times we were also told that, because of circumstances, a very unusual experimental design was used, and how we were supposed to take care of those situations. Many things, such as lack of samples, incomplete groups, missing data or confounding factors, all influence a model.

    Even researchers often disagree on models; students do not have a uniform view of an education model and professors do not have a uniform view of an experimental model. This is very common in scientific circles and happens every day at universities.

    I am not debating with you on whether a control group should be used, as I have told you that in some cases the experiment does not have a control group, or one is difficult to get.

    To label our experiment without a control group as not "scientific" causes some concern for me, and it is more troublesome when you asked me to view a video tape in which people tried to fool others with scientific gadgets. Your behaviour has insulted me. Moreover, you insulted the parents, teachers, students and many other people in over 30 branches in the world.

    You lowered yourself by relating our teaching method to "religion". This is your statement here: "But Frank, if I were you I would use your enthusiasm and experience to promote your teaching; this works fine for the church as well, and they have a lot of support too." You are trying to relate or correlate our teaching method to some kind of religion. What more insults do you want to spit out?

    Since you started to talk nonsense, I have decided that I will not respond to you any more.

    Your behaviour and that last statement have discredited all of your previous statements. There is no need for me to respond here any more.

    Frank Ho
    www.mathandchess.com

    Ray Tyler's picture

    Even a cursory review of these posts reveals that Frank Ho clearly does not understand (or does not wish to understand?) your original article, nor my elaboration, nor Eric's and Daan's subsequent inputs. His being dismissive of Chess in defense of Math goals remains incomprehensible to me, due to the 'cognitive dissonance' which his methodology generates. There are many self-evident examples which have yet to be addressed.

    My initial candid commentary was well-meaning - but I regret having perhaps unwittingly provided Frank a venue for further self-promotion... an unfortunate by-product of this medium. Notice that he never directly replies to the contradictions and anomalies within the content of his Chess & Math material, the point being that any symbolic substitution could be used in his algebraic model. In short, his focus is upon process, not substance.

    How can we be critical of such a dedicated teacher? Because he works with children whose parents pay $100 a month (which some might equate, in harsher terms, with glorified babysitting), need we be in awe? Been there. Done that. Shipped the T-shirts... and the chess sets. But for all the hoopla, notice how there has been only a nominal change in the number of females who excel at Chess. As for Mathematics, the situation is next to hopeless - so kudos to Frank if even one child excels.

    Unfortunately I find myself in the unenviable position that my commentary may seem little more than an expression of sour grapes, despite my being the contemporary originator/innovator of the use of geometric symbolism to introduce chess functions, dating back to 1982. Over the following decade I did sufficient research and testing to establish how Chess necessarily grew out of a very specific body of mathematics, and that there are sufficient Math properties which are specific to Chess.

    In all fairness, much of the criticism directed at Mr. Ho, with a slight twist in perspective, might apply to me. After all, my article 'Geometry of Chess' (www.originsz.blogspot.com) is far from a complete work in itself. Furthermore, it does elaborate how, by using a mathematical approach, I could teach Chess more effectively, with the bias that in so doing I could generate a broader interest in Math. Like Frank, to some extent I found myself having to adopt a defensive mode in the face of both the Chess and Math establishments.

    This forum is of course not the place for me to vent my frustration, hence I almost feel obliged to throw in the towel in this debate due to diminishing returns... the quibbling about statistical methods and results seems quite secondary in that it begs the critical question of content. However, I still hope Mr. Ho will take the time and effort to review our observations. I remain more than willing to help in any way to further the success of the marriage. In the meantime I have graduated to using a Bingo card...

    Daan's picture

    Frank,
    This time you refer to a scientific article that was actually published in a scientific journal, which is great! The article describes an experiment where they actually do use a control group. I quote:

    "Boyke and her colleagues then compared the three scan sets to those they had taken of a group of 20-year-olds who had also been taught to juggle in a prior study."

    Now this is science!

    As another control group they might just have looked at brain scans of older people who did not learn juggling; these are easily available to neuroscientists, but I don't know whether they did.

    Furthermore, I think the article cannot be used as a point of comparison for your analysis, since the brain is a much more contained environment than the lives of your students, so the ceteris paribus assumption seems more likely to hold in the brain case.

    So I repeat once more, your analysis does not provide any form of scientific proof, while the article you refer to does.

    Our disagreement aside, I am sure that your students have a lot of fun in class, which is great. But using a page to explain this in every comment is a waste of time, since this is not the point we disagree upon.

    But Frank, if I were you I would use your enthusiasm and experience to promote your teaching method; this works fine for the church as well, and they have a lot of support too. This is much easier and less costly than finding scientific evidence, which can sometimes be just a bit too much to ask.

    Daan

    Frank Ho's picture

    I guess perhaps we should have clarified the background information a bit more so people would understand. The following will explain a bit more about how math is taught at the Ho Math and Chess learning center. I am sure that after my clarification you will understand where we are coming from.

    First, take a look at this scientific paper at http://health.usnews.com/articles/health/healthday/2008/07/11/aging-brai...

    It is very similar to the idea of a pre-post study. I have not read the paper yet to make sure there is a control group, but it seems to me that they did brain scans to see the change in the jugglers before and after, so there is no need for a control group. Do we want to argue that it is, or might be, some factor other than the juggling?

    It is very biased to view the math and chess integrated teaching method from a chess player's point of view, as I have explained in my previous posts many times, because the perspectives from the two sides (chess player and math teacher) are like day and night in terms of teaching chess and math.

    What is the big deal of adding chess into math, why spoil the game of chess? This seems to be the view of most chess players; they do not know whether it is chess mixed with math or math mixed with chess, and they have no interest in finding out what is going on. Sorry it sounds negative, but it is true, from my experience.

    But from some math teachers' view, what is the big deal of using chess to teach math? Is it just playing chess in the math class? If a teacher simply asks children to play chess after math teaching, then the impact will not be great, because why play only in the math class? They can play at lunch time or at home, so what is the difference?

    The secret is not just to have chess sets around and ask children to play. I use chess to motivate children to think. It is sometimes difficult to get children motivated to do slightly more challenging math questions, since their math level or skills do not even allow them to tackle the problem, but most of them could try "mate in one" regardless of chess skill (I mean at least try to solve it).

    Students with deficient math skills can do well in chess questions, and this gives them the drive to excel. The drive to challenge others, and being able to win some games, gives some students the motivation to continue with their math work. So it is the learning culture one must create that drives students to learn: the sense of trying to solve a problem, the drive to accomplish something, such as winning a game.

    With the above in mind, we continue to motivate students to work on unconventional math and chess integrated worksheets, designed to train students to think, to be challenged, and to get the answers right after hunting for the questions.

    When the entire system is designed to improve their math ability by using chess as a catalyst, then I can tell whether a factor is a cause of the math marks or simply a related factor.

    From my own experience I have also mixed chess with puzzles, since I found out children love puzzles, and I further created Frankho Chess Mazes since I found out they also like mazes.

    When we add math, chess puzzles, mathematical chess puzzles, Frankho Chess Mazes, and chess games altogether in one class, then we can see the impact, know what is going on, and also get feedback from the children. In this case, when we conduct an experiment we simply want to prove that what we did is right by presenting hard evidence. It is not that we came up with a theory and then tried to prove our "thinking" right.

    It also takes know-how to implement the system; if it is not implemented right, then the entire math class is noisy, like a big zoo.

    I believe there is a relation or correlation between chess and math, but to say math can be improved simply by playing chess is wishful thinking. This is why many good chess players would laugh about it, since many of them had very lousy math marks or poor math performance.

    Math and chess are related in some math concepts, and even that is not obvious to most people, so doing either will not "naturally" improve the other skill.

    But how do we make chess a "cause" of improving children's math marks? This is why I have devoted over 10 years of research to finding the ingredients and the know-how to mix chess and math so that it creates a unique culture in the after-school learning center, and children love it.

    It simply does not work to place a few chess sets in the room and ask children to play; as I have said again and again, their math marks will not go higher. How do we at Ho Math and Chess do it?

    We have a special chess training set (patent pending), we have math and chess integrated worksheets, and we have math and chess puzzles. We also know how to motivate and drive children to work on math worksheets, and we have created a unique culture such that children love our teaching style, method and system; it is fun and challenging.

    It takes all the ingredients described above to make it happen, so perhaps my view of integrating math and chess is quite different from your view as a chess player, in which case of course we will have different opinions as to how an experiment should be designed.

    Your idea of "chess" might be: a chess set and chess games, and that is it.

    My idea of "chess" is: specially designed and unique math and chess integrated worksheets + patent-pending chess training set + unique mathematical chess puzzles + copyrighted Frankho Chess Mazes.

    arne's picture

    Frank, what I find surprising in your comments is that, as a mathematician with statistical knowledge, you do seem to realize that what Daan, Eric and I are saying about your research makes a lot of sense, yet you seem to deny it as a passionate teacher who loves his job and believes deeply in the chess and math method. There is something very paradoxical about this, although I suppose it's also completely natural.
    We can resolve this debate in two ways: either we agree that it's important to have good teaching, and that involving chess elements may well help pupils become better at math - but this simply has not been scientifically proven yet. Or we agree that while there may not be rigorous evidence for any correlation between the chess method and math, there is plenty of anecdotal evidence to suggest that further research is necessary and gives cause for optimism. I personally would agree to both points of view. (However, I still stand by my point that the exercises are pretty tough to swallow from a chess perspective - a point, by the way, that you have so far ignored.)

    Which view do you take?

    Daan's picture

    Since I was on holiday, I could not react earlier to this nice article by Arne and the following discussion in the comments. I must say that I am a bit shocked that two math teachers who claim to present a "scientific" evaluation of their teaching method are not able to grasp Eric's comments and objections.

    What Eric tries to explain is basic scientific practice. Work that does not meet this basic scientific standard is not science but something else. In this case it seems that Frank's and John's paper is a presentation of data surrounded by some science terminology. Nothing special can be concluded from their data, except maybe that children who spend many hours on math-related exercises they enjoy seem to improve their math. This is like saying: "education works!". This is nothing new and has been shown on numerous occasions. A good example is an article by Nobel prize winner James Heckman (2002):

    http://economics.uchicago.edu/download/effectofschooling.pdf

    Unfortunately, people without a scientific background might be fooled into thinking that this improvement is due to the combination of math and chess, whether this is true or not. This form of suggestive data presentation and bad statistics (as an illustration I quote John P. Buky: "Now statistical significance cannot be claimed here. The reason is due to the small sample size of 75 students." This is a classical example of misinterpretation of statistical significance), pretending to be scientific, is a disturbing phenomenon that you find in almost all domains of commerce (and unfortunately also still in science). The paper is just another example.

    In short, the article is not science but commerce, whether the authors are aware of this or not.

    Taking Frank's and John's enthusiastic responses into account I do not think they are aware. One of Frank's statements serves as an illustration:
    "..... We simply reported the findings we have discovered that is all."
    In the world of science, there are no findings and there are no discoveries.

    Saying all this, I realise I might sound a bit negative. I don't want to sound too negative, since I am very much in favor of the development of new teaching techniques, as different children benefit from different teaching approaches. So Frank and John, keep up the good work, and try to stick a bit more to teaching and a bit less to science.

    Daan

    arne's picture

    Thanks for your input, Ray. We're looking forward to your contribution!

    Eric's picture

    Hi Frank, I'm not sure which question it is you wanted me to answer, but I suppose it is this one:

    "There are many after-school math programs out there. Ours is officially called Ho Math and Chess. There are also Kumon, Sylvan, Mathnasium, Oxford, Math Monkey and many others. We each claim that our math programs will help children to improve their math ability, so what do you make of this?"

    Well, either these claims are true, or they're not. If they are, this would suggest to me that there may be some common element(s) to the different methods. But I really couldn't say, because I know nothing about the methods.

    Frank, I completely agree that experience is an important and valuable source of knowledge. However, when I'm writing a research article, I never expect my readers (or the reviewers, for that matter) to trust me on a particular topic because I have much experience with it! After all, we know that people are subject to a lot of biases, errors of perception and judgment, and errors of memory; we know that people see correlations that do not really exist (and vice versa), etcetera. Let me stress that I'm not saying this is the case for you, but simply that these are some of the reasons why researchers like me tend to demand something other than knowledge from experience when it comes to scientific studies.

    Anyway, I seem to have failed to explain my point adequately, so I'll leave it alone. Your approach to research is very different from mine. That's okay, as you say. Good luck!

    arne's picture

    @Gonpinath. It's a noble motto, but it also illustrates the problem: Ho and Buky simply have not shown why it would work. They have merely expressed their hope and expectation that it might.

    Eric's picture

    Dear Frank,

    You write: "We made hypothesis that there will be no difference for pre-test and post-test but proved statistically significant there was a significant difference so we overturned our hypothesis. The cause of it is the treatment (program) we engaged and the results is the measurement of math scores. To go further to say that some of pupils would have performed even better using another method then you need to prove it to us but not question our result."

    I probably was not quite clear in my post. What I'm trying to explain, is that the pretest-posttest design without any form of experimental control does not allow for causal inferences. You explicitly state that the cause of the pretest-posttest difference is the treatment, but that simply cannot be concluded from this study. It's POSSIBLE that your treatment is the cause, but you did not control for other potential causes. How can you be sure that the difference was not caused by other factors?

    When you write "To go further to say that some of pupils would have performed even better using another method (...) " you seem to think that I believe that that is the case. Actually I do not, but I gave it as an example of a hypothesis you should -in my opinion as a researcher- have attempted to rule out. As you surely know, scientists conventionally aim to support their hypotheses by ruling out alternative hypotheses. One way to do so is to try and control for as many variables as possible.

    I really think that your statement "We simply reported the findings we have discovered that is all" is not quite correct. In fact, you not only report your findings, but you make an inference from them as well - namely, that the findings were caused by your teaching programme. If you want to make that claim, the burden of proof is on you to show (or at least make as plausible as possible) that the findings were not caused by something else.

    I was interested to read John's post, in which he referred to another study which apparently actually did contain a control group. Although I do not agree with his definition of statistical significance (a p value does not indicate the probability of a difference occurring by chance, but rather the conditional probability of finding a particular difference given that the null hypothesis is true; thus, one definitely cannot simply subtract the p value from 1 and say that this yields the probability of a statistical difference), such studies do give a stronger basis for inferences about causes and effects.
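    The point about p-values can be illustrated with a small simulation sketch (my own toy numbers, not data from any of the studies discussed here): under a true null hypothesis, p-values are roughly uniformly distributed, so a single p of 0.18 cannot be read as an "82% probability that a real difference exists".

```python
# Illustrative sketch: when the "treatment" truly does nothing,
# p-values near 0.18 still turn up about 18% of the time.
import math
import random
import statistics

def paired_t_p(pre, post):
    """Two-sided p-value of a paired t-test, using a normal approximation
    for simplicity (scipy.stats.ttest_rel would give the exact t version)."""
    diffs = [b - a for a, b in zip(pre, post)]
    mean = statistics.mean(diffs)
    se = statistics.stdev(diffs) / math.sqrt(len(diffs))
    t = mean / se
    # standard-normal two-sided tail probability (adequate for n = 75)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))

random.seed(42)
trials = 1000
small_ps = 0
for _ in range(trials):
    # pre and post scores differ only by noise: zero real effect
    pre = [random.gauss(50, 10) for _ in range(75)]
    post = [s + random.gauss(0, 5) for s in pre]
    if paired_t_p(pre, post) < 0.18:
        small_ps += 1

# Even with no effect at all, a sizable fraction of studies show p < 0.18.
print(f"{small_ps / trials:.0%} of null studies had p < 0.18")
```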

    In any case, I wish you good luck with your program and your research; I think both are interesting and definitely worthwhile to pursue further.

    Kind regards,
    Eric

    Frank Ho's picture

    To Arne:

    This is Frank Ho here. Since I am the main creator and originator of the math and chess integrated worksheets, I am very glad to answer your questions as follows.

    1. Does the method work better with the use of chess concepts than with concepts of any other game such as backgammon or in fact of any other real life
    concept?

    As I have already said, you can use another game-based approach to teach math; it does not have to be chess. But why chess? There are many reasons, of course. Try backgammon and math and see how many children would like to do it. Try another real-life concept and see how long it lasts. How would you even mix backgammon with math to create integrated worksheets? How about Chinese Go and math? The ideas are endless, but do they work?

    2. Does it matter that some chess concepts in the curriculum are flawed (such as that chess is a linear game) or doesn't it?

    Perhaps you can elaborate a bit more here about why some chess concepts in the curriculum are flawed. What do you mean by chess being a linear game?

    3. Do the pupils not only advance in math skills, but also in chess skills?

    The purpose of mixing chess into math is not to raise their chess knowledge as the number one priority, but since the game is there, of course they can advance their chess knowledge, especially those who are seriously interested in chess.

    4. I just want to make the point that from a chess perspective, the curriculum makes a kind of strange impression. I basically just wonder if you and other teachers are aware of this, and whether it is a matter of concern for your method or the pupil's chess advancement.

    Arne, with due respect, I feel it is really wrong to look at this integrated math and chess issue from a chess player's point of view. Our program is a math program, not a chess program.

    It is much "easier" to have a math teacher (they are more willing to learn chess) also teach chess in the math class, but I had to twist chess coaches' arms to get them to teach math (they can learn too, right? Some of them are university graduates). Why is there such a disparity of willingness? I still cannot figure it out. Maybe another educational experiment design could find out, right Eric?

    Math educators are constantly looking for new and innovative ways of teaching math, getting children interested in math, and getting them to work on math worksheets. The trouble is, what kind of math worksheet interests kids and does not bore them? What is the secret of making worksheets that children will like?

    We are not just mixing chess figures with math; we have designed a brand new chess teaching set using geometry concepts. We have created Frankho Chess Mazes - a kind of maze children will never have seen in their life until they try it. We have designed math and chess mixed worksheets where children have to hunt for questions. When you first look at our worksheets, you do not see any questions at all on the sheets until you start to follow the commands to get the questions out.

    All the above products are our intellectual property. So to make the teaching work, it is not just the teaching method itself; it is also our products (unconventional worksheets) which have made the difference.

    John P. Buky's picture

    First of all, I would like to thank Arne for this discussion as it gives good publicity for chess in education in general, as well as our own curriculum in particular.

    Frank and I will probably have to write a new and more comprehensive article - this is because a new study has just come out that shows even better results (of the Chess Academy program) than we had in our original report! I am now working on putting info from this study on the Chess Academy website. Stay tuned to Chess Academy! www.thechessacademy.org

    Chess Academy uses Frank's math and chess puzzles as part of the program. The students really enjoy doing them and learn at the same time. With a little explanation they are able to complete the math and chess puzzles. However, we also use other regular math worksheets and workbooks (and there are many - believe me!) that Frank created. Chess Academy also uses our own Chess, Math and Extended Response Curriculum. More info on that is at: http://www.thechessacademy.org/lessons.html.

    Here is a summary of the report (on Chess Academy) that was recently released.

    A 2007 independent report shows that students who participated in Chess Academy Tutoring demonstrated 23% higher math gains than students who did not participate in after-school tutoring. Both groups of students took the ISAT (Illinois Standards Achievement Test). This is the yearly standardized math and reading test administered to all students in the state of Illinois.

    Report Summary:

    Chess Academy Predicted Gain = 10.95
    Chess Academy Actual Gain = 13.47
    Gain Difference = 2.52
    Percent Gain = 23.0%
    Z-Score = 0.19
    n = 75

    Results of GLM comparing ISAT math of SES providers with eligible non-participant group:

    Estimate = 2.10
    Standard Error = 1.56
    T Value = 1.34
    Pr = 0.1795

    The probability of this 23% gain occurring by chance is .1795 or 18%. Conversely, it means that there is 82% probability that there exists a statistical difference between the 2 groups.

    Now statistical significance cannot be claimed here. The reason is due to the small sample size of 75 students. Please note that although 119 students were in the Chess Academy Tutoring Program, only 75 students had taken the ISAT the 2 consecutive years needed for the study (hence the n = 75). However, this is approaching statistical significance.
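    For readers who want to check the arithmetic, the figures above can be reproduced with a short sketch (my own, not from the report; the degrees of freedom are not given, so a standard-normal approximation stands in for the t distribution, which lands close to the quoted Pr of 0.1795):

```python
# Sanity-checking the reported summary numbers.
import math

predicted_gain = 10.95
actual_gain = 13.47
gain_difference = round(actual_gain - predicted_gain, 2)  # 2.52, as reported
percent_gain = gain_difference / predicted_gain           # ~ 0.230, i.e. 23.0%

t_value = 1.34
# two-sided p-value under a standard-normal approximation to the t statistic
phi = 0.5 * (1 + math.erf(t_value / math.sqrt(2)))
p_two_sided = 2 * (1 - phi)                               # ~ 0.180, near 0.1795

print(f"gain difference = {gain_difference}")
print(f"percent gain    = {percent_gain:.1%}")
print(f"approx. p       = {p_two_sided:.4f}")
```

    Note that the p-value itself checks out; it is only the "82% probability that a difference exists" reading of it that the other commenters dispute.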

    In terms of comparing the Chess Academy math gains to that of other tutoring programs, Chess Academy ranked in the top 3.

    John
    Chess Academy
    www.thechessacademy.org

    Frank Ho's picture

    The Buky and Ho paper has a section called "Results" and it states "The result[s] of this study shows significantly different [difference] on their math scores for all grade 1 to grade 8 pupils between pre-test and post-test ....". Nothing was actually mentioned about the causation or correlation. The conclusion never deviates from a simple rejection of the hypothesis.

    The discussion here on the Buky and Ho paper seems to have gone way off track for a simple t-test to find out if there is a mean score difference between pre-test and post-test after administering the math and chess integrated teaching method.

    It is good and encouraging news that this math and chess integrated teaching method did improve students' math scores when comparing their scores before they started and after they finished the program.

    This paper brings good news for parents who are looking for new and innovative ways of improving their children's math scores while their children have fun learning math at the same time. The majority of parents care whether the after-school program will improve their children's math scores after they spend money and send their children to the program, and this simple pre-test and post-test design serves that purpose.

    Do not try to find many answers in this simple paper, since I cannot provide them, and also do not put words in my mouth, since our paper never states that the teaching method "caused" the mean scores to increase. My experience tells me so, but that is only the view of my experience.

    We have published a paper to show that the innovative math and chess integrated method did increase students' performance between before and after the program; this is all that is said in the paper.

    This post has gone through many debate topics such as the following:

    1.
    One poster claims that our paper does not have a "control" group so it is not "science", and YouTube videos "debunking" fake science and religion were mentioned. This point of view - that without a "control" group the paper is not "science" - is certainly wrong. One must be very careful to state the result so that it does not go beyond the boundary of what can be said; then it is fine, as we have done in our paper.

    2.
    I was reminded that we cannot draw the conclusion based on the statistical t-test, when, if one actually examines the paper, what was said was simply that there was a significant difference between pre-test and post-test.

    In many research studies, researchers do not really try to define the "causes", but rather which factors "contribute" the most to the results. The reason is that it always generates a "hot topic" if the word "cause" is used.
    It is not a rare case that the last poster did not seem to take the time to even study our paper, but started to make comments about our stated result which did not make sense.

    So far all the postings have not been very helpful in really understanding the real meaning of using the math and chess teaching method.

    A more useful post would focus on how the math and chess teaching method can be used to foster a learning environment so children can benefit, now that the paper has shown there is a significant difference between pre-test and post-test.

    If you think there are many other factors which could contribute majorly to, and "cause", students' math performance, then what do you think they are? How can we as math teachers use those contributing factors to get students to understand math better?

    Having taught math for over 15 years, from kindergarten to grade 12, almost every day, I have some ideas about how to improve students' math, especially in an after-school learning environment.

    1. If you continue to give average students the same or even similar worksheets to those given at school, they really get tired and bored working on them. It is like feeding them the same menu of food every day; they will get tired of eating the same food.
    2. Give them the same old-style worksheets and they also get tired.
    3. How about math games on the computer? So many children have played online games that math games are not exciting for them any more.

    Faced with the above situation, what can we do as an after-school math program provider? How can we create a math program that children actually look forward to? How can we help or supplement their day-school math program?
    Are the math and chess integrated worksheets helpful to all grades? Why do children like math and chess integrated worksheets? If children like them, then how can I further improve them?

    My energy and time will be more productively spent actually studying and researching the above issues and topics. By doing that, my contribution will be more beneficial to children.

    The_Anonymous_Person's picture

    A very interesting and thought-provoking article.

    My chess is much better than my math, but I think this is because I spend much more time on the former. Recalling my schooling years, I think my chess overtook my math at about 11/12 years of age.

    Eric's picture

    Frank: thank you for your reply. Let me explain my point of view a bit more elaborately. Your article makes a claim regarding the effects of a particular teaching method. If you want to support that claim, you will have to give evidence for causality. That means more than simply showing a difference between 'before treatment' and 'after treatment.'

    You write: "For a weight reduce program, what lots of customers care is if the product has made any before and after difference [!!!], so in this case, the control does not seem to be a big player for them, most customers are happy if they can achieve their desirable weight." (exclamation marks added)

    The point is that your study simply does not show that your teaching method "has made any before and after difference." What your study *does* show, is that there *is* a "before and after difference", but we simply cannot say what caused this difference. And since it is theoretically possible that some of your pupils would have performed even better using another method, I think this makes the issue of causality particularly relevant, even for customers who are not interested in the underlying mechanism, but who *do* pay good money for a method that (supposedly) works.

    Consider the situation where somebody offers a new medicine, and says: we have a study showing that after taking this medicine, 60% of the patients feel a lot better. You might think, well, that's good news, this medicine is apparently useful. But what this person does not show, is how many patients would have felt better *without* using the medicine, or using a different medicine. Would you, on this basis, recommend that people purchase this particular medicine?
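    To make the medicine analogy concrete, here is a toy simulation (invented numbers of my own, not anyone's data) in which a completely ineffective treatment still produces a healthy-looking pre/post gain, simply because everyone improves a little with time and practice; only the untreated control group reveals this:

```python
# Toy illustration: pre/post gains without any treatment effect.
import random
import statistics

random.seed(1)
N = 75  # same sample size as the study under discussion

def post_score(pre, treatment_effect=0.0):
    practice_effect = 4.0        # everyone improves a little with time/practice
    noise = random.gauss(0, 5)   # measurement noise
    return pre + practice_effect + treatment_effect + noise

# "Treated" group: the treatment adds nothing beyond the practice effect.
pre_treated = [random.gauss(50, 10) for _ in range(N)]
post_treated = [post_score(s, treatment_effect=0.0) for s in pre_treated]
gain = statistics.mean(post_treated) - statistics.mean(pre_treated)
print(f"mean pre-post gain with a useless treatment: {gain:.1f}")

# Untreated control group shows the same gain, exposing the confound.
pre_control = [random.gauss(50, 10) for _ in range(N)]
post_control = [post_score(s) for s in pre_control]
control_gain = statistics.mean(post_control) - statistics.mean(pre_control)
print(f"mean gain in the untreated control group:    {control_gain:.1f}")
```

    A pre/post comparison alone would credit the useless treatment with the whole gain; the control group is what lets you separate the treatment from everything else that changed over the same period.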

    You also write: "What John and I did was to have openly release the result in a scientific way to tell the world that we have actually come up with an innovative way of making math more interesting to learn for children."

    That's part of the problem: you use a 'scientific' format to present your results, but the way in which you conducted your study does not quite meet basic standards of scientific research - at least that's my opinion. I do not doubt your sincerity. Personally, I am quite willing to believe that there are many pupils for whom your program is highly motivating, or at least more so than regular maths classes. But your study does not really support that claim, nor does it show *why* this should be the case. Is it really this method? Perhaps you and your colleagues are simply very good teachers, regardless of the method you use. We don't know, because this wasn't controlled for.

    arne's picture

    Frank, nobody is denying the possibility that combining math and chess can be useful. The question I tried to answer in my blog, however, is another: can it be done effectively with the "chess and math" method, and does your current research say anything about its objective merits?
    I think we all agree that there are lots of problems in math education, that it's good to get kids involved and that it's absolutely essential to try and keep their attention - but that's simply not the subject of this blog.

    Frank Ho's picture

    Eric:

    As I have mentioned, having a control group in a future study is a good idea, but conducting it is not exactly as "easy" as you have said, since we are involving pupils here and using humans as subjects. Even getting approval takes time, but it is not impossible, I suppose.

    To say that the current experiment offers nothing is a bit of a stretch. Let me remind you that the current program is offered in the USA as part of an after-school program, so it depends on how you look at it; it does offer valuable information for some parents and children.

    To give you further background information: there are many parents sending children to after-school math programs, and parents are happy if they know their children have made progress between before and after. So whether the result is useful or not depends on what you want.

    For a weight-reduction program, what many customers care about is whether the product has made any before-and-after difference, so in that case the control group does not seem to be a big player for them; most customers are happy if they can achieve their desired weight. I offer this example to say that the one-way pre-post test is not as useless as you describe.

    Please remember I am not arguing about whether the control group should be in the experimental design or not; I am only saying that you have made a blanket statement that our paper offers nothing, which I do not agree with.

    We conducted a simple one-way pre- and post-test, and because of this our results also limit what we can say. Of course, if you have a two-way ANOVA then you are able to offer a bit more to readers, so it all depends on what you want. Pardon me for getting a bit technical here.

    Many times governments do assessments of students' math performance, such as the BC Government's elementary assessment, and in those cases no control group was used - but does the assessment therefore offer nothing? There is always a group who is for it and a group who is against it, particularly the teachers' union, so which side are you on? My point is that whether the assessment is useful or not will depend on what you are looking for.

    If we can tell parents that sending their kids to our program will make a difference, and that this has been shown to be statistically significant, then for lots of parents this is all they care about. But if some parents say this is not enough, because they want to see whether their children would also have made progress anyway without being sent to our program, then we are not able to say that very clearly, so this is the difference.

    In reality, most parents are happy that their children have a high probability of making progress by attending our program; any progress their children can achieve, they are happy with. Lots of these children are not high achievers and are having problems already, and their parents are aware of it, so this program made a difference. Do they really care about whether there is a control group or not? Their children were in the control group themselves before, and made no progress.

    I personally run a learning center in Vancouver and have been using this method for over 10 years. Do not forget parents must pay me over $100 a month for 2 hours a week to learn math from me, and if this method were not working, I would not be here writing about it.

    What John and I did was to have openly release the result in a scientific way to tell the world that we have actually come up with an innovative way of making math more interesting to learn for children.

    From personal experience communicating with parents, many parents face the problem that their children are giving up on math; if they can even get their children to go to an after-school math program, it is considered a success.

    A parent says something like this: "Look, my child does not want to do extra math, but he is having problems. I have sent him to A, B, and C learning centers and he liked none of them; he only likes your math and chess integrated program." So, control group or not, do they care? They care whether there is a difference between before and after, that is all. They are not interested in comparing their own children's progress to other children's.

    Our program has generated lots of interest for some children to continue working on math, and this factor seems to have been ignored by some of you, who continue to dwell on the perfection of the experimental design. I offer some background information so that hopefully all of us will understand the problems our math education is facing today.

    If you can get kids interested in doing math and continuing to work on worksheets, that is already a job well done; this is actually what lots of parents want. This is in fact the main problem of today's math education.

    Do you know that some parents have cried in front of me because their children do not want to do math?

    Jeans:

    Today's quote of the day at www.chessgames.com also discusses the correlation between chess and math:

    In mathematics you're as good as your best move. In chess you're as bad as your worst.
    --- Paul Samuelson ---

    Regards,
    Jeans

    Eric:

    I am sure that Mr Ho and colleagues have good experiences with their chess-math methods. Otherwise, why write a paper about it? But (as Arne points out) the fundamental point remains that, given the total lack of experimental control, the study does not tell us anything.

    Actually, I don't think a research grant would be needed to incorporate a control group into the study. It should be easy to find a colleague teaching comparable classes (composed of comparable pupils; perhaps in a different school) who does not use this particular method. Take pre- and posttests in both groups, and there you are; it's still far from a clean experimental design, but at least you would have a *slightly* stronger basis to conclude anything.
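    The design Eric describes can be made concrete in a few lines. With pre- and posttests in both a treatment and a control group, the quantity of interest is the *extra* gain in the treatment group over the control group, not the raw pre/post difference. A minimal sketch, with entirely hypothetical scores invented for illustration:

    ```python
    from statistics import mean

    # Hypothetical pre/post math scores for five pupils per group
    # (all numbers are made up for illustration).
    treat_pre  = [52, 60, 45, 70, 58]
    treat_post = [60, 68, 52, 78, 64]
    ctrl_pre   = [50, 62, 47, 69, 55]
    ctrl_post  = [55, 66, 51, 74, 59]

    # Gain score = post - pre, per pupil.
    treat_gain = [b - a for a, b in zip(treat_pre, treat_post)]
    ctrl_gain  = [b - a for a, b in zip(ctrl_pre, ctrl_post)]

    # Both groups improve (pupils learn math at school regardless).
    # The evidence for the method is the difference in mean gains.
    extra_gain = mean(treat_gain) - mean(ctrl_gain)
    print(mean(treat_gain), mean(ctrl_gain), extra_gain)
    ```

    The point of the sketch: a one-group pre/post comparison would report the full treatment-group gain as the program's effect, while the control group shows that part of that gain would have happened anyway.
    
    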

    Eric:

    Amanda wrote: "I have found they have used a very simple t-test to conduct the experiment and their result statement has not deviated from the point of view of a standard paired t-test. Their paper is fairly presented if all they wanted was to show if there is a difference between pre-test and post-test after administering their innovative math and chess teaching method."

    I don't think anyone here will disagree with that. But is this (a difference between pretest and posttest scores) really the only point the authors are making? Sentences like "The results of this study demonstrate that a truly integrated math and chess workbook can help significantly improve pupil's math scores" (as quoted in the article above), as well as Frank Ho's contributions to the discussion, suggest that the authors are in fact making a claim regarding the *cause* of the difference, namely, that the teaching method used has caused, or at least contributed to, the difference.
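    For what it's worth, the paired t-test Amanda mentions is easy to state explicitly, which also makes its limits clear: it tests only whether the mean pre/post difference is nonzero, and says nothing about what caused that difference. A self-contained sketch using stdlib Python, with hypothetical scores invented for illustration:

    ```python
    from math import sqrt
    from statistics import mean, stdev

    def paired_t(pre, post):
        """Paired t statistic: t = mean(d) / (stdev(d) / sqrt(n)),
        where d are the per-pupil post-minus-pre differences."""
        diffs = [b - a for a, b in zip(pre, post)]
        return mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))

    # Hypothetical scores before and after the program.
    pre  = [52, 60, 45, 70, 58]
    post = [58, 66, 50, 75, 60]

    t = paired_t(pre, post)
    # A large t says post scores are reliably higher than pre scores.
    # It does NOT say the teaching method (rather than, say, ordinary
    # schooling in the meantime) produced the improvement.
    print(round(t, 2))
    ```

    This is exactly why the causal reading of the quoted sentence goes beyond what the test itself can support.
    
    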

    Frank Ho:

    One thing that the author of the article entitled "Chess and math: a happy couple?" did that was not fair to John Buky and Frank Ho was that she asked Mr. Maarten Solleveld to comment on some examples, but did not give his comments to either John or Frank to respond to or view before her article was published publicly.

    This may have tarnished the reputations of both John Buky and Frank Ho as researchers and educators who have worked in this highly specialized area for over 10 years.

    With all due respect to Mr. Solleveld, both John and I are also highly educated and have done our own research in this area. Even among mathematicians there are many specialized fields, and certainly no one can claim to know all areas.

    We certainly welcome the opportunity to further discuss the educational value of our worksheets, using the examples given to Mr. Solleveld, and would further like to clarify whether those examples caused any confusion just because they are "strange" puzzles (in Solleveld's words).

    arne:

    Dear Mr. Ho, thank you for your comments! I should note that I did in fact give Mr. Solleveld's comments to you and John Buky in an e-mail I sent you last week. John replied to this only very briefly by providing links to your current research and the Chess Academy website, so I naturally assumed that you didn't have anything to add to these remarks. In my article, by the way, I also mentioned that the unclarity of some of the examples may be a matter of insufficient explanation in the curriculum.
