Thursday, August 16, 2012

Mastermind Strategies

Strategy for computer codebreaking

Ronald Retzel (2012) 
draft version


Fig.1 Vomlel (2004?) Bayesian networks in Mastermind

  
Game theory


Links
  • Game theory wikipedia
  • Eric Rasmusen. An introduction to game theory (pdf)







Fig.2 Diagram of rock, paper, scissors, lizard, Spock game play

Game analysis (Bulls and Cows / Mastermind)


Game setup: see How to play. Here we play the color code game.

Numeral system set:
 
Links
Fig.3 Maya numbering system
 
Numeral systems have a base that determines the set of digits. Since the invention of zero, by convention any modern numeral system set starts with zero.
  • base-10:{0,1,2,3,4,5,6,7,8,9} (decimal system set)
  • base-6: {0,1,2,3,4,5} (senary system)
  • base-2: {0,1} (binary system)
  • base-26: {0,1,...,25} (hexavigesimal system)
In this approach we use the base-6 set with k=6 elements.
Values can easily be converted from one numeral system to another. How to convert from/to the decimal system: http://en.wikipedia.org/wiki/Hexadecimal#Division-remainder_in_source_base
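To make this concrete, here is a minimal Python sketch of the division-remainder method (my own illustration; function names are mine, not from any source):

    def to_base(value, base):
        # Repeated division by the target base; the remainders,
        # read in reverse, are the digits of the converted value.
        digits = []
        while True:
            value, remainder = divmod(value, base)
            digits.append(remainder)
            if value == 0:
                break
        return digits[::-1]

    def from_base(digits, base):
        # Horner's scheme for the conversion back to decimal.
        value = 0
        for digit in digits:
            value = value * base + digit
        return value

    print(to_base(1295, 6))            # [5, 5, 5, 5]
    print(from_base([5, 5, 5, 5], 6))  # 1295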



Symbol set: The numeral set is used here only to create an index for the symbol set; no calculations are made with it. In this game the numeral set and the symbol set are paired bijectively (Bijection): exactly one element of either set represents exactly one element of the other set. Any distinct objects can serve as symbols, e.g.:
  • {0,1,2,3,4,5,6,7,8,9} (10 elements: paired with decimal system)
  • {1,2,3,4,5,6,7,8,9,0} (10 elements: paired with decimal system) 
  • {head, tail} (2 elements: binary system)
  • {red, yellow, green, blue, white, orange} (6 colors: senary system)
  • {ant,bird,cat,dolphin,eagle,falcon} (6 symbols: senary system)
  • {red ball, blue ball, red triangle, blue triangle, red square, blue square} (6 symbols: senary system)  
  • {a,b,c,...x,y,z} (26 character alphabet: hexavigesimal system)
In this approach (instead of colors) we use the symbol set {1,2,3,4,5,6} paired with the numeral set {0,1,2,3,4,5}, where the 1st element of the numeral system set (zero) represents the 1st element of the symbol set (one), and so on. Note that it is not necessary to use another base-10 system, since we don't calculate with the code (cf. http://en.wikipedia.org/wiki/Decimal_without_a_zero#The_bijective_base-10_system); in this game digits are just symbols, like an index for a color.

In this game the code is a sequence of elements where the order is taken into account (a permutation with repetition, not a combination) and duplicates are allowed. In this approach we use n=4 elements per sequence, e.g. "1233". This gives us a set of k^n = 6^4 = 1,296 different permutations, meaning distinct sequences of 4 elements of the symbol set: {"1111","1112",...,"6666"}. No blanks are allowed (in some game setups a blank may be used in a guess as an additional symbol that we know for sure is not in the code).
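As a quick illustration (a sketch of my own, in Python), the whole set can be enumerated like this:

    from itertools import product

    SYMBOLS = "123456"  # the symbol set used in this approach
    codes = ["".join(p) for p in product(SYMBOLS, repeat=4)]
    print(len(codes))             # 1296 = 6^4
    print(codes[0], codes[-1])    # 1111 6666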

In this approach we assume that the set of permutations can be enumerated (generated and counted). Note that this is not a necessary requirement; in practice it depends on the capabilities of the computer (not to mention theoretical aspects). If it can be enumerated, then we can create an index (cf. http://en.wikipedia.org/wiki/Factorial_number_system): in this approach the index (position) of each permutation is exactly represented by its value, no matter what numeral system is used, e.g.:
  • decimal system: Position 3: Permutation: 0003, 4th position, since "0" is 1st position in numeral set
  • binary system: Position 10 (bin.): Permutation: 0010, convert binary to decimal: 2 = 3rd position
  • hexadec. system: Position B (hex.): Permutation: 000B, convert hexadecimal to decimal: 11 = 12th position
It can be handy to have an index number for each code and to be able to calculate the code from its index, rather than examining the codes one by one, e.g. when picking a random code or keeping track of codes already scanned.
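A minimal sketch of such an index (Python; helper names are my own, with symbol "1" paired to numeral 0 as described above):

    K, N = 6, 4
    SYMBOLS = "123456"

    def code_to_index(code):
        # Read the code as a base-6 value: "1114" -> 0,0,0,3 -> 3
        index = 0
        for symbol in code:
            index = index * K + SYMBOLS.index(symbol)
        return index

    def index_to_code(index):
        # Inverse conversion, e.g. for picking a random code by number.
        symbols = []
        for _ in range(N):
            index, digit = divmod(index, K)
            symbols.append(SYMBOLS[digit])
        return "".join(reversed(symbols))

    print(code_to_index("1114"))  # 3 -> the 4th permutation
    print(index_to_code(3))       # 1114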

Comparing codes
Information we get from the other player when comparing codes: the quantity of direct "hits" (black pegs), and the quantity of correct elements at any position (black + white pegs), in this approach called "hints". E.g. if "1234" is the secret code, then "1122" has 1 full hit and 1 other match. In this approach hits = black pegs and hints = black + white pegs, therefore we get this result: (1.2) = (1 hit, 2 hints), in the usual notation written (1,1) = (1 black peg, 1 white peg).
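The comparison itself is easy to compute; a minimal Python sketch (the function name is my own):

    from collections import Counter

    def score(a, b):
        # hits: same symbol in the same position (black pegs)
        hits = sum(x == y for x, y in zip(a, b))
        # hints: total symbol matches regardless of position
        # (black + white pegs); at most one hint per code digit
        hints = sum((Counter(a) & Counter(b)).values())
        return hits, hints

    print(score("1234", "1122"))  # (1, 2), i.e. (1.2) in this notation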

Consistency.
We don't know the secret code "1234", but we know that our guess "1122" compared with the real secret code gives (1.2) as a result. Any candidate for the secret code must show the same behavior: compared with the guess ("1122"), it must also give (1.2). E.g. "1331" does, and therefore it is a consistent candidate for the secret code. If a code behaved differently than the real secret code, then it cannot be the secret code. We call a guess consistent with a previous guess if the comparison between these two guesses has the same result as the comparison between the secret code and the previous guess. In this game a guess is consistent if it is consistent with all previous guesses. If no consistent code is left among the remaining permutations, then some given information must be wrong. Note that the player is not forced to guess consistently.
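A hedged sketch of this consistency test, reusing score() from the sketch above (the history format is my own choice):

    def is_consistent(candidate, history):
        # history: list of (previous_guess, observed_score) pairs.
        # A candidate survives iff it reproduces every observed score.
        return all(score(candidate, guess) == observed
                   for guess, observed in history)

    history = [("1122", (1, 2))]
    print(is_consistent("1331", history))  # True  - still a candidate
    print(is_consistent("5566", history))  # False - cannot be the code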

Scoreboard.
a) Columns (categories)

Each code in the same column (category of codes) has the same quantities of duplicates and non-duplicates. For any n with k>=n we know the complete set of categories of all permutations, where neither the order nor the symbol index is taken into account.

n.k (k>=n)
1.1: 1-0
2.2: 2-0, 1-1
3.3: 3-0, 2-1, 1-1-1
4.4: 4-0, 3-1, 2-2, 2-1-1, 1-1-1-1
5.5: 5-0, 4-1, 3-2, 3-1-1, 2-2-1, 2-1-1-1, 1-1-1-1-1
6.6: 6-0, 5-1, 4-2, 4-1-1, 3-3, 3-2-1, 3-1-1-1, 2-2-2, 2-2-1-1, 2-1-1-1-1, 1-1-1-1-1-1

In this approach (n=4) there are 5 categories: (4-0) (3-1) (2-2) (2-1-1) (1-1-1-1), reading: 4 identical elements; 3 identical and 1 different element; and so on, e.g.: "1111", "1112", "1122", "1123", "1234". "1122" scores the same as "1221" or "2233". The quantity of distinct categories is the number of integer partitions of n, p(n); for n=4 this is p(4)=5.
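The category of a code is simply the sorted list of its symbol multiplicities; a small sketch (again Python, helper name is mine):

    from collections import Counter

    def category(code):
        # "1122" -> (2, 2); "1234" -> (1, 1, 1, 1)
        return tuple(sorted(Counter(code).values(), reverse=True))

    print(category("1122") == category("2233"))  # True - same column
    print(category("1112"))                      # (3, 1)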

b) Rows (scores)
Each code in the same row gives the same result when compared with the secret code. For any n we know the complete set of all possible scores, noted as "(hits.hints)" where hits = black pegs and hints = black + white pegs: (0.0)..(n.n). In this approach (n=4, k>=n) these are: (0.0) (0.1) (0.2) (0.3) (0.4) (1.1) (1.2) (1.3) (1.4) (2.2) (2.3) (2.4) (3.3) (4.4). Note that a score with n-1 black pegs and 1 white peg, i.e. (n-1.n), e.g. (3.4), is not possible in this game. Quantity of possible scores (with n>1, k>=n): (n+1)^2 - (n+1)*n/2 - 1 = 14. Our scoreboard now has 14 rows and 5 columns and looks like this (sorted by hints!):

Scoreboard: counts of solutions for possible first guesses n=4 k=6
Scores\cat. (4-0) (3-1) (2-2) (2-1-1) (1-1-1-1)
e.g. "1111" "1112" "1122" "1123" "1234"
(0.0) 625 256 256 81 16
(0.1) - 308 256 276 152
(1.1) 500 317 256 182 108
(0.2) - 61 96 222 312
(1.2) - 156 208 230 252
(2.2) 150 123 114 105 96
(0.3) - - 16 44 136
(1.3) - 27 36 84 132
(2.3) - 24 32 40 48
(3.3) 20 20 20 20 20
(0.4) - - 1 2 9
(1.4) - - - 4 8
(2.4) - 3 4 5 6
(4.4) 1 1 1 1 1
Total 1296 1296 1296 1296 1296
worst case 625 317 256 276 312
total hint<2 1125 881 768 539 276
Irving [Kooi] 512 236 204.5 185.3 188.2
Neuwirth [Kooi] 1498 2693 2885 3044 3057
Kooi 5 11 13 14 14


Note that for easier evaluation the scoreboard in this approach is sorted by hints, unlike many notations where it is sorted by hits.

c) Interpretation for the first guess: If we guess "1111" and get (0.0), then there are 625 codes left that could be the secret code, e.g. "2345"; this tells us that the probability that the code does not contain any "1" is 625/1,296. If we guess "1234" and get (0.0), then there are just 16 possible codes left, e.g. "5566", since the probability that the code contains neither 1, 2, 3 nor 4 (= contains only 5 and/or 6) is 16/1,296.
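For checking, the columns of the scoreboard can be reproduced by scoring one representative guess against all 1,296 codes - a sketch reusing score() and the codes list from the sketches above:

    from collections import Counter

    def column(guess):
        # Distribution of scores of `guess` over all possible codes.
        return Counter(score(code, guess) for code in codes)

    col = column("1111")
    print(col[(0, 0)])  # 625 - no "1" in the code at all
    print(col[(1, 1)])  # 500 - exactly one "1" in the code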

d) Modification. At the first stage we are not interested in lucky hits, since they are always hints at the same time. We can now sum up the scores by looking at hints separately:

Scoreboard: Hints for possible first guesses n=4 k=6
Scores\ cat. (4-0) (3-1) (2-2) (2-1-1) (1-1-1-1)
e.g. "1111" "1112" "1122" "1123" "1234"
(x.0) 625 256 256 81 16
(x.1) 500 625 512 458 260
(x.2) 150 340 418 557 660
(x.3) 20 71 104 158 336
(x.4) 1 4 6 12 24
Total 1296 1296 1296 1296 1296
worst case 625 625 512 557 660
total hint<2 1125 881 768 539 276

Interpretation: If the guess is of the kind (4-0), e.g. "1111", then we will know the exact quantity of color "1": 0, 1, 2, 3 or 4 times; no other guess will give us that information. This can be a good objective for a human player (see there). On the other hand, in the worst case the possible codes are reduced only to 48% (625 of 1,296). And the chance to find 3 or 4 colors is very low, at about 2%. If the guess is of the kind (1-1-1-1), e.g. "1234", then we will know exactly how many of these 4 colors are in the code at least once; no other guess will give us that information. If our objective were to maximize the hints after the first guess, this guess might be good: in the worst case we will still have 660 of 1,296 codes left, but at least 2 colors (50% of the colors) found; and we have a chance of 27% to find 3 or 4 colors.

To evaluate an approach we need to look at the next turns.

Tree.


Scoreboard: Hints for possible second guesses, after the first guess "5555" scored (x.1)
Scores\cat. (4-0) (3-1) (2-2) (2-1-1) (1-1-1-1)
e.g. "1111" "1112" "1122" "1123" "1234"
(x.0) - - - - -
(x.1) - - 512 458 260
(x.2) 15 340 418 557 660
(x.3) 5 71 104 158 336
(x.4) 0 4 6 12 24
Total 499 500 1296 1296 1296
worst case - 625 512 557 660
total hint<2 - 871 768 539 276

Attention: Obviously the table above is not finished. Sorry, I had no time.
Interpretation:
(under construction)




Basic Strategies for computer programming

Also see: Strategies for real people

Basic strategies for a computer assume complete information about all possible codes and the ability to count them. If there are more codes than the computer can process in the given time, then this kind of strategy fails. This depends heavily on the game setup and the computer hardware being used.

Index.

Non-strategies.

  • Random guess each turn, without consistency check (at worst k^n = 6^4 = 1,296 guesses, if no guess is repeated)
  • Random guess, but choose a consistent code only (at worst ?? guesses)
  • the same, but with variations such as a specified start guess or a mix of scanning and random guesses
  • Evaluation of all possible permutations using scoreboards
  • Incomplete scan (not all permutations are evaluated)
Evaluating strategies
  • Shapiro (1983) as discussed in: Kooi, Barteld (2005)
  • Knuth, Donald (1976–1977)
  • Irving, R. (1978-1979) as discussed in: Kooi, Barteld (2005)
  • Neuwirth as discussed in: Kooi, Barteld (2005)
  • Koyama, Kenji; Lai, Tony (1993) 
  • Merelo et al. (1999) 
  • Kooi, Barteld (2005)
  • Venco (2006)


Shapiro (1983) back to index
"A Simple Strategy: The first strategy by Shapiro (Shapiro, 1983) (it is also published in (Sterling and Shapiro, 1994)). His algorithm does the following: the combinations are somehow ordered (usually alphabetically) and the first combination is asked. The answer is received. The next question is the first one in the ordering that is consistent with the answers given so far. And so on until the combination is cracked. A crucial drawback to this strategy, however, is that it looks at the informativity of questions very marginally. One can only be certain that one does not know the answer already, but that is all." Kooi (2005).

--- not necessarily consistent ---
But brute force made it possible, for some (as of now only simple) game setups, to find fixed questions that always give the answer in not more than 7 turns. (Bogomolny, A.; Greenwell, Don (1999/2002))



Knuth, Donald (1976–1977) back to index

 "using an algorithm that progressively reduced the number of possible patterns. The algorithm works as follows:
  1. The first guess is aabb.
  2. Calculate which possibilities (from the 1296) would give the same score of colored and white pegs if they were the answer. Remove all the others.
  3. For each possible guess (not necessarily one of the remaining possibilities) and for each possible colored/white score, calculate how many possibilities would be eliminated. The score of the guess is the least of such values. Play the guess with the highest score (minimax algorithm).
  4. Go back to step 2 until you have got it right." (Knuth, Donald, 1976–1977)
See http://www.delphiforfun.org/Programs/MasterMind.htm; Nelson, Toby (1999). Investigations into the Master Mind Board Game (html)
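A hedged sketch of the minimax step (step 3 above; unoptimized - real implementations precompute all scores, and Knuth additionally breaks ties in favor of guesses that are still possible answers):

    from collections import Counter

    def knuth_guess(candidates, codes):
        # For each possible guess, the worst case is the feedback class
        # that would leave the most candidates; pick the guess whose
        # worst case is smallest (minimax), reusing score() from above.
        def worst_case(guess):
            buckets = Counter(score(c, guess) for c in candidates)
            return max(buckets.values())
        return min(codes, key=worst_case)

    # Usage: start with "1122" (aabb); after each answer keep only the
    # consistent candidates, then call knuth_guess(candidates, codes).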



Neuwirth back to index





Kooi, 2005 back to index




Advanced Strategy for computer programming 
Merelo et al. (1999-2012) back to index
https://springerlink3.metapress.com/content/n45l60j4803q7601/resource-secured/?target=fulltext.pdf&sid=wstdb0spla1licd2t3q4xii1&sh=www.springerlink.com
http://geneura.wordpress.com/2011/05/08/going-a-bit-farther-and-a-bit-faster-solving-mastermind-using-evolutionary-algorithms/
http://geneura.wordpress.com/tag/games/
Improving and Scaling Evolutionary Approaches to the MasterMind Problem
Juan J. Merelo, Carlos Cotta and Antonio Mora

An experimental study of exhaustive solutions for the Mastermind puzzle
J. J. Merelo, Antonio M. Mora, Carlos Cotta, Thomas P. Runarsson
(Submitted on 5 Jul 2012)http://arxiv.org/abs/1207.1315


Venco, 2006 back to index
http://apps.topcoder.com/forums/?module=Thread&threadID=511592&start=0&mc=41


My approach was simply to generate guesses compatible with the results of previous guesses.
The problem is that I didn't find a guaranteed way to find this compatible guess in time. So I was looking for the guess with as small an error as possible.
The error I've used = sum(guesses, 10000*abs(guess.a - current.a) + abs(guess.b - current.b)), where b is the number of perfectly matched pegs, and a is the number of total matched colors (results[0]+results[1] from the nextGuess arguments).
Now, I generate each guess candidate by first generating compatible color counts (first part of the error sum), then generating the most compatible order.
In both of those steps I first quickly make some reasonable initial state, then optimize it with some number of randomized changes, each taking into account the possible reduction of error. To speed things up a lot I maintained arrays of error changes for various modifications of the state, and look up the best moves in this array. I have found it's much more efficient to maintain this array than to recalculate errors after each move.
I've managed to generate up to 50000000/(L*L*L) candidates per guess, using reasonable runtime - up to 15 seconds in the worst case.
If any candidate has a perfect match (zero error) I return the guess immediately.

Another part of my solution is the code for getting as much information as possible from the guesses analytically. The code estimates possible minimum and maximum values for color counts, possible assignment bits of colors in each place, etc. It allowed me to determine the exact numbers of each color in reasonably few guesses - about K.
Initially, I tried to issue K-1 unicolor guesses first to determine the counts, but it turned out that wasn't necessary. That approach gets no information about ordering from the first guesses, and I got a score of ~100 from it (my second submission).

...

I thought about a brute-force solution for small cases, but I found that my generic algorithm almost always finds a compatible guess for lengths up to 20, and larger cases are not brute-forceable anyway.

...

Several coders have mentioned random swaps as the way to find a compatible guess.
My code does a bit more. It repeatedly removes the 2-8 worst pegs, then adds them back in the best places. If in any step there are several possibilities with the same change in error, I pick one randomly.
A similar algorithm was used in the first stage, when I don't know the color counts yet, for finding a compatible set of counts.

...


    "venco used the following cost formula (note: hits = pegs with correct color and correct position; hints = pegs with correct color but incorrect position):

    cost = sum { 10000 * abs(hits - hits') + abs(hints - hints') }


    The 10000 factor is used to give more weight to hits than to hints." (gsais)


Actually it's the other way around. I give more weight to hints, to find the correct color counts as quickly as possible:
cost = sum { 10000 * abs(hints - hints') + abs(hits - hits') }
After all counts are found (it usually happens slightly later than after K guesses), I start to use the correct number of different colors, so the first part of the cost always yields zero, and the algorithm concentrates on ordering.

...


    "hits = feedback[0] = pegs with correct color and correct position; hints = feedback[1] = pegs with correct color but incorrect position" (gsais)


Here is one more difference. I've used the total number of matching colors as the value hints in the cost formula, no matter whether they are in the correct place or not.
It can be calculated as results[0]+results[1] from the nextGuess argument.
This way hints is a plain measure of how good your guess is in terms of choosing the various colors of pegs, and hits is the measure of order quality.
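Putting venco's remarks together, here is a hedged sketch of his corrected cost function (my own notation, reusing score() from the comparison sketch above; the history format is an assumption):

    def venco_cost(candidate, history):
        # history: (guess, hits, hints) actually observed, where hints =
        # total matching colors = results[0] + results[1].
        # The 10000 weight fixes the color counts first; once they are
        # right, that term is zero and the order term takes over.
        cost = 0
        for guess, hits, hints in history:
            h, t = score(candidate, guess)
            cost += 10000 * abs(t - hints) + abs(h - hits)
        return cost

    # A zero-cost candidate is fully consistent and is played at once.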

.....

gsais, 2006
http://apps.topcoder.com/forums/?module=Thread&threadID=511592&start=15&mc=41

As good and simple as the stepwise strategy is, it has a very important complication: although finding a consistent code for small problems is rather easy, for larger problems it is extremely hard. In fact, it has already been proved that just deciding whether a consistent code exists (a subset of the problem of finding it) is an NP-complete problem; see lbackstrom's post above. Even for not-so-large problems (~ K=10 L=15) you can't actually hope to find a consistent code by simply generating random codes; you really need some heuristic to guide the search. However, even with good heuristics, for slightly larger problems it's almost impossible to find a consistent code (unless you already have enough hints), and for the very large problems the only thing that becomes easy is understanding the true meaning of "finding the needle in a haystack". This is where the "almost" stepwise strategies come in, and they come in basically two flavors:

* Divide-and-conquer: use a background color or an analytical method to divide the problem into smaller problems and apply a stepwise approach to the smallest problems; see the approaches of lyc1977 (background color; discarding the impossible combinations is a way to find the consistent codes) and OvejaCarnivora.

* Least inconsistent code: instead of finding a code completely consistent with all the previous feedbacks, find the code that is the least inconsistent (within a subset of the search space). Essentially, a cost formula is used that measures how inconsistent a code is, and any optimizing algorithm is used to minimize the cost. venco used the following cost formula (note: hits = feedback[0] = pegs with correct color and correct position; hints = feedback[1] = pegs with correct color but incorrect position):

    cost = sum { abs(hits - hits') + 10000 * abs((hits + hints) - (hits' + hints')) }


The 10000 factor is used to give more weight in the first moves to the total number of correct colors (hits + hints) than to the number of correctly positioned colors (later, once the color frequencies are discovered, the term automatically becomes zero). Now, since both vdave and I started the search already knowing how many pegs of each color were in the answer, the number of hints was completely irrelevant (because hits + hints = L if the code is a permutation of the right colors), thus the cost formula reduces to:

    cost = sum { abs(hits - hits') }


Furthermore, knowing the frequency of each color greatly limited the search space and also simplified the operation of finding the closest neighbours of a code (i.e. codes with an approximate cost) to swaps. But these advantages didn't come without a price: as venco himself has already stated, his 14+ point advantage over vdave (in the tests) is very likely due to his algorithm skipping the color-frequency discovery phase - yet he doesn't say that achieving that was far from an easy task (I can bet I wasn't the only one who unsuccessfully attempted it, so I'm looking forward to reading exactly how he did it in the match editorial - right now I'm clueless; edit: ok, now I understand how he accomplished it: it's thanks to the brilliant 10000 * diff(hits + hints) term in the cost function). On the other hand, vdave's "smart" discovery phase is rather intuitive (after one is told about it, of course...), easy to understand and implement, and it's also likely what allowed him to take the second spot in the tests.



http://www.mathworks.com/contest/binpack.cgi/analysis.html#Evolution

Strategies

Our contest problem is also known as the "knapsack" problem, a special case of the bin-packing problem. Everybody knows how to write a brute-force search algorithm to find the best-fit song list. However, since brute-force search is obviously slow and our scoring system punishes algorithms with long CPU times, exhaustive search is ruled out. This leaves us with two other choices: limited (heuristic) search and multiple tries.

Limited search means using some heuristic judging criteria to check a selected subset of all the combinations, and choosing the ones that pass the criteria to test.

Multiple tries means generating many random combinations of the song list, and then choosing the one with the best fit. This can also be considered a kind of limited search, except there are no heuristic judging criteria: simply choose the combination with the shortest gap among all the selected combinations. This approach has a fancy technical name, namely the Monte Carlo method.
Monte Carlo Method

Earlier entries mostly used the limited search approach. However, they either had a large gap (due to limitations in their heuristics) or spent too much CPU time. They were soon outrun by entries using the Monte Carlo method. All of the top 100 entries used the Monte Carlo method.

The Monte-Carlo method for this problem can be summarized by the following three steps:

    I. Generate some random permutations of the song list
    II. Find the best packing among these permutations
    III. Return the index corresponding to the best packing

Given these three steps, many strategies can be applied and many parameters can be adjusted. As one of the MathWorkers here said, the contest is like a genetic algorithm, except that it's the humans who are doing the mixing and selection. There were many cases where someone found a new strategy to apply to one of the steps, achieving a quantum leap in the score, which was then followed by a series of marginal improvements as other people tweaked (optimized) the strategy to its full power. This tweaking continued until a new strategy was found and another tuning war began.
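As an illustration only, a minimal sketch of those three steps (Python; the pack() function, which turns a permutation into a packing and its gap, is an assumption of mine):

    import random

    def monte_carlo(songs, tries, pack):
        # pack(permutation) -> (gap, packing); a smaller gap is better.
        best = None
        for _ in range(tries):
            perm = songs[:]
            random.shuffle(perm)            # step I: random permutation
            result = pack(perm)             # step II: evaluate the packing
            if best is None or result[0] < best[0]:
                best = result
        return best                         # step III: return the best one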


Comparing the strategies

4. EMPIRICAL RESULTS
The first table shows, for each strategy, for how many combinations the game is won in a particular round of the game. Put another way: each strategy produces a game tree, and the table shows, for each depth of the tree, how many leafs (nodes without successors) there are.
Round number 1 2 3 4 5 6 7 8 9
Simple 1 4 25 108 305 602 196 49 6
Worst case 1 6 62 533 694 0 0 0 0
Expected size 1 10 54 645 583 3 0 0 0
Entropy 1 4 71 612 596 12 0 0 0
Most parts 1 12 72 635 569 7 0 0 0
[Kooi]

The third table shows how many questions are needed in total by the strategy (the sum of the lengths of all the paths from the root of the tree to a leaf) and the expected number of questions needed (the expected length of a path to a leaf). The numbers in the second column are rounded.
Strategy  total number of questions  expected number of questions
Simple 7471 5.765
Worst case 5801 4.476
Expected size 5696 4.395
Entropy 5722 4.415
Most parts 5668 4.373
The last four compare quite favourably to Koyama and Lai's result of 4.340 (Koyama and Lai, 1993)
[kooi]

The following table shows the number of different questions that can be asked if the first answer is (1, 0).
question answer number of different questions
AAAA (1, 0) 12
AAAB (1, 0) 53
AABB (1, 0) 34
AABC (1, 0) 125
ABCD (1, 0) 52



Links: 
Introduction:

Basics:
Overviews:
 Contests
  •  mathworks.com (2001) http://www.mathworks.com/contest/mastermind.cgi/rules.html http://www.mathworks.com/contest/mastermind.cgi/home.html
    http://www.mathworks.com/contest/mastermind.cgi/analysis.html
mathworks.com
  • http://www.mathworks.com/contest/mastermind.cgi/winner.html
  • http://www.mathworks.com/matlabcentral/fileexchange/Files.jsp?fileId=946
  • lbackstrom; venco (TopCoder Members) (2006). Marathon Match 2 (html) - 2 different strategies on a 20 digits / 15 colors game
Original research
  • Knuth, Donald (1976–1977). The Computer as a Master Mind.(pdf) J. Recr. Math. (9): 1–6
  • Irving, R. (1978-1979). Towards an Optimum Mastermind Strategy. Journal of Recreational Mathematics, Vol. 11, No. 2, pp. 81 – 87. (NO LINK FOUND) [discussed in: Kooi (2005), Fahrni (2009)]
  • Neuwirth, E. (1982). Some Strategies for Mastermind. Zeitschrift für Operations Research, Vol. 26, pp. B257 – B278. (NO LINK FOUND) [discussed in: Kooi (2005)]
  • Shapiro, E. (1983). Playing Mastermind Logically. SIGART Newsletter, Vol. 85, pp. 28 – 29.(NO LINK FOUND)[discussed in: Kooi (2005)]
  • Koyama, Kenji; Lai, Tony (1993). "An Optimal Mastermind Strategy". Journal of Recreational Mathematics (25): 251–256 (NO LINK FOUND)[discussed in: Kooi (2005), Fahrni (2009)]
  • Wiener, Michael (1995). Newsgroups: sci.math (txt)
  • Merelo, J.J.; J. Carpio; P. Castillo; V. M. Rivas; G. Romero (1999). "Finding a needle in a haystack using hints and evolutionary computation: the case of Genetic Mastermind" (html) (html) (html)
  • Temporel, Alexandre ; Tim Kovacs (no year 2002?) "A heuristic hill climbing algorithm for Mastermind" (pdf/gz) 
  • Merelo-Guervos, J.J.; P. Castillo; V.M. Rivas (2003/2004) "Finding a needle in a haystack using hints and evolutionary computation: the case of evolutionary MasterMind" (pdf)
  • Vomlel, Jiri (2004?) Bayesian networks in Mastermind (pdf)
  • Vomlel, Jiri (2004) Bayesian networks in Mastermind, Proceeding of the 7th Czech-Japan Seminar, Awaji Island, Japan (pdf)
  • De Bondt, Michiel C. (November 2004). NP-completeness of Master Mind and Minesweeper. Radboud University Nijmegen (ps)
  • Stuckman, Jeff; Zhang, Guo-Qiang (2005). Mastermind is NP-Complete (html) 
  • Kooi, Barteld (2005). "YET ANOTHER MASTERMIND STRATEGY" (pdf)
  • Venco  (2006) forums (Topcoder.com)
  • Heeffer, Albrecht; Harold Heeffer (2007). Near-Optimal Strategies for the Game of Logik (pdf)
  • Slovesnov, Alexey (2008/2011) Search of optimal algorithms for bulls and cows game (pdf) 
  • Gerfertz, Torsten; Christian Pernegger and Thomas Seidl (2008). PostScript spielt Mastermind [PostScript plays Mastermind] (html)
  • (Unknown) (2008) html 
  • Jeff Rowberg (2010). Solving Mastermind Logically (html/pl)
  • Doerr, Benjamin; Reto Spöhel; Henning Thomas; Carola Winzen (2012). Playing Mastermind with Many Colors (html)
  • Merelo, J.J. et al. (2012) An experimental study of exhaustive solutions for the Mastermind puzzle (5 Jul 2012) (html)(pdf)
(no year)
  • http://code.google.com/p/mastermind-strategy/source/browse/wiki/Overview.wiki?r=43
  • http://www.docstoc.com/docs/14831880/Optimal-Mastermind-Strategy (paysite)

  • http://www.delphiforfun.org/Programs/MasterMind.htm Program as "seeker" 

External links

  • Mastermind flash - play Mastermind in Flash.
  • Mastermind OnLine - play Mastermind in real time using genetic algorithms
  • Large collection of online Mastermind games

  • Static Mastermind (variant), http://www.maa.org/editorial/knot/Mastermind.html
    A naive (but logically consistent) strategy is 97% as good as the optimum strategy, http://www.csc.ncsu.edu/degrees/undergrad/Reports/rtrosu/intro.htm
    For an approach using a genetic algorithm, http://kal-el.ugr.es/mastermind/docmm.html
    Enigma Mastermind (variant), http://www.student.brad.ac.uk/nchilton/java/Enigma.html
    For the rules of Master Mind, as supplied with the game, http://www.iserv.net/~central/GAMES/mastermind/mastermind.html
     


Permalink: http://codebreaker-mastermind-superhirn.blogspot.com/2012/08/mastermind-strategies.html

Citation: Retzel, Ronald (2012). "Strategy for computer codebreaking". http://codebreaker-mastermind-superhirn.blogspot.com/2012/08/mastermind-strategies.html

Disclaimer: All of my blogs and posts are my private and personal online notebooks and my private opinion. Posts are always draft versions that might get published to be reviewed by and discussed with my friends only. No information is meant for the public and I do not take any liability at all at any time for any content, code, link etc. for any reason. Warning: many links here are "clickable" and your software might direct you to and display content other than mine without warning.

Thursday, July 26, 2012

Bulls and Cows Mastermind ® Superhirn ® Codebreaker ®

This is my personal and private place where I put my personal notes and all essential links I could find to any information regarding this wonderful game. Thanks for your interest. Please help find the missing links.




Mastermind ®
Original design by Mordecai Meirowitz (1970) [1]
Various retail names: Mastermind, Superhirn, Supercode, Variabolo, Senha and many others
Note that the names are trademarks.


Description
Logic game (game of deduction)
Number guessing game (by process of elimination)
Color code or word guessing game
2 players

History. back to index

The exact year when the number guessing game Bulls and Cows was invented is not known. "Bulls and Cows has been played as a paper-and-pencil game for a century or more. I first played a computer version in 1968 on Titan, the Cambridge University Atlas" (John Francis, 2010 [1]).
Original features: paper-and-pencil game, number code

A similar word guessing game "Jotto was invented in 1955 by Morton M. Rosenfeld and marketed by his New York-based Jotto Corp." (wikipedia.org [2])
Jotto sheet [3]
New features: word code

"The first computer implementations of Bulls and Cows were the MOO program written by Dr. Frank King at Cambridge in 1968, a version for the TSS/8 time sharing system written by J.S. Felton, and a 1970 version written for the Multics system at MIT by Jerrold Grochow. The game was introduced to a wider audience in the 'Computer Recreations' column of the Apr-Jun 1971 issue of  'Software, Practice & Experience' " (John Francis, 2010) [1]
New features: 1-player vs. computer, mainframe computer game, keyboard
But: very limited access to mainframes, nowadays obsolete

"Mastermind was invented in 1970-71 by Mordecai Meirowitz, an Israeli Postmaster / Telecommunications expert. His idea was at first turned down by many of the leading toy companies, but he persisted, and took it to the International Toy Fair at Nuremberg in February 1971, where he showed it to a small English company, Invicta Plastics Ltd. The small Leicester based company bought up the entire intellectual property rights to the game, and under the guidance of its founder Mr. Edward Jones-Fenleigh, refined it, and released it in 1971-72. It was an immediate hit, and went on to win the first ever Game of the Year Award in 1973."[4]
New features: board game, color code

In 1976, Ronald Samson et al., Leicester, filed a US patent about a game that is known as "Grand Mastermind" (US Patent 4059274) [5][5a]. In 1978, Mordechai Meirovitz filed a patent about a Mastermind game for 4 players known as "Mastermind 44" (US Patent 4241923) [6][6a].
New features: color and shape code, 4 players

Remark: As far as I know, no patent has been filed on the original Mastermind game. The name, being a trademark, is protected. The special design of a game is an invention and might be protected by copyright laws. Due to so-called prior art, the game idea itself of guessing numbers or codes, basically the old "Bulls and Cows", cannot be protected by copyright laws. In my opinion the invention of colored pegs used as game indicators might be copyrighted, but I disagree that this extends to its graphical adaptation in some computer games, since virtual indicators displayed by a computer do not behave the same way physical ones do; such an invention of physical objects does not carry over to virtual objects. On the other hand, some of the computer games actually clone the design of the original board game as exactly as possible, and with due deference.

In 1977 or 1978, at a time when pocket calculators had already been popular for some years, the game was among the first electronic handheld games.
Superhirn electronic (Parker, 1977?/1978?)
 New features: handheld computer game

"Video games had been first programmed on university-based computers in the 1950s, but the general public couldn't play them until they began appearing as coin-operated games in the 1960s." [7] "The Atari 2600 -or the Atari Video Computer System (VCS) as it was initially marketed and sold- opened up worlds of game play for its owners and their friends when it was introduced in 1977"[7]. The number guessing video game "Codebreaker" was introduced 1978 by Sears/Atari for the Atari 2600[8]. In 1981, I wrote an APL program running on a BS2000 mainframe which was doing pretty much the same thing. In 1983 Atari home computers came out with BASIC on-board and it became easy to write a similar computer game program.
Atari 2600 (1977) [8]
New features: home video game

With the evolution of PC hardware and software booming since the mid-1980s, programmers and designers were able to drastically improve game performance and graphics design. In the 2000s, games with limited memory requirements had a renaissance, being well suited to mobile phones.
New features: PC game/ Online game/ Smartphone app


Links:
  • [1] John Francis (2010). "Strategies for playing MOO, or “Bulls and Cows”" (pdf)
  • [4] Toby Nelson (2000). A Brief History of the Master MindTM Board Game
  • [7] http://www.thestrong.org/online-collections/icheg/20/111.3559
  • http://ftp.stratus.com/vos/multics/tvv/moo-in-multics-1972.pdf
  • http://ftp.stratus.com/vos/multics/pg/pg.html
  • www.multicians.org/moo-in-multics-1972.pdf 

How to play back to index


2 Players
Bulls and Cows (Mastermind/variants) is a game for 2 players. A code (e.g. "4222") is secretly set by the codesetter (player or computer). The codebreaker (player) has to find this code in as few tries as possible. To do so, the codebreaker makes a guess and the codesetter compares the guess with the code. The result of that comparison is given as specific hints. The game is over when the code is cracked, all agreed tries are used up, or the player gives up. There is an uncommon game rule where the codesetter is allowed to select another code (from the codes still possible at that time) during the game; it has not been examined whether or how the game gets more difficult that way.

It is most important that the codebreaker can create (on paper, board, or in memory) or view a list of all previous guesses and hints at any time during the whole game. (Only in simple games are there known strategies where the player needs to remember only the same fixed guesses for every game [cf. the 12 coins problem].)

Code specifications.
The specifications of the code are set before the game starts. The code is a sequence (or a pattern) of objects. Most common objects are: numbers (e.g. "4222"), colors, shapes, symbols, characters etc. or a combination of these.
The length of this sequence may vary for each game; originally 4 digits (Bulls and Cows/Mastermind), most common now are 3, 4, 5, 6, 8, or 10 digits.
The number of available different numbers/colors (etc.) that can be placed within the sequence may vary for each game; originally 6 colors (Mastermind), most common now are 4, 5, 6, 8, or 10 colors (etc.) and 26 characters. A common limitation for setting the code is: no duplicates allowed.
In word codes only words are allowed that are common words of a language or listed in a dictionary agreed on by the players before the game starts.

Guess specifications.
The specifications of a valid guess are set before the game starts. The length of the sequence and the available objects are the same as for the code, but some varieties are possible: e.g. blank spaces might be allowed or not, and duplicates might be allowed even if the code must not contain any. Game difficulty increases rapidly if only a consistent guess is allowed (every guess must be a possible candidate for the secret code, consistent with all hints from the previous guesses), but this is the best strategy to find the code as fast as possible. Note that if a valid word has to be found, the guess does not necessarily have to be a word, too.

Hint specifications.
Hints may be indicated by any appropriate method (e.g. colors, pegs, text). There are 3 basic types of hints:
a) color (etc.) in code, in correct position (black peg / bull)
b) color (etc.) in code, but in wrong position (white peg / cow)
c) color (etc.) not in code (no peg / empty space)

In some variants more hints exist:
d) [applies only to colors-and-shapes code games:] a color/shape is placed in the correct position, but the correct shape has the wrong color or vice versa (blue peg)
e) [applies only to 2D/3D versions (e.g. Supermind):] see

Very important: The hints must never be given for a specific digit of the code, nor in an order other than the one specified before the game starts (commonly the order is: black peg - white peg - space). If there are duplicates in the guess, then there must be only one hint per digit of the code (see example below).

Example:
(You can find more examples within the images of the "online games" section of this post)
Bulls and Cows (rosegaelle.com, 2009)

In the image above a code of 4 digits ("1234") was set. The game was completed after 4 guesses ("1122", etc.). "1122": the "1" at position 1 is correctly placed: 1 bull. The "2" at position 3 is wrongly placed: 1 cow. On the right side are the respective hints for each guess. Note that the hints need not be in order, e.g. the 2nd hint is a cow instead of a space. Note that there is no 2nd cow for the "2" at position 4 of the guess, since the "2" in the code "1234" has already been accounted for. For experts: in this example neither guess no. 2 nor no. 3 is consistent with the hints given for guess no. 1.

Another example:[1]

  • 1-All four elements are correct - the mark is four black pegs. All of the following examples refer to this combination.
  • 2-Only the white rectangle is correct - one black peg.
  • 3-The white rectangle is right, but in the wrong position - one white peg.
  • 4-The colour white is right in the first position, the shape hexagon is right in the fourth position - two blues.
  • 5-The white rectangle is in the right place and gets a black; the yellow circle is in the wrong place and gets a white; the blue triangle attracts a blue peg because the colour is right.

Links:
tnelson.demon.co.uk good introduction
http://www.youtube.com/watch?v=rmp05sCXTRg video how to



Strategy for real people back to index

Index.
  • Basics
  • Word code game
  • Color code game 
  • Links
Basics.
Your strategies should consider different game scenarios:
  • Codesetter always sets a random code
  • Codesetter reacts to the codebreaker's strategy
Different game sets might require different strategies.
  • Level of difficulty
  • Dictionary based game sets
Just to illustrate how the level of difficulty can change:
  • 6 colors at 4 digits: 6^4 = 1,296 possible combinations (duplicates allowed) [original Mastermind]
  • 10 numbers at 5 digits: 10^5 = 100,000 (duplicates allowed)
  • 26 characters at 5 digits: 26^5 = 11,881,376 (that's why only valid words are allowed as code)
Word code games

Superhirn Codewort (Parker, 1972)

  • It is not always the best strategy to start guessing with a valid word and hope to be lucky, as shown in the image above. Another strategy goes like this:
[eastoftheweb]

  • First find out the quantity of vowels/consonants, e.g. with "aeiou". Then start to find consonants, most frequent first. The guesses shown above are surely not optimal; which guesses are better than others will depend on the language. In the examples above I just went through the alphabet, and it seems to work reasonably well.
  • Check here for other Jotto rules and strategies. 

Color code games:
  • first try to find all colors, then try to find the right places
  • start with 2 different duplicates: e.g. "1122" or "222777"
  • start with 1 color, e.g. "222222": the result tells you how many times that color is in the code; continue with the next color until all colors are found, then concentrate on sorting. Example:
The first guess gives us 2 blue pegs, so we keep them and fill up with the next color, red.

  • Now, the same strategy, but while searching for the remaining colors we already start sorting the colors that have been found. Example:
Since the 2nd guess had no hits, the two blue pegs have been shifted to the right to find their correct places.
Cf. this approach with the following computer strategies:
Links: 


Board Gamesback to index
Original features: 2 players, color code

Mastermind, newer version [1]

Grand Mastermind, 1974
New features: colors and shapes code. You have to guess 4 symbols + 4 colors (adds up to 8 digits).

Word Mastermind, 1972



Links:
  • Invicta Toys and Games Limited
  • boardgamegeek.com List and hundreds of images of most board games ever published plus additional information to each main version
  • e-s-g.eu Collection of images of many Mastermind versions and variants
  • tnelson.demon.co.uk Collection of images of many mastermind versions
  • http://www.mastermindboardgame.com/about-mastermind/ latest versions and videos

Electronic Games back to index
New feature: Handheld computer, LED display

Superhirn electronic (Parker, 1978?)
But: the list of hints must still be kept manually


Invicta (1977)




Links: 



Video Games
New Feature: Home computer system, TV display, automated hint list
But: Programs only run on specific computer hardware (nowadays some of these games have been emulated for modern operating systems)

Codebreaker (Sears/Atari, 1978)

Links:

PC Games back to index
New Features: Games for various operating systems, mouse, improved graphics and design
But: Programs only run with the specific operating system they were made for.

Homemade version for MS-DOS, 1985
New Features: Consistency check (guesses that cannot be the code are indicated/rejected), counter of combinations left that could be the code, learning mode
But: MS-DOS didn't support a mouse (actually a keyboard can be more comfortable for just entering numbers). Note that MS-DOS programs still run on modern MS-Windows systems.

win3.0 game
http://www.isoftpedia.com/windows/games-entertainment/puzzle-word-games/screenshots/cows-and-bulls-for-windows/


MindsEye, MS-Win, 2003-2012 (otssoftware.com)
Features: up to 15 colors

Supermind, MS-Windows, 2005-2010 [2]
New Features: 2D code geometry and different hint logic

A 3D version might be interesting.


 Bulls n' Cows, MS-Win., 2009 (rosegaelle.com)
New Features: Computer as codebreaker


MS-Win. 2011 (robwilliams.me)
Features: source code is available, computer as codebreaker

http://fourdigits.sourceforge.net/


Links:
  • http://home.pacific.net.hk/~kfzhou/mmh.html (online+pc versions)


Online Games back to index
New Features: Browser game, independent of specific hardware/OS
But: keyboard not supported; sometimes a special plugin/player is needed (Shockwave/Flash, Java, etc.)

Color code (codebreaker.creativitygames.net)

Word code (eastoftheweb.com)

Signs code (m8games.com) Review: excellent mouse feeling
http://www.mini-flashgames.de/board-games/mastermind-codebreaker/
http://www.retromundi.com/games/puzzle-games/mastermind-codebreaker.html
http://www.icepointgames.com/04/mastermind-codebreaker/ 


Colors and shapes (www.soc.napier.ac.uk/~andrew) 3D version available



Color code (iohelix.net) features up to 15 digits and 10 colors, excellent mouse feeling

Color code (mijan.de) feature: consistency check (checks whether the guess is consistent with previous hints)

Ghislain Costagliola


Links:
  • onlinespiele-sammlung.de More than 200 online games listed and linked, most of them playable (swf/js/java/other), basically color code games, occasionally updated

http://bibooz.net/juegos/mastermind/
http://home.pacific.net.hk/~kfzhou/mmh.html (online+pc versions)
http://www.counton.org/games/virtualmathfest/code.html online
http://www.muchgames.com/search-games/mastermind-codebreaker online




 Online Highscores and 2-player versions back to index
New Features: Online game competitions
  • http://www.bullscows.com/
  • http://m8games.com/web/color-solver 
  • http://brainking.de/de/WaitingGames 2 players
 Logik. http://brainking.de/de/WaitingGames


Logelix (mastermindspiele-online.de, 2012)
New Features: 2 player online internet game version



 Computer codebreaker back to index
New Features: Computer is solving

Smartphone Apps back to index
New Features: use with smartphone
But: usable only on a smartphone (Android or other OS)

Color code (droidmill.com)

Word code (droidmill.com)

Word code (newfreeappleapps.blogspot.de) (Jibber Jabber by Sqrrl, 2011)

androidforums.com (Maize, 2009)

http://www.nineoverten.com/2010/04/19/mastermind-review-iphoneipod-touch/

Links:
http://www.handster.com/code_breaker_mastermind.html android

Other Formats back to index



Permalink: http://codebreaker-mastermind-superhirn.blogspot.com/2012/07/mastermind-superhirn-codebreaker.html

Disclaimer: All of my blogs and posts are my private and personal online notebooks and my private opinion to be discussed with my friends only. No information is meant for the public and I do not take any liability for any content for any reason at any time. Warning: many links are "clickable" and your software might direct you to and display content other than mine without warning.