My 'expert player, candidate master and above level' chess computers

Mephisto Milano v2.00

Year: 1993
Programmer: Ed Schröder
CPU: 65C02 @4.9Mhz
ROM: 64Kb
Elo level: 1998
(1900 FIDE)
CMhz: 4.9
Rperf: 110%
KT: 1723
Square size: 0.96"

I much appreciate Ed Schröder's programs... So, to do him full credit, the Rebell 5, his first joint work with H&G, was a bit lonely in my collection. At the other end of this collaboration period, this Milano v2 (software version; the device has been upgraded with a 'Nigel Short' ROM) is one of the last chess computers authored by Schröder, and the ultimate version of his 8-bit 6502 software, the best in class on this type of processor. Roughly 200 Elo points above the Rebell 5, using the same CPU, is quite impressive! The Milano is a nice-looking device, quite flat, with a restrained design, offered with two sets of white chessmen (light grey or chromium-plated) and a travel chess set as well (2D magnetic pieces). A cover casing is available to protect the chessboard/keyboard/display face while travelling or for safer storage. This casing smartly includes magnetic areas to store the 2D pieces easily (and a writing pad to track your games). In short, a very well designed chess computer, accomplished, sturdy, offering a number of software features (training, post-mortem game analysis, emulation of an opponent of selectable Elo level...). In addition to the 'Nigel Short' ROM, which provides twenty or so extra Elo points and a playing style known to be slightly more active, this one, bought for €74.95 on German eBay, features a nice enhanced display:






The "Nigel Short" program obviously comes from the Rebel 5 lineage, but with progress in nearly all domains. The Rebel 5 peak performances (openings, strategy) are kept unchanged, while the enhancements over other domains result in a smoother, more mature profile. The calculations ability made no gain, this was not unexpected: same processor and speed; and the evaluation function must have gone more sophisticated, which is not in favor of pure calculations strength. The most noticeable progresses lie in the middlegame skills, the tactical patterns recognition, the defense and the endgame theorical knowledge; even if this last domain remains weak with regards to the overall playing level.

Novag Citrine

Year: 2006
Programmer: Dave Kittinger
CPU: H8 @20Mhz
ROM: 56Kb
Elo Level: 2001
(1903 FIDE)
CMhz: 13
Rperf: 105%
Square size: 1.47"

I didn't buy this nice Citrine (€250 on French 'le bon coin') for its program: experience shows it is very close to the Emerald Classic. A small difference lies in its roughly 50% faster processor and a slightly wider opening book. No, the real difference is the playing comfort of the rather large auto-sensory wooden chessboard, and above all the connectivity to the PC software Arena, which offers a huge choice of chess engines, now usable as direct opponents on the chessboard. You just need to enable the 'referee mode' on the Citrine to disable the original program and substitute an Arena engine, including MessChess emulations of chess computers (I selected around 150 of these for my Arena pool of engines). Then you can forget about the PC in the background; there is no need to watch the screen, as the engine's moves are reported by the blinking LEDs (usefully complemented by Arena's spoken announcement of moves). It is a real pleasure to use! A few drawbacks should be pointed out: the display module is not very handy, unreadable while lying flat (you absolutely must find or knock up a small stand to give it a convenient slope) - but you can also play without it - and the cabling needed for the PC connection is designed for a serial connector, thus requiring a USB or Bluetooth adaptor to connect a reasonably recent PC. But the Citrine remains the most affordable way to connect a nice auto-sensory board to a PC, and as a bonus, it is a dedicated chess computer as well.

Mephisto Chess Challenger

Year: 2004
Programmer: Frans Morsch
CPU: H8 @10Mhz
ROM: 32Kb
Elo level: 2025
(1927 FIDE)
CMhz: 6.5
Rperf: 110%

KT: 1755
Square size: 0.98"

Like the Sensory Chess Challenger 8, this chess computer was not expected to join my collection. The reason for it relates to its twin brother tuned to 32Mhz, described below. Actually, while the 40Mhz tuning (still in place at that time) was showing some issues, I spotted on French eBay an offer for this complete device, unmodified (so 10Mhz clock) and in mint condition, at the 'buy now' price of €14.99. So for the value of a mains adapter, I got not only a spare CC - should the other device get cooked because of the overclocking - including original packaging, user's manual and mains adapter, but also a brand new Saitek chessmen set, perfect for the GK2000. I can even have both the 10 and 32Mhz versions enter the same tournament and play against each other! This device remains a secondary one I unpack rather seldom, as I use the tuned CC to play at the original speed as well, thanks to the switch fitted on the clocking circuit. This spare chess computer therefore makes do with the former Mephisto Mirage set, shown in the pictures; the pieces fit quite well...
Some useful tips regarding this chess computer (they are also valid for the 32Mhz version presented below):
  - Like many recent clones of the Saitek GK2100, the 'H8' bug is present. H8 does not refer to the processor but to the board square: should one of its chessmen stand on this square, the Chess Challenger may move it without paying attention to the consequences. Live example: in this position, balanced until then against a Mephisto Vancouver (a device worth roughly 150 more Elo points!)
7r/6R1/1p3p2/1pnkp3/7P/P3KP2/1P6/4B3 b - - 0 1 the Chess Challenger, as Black, played Rxh4?? and of course lost the game (a quick verification of this position follows these tips). Some additional examples here.
  - The 'fun level' bug is present as well - but as long as you do not go through any fun level (or reboot the device if you did), this bug is not triggered.
  - A process not described in the user's manual gives access to interesting hidden options: power off the device (stop key), then hold down the option key while powering back on (option+go keys). The additional options are:
    . SEL to choose between the selective and brute-force algorithms, an option the GK2000 proudly advertises (check the 'selectable search strategy' label on the snapshot in the previous category).
    . EASY to disable pondering while the player is on the move
    . RAND (random) to add some variety to the chosen moves
    . BOOK to enable or disable the openings book
    . BK:FL (book full) to use the full variety of book moves
    . BK:PA (book passive) to favor defensive moves
    . BK:AT (book attacking) to favor active moves
    . BK:TN (book tournament) to restrict the choice to the strongest moves
  Getting back to the default options menu (e.g. to enable the pre-recorded games) uses a similar process: power off, then back on holding the level key (level+go keys), or simply reboot the computer.
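
To make the H8 bug example above more concrete, here is a minimal python-chess sketch (my own illustration, nothing to do with the device's firmware) that loads the quoted position and shows why Rxh4?? simply loses the rook: h4 is guarded by the white bishop on e1 and defended by nothing on the Black side.

    import chess

    # Position from the Vancouver game quoted above, Black (the Chess Challenger) to move.
    fen = "7r/6R1/1p3p2/1pnkp3/7P/P3KP2/1P6/4B3 b - - 0 1"
    board = chess.Board(fen)

    # The black rook sits on h8 - the square involved in the 'H8' bug.
    assert board.piece_at(chess.H8) == chess.Piece(chess.ROOK, chess.BLACK)

    board.push_san("Rxh4")  # the blunder actually played

    # h4 is attacked by the bishop on e1 and defended by no black piece,
    # so White simply replies Bxh4 and wins a whole rook for a pawn.
    print([chess.square_name(s) for s in board.attackers(chess.WHITE, chess.H4)])  # ['e1']
    print([chess.square_name(s) for s in board.attackers(chess.BLACK, chess.H4)])  # []
    print(board.san(chess.Move.from_uci("e1h4")))  # Bxh4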



Khmelnitsky test: performing the test requires some care, as the score displayed with the 'info' key after interrupting the search is sometimes wrong. I use the infinite level (B8) and stop the search exactly after three minutes of thinking time (or one minute in some situations); sometimes the score is then re-computed on the fly from a single-ply, instant search. It is therefore mandatory to enable the display evaluation option while the computer is thinking, and to use the value that was displayed just before stopping the search.
As expected, the Mephisto Chess Challenger's profile is close to that of the GK2000 from the same author, powered by the same processor; except for roughly a hundred more points (the 32Kb ROM allowed the program to be enhanced, compared to only 16Kb for the GK2000). The main strengths remain tactics, recognizing threats, and counterattacks. Nothing better as far as the endgame is concerned; on the other hand, the algorithms have been significantly reworked to strengthen middlegame play. The Chess Challenger is particularly comfortable in intricate positions with a lot of pieces still on the board, where its tactical skills rule. In return, still compared to the GK2000, the level of play in the opening dropped noticeably (meaning out-of-book play, when most pieces are still present but complex tactics have not yet arisen). A human player of the same average level can rely on significantly better attacking skills, but would be wise to keep the game simple and play cautiously until the endgame is reached.
In conclusion, I note that the H8 bug affected the test in 4 out of 100 positions.

Fidelity Designer Mach III Master 2265

Year: 1990
Programmer: Dan & Kathe Spracklen
CPU: 68000 @16Mhz
ROM: 64Kb
Elo level: 2029
(1930 FIDE)
CMhz: 9.6
Rperf: 108%
KT: 1974
Square size: 0.98"

I could not decently leave my collection without a CPU representing the Motorola 680x0 family. The 32-bit 68020 and 68030 being found only in top-class devices (very expensive ones, to say the least... let alone the outstanding 68040 and 68060!), the 16-bit 68000 was my best choice to fit my budget (device bought for €250 on French eBay, complete with original packaging). In addition to being clearly faster than an 8-bit 6502, the 68000 can address more than 64Kb (up to 16Mb), which allowed the Spracklens to use hash tables, first introduced with the 1987 Excel 68000. In this Designer Mach III, 64Kb of RAM are dedicated to them (plus 16Kb for all other program purposes, so 80Kb of RAM as a grand total). Not a huge amount, but it is easy to spot the effect during endgames, when the (displayable) search depth clearly rises, providing of course a very positive boost to the playing strength. The Mach III program is top of its class within the Spracklens' series: obviously stronger chess computers do exist, but only thanks to higher speed and larger hash tables. The Elite Avant-Garde series, the Mach IV Master and Sargon V as well (the last Sargon, from 1991) do not perform better as far as Rperf is concerned. The 2265 in the name refers to the performance of the Excel Mach III (same hardware specifications and same software, but a less attractive case), certified by the USCF (United States Chess Federation) on the basis of tournament games played against rated players. So it officially earned the Master title; see the picture below. I had trouble with the power supply: fine on batteries, but no mains adapter gave a reliable result. I tried a modern regulated power supply set to 6V, 7.5V or 9V without success; same outcome with a non-regulated one (genuine Fidelity 9V or HGN 5001 8.5V: both gave the same result). The Designer sometimes came to life, but most of the time got stuck showing random display segments, with some LEDs randomly lit as well. Once correctly started, it worked for anywhere from a few minutes up to about an hour, then failed again with the very same symptoms. I simply rewired the connections from the chessboard's power socket so that the mains adapter feeds the printed circuit board through the same path as the batteries, which of course requires a regulated 6V power supply (in battery mode, four 1.5V batteries are connected in series, so 6V). The on/off switch still works, and batteries can still be used as a backup source (without plugging in the mains, of course). More than four hours of continuous computing on mains power without any issue proved this solution fully works!






I was keen to measure, using the Khmelnitsky test, the improvement achieved by the Spracklens' program compared to the 1985 Excellence. The consistency of the playing strength, from the opening through the middlegame and up to the endgame, is indeed still present, now a step higher. The progress in the standard endgame positions domain is huge! Calculation, tactics and defense clearly improved as well, with some help from the fast processor and the hash tables. The program efficiently recognizes threats, and remains more comfortable in defense than in attack.

Mephisto Chess Challenger 32Mhz (tuned)

Year: 2004
Programmer: Frans Morsch
CPU: H8 @32Mhz
ROM: 32Kb
Elo level: 2126
(2025 FIDE)
CMhz: 20.8
Rperf: 109%

KT: 1840
Square size: 0.98"

After the GK2000, the Mephisto Europa's big brother, here is an even bigger brother! This Mephisto (by Saitek, as the label states) is a clone of the GK2100 launched in 1994, slightly more than one year after the GK2000, with software enhancements (worth roughly 50 additional Elo points) enabled by an enlarged 32Kb ROM and code probably based on Fritz 2, launched in 1993. Not enough of a gap from the GK2000, so I had not tried to add one to my collection: too close in both playing level and playing style. But with an (initially) four-times accelerated clock, thus 40Mhz driving the CPU, this device was worth paying attention to! All the more so as it was advertised on German eBay as fully complete, including original packaging, user manual, chessmen and mains adaptor, in mint condition, for €64. Cherry on the cake, the 40Mhz tuning was smartly done: a switch hidden in the chessmen's storage space lets you set the original speed (10Mhz) back (switching requires the device to be powered off). And this was very fortunate, as the beast proved temperamental during my first runs. I started with a performance test, using a BT-2630 position for which the GK2100's solution time was published and long enough (nearly 4 minutes). Using the 10Mhz setting, the Chess Challenger displayed the expected move in the very same time as the GK2100, then precisely four times faster (stopwatch in hand) once set to 40Mhz. Fine, it works perfectly! So I went on to full games against opponents expected to be close to its skill range. And I was disappointed! Despite its computing speed, the Chess Challenger not only failed to dominate, it lost or failed to win, playing obvious blunders I could spot even with my own weak level. E.g. in the snapshot above, with a won position for White, it played rook to e7?? at the one-minute-per-move level. And the blunders are proudly announced with a pretty good score, which drops painfully as soon as the opponent's refutation is entered, without the Chess Challenger needing to compute many plies... I took back the move and restarted the computation using the 10Mhz setting: no way, same blunder... I dug around for a while into the well-known fun level bug affecting many GK2100 clones, which makes the computer give away a piece from time to time, but the usual medicine failed (press ACL - all clear - then carefully avoid going through any fun level setting). After much trial and error, I noticed the CC sometimes plays the correct move in the same test position, whether set to 40 or 10Mhz, and then repeatedly avoids the blunder as long as I do not reset it. Then it clicked: actually, it is as long as I do not reset it with the switch in the 40Mhz position. That is the key: I can systematically reproduce the blunder provided I reset at 40Mhz, even if I later switch to 10Mhz; and the CC plays perfectly provided I reset it at 10Mhz, even if I switch to 40Mhz immediately after the reset. A reset being either the first power-on after plugging in the mains adaptor, or pressing ACL. In either case the CC performs a self-test step (the LEDs light up one after the other, and a particular animation is shown on the display); this step most probably initializes some RAM data needed for further operation. 40Mhz is too much during this step, resulting in data that is certainly corrupted, or maybe simply incomplete. I thought for a while I had restored stability by using batteries in addition to the mains adaptor, thus avoiding device resets, but actually pressing the New Game keys sometimes caused issues. So for complete safety I reset the device in 10Mhz mode each time I need to start a new game, then switch it to 40Mhz. Et voilà, back to expert level!
Update: during a heatwave (88°F in the room where I played with the CC), despite powering on at 10Mhz, obviously wrong evaluations (resulting in the immediate loss of a valuable piece) recurred (even after taking back the failing move and resuming the search), unless a short powered-off period (using the 'stop' key) was inserted before restarting the search. The CC then plays accurately, displaying the same score as at the 10Mhz setting; but a few tens of seconds later, once hot, the faulty evaluation systematically returns. Conclusion: 40Mhz is too much...
The web shop of an electronic components supplier offered 32Mhz quartz crystal oscillators, two for €1... So I dug out my soldering iron, reasoning that if the CC failed only rarely at 40Mhz, and only in specific situations, it should enjoy a wide enough reliability margin at 32Mhz. Well, everything went back to normal! Successful BT-2630 test, with an answer 3.2 times faster than at native speed, as expected. Cold start test, with the speed switch set to 32Mhz from the beginning: no way to make the usual failure happen, the one that previously required booting at 10Mhz. Warm test: the overheating symptom that corrupted the former 40Mhz evaluations proved impossible to reproduce! From a purely technical standpoint, the speed reduction should cost roughly 25 Elo points in performance, but part of that should be recovered thanks to the restored evaluation reliability. So here it is, re-labelled:

Now that the quality of its play is reliable, this 32Kb Morsch program running on fast hardware proves to be a formidable opponent, able to fight on a par with the strongest chess computers until the late middlegame. But against opposition of that standard, it definitely must get the better of its opponent before reaching the endgame to hope for a win. This experience-based observation is fairly consistent with the skills profile of the program (please check the Khmelnitsky test of the standard 10Mhz device, higher up this page).
Here is a direct comparison of both 10 and 32Mhz versions:




The gain in speed (3.2 times) makes up for the main weakness (the opening) of the standard Chess Challenger. The other domains are slightly enhanced, with an overall gain of 85 'KT' points. Strangely, the calculation domain scores somewhat lower, caused by a position where the program found the correct answer at the beginning of its thinking time (which counted towards the 10Mhz CC's score) but later wandered onto a wrong track. It also means the 32Mhz version does not out-calculate the 10Mhz one across the 16 other positions counted in this domain.

Mephisto Berlin

Year: 1992
Programmer: Richard Lang
CPU: 68000 @12Mhz
ROM: 128Kb
Elo level: 2172
(2071 FIDE)
CMhz: 7.2
Rperf: 117%
KT: 2061
Square size: 0.96"

My decision to add this Berlin 68000 to my collection took time to mature, and followed a slightly unusual path. First of all, I was missing a program from Richard Lang's best series, that is to say one from the Almeria - Vancouver series (1988-1991). After the success of the first series, Mephisto Amsterdam / Dallas / Roma (1985-1987), the author reworked his code, notably incorporating hash tables and enhanced selectivity, thus renewing his absolute domination of the World Microcomputer Chess Championship (WMCCC). The Vancouver is the most developed of this series, and the most successful program from Richard Lang, combining strength and playing style. The only ones to surpass its strength on dedicated chess computers would be the later Chess Genius 68030 and the "London upgrade" ROMs offered by the author himself (and not by Hegener & Glaser). The "London" version is a retrofit for Mephisto devices of Chess Genius 3 for PC, which left its mark on public opinion after its victory, running on a Pentium 90, over Kasparov in London, 1994. Nevertheless, the original Mephisto versions, and particularly the Vancouver, keep the advantage of a more active style! I dreamt for a while about buying a Mephisto München chessboard equipped with a Vancouver module set, and I simply lost time and money: the opportunity I chased turned out to be a swindle. On reflection, for lack of room, I would not have been able to play conveniently on a chessboard as large as the München... And here comes the Berlin. Sharing its robust and compact form factor with the Milano, it includes the Vancouver chess engine and the same 68000 processor as the 16-bit Vancouver. Some minor menu options from the Vancouver program were removed to reduce the ROM size to 128Kb, while the RAM stayed the same size (512Kb, most useful for the hash tables). Its Elo rating is of course the same as the Vancouver's, even a few tiny points higher, thought to result from some bug fixes. Burned by my attempt to buy the Vancouver from a phantom seller, I bought the Berlin 68000 through a German collectors' website - a costlier but far more reliable route. The device was offered for €270, in very fine condition, complete and fully original. The playing style is diametrically opposed to that of the Spracklens' programs, which are keen on launching all-out attacks at the smallest opportunity to grab pieces, even at the expense of their own position's soundness (a tendency that earned their style the nickname "jungle chess"). The playing style of Richard Lang's programs is safe, simple, limpid; most of the time content to build on small positional advantages, which often lead to victory once the game is simplified. An image comes to my mind to illustrate this style: when I was young, I enjoyed reading books about the wilds of the Great Northwest, by Jack London or J.O. Curwood. Amongst these books, White Fang. The hero wins every fight against Fort Yukon's other dogs, until he meets a bull-dog... I quote some phrases describing this opponent: "There was purpose in his method - something for him to do that he was intent upon doing and from which nothing could distract him". "The bull-dog did little but keep his grip". "White Fang resisted, and he could feel the jaws shifting their grip, slightly relaxing and coming together again in a chewing movement. Each shift brought the grip closer to his throat."
Well, those phrases admirably fit the Lang programs: once a small advantage is taken, the construction of the victory is meticulous and most often unstoppable!



Khmelnitsky test: a fairly strong profile, with an obvious weak point in attacking skills (not unexpected, given the playing style this program demonstrates) and weaker positional play (strategy) compared to its other skills. The opening phase is very strong, transitioning to a solid middlegame, then slightly reduced endgame strength in both theory (standard positions) and practice (endgame). It is a solid defender, leveraging its strong ability to recognize tactical patterns and threats to launch counterattacks. Last but not least, it has high calculation skills; after all, it has a 68000 and hash tables at its disposal!

CT800

Year: 2016 (prototype), 2021 (this device)
Programmer: Rasmus Althoff
CPU: ARM Cortex M4 @8-240Mhz
ROM: 263Kb (on 1Mb flash)
Elo level @168Mhz: 2281 (2178 FIDE)
Elo level @16.8Mhz: 2030 (1931 FIDE)
CMhz: 490 (nominal @168Mhz)
Rperf: 103%
KT: 2144

This is not an electronic chessboard (do you see any board?), but rather a chess computer in a bare calculator form factor, much reminiscent of the famous Mephisto "Brikett", or even more, given its size and wide display, of the very first Boris! Several contributors were involved in the genesis of this CT800:
- Rasmus Althoff; he is not only the programmer (providing the firmware, updated at intervals; the one loaded in my device is v1.42); above all he is the author and father of the project, which he shares as FLOSS (free/libre open source software); please check his dedicated website
- Vitali Derr rationalized the hardware (printed circuit board, power supply, interfaces, housing panels...), thus making it possible to build it at an affordable cost (I purchased this one from him for €130); more information here
- George Georgopoulos authored the initial chess engine (NG-Play), later ported to the ARM microcontroller by R. Althoff (who then reworked it significantly, not forgetting to add an openings book)
- and Marcel van Kervinck who authored the bitbase logic for the endgame King + Pawn versus King.

The heart of the CT800 is an Olimex STM32 board (see picture below), a development board built around an ARM Cortex M4 microcontroller, a 32-bit RISC processing unit (the same one powering both the Millennium Chess Genius and the Chess Genius Pro). Only 192Kb of RAM are available (of which 100Kb are dedicated to hash tables); porting a modern program onto it is a challenge! The nominal clock is 168Mhz, but the CT800 software allows the computing power to be throttled to intermediate speeds, as low as 10% (i.e. 16.8Mhz), or, the other way around, overclocked to 130 or even 145% (240Mhz at most). Also worth pointing out, a sleep mode paced at 8Mhz is entered while waiting for any user input (a move or a menu choice...), the main benefit being energy saving. A corollary of this feature is that there is no pondering during the opponent's thinking time. I have long appreciated chess programs able to anticipate, but I must honestly admit I don't miss permanent pondering with the CT800: not only is the playing level already (more than!) strong enough, but the program also spots "obvious" moves efficiently (in addition to moves forced by the rules, of course) and plays them nearly immediately. And that is exactly what I expect; no matter whether it comes from pondering or from the smart coding of the CT800: a natural flow of play, with the computer swiftly making the logical move(s) following, e.g., a capture and re-capture sequence. On the whole, the CT800 appears to pre-select its moves very well: using the information mode that displays insights into the ongoing search, I noticed it focuses very quickly (often instantly) on the move it will most probably play - it changes its mind rather infrequently, despite searching to greater depth. This conveys a feeling of somewhat intuitive play, even if that is a rather far-fetched anthropomorphism. But in very concrete terms, it achieves a Khmelnitsky test score quite consistent with its FIDE Elo, and worth remembering, this test is designed for humans... As the author mentions having tuned and aimed the playing style towards "anti-human" play (avoiding closed positions and too-early exchanges of pieces, encouraging mobility... thus keeping complexity high for the human brain), it seems he achieved (as a bonus, was it expected?) an interesting playing style, rather natural, that would suit a slightly passive player (the CT800 likes to let its opponent advance) who seeks complexity and looks for counterattack opportunities.


The heart of the CT800: the Olimex STM32-H405 board




Khmelnitsky test: the author worked on endgame positions, that's obvious! Strategy, long-term calculation and attacking plans are rather usual weaknesses for chess programs, but as already mentioned, the CT800 spreads confusion to make up for it when facing human players; and it defends well, identifies tactical patterns and recognizes threats, which enables it to counterattack. Once out of the openings book, it carries on with strong play, then holds its position in the middlegame, not to mention an uncommon ability to spot sacrifice opportunities or threats.

Millennium Chess Genius Pro

Year: 2016
Programmer: Richard Lang
CPU: ARM Cortex M4 @120Mhz
ROM: 16Kb + 1024Kb flash
Elo level: 2296
(2192 FIDE)
CMhz: 350
Rperf: 105%
KT: 2144
Square size: 0.95"

I hesitated for a long time before buying this chess computer, two years after its release. I did not lack strong Richard Lang programs: the DOS Chess Genius series, emulated Mephistos (Dallas, Roma and so on... up to the Genius 68030 London), and the very same Chess Genius on Android, which I run on a many times more powerful tablet (worth 8800 CMhz). But ignoring a strong chess computer, still being offered despite the long decline of this market, was unjustifiable. The chess engine, the same as on Android and iOS (programmed in "C"), seems to derive from an adaptation of the Amsterdam - Roma line (either a further developed Amsterdam, or a slimmed-down Roma); it is not the top performer of the R. Lang program series: his best ones (developed back then in 68000 or 80x86 assembly language) are worth close to 120% Rperf. Nevertheless, the power of the ARM RISC CPU pulls this small device up to national master level. 128Kb (out of 160) are available for hash tables, and there are two opening books to choose from: a modern one with 100,000 positions (Hiarcs book) and a classic one with 57,000 positions (London book). The push-sensitive chessboard is too stiff, requiring firm pressure on the squares (and it is supposed to have been softened compared to the previous Millennium Chess Genius!). With use, I noticed the sensitivity is not that bad, but it is limited to the very central spot of each square. So, should a square not react, rather than pressing harder, gently rolling the fingertip towards the center of the square does the trick. And if you press right on the center spot in the first place, reactivity is correct. Another small defect: a recess under the device is intended to store the set of pieces, but no way, they do not fit into this small space... The launch price was around €200; I was able to buy it brand new on offer for €150, mains adapter and postal charges (from Germany) included!



Khmelnitsky test: the profile fills out further compared to the Mephisto Berlin from the same author, featuring an enhanced middlegame and a better eye for sacrifices. I interpret this last point as revealing reduced selectivity (the program spends time analyzing even moves that appear at first glance to be losing). The calculation ability decreases, despite the distinctly faster processor; this corroborates my interpretation of a wider, but shallower, analysis. It is entirely consistent with the already mentioned assumption that this software is based on a port of the Amsterdam - Roma line, as the later Lang programs featured much higher selectivity. The endgame playing skills are on a par with the Berlin's, even with slightly more knowledge of standard endgame positions. The gain in tactical strength is appreciable, while strategy and attacking skills remain a bit weak. I note that it achieves the same score as the CT800, and moreover with a nearly identical skills profile (just compare!).

Millennium The King Performance

Year: 2019
Programmer: Johan de Koning
CPU: ARM Cortex M7 @10-300Mhz
ROM: 16Kb + 2048Kb flash
Elo level @300Mhz: 2520 (2412 FIDE)
Elo level @10Mhz: 2285 (2181 FIDE)
CMhz: 1500
Rperf: 109%
KT: 2339
Square size: 1.57"

Unlike the Chess Genius Pro above, bought after much pondering, I pre-ordered the King Performance before its market release (at the standard price, thus €349). I was eager to include Johan de Koning in my collection, but his 'The King' program was until then only available in prohibitively expensive devices (Saitek RISC 2500, Mephisto Montreux, Tasc R30/R40, or more recently Millennium Exclusive + The King Element). The King Performance is certainly not cheap, but it is a nice chessboard, large, push-sensitive (the squares are much more responsive than the Chess Genius Pro's), featuring a very informative display and four LEDs per square. 'The King' program is renowned for its strength and style of play, and also for its interesting ability to define personalities - the electronic chessboard lets you save three of them, tuned as you wish, in addition to the five predefined playing styles (defensive, solid, normal, active, aggressive). Not only can the program's playing style be tuned, but the hardware too, as the CPU speed can be adjusted in 10Mhz steps, from 10 to 300Mhz. So the King Performance offers several devices in one! At full speed, it performs at International Master level... The 3.xx series of The King on PC (starting with Chessmaster 8000) did not fully convince me; on the other hand, I much appreciated and played the 2.55 version, best in class during the 1997/98/99 years (Chessmaster 5000 and 5500, Tascbase King 2.55). The version used in the King Performance (the same as in The King Element) is 2.61, born in 1998 (the Chessmaster 6000 & 7000 engine), reworked by JdK himself for Millennium. 320Kb of RAM are available for hash tables, plus 64Kb for other needs. Two openings books can be enabled, a modern one including about 300,000 positions (Master book) and a classic one with around 61,000 positions (Aegon'94 book). Once the end of a variation is reached in the book set as primary, the search continues in the second book (assuming it has been enabled). Seven specialized openings books are provided as well, and a slot is available for an additional book you can download from the Millennium portal. Cherry on the cake: this program can play Fischer Random Chess, also known as Chess960.
Worth pointing out, and very useful given the strength of this chess computer, eight "easy" levels are available, where the computer is limited in the number of positions (nodes) analyzed per move. Of course, at these levels permanent pondering is disabled. The processor speed setting does not matter, as the program plays nearly instantly anyway. The analysis is throttled (heavily at the lowest levels) but not distorted by any contrivance. A collective effort was made to evaluate these levels; I retained from it the table below, based on 123 games played against chess computers with an established rating (set at 30s/move):

Easy level             0     1     2     3     4     5     6     7
Max nodes/move         125   250   500   1000  2000  4000  8000  16000
Estimated Elo          1055  1254  1438  1607  1763  1904  2031  2143
Estimated Elo (FIDE)   1183  1332  1470  1598  1714  1820  1932  2042
Update: since firmware version 1.40 (Dec. 2020), the easy levels have been enhanced, letting the maximum nodes per move reach higher values in the endgame phase; and the Elo levels have been standardized: there are now 9 levels, ranging from 1000 to 2050 Elo (FIDE). The above data is therefore obsolete for those who have performed this upgrade. And by the way, the new version provides additional adaptive "Comfort" levels (Play & Win, Friendly, Normal, Advanced): a great feature! Other additions worth pointing out: up to nine games can now be saved in the King Performance's memory, and there is a choice between a new simplified mode (for the occasional user) and the familiar "expert" mode (giving access to all the detailed settings).
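
Purely as an illustration of how these community estimates scale (each doubling of the node budget is worth fewer and fewer Elo points), here is a small Python sketch of my own - nothing provided by Millennium - that interpolates an Elo estimate for an arbitrary node budget from the pre-1.40 table above; the output is only as trustworthy as that table.

    import math

    # Community estimates for the pre-1.40 "easy" levels (table above):
    # node budget per move -> estimated Elo.
    easy_levels = {125: 1055, 250: 1254, 500: 1438, 1000: 1607,
                   2000: 1763, 4000: 1904, 8000: 2031, 16000: 2143}

    def estimated_elo(nodes):
        """Rough Elo estimate for an arbitrary node budget, interpolating
        linearly on log2(nodes) between the two nearest table entries."""
        points = sorted(easy_levels.items())
        if nodes <= points[0][0]:
            return points[0][1]
        if nodes >= points[-1][0]:
            return points[-1][1]
        for (n0, e0), (n1, e1) in zip(points, points[1:]):
            if n0 <= nodes <= n1:
                t = (math.log2(nodes) - math.log2(n0)) / (math.log2(n1) - math.log2(n0))
                return e0 + t * (e1 - e0)

    print(round(estimated_elo(3000)))  # about 1845, between easy levels 4 and 5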

Leveraging the "Spacious_Mind reloaded" test (for more information, check the Tiger Grenadier), I could rank the King Performance levels, including the "fun" ones:



The small PGN_tool (a free download from the Millennium site) enables exchanging game or position data between the King Performance and a Windows PC through a USB 2.0 cable (male A to male B type). This is not only very useful for saving a game played on the King Performance (and possibly performing a post-mortem analysis on the PC later), but also for loading a position you want to analyze - this made running the Khmelnitsky test much easier, as I had already comfortably entered the diagrams into my PC. Using the continuous analysis mode of the King Performance (set to 300Mhz and normal style), I just needed to send each position to let the engine start thinking immediately:
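
For anyone wanting to prepare test positions the same way, here is a minimal python-chess sketch (my own way of producing the file, not part of PGN_tool itself, and the FEN below is only a placeholder) that writes diagrams into a PGN file, one game per position, with the SetUp/FEN headers a PGN reader expects:

    import chess
    import chess.pgn

    # Placeholder list of diagrams as FEN strings (the Khmelnitsky positions
    # themselves are published in the book, so they are not reproduced here).
    positions = [
        "7r/6R1/1p3p2/1pnkp3/7P/P3KP2/1P6/4B3 b - - 0 1",
        # ... add further diagrams here
    ]

    with open("test_positions.pgn", "w") as f:
        for i, fen in enumerate(positions, start=1):
            game = chess.pgn.Game()
            game.headers["Event"] = "Test position %d" % i
            game.setup(chess.Board(fen))  # also fills in the SetUp and FEN headers
            print(game, file=f, end="\n\n")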




This profile is mind-blowing for its regularity in addition to the high level reached: it is thoroughly strong and balanced, the only less strong domains (I don't dare call them weak) being the opening transition after leaving the book, strategy, attack and, less pronounced, calculation power. You cannot expect to outdo the King Performance in the endgame; it masters this phase in both theory and practice. Its major strength lies in recognizing tactical patterns, enabling the launch of mighty counterattacks despite the slightly limited calculation power, thus demonstrating high software quality; this is equally present at the easy levels: even with limited analysis, the King Performance is able to astonish its opponent. Finally, its mastery of sacrifice opportunities is outstanding compared to most chess computers, and even compared to PC programs contemporary with the base version of the King Performance engine (dating from 1998).

Femuey (aka Vonset) L6 v2

Year: 2023
Programmer: Vonset team
CPU: ARM Cortex-A7 x2 @1Ghz
ROM: not communicated
Elo level: 2529 (2420 FIDE)
CMhz: 8800
Rperf: 102%
KT: 1954
Square size: 1.1"

Here is the successor of the Vonset L6 introduced in the 'weak club player, class C level' chess computers category. The brand is of no use in distinguishing them, as the L6 v1 can also be found labelled either "Femuey" or "Vonset". At the software level, one can check the range of levels, raised from 20 for the v1 to 22 for the v2. One can also refer to the 'about' menu entry, which displays both software and hardware versions: this example features hardware version 2.2.1 (whereas the v1 I own displays 1.0.1), and software version 1.6.13b (1.0.12b for my v1). This new L6 still offers all the characteristics I liked in its predecessor: a responsive chessboard with perfectly working move detection, nice color codes to help beginners, a quality display and many useful features. I spotted several small, welcome improvements: the voice (spoken English), slightly slowed down, is now perfectly clear; there is no longer any need to use a push-pin to reset the device; browsing stored games has been improved (immediate access to the end of a game is now possible); and the beginner levels behave much better (able to convert a win when obviously winning). But above all, the computing speed has increased thanks to the processor, now dual-core clocked at 1Ghz, enabling instant responses even at the strongest level; so much so that the clock display barely registers one or two seconds of thinking time over a whole game! The time display is therefore mainly useful in two-player mode, or to check your own thinking time. The hardware speedup nevertheless fails to explain the quantum leap in playing strength: the software has changed a lot, and surely leverages an extremely strong chess engine.



This is clearly shown by its skills profile, drawn using the Khmelnitsky test, and scoring over 500 KT points more than the L6 v1! I had to use a trick to run the test, as the instant score displayed by the software is not sufficient on its own to evaluate a position: playing several moves forward is required, to reveal the principal variation and let the score converge to a consistent value. I therefore leveraged the instant moves played by the L6 v2, alternately using the hint key and then letting the L6 provide the answer, spending around half a minute of operator time to explore a few moves in a row (usually four or five) and reach a settled value for the score. The thinking time used by the L6 remains definitely marginal (less than a second), compared to the three minutes granted to other computers. Whether v1 or v2, the L6 is positively not designed for analysis! On the other hand, what an impressive player! It is a very strong counter-attacker, manages the opening phase fairly well, and is outstandingly strong in the strategy domain, even stronger than a human player of the same category. But it is not a great attacker, and is a bit weak in the tactics and sacrifice domains (the low spot in tactics surely relates to the instant play).

I used the "Spacious_Mind reloaded" test (for more information, check the Tiger Grenadier) in order to assess the intermediate levels from both L6 versions; here is the resulting graph:



You can see a nice scaling of levels for both versions, with a particularly smooth curve for the V1's beginner levels; on the other hand, its curve flattens out in this test from level 15 onwards, despite the thinking time getting progressively longer. The V2 breaks through this glass ceiling and, in this test, confirms reaching over 2500 points despite its instant responses. Impressive! I also noticed an interesting behavior while running the test: the diversity of moves played when moving up from one level to the next. The score increase is not always gained from the same good moves plus a few better ones; the program may choose slightly inferior moves and in the end make up for them with additional better moves chosen elsewhere. A rather nice feature for training.

DGT Centaur

Year: 2019
Programmer: DGT & Stockfish teams
CPU: ARM11 @1Ghz
ROM: N/A (micro-SD card)
Elo level: 2881
(2766 FIDE)
CMhz: 2900
Rperf: 121%
Square size: 1.9"

One can definitely say this is an atypical chess computer! This applies to the designer look, the technology, the software design and the target users, to mention only the main areas. This requires some explanation!
Let's start with the target market: it is aimed at chess players who want an electronic partner able to adapt to them whatever their level, dead simple to use, immediately available for practice in the most straightforward way. To meet these expectations, it features a large chessboard (almost competition size); very efficient and unconstraining move detection (the squares are of course sensitive, and it scans the whole board to read moves through changes in the position - this allows any way of handling the pieces, including sliding them, and no specific sequence is required for captures); and adaptive play that can only be set to two levels (Friendly or Challenging). The obvious consequence: it does not target fans of tuning via many options, nor deep analysis enthusiasts; it provides no PC connectivity and does not even allow storing a completed game before starting a new one; only a few time settings are offered: either game timer off (it then uses about 10s/move), or from 2 to 90 minutes per game. No, that is not its thing; it simply offers a casual game on the fly, without any fuss (not even a power cable is needed, as the battery provides long-lasting power). Besides, even powered off, it permanently displays its creed:
Let's Play!



The software design is also most unusual: Stockfish 9 is included as a calculation engine, not as a playing engine! A layer developed with Python Chess does the playing, taking care of the adaptive feature and of the score and analysis display. To display the evaluation of a position, Stockfish is called for about 10 seconds of computing in Expert mode, or 3 seconds in adaptive mode (the user being assumed less demanding). It therefore does not display the score of the last move played by the computer! This is actually quite normal, as the last computer move is potentially an inferior one, chosen to match the opponent's level of skill. The Expert mode is the only one that always selects the best move; there it is Stockfish 9 expressing itself, though limited (no pondering on the opponent's thinking time, parallel computation of three principal variations). Actually, the multi-PV computing mode (several principal variations) is key to both the analysis and adaptive features. The Centaur always calls its calculation engine Stockfish with this multi-PV mode enabled, and keeps track of the best moves for each position within the game (10 in adaptive mode, only 3 in Expert mode), including the score computed by Stockfish for each of these moves. The 3-PV limitation in Expert mode allows Stockfish to calculate more deeply for a given thinking time; and this makes a difference: using 10 PV, the calculation speed is divided by 9, resulting in a loss of roughly 200 Elo points compared to a single principal variation calculation. Thankfully, Stockfish is endowed with ample Elo points, so it can afford it! The player can browse the scores of these different possible moves across the whole current game, thus enabling some sort of post-mortem analysis. One can also check the quality ranking of the moves the Centaur chose in adaptive mode amongst the ten evaluated by Stockfish. Indeed, this is how the adaptive mode works: if your moves are from the pick of the bunch, then the Centaur will select strong moves; and the other way around, the weaker your moves, the lower the ranking it will choose from. The process is smarter than move-by-move adaptation; the Centaur probably observes a sliding window of moves, enough for a representative average evaluation of its opponent, yet not too many, so it can adapt quickly enough. Another aspect of the adaptation has been mentioned by some experts: the Centaur stores previous games (which the user cannot access) and is supposed to choose its next opening moves accordingly, thus providing variety.
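
Just to make the principle tangible, here is an illustrative python-chess sketch of the idea (my own reconstruction, not DGT's code; the engine path and the rank-selection rule are assumptions), where Stockfish is queried in multi-PV mode and a thin layer decides which of the ranked moves is actually played:

    import chess
    import chess.engine

    # Illustrative reconstruction of the Centaur's principle (not DGT's actual code):
    # Stockfish is used purely as a calculation engine in multi-PV mode, and a thin
    # Python layer decides which of the ranked moves is actually played.
    engine = chess.engine.SimpleEngine.popen_uci("stockfish")  # engine path is an assumption

    def ranked_moves(board, seconds=3.0, pv=10):
        """Return the pv best moves with their scores, best first."""
        infos = engine.analyse(board, chess.engine.Limit(time=seconds), multipv=pv)
        return [(info["pv"][0], info["score"].relative) for info in infos]

    def adaptive_move(board, opponent_rank):
        """Pick a reply whose rank roughly mirrors the opponent's recent move quality
        (0.0 = opponent matched the engine's first choices, 1.0 = far down the list).
        The mapping is a guess; the real device's rule is not documented."""
        moves = ranked_moves(board)
        index = min(int(round(opponent_rank * (len(moves) - 1))), len(moves) - 1)
        return moves[index][0]

    board = chess.Board()
    board.push_san("e4")
    print(adaptive_move(board, opponent_rank=0.3))  # a move from the upper part of the list
    engine.quit()
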
The technology involved is also quite different from other chess computers': under each square lies an antenna providing a signal that is modulated by the presence of any conductive material on the square; the bottom of each piece is fitted with conductive adhesive. It is thus possible to swap the original set for others, simply by gluing some metal underneath, such as a thin disk cut from aluminium foil. Another interesting technique is used for highlighting squares: instead of one or several surface LEDs, it uses a single central LED under each square, whose light spreads as a circle around the pieces, thanks to the transparency of the board and to a translucent dome clearly visible under the chessboard. It is not only beautiful and distinctive, it is also highly ergonomic: it catches the eye and leaves no room for ambiguity (e.g. with a knight move, a four-LEDs-per-square system lights up three of the four LEDs of the in-between square, even though that square is not involved in the move). As for the CPU, it is a full nano-computer, a Raspberry Pi Zero (512Mb RAM, 1Ghz clock, 32-bit single-core ARM11 processor), with only the micro-USB power port left accessible.
Last but not least, the designer look: so modern. I appreciate the very soft shades of the board, contrasting with the black and white of the plastic pieces, which feel pleasant to the touch; the device is very thin and light, yet definitely stiff (the dome structure under the board really seems to contribute to this). The e-ink display is small (thus discreet) but still easily readable, even with the small font.
Born the same year as the King Performance and sold at the same initial price, the comparison of the two offerings triggered heated debate, often to excess. Regarding the list of features, the King Performance is far ahead, which is quite normal: it is part of a long-standing line of traditional chess computers, featuring more power, more settings, more playing levels, more connectivity; whereas the Centaur is a complete break from this lineage, aimed at other uses and other users. Actually, one should simply not try to compare them, they share so little! For my part, I was not keen on buying a Centaur at its market price, so I let three years go by before finally jumping on a real bargain: a brand new, complete and warranty-covered Centaur, including its one-week-old €390 invoice (King Performance pricing evolved the same way, to around €400); sold for €220 by the person who had received it as a gift.
About the playing strength: the Elo level mentioned above is the Expert mode's, with the timer off (thus about 10 seconds per move), facing PC programs of around the same strength set to 40 moves in 10 minutes (15s/move on average). Regarding the two adaptive levels, I posted here a short analysis based on 48 games, whose main outcomes I summarize:
- the Friendly mode played slightly weaker than its opponents, scoring 25%
- the Challenging mode played slightly stronger than its opponents, scoring 73%
- over both modes combined, the Centaur scored 49%.
At first glance, this looks quite balanced, and the target appears to be reached!
Nevertheless, zooming in on player categories, one can see that a beginner or occasional player (1000-1300 Elo) will need to concentrate on the game even against the Friendly mode, which scored 75% against this class of opponents (not that friendly, then!), while the Challenging mode will be out of reach. On the other hand, a strong club player (1850-2050 Elo) will find the Challenging mode a bit too kind, as the Centaur only scored 38%, and the Friendly mode will hold no interest. Last but not least, for an average club player (1450-1750 Elo), the Challenging mode will provide a real challenge, as the Centaur scored 81% during my test, while the Friendly mode will conversely allow systematic wins, provided the player keeps concentrating on the game.


previous category: 'strong club player, class A level' chess computers
back to 'my chess computers'