A Machine Translation Experiment with Slate Desktop™

Tiago Neto reviewed Slate Desktop™ in July 2017 on his blog, as part of our Blogging Translator Review Program. Thank you, Tiago! I look forward to hearing more from you.


Disclaimer: This is a pretty important bit. I was contacted by the CEO of Slate Rocks LLC on 18 May 2017 to participate in their blogging translator review program. I established immediately that I do not endorse or disavow products and that my time to experiment with it was limited, which was accepted.

Ultimately, this resulted in me using the software for only a short fraction of the 45-day trial period given. I am also not an MT expert, and simply took it upon myself to test it where I thought I could leverage the most out of this resource.

Also, for clarification – this program offers a free license of the Slate Desktop™ package as an incentive. Since I was unable to comply with the exact terms of the review program (see final notes), I expect the license will be terminated, and will request that it be.

Finally, because the files I work with are confidential in nature, you will not see screenshots of the software results.

What is Slate Desktop™
and why does Slate Rocks differentiate it from Machine Translation?

Slate Desktop™ is a piece of software made by Slate Rocks LLC. They differentiate it from machine translation as the term is generally understood, but it is essentially that: a local, personalized machine translation (MT) engine (or engines, as you can create them to your heart’s content).

I’ve always been generally unimpressed with MT, but nonetheless found the availability of a solution that ensures confidentiality to be interesting and worthy of exploring.

After having discussed things briefly with Slate Rocks, I eventually pulled together the concept for my test project: I’d be throwing it a bundle of documents I translate every six months or so, pertaining to a few clinical trials. Heavily regulated structure and text (more on that later), repetitive texts, for which I can usually leverage my Translation Memories (TMs) to make my life easier and the quality of the job higher.

In the end, I processed roughly 15,000 words of text through it over a period of a week, and this blog post discusses my impressions.

Everything has a beginning…
Installation, set-up, and building an engine

So, first things first. Installation is a fairly simple affair and you just follow the instructions. There are a lot of short tutorials available on Slate Desktop™’s website, and the software does deliver on its promise of ease of use.

From then on, you’re pretty much set to build your own MT engines. And herein lies the attraction of the software: your own, local MT engines, with confidentiality, based on your own resources. You can translate files directly in an XLIFF format, or use a plugin to get MT results in memoQ. It also integrates with Trados Studio and OmegaT, but I went with what I had at hand.

Now, the catch is this: the engine is only as good as your resources, and if you deal with “imperfect” source files, your mileage WILL vary. For the purpose of testing the software, I chose a language pair that provided the ideal type of documents for a workflow where MT could take part: documents translatable directly with a CAT tool – or for which I could obtain good quality OCR – on a highly regulated subject, with reasonably strong TM hits. The hypothesis I wanted to test was whether Slate Desktop™ would build a coherent segment when the text differed from those in the TM, and if there was any other unforeseen bonus to its use.

Whenever you build or use an engine, there’s a window that gives you information on it. The Engine Summary panel data for my test project was as follows:

Evaluation sentence pairs: 2,238
Evaluation BLEU score: 82.48 (< 1.0: 73.473)
Quality Quotient: 44.77% (correct test sentences)
Edit distance per line (non-zero): 19.9 (Levenshtein)
Built from 18 files with 33,045 phrase pairs.
Tokens for language A: 429,163
Tokens for language B: 439,495
Language model built from 43 files, 33,715 phrases and 503,369 tokens.
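An aside on the “Edit distance per line (Levenshtein)” figure in the summary above: Levenshtein distance is simply the minimum number of single-character insertions, deletions, and substitutions needed to turn one string into another. The sketch below is my own illustration of the metric, not Slate Desktop™’s implementation (which may well tokenize differently):

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions,
    and substitutions needed to turn string a into string b."""
    # prev[j] holds the distance between the processed prefix of a and b[:j]
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,         # deletion from a
                            curr[j - 1] + 1,     # insertion into a
                            prev[j - 1] + cost)) # substitution (or match)
        prev = curr
    return prev[-1]
```

For example, `levenshtein("kitten", "sitting")` returns 3. An engine-summary value of 19.9 per non-zero line therefore means that, on average, about twenty character edits separated the engine’s output from the reference translation on the lines that weren’t already perfect.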

The first issue I found was that out of the 23 TMX files I had selected, only 18 were successfully used by the software. This was almost certainly a problem of my own making. Still, that yielded a reasonable MT engine to experiment with, if weakened by a relatively low count of phrase pairs and a small corpus size (despite totaling 159 MB of TMX files).

I then proceeded to connect Slate Desktop™ to memoQ as an MT service – there’s a plug-in for that, and you get the MT results in the same pane as your MatchPatch, TM and TB results. I also tried translating an XLIFF file in one go – which works fine, but it is best to stick with the CAT plug-in for the reasons I’ll detail below.

… a middle…
Initial impressions

Well, bear with me – I’m not in either of the opposing camps of the MT pseudo-war. I’m neither a hater nor a fan of any technology. I use whatever works for me and allows me to work better, faster, and to provide a better-quality service to my clients. And the TL;DR for Slate Desktop™ is this: yes, I can incorporate it into my workflow, but maybe not for what I was thinking at first.

So, allow me to share the positive aspects of my experiment. For me, Slate Desktop™ was:

  • Almost always better than a <80% fuzzy match in shorter sentences (<20 words). memoQ does give some legendarily funky fuzzy matches with shorter segments, and Slate Desktop™ was quite good with those.
  • Oddly enough, Slate Desktop™ was also better for those 99% matches caused by different punctuation. I would have expected memoQ to realise by now that if the only difference between two segments is the punctuation at the end, it might as well go and fix it automatically.
  • It was often better than MatchPatch results, where memoQ occasionally includes words from the source text to fill in the gaps. Mind you, so does Slate Desktop™, but differently.
  • Slate Desktop™’s engine provided some interesting, unexpectedly helpful translations for which there was no TM hit at all (or at least none above the threshold set for TM hits).
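On the punctuation point in the list above: detecting a segment pair that differs only in its trailing punctuation is trivial to script, which is why I found it surprising that a CAT tool doesn’t patch it automatically. A hypothetical sketch of that auto-fix (function and constant names are my own, not memoQ’s):

```python
from typing import Optional

TRAILING_PUNCT = ".,:;!?"

def patch_trailing_punctuation(new_source: str, tm_source: str,
                               tm_target: str) -> Optional[str]:
    """If new_source and tm_source differ only in trailing punctuation,
    return tm_target with its trailing punctuation swapped for
    new_source's. Otherwise return None (no safe automatic fix)."""
    if new_source.rstrip(TRAILING_PUNCT) != tm_source.rstrip(TRAILING_PUNCT):
        return None
    # Carry over whatever punctuation the new source segment ends with.
    src_tail = new_source[len(new_source.rstrip(TRAILING_PUNCT)):]
    return tm_target.rstrip(TRAILING_PUNCT) + src_tail
```

So a 99% match whose only difference is a final colon versus a full stop would be fixed in place, instead of being offered as a fuzzy hit for the translator to correct by hand.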

But, as every coin has two sides, for me, Slate Desktop™ was:

  • Not that great with formatting. Importing an XLIFF file basically meant all formatting and tags were gone.
  • Not that good with capitalization in memoQ. If the source segment starts with a capital, odds are I’ll need one too in the MT result, rather than everything in lowercase.
  • Not that great regarding the syntax reversal between a Germanic and a Latin language (English and Portuguese). It also tended to flip around individuals’ last names, which I found amusing (e.g. “John Alexander Doe” would become “John Doe Alexander”). Terms (e.g. disease names) would sometimes be reversed in order too.

My real-world application

OK, first of all, it’s nowhere near as bad as it may seem from the “What’s Hot/What’s Not” above. I did use this for actual work, it did speed up my work considerably, and it did surprise me more than once by allowing me to press two keys rather than type or dictate.

The 23 TMX files I employed were specific to a certain process, each of them for a separate clinical trial. Of the 15,000 words that I processed with Slate Desktop™ embedded in the workflow, roughly 8,000 came from PDF files with extractable content, and the 7,000-ish remaining came from scanned documents, which were then subjected to OCR and imported into memoQ.

Now, I mentioned two things beforehand: a) the quality of the source files and b) how Slate Desktop™ surprised me with some results.

  • In the set of documents I used for this test, the grammar was in some instances absolutely horrendous – I couldn’t expect any MT engine to make sense of that, as quite often the translator needs to know what the author meant but didn’t write in order to convey the actual message. Spelling mistakes also threw Slate Desktop™ off the road, but again, that was a problem with the quality of the source, and the old “garbage in, garbage out” maxim applies. Had I used these documents to build the TMX files and then the MT engine, I expect the results could have been even worse.
  • Where it did surprise me was when handling safety reports, namely long, tabulated lists of Severe Adverse Events, classifications, and so on. Where one would expect to leverage the translation memory, the imperfect source kept reversing the order of the SAEs, adding meaningless expressions and a bunch of other gibberish that, at the end of the day, simply didn’t yield a TM hit. Slate Desktop™ ate these for lunch. Other than the capitalization issues (easily fixable via quick keystrokes, but annoying) and the occasional reversal in the order of technical terms (a bit more annoying), little to no intervention was needed. For anyone who can empathize with me regarding these documents, this was a very nice and welcome bonus.

Slate Desktop™ also gave me a slightly less pleasant surprise, when it yielded perfectly structured sentences that were partially filled with words in the source language. These are called “Out Of Vocabulary” words, and Slate Desktop™’s support site suggests these as good candidates for enforced terminology. As I had it, roughly 15% of my engine had a BLEU score of 0, which in turn caused these issues. I did not have the time to fine-tune the engine, but if enforced terminology is used as anchors, it might solve the problem.
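The OOV behaviour above is easy to diagnose yourself: a statistical engine cannot translate a source token it never saw during training, so it copies the token through untouched. A rough sketch of how one might flag OOV candidates for an enforced-terminology list is below – a simplified word-level check of my own, not Slate Desktop™’s actual tokenization:

```python
import re

def find_oov_tokens(source_text: str, training_corpus: list) -> set:
    """Return source-side tokens that never occur in the training corpus.
    These are the words a statistical MT engine will copy through
    untranslated, and good candidates for enforced terminology."""
    def tokenize(text):
        return re.findall(r"\w+", text.lower())
    vocabulary = set()
    for line in training_corpus:
        vocabulary.update(tokenize(line))
    return {tok for tok in tokenize(source_text) if tok not in vocabulary}
```

Running such a check over a new batch of source files before translating would tell you in advance which terms to anchor, instead of discovering them one untranslated word at a time in the CAT grid.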

… and an end.
Final opinion

MT is here to stay, and anyone who doubts that is fooling themselves. However, MT is not meant for, or feasible in, every single application, and when this is ignored, the results we see are most of the time comical, if not tragic. The good news is you don’t have to use it. But if you can, building your own MT engine out of your own work may well provide another tool to improve your quality of life as a translator.

Note that I’m not discussing quality, profitability, margins or anything like that. That is not my point, and people can go at each other’s throats as they always tend to do when this subject is brought up, anyway. Because this engine “lives” on your computer and is based on your work, don’t expect it to steal your job or outperform you.

Nonetheless, a very important metric for me is my quality of life: anything that can make my professional life easier without compromising any quality factors (and, if at all possible, enhancing them) is worth considering. As you may have read, I have a special interest in speech recognition and the benefits to ergonomics and productivity it brings. In my short and limited experience, Slate Desktop™ does not offer the same level of benefits on its own, but it does provide a nice complement to speech recognition. Inserting perfect MT hits is still faster than dictating. And, borrowing from discussions with David Hardisty and Kevin Lossner, MT can provide a nice framework for working differently – just look at the MT engine result and dictate, reading what you want to take from it and overriding what is unusable.

This, for me, would be the main reason to acquire this software. I can imagine it could do better with a vaster, more organized corpus, but I’ll have to refrain from affirming that for now. What I can say is that if I had paid for the test license, it would have paid for itself this week. And that’s not a bad metric in my book.

Slate Desktop™ costs USD 269, and Slate Rocks offers a money-back guarantee and a 30-day trial period, so they’re putting their money where their mouth is.

Brought to you by Slate Rocks LLC


By Tiago Neto

Tiago Neto is a freelance consultant and translator for the pharmaceutical and medical device industries based in the Freiburg area of Germany.