Monday, January 11, 2021

Keeping Your Sanity with Machine Taxonomization

Taxonomies are crucial for businesses and institutions that need to handle growing amounts of data. Manually organizing thousands of concepts into a knowledge tree has so far been the only way to do this. Aside from being quite tedious, the task requires in-demand subject matter experts to complete. Thus, it is often considered too expensive or too much effort. A shame, given that companies then miss out on all the benefits of using taxonomies.

With a little help from your (AI) friend…

Imagine a chaotic pile of books (of course, the less-organized among us may not have to imagine this) being automatically sorted into shelves, branches, and sub-branches, together with an index to help quickly find a desired book. This describes what our semi‑automatic taxonomization method can do. An initial knowledge tree is produced by Machine Learning (ML), using language models stored in huge neural networks. Clustering algorithms on top of word embeddings automatically convert a haystack of concepts into a structured tree. The final curation of the taxonomy is still carried out by a human, but the most time-consuming and tedious aspects of the task have already been dealt with, and in a consistent way.
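
To make the idea concrete, here is a minimal sketch (not our production pipeline) of how off-the-shelf components could turn a flat concept list into candidate branches. It assumes the sentence-transformers and scikit-learn libraries; the model name, concept list, and cluster count are illustrative only.

```python
# Illustrative sketch only: embed concept labels and cluster them into
# candidate taxonomy branches for a human curator to review.
from collections import defaultdict

from sentence_transformers import SentenceTransformer  # assumed dependency
from sklearn.cluster import AgglomerativeClustering     # assumed dependency

concepts = [
    "mRNA vaccine", "vector vaccine", "booster shot",
    "social distancing", "face mask", "quarantine",
    "remdesivir", "dexamethasone",
]

# 1. Turn each concept label into a dense vector (sentence/word embedding).
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(concepts)

# 2. Group semantically similar concepts; each cluster is a suggested branch.
labels = AgglomerativeClustering(n_clusters=3).fit_predict(embeddings)

branches = defaultdict(list)
for concept, label in zip(concepts, labels):
    branches[label].append(concept)

for label, members in sorted(branches.items()):
    print(f"Suggested cluster {label}: {members}")
```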

‘Cobot’ versus manual

In a study, we benchmarked this collaborative robot approach (ML auto‑taxonomization and human curation) against the manual job done by an expert linguist. Below are the data and task flows of the two approaches:

Semi-automatic versus manual taxonomization

We aimed to taxonomize 424 concepts related to COVID-19. The traditional manual method was tedious and tiring for the human expert, who took a flat list of concepts and turned it into a systematic knowledge graph, working concept by concept to get everything in its right place. Wading through the list from scratch (including constantly switching contexts – from drugs, to vaccines, to social distancing, for example) made progress on the task difficult to measure. Having no perception of how many clusters of concepts still needed to be created was demotivating.

In contrast, our semi-automatic method started off with a tree of 55 suggested clusters of leaf concepts, each representing a specific context. Of course, ML doesn’t always produce the exact results a human expert would (we hear you, AI skeptics!), so some algorithm-suggested clusters were a bit off. However, the majority of the 55 were pretty accurate. They were ready to be worked on in Coreon’s visual UI, making the human curation task much faster and easier. This also enabled progress to be measured, as the job was done cluster by cluster.

Advantage, automation!

From a business perspective the most important result was that the semi‑automatic method was five(!) times faster. The structured head-start enabled the human curator to work methodically through the concepts. The clustered nature of the ML‑suggested taxonomy would also allow the workload to be distributed – e.g., one expert could focus on one medicine, another on public health measures.

More difficult to measure (but nicely visible below) was the quality of the two resulting taxonomies. While our linguist did a sterling job working manually, the automatic approach produced a tidier taxonomy which is easier for humans to explore and can be effectively consumed by machines for classification, search, or text analytics. Significantly, as the original data was multilingual, the taxonomy can also be leveraged in all languages.

Comparison of automatic and manual taxonomy

A barrier removed

So, can we auto-taxonomize a list of semantic concepts? The answer is yes, with some human help. The hybrid approach frees knowledge workers from the most tedious parts of the taxonomization process and offers immediate benefits: swift navigation through the data and efficient conceptualization.

Most importantly, though, linking concepts in a knowledge graph enables machines to consume enterprise data. By dramatically lowering the effort, time, and money needed to create taxonomies, managing textual data will become much easier and AI applications will see a tremendous boost.

If you’d like to discover more about our technology and services on auto-taxonomization, feel free to get in touch with us here.

Wednesday, December 9, 2020

Making Translation GDPR-Compliant

Current processes violate GDPR

Out of the six data protection principles, translation regularly violates at least four: purpose limitation, data minimization, storage limitation, and confidentiality. Confidentiality is probably mentioned in most purchase orders, but it is hard to live up to in an industry which squeezes out every last cent along a long supply chain.

Spicier still is the fact that translators don’t need to know any personal data – such as who made the payment and how much money was transferred in the sample below – in order to translate a text. Anonymized source texts would address purpose limitation and data minimization. The biggest offenders, however, are the industry’s workhorses: neural machine translation (NMT) and translation memory (TM). NMT is trained on, and TM stores, texts full of personal data with no means of deleting it, even though storing the protected data was unnecessary in the first place.

A GDPR-compliant translation workflow 

Some might argue that this difficult problem cannot be fixed. Well, it can. And not only that: our anonymization workflow saves money and increases quality and process safety, too.

On a secure server, ‘named entities’ (i.e. likely protected data) are recognized. This step is called Named Entity Recognition (NER), a standard discipline of Natural Language Processing. There are several anonymizers on the market, mainly supporting structured data and English, but they only support a one-way process.

In our solution, the data is actually “pseudonymized” in both the source and target languages. This keeps the anonymized data readable for linguists by replacing protected data with another string of the same type. Once translated, the text is de-anonymized by replacing the pseudonyms with the original data. This step is tricky since the data also needs to be localized, as in our example with the title and the decimal and thousands separators. The TMs used along the supply chain will only store the anonymized data. Likewise, NMT is not trained with any personal data. 
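
As a rough illustration of the round trip (pseudonymize, translate, de-anonymize), here is a toy sketch. It detects entities with simple regular expressions rather than a real NER model, and the entity types, placeholder values, and localization hook are all hypothetical – this is not our actual implementation.

```python
# Toy sketch of the pseudonymize -> translate -> de-anonymize round trip.
# Regexes stand in for a proper NER step; values are illustrative only.
import re

PATTERNS = {
    "PERSON": re.compile(r"\bMr\. [A-Z][a-z]+\b"),
    "AMOUNT": re.compile(r"\b\d{1,3}(?:,\d{3})*\.\d{2} EUR\b"),
}
PSEUDONYMS = {"PERSON": "Mr. Doe", "AMOUNT": "1,000.00 EUR"}

def pseudonymize(text):
    """Replace protected data with a readable placeholder of the same type."""
    mapping = {}
    for etype, pattern in PATTERNS.items():
        for match in pattern.findall(text):
            placeholder = PSEUDONYMS[etype]
            mapping[placeholder] = match      # remember the original value
            text = text.replace(match, placeholder)
    return text, mapping

def deanonymize(translated, mapping, localize=lambda s: s):
    """Re-insert the original data; `localize` would adapt titles, separators, etc."""
    for placeholder, original in mapping.items():
        translated = translated.replace(placeholder, localize(original))
    return translated

source = "Mr. Miller transferred 12,345.67 EUR on Friday."
safe_text, mapping = pseudonymize(source)   # only this goes to TM / NMT / translators
print(safe_text)                            # "Mr. Doe transferred 1,000.00 EUR on Friday."
translated = safe_text.replace("transferred", "überwies")  # stand-in for translation
print(deanonymize(translated, mapping))
```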


We recently did a feasibility study to test this approach. Academia considers NER a solved problem, but in reality it is only somewhat solved for English. Luckily, language models can now be trained to work cross-language. Rule-based approaches, like regular expressions, add deterministic process safety. For our study we extended the current standard formats for translation, TMX and XLIFF, to support pseudonymization. De-anonymization is hard, but I had developed its basics back in the first versions of TRADOS.

What remains is the trade-off between data protection and translatability. The more text is anonymized, the better the leverage – but also the harder the text is for humans to understand. Getting that balance right will still require some testing, best practices, and good UI design. For example, project managers will want a finer granularity on named entities than NER tools normally provide. Using a multilingual knowledge system like Coreon, they could specify that all entities of type Committee are to be pseudonymized, but not entities of type Treaty.

Anonymization is mandatory 

As shown above, a GDPR-compliant translation workflow is possible, and is thus legally mandatory. This is, in fact, good news. Regulations are often perceived as making life harder for businesses, but GDPR has actually created a sector in which the EU is a world leader. Our workflow enables highly regulated industries, such as Life Sciences or Finance, to safely outsource translation. Service providers won’t have to sweat over confidentiality breaches. The workflow will increase quality, as named entities are processed by machines in a secure and consistent way and machine translation has fewer opportunities to make stupid mistakes. It will also save a lot of money, since translation memories will deliver much higher leverage.

If you want to know more, please contact us.

Wednesday, December 12, 2018

Sunsetting CAT

For decades, Computer Assisted Translation (CAT) based on sentence translation memories has been the standard tool for going global. Although CAT was originally designed with a mid-90s PC in mind, and despite proposals to change the underlying data model, its basic architecture has remained unchanged. The dramatic advances in Neural Machine Translation (NMT) have now made the whole product category obsolete.

NMT Crossing the Rubicon

While selling translation memory, I always said that machines will only be able to translate once they understand text – and that if one day they did, MT would be a mere footnote to a totally different revolution. Now it turns out that neural networks, stacked deeply enough, do understand us sufficiently to create a well-formed translation. Over the last two years NMT has progressed dramatically. It has now achieved “human parity” for important language pairs and domains. That changes everything.

Industry Getting it Wrong

Most players in the $50b translation industry – service providers, but also their customers – think that NMT is just another source for a translation proposal. In order to preserve their established way of delivery, they pitch the concept of “augmented translation”. However, if the machine translation is as good (or bad) as human translation, who would you have revise it: another translator, or a subject matter expert?

Yes, the expert who knows what the text is about. The workflow is thus changing to automatic translation and expert revision. Translation becomes faster, cheaper, and better!

Different Actors, Different Tools

A revision UI will have to look very different from a CAT tool. The most dramatic change is that a revision UI has to be extremely simple. To support the current model of augmented translation, CAT tools have become very powerful. However, their complexity can only be handled by a highly in-demand group of perhaps a few tens of thousands of professional translators worldwide.

The new workflow requires a product design that can support tens of millions of mostly occasional expert revisers. Also, the revisers need to be pointed to the sentences which need revision. This requires multilingual knowledge.

Disruption Powered by Coreon

Coreon can answer the two key questions for using NMT in a professional translation workflow: a) which parts of the translated text are not fit-for-purpose, and b) why not? To do so, the multilingual knowledge system classifies linguistic assets, human resources, QA, and projects in a unified system which is expandable, dynamic, and provides fallback paths. In the future, linguists will engineer localization workflows, such as Semiox, and create multilingual knowledge in Coreon. “Doing words” is left to NMT.
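
As a purely hypothetical sketch of how revisers could be pointed to sentences that need revision, the snippet below checks MT output against a tiny termbase and reports why a sentence is flagged. The termbase entries and the simple substring rule are illustrative; they are not Coreon's actual data model or API.

```python
# Toy example: flag machine-translated sentences that use a deprecated term
# instead of the approved one, and explain why. Data and rule are illustrative.
TERMBASE = {
    "C042": {"approved": "rail car mover", "deprecated": ["shunter"]},
}

def needs_revision(sentence):
    """Return a reason string if the sentence violates the termbase, else None."""
    for concept_id, entry in TERMBASE.items():
        for bad_term in entry["deprecated"]:
            if bad_term in sentence.lower():
                return (f"deprecated term '{bad_term}' used; "
                        f"approved term is '{entry['approved']}' (concept {concept_id})")
    return None

mt_output = [
    "The shunter moves the wagons to the siding.",
    "The rail car mover is operated remotely.",
]
for sentence in mt_output:
    print(sentence, "->", needs_revision(sentence) or "fit for purpose")
```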

Wednesday, April 4, 2018

Concept Maps Everywhere

On March 22-24 the DTT Symposion, the biennial meeting of the German Terminology Association (Deutscher Terminologietag, DTT), took place again in Mannheim. We were exhibiting and I enjoyed talking to many Coreon customers there. It was a truly exciting event this year and, according to the organizers, the busiest ever. 200+ participants meant a full house!

DTT 2018: 200 participants learning about the benefits of multilingual knowledge systems
"Ausgebucht - no further seats left!"

After a half day of pre-event workshops, the event kicked off Friday morning with a presentation from Martin Volk (University Zürich) on parallel corpora, terminology extraction, and MT. Martin challenged the hype around Neural Machine Translation and pinpointed some weaknesses: “NMT operates with a fixed vocabulary. But real world translation has to deal with new words constantly … how can we ensure terminology-consistent translation?”. His research confirms what we've outlined in an earlier blog post: Why Machine Learning still Needs Humans for Language.

“Concept Maps Everywhere”
Back to the event ... as one participant tweeted, concept maps were the dominant topic throughout the event. First a workshop by Annette Weilandt (eccenca) on taxonomies, thesauri, and ontologies, followed by a presentation by Petra Drewer (University Karlsruhe). Petra unveiled a plethora of benefits:
  • insight into the domain
  • systematic presentation
  • clear distinction between concepts
  • identification of gaps
  • equivalence checks across languages
  • new opportunities in AI contexts
No surprise, my event highlight was the Coreon customer presentation from Liebherr on the benefits of multilingual knowledge systems. In this very entertaining presentation, Lukas Auer (Liebherr MCCtec) and Johannes Widmann (Liebherr Holding) outlined how pragmatic and effective working with concept systems turns out to be. They concluded: “If we all think in networks, why should our termbase then be designed as an alphabetic list of terms??” Instead, the concept-system-driven approach has many advantages, such as training of new staff, context knowledge for technical authors and translators, terminological elaboration of specific domains, insight into how far a domain is already covered, avoiding duplicates, etc. Download a case study from the Coreon web site.

DTT 2018 Award for a Master Thesis on Coreon
And then the “i-Tüpfelchen” (cherry on the cake) on Friday afternoon: David Reininghaus received this year’s DTT award for his master’s thesis “Applying concept maps onto terminology collections: implementation of WIPO terminology with Coreon”. In his thesis, David analyzed how a true graph-driven technology outperforms simple hyperlink-based approaches: no redundancies, more efficient, less error-prone. He also developed an XSL-based method to transform the MultiTerm / TBX hyperlink-based workarounds into a real graph, visualized in Coreon.

Deutsche Bahn: Terminology-Driven AI Applications
Tom Winter (Deutsche Bahn and President of the DTT) illustrated in his session how terminology boosts AI applications. Even simple synonym expansion already makes the intranet search engines more useful: a search for the unofficial Schaffner now also finds documents where only the approved Zugbegleiter was used, as sketched below. Other applications are automatic pre-processing of incoming requests in a customer query-answering system, or even improving Alexa-driven speech interaction at ticket vending machines … who says terminology is still a niche application?
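
A minimal sketch of what such synonym expansion could look like, using the Schaffner/Zugbegleiter example from the talk; the tiny index and matching logic are of course hypothetical and not Deutsche Bahn's actual search engine.

```python
# Illustrative sketch: expand a search query with termbase synonyms so that
# documents using only the approved term are still found.
SYNONYMS = {
    "schaffner": ["zugbegleiter"],   # unofficial -> approved term
    "zugbegleiter": ["schaffner"],
}

DOCUMENTS = {
    1: "Der Zugbegleiter kontrolliert die Fahrscheine.",
    2: "Ansage des Schaffners im Regionalzug.",
}

def expand(query):
    """Return the query term plus any synonyms from the termbase."""
    terms = [query.lower()]
    terms += SYNONYMS.get(query.lower(), [])
    return terms

def search(query):
    hits = []
    for doc_id, text in DOCUMENTS.items():
        lowered = text.lower()
        if any(term in lowered for term in expand(query)):
            hits.append(doc_id)
    return hits

print(search("Schaffner"))   # finds both documents, not just the Schaffner one
```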

From Language to Knowledge
I am excited about the evolution of the DTT in recent years. How many more participants will we see in spring 2020? I am convinced that the more the DTT community moves beyond the pure documentation niche and focuses on the areas that our customer Liebherr or Tom Winter have illustrated, the more its relevance and visibility will grow – so that the organisers can again proudly announce: Ausgebucht - no more seats left!

Monday, February 12, 2018

IoT Banks on Semantic Interoperability

The biggest challenge for widespread adoption of the Internet of Things is interoperability. A much-noticed McKinsey report states that achieving interoperability in IoT would unlock an additional 40% of value. This is not surprising, since the IoT is in essence about connecting machines, devices, and sensors – ideally across organizations, industries, and even borders. But while technical and syntactic interoperability are pretty much solved, little has been available so far to make sure devices actually understand each other.

Focus Semantic Interoperability

Embedded Computing Design superbly describes the situation in a recent series of articles. Technical interoperability, the fundamental ability to exchange raw data (bits, frames, packets, messages), is well understood and standardized. Syntactic interoperability, the ability to exchange structured data, is supported by standard data formats such as XML and JSON. Core connectivity standards such as DDS or OPC-UA provide syntactic interoperability cross-industries by communicating through a proposed set of standardized gateways.

Semantic interoperability, though, requires that the meaning (context) of exchanged data is automatically and accurately interpreted. Several industry bodies have tried to implement semantic data models. However, these semantic data schemes have either been way too narrow for cross-industry use cases or had to stay too high-level. Without such schemes, data from IoT devices lacks the information to describe its own meaning. Therefore a laborious and, worse, inflexible normalization effort is required before that data can really be used.

Luckily there is a solution: abstract metadata from devices by creating an IoT knowledge system.

Controlled Vocabulary and Ontologies

A controlled vocabulary is a collection of identifiers which ensures consistency of metadata terminology. These terms are used to label concepts (nodes) in a graph which provides a standardized classification for a particular domain. Such an ontology, incorporating characteristics of a taxonomy and a thesaurus, links concepts with their terms and attributes in semantic relationships. This way it provides metadata abstraction. It represents knowledge in machine-readable form and thus functions as a knowledge system for specific domains and their IoT applications.
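
To make “metadata abstraction” tangible, here is a hedged sketch of how an ontology fragment could map raw device labels onto shared concepts. The concept IDs, fields, and lookup are illustrative assumptions, not a real IoT standard or the Coreon data model.

```python
# Illustrative ontology fragment: concepts carry multilingual terms and typed
# relations, and raw device metadata labels are resolved to concept ids.
ONTOLOGY = {
    "Q_TEMP": {
        "terms": {"en": ["temperature", "temp"], "de": ["Temperatur"]},
        "broader": "Q_PHYS_QUANTITY",
        "unit": "degree Celsius",
    },
    "Q_PHYS_QUANTITY": {
        "terms": {"en": ["physical quantity"], "de": ["physikalische Größe"]},
        "broader": None,
    },
}

def resolve(metadata_label, lang="en"):
    """Map a raw device metadata label onto a concept in the ontology."""
    label = metadata_label.lower()
    for concept_id, concept in ONTOLOGY.items():
        if label in (t.lower() for t in concept["terms"].get(lang, [])):
            return concept_id
    return None

# Two devices using different labels for the same thing still interoperate,
# because both labels resolve to the same concept.
print(resolve("temp"), resolve("Temperatur", lang="de"))   # Q_TEMP Q_TEMP
```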

IoT Knowledge Systems made Easy

A domain ontology can be maintained in a repository completely abstracted from any programming environment. It needs to be created and maintained by domain experts. With the explosive growth of IoT, new devices, applications, organizations, industries, and even countries are constantly being added. Metadata abstraction parallels object-oriented programming – and unfortunately so do the tools used so far to maintain and extend ontologies.

But now our SaaS solution Coreon makes sure that IoT devices understand each other. Not only does Coreon function with its API as a semantic gateway in the IoT connectivity architecture, it also provides a modern, very easy-to-use application to maintain ontologies; featuring a user interface domain experts can actually work with. With Coreon they can deliver the knowledge necessary for semantic interoperability so that IoT applications can unlock their full value.

Coreon will be presented at the Bosch ConnectedWorld Internet of Things conference in February 2018 in Berlin. If you cannot come by our stand (S20), just flip through our presentation or drop us a mail with questions.

Monday, January 29, 2018

Language Service Providers Need to Look Ahead to Compete with Machines

By Rachel Wheeler, Morningside Translations

Language localization services have been big business, and estimates indicate that the market will grow at an annual rate of about 7%. Companies that focus solely on translation services will continue to find demand for several years to come. The global marketplace, however, also presents new opportunities for language service providers (LSPs) to elevate their services and expand their businesses beyond translation alone.

Other LSPs Are Not The Only Competition

Some of the key benefits that professional translation agencies provide are quality translation and local expertise. To date, machine translation software has had its limitations: poor quality, faulty grammar and syntax, and lack of contextual understanding. LSPs have benefited from these flaws by being able to provide a superior alternative.

However, in 2016, Google introduced Google Neural Machine Translation (GNMT). What GNMT promises to provide is a new machine approach that will directly compete with human translators. Until then, machine translation software had relied on an algorithmic approach that was almost a word-for-word dictionary approach. Therein lies its major flaw: it can only learn through predictive behavior analysis.

Neural networks like GNMT, however, incorporate a more complex structure that mimics the way the human brain processes information. This approach replicates the idea of intuition in many ways, not simply hard definitions. In its first published iteration, Google is already claiming a 60% reduction in errors.

For LSPs, these neural networks mean more–and cheaper–competition in the future. The nature of work for translation agencies will need to change in order to remain relevant.

Marketing Remains the Realm of People

By far, the main edge LSPs will have over machine translation is experience and local culture understanding. For global businesses, marketing their goods and services is not just a matter of translating words. Successful marketing understands the emotional impact of how information is presented.

Subtle differences in words – “discover” versus “find”, for example – have a different impact in sales and marketing than they do in more formal written content. Factor in the additional layer of translation word choices, and the tone or intent of words can change dramatically beyond the original purpose.

Marketing content does not automatically translate from one language to another. Even visual imagery can fall in the purview of the cross-cultural marketer. Lingerie, for instance, is promoted differently in conservative countries than in the West. LSPs are in the perfect position to expand their services into marketing, either as outside consultants or even as agency-level providers.

Essentially, the ability to localize is a human translator’s greatest differentiator. Whether that’s leveraged for eLearning localization or creating images for a website specifically geared towards a regional audience, this is where an LSP can still shine.

Data Mining Works In Any Language

With today’s enormous output of information, data mining has become big business of its own. Data miners often refer to their work as “discovering insights.” As they review the clicks of a website, the comments on social media, and results of customer surveys, they inherently build a consumer profile with cultural bias built in.

LSPs with experts in particular languages and cultures offer the opportunity to sift through these insights in the original language, catching what a non-native speaker can miss in translation.

Plan Ahead for Competitive Advantage

The technology world makes no secret of its innovations. LSPs should keep an eye on the changes and trends and plan for the future. By anticipating the coming shift in global demand for translation services, language service providers can be ahead of their competitors instead of playing catch-up.

What a great follow-up to Coreon's last newsletter! We welcome contributions from partner companies and industry experts.
This guest post is written by Rachel Wheeler from Morningside Translations.

Monday, September 18, 2017

Multilingual Knowledge supporting AI, IoT, and Industry 4.0

A Review on Summer Events

We would like to share some impressions from recent events and conferences. The interesting common denominator was the following questions: how can we leverage and deploy terminology assets in other business processes? How can we deploy the valuable knowledge in terminology assets to support AI, Machine Learning, the Internet of Things, and Industry 4.0?

Coreon Innovation Seminar 

The Future of Human Expert Knowledge

Experts in machine learning and industry consultants gathered in Berlin to discuss and brainstorm about the opportunities Coreon provides for the diverse fields they work in. The Coreon use cases presented were: cross-border e-commerce, AI expert know-how for knowledge-heavy applications, and EU institutions and interoperability. The event was by personal invitation only and was a huge success. We look forward to repeating it soon! Please click here if you would like to be invited next time.

ILKR 2017: Industry 4.0 meets Language and Knowledge Resources

The first trip brought us to Vienna, to the Austrian Standards Institute. ILKR 2017 took place just ahead of the ISO TC37 annual meeting. As its title suggests, ILKR tackles the question of how multilingual knowledge resources enable Industry 4.0. Thus many presentations explored the possibilities around multilingual knowledge management, knowledge transfer, and new business models.

No Industry 4.0 without Semantics

Our contribution illustrated why the Internet of Things and Industry 4.0 need semantics. When hardware devices speak to each other, they interoperate. This requires a mutual understanding of what they actually do, like “I measure temperature. What do you measure?” The answer lies in the semantics of the devices’ metadata. We explained how Multilingual Knowledge Systems (MKS) resolve this challenge, how they facilitate interoperability, and how existing terminologies, taxonomies, and ontologies can be re-purposed to become an MKS.

Interoperability by Multilingual Knowledge System (MKS) semantic mapping

ILKR was followed by a pretty exciting workshop on eCl@ss and Multilingual Product Master Data Management. It had a particular focus on how e-procurement processes benefit from classifications and knowledge systems.


TSS 2017: Terminology Summer School

This year back in Cologne, the TSS is a five-day course that attracts participants from around the world who are looking for a kick-start in terminology and knowledge resource management. During the first three days, TSS usually covers the fundamentals of terminology management and its role in business processes. Then we were invited to give two presentations:
    Michael Wetzel, Coreon MD, about KOS and Semantic Web
  1. Terminologies and other Knowledge Organization Systems (KOS): What is a KOS, what are its benefits, typical examples, the role it plays in the Semantic Web? What is the difference between a classification, a taxonomy, a thesaurus, and an ontology?
  2. Knowledge meets Language: Multilingual Concept Maps: How Coreon is a fusion of terminology with taxonomy / ontology, what benefits organizations enjoy by deploying Multilingual Knowledge Systems
Coreon is proud to be a regular sponsor of TSS, and we look forward to next year, then again in Vienna (9-13 July 2018).


Terminology - Ontology Round Table

In mid-August we were invited to a one-day workshop on the touch-points between terminology and ontology, both as data and as disciplines. It took place at the HS Karlsruhe, sponsored by DIT, and was organized by Petra Drewer, Francois Massion, and Donatella Pulitano. The workshop benefited from a valuable mix of participants: academic researchers from the terminology and ontology worlds, industry and institutional representatives (SAP, DIN, Deutsche Bahn …), and tool vendors. Its goal was to find commonalities and differences between the two disciplines. As a provider of a unified solution, we contributed to the workshop by illustrating how Coreon customers benefit from a fusion of terminology with ontology. Experts confirmed our claim that human-curated resources, i.e. MKS, are indispensable to make Machine Learning work for less-resourced domains and languages.

We recommend Petra’s and Francois’ presentation at the upcoming tekom conference on exactly that topic, Wed, 25 Oct, 11:15: Why Artificial Intelligence requires intelligent terminologies (and terminologists)!

See Coreon live this Autumn 2017

And of course, we’d be happy to meet you on upcoming events this autumn:
  • LT Industry Summit, 9-11 Oct, Brussels
    Meet Jochen Hummel, Coreon CEO and Chairman of the Board of LT Innovate, at the event. Do not miss the opening keynote by Mariya Gabriel, Commissioner for Digital Economy and Society, and Jochen's panel session "Artificial Intelligence: Hype or Reality?" on Oct 10, 9am.
  • tekom / tcworld, 24-26 Oct, Stuttgart
    Find us in the large hall C2, booth 2/G04 together with our partner company Semantix.
    We are proud to present recent highlights, such as brand new filtering capabilities and inline formatting! Learn how Multilingual Knowledge Systems boost AI and Machine Learning solutions and how they make the Internet of Things and Industry 4.0 work. Join us for a product demo Tuesday afternoon, 14:45 room C10.1.
Happy networking!