Wednesday, April 5, 2017

Why Machine Learning Still Needs Humans for Language

Outperforming Humans

Machine Learning (ML) is beginning to outperform humans in many tasks that seemingly require intelligence. The hype around ML even makes it into the mass media. ML can read lips, recognize faces, or transform speech to text. But when ML has to deal with the ambiguity, variety, and richness of language, when it has to understand text or extract knowledge, it continues to need human experts.

Knowledge is Stored as Text

The Web is certainly our greatest knowledge source. However, the Web has been designed to be consumed by humans, not by machines. The Web’s knowledge is mostly stored in text and spoken language, enriched with images and video. It is not a structured relational database storing numeric data in machine-processable form.

Text is Multilingual

The Web is also very multilingual. Recent statistics show that, surprisingly, only 27% of the Web’s content is in English and only 21% is in the next five most used languages. That means more than half of its knowledge is expressed in a long tail of other languages.

Constraints of Machine Learning


ML faces some serious challenges. Even with today’s hardware, the demand for computing power can become astronomical when input and desired output are rather fuzzy (see the great NYT article "The Great A.I. Awakening").

ML is great for 80/20 problems, but it is dangerous in contexts with high accuracy needs: “Digital assistants on personal smartphones can get away with mistakes, but for some business applications the tolerance for error is close to zero,” emphasizes Nikita Ivanov of Datalingvo, a Silicon Valley startup.

ML performs well on n-to-1 questions. For instance, in face recognition, “all these pixels show which person?” has only one correct answer. However, ML struggles with n-to-many or gradual questions: there are many ways to translate a text correctly or to express a certain piece of knowledge.

ML is only as good as its available relevant training material. For many tasks, mountains of data are needed, and the data had better be of supreme quality. For language-related tasks these mountains of data are often required per language and per domain. Further, it is also hard to decide when the machine has learned enough.

Is Monolingual ML Good Enough?


Some suggest simply processing everything in English. ML also does an OK job at Machine Translation, as Google Translate shows. So why not translate everything into English and then run our ML algorithms? This is a very dangerous approach, since errors multiply. If the output of an 80% accurate Machine Translation becomes the input to an 80% accurate Sentiment Analysis, the combined accuracy drops to 64%. At that hit rate you are getting close to flipping a coin.
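The arithmetic behind this compounding is simple. A tiny sketch, illustrative only and assuming each stage's errors are independent:

```python
def pipeline_accuracy(*stage_accuracies):
    """Combined accuracy of a pipeline whose stages fail independently:
    the product of the per-stage accuracies."""
    result = 1.0
    for acc in stage_accuracies:
        result *= acc
    return result

# 80% MT feeding 80% Sentiment Analysis:
print(round(pipeline_accuracy(0.80, 0.80), 2))  # 0.64
```

Every additional lossy stage pushes the combined figure further down, which is why chaining "good enough" components quickly stops being good enough.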


Human Knowledge to Help


The world is innovating constantly. Every day new products and services are created. To talk about them we continuously craft new words: the bumpon, the ribbon, a plug-in hybrid, TTIP ‒ only with the innovative force of language can we communicate new things.

Struggle with Rare Words

By definition new words are rare. They first appear in one language and then may slowly propagate into other domains or languages. There is no knowledge without these rare words, the terms. Look at a typical product catalog description with the terms highlighted. Now imagine this description without the terms: it would be nothing but a meaningless scaffold of filler words.

Knowledge Training Required

At university we acquire the specific language, the terminology, of the field we are studying. We become experts in that domain. But even so, later in our professional careers, when we change jobs, we still have to acquire the lingo of the new company: names of products, modules, services, but also job roles and their titles, names for departments, processes, etc. We become familiar with a specific corporate language by attending training and by reading policies, specifications, and functional descriptions. Machines need to be trained in the very same way, with that explicit knowledge and language.

Multilingual Knowledge Systems Boost ML with Knowledge


There is a remedy: terminology databases, enterprise vocabularies, word lists, glossaries – organizations usually already own an inventory of “their” words. This invaluable data can be leveraged to boost ML with human knowledge: by transforming these inventories into a Multilingual Knowledge System (MKS). An MKS captures not only all words in all registers in all languages, but also structures them into a knowledge graph (a 'convertible' IS-A 'car' IS-A 'vehicle' …, 'front fork' IS-PART-OF 'frame' IS-PART-OF 'bicycle').
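A minimal sketch of what such a knowledge graph looks like in code (my own illustration, not Coreon's actual data model; the concept names and translations are taken from, or invented around, the examples above):

```python
# Concepts carry terms per language; typed relations connect them.
concepts = {
    "vehicle":     {"terms": {"en": "vehicle", "de": "Fahrzeug"}},
    "car":         {"terms": {"en": "car", "de": "Auto"}},
    "convertible": {"terms": {"en": "convertible", "de": "Cabrio"}},
    "bicycle":     {"terms": {"en": "bicycle", "de": "Fahrrad"}},
    "frame":       {"terms": {"en": "frame", "de": "Rahmen"}},
    "front fork":  {"terms": {"en": "front fork", "de": "Vordergabel"}},
}

relations = [
    ("convertible", "IS-A", "car"),
    ("car", "IS-A", "vehicle"),
    ("front fork", "IS-PART-OF", "frame"),
    ("frame", "IS-PART-OF", "bicycle"),
]

def broader(concept, relation_type="IS-A"):
    """Follow one relation type upward and return the chain of concepts."""
    chain = [concept]
    while True:
        nxt = next((t for s, r, t in relations
                    if s == chain[-1] and r == relation_type), None)
        if nxt is None:
            return chain
        chain.append(nxt)

print(broader("convertible"))               # ['convertible', 'car', 'vehicle']
print(broader("front fork", "IS-PART-OF"))  # ['front fork', 'frame', 'bicycle']
```

The point is that the same structure serves every language at once: the graph is language-neutral, while the terms hang off each concept node.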

It is the humanly curated Multilingual Knowledge System that enables ML and Artificial Intelligence solutions to work for specific domains with only small amounts of textual data and also for less resourced languages.

Thursday, March 23, 2017

Excel with Enterprise Taxonomy

In multiple blog posts we have mentioned Multilingual Knowledge Systems (MKS) and how they are a core component in several applications, both monolingual and multilingual. An MKS is in fact a multilingual Enterprise Taxonomy.

We have explained what an MKS is and now we want to advise you how to build one.

People often fear the task of creating the basic infrastructure (an Enterprise Taxonomy) for their operations in different countries. They think that it is too costly, needs special expertise, and is difficult to maintain. Often this is due to expensive software that is homegrown and cumbersome to use. What many do not realize is that they already have this data and have been paying for it for years in their translation contracts.

What you need to do is the following:

  • Collect your terminology data in all the languages you need from your translation provider and send it to us at
  • Assign a responsible knowledge carrier with a good overview of your operations. 

At Coreon we will manage your terminology data and in collaboration with you and your experts our team will structure, verify and QA the result.

A RESTful API makes connectivity straightforward. Your company can easily add a new product/service/operation on top of your Enterprise Taxonomy.
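For illustration, here is a hedged sketch of how a client might compose a request against such a REST API; the endpoint path and query parameters below are hypothetical, not a documented Coreon interface:

```python
from urllib.parse import urlencode

def build_concept_search_url(base_url, query, lang):
    """Compose a GET URL for looking up a concept by term and language.
    Endpoint and parameter names are illustrative assumptions."""
    params = urlencode({"q": query, "lang": lang})
    return f"{base_url}/concepts/search?{params}"

url = build_concept_search_url("https://api.example.com/v1", "front fork", "en")
print(url)  # https://api.example.com/v1/concepts/search?q=front+fork&lang=en
```

Any HTTP client can then issue the request and feed the returned concepts into downstream applications.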

Deploy the power of your MKS in your applications. Contact us and we will get back to you with a proposal that will do more than make you happy: it will boost your career!

Saturday, December 3, 2016

Symbiosis of Language Technology and AI at LT-Accelerate

AI and Natural Language Processing propel each other, because most of human knowledge and interaction is textual. And what is textual is, globally, always multilingual. The LT-Accelerate Conference (Brussels, Nov 20-21) focuses on Text Analytics, AI, and related subjects. The speakers (CEOs of SMEs, project leads and data scientists of larger companies, and NLP/AI researchers) provided amazing insights into the progress the field has made in the last year.

Needless to say, here too the industry’s buzzwords Deep, Neural, and Machine Learning are ubiquitous. Luckily, innovators have become much better at explaining the concepts behind them and how to use them. Open Source Software puts these powerful tools into the hands of smaller teams, too. Matthew Honnibal of spaCy summarized one use case nicely: “You shall know a word by the company it keeps”.

Michalis Michael, CEO of DigitalMR, sets the bar high for state-of-the-art text analytics. Sentiment accuracy and topic match have to be >80% while significantly reducing noise. Only by supporting all languages can enterprises become omniscient. Human emotions are slightly more complex than Positive/Neutral/Negative; HeartBeat AI, for example, features a comprehensive emotion model. Text analytics needs to be meaningfully integrated into existing surveys and other data sources. Profiling allows customer segmentation by demographics or other derived variables.

Demanding requirements, but when done right, text analytics strongly correlates with survey results, only much cheaper. Therefore the industry is bullish that its currently still small 3% share of the $65B spent annually on market research will grow dramatically.

Mike Hyde, Skype’s former Director of Data and Insights, explained why Bots are the new Apps. These bots need to understand language. They must have access to, and make sense of, enterprise knowledge. And the bots have to be polyglot. A rich playing field for language technology deployed on top of a Multilingual Knowledge System.

Many believe Machine Learning can do miracles. And it does, as long as there are mountains of good data at hand. For example, Google claims to have outperformed humans in lip reading (automatic speech recognition of videos is at 95-98% accuracy, so there is lots of data). Microsoft claims that it does as well as humans in describing pictures in one sentence.

However, often there aren't humongous amounts of data available. And obviously “>80%” accuracy doesn’t cut it when applications deal with serious matters such as health, legal, or money. The community agrees that for most use cases Machine Learning needs to be based on human knowledge: on taxonomies, ontologies, and terms.

Friday, October 14, 2016

Tackling Important Knowledge Needs in Manufacturing

A multilingual knowledge system (MKS) combines language, such as multilingual terminology, with knowledge, a graph connecting the concepts via relations. The most common type of relation is the hierarchical one, also known as broader/narrower or parent/child relation. Such a hierarchically structured concept system is often also referred to as a taxonomy.

Many taxonomies that we see in real life are used to categorize vast amounts of information into classes and sub-classes, from more generic to more specific, so that we humans can navigate and find information in an easy and meaningful way: books in libraries, types of goods in an online shop, or even a classification of industries – for instance the Industry Classification Benchmark (ICB). Fine.

Concept Systems to Model Products, Modules, Components
Now, manufacturers use multilingual knowledge systems to capture the language, the labeling of the products, modules, and components that make up their goods. Also in this scenario, large numbers of concepts (pieces, parts, assembled components) are put in relation to each other. Let us imagine a manufacturer that produces bicycles. A bicycle – the broadest concept – then consists of a frame, handlebar and saddle, wheels and brakes, the propulsion, etc. These concepts, too, are in a hierarchical relation to each other. However, in contrast to classification systems or nomenclatures, the broader-narrower relation that we see here is better understood as an is-part-of or consists-of relation: a spoke reflector is-part-of a wheel (... yes, experts in linguistics and semantics already know that I am talking about the so-called meronymy here).

Re-Using Parts and Components
Of course, in manufacturing processes components are re-used everywhere, as from a library or a construction kit: one component – i.e. a concept in the knowledge system – becomes part of several modules; it is assembled into several products. For instance, a wheel is part of the front fork as well as of the rear frame. This means that in an MKS the concept 'wheel' must appear twice: once as a child concept of 'front fork', once as a child concept of 'rear frame' (and probably again and again in different types of bicycles). How can we model this in an MKS?
The obvious and immediate answer is usually that an MKS must support so-called polyhierarchical structures: the graph may not be a simple tree but must support multiple parent relations. Of course, Coreon's MKS does support polyhierarchies – multiple parents per child node. Last but not least, this is also a mandatory requirement for classification-like taxonomies: in the taxonomy of a construction market's online shop, the marketeers may want to file 'silicone' both underneath 'bath / ceramics' and under 'building materials'.
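The silicone example can be sketched as a minimal polyhierarchy, with one child filed under two parents (a toy illustration; the node names are invented around the example):

```python
# Each concept maps to its list of broader (parent) concepts.
broader_than = {
    "silicone": ["bath / ceramics", "building materials"],
    "bath / ceramics": ["shop"],
    "building materials": ["shop"],
}

def all_ancestors(concept):
    """Collect every ancestor reachable through any parent path."""
    seen = set()
    stack = list(broader_than.get(concept, []))
    while stack:
        parent = stack.pop()
        if parent not in seen:
            seen.add(parent)
            stack.extend(broader_than.get(parent, []))
    return seen

print(sorted(all_ancestors("silicone")))
# ['bath / ceramics', 'building materials', 'shop']
```

Note that the graph is no longer a tree: 'silicone' is reachable from the root via two distinct paths, which is exactly what a polyhierarchy permits.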

Polyhierarchical Relations alone are not Sufficient
However, in the manufacturing context mentioned above, polyhierarchies do not yet address the requirement to a satisfying degree. Why?
  1. Efficiency: manage the concept only once. It would be absolutely inefficient to store the common information about a wheel as often as it is used in a bicycle. Maintainers would rather elaborate on a concept only once: describe it, illustrate it with images, and incorporate all terms and synonyms in all languages.
  2. Different child concepts ("sub-maps"): depending on which parent a concept is hooked under, it can have different child concepts. For instance, the wheel that is part of the front fork additionally has the part, i.e. the narrower concept, 'dynamo', whereas the wheel that is part of the rear frame is additionally equipped with the 'rear brake'.
These two requirements together ask for a solution that goes beyond polyhierarchies. How does Coreon address this?

Aliases: Re-using a Concept in the Map
Coreon's MKS resolves this with its unique "Alias" capability, a functionality that makes one and the same concept show up several times in the concept system. Intuitive and powerful, aliases work like shortcuts, like placeholders. How does it work?
Instead of hooking the concept 'wheel' twice under different parent concepts, or (even worse) inefficiently storing 'wheel' and all its information and terms multiple times in the repository, we store 'wheel' only once, with all its terms, metadata, and illustrations. Then a maintainer creates one alias to 'wheel' underneath 'front fork' and another alias underneath 'rear frame'. Both aliases are just shortcuts and point to one and the same referent concept, but they make it explicit that the wheel is part of the front fork as well as of the rear frame. A future change made to the referent concept, for instance adding another language, is immediately visible via the aliases as well. Requirement 1) – efficiency – is addressed.
Further, an alias itself exists as a physical node in the knowledge graph; it can have a complete set of its own relations, broader/narrower as well as associative ones. Thus requirement 2) – different sub-maps – is addressed, since one alias can have a different set of narrower concepts than the other.
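A minimal sketch of the alias idea in code (my own illustration, not Coreon's internal model): the referent concept is stored once, while each alias node points at it and carries its own parent and children.

```python
# The referent concept 'wheel' is stored exactly once.
concepts = {
    "wheel": {"terms": {"en": "wheel", "de": "Rad"}},
}

# Two alias nodes point to the same referent, each with its own sub-map.
aliases = {
    "wheel@front-fork": {"refers_to": "wheel", "parent": "front fork",
                         "children": ["dynamo"]},
    "wheel@rear-frame": {"refers_to": "wheel", "parent": "rear frame",
                         "children": ["rear brake"]},
}

def terms_of(alias_id):
    """Aliases inherit all linguistic data from their referent concept."""
    return concepts[aliases[alias_id]["refers_to"]]["terms"]

# Adding a language to the referent is visible through every alias:
concepts["wheel"]["terms"]["fr"] = "roue"
print(terms_of("wheel@front-fork")["fr"])       # roue
print(aliases["wheel@rear-frame"]["children"])  # ['rear brake']
```

Both requirements show up directly in the data: the terms live in one place, yet each alias keeps a distinct set of narrower concepts.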

Aliases in a concept map

Have a look at the screenshot and notice two things: 1) the different rendering of the aliases in the concept map (the ones with the curvy arrow) and 2) aliases pointing to the same referent concept indeed have different children: the wheel built into the front fork additionally has a dynamo, the wheel built into the rear frame has a rear brake. And, of course, aliases can have aliases as narrower concepts, too (see 'spoke reflector').

With this unique capability manufacturers benefit from an efficient and effective way to model products, modules, and components via a Multilingual Knowledge System.

I've written this post with a particular view on the needs of global manufacturers, companies that develop and ship products. I'd be interested to hear opinions on how aliases help in classification and nomenclature-like activities.

Thursday, September 8, 2016

Gartner Strategically Adds Enterprise Taxonomy

Gartner recently published its annual report "Hype Cycle for Emerging Technologies 2016", which for the first time includes Enterprise Taxonomy and Ontology. However, enterprise taxonomy has been placed in Gartner's Trough of Disillusionment (ToD). Over-selling leads to hype, which is naturally followed by the ToD after failing to deliver on exaggerated expectations. Anyhow, in Gartner's model this is the way forward to reach success, i.e. the Plateau of Productivity. Actually, Enterprise Taxonomy has never been a real hype. It was introduced by Gartner because it is gaining ground after being around for quite a while in the US market. Enterprise Taxonomy both underpins Information Infrastructures (separate Gartner report, 2016) and is a "Smart machine technology that will revolutionize manufacturing and its related industries".

Enterprise taxonomy has struggled through many years without professional editing and management software. Still, it has pushed its way into the toolbox of CIOs, as it has proved to be extremely useful. Today, browser-based software is available to support taxonomy assets in a SaaS model. Coreon, for example, seamlessly supports the creation, management, and deployment of enterprise taxonomies. Terms in multiple languages are easily introduced into the asset. Search APIs enable the integration of the knowledge assets into all kinds of applications. This matches the enterprise need to fully support the taxonomy process on a global scale, in multiple languages. Such a global knowledge resource is called a Multilingual Knowledge System.

Recently, Multilingual Knowledge Systems have also become synonymous with cross-border interoperability. They support one of the most important initiatives of the CEC, the Digital Single Market, and underpin the future of the EU. Surely European companies will soon follow the best practice of US enterprises and structure their knowledge to best serve a multilingual market. Actually, European industry could forge ahead and take the lead!

Thursday, August 4, 2016

Centuple your Market with Language-neutral Product Search

A German and an Italian go to a Polish online shop… Sounds like the start of a bar joke? The situation indeed seems bizarre, in spite of all the EU propaganda about the Digital Single Market. The German and the Italian simply won’t understand a single word. They cannot find any products in this Polish shop.


Easy Fix Translation?

Translating static content and product catalogues into English will surely help. In some EU countries half of the population has a satisfying passive command of English. However, that percentage quickly becomes minuscule when consumers have to enter the right English search term for a specific domain. And translation into all, or even only the major, languages is in almost all cases cost-prohibitive.


Domestic Customers have a Hard Time, too!

For domestic customers it’s not easy to find products either. Today’s string-based search often returns no matches. Or it finds too many, displayed in unintuitively sorted lists, which has the same effect. Instead of searching semantically for what the customer wants, online shops expect their customers to enter the very same strings they have used in their catalogues.


Scroll Hit Lists or Explore Product Offering?

Search results are displayed as a list of matches, a column of product names, or tiles of product images. But what if there are plenty of matches in multiple categories? An alternative, more natural way would be to display the search result graphically, in a product tree with related products organized close to each other. This way the shopper can quickly find the product she was actually looking for and is motivated to buy more.


Social Shopping

E-commerce has increased the buying options to a degree that leaves many consumers completely lost. Therefore online shoppers often rely on third-party information such as tests, feedback, blogs, etc. to make a buying decision. Shops want to give their customers the comforting feeling of having made an informed and good decision, without having to leave the shop.


Solution Architecture for Advanced Linguistic Product Search

All the above requirements, particularly semantic and cross-language search, can be fulfilled relatively easily by deploying Advanced Linguistic Search (ALS) on top of a Multilingual Knowledge System (MKS). The following chart illustrates the architecture for finding products semantically and language-neutrally:

The ALS deals with language specifics such as morphology, spelling variations, etc. Deployed with the MKS, it can expand searches semantically and across languages. The MKS stores the product knowledge in different languages. Its knowledge graph is used for product exploration. Relevant products are listed by semantic proximity, not string-match scores. Alternatively, the shopper can explore the offering in a product graph. Supporting third-party information is provided, machine-translated if originally in a different language, to help the consumer make buying decisions without leaving the shop.
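To make the cross-language expansion concrete, here is a hedged sketch: a search term entered in any language is mapped to its concept in the MKS and expanded to all synonyms and translations before matching the catalogue. The mini-MKS and the terms in it are invented for illustration.

```python
# Toy MKS: one entry per concept, terms grouped by language.
mks = {
    "bicycle": {"en": ["bicycle", "bike"], "de": ["Fahrrad"], "pl": ["rower"]},
    "helmet":  {"en": ["helmet"], "de": ["Helm"], "pl": ["kask"]},
}

def expand_query(term):
    """Expand a search term to every variant of its concept, in all
    languages; fall back to the literal term if no concept matches."""
    for variants in mks.values():
        all_terms = [t for ts in variants.values() for t in ts]
        if any(term.lower() == t.lower() for t in all_terms):
            return sorted(set(all_terms))
    return [term]

print(expand_query("bike"))  # ['Fahrrad', 'bicycle', 'bike', 'rower']
```

A Polish catalogue indexed under 'rower' thus becomes findable for a German typing 'Fahrrad' or an English speaker typing 'bike', without any catalogue translation.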


Find, Upsell, Advise = Higher Revenue

The above solution, based on a Multilingual Knowledge System such as Coreon, enables online shops to sell much more. Without ongoing translation efforts, shops can drastically extend their customer base in the Digital Single Market. For shops in almost half of the EU countries that increase would be hundredfold!