Consider the last problem you had with your smartphone: your troubles were likely shared by hundreds of other people. Most problems we face have already been faced and solved by someone else, yet we spend great energy on them anyway. Problems repeat themselves in corporate life too, and for many companies solving already-solved problems is a big part of their business, their service and their expenses.
Now consider seeing the answer at the moment you express your worry. In the domains of technical, customer and product support, existing solutions often lie forgotten in corporate support archives, and reusing old solutions can bring great rewards. In an ideal world, if an old solution existed, support staff would see it the moment they saw the problem.
Yet old archives may not be the easiest place to look for answers. The answers may be hidden within tens of thousands of long discussions that are difficult to search and follow. However tempting the hidden insights and knowledge, they may remain almost impossible to tap and reuse.
This was the problem ABB Motors and Generators faced when we first met them. They had a great wealth of data holding great insights: one database alone contained 70,000 support emails, and there were numerous others. Their question to us was: could we extract the problems and their solutions from the old discussions? The goal was to create a knowledge database filled with solutions.
Our customers knew their data intimately and they knew the task would be challenging. A single ticket could contain hundreds of long emails, with questions and solutions scattered throughout the discussions. Recognizing the questions and their solutions would be only part of the problem. Indeed, when I dug deeper into the data, I felt overwhelmed by the length and sophistication of the conversations, and impressed by the deep and broad knowledge of the ABB experts. A great number of cases seemed unique or overly complex, and many questions were about serial numbers or schedules and therefore useless for our purpose.
Yet there was also repetition, such as the recurring questions around tolerances, maintenance and motor greases. There seemed to be plenty of useful insight to extract and reuse, and we had processed large amounts of text before with good results. So our answer to ABB was that we would try to solve the problem, and if they were not happy with the results they wouldn't have to pay.
Example problem and solution emails from the ABB support database.
So the problem was extracting problems and their solutions from countless long emails. Perfection was not required, and the customer had implied that extracting even a fraction of all question–answer pairs could be of great value. Yet producing any kind of practical, useful result seemed like a challenge.
There were a few ideas for dealing with the data. One was simply parsing the natural language in the text with software like the Stanford Parser, which seemed impressive and almost as if it had been designed for identifying questions in text. However, a large part of the text was in Finnish, and parsers were only available for the major world languages. In actual tests the Stanford Parser also seemed somewhat fragile when dealing with the not-always-so-correct text found in the emails. Another major problem was speed: in superficial testing, parsing a single phrase could take over a second, so parsing all the data could have taken days. This option was quickly abandoned.
Another alternative was parsing the text with hand-written code. I spent a little time trying this approach, partly to test how easily questions could be recognized. It soon revealed itself to be impractical: very early on, the code became rather complex and difficult to maintain, so I quickly abandoned that option too. However, the exercise gave me a clear idea of how questions and answers could be recognized, and how the problem should be solved.
The solution we crafted to the problem was ultimately a very simple one. It was fast to create, fast to run and fast to deliver. I had a working proof of concept within two weeks, and within four weeks we had generated 3150 question and answer articles.
The basic solution had two parts. The first was the content-processing pipeline: you pushed over 100 GB of XML into its front end, and out of the back end came the 3150 articles containing the questions, their solutions and the related metadata and files. The generated articles were not of great quality, but their quality could be mechanically scored, so that the best articles ended up at one end of the ranking and the worst at the other.
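As a rough illustration, mechanical scoring of the generated articles could look something like the following toy heuristic. The weights, thresholds and example data are invented for illustration; the original scoring logic is not described here.

```python
def score_article(question, answer, q_confidence, a_confidence):
    """Toy quality score: combine classifier confidences with a length check.

    All thresholds and weights here are illustrative guesses, not the
    actual metric used in the project.
    """
    score = q_confidence * a_confidence
    for text in (question, answer):
        words = len(text.split())
        if words < 5 or words > 400:   # penalize one-liners and walls of text
            score *= 0.5
    return score

articles = [
    ("What grease should we use for this motor?",
     "Use the grease type listed in the manual.", 0.9, 0.8),
    ("ok", "thanks", 0.4, 0.3),
]
# Rank generated articles from best to worst for the human editors.
ranked = sorted(articles, key=lambda a: score_article(*a), reverse=True)
```

The point of such a score is not accuracy in itself, but that sorting by it pushes the most promising articles to the front of the editors' queue.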
The processing pipeline itself consisted of a number of smaller components. The first parsed the XML to extract emails, metadata and files, and split the email text into chapters, phrases and words. The second analyzed the emails to recognize and separate emails embedded within other emails, and removed duplicated content. The third used a simple machine learning technique (naive Bayes) to classify emails into problem, solution and garbage categories. The fourth classified the phrases within the emails to remove uninteresting content such as greetings and casual discussion. The fifth produced the articles based on this classification.
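The third component's email classification can be sketched as a minimal multinomial naive Bayes over bag-of-words features. This is an illustrative reconstruction, not the project's actual code, and the training snippets and class names are invented:

```python
import math
from collections import Counter, defaultdict

class NaiveBayesEmailClassifier:
    """Minimal multinomial naive Bayes over bag-of-words features."""

    def __init__(self):
        self.class_counts = Counter()            # emails seen per class
        self.word_counts = defaultdict(Counter)  # word frequencies per class
        self.vocab = set()

    def train(self, text, label):
        words = text.lower().split()
        self.class_counts[label] += 1
        self.word_counts[label].update(words)
        self.vocab.update(words)

    def classify(self, text):
        words = text.lower().split()
        total = sum(self.class_counts.values())
        best_label, best_score = None, float("-inf")
        for label in self.class_counts:
            # log prior plus log likelihoods with add-one smoothing
            score = math.log(self.class_counts[label] / total)
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for word in words:
                score += math.log((self.word_counts[label][word] + 1) / denom)
            if score > best_score:
                best_label, best_score = label, score
        return best_label

clf = NaiveBayesEmailClassifier()
clf.train("the motor is overheating and trips the breaker", "problem")
clf.train("what grease should we use for this bearing", "problem")
clf.train("please apply the grease listed in the manual", "solution")
clf.train("replace the bearing and reset the breaker", "solution")
clf.train("thanks and have a nice weekend", "garbage")
print(clf.classify("which grease is recommended for the motor"))  # "problem"
```

Part of the appeal of naive Bayes here is that it is fast to train and run over a large corpus, which matters when the alternative (full parsing) was estimated to take days.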
The example emails after recognizing embedded emails, problems, questions and relevant content. Question emails are colored red, while solutions are colored blue. Discarded content is marked in grey.
The second part of the solution was a web UI for editing the articles. Obviously we could not create perfect articles algorithmically, and even the best auto-generated articles required editing. Many otherwise good articles were missing critical attachments, tables or pictures, or contained questions like "could you send these parts?" that were useless for the purpose. Still, while the solution could not generate publishable articles, it could produce material that could be manually edited into polished form with great efficiency. Based on early experiments, we estimated that an expert could produce approximately 250 polished articles a week, which was pretty good considering the purpose.
The article before and after being edited by an ABB expert
The heart of the solution was the machine learning algorithm that recognized questions, solutions and key phrases. Yet producing sensible input for that algorithm ended up being a much bigger issue than the machine learning itself.
First of all, there was a lot of work in parsing the complex XML and extracting the emails, files and other content in their many forms. Another challenge was forming phrases and words, and the way the text was parsed into features. For example, separating greetings from other content became a challenge in cases like "Hello Bob I have the following question...". Nor could all files, images and other objects be extracted from the raw data, partly due to time and budget constraints, which left some articles incomplete and useless.
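The greeting-separation difficulty can be illustrated with a naive sketch. The regex and helper below are invented for illustration, not taken from the actual pipeline, and real emails (including Finnish ones, for which a few greetings are guessed here) have far more variation than this handles:

```python
import re

# Match an opening salutation, optionally followed by a single capitalized
# name, so "Hello Bob I have a question" splits even without punctuation.
# The Finnish greetings ("moi", "hei", "terve") are assumptions.
GREETING_RE = re.compile(
    r"^\s*(?:hello|hi|hey|dear|moi|hei|terve)\b[,\s]*(?:[A-Z]\w+)?[,!]?\s*",
    re.IGNORECASE,
)

def strip_greeting(first_line: str) -> str:
    """Strip a recognized greeting prefix from an email's first line."""
    return GREETING_RE.sub("", first_line, count=1)

print(strip_greeting("Hello Bob I have the following question..."))
# "I have the following question..."
```

Rules like this quickly accumulate exceptions, which is exactly why input preparation dominated the effort.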
At the same time, the machine learning algorithm itself was very simple and performed very well. The results could likely have been improved by providing more context for the phrases, or by actually examining their structure. One idea was to use grammar-learning algorithms to recognize structures in the text; another was to hand-write a custom parser for extracting phrase structure and useful features. Still, it seemed unlikely that the time spent crafting a more sophisticated solution would have been worth the investment.
Ultimately we extracted the old answers and solutions from ABB's databases. This was something we could openly call a success, and the kind of success our customer wanted to repeat with other databases. We had unsealed the gold mines and started extracting the precious insight that was so valuable to our customer.
The value we proposed to ABB was that their customers' problems would no longer need to be solved twice. And that mattered, because support is a large and important part of their service and their business.