Many people around the world spend much of their time reading social media posts, articles, and e-mails, and the flood of text is only likely to grow. To help deal with this, a research team at Salesforce has developed a new algorithm that uses machine learning to generate startlingly accurate and coherent summaries.
The automatic summarization technology will be especially useful to Salesforce, which builds an array of customer service-focused products. Customer service and sales representatives could use the generated summaries to quickly digest documents and e-mails, leaving them more time to focus on their customers.
To that end, the company is turning to machine learning for ways to summarize long blocks of text, which it can eventually integrate into its products. It announced two innovations in natural language processing: a new way of training summarization models and a new contextual word generation model. Together, the two breakthroughs let researchers automatically produce summaries of long texts that are both readable and accurate.
Automatic text summarization works through two approaches: abstraction and extraction. Abstraction lets the computer introduce new words, but the system has to understand the original text well enough to insert the right ones. With extraction, the computer can only draw on wording that already exists in the text, which is simpler but far less flexible.
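To make the extraction idea concrete, here is a minimal, hypothetical sketch of frequency-based extractive summarization: sentences are scored by how common their words are in the document, and the top scorers are copied out verbatim. This is a toy illustration, not the Salesforce system.

```python
import re
from collections import Counter

def extractive_summary(text, num_sentences=2):
    """Return the top-scoring sentences, copied verbatim from the text."""
    # Split into sentences on end punctuation followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    # Count word frequencies across the whole document.
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence):
        tokens = re.findall(r"[a-z']+", sentence.lower())
        # Average word frequency, so long sentences are not favored unfairly.
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    # Keep the best sentences, preserving their original order.
    top = sorted(sentences, key=score, reverse=True)[:num_sentences]
    return " ".join(s for s in sentences if s in top)
```

Note that the summary can only ever contain sentences already present in the input; introducing new words would require the abstractive approach described above.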
The next question is how the team trains the system to understand and improve itself. They used two techniques: reinforcement learning and teacher forcing. Reinforcement learning takes its inspiration from the way animals learn: the model is allowed to produce a sequence of words, and the result is scored with an automated evaluation metric called Recall-Oriented Understudy for Gisting Evaluation (ROUGE). Teacher forcing, by contrast, scores the model word by word against an established reference; it gives very good results but does not allow much flexibility.
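ROUGE comes in several variants; the simplest, ROUGE-1 recall, measures what fraction of the reference summary's words appear in the generated one. A minimal sketch of that computation, assuming whitespace tokenization (the actual scorer used in the research is more elaborate):

```python
from collections import Counter

def rouge_1_recall(candidate, reference):
    """Fraction of reference unigrams that also appear in the candidate."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    # Clipped overlap: each reference word counts at most as often
    # as it occurs in the candidate.
    overlap = sum(min(cand[w], ref[w]) for w in ref)
    return overlap / max(sum(ref.values()), 1)
```

During reinforcement learning, a score like this serves as the reward: the model generates a whole summary freely, and the reward nudges it toward word choices that overlap with human references, rather than being corrected at every single word as in teacher forcing.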
The team also presented numerous examples showing an original article, a summary produced by their model, and a human-written summary. In every case, the summaries are significantly shorter than the original text but convey the essentials in a readable form.