Google Approaches BERT-Level Performance Using 300x Fewer Parameters With an Extension of Its NLP Model PRADO

The trimmed-down pQRNN extension to Google AI's projection attention neural network PRADO achieves performance comparable to BERT on text classification tasks while remaining small enough for on-device use.

Innovations in deep neural networks, particularly in natural language processing, continue to evolve to meet new market demands. Recently there has been growing interest in small, accurate NLP models that can run entirely on smartphones, smartwatches, and IoT devices. Much current research explores ways to shift NLP models so they run on-device rather than in data centers.

pQRNN combines three building blocks: a projection operation that converts tokens in the text to a sequence of ternary vectors, a dense bottleneck layer that learns a per-word representation relevant to the NLP task, and a stack of QRNN encoders that learns a contextual representation from the text input alone, without any preprocessing.
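To make the data flow concrete, below is a minimal PyTorch sketch of those three blocks. The names (`ternary_projection`, `QRNNLayer`, `PQRNNSketch`), the hashing scheme, the layer sizes, and the fo-pooling QRNN variant are illustrative assumptions, not Google's actual implementation, which is trained end to end in TensorFlow with more elaborate encoders.

```python
import zlib
import torch
import torch.nn as nn

def ternary_projection(tokens, dim=128):
    """Hash each token to a fixed ternary vector in {-1, 0, 1}^dim.
    No embedding table is stored; vectors are recomputed from the token hash.
    (Assumed scheme; the real PRADO/pQRNN projection differs in detail.)"""
    vecs = []
    for tok in tokens:
        g = torch.Generator().manual_seed(zlib.crc32(tok.encode()))
        vecs.append(torch.randint(-1, 2, (dim,), generator=g).float())
    return torch.stack(vecs)  # (seq_len, dim)

class QRNNLayer(nn.Module):
    """Quasi-recurrent layer: a causal convolution computes candidate and gate
    activations in parallel; a lightweight fo-pooling recurrence mixes them."""
    def __init__(self, in_dim, hid_dim, kernel=2):
        super().__init__()
        self.conv = nn.Conv1d(in_dim, 3 * hid_dim, kernel, padding=kernel - 1)
        self.hid_dim = hid_dim

    def forward(self, x):                                     # x: (batch, seq, in_dim)
        zfo = self.conv(x.transpose(1, 2))[..., : x.size(1)]  # trim pad -> causal
        z, f, o = zfo.transpose(1, 2).chunk(3, dim=-1)
        z, f, o = torch.tanh(z), torch.sigmoid(f), torch.sigmoid(o)
        c = x.new_zeros(x.size(0), self.hid_dim)
        outs = []
        for t in range(x.size(1)):                            # fo-pooling recurrence
            c = f[:, t] * c + (1 - f[:, t]) * z[:, t]
            outs.append(o[:, t] * c)
        return torch.stack(outs, dim=1)                       # (batch, seq, hid_dim)

class PQRNNSketch(nn.Module):
    """Projection -> dense bottleneck -> stacked QRNN encoders -> classifier."""
    def __init__(self, proj_dim=128, bottleneck=64, hidden=96,
                 n_layers=2, n_classes=2):
        super().__init__()
        self.bottleneck = nn.Linear(proj_dim, bottleneck)  # per-word representation
        dims = [bottleneck] + [hidden] * n_layers
        self.encoders = nn.ModuleList(
            QRNNLayer(dims[i], dims[i + 1]) for i in range(n_layers))
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, projected):                          # (batch, seq, proj_dim)
        h = torch.relu(self.bottleneck(projected))
        for enc in self.encoders:
            h = enc(h)
        return self.classifier(h.mean(dim=1))              # mean-pool, then classify

# Usage: score one whitespace-tokenized sentence.
x = ternary_projection("this movie was great".split()).unsqueeze(0)
logits = PQRNNSketch()(x)                                  # (1, n_classes)
```

Because the projection is computed by hashing rather than looked up in a learned embedding table, the model carries no vocabulary matrix, which is where most of the parameter savings over BERT-class models comes from.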

