AutoQL’s core translation model delivers efficient, accurate responses with the help of several auxiliary models. These models work together to facilitate seamless conversational data experiences for your users and are trained to handle the subtleties and nuances of human conversation.
In addition to enabling the core service of AutoQL (the dynamic translation of natural language (NL) to database query language), several user-experience-enhancing ML models are also at work to:
- Enable seamless, self-service user onboarding through a searchable catalogue of NL queries. Much like Google Search, users simply type in a topic or theme they are interested in and automatically receive a list of every possible NL query they could ask that contains their input. This model empowers users to get started quickly, explore queries they can ask, and experience success right away.
- Catch queries that contain references to unique data, ensuring your users receive the response they need, regardless of their knowledge of your unique database structure or specific value labels. This model bridges the gap between the way data is uniquely labeled in a database and the way your user might naturally refer to that same data in their own NL.
- Automatically populate similar complete query suggestions as the user types, so users can query efficiently and accurately. This model serves as an intelligent autocomplete function, much like the autocomplete users are accustomed to in other smart search interfaces or while texting.
- Return accurate responses even in cases where a user's initial query contained some degree of ambiguity or lacked context. This model catches ambiguous queries automatically and returns similar recommended queries so users can quickly clarify what they were looking for and get exactly what they expect to see.
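The first, third, and fourth behaviors above can be sketched with plain string matching. Everything in the sketch below is hypothetical: the mini-catalogue, the function names, and the matching logic are illustrative only, and AutoQL's production models are far more sophisticated than substring and similarity matching.

```python
from difflib import get_close_matches

# Hypothetical mini-catalogue of NL queries (illustrative only; AutoQL's
# real catalogue is generated from your data and its models are ML-based).
CATALOGUE = [
    "total sales by month",
    "total sales by region",
    "top customers by revenue",
    "average order value by month",
]

def search_catalogue(topic: str) -> list[str]:
    """Onboarding search: return every catalogued query containing the input."""
    topic = topic.lower()
    return [q for q in CATALOGUE if topic in q]

def autocomplete(prefix: str) -> list[str]:
    """Suggest complete queries that start with what the user has typed so far."""
    prefix = prefix.lower()
    return [q for q in CATALOGUE if q.startswith(prefix)]

def recommend_similar(ambiguous_query: str) -> list[str]:
    """For an ambiguous query, recommend the closest catalogued alternatives."""
    return get_close_matches(ambiguous_query.lower(), CATALOGUE, n=3, cutoff=0.5)
```

For example, `search_catalogue("sales")` returns both "total sales" queries, and a near-miss like `recommend_similar("total sales by monthh")` surfaces "total sales by month" as a clarifying suggestion.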