This can be, for example, a graph, an image, a step-by-step solution, or a table. Each of these belongs to its own section called a pod, which in turn contains subpods that hold the individual pieces of data. Considering that a single response can sometimes include ten or so pods, it's desirable to tell the API exactly what we want to receive. To do that, the podtitle parameter or the more robust includepodid parameter can be used, as shown above. The filtered response for the above query then contains only the pods we asked for (a sketch of such a filtered request is included after this section).

Considering that we want to ask multiple questions, we also have to make multiple queries. The first of them is directed at the v1/conversation endpoint and includes the question in the i parameter. We also specify our location with the geolocation parameter; this is one of the optional values (the others are ip and units) that can provide context for the questions. This first request does pretty much the same thing as the Spoken Results API, meaning that it returns the information in the form of a full sentence. The fun starts when we ask follow-up questions. To do so, we make another query; this time, however, we send it to the host that was provided as part of the response to the first query. The first response also includes a conversationID, which we have to pass along so the API knows what was said before. The second query then returns the same type of result as the first one, which allows us to keep asking more questions using the provided conversationID and host (see the second sketch below). One last thing I want to highlight here is how nicely the questions and answers can flow and use context.
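To make the pod-filtering part concrete, here is a minimal sketch of such a request in Python using the requests library. The query text and the "Result" pod id are just example values, and YOUR-APP-ID is a placeholder for your own key:

```python
import requests

# Minimal sketch: query the Full Results API and keep only the pods we care about.
# APP_ID is a placeholder -- use your own key from the Wolfram developer portal.
APP_ID = "YOUR-APP-ID"

params = {
    "appid": APP_ID,
    "input": "distance from Earth to the Moon",
    "includepodid": "Result",   # only return the pod with this id
    "output": "json",
}
r = requests.get("https://api.wolframalpha.com/v2/query", params=params)
data = r.json()["queryresult"]

# Each pod contains subpods; the plaintext field holds the readable answer.
for pod in data.get("pods", []):
    for subpod in pod.get("subpods", []):
        print(pod["title"], "->", subpod.get("plaintext"))
```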
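And here is a rough sketch of the conversational flow described above: an initial question, then a follow-up sent to the returned host together with the conversationID. The questions, the geolocation value and the key are placeholders, and the exact endpoint path and parameter names should be checked against the current Conversational API documentation:

```python
import requests

APP_ID = "YOUR-APP-ID"  # placeholder key
BASE = "https://api.wolframalpha.com/v1/conversation.jsp"

# First question: sent to the default host, with optional geolocation context.
first = requests.get(BASE, params={
    "appid": APP_ID,
    "i": "How far away is the Moon?",
    "geolocation": "50.06,19.94",   # example latitude,longitude pair
}).json()
print(first["result"])

# Follow-up question: sent to the host from the first response,
# passing conversationID (and "s" if present) so the API keeps the context.
followup_url = f"https://{first['host']}/api/v1/conversation.jsp"
params = {
    "appid": APP_ID,
    "i": "How long would it take to walk there?",
    "conversationid": first["conversationID"],
}
if "s" in first:
    params["s"] = first["s"]
second = requests.get(followup_url, params=params).json()
print(second["result"])
```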