Building a Smart Grocery Assistant with QuickCommerce API
Create an AI-powered grocery assistant that finds the best prices and fastest delivery across multiple platforms.
Imagine asking your phone: "Where can I get Amul Butter delivered the fastest and cheapest?" and getting an instant, accurate answer that compares all 7 quick commerce platforms. That's exactly what we're going to build in this tutorial. By combining the power of large language models (LLMs) with the QuickCommerce API as a real-time data source, you can create an AI grocery assistant that genuinely helps users save time and money.
This isn't a hypothetical project — the architecture we'll build here is production-ready. LLMs like GPT-4 and Claude excel at understanding natural language queries, and the QuickCommerce API provides the structured, real-time grocery data they need to give accurate answers. The combination is remarkably powerful. Sign up for free to get your API key and follow along.
Architecture: LLM + QuickCommerce API as a Tool
The core idea is simple: use an LLM as the conversational brain, and give it access to the QuickCommerce API as a "tool" it can call when it needs real-time grocery data. Modern LLMs support function calling (also called tool use), which means the model can decide when to query the API and how to interpret the results — all within a natural conversation flow.
This pattern is sometimes called "retrieval-augmented generation" or RAG, but it's more accurately described as tool-augmented generation. The LLM isn't retrieving from a static knowledge base — it's making live API calls to get current prices, availability, and delivery ETAs. This means your assistant's answers are always up-to-date, unlike a chatbot trained on stale data.
User asks a question
The user types a natural language query like "What's the cheapest place to buy Tata Salt 1kg?" or "Which app can deliver eggs the fastest right now?"
LLM parses intent and selects the right API tool
The LLM understands the user's intent and decides which API endpoint to call — groupsearch for prices, groupeta for delivery times, or both for a comprehensive recommendation.
API returns real-time data
The QuickCommerce API returns structured data with prices, availability, ETAs, and product details from all 7 platforms in a single response.
LLM formats and presents the answer
The LLM takes the raw API data and formats it into a helpful, human-readable response — comparing options, highlighting the best deal, and explaining trade-offs.
Smart Grocery Assistant Architecture
Defining API Tools for the LLM
The first step is to define the QuickCommerce API endpoints as "tools" that the LLM can call. We'll use OpenAI's function calling format here, but the same approach works with Anthropic's Claude tool use, Google's Gemini function calling, or any other LLM that supports structured tool definitions. We need two tools: one for searching products (groupsearch) and one for checking delivery times (groupeta).
tools = [
    {
        "type": "function",
        "function": {
            "name": "search_products",
            "description": (
                "Search for a grocery product across all 7 quick commerce platforms "
                "(BlinkIt, Zepto, Swiggy, BigBasket, DMart, JioMart, Minutes). "
                "Returns product name, price, MRP, availability, and image for each platform. "
                "Use this when the user asks about product prices, availability, or wants to compare options."
            ),
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {
                        "type": "string",
                        "description": "Product search query, e.g. 'Amul Butter 500g' or 'Tata Salt 1kg'",
                    }
                },
                "required": ["query"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "get_delivery_etas",
            "description": (
                "Get estimated delivery times from all 7 quick commerce platforms. "
                "Returns ETA in minutes for each platform. "
                "Use this when the user asks about delivery speed or wants the fastest option."
            ),
            "parameters": {
                "type": "object",
                "properties": {},
                "required": [],
            },
        },
    },
]

Notice how the tool descriptions are detailed and specific — this helps the LLM understand when to use each tool. The `search_products` tool maps to the groupsearch endpoint, while `get_delivery_etas` maps to groupeta. The LLM reads these descriptions to decide which tool to call based on the user's question.
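If you're targeting Claude instead of GPT-4, the same tool translates almost mechanically. Here is a sketch of the equivalent definition in Anthropic's tool-use format, which uses a flat `name`/`description`/`input_schema` shape rather than OpenAI's nested `function` object (the trimmed description text is ours):

```python
# The same search_products tool expressed in Anthropic's tool-use format.
# Claude tools take a JSON Schema under "input_schema" instead of "parameters".
claude_tools = [
    {
        "name": "search_products",
        "description": (
            "Search for a grocery product across all 7 quick commerce platforms. "
            "Returns product name, price, MRP, and availability for each platform."
        ),
        "input_schema": {
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "Product search query, e.g. 'Amul Butter 500g'",
                }
            },
            "required": ["query"],
        },
    }
]
```

The schema body is identical; only the wrapper changes, which is why the dispatch logic later in this tutorial ports to Claude with minimal edits.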
Finding the Best Price with Groupsearch
When the user asks a price-related question, the LLM will call the `search_products` tool, which hits the groupsearch endpoint. Let's see what this looks like for a common grocery item.
curl -X GET "https://api.quickcommerceapi.com/api/v1/groupsearch?query=Tata+Salt+1kg" \
  -H "X-API-Key: YOUR_API_KEY"

{
  "query": "Tata Salt 1kg",
  "results": {
    "blinkit": [
      {
        "name": "Tata Salt Iodised Salt",
        "weight": "1 kg",
        "price": 28,
        "mrp": 28,
        "available": true,
        "productId": "tata-salt-1kg-blinkit"
      }
    ],
    "zepto": [
      {
        "name": "Tata Salt - Iodised",
        "weight": "1 kg",
        "price": 28,
        "mrp": 28,
        "available": true,
        "productId": "tata-salt-iodised-1kg-zepto"
      }
    ],
    "swiggy": [
      {
        "name": "Tata Iodised Salt",
        "weight": "1 kg",
        "price": 28,
        "mrp": 28,
        "available": true,
        "productId": "tata-salt-1kg-swiggy"
      }
    ],
    "bigbasket": [
      {
        "name": "Tata Salt Iodised Salt",
        "weight": "1 kg",
        "price": 28,
        "mrp": 28,
        "available": true,
        "productId": "tata-salt-1kg-bb"
      }
    ],
    "dmart": [
      {
        "name": "Tata Salt Iodised 1kg",
        "weight": "1 kg",
        "price": 25,
        "mrp": 28,
        "available": true,
        "productId": "tata-salt-1kg-dmart"
      }
    ],
    "jiomart": [
      {
        "name": "Tata Salt 1 kg",
        "weight": "1 kg",
        "price": 27,
        "mrp": 28,
        "available": true,
        "productId": "tata-salt-1kg-jiomart"
      }
    ],
    "minutes": [
      {
        "name": "Tata Salt Iodised 1kg",
        "weight": "1 kg",
        "price": 28,
        "mrp": 28,
        "available": true,
        "productId": "tata-salt-1kg-minutes"
      }
    ]
  }
}

Tata Salt 1kg — Price by Platform
The LLM can now see that DMart offers Tata Salt at Rs 25 (a Rs 3 discount from MRP), JioMart at Rs 27, and all other platforms at the full MRP of Rs 28. It can format this into a clear recommendation: "DMart has the best price at Rs 25, saving you Rs 3 compared to other platforms."
Finding the Fastest Delivery with Groupeta
When speed matters more than price, the LLM calls the `get_delivery_etas` tool, which hits the groupeta endpoint. This returns estimated delivery times from all platforms for the user's location.
curl -X GET "https://api.quickcommerceapi.com/api/v1/groupeta" \
  -H "X-API-Key: YOUR_API_KEY" \
  -H "x-geolocation-pincode: 400001"

{
  "etas": {
    "blinkit": {
      "etaInMinutes": 8,
      "etaDisplay": "8 mins",
      "available": true
    },
    "zepto": {
      "etaInMinutes": 10,
      "etaDisplay": "10 mins",
      "available": true
    },
    "swiggy": {
      "etaInMinutes": 15,
      "etaDisplay": "15 mins",
      "available": true
    },
    "bigbasket": {
      "etaInMinutes": 45,
      "etaDisplay": "45 mins",
      "available": true
    },
    "dmart": {
      "etaInMinutes": 120,
      "etaDisplay": "2 hours",
      "available": true
    },
    "jiomart": {
      "etaInMinutes": 180,
      "etaDisplay": "3 hours",
      "available": true
    },
    "minutes": {
      "etaInMinutes": 12,
      "etaDisplay": "12 mins",
      "available": true
    }
  }
}

Now the LLM knows that BlinkIt can deliver in 8 minutes, Zepto in 10, and Minutes in 12. For a user who just needs their groceries ASAP, BlinkIt is the clear winner. But if the user wants both the fastest delivery and the best price, the assistant needs to combine both data sources — which brings us to the recommendation engine.
The Recommendation Engine: Price + Speed
The real magic happens when you combine price data from groupsearch with delivery data from groupeta. This lets the assistant make nuanced recommendations like: "BlinkIt can deliver Tata Salt in 8 minutes at Rs 28, but if you can wait 2 hours, DMart has it for Rs 25." Here's the Python function that powers this logic.
import requests

API_KEY = "YOUR_API_KEY"
BASE_URL = "https://api.quickcommerceapi.com/api/v1"
HEADERS = {"X-API-Key": API_KEY}

def search_products(query: str) -> dict:
    resp = requests.get(f"{BASE_URL}/groupsearch", params={"query": query}, headers=HEADERS)
    resp.raise_for_status()
    return resp.json()

def get_delivery_etas(pincode: str = None) -> dict:
    headers = {**HEADERS}
    if pincode:
        headers["x-geolocation-pincode"] = pincode
    resp = requests.get(f"{BASE_URL}/groupeta", headers=headers)
    resp.raise_for_status()
    return resp.json()

def recommend_best_platform(query: str, pincode: str = None) -> dict:
    """Combine price and ETA data to recommend the best platform."""
    search_data = search_products(query)
    eta_data = get_delivery_etas(pincode)

    platforms = []
    for platform, items in search_data.get("results", {}).items():
        if not items or not items[0].get("available"):
            continue
        product = items[0]
        eta_info = eta_data.get("etas", {}).get(platform, {})
        eta_minutes = eta_info.get("etaInMinutes", 999)
        platforms.append({
            "platform": platform,
            "name": product["name"],
            "price": product["price"],
            "mrp": product.get("mrp", product["price"]),
            "savings": product.get("mrp", product["price"]) - product["price"],
            "eta_minutes": eta_minutes,
            "eta_display": eta_info.get("etaDisplay", "N/A"),
            "available": True,
        })

    # Sort by a weighted score: lower is better
    # Price weight: 60%, ETA weight: 40%
    max_price = max(p["price"] for p in platforms) if platforms else 1
    max_eta = max(p["eta_minutes"] for p in platforms) if platforms else 1
    for p in platforms:
        price_score = p["price"] / max_price
        eta_score = p["eta_minutes"] / max_eta
        p["score"] = (price_score * 0.6) + (eta_score * 0.4)
    platforms.sort(key=lambda x: x["score"])

    return {
        "query": query,
        "recommendation": platforms[0] if platforms else None,
        "cheapest": min(platforms, key=lambda x: x["price"]) if platforms else None,
        "fastest": min(platforms, key=lambda x: x["eta_minutes"]) if platforms else None,
        "all_options": platforms,
    }

# Example usage
result = recommend_best_platform("Tata Salt 1kg", pincode="400001")
print(f"Best overall: {result['recommendation']['platform']}")
print(f"Cheapest: {result['cheapest']['platform']} at Rs {result['cheapest']['price']}")
print(f"Fastest: {result['fastest']['platform']} in {result['fastest']['eta_display']}")

The recommendation engine uses a weighted scoring system — 60% price, 40% delivery speed — to rank platforms. You can easily adjust these weights based on user preferences. A user who prioritizes speed might set 30% price / 70% speed, while a budget-conscious user might prefer 80% price / 20% speed. The LLM can even ask users about their preferences and dynamically adjust the weights.
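The scoring rule is easy to factor out so the weights become a user preference. A sketch that mirrors the normalization used in `recommend_best_platform` (the `score_platform` helper is our own):

```python
def score_platform(price: float, eta_minutes: float,
                   max_price: float, max_eta: float,
                   price_weight: float = 0.6) -> float:
    """Weighted score in [0, 1]; lower is better. ETA weight is the complement."""
    eta_weight = 1.0 - price_weight
    return (price / max_price) * price_weight + (eta_minutes / max_eta) * eta_weight

# With the default 60/40 split and the Tata Salt data (max price 28, max ETA 180),
# BlinkIt (Rs 28, 8 min) actually edges out DMart (Rs 25, 120 min) overall.
print(round(score_platform(28, 8, 28, 180), 2))    # 0.62 (BlinkIt)
print(round(score_platform(25, 120, 28, 180), 2))  # 0.8 (DMart)

# A speed-focused user (30% price / 70% speed) widens BlinkIt's lead further.
print(round(score_platform(28, 8, 28, 180, price_weight=0.3), 2))  # 0.33
```

Exposing `price_weight` as a single parameter is what lets the LLM translate a preference like "I'm not in a hurry" directly into a different ranking.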
Recommendation Score Weights
Platform Comparison for Tata Salt
| Platform | Price (INR) | Savings | Delivery ETA | Available | Score (lower = better) |
|---|---|---|---|---|---|
| BlinkIt | 28 | MRP | 8 mins | Yes | 0.62 |
| Zepto | 28 | MRP | 10 mins | Yes | 0.62 |
| Minutes | 28 | MRP | 12 mins | Yes | 0.63 |
| Swiggy | 28 | MRP | 15 mins | Yes | 0.63 |
| BigBasket | 28 | MRP | 45 mins | Yes | 0.70 |
| DMart | 25 | Rs 3 off | 2 hours | Yes | 0.80 |
| JioMart | 27 | Rs 1 off | 3 hours | Yes | 0.98 |
Tip
Combine groupsearch + groupeta in your assistant for truly comprehensive recommendations. A single user question like "Where should I buy Tata Salt?" needs just 2 API calls to compare prices AND delivery times across all 7 platforms. Check our [pricing page](/pricing) to see how affordable this is at scale.
Adding a Conversational Interface
Now let's wrap everything in a conversational chatbot loop. This code uses OpenAI's GPT-4 with function calling to create a fully functional grocery assistant. When the user asks a question, GPT-4 decides which API tool to call, processes the results, and responds naturally. You can adapt this to use any LLM — the pattern is the same.
import json
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = """You are a helpful Indian grocery shopping assistant. You help users find
the best prices and fastest delivery across 7 quick commerce platforms:
BlinkIt, Zepto, Swiggy Instamart, BigBasket, DMart Ready, JioMart, and Minutes.
When users ask about products, use the search_products tool to find prices.
When they ask about delivery speed, use the get_delivery_etas tool.
When they want a recommendation, use both tools.
Always mention prices in INR (Rs). Be concise and helpful.
Highlight the best option clearly and explain trade-offs when relevant."""

def handle_tool_call(tool_call):
    """Execute the API call and return results."""
    name = tool_call.function.name
    args = json.loads(tool_call.function.arguments)
    if name == "search_products":
        return json.dumps(search_products(args["query"]))
    elif name == "get_delivery_etas":
        return json.dumps(get_delivery_etas())
    return json.dumps({"error": "Unknown tool"})

def chat(user_message: str, conversation: list) -> str:
    """Process a user message and return the assistant's response."""
    conversation.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "system", "content": SYSTEM_PROMPT}] + conversation,
        tools=tools,  # from tools_definition.py
    )
    message = response.choices[0].message

    # Handle tool calls if the model wants to use a tool
    while message.tool_calls:
        conversation.append(message)
        for tool_call in message.tool_calls:
            result = handle_tool_call(tool_call)
            conversation.append({
                "role": "tool",
                "tool_call_id": tool_call.id,
                "content": result,
            })
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "system", "content": SYSTEM_PROMPT}] + conversation,
            tools=tools,
        )
        message = response.choices[0].message

    conversation.append(message)
    return message.content

# Run the chatbot
if __name__ == "__main__":
    conversation = []
    print("Grocery Assistant: Hi! I can help you find the best prices")
    print("and fastest delivery across 7 quick commerce apps. Ask me anything!\n")
    while True:
        user_input = input("You: ").strip()
        if not user_input or user_input.lower() in ("quit", "exit", "bye"):
            print("Grocery Assistant: Bye! Happy shopping!")
            break
        response = chat(user_input, conversation)
        print(f"Grocery Assistant: {response}\n")

Here's what a typical conversation looks like with this assistant:

User: "I need Amul Butter 500g, where's it cheapest?"

Assistant: "I found Amul Butter 500g across all platforms. DMart has the best price at Rs 270 (Rs 15 off MRP). BlinkIt and Zepto both have it at Rs 285. Want me to check which one can deliver it fastest?"

This natural, context-aware interaction is what makes LLM-powered assistants so powerful.
Cost Per Recommendation

- 2 API calls per recommendation
- <3s response time, end-to-end
- 7 platforms compared
- 15-30% savings vs a single platform
Info
Each grocery recommendation costs just 2 API credits — one groupsearch call for prices and one groupeta call for delivery times. At 50 free credits on signup, you can serve 25 complete recommendations before upgrading. On the Starter plan, 10,000 credits means 5,000 recommendations per month — more than enough for most consumer apps.
The QuickCommerce API is designed to be cost-effective for AI applications. Each groupsearch call returns data from all 7 platforms in a single request, so you're not paying per-platform. This is critical for AI assistants where every interaction might trigger multiple API calls. Review our pricing plans to find the right tier for your usage. For high-volume chatbot deployments, our Enterprise plan offers custom pricing and dedicated support.
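The credit arithmetic is simple enough to encode directly when budgeting a deployment (the helper name is ours; the figures come from the plans described above):

```python
def recommendations_per_plan(credits: int, calls_per_recommendation: int = 2) -> int:
    """Each recommendation costs one groupsearch + one groupeta call (2 credits)."""
    return credits // calls_per_recommendation

print(recommendations_per_plan(50))      # 25 on the free signup credits
print(recommendations_per_plan(10_000))  # 5000 on the Starter plan
```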
Extending the Assistant
Adding a Shopping List Feature
Go beyond single-product lookups by letting users submit an entire shopping list. The assistant can search for each item, find the platform with the lowest total basket cost, and present a consolidated recommendation. This transforms a simple search tool into a genuine money-saving assistant. Read more about building price comparison apps for inspiration.
Price History and Trends
Store the results from each API call in a database, and your assistant can answer questions like "Has Amul Milk gotten more expensive this month?" or "Which platform usually has the best deals on dal?" This historical intelligence makes the assistant increasingly valuable over time. See our guide on tracking grocery prices for implementation details.
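One lightweight way to start is an SQLite table that logs every groupsearch result as it comes in (the schema and helper here are a suggestion, not part of the API):

```python
import sqlite3
import time

def log_prices(db: sqlite3.Connection, query: str, search_response: dict) -> None:
    """Append one row per available platform listing so trends can be queried later."""
    db.execute("""CREATE TABLE IF NOT EXISTS price_history
                  (ts INTEGER, query TEXT, platform TEXT, price REAL)""")
    now = int(time.time())
    for platform, items in search_response.get("results", {}).items():
        for item in items:
            if item.get("available"):
                db.execute("INSERT INTO price_history VALUES (?, ?, ?, ?)",
                           (now, query, platform, item["price"]))
    db.commit()

# In-memory demo with a trimmed groupsearch response.
db = sqlite3.connect(":memory:")
log_prices(db, "Tata Salt 1kg",
           {"results": {"dmart": [{"price": 25, "available": True}]}})
rows = db.execute("SELECT platform, price FROM price_history").fetchall()
print(rows)  # [('dmart', 25.0)]
```

With timestamps in place, questions like "has this item gotten more expensive?" become a simple `GROUP BY` over `ts` ranges.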
Voice and WhatsApp Integration
The chatbot pattern we built works with any input/output channel. Wrap it in a WhatsApp Business API integration and users can message your assistant directly from WhatsApp. Add speech-to-text and you have a voice-powered grocery assistant — perfect for busy households where typing is inconvenient.