
QuickBooks Batch API Tutorial (v3)

What you’ll learn

  • How to call the Batch endpoint with mixed operations (create, update, delete, query)
  • How to validate results with Postman tests
  • Best practices for performance and rate limiting

Key limits and behavior

  • Max 50 BatchItemRequest items per batch
  • Max 40 batch requests per minute per realmId
  • Each BatchItemRequest is metered like a normal call (no cost discount)
  • Operations are independent; order is not guaranteed; do not create dependencies across items
  • Authenticate once per batch (Bearer token applies to all items)
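To make the limits above concrete, here is a sketch of a batch body mixing all four operation types. All resource names, Ids, SyncTokens, and field values are illustrative, not from a real company file:

```python
# Sketch of a QuickBooks v3 Batch request body with mixed, independent
# operations. Ids, SyncTokens, and field values are illustrative.
import json

MAX_ITEMS = 50  # hard cap on BatchItemRequest items per batch

batch_body = {
    "BatchItemRequest": [
        {   # create a new vendor
            "bId": "bid1",
            "operation": "create",
            "Vendor": {"DisplayName": "Example Supplies Inc."},
        },
        {   # sparse update; updates require the current SyncToken
            "bId": "bid2",
            "operation": "update",
            "Invoice": {"Id": "129", "SyncToken": "0", "sparse": True,
                        "PrivateNote": "Updated via batch"},
        },
        {   # delete needs Id + SyncToken
            "bId": "bid3",
            "operation": "delete",
            "Invoice": {"Id": "130", "SyncToken": "0"},
        },
        {   # query item: a Query string instead of an operation
            "bId": "bid4",
            "Query": "SELECT * FROM Customer WHERE DisplayName LIKE 'A%' MAXRESULTS 10",
        },
    ]
}

assert len(batch_body["BatchItemRequest"]) <= MAX_ITEMS
payload = json.dumps(batch_body)  # body to POST to the batch endpoint
```

Note that each item carries its own `bId`; because order is not guaranteed, none of the items may depend on another item's result.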

Prerequisites

  • QuickBooks Online company and realmId
  • OAuth 2.0 access token with required scopes
  • Postman (latest) installed

Setup

  1. Import files from postman/:
    • QuickBooks-Batch.postman_environment.json
    • QuickBooks-Batch.postman_collection.json
  2. In the Postman environment, set:
    • baseUrl → https://quickbooks.api.intuit.com (or sandbox URL)
    • realmId → your company ID
    • minorVersion → e.g., 75
    • accessToken → a valid Bearer token (no Bearer prefix in the variable)
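The URL and headers that the collection assembles from these variables can be sketched as follows (the realmId and token values are illustrative; note the query parameter is lowercase `minorversion`, and the `Bearer ` prefix is added in the header, not stored in the variable):

```python
# Sketch: assemble the Batch endpoint URL from the environment values above.
base_url = "https://quickbooks.api.intuit.com"   # or the sandbox host
realm_id = "1234567890"                          # your company ID (illustrative)
minor_version = 75

batch_url = f"{base_url}/v3/company/{realm_id}/batch?minorversion={minor_version}"

# The variable stores the raw token; the "Bearer " prefix is added here.
access_token = "eyJ..."                          # illustrative placeholder
headers = {
    "Authorization": f"Bearer {access_token}",
    "Content-Type": "application/json",
    "Accept": "application/json",
}
```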

Run the demo

  1. Open the collection: QuickBooks Batch → Requests → "Batch - Mixed Operations"
  2. Send the request. The test tab will:
    • Assert HTTP 200
    • Parse BatchItemResponse[]
    • Count failures (Faults) vs successes
    • Validate bId mapping
  3. Try the "Batch - Query Only" request to fetch records via a SELECT query.

Interpreting responses

  • Success items contain the resource payload under their type (e.g., Vendor, Invoice)
  • Query items return QueryResponse
  • Failed items include a Fault object with codes/details
  • Use bId to correlate each response to its request item
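The correlation described above can be done in a few lines. This sketch splits a sample `BatchItemResponse` array into successes, query results, and faults keyed by `bId` (the sample payload and its Ids are illustrative):

```python
# Sketch: bucket a BatchItemResponse array by outcome, keyed by bId.
# The sample response below is illustrative.
sample_response = {
    "BatchItemResponse": [
        {"bId": "bid1", "Vendor": {"Id": "201", "DisplayName": "Example Supplies Inc."}},
        {"bId": "bid4", "QueryResponse": {"Customer": [{"Id": "5"}], "maxResults": 1}},
        {"bId": "bid2", "Fault": {"type": "ValidationFault",
                                  "Error": [{"code": "3200", "Message": "Stale object"}]}},
    ]
}

successes, queries, faults = {}, {}, {}
for item in sample_response["BatchItemResponse"]:
    bid = item["bId"]
    if "Fault" in item:
        faults[bid] = item["Fault"]
    elif "QueryResponse" in item:
        queries[bid] = item["QueryResponse"]
    else:
        # a success payload sits under its resource type key (Vendor, Invoice, ...)
        (rtype, payload), = [(k, v) for k, v in item.items() if k != "bId"]
        successes[bid] = (rtype, payload)

assert set(faults) == {"bid2"}  # these bIds are the retry candidates
```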

Best practices

  • Prefer Query to fetch many records of one resource; use Batch to group independent writes/reads
  • Keep each item minimal (sparse updates) to reduce payload size
  • Avoid intra-batch dependencies; split into separate batches when order matters
  • On partial failure, retry only failed bIds; implement exponential backoff for 429
  • Monitor both per-minute (batch) and per-operation rate limits; handle Retry-After
  • Log bId, operation, and result for traceability
  • Always pass the minorversion query parameter so behavior stays consistent across API minor versions
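The partial-failure and backoff bullets above can be combined into one retry loop. In this sketch, `send_batch` is a hypothetical function that posts the given items and returns the parsed `BatchItemResponse` list, and the backoff parameters are illustrative:

```python
# Sketch: retry only the failed bIds, with exponential backoff between rounds.
# `send_batch` is a hypothetical callable: list of items -> BatchItemResponse list.
import time

def retry_failed(items_by_bid, send_batch, max_attempts=4, base_delay=1.0):
    pending = dict(items_by_bid)  # bId -> original BatchItemRequest item
    for attempt in range(max_attempts):
        if not pending:
            break
        responses = send_batch(list(pending.values()))
        failed = {r["bId"] for r in responses if "Fault" in r}
        # keep only the items that faulted; successes drop out of the loop
        pending = {b: item for b, item in pending.items() if b in failed}
        if pending:
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    return pending  # items still here exhausted their retries
```

In production you would also honor any Retry-After header on a 429 instead of (or in addition to) the fixed exponential schedule.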

Common pitfalls

  • Expecting Batch to reduce cost/quota — it does not; each item is metered
  • Assuming a record created by one item in a batch can be referenced by another item in the same batch — items are independent and unordered
  • Ignoring partial failures — always inspect BatchItemResponse array

Troubleshooting

  • 401/403: token expired/insufficient scope → refresh token / re-grant scope
  • 429: per-minute batch quota or downstream rate limits → backoff and retry
  • ValidationFault: check required fields, SyncToken for updates, or duplicate constraints

Stress testing (optional)

  • Max throughput per realmId ≈ 50 items × 40 batches/min = ~2,000 ops/min (subject to downstream limits).
  • Use the collection folder "Batch - Max 50 Items (Queries)" to send 50 query items in one batch.
  • Automate with Newman to approach per-minute caps without 429s:
    • Add --delay-request 1500 and --iteration-count 40 to stay under 40/min per realmId.
  • If you want to provoke throttling for demo purposes, reduce delay and observe 429 with rule name v3/*-Realm Id per Minute.
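As a sanity check on the pacing behind those Newman flags (numbers mirror the limits stated earlier in this tutorial):

```python
# Sanity check: 40 iterations with a 1500 ms request delay spread the
# batches over a full 60 s window, staying at the 40 batches/min cap.
delay_ms = 1500
iterations = 40
window_s = iterations * delay_ms / 1000
assert window_s == 60.0

# Theoretical ceiling per realmId: 50 items x 40 batches/min.
max_ops_per_min = 50 * 40
assert max_ops_per_min == 2000
```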