Chat Completions

Our Agent (chat completions) endpoint is fully compatible with the OpenAI chat completions API specification. This means our API is a like-for-like swap for any application currently using the popular LLM chat completions API signature, and should be plug-and-play with any of the many OpenAI SDKs in the language of your choice.

Minimal Request Example #

Since your Agent is configured on the platform, API calls to get completions can be extremely minimal. All that’s required is the prompt for which you want a response:

curl \
--header 'x-api-key: apg_xxxxxxxxxxxxxxxxxxxxxxxxxxxx' \
--header 'Content-Type: application/json' \
--data '{
    "prompt": "How can a good God allow so much evil in the world?"
}' \
--url https://my.gospel.bot/api/v1/chat/completions

Please note: you must replace the x-api-key value with your API key and my.gospel.bot with your Agent’s domain.

Full Request Example #

However, the API also supports overriding the default Agent configuration options, as well as several other runtime options. Here’s an example showing every supported parameter that has an effect on the output:

curl \
--header 'Authorization: Bearer apg_xxxxxxxxxxxxxxxxxxxxxxxxxxxx' \
--header 'Content-Type: application/json' \
--data '{
    "model": "openai/gpt/4o",
    "stream": false,
    "messages": [
        {
            "role": "system",
            "content": "This a system prompt override."
        },
        {
            "role": "user",
            "content": "This is a previous prompt."
        },
        {
            "role": "assistant",
            "content": "This is a previous completion."
        },
        {
            "role": "user",
            "content": "This is the current prompt."
        }
    ],
    "response_format": { 
        "type": "json" 
    },
    "metadata": {
        "anonymous": true,
        "conversation": null,
        "language": "en",
        "session": null,
        "device": null,
        "translation": "esv"
    },
    "frequency_penalty": 0.25,
    "presence_penalty": -0.25,
    "max_completion_tokens": 1024,
    "reasoning_effort": "high",
    "temperature": 0.5,
    "top_p": 0.9,
    "user": null
}' \
--url https://my.gospel.bot/api/v1/chat/completions

Please note: you must replace the Authorization bearer token value with your API key and my.gospel.bot with your Agent’s domain.

Prompt and Messages #

You must supply either a prompt string or a messages array.

The messages parameter is an array of message objects, each with a role and a content property. By default, an Agent will have a system prompt that is automatically applied. Also by default, previous exchanges between an Agent and a user, as identified by the metadata.device, metadata.session, metadata.conversation, and user parameters, are prepended to the array of messages which are eventually sent to the LLM. You may control how many past exchanges you wish Agent to use by passing an integer for the metadata.max_memories parameter. You only need to supply a messages array if you wish to override this default behavior.

If you want to use the aforementioned defaults, or if the response doesn’t require context from previous exchanges with Agent, you may simply provide a prompt string.

If you wish to prevent any context from previous exchanges altogether, regardless of the values of metadata.device, metadata.session, metadata.conversation, and user, you may pass metadata.anonymous as true.

  • user: any string identifier for the user (100 characters max)
  • metadata.conversation: any string identifier for the conversation within a single session (100 characters max)
  • metadata.session: any string identifier for the user’s session (100 characters max)
  • metadata.device: any string identifier for the user’s device (100 characters max)
  • metadata.anonymous: if set to true, no past exchange context is provided to the Agent (one-shot); equivalent to setting metadata.max_memories: 0
  • metadata.max_memories: the number of previous exchanges to provide to Agent
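
For example, a request that ties the exchange to a specific user and session, and limits memory to the five most recent exchanges, might look like the following sketch. The identifier values are hypothetical placeholders; the API key and domain are the same placeholders used above:

curl \
--header 'x-api-key: apg_xxxxxxxxxxxxxxxxxxxxxxxxxxxx' \
--header 'Content-Type: application/json' \
--data '{
    "prompt": "What did we discuss last time?",
    "user": "user-1234",
    "metadata": {
        "session": "session-5678",
        "conversation": "conversation-0001",
        "max_memories": 5
    }
}' \
--url https://my.gospel.bot/api/v1/chat/completions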

Streaming vs Non-Streaming #

The stream parameter controls whether the response is streamed or delivered all at once. Chat interfaces typically benefit from streaming output, since the user gets feedback sooner. However, some use cases lend themselves to non-streaming output as well.
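
For example, a minimal streaming request simply sets stream to true (this sketch reuses the same placeholder API key and domain as the examples above):

curl \
--header 'x-api-key: apg_xxxxxxxxxxxxxxxxxxxxxxxxxxxx' \
--header 'Content-Type: application/json' \
--data '{
    "prompt": "Summarize the book of Jonah in two sentences.",
    "stream": true
}' \
--url https://my.gospel.bot/api/v1/chat/completions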

Response Format #

The response format may be specified using the response_format.type parameter. Valid values are as follows:

  • raw: raw OpenAI-compatible chunked chat completion JSON. Only available when the stream option is true. Setting this format allows any OpenAI-compatible library or SDK to be used against this endpoint, as long as it has a way to set the response_format parameter. [default when streaming]
  • text: plain text output; may include markdown
  • html: HTML formatted, converted from markdown where applicable
  • json: structured JSON response; includes completion as well as token usage and timing stats. Only available if the stream option is false. [default when not streaming]

Your Agent can be configured via Apologist Ignite to use any of the above response formats by default for the chat completions endpoint. This is especially useful when a third-party integration doesn’t allow sending custom parameters. Note that if text is specified, you may also toggle an option in your Agent configuration in Apologist Ignite to automatically strip all markdown and ensure only plain text is returned.
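
For example, a non-streaming request that asks for HTML output might look like this sketch (same placeholder credentials as above):

curl \
--header 'x-api-key: apg_xxxxxxxxxxxxxxxxxxxxxxxxxxxx' \
--header 'Content-Type: application/json' \
--data '{
    "prompt": "Who was Augustine of Hippo?",
    "stream": false,
    "response_format": {
        "type": "html"
    }
}' \
--url https://my.gospel.bot/api/v1/chat/completions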

Bible Translation #

The metadata.translation parameter indicates which Bible translation the Agent should prefer. The following Bible translations are supported at this time:

  • esv: English Standard Version [default]
  • niv: New International Version
  • bsb: Berean Study Bible
  • kjv: King James Version
  • net: New English Translation
  • nkjv: New King James Version
  • nlt: New Living Translation
  • csb: Christian Standard Bible
  • nasb: New American Standard Bible
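
For example, to prefer the King James Version for a single request (same placeholder credentials as above):

curl \
--header 'x-api-key: apg_xxxxxxxxxxxxxxxxxxxxxxxxxxxx' \
--header 'Content-Type: application/json' \
--data '{
    "prompt": "What does the Bible say about forgiveness?",
    "metadata": {
        "translation": "kjv"
    }
}' \
--url https://my.gospel.bot/api/v1/chat/completions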

Language #

Use the metadata.language parameter to control the output language. Language support varies by model and is specific to a given Agent. However, any Agent can be upgraded to utilize real-time translation, expanding its language support to 192 languages. Here is the full list of supported languages:

  • en: English [default]
  • ab: Abkhaz
  • ace: Acehnese
  • ach: Acholi
  • af: Afrikaans
  • sq: Albanian
  • alz: Alur
  • am: Amharic
  • ar: Arabic
  • hy: Armenian
  • as: Assamese
  • awa: Awadhi
  • ay: Aymara
  • az: Azerbaijani
  • ban: Balinese
  • bm: Bambara
  • ba: Bashkir
  • eu: Basque
  • btx: Batak Karo
  • bts: Batak Simalungun
  • bbc: Batak Toba
  • be: Belarusian
  • bem: Bemba
  • bn: Bengali
  • bew: Betawi
  • bho: Bhojpuri
  • bik: Bikol
  • bs: Bosnian
  • br: Breton
  • bg: Bulgarian
  • bua: Buryat
  • yue: Cantonese
  • ca: Catalan
  • ceb: Cebuano
  • ny: Chichewa (Nyanja)
  • zh: Chinese (Simplified)
  • zh-TW: Chinese (Traditional)
  • cv: Chuvash
  • co: Corsican
  • crh: Crimean Tatar
  • hr: Croatian
  • cs: Czech
  • da: Danish
  • din: Dinka
  • dv: Divehi
  • doi: Dogri
  • dov: Dombe
  • nl: Dutch
  • dz: Dzongkha
  • eo: Esperanto
  • et: Estonian
  • ee: Ewe
  • fj: Fijian
  • tl: Filipino (Tagalog)
  • fi: Finnish
  • fr: French
  • fr-CA: French (Canadian)
  • fy: Frisian
  • ff: Fulfulde
  • gaa: Ga
  • gl: Galician
  • lg: Ganda (Luganda)
  • ka: Georgian
  • de: German
  • el: Greek
  • gn: Guarani
  • gu: Gujarati
  • ht: Haitian Creole
  • cnh: Hakha Chin
  • ha: Hausa
  • haw: Hawaiian
  • he: Hebrew
  • hil: Hiligaynon
  • hi: Hindi
  • hmn: Hmong
  • hu: Hungarian
  • hrx: Hunsrik
  • is: Icelandic
  • ig: Igbo
  • ilo: Iloko
  • id: Indonesian
  • ga: Irish
  • it: Italian
  • ja: Japanese
  • jw: Javanese
  • kn: Kannada
  • pam: Kapampangan
  • kk: Kazakh
  • km: Khmer
  • cgg: Kiga
  • rw: Kinyarwanda
  • ktu: Kituba
  • gom: Konkani
  • ko: Korean
  • kri: Krio
  • ku: Kurdish (Kurmanji)
  • ckb: Kurdish (Sorani)
  • ky: Kyrgyz
  • lo: Lao
  • ltg: Latgalian
  • la: Latin
  • lv: Latvian
  • lij: Ligurian
  • li: Limburgan
  • ln: Lingala
  • lt: Lithuanian
  • lmo: Lombard
  • luo: Luo
  • lb: Luxembourgish
  • mk: Macedonian
  • mai: Maithili
  • mak: Makassar
  • mg: Malagasy
  • ms: Malay
  • ms-Arab: Malay (Jawi)
  • ml: Malayalam
  • mt: Maltese
  • mi: Maori
  • mr: Marathi
  • chm: Meadow Mari
  • mni-Mtei: Meiteilon (Manipuri)
  • min: Minang
  • lus: Mizo
  • mn: Mongolian
  • my: Myanmar (Burmese)
  • nr: Ndebele (South)
  • new: Nepalbhasa (Newari)
  • ne: Nepali
  • nso: Northern Sotho (Sepedi)
  • no: Norwegian
  • nus: Nuer
  • oc: Occitan
  • or: Odia (Oriya)
  • om: Oromo
  • pag: Pangasinan
  • pap: Papiamento
  • ps: Pashto
  • fa: Persian (Farsi)
  • pl: Polish
  • pt: Portuguese
  • pt-BR: Portuguese (Brazil)
  • pa: Punjabi
  • pa-Arab: Punjabi (Shahmukhi)
  • qu: Quechua
  • rom: Romani
  • ro: Romanian
  • rn: Rundi
  • ru: Russian
  • sm: Samoan
  • sg: Sango
  • sa: Sanskrit
  • gd: Scots Gaelic
  • sr: Serbian
  • st: Sesotho
  • crs: Seychellois Creole
  • shn: Shan
  • sn: Shona
  • scn: Sicilian
  • szl: Silesian
  • sd: Sindhi
  • si: Sinhala (Sinhalese)
  • sk: Slovak
  • sl: Slovenian
  • so: Somali
  • es: Spanish
  • su: Sundanese
  • sw: Swahili
  • ss: Swati
  • sv: Swedish
  • tg: Tajik
  • ta: Tamil
  • tt: Tatar
  • te: Telugu
  • tet: Tetum
  • th: Thai
  • ti: Tigrinya
  • ts: Tsonga
  • tn: Tswana
  • tr: Turkish
  • tk: Turkmen
  • ak: Twi (Akan)
  • uk: Ukrainian
  • ur: Urdu
  • ug: Uyghur
  • uz: Uzbek
  • vi: Vietnamese
  • cy: Welsh
  • xh: Xhosa
  • yi: Yiddish
  • yo: Yoruba
  • yua: Yucatec Maya
  • zu: Zulu
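
For example, to request a response in Spanish (same placeholder credentials as above):

curl \
--header 'x-api-key: apg_xxxxxxxxxxxxxxxxxxxxxxxxxxxx' \
--header 'Content-Type: application/json' \
--data '{
    "prompt": "Who were the twelve apostles?",
    "metadata": {
        "language": "es"
    }
}' \
--url https://my.gospel.bot/api/v1/chat/completions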

Model #

Agent supports dozens of models across many providers using a standardized naming scheme: creator/family/variant. Pass the model name via the model parameter.
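
For example, to route a single request to Claude 3.5 Haiku (one of the model IDs listed below), override the model parameter at request time (placeholder credentials as above):

curl \
--header 'x-api-key: apg_xxxxxxxxxxxxxxxxxxxxxxxxxxxx' \
--header 'Content-Type: application/json' \
--data '{
    "model": "anthropic/claude3.5/haiku",
    "prompt": "Give a one-sentence summary of the Nicene Creed."
}' \
--url https://my.gospel.bot/api/v1/chat/completions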

Limited Models #

Low-latency, inexpensive models with very limited reasoning abilities.

  • Gemini Flash 2.0 (google/gemini/2.0-flash): 1 credit. Supported languages: ar, bg, bn, cs, da, de, el, en, es, et, fi, fr, he, hi, hr, hu, id, it, ja, ko, lt, lv, nl, no, pl, pt, ro, ru, sk, sl, sr, sv, sw, th, tr, uk, vi, zh, zh-TW
  • OpenAI GPT-4.1 mini (openai/gpt/4.1-mini): 1 credit. Supported languages: am, ar, bg, bn, bs, ca, cs, da, de, el, en, es, et, fa, fi, fr, fr-CA, gu, hi, hr, hu, hy, id, is, it, ja, ka, kk, kn, ko, lt, lv, mk, ml, mn, mr, ms, my, nl, no, pa, pl, pt, pt-BR, ro, ru, sk, sl, so, sq, sr, sv, sw, ta, te, tg, th, tl, tr, uk, ur, vi, zh, zh-TW
  • OpenAI GPT-4o mini (openai/gpt/4o-mini): 1 credit. Supported languages: am, ar, bg, bn, bs, ca, cs, da, de, el, en, es, et, fa, fi, fr, fr-CA, gu, hi, hr, hu, hy, id, is, it, ja, ka, kk, kn, ko, lt, lv, mk, ml, mn, mr, ms, my, nl, no, pa, pl, pt, pt-BR, ro, ru, sk, sl, so, sq, sr, sv, sw, ta, te, tg, th, tl, tr, uk, ur, vi, zh, zh-TW
  • Anthropic Claude 3.5 Haiku (anthropic/claude3.5/haiku): 1 credit. Supported languages: ar, de, en, es, fr, hi, it, ja, ko, pt, ru, zh
  • Meta Llama 3.3 70B Instruct (meta/llama3.3/70b-versatile): 1 credit. Supported languages: ar, de, en, es, fr, hi, id, it, ja, nl, pt, ru, th, tr, zh
  • Mistral Small 3 (mistral/small/24b): 1 credit. Supported languages: de, en, es, fr, it, ja, ko, nl, pl, pt, zh
  • Meta Llama 4 Scout (meta/llama4/scout): 1 credit. Supported languages: ar, de, en, es, fr, hi, id, it, ja, nl, pt, ru, th, tr, zh
  • xAI Grok 3 Mini (xai/grok/3-mini): 1 credit. Supported languages: ar, bn, cs, de, en, es, fa, fr, he, hi, id, it, ja, km, ko, lo, ms, my, nl, pl, pt, ru, th, tl, tr, ur, vi, zh

Standard Models #

Economical models that balance lower cost with higher-quality responses.

  • Alibaba Qwen 3 235B (alibaba/qwen3/235b): 1 credit. Supported languages: af, ar, as, awa, az, ba, ban, be, bg, bho, bn, bs, ca, ceb, cs, cy, da, de, el, en, es, et, eu, fa, fi, fr, fr-CA, ga, gl, gu, he, hi, hr, ht, hu, hy, id, ilo, is, it, ja, jw, ka, kk, km, kn, ko, lb, li, lij, lmo, lo, lt, lv, mai, min, mk, ml, mr, ms, mt, my, ne, nl, no, oc, or, pa, pag, pap, pl, pt, pt-BR, ro, ru, scn, sd, si, sk, sl, sq, sr, su, sv, sw, szl, ta, te, tg, th, tl, tr, tt, uk, ur, uz, vi, yi, yue, zh, zh-TW
  • Meta Llama 4 Maverick (meta/llama4/maverick): 2 credits. Supported languages: ar, de, en, es, fr, hi, id, it, ja, nl, pt, ru, th, tr, zh
  • Mistral Mixtral MoE 8x22B Instruct (mistral/mixtral/8x22b): 2 credits. Supported languages: de, en, es, fr, it
  • DeepSeek v3 (deepseek/deepseek/v3): 2 credits. Supported languages: af, am, ar, az, bg, bn, ca, cs, cy, da, de, el, en, eo, es, eu, fi, fr, fr-CA, ga, gd, gl, gu, ha, he, hi, hr, hu, hy, id, ig, is, it, ja, ka, kk, km, kn, ko, lo, ml, mn, mr, ms, mt, my, ne, nl, no, pa, pl, ps, pt, pt-BR, ro, ru, si, sk, sl, so, sq, sr, sv, sw, ta, te, tg, th, tl, tr, uk, ur, uz, vi, yo, zh, zh-TW, zu
  • OpenAI o3 mini (openai/gpt/o3-mini): 3 credits. Supported languages: am, ar, bg, bn, bs, ca, cs, da, de, el, en, es, et, fa, fi, fr, fr-CA, gu, hi, hr, hu, hy, id, is, it, ja, ka, kk, kn, ko, lt, lv, mk, ml, mn, mr, ms, my, nl, no, pa, pl, pt, pt-BR, ro, ru, sk, sl, so, sq, sr, sv, sw, ta, te, tg, th, tl, tr, uk, ur, vi, zh, zh-TW

Premium Models #

Higher quality output at a premium price point.

  • OpenAI GPT-4.1 (openai/gpt/4.1): 4 credits. Supported languages: am, ar, bg, bn, bs, ca, cs, da, de, el, en, es, et, fa, fi, fr, fr-CA, gu, hi, hr, hu, hy, id, is, it, ja, ka, kk, kn, ko, lt, lv, mk, ml, mn, mr, ms, my, nl, no, pa, pl, pt, pt-BR, ro, ru, sk, sl, so, sq, sr, sv, sw, ta, te, tg, th, tl, tr, uk, ur, vi, zh, zh-TW
  • OpenAI ChatGPT-4o (openai/gpt/4o): 5 credits. Supported languages: am, ar, bg, bn, bs, ca, cs, da, de, el, en, es, et, fa, fi, fr, fr-CA, gu, he, hi, hr, hu, hy, id, is, it, ja, ka, kk, kn, ko, lt, lv, mk, ml, mn, mr, ms, my, nl, no, pa, pl, pt, pt-BR, ro, ru, sk, sl, so, sq, sr, sv, sw, ta, te, tg, th, tl, tr, uk, ur, vi, zh, zh-TW

Reasoning Models #

High-latency models that produce extremely high-quality output.

  • Alibaba Qwen QwQ 32B (alibaba/qwq/32b): 1 credit. Supported languages: ar, bn, cs, de, en, es, fa, fr, he, hi, id, it, ja, km, ko, lo, ms, my, nl, pl, pt, ru, th, tl, tr, ur, vi, zh
  • OpenAI o4 mini (openai/gpt/o4-mini): 3 credits. Supported languages: am, ar, bg, bn, bs, ca, cs, da, de, el, en, es, et, fa, fi, fr, fr-CA, gu, hi, hr, hu, hy, id, is, it, ja, ka, kk, kn, ko, lt, lv, mk, ml, mn, mr, ms, my, nl, no, pa, pl, pt, pt-BR, ro, ru, sk, sl, so, sq, sr, sv, sw, ta, te, tg, th, tl, tr, uk, ur, vi, zh, zh-TW
  • OpenAI o3 (openai/gpt/o3): 4 credits. Supported languages: am, ar, bg, bn, bs, ca, cs, da, de, el, en, es, et, fa, fi, fr, fr-CA, gu, hi, hr, hu, hy, id, is, it, ja, ka, kk, kn, ko, lt, lv, mk, ml, mn, mr, ms, my, nl, no, pa, pl, pt, pt-BR, ro, ru, sk, sl, so, sq, sr, sv, sw, ta, te, tg, th, tl, tr, uk, ur, vi, zh, zh-TW
  • DeepSeek R1 (Fast) (deepseek/deepseek/r1-fast): 5 credits. Supported languages: af, am, ar, az, bg, bn, ca, cs, cy, da, de, el, en, eo, es, eu, fi, fr, fr-CA, ga, gd, gl, gu, ha, he, hi, hr, hu, hy, id, ig, is, it, ja, ka, kk, km, kn, ko, lo, ml, mn, mr, ms, mt, my, ne, nl, no, pa, pl, ps, pt, pt-BR, ro, ru, si, sk, sl, so, sq, sr, sv, sw, ta, te, tg, th, tl, tr, uk, ur, uz, vi, yo, zh, zh-TW, zu
  • Google Gemini 2.5 Pro (google/gemini/2.5-pro): 7 credits. Supported languages: ar, bg, bn, cs, da, de, el, en, es, et, fi, fr, he, hi, hr, hu, id, it, ja, ko, lt, lv, nl, no, pl, pt, ro, ru, sk, sl, sr, sv, sw, th, tr, uk, vi, zh, zh-TW
  • Anthropic Claude 4 Sonnet (anthropic/claude4.0/sonnet): 7 credits. Supported languages: ar, bn, de, en, es, fr, hi, id, it, ja, ko, pt, ru, sw, yo, zh
  • xAI Grok 3 (xai/grok/3): 7 credits. Supported languages: ar, bn, cs, de, en, es, fa, fr, he, hi, id, it, ja, km, ko, lo, ms, my, nl, pl, pt, ru, th, tl, tr, ur, vi, zh
  • xAI Grok 3 (Fast) (xai/grok/3-fast): 11 credits. Supported languages: ar, bn, cs, de, en, es, fa, fr, he, hi, id, it, ja, km, ko, lo, ms, my, nl, pl, pt, ru, th, tl, tr, ur, vi, zh

Standard LLM Parameters #

The chat completions endpoint supports standard LLM parameters that modify the model’s behavior:

  • frequency_penalty
  • presence_penalty
  • max_completion_tokens
  • reasoning_effort (for reasoning models only)
  • temperature
  • top_p

See OpenAI’s docs for more information about these parameters.
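
For example, a request that caps the completion length and lowers the sampling temperature might look like this sketch (same placeholder credentials as above):

curl \
--header 'x-api-key: apg_xxxxxxxxxxxxxxxxxxxxxxxxxxxx' \
--header 'Content-Type: application/json' \
--data '{
    "prompt": "Briefly define the doctrine of the Trinity.",
    "max_completion_tokens": 256,
    "temperature": 0.2
}' \
--url https://my.gospel.bot/api/v1/chat/completions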

Ineffectual Passthrough Parameters #

Some parameters do not directly affect Agent’s response, but we pass them along to the target LLM for your convenience so that the endpoint remains fully compatible with OpenAI’s chat completions API:

  • audio
  • logit_bias
  • logprobs
  • modalities
  • n
  • parallel_tool_calls
  • prediction
  • seed
  • service_tier
  • stop
  • store
  • stream_options
  • tools
  • tool_choice

Agent does not currently support multi-modal input or output, external tools, or returning more than one response at a time.

Starter Chatbot App in NextJS #

Check out our bare-bones chatbot starter kit, which demonstrates how to quickly get up and running with this endpoint using Vercel’s AI SDK. Here is where the magic happens in the endpoint that streams responses from our Agent (chat completions) API:

import { createOpenAICompatible } from "@ai-sdk/openai-compatible";
import { streamText } from "ai";

export async function POST(req: Request) {

  // Chat history posted by the client (e.g. the AI SDK's useChat hook)
  const { messages } = await req.json();

  // Point an OpenAI-compatible provider at your Agent's chat completions API
  const apologist = createOpenAICompatible({
    name: 'apologist',
    apiKey: process.env.APOLOGIST_API_KEY,
    baseURL: `${process.env.APOLOGIST_API_URL}`,
  });

  // Stream the completion from the Agent back to the client
  const result = streamText({
    model: apologist('openai/gpt/4o'),
    messages: messages,
  });

  return result.toDataStreamResponse();

}

Yes, it’s really that simple.